February 2009
Hacker News was two years old last week. Initially it was supposed to be a
side project—an application to sharpen Arc on, and a place for current and
future Y Combinator founders to exchange news. It's grown bigger and taken up
more time than I expected, but I don't regret that because I've learned so
much from working on it.
**Growth**
When we launched in February 2007, weekday traffic was around 1600 daily
uniques. It's since grown to around 22,000. This growth rate is a bit higher
than I'd like. I'd like the site to grow, since a site that isn't growing at
least slowly is probably dead. But I wouldn't want it to grow as large as Digg
or Reddit—mainly because that would dilute the character of the site, but also
because I don't want to spend all my time dealing with scaling.
I already have problems enough with that. Remember, the original motivation
for HN was to test a new programming language, and moreover one that's focused
on experimenting with language design, not performance. Every time the site
gets slow, I fortify myself by recalling McIlroy and Bentley's famous quote
> The key to performance is elegance, not battalions of special cases.
and look for the bottleneck I can remove with least code. So far I've been
able to keep up, in the sense that performance has remained consistently
mediocre despite 14x growth. I don't know what I'll do next, but I'll probably
think of something.
This is my attitude to the site generally. Hacker News is an experiment, and
an experiment in a very young field. Sites of this type are only a few years
old. Internet conversation generally is only a few decades old. So we've
probably only discovered a fraction of what we eventually will.
That's why I'm so optimistic about HN. When a technology is this young, the
existing solutions are usually terrible; which means it must be possible to do
much better; which means many problems that seem insoluble aren't. Including,
I hope, the problem that has afflicted so many previous communities: being
ruined by growth.
**Dilution**
Users have worried about that since the site was a few months old. So far
these alarms have been false, but they may not always be. Dilution is a hard
problem. But probably soluble; it doesn't mean much that open conversations
have "always" been destroyed by growth when "always" equals 20 instances.
But it's important to remember we're trying to solve a new problem, because
that means we're going to have to try new things, most of which probably won't
work. A couple weeks ago I tried displaying the names of users with the
highest average comment scores in orange. That was a mistake. Suddenly a
culture that had been more or less united was divided into haves and
have-nots. I didn't realize how united the culture had been till I saw it
divided.
It was painful to watch.
So orange usernames won't be back. (Sorry about that.) But there will be other
equally broken-seeming ideas in the future, and the ones that turn out to work
will probably seem just as broken as those that don't.
Probably the most important thing I've learned about dilution is that it's
measured more in behavior than users. It's bad behavior you want to keep out
more than bad people. User behavior turns out to be surprisingly malleable. If
people are expected to behave well, they tend to; and vice versa.
Though of course forbidding bad behavior does tend to keep away bad people,
because they feel uncomfortably constrained in a place where they have to
behave well. But this way of keeping them out is gentler and probably also
more effective than overt barriers.
It's pretty clear now that the broken windows theory applies to community
sites as well. The theory is that minor forms of bad behavior encourage worse
ones: that a neighborhood with lots of graffiti and broken windows becomes one
where robberies occur. I was living in New York when Giuliani introduced the
reforms that made the broken windows theory famous, and the transformation was
miraculous. And I was a Reddit user when the opposite happened there, and the
transformation was equally dramatic.
I'm not criticizing Steve and Alexis. What happened to Reddit didn't happen
out of neglect. From the start they had a policy of censoring nothing except
spam. Plus Reddit had different goals from Hacker News. Reddit was a startup,
not a side project; its goal was to grow as fast as possible. Combine rapid
growth and zero censorship, and the result is a free for all. But I don't
think they'd do much differently if they were doing it again. Measured by
traffic, Reddit is much more successful than Hacker News.
But what happened to Reddit won't inevitably happen to HN. There are several
local maxima. There can be places that are free for alls and places that are
more thoughtful, just as there are in the real world; and people will behave
differently depending on which they're in, just as they do in the real world.
I've observed this in the wild. I've seen people cross-posting on Reddit and
Hacker News who actually took the trouble to write two versions, a flame for
Reddit and a more subdued version for HN.
**Submissions**
There are two major types of problems a site like Hacker News needs to avoid:
bad stories and bad comments. So far the danger of bad stories seems smaller.
The stories on the frontpage now are still roughly the ones that would have
been there when HN started.
I once thought I'd have to weight votes to keep crap off the frontpage, but I
haven't had to yet. I wouldn't have predicted the frontpage would hold up so
well, and I'm not sure why it has. Perhaps only the more thoughtful users care
enough to submit and upvote links, so the marginal cost of one random new user
approaches zero. Or perhaps the frontpage protects itself, by advertising what
type of submission is expected.
The most dangerous thing for the frontpage is stuff that's too easy to upvote.
If someone proves a new theorem, it takes some work by the reader to decide
whether or not to upvote it. An amusing cartoon takes less. A rant with a
rallying cry as the title takes zero, because people vote it up without even
reading it.
Hence what I call the Fluff Principle: on a user-voted news site, the links
that are easiest to judge will take over unless you take specific measures to
prevent it.
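To make the dynamic concrete, here's a toy simulation. This isn't anything HN
actually runs; the titles, the effort numbers, and the rule that a reader's
chance of voting is inversely proportional to the effort of judging a link are
all assumptions made up for illustration:
```python
import random

# Toy model: a reader's chance of voting on a link is inversely
# proportional to the effort needed to judge it. Titles and effort
# values are invented for illustration.
links = {"new theorem": 10.0, "amusing cartoon": 2.0, "rallying-cry rant": 1.0}
votes = {title: 0 for title in links}

for _ in range(10_000):                      # 10,000 readers scan the frontpage
    for title, effort in links.items():
        if random.random() < 1.0 / effort:   # easier to judge -> more votes
            votes[title] += 1

print(votes)
```
Run it and the rallying-cry rant ends up on top every time, even though each
individual reader is behaving reasonably.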
Hacker News has two kinds of protections against fluff. The most common types
of fluff links are banned as off-topic. Pictures of kittens, political
diatribes, and so on are explicitly banned. This keeps out most fluff, but not
all of it. Some links are both fluff, in the sense of being very short, and
also on topic.
There's no single solution to that. If a link is just an empty rant, editors
will sometimes kill it even if it's on topic in the sense of being about
hacking, because it's not on topic by the real standard, which is to engage
one's intellectual curiosity. If the posts on a site are characteristically of
this type I sometimes ban it, which means new stuff at that url is
auto-killed. If a post has a linkbait title, editors sometimes rephrase it to be
more matter-of-fact. This is especially necessary with links whose titles are
rallying cries, because otherwise they become implicit "vote up if you believe
such-and-such" posts, which are the most extreme form of fluff.
The techniques for dealing with links have to evolve, because the links do.
The existence of aggregators has already affected what they aggregate. Writers
now deliberately write things to draw traffic from aggregators—sometimes even
specific ones. (No, the irony of this statement is not lost on me.) Then there
are the more sinister mutations, like linkjacking—posting a paraphrase of
someone else's article and submitting that instead of the original. These can
get a lot of upvotes, because a lot of what's good in an article often
survives; indeed, the closer the paraphrase is to plagiarism, the more
survives.
I think it's important that a site that kills submissions provide a way for
users to see what got killed if they want to. That keeps editors honest, and
just as importantly, makes users confident they'd know if the editors stopped
being honest. HN users can do this by flipping a switch called showdead in
their profile.
**Comments**
Bad comments seem to be a harder problem than bad submissions. While the
quality of links on the frontpage of HN hasn't changed much, the quality of
the median comment may have decreased somewhat.
There are two main kinds of badness in comments: meanness and stupidity. There
is a lot of overlap between the two—mean comments are disproportionately
likely also to be dumb—but the strategies for dealing with them are different.
Meanness is easier to control. You can have rules saying one shouldn't be
mean, and if you enforce them it seems possible to keep a lid on meanness.
Keeping a lid on stupidity is harder, perhaps because stupidity is not so
easily distinguishable. Mean people are more likely to know they're being mean
than stupid people are to know they're being stupid.
The most dangerous form of stupid comment is not the long but mistaken
argument, but the dumb joke. Long but mistaken arguments are actually quite
rare. There is a strong correlation between comment quality and length; if you
wanted to compare the quality of comments on community sites, average length
would be a good predictor. Probably the cause is human nature rather than
anything specific to comment threads. Probably it's simply that stupidity more
often takes the form of having few ideas than wrong ones.
Whatever the cause, stupid comments tend to be short. And since it's hard to
write a short comment that's distinguished for the amount of information it
conveys, people try to distinguish them instead by being funny. The most
tempting format for stupid comments is the supposedly witty put-down, probably
because put-downs are the easiest form of humor. So one advantage of
forbidding meanness is that it also cuts down on these.
Bad comments are like kudzu: they take over rapidly. Comments have much more
effect on new comments than submissions have on new submissions. If someone
submits a lame article, the other submissions don't all become lame. But if
someone posts a stupid comment on a thread, that sets the tone for the region
around it. People reply to dumb jokes with dumb jokes.
Maybe the solution is to add a delay before people can respond to a comment,
and make the length of the delay inversely proportional to some prediction of
its quality. Then dumb threads would grow slower.
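Concretely, the rule might look something like this minimal sketch. The
quality predictor is hypothetical (some model scoring a comment in (0, 1])
and the constants are arbitrary:
```python
MAX_DELAY_MINUTES = 60.0

def reply_delay(predicted_quality: float) -> float:
    """Minutes before replies open on a comment, inversely proportional
    to its predicted quality. predicted_quality is a hypothetical score
    in (0, 1] from some model of comment quality."""
    q = max(min(predicted_quality, 1.0), 0.01)  # clamp to avoid division by zero
    return min(MAX_DELAY_MINUTES, 1.0 / q)

# reply_delay(1.0) -> 1 minute; reply_delay(0.02) -> 50 minutes
```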
**People**
I notice most of the techniques I've described are conservative: they're aimed
at preserving the character of the site rather than enhancing it. I don't
think that's a bias of mine. It's due to the shape of the problem. Hacker News
had the good fortune to start out good, so in this case it's literally a
matter of preservation. But I think this principle would also apply to sites
with different origins.
The good things in a community site come from people more than technology;
it's mainly in the prevention of bad things that technology comes into play.
Technology certainly can enhance discussion. Nested comments do, for example.
But I'd rather use a site with primitive features and smart, nice users than a
more advanced one whose users were idiots or trolls.
So the most important thing a community site can do is attract the kind of
people it wants. A site trying to be as big as possible wants to attract
everyone. But a site aiming at a particular subset of users has to attract
just those—and just as importantly, repel everyone else. I've made a conscious
effort to do this on HN. The graphic design is as plain as possible, and the
site rules discourage dramatic link titles. The goal is that the only thing to
interest someone arriving at HN for the first time should be the ideas
expressed there.
The downside of tuning a site to attract certain people is that, to those
people, it can be too attractive. I'm all too aware how addictive Hacker News
can be. For me, as for many users, it's a kind of virtual town square. When I
want to take a break from working, I walk into the square, just as I might
into Harvard Square or University Ave in the physical world. But an online
square is more dangerous than a physical one. If I spent half the day
loitering on University Ave, I'd notice. I have to walk a mile to get there,
and sitting in a cafe feels different from working. But visiting an online
forum takes just a click, and feels superficially very much like working. You
may be wasting your time, but you're not idle. Someone is wrong on the
Internet, and you're fixing the problem.
Hacker News is definitely useful. I've learned a lot from things I've read on
HN. I've written several essays that began as comments there. So I wouldn't
want the site to go away. But I would like to be sure it's not a net drag on
productivity. What a disaster that would be, to attract thousands of smart
people to a site that caused them to waste lots of time. I wish I could be
100% sure that's not a description of HN.
I feel like the addictiveness of games and social applications is still a
mostly unsolved problem. The situation now is like it was with crack in the
1980s: we've invented terribly addictive new things, and we haven't yet
evolved ways to protect ourselves from them. We will eventually, and that's
one of the problems I hope to focus on next.
* * *
May 2008
Adults lie constantly to kids. I'm not saying we should stop, but I think we
should at least examine which lies we tell and why.
There may also be a benefit to us. We were all lied to as kids, and some of
the lies we were told still affect us. So by studying the ways adults lie to
kids, we may be able to clear our heads of lies we were told.
I'm using the word "lie" in a very general sense: not just overt falsehoods,
but also all the more subtle ways we mislead kids. Though "lie" has negative
connotations, I don't mean to suggest we should never do this—just that we
should pay attention when we do.
One of the most remarkable things about the way we lie to kids is how broad
the conspiracy is. All adults know what their culture lies to kids about:
they're the questions you answer "Ask your parents." If a kid asked who won
the World Series in 1982 or what the atomic weight of carbon was, you could
just tell him. But if a kid asks you "Is there a God?" or "What's a
prostitute?" you'll probably say "Ask your parents."
Since we all agree, kids see few cracks in the view of the world presented to
them. The biggest disagreements are between parents and schools, but even
those are small. Schools are careful what they say about controversial topics,
and if they do contradict what parents want their kids to believe, parents
either pressure the school into keeping quiet or move their kids to a new
school.
The conspiracy is so thorough that most kids who discover it do so only by
discovering internal contradictions in what they're told. It can be traumatic
for the ones who wake up during the operation. Here's what happened to
Einstein:
> Through the reading of popular scientific books I soon reached the
> conviction that much in the stories of the Bible could not be true. The
> consequence was a positively fanatic freethinking coupled with the
> impression that youth is intentionally being deceived by the state through
> lies: it was a crushing impression.
I remember that feeling. By 15 I was convinced the world was corrupt from end
to end. That's why movies like _The Matrix_ have such resonance. Every kid
grows up in a fake world. In a way it would be easier if the forces behind it
were as clearly differentiated as a bunch of evil machines, and one could make
a clean break just by taking a pill.
**Protection**
If you ask adults why they lie to kids, the most common reason they give is to
protect them. And kids do need protecting. The environment you want to create
for a newborn child will be quite unlike the streets of a big city.
That seems so obvious it seems wrong to call it a lie. It's certainly not a
bad lie to tell, to give a baby the impression the world is quiet and warm and
safe. But this harmless type of lie can turn sour if left unexamined.
Imagine if you tried to keep someone in as protected an environment as a
newborn till age 18. To mislead someone so grossly about the world would seem
not protection but abuse. That's an extreme example, of course; when parents
do that sort of thing it becomes national news. But you see the same problem
on a smaller scale in the malaise teenagers feel in suburbia.
The main purpose of suburbia is to provide a protected environment for
children to grow up in. And it seems great for 10 year olds. I liked living in
suburbia when I was 10. I didn't notice how sterile it was. My whole world was
no bigger than a few friends' houses I bicycled to and some woods I ran around
in. On a log scale I was midway between crib and globe. A suburban street was
just the right size. But as I grew older, suburbia started to feel
suffocatingly fake.
Life can be pretty good at 10 or 20, but it's often frustrating at 15. This
is too big a problem to solve here, but certainly one reason life sucks at 15
is that kids are trapped in a world designed for 10 year olds.
What do parents hope to protect their children from by raising them in
suburbia? A friend who moved out of Manhattan said merely that her 3 year old
daughter "saw too much." Off the top of my head, that might include: people
who are high or drunk, poverty, madness, gruesome medical conditions, sexual
behavior of various degrees of oddness, and violent anger.
I think it's the anger that would worry me most if I had a 3 year old. I was
29 when I moved to New York and I was surprised even then. I wouldn't want a 3
year old to see some of the disputes I saw. It would be too frightening. A lot
of the things adults conceal from smaller children, they conceal because
they'd be frightening, not because they want to conceal the existence of such
things. Misleading the child is just a byproduct.
This seems one of the most justifiable types of lying adults do to kids. But
because the lies are indirect we don't keep a very strict accounting of them.
Parents know they've concealed the facts about sex, and many at some point sit
their kids down and explain more. But few tell their kids about the
differences between the real world and the cocoon they grew up in. Combine
this with the confidence parents try to instill in their kids, and every year
you get a new crop of 18 year olds who think they know how to run the world.
Don't all 18 year olds think they know how to run the world? Actually this
seems to be a recent innovation, no more than about 100 years old. In
preindustrial times teenage kids were junior members of the adult world and
comparatively well aware of their shortcomings. They could see they weren't as
strong or skillful as the village smith. In past times people lied to kids
about some things more than we do now, but the lies implicit in an artificial,
protected environment are a recent invention. Like a lot of new inventions,
the rich got this first. Children of kings and great magnates were the first
to grow up out of touch with the world. Suburbia means half the population can
live like kings in that respect.
**Sex (and Drugs)**
I'd have different worries about raising teenage kids in New York. I'd worry
less about what they'd see, and more about what they'd do. I went to college
with a lot of kids who grew up in Manhattan, and as a rule they seemed pretty
jaded. They seemed to have lost their virginity at an average of about 14 and
by college had tried more drugs than I'd even heard of.
The reasons parents don't want their teenage kids having sex are complex.
There are some obvious dangers: pregnancy and sexually transmitted diseases.
But those aren't the only reasons parents don't want their kids having sex.
The average parents of a 14 year old girl would hate the idea of her having
sex even if there were zero risk of pregnancy or sexually transmitted
diseases.
Kids can probably sense they aren't being told the whole story. After all,
pregnancy and sexually transmitted diseases are just as much a problem for
adults, and they have sex.
What really bothers parents about their teenage kids having sex? Their dislike
of the idea is so visceral it's probably inborn. But if it's inborn it should
be universal, and there are plenty of societies where parents don't mind if
their teenage kids have sex—indeed, where it's normal for 14 year olds to
become mothers. So what's going on? There does seem to be a universal taboo
against sex with prepubescent children. One can imagine evolutionary reasons
for that. And I think this is the main reason parents in industrialized
societies dislike teenage kids having sex. They still think of them as
children, even though biologically they're not, so the taboo against child sex
still has force.
One thing adults conceal about sex they also conceal about drugs: that it can
cause great pleasure. That's what makes sex and drugs so dangerous. The desire
for them can cloud one's judgement—which is especially frightening when the
judgement being clouded is the already wretched judgement of a teenage kid.
Here parents' desires conflict. Older societies told kids they had bad
judgement, but modern parents want their children to be confident. This may
well be a better plan than the old one of putting them in their place, but it
has the side effect that after having implicitly lied to kids about how good
their judgement is, we then have to lie again about all the things they might
get into trouble with if they believed us.
If parents told their kids the truth about sex and drugs, it would be: the
reason you should avoid these things is that you have lousy judgement. People
with twice your experience still get burned by them. But this may be one of
those cases where the truth wouldn't be convincing, because one of the
symptoms of bad judgement is believing you have good judgement. When you're
too weak to lift something, you can tell, but when you're making a decision
impetuously, you're all the more sure of it.
**Innocence**
Another reason parents don't want their kids having sex is that they want to
keep them innocent. Adults have a certain model of how kids are supposed to
behave, and it's different from what they expect of other adults.
One of the most obvious differences is the words kids are allowed to use. Most
parents use words when talking to other adults that they wouldn't want their
kids using. They try to hide even the existence of these words for as long as
they can. And this is another of those conspiracies everyone participates in:
everyone knows you're not supposed to swear in front of kids.
I've never heard more different explanations for anything parents tell kids
than why they shouldn't swear. Every parent I know forbids their children to
swear, and yet no two of them have the same justification. It's clear most
start with not wanting kids to swear, then make up the reason afterward.
So my theory about what's going on is that the _function_ of swearwords is to
mark the speaker as an adult. There's no difference in the meaning of "shit"
and "poopoo." So why should one be ok for kids to say and one forbidden? The
only explanation is: by definition.
Why does it bother adults so much when kids do things reserved for adults? The
idea of a foul-mouthed, cynical 10 year old leaning against a lamppost with a
cigarette hanging out of the corner of his mouth is very disconcerting. But
why?
One reason we want kids to be innocent is that we're programmed to like
certain kinds of helplessness. I've several times heard mothers say they
deliberately refrained from correcting their young children's
mispronunciations because they were so cute. And if you think about it,
cuteness is helplessness. Toys and cartoon characters meant to be cute always
have clueless expressions and stubby, ineffectual limbs.
It's not surprising we'd have an inborn desire to love and protect helpless
creatures, considering human offspring are so helpless for so long. Without
the helplessness that makes kids cute, they'd be very annoying. They'd merely
seem like incompetent adults. But there's more to it than that. The reason our
hypothetical jaded 10 year old bothers me so much is not just that he'd be
annoying, but that he'd have cut off his prospects for growth so early. To be
jaded you have to think you know how the world works, and any theory a 10 year
old had about that would probably be a pretty narrow one.
Innocence is also open-mindedness. We want kids to be innocent so they can
continue to learn. Paradoxical as it sounds, there are some kinds of knowledge
that get in the way of other kinds of knowledge. If you're going to learn that
the world is a brutal place full of people trying to take advantage of one
another, you're better off learning it last. Otherwise you won't bother
learning much more.
Very smart adults often seem unusually innocent, and I don't think this is a
coincidence. I think they've deliberately avoided learning about certain
things. Certainly I do. I used to think I wanted to know everything. Now I
know I don't.
**Death**
After sex, death is the topic adults lie most conspicuously about to kids. Sex
I believe they conceal because of deep taboos. But why do we conceal death
from kids? Probably because small children are particularly horrified by it.
They want to feel safe, and death is the ultimate threat.
One of the most spectacular lies our parents told us was about the death of
our first cat. Over the years, as we asked for more details, they were
compelled to invent more, so the story grew quite elaborate. The cat had died
at the vet's office. Of what? Of the anaesthesia itself. Why was the cat at
the vet's office? To be fixed. And why had such a routine operation killed it?
It wasn't the vet's fault; the cat had a congenitally weak heart; the
anaesthesia was too much for it; but there was no way anyone could have known
this in advance. It was not till we were in our twenties that the truth came
out: my sister, then about three, had accidentally stepped on the cat and
broken its back.
They didn't feel the need to tell us the cat was now happily in cat heaven. My
parents never claimed that people or animals who died had "gone to a better
place," or that we'd meet them again. It didn't seem to harm us.
My grandmother told us an edited version of the death of my grandfather. She
said they'd been sitting reading one day, and when she said something to him,
he didn't answer. He seemed to be asleep, but when she tried to rouse him, she
couldn't. "He was gone." Having a heart attack sounded like falling asleep.
Later I learned it hadn't been so neat, and the heart attack had taken most of
a day to kill him.
Along with such outright lies, there must have been a lot of changing the
subject when death came up. I can't remember that, of course, but I can infer
it from the fact that I didn't really grasp I was going to die till I was
about 19. How could I have missed something so obvious for so long? Now that
I've seen parents managing the subject, I can see how: questions about death
are gently but firmly turned aside.
On this topic, especially, they're met half-way by kids. Kids often want to be
lied to. They want to believe they're living in a comfortable, safe world as
much as their parents want them to believe it.
**Identity**
Some parents feel a strong adherence to an ethnic or religious group and want
their kids to feel it too. This usually requires two different kinds of lying:
the first is to tell the child that he or she is an X, and the second is
whatever specific lies Xes differentiate themselves by believing.
Telling a child they have a particular ethnic or religious identity is one of
the stickiest things you can tell them. Almost anything else you tell a kid,
they can change their mind about later when they start to think for
themselves. But if you tell a kid they're a member of a certain group, that
seems nearly impossible to shake.
This despite the fact that it can be one of the most premeditated lies parents
tell. When parents are of different religions, they'll often agree between
themselves that their children will be "raised as Xes." And it works. The kids
obligingly grow up considering themselves as Xes, despite the fact that if
their parents had chosen the other way, they'd have grown up considering
themselves as Ys.
One reason this works so well is the second kind of lie involved. The truth is
common property. You can't distinguish your group by doing things that are
rational, and believing things that are true. If you want to set yourself
apart from other people, you have to do things that are arbitrary, and believe
things that are false. And after having spent their whole lives doing things
that are arbitrary and believing things that are false, and being regarded as
odd by "outsiders" on that account, the cognitive dissonance pushing children
to regard themselves as Xes must be enormous. If they aren't an X, why are
they attached to all these arbitrary beliefs and customs? If they aren't an X,
why do all the non-Xes call them one?
This form of lie is not without its uses. You can use it to carry a payload of
beneficial beliefs, and they will also become part of the child's identity.
You can tell the child that in addition to never wearing the color yellow,
believing the world was created by a giant rabbit, and always snapping their
fingers before eating fish, Xes are also particularly honest and industrious.
Then X children will grow up feeling it's part of their identity to be honest
and industrious.
This probably accounts for a lot of the spread of modern religions, and
explains why their doctrines are a combination of the useful and the bizarre.
The bizarre half is what makes the religion stick, and the useful half is the
payload.
**Authority**
One of the least excusable reasons adults lie to kids is to maintain power
over them. Sometimes these lies are truly sinister, like a child molester
telling his victims they'll get in trouble if they tell anyone what happened
to them. Others seem more innocent; it depends how badly adults lie to
maintain their power, and what they use it for.
Most adults make some effort to conceal their flaws from children. Usually
their motives are mixed. For example, a father who has an affair generally
conceals it from his children. His motive is partly that it would worry them,
partly that this would introduce the topic of sex, and partly (a larger part
than he would admit) that he doesn't want to tarnish himself in their eyes.
If you want to learn what lies are told to kids, read almost any book written
to teach them about "issues." Peter Mayle wrote one called _Why Are We
Getting a Divorce?_ It begins with the three most important things to remember
about divorce, one of which is:
> You shouldn't put the blame on one parent, because divorce is never only one
> person's fault.
Really? When a man runs off with his secretary, is it always partly his wife's
fault? But I can see why Mayle might have said this. Maybe it's more important
for kids to respect their parents than to know the truth about them.
But because adults conceal their flaws, and at the same time insist on high
standards of behavior for kids, a lot of kids grow up feeling they fall
hopelessly short. They walk around feeling horribly evil for having used a
swearword, while in fact most of the adults around them are doing much worse
things.
This happens in intellectual as well as moral questions. The more confident
people are, the more willing they seem to be to answer a question "I don't
know." Less confident people feel they have to have an answer or they'll look
bad. My parents were pretty good about admitting when they didn't know things,
but I must have been told a lot of lies of this type by teachers, because I
rarely heard a teacher say "I don't know" till I got to college. I remember
because it was so surprising to hear someone say that in front of a class.
The first hint I had that teachers weren't omniscient came in sixth grade,
after my father contradicted something I'd learned in school. When I protested
that the teacher had said the opposite, my father replied that the guy had no
idea what he was talking about—that he was just an elementary school teacher,
after all.
_Just_ a teacher? The phrase seemed almost grammatically ill-formed. Didn't
teachers know everything about the subjects they taught? And if not, why were
they the ones teaching us?
The sad fact is, US public school teachers don't generally understand the
stuff they're teaching very well. There are some sterling exceptions, but as a
rule people planning to go into teaching rank academically near the bottom of
the college population. So the fact that I still thought at age 11 that
teachers were infallible shows what a job the system must have done on my
brain.
**School**
What kids get taught in school is a complex mix of lies. The most excusable
are those told to simplify ideas to make them easy to learn. The problem is, a
lot of propaganda gets slipped into the curriculum in the name of
simplification.
Public school textbooks represent a compromise between what various powerful
groups want kids to be told. The lies are rarely overt. Usually they consist
either of omissions or of over-emphasizing certain topics at the expense of
others. The view of history we got in elementary school was a crude
hagiography, with at least one representative of each powerful group.
The famous scientists I remember were Einstein, Marie Curie, and George
Washington Carver. Einstein was a big deal because his work led to the atom
bomb. Marie Curie was involved with X-rays. But I was mystified about Carver.
He seemed to have done stuff with peanuts.
It's obvious now that he was on the list because he was black (and for that
matter that Marie Curie was on it because she was a woman), but as a kid I was
confused for years about him. I wonder if it wouldn't have been better just to
tell us the truth: that there weren't any famous black scientists. Ranking
George Washington Carver with Einstein misled us not only about science, but
about the obstacles blacks faced in his time.
As subjects got softer, the lies got more frequent. By the time you got to
politics and recent history, what we were taught was pretty much pure
propaganda. For example, we were taught to regard political leaders as
saints—especially the recently martyred Kennedy and King. It was astonishing
to learn later that they'd both been serial womanizers, and that Kennedy was a
speed freak to boot. (By the time King's plagiarism emerged, I'd lost the
ability to be surprised by the misdeeds of famous people.)
I doubt you could teach kids recent history without teaching them lies,
because practically everyone who has anything to say about it has some kind of
spin to put on it. Much recent history _consists_ of spin. It would probably
be better just to teach them metafacts like that.
Probably the biggest lie told in schools, though, is that the way to succeed
is through following "the rules." In fact most such rules are just hacks to
manage large groups efficiently.
**Peace**
Of all the reasons we lie to kids, the most powerful is probably the same
mundane reason they lie to us.
Often when we lie to people it's not part of any conscious strategy, but
because they'd react violently to the truth. Kids, almost by definition, lack
self-control. They react violently to things—and so they get lied to a lot.
A few Thanksgivings ago, a friend of mine found himself in a situation that
perfectly illustrates the complex motives we have when we lie to kids. As the
roast turkey appeared on the table, his alarmingly perceptive 5 year old son
suddenly asked if the turkey had wanted to die. Foreseeing disaster, my friend
and his wife rapidly improvised: yes, the turkey had wanted to die, and in
fact had lived its whole life with the aim of being their Thanksgiving dinner.
And that (phew) was the end of that.
Whenever we lie to kids to protect them, we're usually also lying to keep the
peace.
One consequence of this sort of calming lie is that we grow up thinking
horrible things are normal. It's hard for us to feel a sense of urgency as
adults over something we've literally been trained not to worry about. When I
was about 10 I saw a documentary on pollution that put me into a panic. It
seemed the planet was being irretrievably ruined. I went to my mother
afterward to ask if this was so. I don't remember what she said, but she made
me feel better, so I stopped worrying about it.
That was probably the best way to handle a frightened 10 year old. But we
should understand the price. This sort of lie is one of the main reasons bad
things persist: we're all trained to ignore them.
**Detox**
A sprinter in a race almost immediately enters a state called "oxygen debt."
His body switches to an emergency source of energy that's faster than regular
aerobic respiration. But this process builds up waste products that ultimately
require extra oxygen to break down, so at the end of the race he has to stop
and pant for a while to recover.
We arrive at adulthood with a kind of truth debt. We were told a lot of lies
to get us (and our parents) through our childhood. Some may have been
necessary. Some probably weren't. But we all arrive at adulthood with heads
full of lies.
There's never a point where the adults sit you down and explain all the lies
they told you. They've forgotten most of them. So if you're going to clear
these lies out of your head, you're going to have to do it yourself.
Few do. Most people go through life with bits of packing material adhering to
their minds and never know it. You probably never can completely undo the
effects of lies you were told as a kid, but it's worth trying. I've found that
whenever I've been able to undo a lie I was told, a lot of other things fell
into place.
Fortunately, once you arrive at adulthood you get a valuable new resource you
can use to figure out what lies you were told. You're now one of the liars.
You get to watch behind the scenes as adults spin the world for the next
generation of kids.
The first step in clearing your head is to realize how far you are from a
neutral observer. When I left high school I was, I thought, a complete
skeptic. I'd realized high school was crap. I thought I was ready to question
everything I knew. But among the many other things I was ignorant of was how
much debris there already was in my head. It's not enough to consider your
mind a blank slate. You have to consciously erase it.
* * *
November 2008
One of the differences between big companies and startups is that big
companies tend to have developed procedures to protect themselves against
mistakes. A startup walks like a toddler, bashing into things and falling over
all the time. A big company is more deliberate.
The gradual accumulation of checks in an organization is a kind of learning,
based on disasters that have happened to it or others like it. After giving a
contract to a supplier who goes bankrupt and fails to deliver, for example, a
company might require all suppliers to prove they're solvent before submitting
bids.
As companies grow they invariably get more such checks, either in response to
disasters they've suffered, or (probably more often) by hiring people from
bigger companies who bring with them customs for protecting against new types
of disasters.
It's natural for organizations to learn from mistakes. The problem is, people
who propose new checks almost never consider that the check itself has a cost.
_Every check has a cost._ For example, consider the case of making suppliers
verify their solvency. Surely that's mere prudence? But in fact it could have
substantial costs. There's obviously the direct cost in time of the people on
both sides who supply and check proofs of the supplier's solvency. But the
real costs are the ones you never hear about: the company that would be the
best supplier, but doesn't bid because they can't spare the effort to get
verified. Or the company that would be the best supplier, but falls just short
of the threshold for solvency—which will of course have been set on the high
side, since there is no apparent cost of increasing it.
Whenever someone in an organization proposes to add a new check, they should
have to explain not just the benefit but the cost. No matter how bad a job
they did of analyzing it, this meta-check would at least remind everyone there
had to _be_ a cost, and send them looking for it.
If companies started doing that, they'd find some surprises. Joel Spolsky
recently spoke at Y Combinator about selling software to corporate customers.
He said that in most companies software costing up to about $1000 could be
bought by individual managers without any additional approvals. Above that
threshold, software purchases generally had to be approved by a committee. But
babysitting this process was so expensive for software vendors that it didn't
make sense to charge less than $50,000. Which means if you're making something
you might otherwise have charged $5000 for, you have to sell it for $50,000
instead.
The purpose of the committee is presumably to ensure that the company doesn't
waste money. And yet the result is that the company pays 10 times as much.
Checks on purchases will always be expensive, because the harder it is to sell
something to you, the more it has to cost. And not merely linearly, either. If
you're hard enough to sell to, the people who are best at making things don't
want to bother. The only people who will sell to you are companies that
specialize in selling to you. Then you've sunk to a whole new level of
inefficiency. Market mechanisms no longer protect you, because the good
suppliers are no longer in the market.
Such things happen constantly to the biggest organizations of all,
governments. But checks instituted by governments can cause much worse
problems than merely overpaying. Checks instituted by governments can cripple
a country's whole economy. Up till about 1400, China was richer and more
technologically advanced than Europe. One reason Europe pulled ahead was that
the Chinese government restricted long trading voyages. So it was left to the
Europeans to explore and eventually to dominate the rest of the world,
including China.
In more recent times, Sarbanes-Oxley has practically destroyed the US IPO
market. That wasn't the intention of the legislators who wrote it. They just
wanted to add a few more checks on public companies. But they forgot to
consider the cost. They forgot that companies about to go public are usually
rather stretched, and that the weight of a few extra checks that might be easy
for General Electric to bear is enough to prevent younger companies from
being public at all.
Once you start to think about the cost of checks, you can start to ask other
interesting questions. Is the cost increasing or decreasing? Is it higher in
some areas than others? Where does it increase discontinuously? If large
organizations started to ask questions like that, they'd learn some
frightening things.
I think the cost of checks may actually be increasing. The reason is that
software plays an increasingly important role in companies, and the people who
write software are particularly harmed by checks.
Programmers are unlike many types of workers in that the best ones actually
prefer to work hard. This doesn't seem to be the case in most types of work.
When I worked in fast food, we didn't prefer the busy times. And when I used
to mow lawns, I definitely didn't prefer it when the grass was long after a
week of rain.
Programmers, though, like it better when they write more code. Or more
precisely, when they release more code. Programmers like to make a difference.
Good ones, anyway.
For good programmers, one of the best things about working for a startup is
that there are few checks on releases. In true startups, there are no external
checks at all. If you have an idea for a new feature in the morning, you can
write it and push it to the production servers before lunch. And when you can
do that, you have more ideas.
At big companies, software has to go through various approvals before it can
be launched. And the cost of doing this can be enormous—in fact,
discontinuous. I was talking recently to a group of three programmers whose
startup had been acquired a few years before by a big company. When they'd
been independent, they could release changes instantly. Now, they said, the
absolute fastest they could get code released on the production servers was
two weeks.
This didn't merely make them less productive. It made them hate working for
the acquirer.
Here's a sign of how much programmers like to be able to work hard: these guys
would have _paid_ to be able to release code immediately, the way they used
to. I asked them if they'd trade 10% of the acquisition price for the ability
to release code immediately, and all three instantly said yes. Then I asked
what was the maximum percentage of the acquisition price they'd trade for it.
They said they didn't want to think about it, because they didn't want to know
how high they'd go, but I got the impression it might be as much as half.
They'd have sacrificed hundreds of thousands of dollars, perhaps millions,
just to be able to deliver more software to users. And you know what? It would
have been perfectly safe to let them. In fact, the acquirer would have been
better off; not only wouldn't these guys have broken anything, they'd have
gotten a lot more done. So the acquirer is in fact getting worse performance
at greater cost. Just like the committee approving software purchases.
And just as the greatest danger of being hard to sell to is not that you
overpay but that the best suppliers won't even sell to you, the greatest
danger of applying too many checks to your programmers is not that you'll make
them unproductive, but that good programmers won't even want to work for you.
Steve Jobs's famous maxim "artists ship" works both ways. Artists aren't
merely capable of shipping. They insist on it. So if you don't let people
ship, you won't have any artists.
* * *
December 2010
I was thinking recently how inconvenient it was not to have a general term for
iPhones, iPads, and the corresponding things running Android. The closest to a
general term seems to be "mobile devices," but that (a) applies to any mobile
phone, and (b) doesn't really capture what's distinctive about the iPad.
After a few seconds it struck me that what we'll end up calling these things
is tablets. The only reason we even consider calling them "mobile devices" is
that the iPhone preceded the iPad. If the iPad had come first, we wouldn't
think of the iPhone as a phone; we'd think of it as a tablet small enough to
hold up to your ear.
The iPhone isn't so much a phone as a replacement for a phone. That's an
important distinction, because it's an early instance of what will become a
common pattern. Many if not most of the special-purpose objects around us are
going to be replaced by apps running on tablets.
This is already clear in cases like GPSes, music players, and cameras. But I
think it will surprise people how many things are going to get replaced. We
funded one startup that's replacing keys. The fact that you can change font
sizes easily means the iPad effectively replaces reading glasses. I wouldn't
be surprised if by playing some clever tricks with the accelerometer you could
even replace the bathroom scale.
The advantages of doing things in software on a single device are so great
that everything that can get turned into software will. So for the next couple
years, a good recipe for startups will be to look around you for things that
people haven't realized yet can be made unnecessary by a tablet app.
In 1938 Buckminster Fuller coined the term ephemeralization to describe the
increasing tendency of physical machinery to be replaced by what we would now
call software. The reason tablets are going to take over the world is not
(just) that Steve Jobs and Co are industrial design wizards, but because they
have this force behind them. The iPhone and the iPad have effectively drilled
a hole that will allow ephemeralization to flow into a lot of new areas. No
one who has studied the history of technology would want to underestimate the
power of that force.
I worry about the power Apple could have with this force behind them. I don't
want to see another era of client monoculture like the Microsoft one in the
80s and 90s. But if ephemeralization is one of the main forces driving the
spread of tablets, that suggests a way to compete with Apple: be a better
platform for it.
It has turned out to be a great thing that Apple tablets have accelerometers
in them. Developers have used the accelerometer in ways Apple could never have
imagined. That's the nature of platforms. The more versatile the tool, the
less you can predict how people will use it. So tablet makers should be
thinking: what else can we put in there? Not merely hardware, but software
too. What else can we give developers access to? Give hackers an inch and
they'll take you a mile.
**Thanks** to Sam Altman, Paul Buchheit, Jessica Livingston, and Robert Morris
for reading drafts of this.
* * *
January 2012
There are great startup ideas lying around unexploited right under our noses.
One reason we don't see them is a phenomenon I call _schlep blindness_. Schlep
was originally a Yiddish word but has passed into general use in the US. It
means a tedious, unpleasant task.
No one likes schleps, but hackers especially dislike them. Most hackers who
start startups wish they could do it by just writing some clever software,
putting it on a server somewhere, and watching the money roll in—without ever
having to talk to users, or negotiate with other companies, or deal with other
people's broken code. Maybe that's possible, but I haven't seen it.
One of the many things we do at Y Combinator is teach hackers about the
inevitability of schleps. No, you can't start a startup by just writing code.
I remember going through this realization myself. There was a point in 1995
when I was still trying to convince myself I could start a company by just
writing code. But I soon learned from experience that schleps are not merely
inevitable, but pretty much what business consists of. A company is defined by
the schleps it will undertake. And schleps should be dealt with the same way
you'd deal with a cold swimming pool: just jump in. Which is not to say you
should seek out unpleasant work per se, but that you should never shrink from
it if it's on the path to something great.
The most dangerous thing about our dislike of schleps is that much of it is
unconscious. Your unconscious won't even let you see ideas that involve
painful schleps. That's schlep blindness.
The phenomenon isn't limited to startups. Most people don't consciously decide
not to be in as good physical shape as Olympic athletes, for example. Their
unconscious mind decides for them, shrinking from the work involved.
The most striking example I know of schlep blindness is Stripe, or rather
Stripe's idea. For over a decade, every hacker who'd ever had to process
payments online knew how painful the experience was. Thousands of people must
have known about this problem. And yet when they started startups, they
decided to build recipe sites, or aggregators for local events. Why? Why work
on problems few care much about and no one will pay for, when you could fix
one of the most important components of the world's infrastructure? Because
schlep blindness prevented people from even considering the idea of fixing
payments.
Probably no one who applied to Y Combinator to work on a recipe site began by
asking "should we fix payments, or build a recipe site?" and chose the recipe
site. Though the idea of fixing payments was right there in plain sight, they
never saw it, because their unconscious mind shrank from the complications
involved. You'd have to make deals with banks. How do you do that? Plus you're
moving money, so you're going to have to deal with fraud, and people trying to
break into your servers. Plus there are probably all sorts of regulations to
comply with. It's a lot more intimidating to start a startup like this than a
recipe site.
That scariness makes ambitious ideas doubly valuable. In addition to their
intrinsic value, they're like undervalued stocks in the sense that there's
less demand for them among founders. If you pick an ambitious idea, you'll
have less competition, because everyone else will have been frightened off by
the challenges involved. (This is also true of starting a startup generally.)
How do you overcome schlep blindness? Frankly, the most valuable antidote to
schlep blindness is probably ignorance. Most successful founders would
probably say that if they'd known when they were starting their company about
the obstacles they'd have to overcome, they might never have started it. Maybe
that's one reason the most successful startups of all so often have young
founders.
In practice the founders grow with the problems. But no one seems able to
foresee that, not even older, more experienced founders. So the reason younger
founders have an advantage is that they make two mistakes that cancel each
other out. They don't know how much they can grow, but they also don't know
how much they'll need to. Older founders only make the first mistake.
Ignorance can't solve everything though. Some ideas so obviously entail
alarming schleps that anyone can see them. How do you see ideas like that? The
trick I recommend is to take yourself out of the picture. Instead of asking
"what problem should I solve?" ask "what problem do I wish someone else would
solve for me?" If someone who had to process payments before Stripe had tried
asking that, Stripe would have been one of the first things they wished for.
It's too late now to be Stripe, but there's plenty still broken in the world,
if you know how to see it.
**Thanks** to Sam Altman, Paul Buchheit, Patrick Collison, Aaron Iba, Jessica
Livingston, Emmett Shear, and Harj Taggar for reading drafts of this.
* * *
November 2015
A few months ago an article about Y Combinator said that early on it had been
a "one-man show." It's sadly common to read that sort of thing. But the
problem with that description is not just that it's unfair. It's also
misleading. Much of what's most novel about YC is due to Jessica Livingston.
If you don't understand her, you don't understand YC. So let me tell you a
little about Jessica.
YC had 4 founders. Jessica and I decided one night to start it, and the next
day we recruited my friends Robert Morris and Trevor Blackwell. Jessica and I
ran YC day to day, and Robert and Trevor read applications and did interviews
with us.
Jessica and I were already dating when we started YC. At first we tried to act
"professional" about this, meaning we tried to conceal it. In retrospect that
seems ridiculous, and we soon dropped the pretense. And the fact that Jessica
and I were a couple is a big part of what made YC what it was. YC felt like a
family. The founders early on were mostly young. We all had dinner together
once a week, cooked for the first couple years by me. Our first building had
been a private home. The overall atmosphere was shockingly different from a
VC's office on Sand Hill Road, in a way that was entirely for the better.
There was an authenticity that everyone who walked in could sense. And that
didn't just mean that people trusted us. It was the perfect quality to instill
in startups. Authenticity is one of the most important things YC looks for in
founders, not just because fakers and opportunists are annoying, but because
authenticity is one of the main things that separates the most successful
startups from the rest.
Early YC was a family, and Jessica was its mom. And the culture she defined
was one of YC's most important innovations. Culture is important in any
organization, but at YC culture wasn't just how we behaved when we built the
product. At YC, the culture was the product.
Jessica was also the mom in another sense: she had the last word. Everything
we did as an organization went through her first — who to fund, what to say to
the public, how to deal with other companies, who to hire, everything.
Before we had kids, YC was more or less our life. There was no real
distinction between working hours and not. We talked about YC all the time.
And while there might be some businesses that it would be tedious to let
infect your private life, we liked it. We'd started YC because it was
something we were interested in. And some of the problems we were trying to
solve were endlessly difficult. How do you recognize good founders? You could
talk about that for years, and we did; we still do.
I'm better at some things than Jessica, and she's better at some things than
me. One of the things she's best at is judging people. She's one of those rare
individuals with x-ray vision for character. She can see through any kind of
faker almost immediately. Her nickname within YC was the Social Radar, and
this special power of hers was critical in making YC what it is. The earlier
you pick startups, the more you're picking the founders. Later stage investors
get to try products and look at growth numbers. At the stage where YC invests,
there is often neither a product nor any numbers.
Others thought YC had some special insight about the future of technology.
Mostly we had the same sort of insight Socrates claimed: we at least knew we
knew nothing. What made YC successful was being able to pick good founders. We
thought Airbnb was a bad idea. We funded it because we liked the founders.
During interviews, Robert and Trevor and I would pepper the applicants with
technical questions. Jessica would mostly watch. A lot of the applicants
probably read her as some kind of secretary, especially early on, because she
was the one who'd go out and get each new group and she didn't ask many
questions. She was ok with that. It was easier for her to watch people if they
didn't notice her. But after the interview, the three of us would turn to
Jessica and ask "What does the Social Radar say?"
Having the Social Radar at interviews wasn't just how we picked founders who'd
be successful. It was also how we picked founders who were good people. At
first we did this because we couldn't help it. Imagine what it would feel like
to have x-ray vision for character. Being around bad people would be
intolerable. So we'd refuse to fund founders whose characters we had doubts
about even if we thought they'd be successful.
Though we initially did this out of self-indulgence, it turned out to be very
valuable to YC. We didn't realize it in the beginning, but the people we were
picking would become the YC alumni network. And once we picked them, unless
they did something really egregious, they were going to be part of it for
life. Some now think YC's alumni network is its most valuable feature. I
personally think YC's advice is pretty good too, but the alumni network is
certainly up there. The level of trust and helpfulness
is remarkable for a group of such size. And Jessica is the main reason why.
(As we later learned, it probably cost us little to reject people whose
characters we had doubts about, because how good founders are and how well
they do are _not orthogonal_. If bad founders succeed at all, they tend to
sell early. The most successful founders are almost all good.)
If Jessica was so important to YC, why don't more people realize it? Partly
because I'm a writer, and writers always get disproportionate attention. YC's
brand was initially my brand, and our applicants were people who'd read my
essays. But there is another reason: Jessica hates attention. Talking to
reporters makes her nervous. The thought of giving a talk paralyzes her. She
was even uncomfortable at our wedding, because the bride is always the center
of attention.
It's not just because she's shy that she hates attention, but because it
throws off the Social Radar. She can't be herself. You can't watch people when
everyone is watching you.
Another reason attention worries her is that she hates bragging. In anything
she does that's publicly visible, her biggest fear (after the obvious fear
that it will be bad) is that it will seem ostentatious. She says being too
modest is a common problem for women. But in her case it goes beyond that. She
has a horror of ostentation so visceral it's almost a phobia.
She also hates fighting. She can't do it; she just shuts down. And
unfortunately there is a good deal of fighting in being the public face of an
organization.
So although Jessica more than anyone made YC unique, the very qualities that
enabled her to do it mean she tends to get written out of YC's history.
Everyone buys this story that PG started YC and his wife just kind of helped.
Even YC's haters buy it. A couple years ago when people were attacking us for
not funding more female founders (than exist), they all treated YC as
identical with PG. It would have spoiled the narrative to acknowledge
Jessica's central role at YC.
Jessica was boiling mad that people were accusing _her_ company of sexism.
I've never seen her angrier about anything. But she did not contradict them.
Not publicly. In private there was a great deal of profanity. And she wrote
three separate essays about the question of female founders. But she could
never bring herself to publish any of them. She'd seen the level of vitriol in
this debate, and she shrank from engaging.
It wasn't just because she disliked fighting. She's so sensitive to character
that it repels her even to fight with dishonest people. The idea of mixing it
up with linkbait journalists or Twitter trolls would seem to her not merely
frightening, but disgusting.
But Jessica knew her example as a successful female founder would encourage
more women to start companies, so last year she did something YC had never
done before and hired a PR firm to get her some interviews. At one of the
first interviews she did, the reporter brushed aside her insights about startups and
turned it into a sensationalistic story about how some guy had tried to chat
her up as she was waiting outside the bar where they had arranged to meet.
Jessica was mortified, partly because the guy had done nothing wrong, but more
because the story treated her as a victim significant only for being a woman,
rather than one of the most knowledgeable investors in the Valley.
After that she told the PR firm to stop.
You're not going to be hearing in the press about what Jessica has achieved.
So let me tell you what Jessica has achieved. Y Combinator is fundamentally a
nexus of people, like a university. It doesn't make a product. What defines it
is the people. Jessica more than anyone curated and nurtured that collection
of people. In that sense she literally made YC.
Jessica knows more about the qualities of startup founders than anyone else
ever has. Her immense data set and x-ray vision are a powerful combination in
that respect. The qualities of the founders are the best predictor of how a startup
will do. And startups are in turn the most important source of growth in
mature economies.
The person who knows the most about the most important factor in the growth of
mature economies — that is who Jessica Livingston is. Doesn't that sound like
someone who should be better known?
* * *
--- |
|
January 2017
People who are powerful but uncharismatic will tend to be disliked. Their
power makes them a target for criticism that they don't have the charisma to
disarm. That was Hillary Clinton's problem. It also tends to be a problem for
any CEO who is more of a builder than a schmoozer. And yet the builder-type
CEO is (like Hillary) probably the best person for the job.
I don't think there is any solution to this problem. It's human nature. The
best we can do is to recognize that it's happening, and to understand that
being a magnet for criticism is sometimes a sign not that someone is the wrong
person for a job, but that they're the right one.
---
* * *
--- |
|
September 2017
The most valuable insights are both general and surprising. F = ma for
example. But general and surprising is a hard combination to achieve. That
territory tends to be picked clean, precisely because those insights are so
valuable.
Ordinarily, the best that people can do is one without the other: either
surprising without being general (e.g. gossip), or general without being
surprising (e.g. platitudes).
Where things get interesting is the moderately valuable insights. You get
those from small additions of whichever quality was missing. The more common
case is a small addition of generality: a piece of gossip that's more than
just gossip, because it teaches something interesting about the world. But
another less common approach is to focus on the most general ideas and see if
you can find something new to say about them. Because these start out so
general, you only need a small delta of novelty to produce a useful insight.
A small delta of novelty is all you'll be able to get most of the time. Which
means if you take this route, your ideas will seem a lot like ones that
already exist. Sometimes you'll find you've merely rediscovered an idea that
did already exist. But don't be discouraged. Remember the huge multiplier that
kicks in when you do manage to think of something even a little new.
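One way to picture that multiplier (my formalization, not the essay's): if an insight's value scales roughly like generality times novelty,

$$ v \approx g \cdot n, $$

then at the general end $g$ is already enormous, so even a tiny delta of novelty $\Delta n$ adds value on the order of $g \, \Delta n$, which can be worth having despite how small $\Delta n$ is.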
Corollary: the more general the ideas you're talking about, the less you
should worry about repeating yourself. If you write enough, it's inevitable
you will. Your brain is much the same from year to year and so are the stimuli
that hit it. I feel slightly bad when I find I've said something close to what
I've said before, as if I were plagiarizing myself. But rationally one
shouldn't. You won't say something exactly the same way the second time, and
that variation increases the chance you'll get that tiny but critical delta of
novelty.
And of course, ideas beget ideas. (That sounds _familiar_.) An idea with a
small amount of novelty could lead to one with more. But only if you keep
going. So it's doubly important not to let yourself be discouraged by people
who say there's not much new about something you've discovered. "Not much new"
is a real achievement when you're talking about the most general ideas.
It's not true that there's nothing new under the sun. There are some domains
where there's almost nothing new. But there's a big difference between nothing
and almost nothing, when it's multiplied by the area under the sun.
**Thanks** to Sam Altman, Patrick Collison, and Jessica Livingston for reading
drafts of this.
---
* * *
--- |
|
January 2016
One advantage of being old is that you can see change happen in your lifetime.
A lot of the change I've seen is fragmentation. US politics is much more
polarized than it used to be. Culturally we have ever less common ground. The
creative class flocks to a handful of happy cities, abandoning the rest. And
increasing economic inequality means the spread between rich and poor is
growing too. I'd like to propose a hypothesis: that all these trends are
instances of the same phenomenon. And moreover, that the cause is not some
force that's pulling us apart, but rather the erosion of forces that had been
pushing us together.
Worse still, for those who worry about these trends, the forces that were
pushing us together were an anomaly, a one-time combination of circumstances
that's unlikely to be repeated — and indeed, that we would not want to repeat.
The two forces were war (above all World War II), and the rise of large
corporations.
The effects of World War II were both economic and social. Economically, it
decreased variation in income. Like all modern armed forces, America's were
socialist economically. From each according to his ability, to each according
to his need. More or less. Higher ranking members of the military got more (as
higher ranking members of socialist societies always do), but what they got
was fixed according to their rank. And the flattening effect wasn't limited to
those under arms, because the US economy was conscripted too. Between 1942 and
1945 all wages were set by the National War Labor Board. Like the military,
they defaulted to flatness. And this national standardization of wages was so
pervasive that its effects could still be seen years after the war ended.
Business owners weren't supposed to be making money either. FDR said "not a
single war millionaire" would be permitted. To ensure that, any increase in a
company's profits over prewar levels was taxed at 85%. And when what was left
after corporate taxes reached individuals, it was taxed again at a marginal
rate of 93%.
Socially too the war tended to decrease variation. Over 16 million men and
women from all sorts of different backgrounds were brought together in a way
of life that was literally uniform. Service rates for men born in the early
1920s approached 80%. And working toward a common goal, often under stress,
brought them still closer together.
Though strictly speaking World War II lasted less than 4 years for the US, its
effects lasted longer. Wars make central governments more powerful, and World
War II was an extreme case of this. In the US, as in all the other Allied
countries, the federal government was slow to give up the new powers it had
acquired. Indeed, in some respects the war didn't end in 1945; the enemy just
switched to the Soviet Union. In tax rates, federal power, defense spending,
conscription, and nationalism, the decades after the war looked more like
wartime than prewar peacetime. And the social effects lasted too. The kid
pulled into the army from behind a mule team in West Virginia didn't simply go
back to the farm afterward. Something else was waiting for him, something that
looked a lot like the army.
If total war was the big political story of the 20th century, the big economic
story was the rise of a new kind of company. And this too tended to produce
both social and economic cohesion.
The 20th century was the century of the big, national corporation. General
Electric, General Foods, General Motors. Developments in finance,
communications, transportation, and manufacturing enabled a new type of
company whose goal was above all scale. Version 1 of this world was low-res: a
Duplo world of a few giant companies dominating each big market.
The late 19th and early 20th centuries had been a time of consolidation, led
especially by J. P. Morgan. Thousands of companies run by their founders were
merged into a couple hundred giant ones run by professional managers.
Economies of scale ruled the day. It seemed to people at the time that this
was the final state of things. John D. Rockefeller said in 1880
> The day of combination is here to stay. Individualism has gone, never to
> return.
He turned out to be mistaken, but he seemed right for the next hundred years.
The consolidation that began in the late 19th century continued for most of
the 20th. By the end of World War II, as Michael Lind writes, "the major
sectors of the economy were either organized as government-backed cartels or
dominated by a few oligopolistic corporations."
For consumers this new world meant the same choices everywhere, but only a few
of them. When I grew up there were only 2 or 3 of most things, and since they
were all aiming at the middle of the market there wasn't much to differentiate
them.
One of the most important instances of this phenomenon was in TV. Here there
were 3 choices: NBC, CBS, and ABC. Plus public TV for eggheads and communists.
The programs that the 3 networks offered were indistinguishable. In fact, here
there was a triple pressure toward the center. If one show did try something
daring, local affiliates in conservative markets would make them stop. Plus
since TVs were expensive, whole families watched the same shows together, so
they had to be suitable for everyone.
And not only did everyone get the same thing, they got it at the same time.
It's difficult to imagine now, but every night tens of millions of families
would sit down together in front of their TV set watching the same show, at
the same time, as their next door neighbors. What happens now with the Super
Bowl used to happen every night. We were literally in sync.
In a way mid-century TV culture was good. The view it gave of the world was
like you'd find in a children's book, and it probably had something of the
effect that (parents hope) children's books have in making people behave
better. But, like children's books, TV was also misleading. Dangerously
misleading, for adults. In his autobiography, Robert MacNeil talks of seeing
gruesome images that had just come in from Vietnam and thinking, we can't show
these to families while they're having dinner.
I know how pervasive the common culture was, because I tried to opt out of it,
and it was practically impossible to find alternatives. When I was 13 I
realized, more from internal evidence than any outside source, that the ideas
we were being fed on TV were crap, and I stopped watching it. But it
wasn't just TV. It seemed like everything around me was crap. The politicians
all saying the same things, the consumer brands making almost identical
products with different labels stuck on to indicate how prestigious they were
meant to be, the balloon-frame houses with fake "colonial" skins, the cars
with several feet of gratuitous metal on each end that started to fall apart
after a couple years, the "red delicious" apples that were red but only
nominally apples. And in retrospect, it _was_ crap.
But when I went looking for alternatives to fill this void, I found
practically nothing. There was no Internet then. The only place to look was in
the chain bookstore in our local shopping mall. There I found a copy of
_The Atlantic_. I wish I could say it became a gateway into a wider world, but
in fact I found it boring and incomprehensible. Like a kid tasting whisky for
the first time and pretending to like it, I preserved that magazine as
carefully as if it had been a book. I'm sure I still have it somewhere. But
though it was evidence that there was, somewhere, a world that wasn't red
delicious, I didn't find it till college.
It wasn't just as consumers that the big companies made us similar. They did
as employers too. Within companies there were powerful forces pushing people
toward a single model of how to look and act. IBM was particularly notorious
for this, but they were only a little more extreme than other big companies.
And the models of how to look and act varied little between companies. Meaning
everyone within this world was expected to seem more or less the same. And not
just those in the corporate world, but also everyone who aspired to it — which
in the middle of the 20th century meant most people who weren't already in it.
For most of the 20th century, working-class people tried hard to look middle
class. You can see it in old photos. Few adults aspired to look dangerous in
1950.
But the rise of national corporations didn't just compress us culturally. It
compressed us economically too, and on both ends.
Along with giant national corporations, we got giant national labor unions.
And in the mid 20th century the corporations cut deals with the unions where
they paid over market price for labor. Partly because the unions were
monopolies. Partly because, as components of oligopolies themselves, the
corporations knew they could safely pass the cost on to their customers,
because their competitors would have to as well. And partly because in mid-
century most of the giant companies were still focused on finding new ways to
milk economies of scale. Just as startups rightly pay AWS a premium over the
cost of running their own servers so they can focus on growth, many of the big
national corporations were willing to pay a premium for labor.
As well as pushing incomes up from the bottom, by overpaying unions, the big
companies of the 20th century also pushed incomes down at the top, by
underpaying their top management. Economist J. K. Galbraith wrote in 1967 that
"There are few corporations in which it would be suggested that executive
salaries are at a maximum."
To some extent this was an illusion. Much of the de facto pay of executives
never showed up on their income tax returns, because it took the form of
perks. The higher the rate of income tax, the more pressure there was to pay
employees upstream of it. (In the UK, where taxes were even higher than in the
US, companies would even pay their employees' kids' private school tuitions.) One of the
most valuable things the big companies of the mid 20th century gave their
employees was job security, and this too didn't show up in tax returns or
income statistics. So the nature of employment in these organizations tended
to yield falsely low numbers about economic inequality. But even accounting
for that, the big companies paid their best people less than market price.
There was no market; the expectation was that you'd work for the same company
for decades if not your whole career.
Your work was so illiquid there was little chance of getting market price. But
that same illiquidity also encouraged you not to seek it. If the company
promised to employ you till you retired and give you a pension afterward, you
didn't want to extract as much from it this year as you could. You needed to
take care of the company so it could take care of you. Especially when you'd
been working with the same group of people for decades. If you tried to
squeeze the company for more money, you were squeezing the organization that
was going to take care of _them_. Plus if you didn't put the company first you
wouldn't be promoted, and if you couldn't switch ladders, promotion on this
one was the only way up.
To someone who'd spent several formative years in the armed forces, this
situation didn't seem as strange as it does to us now. From their point of
view, as big company executives, they were high-ranking officers. They got
paid a lot more than privates. They got to have expense account lunches at the
best restaurants and fly around on the company's Gulfstreams. It probably
didn't occur to most of them to ask if they were being paid market price.
The ultimate way to get market price is to work for yourself, by starting your
own company. That seems obvious to any ambitious person now. But in the mid
20th century it was an alien concept. Not because starting one's own company
seemed too ambitious, but because it didn't seem ambitious enough. Even as
late as the 1970s, when I grew up, the ambitious plan was to get lots of
education at prestigious institutions, and then join some other prestigious
institution and work one's way up the hierarchy. Your prestige was the
prestige of the institution you belonged to. People did start their own
businesses of course, but educated people rarely did, because in those days
there was practically zero concept of starting what we now call a _startup_: a
business that starts small and grows big. That was much harder to do in the
mid 20th century. Starting one's own business meant starting a business that
would start small and stay small. Which in those days of big companies often
meant scurrying around trying to avoid being trampled by elephants. It was
more prestigious to be one of the executive class riding the elephant.
By the 1970s, no one stopped to wonder where the big prestigious companies had
come from in the first place. It seemed like they'd always been there, like
the chemical elements. And indeed, there was a double wall between ambitious
kids in the 20th century and the origins of the big companies. Many of the big
companies were roll-ups that didn't have clear founders. And when they did,
the founders didn't seem like us. Nearly all of them had been uneducated, in
the sense of not having been to college. They were what Shakespeare called
rude mechanicals. College trained one to be a member of the professional
classes. Its graduates didn't expect to do the sort of grubby menial work that
Andrew Carnegie or Henry Ford started out doing.
And in the 20th century there were more and more college graduates. They
increased from about 2% of the population in 1900 to about 25% in 2000. In the
middle of the century our two big forces intersect, in the form of the GI
Bill, which sent 2.2 million World War II veterans to college. Few thought of
it in these terms, but the result of making college the canonical path for the
ambitious was a world in which it was socially acceptable to work for Henry
Ford, but not to be Henry Ford.
I remember this world well. I came of age just as it was starting to break up.
In my childhood it was still dominant. Not quite so dominant as it had been.
We could see from old TV shows and yearbooks and the way adults acted that
people in the 1950s and 60s had been even more conformist than us. The mid-
century model was already starting to get old. But that was not how we saw it
at the time. We would at most have said that one could be a bit more daring in
1975 than 1965. And indeed, things hadn't changed much yet.
But change was coming soon. And when the Duplo economy started to
disintegrate, it disintegrated in several different ways at once. Vertically
integrated companies literally dis-integrated because it was more efficient
to. Incumbents faced new competitors as (a) markets went global and (b)
technical innovation started to trump economies of scale, turning size from an
asset into a liability. Smaller companies were increasingly able to survive as
formerly narrow channels to consumers broadened. Markets themselves started to
change faster, as whole new categories of products appeared. And last but not
least, the federal government, which had previously smiled upon J. P. Morgan's
world as the natural state of things, began to realize it wasn't the last word
after all.
What J. P. Morgan was to the horizontal axis, Henry Ford was to the vertical.
He wanted to do everything himself. The giant plant he built at River Rouge
between 1917 and 1928 literally took in iron ore at one end and sent cars out
the other. 100,000 people worked there. At the time it seemed the future. But
that is not how car companies operate today. Now much of the design and
manufacturing happens in a long supply chain, whose products the car companies
ultimately assemble and sell. The reason car companies operate this way is
that it works better. Each company in the supply chain focuses on what they
know best. And they each have to do it well or they can be swapped out for
another supplier.
Why didn't Henry Ford realize that networks of cooperating companies work
better than a single big company? One reason is that supplier networks take a
while to evolve. In 1917, doing everything himself seemed to Ford the only way
to get the scale he needed. And the second reason is that if you want to solve
a problem using a network of cooperating companies, you have to be able to
coordinate their efforts, and you can do that much better with computers.
Computers reduce the transaction costs that Coase argued are the raison d'être
of corporations. That is a fundamental change.
In the early 20th century, big companies were synonymous with efficiency. In
the late 20th century they were synonymous with inefficiency. To some extent
this was because the companies themselves had become sclerotic. But it was
also because our standards were higher.
It wasn't just within existing industries that change occurred. The industries
themselves changed. It became possible to make lots of new things, and
sometimes the existing companies weren't the ones who did it best.
Microcomputers are a classic example. The market was pioneered by upstarts
like Apple. When it got big enough, IBM decided it was worth paying attention
to. At the time IBM completely dominated the computer industry. They assumed
that all they had to do, now that this market was ripe, was to reach out and
pick it. Most people at the time would have agreed with them. But what
happened next illustrated how much more complicated the world had become. IBM
did launch a microcomputer. Though quite successful, it did not crush Apple.
But even more importantly, IBM itself ended up being supplanted by a supplier
coming in from the side — from software, which didn't even seem to be the same
business. IBM's big mistake was to accept a non-exclusive license for DOS. It
must have seemed a safe move at the time. No other computer manufacturer had
ever been able to outsell them. What difference did it make if other
manufacturers could offer DOS too? The result of that miscalculation was an
explosion of inexpensive PC clones. Microsoft now owned the PC standard, and
the customer. And the microcomputer business ended up being Apple vs
Microsoft.
Basically, Apple bumped IBM and then Microsoft stole its wallet. That sort of
thing did not happen to big companies in mid-century. But it was going to
happen increasingly often in the future.
Change happened mostly by itself in the computer business. In other
industries, legal obstacles had to be removed first. Many of the mid-century
oligopolies had been anointed by the federal government with policies (and in
wartime, large orders) that kept out competitors. This didn't seem as dubious
to government officials at the time as it sounds to us. They felt a two-party
system ensured sufficient competition in politics. It ought to work for
business too.
Gradually the government realized that anti-competitive policies were doing
more harm than good, and during the Carter administration it started to remove
them. The word used for this process was misleadingly narrow: deregulation.
What was really happening was de-oligopolization. It happened to one industry
after another. Two of the most visible to consumers were air travel and long-
distance phone service, which both became dramatically cheaper after
deregulation.
Deregulation also contributed to the wave of hostile takeovers in the 1980s.
In the old days the only limit on the inefficiency of companies, short of
actual bankruptcy, was the inefficiency of their competitors. Now companies
had to face absolute rather than relative standards. Any public company that
didn't generate sufficient returns on its assets risked having its management
replaced with one that would. Often the new managers did this by breaking
companies up into components that were more valuable separately.
Version 1 of the national economy consisted of a few big blocks whose
relationships were negotiated in back rooms by a handful of executives,
politicians, regulators, and labor leaders. Version 2 was higher resolution:
there were more companies, of more different sizes, making more different
things, and their relationships changed faster. In this world there were still
plenty of back room negotiations, but more was left to market forces. Which
further accelerated the fragmentation.
It's a little misleading to talk of versions when describing a gradual
process, but not as misleading as it might seem. There was a lot of change in
a few decades, and what we ended up with was qualitatively different. The
companies in the S&P 500 in 1958 had been there an average of 61 years. By
2012 that number was 18 years.
The breakup of the Duplo economy happened simultaneously with the spread of
computing power. To what extent were computers a precondition? It would take a
book to answer that. Obviously the spread of computing power was a
precondition for the rise of startups. I suspect it was for most of what
happened in finance too. But was it a precondition for globalization or the
LBO wave? I don't know, but I wouldn't discount the possibility. It may be
that the refragmentation was driven by computers in the way the industrial
revolution was driven by steam engines. Whether or not computers were a
precondition, they have certainly accelerated it.
The new fluidity of companies changed people's relationships with their
employers. Why climb a corporate ladder that might be yanked out from under
you? Ambitious people started to think of a career less as climbing a single
ladder than as a series of jobs that might be at different companies. More
movement (or even potential movement) between companies introduced more
competition in salaries. Plus as companies became smaller it became easier to
estimate how much an employee contributed to the company's revenue. Both
changes drove salaries toward market price. And since people vary dramatically
in productivity, paying market price meant salaries started to diverge.
By no coincidence it was in the early 1980s that the term "yuppie" was coined.
That word is not much used now, because the phenomenon it describes is so
taken for granted, but at the time it was a label for something novel. Yuppies
were young professionals who made lots of money. To someone in their twenties
today, this wouldn't seem worth naming. Why wouldn't young professionals make
lots of money? But until the 1980s, being underpaid early in your career was
part of what it meant to be a professional. Young professionals were paying
their dues, working their way up the ladder. The rewards would come later.
What was novel about yuppies was that they wanted market price for the work
they were doing now.
The first yuppies did not work for startups. That was still in the future. Nor
did they work for big companies. They were professionals working in fields
like law, finance, and consulting. But their example rapidly inspired their
peers. Once they saw that new BMW 325i, they wanted one too.
Underpaying people at the beginning of their career only works if everyone
does it. Once some employer breaks ranks, everyone else has to, or they can't
get good people. And once started this process spreads through the whole
economy, because at the beginnings of people's careers they can easily switch
not merely employers but industries.
But not all young professionals benefitted. You had to produce to get paid a
lot. It was no coincidence that the first yuppies worked in fields where it
was easy to measure that.
More generally, an idea was returning whose name sounds old-fashioned
precisely because it was so rare for so long: that you could make your
fortune. As in the past there were multiple ways to do it. Some made their
fortunes by creating wealth, and others by playing zero-sum games. But once it
became possible to make one's fortune, the ambitious had to decide whether or
not to. A physicist who chose physics over Wall Street in 1990 was making a
sacrifice that a physicist in 1960 didn't have to think about.
The idea even flowed back into big companies. CEOs of big companies make more
now than they used to, and I think much of the reason is prestige. In 1960,
corporate CEOs had immense prestige. They were the winners of the only
economic game in town. But if they made as little now as they did then, in
real dollar terms, they'd seem like small fry compared to professional
athletes and whiz kids making millions from startups and hedge funds. They
don't like that idea, so now they try to get as much as they can, which is
more than they had been getting.
Meanwhile a similar fragmentation was happening at the other end of the
economic scale. As big companies' oligopolies became less secure, they were
less able to pass costs on to customers and thus less willing to overpay for
labor. And as the Duplo world of a few big blocks fragmented into many
companies of different sizes — some of them overseas — it became harder for
unions to enforce their monopolies. As a result workers' wages also tended
toward market price. Which (inevitably, if unions had been doing their job)
tended to be lower. Perhaps dramatically so, if automation had decreased the
need for some kind of work.
And just as the mid-century model induced social as well as economic cohesion,
its breakup brought social as well as economic fragmentation. People started
to dress and act differently. Those who would later be called the "creative
class" became more mobile. People who didn't care much for religion felt less
pressure to go to church for appearances' sake, while those who liked it a lot
opted for increasingly colorful forms. Some switched from meat loaf to tofu,
and others to Hot Pockets. Some switched from driving Ford sedans to driving
small imported cars, and others to driving SUVs. Kids who went to private
schools or wished they did started to dress "preppy," and kids who wanted to
seem rebellious made a conscious effort to look disreputable. In a hundred
ways people spread apart.
Almost four decades later, fragmentation is still increasing. Has it been net
good or bad? I don't know; the question may be unanswerable. Not entirely bad
though. We take for granted the forms of fragmentation we like, and worry only
about the ones we don't. But as someone who caught the tail end of mid-century
_conformism_, I can tell you it was no utopia.
My goal here is not to say whether fragmentation has been good or bad, just to
explain why it's happening. With the centripetal forces of total war and 20th
century oligopoly mostly gone, what will happen next? And more specifically,
is it possible to reverse some of the fragmentation we've seen?
If it is, it will have to happen piecemeal. You can't reproduce mid-century
cohesion the way it was originally produced. It would be insane to go to war
just to induce more national unity. And once you understand the degree to
which the economic history of the 20th century was a low-res version 1, it's
clear you can't reproduce that either.
20th century cohesion was something that happened at least in a sense
naturally. The war was due mostly to external forces, and the Duplo economy
was an evolutionary phase. If you want cohesion now, you'd have to induce it
deliberately. And it's not obvious how. I suspect the best we'll be able to do
is address the symptoms of fragmentation. But that may be enough.
The form of fragmentation people worry most about lately is _economic
inequality_, and if you want to eliminate that you're up against a truly
formidable headwind that has been in operation since the stone age.
Technology.
Technology is a lever. It magnifies work. And the lever not only grows
increasingly long, but the rate at which it grows is itself increasing.
Which in turn means the variation in the amount of wealth people can create
has not only been increasing, but accelerating. The unusual conditions that
prevailed in the mid 20th century masked this underlying trend. The ambitious
had little choice but to join large organizations that made them march in step
with lots of other people — literally in the case of the armed forces,
figuratively in the case of big corporations. Even if the big corporations had
wanted to pay people proportionate to their value, they couldn't have figured
out how. But that constraint has gone now. Ever since it started to erode in
the 1970s, we've seen the underlying forces at work again.
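A crude way to formalize the claim (my gloss, not the essay's): write the wealth someone can create as the product of a technological lever and individual ability,

$$ w = \ell(t)\,a, \qquad \operatorname{Var}(w) = \ell(t)^2 \operatorname{Var}(a), $$

so even with the distribution of ability held fixed, the spread of outcomes grows as the lever $\ell(t)$ grows, and accelerates if $\ell(t)$ grows at an increasing rate.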
Not everyone who gets rich now does it by creating wealth, certainly. But a
significant number do, and the Baumol Effect means all their peers get dragged
along too. And as long as it's possible to get rich by creating wealth,
the default tendency will be for economic inequality to increase. Even if you
eliminate all the other ways to get rich. You can mitigate this with subsidies
at the bottom and taxes at the top, but unless taxes are high enough to
discourage people from creating wealth, you're always going to be fighting a
losing battle against increasing variation in productivity.
That form of fragmentation, like the others, is here to stay. Or rather, back
to stay. Nothing is forever, but the tendency toward fragmentation should be
more forever than most things, precisely because it's not due to any
particular cause. It's simply a reversion to the mean. When Rockefeller said
individualism was gone, he was right for a hundred years. It's back now, and
that's likely to be true for longer.
I worry that if we don't acknowledge this, we're headed for trouble. If we
think 20th century cohesion disappeared because of a few policy tweaks, we'll be
deluded into thinking we can get it back (minus the bad parts, somehow) with a
few countertweaks. And then we'll waste our time trying to eliminate
fragmentation, when we'd be better off thinking about how to mitigate its
consequences.
* * *
--- |
|
January 2012
A few hours before the Yahoo acquisition was announced in June 1998 I took a
snapshot of Viaweb's site. I thought it might be interesting to look at one
day.
The first thing one notices is how tiny the pages are. Screens were a lot
smaller in 1998. If I remember correctly, our front page used to just fit in
the size of window people typically used then.
Browsers then (IE 6 was still 3 years in the future) had few fonts and they
weren't antialiased. If you wanted to make pages that looked good, you had to
render display text as images.
You may notice a certain similarity between the Viaweb and Y Combinator logos.
We did that as an inside joke when we started YC. Considering how basic a red
circle is, it seemed surprising to me when we started Viaweb how few other
companies used one as their logo. A bit later I realized why.
On the Company page you'll notice a mysterious individual called John
McArtyem. Robert Morris (aka Rtm) was so publicity averse after the Worm that
he didn't want his name on the site. I managed to get him to agree to a
compromise: we could use his bio but not his name. He has since relaxed a bit
on that point.
Trevor graduated at about the same time the acquisition closed, so in the
course of 4 days he went from impecunious grad student to millionaire PhD. The
culmination of my career as a writer of press releases was one celebrating his
graduation, illustrated with a drawing I did of him during a meeting.
(Trevor also appears as Trevino Bagwell in our directory of web designers
merchants could hire to build stores for them. We inserted him as a ringer in
case some competitor tried to spam our web designers. We assumed his logo
would deter any actual customers, but it did not.)
Back in the 90s, to get users you had to get mentioned in magazines and
newspapers. There were not the same ways to get found online that there are
today. So we used to pay a PR firm $16,000 a month to get us mentioned in the
press. Fortunately reporters liked us.
In our advice about getting traffic from search engines (I don't think the
term SEO had been coined yet), we say there are only 7 that matter: Yahoo,
AltaVista, Excite, WebCrawler, InfoSeek, Lycos, and HotBot. Notice anything
missing? Google was incorporated that September.
We supported online transactions via a company called Cybercash, since if we
lacked that feature we'd have gotten beaten up in product comparisons. But
Cybercash was so bad and most stores' order volumes were so low that it was
better if merchants processed orders like phone orders. We had a page in our
site trying to talk merchants out of doing real time authorizations.
The whole site was organized like a funnel, directing people to the test
drive. It was a novel thing to be able to try out software online. We put cgi-
bin in our dynamic urls to fool competitors about how our software worked.
We had some well known users. Needless to say, Frederick's of Hollywood got
the most traffic. We charged a flat fee of $300/month for big stores, so it
was a little alarming to have users who got lots of traffic. I once calculated
how much Frederick's was costing us in bandwidth, and it was about $300/month.
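That back-of-envelope calculation is easy to reconstruct, though the inputs below are my assumptions, not figures from the essay: a T1 ran very roughly $1,200/month in the late 90s and carried 1.5 Mb/s.

```lisp
;; Back-of-envelope only. The T1 price is assumed;
;; the $300/month figure comes from the text above.
(let* ((t1-price 1200.0)                     ; assumed $/month per T1, late 90s
       (t1-mbps 1.5)                         ; capacity of one T1
       (cost-per-mbps (/ t1-price t1-mbps))  ; => $800 per Mb/s per month
       (fredericks-cost 300.0))              ; what Frederick's cost us
  (/ fredericks-cost cost-per-mbps))         ; => 0.375 Mb/s sustained
```

Sustained 0.375 Mb/s is on the order of 120 GB a month, i.e. a few million page views at 1998 page weights (another assumption), consistent with Frederick's getting the most traffic of any store.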
Since we hosted all the stores, which together were getting just over 10
million page views per month in June 1998, we consumed what at the time seemed
a lot of bandwidth. We had 2 T1s (3 Mb/sec) coming into our offices. In those
days there was no AWS. Even colocating servers seemed too risky, considering
how often things went wrong with them. So we had our servers in our offices.
Or more precisely, in Trevor's office. In return for the unique privilege of
sharing his office with no other humans, he had to share it with 6 shrieking
tower servers. His office was nicknamed the Hot Tub on account of the heat
they generated. Most days his stack of window air conditioners could keep up.
For describing pages, we had a template language called RTML, which supposedly
stood for something, but which in fact I named after Rtm. RTML was Common Lisp
augmented by some macros and libraries, and concealed under a structure editor
that made it look like it had syntax.
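RTML itself was never published, so here is a purely illustrative sketch of the general idea, templates as Lisp macros, with invented names rather than the real RTML API:

```lisp
;; Hypothetical sketch, not the real RTML. The point is that a
;; "template language" built on Lisp macros is just Lisp: pages
;; are functions, and markup is generated by macroexpansion.
(defmacro tag (name &body body)
  "Wrap the strings produced by BODY in <NAME>...</NAME>."
  `(format nil "<~(~a~)>~{~a~}</~(~a~)>" ',name (list ,@body) ',name))

(defun store-name () "Frederick's of Hollywood")  ; stand-in merchant data

(defun front-page ()
  (tag html
    (tag h1 (store-name))
    (tag p "Welcome to our store.")))

;; (front-page)
;; => "<html><h1>Frederick's of Hollywood</h1><p>Welcome to our store.</p></html>"
```

Because a "template" here is just Lisp, users get variables, conditionals, and function calls for free; presumably the structure editor's job was mainly to hide the parentheses.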
Since we did continuous releases, our software didn't actually have versions.
But in those days the trade press expected versions, so we made them up. If we
wanted to get lots of attention, we made the version number an integer. That
"version 4.0" icon was generated by our own button generator, incidentally.
The whole Viaweb site was made with our software, even though it wasn't an
online store, because we wanted to experience what our users did.
At the end of 1997, we released a general purpose shopping search engine
called Shopfind. It was pretty advanced for the time. It had a programmable
crawler that could crawl most of the different stores online and pick out the
products.
---
* * *
--- |
|
October 2010
_(I wrote this for Forbes, who asked me to write something about the qualities
we look for in founders. In print they had to cut the last item because they
didn't have room.)_
**1\. Determination**
This has turned out to be the most important quality in startup founders. We
thought when we started Y Combinator that the most important quality would be
intelligence. That's the myth in the Valley. And certainly you don't want
founders to be stupid. But as long as you're over a certain threshold of
intelligence, what matters most is determination. You're going to hit a lot of
obstacles. You can't be the sort of person who gets demoralized easily.
Bill Clerico and Rich Aberman of WePay are a good example. They're doing a
finance startup, which means endless negotiations with big, bureaucratic
companies. When you're starting a startup that depends on deals with big
companies to exist, it often feels like they're trying to ignore you out of
existence. But when Bill Clerico starts calling you, you may as well do what
he asks, because he is not going away.
**2\. Flexibility**
You do not however want the sort of determination implied by phrases like
"don't give up on your dreams." The world of startups is so unpredictable that
you need to be able to modify your dreams on the fly. The best metaphor I've
found for the combination of determination and flexibility you need is a
running back. He's determined to get downfield, but at any given moment he may
need to go sideways or even backwards to get there.
The current record holder for flexibility may be Daniel Gross of Greplin. He
applied to YC with some bad ecommerce idea. We told him we'd fund him if he
did something else. He thought for a second, and said ok. He then went through
two more ideas before settling on Greplin. He'd only been working on it for a
couple days when he presented to investors at Demo Day, but he got a lot of
interest. He always seems to land on his feet.
**3\. Imagination**
Intelligence does matter a lot of course. It seems like the type that matters
most is imagination. It's not so important to be able to solve predefined
problems quickly as to be able to come up with surprising new ideas. In the
startup world, most good ideas seem bad initially. If they were obviously
good, someone would already be doing them. So you need the kind of
intelligence that produces ideas with just the right level of craziness.
Airbnb is that kind of idea. In fact, when we funded Airbnb, we thought it was
too crazy. We couldn't believe large numbers of people would want to stay in
other people's places. We funded them because we liked the founders so much.
As soon as we heard they'd been supporting themselves by selling Obama and
McCain branded breakfast cereal, they were in. And it turned out the idea was
on the right side of crazy after all.
**4\. Naughtiness**
Though the most successful founders are usually good people, they tend to have
a piratical gleam in their eye. They're not Goody Two-Shoes type good.
Morally, they care about getting the big questions right, but not about
observing proprieties. That's why I'd use the word naughty rather than evil.
They delight in breaking rules, but not rules that matter. This quality may be
redundant though; it may be implied by imagination.
Sam Altman of Loopt is one of the most successful alumni, so we asked him what
question we could put on the Y Combinator application that would help us
discover more people like him. He said to ask about a time when they'd hacked
something to their advantage—hacked in the sense of beating the system, not
breaking into computers. It has become one of the questions we pay most
attention to when judging applications.
**5\. Friendship**
Empirically it seems to be hard to start a startup with just one founder. Most
of the big successes have two or three. And the relationship between the
founders has to be strong. They must genuinely like one another, and work well
together. Startups do to the relationship between the founders what a dog does
to a sock: if it can be pulled apart, it will be.
Emmett Shear and Justin Kan of Justin.tv are a good example of close friends
who work well together. They've known each other since second grade. They can
practically read one another's minds. I'm sure they argue, like all founders,
but I have never once sensed any unresolved tension between them.
**Thanks** to Jessica Livingston and Chris Steiner for reading drafts of this.
* * *
--- |
|
October 2008
The economic situation is apparently so grim that some experts fear we may be
in for a stretch as bad as the mid seventies.
When Microsoft and Apple were founded.
As those examples suggest, a recession may not be such a bad time to start a
startup. I'm not claiming it's a particularly good time either. The truth is
more boring: the state of the economy doesn't matter much either way.
If we've learned one thing from funding so many startups, it's that they
succeed or fail based on the qualities of the founders. The economy has some
effect, certainly, but as a predictor of success it's rounding error compared
to the founders.
Which means that what matters is who you are, not when you do it. If you're
the right sort of person, you'll win even in a bad economy. And if you're not,
a good economy won't save you. Someone who thinks "I better not start a
startup now, because the economy is so bad" is making the same mistake as the
people who thought during the Bubble "all I have to do is start a startup, and
I'll be rich."
So if you want to improve your chances, you should think far more about who
you can recruit as a cofounder than the state of the economy. And if you're
worried about threats to the survival of your company, don't look for them in
the news. Look in the mirror.
But for any given team of founders, would it not pay to wait till the economy
is better before taking the leap? If you're starting a restaurant, maybe, but
not if you're working on technology. Technology progresses more or less
independently of the stock market. So for any given idea, the payoff for
acting fast in a bad economy will be higher than for waiting. Microsoft's
first product was a Basic interpreter for the Altair. That was exactly what
the world needed in 1975, but if Gates and Allen had decided to wait a few
years, it would have been too late.
Of course, the idea you have now won't be the last you have. There are always
new ideas. But if you have a specific idea you want to act on, act now.
That doesn't mean you can ignore the economy. Both customers and investors
will be feeling pinched. It's not necessarily a problem if customers feel
pinched: you may even be able to benefit from it, by making things that save
money. Startups often make things cheaper, so in that respect they're better
positioned to prosper in a recession than big companies.
Investors are more of a problem. Startups generally need to raise some amount
of external funding, and investors tend to be less willing to invest in bad
times. They shouldn't be. Everyone knows you're supposed to buy when times are
bad and sell when times are good. But of course what makes investing so
counterintuitive is that in equity markets, good times are defined as everyone
thinking it's time to buy. You have to be a contrarian to be correct, and by
definition only a minority of investors can be.
So just as investors in 1999 were tripping over one another trying to buy into
lousy startups, investors in 2009 will presumably be reluctant to invest even
in good ones.
You'll have to adapt to this. But that's nothing new: startups always have to
adapt to the whims of investors. Ask any founder in any economy if they'd
describe investors as fickle, and watch the face they make. Last year you had
to be prepared to explain how your startup was viral. Next year you'll have to
explain how it's recession-proof.
(Those are both good things to be. The mistake investors make is not the
criteria they use but that they always tend to focus on one to the exclusion
of the rest.)
Fortunately the way to make a startup recession-proof is to do exactly what
you should do anyway: run it as cheaply as possible. For years I've been
telling founders that the surest route to success is to be the cockroaches of
the corporate world. The immediate cause of death in a startup is always
running out of money. So the cheaper your company is to operate, the harder it
is to kill. And fortunately it has gotten very cheap to run a startup. A
recession will if anything make it cheaper still.
If nuclear winter really is here, it may be safer to be a cockroach even than
to keep your job. Customers may drop off individually if they can no longer
afford you, but you're not going to lose them all at once; markets don't
"reduce headcount."
What if you quit your job to start a startup that fails, and you can't find
another? That could be a problem if you work in sales or marketing. In those
fields it can take months to find a new job in a bad economy. But hackers seem
to be more liquid. Good hackers can always get some kind of job. It might not
be your dream job, but you're not going to starve.
Another advantage of bad times is that there's less competition. Technology
trains leave the station at regular intervals. If everyone else is cowering in
a corner, you may have a whole car to yourself.
You're an investor too. As a founder, you're buying stock with work: the
reason Larry and Sergey are so rich is not so much that they've done work
worth tens of billions of dollars, but that they were the first investors in
Google. And like any investor you should buy when times are bad.
Were you nodding in agreement, thinking "stupid investors" a few paragraphs
ago when I was talking about how investors are reluctant to put money into
startups in bad markets, even though that's the time they should rationally be
most willing to buy? Well, founders aren't much better. When times get bad,
hackers go to grad school. And no doubt that will happen this time too. In
fact, what makes the preceding paragraph true is that most readers won't
believe it—at least to the extent of acting on it.
So maybe a recession is a good time to start a startup. It's hard to say
whether advantages like lack of competition outweigh disadvantages like
reluctant investors. But it doesn't matter much either way. It's the people
that matter. And for a given set of people working on a given technology, the
time to act is always now.
---
* * *
--- |
|
April 2008
_(This essay is derived from a talk at the 2008 Startup School.)_
About a month after we started Y Combinator we came up with the phrase that
became our motto: Make something people want. We've learned a lot since then,
but if I were choosing now that's still the one I'd pick.
Another thing we tell founders is not to worry too much about the business
model, at least at first. Not because making money is unimportant, but because
it's so much easier than building something great.
A couple weeks ago I realized that if you put those two ideas together, you
get something surprising. Make something people want. Don't worry too much
about making money. What you've got is a description of a charity.
When you get an unexpected result like this, it could either be a bug or a new
discovery. Either businesses aren't supposed to be like charities, and we've
proven by reductio ad absurdum that one or both of the principles we began
with is false. Or we have a new idea.
I suspect it's the latter, because as soon as this thought occurred to me, a
whole bunch of other things fell into place.
**Examples**
For example, Craigslist. It's not a charity, but they run it like one. And
they're astoundingly successful. When you scan down the list of most popular
web sites, the number of employees at Craigslist looks like a misprint. Their
revenues aren't as high as they could be, but most startups would be happy to
trade places with them.
In Patrick O'Brian's novels, his captains always try to get upwind of their
opponents. If you're upwind, you decide when and if to engage the other ship.
Craigslist is effectively upwind of enormous revenues. They'd face some
challenges if they wanted to make more, but not the sort you face when you're
tacking upwind, trying to force a crappy product on ambivalent users by
spending ten times as much on sales as on development.
I'm not saying startups should aim to end up like Craigslist. They're a
product of unusual circumstances. But they're a good model for the early
phases.
Google looked a lot like a charity in the beginning. They didn't have ads for
over a year. At year 1, Google was indistinguishable from a nonprofit. If a
nonprofit or government organization had started a project to index the web,
Google at year 1 is the limit of what they'd have produced.
Back when I was working on spam filters I thought it would be a good idea to
have a web-based email service with good spam filtering. I wasn't thinking of
it as a company. I just wanted to keep people from getting spammed. But as I
thought more about this project, I realized it would probably have to be a
company. It would cost something to run, and it would be a pain to fund with
grants and donations.
That was a surprising realization. Companies often claim to be benevolent, but
it was surprising to realize there were purely benevolent projects that had to
be embodied as companies to work.
I didn't want to start another company, so I didn't do it. But if someone had,
they'd probably be quite rich now. There was a window of about two years when
spam was increasing rapidly but all the big email services had terrible
filters. If someone had launched a new, spam-free mail service, users would
have flocked to it.
Notice the pattern here? From either direction we get to the same spot. If you
start from successful startups, you find they often behaved like nonprofits.
And if you start from ideas for nonprofits, you find they'd often make good
startups.
**Power**
How wide is this territory? Would all good nonprofits be good companies?
Possibly not. What makes Google so valuable is that their users have money. If
you make people with money love you, you can probably get some of it. But
could you also base a successful startup on behaving like a nonprofit to
people who don't have money? Could you, for example, grow a successful startup
out of curing an unfashionable but deadly disease like malaria?
I'm not sure, but I suspect that if you pushed this idea, you'd be surprised
how far it would go. For example, people who apply to Y Combinator don't
generally have much money, and yet we can profit by helping them, because with
our help they could make money. Maybe the situation is similar with malaria.
Maybe an organization that helped lift its weight off a country could benefit
from the resulting growth.
I'm not proposing this is a serious idea. I don't know anything about malaria.
But I've been kicking ideas around long enough to know when I come across a
powerful one.
One way to guess how far an idea extends is to ask yourself at what point
you'd bet against it. The thought of betting against benevolence is alarming
in the same way as saying that something is technically impossible. You're
just asking to be made a fool of, because these are such powerful forces.
For example, initially I thought maybe this principle only applied to Internet
startups. Obviously it worked for Google, but what about Microsoft? Surely
Microsoft isn't benevolent? But when I think back to the beginning, they were.
Compared to IBM they were like Robin Hood. When IBM introduced the PC, they
thought they were going to make money selling hardware at high prices. But by
gaining control of the PC standard, Microsoft opened up the market to any
manufacturer. Hardware prices plummeted, and lots of people got to have
computers who couldn't otherwise have afforded them. It's the sort of thing
you'd expect Google to do.
Microsoft isn't so benevolent now. Now when one thinks of what Microsoft does
to users, all the verbs that come to mind begin with F. And yet it doesn't
seem to pay. Their stock price has been flat for years. Back when they were
Robin Hood, their stock price rose like Google's. Could there be a connection?
You can see how there would be. When you're small, you can't bully customers,
so you have to charm them. Whereas when you're big you can maltreat them at
will, and you tend to, because it's easier than satisfying them. You grow big
by being nice, but you can stay big by being mean.
You get away with it till the underlying conditions change, and then all your
victims escape. So "Don't be evil" may be the most valuable thing Paul
Buchheit made for Google, because it may turn out to be an elixir of corporate
youth. I'm sure they find it constraining, but think how valuable it will be
if it saves them from lapsing into the fatal laziness that afflicted Microsoft
and IBM.
The curious thing is, this elixir is freely available to any other company.
Anyone can adopt "Don't be evil." The catch is that people will hold you to
it. So I don't think you're going to see record labels or tobacco companies
using this discovery.
**Morale**
There's a lot of external evidence that benevolence works. But how does it
work? One advantage of investing in a large number of startups is that you get
a lot of data about how they work. From what we've seen, being good seems to
help startups in three ways: it improves their morale, it makes other people
want to help them, and above all, it helps them be decisive.
Morale is tremendously important to a startup—so important that morale alone
is almost enough to determine success. Startups are often described as
emotional roller-coasters. One minute you're going to take over the world, and
the next you're doomed. The problem with feeling you're doomed is not just
that it makes you unhappy, but that it makes you _stop working_. So the
downhills of the roller-coaster are more of a self-fulfilling prophecy than
the uphills. If feeling you're going to succeed makes you work harder, that
probably improves your chances of succeeding, but if feeling you're going to
fail makes you stop working, that practically guarantees you'll fail.
Here's where benevolence comes in. If you feel you're really helping people,
you'll keep working even when it seems like your startup is doomed. Most of us
have some amount of natural benevolence. The mere fact that someone needs you
makes you want to help them. So if you start the kind of startup where users
come back each day, you've basically built yourself a giant tamagotchi. You've
made something you need to take care of.
Blogger is a famous example of a startup that went through really low lows and
survived. At one point they ran out of money and everyone left. Evan Williams
came in to work the next day, and there was no one but him. What kept him
going? Partly that users needed him. He was hosting thousands of people's
blogs. He couldn't just let the site die.
There are many advantages of launching quickly, but the most important may be
that once you have users, the tamagotchi effect kicks in. Once you have users
to take care of, you're forced to figure out what will make them happy, and
that's actually very valuable information.
The added confidence that comes from trying to help people can also help you
with investors. One of the founders of Chatterous told me recently that he and
his cofounder had decided that this service was something the world needed, so
they were going to keep working on it no matter what, even if they had to move
back to Canada and live in their parents' basements.
Once they realized this, they stopped caring so much what investors thought
about them. They still met with them, but they weren't going to die if they
didn't get their money. And you know what? The investors got a lot more
interested. They could sense that the Chatterouses were going to do this
startup with or without them.
If you're really committed and your startup is cheap to run, you become very
hard to kill. And practically all startups, even the most successful, come
close to death at some point. So if doing good for people gives you a sense of
mission that makes you harder to kill, that alone more than compensates for
whatever you lose by not choosing a more selfish project.
**Help**
Another advantage of being good is that it makes other people want to help
you. This too seems to be an inborn trait in humans.
One of the startups we've funded, Octopart, is currently locked in a classic
battle of good versus evil. They're a search site for industrial components. A
lot of people need to search for components, and before Octopart there was no
good way to do it. That, it turned out, was no coincidence.
Octopart built the right way to search for components. Users like it and
they've been growing rapidly. And yet for most of Octopart's life, the biggest
distributor, Digi-Key, has been trying to force them to take their prices off the
site. Octopart is sending them customers for free, and yet Digi-Key is trying
to make that traffic stop. Why? Because their current business model depends
on overcharging people who have incomplete information about prices. They
don't want search to work.
The Octoparts are the nicest guys in the world. They dropped out of the PhD
program in physics at Berkeley to do this. They just wanted to fix a problem
they encountered in their research. Imagine how much time you could save the
world's engineers if they could do searches online. So when I hear that a big,
evil company is trying to stop them in order to keep search broken, it makes
me really want to help them. It makes me spend more time on the Octoparts than
I do with most of the other startups we've funded. It just made me spend
several minutes telling you how great they are. Why? Because they're good guys
and they're trying to help the world.
If you're benevolent, people will rally around you: investors, customers,
other companies, and potential employees. In the long term the most important
may be the potential employees. I think everyone knows now that good hackers
are much better than mediocre ones. If you can attract the best hackers to
work for you, as Google has, you have a big advantage. And the very best
hackers tend to be idealistic. They're not desperate for a job. They can work
wherever they want. So most want to work on things that will make the world
better.
**Compass**
But the most important advantage of being good is that it acts as a compass.
One of the hardest parts of doing a startup is that you have so many choices.
There are just two or three of you, and a thousand things you could do. How do
you decide?
Here's the answer: Do whatever's best for your users. You can hold onto this
like a rope in a hurricane, and it will save you if anything can. Follow it
and it will take you through everything you need to do.
It's even the answer to questions that seem unrelated, like how to convince
investors to give you money. If you're a good salesman, you could try to just
talk them into it. But the more reliable route is to convince them through
your users: if you make something users love enough to tell their friends, you
grow exponentially, and that will convince any investor.
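Growth by word of mouth is exponential in the literal mathematical sense: if each week your existing users bring in some fixed fraction of new ones, the count compounds. A quick sketch with made-up numbers (nothing here is from the essay except the idea of compounding):

```python
# Made-up numbers: 100 initial users, and each week existing users
# bring in 10% more. A modest weekly rate compounds into big totals.
users = 100
for week in range(52):
    users *= 1.10
print(round(users))  # after a year of 10%/week -> about 14,200
```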
Being good is a particularly useful strategy for making decisions in complex
situations because it's stateless. It's like telling the truth. The trouble
with lying is that you have to remember everything you've said in the past to
make sure you don't contradict yourself. If you tell the truth you don't have
to remember anything, and that's a really useful property in domains where
things happen fast.
For example, Y Combinator has now invested in 80 startups, 57 of which are
still alive. (The rest have died or merged or been acquired.) When you're
trying to advise 57 startups, it turns out you have to have a stateless
algorithm. You can't have ulterior motives when you have 57 things going on at
once, because you can't remember them. So our rule is just to do whatever's
best for the founders. Not because we're particularly benevolent, but because
it's the only algorithm that works on that scale.
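To make the statelessness concrete, here's a toy sketch in Python (my own illustration, not anything YC actually runs). The point is only that a stateless rule consults nothing but the current situation, so the cost of staying consistent doesn't grow with the number of startups:

```python
# Hypothetical illustration of a stateless decision rule. There is no
# accumulated history to consult, so advising 57 startups is no harder
# per decision than advising one.

def advise(options):
    """Judge each decision fresh, purely on what's best for the founders."""
    return max(options, key=lambda o: o["benefit_to_founders"])

# Each inner list is one startup's current set of choices (made-up data).
all_startups = [
    [{"name": "raise now", "benefit_to_founders": 3},
     {"name": "launch first", "benefit_to_founders": 7}],
    [{"name": "pivot", "benefit_to_founders": 5},
     {"name": "persevere", "benefit_to_founders": 6}],
]
for options in all_startups:
    print(advise(options)["name"])  # -> launch first, persevere
```

A stateful rule, by contrast, would have to remember everything said before to avoid contradicting itself, and that memory grows with every conversation, which is exactly what becomes impossible at 57.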
When you write something telling people to be good, you seem to be claiming to
be good yourself. So I want to say explicitly that I am not a particularly
good person. When I was a kid I was firmly in the camp of bad. The way adults
used the word good, it seemed to be synonymous with quiet, so I grew up very
suspicious of it.
You know how there are some people whose names come up in conversation and
everyone says "He's _such_ a great guy?" People never say that about me. The
best I get is "he means well." I am not claiming to be good. At best I speak
good as a second language.
So I'm not suggesting you be good in the usual sanctimonious way. I'm
suggesting it because it works. It will work not just as a statement of
"values," but as a guide to strategy, and even a design spec for software.
Don't just not be evil. Be good.
* * *
February 2009
One of the things I always tell startups is a principle I learned from Paul
Buchheit: it's better to make a few people really happy than to make a lot of
people semi-happy. I was saying recently to a reporter that if I could only
tell startups 10 things, this would be one of them. Then I thought: what would
the other 9 be?
When I made the list there turned out to be 13:
**1\. Pick good cofounders.**
Cofounders are for a startup what location is for real estate. You can change
anything about a house except where it is. In a startup you can change your
idea easily, but changing your cofounders is hard. And the success of a
startup is almost always a function of its founders.
**2\. Launch fast.**
The reason to launch fast is not so much that it's critical to get your
product to market early, but that you haven't really started working on it
till you've launched. Launching teaches you what you should have been
building. Till you know that you're wasting your time. So the main value of
whatever you launch with is as a pretext for engaging users.
**3\. Let your idea evolve.**
This is the second half of launching fast. Launch fast and iterate. It's a big
mistake to treat a startup as if it were merely a matter of implementing some
brilliant initial idea. As in an essay, most of the ideas appear in the
implementing.
**4\. Understand your users.**
You can envision the wealth created by a startup as a rectangle, where one
side is the number of users and the other is how much you improve their lives.
The second dimension is the one you have most control over. And indeed,
the growth in the first will be driven by how well you do in the second. As in
science, the hard part is not answering questions but asking them: the hard
part is seeing something new that users lack. The better you understand them
the better the odds of doing that. That's why so many successful startups make
something the founders needed.
**5\. Better to make a few users love you than a lot ambivalent.**
Ideally you want to make large numbers of users love you, but you can't expect
to hit that right away. Initially you have to choose between satisfying all
the needs of a subset of potential users, or satisfying a subset of the needs
of all potential users. Take the first. It's easier to expand userwise than
satisfactionwise. And perhaps more importantly, it's harder to lie to
yourself. If you think you're 85% of the way to a great product, how do you
know it's not 70%? Or 10%? Whereas it's easy to know how many users you have.
**6\. Offer surprisingly good customer service.**
Customers are used to being maltreated. Most of the companies they deal with
are quasi-monopolies that get away with atrocious customer service. Your own
ideas about what's possible have been unconsciously lowered by such
experiences. Try making your customer service not merely good, but
surprisingly good. Go out of your way to make people happy. They'll be
overwhelmed; you'll see. In the earliest stages of a startup, it pays to offer
customer service on a level that wouldn't scale, because it's a way of
learning about your users.
**7\. You make what you measure.**
I learned this one from Joe Kraus. Merely measuring something has an
uncanny tendency to improve it. If you want to make your user numbers go up,
put a big piece of paper on your wall and every day plot the number of users.
You'll be delighted when it goes up and disappointed when it goes down. Pretty
soon you'll start noticing what makes the number go up, and you'll start to do
more of that. Corollary: be careful what you measure.
**8\. Spend little.**
I can't emphasize enough how important it is for a startup to be cheap. Most
startups fail before they make something people want, and the most common form
of failure is running out of money. So being cheap is (almost) interchangeable
with iterating rapidly. But it's more than that. A culture of cheapness
keeps companies young in something like the way exercise keeps people young.
**9\. Get ramen profitable.**
"Ramen profitable" means a startup makes just enough to pay the founders'
living expenses. It's not rapid prototyping for business models (though it can
be), but more a way of hacking the investment process. Once you cross over
into ramen profitable, it completely changes your relationship with investors.
It's also great for morale.
**10\. Avoid distractions.**
Nothing kills startups like distractions. The worst type are those that pay
money: day jobs, consulting, profitable side-projects. The startup may have
more long-term potential, but you'll always interrupt working on it to answer
calls from people paying you now. Paradoxically, fundraising is this type of
distraction, so try to minimize that too.
**11\. Don't get demoralized.**
Though the immediate cause of death in a startup tends to be running out of
money, the underlying cause is usually lack of focus. Either the company is
run by stupid people (which can't be fixed with advice) or the people are
smart but got demoralized. Starting a startup is a huge moral weight.
Understand this and make a conscious effort not to be ground down by it, just
as you'd be careful to bend at the knees when picking up a heavy box.
**12\. Don't give up.**
Even if you get demoralized, don't give up. You can get surprisingly far by
just not giving up. This isn't true in all fields. There are a lot of people
who couldn't become good mathematicians no matter how long they persisted. But
startups aren't like that. Sheer effort is usually enough, so long as you keep
morphing your idea.
**13\. Deals fall through.**
One of the most useful skills we learned from Viaweb was not getting our hopes
up. We probably had 20 deals of various types fall through. After the first 10
or so we learned to treat deals as background processes that we should ignore
till they terminated. It's very dangerous to morale to start to depend on
deals closing, not just because they so often don't, but because it makes them
less likely to.
Having gotten it down to 13 sentences, I asked myself which I'd choose if I
could only keep one.
Understand your users. That's the key. The essential task in a startup is to
create wealth; the dimension of wealth you have most control over is how much
you improve users' lives; and the hardest part of that is knowing what to make
for them. Once you know what to make, it's mere effort to make it, and most
decent hackers are capable of that.
Understanding your users is part of half the principles in this list. That's
the reason to launch early, to understand your users. Evolving your idea is
the embodiment of understanding your users. Understanding your users well will
tend to push you toward making something that makes a few people deeply happy.
The most important reason for having surprisingly good customer service is
that it helps you understand your users. And understanding your users will
even ensure your morale, because when everything else is collapsing around
you, having just ten users who love you will keep you going.
* * *
July 2008
At this year's startup school, David Heinemeier Hansson gave a talk in which
he suggested that startup founders should do things the old-fashioned way.
Instead of hoping to get rich by building a valuable company and then selling
stock in a "liquidity event," founders should start companies that make money
and live off the revenues.
Sounds like a good plan. Let's think about the optimal way to do this.
One disadvantage of living off the revenues of your company is that you have
to keep running it. And as anyone who runs their own business can tell you,
that requires your complete attention. You can't just start a business and
check out once things are going well, or they stop going well surprisingly
fast.
The main economic motives of startup founders seem to be freedom and security.
They want enough money that (a) they don't have to worry about running out of
money and (b) they can spend their time how they want. Running your own
business offers neither. You certainly don't have freedom: no boss is so
demanding. Nor do you have security, because if you stop paying attention to
the company, its revenues go away, and with them your income.
The best case, for most people, would be if you could hire someone to manage
the company for you once you'd grown it to a certain size. Suppose you could
find a really good manager. Then you would have both freedom and security. You
could pay as little attention to the business as you wanted, knowing that your
manager would keep things running smoothly. And that being so, revenues would
continue to flow in, so you'd have security as well.
There will of course be some founders who wouldn't like that idea: the ones
who like running their company so much that there's nothing else they'd rather
do. But this group must be small. The way you succeed in most businesses is to
be fanatically attentive to customers' needs. What are the odds that your own
desires would coincide exactly with the demands of this powerful, external
force?
Sure, running your own company can be fairly interesting. Viaweb was more
interesting than any job I'd had before. And since I made much more money from
it, it offered the highest ratio of income to boringness of anything I'd done,
by orders of magnitude. But was it _the_ most interesting work I could imagine
doing? No.
Whether the number of founders in the same position approaches all of them or
is merely large, there are certainly a lot of them. For them the right approach would be
to hand the company over to a professional manager eventually, if they could
find one who was good enough.
_____
So far so good. But what if your manager were hit by a bus? What you really
want is a management company to run your company for you. Then you don't
depend on any one person.
If you own rental property, there are companies you can hire to manage it for
you. Some will do everything, from finding tenants to fixing leaks. Of course,
running companies is a lot more complicated than managing rental property, but
let's suppose there were management companies that could do it for you. They'd
charge a lot, but wouldn't it be worth it? I'd sacrifice a large percentage of
the income for the extra peace of mind.
I realize what I'm describing already sounds too good to be true, but I can
think of a way to make it even more attractive. If company management
companies existed, there would be an additional service they could offer
clients: they could let them insure their returns by pooling their risk. After
all, even a perfect manager can't save a company when, as sometimes happens,
its whole market dies, just as property managers can't save you from the
building burning down. But a company that managed a large enough number of
companies could say to all its clients: we'll combine the revenues from all
your companies, and pay you your proportionate share.
If such management companies existed, they'd offer the maximum of freedom and
security. Someone would run your company for you, and you'd be protected even
if it happened to die.
Let's think about how such a management company might be organized. The
simplest way would be to have a new kind of stock representing the total pool
of companies they were managing. When you signed up, you'd trade your
company's stock for shares of this pool, in proportion to an estimate of your
company's value that you'd both agreed upon. Then you'd automatically get your
share of the returns of the whole pool.
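In numbers, the arrangement is simple. Here's a minimal sketch of the arithmetic (the company names and figures are hypothetical): each member trades its stock for a share of the pool proportional to its agreed valuation, and payouts follow the same proportion.

```python
# Hypothetical sketch of the pooled-risk arithmetic described above.

pool = {}  # company name -> valuation agreed when it joined

def join(company, agreed_valuation):
    pool[company] = agreed_valuation

def payout(company, total_pool_returns):
    """A member's share of returns equals its share of the pool's value."""
    share = pool[company] / sum(pool.values())
    return share * total_pool_returns

join("widgets_inc", 10_000_000)
join("gadgets_llc", 30_000_000)
# widgets_inc owns 25% of the pool, so it gets 25% of the returns:
print(payout("widgets_inc", 4_000_000))  # -> 1000000.0
```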
The catch is that because this kind of trade would be hard to undo, you
couldn't switch management companies. But there's a way they could fix that:
suppose all the company management companies got together and agreed to allow
their clients to exchange shares in all their pools. Then you could, in
effect, simultaneously choose all the management companies to run yours for
you, in whatever proportion you wanted, and change your mind later as often as
you wanted.
If such pooled-risk company management companies existed, signing up with one
would seem the ideal plan for most people following the route David advocated.
Good news: they do exist. What I've just described is an acquisition by a
public company.
_____
Unfortunately, though public acquirers are structurally identical to pooled-
risk company management companies, they don't think of themselves that way.
With a property management company, you can just walk in whenever you want and
say "manage my rental property for me" and they'll do it. Whereas acquirers
are, as of this writing, extremely fickle. Sometimes they're in a buying mood
and they'll overpay enormously; other times they're not interested. They're
like property management companies run by madmen. Or more precisely, by
Benjamin Graham's Mr. Market.
So while on average public acquirers behave like pooled-risk company managers,
you need a window of several years to get average case performance. If you
wait long enough (five years, say) you're likely to hit an up cycle where some
acquirer is hot to buy you. But you can't choose when it happens.
You can't assume investors will carry you for as long as you might have to
wait. Your company has to make money. Opinions are divided about how early to
focus on that. Joe Kraus says you should try charging customers right away.
And yet some of the most successful startups, including Google, ignored
revenue at first and concentrated exclusively on development. The answer
probably depends on the type of company you're starting. I can imagine some
where trying to make sales would be a good heuristic for product design, and
others where it would just be a distraction. The test is probably whether it
helps you to understand your users.
You can choose whichever revenue strategy you think is best for the type of
company you're starting, so long as you're profitable. Being profitable
ensures you'll get at least the average of the acquisition market—in which
public companies do behave as pooled-risk company management companies.
David isn't mistaken in saying you should start a company to live off its
revenues. The mistake is thinking this is somehow opposed to starting a
company and selling it. In fact, for most people the latter is merely the
optimal case of the former.
**Thanks** to Trevor Blackwell, Jessica Livingston, Michael Mandel, Robert
Morris, and Fred Wilson for reading drafts of this.
* * *
April 2008
There are some topics I save up because they'll be so much fun to write about.
This is one of them: a list of my heroes.
I'm not claiming this is a list of the _n_ most admirable people. Who could
make such a list, even if they wanted to?
Einstein isn't on the list, for example, even though he probably deserves to
be on any shortlist of admirable people. I once asked a physicist friend if
Einstein was really as smart as his fame implies, and she said that yes, he
was. So why isn't he on the list? Because I had to ask. This is a list of
people who've influenced me, not people who would have if I understood their
work.
My test was to think of someone and ask "is this person my hero?" It often
returned surprising answers. For example, it returned false for Montaigne, who
was arguably the inventor of the essay. Why? When I thought about what it
meant to call someone a hero, it meant I'd decide what to do by asking what
they'd do in the same situation. That's a stricter standard than admiration.
After I made the list, I looked to see if there was a pattern, and there was,
a very clear one. Everyone on the list had two qualities: they cared almost
excessively about their work, and they were absolutely honest. By honest I
don't mean trustworthy so much as that they never pander: they never say or do
something because that's what the audience wants. They are all fundamentally
subversive for this reason, though they conceal it to varying degrees.
**Jack Lambert**
I grew up in Pittsburgh in the 1970s. Unless you were there it's hard to
imagine how that town felt about the Steelers. Locally, all the news was bad.
The steel industry was dying. But the Steelers were the best team in football
— and moreover, in a way that seemed to reflect the personality of the city.
They didn't do anything fancy. They just got the job done.
Other players were more famous: Terry Bradshaw, Franco Harris, Lynn Swann. But
they played offense, and you always get more attention for that. It seemed to
me as a twelve-year-old football expert that the best of them all was Jack
Lambert. And what made him so good was that he was utterly relentless. He
didn't just care about playing well; he cared almost too much. He seemed to
regard it as a personal insult when someone from the other team had possession
of the ball on his side of the line of scrimmage.
The suburbs of Pittsburgh in the 1970s were a pretty dull place. School was
boring. All the adults around were bored with their jobs working for big
companies. Everything that came to us through the mass media was (a) blandly
uniform and (b) produced elsewhere. Jack Lambert was the exception. He was
like nothing else I'd seen.
**Kenneth Clark**
Kenneth Clark is the best nonfiction writer I know of, on any subject. Most
people who write about art history don't really like art; you can tell from a
thousand little signs. But Clark did, and not just intellectually, but the way
one anticipates a delicious dinner.
What really makes him stand out, though, is the quality of his ideas. His
style is deceptively casual, but there is more in his books than in a library
of art monographs. Reading _The Nude_ is like a ride in a Ferrari. Just as
you're getting settled, you're slammed back in your seat by the acceleration.
Before you can adjust, you're thrown sideways as the car screeches into the
first turn. His brain throws off ideas almost too fast to grasp them. Finally
at the end of the chapter you come to a halt, with your eyes wide and a big
smile on your face.
Kenneth Clark was a star in his day, thanks to the documentary series
_Civilisation_. And if you read only one book about art history,
_Civilisation_ is the one I'd recommend. It's much better than the drab Sears
Catalogs of art that undergraduates are forced to buy for Art History 101.
**Larry Mihalko**
A lot of people have a great teacher at some point in their childhood. Larry
Mihalko was mine. When I look back it's like there's a line drawn between
third and fourth grade. After Mr. Mihalko, everything was different.
Why? First of all, he was intellectually curious. I had a few other teachers
who were smart, but I wouldn't describe them as intellectually curious. In
retrospect, he was out of place as an elementary school teacher, and I think
he knew it. That must have been hard for him, but it was wonderful for us, his
students. His class was a constant adventure. I used to like going to school
every day.
The other thing that made him different was that he liked us. Kids are good at
telling that. The other teachers were at best benevolently indifferent. But
Mr. Mihalko seemed like he actually wanted to be our friend. On the last day
of fourth grade, he got out one of the heavy school record players and played
James Taylor's "You've Got a Friend" to us. Just call out my name, and you
know wherever I am, I'll come running. He died at 59 of lung cancer. I've
never cried like I cried at his funeral.
**Leonardo**
One of the things I've learned about making things that I didn't realize when
I was a kid is that much of the best stuff isn't made for audiences, but for
oneself. You see paintings and drawings in museums and imagine they were made
for you to look at. Actually a lot of the best ones were made as a way of
exploring the world, not as a way to please other people. The best of these
explorations are sometimes more pleasing than stuff made explicitly to please.
Leonardo did a lot of things. One of his most admirable qualities was that he
did so many different things that were admirable. What people know of him now
is his paintings and his more flamboyant inventions, like flying machines.
That makes him seem like some kind of dreamer who sketched artists'
conceptions of rocket ships on the side. In fact he made a large number of far
more practical technical discoveries. He was as good an engineer as a painter.
His most impressive work, to me, is his drawings. They're clearly made more as
a way of studying the world than producing something beautiful. And yet they
can hold their own with any work of art ever made. No one else, before or
since, was that good when no one was looking.
**Robert Morris**
Robert Morris has a very unusual quality: he's never wrong. It might seem this
would require you to be omniscient, but actually it's surprisingly easy. Don't
say anything unless you're fairly sure of it. If you're not omniscient, you
just don't end up saying much.
More precisely, the trick is to pay careful attention to how you qualify what
you say. By using this trick, Robert has, as far as I know, managed to be
mistaken only once, and that was when he was an undergrad. When the Mac came
out, he said that little desktop computers would never be suitable for real
hacking.
It's wrong to call it a trick in his case, though. If it were a conscious
trick, he would have slipped in a moment of excitement. With Robert this
quality is wired-in. He has an almost superhuman integrity. He's not just
generally correct, but also correct about how correct he is.
You'd think it would be such a great thing never to be wrong that everyone
would do this. It doesn't seem like that much extra work to pay as much
attention to the error on an idea as to the idea itself. And yet practically
no one does. I know how hard it is, because since meeting Robert I've tried to
do in software what he seems to do in hardware.
**P. G. Wodehouse**
People are finally starting to admit that Wodehouse was a great writer. If you
want to be thought a great novelist in your own time, you have to sound
intellectual. If what you write is popular, or entertaining, or funny, you're
ipso facto suspect. That makes Wodehouse doubly impressive, because it meant
that to write as he wanted to, he had to commit to being despised in his own
lifetime.
Evelyn Waugh called him a great writer, but to most people at the time that
would have read as a chivalrous or deliberately perverse gesture. At the time
any random autobiographical novel by a recent college grad could count on more
respectful treatment from the literary establishment.
Wodehouse may have begun with simple atoms, but the way he composed them into
molecules was near faultless. His rhythm in particular. It makes me self-
conscious to write about it. I can think of only two other writers who came
near him for style: Evelyn Waugh and Nancy Mitford. Those three used the
English language like they owned it.
But Wodehouse has something neither of them did. He's at ease. Evelyn Waugh
and Nancy Mitford cared what other people thought of them: he wanted to seem
aristocratic; she was afraid she wasn't smart enough. But Wodehouse didn't
give a damn what anyone thought of him. He wrote exactly what he wanted.
**Alexander Calder**
Calder's on this list because he makes me happy. Can his work stand up to
Leonardo's? Probably not. There might not be anything from the 20th Century
that can. But what was good about Modernism, Calder had, and had in a way that
he made seem effortless.
What was good about Modernism was its freshness. Art became stuffy in the
nineteenth century. The paintings that were popular at the time were mostly
the art equivalent of McMansions—big, pretentious, and fake. Modernism meant
starting over, making things with the same earnest motives that children
might. The artists who benefited most from this were the ones who had
preserved a child's confidence, like Klee and Calder.
Klee was impressive because he could work in so many different styles. But
between the two I like Calder better, because his work seemed happier.
Ultimately the point of art is to engage the viewer. It's hard to predict what
will; often something that seems interesting at first will bore you after a
month. Calder's sculptures never get boring. They just sit there quietly
radiating optimism, like a battery that never runs out. As far as I can tell
from books and photographs, the happiness of Calder's work is his own
happiness showing through.
**Jane Austen**
Everyone admires Jane Austen. Add my name to the list. To me she seems the
best novelist of all time.
I'm interested in how things work. When I read most novels, I pay as much
attention to the author's choices as to the story. But in her novels I can't
see the gears at work. Though I'd really like to know how she does what she
does, I can't figure it out, because she's so good that her stories don't seem
made up. I feel like I'm reading a description of something that actually
happened.
I used to read a lot of novels when I was younger. I can't read most anymore,
because they don't have enough information in them. Novels seem so
impoverished compared to history and biography. But reading Austen is like
reading nonfiction. She writes so well you don't even notice her.
**John McCarthy**
John McCarthy invented Lisp, the field of (or at least the term) artificial
intelligence, and was an early member of both of the top two computer science
departments, MIT and Stanford. No one would dispute that he's one of the
greats, but he's an especial hero to me because of Lisp.
It's hard for us now to understand what a conceptual leap that was at the
time. Paradoxically, one of the reasons his achievement is hard to appreciate
is that it was so successful. Practically every programming language invented
in the last 20 years includes ideas from Lisp, and each year the median
language gets more Lisplike.
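To see what "ideas from Lisp" means concretely, here's a sketch in Python (my example, not McCarthy's): first-class functions, anonymous functions, and recursion over list structure were all Lisp ideas in 1958, and all mainstream now.

```python
# Ideas that were radical in Lisp circa 1958, ordinary in Python today.

compose = lambda f, g: lambda x: f(g(x))   # functions as values
inc_then_double = compose(lambda x: 2 * x, lambda x: x + 1)
print(inc_then_double(3))                  # -> 8

def length(lst):
    """Recursive list length, the way Lisp programmers wrote it."""
    return 0 if not lst else 1 + length(lst[1:])

print(length(["a", "b", "c"]))             # -> 3
```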
In 1958 these ideas were anything but obvious. In 1958 there seem to have been
two ways of thinking about programming. Some people thought of it as math, and
proved things about Turing Machines. Others thought of it as a way to get
things done, and designed languages all too influenced by the technology of
the day. McCarthy alone bridged the gap. He designed a language that was math.
But designed is not really the word; discovered is more like it.
**The Spitfire**
As I was making this list I found myself thinking of people like Douglas Bader
and R.J. Mitchell and Jeffrey Quill and I realized that though all of them had
done many things in their lives, there was one factor above all that connected
them: the Spitfire.
This is supposed to be a list of heroes. How can a machine be on it? Because
that machine was not just a machine. It was a lens of heroes. Extraordinary
devotion went into it, and extraordinary courage came out.
It's a cliche to call World War II a contest between good and evil, but
between fighter designs, it really was. The Spitfire's original nemesis, the
ME 109, was a brutally practical plane. It was a killing machine. The Spitfire
was optimism embodied. And not just in its beautiful lines: it was at the edge
of what could be manufactured. But taking the high road worked. In the air,
beauty had the edge, just.
**Steve Jobs**
People alive when Kennedy was killed usually remember exactly where they were
when they heard about it. I remember exactly where I was when a friend asked
if I'd heard Steve Jobs had cancer. It was like the floor dropped out. A few
seconds later she told me that it was a rare operable type, and that he'd be
ok. But those seconds seemed long.
I wasn't sure whether to include Jobs on this list. A lot of people at Apple
seem to be afraid of him, which is a bad sign. But he compels admiration.
There's no name for what Steve Jobs is, because there hasn't been anyone quite
like him before. He doesn't design Apple's products himself. Historically the
closest analogy to what he does are the great Renaissance patrons of the arts.
As the CEO of a company, that makes him unique.
Most CEOs delegate taste to a subordinate. The design paradox means they're
choosing more or less at random. But Steve Jobs actually has taste himself —
such good taste that he's shown the world how much more important taste is
than they realized.
**Isaac Newton**
Newton has a strange role in my pantheon of heroes: he's the one I reproach
myself with. He worked on big things, at least for part of his life. It's so
easy to get distracted working on small stuff. The questions you're answering
are pleasantly familiar. You get immediate rewards — in fact, you get bigger
rewards in your time if you work on matters of passing importance. But I'm
uncomfortably aware that this is the route to well-deserved obscurity.
To do really great things, you have to seek out questions people didn't even
realize were questions. There have probably been other people who did this as
well as Newton, for their time, but Newton is my model of this kind of
thought. I can just begin to understand what it must have felt like for him.
You only get one life. Why not do something huge? The phrase "paradigm shift"
is overused now, but Kuhn was onto something. And you know more are out there,
separated from us by what will later seem a surprisingly thin wall of laziness
and stupidity. If we work like Newton.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Jackie McDonough for
reading drafts of this.
* * *
August 2011
I realized recently that we may be able to solve part of the patent problem
without waiting for the government.
I've never been 100% sure whether patents help or hinder technological
progress. When I was a kid I thought they helped. I thought they protected
inventors from having their ideas stolen by big companies. Maybe that was
truer in the past, when more things were physical. But regardless of whether
patents are in general a good thing, there do seem to be bad ways of using
them. And since bad uses of patents seem to be increasing, there is an
increasing call for patent reform.
The problem with patent reform is that it has to go through the government.
That tends to be slow. But recently I realized we can also attack the problem
downstream. As well as pinching off the stream of patents at the point where
they're issued, we may in some cases be able to pinch it off at the point
where they're used.
One way of using patents that clearly does not encourage innovation is when
established companies with bad products use patents to suppress small
competitors with good products. This is the type of abuse we may be able to
decrease without having to go through the government.
The way to do it is to get the companies that are above pulling this sort of
trick to pledge publicly not to. Then the ones that won't make such a pledge
will be very conspicuous. Potential employees won't want to work for them. And
investors, too, will be able to see that they're the sort of company that
competes by litigation rather than by making good products.
Here's the pledge:
> No first use of software patents against companies with less than 25 people.
I've deliberately traded precision for brevity. The patent pledge is not
legally binding. It's like Google's "Don't be evil." They don't define what
evil is, but by publicly saying that, they're saying they're willing to be
held to a standard that, say, Altria is not. And though constraining, "Don't
be evil" has been good for Google. Technology companies win by attracting the
most productive people, and the most productive people are attracted to
employers who hold themselves to a higher standard than the law requires.
The patent pledge is in effect a narrower but open source "Don't be evil." I
encourage every technology company to adopt it. If you want to help fix
patents, encourage your employer to.
Already most technology companies wouldn't sink to using patents on startups.
You don't see Google or Facebook suing startups for patent infringement. They
don't need to. So for the better technology companies, the patent pledge
requires no change in behavior. They're just promising to do what they'd do
anyway. And when all the companies that won't use patents on startups have
said so, the holdouts will be very conspicuous.
The patent pledge doesn't fix every problem with patents. It won't stop patent
trolls, for example; they're already pariahs. But the problem the patent
pledge does fix may be more serious than the problem of patent trolls. Patent
trolls are just parasites. A clumsy parasite may occasionally kill the host,
but that's not its goal. Whereas companies that sue startups for patent
infringement generally do it with the explicit goal of keeping the startup's product off
the market.
Companies that use patents on startups are attacking innovation at the root.
Now there's something any individual can do about this problem, without
waiting for the government: ask companies where they stand.
Patent Pledge Site
* * *
October 2010
Silicon Valley proper is mostly suburban sprawl. At first glance it doesn't
seem there's anything to see. It's not the sort of place that has conspicuous
monuments. But if you look, there are subtle signs you're in a place that's
different from other places.
**1. Stanford University**
Stanford is a strange place. Structurally it is to an ordinary university what
suburbia is to a city. It's enormously spread out, and feels surprisingly
empty much of the time. But notice the weather. It's probably perfect. And
notice the beautiful mountains to the west. And though you can't see it,
cosmopolitan San Francisco is 40 minutes to the north. That combination is
much of the reason Silicon Valley grew up around this university and not some
other one.
**2. University Ave**
A surprising amount of the work of the Valley is done in the cafes on or just
off University Ave in Palo Alto. If you visit on a weekday between 10 and 5,
you'll often see founders pitching investors. In case you can't tell, the
founders are the ones leaning forward eagerly, and the investors are the ones
sitting back with slightly pained expressions.
**3. The Lucky Office**
The office at 165 University Ave was Google's first. Then it was Paypal's.
(Now it's Wepay's.) The interesting thing about it is the location. It's a
smart move to put a startup in a place with restaurants and people walking
around instead of in an office park, because then the people who work there
want to stay there, instead of fleeing as soon as conventional working hours
end. They go out for dinner together, talk about ideas, and then come back and
implement them.
It's important to realize that Google's current location in an office park is
not where they started; it's just where they were forced to move when they
needed more space. Facebook was till recently across the street, till they too
had to move because they needed more space.
**4. Old Palo Alto**
Palo Alto was not originally a suburb. For the first 100 years or so of its
existence, it was a college town out in the countryside. Then in the mid 1950s
it was engulfed in a wave of suburbia that raced down the peninsula. But Palo
Alto north of Oregon expressway still feels noticeably different from the area
around it. It's one of the nicest places in the Valley. The buildings are old
(though increasingly they are being torn down and replaced with generic
McMansions) and the trees are tall. But houses are very expensive—around $1000
per square foot. This is post-exit Silicon Valley.
**5. Sand Hill Road**
It's interesting to see the VCs' offices on the north side of Sand Hill Road
precisely because they're so boringly uniform. The buildings are all more or
less the same, their exteriors express very little, and they are arranged in a
confusing maze. (I've been visiting them for years and I still occasionally
get lost.) It's not a coincidence. These buildings are a pretty accurate
reflection of the VC business.
If you go on a weekday you may see groups of founders there to meet VCs. But
mostly you won't see anyone; bustling is the last word you'd use to describe
the atmos. Visiting Sand Hill Road reminds you that the opposite of "down and
dirty" would be "up and clean."
**6. Castro Street**
It's a tossup whether Castro Street or University Ave should be considered the
heart of the Valley now. University Ave would have been 10 years ago. But Palo
Alto is getting expensive. Increasingly startups are located in Mountain View,
and Palo Alto is a place they come to meet investors. Palo Alto has a lot of
different cafes, but there is one that clearly dominates in Mountain View: Red
Rock.
**7. Google**
Google spread out from its first building in Mountain View to a lot of the
surrounding ones. But because the buildings were built at different times by
different people, the place doesn't have the sterile, walled-off feel that a
typical large company's headquarters have. It definitely has a flavor of its
own though. You sense there is something afoot. The general atmos is vaguely
utopian; there are lots of Priuses, and people who look like they drive them.
You can't get into Google unless you know someone there. It's very much worth
seeing inside if you can, though. Ditto for Facebook, at the end of California
Ave in Palo Alto, though there is nothing to see outside.
**8. Skyline Drive**
Skyline Drive runs along the crest of the Santa Cruz mountains. On one side is
the Valley, and on the other is the sea—which because it's cold and foggy and
has few harbors, plays surprisingly little role in the lives of people in the
Valley, considering how close it is. Along some parts of Skyline the dominant
trees are huge redwoods, and in others they're live oaks. Redwoods mean those
are the parts where the fog off the coast comes in at night; redwoods condense
rain out of fog. The MROSD manages a collection of great walking trails off
Skyline.
**9. 280**
Silicon Valley has two highways running the length of it: 101, which is pretty
ugly, and 280, which is one of the more beautiful highways in the world. I
always take 280 when I have a choice. Notice the long narrow lake to the west?
That's the San Andreas Fault. It runs along the base of the hills, then heads
uphill through Portola Valley. One of the MROSD trails runs right along the
fault. A string of rich neighborhoods runs along the foothills to the west of
280: Woodside, Portola Valley, Los Altos Hills, Saratoga, Los Gatos.
SLAC goes right under 280 a little bit south of Sand Hill Road. And a couple
miles south of that is the Valley's equivalent of the "Welcome to Las Vegas"
sign: The Dish.
* * *
November 2019
If you discover something new, there's a significant chance you'll be accused
of some form of heresy.
To discover new things, you have to work on ideas that are good but non-
obvious; if an idea is obviously good, other people are probably already
working on it. One common way for a good idea to be non-obvious is for it to
be hidden in the shadow of some mistaken assumption that people are very
attached to. But anything you discover from working on such an idea will tend
to contradict the mistaken assumption that was concealing it. And you will
thus get a lot of heat from people attached to the mistaken assumption.
Galileo and Darwin are famous examples of this phenomenon, but it's probably
always an ingredient in the resistance to new ideas.
So it's particularly dangerous for an organization or society to have a
culture of pouncing on heresy. When you suppress heresies, you don't just
prevent people from contradicting the mistaken assumption you're trying to
protect. You also suppress any idea that implies indirectly that it's false.
Every cherished mistaken assumption has a dead zone of unexplored ideas around
it. And the more preposterous the assumption, the bigger the dead zone it
creates.
There is a positive side to this phenomenon though. If you're looking for new
ideas, one way to find them is by _looking for heresies_. When you look at the
question this way, the depressingly large dead zones around mistaken
assumptions become excitingly large mines of new ideas.
* * *
January 2016
Life is short, as everyone knows. When I was a kid I used to wonder about
this. Is life actually short, or are we really complaining about its
finiteness? Would we be just as likely to feel life was short if we lived 10
times as long?
Since there didn't seem any way to answer this question, I stopped wondering
about it. Then I had kids. That gave me a way to answer the question, and the
answer is that life actually is short.
Having kids showed me how to convert a continuous quantity, time, into
discrete quantities. You only get 52 weekends with your 2 year old. If
Christmas-as-magic lasts from say ages 3 to 10, you only get to watch your
child experience it 8 times. And while it's impossible to say what is a lot or
a little of a continuous quantity like time, 8 is not a lot of something. If
you had a handful of 8 peanuts, or a shelf of 8 books to choose from, the
quantity would definitely seem limited, no matter what your lifespan was.
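The discretization is nothing more than counting, which is what makes it so hard to argue with. A back-of-the-envelope version (the ages are the ones in the paragraph above; the arithmetic is the whole point):

```python
# Converting continuous time into discrete, countable units.
weekends_per_year = 52

def yearly_occurrences(first_age, last_age):
    """Count of a once-a-year event from first_age through last_age."""
    return last_age - first_age + 1

print(weekends_per_year)           # weekends with your 2 year old -> 52
print(yearly_occurrences(3, 10))   # magic Christmases, ages 3 to 10 -> 8
```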
Ok, so life actually is short. Does it make any difference to know that?
It has for me. It means arguments of the form "Life is too short for x" have
great force. It's not just a figure of speech to say that life is too short
for something. It's not just a synonym for annoying. If you find yourself
thinking that life is too short for something, you should try to eliminate it
if you can.
When I ask myself what I've found life is too short for, the word that pops
into my head is "bullshit." I realize that answer is somewhat tautological.
It's almost the definition of bullshit that it's the stuff that life is too
short for. And yet bullshit does have a distinctive character. There's
something fake about it. It's the junk food of experience.
If you ask yourself what you spend your time on that's bullshit, you probably
already know the answer. Unnecessary meetings, pointless disputes,
bureaucracy, posturing, dealing with other people's mistakes, traffic jams,
addictive but unrewarding pastimes.
There are two ways this kind of thing gets into your life: it's either forced
on you, or it tricks you. To some extent you have to put up with the bullshit
forced on you by circumstances. You need to make money, and making money
consists mostly of errands. Indeed, the law of supply and demand ensures that:
the more rewarding some kind of work is, the cheaper people will do it. It may
be that less bullshit is forced on you than you think, though. There has
always been a stream of people who opt out of the default grind and go live
somewhere where opportunities are fewer in the conventional sense, but life
feels more authentic. This could become more common.
You can do it on a smaller scale without moving. The amount of time you have
to spend on bullshit varies between employers. Most large organizations (and
many small ones) are steeped in it. But if you consciously prioritize bullshit
avoidance over other factors like money and prestige, you can probably find
employers that will waste less of your time.
If you're a freelancer or a small company, you can do this at the level of
individual customers. If you fire or avoid toxic customers, you can decrease
the amount of bullshit in your life by more than you decrease your income.
But while some amount of bullshit is inevitably forced on you, the bullshit
that sneaks into your life by tricking you is no one's fault but your own. And
yet the bullshit you choose may be harder to eliminate than the bullshit
that's forced on you. Things that lure you into wasting your time have to be
really good at tricking you. An example that will be familiar to a lot of
people is arguing online. When someone contradicts you, they're in a sense
attacking you. Sometimes pretty overtly. Your instinct when attacked is to
defend yourself. But like a lot of instincts, this one wasn't designed for the
world we now live in. Counterintuitive as it feels, it's better most of the
time not to defend yourself. Otherwise these people are literally taking your
life.
Arguing online is only incidentally addictive. There are more dangerous things
than that. As I've written before, one byproduct of technical progress is that
things we like tend to become _more addictive_. Which means we will
increasingly have to make a conscious effort to avoid addictions: to stand
outside ourselves and ask "is this how I want to be spending my time?"
As well as avoiding bullshit, one should actively seek out things that matter.
But different things matter to different people, and most have to learn what
matters to them. A few are lucky and realize early on that they love math or
taking care of animals or writing, and then figure out a way to spend a lot of
time doing it. But most people start out with a life that's a mix of things
that matter and things that don't, and only gradually learn to distinguish
between them.
For the young especially, much of this confusion is induced by the artificial
situations they find themselves in. In middle school and high school, what the
other kids think of you seems the most important thing in the world. But when
you ask adults what they got wrong at that age, nearly all say they cared too
much what other kids thought of them.
One heuristic for distinguishing stuff that matters is to ask yourself whether
you'll care about it in the future. Fake stuff that matters usually has a
sharp peak of seeming to matter. That's how it tricks you. The area under the
curve is small, but its shape jabs into your consciousness like a pin.
The things that matter aren't necessarily the ones people would call
"important." Having coffee with a friend matters. You won't feel later like
that was a waste of time.
One great thing about having small children is that they make you spend time
on things that matter: them. They grab your sleeve as you're staring at your
phone and say "will you play with me?" And odds are that is in fact the
bullshit-minimizing option.
If life is short, we should expect its shortness to take us by surprise. And
that is just what tends to happen. You take things for granted, and then
they're gone. You think you can always write that book, or climb that
mountain, or whatever, and then you realize the window has closed. The saddest
windows close when other people die. Their lives are short too. After my
mother died, I wished I'd spent more time with her. I lived as if she'd always
be there. And in her typical quiet way she encouraged that illusion. But an
illusion it was. I think a lot of people make the same mistake I did.
The usual way to avoid being taken by surprise by something is to be
consciously aware of it. Back when life was more precarious, people used to be
aware of death to a degree that would now seem a bit morbid. I'm not sure why,
but it doesn't seem the right answer to be constantly reminding oneself of the
grim reaper hovering at everyone's shoulder. Perhaps a better solution is to
look at the problem from the other end. Cultivate a habit of impatience about
the things you most want to do. Don't wait before climbing that mountain or
writing that book or visiting your mother. You don't need to be constantly
reminding yourself why you shouldn't wait. Just don't wait.
I can think of two more things one does when one doesn't have much of
something: try to get more of it, and savor what one has. Both make sense
here.
How you live affects how long you live. Most people could do better. Me among
them.
But you can probably get even more effect by paying closer attention to the
time you have. It's easy to let the days rush by. The "flow" that imaginative
people love so much has a darker cousin that prevents you from pausing to
savor life amid the daily slurry of errands and alarms. One of the most
striking things I've read was not in a book, but the title of one: James
Salter's _Burning the Days_.
It is possible to slow time somewhat. I've gotten better at it. Kids help.
When you have small children, there are a lot of moments so perfect that you
can't help noticing.
It does help too to feel that you've squeezed everything out of some
experience. The reason I'm sad about my mother is not just that I miss her but
that I think of all the things we could have done that we didn't. My oldest
son will be 7 soon. And while I miss the 3 year old version of him, I at least
don't have any regrets over what might have been. We had the best time a daddy
and a 3 year old ever had.
Relentlessly prune bullshit, don't wait to do things that matter, and savor
the time you have. That's what you do when life is short.
* * *
April 2012
A palliative care nurse called Bronnie Ware made a list of the biggest regrets
of the dying. Her list seems plausible. I could see myself — _can_ see myself
— making at least 4 of these 5 mistakes.
If you had to compress them into a single piece of advice, it might be: don't
be a cog. The 5 regrets paint a portrait of post-industrial man, who shrinks
himself into a shape that fits his circumstances, then turns dutifully till he
stops.
The alarming thing is, the mistakes that produce these regrets are all errors
of omission. You forget your dreams, ignore your family, suppress your
feelings, neglect your friends, and forget to be happy. Errors of omission are
a particularly dangerous type of mistake, because you make them by default.
I would like to avoid making these mistakes. But how do you avoid mistakes you
make by default? Ideally you transform your life so it has other defaults. But
it may not be possible to do that completely. As long as these mistakes happen
by default, you probably have to be reminded not to make them. So I inverted
the 5 regrets, yielding a list of 5 commands
> Don't ignore your dreams; don't work too much; say what you think; cultivate
> friendships; be happy.
which I then put at the top of the file I use as a todo list.
* * *
March 2012
I'm not a very good speaker. I say "um" a lot. Sometimes I have to pause when
I lose my train of thought. I wish I were a better speaker. But I don't wish I
were a better speaker like I wish I were a better writer. What I really want
is to have good ideas, and that's a much bigger part of being a good writer
than being a good speaker.
Having good ideas is most of writing well. If you know what you're talking
about, you can say it in the plainest words and you'll be perceived as having
a good style. With speaking it's the opposite: having good ideas is an
alarmingly small component of being a good speaker.
I first noticed this at a conference several years ago. There was another
speaker who was much better than me. He had all of us roaring with laughter. I
seemed awkward and halting by comparison. Afterward I put my talk online like
I usually do. As I was doing it I tried to imagine what a transcript of the
other guy's talk would be like, and it was only then I realized he hadn't said
very much.
Maybe this would have been obvious to someone who knew more about speaking,
but it was a revelation to me how much less ideas mattered in speaking than
writing.
A few years later I heard a talk by someone who was not merely a better
speaker than me, but a famous speaker. Boy was he good. So I decided I'd pay
close attention to what he said, to learn how he did it. After about ten
sentences I found myself thinking "I don't want to be a good speaker."
Being a really good speaker is not merely orthogonal to having good ideas, but
in many ways pushes you in the opposite direction. For example, when I give a
talk, I usually write it out beforehand. I know that's a mistake; I know
delivering a prewritten talk makes it harder to engage with an audience. The
way to get the attention of an audience is to give them _your_ full attention,
and when you're delivering a prewritten talk, your attention is always divided
between the audience and the talk — even if you've memorized it. If you want
to engage an audience, it's better to start with no more than an outline of
what you want to say and ad lib the individual sentences. But if you do that,
you might spend no more time thinking about each sentence than it takes to say
it. Occasionally the stimulation of talking to a live audience makes you
think of new things, but in general this is not going to generate ideas as
well as writing does, where you can spend as long on each sentence as you
want.
If you rehearse a prewritten speech enough, you can get asymptotically close
to the sort of engagement you get when speaking ad lib. Actors do. But here
again there's a tradeoff between smoothness and ideas. All the time you spend
practicing a talk, you could instead spend making it better. Actors don't face
that temptation, except in the rare cases where they've written the script,
but any speaker does. Before I give a talk I can usually be found sitting in a
corner somewhere with a copy printed out on paper, trying to rehearse it in my
head. But I always end up spending most of the time rewriting it instead.
Every talk I give ends up being given from a manuscript full of things crossed
out and rewritten. Which of course makes me um even more, because I haven't
had any time to practice the new bits.
Depending on your audience, there are even worse tradeoffs than these.
Audiences like to be flattered; they like jokes; they like to be swept off
their feet by a vigorous stream of words. As you decrease the intelligence of
the audience, being a good speaker is increasingly a matter of being a good
bullshitter. That's true in writing too of course, but the descent is steeper
with talks. Any given person is dumber as a member of an audience than as a
reader. Just as a speaker ad libbing can only spend as long thinking about
each sentence as it takes to say it, a person hearing a talk can only spend as
long thinking about each sentence as it takes to hear it. Plus people in an
audience are always affected by the reactions of those around them, and the
reactions that spread from person to person in an audience are
disproportionately the more brutish sort, just as low notes travel through
walls better than high ones. Every audience is an incipient mob, and a good
speaker uses that. Part of the reason I laughed so much at the talk by the
good speaker at that conference was that everyone else did.
So are talks useless? They're certainly inferior to the written word as a
source of ideas. But that's not all talks are good for. When I go to a talk,
it's usually because I'm interested in the speaker. Listening to a talk is the
closest most of us can get to having a conversation with someone like the
president, who doesn't have time to meet individually with all the people who
want to meet him.
Talks are also good at motivating me to do things. It's probably no
coincidence that so many famous speakers are described as motivational
speakers. That may be what public speaking is really for. It's probably what
it was originally for. The emotional reactions you can elicit with a talk can
be a powerful force. I wish I could say that this force was more often used
for good than ill, but I'm not sure.
* * *
January 2016
Since the 1970s, economic inequality in the US has increased dramatically. And
in particular, the rich have gotten a lot richer. Nearly everyone who writes
about the topic says that economic inequality should be decreased.
I'm interested in this question because I was one of the founders of a company
called Y Combinator that helps people start startups. Almost by definition, if
a startup succeeds, its founders become rich. Which means by helping startup
founders I've been helping to increase economic inequality. If economic
inequality should be decreased, I shouldn't be helping founders. No one should
be.
But that doesn't sound right. What's going on here? What's going on is that
while economic inequality is a single measure (or more precisely, two:
variation in income, and variation in wealth), it has multiple causes. Many of
these causes are bad, like tax loopholes and drug addiction. But some are
good, like Larry Page and Sergey Brin starting the company you use to find
things online.
If you want to understand economic inequality — and more importantly, if you
actually want to fix the bad aspects of it — you have to tease apart the
components. And yet the trend in nearly everything written about the subject
is to do the opposite: to squash together all the aspects of economic
inequality as if it were a single phenomenon.
Sometimes this is done for ideological reasons. Sometimes it's because the
writer only has very high-level data and so draws conclusions from that, like
the proverbial drunk who looks for his keys under the lamppost, instead of
where he dropped them, because the light is better there. Sometimes it's
because the writer doesn't understand critical aspects of inequality, like the
role of technology in wealth creation. Much of the time, perhaps most of the
time, writing about economic inequality combines all three.
___
The most common mistake people make about economic inequality is to treat it
as a single phenomenon. The most naive version of which is the one based on
the pie fallacy: that the rich get rich by taking money from the poor.
Usually this is an assumption people start from rather than a conclusion they
arrive at by examining the evidence. Sometimes the pie fallacy is stated
explicitly:
> ...those at the top are grabbing an increasing fraction of the nation's
> income — so much of a larger share that what's left over for the rest is
> diminished....
Other times it's more unconscious. But the unconscious form is very
widespread. I think because we grow up in a world where the pie fallacy is
actually true. To kids, wealth _is_ a fixed pie that's shared out, and if one
person gets more, it's at the expense of another. It takes a conscious effort
to remind oneself that the real world doesn't work that way.
In the real world you can create wealth as well as taking it from others. A
woodworker creates wealth. He makes a chair, and you willingly give him money
in return for it. A high-frequency trader does not. He makes a dollar only
when someone on the other end of a trade loses a dollar.
If the rich people in a society got that way by taking wealth from the poor,
then you have the degenerate case of economic inequality, where the cause of
poverty is the same as the cause of wealth. But instances of inequality don't
have to be instances of the degenerate case. If one woodworker makes 5 chairs
and another makes none, the second woodworker will have less money, but not
because anyone took anything from him.
Even people sophisticated enough to know about the pie fallacy are led toward
it by the custom of describing economic inequality as a ratio of one
quantile's income or wealth to another's. It's so easy to slip from talking
about income shifting from one quantile to another, as a figure of speech,
into believing that is literally what's happening.
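To see how purely descriptive such a ratio is, here's a minimal sketch in Python, with invented incomes. Nothing in the computation models money moving from one group to another; it only describes the shape of the distribution:

```python
# Computing a decile ratio over an invented income distribution.
incomes = sorted([18, 22, 25, 31, 40, 52, 70, 95, 140, 400])  # $k/year

def quantile_total(values, lo, hi):
    """Sum the values falling between two quantile cutoffs."""
    n = len(values)
    return sum(values[int(lo * n):int(hi * n)])

ratio = quantile_total(incomes, 0.9, 1.0) / quantile_total(incomes, 0.0, 0.1)
print(ratio)  # top decile income over bottom decile income; no transfer implied
```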
Except in the degenerate case, economic inequality can't be described by a
ratio or even a curve. In the general case it consists of multiple ways people
become poor, and multiple ways people become rich. Which means to understand
economic inequality in a country, you have to go find individual people who
are poor or rich and figure out why.
If you want to understand _change_ in economic inequality, you should ask what
those people would have done when it was different. This is one way I know the
rich aren't all getting richer simply from some new system for transferring
wealth to them from everyone else. When you use the would-have method with
startup founders, you find what most would have done _back in 1960_, when
economic inequality was lower, was to join big companies or become professors.
Before Mark Zuckerberg started Facebook, his default expectation was that he'd
end up working at Microsoft. The reason he and most other startup founders are
richer than they would have been in the mid 20th century is not because of
some right turn the country took during the Reagan administration, but because
progress in technology has made it much easier to start a new company that
_grows fast_.
Traditional economists seem strangely averse to studying individual humans. It
seems to be a rule with them that everything has to start with statistics. So
they give you very precise numbers about variation in wealth and income, then
follow it with the most naive speculation about the underlying causes.
But while there are a lot of people who get rich through rent-seeking of
various forms, and a lot who get rich by playing zero-sum games, there are
also a significant number who get rich by creating wealth. And creating
wealth, as a source of economic inequality, is different from taking it — not
just morally, but also practically, in the sense that it is harder to
eradicate. One reason is that variation in productivity is accelerating. The
rate at which individuals can create wealth depends on the technology
available to them, and that grows exponentially. The other reason creating
wealth is such a tenacious source of inequality is that it can expand to
accommodate a lot of people.
___
I'm all for shutting down the crooked ways to get rich. But that won't
eliminate great variations in wealth, because as long as you leave open the
option of getting rich by creating wealth, people who want to get rich will do
that instead.
Most people who get rich tend to be fairly driven. Whatever their other flaws,
laziness is usually not one of them. Suppose new policies make it hard to make
a fortune in finance. Does it seem plausible that the people who currently go
into finance to make their fortunes will continue to do so, but be content to
work for ordinary salaries? The reason they go into finance is not because
they love finance but because they want to get rich. If the only way left to
get rich is to start startups, they'll start startups. They'll do well at it
too, because determination is the main factor in the success of a startup.
And while it would probably be a good thing for the world if people who wanted
to get rich switched from playing zero-sum games to creating wealth, that
would not only not eliminate great variations in wealth, but might even
exacerbate them. In a zero-sum game there is at least a limit to the upside.
Plus a lot of the new startups would create new technology that further
accelerated variation in productivity.
Variation in productivity is far from the only source of economic inequality,
but it is the irreducible core of it, in the sense that you'll have that left
when you eliminate all other sources. And if you do, that core will be big,
because it will have expanded to include the efforts of all the refugees. Plus
it will have a large Baumol penumbra around it: anyone who could get rich by
creating wealth on their own account will have to be paid enough to prevent
them from doing it.
You can't prevent great variations in wealth without preventing people from
getting rich, and you can't do that without preventing them from starting
startups.
So let's be clear about that. Eliminating great variations in wealth would
mean eliminating startups. And that doesn't seem a wise move. Especially since
it would only mean you eliminated startups in your own country. Ambitious
people already move halfway around the world to further their careers, and
startups can operate from anywhere nowadays. So if you made it impossible to
get rich by creating wealth in your country, people who wanted to do that
would just leave and do it somewhere else. Which would certainly get you a
lower Gini coefficient, along with a lesson in being careful what you ask for.
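For what it's worth, the Gini coefficient itself is easy to compute. Here's a minimal sketch with invented wealth figures, showing how losing your richest residents mechanically lowers it:

```python
# The Gini coefficient: mean absolute difference between all pairs,
# normalized. The wealth figures below are invented for illustration.
def gini(values):
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2 * n * n * mean)

town = [30, 40, 50, 60, 500]   # one founder got rich here
print(gini(town))              # ~0.57
print(gini(town[:-1]))         # ~0.14 once the founder leaves
```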
I think rising economic inequality is the inevitable fate of countries that
don't choose something worse. We had a 40 year stretch in the middle of the
20th century that convinced some people otherwise. But as I explained in _The
Refragmentation_, that was an anomaly — a unique combination of circumstances
that compressed American society not just economically but culturally too.
And while some of the growth in economic inequality we've seen since then has
been due to bad behavior of various kinds, there has simultaneously been a
huge increase in individuals' ability to create wealth. Startups are almost
entirely a product of this period. And even within the startup world, there
has been a qualitative change in the last 10 years. Technology has decreased
the cost of starting a startup so much that founders now have the upper hand
over investors. Founders get less diluted, and it is now common for them to
retain _board control_ as well. Both further increase economic inequality, the
former because founders own more stock, and the latter because, as investors
have learned, founders tend to be better at running their companies than
investors.
While the surface manifestations change, the underlying forces are very, very
old. The acceleration of productivity we see in Silicon Valley has been
happening for thousands of years. If you look at the history of stone tools,
technology was already accelerating in the Mesolithic. The acceleration would
have been too slow to perceive in one lifetime. Such is the nature of the
leftmost part of an exponential curve. But it was the same curve.
You do not want to design your society in a way that's incompatible with this
curve. The evolution of technology is one of the most powerful forces in
history.
Louis Brandeis said "We may have democracy, or we may have wealth concentrated
in the hands of a few, but we can't have both." That sounds plausible. But if
I have to choose between ignoring him and ignoring an exponential curve that
has been operating for thousands of years, I'll bet on the curve. Ignoring any
trend that has been operating for thousands of years is dangerous. But
exponential growth, especially, tends to bite you.
___
If accelerating variation in productivity is always going to produce some
baseline growth in economic inequality, it would be a good idea to spend some
time thinking about that future. Can you have a healthy society with great
variation in wealth? What would it look like?
Notice how novel it feels to think about that. The public conversation so far
has been exclusively about the need to decrease economic inequality. We've
barely given a thought to how to live with it.
I'm hopeful we'll be able to. Brandeis was a product of the Gilded Age, and
things have changed since then. It's harder to hide wrongdoing now. And to get
rich now you don't have to buy politicians the way railroad or oil magnates
did. The great concentrations of wealth I see around me in Silicon Valley
don't seem to be destroying democracy.
There are lots of things wrong with the US that have economic inequality as a
symptom. We should fix those things. In the process we may decrease economic
inequality. But we can't start from the symptom and hope to fix the underlying
causes.
The most obvious is poverty. I'm sure most of those who want to decrease
economic inequality want to do it mainly to help the poor, not to hurt the
rich. Indeed, a good number are merely being sloppy by speaking of
decreasing economic inequality when what they mean is decreasing poverty. But
this is a situation where it would be good to be precise about what we want.
Poverty and economic inequality are not identical. When the city is turning
off your _water_ because you can't pay the bill, it doesn't make any
difference what Larry Page's net worth is compared to yours. He might only be
a few times richer than you, and it would still be just as much of a problem
that your water was getting turned off.
Closely related to poverty is lack of social mobility. I've seen this myself:
you don't have to grow up rich or even upper middle class to get rich as a
startup founder, but few successful founders grew up desperately poor. But
again, the problem here is not simply economic inequality. There is an
enormous difference in wealth between the household Larry Page grew up in and
that of a successful startup founder, but that didn't prevent him from joining
their ranks. It's not economic inequality per se that's blocking social
mobility, but some specific combination of things that go wrong when kids grow
up sufficiently poor.
One of the most important principles in Silicon Valley is that "you make what
you measure." It means that if you pick some number to focus on, it will tend
to improve, but that you have to choose the right number, because only the one
you choose will improve; another that seems conceptually adjacent might not.
For example, if you're a university president and you decide to focus on
graduation rates, then you'll improve graduation rates. But only graduation
rates, not how much students learn. Students could learn less, if to improve
graduation rates you made classes easier.
Economic inequality is sufficiently far from identical with the various
problems that have it as a symptom that we'll probably only hit whichever of
the two we aim at. If we aim at economic inequality, we won't fix these
problems. So I say let's aim at the problems.
For example, let's attack poverty, and if necessary damage wealth in the
process. That's much more likely to work than attacking wealth in the hope
that you will thereby fix poverty. And if there are people getting rich by
tricking consumers or lobbying the government for anti-competitive regulations
or tax loopholes, then let's stop them. Not because it's causing economic
inequality, but because it's stealing.
If all you have is statistics, it seems like that's what you need to fix. But
behind a broad statistical measure like economic inequality there are some
things that are good and some that are bad, some that are historical trends
with immense momentum and others that are random accidents. If we want to fix
the world behind the statistics, we have to understand it, and focus our
efforts where they'll do the most good.
* * *
November 2019
Everyone knows that to do great work you need both natural ability and
determination. But there's a third ingredient that's not as well understood:
an obsessive interest in a particular topic.
To explain this point I need to burn my reputation with some group of people,
and I'm going to choose bus ticket collectors. There are people who collect
old bus tickets. Like many collectors, they have an obsessive interest in the
minutiae of what they collect. They can keep track of distinctions between
different types of bus tickets that would be hard for the rest of us to
remember. Because we don't care enough. What's the point of spending so much
time thinking about old bus tickets?
Which leads us to the second feature of this kind of obsession: there is no
point. A bus ticket collector's love is disinterested. They're not doing it to
impress us or to make themselves rich, but for its own sake.
When you look at the lives of people who've done great work, you see a
consistent pattern. They often begin with a bus ticket collector's obsessive
interest in something that would have seemed pointless to most of their
contemporaries. One of the most striking features of Darwin's book about his
voyage on the Beagle is the sheer depth of his interest in natural history.
His curiosity seems infinite. Ditto for Ramanujan, sitting by the hour working
out on his slate what happens to series.
It's a mistake to think they were "laying the groundwork" for the discoveries
they made later. There's too much intention in that metaphor. Like bus ticket
collectors, they were doing it because they liked it.
But there is a difference between Ramanujan and a bus ticket collector. Series
matter, and bus tickets don't.
If I had to put the recipe for genius into one sentence, that might be it: to
have a disinterested obsession with something that matters.
Aren't I forgetting about the other two ingredients? Less than you might
think. An obsessive interest in a topic is both a proxy for ability and a
substitute for determination. Unless you have sufficient mathematical
aptitude, you won't find series interesting. And when you're obsessively
interested in something, you don't need as much determination: you don't need
to push yourself as hard when curiosity is pulling you.
An obsessive interest will even bring you luck, to the extent anything can.
Chance, as Pasteur said, favors the prepared mind, and if there's one thing an
obsessed mind is, it's prepared.
The disinterestedness of this kind of obsession is its most important feature.
Not just because it's a filter for earnestness, but because it helps you
discover new ideas.
The paths that lead to new ideas tend to look unpromising. If they looked
promising, other people would already have explored them. How do the people
who do great work discover these paths that others overlook? The popular story
is that they simply have better vision: because they're so talented, they see
paths that others miss. But if you look at the way great discoveries are made,
that's not what happens. Darwin didn't pay closer attention to individual
species than other people because he saw that this would lead to great
discoveries, and they didn't. He was just really, really interested in such
things.
Darwin couldn't turn it off. Neither could Ramanujan. They didn't discover the
hidden paths that they did because they seemed promising, but because they
couldn't help it. That's what allowed them to follow paths that someone who
was merely ambitious would have ignored.
What rational person would decide that the way to write great novels was to
begin by spending several years creating an imaginary elvish language, like
Tolkien, or visiting every household in southwestern Britain, like Trollope?
No one, including Tolkien and Trollope.
The bus ticket theory is similar to Carlyle's famous definition of genius as
an infinite capacity for taking pains. But there are two differences. The bus
ticket theory makes it clear that the source of this infinite capacity for
taking pains is not infinite diligence, as Carlyle seems to have meant, but
the sort of infinite interest that collectors have. It also adds an important
qualification: an infinite capacity for taking pains about something that
matters.
So what matters? You can never be sure. It's precisely because no one can tell
in advance which paths are promising that you can discover new ideas by
working on what you're interested in.
But there are some heuristics you can use to guess whether an obsession might
be one that matters. For example, it's more promising if you're creating
something, rather than just consuming something someone else creates. It's
more promising if something you're interested in is difficult, especially if
it's _more difficult for other people_ than it is for you. And the obsessions
of talented people are more likely to be promising. When talented people
become interested in random things, they're not truly random.
But you can never be sure. In fact, here's an interesting idea that's also
rather alarming if it's true: it may be that to do great work, you also have
to waste a lot of time.
In many different areas, reward is proportionate to risk. If that rule holds
here, then the way to find paths that lead to truly great work is to be
willing to expend a lot of effort on things that turn out to be every bit as
unpromising as they seem.
I'm not sure if this is true. On one hand, it seems surprisingly difficult to
waste your time so long as you're working hard on something interesting. So
much of what you do ends up being useful. But on the other hand, the rule
about the relationship between risk and reward is so powerful that it seems to
hold wherever risk occurs. _Newton's_ case, at least, suggests that the
risk/reward rule holds here. He's famous for one particular obsession of his
that turned out to be unprecedentedly fruitful: using math to describe the
world. But he had two other obsessions, alchemy and theology, that seem to
have been complete wastes of time. He ended up net ahead. His bet on what we
now call physics paid off so well that it more than compensated for the other
two. But were the other two necessary, in the sense that he had to take big
risks to make such big discoveries? I don't know.
Here's an even more alarming idea: might one make all bad bets? It probably
happens quite often. But we don't know how often, because these people don't
become famous.
It's not merely that the returns from following a path are hard to predict.
They change dramatically over time. 1830 was a really good time to be
obsessively interested in natural history. If Darwin had been born in 1709
instead of 1809, we might never have heard of him.
What can one do in the face of such uncertainty? One solution is to hedge your
bets, which in this case means to follow the obviously promising paths instead
of your own private obsessions. But as with any hedge, you're decreasing
reward when you decrease risk. If you forgo working on what you like in order
to follow some more conventionally ambitious path, you might miss something
wonderful that you'd otherwise have discovered. That too must happen all the
time, perhaps even more often than the genius whose bets all fail.
The other solution is to let yourself be interested in lots of different
things. You don't decrease your upside if you switch between equally genuine
interests based on which seems to be working so far. But there is a danger
here too: if you work on too many different projects, you might not get deeply
enough into any of them.
One interesting thing about the bus ticket theory is that it may help explain
why different types of people excel at different kinds of work. Interest is
much more unevenly distributed than ability. If natural ability is all you
need to do great work, and natural ability is evenly distributed, you have to
invent elaborate theories to explain the skewed distributions we see among
those who actually do great work in various fields. But it may be that much of
the skew has a simpler explanation: different people are interested in
different things.
The bus ticket theory also explains why people are less likely to do great
work after they have children. Here interest has to compete not just with
external obstacles, but with another interest, and one that for most people is
extremely powerful. It's harder to find time for work after you have kids, but
that's the easy part. The real change is that you don't want to.
But the most exciting implication of the bus ticket theory is that it suggests
ways to encourage great work. If the recipe for genius is simply natural
ability plus hard work, all we can do is hope we have a lot of ability, and
work as hard as we can. But if interest is a critical ingredient in genius, we
may be able, by cultivating interest, to cultivate genius.
For example, for the very ambitious, the bus ticket theory suggests that the
way to do great work is to relax a little. Instead of gritting your teeth and
diligently pursuing what all your peers agree is the most promising line of
research, maybe you should try doing something just for fun. And if you're
stuck, that may be the vector along which to break out.
I've always liked _Hamming's_ famous double-barrelled question: what are the
most important problems in your field, and why aren't you working on one of
them? It's a great way to shake yourself up. But it may be overfitting a bit.
It might be at least as useful to ask yourself: if you could take a year off
to work on something that probably wouldn't be important but would be really
interesting, what would it be?
The bus ticket theory also suggests a way to avoid slowing down as you get
older. Perhaps the reason people have fewer new ideas as they get older is not
simply that they're losing their edge. It may also be because once you become
established, you can no longer mess about with irresponsible side projects the
way you could when you were young and no one cared what you did.
The solution to that is obvious: remain irresponsible. It will be hard,
though, because the apparently random projects you take up to stave off
decline will read to outsiders as evidence of it. And you yourself won't know
for sure that they're wrong. But it will at least be more fun to work on what
you want.
It may even be that we can cultivate a habit of intellectual bus ticket
collecting in kids. The usual plan in education is to start with a broad,
shallow focus, then gradually become more specialized. But I've done the
opposite with my kids. I know I can count on their school to handle the broad,
shallow part, so I take them deep.
When they get interested in something, however random, I encourage them to go
preposterously, bus ticket collectorly, deep. I don't do this because of the
bus ticket theory. I do it because I want them to feel the joy of learning,
and they're never going to feel that about something I'm making them learn. It
has to be something they're interested in. I'm just following the path of
least resistance; depth is a byproduct. But if in trying to show them the joy
of learning I also end up training them to go deep, so much the better.
Will it have any effect? I have no idea. But that uncertainty may be the most
interesting point of all. There is so much more to learn about how to do great
work. As old as human civilization feels, it's really still very young if we
haven't nailed something so basic. It's exciting to think there are still
discoveries to make about discovery. If that's the sort of thing you're
interested in.
* * *
October 2010
After barely changing at all for decades, the startup funding business is now
in what could, at least by comparison, be called turmoil. At Y Combinator
we've seen dramatic changes in the funding environment for startups.
Fortunately one of them is much higher valuations.
The trends we've been seeing are probably not YC-specific. I wish I could say
they were, but the main cause is probably just that we see trends first—partly
because the startups we fund are very plugged into the Valley and are quick to
take advantage of anything new, and partly because we fund so many that we
have enough data points to see patterns clearly.
What we're seeing now, everyone's probably going to be seeing in the next
couple years. So I'm going to explain what we're seeing, and what that will
mean for you if you try to raise money.
**Super-Angels**
Let me start by describing what the world of startup funding used to look
like. There used to be two sharply differentiated types of investors: angels
and venture capitalists. Angels are individual rich people who invest small
amounts of their own money, while VCs are employees of funds that invest large
amounts of other people's.
For decades there were just those two types of investors, but now a third type
has appeared halfway between them: the so-called super-angels. And VCs
have been provoked by their arrival into making a lot of angel-style
investments themselves. So the previously sharp line between angels and VCs
has become hopelessly blurred.
There used to be a no man's land between angels and VCs. Angels would invest
$20k to $50k apiece, and VCs usually a million or more. So an angel round
meant a collection of angel investments that combined to maybe $200k, and a VC
round meant a series A round in which a single VC fund (or occasionally two)
invested $1-5 million.
The no man's land between angels and VCs was a very inconvenient one for
startups, because it coincided with the amount many wanted to raise. Most
startups coming out of Demo Day wanted to raise around $400k. But it was a
pain to stitch together that much out of angel investments, and most VCs
weren't interested in investments so small. That's the fundamental reason the
super-angels have appeared. They're responding to the market.
The arrival of a new type of investor is big news for startups, because there
used to be only two and they rarely competed with one another. Super-angels
compete with both angels and VCs. That's going to change the rules about how
to raise money. I don't know yet what the new rules will be, but it looks like
most of the changes will be for the better.
A super-angel has some of the qualities of an angel, and some of the qualities
of a VC. They're usually individuals, like angels. In fact many of the current
super-angels were initially angels of the classic type. But like VCs, they
invest other people's money. This allows them to invest larger amounts than
angels: a typical super-angel investment is currently about $100k. They make
investment decisions quickly, like angels. And they make a lot more
investments per partner than VCs—up to 10 times as many.
The fact that super-angels invest other people's money makes them doubly
alarming to VCs. They don't just compete for startups; they also compete for
investors. What super-angels really are is a new form of fast-moving,
lightweight VC fund. And those of us in the technology world know what usually
happens when something comes along that can be described in terms like that.
Usually it's the replacement.
Will it be? As of now, few of the startups that take money from super-angels
are ruling out taking VC money. They're just postponing it. But that's still a
problem for VCs. Some of the startups that postpone raising VC money may do so
well on the angel money they raise that they never bother to raise more. And
those who do raise VC rounds will be able to get higher valuations when they
do. If the best startups get 10x higher valuations when they raise series A
rounds, that would cut VCs' returns from winners at least tenfold.
So I think VC funds are seriously threatened by the super-angels. But one
thing that may save them to some extent is the uneven distribution of startup
outcomes: practically all the returns are concentrated in a few big successes.
The expected value of a startup is the percentage chance it's Google. So to
the extent that winning is a matter of absolute returns, the super-angels
could win practically all the battles for individual startups and yet lose the
war, if they merely failed to get those few big winners. And there's a chance
that could happen, because the top VC funds have better brands, and can also
do more for their portfolio companies.
Because super-angels make more investments per partner, they have less partner
per investment. They can't pay as much attention to you as a VC on your board
could. How much is that extra attention worth? It will vary enormously from
one partner to another. There's no consensus yet in the general case. So for
now this is something startups are deciding individually.
Till now, VCs' claims about how much value they added were sort of like the
government's. Maybe they made you feel better, but you had no choice in the
matter, if you needed money on the scale only VCs could supply. Now that VCs
have competitors, that's going to put a market price on the help they offer.
The interesting thing is, no one knows yet what it will be.
Do startups that want to get really big need the sort of advice and
connections only the top VCs can supply? Or would super-angel money do just as
well? The VCs will say you need them, and the super-angels will say you don't.
But the truth is, no one knows yet, not even the VCs and super-angels
themselves. All the super-angels know is that their new model seems promising
enough to be worth trying, and all the VCs know is that it seems promising
enough to worry about.
**Rounds**
Whatever the outcome, the conflict between VCs and super-angels is good news
for founders. And not just for the obvious reason that more competition for
deals means better terms. The whole shape of deals is changing.
One of the biggest differences between angels and VCs is the amount of your
company they want. VCs want a lot. In a series A round they want a third of
your company, if they can get it. They don't care much how much they pay for
it, but they want a lot because the number of series A investments they can do
is so small. In a traditional series A investment, at least one partner from
the VC fund takes a seat on your board. Since board seats last about 5
years and each partner can't handle more than about 10 at once, that means a
VC fund can only do about 2 series A deals per partner per year. And that
means they need to get as much of the company as they can in each one. You'd
have to be a very promising startup indeed to get a VC to use up one of his 10
board seats for only a few percent of you.
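The arithmetic behind that constraint is simple. A sketch, using the rough estimates above rather than hard data:

```python
# Why a VC needs a big percentage of each series A deal.
seats_per_partner = 10   # boards a partner can handle at once
years_per_seat = 5       # how long a board seat typically lasts
print(seats_per_partner / years_per_seat)  # 2.0 new deals per partner per year
```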
Since angels generally don't take board seats, they don't have this
constraint. They're happy to buy only a few percent of you. And although the
super-angels are in most respects mini VC funds, they've retained this
critical property of angels. They don't take board seats, so they don't need a
big percentage of your company.
Though that means you'll get correspondingly less attention from them, it's
good news in other respects. Founders never really liked giving up as much
equity as VCs wanted. It was a lot of the company to give up in one shot. Most
founders doing series A deals would prefer to take half as much money for half
as much stock, and then see what valuation they could get for the second half
of the stock after using the first half of the money to increase its value.
But VCs never offered that option.
Now startups have another alternative. Now it's easy to raise angel rounds
about half the size of series A rounds. Many of the startups we fund are
taking this route, and I predict that will be true of startups in general.
A typical big angel round might be $600k on a convertible note with a
valuation cap of $4 million premoney. Meaning that when the note converts into
stock (in a later round, or upon acquisition), the investors in that round
will get .6 / 4.6, or 13% of the company. That's a lot less than the 30 to 40%
of the company you usually give up in a series A round if you do it so early.
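Here's that dilution arithmetic as a minimal sketch, ignoring the discounts, interest, and other terms real convertible notes carry:

```python
# Ownership at conversion when the cap applies, treating the cap as a
# premoney valuation as in the example above.
def ownership_at_cap(amount_raised_m, valuation_cap_m):
    post_money = valuation_cap_m + amount_raised_m
    return amount_raised_m / post_money

print(ownership_at_cap(0.6, 4.0))  # 0.6 / 4.6 ≈ 0.13, i.e. 13%
```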
But the advantage of these medium-sized rounds is not just that they cause
less dilution. You also lose less control. After an angel round, the founders
almost always still have control of the company, whereas after a series A
round they often don't. The traditional board structure after a series A round
is two founders, two VCs, and a (supposedly) neutral fifth person. Plus series
A terms usually give the investors a veto over various kinds of important
decisions, including selling the company. Founders usually have a lot of de
facto control after a series A, as long as things are going well. But that's
not the same as just being able to do what you want, like you could before.
A third and quite significant advantage of angel rounds is that they're less
stressful to raise. Raising a traditional series A round has in the past taken
weeks, if not months. When a VC firm can only do 2 deals per partner per year,
they're careful about which they do. To get a traditional series A round you
have to go through a series of meetings, culminating in a full partner meeting
where the firm as a whole says yes or no. That's the really scary part for
founders: not just that series A rounds take so long, but at the end of this
long process the VCs might still say no. The chance of getting rejected after
the full partner meeting averages about 25%. At some firms it's over 50%.
Fortunately for founders, VCs have been getting a lot faster. Nowadays Valley
VCs are more likely to take 2 weeks than 2 months. But they're still not as
fast as angels and super-angels, the most decisive of whom sometimes decide in
hours.
Raising an angel round is not only quicker, but you get feedback as it
progresses. An angel round is not an all or nothing thing like a series A.
It's composed of multiple investors with varying degrees of seriousness,
ranging from the upstanding ones who commit unequivocally to the jerks who
give you lines like "come back to me to fill out the round." You usually start
collecting money from the most committed investors and work your way out
toward the ambivalent ones, whose interest increases as the round fills up.
But at each point you know how you're doing. If investors turn cold you may
have to raise less, but when investors in an angel round turn cold the process
at least degrades gracefully, instead of blowing up in your face and leaving
you with nothing, as happens if you get rejected by a VC fund after a full
partner meeting. Whereas if investors seem hot, you can not only close the
round faster, but now that convertible notes are becoming the norm, actually
raise the price to reflect demand.
**Valuation**
However, the VCs have a weapon they can use against the super-angels, and they
have started to use it. VCs have started making angel-sized investments too.
The term "angel round" doesn't mean that all the investors in it are angels;
it just describes the structure of the round. Increasingly the participants
include VCs making investments of a hundred thousand or two. And when VCs
invest in angel rounds they can do things that super-angels don't like. VCs
are quite valuation-insensitive in angel rounds—partly because they are in
general, and partly because they don't care that much about the returns on
angel rounds, which they still view mostly as a way to recruit startups for
series A rounds later. So VCs who invest in angel rounds can blow up the
valuations for angels and super-angels who invest in them.
Some super-angels seem to care about valuations. Several turned down YC-funded
startups after Demo Day because their valuations were too high. This was not a
problem for the startups; by definition a high valuation means enough
investors were willing to accept it. But it was mysterious to me that the
super-angels would quibble about valuations. Did they not understand that the
big returns come from a few big successes, and that it therefore mattered far
more which startups you picked than how much you paid for them?
After thinking about it for a while and observing certain other signs, I have
a theory that explains why the super-angels may be smarter than they seem. It
would make sense for super-angels to want low valuations if they're hoping to
invest in startups that get bought early. If you're hoping to hit the next
Google, you shouldn't care if the valuation is 20 million. But if you're
looking for companies that are going to get bought for 30 million, you care.
If you invest at 20 and the company gets bought for 30, you only get 1.5x. You
might as well buy Apple.
So if some of the super-angels were looking for companies that could get
acquired quickly, that would explain why they'd care about valuations. But why
would they be looking for those? Because depending on the meaning of
"quickly," it could actually be very profitable. A company that gets acquired
for 30 million is a failure to a VC, but it could be a 10x return for an
angel, and moreover, a _quick_ 10x return. Rate of return is what matters in
investing—not the multiple you get, but the multiple per year. If a super-
angel gets 10x in one year, that's a higher rate of return than a VC could
ever hope to get from a company that took 6 years to go public. To get the
same rate of return, the VC would have to get a multiple of 10^6—one million
x. Even Google didn't come close to that.
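The comparison is easier to see if you annualize the multiples. A sketch:

```python
# A multiple m earned over t years is an annual rate of m ** (1 / t).
def annual_rate(multiple, years):
    return multiple ** (1 / years)

print(annual_rate(10, 1))   # super-angel: 10.0x per year
print(annual_rate(10, 6))   # 10x over 6 years: ~1.47x per year
print(10 ** 6)              # multiple needed over 6 years to match 10x per year
```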
So I think at least some super-angels are looking for companies that will get
bought. That's the only rational explanation for focusing on getting the right
valuations, instead of the right companies. And if so they'll be different to
deal with than VCs. They'll be tougher on valuations, but more accommodating
if you want to sell early.
**Prognosis**
Who will win, the super-angels or the VCs? I think the answer to that is, some
of each. They'll each become more like one another. The super-angels will
start to invest larger amounts, and the VCs will gradually figure out ways to
make more, smaller investments faster. A decade from now the players will be
hard to tell apart, and there will probably be survivors from each group.
What does that mean for founders? One thing it means is that the high
valuations startups are presently getting may not last forever. To the extent
that valuations are being driven up by price-insensitive VCs, they'll fall
again if VCs become more like super-angels and start to become more miserly
about valuations. Fortunately if this does happen it will take years.
The short term forecast is more competition between investors, which is good
news for you. The super-angels will try to undermine the VCs by acting faster,
and the VCs will try to undermine the super-angels by driving up valuations.
Which for founders will result in the perfect combination: funding rounds that
close fast, with high valuations.
But remember that to get that combination, your startup will have to appeal to
both super-angels and VCs. If you don't seem like you have the potential to go
public, you won't be able to use VCs to drive up the valuation of an angel
round.
There is a danger of having VCs in an angel round: the so-called signalling
risk. If VCs are only doing it in the hope of investing more later, what
happens if they don't? That's a signal to everyone else that they think you're
lame.
How much should you worry about that? The seriousness of signalling risk
depends on how far along you are. If by the next time you need to raise money,
you have graphs showing rising revenue or traffic month after month, you don't
have to worry about any signals your existing investors are sending. Your
results will speak for themselves.
Whereas if the next time you need to raise money you won't yet have concrete
results, you may need to think more about the message your investors might
send if they don't invest more. I'm not sure yet how much you have to worry,
because this whole phenomenon of VCs doing angel investments is so new. But my
instincts tell me you don't have to worry much. Signalling risk smells like
one of those things founders worry about that's not a real problem. As a rule,
the only thing that can kill a good startup is the startup itself. Startups
hurt themselves way more often than competitors hurt them, for example. I
suspect signalling risk is in this category too.
One thing YC-funded startups have been doing to mitigate the risk of taking
money from VCs in angel rounds is not to take too much from any one VC. Maybe
that will help, if you have the luxury of turning down money.
Fortunately, more and more startups will. After decades of competition that
could best be described as intramural, the startup funding business is finally
getting some real competition. That should last several years at least, and
maybe a lot longer. Unless there's some huge market crash, the next couple
years are going to be a good time for startups to raise money. And that's
exciting because it means lots more startups will happen.
* * *
October 2011
If you look at a list of US cities sorted by population, the number of
successful startups per capita varies by orders of magnitude. Somehow it's as
if most places were sprayed with startupicide.
I wondered about this for years. I could see the average town was like a roach
motel for startup ambitions: smart, ambitious people went in, but no startups
came out. But I was never able to figure out exactly what happened inside the
motel—exactly what was killing all the potential startups.
A couple weeks ago I finally figured it out. I was framing the question wrong.
The problem is not that most towns kill startups. It's that death is the
default for startups, and most towns don't save them. Instead of thinking of
most places as being sprayed with startupicide, it's more accurate to think of
startups as all being poisoned, and a few places being sprayed with the
antidote.
Startups in other places are just doing what startups naturally do: fail. The
real question is, what's _saving_ startups in places like Silicon Valley?
**Environment**
I think there are two components to the antidote: being in a place where
startups are the cool thing to do, and chance meetings with people who can
help you. And what drives them both is the number of startup people around
you.
The first component is particularly helpful in the first stage of a startup's
life, when you go from merely having an interest in starting a company to
actually doing it. It's quite a leap to start a startup. It's an unusual thing
to do. But in Silicon Valley it seems normal.
In most places, if you start a startup, people treat you as if you're
unemployed. People in the Valley aren't automatically impressed with you just
because you're starting a company, but they pay attention. Anyone who's been
here any amount of time knows not to default to skepticism, no matter how
inexperienced you seem or how unpromising your idea sounds at first, because
they've all seen inexperienced founders with unpromising-sounding ideas who a
few years later were billionaires.
Having people around you care about what you're doing is an extraordinarily
powerful force. Even the most willful people are susceptible to it. About a
year after we started Y Combinator I said something to a partner at a well
known VC firm that gave him the (mistaken) impression I was considering
starting another startup. He responded so eagerly that for about half a second
I found myself considering doing it.
In most other cities, the prospect of starting a startup just doesn't seem
real. In the Valley it's not only real but fashionable. That no doubt causes a
lot of people to start startups who shouldn't. But I think that's ok. Few
people are suited to running a startup, and it's very hard to predict
beforehand which are (as I know all too well from being in the business of
trying to predict beforehand), so lots of people starting startups who
shouldn't is probably the optimal state of affairs. As long as you're at a
point in your life when you can bear the risk of failure, the best way to find
out if you're suited to running a startup is to try it.
**Chance**
The second component of the antidote is chance meetings with people who can
help you. This force works in both phases: both in the transition from the
desire to start a startup to starting one, and the transition from starting a
company to succeeding. The power of chance meetings is more variable than
people around you caring about startups, which is like a sort of background
radiation that affects everyone equally, but at its strongest it is far
stronger.
Chance meetings produce miracles to compensate for the disasters that
characteristically befall startups. In the Valley, terrible things happen to
startups all the time, just like they do to startups everywhere. The reason
startups are more likely to make it here is that great things happen to them
too. In the Valley, lightning has a sign bit.
For example, you start a site for college students and you decide to move to
the Valley for the summer to work on it. And then on a random suburban street
in Palo Alto you happen to run into Sean Parker, who understands the domain
really well because he started a similar startup himself, and also knows all
the investors. And moreover has advanced views, for 2004, on founders
retaining control of their companies.
You can't say precisely what the miracle will be, or even for sure that one
will happen. The best one can say is: if you're in a startup hub, unexpected
good things will probably happen to you, especially if you deserve them.
I bet this is true even for startups we fund. Even with us working to make
things happen for them on purpose rather than by accident, the frequency of
helpful chance meetings in the Valley is so high that it's still a significant
increment on what we can deliver.
Chance meetings play a role like the role relaxation plays in having ideas.
Most people have had the experience of working hard on some problem, not being
able to solve it, giving up and going to bed, and then thinking of the answer
in the shower in the morning. What makes the answer appear is letting your
thoughts drift a bit—and thus drift off the wrong path you'd been pursuing
last night and onto the right one adjacent to it.
Chance meetings let your acquaintance drift in the same way taking a shower
lets your thoughts drift. The critical thing in both cases is that they drift
just the right amount. The meeting between Larry Page and Sergey Brin was a
good example. They let their acquaintance drift, but only a little; they were
both meeting someone they had a lot in common with.
For Larry Page the most important component of the antidote was Sergey Brin,
and vice versa. The antidote is people. It's not the physical infrastructure
of Silicon Valley that makes it work, or the weather, or anything like that.
Those helped get it started, but now that the reaction is self-sustaining what
drives it is the people.
Many observers have noticed that one of the most distinctive things about
startup hubs is the degree to which people help one another out, with no
expectation of getting anything in return. I'm not sure why this is so.
Perhaps it's because startups are less of a zero sum game than most types of
business; they are rarely killed by competitors. Or perhaps it's because so
many startup founders have backgrounds in the sciences, where collaboration is
encouraged.
A large part of YC's function is to accelerate that process. We're a sort of
Valley within the Valley, where the density of people working on startups and
their willingness to help one another are both artificially amplified.
**Numbers**
Both components of the antidote—an environment that encourages startups, and
chance meetings with people who help you—are driven by the same underlying
cause: the number of startup people around you. To make a startup hub, you
need a _lot_ of people interested in startups.
There are three reasons. The first, obviously, is that if you don't have
enough density, the chance meetings don't happen. The second is that
different startups need such different things that you need a lot of people
to supply each startup with what it needs most. Sean Parker was exactly what
Facebook needed in 2004. Another startup might have needed a database guy, or
someone with connections in the movie business.
This is one of the reasons we fund such a large number of companies,
incidentally. The bigger the community, the greater the chance it will contain
the person who has that one thing you need most.
The third reason you need a lot of people to make a startup hub is that once
you have enough people interested in the same problem, they start to set the
social norms. And it is a particularly valuable thing when the atmosphere
around you encourages you to do something that would otherwise seem too
ambitious. In most places the atmosphere pulls you back toward the mean.
I flew into the Bay Area a few days ago. I notice this every time I fly over
the Valley: somehow you can sense something is going on. Obviously you can
sense prosperity in how well kept a place looks. But there are different kinds
of prosperity. Silicon Valley doesn't look like Boston, or New York, or LA, or
DC. I tried asking myself what word I'd use to describe the feeling the Valley
radiated, and the word that came to mind was optimism.
---
April 2008
Umair Haque wrote recently that the reason there aren't more Googles is that
most startups get bought before they can change the world.
> Google, despite serious interest from Microsoft and Yahoo—what must have
> seemed like lucrative interest at the time—didn't sell out. Google might
> simply have been nothing but Yahoo's or MSN's search box.
>
> Why isn't it? Because Google had a deeply felt sense of purpose: a
> conviction to change the world for the better.
This has a nice sound to it, but it isn't true. Google's founders were willing
to sell early on. They just wanted more than acquirers were willing to pay.
It was the same with Facebook. They would have sold, but Yahoo blew it by
offering too little.
Tip for acquirers: when a startup turns you down, consider raising your offer,
because there's a good chance the outrageous price they want will later seem a
bargain.
From the evidence I've seen so far, startups that turn down acquisition offers
usually end up doing better. Not always, but usually there's a bigger offer
coming, or perhaps even an IPO.
Of course, the reason startups do better when they turn down acquisition
offers is not necessarily that all such offers undervalue startups. More
likely the reason is that the kind of founders who have the balls to turn down
a big offer also tend to be very successful. That spirit is exactly what you
want in a startup.
While I'm sure Larry and Sergey do want to change the world, at least now, the
reason Google survived to become a big, independent company is the same reason
Facebook has so far remained independent: acquirers underestimated them.
Corporate M&A is a strange business in that respect. They consistently lose
the best deals, because turning down reasonable offers is the most reliable
test you could invent for whether a startup will make it big.
**VCs**
So what's the real reason there aren't more Googles? Curiously enough, it's
the same reason Google and Facebook have remained independent: money guys
undervalue the most innovative startups.
The reason there aren't more Googles is not that investors encourage
innovative startups to sell out, but that they won't even fund them. I've
learned a lot about VCs during the 3 years we've been doing Y Combinator,
because we often have to work quite closely with them. The most surprising
thing I've learned is how conservative they are. VC firms present an image of
boldly encouraging innovation. Only a handful actually do, and even they are
more conservative in reality than you'd guess from reading their sites.
I used to think of VCs as piratical: bold but unscrupulous. On closer
acquaintance they turn out to be more like bureaucrats. They're more
upstanding than I used to think (the good ones, at least), but less bold.
Maybe the VC industry has changed. Maybe they used to be bolder. But I suspect
it's the startup world that has changed, not them. The low cost of starting a
startup means the average good bet is a riskier one, but most existing VC
firms still operate as if they were investing in hardware startups in 1985.
Howard Aiken said "Don't worry about people stealing your ideas. If your ideas
are any good, you'll have to ram them down people's throats." I have a similar
feeling when I'm trying to convince VCs to invest in startups Y Combinator has
funded. They're terrified of really novel ideas, unless the founders are good
enough salesmen to compensate.
But it's the bold ideas that generate the biggest returns. Any really good new
idea will seem bad to most people; otherwise someone would already be doing
it. And yet most VCs are driven by consensus, not just within their firms, but
within the VC community. The biggest factor determining how a VC will feel
about your startup is how other VCs feel about it. I doubt they realize it,
but this algorithm guarantees they'll miss all the very best ideas. The more
people who have to like a new idea, the more outliers you lose.
Whoever the next Google is, they're probably being told right now by VCs to
come back when they have more "traction."
Why are VCs so conservative? It's probably a combination of factors. The large
size of their investments makes them conservative. Plus they're investing
other people's money, which makes them worry they'll get in trouble if they do
something risky and it fails. Plus most of them are money guys rather than
technical guys, so they don't understand what the startups they're investing
in do.
**What's Next**
The exciting thing about market economies is that stupidity equals
opportunity. And so it is in this case. There is a huge, unexploited
opportunity in startup investing. Y Combinator funds startups at the very
beginning. VCs will fund them once they're already starting to succeed. But
between the two there is a substantial gap.
There are companies that will give $20k to a startup that has nothing more
than the founders, and there are companies that will give $2 million to a
startup that's already taking off, but there aren't enough investors who will
give $200k to a startup that seems very promising but still has some things to
figure out. This territory is occupied mostly by individual angel
investors—people like Andy Bechtolsheim, who gave Google $100k when they
seemed promising but still had some things to figure out. I like angels, but
there just aren't enough of them, and investing is for most of them a part
time job.
And yet as it gets cheaper to start startups, this sparsely occupied territory
is becoming more and more valuable. Nowadays a lot of startups don't want to
raise multi-million dollar series A rounds. They don't need that much money,
and they don't want the hassles that come with it. The median startup coming
out of Y Combinator wants to raise $250-500k. When they go to VC firms they
have to ask for more because they know VCs aren't interested in such small
deals.
VCs are money managers. They're looking for ways to put large sums to work.
But the startup world is evolving away from their current model.
Startups have gotten cheaper. That means they want less money, but also that
there are more of them. So you can still get large returns on large amounts of
money; you just have to spread it more broadly.
I've tried to explain this to VC firms. Instead of making one $2 million
investment, make five $400k investments. Would that mean sitting on too many
boards? Don't sit on their boards. Would that mean too much due diligence? Do
less. If you're investing at a tenth the valuation, you only have to be a
tenth as sure.
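To make that arithmetic concrete, here's a minimal sketch in Python; the
valuations, exit value, and probabilities are hypothetical, chosen only to
make the ratios round.

```python
# A toy expected-value comparison: at a tenth the valuation, each dollar
# buys ten times the equity, so the required confidence drops tenfold.

def expected_return(investment, valuation, exit_value, p_success):
    """Expected payout: ownership fraction times exit value, weighted
    by the probability the startup succeeds."""
    stake = investment / valuation
    return p_success * stake * exit_value

# One $2M investment at a $20M valuation, 30% confident of a $100M exit:
one_big_bet = expected_return(2_000_000, 20_000_000, 100_000_000, 0.30)

# Five $400k investments at $2M valuations, each only 3% likely to succeed:
five_small_bets = 5 * expected_return(400_000, 2_000_000, 100_000_000, 0.03)

print(one_big_bet, five_small_bets)  # 3000000.0 3000000.0 -- identical
```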
It seems obvious. But I've proposed to several VC firms that they set aside
some money and designate one partner to make more, smaller bets, and they
react as if I'd proposed the partners all get nose rings. It's remarkable how
wedded they are to their standard m.o.
But there is a big opportunity here, and one way or the other it's going to
get filled. Either VCs will evolve down into this gap or, more likely, new
investors will appear to fill it. That will be a good thing when it happens,
because these new investors will be compelled by the structure of the
investments they make to be ten times bolder than present day VCs. And that
will get us a lot more Googles. At least, as long as acquirers remain stupid.
---
August 2008
Raising money is the second hardest part of starting a startup. The hardest
part is making something people want: most startups that die, die because they
didn't do that. But the second biggest cause of death is probably the
difficulty of raising money. Fundraising is brutal.
One reason it's so brutal is simply the brutality of markets. People who've
spent most of their lives in schools or big companies may not have been
exposed to that. Professors and bosses usually feel some sense of
responsibility toward you; if you make a valiant effort and fail, they'll cut
you a break. Markets are less forgiving. Customers don't care how hard you
worked, only whether you solved their problems.
Investors evaluate startups the way customers evaluate products, not the way
bosses evaluate employees. If you're making a valiant effort and failing,
maybe they'll invest in your next startup, but not this one.
But raising money from investors is harder than selling to customers, because
there are so few of them. There's nothing like an efficient market. You're
unlikely to have more than 10 who are interested; it's difficult to talk to
more. So the randomness of any one investor's behavior can really affect you.
Problem number 3: investors are very random. All investors, including us, are
by ordinary standards incompetent. We constantly have to make decisions about
things we don't understand, and more often than not we're wrong.
And yet a lot is at stake. The amounts invested by different types of
investors vary from five thousand dollars to fifty million, but the amount
usually seems large for whatever type of investor it is. Investment decisions
are big decisions.
That combination—making big decisions about things they don't understand—tends
to make investors very skittish. VCs are notorious for leading founders on.
Some of the more unscrupulous do it deliberately. But even the most well-
intentioned investors can behave in a way that would seem crazy in everyday
life. One day they're full of enthusiasm and seem ready to write you a check
on the spot; the next they won't return your phone calls. They're not playing
games with you. They just can't make up their minds.
If that weren't bad enough, these wildly fluctuating nodes are all linked
together. Startup investors all know one another, and (though they hate to
admit it) the biggest factor in their opinion of you is the opinion of other
investors. Talk about a recipe for an unstable system. You get the
opposite of the damping that the fear/greed balance usually produces in
markets. No one is interested in a startup that's a "bargain" because everyone
else hates it.
So the inefficient market you get because there are so few players is
exacerbated by the fact that they act less than independently. The result is a
system like some kind of primitive, multi-celled sea creature, where you
irritate one extremity and the whole thing contracts violently.
Y Combinator is working to fix this. We're trying to increase the number of
investors just as we're increasing the number of startups. We hope that as the
number of both increases we'll get something more like an efficient market. As
t approaches infinity, Demo Day approaches an auction.
Unfortunately, t is still very far from infinity. What does a startup do now,
in the imperfect world we currently inhabit? The most important thing is not
to let fundraising get you down. Startups live or die on morale. If you let
the difficulty of raising money destroy your morale, it will become a self-
fulfilling prophecy.
**Bootstrapping (= Consulting)**
Some would-be founders may by now be thinking, why deal with investors at all?
If raising money is so painful, why do it?
One answer to that is obvious: because you need money to live on. It's a fine
idea in principle to finance your startup with its own revenues, but you can't
create instant customers. Whatever you make, you have to sell a certain amount
to break even. It will take time to grow your sales to that point, and it's
hard to predict, till you try, how long it will take.
We could not have bootstrapped Viaweb, for example. We charged quite a lot for
our software—about $140 per user per month—but it was at least a year before
our revenues would have covered even our paltry costs. We didn't have enough
saved to live on for a year.
If you factor out the "bootstrapped" companies that were actually funded by
their founders through savings or a day job, the remainder either (a) got
really lucky, which is hard to do on demand, or (b) began life as consulting
companies and gradually transformed themselves into product companies.
Consulting is the only option you can count on. But consulting is far from
free money. It's not as painful as raising money from investors, perhaps, but
the pain is spread over a longer period. Years, probably. And for many types
of startup, that delay could be fatal. If you're working on something so
unusual that no one else is likely to think of it, you can take your time.
Joshua Schachter gradually built Delicious on the side while working on Wall
Street. He got away with it because no one else realized it was a good idea.
But if you were building something as obviously necessary as online store
software at about the same time as Viaweb, and you were working on it on the
side while spending most of your time on client work, you were not in a good
position.
Bootstrapping sounds great in principle, but this apparently verdant territory
is one from which few startups emerge alive. The mere fact that bootstrapped
startups tend to be famous on that account should set off alarm bells. If it
worked so well, it would be the norm.
Bootstrapping may get easier, because starting a company is getting cheaper.
But I don't think we'll ever reach the point where most startups can do
without outside funding. Technology tends to get dramatically cheaper, but
living expenses don't.
The upshot is, you can choose your pain: either the short, sharp pain of
raising money, or the chronic ache of consulting. For a given total amount of
pain, raising money is the better choice, because new technology is usually
more valuable now than later.
But although for most startups raising money will be the lesser evil, it's
still a pretty big evil—so big that it can easily kill you. Not merely in the
obvious sense that if you fail to raise money you might have to shut the
company down, but because the _process_ of raising money itself can kill you.
To survive it you need a set of techniques mostly orthogonal to the ones used
in convincing investors, just as mountain climbers need to know survival
techniques that are mostly orthogonal to those used in physically getting up
and down mountains.
**1. Have low expectations.**
The reason raising money destroys so many startups' morale is not simply that
it's hard, but that it's so much harder than they expected. What kills you is
the disappointment. And the lower your expectations, the harder it is to be
disappointed.
Startup founders tend to be optimistic. This can work well in technology, at
least some of the time, but it's the wrong way to approach raising money.
Better to assume investors will always let you down. Acquirers too, while
we're at it. At YC one of our secondary mantras is "Deals fall through." No
matter what deal you have going on, assume it will fall through. The
predictive power of this simple rule is amazing.
There will be a tendency, as a deal progresses, to start to believe it will
happen, and then to depend on it happening. You must resist this. Tie yourself
to the mast. This is what kills you. Deals do not have a trajectory like most
other human interactions, where shared plans solidify linearly over time.
Deals often fall through at the last moment. Often the other party doesn't
really think about what they want till the last moment. So you can't use your
everyday intuitions about shared plans as a guide. When it comes to deals, you
have to consciously turn them off and become pathologically cynical.
This is harder to do than it sounds. It's very flattering when eminent
investors seem interested in funding you. It's easy to start to believe that
raising money will be quick and straightforward. But it hardly ever is.
**2. Keep working on your startup.**
It sounds obvious to say that you should keep working on your startup while
raising money. Actually this is hard to do. Most startups don't manage to.
Raising money has a mysterious capacity to suck up all your attention. Even if
you only have one meeting a day with investors, somehow that one meeting will
burn up your whole day. It costs not just the time of the actual meeting, but
the time getting there and back, and the time preparing for it beforehand and
thinking about it afterward.
The best way to survive the distraction of meeting with investors is probably
to partition the company: to pick one founder to deal with investors while the
others keep the company going. This works better when a startup has 3 founders
than 2, and better when the leader of the company is not also the lead
developer. In the best case, the company keeps moving forward at about half
speed.
That's the best case, though. More often than not the company comes to a
standstill while raising money. And that is dangerous for so many reasons.
Raising money always takes longer than you expect. What seems like it's going
to be a 2 week interruption turns into a 4 month interruption. That can be
very demoralizing. And worse still, it can make you less attractive to
investors. They want to invest in companies that are dynamic. A company that
hasn't done anything new in 4 months doesn't seem dynamic, so they start to
lose interest. Investors rarely grasp this, but much of what they're
responding to when they lose interest in a startup is the damage done by their
own indecision.
The solution: put the startup first. Fit meetings with investors into the
spare moments in your development schedule, rather than doing development in
the spare moments between meetings with investors. If you keep the company
moving forward—releasing new features, increasing traffic, doing deals,
getting written about—those investor meetings are more likely to be
productive. Not just because your startup will seem more alive, but also
because it will be better for your own morale, which is one of the main ways
investors judge you.
**3. Be conservative.**
As conditions get worse, the optimal strategy becomes more conservative. When
things go well you can take risks; when things are bad you want to play it
safe.
I advise approaching fundraising as if it were always going badly. The reason
is that between your ability to delude yourself and the wildly unstable nature
of the system you're dealing with, things probably either already are or could
easily become much worse than they seem.
What I tell most startups we fund is that if someone reputable offers you
funding on reasonable terms, take it. There have been startups that ignored
this advice and got away with it—startups that ignored a good offer in the
hope of getting a better one, and actually did. But in the same position I'd
give the same advice again. Who knows how many bullets were in the gun they
were playing Russian roulette with?
Corollary: if an investor seems interested, don't just let them sit. You can't
assume someone interested in investing will stay interested. In fact, you
can't even tell (_they_ can't even tell) if they're really interested till you
try to convert that interest into money. So if you have a hot prospect, either
close them now or write them off. And unless you already have enough funding,
that reduces to: close them now.
Startups don't win by getting great funding rounds, but by making great
products. So finish raising money and get back to work.
**4. Be flexible.**
There are two questions VCs ask that you shouldn't answer: "Who else are you
talking to?" and "How much are you trying to raise?"
VCs don't expect you to answer the first question. They ask it just in case.
They do seem to expect an answer to the second. But I don't think you
should just tell them a number. Not as a way to play games with them, but
because you shouldn't _have_ a fixed amount you need to raise.
The custom of a startup needing a fixed amount of funding is an obsolete one
left over from the days when startups were more expensive. A company that
needed to build a factory or hire 50 people obviously needed to raise a
certain minimum amount. But few technology startups are in that position
today.
We advise startups to tell investors there are several different routes they
could take depending on how much they raised. As little as $50k could pay for
food and rent for the founders for a year. A couple hundred thousand would let
them get office space and hire some smart people they know from school. A
couple million would let them really blow this thing out. The message (and not
just the message, but the fact) should be: we're going to succeed no matter
what. Raising more money just lets us do it faster.
If you're raising an angel round, the size of the round can even change on the
fly. In fact, it's just as well to make the round small initially, then expand
as needed, rather than trying to raise a large round and risk losing the
investors you already have if you can't raise the full amount. You may even
want to do a "rolling close," where the round has no predetermined size, but
instead you sell stock to investors one at a time as they say yes. That helps
break deadlocks, because you can start as soon as the first one is ready to
buy.
**5. Be independent.**
A startup with a couple founders in their early twenties can have expenses so
low that they could be profitable on as little as $2000 per month. That's
negligible as corporate revenues go, but the effect on your morale and your
bargaining position is anything but. At YC we use the phrase "ramen
profitable" to describe the situation where you're making just enough to pay
your living expenses. Once you cross into ramen profitable, everything
changes. You may still need investment to make it big, but you don't need it
this month.
You can't plan when you start a startup how long it will take to become
profitable. But if you find yourself in a position where a little more effort
expended on sales would carry you over the threshold of ramen profitable, do
it.
Investors like it when you're ramen profitable. It shows you've thought about
making money, instead of just working on amusing technical problems; it shows
you have the discipline to keep your expenses low; but above all, it means you
don't need them.
There is nothing investors like more than a startup that seems like it's going
to succeed even without them. Investors like it when they can help a startup,
but they don't like startups that would die without that help.
At YC we spend a lot of time trying to predict how the startups we've funded
will do, because we're trying to learn how to pick winners. We've now watched
the trajectories of so many startups that we're getting better at predicting
them. And when we're talking about startups we think are likely to succeed,
what we find ourselves saying is things like "Oh, those guys can take care of
themselves. They'll be fine." Not "those guys are really smart" or "those guys
are working on a great idea." When we predict good outcomes for startups,
the qualities that come up in the supporting arguments are toughness,
adaptability, determination. Which means to the extent we're correct, those
are the qualities you need to win.
Investors know this, at least unconsciously. The reason they like it when you
don't need them is not simply that they like what they can't have, but because
that quality is what makes founders succeed.
Sam Altman has it. You could parachute him into an island full of cannibals
and come back in 5 years and he'd be the king. If you're Sam Altman, you don't
have to be profitable to convey to investors that you'll succeed with or
without them. (He wasn't, and he did.) Not everyone has Sam's deal-making
ability. I myself don't. But if you don't, you can let the numbers speak for
you.
**6. Don't take rejection personally.**
Getting rejected by investors can make you start to doubt yourself. After all,
they're more experienced than you. If they think your startup is lame, aren't
they probably right?
Maybe, maybe not. The way to handle rejection is with precision. You shouldn't
simply ignore rejection. It might mean something. But you shouldn't
automatically get demoralized either.
To understand what rejection means, you have to understand first of all how
common it is. Statistically, the average VC is a rejection machine. David
Hornik, a partner at August, told me:
> The numbers for me ended up being something like 500 to 800 plans received
> and read, somewhere between 50 and 100 initial 1 hour meetings held, about
> 20 companies that I got interested in, about 5 that I got serious about and
> did a bunch of work, 1 to 2 deals done in a year. So the odds are against
> you. You may be a great entrepreneur, working on interesting stuff, etc. but
> it is still incredibly unlikely that you get funded.
This is less true with angels, but VCs reject practically everyone. The
structure of their business means a partner does at most 2 new investments a
year, no matter how many good startups approach him.
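To get a feel for what those numbers imply, here's a quick back-of-the-
envelope calculation; taking the midpoints of the ranges Hornik quotes is my
own simplification.

```python
# Implied conversion rates from Hornik's funnel, using range midpoints.
plans    = (500 + 800) / 2   # plans received and read per year
meetings = (50 + 100) / 2    # initial one-hour meetings
deals    = (1 + 2) / 2       # deals actually done

print(f"plan -> meeting: {meetings / plans:.1%}")  # about 11.5%
print(f"plan -> funding: {deals / plans:.2%}")     # about 0.23%
```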
In addition to the odds being terrible, the average investor is, as I
mentioned, a pretty bad judge of startups. It's harder to judge startups than
most other things, because great startup ideas tend to seem wrong. A good
startup idea has to be not just good but novel. And to be both good and novel,
an idea probably has to seem bad to most people, or someone would already be
doing it and it wouldn't be novel.
That makes judging startups harder than most other things one judges. You have
to be an intellectual contrarian to be a good startup investor. That's a
problem for VCs, most of whom are not particularly imaginative. VCs are mostly
money guys, not people who make things. Angels are better at appreciating
novel ideas, because most were founders themselves.
So when you get a rejection, use the data that's in it, and not what's not. If
an investor gives you specific reasons for not investing, look at your startup
and ask if they're right. If they're real problems, fix them. But don't just
take their word for it. You're supposed to be the domain expert; you have to
decide.
Though a rejection doesn't necessarily tell you anything about your startup,
it does suggest your pitch could be improved. Figure out what's not working
and change it. Don't just think "investors are stupid." Often they are, but
figure out precisely where you lose them.
Don't let rejections pile up as a depressing, undifferentiated heap. Sort them
and analyze them, and then instead of thinking "no one likes us," you'll know
precisely how big a problem you have, and what to do about it.
**7. Be able to downshift into consulting (if appropriate).**
Consulting, as I mentioned, is a dangerous way to finance a startup. But it's
better than dying. It's a bit like anaerobic respiration: not the optimum
solution for the long term, but it can save you from an immediate threat. If
you're having trouble raising money from investors at all, it could save you
to be able to shift toward consulting.
This works better for some startups than others. It wouldn't have been a
natural fit for, say, Google, but if your company was making software for
building web sites, you could degrade fairly gracefully into consulting by
building sites for clients with it.
So long as you were careful not to get sucked permanently into consulting,
this could even have advantages. You'd understand your users well if you were
using the software for them. Plus as a consulting company you might be able to
get big-name users using your software that you wouldn't have gotten as a
product company.
At Viaweb we were forced to operate like a consulting company initially,
because we were so desperate for users that we'd offer to build merchants'
sites for them if they'd sign up. But we never charged for such work, because
we didn't want them to start treating us like actual consultants, and calling
us every time they wanted something changed on their site. We knew we had to
stay a product company, because only that scales.
**8. Avoid inexperienced investors.**
Though novice investors seem unthreatening they can be the most dangerous
sort, because they're so nervous. Especially in proportion to the amount they
invest. Raising $20,000 from a first-time angel investor can be as much work
as raising $2 million from a VC fund.
Their lawyers are generally inexperienced too. But while the investors can
admit they don't know what they're doing, their lawyers can't. One YC startup
negotiated terms for a tiny round with an angel, only to receive a 70-page
agreement from his lawyer. And since the lawyer could never admit, in front of
his client, that he'd screwed up, he instead had to insist on retaining all
the draconian terms in it, so the deal fell through.
Of course, someone has to take money from novice investors, or there would
never be any experienced ones. But if you do, either (a) drive the process
yourself, including supplying the paperwork, or (b) use them only to fill up a
larger round led by someone else.
**9. Know where you stand.**
The most dangerous thing about investors is their indecisiveness. The worst
case scenario is the long no, the no that comes after months of meetings.
Rejections from investors are like design flaws: inevitable, but much less
costly if you discover them early.
So while you're talking to investors, constantly look for signs of where you
stand. How likely are they to offer you a term sheet? What do they have to be
convinced of first? You shouldn't necessarily always be asking these questions
outright—that could get annoying—but you should always be collecting data
about them.
Investors tend to resist committing except to the extent you push them to.
It's in their interest to collect the maximum amount of information while
making the minimum number of decisions. The best way to force them to act is,
of course, competing investors. But you can also apply some force by focusing
the discussion: by asking what specific questions they need answered to make
up their minds, and then answering them. If you get through several obstacles
and they keep raising new ones, assume that ultimately they're going to flake.
You have to be disciplined when collecting data about investors' intentions.
Otherwise their desire to lead you on will combine with your own desire to be
led on to produce completely inaccurate impressions.
Use the data to weight your strategy. You'll probably be talking to several
investors. Focus on the ones that are most likely to say yes. The value of a
potential investor is a combination of how good it would be if they said yes,
and how likely they are to say it. Put the most weight on the second factor.
Partly because the most important quality in an investor is simply investing.
But also because, as I mentioned, the biggest factor in investors' opinion of
you is other investors' opinion of you. If you're talking to several investors
and you manage to get one over the threshold of saying yes, it will make the
others much more interested. So you're not sacrificing the lukewarm investors
if you focus on the hot ones; convincing the hot investors is the best way to
convince the lukewarm ones.
**Future**
I'm hopeful things won't always be so awkward. I hope that as startups get
cheaper and the number of investors increases, raising money will become, if
not easy, at least straightforward.
In the meantime, the brokenness of the funding process offers a big
opportunity. Most investors have no idea how dangerous they are. They'd be
surprised to hear that raising money from them is something that has to be
treated as a threat to a company's survival. They just think they need a
little more information to make up their minds. They don't get that there are
10 other investors who also want a little more information, and that the
process of talking to them all can bring a startup to a standstill for months.
Because investors don't understand the cost of dealing with them, they don't
realize how much room there is for a potential competitor to undercut them. I
know from my own experience how much faster investors could decide, because
we've brought our own time down to 20 minutes (5 minutes of reading an
application plus a 10 minute interview plus 5 minutes of discussion). If you
were investing more money you'd want to take longer, of course. But if we can
decide in 20 minutes, should it take anyone longer than a couple days?
Opportunities like this don't sit unexploited forever, even in an industry as
conservative as venture capital. So either existing investors will start to
make up their minds faster, or new investors will emerge who do.
In the meantime founders have to treat raising money as a dangerous process.
Fortunately, I can fix the biggest danger right here. The biggest danger is
surprise. It's that startups will underestimate the difficulty of raising
money—that they'll cruise through all the initial steps, but when they turn to
raising money they'll find it surprisingly hard, get demoralized, and give up.
So I'm telling you in advance: raising money is hard.
---
December 2008
_(I originally wrote this at the request of a company producing a report
about entrepreneurship. Unfortunately after reading it they decided it was too
controversial to include.)_
VC funding will probably dry up somewhat during the present recession, like it
usually does in bad times. But this time the result may be different. This
time the number of new startups may not decrease. And that could be dangerous
for VCs.
When VC funding dried up after the Internet Bubble, startups dried up too.
There were not a lot of new startups being founded in 2003. But startups
aren't tied to VC the way they were 10 years ago. It's now possible for VCs
and startups to diverge. And if they do, they may not reconverge once the
economy gets better.
The reason startups no longer depend so much on VCs is one that everyone in
the startup business knows by now: it has gotten much cheaper to start a
startup. There are four main reasons: Moore's law has made hardware cheap;
open source has made software free; the web has made marketing and
distribution free; and more powerful programming languages mean development
teams can be smaller. These changes have pushed the cost of starting a startup
down into the noise. In a lot of startups—probably most startups funded by Y
Combinator—the biggest expense is simply the founders' living expenses. We've
had startups that were profitable on revenues of $3000 a month.
$3000 is insignificant as revenues go. Why should anyone care about a startup
making $3000 a month? Because, although insignificant as _revenue_ , this
amount of money can change a startup's _funding_ situation completely.
Someone running a startup is always calculating in the back of their mind how
much "runway" they have—how long they have till the money in the bank runs out
and they either have to be profitable, raise more money, or go out of
business. Once you cross the threshold of profitability, however low, your
runway becomes infinite. It's a qualitative change, like the stars turning
into lines and disappearing when the Enterprise accelerates to warp speed.
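The runway calculation itself is simple enough to write down; a minimal
sketch, with hypothetical numbers, showing the qualitative change at the
threshold:

```python
import math

def runway_months(cash, monthly_expenses, monthly_revenue):
    """Months until the bank account hits zero. Once revenue covers
    expenses, net burn is zero or negative and runway is infinite."""
    burn = monthly_expenses - monthly_revenue
    return math.inf if burn <= 0 else cash / burn

print(runway_months(50_000, 8_000, 2_000))  # 8.33... months left
print(runway_months(50_000, 3_000, 3_000))  # inf -- profitable, however barely
```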
Once you're profitable you don't need investors' money. And because Internet
startups have become so cheap to run, the threshold of profitability can be
trivially low. Which means many Internet startups don't need VC-scale
investments anymore. For many startups, VC funding has, in the language of
VCs, gone from a must-have to a nice-to-have.
This change happened while no one was looking, and its effects have been
largely masked so far. It was during the trough after the Internet Bubble that
it became trivially cheap to start a startup, but few realized it because
startups were so out of fashion. When startups came back into fashion, around
2005, investors were starting to write checks again. And while founders may
not have needed VC money the way they used to, they were willing to take it if
offered—partly because there was a tradition of startups taking VC money, and
partly because startups, like dogs, tend to eat when given the opportunity. As
long as VCs were writing checks, founders were never forced to explore the
limits of how little they needed them. There were a few startups who hit these
limits accidentally because of their unusual circumstances—most famously
37signals, which hit the limit because they crossed into startup land from the
other direction: they started as a consulting firm, so they had revenue before
they had a product.
VCs and founders are like two components that used to be bolted together.
Around 2000 the bolt was removed. Because the components have so far been
subjected to the same forces, they still seem to be joined together, but
really one is just resting on the other. A sharp impact would make them fly
apart. And the present recession could be that impact.
Because of Y Combinator's position at the extreme end of the spectrum, we'd be
the first to see signs of a separation between founders and investors, and we
are in fact seeing it. For example, though the stock market crash does seem to
have made investors more cautious, it doesn't seem to have had any effect on
the number of people who want to start startups. We take applications for
funding every 6 months. Applications for the current funding cycle closed on
October 17, well after the markets tanked, and even so we got a record number,
up 40% from the same cycle a year before.
Maybe things will be different a year from now, if the economy continues to
get worse, but so far there is zero slackening of interest among potential
founders. That's different from the way things felt in 2001. Then there was a
widespread feeling among potential founders that startups were over, and that
one should just go to grad school. That isn't happening this time, and part of
the reason is that even in a bad economy it's not that hard to build something
that makes $3000 a month. If investors stop writing checks, who cares?
We also see signs of a divergence between founders and investors in the
attitudes of existing startups we've funded. I was talking to one recently
that had a round fall through at the last minute over the sort of trifle that
breaks deals when investors feel they have the upper hand—over an uncertainty
about whether the founders had correctly filed their 83(b) forms, if you can
believe that. And yet this startup is obviously going to succeed: their
traffic and revenue graphs look like a jet taking off. So I asked them if they
wanted me to introduce them to more investors. To my surprise, they said
no—that they'd just spent four months dealing with investors, and they were
actually a lot happier now that they didn't have to. There was a friend they
wanted to hire with the investor money, and now they'd have to postpone that.
But otherwise they felt they had enough in the bank to make it to
profitability. To make sure, they were moving to a cheaper apartment. And in
this economy I bet they got a good deal on it.
I've detected this "investors aren't worth the trouble" vibe from several YC
founders I've talked to recently. At least one startup from the most recent
(summer) cycle may not even raise angel money, let alone VC. Ticketstumbler
made it to profitability on Y Combinator's $15,000 investment and they hope
not to need more. This surprised even us. Although YC is based on the idea of
it being cheap to start a startup, we never anticipated that founders would
grow successful startups on nothing more than YC funding.
If founders decide VCs aren't worth the trouble, that could be bad for VCs.
When the economy bounces back in a few years and they're ready to write checks
again, they may find that founders have moved on.
There is a founder community just as there's a VC community. They all know one
another, and techniques spread rapidly between them. If one tries a new
programming language or a new hosting provider and gets good results, 6 months
later half of them are using it. And the same is true for funding. The current
generation of founders want to raise money from VCs, and Sequoia specifically,
because Larry and Sergey took money from VCs, and Sequoia specifically.
Imagine what it would do to the VC business if the next hot company didn't
take VC at all.
VCs think they're playing a zero sum game. In fact, it's not even that. If you
lose a deal to Benchmark, you lose that deal, but VC as an industry still
wins. If you lose a deal to None, all VCs lose.
This recession may be different from the one after the Internet Bubble. This
time founders may keep starting startups. And if they do, VCs will have to
keep writing checks, or they could become irrelevant.
**Thanks** to Sam Altman, Trevor Blackwell, David Hornik, Jessica Livingston,
Robert Morris, and Fred Wilson for reading drafts of this.
---
March 2008, rev May 2013
_(This essay grew out of something I wrote for myself to figure out what we
do. Even though Y Combinator is now 3 years old, we're still trying to
understand its implications.)_
I was annoyed recently to read a description of Y Combinator that said "Y
Combinator does seed funding for startups." What was especially annoying about
it was that I wrote it. This doesn't really convey what we do. And the reason
it's inaccurate is that, paradoxically, funding very early stage startups is
not mainly about funding.
Saying YC does seed funding for startups is a description in terms of earlier
models. It's like calling a car a horseless carriage.
When you scale animals you can't just keep everything in proportion. For
example, volume grows as the cube of linear dimension, but surface area only
as the square. So as animals get bigger they have trouble radiating heat.
That's why mice and rabbits are furry and elephants and hippos aren't. You
can't make a mouse by scaling down an elephant.
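The square-cube relationship is easy to check numerically. A toy sketch,
using a cube purely for simplicity:

```python
# Volume grows as the cube of linear size, surface area only as the
# square, so the surface-to-volume ratio falls as 1/L: big animals
# have proportionally less skin through which to shed heat.
for size in (1, 2, 4, 8):            # hypothetical linear dimensions
    volume  = size ** 3
    surface = 6 * size ** 2          # a cube of side `size`
    print(size, surface / volume)    # 6.0, 3.0, 1.5, 0.75
```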
YC represents a new, smaller kind of animal—so much smaller that all the rules
are different.
Before us, most companies in the startup funding business were venture capital
funds. VCs generally fund later stage companies than we do. And they supply so
much money that, even though the other things they do may be very valuable,
it's not that inaccurate to regard VCs as sources of money. Good VCs are
"smart money," but they're still money.
All good investors supply a combination of money and help. But these scale
differently, just as volume and surface area do. Late stage investors supply
huge amounts of money and comparatively little help: when a company about to
go public gets a mezzanine round of $50 million, the deal tends to be almost
entirely about money. As you move earlier in the venture funding process, the
ratio of help to money increases, because earlier stage companies have
different needs. Early stage companies need less money because they're smaller
and cheaper to run, but they need more help because life is so precarious for
them. So when VCs do a series A round for, say, $2 million, they generally
expect to offer a significant amount of help along with the money.
Y Combinator occupies the earliest end of the spectrum. We're at least one and
generally two steps before VC funding. (Though some startups go straight from
YC to VC, the most common trajectory is to do an angel round first.) And what
happens at Y Combinator is as different from what happens in a series A round
as a series A round is from a mezzanine financing.
At our end, money is almost a negligible factor. The startup usually consists
of just the founders. Their living expenses are the company's main expense,
and since most founders are under 30, their living expenses are low. But at
this early stage companies need a lot of help. Practically every question is
still unanswered. Some companies we've funded have been working on their
software for a year or more, but others haven't decided what to work on, or
even who the founders should be.
When PR people and journalists recount the histories of startups after they've
become big, they always underestimate how uncertain things were at first.
They're not being deliberately misleading. When you look at a company like
Google, it's hard to imagine they could once have been small and helpless.
Sure, at one point they were just a couple guys in a garage—but even then
their greatness was assured, and all they had to do was roll forward along the
railroad tracks of destiny.
Far from it. A lot of startups with just as promising beginnings end up
failing. Google has such momentum now that it would be hard for anyone to stop
them. But all it would have taken in the beginning would have been for two
Google employees to focus on the wrong things for six months, and the company
could have died.
We know, because we've been there, just how vulnerable startups are in the
earliest phases. Curiously enough, that's why founders tend to get so rich
from them. Reward is always proportionate to risk, and very early stage
startups are insanely risky.
What we really do at Y Combinator is get startups launched straight. One of
many metaphors you could use for YC is a steam catapult on an aircraft
carrier. We get startups airborne. Barely airborne, but enough that they can
accelerate fast.
When you're launching planes they have to be set up properly or you're just
launching projectiles. They have to be pointed straight down the deck; the
wings have to be trimmed properly; the engines have to be at full power; the
pilot has to be ready. These are the kind of problems we deal with. After we
fund startups we work closely with them for three months—so closely in fact
that we insist they move to where we are. And what we do in those three months
is make sure everything is set up for launch. If there are tensions between
cofounders we help sort them out. We get all the paperwork set up properly so
there are no nasty surprises later. If the founders aren't sure what to focus
on first, we try to figure that out. If there is some obstacle right in front
of them, we either try to remove it, or shift the startup sideways. The goal
is to get every distraction out of the way so the founders can use that time
to build (or finish building) something impressive. And then near the end of
the three months we push the button on the steam catapult in the form of Demo
Day, where the current group of startups present to pretty much every investor
in Silicon Valley.
Launching companies isn't identical with launching products. Though we do
spend a lot of time on launch strategies for products, there are some things
that take too long to build for a startup to launch them before raising their
next round of funding. Several of the most promising startups we've funded
haven't launched their products yet, but are definitely launched as companies.
In the earliest stage, startups not only have more questions to answer, but
they tend to be different kinds of questions. In later stage startups the
questions are about deals, or hiring, or organization. In the earliest phase
they tend to be about technology and design. What do you make? That's the
first problem to solve. That's why our motto is "Make something people want."
This is always a good thing for companies to do, but it's even more important
early on, because it sets the bounds for every other question. Who you hire,
how much money you raise, how you market yourself—they all depend on what
you're making.
Because the early problems are so much about technology and design, you
probably need to be hackers to do what we do. While some VCs have technical
backgrounds, I don't know any who still write code. Their expertise is mostly
in business—as it should be, because that's the kind of expertise you need in
the phase between series A and (if you're lucky) IPO.
We're so different from VCs that we're really a different kind of animal. Can
we claim founders are better off as a result of this new type of venture firm?
I'm pretty sure the answer is yes, because YC is an improved version of what
happened to our startup, and our case was not atypical. We started Viaweb with
$10,000 in seed money from our friend Julian. He was a lawyer and arranged all
our paperwork, so we could just code. We spent three months building a version
1, which we then presented to investors to raise more money. Sounds familiar,
doesn't it? But YC improves on that significantly. Julian knew a lot about law
and business, but his advice ended there; he was not a startup guy. So we made
some basic mistakes early on. And when we presented to investors, we presented
to only 2, because that was all we knew. If we'd had our later selves to
encourage and advise us, and Demo Day to present at, we would have been in
much better shape. We probably could have raised money at 3 to 5 times the
valuation we did.
If we take 7% of a company we fund, the founders only have to do 7.5% better
in their next round of funding to end up net ahead. We certainly manage that.
So who is our 7% coming out of? If the founders end up net ahead it's not
coming out of them. So is it coming out of later stage investors? Well, they
do end up paying more. But I think they pay more because the company is
actually more valuable. And later stage investors have no problem with that.
The returns of a VC fund depend on the quality of the companies they invest
in, not how cheaply they can buy stock in them.
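The arithmetic behind that 7.5% figure is worth spelling out. A minimal
sketch; only the 7% stake comes from the text above, the rest is generic
dilution algebra:

```python
# If YC takes 7%, founders keep 93%. To be net ahead, the value of the
# company at the next round must rise by a factor of 1/0.93.
yc_stake      = 0.07
founders_keep = 1 - yc_stake              # 0.93
required_gain = 1 / founders_keep - 1     # how much "better" they must do

print(f"{required_gain:.1%}")             # 7.5%
```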
If what we do is useful, why wasn't anyone doing it before? There are two
answers to that. One is that people were doing it before, just haphazardly on
a smaller scale. Before us, seed funding came primarily from individual angel
investors. Larry and Sergey, for example, got their seed funding from Andy
Bechtolsheim, one of the founders of Sun. And because he was a startup guy he
probably gave them useful advice. But raising money from angel investors is a
hit or miss thing. It's a sideline for most of them, so they only do a handful
of deals a year and they don't spend a lot of time on the startups they invest
in. And they're hard to reach, because they don't want random startups
pestering them with business plans. The Google guys were lucky because they
knew someone who knew Bechtolsheim. It generally takes a personal introduction
with angels.
The other reason no one was doing quite what we do is that till recently it
was a lot more expensive to start a startup. You'll notice we haven't funded
any biotech startups. That's still expensive. But advancing technology has
made web startups so cheap that you really can get a company airborne for
$15,000. If you understand how to operate a steam catapult, at least.
So in effect what's happened is that a new ecological niche has opened up, and
Y Combinator is the new kind of animal that has moved into it. We're not a
replacement for venture capital funds. We occupy a new, adjacent niche. And
conditions in our niche are really quite different. It's not just that the
problems we face are different; the whole structure of the business is
different. VCs are playing a zero-sum game. They're all competing for a slice
of a fixed amount of "deal flow," and that explains a lot of their behavior.
Whereas our m.o. is to create new deal flow, by encouraging hackers who would
have gotten jobs to start their own startups instead. We compete more with
employers than VCs.
It's not surprising something like this would happen. Most fields become more
specialized—more articulated—as they develop, and startups are certainly an
area in which there has been a lot of development over the past couple
decades. The venture business in its present form is only about forty years
old. It stands to reason it would evolve.
And it's natural that the new niche would at first be described, even by its
inhabitants, in terms of the old one. But really Y Combinator is not in the
startup funding business. Really we're more of a small, furry steam catapult.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this.
* * *
March 2012
One of the more surprising things I've noticed while working on Y Combinator
is how frightening the most ambitious startup ideas are. In this essay I'm
going to demonstrate this phenomenon by describing some. Any one of them could
make you a billionaire. That might sound like an attractive prospect, and yet
when I describe these ideas you may notice you find yourself shrinking away
from them.
Don't worry, it's not a sign of weakness. Arguably it's a sign of sanity. The
biggest startup ideas are terrifying. And not just because they'd be a lot of
work. The biggest ideas seem to threaten your identity: you wonder if you'd
have enough ambition to carry them through.
There's a scene in _Being John Malkovich_ where the nerdy hero encounters a
very attractive, sophisticated woman. She says to him:
> Here's the thing: If you ever got me, you wouldn't have a clue what to do
> with me.
That's what these ideas say to us.
This phenomenon is one of the most important things you can understand about
startups. You'd expect big startup ideas to be attractive, but actually
they tend to repel you. And that has a bunch of consequences. It means these
ideas are invisible to most people who try to think of startup ideas, because
their subconscious filters them out. Even the most ambitious people are
probably best off approaching them obliquely.
**1. A New Search Engine**
The best ideas are just on the right side of impossible. I don't know if this
one is possible, but there are signs it might be. Making a new search engine
means competing with Google, and recently I've noticed some cracks in their
fortress.
The point when it became clear to me that Microsoft had lost their way was
when they decided to get into the search business. That was not a natural move
for Microsoft. They did it because they were afraid of Google, and Google was
in the search business. But this meant (a) Google was now setting Microsoft's
agenda, and (b) Microsoft's agenda consisted of stuff they weren't good at.
Microsoft : Google :: Google : Facebook.
That does not by itself mean there's room for a new search engine, but lately
when using Google search I've found myself nostalgic for the old days, when
Google was true to its own slightly aspy self. Google used to give me a page
of the right answers, fast, with no clutter. Now the results seem inspired by
the Scientologist principle that what's true is what's true for you. And the
pages don't have the clean, sparse feel they used to. Google search results
used to look like the output of a Unix utility. Now if I accidentally put the
cursor in the wrong place, anything might happen.
The way to win here is to build the search engine all the hackers use. A
search engine whose users consisted of the top 10,000 hackers and no one else
would be in a very powerful position despite its small size, just as Google
was when it was that search engine. And for the first time in over a decade
the idea of switching seems thinkable to me.
Since anyone capable of starting this company is one of those 10,000 hackers,
the route is at least straightforward: make the search engine you yourself
want. Feel free to make it excessively hackerish. Make it really good for code
search, for example. Would you like search queries to be Turing complete?
Anything that gets you those 10,000 users is ipso facto good.
Don't worry if something you want to do will constrain you in the long term,
because if you don't get that initial core of users, there won't be a long
term. If you can just build something that you and your friends genuinely
prefer to Google, you're already about 10% of the way to an IPO, just as
Facebook was (though they probably didn't realize it) when they got all the
Harvard undergrads.
**2. Replace Email**
Email was not designed to be used the way we use it now. Email is not a
messaging protocol. It's a todo list. Or rather, my inbox is a todo list, and
email is the way things get onto it. But it is a disastrously bad todo list.
I'm open to different types of solutions to this problem, but I suspect that
tweaking the inbox is not enough, and that email has to be replaced with a new
protocol. This new protocol should be a todo list protocol, not a messaging
protocol, although there is a degenerate case where what someone wants you to
do is: read the following text.
As a todo list protocol, the new protocol should give more power to the
recipient than email does. I want there to be more restrictions on what
someone can put on my todo list. And when someone can put something on my todo
list, I want them to tell me more about what they want from me. Do they want
me to do something beyond just reading some text? How important is it? (There
obviously has to be some mechanism to prevent people from saying everything is
important.) When does it have to be done?
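To make that concrete, here is one shape such a protocol's messages might take. This is a hypothetical sketch, not a spec; every field name in it is invented:

```python
# A hypothetical sketch of a message in a todo-list protocol. None of
# these names come from any existing standard; they just make the
# requirements above concrete: what the sender wants, how important it
# is, and when it has to be done.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TodoRequest:
    sender: str
    action: str                   # "read", "reply", "review", ...
    body: str
    importance: int               # senders would get a limited budget,
                                  # so they can't mark everything important
    due: Optional[date] = None    # when it has to be done, if ever

req = TodoRequest(
    sender="alice@example.com",
    action="review",
    body="Please look over the attached draft.",
    importance=2,
    due=date(2012, 4, 1),
)
```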
This is one of those ideas that's like an irresistible force meeting an
immovable object. On one hand, entrenched protocols are impossible to replace.
On the other, it seems unlikely that people in 100 years will still be living
in the same email hell we do now. And if email is going to get replaced
eventually, why not now?
If you do it right, you may be able to avoid the usual chicken and egg problem
new protocols face, because some of the most powerful people in the world will
be among the first to switch to it. They're all at the mercy of email too.
Whatever you build, make it fast. Gmail has become painfully slow. If you
made something no better than Gmail, but fast, that alone would let you start
to pull users away from Gmail.
Gmail is slow because Google can't afford to spend a lot on it. But people
will pay for this. I'd have no problem paying $50 a month. Considering how
much time I spend in email, it's kind of scary to think how much I'd be
justified in paying. At least $1000 a month. If I spend several hours a day
reading and writing email, that would be a cheap way to make my life better.
**3. Replace Universities**
People are all over this idea lately, and I think they're onto something. I'm
reluctant to suggest that an institution that's been around for a millennium
is finished just because of some mistakes they made in the last few decades,
but certainly in the last few decades US universities seem to have been headed
down the wrong path. One could do a lot better for a lot less money.
I don't think universities will disappear. They won't be replaced wholesale.
They'll just lose the de facto monopoly on certain types of learning that they
once had. There will be many different ways to learn different things, and
some may look quite different from universities. Y Combinator itself is
arguably one of them.
Learning is such a big problem that changing the way people do it will have a
wave of secondary effects. For example, the name of the university one went to
is treated by a lot of people (correctly or not) as a credential in its own
right. If learning breaks up into many little pieces, credentialling may
separate from it. There may even need to be replacements for campus social
life (and oddly enough, YC even has aspects of that).
You could replace high schools too, but there you face bureaucratic obstacles
that would slow down a startup. Universities seem the place to start.
**4. Internet Drama**
Hollywood has been slow to embrace the Internet. That was a mistake, because I
think we can now call a winner in the race between delivery mechanisms, and it
is the Internet, not cable.
A lot of the reason is the horribleness of cable clients, also known as TVs.
Our family didn't wait for Apple TV. We hated our last TV so much that a few
months ago we replaced it with an iMac bolted to the wall. It's a little
inconvenient to control it with a wireless mouse, but the overall experience
is much better than the nightmare UI we had to deal with before.
Some of the attention people currently devote to watching movies and TV can be
stolen by things that seem completely unrelated, like social networking apps.
More can be stolen by things that are a little more closely related, like
games. But there will probably always remain some residual demand for
conventional drama, where you sit passively and watch as a plot happens. So
how do you deliver drama via the Internet? Whatever you make will have to be
on a larger scale than YouTube clips. When people sit down to watch a show,
they want to know what they're going to get: either part of a series with
familiar characters, or a single longer "movie" whose basic premise they know
in advance.
There are two ways delivery and payment could play out. Either some company
like Netflix or Apple will be the app store for entertainment, and you'll
reach audiences through them. Or the would-be app stores will be too
overreaching, or too technically inflexible, and companies will arise to
supply payment and streaming a la carte to the producers of drama. If that's
the way things play out, there will also be a need for such infrastructure
companies.
**5. The Next Steve Jobs**
I was talking recently to someone who knew Apple well, and I asked him if the
people now running the company would be able to keep creating new things the
way Apple had under Steve Jobs. His answer was simply "no." I already feared
that would be the answer. I asked more to see how he'd qualify it. But he
didn't qualify it at all. No, there will be no more great new stuff beyond
whatever's currently in the pipeline. Apple's revenues may continue to rise
for a long time, but as Microsoft shows, revenue is a lagging indicator in the
technology business.
So if Apple's not going to make the next iPad, who is? None of the existing
players. None of them are run by product visionaries, and empirically you
can't seem to get those by hiring them. Empirically the way you get a product
visionary as CEO is for him to found the company and not get fired. So the
company that creates the next wave of hardware is probably going to have to be
a startup.
I realize it sounds preposterously ambitious for a startup to try to become as
big as Apple. But no more ambitious than it was for Apple to become as big as
Apple, and they did it. Plus a startup taking on this problem now has an
advantage the original Apple didn't: the example of Apple. Steve Jobs has
shown us what's possible. That helps would-be successors both directly, as
Roger Bannister did, by showing how much better you can do than people did
before, and indirectly, as Augustus did, by lodging the idea in users' minds
that a single person could unroll the future for them.
Now that Steve is gone, there's a vacuum we can all feel. If a new company led
boldly into the future of hardware, users would follow. The CEO of that
company, the "next Steve Jobs," might not measure up to Steve Jobs. But he
wouldn't have to. He'd just have to do a better job than Samsung and HP and
Nokia, and that seems pretty doable.
**6. Bring Back Moore's Law**
The last 10 years have reminded us what Moore's Law actually says. Till about
2002 you could safely misinterpret it as promising that clock speeds would
double every 18 months. Actually what it says is that circuit densities will
double every 18 months. It used to seem pedantic to point that out. Not any
more. Intel can no longer give us faster CPUs, just more of them.
This Moore's Law is not as good as the old one. Moore's Law used to mean that
if your software was slow, all you had to do was wait, and the inexorable
progress of hardware would solve your problems. Now if your software is slow
you have to rewrite it to do more things in parallel, which is a lot more work
than waiting.
It would be great if a startup could give us something of the old Moore's Law
back, by writing software that could make a large number of CPUs look to the
developer like one very fast CPU. There are several ways to approach this
problem. The most ambitious is to try to do it automatically: to write a
compiler that will parallelize our code for us. There's a name for this
compiler, _the sufficiently smart compiler,_ and it is a byword for
impossibility. But is it really impossible? Is there no configuration of the
bits in memory of a present day computer that is this compiler? If you really
think so, you should try to prove it, because that would be an interesting
result. And if it's not impossible but simply very hard, it might be worth
trying to write it. The expected value would be high even if the chance of
succeeding was low.
The reason the expected value is so high is web services. If you could write
software that gave programmers the convenience of the way things were in the
old days, you could offer it to them as a web service. And that would in turn
mean that you got practically all the users.
Imagine there was another processor manufacturer that could still translate
increased circuit densities into increased clock speeds. They'd take most of
Intel's business. And since web services mean that no one sees their
processors anymore, by writing the sufficiently smart compiler you could
create a situation indistinguishable from you being that manufacturer, at
least for the server market.
The least ambitious way of approaching the problem is to start from the other
end, and offer programmers more parallelizable Lego blocks to build programs
out of, like Hadoop and MapReduce. Then the programmer still does much of the
work of optimization.
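As a rough illustration of the Lego-block approach (plain Python standing in for Hadoop's actual API), the programmer writes only the map and reduce steps, and supplying the parallelism is the framework's job:

```python
# A rough sketch of the MapReduce style in plain Python: the programmer
# supplies mapper and reducer; a framework like Hadoop would run the map
# step across many machines in parallel.

from functools import reduce

def mapper(line: str) -> list[tuple[str, int]]:
    return [(word, 1) for word in line.split()]

def reducer(counts: dict, pair: tuple[str, int]) -> dict:
    word, n = pair
    counts[word] = counts.get(word, 0) + n
    return counts

lines = ["the quick brown fox", "the lazy dog", "the fox"]

# Sequential here; parallelizing this map step is the framework's job.
pairs = [p for line in lines for p in mapper(line)]
print(reduce(reducer, pairs, {}))   # {'the': 3, 'quick': 1, ..., 'fox': 2}
```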
There's an intriguing middle ground where you build a semi-automatic
weapon—where there's a human in the loop. You make something that looks to the
user like the sufficiently smart compiler, but inside has people, using highly
developed optimization tools to find and eliminate bottlenecks in users'
programs. These people might be your employees, or you might create a
marketplace for optimization.
An optimization marketplace would be a way to generate the sufficiently smart
compiler piecemeal, because participants would immediately start writing bots.
It would be a curious state of affairs if you could get to the point where
everything could be done by bots, because then you'd have made the
sufficiently smart compiler, but no one person would have a complete copy of
it.
I realize how crazy all this sounds. In fact, what I like about this idea is
all the different ways in which it's wrong. The whole idea of focusing on
optimization is counter to the general trend in software development for the
last several decades. Trying to write the sufficiently smart compiler is by
definition a mistake. And even if it weren't, compilers are the sort of
software that's supposed to be created by open source projects, not companies.
Plus if this works it will deprive all the programmers who take pleasure in
making multithreaded apps of so much amusing complexity. The forum troll I
have by now internalized doesn't even know where to begin in raising
objections to this project. Now that's what I call a startup idea.
**7. Ongoing Diagnosis**
But wait, here's another that could face even greater resistance: ongoing,
automatic medical diagnosis.
One of my tricks for generating startup ideas is to imagine the ways in which
we'll seem backward to future generations. And I'm pretty sure that to people
50 or 100 years in the future, it will seem barbaric that people in our era
waited till they had symptoms to be diagnosed with conditions like heart
disease and cancer.
For example, in 2004 Bill Clinton found he was feeling short of breath.
Doctors discovered that several of his arteries were over 90% blocked and 3
days later he had a quadruple bypass. It seems reasonable to assume Bill
Clinton has the best medical care available. And yet even he had to wait till
his arteries were over 90% blocked to learn that the number was over 90%.
Surely at some point in the future we'll know these numbers the way we now
know something like our weight. Ditto for cancer. It will seem preposterous to
future generations that we wait till patients have physical symptoms to be
diagnosed with cancer. Cancer will show up on some sort of radar screen
immediately.
(Of course, what shows up on the radar screen may be different from what we
think of now as cancer. I wouldn't be surprised if at any given time we have
tens or even hundreds of microcancers going at once, none of which normally
amount to anything.)
A lot of the obstacles to ongoing diagnosis will come from the fact that it's
going against the grain of the medical profession. The way medicine has always
worked is that patients come to doctors with problems, and the doctors figure
out what's wrong. A lot of doctors don't like the idea of going on the medical
equivalent of what lawyers call a "fishing expedition," where you go looking
for problems without knowing what you're looking for. They call the things
that get discovered this way "incidentalomas," and they are something of a
nuisance.
For example, a friend of mine once had her brain scanned as part of a study.
She was horrified when the doctors running the study discovered what appeared
to be a large tumor. After further testing, it turned out to be a harmless
cyst. But it cost her a few days of terror. A lot of doctors worry that if you
start scanning people with no symptoms, you'll get this on a giant scale: a
huge number of false alarms that make patients panic and require expensive and
perhaps even dangerous tests to resolve. But I think that's just an artifact
of current limitations. If people were scanned all the time and we got better
at deciding what was a real problem, my friend would have known about this
cyst her whole life and known it was harmless, just as we do a birthmark.
There is room for a lot of startups here. In addition to the technical
obstacles all startups face, and the bureaucratic obstacles all medical
startups face, they'll be going against thousands of years of medical
tradition. But it will happen, and it will be a great thing—so great that
people in the future will feel as sorry for us as we do for the generations
that lived before anaesthesia and antibiotics.
**Tactics**
Let me conclude with some tactical advice. If you want to take on a problem as
big as the ones I've discussed, don't make a direct frontal attack on it.
Don't say, for example, that you're going to replace email. If you do that you
raise too many expectations. Your employees and investors will constantly be
asking "are we there yet?" and you'll have an army of haters waiting to see
you fail. Just say you're building todo-list software. That sounds harmless.
People can notice you've replaced email when it's a _fait accompli_.
Empirically, the way to do really big things seems to be to start with
deceptively small things. Want to dominate microcomputer software? Start by
writing a Basic interpreter for a machine with a few thousand users. Want to
make the universal web site? Start by building a site for Harvard undergrads
to stalk one another.
Empirically, it's not just for other people that you need to start small. You
need to for your own sake. Neither Bill Gates nor Mark Zuckerberg knew at
first how big their companies were going to get. All they knew was that they
were onto something. Maybe it's a bad idea to have really big ambitions
initially, because the bigger your ambition, the longer it's going to take,
and the further you project into the future, the more likely you'll get it
wrong.
I think the way to use these big ideas is not to try to identify a precise
point in the future and then ask yourself how to get from here to there, like
the popular image of a visionary. You'll be better off if you operate like
Columbus and just head in a general westerly direction. Don't try to construct
the future like a building, because your current blueprint is almost certainly
mistaken. Start with something you know works, and when you expand, expand
westward.
The popular image of the visionary is someone with a clear view of the future,
but empirically it may be better to have a blurry one.
* * *
August 2010
Two years ago I wrote about what I called "a huge, unexploited opportunity in
startup funding:" the growing disconnect between VCs, whose current business
model requires them to invest large amounts, and a large class of startups
that need less than they used to. Increasingly, startups want a couple hundred
thousand dollars, not a couple million.
The opportunity is a lot less unexploited now. Investors have poured into this
territory from both directions. VCs are much more likely to make angel-sized
investments than they were a year ago. And meanwhile the past year has seen a
dramatic increase in a new type of investor: the super-angel, who operates
like an angel, but using other people's money, like a VC.
Though a lot of investors are entering this territory, there is still room for
more. The distribution of investors should mirror the distribution of
startups, which has the usual power law dropoff. So there should be a lot more
people investing tens or hundreds of thousands than millions.
In fact, it may be good for angels that there are more people doing angel-
sized deals, because if angel rounds become more legitimate, then startups may
start to opt for angel rounds even when they could, if they wanted, raise
series A rounds from VCs. One reason startups prefer series A rounds is that
they're more prestigious. But if angel investors become more active and better
known, they'll increasingly be able to compete with VCs in brand.
Of course, prestige isn't the main reason to prefer a series A round. A
startup will probably get more attention from investors in a series A round
than an angel round. So if a startup is choosing between an angel round and an
A round from a good VC fund, I usually advise them to take the A round.
But while series A rounds aren't going away, I think VCs should be more
worried about super-angels than vice versa. Despite their name, the super-
angels are really mini VC funds, and they clearly have existing VCs in their
sights.
They would seem to have history on their side. The pattern here seems the same
one we see when startups and established companies enter a new market. Online
video becomes possible, and YouTube plunges right in, while existing media
companies embrace it only half-willingly, driven more by fear than hope, and
aiming more to protect their turf than to do great things for users. Ditto for
PayPal. This pattern is repeated over and over, and it's usually the invaders
who win. In this case the super-angels are the invaders. Angel rounds are
their whole business, as online video was for YouTube. Whereas VCs who make
angel investments mostly do it as a way to generate deal flow for series A
rounds.
On the other hand, startup investing is a very strange business. Nearly all
the returns are concentrated in a few big winners. If the super-angels merely
fail to invest in (and to some extent produce) the big winners, they'll be out
of business, even if they invest in all the others.
**VCs**
Why don't VCs start doing smaller series A rounds? The sticking point is board
seats. In a traditional series A round, the partner whose deal it is takes a
seat on the startup's board. If we assume the average startup runs for 6 years
and a partner can bear to be on 12 boards at once, then a VC fund can do 2
series A deals per partner per year.
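(To spell out the arithmetic: if a partner can hold at most 12 seats and each seat lasts about 6 years, then seats free up at a rate of

\[ \frac{12 \text{ seats}}{6 \text{ years per seat}} = 2 \text{ seats per year}, \]

so a partner can take on at most 2 new series A deals a year.)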
It has always seemed to me the solution is to take fewer board seats. You
don't have to be on the board to help a startup. Maybe VCs feel they need the
power that comes with board membership to ensure their money isn't wasted. But
have they tested that theory? Unless they've tried not taking board seats and
found their returns are lower, they're not bracketing the problem.
I'm not saying VCs don't help startups. The good ones help them a lot. What
I'm saying is that the kind of help that matters, you may not have to be a
board member to give.
How will this all play out? Some VCs will probably adapt, by doing more,
smaller deals. I wouldn't be surprised if by streamlining their selection
process and taking fewer board seats, VC funds could do 2 to 3 times as many
series A rounds with no loss of quality.
But other VCs will make no more than superficial changes. VCs are
conservative, and the threat to them isn't mortal. The VC funds that don't
adapt won't be violently displaced. They'll edge gradually into a different
business without realizing it. They'll still do what they will call series A
rounds, but these will increasingly be de facto series B rounds.
In such rounds they won't get the 25 to 40% of the company they do now. You
don't give up as much of the company in later rounds unless something is
seriously wrong. Since the VCs who don't adapt will be investing later, their
returns from winners may be smaller. But investing later should also mean they
have fewer losers. So their ratio of risk to return may be the same or even
better. They'll just have become a different, more conservative, type of
investment.
**Angels**
In the big angel rounds that increasingly compete with series A rounds, the
investors won't take as much equity as VCs do now. And VCs who try to compete
with angels by doing more, smaller deals will probably find they have to take
less equity to do it. Which is good news for founders: they'll get to keep
more of the company.
The deal terms of angel rounds will become less restrictive too—not just less
restrictive than series A terms, but less restrictive than angel terms have
traditionally been.
In the future, angel rounds will less often be for specific amounts or have a
lead investor. In the old days, the standard m.o. for startups was to find one
angel to act as the lead investor. You'd negotiate a round size and valuation
with the lead, who'd supply some but not all of the money. Then the startup
and the lead would cooperate to find the rest.
The future of angel rounds looks more like this: instead of a fixed round
size, startups will do a rolling close, where they take money from investors
one at a time till they feel they have enough. And though there's going to
be one investor who gives them the first check, and his or her help in
recruiting other investors will certainly be welcome, this initial investor
will no longer be the lead in the old sense of managing the round. The startup
will now do that themselves.
There will continue to be lead investors in the sense of investors who take
the lead in _advising_ a startup. They may also make the biggest investment.
But they won't always have to be the one terms are negotiated with, or be the
first money in, as they have in the past. Standardized paperwork will do away
with the need to negotiate anything except the valuation, and that will get
easier too.
If multiple investors have to share a valuation, it will be whatever the
startup can get from the first one to write a check, limited by their guess at
whether this will make later investors balk. But there may not have to be just
one valuation. Startups are increasingly raising money on convertible notes,
and convertible notes have not valuations but at most valuation _caps_: caps
on what the effective valuation will be when the debt converts to equity (in a
later round, or upon acquisition if that happens first). That's an important
difference because it means a startup could do multiple notes at once with
different caps. This is now starting to happen, and I predict it will become
more common.
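To see how a cap behaves, here's a worked example with made-up numbers; the mechanics of real notes vary (discounts, interest, and so on), but the cap works roughly like this:

```python
# A minimal sketch, with hypothetical numbers, of how a valuation cap
# behaves when a convertible note converts in a later priced round.

def conversion_price(round_price: float, round_valuation: float,
                     cap: float) -> float:
    """Note holders pay the round's price per share, unless the round
    values the company above the cap, in which case they convert as if
    the company were worth only the cap."""
    if round_valuation <= cap:
        return round_price
    return round_price * cap / round_valuation

# A $100k note with a $5M cap, converting in a round priced at
# $1.00/share on a $10M valuation:
price = conversion_price(1.00, 10_000_000, cap=5_000_000)
print(price, 100_000 / price)   # 0.5 200000.0 -- twice the shares
```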
**Sheep**
The reason things are moving this way is that the old way sucked for startups.
Leads could (and did) use a fixed size round as a legitimate-seeming way of
saying what all founders hate to hear: I'll invest if other people will. Most
investors, unable to judge startups for themselves, rely instead on the
opinions of other investors. If everyone wants in, they want in too; if not,
not. Founders hate this because it's a recipe for deadlock, and delay is the
thing a startup can least afford. Most investors know this m.o. is lame, and
few say openly that they're doing it. But the craftier ones achieve the same
result by offering to lead rounds of fixed size and supplying only part of the
money. If the startup can't raise the rest, the lead is out too. How could
they go ahead with the deal? The startup would be underfunded!
In the future, investors will increasingly be unable to offer investment
subject to contingencies like other people investing. Or rather, investors who
do that will get last place in line. Startups will go to them only to fill up
rounds that are mostly subscribed. And since hot startups tend to have rounds
that are oversubscribed, being last in line means they'll probably miss the
hot deals. Hot deals and successful startups are not identical, but there is a
significant correlation. So investors who won't invest unilaterally will
have lower returns.
Investors will probably find they do better when deprived of this crutch
anyway. Chasing hot deals doesn't make investors choose better; it just makes
them feel better about their choices. I've seen feeding frenzies both form and
fall apart many times, and as far as I can tell they're mostly random. If
investors can no longer rely on their herd instincts, they'll have to think
more about each startup before investing. They may be surprised how well this
works.
Deadlock wasn't the only disadvantage of letting a lead investor manage an
angel round. The investors would not infrequently collude to push down the
valuation. And rounds took too long to close, because however motivated the
lead was to get the round closed, he was not a tenth as motivated as the
startup.
Increasingly, startups are taking charge of their own angel rounds. Only a few
do so far, but I think we can already declare the old way dead, because those
few are the best startups. They're the ones in a position to tell investors
how the round is going to work. And if the startups you want to invest in do
things a certain way, what difference does it make what the others do?
**Traction**
In fact, it may be slightly misleading to say that angel rounds will
increasingly take the place of series A rounds. What's really happening is
that startup-controlled rounds are taking the place of investor-controlled
rounds.
This is an instance of a very important meta-trend, one that Y Combinator
itself has been based on from the beginning: founders are becoming
increasingly powerful relative to investors. So if you want to predict what
the future of venture funding will be like, just ask: how would founders like
it to be? One by one, all the things founders dislike about raising money are
going to get eliminated.
Using that heuristic, I'll predict a couple more things. One is that investors
will increasingly be unable to wait for startups to have "traction" before
they put in significant money. It's hard to predict in advance which startups
will succeed. So most investors prefer, if they can, to wait till the startup
is already succeeding, then jump in quickly with an offer. Startups hate this
as well, partly because it tends to create deadlock, and partly because it
seems kind of slimy. If you're a promising startup but don't yet have
significant growth, all the investors are your friends in words, but few are
in actions. They all say they love you, but they all wait to invest. Then when
you start to see growth, they claim they were your friend all along, and are
aghast at the thought you'd be so disloyal as to leave them out of your round.
If founders become more powerful, they'll be able to make investors give them
more money upfront.
(The worst variant of this behavior is the tranched deal, where the investor
makes a small initial investment, with more to follow if the startup does
well. In effect, this structure gives the investor a free option on the next
round, which they'll only take if it's worse for the startup than they could
get in the open market. Tranched deals are an abuse. They're increasingly
rare, and they're going to get rarer.)
Investors don't like trying to predict which startups will succeed, but
increasingly they'll have to. Though the way that happens won't necessarily be
that the behavior of existing investors will change; it may instead be that
they'll be replaced by other investors with different behavior—that investors
who understand startups well enough to take on the hard problem of predicting
their trajectory will tend to displace suits whose skills lie more in raising
money from LPs.
**Speed**
The other thing founders hate most about fundraising is how long it takes. So
as founders become more powerful, rounds should start to close faster.
Fundraising is still terribly distracting for startups. If you're a founder in
the middle of raising a round, the round is the top idea in your mind, which
means working on the company isn't. If a round takes 2 months to close, which
is reasonably fast by present standards, that means 2 months during which the
company is basically treading water. That's the worst thing a startup could
do.
So if investors want to get the best deals, the way to do it will be to close
faster. Investors don't need weeks to make up their minds anyway. We decide
based on about 10 minutes of reading an application plus 10 minutes of
in-person interview, and we only regret about 10% of our decisions. If we can
decide in 20 minutes, surely the next round of investors can decide in a
couple days.
There are a lot of institutionalized delays in startup funding: the multi-week
mating dance with investors; the distinction between termsheets and deals; the
fact that each series A has enormously elaborate, custom paperwork. Both
founders and investors tend to take these for granted. It's the way things
have always been. But ultimately the reason these delays exist is that they're
to the advantage of investors. More time gives investors more information
about a startup's trajectory, and it also tends to make startups more pliable
in negotiations, since they're usually short of money.
These conventions weren't designed to drag out the funding process, but that's
why they're allowed to persist. Slowness is to the advantage of investors, who
have in the past been the ones with the most power. But there is no need for
rounds to take months or even weeks to close, and once founders realize that,
it's going to stop. Not just in angel rounds, but in series A rounds too. The
future is simple deals with standard terms, done quickly.
One minor abuse that will get corrected in the process is option pools. In a
traditional series A round, before the VCs invest they make the company set
aside a block of stock for future hires—usually between 10 and 30% of the
company. The point is to ensure this dilution is borne by the existing
shareholders. The practice isn't dishonest; founders know what's going on. But
it makes deals unnecessarily complicated. In effect the valuation is 2
numbers. There's no need to keep doing this.
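Here is what the two numbers look like with made-up figures: the headline pre-money valuation, and the lower effective one once the pool comes out of the existing shareholders' stock.

```python
# A minimal sketch, hypothetical numbers: why an option pool set aside
# before the round makes the valuation effectively two numbers.

pre_money  = 8_000_000    # the headline valuation
investment = 2_000_000
post_money = pre_money + investment       # 10,000,000

pool_pct   = 0.20                         # pool required, as a share
                                          # of the post-money company
pool_value = pool_pct * post_money        # 2,000,000, borne entirely
                                          # by the existing shareholders
effective_pre = pre_money - pool_value    # 6,000,000

print(effective_pre)   # the second number: the valuation founders
                       # are really being offered
```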
The final thing founders want is to be able to sell some of their own stock in
later rounds. This won't be a change, because the practice is now quite
common. A lot of investors hated the idea, but the world hasn't exploded as a
result, so it will happen more, and more openly.
**Surprise**
I've talked here about a bunch of changes that will be forced on investors as
founders become more powerful. Now the good news: investors may actually make
more money as a result.
A couple days ago an interviewer asked me if founders having more power would
be better or worse for the world. I was surprised, because I'd never
considered that question. Better or worse, it's happening. But after a
second's reflection, the answer seemed obvious. Founders understand their
companies better than investors, and it has to be better if the people with
more knowledge have more power.
One of the mistakes novice pilots make is overcontrolling the aircraft:
applying corrections too vigorously, so the aircraft oscillates about the
desired configuration instead of approaching it asymptotically. It seems
probable that investors have till now on average been overcontrolling their
portfolio companies. In a lot of startups, the biggest source of stress for
the founders is not competitors but investors. Certainly it was for us at
Viaweb. And this is not a new phenomenon: investors were James Watt's biggest
problem too. If having less power prevents investors from overcontrolling
startups, it should be better not just for founders but for investors too.
Investors may end up with less stock per startup, but startups will probably
do better with founders more in control, and there will almost certainly be
more of them. Investors all compete with one another for deals, but they
aren't one another's main competitor. Our main competitor is employers. And so
far that competitor is crushing us. Only a tiny fraction of people who could
start a startup do. Nearly all customers choose the competing product, a job.
Why? Well, let's look at the product we're offering. An unbiased review would
go something like this:
> Starting a startup gives you more freedom and the opportunity to make a lot
> more money than a job, but it's also hard work and at times very stressful.
Much of the stress comes from dealing with investors. If reforming the
investment process removed that stress, we'd make our product much more
attractive. The kind of people who make good startup founders don't mind
dealing with technical problems—they enjoy technical problems—but they hate
the type of problems investors cause.
Investors have no idea that when they maltreat one startup, they're preventing
10 others from happening, but they are. Indirectly, but they are. So when
investors stop trying to squeeze a little more out of their existing deals,
they'll find they're net ahead, because so many more new deals appear.
One of our axioms at Y Combinator is not to think of deal flow as a zero-sum
game. Our main focus is to encourage more startups to happen, not to win a
larger share of the existing stream. We've found this principle very useful,
and we think as it spreads outward it will help later stage investors as well.
"Make something people want" applies to us too.
* * *
September 2013
Most startups that raise money do it more than once. A typical trajectory
might be (1) to get started with a few tens of thousands from something like Y
Combinator or individual angels, then (2) raise a few hundred thousand to a
few million to build the company, and then (3) once the company is clearly
succeeding, raise one or more later rounds to accelerate growth.
Reality can be messier. Some companies raise money twice in phase 2. Others
skip phase 1 and go straight to phase 2. And at Y Combinator we get an
increasing number of companies that have already raised amounts in the
hundreds of thousands. But the three phase path is at least the one about
which individual startups' paths oscillate.
This essay focuses on phase 2 fundraising. That's the type the startups we
fund are doing on Demo Day, and this essay is the advice we give them.
**Forces**
Fundraising is hard in both senses: hard like lifting a heavy weight, and hard
like solving a puzzle. It's hard like lifting a weight because it's
intrinsically hard to convince people to part with large sums of money. That
problem is irreducible; it should be hard. But much of the other kind of
difficulty can be eliminated. Fundraising only seems a puzzle because it's an
alien world to most founders, and I hope to fix that by supplying a map
through it.
To founders, the behavior of investors is often opaque — partly because their
motivations are obscure, but partly because they deliberately mislead you. And
the misleading ways of investors combine horribly with the wishful thinking of
inexperienced founders. At YC we're always warning founders about this danger,
and investors are probably more circumspect with YC startups than with other
companies they talk to, and even so we witness a constant series of explosions
as these two volatile components combine.
If you're an inexperienced founder, the only way to survive is by imposing
external constraints on yourself. You can't trust your intuitions. I'm going
to give you a set of rules here that will get you through this process if
anything will. At certain moments you'll be tempted to ignore them. So rule
number zero is: these rules exist for a reason. You wouldn't need a rule to
keep you going in one direction if there weren't powerful forces pushing you
in another.
The ultimate source of the forces acting on you is the forces acting on
investors. Investors are pinched between two kinds of fear: fear of investing
in startups that fizzle, and fear of missing out on startups that take off.
The cause of all this fear is the very thing that makes startups such
attractive investments: the successful ones grow very fast. But that fast
growth means investors can't wait around. If you wait till a startup is
obviously a success, it's too late. To get the really high returns, you have
to invest in startups when it's still unclear how they'll do. But that in turn
makes investors nervous they're about to invest in a flop. As indeed they
often are.
What investors would like to do, if they could, is wait. When a startup is
only a few months old, every week that passes gives you significantly more
information about them. But if you wait too long, other investors might take
the deal away from you. And of course the other investors are all subject to
the same forces. So what tends to happen is that they all wait as long as they
can, then when some act the rest have to.
**Don't raise money unless you want it and it wants you.**
Such a high proportion of successful startups raise money that it might seem
fundraising is one of the defining qualities of a startup. Actually it isn't.
Rapid growth is what makes a company a startup. Most companies in a position
to grow rapidly find that (a) taking outside money helps them grow faster, and
(b) their growth potential makes it easy to attract such money. It's so common
for both (a) and (b) to be true of a successful startup that practically all
do raise outside money. But there may be cases where a startup either wouldn't
want to grow faster, or outside money wouldn't help them to, and if you're one
of them, don't raise money.
The other time not to raise money is when you won't be able to. If you try to
raise money before you can convince investors, you'll not only waste your
time, but also burn your reputation with those investors.
**Be in fundraising mode or not.**
One of the things that surprises founders most about fundraising is how
distracting it is. When you start fundraising, everything else grinds to a
halt. The problem is not the time fundraising consumes but that it becomes the
top idea in your mind. A startup can't endure that level of distraction for
long. An early stage startup grows mostly because the founders make it grow,
and if the founders look away, growth usually drops sharply.
Because fundraising is so distracting, a startup should either be in
fundraising mode or not. And when you do decide to raise money, you should
focus your whole attention on it so you can get it done quickly and get back
to work.
You can take money from investors when you're not in fundraising mode. You
just can't expend any attention on it. There are two things that take
attention: convincing investors, and negotiating with them. So when you're not
in fundraising mode, you should take money from investors only if they require
no convincing, and are willing to invest on terms you'll take without
negotiation. For example, if a reputable investor is willing to invest on a
convertible note, using standard paperwork, that is either uncapped or capped
at a good valuation, you can take that without having to think. The terms
will be whatever they turn out to be in your next equity round. And "no
convincing" means just that: zero time spent meeting with investors or
preparing materials for them. If an investor says they're ready to invest, but
they need you to come in for one meeting to meet some of the partners, tell
them no, if you're not in fundraising mode, because that's fundraising.
Tell them politely; tell them you're focusing on the company right now, and
that you'll get back to them when you're fundraising; but do not get sucked
down the slippery slope.
Investors will try to lure you into fundraising when you're not. It's great
for them if they can, because they can thereby get a shot at you before
everyone else. They'll send you emails saying they want to meet to learn more
about you. If you get cold-emailed by an associate at a VC firm, you shouldn't
meet even if you are in fundraising mode. Deals don't happen that way. But
even if you get an email from a partner you should try to delay meeting till
you're in fundraising mode. They may say they just want to meet and chat, but
investors never just want to meet and chat. What if they like you? What if
they start to talk about giving you money? Will you be able to resist having
that conversation? Unless you're experienced enough at fundraising to have a
casual conversation with investors that stays casual, it's safer to tell them
that you'd be happy to later, when you're fundraising, but that right now you
need to focus on the company.
Companies that are successful at raising money in phase 2 sometimes tack on a
few investors after leaving fundraising mode. This is fine; if fundraising
went well, you'll be able to do it without spending time convincing them or
negotiating about terms.
**Get introductions to investors.**
Before you can talk to investors, you have to be introduced to them. If you're
presenting at a Demo Day, you'll be introduced to a whole bunch
simultaneously. But even if you are, you should supplement these with intros
you collect yourself.
Do you have to be introduced? In phase 2, yes. Some investors will let you
email them a business plan, but you can tell from the way their sites are
organized that they don't really want startups to approach them directly.
Intros vary greatly in effectiveness. The best type of intro is from a well-
known investor who has just invested in you. So when you get an investor to
commit, ask them to introduce you to other investors they respect. The
next best type of intro is from a founder of a company they've funded. You can
also get intros from other people in the startup community, like lawyers and
reporters.
There are now sites like AngelList, FundersClub, and WeFunder that can
introduce you to investors. We recommend startups treat them as auxiliary
sources of money. Raise money first from leads you get yourself. Those will on
average be better investors. Plus you'll have an easier time raising money on
these sites once you can say you've already raised some from well-known
investors.
**Hear no till you hear yes.**
Treat investors as saying no till they unequivocally say yes, in the form of a
definite offer with no contingencies.
I mentioned earlier that investors prefer to wait if they can. What's
particularly dangerous for founders is the way they wait. Essentially, they
lead you on. They seem like they're about to invest right up till the moment
they say no. If they even say no. Some of the worse ones never actually do say
no; they just stop replying to your emails. They hope that way to get a free
option on investing. If they decide later that they want to invest — usually
because they've heard you're a hot deal — they can pretend they just got
distracted and then restart the conversation as if they'd been about to.
That's not the worst thing investors will do. Some will use language that
makes it sound as if they're committing, but which doesn't actually commit
them. And wishful-thinking founders are happy to meet them halfway.
Fortunately, the next rule is a tactic for neutralizing this behavior. But to
work it depends on you not being tricked by the no that sounds like yes. It's
so common for founders to be misled/mistaken about this that we designed a
protocol to fix the problem. If you believe an investor has committed, get
them to confirm it. If you and they have different views of reality, whether
the source of the discrepancy is their sketchiness or your wishful thinking,
the prospect of confirming a commitment in writing will flush it out. And till
they confirm, regard them as saying no.
**Do breadth-first search weighted by expected value.**
When you talk to investors your m.o. should be breadth-first search, weighted
by expected value. You should always talk to investors in parallel rather than
serially. You can't afford the time it takes to talk to investors serially,
plus if you only talk to one investor at a time, they don't have the pressure
of other investors to make them act. But you shouldn't pay the same attention
to every investor, because some are more promising prospects than others. The
optimal solution is to talk to all potential investors in parallel, but give
higher priority to the more promising ones.
Expected value = how likely an investor is to say yes, multiplied by how good
it would be if they did. So for example, an eminent investor who would invest
a lot, but will be hard to convince, might have the same expected value as an
obscure angel who won't invest much, but will be easy to convince. Whereas an
obscure angel who will only invest a small amount, and yet needs to meet
multiple times before making up his mind, has very low expected value. Meet
such investors last, if at all.
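The bookkeeping this implies is simple enough to sketch (the names and numbers below are hypothetical): estimate P(yes) and the value of a yes for each investor, and allocate your attention by the product.

```python
# A minimal sketch, with hypothetical numbers, of weighting investors
# by expected value: P(yes) times how good a yes would be.

investors = [
    {"name": "eminent VC",    "p_yes": 0.10, "value": 200_000},
    {"name": "obscure angel", "p_yes": 0.80, "value":  25_000},
    {"name": "slow angel",    "p_yes": 0.30, "value":  10_000},
]

for inv in investors:
    inv["ev"] = inv["p_yes"] * inv["value"]

# Talk to all of them in parallel, but give attention in this order:
for inv in sorted(investors, key=lambda i: i["ev"], reverse=True):
    print(f"{inv['name']}: expected value ${inv['ev']:,.0f}")
# The eminent VC and the obscure angel tie at $20,000;
# the slow angel trails at $3,000 -- meet him last, if at all.
```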
Doing breadth-first search weighted by expected value will save you from
investors who never explicitly say no but merely drift away, because you'll
drift away from them at the same rate. It protects you from investors who
flake in much the same way that a distributed algorithm protects you from
processors that fail. If some investor isn't returning your emails, or wants
to have lots of meetings but isn't progressing toward making you an offer, you
automatically focus less on them. But you have to be disciplined about
assigning probabilities. You can't let how much you want an investor influence
your estimate of how much they want you.
**Know where you stand.**
How do you judge how well you're doing with an investor, when investors
habitually seem more positive than they are? By looking at their actions
rather than their words. Every investor has some track they need to move along
from the first conversation to wiring the money, and you should always know
what that track consists of, where you are on it, and how fast you're moving
forward.
Never leave a meeting with an investor without asking what happens next. What
more do they need in order to decide? Do they need another meeting with you?
To talk about what? And how soon? Do they need to do something internally,
like talk to their partners, or investigate some issue? How long do they
expect it to take? Don't be too pushy, but know where you stand. If investors
are vague or resist answering such questions, assume the worst; investors who
are seriously interested in you will usually be happy to talk about what has
to happen between now and wiring the money, because they're already running
through that in their heads.
If you're experienced at negotiations, you already know how to ask such
questions. If you're not, there's a trick you can use in this situation.
Investors know you're inexperienced at raising money. Inexperience there
doesn't make you unattractive. Being a noob at technology would, if you're
starting a technology startup, but not being a noob at fundraising. Larry and
Sergey were noobs at fundraising. So you can just confess that you're
inexperienced at this and ask how their process works and where you are in it.
**Get the first commitment.**
The biggest factor in most investors' opinions of you is the opinion of other
investors. Once you start getting investors to commit, it becomes increasingly
easy to get more to. But the other side of this coin is that it's often hard
to get the first commitment.
Getting the first substantial offer can be half the total difficulty of
fundraising. What counts as a substantial offer depends on who it's from and
how much it is. Money from friends and family doesn't usually count, no matter
how much. But if you get $50k from a well known VC firm or angel investor,
that will usually be enough to set things rolling.
**Close committed money.**
It's not a deal till the money's in the bank. I often hear inexperienced
founders say things like "We've raised $800,000," only to discover that zero
of it is in the bank so far. Remember the twin fears that torment investors?
The fear of missing out that makes them jump early, and the fear of jumping
onto a turd that results? This is a market where people are exceptionally
prone to buyer's remorse. And it's also one that furnishes them plenty of
excuses to gratify it. The public markets snap startup investing around like a
whip. If the Chinese economy blows up tomorrow, all bets are off. But there
are lots of surprises for individual startups too, and they tend to be
concentrated around fundraising. Tomorrow a big competitor could appear, or
you could get C&Ded, or your cofounder could quit.
Even a day's delay can bring news that causes an investor to change their
mind. So when someone commits, get the money. Knowing where you stand doesn't
end when they say they'll invest. After they say yes, know what the timetable
is for getting the money, and then babysit that process till it happens.
Institutional investors have people in charge of wiring money, but you may
have to hunt angels down in person to collect a check.
Inexperienced investors are the ones most likely to get buyer's remorse.
Established ones have learned to treat saying yes as like diving off a diving
board, and they also have more brand to preserve. But I've heard of cases of
even top-tier VC firms welching on deals.
**Avoid investors who don't "lead."**
Since getting the first offer is most of the difficulty of fundraising, that
should be part of your calculation of expected value when you start. You have
to estimate not just the probability that an investor will say yes, but the
probability that they'd be the _first_ to say yes, and the latter is not
simply a constant fraction of the former. Some investors are known for
deciding quickly, and those are extra valuable early on.
Conversely, an investor who will only invest once other investors have is
worthless initially. And while most investors are influenced by how interested
other investors are in you, there are some who have an explicit policy of only
investing after other investors have. You can recognize this contemptible
subspecies of investor because they often talk about "leads." They say that
they don't lead, or that they'll invest once you have a lead. Sometimes they
even claim to be willing to lead themselves, by which they mean they won't
invest till you get $x from other investors. (It's great if by "lead" they
mean they'll invest unilaterally, and in addition will help you raise more.
What's lame is when they use the term to mean they won't invest unless you can
raise more elsewhere.)
Where does this term "lead" come from? Up till a few years ago, startups
raising money in phase 2 would usually raise equity rounds in which several
investors invested at the same time using the same paperwork. You'd negotiate
the terms with one "lead" investor, and then all the others would sign the
same documents and all the money change hands at the closing.
Series A rounds still work that way, but things now work differently for most
fundraising prior to the series A. Now there are rarely actual rounds before
the A round, or leads for them. Now startups simply raise money from investors
one at a time till they feel they have enough.
Since there are no longer leads, why do investors use that term? Because it's
a more legitimate-sounding way of saying what they really mean. All they
really mean is that their interest in you is a function of other investors'
interest in you. I.e. the spectral signature of all mediocre investors. But
when phrased in terms of leads, it sounds like there is something structural
and therefore legitimate about their behavior.
When an investor tells you "I want to invest in you, but I don't lead,"
translate that in your mind to "No, except yes if you turn out to be a hot
deal." And since that's the default opinion of any investor about any startup,
they've essentially just told you nothing.
When you first start fundraising, the expected value of an investor who won't
"lead" is zero, so talk to such investors last if at all.
**Have multiple plans.**
Many investors will ask how much you're planning to raise. This question makes
founders feel they should be planning to raise a specific amount. But in fact
you shouldn't. It's a mistake to have fixed plans in an undertaking as
unpredictable as fundraising.
So why do investors ask how much you plan to raise? For much the same reasons
a salesperson in a store will ask "How much were you planning to spend?" if
you walk in looking for a gift for a friend. You probably didn't have a
precise amount in mind; you just want to find something good, and if it's
inexpensive, so much the better. The salesperson asks you this not because
you're supposed to have a plan to spend a specific amount, but so they can
show you only things that cost the most you'll pay.
Similarly, when investors ask how much you plan to raise, it's not because
you're supposed to have a plan. It's to see whether you'd be a suitable
recipient for the size of investment they like to make, and also to judge your
ambition, reasonableness, and how far you are along with fundraising.
If you're a wizard at fundraising, you can say "We plan to raise a $7 million
series A round, and we'll be accepting termsheets next Tuesday." I've known a
handful of founders who could pull that off without having VCs laugh in their
faces. But if you're in the inexperienced but earnest majority, the solution
is analogous to the solution I recommend for pitching your startup: do the
right thing and then just tell investors what you're doing.
And the right strategy, in fundraising, is to have multiple plans depending on
how much you can raise. Ideally you should be able to tell investors something
like: we can make it to profitability without raising any more money, but if
we raise a few hundred thousand we can hire one or two smart friends, and if
we raise a couple million, we can hire a whole engineering team, etc.
Different plans match different investors. If you're talking to a VC firm that
only does series A rounds (though there are few of those left), it would be a
waste of time talking about any but your most expensive plan. Whereas if
you're talking to an angel who invests $20k at a time and you haven't raised
any money yet, you probably want to focus on your least expensive plan.
If you're so fortunate as to have to think about the upper limit on what you
should raise, a good rule of thumb is to multiply the number of people you
want to hire times $15k times 18 months. In most startups, nearly all the
costs are a function of the number of people, and $15k per month is the
conventional total cost (including benefits and even office space) per person.
$15k per month is high, so don't actually spend that much. But it's ok to use
a high estimate when fundraising to add a margin for error. If you have
additional expenses, like manufacturing, add in those at the end. Assuming you
have none and you think you might hire 20 people, the most you'd want to raise
is 20 x $15k x 18 = $5.4 million.
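To make the arithmetic concrete, here's that rule of thumb as a minimal Python sketch; the $15k monthly cost and 18-month horizon are the conventions above, and the extra-expenses parameter is just an illustrative placeholder:

```python
def max_raise(num_hires, monthly_cost=15_000, months=18, extra_expenses=0):
    """Rule-of-thumb ceiling on how much to raise. monthly_cost is the
    conventional fully loaded cost per person (salary, benefits, even
    office space); extra_expenses covers non-headcount costs like
    manufacturing, added at the end."""
    return num_hires * monthly_cost * months + extra_expenses

print(max_raise(20))  # 5400000, i.e. the $5.4 million above
```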
**Underestimate how much you want.**
Though you can focus on different plans when talking to different types of
investors, you should on the whole err on the side of underestimating the
amount you hope to raise.
For example, if you'd like to raise $500k, it's better to say initially that
you're trying to raise $250k. Then when you reach $150k you're more than half
done. That sends two useful signals to investors: that you're doing well, and
that they have to decide quickly because you're running out of room. Whereas
if you'd said you were raising $500k, you'd be less than a third done at
$150k. If fundraising stalled there for an appreciable time, you'd start to
read as a failure.
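The signaling here is just a fraction; a throwaway sketch with the numbers above:

```python
def fraction_done(raised, stated_target):
    # How far along your fundraising looks, given the target you stated.
    return raised / stated_target

print(f"{fraction_done(150_000, 250_000):.0%}")  # 60%: reads as momentum
print(f"{fraction_done(150_000, 500_000):.0%}")  # 30%: reads as stalling
```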
Saying initially that you're raising $250k doesn't limit you to raising that
much. When you reach your initial target and you still have investor interest,
you can just decide to raise more. Startups do that all the time. In fact,
most startups that are very successful at fundraising end up raising more than
they originally intended.
I'm not saying you should lie, but that you should lower your expectations
initially. There is almost no downside in starting with a low number. It not
only won't cap the amount you raise, but will on the whole tend to increase
it.
A good metaphor here is angle of attack. If you try to fly at too steep an
angle of attack, you just stall. If you say right out of the gate that you
want to raise a $5 million series A round, unless you're in a very strong
position, you not only won't get that but won't get anything. Better to start
at a low angle of attack, build up speed, and then gradually increase the
angle if you want.
**Be profitable if you can.**
You will be in a much stronger position if your collection of plans includes
one for raising zero dollars — i.e. if you can make it to profitability
without raising any additional money. Ideally you want to be able to say to
investors "We'll succeed no matter what, but raising money will help us do it
faster."
There are many analogies between fundraising and dating, and this is one of
the strongest. No one wants you if you seem desperate. And the best way not to
seem desperate is not to _be_ desperate. That's one reason we urge startups
during YC to keep expenses low and to try to make it to ramen profitability
before Demo Day. Though it sounds slightly paradoxical, if you want to raise
money, the best thing you can do is get yourself to the point where you don't
need to.
There are almost two distinct modes of fundraising: one in which founders who
need money knock on doors seeking it, knowing that otherwise the company will
die or at the very least people will have to be fired, and one in which
founders who don't need money take some to grow faster than they could merely
on their own revenues. To emphasize the distinction I'm going to name them:
type A fundraising is when you don't need money, and type B fundraising is
when you do.
Inexperienced founders read about famous startups doing what was type A
fundraising, and decide they should raise money too, since that seems to be
how startups work. Except when they raise money they don't have a clear path
to profitability and are thus doing type B fundraising. And they are then
surprised how difficult and unpleasant it is.
Of course not all startups can make it to ramen profitability in a few months.
And some that don't still manage to have the upper hand over investors, if
they have some other advantage like extraordinary growth numbers or
exceptionally formidable founders. But as time passes it gets increasingly
difficult to fundraise from a position of strength without being profitable.
**Don't optimize for valuation.**
When you raise money, what should your valuation be? The most important thing
to understand about valuation is that it's not that important.
Founders who raise money at high valuations tend to be unduly proud of it.
Founders are often competitive people, and since valuation is usually the only
visible number attached to a startup, they end up competing to raise money at
the highest valuation. This is stupid, because fundraising is not the test
that matters. The real test is revenue. Fundraising is just a means to that
end. Being proud of how well you did at fundraising is like being proud of
your college grades.
Not only is fundraising not the test that matters, valuation is not even the
thing to optimize about fundraising. The number one thing you want from phase
2 fundraising is to get the money you need, so you can get back to focusing on
the real test, the success of your company. Number two is good investors.
Valuation is at best third.
The empirical evidence shows just how unimportant it is. Dropbox and Airbnb
are the most successful companies we've funded so far, and they raised money
after Y Combinator at pre-money valuations of $4 million and $2.6 million
respectively. Prices are so much higher now that if you can raise money at all
you'll probably raise it at higher valuations than Dropbox and Airbnb. So let
that satisfy your competitiveness. You're doing better than Dropbox and
Airbnb! At a test that doesn't matter.
When you start fundraising, your initial valuation (or valuation cap) will be
set by the deal you make with the first investor who commits. You can increase
the price for later investors, if you get a lot of interest, but by default
the valuation you got from the first investor becomes your asking price.
So if you're raising money from multiple investors, as most companies do in
phase 2, you have to be careful to avoid raising the first from an over-eager
investor at a price you won't be able to sustain. You can of course lower your
price if you need to (in which case you should give the same terms to
investors who invested earlier at a higher price), but you may lose a bunch of
leads in the process of realizing you need to do this.
What you can do if you have eager first investors is raise money from them on
an uncapped convertible note with an MFN clause. This is essentially a way of
saying that the valuation cap of the note will be determined by the next
investors you raise money from.
It will be easier to raise money at a lower valuation. It shouldn't be, but it
is. Since phase 2 prices vary at most 10x and the big successes generate
returns of at least 100x, investors should pick startups entirely based on
their estimate of the probability that the company will be a big success and
hardly at all on price. But although it's a mistake for investors to care
about price, a significant number do. A startup that investors seem to like
but won't invest in at a cap of $x will have an easier time at $x/2.
**Yes/no before valuation.**
Some investors want to know what your valuation is before they even talk to
you about investing. If your valuation has already been set by a prior
investment at a specific valuation or cap, you can tell them that number. But
if it isn't set because you haven't closed anyone yet, and they try to push
you to name a price, resist doing so. If this would be the first investor
you've closed, then this could be the tipping point of fundraising. That means
closing this investor is the first priority, and you need to get the
conversation onto that instead of being dragged sideways into a discussion of
price.
Fortunately there is a way to avoid naming a price in this situation. And it
is not just a negotiating trick; it's how you (both) should be operating. Tell
them that valuation is not the most important thing to you and that you
haven't thought much about it, that you are looking for investors you want to
partner with and who want to partner with you, and that you should talk first
about whether they want to invest at all. Then if they decide they do want to
invest, you can figure out a price. But first things first.
Since valuation isn't that important and getting fundraising rolling is, we
usually tell founders to give the first investor who commits as low a price as
they need to. This is a safe technique so long as you combine it with the next
one.
**Beware "valuation sensitive" investors.**
Occasionally you'll encounter investors who describe themselves as "valuation
sensitive." What this means in practice is that they are compulsive
negotiators who will suck up a lot of your time trying to push your price
down. You should therefore never approach such investors first. While you
shouldn't chase high valuations, you also don't want your valuation to be set
artificially low because the first investor who committed happened to be a
compulsive negotiator. Some such investors have value, but the time to
approach them is near the end of fundraising, when you're in a position to say
"this is the price everyone else has paid; take it or leave it" and not mind
if they leave it. This way, you'll not only get market price, but it will also
take less time.
Ideally you know which investors have a reputation for being "valuation
sensitive" and can postpone dealing with them till last, but occasionally one
you didn't know about will pop up early on. The rule of doing breadth-first
search weighted by expected value already tells you what to do in this case:
slow down your interactions with them.
There are a handful of investors who will try to invest at a lower valuation
even when your price has already been set. Lowering your price is a backup
plan you resort to when you discover you've let the price get set too high to
close all the money you need. So you'd only want to talk to this sort of
investor if you were about to do that anyway. But since investor meetings have
to be arranged at least a few days in advance and you can't predict when
you'll need to resort to lowering your price, this means in practice that you
should approach this type of investor last if at all.
If you're surprised by a lowball offer, treat it as a backup offer and delay
responding to it. When someone makes an offer in good faith, you have a moral
obligation to respond in a reasonable time. But lowballing you is a dick move
that should be met with the corresponding countermove.
**Accept offers greedily.**
I'm a little leery of using the term "greedily" when writing about fundraising
lest non-programmers misunderstand me, but a greedy algorithm is simply one
that doesn't try to look into the future. A greedy algorithm takes the best of
the options in front of it right now. And that is how startups should approach
fundraising in phases 2 and later. Don't try to look into the future because
(a) the future is unpredictable, and indeed in this business you're often
being deliberately misled about it and (b) your first priority in fundraising
should be to get it finished and get back to work anyway.
If someone makes you an acceptable offer, take it. If you have multiple
incompatible offers, take the best. Don't reject an acceptable offer in the
hope of getting a better one in the future.
These simple rules cover a wide variety of cases. If you're raising money from
many investors, roll them up as they say yes. As you start to feel you've
raised enough, the threshold for acceptable will start to get higher.
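Since the term "greedy" is borrowed from programming, it may help to see the rule as code. A toy sketch; every number and the "acceptable" test are invented, since in real life that's a judgment call:

```python
def raise_greedily(offers, is_acceptable):
    """Greedy fundraising as code: consider offers in arrival order
    and take each acceptable one, never holding out for a better
    offer that might or might not come later."""
    accepted = []
    for offer in offers:
        if is_acceptable(offer, accepted):
            accepted.append(offer)  # best option in front of you: take it
    return accepted

# Invented example. Amounts arrive in an unpredictable order, and the
# bar for "acceptable" rises as you raise more: here, a check must be
# at least $10k, or 10% of what you've already raised, whichever is more.
bar = lambda offer, accepted: offer >= max(10_000, 0.1 * sum(accepted))
print(raise_greedily([25_000, 5_000, 100_000, 250_000, 10_000], bar))
# [25000, 100000, 250000]  (the late $10k check no longer clears the bar)
```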
In practice offers exist for stretches of time, not points. So when you get an
acceptable offer that would be incompatible with others (e.g. an offer to
invest most of the money you need), you can tell the other investors you're
talking to that you have an offer good enough to accept, and give them a few
days to make their own. This could lose you some that might have made an offer
if they had more time. But by definition you don't care; the initial offer was
acceptable.
Some investors will try to prevent others from having time to decide by giving
you an "exploding" offer, meaning one that's only valid for a few days. Offers
from the very best investors explode less frequently and less rapidly — Fred
Wilson never gives exploding offers, for example — because they're confident
you'll pick them. But lower-tier investors sometimes give offers with very
short fuses, because they believe no one who had other options would choose
them. A deadline of three working days is acceptable. You shouldn't need more
than that if you've been talking to investors in parallel. But a deadline any
shorter is a sign you're dealing with a sketchy investor. You can usually call
their bluff, and you may need to.
It might seem that instead of accepting offers greedily, your goal should be
to get the best investors as partners. That is certainly a good goal, but in
phase 2 "get the best investors" only rarely conflicts with "accept offers
greedily," because the best investors don't usually take any longer to decide
than the others. The only case where the two strategies give conflicting
advice is when you have to forgo an offer from an acceptable investor to see
if you'll get an offer from a better one. If you talk to investors in parallel
and push back on exploding offers with excessively short deadlines, that will
almost never happen. But if it does, "get the best investors" is in the
average case bad advice. The best investors are also the most selective,
because they get their pick of all the startups. They reject nearly everyone
they talk to, which means in the average case it's a bad trade to exchange a
definite offer from an acceptable investor for a potential offer from a better
one.
(The situation is different in phase 1. You can't apply to all the incubators
in parallel, because some offset their schedules to prevent this. In phase 1,
"accept offers greedily" and "get the best investors" do conflict, so if you
want to apply to multiple incubators, you should do it in such a way that the
ones you want most decide first.)
Sometimes when you're raising money from multiple investors, a series A will
emerge out of those conversations, and these rules even cover what to do in
that case. When an investor starts to talk to you about a series A, keep
taking smaller investments till they actually give you a termsheet. There's no
practical difficulty. If the smaller investments are on convertible notes,
they'll just convert into the series A round. The series A investor won't like
having all these other random investors as bedfellows, but if it bothers them
so much they should get on with giving you a termsheet. Till they do, you
don't know for sure they will, and the greedy algorithm tells you what to do.
**Don't sell more than 25% in phase 2.**
If you do well, you will probably raise a series A round eventually. I say
probably because things are changing with series A rounds. Startups may start
to skip them. But only one company we've funded has so far, so tentatively
assume the path to huge passes through an A round.
Which means you should avoid doing things in earlier rounds that will mess up
raising an A round. For example, if you've sold more than about 40% of your
company total, it starts to get harder to raise an A round, because VCs worry
there will not be enough stock left to keep the founders motivated.
Our rule of thumb is not to sell more than 25% in phase 2, on top of whatever
you sold in phase 1, which should be less than 15%. If you're raising money on
uncapped notes, you'll have to guess what the eventual equity round valuation
might be. Guess conservatively.
(Since the goal of this rule is to avoid messing up the series A, there's
obviously an exception if you end up raising a series A in phase 2, as a
handful of startups do.)
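To check a prospective deal against these limits, the arithmetic is simple. A minimal sketch using plain post-money math with invented numbers; real convertible-note conversion is messier than this:

```python
def fraction_sold(amount_raised, premoney_valuation):
    """Share of the company sold in a round, by plain post-money math.
    (Ignores option pools and the subtleties of note conversion.)"""
    return amount_raised / (premoney_valuation + amount_raised)

# Invented example: $1.5M on a $6M pre-money valuation (or note cap)
# sells 20% of the company, inside the 25% guideline for phase 2.
print(f"{fraction_sold(1_500_000, 6_000_000):.0%}")  # 20%
```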
**Have one person handle fundraising.**
If you have multiple founders, pick one to handle fundraising so the other(s)
can keep working on the company. And since the danger of fundraising is not
the time taken up by the actual meetings but that it becomes the top idea in
your mind, the founder who handles fundraising should make a conscious effort
to insulate the other founder(s) from the details of the process.
(If the founders mistrust one another, this could cause some friction. But if
the founders mistrust one another, you have worse problems to worry about than
how to organize fundraising.)
The founder who handles fundraising should be the CEO, who should in turn be
the most formidable of the founders. Even if the CEO is a programmer and
another founder is a salesperson? Yes. If you happen to be that type of
founding team, you're effectively a single founder when it comes to
fundraising.
It's ok to bring all the founders to meet an investor who will invest a lot,
and who needs this meeting as the final step before deciding. But wait till
that point. Introducing an investor to your cofounder(s) should be like
introducing a girl/boyfriend to your parents — something you do only when
things reach a certain stage of seriousness.
Even if there are still one or more founders focusing on the company during
fundraising, growth will slow. But try to get as much growth as you can,
because fundraising is a segment of time, not a point, and what happens to the
company during that time affects the outcome. If your numbers grow
significantly between two investor meetings, investors will be hot to close,
and if your numbers are flat or down they'll start to get cold feet.
**You'll need an executive summary and (maybe) a deck.**
Traditionally phase 2 fundraising consists of presenting a slide deck in
person to investors. Sequoia describes what such a deck should contain, and
since they're the customer you can take their word for it.
I say "traditionally" because I'm ambivalent about decks, and (though perhaps
this is wishful thinking) they seem to be on the way out. A lot of the most
successful startups we fund never make decks in phase 2. They just talk to
investors and explain what they plan to do. Fundraising usually takes off fast
for the startups that are most successful at it, and they're thus able to
excuse themselves by saying that they haven't had time to make a deck.
You'll also want an executive summary, which should be no more than a page
long and describe in the most matter-of-fact language what you plan to do, why
it's a good idea, and what progress you've made so far. The point of the
summary is to remind the investor (who may have met many startups that day)
what you talked about.
Assume that if you give someone a copy of your deck or executive summary, it
will be passed on to whoever you'd least like to have it. But don't refuse on
that account to give copies to investors you meet. You just have to treat such
leaks as a cost of doing business. In practice it's not that high a cost.
Though founders are rightly indignant when their plans get leaked to
competitors, I can't think of a startup whose outcome has been affected by it.
Sometimes an investor will ask you to send them your deck and/or executive
summary before they decide whether to meet with you. I wouldn't do that. It's
a sign they're not really interested.
**Stop fundraising when it stops working.**
When do you stop fundraising? Ideally when you've raised enough. But what if
you haven't raised as much as you'd like? When do you give up?
It's hard to give general advice about this, because there have been cases of
startups that kept trying to raise money even when it seemed hopeless, and
miraculously succeeded. But what I usually tell founders is to stop
fundraising when you start to get a lot of air in the straw. When you're
drinking through a straw, you can tell when you get to the end of the liquid
because you start to get a lot of air in the straw. When your fundraising
options run out, they usually run out in the same way. Don't keep sucking on
the straw if you're just getting air. It's not going to get better.
**Don't get addicted to fundraising.**
Fundraising is a chore for most founders, but some find it more interesting
than working on their startup. The work at an early stage startup often
consists of unglamorous schleps. Whereas fundraising, when it's going well,
can be quite the opposite. Instead of sitting in your grubby apartment
listening to users complain about bugs in your software, you're being offered
millions of dollars by famous investors over lunch at a nice restaurant.
The danger of fundraising is particularly acute for people who are good at it.
It's always fun to work on something you're good at. If you're one of these
people, beware. Fundraising is not what will make your company successful.
Listening to users complain about bugs in your software is what will make you
successful. And the big danger of getting addicted to fundraising is not
merely that you'll spend too long on it or raise too much money. It's that
you'll start to think of yourself as being already successful, and lose your
taste for the schleps you need to undertake to actually be successful.
Startups can be destroyed by this.
When I see a startup with young founders that is fabulously successful at
fundraising, I mentally decrease my estimate of the probability that they'll
succeed. The press may be writing about them as if they'd been anointed as the
next Google, but I'm thinking "this is going to end badly."
**Don't raise too much.**
Though only a handful of startups have to worry about this, it is possible to
raise too much. The dangers of raising too much are subtle but insidious. One
is that it will set impossibly high expectations. If you raise an excessive
amount of money, it will be at a high valuation, and the danger of raising
money at too high a valuation is that you won't be able to increase it
sufficiently the next time you raise money.
A company's valuation is expected to rise each time it raises money. If not
it's a sign of a company in trouble, which makes you unattractive to
investors. So if you raise money in phase 2 at a post-money valuation of $30
million, the pre-money valuation of your next round, if you want to raise one,
is going to have to be at least $50 million. And you have to be doing really,
really well to raise money at $50 million.
It's very dangerous to let the competitiveness of your current round set the
performance threshold you have to meet to raise your next one, because the two
are only loosely coupled.
But the money itself may be more dangerous than the valuation. The more you
raise, the more you spend, and spending a lot of money can be disastrous for
an early stage startup. Spending a lot makes it harder to become profitable,
and perhaps even worse, it makes you more rigid, because the main way to spend
money is people, and the more people you have, the harder it is to change
directions. So if you do raise a huge amount of money, don't spend it. (You
will find that advice almost impossible to follow, so hot will be the money
burning a hole in your pocket, but I feel obliged at least to try.)
**Be nice.**
Startups raising money occasionally alienate investors by seeming arrogant.
Sometimes because they are arrogant, and sometimes because they're noobs
clumsily attempting to mimic the toughness they've observed in experienced
founders.
It's a mistake to behave arrogantly to investors. While there are certain
situations in which certain investors like certain kinds of arrogance,
investors vary greatly in this respect, and a flick of the whip that will
bring one to heel will make another roar with indignation. The only safe
strategy is never to seem arrogant at all.
That will require some diplomacy if you follow the advice I've given here,
because the advice I've given is essentially how to play hardball back. When
you refuse to meet an investor because you're not in fundraising mode, or slow
down your interactions with an investor who moves too slow, or treat a
contingent offer as the no it actually is and then, by accepting offers
greedily, end up leaving that investor out, you're going to be doing things
investors don't like. So you must cushion the blow with soft words. At YC we
tell startups they can blame us. And now that I've written this, everyone else
can blame me if they want. That plus the inexperience card should work in most
situations: sorry, we think you're great, but PG said startups shouldn't ___,
and since we're new to fundraising, we feel like we have to play it safe.
The danger of behaving arrogantly is greatest when you're doing well. When
everyone wants you, it's hard not to let it go to your head. Especially if
till recently no one wanted you. But restrain yourself. The startup world is a
small place, and startups have lots of ups and downs. This is a domain where
it's more true than usual that pride goeth before a fall.
Be nice when investors reject you as well. The best investors are not wedded
to their initial opinion of you. If they reject you in phase 2 and you end up
doing well, they'll often invest in phase 3. In fact investors who reject you
are some of your warmest leads for future fundraising. Any investor who spent
significant time deciding probably came close to saying yes. Often you have
some internal champion who only needs a little more evidence to convince the
skeptics. So it's wise not merely to be nice to investors who reject you, but
(unless they behaved badly) to treat it as the beginning of a relationship.
**The bar will be higher next time.**
Assume the money you raise in phase 2 will be the last you ever raise. You
must make it to profitability on this money if you can.
Over the past several years, the investment community has evolved from a
strategy of anointing a small number of winners early and then supporting them
for years to a strategy of spraying money at early stage startups and then
ruthlessly culling them at the next stage. This is probably the optimal
strategy for investors. It's too hard to pick winners early on. Better to let
the market do it for you. But it often comes as a surprise to startups how
much harder it is to raise money in phase 3.
When your company is only a couple months old, all it has to be is a promising
experiment that's worth funding to see how it turns out. The next time you
raise money, the experiment has to have worked. You have to be on a trajectory
that leads to going public. And while there are some ideas where the proof
that the experiment worked might consist of e.g. query response times, usually
the proof is profitability. Usually phase 3 fundraising has to be type A
fundraising.
In practice there are two ways startups hose themselves between phases 2 and
3. Some are just too slow to become profitable. They raise enough money to
last for two years. There doesn't seem any particular urgency to be
profitable. So they don't make any effort to make money for a year. But by
that time, not making money has become habitual. When they finally decide to
try, they find they can't.
The other way companies hose themselves is by letting their expenses grow too
fast. Which almost always means hiring too many people. You usually shouldn't
go out and hire 8 people as soon as you raise money at phase 2. Usually you
want to wait till you have growth (and thus usually revenues) to justify them.
A lot of VCs will encourage you to hire aggressively. VCs generally tell you
to spend too much, partly because as money people they err on the side of
solving problems by spending money, and partly because they want you to sell
them more of your company in subsequent rounds. Don't listen to them.
**Don't make things complicated.**
I realize it may seem odd to sum up this huge treatise by saying that my
overall advice is not to make fundraising too complicated, but if you go back
and look at this list you'll see it's basically a simple recipe with a lot of
implications and edge cases. Avoid investors till you decide to raise money,
and then when you do, talk to them all in parallel, prioritized by expected
value, and accept offers greedily. That's fundraising in one sentence. Don't
introduce complicated optimizations, and don't let investors introduce
complications either.
Fundraising is not what will make you successful. It's just a means to an end.
Your primary goal should be to get it over with and get back to what will make
you successful — making things and talking to users — and the path I've
described will for most startups be the surest way to that destination.
Be good, take care of yourselves, and _don't leave the path_.
* * *
December 2019
I've seen the same pattern in many different fields: even though lots of
people have worked hard in the field, only a small fraction of the space of
possibilities has been explored, because they've all worked on similar things.
Even the smartest, most imaginative people are surprisingly conservative when
deciding what to work on. People who would never dream of being fashionable in
any other way get sucked into working on fashionable problems.
If you want to try working on unfashionable problems, one of the best places
to look is in fields that people think have already been fully explored:
essays, Lisp, venture funding (you may notice a pattern here). If you can find
a new approach into a big but apparently played out field, the value of
whatever you discover will be _multiplied_ by its enormous surface area.
The best protection against getting drawn into working on the same things as
everyone else may be to _genuinely love_ what you're doing. Then you'll
continue to work on it even if you make the same mistake as other people and
think that it's too marginal to matter.
* * *
October 2015
Here's a simple trick for getting more people to read what you write: write in
spoken language.
Something comes over most people when they start writing. They write in a
different language than they'd use if they were talking to a friend. The
sentence structure and even the words are different. No one uses "pen" as a
verb in spoken English. You'd feel like an idiot using "pen" instead of
"write" in a conversation with a friend.
The last straw for me was a sentence I read a couple days ago:
> The mercurial Spaniard himself declared: "After Altamira, all is decadence."
It's from Neil Oliver's _A History of Ancient Britain_. I feel bad making an
example of this book, because it's no worse than lots of others. But just
imagine calling Picasso "the mercurial Spaniard" when talking to a friend.
Even one sentence of this would raise eyebrows in conversation. And yet people
write whole books of it.
Ok, so written and spoken language are different. Does that make written
language worse?
If you want people to read and understand what you write, yes. Written
language is more complex, which makes it more work to read. It's also more
formal and distant, which gives the reader's attention permission to drift.
But perhaps worst of all, the complex sentences and fancy words give you, the
writer, the false impression that you're saying more than you actually are.
You don't need complex sentences to express complex ideas. When specialists in
some abstruse topic talk to one another about ideas in their field, they don't
use sentences any more complex than they do when talking about what to have
for lunch. They use different words, certainly. But even those they use no
more than necessary. And in my experience, the harder the subject, the more
informally experts speak. Partly, I think, because they have less to prove,
and partly because the harder the ideas you're talking about, the less you can
afford to let language get in the way.
Informal language is the athletic clothing of ideas.
I'm not saying spoken language always works best. Poetry is as much music as
text, so you can say things you wouldn't say in conversation. And there are a
handful of writers who can get away with using fancy language in prose. And
then of course there are cases where writers don't want to make it easy to
understand what they're saying—in corporate announcements of bad news, for
example, or at the more _bogus_ end of the humanities. But for nearly everyone
else, spoken language is better.
It seems to be hard for most people to write in spoken language. So perhaps
the best solution is to write your first draft the way you usually would, then
afterward look at each sentence and ask "Is this the way I'd say this if I
were talking to a friend?" If it isn't, imagine what you would say, and use
that instead. After a while this filter will start to operate as you write.
When you write something you wouldn't say, you'll hear the clank as it hits
the page.
Before I publish a new essay, I read it out loud and fix everything that
doesn't sound like conversation. I even fix bits that are phonetically
awkward; I don't know if that's necessary, but it doesn't cost much.
This trick may not always be enough. I've seen writing so far removed from
spoken language that it couldn't be fixed sentence by sentence. For cases like
that there's a more drastic solution. After writing the first draft, try
explaining to a friend what you just wrote. Then replace the draft with what
you said to your friend.
People often tell me how much my essays sound like me talking. The fact that
this seems worthy of comment shows how rarely people manage to write in spoken
language. Otherwise everyone's writing would sound like them talking.
If you simply manage to write in spoken language, you'll be ahead of 95% of
writers. And it's so easy to do: just don't let a sentence through unless it's
the way you'd say it to a friend.
**Thanks** to Patrick Collison and Jessica Livingston for reading drafts of
this.
* * *
October 2015
This will come as a surprise to a lot of people, but in some cases it's
possible to detect bias in a selection process without knowing anything about
the applicant pool. Which is exciting because among other things it means
third parties can use this technique to detect bias whether those doing the
selecting want them to or not.
You can use this technique whenever (a) you have at least a random sample of
the applicants that were selected, (b) their subsequent performance is
measured, and (c) the groups of applicants you're comparing have roughly equal
distribution of ability.
How does it work? Think about what it means to be biased. What it means for a
selection process to be biased against applicants of type x is that it's
harder for them to make it through. Which means applicants of type x have to
be better to get selected than applicants not of type x. Which means
applicants of type x who do make it through the selection process will
outperform other successful applicants. And if the performance of all the
successful applicants is measured, you'll know if they do.
Of course, the test you use to measure performance must be a valid one. And in
particular it must not be invalidated by the bias you're trying to measure.
But there are some domains where performance can be measured, and in those
detecting bias is straightforward. Want to know if the selection process was
biased against some type of applicant? Check whether they outperform the
others. This is not just a heuristic for detecting bias. It's what bias means.
For example, many suspect that venture capital firms are biased against female
founders. This would be easy to detect: among their portfolio companies, do
startups with female founders outperform those without? A couple months ago,
one VC firm (almost certainly unintentionally) published a study showing bias
of this type. First Round Capital found that among its portfolio companies,
startups with female founders _outperformed_ those without by 63%.
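Here's a minimal sketch of the check itself in Python. The data and the performance metric are entirely invented, and a real analysis would need a significance test; this just shows the shape of the comparison:

```python
from statistics import mean

def group_performance(selected):
    """Mean subsequent performance of successful applicants, by group.
    If the selection process was biased against a group, its members
    had to clear a higher bar to get in, so the ones who did get in
    should outperform the rest."""
    by_group = {}
    for group, performance in selected:
        by_group.setdefault(group, []).append(performance)
    return {group: mean(scores) for group, scores in by_group.items()}

# Invented portfolio data: (group, return multiple) per startup.
portfolio = [
    ("female_founders", 2.9), ("female_founders", 1.8),
    ("no_female_founders", 1.2), ("no_female_founders", 1.5),
    ("no_female_founders", 0.8),
]
print(group_performance(portfolio))
```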
The reason I began by saying that this technique would come as a surprise to
many people is that we so rarely see analyses of this type. I'm sure it will
come as a surprise to First Round that they performed one. I doubt anyone
there realized that by limiting their sample to their own portfolio, they were
producing a study not of startup trends but of their own biases when selecting
companies.
I predict we'll see this technique used more in the future. The information
needed to conduct such studies is increasingly available. Data about who
applies for things is usually closely guarded by the organizations selecting
them, but nowadays data about who gets selected is often publicly available to
anyone who takes the trouble to aggregate it.
* * *
December 2019
There are two distinct ways to be politically moderate: on purpose and by
accident. Intentional moderates are trimmers, deliberately choosing a position
mid-way between the extremes of right and left. Accidental moderates end up in
the middle, on average, because they make up their own minds about each
question, and the far right and far left are roughly equally wrong.
You can distinguish intentional from accidental moderates by the distribution
of their opinions. If the far left opinion on some matter is 0 and the far
right opinion 100, an intentional moderate's opinion on every question will be
near 50. Whereas an accidental moderate's opinions will be scattered over a
broad range, but will, like those of the intentional moderate, average to
about 50.
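The statistical signature is mean versus spread. A toy simulation, with all parameters invented:

```python
import random
from statistics import mean, pstdev

random.seed(0)
# Positions on 40 questions, scored 0 (far left) to 100 (far right).
intentional = [random.gauss(50, 3) for _ in range(40)]    # always near 50
accidental = [random.uniform(0, 100) for _ in range(40)]  # scattered

for label, opinions in [("intentional", intentional),
                        ("accidental", accidental)]:
    print(label, round(mean(opinions), 1), round(pstdev(opinions), 1))
# Both means come out near 50; only the spread tells the two apart.
```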
Intentional moderates are similar to those on the far left and the far right
in that their opinions are, in a sense, not their own. The defining quality of
an ideologue, whether on the left or the right, is to acquire one's opinions
in bulk. You don't get to pick and choose. Your opinions about taxation can be
predicted from your opinions about sex. And although intentional moderates
might seem to be the opposite of ideologues, their beliefs (though in their
case the word "positions" might be more accurate) are also acquired in bulk.
If the median opinion shifts to the right or left, the intentional moderate
must shift with it. Otherwise they stop being moderate.
Accidental moderates, on the other hand, not only choose their own answers,
but choose their own questions. They may not care at all about questions that
the left and right both think are terribly important. So you can only even
measure the politics of an accidental moderate from the intersection of the
questions they care about and those the left and right care about, and this
can sometimes be vanishingly small.
It is not merely a manipulative rhetorical trick to say "if you're not with
us, you're against us," but often simply false.
Moderates are sometimes derided as cowards, particularly by the extreme left.
But while it may be accurate to call intentional moderates cowards, openly
being an accidental moderate requires the most courage of all, because you get
attacked from both right and left, and you don't have the comfort of being an
orthodox member of a large group to sustain you.
Nearly all the most impressive people I know are accidental moderates. If I
knew a lot of professional athletes, or people in the entertainment business,
that might be different. Being on the far left or far right doesn't affect how
fast you run or how well you sing. But someone who works with ideas has to be
independent-minded to do it well.
Or more precisely, you have to be independent-minded about the ideas you work
with. You could be mindlessly doctrinaire in your politics and still be a good
mathematician. In the 20th century, a lot of very smart people were Marxists:
just no one who was smart about the subjects Marxism involves. But if the
ideas you use in your work intersect with the politics of your time, you have
two choices: be an accidental moderate, or be mediocre.
* * *
October 2014
_(This essay is derived from a guest lecture in Sam Altman'sstartup class at
Stanford. It's intended for college students, but much of it is applicable to
potential founders at other ages.)_
One of the advantages of having kids is that when you have to give advice, you
can ask yourself "what would I tell my own kids?" My kids are little, but I
can imagine what I'd tell them about startups if they were in college, and
that's what I'm going to tell you.
Startups are very counterintuitive. I'm not sure why. Maybe it's just because
knowledge about them hasn't permeated our culture yet. But whatever the
reason, starting a startup is a task where you can't always trust your
instincts.
It's like skiing in that way. When you first try skiing and you want to slow
down, your instinct is to lean back. But if you lean back on skis you fly down
the hill out of control. So part of learning to ski is learning to suppress
that impulse. Eventually you get new habits, but at first it takes a conscious
effort. At first there's a list of things you're trying to remember as you
start down the hill.
Startups are as unnatural as skiing, so there's a similar list for startups.
Here I'm going to give you the first part of it — the things to remember if
you want to prepare yourself to start a startup.
**Counterintuitive**
The first item on it is the fact I already mentioned: that startups are so
weird that if you trust your instincts, you'll make a lot of mistakes. If you
know nothing more than this, you may at least pause before making them.
When I was running Y Combinator I used to joke that our function was to tell
founders things they would ignore. It's really true. Batch after batch, the YC
partners warn founders about mistakes they're about to make, and the founders
ignore them, and then come back a year later and say "I wish we'd listened."
Why do the founders ignore the partners' advice? Well, that's the thing about
counterintuitive ideas: they contradict your intuitions. They seem wrong. So
of course your first impulse is to disregard them. And in fact my joking
description is not merely the curse of Y Combinator but part of its raison
d'etre. If founders' instincts already gave them the right answers, they
wouldn't need us. You only need other people to give you advice that surprises
you. That's why there are a lot of ski instructors and not many running
instructors.
You can, however, trust your instincts about people. And in fact one of the
most common mistakes young founders make is not to do that enough. They get
involved with people who seem impressive, but about whom they feel some
misgivings personally. Later when things blow up they say "I knew there was
something off about him, but I ignored it because he seemed so impressive."
If you're thinking about getting involved with someone — as a cofounder, an
employee, an investor, or an acquirer — and you have misgivings about them,
trust your gut. If someone seems slippery, or bogus, or a jerk, don't ignore
it.
This is one case where it pays to be self-indulgent. Work with people you
genuinely like, and you've known long enough to be sure.
**Expertise**
The second counterintuitive point is that it's not that important to know a
lot about startups. The way to succeed in a startup is not to be an expert on
startups, but to be an expert on your users and the problem you're solving for
them. Mark Zuckerberg didn't succeed because he was an expert on startups. He
succeeded despite being a complete noob at startups, because he understood his
users really well.
If you don't know anything about, say, how to raise an angel round, don't feel
bad on that account. That sort of thing you can learn when you need to, and
forget after you've done it.
In fact, I worry it's not merely unnecessary to learn in great detail about
the mechanics of startups, but possibly somewhat dangerous. If I met an
undergrad who knew all about convertible notes and employee agreements and
(God forbid) class FF stock, I wouldn't think "here is someone who is way
ahead of their peers." It would set off alarms. Because another of the
characteristic mistakes of young founders is to go through the motions of
starting a startup. They make up some plausible-sounding idea, raise money at
a good valuation, rent a cool office, hire a bunch of people. From the outside
that seems like what startups do. But the next step after rent a cool office
and hire a bunch of people is: gradually realize how completely fucked they
are, because while imitating all the outward forms of a startup they have
neglected the one thing that's actually essential: making something people
want.
**Game**
We saw this happen so often that we made up a name for it: playing house.
Eventually I realized why it was happening. The reason young founders go
through the motions of starting a startup is because that's what they've been
trained to do for their whole lives up to that point. Think about what you
have to do to get into college, for example. Extracurricular activities,
check. Even in college classes most of the work is as artificial as running
laps.
I'm not attacking the educational system for being this way. There will always
be a certain amount of fakeness in the work you do when you're being taught
something, and if you measure people's performance it's inevitable that they
will exploit the difference to the point where much of what you're measuring
is artifacts of the fakeness.
I confess I did it myself in college. I found that in a lot of classes there
might only be 20 or 30 ideas that were the right shape to make good exam
questions. The way I studied for exams in these classes was not (except
incidentally) to master the material taught in the class, but to make a list
of potential exam questions and work out the answers in advance. When I walked
into the final, the main thing I'd be feeling was curiosity about which of my
questions would turn up on the exam. It was like a game.
It's not surprising that after being trained for their whole lives to play
such games, young founders' first impulse on starting a startup is to try to
figure out the tricks for winning at this new game. Since fundraising appears
to be the measure of success for startups (another classic noob mistake), they
always want to know what the tricks are for convincing investors. We tell them
the best way to convince investors is to make a startup that's actually doing
well, meaning growing fast, and then simply tell investors so. Then they want
to know what the tricks are for growing fast. And we have to tell them the
best way to do that is simply to make something people want.
So many of the conversations YC partners have with young founders begin with
the founder asking "How do we..." and the partner replying "Just..."
Why do the founders always make things so complicated? The reason, I realized,
is that they're looking for the trick.
So this is the third counterintuitive thing to remember about startups:
starting a startup is where gaming the system stops working. Gaming the system
may continue to work if you go to work for a big company. Depending on how
broken the company is, you can succeed by sucking up to the right people,
giving the impression of productivity, and so on. But that doesn't work
with startups. There is no boss to trick, only users, and all users care about
is whether your product does what they want. Startups are as impersonal as
physics. You have to make something people want, and you prosper only to the
extent you do.
The dangerous thing is, faking does work to some degree on investors. If
you're super good at sounding like you know what you're talking about, you can
fool investors for at least one and perhaps even two rounds of funding. But
it's not in your interest to. The company is ultimately doomed. All you're
doing is wasting your own time riding it down.
So stop looking for the trick. There are tricks in startups, as there are in
any domain, but they are an order of magnitude less important than solving the
real problem. A founder who knows nothing about fundraising but has made
something users love will have an easier time raising money than one who knows
every trick in the book but has a flat usage graph. And more importantly, the
founder who has made something users love is the one who will go on to succeed
after raising the money.
Though in a sense it's bad news in that you're deprived of one of your most
powerful weapons, I think it's exciting that gaming the system stops working
when you start a startup. It's exciting that there even exist parts of the
world where you win by doing good work. Imagine how depressing the world would
be if it were all like school and big companies, where you either have to
spend a lot of time on bullshit things or lose to people who do. I would
have been delighted if I'd realized in college that there were parts of the
real world where gaming the system mattered less than others, and a few where
it hardly mattered at all. But there are, and this variation is one of the
most important things to consider when you're thinking about your future. How
do you win in each type of work, and what would you like to win by doing?
**All-Consuming**
That brings us to our fourth counterintuitive point: startups are all-
consuming. If you start a startup, it will take over your life to a degree you
cannot imagine. And if your startup succeeds, it will take over your life for
a long time: for several years at the very least, maybe for a decade, maybe
for the rest of your working life. So there is a real opportunity cost here.
Larry Page may seem to have an enviable life, but there are aspects of it that
are unenviable. Basically at 25 he started running as fast as he could and it
must seem to him that he hasn't stopped to catch his breath since. Every day
new shit happens in the Google empire that only the CEO can deal with, and he,
as CEO, has to deal with it. If he goes on vacation for even a week, a whole
week's backlog of shit accumulates. And he has to bear this uncomplainingly,
partly because as the company's daddy he can never show fear or weakness, and
partly because billionaires get less than zero sympathy if they talk about
having difficult lives. Which has the strange side effect that the difficulty
of being a successful startup founder is concealed from almost everyone except
those who've done it.
Y Combinator has now funded several companies that can be called big
successes, and in every single case the founders say the same thing. It never
gets any easier. The nature of the problems changes. You're worrying about
construction delays at your London office instead of the broken air
conditioner in your studio apartment. But the total volume of worry never
decreases; if anything it increases.
Starting a successful startup is similar to having kids in that it's like a
button you push that changes your life irrevocably. And while it's truly
wonderful having kids, there are a lot of things that are easier to do before
you have them than after. Many of which will make you a better parent when you
do have kids. And since you can delay pushing the button for a while, most
people in rich countries do.
Yet when it comes to startups, a lot of people seem to think they're supposed
to start them while they're still in college. Are you crazy? And what are the
universities thinking? They go out of their way to ensure their students are
well supplied with contraceptives, and yet they're setting up entrepreneurship
programs and startup incubators left and right.
To be fair, the universities have their hand forced here. A lot of incoming
students are interested in startups. Universities are, at least de facto,
expected to prepare them for their careers. So students who want to start
startups hope universities can teach them about startups. And whether
universities can do this or not, there's some pressure to claim they can, lest
they lose applicants to other universities that do.
Can universities teach students about startups? Yes and no. They can teach
students about startups, but as I explained before, this is not what you need
to know. What you need to learn about are the needs of your own users, and you
can't do that until you actually start the company. So starting a startup
is intrinsically something you can only really learn by doing it. And it's
impossible to do that in college, for the reason I just explained: startups
take over your life. You can't start a startup for real as a student, because
if you start a startup for real you're not a student anymore. You may be
nominally a student for a bit, but you won't even be that for long.
Given this dichotomy, which of the two paths should you take? Be a real
student and not start a startup, or start a real startup and not be a student?
I can answer that one for you. Do not start a startup in college. How to start
a startup is just a subset of a bigger problem you're trying to solve: how to
have a good life. And though starting a startup can be part of a good life for
a lot of ambitious people, age 20 is not the optimal time to do it. Starting a
startup is like a brutally fast depth-first search. Most people should still
be searching breadth-first at 20.
You can do things in your early 20s that you can't do as well before or after,
like plunge deeply into projects on a whim and travel super cheaply with no
sense of a deadline. For unambitious people, this sort of thing is the dreaded
"failure to launch," but for the ambitious ones it can be an incomparably
valuable sort of exploration. If you start a startup at 20 and you're
sufficiently successful, you'll never get to do it.
Mark Zuckerberg will never get to bum around a foreign country. He can do
other things most people can't, like charter jets to fly him to foreign
countries. But success has taken a lot of the serendipity out of his life.
Facebook is running him as much as he's running Facebook. And while it can be
very cool to be in the grip of a project you consider your life's work, there
are advantages to serendipity too, especially early in life. Among other
things it gives you more options to choose your life's work from.
There's not even a tradeoff here. You're not sacrificing anything if you forgo
starting a startup at 20, because you're more likely to succeed if you wait.
In the unlikely case that you're 20 and one of your side projects takes off
like Facebook did, you'll face a choice of running with it or not, and it may
be reasonable to run with it. But the usual way startups take off is for the
founders to make them take off, and it's gratuitously stupid to do that at 20.
**Try**
Should you do it at any age? I realize I've made startups sound pretty hard.
If I haven't, let me try again: starting a startup is really hard. What if
it's too hard? How can you tell if you're up to this challenge?
The answer is the fifth counterintuitive point: you can't tell. Your life so
far may have given you some idea what your prospects might be if you tried to
become a mathematician, or a professional football player. But unless you've
had a very strange life you haven't done much that was like being a startup
founder. Starting a startup will change you a lot. So what you're trying to
estimate is not just what you are, but what you could grow into, and who can
do that?
For the past 9 years it was my job to predict whether people would have what
it took to start successful startups. It was easy to tell how smart they were,
and most people reading this will be over that threshold. The hard part was
predicting how tough and ambitious they would become. There may be no one who
has more experience at trying to predict that, so I can tell you how much an
expert can know about it, and the answer is: not much. I learned to keep a
completely open mind about which of the startups in each batch would turn out
to be the stars.
The founders sometimes think they know. Some arrive feeling sure they will ace
Y Combinator just as they've aced every one of the (few, artificial, easy)
tests they've faced in life so far. Others arrive wondering how they got in,
and hoping YC doesn't discover whatever mistake caused it to accept them. But
there is little correlation between founders' initial attitudes and how well
their companies do.
I've read that the same is true in the military — that the swaggering recruits
are no more likely to turn out to be really tough than the quiet ones. And
probably for the same reason: that the tests involved are so different from
the ones in their previous lives.
If you're absolutely terrified of starting a startup, you probably shouldn't
do it. But if you're merely unsure whether you're up to it, the only way to
find out is to try. Just not now.
**Ideas**
So if you want to start a startup one day, what should you do in college?
There are only two things you need initially: an idea and cofounders. And the
m.o. for getting both is the same. Which leads to our sixth and last
counterintuitive point: that the way to get startup ideas is not to try to
think of startup ideas.
I've written a whole essay on this, so I won't repeat it all here. But the
short version is that if you make a conscious effort to think of startup
ideas, the ideas you come up with will not merely be bad, but bad and
plausible-sounding, meaning you'll waste a lot of time on them before
realizing they're bad.
The way to come up with good startup ideas is to take a step back. Instead of
making a conscious effort to think of startup ideas, turn your mind into the
type that startup ideas form in without any conscious effort. In fact, so
unconsciously that you don't even realize at first that they're startup ideas.
This is not only possible, it's how Apple, Yahoo, Google, and Facebook all got
started. None of these companies were even meant to be companies at first.
They were all just side projects. The best startups almost have to start as
side projects, because great ideas tend to be such outliers that your
conscious mind would reject them as ideas for companies.
Ok, so how do you turn your mind into the type that startup ideas form in
unconsciously? (1) Learn a lot about things that matter, then (2) work on
problems that interest you (3) with people you like and respect. The third
part, incidentally, is how you get cofounders at the same time as the idea.
The first time I wrote that paragraph, instead of "learn a lot about things
that matter," I wrote "become good at some technology." But that prescription,
though sufficient, is too narrow. What was special about Brian Chesky and Joe
Gebbia was not that they were experts in technology. They were good at design,
and perhaps even more importantly, they were good at organizing groups and
making projects happen. So you don't have to work on technology per se, so
long as you work on problems demanding enough to stretch you.
What kind of problems are those? That is very hard to answer in the general
case. History is full of examples of young people who were working on
important problems that no one else at the time thought were important, and in
particular that their parents didn't think were important. On the other hand,
history is even fuller of examples of parents who thought their kids were
wasting their time and who were right. So how do you know when you're working
on real stuff?
I know how _I_ know. Real problems are interesting, and I am self-indulgent in
the sense that I always want to work on interesting things, even if no one
else cares about them (in fact, especially if no one else cares about them),
and find it very hard to make myself work on boring things, even if they're
supposed to be important.
My life is full of case after case where I worked on something just because it
seemed interesting, and it turned out later to be useful in some worldly way.
Y Combinator itself was something I only did because it seemed interesting. So
I seem to have some sort of internal compass that helps me out. But I don't
know what other people have in their heads. Maybe if I think more about this I
can come up with heuristics for recognizing genuinely interesting problems,
but for the moment the best I can offer is the hopelessly question-begging
advice that if you have a taste for genuinely interesting problems, indulging
it energetically is the best way to prepare yourself for a startup. And
indeed, probably also the best way to live.
But although I can't explain in the general case what counts as an interesting
problem, I can tell you about a large subset of them. If you think of
technology as something that's spreading like a sort of fractal stain, every
moving point on the edge represents an interesting problem. So one guaranteed
way to turn your mind into the type that has good startup ideas is to get
yourself to the leading edge of some technology — to cause yourself, as Paul
Buchheit put it, to "live in the future." When you reach that point, ideas
that will seem to other people uncannily prescient will seem obvious to you.
You may not realize they're startup ideas, but you'll know they're something
that ought to exist.
For example, back at Harvard in the mid 90s a fellow grad student of my
friends Robert and Trevor wrote his own voice over IP software. He didn't mean
it to be a startup, and he never tried to turn it into one. He just wanted to
talk to his girlfriend in Taiwan without paying for long distance calls, and
since he was an expert on networks it seemed obvious to him that the way to do
it was to turn the sound into packets and ship it over the Internet. He never did
any more with his software than talk to his girlfriend, but this is exactly
the way the best startups get started.
So strangely enough the optimal thing to do in college if you want to be a
successful startup founder is not some sort of new, vocational version of
college focused on "entrepreneurship." It's the classic version of college as
education for its own sake. If you want to start a startup after college, what
you should do in college is learn powerful things. And if you have genuine
intellectual curiosity, that's what you'll naturally tend to do if you just
follow your own inclinations.
The component of entrepreneurship that really matters is domain expertise. The
way to become Larry Page was to become an expert on search. And the way to
become an expert on search was to be driven by genuine curiosity, not some
ulterior motive.
At its best, starting a startup is merely an ulterior motive for curiosity.
And you'll do it best if you introduce the ulterior motive toward the end of
the process.
So here is the ultimate advice for young would-be startup founders, boiled
down to two words: just learn.
July 2010
What hard liquor, cigarettes, heroin, and crack have in common is that they're
all more concentrated forms of less addictive predecessors. Most if not all
the things we describe as addictive are. And the scary thing is, the process
that created them is accelerating.
We wouldn't want to stop it. It's the same process that cures diseases:
technological progress. Technological progress means making things do more of
what we want. When the thing we want is something we want to want, we consider
technological progress good. If some new technique makes solar cells x% more
efficient, that seems strictly better. When progress concentrates something we
don't want to want — when it transforms opium into heroin — it seems bad. But
it's the same process at work.
No one doubts this process is accelerating, which means increasing numbers of
things we like will be transformed into things we like too much.
As far as I know there's no word for something we like too much. The closest
is the colloquial sense of "addictive." That usage has become increasingly
common during my lifetime. And it's clear why: there are an increasing number
of things we need it for. At the extreme end of the spectrum are crack and
meth. Food has been transformed by a combination of factory farming and
innovations in food processing into something with way more immediate bang for
the buck, and you can see the results in any town in America. Checkers and
solitaire have been replaced by World of Warcraft and FarmVille. TV has become
much more engaging, and even so it can't compete with Facebook.
The world is more addictive than it was 40 years ago. And unless the forms of
technological progress that produced these things are subject to different
laws than technological progress in general, the world will get more addictive
in the next 40 years than it did in the last 40.
The next 40 years will bring us some wonderful things. I don't mean to imply
they're all to be avoided. Alcohol is a dangerous drug, but I'd rather live in
a world with wine than one without. Most people can coexist with alcohol; but
you have to be careful. More things we like will mean more things we have to
be careful about.
Most people won't, unfortunately. Which means that as the world becomes more
addictive, the two senses in which one can live a normal life will be driven
ever further apart. One sense of "normal" is statistically normal: what
everyone else does. The other is the sense we mean when we talk about the
normal operating range of a piece of machinery: what works best.
These two senses are already quite far apart. Already someone trying to live
well would seem eccentrically abstemious in most of the US. That phenomenon is
only going to become more pronounced. You can probably take it as a rule of
thumb from now on that if people don't think you're weird, you're living
badly.
Societies eventually develop antibodies to addictive new things. I've seen
that happen with cigarettes. When cigarettes first appeared, they spread the
way an infectious disease spreads through a previously isolated population.
Smoking rapidly became a (statistically) normal thing. There were ashtrays
everywhere. We had ashtrays in our house when I was a kid, even though neither
of my parents smoked. You had to for guests.
As knowledge spread about the dangers of smoking, customs changed. In the last
20 years, smoking has been transformed from something that seemed totally
normal into a rather seedy habit: from something movie stars did in publicity
shots to something small huddles of addicts do outside the doors of office
buildings. A lot of the change was due to legislation, of course, but the
legislation couldn't have happened if customs hadn't already changed.
It took a while though—on the order of 100 years. And unless the rate at which
social antibodies evolve can increase to match the accelerating rate at which
technological progress throws off new addictions, we'll be increasingly unable
to rely on customs to protect us. Unless we want to be canaries in the
coal mine of each new addiction—the people whose sad example becomes a lesson
to future generations—we'll have to figure out for ourselves what to avoid and
how. It will actually become a reasonable strategy (or a more reasonable
strategy) to suspect everything new.
In fact, even that won't be enough. We'll have to worry not just about new
things, but also about existing things becoming more addictive. That's what
bit me. I've avoided most addictions, but the Internet got me because it
became addictive while I was using it.
Most people I know have problems with Internet addiction. We're all trying to
figure out our own customs for getting free of it. That's why I don't have an
iPhone, for example; the last thing I want is for the Internet to follow me
out into the world. My latest trick is taking long hikes. I used to think
running was a better form of exercise than hiking because it took less time.
Now the slowness of hiking seems an advantage, because the longer I spend on
the trail, the longer I have to think without interruption.
Sounds pretty eccentric, doesn't it? It always will when you're trying to
solve problems where there are no customs yet to guide you. Maybe I can't
plead Occam's razor; maybe I'm simply eccentric. But if I'm right about the
acceleration of addictiveness, then this kind of lonely squirming to avoid it
will increasingly be the fate of anyone who wants to get things done. We'll
increasingly be defined by what we say no to.
January 2012
A year ago I noticed a pattern in the least successful startups we'd funded:
they all seemed hard to talk to. It felt as if there was some kind of wall
between us. I could never quite tell if they understood what I was saying.
This caught my attention because earlier we'd noticed a pattern among the most
successful startups, and it seemed to hinge on a different quality. We found
the startups that did best were the ones with the sort of founders about whom
we'd say "they can take care of themselves." The startups that do best are
fire-and-forget in the sense that all you have to do is give them a lead, and
they'll close it, whatever type of lead it is. When they're raising money, for
example, you can do the initial intros knowing that if you wanted to you could
stop thinking about it at that point. You won't have to babysit the round to
make sure it happens. That type of founder is going to come back with the
money; the only question is how much, and on what terms.
It seemed odd that the outliers at the two ends of the spectrum could be
detected by what appeared to be unrelated tests. You'd expect that if the
founders at one end were distinguished by the presence of quality x, at the
other end they'd be distinguished by lack of x. Was there some kind of inverse
relation between resourcefulness and being hard to talk to?
It turns out there is, and the key to the mystery is the old adage "a word to
the wise is sufficient." Because this phrase is not only overused, but
overused in an indirect way (by prepending the subject to some advice), most
people who've heard it don't know what it means. What it means is that if
someone is wise, all you have to do is say one word to them, and they'll
understand immediately. You don't have to explain in detail; they'll chase
down all the implications.
In much the same way that all you have to do is give the right sort of founder
a one line intro to a VC, and he'll chase down the money. That's the
connection. Understanding all the implications — even the inconvenient
implications — of what someone tells you is a subset of resourcefulness. It's
conversational resourcefulness.
Like real world resourcefulness, conversational resourcefulness often means
doing things you don't want to. Chasing down all the implications of what's
said to you can sometimes lead to uncomfortable conclusions. The best word to
describe the failure to do so is probably "denial," though that seems a bit
too narrow. A better way to describe the situation would be to say that the
unsuccessful founders had the sort of conservatism that comes from weakness.
They traversed idea space as gingerly as a very old person traverses the
physical world.
The unsuccessful founders weren't stupid. Intellectually they were as capable
as the successful founders of following all the implications of what one said
to them. They just weren't eager to.
So being hard to talk to was not what was killing the unsuccessful startups.
It was a sign of an underlying lack of resourcefulness. That's what was
killing them. As well as failing to chase down the implications of what was
said to them, the unsuccessful founders would also fail to chase down funding,
and users, and sources of new ideas. But the most immediate evidence I had
that something was amiss was that I couldn't talk to them.
February 2008
A user on Hacker News recently posted a comment that set me thinking:
> Something about hacker culture that never really set well with me was this:
> the nastiness. ... I just don't understand why people troll like they do.
I've thought a lot over the last couple years about the problem of trolls.
It's an old one, as old as forums, but we're still just learning what the
causes are and how to address them.
There are two senses of the word "troll." In the original sense it meant
someone, usually an outsider, who deliberately stirred up fights in a forum by
saying controversial things. For example, someone who didn't use a certain
programming language might go to a forum for users of that language and make
disparaging remarks about it, then sit back and watch as people rose to the
bait. This sort of trolling was in the nature of a practical joke, like
letting a bat loose in a room full of people.
The definition then spread to people who behaved like assholes in forums,
whether intentionally or not. Now when people talk about trolls they usually
mean this broader sense of the word. Though in a sense this is historically
inaccurate, it is in other ways more accurate, because when someone is being
an asshole it's usually uncertain even in their own mind how much is
deliberate. That is arguably one of the defining qualities of an asshole.
I think trolling in the broader sense has four causes. The most important is
distance. People will say things in anonymous forums that they'd never dare
say to someone's face, just as they'll do things in cars that they'd never do
as pedestrians, like tailgating people, honking at them, or cutting them off.
The second cause is the kind of people you find in forums related to
computers, where trolling tends to be particularly bad. Most of them (myself
included) are more comfortable dealing with abstract ideas than with people.
Hackers can be abrupt even in person. Put them on an anonymous forum, and the
problem gets worse.
The third cause of trolling is incompetence. If you disagree with something,
it's easier to say "you suck" than to figure out and explain exactly what you
disagree with. You're also safe that way from refutation. In this respect
trolling is a lot like graffiti. Graffiti happens at the intersection of
ambition and incompetence: people want to make their mark on the world, but
have no other way to do it than literally making a mark on the world.
The final contributing factor is the culture of the forum. Trolls are like
children (many _are_ children) in that they're capable of a wide range of
behavior depending on what they think will be tolerated. In a place where
rudeness isn't tolerated, most can be polite. But vice versa as well.
There's a sort of Gresham's Law of trolls: trolls are willing to use a forum
with a lot of thoughtful people in it, but thoughtful people aren't willing to
use a forum with a lot of trolls in it. Which means that once trolling takes
hold, it tends to become the dominant culture. That had already happened to
Slashdot and Digg by the time I paid attention to comment threads there, but I
watched it happen to Reddit.
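Here's a toy simulation of that ratchet. It's my own illustration of the dynamic, not a claim about real numbers: the rates are made up, and only the direction of the feedback loop matters.

```python
# Toy model of the Gresham's Law dynamic: trolls will happily use a
# forum full of thoughtful people, but thoughtful people won't use a
# forum full of trolls. Rates here are arbitrary; only the one-way
# direction of the loop is the point.

def simulate(thoughtful=1000.0, trolls=10.0, steps=30):
    for _ in range(steps):
        frac = trolls / (trolls + thoughtful)
        trolls += 0.05 * thoughtful   # trolls drawn in by the audience
        thoughtful *= 1 - frac        # thoughtful users leave as trolls spread
    return trolls / (trolls + thoughtful)

print(round(simulate(), 2))  # the troll fraction ratchets up toward 1
```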
News.YC is, among other things, an experiment to see if this fate can be
avoided. The site's guidelines explicitly ask people not to say things they
wouldn't say face to face. If someone starts being rude, other users will step
in and tell them to stop. And when people seem to be deliberately trolling, we
ban them ruthlessly.
Technical tweaks may also help. On Reddit, votes on your comments don't affect
your karma score, but they do on News.YC. And it does seem to influence people
when they can see their reputation in the eyes of their peers drain away after
making an asshole remark. Often users have second thoughts and delete such
comments.
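To make that tweak concrete, here is a minimal sketch in Python. All the names are hypothetical; this is not News.YC's actual code, just the shape of the mechanism: votes on a comment also move the commenter's karma, so an ill-received remark visibly costs its author reputation.

```python
# Hypothetical sketch of comment votes feeding into karma.

class User:
    def __init__(self, name):
        self.name = name
        self.karma = 1

class Comment:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.score = 1  # starts with the author's implicit upvote

def vote(comment, delta):
    """An upvote (+1) or downvote (-1) on a comment also moves the
    author's karma, unlike systems where only submissions count."""
    comment.score += delta
    comment.author.karma += delta

troll = User("troll")
c = Comment(troll, "you suck")
for _ in range(5):
    vote(c, -1)
print(troll.karma)  # -4: the author watches their reputation drain away
```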
One might worry this would prevent people from expressing controversial ideas,
but empirically that doesn't seem to be what happens. When people say
something substantial that gets modded down, they stubbornly leave it up. What
people delete are wisecracks, because they have less invested in them.
So far the experiment seems to be working. The level of conversation on
News.YC is as high as on any forum I've seen. But we still only have about
8,000 uniques a day. The conversations on Reddit were good when it was that
small. The challenge is whether we can keep things this way.
I'm optimistic we will. We're not depending just on technical tricks. The core
users of News.YC are mostly refugees from other sites that were overrun by
trolls. They feel about trolls roughly the way refugees from Cuba or Eastern
Europe feel about dictatorships. So there are a lot of people working to keep
this from happening again.
December 2008
For nearly all of history the success of a society was proportionate to its
ability to assemble large and disciplined organizations. Those who bet on
economies of scale generally won, which meant the largest organizations were
the most successful ones.
Things have already changed so much that this is hard for us to believe, but
till just a few decades ago the largest organizations tended to be the most
progressive. An ambitious kid graduating from college in 1960 wanted to work
in the huge, gleaming offices of Ford, or General Electric, or NASA. Small
meant small-time. Small in 1960 didn't mean a cool little startup. It meant
Uncle Sid's shoe store.
When I grew up in the 1970s, the idea of the "corporate ladder" was still very
much alive. The standard plan was to try to get into a good college, from
which one would be drafted into some organization and then rise to positions
of gradually increasing responsibility. The more ambitious merely hoped to
climb the same ladder faster.
But in the late twentieth century something changed. It turned out that
economies of scale were not the only force at work. Particularly in
technology, the increase in speed one could get from smaller groups started to
trump the advantages of size.
The future turned out to be different from the one we were expecting in 1970.
The domed cities and flying cars we expected have failed to materialize. But
fortunately so have the jumpsuits with badges indicating our specialty and
rank. Instead of being dominated by a few, giant tree-structured
organizations, it's now looking like the economy of the future will be a fluid
network of smaller, independent units.
It's not so much that large organizations stopped working. There's no evidence
that famously successful organizations like the Roman army or the British East
India Company were any less afflicted by protocol and politics than
organizations of the same size today. But they were competing against
opponents who couldn't change the rules on the fly by discovering new
technology. Now it turns out the rule "large and disciplined organizations
win" needs to have a qualification appended: "at games that change slowly." No
one knew till change reached a sufficient speed.
Large organizations _will_ start to do worse now, though, because for the
first time in history they're no longer getting the best people. An ambitious
kid graduating from college now doesn't want to work for a big company. They
want to work for the hot startup that's rapidly growing into one. If they're
really ambitious, they want to start it.
This doesn't mean big companies will disappear. To say that startups will
succeed implies that big companies will exist, because startups that succeed
either become big companies or are acquired by them. But large
organizations will probably never again play the leading role they did up till
the last quarter of the twentieth century.
It's kind of surprising that a trend that lasted so long would ever run out.
How often does it happen that a rule works for thousands of years, then
switches polarity?
The millennia-long run of bigger-is-better left us with a lot of traditions
that are now obsolete, but extremely deeply rooted. Which means the ambitious
can now do arbitrage on them. It will be very valuable to understand precisely
which ideas to keep and which can now be discarded.
The place to look is where the spread of smallness began: in the world of
startups.
There have always been occasional cases, particularly in the US, of ambitious
people who grew the ladder under them instead of climbing it. But till
recently this was an anomalous route that tended to be followed only by
outsiders. It was no coincidence that the great industrialists of the
nineteenth century had so little formal education. As huge as their companies
eventually became, they were all essentially mechanics and shopkeepers at
first. That was a social step no one with a college education would take if
they could avoid it. Till the rise of technology startups, and in particular,
Internet startups, it was very unusual for educated people to start their own
businesses.
The eight men who left Shockley Semiconductor to found Fairchild
Semiconductor, the original Silicon Valley startup, weren't even trying to
start a company at first. They were just looking for a company willing to hire
them as a group. Then one of their parents introduced them to a small
investment bank that offered to find funding for them to start their own, so
they did. But starting a company was an alien idea to them; it was something
they backed into.
Now I would guess that practically every Stanford or Berkeley undergrad who
knows how to program has at least considered the idea of starting a startup.
East Coast universities are not far behind, and British universities only a
little behind them. This pattern suggests that attitudes at Stanford and
Berkeley are not an anomaly, but a leading indicator. This is the way the
world is going.
Of course, Internet startups are still only a fraction of the world's economy.
Could a trend based on them be that powerful?
I think so. There's no reason to suppose there's any limit to the amount of
work that could be done in this area. Like science, wealth seems to expand
fractally. Steam power was a sliver of the British economy when Watt started
working on it. But his work led to more work till that sliver had expanded
into something bigger than the whole economy of which it had initially been a
part.
The same thing could happen with the Internet. If Internet startups offer the
best opportunity for ambitious people, then a lot of ambitious people will
start them, and this bit of the economy will balloon in the usual fractal way.
Even if Internet-related applications only become a tenth of the world's
economy, this component will set the tone for the rest. The most dynamic part
of the economy always does, in everything from salaries to standards of dress.
Not just because of its prestige, but because the principles underlying the
most dynamic part of the economy tend to be ones that work.
For the future, the trend to bet on seems to be networks of small, autonomous
groups whose performance is measured individually. And the societies that win
will be the ones with the least impedance.
As with the original industrial revolution, some societies are going to be
better at this than others. Within a generation of its birth in England, the
Industrial Revolution had spread to continental Europe and North America. But
it didn't spread everywhere. This new way of doing things could only take root
in places that were prepared for it. It could only spread to places that
already had a vigorous middle class.
There is a similar social component to the transformation that began in
Silicon Valley in the 1960s. Two new kinds of techniques were developed there:
techniques for building integrated circuits, and techniques for building a new
type of company designed to grow fast by creating new technology. The
techniques for building integrated circuits spread rapidly to other countries.
But the techniques for building startups didn't. Fifty years later, startups
are ubiquitous in Silicon Valley and common in a handful of other US cities,
but they're still an anomaly in most of the world.
Part of the reason—possibly the main reason—that startups have not spread as
broadly as the Industrial Revolution did is their social disruptiveness.
Though it brought many social changes, the Industrial Revolution was not
fighting the principle that bigger is better. Quite the opposite: the two
dovetailed beautifully. The new industrial companies adapted the customs of
existing large organizations like the military and the civil service, and the
resulting hybrid worked well. "Captains of industry" issued orders to "armies
of workers," and everyone knew what they were supposed to do.
Startups seem to go more against the grain, socially. It's hard for them to
flourish in societies that value hierarchy and stability, just as it was hard
for industrialization to flourish in societies ruled by people who stole at
will from the merchant class. But there were already a handful of countries
past that stage when the Industrial Revolution happened. There do not seem to
be that many ready this time.
March 2008, rev. June 2008
Technology tends to separate normal from natural. Our bodies weren't designed
to eat the foods that people in rich countries eat, or to get so little
exercise. There may be a similar problem with the way we work: a normal job
may be as bad for us intellectually as white flour or sugar is for us
physically.
I began to suspect this after spending several years working with startup
founders. I've now worked with over 200 of them, and I've noticed a definite
difference between programmers working on their own startups and those working
for large organizations. I wouldn't say founders seem happier, necessarily;
starting a startup can be very stressful. Maybe the best way to put it is to
say that they're happier in the sense that your body is happier during a long
run than sitting on a sofa eating doughnuts.
Though they're statistically abnormal, startup founders seem to be working in
a way that's more natural for humans.
I was in Africa last year and saw a lot of animals in the wild that I'd only
seen in zoos before. It was remarkable how different they seemed. Particularly
lions. Lions in the wild seem about ten times more alive. They're like
different animals. I suspect that working for oneself feels better to humans
in much the same way that living in the wild must feel better to a wide-
ranging predator like a lion. Life in a zoo is easier, but it isn't the life
they were designed for.
**Trees**
What's so unnatural about working for a big company? The root of the problem
is that humans weren't meant to work in such large groups.
Another thing you notice when you see animals in the wild is that each species
thrives in groups of a certain size. A herd of impalas might have 100 adults;
baboons maybe 20; lions rarely 10. Humans also seem designed to work in
groups, and what I've read about hunter-gatherers accords with research on
organizations and my own experience to suggest roughly what the ideal size is:
groups of 8 work well; by 20 they're getting hard to manage; and a group of 50
is really unwieldy.
Whatever the upper limit is, we are clearly not meant to work in groups of
several hundred. And yet—for reasons having more to do with technology than
human nature—a great many people work for companies with hundreds or thousands
of employees.
Companies know groups that large wouldn't work, so they divide themselves into
units small enough to work together. But to coordinate these they have to
introduce something new: bosses.
These smaller groups are always arranged in a tree structure. Your boss is the
point where your group attaches to the tree. But when you use this trick for
dividing a large group into smaller ones, something strange happens that I've
never heard anyone mention explicitly. In the group one level up from yours,
your boss represents your entire group. A group of 10 managers is not merely a
group of 10 people working together in the usual way. It's really a group of
groups. Which means for a group of 10 managers to work together as if they
were simply a group of 10 individuals, the group working for each manager
would have to work as if they were a single person—the workers and manager
would each share only one person's worth of freedom between them.
In practice a group of people are never able to act as if they were one
person. But in a large organization divided into groups in this way, the
pressure is always in that direction. Each group tries its best to work as if
it were the small group of individuals that humans were designed to work in.
That was the point of creating it. And when you propagate that constraint, the
result is that each person gets freedom of action in inverse proportion to the
size of the entire tree.
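Here's a back-of-the-envelope model of that claim. It's my own illustration, with a uniform tree assumed for simplicity: if your group must act like one virtual person at every level above you, your share of one person's worth of freedom gets divided by the group size again at each level, which works out to roughly the inverse of the whole tree's size.

```python
# Toy model: a uniform org tree with `branching` reports per boss.
# A worker `depth` levels below the top shares one person's worth of
# freedom with their group at every level above them.

def freedom(branching, depth):
    share = 1.0
    for _ in range(depth):
        share /= branching  # the group acts as one virtual person
    return share

def tree_size(branching, depth):
    return sum(branching ** d for d in range(depth + 1))

for depth in (1, 2, 3):
    print(tree_size(10, depth), freedom(10, depth))
# 11 0.1; 111 0.01; 1111 0.001 — freedom falls roughly as 1/tree size
```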
Anyone who's worked for a large organization has felt this. You can feel the
difference between working for a company with 100 employees and one with
10,000, even if your group has only 10 people.
**Corn Syrup**
A group of 10 people within a large organization is a kind of fake tribe. The
number of people you interact with is about right. But something is missing:
individual initiative. Tribes of hunter-gatherers have much more freedom. The
leaders have a little more power than other members of the tribe, but they
don't generally tell them what to do and when, the way a boss can.
It's not your boss's fault. The real problem is that in the group above you in
the hierarchy, your entire group is one virtual person. Your boss is just the
way that constraint is imparted to you.
So working in a group of 10 people within a large organization feels both
right and wrong at the same time. On the surface it feels like the kind of
group you're meant to work in, but something major is missing. A job at a big
company is like high fructose corn syrup: it has some of the qualities of
things you're meant to like, but is disastrously lacking in others.
Indeed, food is an excellent metaphor to explain what's wrong with the usual
sort of job.
For example, working for a big company is the default thing to do, at least
for programmers. How bad could it be? Well, food shows that pretty clearly. If
you were dropped at a random point in America today, nearly all the food
around you would be bad for you. Humans were not designed to eat white flour,
refined sugar, high fructose corn syrup, and hydrogenated vegetable oil. And
yet if you analyzed the contents of the average grocery store you'd probably
find these four ingredients accounted for most of the calories. "Normal" food
is terribly bad for you. The only people who eat what humans were actually
designed to eat are a few Birkenstock-wearing weirdos in Berkeley.
If "normal" food is so bad for us, why is it so common? There are two main
reasons. One is that it has more immediate appeal. You may feel lousy an hour
after eating that pizza, but eating the first couple bites feels great. The
other is economies of scale. Producing junk food scales; producing fresh
vegetables doesn't. Which means (a) junk food can be very cheap, and (b) it's
worth spending a lot to market it.
If people have to choose between something that's cheap, heavily marketed, and
appealing in the short term, and something that's expensive, obscure, and
appealing in the long term, which do you think most will choose?
It's the same with work. The average MIT graduate wants to work at Google or
Microsoft, because it's a recognized brand, it's safe, and they'll get paid a
good salary right away. It's the job equivalent of the pizza they had for
lunch. The drawbacks will only become apparent later, and then only in a vague
sense of malaise.
And founders and early employees of startups, meanwhile, are like the
Birkenstock-wearing weirdos of Berkeley: though a tiny minority of the
population, they're the ones living as humans are meant to. In an artificial
world, only extremists live naturally.
**Programmers**
The restrictiveness of big company jobs is particularly hard on programmers,
because the essence of programming is to build new things. Sales people make
much the same pitches every day; support people answer much the same
questions; but once you've written a piece of code you don't need to write it
again. So a programmer working as programmers are meant to is always making
new things. And when you're part of an organization whose structure gives each
person freedom in inverse proportion to the size of the tree, you're going to
face resistance when you do something new.
This seems an inevitable consequence of bigness. It's true even in the
smartest companies. I was talking recently to a founder who considered
starting a startup right out of college, but went to work for Google instead
because he thought he'd learn more there. He didn't learn as much as he
expected. Programmers learn by doing, and most of the things he wanted to do,
he couldn't—sometimes because the company wouldn't let him, but often because
the company's code wouldn't let him. Between the drag of legacy code, the
overhead of doing development in such a large organization, and the
restrictions imposed by interfaces owned by other groups, he could only try a
fraction of the things he would have liked to. He said he has learned much
more in his own startup, despite the fact that he has to do all the company's
errands as well as programming, because at least when he's programming he can
do whatever he wants.
An obstacle downstream propagates upstream. If you're not allowed to implement
new ideas, you stop having them. And vice versa: when you can do whatever you
want, you have more ideas about what to do. So working for yourself makes your
brain more powerful in the same way a low-restriction exhaust system makes an
engine more powerful.
Working for yourself doesn't have to mean starting a startup, of course. But a
programmer deciding between a regular job at a big company and their own
startup is probably going to learn more doing the startup.
You can adjust the amount of freedom you get by scaling the size of company
you work for. If you start the company, you'll have the most freedom. If you
become one of the first 10 employees you'll have almost as much freedom as the
founders. Even a company with 100 people will feel different from one with
1000.
Working for a small company doesn't ensure freedom. The tree structure of
large organizations sets an upper bound on freedom, not a lower bound. The
head of a small company may still choose to be a tyrant. The point is that a
large organization is compelled by its structure to be one.
**Consequences**
That has real consequences for both organizations and individuals. One is that
companies will inevitably slow down as they grow larger, no matter how hard
they try to keep their startup mojo. It's a consequence of the tree structure
that every large organization is forced to adopt.
Or rather, a large organization could only avoid slowing down if it avoided
tree structure. And since human nature limits the size of group that can work
together, the only way I can imagine for larger groups to avoid tree structure
would be to have no structure: to have each group actually be independent, and
to work together the way components of a market economy do.
That might be worth exploring. I suspect there are already some highly
partitionable businesses that lean this way. But I don't know any technology
companies that have done it.
There is one thing companies can do short of structuring themselves as
sponges: they can stay small. If I'm right, then it really pays to keep a
company as small as it can be at every stage. Particularly a technology
company. Which means it's doubly important to hire the best people. Mediocre
hires hurt you twice: they get less done, but they also make you big, because
you need more of them to solve a given problem.
For individuals the upshot is the same: aim small. It will always suck to work
for large organizations, and the larger the organization, the more it will
suck.
In an essay I wrote a couple years ago I advised graduating seniors to work
for a couple years for another company before starting their own. I'd modify
that now. Work for another company if you want to, but only for a small one,
and if you want to start your own startup, go ahead.
The reason I suggested college graduates not start startups immediately was
that I felt most would fail. And they will. But ambitious programmers are
better off doing their own thing and failing than going to work at a big
company. Certainly they'll learn more. They might even be better off
financially. A lot of people in their early twenties get into debt, because
their expenses grow even faster than the salary that seemed so high when they
left school. At least if you start a startup and fail your net worth will be
zero rather than negative.
We've now funded so many different types of founders that we have enough data
to see patterns, and there seems to be no benefit from working for a big
company. The people who've worked for a few years do seem better than the ones
straight out of college, but only because they're that much older.
The people who come to us from big companies often seem kind of conservative.
It's hard to say how much is because big companies made them that way, and how
much is the natural conservatism that made them work for the big companies in
the first place. But certainly a large part of it is learned. I know because
I've seen it burn off.
Having seen that happen so many times is one of the things that convinces me
that working for oneself, or at least for a small group, is the natural way
for programmers to live. Founders arriving at Y Combinator often have the
downtrodden air of refugees. Three months later they're transformed: they have
so much more confidence that they seem as if they've grown several inches
taller. Strange as this sounds, they seem both more worried and happier at
the same time. Which is exactly how I'd describe the way lions seem in the
wild.
Watching employees get transformed into founders makes it clear that the
difference between the two is due mostly to environment—and in particular that
the environment in big companies is toxic to programmers. In the first couple
weeks of working on their own startup they seem to come to life, because
finally they're working the way people are meant to.
December 2008
A few months ago I read a _New York Times_ article on South Korean cram
schools that said
> Admission to the right university can make or break an ambitious young South
> Korean.
A parent added:
> "In our country, college entrance exams determine 70 to 80 percent of a
> person's future."
It was striking how old fashioned this sounded. And yet when I was in high
school it wouldn't have seemed too far off as a description of the US. Which
means things must have been changing here.
The course of people's lives in the US now seems to be determined less by
credentials and more by performance than it was 25 years ago. Where you go to
college still matters, but not like it used to.
What happened?
_____
Judging people by their academic credentials was in its time an advance. The
practice seems to have begun in China, where starting in 587 candidates for
the imperial civil service had to take an exam on classical literature. It
was also a test of wealth, because the knowledge it tested was so specialized
that passing required years of expensive training. But though wealth was a
necessary condition for passing, it was not a sufficient one. By the standards
of the rest of the world in 587, the Chinese system was very enlightened.
Europeans didn't introduce formal civil service exams till the nineteenth
century, and even then they seem to have been influenced by the Chinese
example.
Before credentials, government positions were obtained mainly by family
influence, if not outright bribery. It was a great step forward to judge
people by their performance on a test. But by no means a perfect solution.
When you judge people that way, you tend to get cram schools—which they did in
Ming China and nineteenth century England just as much as in present day South
Korea.
What cram schools are, in effect, is leaks in a seal. The use of credentials
was an attempt to seal off the direct transmission of power between
generations, and cram schools represent that power finding holes in the seal.
Cram schools turn wealth in one generation into credentials in the next.
It's hard to beat this phenomenon, because the schools adjust to suit whatever
the tests measure. When the tests are narrow and predictable, you get cram
schools on the classic model, like those that prepared candidates for
Sandhurst (the British West Point) or the classes American students take now
to improve their SAT scores. But as the tests get broader, the schools do too.
Preparing a candidate for the Chinese imperial civil service exams took years,
as prep school does today. But the raison d'être of all these institutions has
been the same: to beat the system.
_____
History suggests that, all other things being equal, a society prospers in
proportion to its ability to prevent parents from influencing their children's
success directly. It's a fine thing for parents to help their children
indirectly—for example, by helping them to become smarter or more disciplined,
which then makes them more successful. The problem comes when parents use
direct methods: when they are able to use their own wealth or power as a
substitute for their children's qualities.
Parents will tend to do this when they can. Parents will die for their kids,
so it's not surprising to find they'll also push their scruples to the limits
for them. Especially if other parents are doing it.
Sealing off this force has a double advantage. Not only does a society get
"the best man for the job," but parents' ambitions are diverted from direct
methods to indirect ones—to actually trying to raise their kids well.
But we should expect it to be very hard to contain parents' efforts to obtain
an unfair advantage for their kids. We're dealing with one of the most
powerful forces in human nature. We shouldn't expect naive solutions to work,
any more than we'd expect naive solutions for keeping heroin out of a prison
to work.
_____
The obvious way to solve the problem is to make credentials better. If the
tests a society uses are currently hackable, we can study the way people beat
them and try to plug the holes. You can use the cram schools to show you where
most of the holes are. They also tell you when you're succeeding in fixing
them: when cram schools become less popular.
A more general solution would be to push for increased transparency,
especially at critical social bottlenecks like college admissions. In the US
this process still shows many outward signs of corruption. For example, legacy
admissions. The official story is that legacy status doesn't carry much
weight, because all it does is break ties: applicants are bucketed by ability,
and legacy status is only used to decide between the applicants in the bucket
that straddles the cutoff. But what this means is that a university can make
legacy status have as much or as little weight as they want, by adjusting the
size of the bucket that straddles the cutoff.
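To make the bucket trick concrete, here is a hedged sketch with made-up numbers and names, just illustrating the mechanism: officially legacy status only breaks ties, but widening the bucket that straddles the cutoff turns more of the decision into a "tie" for legacy status to break.

```python
# Sketch of the tie-breaking loophole: applicants are bucketed by
# ability score, and within the bucket straddling the admissions
# cutoff, "ties" go to legacies. Widening the bucket quietly gives
# legacy status more weight.

def admit(applicants, n_slots, bucket_width):
    """applicants: list of dicts with 'score' and 'legacy' keys."""
    bucket = lambda a: int(a["score"] // bucket_width)
    ranked = sorted(applicants, key=lambda a: -a["score"])
    cutoff_bucket = bucket(ranked[n_slots - 1])  # bucket of marginal admit
    above = [a for a in ranked if bucket(a) > cutoff_bucket]
    straddle = [a for a in ranked if bucket(a) == cutoff_bucket]
    # legacies first within the straddling bucket: the official "tie-break"
    straddle.sort(key=lambda a: (not a["legacy"], -a["score"]))
    return above + straddle[: n_slots - len(above)]
```

With a narrow bucket this reduces to pure ranking by score; make the bucket wide enough and the same official story covers almost any amount of legacy preference.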
By gradually chipping away at the abuse of credentials, you could probably
make them more airtight. But what a long fight it would be. Especially when
the institutions administering the tests don't really want them to be
airtight.
_____
Fortunately there's a better way to prevent the direct transmission of power
between generations. Instead of trying to make credentials harder to hack, we
can also make them matter less.
Let's think about what credentials are for. What they are, functionally, is a
way of predicting performance. If you could measure actual performance, you
wouldn't need them.
So why did they even evolve? Why haven't we just been measuring actual
performance? Think about where credentialism first appeared: in selecting
candidates for large organizations. Individual performance is hard to measure
in large organizations, and the harder performance is to measure, the more
important it is to predict it. If an organization could immediately and
cheaply measure the performance of recruits, they wouldn't need to examine
their credentials. They could take everyone and keep just the good ones.
Large organizations can't do this. But a bunch of small organizations in a
market can come close. A market takes every organization and keeps just the
good ones. As organizations get smaller, this approaches taking every person
and keeping just the good ones. So all other things being equal, a society
consisting of more, smaller organizations will care less about credentials.
_____
That's what's been happening in the US. That's why those quotes from Korea
sound so old fashioned. They're talking about an economy like America's a few
decades ago, dominated by a few big companies. The route for the ambitious in
that sort of environment is to join one and climb to the top. Credentials
matter a lot then. In the culture of a large organization, an elite pedigree
becomes a self-fulfilling prophecy.
This doesn't work in small companies. Even if your colleagues were impressed
by your credentials, they'd soon be parted from you if your performance didn't
match, because the company would go out of business and the people would be
dispersed.
In a world of small companies, performance is all anyone cares about. People
hiring for a startup don't care whether you've even graduated from college,
let alone which one. All they care about is what you can do. Which is in fact
all that should matter, even in a large organization. The reason credentials
have such prestige is that for so long the large organizations in a society
tended to be the most powerful. But in the US at least they don't have the
monopoly on power they once did, precisely because they can't measure (and
thus reward) individual performance. Why spend twenty years climbing the
corporate ladder when you can get rewarded directly by the market?
I realize I see a more exaggerated version of the change than most other
people. As a partner at an early stage venture funding firm, I'm like a
jumpmaster shoving people out of the old world of credentials and into the new
one of performance. I'm an agent of the change I'm seeing. But I don't think
I'm imagining it. It was not so easy 25 years ago for an ambitious person to
choose to be judged directly by the market. You had to go through bosses, and
they were influenced by where you'd been to college.
_____
What made it possible for small organizations to succeed in America? I'm still
not entirely sure. Startups are certainly a large part of it. Small
organizations can develop new ideas faster than large ones, and new ideas are
increasingly valuable.
But I don't think startups account for all the shift from credentials to
measurement. My friend Julian Weber told me that when he went to work for a
New York law firm in the 1950s they paid associates far less than firms do
today. Law firms then made no pretense of paying people according to the value
of the work they'd done. Pay was based on seniority. The younger employees
were paying their dues. They'd be rewarded later.
The same principle prevailed at industrial companies. When my father was
working at Westinghouse in the 1970s, he had people working for him who made
more than he did, because they'd been there longer.
Now companies increasingly have to pay employees market price for the work
they do. One reason is that employees no longer trust companies to deliver
deferred rewards: why work to accumulate deferred rewards at a company that
might go bankrupt, or be taken over and have all its implicit obligations
wiped out? The other is that some companies broke ranks and started to pay
young employees large amounts. This was particularly true in consulting, law,
and finance, where it led to the phenomenon of yuppies. The word is rarely
used today because it's no longer surprising to see a 25 year old with money,
but in 1985 the sight of a 25 year old _professional_ able to afford a new BMW
was so novel that it called forth a new word.
The classic yuppie worked for a small organization. He didn't work for General
Widget, but for the law firm that handled General Widget's acquisitions or the
investment bank that floated their bond issues.
Startups and yuppies entered the American conceptual vocabulary roughly
simultaneously in the late 1970s and early 1980s. I don't think there was a
causal connection. Startups happened because technology started to change so
fast that big companies could no longer keep a lid on the smaller ones. I
don't think the rise of yuppies was inspired by it; it seems more as if there
was a change in the social conventions (and perhaps the laws) governing the
way big companies worked. But the two phenomena rapidly fused to produce a
principle that now seems obvious: paying energetic young people market rates,
and getting correspondingly high performance from them.
At about the same time the US economy rocketed out of the doldrums that had
afflicted it for most of the 1970s. Was there a connection? I don't know
enough to say, but it felt like it at the time. There was a lot of energy
released.
_____
Countries worried about their competitiveness are right to be concerned about
the number of startups started within them. But they would do even better to
examine the underlying principle. Do they let energetic young people get paid
market rate for the work they do? The young are the test, because when people
aren't rewarded according to performance, they're invariably rewarded
according to seniority instead.
All it takes is a few beachheads in your economy that pay for performance.
Measurement spreads like heat. If one part of a society is better at
measurement than others, it tends to push the others to do better. If people
who are young but smart and driven can make more by starting their own
companies than by working for existing ones, the existing companies are forced
to pay more to keep them. So market rates gradually permeate every
organization, even the government.
The measurement of performance will tend to push even the organizations
issuing credentials into line. When we were kids I used to annoy my sister by
ordering her to do things I knew she was about to do anyway. As credentials
are superseded by performance, a similar role is the best former gatekeepers
can hope for. Once credential-granting institutions are no longer in the self-fulfilling prophecy business, they'll have to work harder to predict the
future.
_____
Credentials are a step beyond bribery and influence. But they're not the final
step. There's an even better way to block the transmission of power between
generations: to encourage the trend toward an economy made of more, smaller
units. Then you can measure what credentials merely predict.
No one likes the transmission of power between generations—not the left or the
right. But the market forces favored by the right turn out to be a better way
of preventing it than the credentials the left are forced to fall back on.
The era of credentials began to end when the power of large organizations
peaked in the late twentieth century. Now we seem to be entering a new era
based on measurement. The reason the new model has advanced so rapidly is that
it works so much better. It shows no sign of slowing.
---
July 2009
Now that the term "ramen profitable" has become widespread, I ought to explain
precisely what the idea entails.
Ramen profitable means a startup makes just enough to pay the founders' living
expenses. This is a different form of profitability than startups have
traditionally aimed for. Traditional profitability means a big bet is finally
paying off, whereas the main importance of ramen profitability is that it buys
you time.
In the past, a startup would usually become profitable only after raising and
spending quite a lot of money. A company making computer hardware might not
become profitable for 5 years, during which they spent $50 million. But when
they did they might have revenues of $50 million a year. This kind of
profitability means the startup has succeeded.
Ramen profitability is the other extreme: a startup that becomes profitable
after 2 months, even though its revenues are only $3000 a month, because the
only employees are a couple 25 year old founders who can live on practically
nothing. Revenues of $3000 a month do not mean the company has succeeded. But
it does share something with the one that's profitable in the traditional way:
they don't need to raise money to survive.
Ramen profitability is an unfamiliar idea to most people because it only
recently became feasible. It's still not feasible for a lot of startups; it
would not be for most biotech startups, for example; but it is for many
software startups because they're now so cheap. For many, the only real cost
is the founders' living expenses.
The main significance of this type of profitability is that you're no longer
at the mercy of investors. If you're still losing money, then eventually
you'll either have to raise more or shut down. Once you're ramen profitable
this painful choice goes away. You can still raise money, but you don't have
to do it now.
* * *
The most obvious advantage of not needing money is that you can get better
terms. If investors know you need money, they'll sometimes take advantage of
you. Some may even deliberately stall, because they know that as you run out
of money you'll become increasingly pliable.
But there are also three less obvious advantages of ramen profitability. One
is that it makes you more attractive to investors. If you're already
profitable, on however small a scale, it shows that (a) you can get at least
someone to pay you, (b) you're serious about building things people want, and
(c) you're disciplined enough to keep expenses low.
This is reassuring to investors, because you've addressed three of their
biggest worries. It's common for them to fund companies that have smart
founders and a big market, and yet still fail. When these companies fail, it's
usually because (a) people wouldn't pay for what they made, e.g. because it
was too hard to sell to them, or the market wasn't ready yet, (b) the founders
solved the wrong problem, instead of paying attention to what users needed, or
(c) the company spent too much and burned through their funding before they
started to make money. If you're ramen profitable, you're already avoiding
these mistakes.
Another advantage of ramen profitability is that it's good for morale. A
company tends to feel rather theoretical when you first start it. It's legally
a company, but you feel like you're lying when you call it one. When people
start to pay you significant amounts, the company starts to feel real. And
your own living expenses are the milestone you feel most, because at that
point the future flips state. Now survival is the default, instead of dying.
A morale boost on that scale is very valuable in a startup, because the moral
weight of running a startup is what makes it hard. Startups are still very
rare. Why don't more people do it? The financial risk? Plenty of 25 year olds
save nothing anyway. The long hours? Plenty of people work just as long hours
in regular jobs. What keeps people from starting startups is the fear of
having so much responsibility. And this is not an irrational fear: it really
is hard to bear. Anything that takes some of that weight off you will greatly
increase your chances of surviving.
A startup that reaches ramen profitability may be more likely to succeed than
not. Which is pretty exciting, considering the bimodal distribution of
outcomes in startups: you either fail or make a lot of money.
The fourth advantage of ramen profitability is the least obvious but may be
the most important. If you don't need to raise money, you don't have to
interrupt working on the company to do it.
Raising money is terribly distracting. You're lucky if your productivity is a
third of what it was before. And it can last for months.
I didn't understand (or rather, remember) precisely why raising money was so
distracting till earlier this year. I'd noticed that startups we funded would
usually grind to a halt when they switched to raising money, but I didn't
remember exactly why till YC raised money itself. We had a comparatively easy
time of it; the first people I asked said yes; but it took months to work out
the details, and during that time I got hardly any real work done. Why?
Because I thought about it all the time.
At any given time there tends to be one problem that's the most urgent for a
startup. This is what you think about as you fall asleep at night and when you
take a shower in the morning. And when you start raising money, that becomes
the problem you think about. You only take one shower in the morning, and if
you're thinking about investors during it, then you're not thinking about the
product.
Whereas if you can choose when you raise money, you can pick a time when
you're not in the middle of something else, and you can probably also insist
that the round close fast. You may even be able to avoid having the round
occupy your thoughts, if you don't care whether it closes.
* * *
Ramen profitable means no more than the definition implies. It does not, for
example, imply that you're "bootstrapping" the startup—that you're never going
to take money from investors. Empirically that doesn't seem to work very well.
Few startups succeed without taking investment. Maybe as startups get cheaper
it will become more common. On the other hand, the money is there, waiting to
be invested. If startups need it less, they'll be able to get it on better
terms, which will make them more inclined to take it. That will tend to
produce an equilibrium.
Another thing ramen profitability doesn't imply is Joe Kraus's idea that you
should put your business model in beta when you put your product in beta. He
believes you should get people to pay you from the beginning. I think that's
too constraining. Facebook didn't, and they've done better than most startups.
Making money right away was not only unnecessary for them, but probably would
have been harmful. I do think Joe's rule could be useful for many startups,
though. When founders seem unfocused, I sometimes suggest they try to get
customers to pay them for something, in the hope that this constraint will
prod them into action.
The difference between Joe's idea and ramen profitability is that a ramen
profitable company doesn't have to be making money the way it ultimately will.
It just has to be making money. The most famous example is Google, which
initially made money by licensing search to sites like Yahoo.
Is there a downside to ramen profitability? Probably the biggest danger is
that it might turn you into a consulting firm. Startups have to be product
companies, in the sense of making a single thing that everyone uses. The
defining quality of startups is that they grow fast, and consulting just can't
scale the way a product can. But it's pretty easy to make $3000 a month
consulting; in fact, that would be a low rate for contract programming. So
there could be a temptation to slide into consulting, and telling yourselves
you're a ramen profitable startup, when in fact you're not a startup at all.
It's ok to do a little consulting-type work at first. Startups usually have to
do something weird at first. But remember that ramen profitability is not the
destination. A startup's destination is to grow really big; ramen
profitability is a trick for not dying en route.
---
August 2010
When I went to work for Yahoo after they bought our startup in 1998, it felt
like the center of the world. It was supposed to be the next big thing. It was
supposed to be what Google turned out to be.
What went wrong? The problems that hosed Yahoo go back a long time,
practically to the beginning of the company. They were already very visible
when I got there in 1998. Yahoo had two problems Google didn't: easy money,
and ambivalence about being a technology company.
**Money**
The first time I met Jerry Yang, we thought we were meeting for different
reasons. He thought we were meeting so he could check us out in person before
buying us. I thought we were meeting so we could show him our new technology,
Revenue Loop. It was a way of sorting shopping search results. Merchants bid a
percentage of sales for traffic, but the results were sorted not by the bid
but by the bid times the average amount a user would buy. It was like the
algorithm Google uses now to sort ads, but this was in the spring of 1998,
before Google was founded.
Revenue Loop was the optimal sort for shopping search, in the sense that it
sorted in order of how much money Yahoo would make from each link. But it
wasn't just optimal in that sense. Ranking search results by user behavior
also makes search better. Users train the search: you can start out finding
matches based on mere textual similarity, and as users buy more stuff the
search results get better and better.
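To make the ranking rule concrete, here's a minimal sketch in Python. The names and numbers are hypothetical illustrations of the principle described above, not Yahoo's or anyone's actual implementation:

```python
# Hypothetical sketch of the Revenue Loop ranking rule: sort shopping
# results by bid times the average amount a clicking user ends up buying,
# i.e. by expected revenue per click, not by the bid alone.
from dataclasses import dataclass

@dataclass
class Listing:
    merchant: str
    bid_pct: float       # fraction of each sale the merchant bids for traffic
    avg_purchase: float  # average purchase by users who click through

def expected_revenue(listing: Listing) -> float:
    # Expected revenue per click for the search engine.
    return listing.bid_pct * listing.avg_purchase

def rank(listings: list[Listing]) -> list[Listing]:
    return sorted(listings, key=expected_revenue, reverse=True)

listings = [
    Listing("A", bid_pct=0.10, avg_purchase=20.0),   # $2.00 expected per click
    Listing("B", bid_pct=0.05, avg_purchase=100.0),  # $5.00 expected per click
]
print([l.merchant for l in rank(listings)])  # ['B', 'A']
```

Note that a low bid on something users actually buy outranks a high bid on something they don't, which is also why the results improve as purchase data accumulates.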
Jerry didn't seem to care. I was confused. I was showing him technology that
extracted the maximum value from search traffic, and he didn't care? I
couldn't tell whether I was explaining it badly, or he was just very poker
faced.
I didn't realize the answer till later, after I went to work at Yahoo. It was
neither of my guesses. The reason Yahoo didn't care about a technique that
extracted the full value of traffic was that advertisers were already
overpaying for it. If Yahoo merely extracted the actual value, they'd have
made less.
Hard as it is to believe now, the big money then was in banner ads.
Advertisers were willing to pay ridiculous amounts for banner ads. So Yahoo's
sales force had evolved to exploit this source of revenue. Led by a large and
terrifyingly formidable man called Anil Singh, Yahoo's sales guys would fly
out to Procter & Gamble and come back with million dollar orders for banner ad
impressions.
The prices seemed cheap compared to print, which was what advertisers, for
lack of any other reference, compared them to. But they were expensive
compared to what they were worth. So these big, dumb companies were a
dangerous source of revenue to depend on. But there was another source even
more dangerous: other Internet startups.
By 1998, Yahoo was the beneficiary of a de facto Ponzi scheme. Investors were
excited about the Internet. One reason they were excited was Yahoo's revenue
growth. So they invested in new Internet startups. The startups then used the
money to buy ads on Yahoo to get traffic. Which caused yet more revenue growth
for Yahoo, and further convinced investors the Internet was worth investing
in. When I realized this one day, sitting in my cubicle, I jumped up like
Archimedes in his bathtub, except instead of "Eureka!" I was shouting "Sell!"
Both the Internet startups and the Procter & Gambles were doing brand
advertising. They didn't care about targeting. They just wanted lots of people
to see their ads. So traffic became the thing to get at Yahoo. It didn't
matter what type.
It wasn't just Yahoo. All the search engines were doing it. This was why they
were trying to get people to start calling them "portals" instead of "search
engines." Despite the actual meaning of the word portal, what they meant by it
was a site where users would find what they wanted on the site itself, instead
of just passing through on their way to other destinations, as they did at a
search engine.
I remember telling David Filo in late 1998 or early 1999 that Yahoo should buy
Google, because I and most of the other programmers in the company were using
it instead of Yahoo for search. He told me that it wasn't worth worrying
about. Search was only 6% of our traffic, and we were growing at 10% a month.
It wasn't worth doing better.
I didn't say "But search traffic is worth more than other traffic!" I said
"Oh, ok." Because I didn't realize either how much search traffic was worth.
I'm not sure even Larry and Sergey did then. If they had, Google presumably
wouldn't have expended any effort on enterprise search.
If circumstances had been different, the people running Yahoo might have
realized sooner how important search was. But they had the most opaque
obstacle in the world between them and the truth: money. As long as customers
were writing big checks for banner ads, it was hard to take search seriously.
Google didn't have that to distract them.
**Hackers**
But Yahoo also had another problem that made it hard to change directions.
They'd been thrown off balance from the start by their ambivalence about being
a technology company.
One of the weirdest things about Yahoo when I went to work there was the way
they insisted on calling themselves a "media company." If you walked around
their offices, it seemed like a software company. The cubicles were full of
programmers writing code, product managers thinking about feature lists and
ship dates, support people (yes, there were actually support people) telling
users to restart their browsers, and so on, just like a software company. So
why did they call themselves a media company?
One reason was the way they made money: by selling ads. In 1995 it was hard to
imagine a technology company making money that way. Technology companies made
money by selling their software to users. Media companies sold ads. So they
must be a media company.
Another big factor was the fear of Microsoft. If anyone at Yahoo considered
the idea that they should be a technology company, the next thought would have
been that Microsoft would crush them.
It's hard for anyone much younger than me to understand the fear Microsoft
still inspired in 1995. Imagine a company with several times the power Google
has now, but way meaner. It was perfectly reasonable to be afraid of them.
Yahoo watched them crush the first hot Internet company, Netscape. It was
reasonable to worry that if they tried to be the next Netscape, they'd suffer
the same fate. How were they to know that Netscape would turn out to be
Microsoft's last victim?
It would have been a clever move to pretend to be a media company to throw
Microsoft off their scent. But unfortunately Yahoo actually tried to be one,
sort of. Project managers at Yahoo were called "producers," for example, and
the different parts of the company were called "properties." But what Yahoo
really needed to be was a technology company, and by trying to be something
else, they ended up being something that was neither here nor there. That's
why Yahoo as a company has never had a sharply defined identity.
The worst consequence of trying to be a media company was that they didn't
take programming seriously enough. Microsoft (back in the day), Google, and
Facebook have all had hacker-centric cultures. But Yahoo treated programming
as a commodity. At Yahoo, user-facing software was controlled by product
managers and designers. The job of programmers was just to take the work of
the product managers and designers the final step, by translating it into
code.
One obvious result of this practice was that when Yahoo built things, they
often weren't very good. But that wasn't the worst problem. The worst problem
was that they hired bad programmers.
Microsoft (back in the day), Google, and Facebook have all been obsessed with
hiring the best programmers. Yahoo wasn't. They preferred good programmers to
bad ones, but they didn't have the kind of single-minded, almost obnoxiously
elitist focus on hiring the smartest people that the big winners have had. And
when you consider how much competition there was for programmers when they
were hiring, during the Bubble, it's not surprising that the quality of their
programmers was uneven.
In technology, once you have bad programmers, you're doomed. I can't think of
an instance where a company has sunk into technical mediocrity and recovered.
Good programmers want to work with other good programmers. So once the quality
of programmers at your company starts to drop, you enter a death spiral from
which there is no recovery.
At Yahoo this death spiral started early. If there was ever a time when Yahoo
was a Google-style talent magnet, it was over by the time I got there in 1998.
The company felt prematurely old. Most technology companies eventually get
taken over by suits and middle managers. At Yahoo it felt as if they'd
deliberately accelerated this process. They didn't want to be a bunch of
hackers. They wanted to be suits. A media company should be run by suits.
The first time I visited Google, they had about 500 people, the same number
Yahoo had when I went to work there. But boy did things seem different. It was
still very much a hacker-centric culture. I remember talking to some
programmers in the cafeteria about the problem of gaming search results (now
known as SEO), and they asked "what should we do?" Programmers at Yahoo
wouldn't have asked that. Theirs was not to reason why; theirs was to build
what product managers spec'd. I remember coming away from Google thinking
"Wow, it's still a startup."
There's not much we can learn from Yahoo's first fatal flaw. It's probably too
much to hope any company could avoid being damaged by depending on a bogus
source of revenue. But startups can learn an important lesson from the second
one. In the software business, you can't afford not to have a hacker-centric
culture.
Probably the most impressive commitment I've heard to having a hacker-centric
culture came from Mark Zuckerberg, when he spoke at Startup School in 2007. He
said that in the early days Facebook made a point of hiring programmers even
for jobs that would not ordinarily consist of programming, like HR and
marketing.
So which companies need to have a hacker-centric culture? Which companies are
"in the software business" in this respect? As Yahoo discovered, the area
covered by this rule is bigger than most people realize. The answer is: any
company that needs to have good software.
Why would great programmers want to work for a company that didn't have a
hacker-centric culture, as long as there were others that did? I can imagine
two reasons: if they were paid a huge amount, or if the domain was interesting
and none of the companies in it were hacker-centric. Otherwise you can't
attract good programmers to work in a suit-centric culture. And without good
programmers you won't get good software, no matter how many people you put on
a task, or how many procedures you establish to ensure "quality."
Hacker culture often seems kind of irresponsible. That's why people proposing
to destroy it use phrases like "adult supervision." That was the phrase they
used at Yahoo. But there are worse things than seeming irresponsible. Losing,
for example.
---
March 2012
As a child I read a book of stories about a famous judge in eighteenth century
Japan called Ooka Tadasuke. One of the cases he decided was brought by the
owner of a food shop. A poor student who could afford only rice was eating his
rice while enjoying the delicious cooking smells coming from the food shop.
The owner wanted the student to pay for the smells he was enjoying.
The student was stealing his smells!
This story often comes to mind when I hear the RIAA and MPAA accusing people
of stealing music and movies.
It sounds ridiculous to us to treat smells as property. But I can imagine
scenarios in which one could charge for smells. Imagine we were living on a
moon base where we had to buy air by the liter. I could imagine air suppliers
adding scents at an extra charge.
The reason it seems ridiculous to us to treat smells as property is that it
wouldn't work to treat them that way here. It would work on a moon base, though.
What counts as property depends on what works to treat as property. And that
not only can change, but has changed. Humans may always (for some definition
of human and always) have treated small items carried on one's person as
property. But hunter gatherers didn't treat land, for example, as property in
the way we do.
The reason so many people think of property as having a single unchanging
definition is that its definition changes very slowly. But we are in the
midst of such a change now. The record labels and movie studios used to
distribute what they made like air shipped through tubes on a moon base. But
with the arrival of networks, it's as if we've moved to a planet with a
breathable atmosphere. Data moves like smells now. And through a combination
of wishful thinking and short-term greed, the labels and studios have put
themselves in the position of the food shop owner, accusing us all of stealing
their smells.
(The reason I say short-term greed is that the underlying problem with the
labels and studios is that the people who run them are driven by bonuses
rather than equity. If they were driven by equity they'd be looking for ways
to take advantage of technological change instead of fighting it. But building
new things takes too long. Their bonuses depend on this year's revenues, and
the best way to increase those is to extract more money from stuff they do
already.)
So what does this mean? Should people not be able to charge for content?
There's not a single yes or no answer to that question. People should be able
to charge for content when it works to charge for content.
But by "works" I mean something more subtle than "when they can get away with
it." I mean when people can charge for content without warping society in
order to do it. After all, the companies selling smells on the moon base could
continue to sell them on the Earth, if they lobbied successfully for laws
requiring us all to continue to breathe through tubes down here too, even
though we no longer needed to.
The crazy legal measures that the labels and studios have been taking have a
lot of that flavor. Newspapers and magazines are just as screwed, but they are
at least declining gracefully. The RIAA and MPAA would make us breathe through
tubes if they could.
Ultimately it comes down to common sense. When you're abusing the legal system
by trying to use mass lawsuits against randomly chosen people as a form of
exemplary punishment, or lobbying for laws that would break the Internet if
they passed, that's ipso facto evidence you're using a definition of property
that doesn't work.
This is where it's helpful to have working democracies and multiple sovereign
countries. If the world had a single, autocratic government, the labels and
studios could buy laws making the definition of property be whatever they
wanted. But fortunately there are still some countries that are not copyright
colonies of the US, and even in the US, politicians still seem to be afraid of
actual voters, in sufficient numbers.
The people running the US may not like it when voters or other countries
refuse to bend to their will, but ultimately it's in all our interest that
there's not a single point of attack for people trying to warp the law to
serve their own purposes. Private property is an extremely useful idea —
arguably one of our greatest inventions. So far, each new definition of it has
brought us increasing material wealth. It seems reasonable to suppose the
newest one will too. It would be a disaster if we all had to keep running an
obsolete version just because a few powerful people were too lazy to upgrade.
---
October 2015
When I talk to a startup that's been operating for more than 8 or 9 months,
the first thing I want to know is almost always the same. Assuming their
expenses remain constant and their revenue growth is what it has been over the
last several months, do they make it to profitability on the money they have
left? Or to put it more dramatically, by default do they live or die?
The startling thing is how often the founders themselves don't know. Half the
founders I talk to don't know whether they're default alive or default dead.
If you're among that number, Trevor Blackwell has made a handy _calculator_
you can use to find out.
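For the gist of what such a calculator computes, here's a rough sketch under two simplifying assumptions, constant expenses and a constant monthly revenue growth rate (this is my own illustration, not a description of Trevor's calculator):

```python
# Rough default-alive check: does revenue catch up to expenses before
# the cash runs out? Assumes constant expenses and constant monthly
# revenue growth; a real calculator would be more careful.

def default_alive(cash: float, expenses: float, revenue: float,
                  monthly_growth: float, horizon_months: int = 120) -> bool:
    for _ in range(horizon_months):
        if revenue >= expenses:
            return True              # profitable before the money ran out
        cash -= expenses - revenue   # burn this month's deficit
        if cash <= 0:
            return False             # default dead
        revenue *= 1 + monthly_growth
    return False

# $500k in the bank, $40k/month expenses, $10k/month revenue growing 15%/month:
print(default_alive(500_000, 40_000, 10_000, 0.15))  # True: default alive
```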
The reason I want to know first whether a startup is default alive or default
dead is that the rest of the conversation depends on the answer. If the
company is default alive, we can talk about ambitious new things they could
do. If it's default dead, we probably need to talk about how to save it. We
know the current trajectory ends badly. How can they get off that trajectory?
Why do so few founders know whether they're default alive or default dead?
Mainly, I think, because they're not used to asking that. It's not a question
that makes sense to ask early on, any more than it makes sense to ask a 3 year
old how he plans to support himself. But as the company grows older, the
question switches from meaningless to critical. That kind of switch often
takes people by surprise.
I propose the following solution: instead of starting to ask too late whether
you're default alive or default dead, start asking too early. It's hard to say
precisely when the question switches polarity. But it's probably not that
dangerous to start worrying too early that you're default dead, whereas it's
very dangerous to start worrying too late.
The reason is a phenomenon I wrote about earlier: the fatal pinch. The fatal
pinch is default dead + slow growth + not enough time to fix it. And the way
founders end up in it is by not realizing that's where they're headed.
There is another reason founders don't ask themselves whether they're default
alive or default dead: they assume it will be easy to raise more money. But
that assumption is often false, and worse still, the more you depend on it,
the falser it becomes.
Maybe it will help to separate facts from hopes. Instead of thinking of the
future with vague optimism, explicitly separate the components. Say "We're
default dead, but we're counting on investors to save us." Maybe as you say
that, it will set off the same alarms in your head that it does in mine. And
if you set off the alarms sufficiently early, you may be able to avoid the
fatal pinch.
It would be safe to be default dead if you could count on investors saving
you. As a rule their interest is a function of growth. If you have steep
revenue growth, say over 5x a year, you can start to count on investors being
interested even if you're not profitable. But investors are so fickle that
you can never do more than start to count on them. Sometimes something about
your business will spook investors even if your growth is great. So no matter
how good your growth is, you can never safely treat fundraising as more than a
plan A. You should always have a plan B as well: you should know (as in write
down) precisely what you'll need to do to survive if you can't raise more
money, and precisely when you'll have to switch to plan B if plan A isn't
working.
In any case, growing fast versus operating cheaply is far from the sharp
dichotomy many founders assume it to be. In practice there is surprisingly
little connection between how much a startup spends and how fast it grows.
When a startup grows fast, it's usually because the product hits a nerve, in
the sense of hitting some big need straight on. When a startup spends a lot,
it's usually because the product is expensive to develop or sell, or simply
because they're wasteful.
If you're paying attention, you'll be asking at this point not just how to
avoid the fatal pinch, but how to avoid being default dead. That one is easy:
don't hire too fast. Hiring too fast is by far the biggest killer of startups
that raise money.
Founders tell themselves they need to hire in order to grow. But most err on
the side of overestimating this need rather than underestimating it. Why?
Partly because there's so much work to do. Naive founders think that if they
can just hire enough people, it will all get done. Partly because successful
startups have lots of employees, so it seems like that's what one does in
order to be successful. In fact the large staffs of successful startups are
probably more the effect of growth than the cause. And partly because when
founders have slow growth they don't want to face what is usually the real
reason: the product is not appealing enough.
Plus founders who've just raised money are often encouraged to overhire by the
VCs who funded them. Kill-or-cure strategies are optimal for VCs because
they're protected by the portfolio effect. VCs want to blow you up, in one
sense of the phrase or the other. But as a founder your incentives are
different. You want above all to survive.
Here's a common way startups die. They make something moderately appealing and
have decent initial growth. They raise their first round fairly easily,
because the founders seem smart and the idea sounds plausible. But because the
product is only moderately appealing, growth is ok but not great. The founders
convince themselves that hiring a bunch of people is the way to boost growth.
Their investors agree. But (because the product is only moderately appealing)
the growth never comes. Now they're rapidly running out of runway. They hope
further investment will save them. But because they have high expenses and
slow growth, they're now unappealing to investors. They're unable to raise
more, and the company dies.
What the company should have done is address the fundamental problem: that the
product is only moderately appealing. Hiring people is rarely the way to fix
that. More often than not it makes it harder. At this early stage, the product
needs to evolve more than to be "built out," and that's usually easier with
fewer people.
Asking whether you're default alive or default dead may save you from this.
Maybe the alarm bells it sets off will counteract the forces that push you to
overhire. Instead you'll be compelled to seek growth in other ways. For
example, by _doing things that don't scale_, or by redesigning the product in
the way only founders can. And for many if not most startups, these paths to
growth will be the ones that actually work.
Airbnb waited 4 months after raising money at the end of Y Combinator before
they hired their first employee. In the meantime the founders were terribly
overworked. But they were overworked evolving Airbnb into the astonishingly
successful organism it is now.
---
December 2019
Before I had kids, I was afraid of having kids. Up to that point I felt about
kids the way the young Augustine felt about living virtuously. I'd have been
sad to think I'd never have children. But did I want them now? No.
If I had kids, I'd become a parent, and parents, as I'd known since I was a
kid, were uncool. They were dull and responsible and had no fun. And while
it's not surprising that kids would believe that, to be honest I hadn't seen
much as an adult to change my mind. Whenever I'd noticed parents with kids,
the kids seemed to be terrors, and the parents pathetic harried creatures,
even when they prevailed.
When people had babies, I congratulated them enthusiastically, because that
seemed to be what one did. But I didn't feel it at all. "Better you than me,"
I was thinking.
Now when people have babies I congratulate them enthusiastically and I mean
it. Especially the first one. I feel like they just got the best gift in the
world.
What changed, of course, is that I had kids. Something I dreaded turned out to
be wonderful.
Partly, and I won't deny it, this is because of serious chemical changes that
happened almost instantly when our first child was born. It was like someone
flipped a switch. I suddenly felt protective not just toward our child, but
toward all children. As I was driving my wife and new son home from the
hospital, I approached a crosswalk full of pedestrians, and I found myself
thinking "I have to be really careful of all these people. Every one of them
is someone's child!"
So to some extent you can't trust me when I say having kids is great. To some
extent I'm like a religious cultist telling you that you'll be happy if you
join the cult too, but only because joining the cult will alter your mind in
a way that will make you happy to be a cult member.
But not entirely. There were some things about having kids that I clearly got
wrong before I had them.
For example, there was a huge amount of selection bias in my observations of
parents and children. Some parents may have noticed that I wrote "Whenever I'd
noticed parents with kids." Of course the times I noticed kids were when
things were going wrong. I only noticed them when they made noise. And where
was I when I noticed them? Ordinarily I never went to places with kids, so the
only times I encountered them were in shared bottlenecks like airplanes. Which
is not exactly a representative sample. Flying with a toddler is something
very few parents enjoy.
What I didn't notice, because they tend to be much quieter, were all the great
moments parents had with kids. People don't talk about these much (the magic is hard to put into words, and all other parents know about them anyway), but
one of the great things about having kids is that there are so many times when
you feel there is nowhere else you'd rather be, and nothing else you'd rather
be doing. You don't have to be doing anything special. You could just be going
somewhere together, or putting them to bed, or pushing them on the swings at
the park. But you wouldn't trade these moments for anything. One doesn't tend
to associate kids with peace, but that's what you feel. You don't need to look
any further than where you are right now.
Before I had kids, I had moments of this kind of peace, but they were rarer.
With kids it can happen several times a day.
My other source of data about kids was my own childhood, and that was
similarly misleading. I was pretty bad, and was always in trouble for
something or other. So it seemed to me that parenthood was essentially law
enforcement. I didn't realize there were good times too.
I remember my mother telling me once when I was about 30 that she'd really
enjoyed having me and my sister. My god, I thought, this woman is a saint. She
not only endured all the pain we subjected her to, but actually enjoyed it?
Now I realize she was simply telling the truth.
She said that one reason she liked having us was that we'd been interesting to
talk to. That took me by surprise when I had kids. You don't just love them.
They become your friends too. They're really interesting. And while I admit
small children are disastrously fond of repetition (anything worth doing once
is worth doing fifty times) it's often genuinely fun to play with them. That
surprised me too. Playing with a 2 year old was fun when I was 2 and
definitely not fun when I was 6. Why would it become fun again later? But it
does.
There are of course times that are pure drudgery. Or worse still, terror.
Having kids is one of those intense types of experience that are hard to
imagine unless you've had them. But it is not, as I implicitly believed before
having kids, simply your DNA heading for the lifeboats.
Some of my worries about having kids were right, though. They definitely make
you less productive. I know having kids makes some people get their act
together, but if your act was already together, you're going to have less time
to do it in. In particular, you're going to have to work to a schedule. Kids
have schedules. I'm not sure if it's because that's how kids are, or because
it's the only way to integrate their lives with adults', but once you have
kids, you tend to have to work on their schedule.
You will have chunks of time to work. But you can't let work spill
promiscuously through your whole life, like I used to before I had kids.
You're going to have to work at the same time every day, whether inspiration
is flowing or not, and there are going to be times when you have to stop, even
if it is.
I've been able to adapt to working this way. Work, like love, finds a way. If
there are only certain times it can happen, it happens at those times. So
while I don't get as much done as before I had kids, I get enough done.
I hate to say this, because being ambitious has always been a part of my
identity, but having kids may make one less ambitious. It hurts to see that
sentence written down. I squirm to avoid it. But if there weren't something
real there, why would I squirm? The fact is, once you have kids, you're
probably going to care more about them than you do about yourself. And
attention is a zero-sum game. Only one idea at a time can be the _top idea in
your mind_. Once you have kids, it will often be your kids, and that means it
will less often be some project you're working on.
I have some hacks for sailing close to this wind. For example, when I write
essays, I think about what I'd want my kids to know. That drives me to get
things right. And when I was writing _Bel_, I told my kids that once I
finished it I'd take them to Africa. When you say that sort of thing to a
little kid, they treat it as a promise. Which meant I had to finish or I'd be
taking away their trip to Africa. Maybe if I'm really lucky such tricks could
put me net ahead. But the wind is there, no question.
On the other hand, what kind of wimpy ambition do you have if it won't survive
having kids? Do you have so little to spare?
And while having kids may be warping my present judgement, it hasn't
overwritten my memory. I remember perfectly well what life was like before.
Well enough to miss some things a lot, like the ability to take off for some
other country at a moment's notice. That was so great. Why did I never do
that?
See what I did there? The fact is, most of the freedom I had before kids, I
never used. I paid for it in loneliness, but I never used it.
I had plenty of happy times before I had kids. But if I count up happy
moments, not just potential happiness but actual happy moments, there are more
after kids than before. Now I practically have it on tap, almost any bedtime.
People's experiences as parents vary a lot, and I know I've been lucky. But I
think the worries I had before having kids must be pretty common, and judging
by other parents' faces when they see their kids, so must the happiness that
kids bring.
**Note**
Adults are sophisticated enough to see 2 year olds for the fascinatingly
complex characters they are, whereas to most 6 year olds, 2 year olds are just
defective 6 year olds.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this.
---
December 2019
The most damaging thing you learned in school wasn't something you learned in
any specific class. It was learning to get good grades.
When I was in college, a particularly earnest philosophy grad student once
told me that he never cared what grade he got in a class, only what he learned
in it. This stuck in my mind because it was the only time I ever heard anyone
say such a thing.
For me, as for most students, the measurement of what I was learning
completely dominated actual learning in college. I was fairly earnest; I was
genuinely interested in most of the classes I took, and I worked hard. And yet
I worked by far the hardest when I was studying for a test.
In theory, tests are merely what their name implies: tests of what you've
learned in the class. In theory you shouldn't have to prepare for a test in a
class any more than you have to prepare for a blood test. In theory you learn
from taking the class, from going to the lectures and doing the reading and/or
assignments, and the test that comes afterward merely measures how well you
learned.
In practice, as almost everyone reading this will know, things are so
different that hearing this explanation of how classes and tests are meant to
work is like hearing the etymology of a word whose meaning has changed
completely. In practice, the phrase "studying for a test" was almost
redundant, because that was when one really studied. The difference between
diligent and slack students was that the former studied hard for tests and the
latter didn't. No one was pulling all-nighters two weeks into the semester.
Even though I was a diligent student, almost all the work I did in school was
aimed at getting a good grade on something.
To many people, it would seem strange that the preceding sentence has a
"though" in it. Aren't I merely stating a tautology? Isn't that what a
diligent student is, a straight-A student? That's how deeply the conflation of
learning with grades has infused our culture.
Is it so bad if learning is conflated with grades? Yes, it is bad. And it
wasn't till decades after college, when I was running Y Combinator, that I
realized how bad it is.
I knew of course when I was a student that studying for a test is far from
identical with actual learning. At the very least, you don't retain knowledge
you cram into your head the night before an exam. But the problem is worse
than that. The real problem is that most tests don't come close to measuring
what they're supposed to.
If tests truly were tests of learning, things wouldn't be so bad. Getting good
grades and learning would converge, just a little late. The problem is that
nearly all tests given to students are terribly hackable. Most people who've
gotten good grades know this, and know it so well they've ceased even to
question it. You'll see when you realize how naive it sounds to act otherwise.
Suppose you're taking a class on medieval history and the final exam is coming
up. The final exam is supposed to be a test of your knowledge of medieval
history, right? So if you have a couple days between now and the exam, surely
the best way to spend the time, if you want to do well on the exam, is to read
the best books you can find about medieval history. Then you'll know a lot
about it, and do well on the exam.
No, no, no, experienced students are saying to themselves. If you merely read
good books on medieval history, most of the stuff you learned wouldn't be on
the test. It's not good books you want to read, but the lecture notes and
assigned reading in this class. And even most of that you can ignore, because
you only have to worry about the sort of thing that could turn up as a test
question. You're looking for sharply-defined chunks of information. If one of
the assigned readings has an interesting digression on some subtle point, you
can safely ignore that, because it's not the sort of thing that could be
turned into a test question. But if the professor tells you that there were
three underlying causes of the Schism of 1378, or three main consequences of
the Black Death, you'd better know them. And whether they were in fact the
causes or consequences is beside the point. For the purposes of this class
they are.
At a university there are often copies of old exams floating around, and these
narrow still further what you have to learn. As well as learning what kind of
questions this professor asks, you'll often get actual exam questions. Many
professors re-use them. After teaching a class for 10 years, it would be hard
not to, at least inadvertently.
In some classes, your professor will have had some sort of political axe to
grind, and if so you'll have to grind it too. The need for this varies. In
classes in math or the hard sciences or engineering it's rarely necessary, but
at the other end of the spectrum there are classes where you couldn't get a
good grade without it.
Getting a good grade in a class on x is so different from learning a lot about
x that you have to choose one or the other, and you can't blame students if
they choose grades. Everyone judges them by their grades: graduate programs,
employers, scholarships, even their own parents.
I liked learning, and I really enjoyed some of the papers and programs I wrote
in college. But did I ever, after turning in a paper in some class, sit down
and write another just for fun? Of course not. I had things due in other
classes. If it ever came to a choice of learning or grades, I chose grades. I
hadn't come to college to do badly.
Anyone who cares about getting good grades has to play this game, or they'll
be surpassed by those who do. And at elite universities, that means nearly
everyone, since someone who didn't care about getting good grades probably
wouldn't be there in the first place. The result is that students compete to
maximize the difference between learning and getting good grades.
Why are tests so bad? More precisely, why are they so hackable? Any
experienced programmer could answer that. How hackable is software whose
author hasn't paid any attention to preventing it from being hacked? Usually
it's as porous as a colander.
Hackable is the default for any test imposed by an authority. The reason the
tests you're given are so consistently bad (so consistently far from measuring what they're supposed to measure) is simply that the people
creating them haven't made much effort to prevent them from being hacked.
But you can't blame teachers if their tests are hackable. Their job is to
teach, not to create unhackable tests. The real problem is grades, or more
precisely, that grades have been overloaded. If grades were merely a way for
teachers to tell students what they were doing right and wrong, like a coach
giving advice to an athlete, students wouldn't be tempted to hack tests. But
unfortunately after a certain age grades become more than advice. After a
certain age, whenever you're being taught, you're usually also being judged.
I've used college tests as an example, but those are actually the least
hackable. All the tests most students take their whole lives are at least as
bad, including, most spectacularly of all, the test that gets them into
college. If getting into college were merely a matter of having the quality of
one's mind measured by admissions officers the way scientists measure the mass
of an object, we could tell teenage kids "learn a lot" and leave it at that.
You can tell how bad college admissions are, as a test, from how unlike high
school that sounds. In practice, the freakishly specific nature of the stuff
ambitious kids have to do in high school is directly proportionate to the
hackability of college admissions. The classes you don't care about that are
mostly memorization, the random "extracurricular activities" you have to
participate in to show you're "well-rounded," the standardized tests as
artificial as chess, the "essay" you have to write that's presumably meant to
hit some very specific target, but you're not told what.
As well as being bad in what it does to kids, this test is also bad in the
sense of being very hackable. So hackable that whole industries have grown up
to hack it. This is the explicit purpose of test-prep companies and admissions
counsellors, but it's also a significant part of the function of private
schools.
Why is this particular test so hackable? I think because of what it's
measuring. Although the popular story is that the way to get into a good
college is to be really smart, admissions officers at elite colleges neither
are, nor claim to be, looking only for that. What are they looking for?
They're looking for people who are not simply smart, but admirable in some
more general sense. And how is this more general admirableness measured? The
admissions officers feel it. In other words, they accept who they like.
So what college admissions is a test of is whether you suit the taste of some
group of people. Well, of course a test like that is going to be hackable. And
because it's both very hackable and there's (thought to be) a lot at stake,
it's hacked like nothing else. That's why it distorts your life so much for so
long.
It's no wonder high school students often feel alienated. The shape of their
lives is completely artificial.
But wasting your time is not the worst thing the educational system does to
you. The worst thing it does is to train you that the way to win is by hacking
bad tests. This is a much subtler problem that I didn't recognize until I saw
it happening to other people.
When I started advising startup founders at Y Combinator, especially young
ones, I was puzzled by the way they always seemed to make things
overcomplicated. How, they would ask, do you raise money? What's the trick for
making venture capitalists want to invest in you? The best way to make VCs
want to invest in you, I would explain, is to actually be a good investment.
Even if you could trick VCs into investing in a bad startup, you'd be tricking
yourselves too. You're investing time in the same company you're asking them
to invest money in. If it's not a good investment, why are you even doing it?
Oh, they'd say, and then after a pause to digest this revelation, they'd ask:
What makes a startup a good investment?
So I would explain that what makes a startup promising, not just in the eyes
of investors but in fact, is _growth_. Ideally in revenue, but failing that in
usage. What they needed to do was get lots of users.
How does one get lots of users? They had all kinds of ideas about that. They
needed to do a big launch that would get them "exposure." They needed
influential people to talk about them. They even knew they needed to launch on
a Tuesday, because that's when one gets the most attention.
No, I would explain, that is not how to get lots of users. The way you get
lots of users is to make the product really great. Then people will not only
use it but recommend it to their friends, so your growth will be exponential
once you _get it started_.
At this point I've told the founders something you'd think would be completely
obvious: that they should make a good company by making a good product. And
yet their reaction would be something like the reaction many physicists must
have had when they first heard about the theory of relativity: a mixture of
astonishment at its apparent genius, combined with a suspicion that anything
so weird couldn't possibly be right. Ok, they would say, dutifully. And could
you introduce us to such-and-such influential person? And remember, we want to
launch on Tuesday.
It would sometimes take founders years to grasp these simple lessons. And not
because they were lazy or stupid. They just seemed blind to what was right in
front of them.
Why, I would ask myself, do they always make things so complicated? And then
one day I realized this was not a rhetorical question.
Why did founders tie themselves in knots doing the wrong things when the
answer was right in front of them? Because that was what they'd been trained
to do. Their education had taught them that the way to win was to hack the
test. And without even telling them they were being trained to do this. The
younger ones, the recent graduates, had never faced a non-artificial test.
They thought this was just how the world worked: that the first thing you did,
when facing any kind of challenge, was to figure out what the trick was for
hacking the test. That's why the conversation would always start with how to
raise money, because that read as the test. It came at the end of YC. It had
numbers attached to it, and higher numbers seemed to be better. It must be the
test.
There are certainly big chunks of the world where the way to win is to hack
the test. This phenomenon isn't limited to schools. And some people, either
due to ideology or ignorance, claim that this is true of startups too. But it
isn't. In fact, one of the most striking things about startups is the degree
to which you win by simply doing good work. There are edge cases, as there are
in anything, but in general you win by getting users, and what users care
about is whether the product does what they want.
Why did it take me so long to understand why founders made startups
overcomplicated? Because I hadn't realized explicitly that schools train us to
win by hacking bad tests. And not just them, but me! I'd been trained to hack
bad tests too, and hadn't realized it till decades later.
I had lived as if I realized it, but without knowing why. For example, I had
avoided working for big companies. But if you'd asked why, I'd have said it
was because they were bogus, or bureaucratic. Or just yuck. I never understood
how much of my dislike of big companies was due to the fact that you win by
hacking bad tests.
Similarly, the fact that the tests were unhackable was a lot of what attracted
me to startups. But again, I hadn't realized that explicitly.
I had in effect achieved by successive approximations something that may have
a closed-form solution. I had gradually undone my training in hacking bad
tests without knowing I was doing it. Could someone coming out of school
banish this demon just by knowing its name, and saying begone? It seems worth
trying.
Merely talking explicitly about this phenomenon is likely to make things
better, because much of its power comes from the fact that we take it for
granted. After you've noticed it, it seems the elephant in the room, but it's
a pretty well camouflaged elephant. The phenomenon is so old, and so
pervasive. And it's simply the result of neglect. No one meant things to be
this way. This is just what happens when you combine learning with grades,
competition, and the naive assumption of unhackability.
It was mind-blowing to realize that two of the things I'd puzzled about the
most (the bogusness of high school, and the difficulty of getting founders to see the obvious) both had the same cause. It's rare for such a big block to
slide into place so late.
Usually when that happens it has implications in a lot of different areas, and
this case seems no exception. For example, it suggests both that education
could be done better, and how you might fix it. But it also suggests a
potential answer to the question all big companies seem to have: how can we be
more like a startup? I'm not going to chase down all the implications now.
What I want to focus on here is what it means for individuals.
To start with, it means that most ambitious kids graduating from college have
something they may want to unlearn. But it also changes how you look at the
world. Instead of looking at all the different kinds of work people do and
thinking of them vaguely as more or less appealing, you can now ask a very
specific question that will sort them in an interesting way: to what extent do
you win at this kind of work by hacking bad tests?
It would help if there was a way to recognize bad tests quickly. Is there a
pattern here? It turns out there is.
Tests can be divided into two kinds: those that are imposed by authorities,
and those that aren't. Tests that aren't imposed by authorities are inherently
unhackable, in the sense that no one is claiming they're tests of anything
more than they actually test. A football match, for example, is simply a test
of who wins, not which team is better. You can tell that from the fact that
commentators sometimes say afterward that the better team lost. Whereas tests
imposed by authorities are usually proxies for something else. A test in a
class is supposed to measure not just how well you did on that particular
test, but how much you learned in the class. While tests that aren't imposed
by authorities are inherently unhackable, those imposed by authorities have to
be made unhackable. Usually they aren't. So as a first approximation, bad
tests are roughly equivalent to tests imposed by authorities.
You might actually like to win by hacking bad tests. Presumably some people
do. But I bet most people who find themselves doing this kind of work don't
like it. They just take it for granted that this is how the world works,
unless you want to drop out and be some kind of hippie artisan.
I suspect many people implicitly assume that working in a field with bad tests
is the price of making lots of money. But that, I can tell you, is false. It
used to be true. In the mid-twentieth century, when the economy was _composed
of oligopolies_, the only way to the top was by playing their game. But it's
not true now. There are now ways to get rich by doing good work, and that's
part of the reason people are so much more excited about getting rich than
they used to be. When I was a kid, you could either become an engineer and
make cool things, or make lots of money by becoming an "executive." Now you
can make lots of money by making cool things.
Hacking bad tests is becoming less important as the link between work and
authority erodes. The erosion of that link is one of the most important trends
happening now, and we see its effects in almost every kind of work people do.
Startups are one of the most visible examples, but we see much the same thing
in writing. Writers no longer have to submit to publishers and editors to
reach readers; now they can go direct.
The more I think about this question, the more optimistic I get. This seems
one of those situations where we don't realize how much something was holding
us back until it's eliminated. And I can foresee the whole bogus edifice
crumbling. Imagine what happens as more and more people start to ask
themselves if they want to win by hacking bad tests, and decide that they
don't. The kinds of work where you win by hacking bad tests will be starved of
talent, and the kinds where you win by doing good work will see an influx of
the most ambitious people. And as hacking bad tests shrinks in importance,
education will evolve to stop training us to do it. Imagine what the world
could look like if that happened.
This is not just a lesson for individuals to unlearn, but one for society to
unlearn, and we'll be amazed at the energy that's liberated when we do.
* * *
August 2015
I recently got an email from a founder that helped me understand something
important: why it's safe for startup founders to be nice people.
I grew up with a cartoon idea of a very successful businessman (in the cartoon
it was always a man): a rapacious, cigar-smoking, table-thumping guy in his
fifties who wins by exercising power, and isn't too fussy about how. As I've
written before, one of the things that has surprised me most about startups is
how few of the most successful founders are like that. Maybe successful people
in other industries are; I don't know; but not startup founders.
I knew this empirically, but I never saw the math of why till I got this
founder's email. In it he said he worried that he was fundamentally soft-
hearted and tended to give away too much for free. He thought perhaps he
needed "a little dose of sociopath-ness."
I told him not to worry about it, because so long as he built something good
enough to spread by word of mouth, he'd have a superlinear growth curve. If he
was bad at extracting money from people, at worst this curve would be some
constant multiple less than 1 of what it might have been. But a constant
multiple of any curve is exactly the same shape. The numbers on the Y axis are
smaller, but the curve is just as steep, and when anything grows at the rate
of a successful startup, the Y axis will take care of itself.
Some examples will make this clear. Suppose your company is making $1000 a
month now, and you've made something so great that it's growing at 5% a week.
Two years from now, you'll be making about $160k a month.
Now suppose you're so un-rapacious that you only extract half as much from
your users as you could. That means two years later you'll be making $80k a
month instead of $160k. How far behind are you? How long will it take to catch
up with where you'd have been if you were extracting every penny? A mere 15
weeks. After two years, the un-rapacious founder is only 3.5 months behind the
rapacious one.
If you're going to optimize a number, the one to choose is your growth rate.
Suppose as before that you only extract half as much from users as you could,
but that you're able to grow 6% a week instead of 5%. Now how are you doing
compared to the rapacious founder after two years? You're already ahead—$214k
a month versus $160k—and pulling away fast. In another year you'll be making
$4.4 million a month to the rapacious founder's $2 million.
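The arithmetic above is easy to check. Here's a minimal sketch in Python that
reproduces the essay's own numbers (the $1000 starting revenue and the 5% and
6% weekly growth rates come from the examples above; nothing else is assumed):

```python
import math

start = 1000    # dollars per month at the outset
weeks = 104     # two years

# Rapacious founder: full extraction, 5% weekly growth.
rapacious = start * 1.05 ** weeks             # ~$160k/month

# Nice founder: extracts half as much, same 5% growth.
nice_half = 0.5 * start * 1.05 ** weeks       # ~$80k/month

# Weeks needed to make up a constant factor of 2 at 5%/week.
catch_up = math.log(2) / math.log(1.05)       # ~14.2 weeks

# Nice founder whose product grows 6%/week instead of 5%.
nice_faster = 0.5 * start * 1.06 ** weeks     # ~$214k/month

print(f"rapacious:        ${rapacious:,.0f}/month")
print(f"nice (half take): ${nice_half:,.0f}/month")
print(f"catch-up time:    {catch_up:.1f} weeks")
print(f"nice (6% growth): ${nice_faster:,.0f}/month")
```

Run it and you see the shape of the argument: the 2x difference in extraction
costs a fixed 14 weeks or so, while the extra point of weekly growth compounds
into more than the whole difference.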
Obviously one case where it would help to be rapacious is when growth depends
on that. What makes startups different is that usually it doesn't. Startups
usually win by making something so great that people recommend it to their
friends. And being rapacious not only doesn't help you do that, but probably
hurts.
The reason startup founders can safely be nice is that making great things is
compounded, and rapacity isn't.
So if you're a founder, here's a deal you can make with yourself that will
both make you happy and make your company successful. Tell yourself you can be
as nice as you want, so long as you work hard on your growth rate to
compensate. Most successful startups make that tradeoff unconsciously. Maybe
if you do it consciously you'll do it even better.
* * *
March 2012
Y Combinator's 7th birthday was March 11. As usual we were so busy we didn't
notice till a few days after. I don't think we've ever managed to remember our
birthday on our birthday.

On March 11, 2005, Jessica and I were walking home
from dinner in Harvard Square. Jessica was working at an investment bank at
the time, but she didn't like it much, so she had interviewed for a job as
director of marketing at a Boston VC fund. The VC fund was doing what now
seems a comically familiar thing for a VC fund to do: taking a long time to
make up their mind. Meanwhile I had been telling Jessica all the things they
should change about the VC business essentially the ideas now underlying Y
Combinator: investors should be making more, smaller investments, they should
be funding hackers instead of suits, they should be willing to fund younger
founders, etc.

At the time I had been thinking about doing some angel
investing. I had just given a talk to the undergraduate computer club at
Harvard about how to start a startup, and it hit me afterward that although I
had always meant to do angel investing, 7 years had now passed since I got
enough money to do it, and I still hadn't started. I had also been thinking
about ways to work with Robert Morris and Trevor Blackwell again. A few hours
before I had sent them an email trying to figure out what we could do
together.

Between Harvard Square and my house the idea gelled. We'd start our
own investment firm and Jessica could work for that instead. As we turned onto
Walker Street we decided to do it. I agreed to put $100k into the new fund and
Jessica agreed to quit her job to work for it. Over the next couple days I
recruited Robert and Trevor, who put in another $50k each. So YC started with
$200k.

Jessica was so happy to be able to quit her job and start her own
company that I took her picture when we got home.

The company wasn't called Y
Combinator yet. At first we called it Cambridge Seed. But that name never saw
the light of day, because by the time we announced it a few days later, we'd
changed the name to Y Combinator. We realized early on that what we were doing
could be national in scope and we didn't want a name that tied us to one
place.

Initially we only had part of the idea. We were going to do seed
funding with standardized terms. Before YC, seed funding was very haphazard.
You'd get that first $10k from your friend's rich uncle. The deal terms were
often a disaster; often neither the investor nor the founders nor the lawyer
knew what the documents should look like. Facebook's early history as a
Florida LLC shows how random things could be in those days. We were going to
be something there had not been before: a standard source of seed funding.

We modelled YC on the seed funding we ourselves had taken when we started Viaweb.
We started Viaweb with $10k we got from our friend Julian Weber, the husband
of Idelle Weber, whose painting class I took as a grad student at Harvard.
Julian knew about business, but you would not describe him as a suit. Among
other things he'd been president of the _National Lampoon_. He was also a
lawyer, and got all our paperwork set up properly. In return for $10k, getting
us set up as a company, teaching us what business was about, and remaining
calm in times of crisis, Julian got 10% of Viaweb. I remember thinking once
what a good deal Julian got. And then a second later I realized that without
Julian, Viaweb would never have made it. So even though it was a good deal for
him, it was a good deal for us too. That's why I knew there was room for
something like Y Combinator.

Initially we didn't have what turned out to be
the most important idea: funding startups synchronously, instead of
asynchronously as it had always been done before. Or rather we had the idea,
but we didn't realize its significance. We decided very early that the first
thing we'd do would be to fund a bunch of startups over the coming summer. But
we didn't realize initially that this would be the way we'd do all our
investing. The reason we began by funding a bunch of startups at once was not
that we thought it would be a better way to fund startups, but simply because
we wanted to learn how to be angel investors, and a summer program for
undergrads seemed the fastest way to do it. No one takes summer jobs that
seriously. The opportunity cost for a bunch of undergrads to spend a summer
working on startups was low enough that we wouldn't feel guilty encouraging
them to do it.

We knew students would already be making plans for the summer,
so we did what we're always telling startups to do: we launched fast. Here are
the initial announcement and description of what was at the time called the
Summer Founders Program.

We got lucky in that the length and structure of a
summer program turns out to be perfect for what we do. The structure of the YC
cycle is still almost identical to what it was that first summer.

We also got
lucky in who the first batch of founders were. We never expected to make any
money from that first batch. We thought of the money we were investing as a
combination of an educational expense and a charitable donation. But the
founders in the first batch turned out to be surprisingly good. And great
people too. We're still friends with a lot of them today.

It's hard for
people to realize now how inconsequential YC seemed at the time. I can't blame
people who didn't take us seriously, because we ourselves didn't take that
first summer program seriously in the very beginning. But as the summer
progressed we were increasingly impressed by how well the startups were doing.
Other people started to be impressed too. Jessica and I invented a term, "the
Y Combinator effect," to describe the moment when the realization hit someone
that YC was not totally lame. When people came to YC to speak at the dinners
that first summer, they came in the spirit of someone coming to address a Boy
Scout troop. By the time they left the building they were all saying some
variant of "Wow, these companies might actually succeed." Now YC is well
enough known that people are no longer surprised when the companies we fund
are legit, but it took a while for reputation to catch up with reality. That's
one of the reasons we especially like funding ideas that might be dismissed as
"toys" because YC itself was dismissed as one initially. When we saw how
well it worked to fund companies synchronously, we decided we'd keep doing
that. We'd fund two batches of startups a year.

We funded the second batch in Silicon Valley. That was a last-minute decision.
In retrospect I think what
pushed me over the edge was going to Foo Camp that fall. The density of
startup people in the Bay Area was so much greater than in Boston, and the
weather was so nice. I remembered that from living there in the 90s. Plus I
didn't want someone else to copy us and describe it as the Y Combinator of
Silicon Valley. I wanted YC to be the Y Combinator of Silicon Valley. So doing
the winter batch in California seemed like one of those rare cases where the
self-indulgent choice and the ambitious one were the same.

If we'd had enough
time to do what we wanted, Y Combinator would have been in Berkeley. That was
our favorite part of the Bay Area. But we didn't have time to get a building
in Berkeley. We didn't have time to get our own building anywhere. The only
way to get enough space in time was to convince Trevor to let us take over
part of his (as it then seemed) giant building in Mountain View.

Yet again we
lucked out, because Mountain View turned out to be the ideal place to put
something like YC. But even then we barely made it. The first dinner in
California, we had to warn all the founders not to touch the walls, because
the paint was still wet.
* * *
September 2010
The reason startups have been using more convertible notes in angel rounds is
that they make deals close faster. By making it easier for startups to give
different prices to different investors, they help them break the sort of
deadlock that happens when investors all wait to see who else is going to
invest.
By far the biggest influence on investors' opinions of a startup is the
opinion of other investors. There are very, very few who simply decide for
themselves. Any startup founder can tell you the most common question they
hear from investors is not about the founders or the product, but "who else is
investing?"
That tends to produce deadlocks. Raising an old-fashioned fixed-size equity
round can take weeks, because all the angels sit around waiting for the others
to commit, like competitors in a bicycle sprint who deliberately ride slowly
at the start so they can follow whoever breaks first.
Convertible notes let startups beat such deadlocks by rewarding investors
willing to move first with lower (effective) valuations. Which they deserve
because they're taking more risk. It's much safer to invest in a startup Ron
Conway has already invested in; someone who comes after him should pay a
higher price.
The reason convertible notes allow more flexibility in price is that valuation
caps aren't actual valuations, and notes are cheap and easy to do. So you can
do high-resolution fundraising: if you wanted you could have a separate note
with a different cap for each investor.
That cap need not simply rise monotonically. A startup could also give better
deals to investors they expected to help them most. The point is simply that
different investors, whether because of the help they offer or their
willingness to commit, have different values for startups, and their terms
should reflect that.
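To make the mechanics concrete, here's a simplified sketch of how per-investor
caps translate into different effective prices. The investors, amounts, and
caps are hypothetical, and real conversions also involve discounts, interest,
and dilution that this ignores:

```python
# Hypothetical notes: each investor gets a separate cap.
notes = [
    ("first-money angel", 25_000,  3_000_000),   # boldest, lowest cap
    ("second angel",      25_000,  5_000_000),
    ("late follower",     25_000, 10_000_000),
]

priced_round_valuation = 8_000_000  # the eventual equity round

for name, amount, cap in notes:
    # A capped note converts at the lower of its cap and the round valuation.
    conversion_val = min(cap, priced_round_valuation)
    print(f"{name}: converts at ${conversion_val:,}, "
          f"~{amount / conversion_val:.2%} of the company")
```

In this example the first-money angel converts at $3M and gets more than twice
the equity per dollar of the late follower, who ends up paying the round
price. That's the reward for moving first.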
Different terms for different investors is clearly the way of the future.
Markets always evolve toward higher resolution. You may not need to use
convertible notes to do it. With sufficiently lightweight standardized equity
terms (and some changes in investors' and lawyers' expectations about equity
rounds) you might be able to do the same thing with equity instead of debt.
Either would be fine with startups, so long as they can easily change their
valuation.
Deadlocks weren't the only problem with fixed-size equity rounds. Another was
that startups had to decide in advance how much to raise. I think it's a
mistake for a startup to fix upon a specific number. If investors are easily
convinced, the startup should raise more now, and if investors are skeptical,
the startup should take a smaller amount and use that to get the company to
the point where it's more convincing.
It's just not reasonable to expect startups to pick an optimal round size in
advance, because that depends on the reactions of investors, and those are
impossible to predict.
Fixed-size, multi-investor angel rounds are such a bad idea for startups that
one wonders why things were ever done that way. One possibility is that this
custom reflects the way investors like to collude when they can get away with
it. But I think the actual explanation is less sinister. I think angels (and
their lawyers) organized rounds this way in unthinking imitation of VC series
A rounds. In a series A, a fixed-size equity round with a lead makes sense,
because there is usually just one big investor, who is unequivocally the lead.
Fixed-size series A rounds already are high res. But the more investors you
have in a round, the less sense it makes for everyone to get the same price.
The most interesting question here may be what high res fundraising will do to
the world of investors. Bolder investors will now get rewarded with lower
prices. But more important, in a hits-driven business, is that they'll be able
to get into the deals they want. Whereas the "who else is investing?" type of
investors will not only pay higher prices, but may not be able to get into the
best deals at all.
**Thanks** to Immad Akhund, Sam Altman, John Bautista, Pete Koomen, Jessica
Livingston, Dan Siroker, Harj Taggar, and Fred Wilson for reading drafts of
this.
* * *
July 2009
The Segway hasn't delivered on its initial promise, to put it mildly. There
are several reasons why, but one is that people don't want to be seen riding
them. Someone riding a Segway looks like a dork.
My friend Trevor Blackwell built his own Segway, which we called the Segwell.
He also built a one-wheeled version, the Eunicycle, which looks exactly like a
regular unicycle till you realize the rider isn't pedaling. He has ridden them
both to downtown Mountain View to get coffee. When he rides the Eunicycle,
people smile at him. But when he rides the Segwell, they shout abuse from
their cars: "Too lazy to walk, ya fuckin homo?"
Why do Segways provoke this reaction? The reason you look like a dork riding a
Segway is that you look _smug_. You don't seem to be working hard enough.
Someone riding a motorcycle isn't working any harder. But because he's sitting
astride it, he seems to be making an effort. When you're riding a Segway
you're just standing there. And someone who's being whisked along while
seeming to do no work — someone in a sedan chair, for example — can't help but
look smug.
Try this thought experiment and it becomes clear: imagine something that
worked like the Segway, but that you rode with one foot in front of the other,
like a skateboard. That wouldn't seem nearly as uncool.
So there may be a way to capture more of the market Segway hoped to reach:
make a version that doesn't look so easy for the rider. It would also be
helpful if the styling was in the tradition of skateboards or bicycles rather
than medical devices.
Curiously enough, what got Segway into this problem was that the company was
itself a kind of Segway. It was too easy for them; they were too successful
raising money. If they'd had to grow the company gradually, by iterating
through several versions they sold to real users, they'd have learned pretty
quickly that people looked stupid riding them. Instead they had enough to work
in secret. They had focus groups aplenty, I'm sure, but they didn't have the
people yelling insults out of cars. So they never realized they were zooming
confidently down a blind alley.
* * *
February 2009
I finally realized today why politics and religion yield such uniquely useless
discussions.
As a rule, any mention of religion on an online forum degenerates into a
religious argument. Why? Why does this happen with religion and not with
Javascript or baking or other topics people talk about on forums?
What's different about religion is that people don't feel they need to have
any particular expertise to have opinions about it. All they need is strongly
held beliefs, and anyone can have those. No thread about Javascript will grow
as fast as one about religion, because people feel they have to be over some
threshold of expertise to post comments about that. But on religion everyone's
an expert.
Then it struck me: this is the problem with politics too. Politics, like
religion, is a topic where there's no threshold of expertise for expressing an
opinion. All you need is strong convictions.
Do religion and politics have something in common that explains this
similarity? One possible explanation is that they deal with questions that
have no definite answers, so there's no back pressure on people's opinions.
Since no one can be proven wrong, every opinion is equally valid, and sensing
this, everyone lets fly with theirs.
But this isn't true. There are certainly some political questions that have
definite answers, like how much a new government policy will cost. But the
more precise political questions suffer the same fate as the vaguer ones.
I think what religion and politics have in common is that they become part of
people's identity, and people can never have a fruitful argument about
something that's part of their identity. By definition they're partisan.
Which topics engage people's identity depends on the people, not the topic.
For example, a discussion about a battle that included citizens of one or more
of the countries involved would probably degenerate into a political argument.
But a discussion today about a battle that took place in the Bronze Age
probably wouldn't. No one would know what side to be on. So it's not politics
that's the source of the trouble, but identity. When people say a discussion
has degenerated into a religious war, what they really mean is that it has
started to be driven mostly by people's identities.
Because the point at which this happens depends on the people rather than the
topic, it's a mistake to conclude that because a question tends to provoke
religious wars, it must have no answer. For example, the question of the
relative merits of programming languages often degenerates into a religious
war, because so many programmers identify as X programmers or Y programmers.
This sometimes leads people to conclude the question must be unanswerable—that
all languages are equally good. Obviously that's false: anything else people
make can be well or badly designed; why should this be uniquely impossible for
programming languages? And indeed, you can have a fruitful discussion about
the relative merits of programming languages, so long as you exclude people
who respond from identity.
More generally, you can have a fruitful discussion about a topic only if it
doesn't engage the identities of any of the participants. What makes politics
and religion such minefields is that they engage so many people's identities.
But you could in principle have a useful conversation about them with some
people. And there are other topics that might seem harmless, like the relative
merits of Ford and Chevy pickup trucks, that you couldn't safely talk about
with others.
The most intriguing thing about this theory, if it's right, is that it
explains not merely which kinds of discussions to avoid, but how to have
better ideas. If people can't think clearly about anything that has become
part of their identity, then all other things being equal, the best plan is to
let as few things into your identity as possible.
Most people reading this will already be fairly tolerant. But there is a step
beyond thinking of yourself as x but tolerating y: not even to consider
yourself an x. The more labels you have for yourself, the dumber they make
you.
* * *
March 2008
The web is turning writing into a conversation. Twenty years ago, writers
wrote and readers read. The web lets readers respond, and increasingly they
do—in comment threads, on forums, and in their own blog posts.
Many who respond to something disagree with it. That's to be expected.
Agreeing tends to motivate people less than disagreeing. And when you agree
there's less to say. You could expand on something the author said, but he has
probably already explored the most interesting implications. When you disagree
you're entering territory he may not have explored.
The result is there's a lot more disagreeing going on, especially measured by
the word. That doesn't mean people are getting angrier. The structural change
in the way we communicate is enough to account for it. But though it's not
anger that's driving the increase in disagreement, there's a danger that the
increase in disagreement will make people angrier. Particularly online, where
it's easy to say things you'd never say face to face.
If we're all going to be disagreeing more, we should be careful to do it well.
What does it mean to disagree well? Most readers can tell the difference
between mere name-calling and a carefully reasoned refutation, but I think it
would help to put names on the intermediate stages. So here's an attempt at a
disagreement hierarchy:
**DH0. Name-calling.**
This is the lowest form of disagreement, and probably also the most common.
We've all seen comments like this:
> u r a fag!!!!!!!!!!
But it's important to realize that more articulate name-calling has just as
little weight. A comment like
> The author is a self-important dilettante.
is really nothing more than a pretentious version of "u r a fag."
**DH1. Ad Hominem.**
An ad hominem attack is not quite as weak as mere name-calling. It might
actually carry some weight. For example, if a senator wrote an article saying
senators' salaries should be increased, one could respond:
> Of course he would say that. He's a senator.
This wouldn't refute the author's argument, but it may at least be relevant to
the case. It's still a very weak form of disagreement, though. If there's
something wrong with the senator's argument, you should say what it is; and if
there isn't, what difference does it make that he's a senator?
Saying that an author lacks the authority to write about a topic is a variant
of ad hominem—and a particularly useless sort, because good ideas often come
from outsiders. The question is whether the author is correct or not. If his
lack of authority caused him to make mistakes, point those out. And if it
didn't, it's not a problem.
**DH2. Responding to Tone.**
The next level up we start to see responses to the writing, rather than the
writer. The lowest form of these is to disagree with the author's tone. E.g.
> I can't believe the author dismisses intelligent design in such a cavalier
> fashion.
Though better than attacking the author, this is still a weak form of
disagreement. It matters much more whether the author is wrong or right than
what his tone is. Especially since tone is so hard to judge. Someone who has a
chip on their shoulder about some topic might be offended by a tone that to
other readers seemed neutral.
So if the worst thing you can say about something is to criticize its tone,
you're not saying much. Is the author flippant, but correct? Better that than
grave and wrong. And if the author is incorrect somewhere, say where.
**DH3. Contradiction.**
In this stage we finally get responses to what was said, rather than how or by
whom. The lowest form of response to an argument is simply to state the
opposing case, with little or no supporting evidence.
This is often combined with DH2 statements, as in:
> I can't believe the author dismisses intelligent design in such a cavalier
> fashion. Intelligent design is a legitimate scientific theory.
Contradiction can sometimes have some weight. Sometimes merely seeing the
opposing case stated explicitly is enough to see that it's right. But usually
evidence will help.
**DH4. Counterargument.**
At level 4 we reach the first form of convincing disagreement:
counterargument. Forms up to this point can usually be ignored as proving
nothing. Counterargument might prove something. The problem is, it's hard to
say exactly what.
Counterargument is contradiction plus reasoning and/or evidence. When aimed
squarely at the original argument, it can be convincing. But unfortunately
it's common for counterarguments to be aimed at something slightly different.
More often than not, two people arguing passionately about something are
actually arguing about two different things. Sometimes they even agree with
one another, but are so caught up in their squabble they don't realize it.
There could be a legitimate reason for arguing against something slightly
different from what the original author said: when you feel they missed the
heart of the matter. But when you do that, you should say explicitly you're
doing it.
**DH5. Refutation.**
The most convincing form of disagreement is refutation. It's also the rarest,
because it's the most work. Indeed, the disagreement hierarchy forms a kind of
pyramid, in the sense that the higher you go the fewer instances you find.
To refute someone you probably have to quote them. You have to find a "smoking
gun," a passage in whatever you disagree with that you feel is mistaken, and
then explain why it's mistaken. If you can't find an actual quote to disagree
with, you may be arguing with a straw man.
While refutation generally entails quoting, quoting doesn't necessarily imply
refutation. Some writers quote parts of things they disagree with to give the
appearance of legitimate refutation, then follow with a response as low as DH3
or even DH0.
**DH6. Refuting the Central Point.**
The force of a refutation depends on what you refute. The most powerful form
of disagreement is to refute someone's central point.
Even as high as DH5 we still sometimes see deliberate dishonesty, as when
someone picks out minor points of an argument and refutes those. Sometimes the
spirit in which this is done makes it more of a sophisticated form of ad
hominem than actual refutation. For example, correcting someone's grammar, or
harping on minor mistakes in names or numbers. Unless the opposing argument
actually depends on such things, the only purpose of correcting them is to
discredit one's opponent.
Truly refuting something requires one to refute its central point, or at least
one of them. And that means one has to commit explicitly to what the central
point is. So a truly effective refutation would look like:
> The author's main point seems to be x. As he says:
>
>> <quotation>
>
> But this is wrong for the following reasons...
The quotation you point out as mistaken need not be the actual statement of
the author's main point. It's enough to refute something it depends upon.
**What It Means**
Now we have a way of classifying forms of disagreement. What good is it? One
thing the disagreement hierarchy _doesn't_ give us is a way of picking a
winner. DH levels merely describe the form of a statement, not whether it's
correct. A DH6 response could still be completely mistaken.
But while DH levels don't set a lower bound on the convincingness of a reply,
they do set an upper bound. A DH6 response might be unconvincing, but a DH2 or
lower response is always unconvincing.
The most obvious advantage of classifying the forms of disagreement is that it
will help people to evaluate what they read. In particular, it will help them
to see through intellectually dishonest arguments. An eloquent speaker or
writer can give the impression of vanquishing an opponent merely by using
forceful words. In fact that is probably the defining quality of a demagogue.
By giving names to the different forms of disagreement, we give critical
readers a pin for popping such balloons.
Such labels may help writers too. Most intellectual dishonesty is
unintentional. Someone arguing against the tone of something he disagrees with
may believe he's really saying something. Zooming out and seeing his current
position on the disagreement hierarchy may inspire him to try moving up to
counterargument or refutation.
But the greatest benefit of disagreeing well is not just that it will make
conversations better, but that it will make the people who have them happier.
If you study conversations, you find there is a lot more meanness down in DH1
than up in DH6. You don't have to be mean when you have a real point to make.
In fact, you don't want to. If you have something real to say, being mean just
gets in the way.
If moving up the disagreement hierarchy makes people less mean, that will make
most of them happier. Most people don't really enjoy being mean; they do it
because they can't help it.
**Thanks** to Trevor Blackwell and Jessica Livingston for reading drafts of
this.
**Related:** What You Can't Say, The Age of the Essay

* * *
October 2004
As E. B. White said, "good writing is rewriting." I didn't realize this when I
was in school. In writing, as in math and science, they only show you the
finished product. You don't see all the false starts. This gives students a
misleading view of how things get made.
Part of the reason it happens is that writers don't want people to see their
mistakes. But I'm willing to let people see an early draft if it will show how
much you have to rewrite to beat an essay into shape.
Below is the oldest version I can find of The Age of the Essay (probably the
second or third day), with text that ultimately survived in red and text that
later got deleted in gray. There seem to be several categories of cuts: things
I got wrong, things that seem like bragging, flames, digressions, stretches of
awkward prose, and unnecessary words.
I discarded more from the beginning. That's not surprising; it takes a while
to hit your stride. There are more digressions at the start, because I'm not
sure where I'm heading.
The amount of cutting is about average. I probably write three to four words
for every one that appears in the final version of an essay.
(Before anyone gets mad at me for opinions expressed here, remember that
anything you see here that's not in the final version is obviously something I
chose not to publish, often because I disagree with it.)
Recently a friend said that what he liked about my essays was that they
weren't written the way we'd been taught to write essays in school. You
remember: topic sentence, introductory paragraph, supporting paragraphs,
conclusion. It hadn't occurred to me till then that those horrible things we
had to write in school were even connected to what I was doing now. But sure
enough, I thought, they did call them "essays," didn't they?
Well, they're not. Those things you have to write in school are not only not
essays, they're one of the most pointless of all the pointless hoops you have
to jump through in school. And I worry that they not only teach students the
wrong things about writing, but put them off writing entirely.
So I'm going to give the other side of the story: what an essay really is, and
how you write one. Or at least, how I write one. Students be forewarned: if
you actually write the kind of essay I describe, you'll probably get bad
grades. But knowing how it's really done should at least help you to
understand the feeling of futility you have when you're writing the things
they tell you to.
The most obvious difference between real essays and the things one has to
write in school is that real essays are not exclusively about English
literature. It's a fine thing for schools to teach students how to write. But
for some bizarre reason (actually, a very specific bizarre reason that I'll
explain in a moment), the teaching of writing has gotten mixed together with
the study of literature. And so all over the country, students are writing not
about how a baseball team with a small budget might compete with the Yankees,
or the role of color in fashion, or what constitutes a good dessert, but about
symbolism in Dickens.
With obvious results. Only a few people really care about symbolism in
Dickens. The teacher doesn't. The students don't. Most of the people who've
had to write PhD dissertations about Dickens don't. And certainly Dickens
himself would be more interested in an essay about color or baseball.
How did things get this way? To answer that we have to go back almost a
thousand years. Between about 500 and 1000, life was not very good in Europe.
The term "dark ages" is presently out of fashion as too judgemental (the
period wasn't dark; it was just _different_), but if this label didn't already
exist, it would seem an inspired metaphor. What little original thought there
was took place in lulls between constant wars and had something of the
character of the thoughts of parents with a new baby. The most amusing thing
written during this period, Liudprand of Cremona's Embassy to Constantinople,
is, I suspect, mostly inadvertently so.
Around 1000 Europe began to catch its breath. And once they had the luxury of
curiosity, one of the first things they discovered was what we call "the
classics." Imagine if we were visited by aliens. If they could even get here
they'd presumably know a few things we don't. Immediately Alien Studies would
become the most dynamic field of scholarship: instead of painstakingly
discovering things for ourselves, we could simply suck up everything they'd
discovered. So it was in Europe in 1200. When classical texts began to
circulate in Europe, they contained not just new answers, but new questions.
(If anyone proved a theorem in Christian Europe before 1200, for example,
there is no record of it.)
For a couple centuries, some of the most important work being done was
intellectual archaeology. Those were also the centuries during which schools
were first established. And since reading ancient texts was the essence of
what scholars did then, it became the basis of the curriculum.
By 1700, someone who wanted to learn about physics didn't need to start by
mastering Greek in order to read Aristotle. But schools change slower than
scholarship: the study of ancient texts had such prestige that it remained the
backbone of education until the late 19th century. By then it was merely a
tradition. It did serve some purposes: reading a foreign language was
difficult, and thus taught discipline, or at least, kept students busy; it
introduced students to cultures quite different from their own; and its very
uselessness made it function (like white gloves) as a social bulwark. But it
certainly wasn't true, and hadn't been true for centuries, that students were
serving apprenticeships in the hottest area of scholarship.
Classical scholarship had also changed. In the early era, philology actually
mattered. The texts that filtered into Europe were all corrupted to some
degree by the errors of translators and copyists. Scholars had to figure out
what Aristotle said before they could figure out what he meant. But by the
modern era such questions were answered as well as they were ever going to be.
And so the study of ancient texts became less about ancientness and more about
texts.
The time was then ripe for the question: if the study of ancient texts is a
valid field for scholarship, why not modern texts? The answer, of course, is
that the raison d'etre of classical scholarship was a kind of intellectual
archaeology that does not need to be done in the case of contemporary authors.
But for obvious reasons no one wanted to give that answer. The archaeological
work being mostly done, it implied that the people studying the classics were,
if not wasting their time, at least working on problems of minor importance.
And so began the study of modern literature. There was some initial
resistance, but it didn't last long. The limiting reagent in the growth of
university departments is what parents will let undergraduates study. If
parents will let their children major in x, the rest follows
straightforwardly. There will be jobs teaching x, and professors to fill them.
The professors will establish scholarly journals and publish one another's
papers. Universities with x departments will subscribe to the journals.
Graduate students who want jobs as professors of x will write dissertations
about it. It may take a good long while for the more prestigious universities
to cave in and establish departments in cheesier xes, but at the other end of
the scale there are so many universities competing to attract students that
the mere establishment of a discipline requires little more than the desire to
do it.
High schools imitate universities. And so once university English departments
were established in the late nineteenth century, the 'riting component of the
3 Rs was morphed into English. With the bizarre consequence that high school
students now had to write about English literature-- to write, without even
realizing it, imitations of whatever English professors had been publishing in
their journals a few decades before. It's no wonder if this seems to the
student a pointless exercise, because we're now three steps removed from real
work: the students are imitating English professors, who are imitating
classical scholars, who are merely the inheritors of a tradition growing out
of what was, 700 years ago, fascinating and urgently needed work.
Perhaps high schools should drop English and just teach writing. The valuable
part of English classes is learning to write, and that could be taught better
by itself. Students learn better when they're interested in what they're
doing, and it's hard to imagine a topic less interesting than symbolism in
Dickens. Most of the people who write about that sort of thing professionally
are not really interested in it. (Though indeed, it's been a while since they
were writing about symbolism; now they're writing about gender.)
I have no illusions about how eagerly this suggestion will be adopted. Public
schools probably couldn't stop teaching English even if they wanted to;
they're probably required to by law. But here's a related suggestion that goes
with the grain instead of against it: that universities establish a writing
major. Many of the students who now major in English would major in writing if
they could, and most would be better off.
It will be argued that it is a good thing for students to be exposed to their
literary heritage. Certainly. But is that more important than that they learn
to write well? And are English classes even the place to do it? After all, the
average public high school student gets zero exposure to his artistic
heritage. No disaster results. The people who are interested in art learn
about it for themselves, and those who aren't don't. I find that American
adults are no better or worse informed about literature than art, despite the
fact that they spent years studying literature in high school and no time at
all studying art. Which presumably means that what they're taught in school is
rounding error compared to what they pick up on their own.
Indeed, English classes may even be harmful. In my case they were effectively
aversion therapy. Want to make someone dislike a book? Force him to read it
and write an essay about it. And make the topic so intellectually bogus that
you could not, if asked, explain why one ought to write about it. I love to
read more than anything, but by the end of high school I never read the books
we were assigned. I was so disgusted with what we were doing that it became a
point of honor with me to write nonsense at least as good as the other
students' without having more than glanced over the book to learn the names of
the characters and a few random events in it.
I hoped this might be fixed in college, but I found the same problem there. It
was not the teachers. It was English. We were supposed to read novels and
write essays about them. About what, and why? That no one seemed to be able to
explain. Eventually by trial and error I found that what the teacher wanted us
to do was pretend that the story had really taken place, and to analyze based
on what the characters said and did (the subtler clues, the better) what their
motives must have been. One got extra credit for motives having to do with
class, as I suspect one must now for those involving gender and sexuality. I
learned how to churn out such stuff well enough to get an A, but I never took
another English class.
And the books we did these disgusting things to, like those we mishandled in
high school, I find still have black marks against them in my mind. The one
saving grace was that English courses tend to favor pompous, dull writers like
Henry James, who deserve black marks against their names anyway. One of the
principles the IRS uses in deciding whether to allow deductions is that, if
something is fun, it isn't work. Fields that are intellectually unsure of
themselves rely on a similar principle. Reading P.G. Wodehouse or Evelyn Waugh
or Raymond Chandler is too obviously pleasing to seem like serious work, as
reading Shakespeare would have been before English evolved enough to make it
an effort to understand him. [sh] And so good writers (just you wait and see
who's still in print in 300 years) are less likely to have readers turned
against them by clumsy, self-appointed tour guides.
The other big difference between a real essay and the things they make you
write in school is that a real essay doesn't take a position and then defend
it. That principle, like the idea that we ought to be writing about
literature, turns out to be another intellectual hangover of long forgotten
origins. It's often mistakenly believed that medieval universities were mostly
seminaries. In fact they were more law schools. And at least in our tradition
lawyers are advocates: they are trained to be able to take either side of an
argument and make as good a case for it as they can.
Whether or not this is a good idea (in the case of prosecutors, it probably
isn't), it tended to pervade the atmosphere of early universities. After the
lecture the most common form of discussion was the disputation. This idea is
at least nominally preserved in our present-day thesis defense -- indeed, in
the very word thesis. Most people treat the words thesis and dissertation as
interchangeable, but originally, at least, a thesis was a position one took
and the dissertation was the argument by which one defended it.
I'm not complaining that we blur these two words together. As far as I'm
concerned, the sooner we lose the original sense of the word thesis, the
better. For many, perhaps most, graduate students, it is stuffing a square peg
into a round hole to try to recast one's work as a single thesis. And as for
the disputation, that seems clearly a net lose. Arguing two sides of a case
may be a necessary evil in a legal dispute, but it's not the best way to get
at the truth, as I think lawyers would be the first to admit.
And yet this principle is built into the very structure of the essays they
teach you to write in high school. The topic sentence is your thesis, chosen
in advance, the supporting paragraphs the blows you strike in the conflict,
and the conclusion--- uh, what is the conclusion? I was never sure about that
in high school. If your thesis was well expressed, what need was there to
restate it? In theory it seemed that the conclusion of a really good essay
ought not to need to say any more than QED. But when you understand the
origins of this sort of "essay", you can see where the conclusion comes from.
It's the concluding remarks to the jury.
What other alternative is there? To answer that we have to reach back into
history again, though this time not so far. To Michel de Montaigne, inventor
of the essay. He was doing something quite different from what a lawyer does,
and the difference is embodied in the name. Essayer is the French verb meaning
"to try" (the cousin of our word assay), and an "essai" is an effort. An
essay is something you write in order to figure something out.
Figure out what? You don't know yet. And so you can't begin with a thesis,
because you don't have one, and may never have one. An essay doesn't begin
with a statement, but with a question. In a real essay, you don't take a
position and defend it. You see a door that's ajar, and you open it and walk
in to see what's inside.
If all you want to do is figure things out, why do you need to write anything,
though? Why not just sit and think? Well, there precisely is Montaigne's great
discovery. Expressing ideas helps to form them. Indeed, helps is far too weak
a word. 90% of what ends up in my essays was stuff I only thought of when I
sat down to write them. That's why I write them.
So there's another difference between essays and the things you have to write
in school. In school you are, in theory, explaining yourself to someone else.
In the best case---if you're really organized---you're just writing it _down._
In a real essay you're writing for yourself. You're thinking out loud.
But not quite. Just as inviting people over forces you to clean up your
apartment, writing something that you know other people will read forces you
to think well. So it does matter to have an audience. The things I've written
just for myself are no good. Indeed, they're bad in a particular way: they
tend to peter out. When I run into difficulties, I notice that I tend to
conclude with a few vague questions and then drift off to get a cup of tea.
This seems a common problem. It's practically the standard ending in blog
entries--- with the addition of a "heh" or an emoticon, prompted by the all
too accurate sense that something is missing.
And indeed, a lot of published essays peter out in this same way. Particularly
the sort written by the staff writers of newsmagazines. Outside writers tend
to supply editorials of the defend-a-position variety, which make a beeline
toward a rousing (and foreordained) conclusion. But the staff writers feel
obliged to write something more balanced, which in practice ends up meaning
blurry. Since they're writing for a popular magazine, they start with the most
radioactively controversial questions, from which (because they're writing for
a popular magazine) they then proceed to recoil from in terror. Gay marriage,
for or against? This group says one thing. That group says another. One thing
is certain: the question is a complex one. (But don't get mad at us. We didn't
draw any conclusions.)
Questions aren't enough. An essay has to come up with answers. They don't
always, of course. Sometimes you start with a promising question and get
nowhere. But those you don't publish. Those are like experiments that get
inconclusive results. Something you publish ought to tell the reader something
he didn't already know.
But _what_ you tell him doesn't matter, so long as it's interesting. I'm
sometimes accused of meandering. In defend-a-position writing that would be a
flaw. There you're not concerned with truth. You already know where you're
going, and you want to go straight there, blustering through obstacles, and
hand-waving your way across swampy ground. But that's not what you're trying
to do in an essay. An essay is supposed to be a search for truth. It would be
suspicious if it didn't meander.
The Meander is a river in Asia Minor (aka Turkey). As you might expect, it
winds all over the place. But does it do this out of frivolity? Quite the
opposite. Like all rivers, it's rigorously following the laws of physics. The
path it has discovered, winding as it is, represents the most economical route
to the sea.
The river's algorithm is simple. At each step, flow down. For the essayist
this translates to: flow interesting. Of all the places to go next, choose
whichever seems most interesting.
I'm pushing this metaphor a bit. An essayist can't have quite as little
foresight as a river. In fact what you do (or what I do) is somewhere between
a river and a roman road-builder. I have a general idea of the direction I
want to go in, and I choose the next topic with that in mind. This essay is
about writing, so I do occasionally yank it back in that direction, but it is
not at all the sort of essay I thought I was going to write about writing.
Note too that hill-climbing (which is what this algorithm is called) can get
you in trouble. Sometimes, just like a river, you run up against a blank wall.
What I do then is just what the river does: backtrack. At one point in this
essay I found that after following a certain thread I ran out of ideas. I had
to go back n paragraphs and start over in another direction. For illustrative
purposes I've left the abandoned branch as a footnote.
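For what it's worth, the metaphor maps onto a real algorithm. Here's a playful
sketch of greedy hill-climbing with backtracking; `next_topics` and `interest`
are hypothetical stand-ins for whatever generates and ranks the places an
essay could go next:

```python
def write_essay(start, next_topics, interest, max_len=10):
    """Greedy search in the spirit of 'flow interesting': at each step take
    the most interesting unvisited next topic; at a dead end, backtrack."""
    path, seen = [start], {start}
    while len(path) < max_len:
        options = [t for t in next_topics(path[-1]) if t not in seen]
        if options:
            best = max(options, key=interest)   # flow toward most interesting
            path.append(best)
            seen.add(best)
        elif len(path) > 1:
            path.pop()    # abandoned branch: back up, try another direction
        else:
            break         # nothing left to explore from the start
    return path
```

Like the river, it has no global plan; unlike the river, it can give up on a
thread and back out, which is exactly the move described above.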
Err on the side of the river. An essay is not a reference work. It's not
something you read looking for a specific answer, and feel cheated if you
don't find it. I'd much rather read an essay that went off in an unexpected
but interesting direction than one that plodded dutifully along a prescribed
course.
So what's interesting? For me, interesting means surprise. Design, as Matz has
said, should follow the principle of least surprise. A button that looks like
it will make a machine stop should make it stop, not speed up. Essays should
do the opposite. Essays should aim for maximum surprise.
I was afraid of flying for a long time and could only travel vicariously. When
friends came back from faraway places, it wasn't just out of politeness that I
asked them about their trip. I really wanted to know. And I found that the
best way to get information out of them was to ask what surprised them. How
was the place different from what they expected? This is an extremely useful
question. You can ask it of even the most unobservant people, and it will
extract information they didn't even know they were recording.
Indeed, you can ask it in real time. Now when I go somewhere new, I make a
note of what surprises me about it. Sometimes I even make a conscious effort
to visualize the place beforehand, so I'll have a detailed image to diff with
reality.
Surprises are facts you didn't already know. But they're more than that.
They're facts that contradict things you thought you knew. And so they're the
most valuable sort of fact you can get. They're like a food that's not merely
healthy, but counteracts the unhealthy effects of things you've already eaten.
How do you find surprises? Well, therein lies half the work of essay writing.
(The other half is expressing yourself well.) You can at least use yourself as
a proxy for the reader. You should only write about things you've thought
about a lot. And anything you come across that surprises you, who've thought
about the topic a lot, will probably surprise most readers.
For example, in a recent essay I pointed out that because you can only judge
computer programmers by working with them, no one knows in programming who the
heroes should be. I certainly didn't realize this when I started writing the
essay, and even now I find it kind of weird. That's what you're looking for.
So if you want to write essays, you need two ingredients: you need a few
topics that you think about a lot, and you need some ability to ferret out the
unexpected.
What should you think about? My guess is that it doesn't matter. Almost
everything is interesting if you get deeply enough into it. The one possible
exception is things like working in fast food, which have deliberately had
all the variation sucked out of them. In retrospect, was there anything
interesting about working in Baskin-Robbins? Well, it was interesting to
notice how important color was to the customers. Kids a certain age would
point into the case and say that they wanted yellow. Did they want French
Vanilla or Lemon? They would just look at you blankly. They wanted yellow. And
then there was the mystery of why the perennial favorite Pralines n' Cream was
so appealing. I'm inclined now to think it was the salt. And the mystery of
why Passion Fruit tasted so disgusting. People would order it because of the
name, and were always disappointed. It should have been called In-sink-erator
Fruit. And there was the difference in the way fathers and mothers bought ice
cream for their kids. Fathers tended to adopt the attitude of benevolent kings
bestowing largesse, and mothers that of harried bureaucrats, giving in to
pressure against their better judgement. So, yes, there does seem to be
material, even in fast food.
What about the other half, ferreting out the unexpected? That may require some
natural ability. I've noticed for a long time that I'm pathologically
observant. ....
[That was as far as I'd gotten at the time.]
* * *
March 2005
A couple months ago I got an email from a recruiter asking if I was interested
in being a "technologist in residence" at a new venture capital fund. I think
the idea was to play Karl Rove to the VCs' George Bush.
I considered it for about four seconds. Work for a VC fund? Ick.
One of my most vivid memories from our startup is going to visit Greylock, the
famous Boston VCs. They were the most arrogant people I've met in my life. And
I've met a lot of arrogant people.
I'm not alone in feeling this way, of course. Even a VC friend of mine
dislikes VCs. "Assholes," he says.
But lately I've been learning more about how the VC world works, and a few
days ago it hit me that there's a reason VCs are the way they are. It's not so
much that the business attracts jerks, or even that the power they wield
corrupts them. The real problem is the way they're paid.
The problem with VC funds is that they're _funds_. Like the managers of mutual
funds or hedge funds, VCs get paid a percentage of the money they manage:
about 2% a year in management fees, plus a percentage of the gains. So they
want the fund to be huge-- hundreds of millions of dollars, if possible. But
that means each partner ends up being responsible for investing a lot of
money. And since one person can only manage so many deals, each deal has to be
for multiple millions of dollars.
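To make the arithmetic concrete, here's a back-of-the-envelope sketch. The 2% fee is from above; the fund size, partner count, and deals-per-partner figures are illustrative assumptions:

```python
# Back-of-the-envelope VC fund math. Only the 2% fee comes from the
# text; every other number is an assumption chosen to be plausible.
fund_size = 400_000_000      # a "huge" fund: $400M
partners = 8                 # general partners doing deals
deals_per_partner = 10       # boards one person can plausibly handle

management_fee = 0.02 * fund_size            # ~$8M/year before any gains
per_partner = fund_size / partners           # $50M each to deploy
min_deal = per_partner / deals_per_partner   # ~$5M minimum per deal

print(f"fees per year:      ${management_fee:,.0f}")
print(f"capital/partner:    ${per_partner:,.0f}")
print(f"implied deal size:  ${min_deal:,.0f}")
```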
This turns out to explain nearly all the characteristics of VCs that founders
hate.
It explains why VCs take so agonizingly long to make up their minds, and why
their due diligence feels like a body cavity search. With so much at
stake, they have to be paranoid.
It explains why they steal your ideas. Every founder knows that VCs will tell
your secrets to your competitors if they end up investing in them. It's not
unheard of for VCs to meet you when they have no intention of funding you,
just to pick your brain for a competitor. This prospect makes naive founders
clumsily secretive. Experienced founders treat it as a cost of doing business.
Either way it sucks. But again, the only reason VCs are so sneaky is the giant
deals they do. With so much at stake, they have to be devious.
It explains why VCs tend to interfere in the companies they invest in. They
want to be on your board not just so that they can advise you, but so that
they can watch you. Often they even install a new CEO. Yes, he may have
extensive business experience. But he's also their man: these newly installed
CEOs always play something of the role of a political commissar in a Red Army
unit. With so much at stake, VCs can't resist micromanaging you.
The huge investments themselves are something founders would dislike, if they
realized how damaging they can be. VCs don't invest $x million because that's
the amount you need, but because that's the amount the structure of their
business requires them to invest. Like steroids, these sudden huge investments
can do more harm than good. Google survived enormous VC funding because it
could legitimately absorb large amounts of money. They had to buy a lot of
servers and a lot of bandwidth to crawl the whole Web. Less fortunate startups
just end up hiring armies of people to sit around having meetings.
In principle you could take a huge VC investment, put it in treasury bills,
and continue to operate frugally. You just try it.
And of course giant investments mean giant valuations. They have to, or
there's not enough stock left to keep the founders interested. You might think
a high valuation is a great thing. Many founders do. But you can't eat paper.
You can't benefit from a high valuation unless you can somehow achieve what
those in the business call a "liquidity event," and the higher your
valuation, the narrower your options for doing that. Many a founder would be
happy to sell his company for $15 million, but VCs who've just invested at a
pre-money valuation of $8 million won't hear of that. You're rolling the dice
again, whether you like it or not.
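A quick worked example shows why the dice get rolled. The $8 million pre-money valuation is from above; the investment size and exit values are assumptions:

```python
# Why a VC who just invested blocks a "small" exit. The $8M pre-money
# figure is from the text; the rest are assumed for illustration.
pre_money = 8_000_000
investment = 4_000_000
post_money = pre_money + investment
vc_share = investment / post_money           # VC owns a third

for exit_value in (15_000_000, 50_000_000, 150_000_000):
    vc_take = vc_share * exit_value
    print(f"exit ${exit_value:>11,}: VC gets ${vc_take:>11,.0f} "
          f"({vc_take / investment:.2f}x)")

# A $15M sale would net the founders ~$10M but return the VC only
# 1.25x -- nowhere near the multiple a fund needs from its winners.
```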
Back in 1997, one of our competitors raised $20 million in a single round of
VC funding. This was at the time more than the valuation of our entire
company. Was I worried? Not at all: I was delighted. It was like watching a
car you're chasing turn down a street that you know has no outlet.
Their smartest move at that point would have been to take every penny of the
$20 million and use it to buy us. We would have sold. Their investors would
have been furious of course. But I think the main reason they never considered
this was that they never imagined we could be had so cheap. They probably
assumed we were on the same VC gravy train they were.
In fact we only spent about $2 million in our entire existence. And that gave
us flexibility. We could sell ourselves to Yahoo for $50 million, and everyone
was delighted. If our competitor had done that, the last round of investors
would presumably have lost money. I assume they could have vetoed such a deal.
But no one in those days was paying a lot more than Yahoo. So unless their
founders could pull off an IPO (which would be difficult with Yahoo as a
competitor), they had no choice but to ride the thing down.
The puffed-up companies that went public during the Bubble didn't do it just
because they were pulled into it by unscrupulous investment bankers. Most were
pushed just as hard from the other side by VCs who'd invested at high
valuations, leaving an IPO as the only way out. The only people dumber were
retail investors. So it was literally IPO or bust. Or rather, IPO then bust,
or just bust.
Add up all the evidence of VCs' behavior, and the resulting personality is not
attractive. In fact, it's the classic villain: alternately cowardly, greedy,
sneaky, and overbearing.
I used to take it for granted that VCs were like this. Complaining that VCs
were jerks used to seem as naive to me as complaining that users didn't read
the reference manual. Of course VCs were jerks. How could it be otherwise?
But I realize now that they're not intrinsically jerks. VCs are like car
salesmen or bureaucrats: the nature of their work turns them into jerks.
I've met a few VCs I like. Mike Moritz seems a good guy. He even has a sense
of humor, which is almost unheard of among VCs. From what I've read about John
Doerr, he sounds like a good guy too, almost a hacker. But they work for the
very best VC funds. And my theory explains why they'd tend to be different:
just as the very most popular kids don't have to persecute nerds, the very
best VCs don't have to act like VCs. They get the pick of all the best deals.
So they don't have to be so paranoid and sneaky, and they can choose those
rare companies, like Google, that will actually benefit from the giant sums
they're compelled to invest.
VCs often complain that in their business there's too much money chasing too
few deals. Few realize that this also describes a flaw in the way funding
works at the level of individual firms.
Perhaps this was the sort of strategic insight I was supposed to come up with
as a "technologist in residence." If so, the good news is that they're getting
it for free. The bad news is it means that if you're not one of the very top
funds, you're condemned to be the bad guys.
* * *
January 2007
_(Foreword to Jessica Livingston's Founders at Work.)_
Apparently sprinters reach their highest speed right out of the blocks, and
spend the rest of the race slowing down. The winners slow down the least. It's
that way with most startups too. The earliest phase is usually the most
productive. That's when they have the really big ideas. Imagine what Apple was
like when 100% of its employees were either Steve Jobs or Steve Wozniak.
The striking thing about this phase is that it's completely different from
most people's idea of what business is like. If you looked in people's heads
(or stock photo collections) for images representing "business," you'd get
images of people dressed up in suits, groups sitting around conference tables
looking serious, Powerpoint presentations, people producing thick reports for
one another to read. Early stage startups are the exact opposite of this. And
yet they're probably the most productive part of the whole economy.
Why the disconnect? I think there's a general principle at work here: the less
energy people expend on performance, the more they expend on appearances to
compensate. More often than not the energy they expend on seeming impressive
makes their actual performance worse. A few years ago I read an article in
which a car magazine modified the "sports" model of some production car to get
the fastest possible standing quarter mile. You know how they did it? They cut
off all the crap the manufacturer had bolted onto the car to make it _look_
fast.
Business is broken the same way that car was. The effort that goes into
looking productive is not merely wasted, but actually makes organizations less
productive. Suits, for example. Suits do not help people to think better. I
bet most executives at big companies do their best thinking when they wake up
on Sunday morning and go downstairs in their bathrobe to make a cup of coffee.
That's when you have ideas. Just imagine what a company would be like if
people could think that well at work. People do in startups, at least some of
the time. (Half the time you're in a panic because your servers are on fire,
but the other half you're thinking as deeply as most people only get to
sitting alone on a Sunday morning.)
Ditto for most of the other differences between startups and what passes for
productivity in big companies. And yet conventional ideas of professionalism
have such an iron grip on our minds that even startup founders are affected by
them. In our startup, when outsiders came to visit we tried hard to seem
"professional." We'd clean up our offices, wear better clothes, try to arrange
that a lot of people were there during conventional office hours. In fact,
programming didn't get done by well-dressed people at clean desks during
office hours. It got done by badly dressed people (I was notorious for
programming wearing just a towel) in offices strewn with junk at 2 in the
morning. But no visitor would understand that. Not even investors, who are
supposed to be able to recognize real productivity when they see it. Even we
were affected by the conventional wisdom. We thought of ourselves as
impostors, succeeding despite being totally unprofessional. It was as if we'd
created a Formula 1 car but felt sheepish because it didn't look like a car
was supposed to look.
In the car world, there are at least some people who know that a high
performance car looks like a Formula 1 racecar, not a sedan with giant rims
and a fake spoiler bolted to the trunk. Why not in business? Probably because
startups are so small. The really dramatic growth happens when a startup only
has three or four people, so only three or four people see that, whereas tens
of thousands see business as it's practiced by Boeing or Philip Morris.
This book can help fix that problem, by showing everyone what, till now, only
a handful of people got to see: what happens in the first year of a startup. This
is what real productivity looks like. This is the Formula 1 racecar. It looks
weird, but it goes fast.
Of course, big companies won't be able to do everything these startups do. In
big companies there's always going to be more politics, and less scope for
individual decisions. But seeing what startups are really like will at least
show other organizations what to aim for. The time may soon be coming when
instead of startups trying to seem more corporate, corporations will try to
seem more like startups. That would be a good thing.
* * *
**Founders at Work**

There can't be more than a couple thousand people who know first-hand what
happens in the first month of a successful startup. Jessica Livingston got
them to tell us. So despite the interview format, this is really a how-to
book. It is probably the single most valuable book a startup founder could
read.

* * *
May 2007
People who worry about the increasing gap between rich and poor generally look
back on the mid twentieth century as a golden age. In those days we had a
large number of high-paying union manufacturing jobs that boosted the median
income. I wouldn't quite call the high-paying union job a myth, but I think
people who dwell on it are reading too much into it.
Oddly enough, it was working with startups that made me realize where the
high-paying union job came from. In a rapidly growing market, you don't worry
too much about efficiency. It's more important to grow fast. If there's some
mundane problem getting in your way, and there's a simple solution that's
somewhat expensive, just take it and get on with more important things. EBay
didn't win by paying less for servers than their competitors.
Difficult though it may be to imagine now, manufacturing was a growth industry
in the mid twentieth century. This was an era when small firms making
everything from cars to candy were getting consolidated into a new kind of
corporation with national reach and huge economies of scale. You had to grow
fast or die. Workers were for these companies what servers are for an Internet
startup. A reliable supply was more important than low cost.
If you looked in the head of a 1950s auto executive, the attitude must have
been: sure, give 'em whatever they ask for, so long as the new model isn't
delayed.
In other words, those workers were not paid what their work was worth.
Circumstances being what they were, companies would have been stupid to insist
on paying them so little.
If you want a less controversial example of this phenomenon, ask anyone who
worked as a consultant building web sites during the Internet Bubble. In the
late nineties you could get paid huge sums of money for building the most
trivial things. And yet does anyone who was there have any expectation those
days will ever return? I doubt it. Surely everyone realizes that was just a
temporary aberration.
The era of labor unions seems to have been the same kind of aberration, just
spread over a longer period, and mixed together with a lot of ideology that
prevents people from viewing it with as cold an eye as they would something
like consulting during the Bubble.
Basically, unions were just Razorfish.
People who think the labor movement was the creation of heroic union
organizers have a problem to explain: why are unions shrinking now? The best
they can do is fall back on the default explanation of people living in fallen
civilizations. Our ancestors were giants. The workers of the early twentieth
century must have had a moral courage that's lacking today.
In fact there's a simpler explanation. The early twentieth century was just a
fast-growing startup overpaying for infrastructure. And we in the present are
not a fallen people, who have abandoned whatever mysterious high-minded
principles produced the high-paying union job. We simply live in a time when
the fast-growing companies overspend on different things.
* * *
October 2023
One of the most important things I didn't understand about the world when I
was a child is the degree to which the returns for performance are
superlinear.
Teachers and coaches implicitly told us the returns were linear. "You get
out," I heard a thousand times, "what you put in." They meant well, but this
is rarely true. If your product is only half as good as your competitor's, you
don't get half as many customers. You get no customers, and you go out of
business.
It's obviously true that the returns for performance are superlinear in
business. Some think this is a flaw of capitalism, and that if we changed the
rules it would stop being true. But superlinear returns for performance are a
feature of the world, not an artifact of rules we've invented. We see the same
pattern in fame, power, military victories, knowledge, and even benefit to
humanity. In all of these, the rich get richer.
You can't understand the world without understanding the concept of
superlinear returns. And if you're ambitious you definitely should, because
this will be the wave you surf on.
It may seem as if there are a lot of different situations with superlinear
returns, but as far as I can tell they reduce to two fundamental causes:
exponential growth and thresholds.
The most obvious case of superlinear returns is when you're working on
something that grows exponentially. For example, growing bacterial cultures.
When they grow at all, they grow exponentially. But they're tricky to grow.
Which means the difference in outcome between someone who's adept at it and
someone who's not is very great.
Startups can also grow exponentially, and we see the same pattern there. Some
manage to achieve high growth rates. Most don't. And as a result you get
qualitatively different outcomes: the companies with high growth rates tend to
become immensely valuable, while the ones with lower growth rates may not even
survive.
Y Combinator encourages founders to focus on growth rate rather than absolute
numbers. It prevents them from being discouraged early on, when the absolute
numbers are still low. It also helps them decide what to focus on: you can use
growth rate as a compass to tell you how to evolve the company. But the main
advantage is that by focusing on growth rate you tend to get something that
grows exponentially.
YC doesn't explicitly tell founders that with growth rate "you get out what
you put in," but it's not far from the truth. And if growth rate were
proportional to performance, then the reward for performance _p_ over time _t_
would be proportional to _p_^_t_.
Even after decades of thinking about this, I find that sentence startling.
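To see why, run the numbers. A minimal sketch, with assumed weekly performance factors:

```python
# Reward ~ p^t, made concrete: people whose weekly performance
# factors differ only slightly (all numbers assumed).
weeks = 104                       # two years
for p in (1.00, 1.02, 1.05):
    print(f"p = {p:.2f}: {p ** weeks:8.1f}x after {weeks} weeks")
# 1.00 stays at 1x; 1.02 compounds to about 7.8x; 1.05 to about 160x.
```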
Whenever how well you do depends on how well you've done, you'll get
exponential growth. But neither our DNA nor our customs prepare us for it. No
one finds exponential growth natural; every child is surprised, the first time
they hear it, by the story of the man who asks the king for a single grain of
rice the first day and double the amount each successive day.
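The story's arithmetic is worth checking:

```python
# One grain on day 1, doubling daily: day n alone brings 2**(n-1)
# grains, so the running total after n days is 2**n - 1.
for day in (10, 30, 64):
    print(f"after day {day:2}: {2**day - 1:,} grains in total")
# Day 30 already passes a billion grains; day 64 is about 1.8e19,
# more rice than exists. No king expects that either.
```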
What we don't understand naturally we develop customs to deal with, but we
don't have many customs about exponential growth either, because there have
been so few instances of it in human history. In principle herding should have
been one: the more animals you had, the more offspring they'd have. But in
practice grazing land was the limiting factor, and there was no plan for
growing that exponentially.
Or more precisely, no generally applicable plan. There _was_ a way to grow
one's territory exponentially: by conquest. The more territory you control,
the more powerful your army becomes, and the easier it is to conquer new
territory. This is why history is full of empires. But so few people created
or ran empires that their experiences didn't affect customs very much. The
emperor was a remote and terrifying figure, not a source of lessons one could
use in one's own life.
The most common case of exponential growth in preindustrial times was probably
scholarship. The more you know, the easier it is to learn new things. The
result, then as now, was that some people were startlingly more knowledgeable
than the rest about certain topics. But this didn't affect customs much
either. Although empires of ideas can overlap and there can thus be far more
emperors, in preindustrial times this type of empire had little practical
effect.
That has changed in the last few centuries. Now the emperors of ideas can
design bombs that defeat the emperors of territory. But this phenomenon is
still so new that we haven't fully assimilated it. Few even of the
participants realize they're benefitting from exponential growth or ask what
they can learn from other instances of it.
The other source of superlinear returns is embodied in the expression "winner
take all." In a sports match the relationship between performance and return
is a step function: the winning team gets one win whether they do much better
or just slightly better.
The source of the step function is not competition per se, however. It's that
there are thresholds in the outcome. You don't need competition to get those.
There can be thresholds in situations where you're the only participant, like
proving a theorem or hitting a target.
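Here's a toy model of the difference, contrasting a proportional payoff with a winner-take-all one (none of the numbers come from the text):

```python
# Proportional vs. threshold (winner-take-all) returns, as a toy model.
def linear_return(performance):
    return performance            # you get out what you put in

def threshold_return(performance, rivals):
    # One unit for clearing the bar, zero otherwise; the margin
    # of victory doesn't matter at all.
    return 1.0 if all(performance > r for r in rivals) else 0.0

rivals = [0.90, 0.95, 0.97]
for p in (0.96, 0.98, 1.20):
    print(f"p={p:.2f}  linear={linear_return(p):.2f}  "
          f"threshold={threshold_return(p, rivals):.1f}")
# 0.96 loses outright; 0.98 and 1.20 both win exactly 1.0.
```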
It's remarkable how often a situation with one source of superlinear returns
also has the other. Crossing thresholds leads to exponential growth: the
winning side in a battle usually suffers less damage, which makes them more
likely to win in the future. And exponential growth helps you cross
thresholds: in a market with network effects, a company that grows fast enough
can shut out potential competitors.
Fame is an interesting example of a phenomenon that combines both sources of
superlinear returns. Fame grows exponentially because existing fans bring you
new ones. But the fundamental reason it's so concentrated is thresholds:
there's only so much room on the A-list in the average person's head.
The most important case combining both sources of superlinear returns may be
learning. Knowledge grows exponentially, but there are also thresholds in it.
Learning to ride a bicycle, for example. Some of these thresholds are akin to
machine tools: once you learn to read, you're able to learn anything else much
faster. But the most important thresholds of all are those representing new
discoveries. Knowledge seems to be fractal in the sense that if you push hard
at the boundary of one area of knowledge, you sometimes discover a whole new
field. And if you do, you get first crack at all the new discoveries to be
made in it. Newton did this, and so did Dürer and Darwin.
Are there general rules for finding situations with superlinear returns? The
most obvious one is to seek work that compounds.
There are two ways work can compound. It can compound directly, in the sense
that doing well in one cycle causes you to do better in the next. That happens
for example when you're building infrastructure, or growing an audience or
brand. Or work can compound by teaching you, since learning compounds. This
second case is an interesting one because you may feel you're doing badly as
it's happening. You may be failing to achieve your immediate goal. But if
you're learning a lot, then you're getting exponential growth nonetheless.
This is one reason Silicon Valley is so tolerant of failure. People in Silicon
Valley aren't blindly tolerant of failure. They'll only continue to bet on you
if you're learning from your failures. But if you are, you are in fact a good
bet: maybe your company didn't grow the way you wanted, but you yourself have,
and that should yield results eventually.
Indeed, the forms of exponential growth that don't consist of learning are so
often intermixed with it that we should probably treat this as the rule rather
than the exception. Which yields another heuristic: always be learning. If
you're not learning, you're probably not on a path that leads to superlinear
returns.
But don't overoptimize _what_ you're learning. Don't limit yourself to
learning things that are already known to be valuable. You're learning; you
don't know for sure yet what's going to be valuable, and if you're too strict
you'll lop off the outliers.
What about step functions? Are there also useful heuristics of the form "seek
thresholds" or "seek competition?" Here the situation is trickier. The
existence of a threshold doesn't guarantee the game will be worth playing. If
you play a round of Russian roulette, you'll be in a situation with a
threshold, certainly, but in the best case you're no better off. "Seek
competition" is similarly useless; what if the prize isn't worth competing
for? Sufficiently fast exponential growth guarantees both the shape and
magnitude of the return curve — because something that grows fast enough will
grow big even if it's trivially small at first — but thresholds only guarantee
the shape.
A principle for taking advantage of thresholds has to include a test to ensure
the game is worth playing. Here's one that does: if you come across something
that's mediocre yet still popular, it could be a good idea to replace it. For
example, if a company makes a product that people dislike yet still buy, then
presumably they'd buy a better alternative if you made one.
It would be great if there were a way to find promising intellectual
thresholds. Is there a way to tell which questions have whole new fields
beyond them? I doubt we could ever predict this with certainty, but the prize
is so valuable that it would be useful to have predictors that were even a
little better than random, and there's hope of finding those. We can to some
degree predict when a research problem _isn't_ likely to lead to new
discoveries: when it seems legit but boring. Whereas the kind that do lead to
new discoveries tend to seem very mystifying, but perhaps unimportant. (If
they were mystifying and obviously important, they'd be famous open questions
with lots of people already working on them.) So one heuristic here is to be
driven by curiosity rather than careerism — to give free rein to your
curiosity instead of working on what you're supposed to.
The prospect of superlinear returns for performance is an exciting one for the
ambitious. And there's good news in this department: this territory is
expanding in both directions. There are more types of work in which you can
get superlinear returns, and the returns themselves are growing.
There are two reasons for this, though they're so closely intertwined that
they're more like one and a half: progress in technology, and the decreasing
importance of organizations.
Fifty years ago it used to be much more necessary to be part of an
organization to work on ambitious projects. It was the only way to get the
resources you needed, the only way to have colleagues, and the only way to get
distribution. So in 1970 your prestige was in most cases the prestige of the
organization you belonged to. And prestige was an accurate predictor, because
if you weren't part of an organization, you weren't likely to achieve much.
There were a handful of exceptions, most notably artists and writers, who
worked alone using inexpensive tools and had their own brands. But even they
were at the mercy of organizations for reaching audiences.
A world dominated by organizations damped variation in the returns for
performance. But this world has eroded significantly just in my lifetime. Now
a lot more people can have the freedom that artists and writers had in the
20th century. There are lots of ambitious projects that don't require much
initial funding, and lots of new ways to learn, make money, find colleagues,
and reach audiences.
There's still plenty of the old world left, but the rate of change has been
dramatic by historical standards. Especially considering what's at stake. It's
hard to imagine a more fundamental change than one in the returns for
performance.
Without the damping effect of institutions, there will be more variation in
outcomes. Which doesn't imply everyone will be better off: people who do well
will do even better, but those who do badly will do worse. That's an important
point to bear in mind. Exposing oneself to superlinear returns is not for
everyone. Most people will be better off as part of the pool. So who should
shoot for superlinear returns? Ambitious people of two types: those who know
they're so good that they'll be net ahead in a world with higher variation,
and those, particularly the young, who can afford to risk trying it to find
out.
The switch away from institutions won't simply be an exodus of their current
inhabitants. Many of the new winners will be people they'd never have let in.
So the resulting democratization of opportunity will be both greater and more
authentic than any tame intramural version the institutions themselves might
have cooked up.
Not everyone is happy about this great unlocking of ambition. It threatens
some vested interests and contradicts some ideologies. But if you're an
ambitious individual it's good news for you. How should you take advantage of
it?
The most obvious way to take advantage of superlinear returns for performance
is by doing exceptionally good work. At the far end of the curve, incremental
effort is a bargain. All the more so because there's less competition at the
far end — and not just for the obvious reason that it's hard to do something
exceptionally well, but also because people find the prospect so intimidating
that few even try. Which means it's not just a bargain to do exceptional work,
but a bargain even to try to.
There are many variables that affect how good your work is, and if you want to
be an outlier you need to get nearly all of them right. For example, to do
something exceptionally well, you have to be interested in it. Mere diligence
is not enough. So in a world with superlinear returns, it's even more valuable
to know what you're interested in, and to find ways to work on it. It will
also be important to choose work that suits your circumstances. For example,
if there's a kind of work that inherently requires a huge expenditure of time
and energy, it will be increasingly valuable to do it when you're young and
don't yet have children.
There's a surprising amount of technique to doing great work. It's not just a
matter of trying hard. I'm going to take a shot at giving a recipe in one
paragraph.
Choose work you have a natural aptitude for and a deep interest in. Develop a
habit of working on your own projects; it doesn't matter what they are so long
as you find them excitingly ambitious. Work as hard as you can without burning
out, and this will eventually bring you to one of the frontiers of knowledge.
These look smooth from a distance, but up close they're full of gaps. Notice
and explore such gaps, and if you're lucky one will expand into a whole new
field. Take as much risk as you can afford; if you're not failing occasionally
you're probably being too conservative. Seek out the best colleagues. Develop
good taste and learn from the best examples. Be honest, especially with
yourself. Exercise and eat and sleep well and avoid the more dangerous drugs.
When in doubt, follow your curiosity. It never lies, and it knows more than
you do about what's worth paying attention to.
And there is of course one other thing you need: to be lucky. Luck is always a
factor, but it's even more of a factor when you're working on your own rather
than as part of an organization. And though there are some valid aphorisms
about luck being where preparedness meets opportunity and so on, there's also
a component of true chance that you can't do anything about. The solution is
to take multiple shots. Which is another reason to start taking risks early.
The best example of a field with superlinear returns is probably science. It
has exponential growth, in the form of learning, combined with thresholds at
the extreme edge of performance — literally at the limits of knowledge.
The result has been a level of inequality in scientific discovery that makes
the wealth inequality of even the most stratified societies seem mild by
comparison. Newton's discoveries were arguably greater than all his
contemporaries' combined.
This point may seem obvious, but it might be just as well to spell it out.
Superlinear returns imply inequality. The steeper the return curve, the
greater the variation in outcomes.
In fact, the correlation between superlinear returns and inequality is so
strong that it yields another heuristic for finding work of this type: look
for fields where a few big winners outperform everyone else. A kind of work
where everyone does about the same is unlikely to be one with superlinear
returns.
What are fields where a few big winners outperform everyone else? Here are
some obvious ones: sports, politics, art, music, acting, directing, writing,
math, science, starting companies, and investing. In sports the phenomenon is
due to externally imposed thresholds; you only need to be a few percent faster
to win every race. In politics, power grows much as it did in the days of
emperors. And in some of the other fields (including politics) success is
driven largely by fame, which has its own source of superlinear growth. But
when we exclude sports and politics and the effects of fame, a remarkable
pattern emerges: the remaining list is exactly the same as the list of fields
where you have to be _independent-minded_ to succeed — where your ideas have
to be not just correct, but novel as well.
This is obviously the case in science. You can't publish papers saying things
that other people have already said. But it's just as true in investing, for
example. It's only useful to believe that a company will do well if most other
investors don't; if everyone else thinks the company will do well, then its
stock price will already reflect that, and there's no room to make money.
What else can we learn from these fields? In all of them you have to put in
the initial effort. Superlinear returns seem small at first. _At this rate,_
you find yourself thinking, _I'll never get anywhere._ But because the reward
curve rises so steeply at the far end, it's worth taking extraordinary
measures to get there.
In the startup world, the name for this principle is "do things that don't
scale." If you pay a ridiculous amount of attention to your tiny initial set
of customers, ideally you'll kick off exponential growth by word of mouth. But
this same principle applies to anything that grows exponentially. Learning,
for example. When you first start learning something, you feel lost. But it's
worth making the initial effort to get a toehold, because the more you learn,
the easier it will get.
There's another more subtle lesson in the list of fields with superlinear
returns: not to equate work with a job. For most of the 20th century the two
were identical for nearly everyone, and as a result we've inherited a custom
that equates productivity with having a job. Even now to most people the
phrase "your work" means their job. But to a writer or artist or scientist it
means whatever they're currently studying or creating. For someone like that,
their work is something they carry with them from job to job, if they have
jobs at all. It may be done for an employer, but it's part of their portfolio.
It's an intimidating prospect to enter a field where a few big winners
outperform everyone else. Some people do this deliberately, but you don't need
to. If you have sufficient natural ability and you follow your curiosity
sufficiently far, you'll end up in one. Your curiosity won't let you be
interested in boring questions, and interesting questions tend to create
fields with superlinear returns if they're not already part of one.
The territory of superlinear returns is by no means static. Indeed, the most
extreme returns come from expanding it. So while both ambition and curiosity
can get you into this territory, curiosity may be the more powerful of the
two. Ambition tends to make you climb existing peaks, but if you stick close
enough to an interesting enough question, it may grow into a mountain beneath
you.
* * *
After a link to Beating the Averages was posted on slashdot, some readers
wanted to hear in more detail about the specific technical advantages we got
from using Lisp in Viaweb. For those who are interested, here are some
excerpts from a talk I gave in April 2001 at BBN Labs in Cambridge, MA.
BBN Talk Excerpts (ASCII)

* * *
December 2020
To celebrate Airbnb's IPO and to help future founders, I thought it might be
useful to explain what was special about Airbnb.
What was special about the Airbnbs was how earnest they were. They did nothing
half-way, and we could sense this even in the interview. Sometimes after we
interviewed a startup we'd be uncertain what to do, and have to talk it over.
Other times we'd just look at one another and smile. The Airbnbs' interview
was that kind. We didn't even like the idea that much. Nor did users, at that
stage; they had no growth. But the founders seemed so full of energy that it
was impossible not to like them.
That first impression was not misleading. During the batch our nickname for
Brian Chesky was The Tasmanian Devil, because like the cartoon character he
seemed a tornado of energy. All three of them were like that. No one ever
worked harder during YC than the Airbnbs did. When you talked to the Airbnbs,
they took notes. If you suggested an idea to them in office hours, the next
time you talked to them they'd not only have implemented it, but also
implemented two new ideas they had in the process. "They probably have the
best attitude of any startup we've funded" I wrote to Mike Arrington during
the batch.
They're still like that. Jessica and I had dinner with Brian in the summer of
2018, just the three of us. By this point the company was ten years old. He
took a page of notes about ideas for new things Airbnb could do.
What we didn't realize when we first met Brian and Joe and Nate was that
Airbnb was on its last legs. After working on the company for a year and
getting no growth, they'd agreed to give it one last shot. They'd try this Y
Combinator thing, and if the company still didn't take off, they'd give up.
Any normal person would have given up already. They'd been funding the company
with credit cards. They had a _binder_ full of credit cards they'd maxed out.
Investors didn't think much of the idea. One investor they met in a cafe
walked out in the middle of meeting with them. They thought he was going to
the bathroom, but he never came back. "He didn't even finish his smoothie,"
Brian said. And now, in late 2008, it was the worst recession in decades. The
stock market was in free fall and wouldn't hit bottom for another four months.
Why hadn't they given up? This is a useful question to ask. People, like
matter, reveal their nature under extreme conditions. One thing that's clear
is that they weren't doing this just for the money. As a money-making scheme,
this was pretty lousy: a year's work and all they had to show for it was a
binder full of maxed-out credit cards. So why were they still working on this
startup? Because of the experience they'd had as the first hosts.
When they first tried renting out airbeds on their floor during a design
convention, all they were hoping for was to make enough money to pay their
rent that month. But something surprising happened: they enjoyed having those
first three guests staying with them. And the guests enjoyed it too. Both they
and the guests had done it because they were in a sense forced to, and yet
they'd all had a great experience. Clearly there was something new here: for
hosts, a new way to make money that had literally been right under their
noses, and for guests, a new way to travel that was in many ways better than
hotels.
That experience was why the Airbnbs didn't give up. They knew they'd
discovered something. They'd seen a glimpse of the future, and they couldn't
let it go.
They knew that once people tried staying in what is now called "an airbnb,"
they would also realize that this was the future. But only if they tried it,
and they weren't. That was the problem during Y Combinator: to get growth
started.
Airbnb's goal during YC was to reach what we call ramen profitability, which
means making enough money that the company can pay the founders' living
expenses, if they live on ramen noodles. Ramen profitability is not,
obviously, the end goal of any startup, but it's the most important threshold
on the way, because this is the point where you're airborne. This is the point
where you no longer need investors' permission to continue existing. For the
Airbnbs, ramen profitability was $4000 a month: $3500 for rent, and $500 for
food. They taped this goal to the mirror in the bathroom of their apartment.
The way to get growth started in something like Airbnb is to focus on the
hottest subset of the market. If you can get growth started there, it will
spread to the rest. When I asked the Airbnbs where there was most demand, they
knew from searches: New York City. So they focused on New York. They went
there in person to visit their hosts and help them make their listings more
attractive. A big part of that was better pictures. So Joe and Brian rented a
professional camera and took pictures of the hosts' places themselves.
This didn't just make the listings better. It also taught them about their
hosts. When they came back from their first trip to New York, I asked what
they'd noticed about hosts that surprised them, and they said the biggest
surprise was how many of the hosts were in the same position they'd been in:
they needed this money to pay their rent. This was, remember, the worst
recession in decades, and it had hit New York first. It definitely added to
the Airbnbs' sense of mission to feel that people needed them.
In late January 2009, about three weeks into Y Combinator, their efforts
started to show results, and their numbers crept upward. But it was hard to
say for sure whether it was growth or just random fluctuation. By February it
was clear that it was real growth. They made $460 in fees in the first week of
February, $897 in the second, and $1428 in the third. That was it: they were
airborne. Brian sent me an email on February 22 announcing that they were
ramen profitable and giving the last three weeks' numbers.
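The growth rate buried in those three numbers is easy to extract (the fees are from the text; the computation is mine):

```python
# Week-over-week growth implied by the February 2009 fee numbers.
fees = [460, 897, 1428]
for prev, cur in zip(fees, fees[1:]):
    print(f"${prev} -> ${cur}: {cur / prev - 1:+.0%} week over week")
# Roughly +95% and then +59%: noisy, but unmistakably a curve,
# not random fluctuation.
```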
"I assume you know what you've now set yourself up for next week," I
responded.
Brian's reply was seven words: "We are not going to slow down."
* * *
April 2021
Every year since 1982, _Forbes_ magazine has published a list of the richest
Americans. If we compare the 100 richest people in 1982 to the 100 richest in
2020, we notice some big differences.
In 1982 the most common source of wealth was inheritance. Of the 100 richest
people, 60 inherited from an ancestor. There were 10 du Pont heirs alone. By
2020 the number of heirs had been cut in half, accounting for only 27 of the
biggest 100 fortunes.
Why would the percentage of heirs decrease? Not because inheritance taxes
increased. In fact, they decreased significantly during this period. The
reason the percentage of heirs has decreased is not that fewer people are
inheriting great fortunes, but that more people are making them.
How are people making these new fortunes? Roughly 3/4 by starting companies
and 1/4 by investing. Of the 73 new fortunes in 2020, 56 derive from founders'
or early employees' equity (52 founders, 2 early employees, and 2 wives of
founders), and 17 from managing investment funds.
There were no fund managers among the 100 richest Americans in 1982. Hedge
funds and private equity firms existed in 1982, but none of their founders
were rich enough yet to make it into the top 100. Two things changed: fund
managers discovered new ways to generate high returns, and more investors were
willing to trust them with their money.
But the main source of new fortunes now is starting companies, and when you
look at the data, you see big changes there too. People get richer from
starting companies now than they did in 1982, because the companies do
different things.
In 1982, there were two dominant sources of new wealth: oil and real estate.
Of the 40 new fortunes in 1982, at least 24 were due primarily to oil or real
estate. Now only a small number are: of the 73 new fortunes in 2020, 4 were
due to real estate and only 2 to oil.
By 2020 the biggest source of new wealth was what are sometimes called "tech"
companies. Of the 73 new fortunes, about 30 derive from such companies. These
are particularly common among the richest of the rich: 8 of the top 10
fortunes in 2020 were new fortunes of this type.
Arguably it's slightly misleading to treat tech as a category. Isn't Amazon
really a retailer, and Tesla a car maker? Yes and no. Maybe in 50 years, when
what we call tech is taken for granted, it won't seem right to put these two
businesses in the same category. But at the moment at least, there is
definitely something they share in common that distinguishes them. What
retailer starts AWS? What car maker is run by someone who also has a rocket
company?
The tech companies behind the top 100 fortunes also form a well-differentiated
group in the sense that they're all companies that venture capitalists would
readily invest in, and the others mostly not. And there's a reason why: these
are mostly companies that win by having better technology, rather than just a
CEO who's really driven and good at making deals.
To that extent, the rise of the tech companies represents a qualitative
change. The oil and real estate magnates of the 1982 Forbes 400 didn't win by
making better technology. They won by being really driven and good at making
deals. And indeed, that way of getting rich is so old that it predates the
Industrial Revolution. The courtiers who got rich in the (nominal) service of
European royal houses in the 16th and 17th centuries were also, as a rule,
really driven and good at making deals.
People who don't look any deeper than the Gini coefficient look back on the
world of 1982 as the good old days, because those who got rich then didn't get
as rich. But if you dig into _how_ they got rich, the old days don't look so
good. In 1982, 84% of the richest 100 people got rich by inheritance,
extracting natural resources, or doing real estate deals. Is that really
better than a world in which the richest people get rich by starting tech
companies?
Why are people starting so many more new companies than they used to, and why
are they getting so rich from it? The answer to the first question, curiously
enough, is that it's misphrased. We shouldn't be asking why people are
starting companies, but why they're starting companies _again_.
In 1892, the _New York Tribune_ compiled a list of all the millionaires
in America. They found 4047 of them. How many had inherited their wealth then?
Only about 20%, which is less than the proportion of heirs today. And when you
investigate the sources of the new fortunes, 1892 looks even more like today.
Hugh Rockoff found that "many of the richest ... gained their initial edge
from the new technology of mass production."
So it's not 2020 that's the anomaly here, but 1982. The real question is why
so few people had gotten rich from starting companies in 1982. And the answer
is that even as the _Tribune_'s list was being compiled, a wave of
_consolidation_ was sweeping through the American economy. In the late 19th
and early 20th centuries, financiers like J. P. Morgan combined thousands of
smaller companies into a few hundred giant ones with commanding economies of
scale. By the end of World War II, as Michael Lind writes, "the major sectors
of the economy were either organized as government-backed cartels or dominated
by a few oligopolistic corporations."
In 1960, most of the people who start startups today would have gone to work
for one of them. You could get rich from starting your own company in 1890 and
in 2020, but in 1960 it was not really a viable option. You couldn't break
through the oligopolies to get at the markets. So the prestigious route in
1960 was not to start your own company, but to work your way up the corporate
ladder at an existing one.
Making everyone a corporate employee decreased economic inequality (and every
other kind of variation), but if your model of normal is the mid 20th century,
you have a very misleading model in that respect. J. P. Morgan's economy
turned out to be just a phase, and starting in the 1970s, it began to break
up.
Why did it break up? Partly senescence. The big companies that seemed models
of scale and efficiency in 1930 had by 1970 become slack and bloated. By 1970
the rigid structure of the economy was full of cosy nests that various groups
had built to insulate themselves from market forces. During the Carter
administration the federal government realized something was amiss and began,
in a process they called "deregulation," to roll back the policies that
propped up the oligopolies.
But it wasn't just decay from within that broke up J. P. Morgan's economy.
There was also pressure from without, in the form of new technology, and
particularly microelectronics. The best way to envision what happened is to
imagine a pond with a crust of ice on top. Initially the only way from the
bottom to the surface is around the edges. But as the ice crust weakens, you
start to be able to punch right through the middle.
The edges of the pond were pure tech: companies that actually described
themselves as being in the electronics or software business. When you used the
word "startup" in 1990, that was what you meant. But now startups are punching
right through the middle of the ice crust and displacing incumbents like
retailers and TV networks and car companies.
But though the breakup of J. P. Morgan's economy created a new world in the
technological sense, it was a reversion to the norm in the social sense. If
you only look back as far as the mid 20th century, it seems like people
getting rich by starting their own companies is a recent phenomenon. But if
you look back further, you realize it's actually the default. So what we
should expect in the future is more of the same. Indeed, we should expect both
the number and wealth of founders to grow, because every decade it gets easier
to start a startup.
Part of the reason it's getting easier to start a startup is social. Society
is (re)assimilating the concept. If you start one now, your parents won't
freak out the way they would have a generation ago, and knowledge about how to
do it is much more widespread. But the main reason it's easier to start a
startup now is that it's cheaper. Technology has driven down the cost of both
building products and acquiring customers.
The decreasing cost of starting a startup has in turn changed the balance of
power between founders and investors. Back when starting a startup meant
building a factory, you needed investors' permission to do it at all. But now
investors need founders more than founders need investors, and that, combined
with the increasing amount of venture capital available, has driven up
valuations.
So the decreasing cost of starting a startup increases the number of rich
people in two ways: it means that more people start them, and that those who
do can raise money on better terms.
But there's also a third factor at work: the companies themselves are more
valuable, because newly founded companies grow faster than they used to.
Technology hasn't just made it cheaper to build and distribute things, but
faster too.
This trend has been running for a long time. IBM, founded in 1896, took 45
years to reach a billion 2020 dollars in revenue. Hewlett-Packard, founded in
1939, took 25 years. Microsoft, founded in 1975, took 13 years. Now the norm
for fast-growing companies is 7 or 8 years.
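A rough sketch of the compound growth rates those timelines imply, assuming each company starts from about $1 million in first-year revenue (my assumption; only the year counts are from above):

```python
# Implied annual growth rate to go from $1M to $1B in N years.
target, start = 1_000_000_000, 1_000_000
for name, years in [("IBM", 45), ("Hewlett-Packard", 25),
                    ("Microsoft", 13), ("today's norm", 7.5)]:
    rate = (target / start) ** (1 / years) - 1
    print(f"{name:15}: {rate:5.0%} per year for {years} years")
# Roughly 17%, 32%, 70%, and 150% a year respectively.
```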
Fast growth has a double effect on the value of founders' stock. The value of
a company is a function of its revenue and its growth rate. So if a company
grows faster, you not only get to a billion dollars in revenue sooner, but the
company is more valuable when it reaches that point than it would be if it
were growing slower.
That's why founders sometimes get so rich so young now. The low initial cost
of starting a startup means founders can start young, and the fast growth of
companies today means that if they succeed they could be surprisingly rich
just a few years later.
It's easier now to start and grow a company than it has ever been. That means
more people start them, that those who do get better terms from investors, and
that the resulting companies become more valuable. Once you understand how
these mechanisms work, and that startups were suppressed for most of the 20th
century, you don't have to resort to some vague right turn the country took
under Reagan to explain why America's Gini coefficient is increasing. Of
course the Gini coefficient is increasing. With more people starting more
valuable companies, how could it not be?
* * *
June 2021
It might not seem there's much to learn about how to work hard. Anyone who's
been to school knows what it entails, even if they chose not to do it. There
are 12 year olds who work amazingly hard. And yet when I ask myself if I know more
about working hard now than when I was in school, the answer is definitely
yes.
One thing I know is that if you want to do great things, you'll have to work
very hard. I wasn't sure of that as a kid. Schoolwork varied in difficulty;
one didn't always have to work super hard to do well. And some of the things
famous adults did, they seemed to do almost effortlessly. Was there, perhaps,
some way to evade hard work through sheer brilliance? Now I know the answer to
that question. There isn't.
The reason some subjects seemed easy was that my school had low standards. And
the reason famous adults seemed to do things effortlessly was years of
practice; they made it look easy.
Of course, those famous adults usually had a lot of natural ability too. There
are three ingredients in great work: natural ability, practice, and effort.
You can do pretty well with just two, but to do the best work you need all
three: you need great natural ability _and_ to have practiced a lot _and_ to
be trying very hard.
Bill Gates, for example, was among the smartest people in business in his era,
but he was also among the hardest working. "I never took a day off in my
twenties," he said. "Not one." It was similar with Lionel Messi. He had great
natural ability, but when his youth coaches talk about him, what they remember
is not his talent but his dedication and his desire to win. P. G. Wodehouse
would probably get my vote for best English writer of the 20th century, if I
had to choose. Certainly no one ever made it look easier. But no one ever
worked harder. At 74, he wrote
> with each new book of mine I have, as I say, the feeling that this time I
> have picked a lemon in the garden of literature. A good thing, really, I
> suppose. Keeps one up on one's toes and makes one rewrite every sentence ten
> times. Or in many cases twenty times.
Sounds a bit extreme, you think. And yet Bill Gates sounds even more extreme.
Not one day off in ten years? These two had about as much natural ability as
anyone could have, and yet they also worked about as hard as anyone could
work. You need both.
That seems so obvious, and yet in practice we find it slightly hard to grasp.
There's a faint xor between talent and hard work. It comes partly from popular
culture, where it seems to run very deep, and partly from the fact that the
outliers are so rare. If great talent and great drive are both rare, then
people with both are rare squared. Most people you meet who have a lot of one
will have less of the other. But you'll need both if you want to be an outlier
yourself. And since you can't really change how much natural talent you have,
in practice doing great work, insofar as you can, reduces to working very
hard.
It's straightforward to work hard if you have clearly defined, externally
imposed goals, as you do in school. There is some technique to it: you have to
learn not to lie to yourself, not to procrastinate (which is a form of lying
to yourself), not to get distracted, and not to give up when things go wrong.
But this level of discipline seems to be within the reach of quite young
children, if they want it.
What I've learned since I was a kid is how to work toward goals that are
neither clearly defined nor externally imposed. You'll probably have to learn
both if you want to do really great things.
The most basic level of this is simply to feel you should be working without
anyone telling you to. Now, when I'm not working hard, alarm bells go off. I
can't be sure I'm getting anywhere when I'm working hard, but I can be sure
I'm getting nowhere when I'm not, and it feels awful.
There wasn't a single point when I learned this. Like most little kids, I
enjoyed the feeling of achievement when I learned or did something new. As I
grew older, this morphed into a feeling of disgust when I wasn't achieving
anything. The one precisely dateable landmark I have is when I stopped
watching TV, at age 13.
Several people I've talked to remember getting serious about work around this
age. When I asked Patrick Collison when he started to find idleness
distasteful, he said
> I think around age 13 or 14. I have a clear memory from around then of
> sitting in the sitting room, staring outside, and wondering why I was
> wasting my summer holiday.
Perhaps something changes at adolescence. That would make sense.
Strangely enough, the biggest obstacle to getting serious about work was
probably school, which made work (what they called work) seem boring and
pointless. I had to learn what real work was before I could wholeheartedly
desire to do it. That took a while, because even in college a lot of the work
is pointless; there are entire departments that are pointless. But as I
learned the shape of real work, I found that my desire to do it slotted into
it as if they'd been made for each other.
I suspect most people have to learn what work is before they can love it.
Hardy wrote eloquently about this in _A Mathematician's Apology_:
> I do not remember having felt, as a boy, any _passion_ for mathematics, and
> such notions as I may have had of the career of a mathematician were far
> from noble. I thought of mathematics in terms of examinations and
> scholarships: I wanted to beat other boys, and this seemed to be the way in
> which I could do so most decisively.
He didn't learn what math was really about till part way through college, when
he read Jordan's _Cours d'analyse_.
> I shall never forget the astonishment with which I read that remarkable
> work, the first inspiration for so many mathematicians of my generation, and
> learnt for the first time as I read it what mathematics really meant.
There are two separate kinds of fakeness you need to learn to discount in
order to understand what real work is. One is the kind Hardy encountered in
school. Subjects get distorted when they're adapted to be taught to kids —
often so distorted that they're nothing like the work done by actual
practitioners. The other kind of fakeness is intrinsic to certain types of
work. Some types of work are inherently bogus, or at best mere busywork.
There's a kind of solidity to real work. It's not all writing the _Principia_,
but it all feels necessary. That's a vague criterion, but it's deliberately
vague, because it has to cover a lot of different types.
Once you know the shape of real work, you have to learn how many hours a day
to spend on it. You can't solve this problem by simply working every waking
hour, because in many kinds of work there's a point beyond which the quality
of the result will start to decline.
That limit varies depending on the type of work and the person. I've done
several different kinds of work, and the limits were different for each. My
limit for the harder types of writing or programming is about five hours a
day. Whereas when I was running a startup, I could work all the time. At least
for the three years I did it; if I'd kept going much longer, I'd probably have
needed to take occasional vacations.
The only way to find the limit is by crossing it. Cultivate a sensitivity to
the quality of the work you're doing, and then you'll notice if it decreases
because you're working too hard. Honesty is critical here, in both directions:
you have to notice when you're being lazy, but also when you're working too
hard. And if you think there's something admirable about working too hard, get
that idea out of your head. You're not merely getting worse results, but
getting them because you're showing off — if not to other people, then to
yourself.
Finding the limit of working hard is a constant, ongoing process, not
something you do just once. Both the difficulty of the work and your ability
to do it can vary hour to hour, so you need to be constantly judging both how
hard you're trying and how well you're doing.
Trying hard doesn't mean constantly pushing yourself to work, though. There
may be some people who do, but I think my experience is fairly typical, and I
only have to push myself occasionally when I'm starting a project or when I
encounter some sort of check. That's when I'm in danger of procrastinating.
But once I get rolling, I tend to keep going.
What keeps me going depends on the type of work. When I was working on Viaweb,
I was driven by fear of failure. I barely procrastinated at all then, because
there was always something that needed doing, and if I could put more distance
between me and the pursuing beast by doing it, why wait? Whereas what
drives me now, writing essays, is the flaws in them. Between essays I fuss for
a few days, like a dog circling while it decides exactly where to lie down.
But once I get started on one, I don't have to push myself to work, because
there's always some error or omission already pushing me.
I do make some amount of effort to focus on important topics. Many problems
have a hard core at the center, surrounded by easier stuff at the edges.
Working hard means aiming toward the center to the extent you can. Some days
you may not be able to; some days you'll only be able to work on the easier,
peripheral stuff. But you should always be aiming as close to the center as
you can without stalling.
The bigger question of what to do with your life is one of these problems with
a hard core. There are important problems at the center, which tend to be
hard, and less important, easier ones at the edges. So as well as the small,
daily adjustments involved in working on a specific problem, you'll
occasionally have to make big, lifetime-scale adjustments about which type of
work to do. And the rule is the same: working hard means aiming toward the
center — toward the most ambitious problems.
By center, though, I mean the actual center, not merely the current consensus
about the center. The consensus about which problems are most important is
often mistaken, both in general and within specific fields. If you disagree
with it, and you're right, that could represent a valuable opportunity to do
something new.
The more ambitious types of work will usually be harder, but although you
should not be in denial about this, neither should you treat difficulty as an
infallible guide in deciding what to do. If you discover some ambitious type
of work that's a bargain in the sense of being easier for you than other
people, either because of the abilities you happen to have, or because of some
new way you've found to approach it, or simply because you're more excited
about it, by all means work on that. Some of the best work is done by people
who find an easy way to do something hard.
As well as learning the shape of real work, you need to figure out which kind
you're suited for. And that doesn't just mean figuring out which kind your
natural abilities match the best; it doesn't mean that if you're 7 feet tall,
you have to play basketball. What you're suited for depends not just on your
talents but perhaps even more on your interests. A _deep interest_ in a topic
makes people work harder than any amount of discipline can.
It can be harder to discover your interests than your talents. There are fewer
types of talent than interest, and they start to be judged early in childhood,
whereas interest in a topic is a subtle thing that may not mature till your
twenties, or even later. The topic may not even exist earlier. Plus there are
some powerful sources of error you need to learn to discount. Are you really
interested in x, or do you want to work on it because you'll make a lot of
money, or because other people will be impressed with you, or because your
parents want you to?
The difficulty of figuring out what to work on varies enormously from one
person to another. That's one of the most important things I've learned about
work since I was a kid. As a kid, you get the impression that everyone has a
calling, and all they have to do is figure out what it is. That's how it works
in movies, and in the streamlined biographies fed to kids. Sometimes it works
that way in real life. Some people figure out what to do as children and just
do it, like Mozart. But others, like Newton, turn restlessly from one kind of
work to another. Maybe in retrospect we can identify one as their calling — we
can wish Newton spent more time on math and physics and less on alchemy and
theology — but this is an _illusion_ induced by hindsight bias. There was no
voice calling to him that he could have heard.
So while some people's lives converge fast, there will be others whose lives
never converge. And for these people, figuring out what to work on is not so
much a prelude to working hard as an ongoing part of it, like one of a set of
simultaneous equations. For these people, the process I described earlier has
a third component: along with measuring both how hard you're working and how
well you're doing, you have to think about whether you should keep working in
this field or switch to another. If you're working hard but not getting good
enough results, you should switch. It sounds simple expressed that way, but in
practice it's very difficult. You shouldn't give up on the first day just
because you work hard and don't get anywhere. You need to give yourself time
to get going. But how much time? And what should you do if work that was going
well stops going well? How much time do you give yourself then?
What even counts as good results? That can be really hard to decide. If you're
exploring an area few others have worked in, you may not even know what good
results look like. History is full of examples of people who misjudged the
importance of what they were working on.
The best test of whether it's worthwhile to work on something is whether you
find it interesting. That may sound like a dangerously subjective measure, but
it's probably the most accurate one you're going to get. You're the one
working on the stuff. Who's in a better position than you to judge whether
it's important, and what's a better predictor of its importance than whether
it's interesting?
For this test to work, though, you have to be honest with yourself. Indeed,
that's the most striking thing about the whole question of working hard: how
at each point it depends on being honest with yourself.
Working hard is not just a dial you turn up to 11. It's a complicated, dynamic
system that has to be tuned just right at each point. You have to understand
the shape of real work, see clearly what kind you're best suited for, aim as
close to the true core of it as you can, accurately judge at each moment both
what you're capable of and how you're doing, and put in as many hours each day
as you can without harming the quality of the result. This network is too
complicated to trick. But if you're consistently honest and clear-sighted, it
will automatically assume an optimal shape, and you'll be productive in a way
few people are.
* * *
June 2021
A few days ago, on the way home from school, my nine-year-old son told me he
couldn't wait to get home to write more of the story he was working on. This
made me as happy as anything I've heard him say — not just because he was
excited about his story, but because he'd discovered this way of working.
Working on a project of your own is as different from ordinary work as skating
is from walking. It's more fun, but also much more productive.
What proportion of great work has been done by people who were skating in this
sense? If not all of it, certainly a lot.
There is something special about working on a project of your own. I wouldn't
say exactly that you're happier. A better word would be excited, or engaged.
You're happy when things are going well, but often they aren't. When I'm
writing an essay, most of the time I'm worried and puzzled: worried that the
essay will turn out badly, and puzzled because I'm groping for some idea that
I can't see clearly enough. Will I be able to pin it down with words? In the
end I usually can, if I take long enough, but I'm never sure; the first few
attempts often fail.
You have moments of happiness when things work out, but they don't last long,
because then you're on to the next problem. So why do it at all? Because to
the kind of people who like working this way, nothing else feels as right. You
feel as if you're an animal in its natural habitat, doing what you were meant
to do — not always happy, maybe, but awake and alive.
Many kids experience the excitement of working on projects of their own. The
hard part is making this converge with the work you do as an adult. And our
customs make it harder. We treat "playing" and "hobbies" as qualitatively
different from "work". It's not clear to a kid building a treehouse that
there's a direct (though long) route from that to architecture or engineering.
And instead of pointing out the route, we conceal it, by implicitly treating
the stuff kids do as different from real work.
Instead of telling kids that their treehouses could be on the path to the work
they do as adults, we tell them the path goes through school. And
unfortunately schoolwork tends to be very different from working on projects
of one's own. It's usually neither a project, nor one's own. So as school gets
more serious, working on projects of one's own is something that survives, if
at all, as a thin thread off to the side.
It's a bit sad to think of all the high school kids turning their backs on
building treehouses and sitting in class dutifully learning about Darwin or
Newton to pass some exam, when the work that made Darwin and Newton famous was
actually closer in spirit to building treehouses than studying for exams.
If I had to choose between my kids getting good grades and working on
ambitious projects of their own, I'd pick the projects. And not because I'm an
indulgent parent, but because I've been on the other end and I know which has
more predictive value. When I was picking startups for Y Combinator, I didn't
care about applicants' grades. But if they'd worked on projects of their own,
I wanted to hear all about those.
It may be inevitable that school is the way it is. I'm not saying we have to
redesign it (though I'm not saying we don't), just that we should understand
what it does to our attitudes to work — that it steers us toward the dutiful
plodding kind of work, often using competition as bait, and away from skating.
There are occasionally times when schoolwork becomes a project of one's own.
Whenever I had to write a paper, that would become a project of my own —
except in English classes, ironically, because the things one has to write in
English classes are so _bogus_. And when I got to college and started taking
CS classes, the programs I had to write became projects of my own. Whenever I
was writing or programming, I was usually skating, and that has been true ever
since.
So where exactly is the edge of projects of one's own? That's an interesting
question, partly because the answer is so complicated, and partly because
there's so much at stake. There turn out to be two senses in which work can be
one's own: 1) that you're doing it voluntarily, rather than merely because
someone told you to, and 2) that you're doing it by yourself.
The edge of the former is quite sharp. People who care a lot about their work
are usually very sensitive to the difference between pulling, and being
pushed, and work tends to fall into one category or the other. But the test
isn't simply whether you're told to do something. You can choose to do
something you're told to do. Indeed, you can own it far more thoroughly than
the person who told you to do it.
For example, math homework is for most people something they're told to do.
But for my father, who was a mathematician, it wasn't. Most of us think of the
problems in a math book as a way to test or develop our knowledge of the
material explained in each section. But to my father the problems were the
part that mattered, and the text was merely a sort of annotation. Whenever he
got a new math book it was to him like being given a puzzle: here was a new
set of problems to solve, and he'd immediately set about solving all of them.
The other sense of a project being one's own — working on it by oneself — has
a much softer edge. It shades gradually into collaboration. And interestingly,
it shades into collaboration in two different ways. One way to collaborate is
to share a single project. For example, when two mathematicians collaborate on
a proof that takes shape in the course of a conversation between them. The
other way is when multiple people work on separate projects of their own that
fit together like a jigsaw puzzle. For example, when one person writes the
text of a book and another does the graphic design.
These two paths into collaboration can of course be combined. But under the
right conditions, the excitement of working on a project of one's own can be
preserved for quite a while before disintegrating into the turbulent flow of
work in a large organization. Indeed, the history of successful organizations
is partly the history of techniques for preserving that excitement.
The team that made the original Macintosh were a great example of this
phenomenon. People like Burrell Smith and Andy Hertzfeld and Bill Atkinson and
Susan Kare were not just following orders. They were not tennis balls hit by
Steve Jobs, but rockets let loose by Steve Jobs. There was a lot of
collaboration between them, but they all seem to have individually felt the
excitement of working on a project of one's own.
In Andy Hertzfeld's book on the Macintosh, he describes how they'd come back
into the office after dinner and work late into the night. People who've never
experienced the thrill of working on a project they're excited about can't
distinguish this kind of working long hours from the kind that happens in
sweatshops and boiler rooms, but they're at opposite ends of the spectrum.
That's why it's a mistake to insist dogmatically on "work/life balance."
Indeed, the mere expression "work/life" embodies a mistake: it assumes work
and life are distinct. For those to whom the word "work" automatically implies
the dutiful plodding kind, they are. But for the skaters, the relationship
between work and life would be better represented by a dash than a slash. I
wouldn't want to work on anything that I didn't want to take over my life.
Of course, it's easier to achieve this level of motivation when you're making
something like the Macintosh. It's easy for something new to feel like a
project of your own. That's one of the reasons for the tendency programmers
have to rewrite things that don't need rewriting, and to write their own
versions of things that already exist. This sometimes alarms managers, and
measured by total number of characters typed, it's rarely the optimal
solution. But it's not always driven simply by arrogance or cluelessness.
Writing code from scratch is also much more rewarding — so much more rewarding
that a good programmer can end up net ahead, despite the shocking waste of
characters. Indeed, it may be one of the advantages of capitalism that it
encourages such rewriting. A company that needs software to do something can't
use the software already written to do it at another company, and thus has to
write their own, which often turns out better.
The natural alignment between skating and solving new problems is one of the
reasons the payoffs from startups are so high. Not only is the market price of
unsolved problems higher, you also get a discount on productivity when you
work on them. In fact, you get a double increase in productivity: when you're
doing a clean-sheet design, it's easier to recruit skaters, and they get to
spend all their time skating.
Steve Jobs knew a thing or two about skaters from having watched Steve
Wozniak. If you can find the right people, you only have to tell them what to
do at the highest level. They'll handle the details. Indeed, they insist on
it. For a project to feel like your own, you must have sufficient autonomy.
You can't be working to order, or _slowed down_ by bureaucracy.
One way to ensure autonomy is not to have a boss at all. There are two ways to
do that: to be the boss yourself, and to work on projects outside of work.
Though they're at opposite ends of the scale financially, startups and open
source projects have a lot in common, including the fact that they're often
run by skaters. And indeed, there's a wormhole from one end of the scale to
the other: one of the best ways to discover _startup ideas_ is to work on a
project just for fun.
If your projects are the kind that make money, it's easy to work on them. It's
harder when they're not. And the hardest part, usually, is morale. That's
where adults have it harder than kids. Kids just plunge in and build their
treehouse without worrying about whether they're wasting their time, or how it
compares to other treehouses. And frankly we could learn a lot from kids here.
The high standards most grownups have for "real" work do not always serve us
well.
The most important phase in a project of one's own is at the beginning: when
you go from thinking it might be cool to do x to actually doing x. And at that
point high standards are not merely useless but positively harmful. There are
a few people who start too many new projects, but far more, I suspect, who are
deterred by fear of failure from starting projects that would have succeeded
if they had.
But if we couldn't benefit as kids from the knowledge that our treehouses were
on the path to grownup projects, we can at least benefit as grownups from
knowing that our projects are on a path that stretches back to treehouses.
Remember that careless confidence you had as a kid when starting something
new? That would be a powerful thing to recapture.
If it's harder as adults to retain that kind of confidence, we at least tend
to be more aware of what we're doing. Kids bounce, or are herded, from one
kind of work to the next, barely realizing what's happening to them. Whereas
we know more about different types of work and have more control over which we
do. Ideally we can have the best of both worlds: to be deliberate in choosing
to work on projects of our own, and carelessly confident in starting new ones.
* * *
March 2021
I try to write using ordinary words and simple sentences.
That kind of writing is easier to read, and the easier something is to read,
the more deeply readers will engage with it. The less energy they expend on
your prose, the more they'll have left for your ideas.
And the further they'll read. Most readers' energy tends to flag part way
through an article or essay. If the friction of reading is low enough, more
keep going till the end.
There's an Italian dish called _saltimbocca_, which means "leap into the
mouth." My goal when writing might be called _saltintesta_: the ideas leap
into your head and you barely notice the words that got them there.
It's too much to hope that writing could ever be pure ideas. You might not
even want it to be. But for most writers, most of the time, that's the goal to
aim for. The gap between most writing and pure ideas is not filled with
poetry.
Plus it's more considerate to write simply. When you write in a fancy way to
impress people, you're making them do extra work just so you can seem cool.
It's like trailing a long train behind you that readers have to carry.
And remember, if you're writing in English, that a lot of your readers won't
be native English speakers. Their understanding of ideas may be way ahead of
their understanding of English. So you can't assume that writing about a
difficult topic means you can use difficult words.
Of course, fancy writing doesn't just conceal ideas. It can also conceal the
lack of them. That's why some people write that way, to conceal the fact that
they have nothing to say. Whereas writing simply keeps you honest. If you
say nothing simply, it will be obvious to everyone, including you.
Simple writing also lasts better. People reading your stuff in the future will
be in much the same position as people from other countries reading it today.
The culture and the language will have changed. It's not vain to care about
that, any more than it's vain for a woodworker to build a chair to last.
Indeed, lasting is not merely an accidental quality of chairs, or writing.
It's a sign you did a good job.
But although these are all real advantages of writing simply, none of them are
why I do it. The main reason I write simply is that it offends me not to. When
I write a sentence that seems too complicated, or that uses unnecessarily
intellectual words, it doesn't seem fancy to me. It seems clumsy.
There are of course times when you want to use a complicated sentence or fancy
word for effect. But you should never do it by accident.
The other reason my writing ends up being simple is the way I do it. I write
the first draft fast, then spend days editing it, trying to get everything
just right. Much of this editing is cutting, and that makes simple writing
even simpler.
* * *
November 2020
There are some kinds of work that you can't do well without thinking
differently from your peers. To be a successful scientist, for example, it's
not enough just to be correct. Your ideas have to be both correct and novel.
You can't publish papers saying things other people already know. You need to
say things no one else has realized yet.
The same is true for investors. It's not enough for a public market investor
to predict correctly how a company will do. If a lot of other people make the
same prediction, the stock price will already reflect it, and there's no room
to make money. The only valuable insights are the ones most other investors
don't share.
You see this pattern with startup founders too. You don't want to start a
startup to do something that everyone agrees is a good idea, or there will
already be other companies doing it. You have to do something that sounds to
most other people like a bad idea, but that you know isn't — like writing
software for a tiny computer used by a few thousand hobbyists, or starting a
site to let people rent airbeds on strangers' floors.
Ditto for essayists. An essay that told people things they already knew would
be boring. You have to tell them something _new_.
But this pattern isn't universal. In fact, it doesn't hold for most kinds of
work. In most kinds of work — to be an administrator, for example — all you
need is the first half. All you need is to be right. It's not essential that
everyone else be wrong.
There's room for a little novelty in most kinds of work, but in practice
there's a fairly sharp distinction between the kinds of work where it's
essential to be independent-minded, and the kinds where it's not.
I wish someone had told me about this distinction when I was a kid, because
it's one of the most important things to think about when you're deciding what
kind of work you want to do. Do you want to do the kind of work where you can
only win by thinking differently from everyone else? I suspect most people's
unconscious mind will answer that question before their conscious mind has a
chance to. I know mine does.
Independent-mindedness seems to be more a matter of nature than nurture. Which
means if you pick the wrong type of work, you're going to be unhappy. If
you're naturally independent-minded, you're going to find it frustrating to be
a middle manager. And if you're naturally conventional-minded, you're going to
be sailing into a headwind if you try to do original research.
One difficulty here, though, is that people are often mistaken about where
they fall on the spectrum from conventional- to independent-minded.
Conventional-minded people don't like to think of themselves as conventional-
minded. And in any case, it genuinely feels to them as if they make up their
own minds about everything. It's just a coincidence that their beliefs are
identical to their peers'. And the independent-minded, meanwhile, are often
unaware how different their ideas are from conventional ones, at least till
they state them publicly.
By the time they reach adulthood, most people know roughly how smart they are
(in the narrow sense of ability to solve pre-set problems), because they're
constantly being tested and ranked according to it. But schools generally
ignore independent-mindedness, except to the extent they try to suppress it.
So we don't get anything like the same kind of feedback about how independent-
minded we are.
There may even be a phenomenon like Dunning-Kruger at work, where the most
conventional-minded people are confident that they're independent-minded,
while the genuinely independent-minded worry they might not be independent-
minded enough.
___________
Can you make yourself more independent-minded? I think so. This quality may be
largely inborn, but there seem to be ways to magnify it, or at least not to
suppress it.
One of the most effective techniques is one practiced unintentionally by most
nerds: simply to be less aware what conventional beliefs are. It's hard to be
a conformist if you don't know what you're supposed to conform to. Though
again, it may be that such people already are independent-minded. A
conventional-minded person would probably feel anxious not knowing what other
people thought, and make more effort to find out.
It matters a lot who you surround yourself with. If you're surrounded by
conventional-minded people, it will constrain which ideas you can express, and
that in turn will constrain which ideas you have. But if you surround yourself
with independent-minded people, you'll have the opposite experience: hearing
other people say surprising things will encourage you to, and to think of
more.
Because the independent-minded find it uncomfortable to be surrounded by
conventional-minded people, they tend to self-segregate once they have a
chance to. The problem with high school is that they haven't yet had a chance
to. Plus high school tends to be an inward-looking little world whose
inhabitants lack confidence, both of which magnify the forces of conformism.
So high school is often a _bad time_ for the independent-minded. But there is
some advantage even here: it teaches you what to avoid. If you later find
yourself in a situation that makes you think "this is like high school," you
know you should get out.
Another place where the independent- and conventional-minded are thrown
together is in successful startups. The founders and early employees are
almost always independent-minded; otherwise the startup wouldn't be
successful. But conventional-minded people greatly outnumber independent-
minded ones, so as the company grows, the original spirit of independent-
mindedness is inevitably diluted. This causes all kinds of problems besides
the obvious one that the company starts to suck. One of the strangest is that
the founders find themselves able to speak more freely with founders of other
companies than with their own employees.
Fortunately you don't have to spend all your time with independent-minded
people. It's enough to have one or two you can talk to regularly. And once you
find them, they're usually as eager to talk as you are; they need you too.
Although universities no longer have the kind of monopoly they used to have on
education, good universities are still an excellent way to meet independent-
minded people. Most students will still be conventional-minded, but you'll at
least find clumps of independent-minded ones, rather than the near zero you
may have found in high school.
It also works to go in the other direction: as well as cultivating a small
collection of independent-minded friends, to try to meet as many different
types of people as you can. It will decrease the influence of your immediate
peers if you have several other groups of peers. Plus if you're part of
several different worlds, you can often import ideas from one to another.
But by different types of people, I don't mean demographically different. For
this technique to work, they have to think differently. So while it's an
excellent idea to go and visit other countries, you can probably find people
who think differently right around the corner. When I meet someone who knows a
lot about something unusual (which includes practically everyone, if you dig
deep enough), I try to learn what they know that other people don't. There are
almost always surprises here. It's a good way to make conversation when you
meet strangers, but I don't do it to make conversation. I really want to know.
You can expand the source of influences in time as well as space, by reading
history. When I read history I do it not just to learn what happened, but to
try to get inside the heads of people who lived in the past. How did things
look to them? This is hard to do, but worth the effort for the same reason
it's worth travelling far to triangulate a point.
You can also take more explicit measures to prevent yourself from
automatically adopting conventional opinions. The most general is to cultivate
an attitude of skepticism. When you hear someone say something, stop and ask
yourself "Is that true?" Don't say it out loud. I'm not suggesting that you
impose on everyone who talks to you the burden of proving what they say, but
rather that you take upon yourself the burden of evaluating what they say.
Treat it as a puzzle. You know that some accepted ideas will later turn out to
be wrong. See if you can guess which. The end goal is not to find flaws in the
things you're told, but to find the new ideas that had been concealed by the
broken ones. So this game should be an exciting quest for novelty, not a
boring protocol for intellectual hygiene. And you'll be surprised, when you
start asking "Is this true?", how often the answer is not an immediate yes. If
you have any imagination, you're more likely to have too many leads to follow
than too few.
More generally your goal should be not to let anything into your head
unexamined, and things don't always enter your head in the form of statements.
Some of the most powerful influences are implicit. How do you even notice
these? By standing back and watching how other people get their ideas.
When you stand back at a sufficient distance, you can see ideas spreading
through groups of people like waves. The most obvious are in fashion: you
notice a few people wearing a certain kind of shirt, and then more and more,
until half the people around you are wearing the same shirt. You may not care
much what you wear, but there are intellectual fashions too, and you
definitely don't want to participate in those. Not just because you want
sovereignty over your own thoughts, but because _unfashionable_ ideas are
disproportionately likely to lead somewhere interesting. The best place to
find undiscovered ideas is where no one else is looking.
___________
To go beyond this general advice, we need to look at the internal structure of
independent-mindedness — at the individual muscles we need to exercise, as it
were. It seems to me that it has three components: fastidiousness about truth,
resistance to being told what to think, and curiosity.
Fastidiousness about truth means more than just not believing things that are
false. It means being careful about degree of belief. For most people, degree
of belief rushes unexamined toward the extremes: the unlikely becomes
impossible, and the probable becomes certain. To the independent-minded,
this seems unpardonably sloppy. They're willing to have anything in their
heads, from highly speculative hypotheses to (apparent) tautologies, but on
subjects they care about, everything has to be labelled with a carefully
considered degree of belief.
The independent-minded thus have a horror of ideologies, which require one to
accept a whole collection of beliefs at once, and to treat them as articles of
faith. To an independent-minded person that would seem revolting, just as it
would seem to someone fastidious about food to take a bite of a submarine
sandwich filled with a large variety of ingredients of indeterminate age and
provenance.
Without this fastidiousness about truth, you can't be truly independent-
minded. It's not enough just to have resistance to being told what to think.
Such people reject conventional ideas only to replace them with the
most random conspiracy theories. And since these conspiracy theories have
often been manufactured to capture them, they end up being less independent-
minded than ordinary people, because they're subject to a much more exacting
master than mere convention.
Can you increase your fastidiousness about truth? I would think so. In my
experience, merely thinking about something you're fastidious about causes
that fastidiousness to grow. If so, this is one of those rare virtues we can
have more of merely by wanting it. And if it's like other forms of
fastidiousness, it should also be possible to encourage in children. I
certainly got a strong dose of it from my father.
The second component of independent-mindedness, resistance to being told what
to think, is the most visible of the three. But even this is often
misunderstood. The big mistake people make about it is to think of it as a
merely negative quality. The language we use reinforces that idea. You're
_un_conventional. You _don't_ care what other people think. But it's not just a
kind of immunity. In the most independent-minded people, the desire not to be
told what to think is a positive force. It's not mere skepticism, but an
active _delight_ in ideas that subvert the conventional wisdom, the more
counterintuitive the better.
Some of the most novel ideas seemed at the time almost like practical jokes.
Think how often your reaction to a novel idea is to laugh. I don't think it's
because novel ideas are funny per se, but because novelty and humor share a
certain kind of surprisingness. But while not identical, the two are close
enough that there is a definite correlation between having a sense of humor
and being independent-minded — just as there is between being humorless and
being conventional-minded.
I don't think we can significantly increase our resistance to being told what
to think. It seems the most innate of the three components of independent-
mindedness; people who have this quality as adults usually showed all too
visible signs of it as children. But if we can't increase our resistance to
being told what to think, we can at least shore it up, by surrounding
ourselves with other independent-minded people.
The third component of independent-mindedness, curiosity, may be the most
interesting. To the extent that we can give a brief answer to the question of
where novel ideas come from, it's curiosity. That's what people are usually
feeling before having them.
In my experience, independent-mindedness and curiosity predict one another
perfectly. Everyone I know who's independent-minded is deeply curious, and
everyone I know who's conventional-minded isn't. Except, curiously, children.
All small children are curious. Perhaps the reason is that even the
conventional-minded have to be curious in the beginning, in order to learn
what the conventions are. Whereas the independent-minded are the gluttons of
curiosity, who keep eating even after they're full.
The three components of independent-mindedness work in concert: fastidiousness
about truth and resistance to being told what to think leave space in your
brain, and curiosity finds new ideas to fill it.
Interestingly, the three components can substitute for one another in much the
same way muscles can. If you're sufficiently fastidious about truth, you don't
need to be as resistant to being told what to think, because fastidiousness
alone will create sufficient gaps in your knowledge. And either one can
compensate for curiosity, because if you create enough space in your brain,
your discomfort at the resulting vacuum will add force to your curiosity. Or
curiosity can compensate for them: if you're sufficiently curious, you don't
need to clear space in your brain, because the new ideas you discover will
push out the conventional ones you acquired by default.
Because the components of independent-mindedness are so interchangeable, you
can have them to varying degrees and still get the same result. So there is
not just a single model of independent-mindedness. Some independent-minded
people are openly subversive, and others are quietly curious. They all know
the secret handshake though.
Is there a way to cultivate curiosity? To start with, you want to avoid
situations that suppress it. How much does the work you're currently doing
engage your curiosity? If the answer is "not much," maybe you should change
something.
The most important active step you can take to cultivate your curiosity is
probably to seek out the topics that engage it. Few adults are equally curious
about everything, and it doesn't seem as if you can choose which topics
interest you. So it's up to you to _find_ them. Or invent them, if necessary.
Another way to increase your curiosity is to indulge it, by investigating
things you're interested in. Curiosity is unlike most other appetites in this
respect: indulging it tends to increase rather than to sate it. Questions lead
to more questions.
Curiosity seems to be more individual than fastidiousness about truth or
resistance to being told what to think. To the degree people have the latter
two, they're usually pretty general, whereas different people can be curious
about very different things. So perhaps curiosity is the compass here.
Perhaps, if your goal is to discover novel ideas, your motto should not be "do
what you love" so much as "do what you're curious about."
* * *
April 2001, rev. April 2003
_(This article is derived from a talk given at the 2001 Franz Developer
Symposium.)_
In the summer of 1995, my friend Robert Morris and I started a startup called
Viaweb. Our plan was to write software that would let end users build online
stores. What was novel about this software, at the time, was that it ran on
our server, using ordinary Web pages as the interface.
A lot of people could have been having this idea at the same time, of course,
but as far as I know, Viaweb was the first Web-based application. It seemed
such a novel idea to us that we named the company after it: Viaweb, because
our software worked via the Web, instead of running on your desktop computer.
Another unusual thing about this software was that it was written primarily in
a programming language called Lisp. It was one of the first big end-user
applications to be written in Lisp, which up till then had been used mostly in
universities and research labs.
**The Secret Weapon**
Eric Raymond has written an essay called "How to Become a Hacker," and in it,
among other things, he tells would-be hackers what languages they should
learn. He suggests starting with Python and Java, because they are easy to
learn. The serious hacker will also want to learn C, in order to hack Unix,
and Perl for system administration and cgi scripts. Finally, the truly serious
hacker should consider learning Lisp:
> Lisp is worth learning for the profound enlightenment experience you will
> have when you finally get it; that experience will make you a better
> programmer for the rest of your days, even if you never actually use Lisp
> itself a lot.
This is the same argument you tend to hear for learning Latin. It won't get
you a job, except perhaps as a classics professor, but it will improve your
mind, and make you a better writer in languages you do want to use, like
English.
But wait a minute. This metaphor doesn't stretch that far. The reason Latin
won't get you a job is that no one speaks it. If you write in Latin, no one
can understand you. But Lisp is a computer language, and computers speak
whatever language you, the programmer, tell them to.
So if Lisp makes you a better programmer, like he says, why wouldn't you want
to use it? If a painter were offered a brush that would make him a better
painter, it seems to me that he would want to use it in all his paintings,
wouldn't he? I'm not trying to make fun of Eric Raymond here. On the whole,
his advice is good. What he says about Lisp is pretty much the conventional
wisdom. But there is a contradiction in the conventional wisdom: Lisp will
make you a better programmer, and yet you won't use it.
Why not? Programming languages are just tools, after all. If Lisp really does
yield better programs, you should use it. And if it doesn't, then who needs
it?
This is not just a theoretical question. Software is a very competitive
business, prone to natural monopolies. A company that gets software written
faster and better will, all other things being equal, put its competitors out
of business. And when you're starting a startup, you feel this very keenly.
Startups tend to be an all or nothing proposition. You either get rich, or you
get nothing. In a startup, if you bet on the wrong technology, your
competitors will crush you.
Robert and I both knew Lisp well, and we couldn't see any reason not to trust
our instincts and go with Lisp. We knew that everyone else was writing their
software in C++ or Perl. But we also knew that that didn't mean anything. If
you chose technology that way, you'd be running Windows. When you choose
technology, you have to ignore what other people are doing, and consider only
what will work the best.
This is especially true in a startup. In a big company, you can do what all
the other big companies are doing. But a startup can't do what all the other
startups do. I don't think a lot of people realize this, even in startups.
The average big company grows at about ten percent a year. So if you're
running a big company and you do everything the way the average big company
does it, you can expect to do as well as the average big company-- that is, to
grow about ten percent a year.
The same thing will happen if you're running a startup, of course. If you do
everything the way the average startup does it, you should expect average
performance. The problem here is, average performance means that you'll go out
of business. The survival rate for startups is way less than fifty percent. So
if you're running a startup, you had better be doing something odd. If not,
you're in trouble.
Back in 1995, we knew something that I don't think our competitors understood,
and few understand even now: when you're writing software that only has to run
on your own servers, you can use any language you want. When you're writing
desktop software, there's a strong bias toward writing applications in the
same language as the operating system. Ten years ago, writing applications
meant writing applications in C. But with Web-based software, especially when
you have the source code of both the language and the operating system, you
can use whatever language you want.
This new freedom is a double-edged sword, however. Now that you can use any
language, you have to think about which one to use. Companies that try to
pretend nothing has changed risk finding that their competitors do not.
If you can use any language, which do you use? We chose Lisp. For one thing,
it was obvious that rapid development would be important in this market. We
were all starting from scratch, so a company that could get new features done
before its competitors would have a big advantage. We knew Lisp was a really
good language for writing software quickly, and server-based applications
magnify the effect of rapid development, because you can release software the
minute it's done.
If other companies didn't want to use Lisp, so much the better. It might give
us a technological edge, and we needed all the help we could get. When we
started Viaweb, we had no experience in business. We didn't know anything
about marketing, or hiring people, or raising money, or getting customers.
Neither of us had ever even had what you would call a real job. The only thing
we were good at was writing software. We hoped that would save us. Any
advantage we could get in the software department, we would take.
So you could say that using Lisp was an experiment. Our hypothesis was that if
we wrote our software in Lisp, we'd be able to get features done faster than
our competitors, and also to do things in our software that they couldn't do.
And because Lisp was so high-level, we wouldn't need a big development team,
so our costs would be lower. If this were so, we could offer a better product
for less money, and still make a profit. We would end up getting all the
users, and our competitors would get none, and eventually go out of business.
That was what we hoped would happen, anyway.
What were the results of this experiment? Somewhat surprisingly, it worked. We
eventually had many competitors, on the order of twenty to thirty of them, but
none of their software could compete with ours. We had a wysiwyg online store
builder that ran on the server and yet felt like a desktop application. Our
competitors had cgi scripts. And we were always far ahead of them in features.
Sometimes, in desperation, competitors would try to introduce features that we
didn't have. But with Lisp our development cycle was so fast that we could
sometimes duplicate a new feature within a day or two of a competitor
announcing it in a press release. By the time journalists covering the press
release got round to calling us, we would have the new feature too.
It must have seemed to our competitors that we had some kind of secret
weapon-- that we were decoding their Enigma traffic or something. In fact we
did have a secret weapon, but it was simpler than they realized. No one was
leaking news of their features to us. We were just able to develop software
faster than anyone thought possible.
When I was about nine I happened to get hold of a copy of _The Day of the
Jackal,_ by Frederick Forsyth. The main character is an assassin who is hired
to kill the president of France. The assassin has to get past the police to
get up to an apartment that overlooks the president's route. He walks right by
them, dressed up as an old man on crutches, and they never suspect him.
Our secret weapon was similar. We wrote our software in a weird AI language,
with a bizarre syntax full of parentheses. For years it had annoyed me to hear
Lisp described that way. But now it worked to our advantage. In business,
there is nothing more valuable than a technical advantage your competitors
don't understand. In business, as in war, surprise is worth as much as force.
And so, I'm a little embarrassed to say, I never said anything publicly about
Lisp while we were working on Viaweb. We never mentioned it to the press, and
if you searched for Lisp on our Web site, all you'd find were the titles of
two books in my bio. This was no accident. A startup should give its
competitors as little information as possible. If they didn't know what
language our software was written in, or didn't care, I wanted to keep it that
way.
The people who understood our technology best were the customers. They didn't
care what language Viaweb was written in either, but they noticed that it
worked really well. It let them build great looking online stores literally in
minutes. And so, by word of mouth mostly, we got more and more users. By the
end of 1996 we had about 70 stores online. At the end of 1997 we had 500. Six
months later, when Yahoo bought us, we had 1070 users. Today, as Yahoo Store,
this software continues to dominate its market. It's one of the more
profitable pieces of Yahoo, and the stores built with it are the foundation of
Yahoo Shopping. I left Yahoo in 1999, so I don't know exactly how many users
they have now, but the last I heard there were about 20,000.
**The Blub Paradox**
What's so great about Lisp? And if Lisp is so great, why doesn't everyone use
it? These sound like rhetorical questions, but actually they have
straightforward answers. Lisp is so great not because of some magic quality
visible only to devotees, but because it is simply the most powerful language
available. And the reason everyone doesn't use it is that programming
languages are not merely technologies, but habits of mind as well, and nothing
changes slower. Of course, both these answers need explaining.
I'll begin with a shockingly controversial statement: programming languages
vary in power.
Few would dispute, at least, that high level languages are more powerful than
machine language. Most programmers today would agree that you do not,
ordinarily, want to program in machine language. Instead, you should program
in a high-level language, and have a compiler translate it into machine
language for you. This idea is even built into the hardware now: since the
1980s, instruction sets have been designed for compilers rather than human
programmers.
Everyone knows it's a mistake to write your whole program by hand in machine
language. What's less often understood is that there is a more general
principle here: that if you have a choice of several languages, it is, all
other things being equal, a mistake to program in anything but the most
powerful one.
There are many exceptions to this rule. If you're writing a program that has
to work very closely with a program written in a certain language, it might be
a good idea to write the new program in the same language. If you're writing a
program that only has to do something very simple, like number crunching or
bit manipulation, you may as well use a less abstract language, especially
since it may be slightly faster. And if you're writing a short, throwaway
program, you may be better off just using whatever language has the best
library functions for the task. But in general, for application software, you
want to be using the most powerful (reasonably efficient) language you can
get, and using anything else is a mistake, of exactly the same kind, though
possibly in a lesser degree, as programming in machine language.
You can see that machine language is very low level. But, at least as a kind
of social convention, high-level languages are often all treated as
equivalent. They're not. Technically the term "high-level language" doesn't
mean anything very definite. There's no dividing line with machine languages
on one side and all the high-level languages on the other. Languages fall
along a continuum of abstractness, from the most powerful all the way down
to machine languages, which themselves vary in power.
Consider Cobol. Cobol is a high-level language, in the sense that it gets
compiled into machine language. Would anyone seriously argue that Cobol is
equivalent in power to, say, Python? It's probably closer to machine language
than Python.
Or how about Perl 4? Between Perl 4 and Perl 5, lexical closures got added to
the language. Most Perl hackers would agree that Perl 5 is more powerful than
Perl 4. But once you've admitted that, you've admitted that one high level
language can be more powerful than another. And it follows inexorably that,
except in special cases, you ought to use the most powerful you can get.
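For anyone to whom "lexical closure" is an unfamiliar term: it's a function
that carries along the environment it was created in. Here is a minimal sketch
of the idea, written in Lisp rather than Perl since Lisp is the language at
hand (the names are invented for illustration; Perl 5's closures behave the
same way):

```lisp
;; MAKE-COUNTER returns a function that remembers its own private n.
(defun make-counter ()
  (let ((n 0))
    (lambda () (incf n))))

;; Each counter keeps its own state; no globals involved.
(defparameter *counter* (make-counter))
(funcall *counter*)  ; => 1
(funcall *counter*)  ; => 2
```

Without closures, as in Perl 4, the usual workaround is a global variable.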
This idea is rarely followed to its conclusion, though. After a certain age,
programmers rarely switch languages voluntarily. Whatever language people
happen to be used to, they tend to consider just good enough.
Programmers get very attached to their favorite languages, and I don't want to
hurt anyone's feelings, so to explain this point I'm going to use a
hypothetical language called Blub. Blub falls right in the middle of the
abstractness continuum. It is not the most powerful language, but it is more
powerful than Cobol or machine language.
And in fact, our hypothetical Blub programmer wouldn't use either of them. Of
course he wouldn't program in machine language. That's what compilers are for.
And as for Cobol, he doesn't know how anyone can get anything done with it. It
doesn't even have x (Blub feature of your choice).
As long as our hypothetical Blub programmer is looking down the power
continuum, he knows he's looking down. Languages less powerful than Blub are
obviously less powerful, because they're missing some feature he's used to.
But when our hypothetical Blub programmer looks in the other direction, up the
power continuum, he doesn't realize he's looking up. What he sees are merely
weird languages. He probably considers them about equivalent in power to Blub,
but with all this other hairy stuff thrown in as well. Blub is good enough for
him, because he thinks in Blub.
When we switch to the point of view of a programmer using any of the languages
higher up the power continuum, however, we find that he in turn looks down
upon Blub. How can you get anything done in Blub? It doesn't even have y.
By induction, the only programmers in a position to see all the differences in
power between the various languages are those who understand the most powerful
one. (This is probably what Eric Raymond meant about Lisp making you a better
programmer.) You can't trust the opinions of the others, because of the Blub
paradox: they're satisfied with whatever language they happen to use, because
it dictates the way they think about programs.
I know this from my own experience, as a high school kid writing programs in
Basic. That language didn't even support recursion. It's hard to imagine
writing programs without using recursion, but I didn't miss it at the time. I
thought in Basic. And I was a whiz at it. Master of all I surveyed.
The five languages that Eric Raymond recommends to hackers fall at various
points on the power continuum. Where they fall relative to one another is a
sensitive topic. What I will say is that I think Lisp is at the top. And to
support this claim I'll tell you about one of the things I find missing when I
look at the other four languages. How can you get anything done in them, I
think, without macros?
Many languages have something called a macro. But Lisp macros are unique. And
believe it or not, what they do is related to the parentheses. The designers
of Lisp didn't put all those parentheses in the language just to be different.
To the Blub programmer, Lisp code looks weird. But those parentheses are there
for a reason. They are the outward evidence of a fundamental difference
between Lisp and other languages.
Lisp code is made out of Lisp data objects. And not in the trivial sense that
the source files contain characters, and strings are one of the data types
supported by the language. Lisp code, after it's read by the parser, is made
of data structures that you can traverse.
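A minimal sketch of what that means, with an expression of my own choosing:
READ turns the text of a program into ordinary list structure, which ordinary
list functions can then take apart.

    ; READ-FROM-STRING parses text into a list, not a string.
    (defparameter *expr* (read-from-string "(+ 1 (* 2 3))"))

    (first *expr*)           ; => +        (a symbol)
    (third *expr*)           ; => (* 2 3)  (a sublist)
    (second (third *expr*))  ; => 2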
If you understand how compilers work, what's really going on is not so much
that Lisp has a strange syntax as that Lisp has no syntax. You write programs
in the parse trees that get generated within the compiler when other languages
are parsed. But these parse trees are fully accessible to your programs. You
can write programs that manipulate them. In Lisp, these programs are called
macros. They are programs that write programs.
Programs that write programs? When would you ever want to do that? Not very
often, if you think in Cobol. All the time, if you think in Lisp. It would be
convenient here if I could give an example of a powerful macro, and say there!
how about that? But if I did, it would just look like gibberish to someone who
didn't know Lisp; there isn't room here to explain everything you'd need to
know to understand what it meant. In ANSI Common Lisp I tried to move things
along as fast as I could, and even so I didn't get to macros until page 160.
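Still, even a deliberately trivial macro shows the shape of the idea, if not
its power. A sketch of my own, not from the book: Common Lisp has no while
loop, but a three-line macro adds one, because the macro receives its
arguments as unevaluated list structure and returns new code for the compiler.

    ; WHILE doesn't exist in Common Lisp; this macro writes the
    ; equivalent DO loop at compile time.
    (defmacro while (test &body body)
      `(do ()
           ((not ,test))
         ,@body))

    ; Expands into a DO loop; prints 3, 2, 1.
    (let ((n 3))
      (while (> n 0)
        (print n)
        (decf n)))

A trivial example, but notice that it could not be an ordinary function: a
function would evaluate its arguments once, while the macro arranges for the
test and body to be evaluated repeatedly.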
But I think I can give a kind of argument that might be convincing. The source
code of the Viaweb editor was probably about 20-25% macros. Macros are harder
to write than ordinary Lisp functions, and it's considered to be bad style to
use them when they're not necessary. So every macro in that code is there
because it has to be. What that means is that at least 20-25% of the code in
this program is doing things that you can't easily do in any other language.
However skeptical the Blub programmer might be about my claims for the
mysterious powers of Lisp, this ought to make him curious. We weren't writing
this code for our own amusement. We were a tiny startup, programming as hard
as we could in order to put technical barriers between us and our competitors.
A suspicious person might begin to wonder if there was some correlation here.
A big chunk of our code was doing things that are very hard to do in other
languages. The resulting software did things our competitors' software
couldn't do. Maybe there was some kind of connection. I encourage you to
follow that thread. There may be more to that old man hobbling along on his
crutches than meets the eye.
**Aikido for Startups**
But I don't expect to convince anyone (over 25) to go out and learn Lisp. The
purpose of this article is not to change anyone's mind, but to reassure people
already interested in using Lisp-- people who know that Lisp is a powerful
language, but worry because it isn't widely used. In a competitive situation,
that's an advantage. Lisp's power is multiplied by the fact that your
competitors don't get it.
If you think of using Lisp in a startup, you shouldn't worry that it isn't
widely understood. You should hope that it stays that way. And it's likely to.
It's the nature of programming languages to make most people satisfied with
whatever they currently use. Computer hardware changes so much faster than
personal habits that programming practice is usually ten to twenty years
behind the processor. At places like MIT they were writing programs in high-
level languages in the early 1960s, but many companies continued to write code
in machine language well into the 1980s. I bet a lot of people continued to
write machine language until the processor, like a bartender eager to close up
and go home, finally kicked them out by switching to a RISC instruction set.
Ordinarily technology changes fast. But programming languages are different:
programming languages are not just technology, but what programmers think in.
They're half technology and half religion. And so the median language,
meaning whatever language the median programmer uses, moves as slowly as an
iceberg. Garbage collection, introduced by Lisp in about 1960, is now widely
considered to be a good thing. Runtime typing, ditto, is growing in
popularity. Lexical closures, introduced by Lisp in the early 1970s, are now,
just barely, on the radar screen. Macros, introduced by Lisp in the mid-1960s,
are still terra incognita.
Obviously, the median language has enormous momentum. I'm not proposing that
you can fight this powerful force. What I'm proposing is exactly the opposite:
that, like a practitioner of Aikido, you can use it against your opponents.
If you work for a big company, this may not be easy. You will have a hard time
convincing the pointy-haired boss to let you build things in Lisp, when he has
just read in the paper that some other language is poised, like Ada was twenty
years ago, to take over the world. But if you work for a startup that doesn't
have pointy-haired bosses yet, you can, as we did, turn the Blub paradox to
your advantage: you can use technology that your competitors, glued immovably
to the median language, will never be able to match.
If you ever do find yourself working for a startup, here's a handy tip for
evaluating competitors. Read their job listings. Everything else on their site
may be stock photos or the prose equivalent, but the job listings have to be
specific about what they want, or they'll get the wrong candidates.
During the years we worked on Viaweb I read a lot of job descriptions. A new
competitor seemed to emerge out of the woodwork every month or so. The first
thing I would do, after checking to see if they had a live online demo, was
look at their job listings. After a couple years of this I could tell which
companies to worry about and which not to. The more of an IT flavor the job
descriptions had, the less dangerous the company was. The safest kind were the
ones that wanted Oracle experience. You never had to worry about those. You
were also safe if they said they wanted C++ or Java developers. If they wanted
Perl or Python programmers, that would be a bit frightening-- that's starting
to sound like a company where the technical side, at least, is run by real
hackers. If I had ever seen a job posting looking for Lisp hackers, I would
have been really worried.
July 2023
If you collected lists of techniques for doing great work in a lot of
different fields, what would the intersection look like? I decided to find out
by making it.
Partly my goal was to create a guide that could be used by someone working in
any field. But I was also curious about the shape of the intersection. And one
thing this exercise shows is that it does have a definite shape; it's not just
a point labelled "work hard."
The following recipe assumes you're very ambitious.
The first step is to decide what to work on. The work you choose needs to have
three qualities: it has to be something you have a natural aptitude for, that
you have a deep interest in, and that offers scope to do great work.
In practice you don't have to worry much about the third criterion. Ambitious
people are if anything already too conservative about it. So all you need to
do is find something you have an aptitude for and great interest in.
That sounds straightforward, but it's often quite difficult. When you're young
you don't know what you're good at or what different kinds of work are like.
Some kinds of work you end up doing may not even exist yet. So while some
people know what they want to do at 14, most have to figure it out.
The way to figure out what to work on is by working. If you're not sure what
to work on, guess. But pick something and get going. You'll probably guess
wrong some of the time, but that's fine. It's good to know about multiple
things; some of the biggest discoveries come from noticing connections between
different fields.
Develop a habit of working on your own projects. Don't let "work" mean
something other people tell you to do. If you do manage to do great work one
day, it will probably be on a project of your own. It may be within some
bigger project, but you'll be driving your part of it.
What should your projects be? Whatever seems to you excitingly ambitious. As
you grow older and your taste in projects evolves, exciting and important will
converge. At 7 it may seem excitingly ambitious to build huge things out of
Lego, then at 14 to teach yourself calculus, till at 21 you're starting to
explore unanswered questions in physics. But always preserve excitingness.
There's a kind of excited curiosity that's both the engine and the rudder of
great work. It will not only drive you, but if you let it have its way, will
also show you what to work on.
What are you excessively curious about — curious to a degree that would bore
most other people? That's what you're looking for.
Once you've found something you're excessively interested in, the next step is
to learn enough about it to get you to one of the frontiers of knowledge.
Knowledge expands fractally, and from a distance its edges look smooth, but
once you learn enough to get close to one, they turn out to be full of gaps.
The next step is to notice them. This takes some skill, because your brain
wants to ignore such gaps in order to make a simpler model of the world. Many
discoveries have come from asking questions about things that everyone else
took for granted.
If the answers seem strange, so much the better. Great work often has a
tincture of strangeness. You see this from painting to math. It would be
affected to try to manufacture it, but if it appears, embrace it.
Boldly chase outlier ideas, even if other people aren't interested in them —
in fact, especially if they aren't. If you're excited about some possibility
that everyone else ignores, and you have enough expertise to say precisely
what they're all overlooking, that's as good a bet as you'll find.
Four steps: choose a field, learn enough to get to the frontier, notice gaps,
explore promising ones. This is how practically everyone who's done great work
has done it, from painters to physicists.
Steps two and four will require hard work. It may not be possible to prove
that you have to work hard to do great things, but the empirical evidence is
on the scale of the evidence for mortality. That's why it's essential to work
on something you're deeply interested in. Interest will drive you to work
harder than mere diligence ever could.
The three most powerful motives are curiosity, delight, and the desire to do
something impressive. Sometimes they converge, and that combination is the
most powerful of all.
The big prize is to discover a new fractal bud. You notice a crack in the
surface of knowledge, pry it open, and there's a whole world inside.
Let's talk a little more about the complicated business of figuring out what
to work on. The main reason it's hard is that you can't tell what most kinds
of work are like except by doing them. Which means the four steps overlap: you
may have to work at something for years before you know how much you like it
or how good you are at it. And in the meantime you're not doing, and thus not
learning about, most other kinds of work. So in the worst case you choose late
based on very incomplete information.
The nature of ambition exacerbates this problem. Ambition comes in two forms,
one that precedes interest in the subject and one that grows out of it. Most
people who do great work have a mix, and the more you have of the former, the
harder it will be to decide what to do.
The educational systems in most countries pretend it's easy. They expect you
to commit to a field long before you could know what it's really like. And as
a result an ambitious person on an optimal trajectory will often read to the
system as an instance of breakage.
It would be better if they at least admitted it — if they admitted that the
system not only can't do much to help you figure out what to work on, but is
designed on the assumption that you'll somehow magically guess as a teenager.
They don't tell you, but I will: when it comes to figuring out what to work
on, you're on your own. Some people get lucky and do guess correctly, but the
rest will find themselves scrambling diagonally across tracks laid down on the
assumption that everyone does.
What should you do if you're young and ambitious but don't know what to work
on? What you should not do is drift along passively, assuming the problem will
solve itself. You need to take action. But there is no systematic procedure
you can follow. When you read biographies of people who've done great work,
it's remarkable how much luck is involved. They discover what to work on as a
result of a chance meeting, or by reading a book they happen to pick up. So
you need to make yourself a big target for luck, and the way to do that is to
be curious. Try lots of things, meet lots of people, read lots of books, ask
lots of questions.
When in doubt, optimize for interestingness. Fields change as you learn more
about them. What mathematicians do, for example, is very different from what
you do in high school math classes. So you need to give different types of
work a chance to show you what they're like. But a field should become
_increasingly_ interesting as you learn more about it. If it doesn't, it's
probably not for you.
Don't worry if you find you're interested in different things than other
people. The stranger your tastes in interestingness, the better. Strange
tastes are often strong ones, and a strong taste for work means you'll be
productive. And you're more likely to find new things if you're looking where
few have looked before.
One sign that you're suited for some kind of work is when you like even the
parts that other people find tedious or frightening.
But fields aren't people; you don't owe them any loyalty. If in the course of
working on one thing you discover another that's more exciting, don't be
afraid to switch.
If you're making something for people, make sure it's something they actually
want. The best way to do this is to make something you yourself want. Write
the story you want to read; build the tool you want to use. Since your friends
probably have similar interests, this will also get you your initial audience.
This _should_ follow from the excitingness rule. Obviously the most exciting
story to write will be the one you want to read. The reason I mention this
case explicitly is that so many people get it wrong. Instead of making what
they want, they try to make what some imaginary, more sophisticated audience
wants. And once you go down that route, you're lost.
There are a lot of forces that will lead you astray when you're trying to
figure out what to work on. Pretentiousness, fashion, fear, money, politics,
other people's wishes, eminent frauds. But if you stick to what you find
genuinely interesting, you'll be proof against all of them. If you're
interested, you're not astray.
Following your interests may sound like a rather passive strategy, but in
practice it usually means following them past all sorts of obstacles. You
usually have to risk rejection and failure. So it does take a good deal of
boldness.
But while you need boldness, you don't usually need much planning. In most
cases the recipe for doing great work is simply: work hard on excitingly
ambitious projects, and something good will come of it. Instead of making a
plan and then executing it, you just try to preserve certain invariants.
The trouble with planning is that it only works for achievements you can
describe in advance. You can win a gold medal or get rich by deciding to as a
child and then tenaciously pursuing that goal, but you can't discover natural
selection that way.
I think for most people who want to do great work, the right strategy is not
to plan too much. At each stage do whatever seems most interesting and gives
you the best options for the future. I call this approach "staying upwind."
This is how most people who've done great work seem to have done it.
Even when you've found something exciting to work on, working on it is not
always straightforward. There will be times when some new idea makes you leap
out of bed in the morning and get straight to work. But there will also be
plenty of times when things aren't like that.
You don't just put out your sail and get blown forward by inspiration. There
are headwinds and currents and hidden shoals. So there's a technique to
working, just as there is to sailing.
For example, while you must work hard, it's possible to work too hard, and if
you do that you'll find you get diminishing returns: fatigue will make you
stupid, and eventually even damage your health. The point at which work yields
diminishing returns depends on the type. Some of the hardest types you might
only be able to do for four or five hours a day.
Ideally those hours will be contiguous. To the extent you can, try to arrange
your life so you have big blocks of time to work in. You'll shy away from hard
tasks if you know you might be interrupted.
It will probably be harder to start working than to keep working. You'll often
have to trick yourself to get over that initial threshold. Don't worry about
this; it's the nature of work, not a flaw in your character. Work has a sort
of activation energy, both per day and per project. And since this threshold
is fake in the sense that it's higher than the energy required to keep going,
it's ok to tell yourself a lie of corresponding magnitude to get over it.
It's usually a mistake to lie to yourself if you want to do great work, but
this is one of the rare cases where it isn't. When I'm reluctant to start work
in the morning, I often trick myself by saying "I'll just read over what I've
got so far." Five minutes later I've found something that seems mistaken or
incomplete, and I'm off.
Similar techniques work for starting new projects. It's ok to lie to yourself
about how much work a project will entail, for example. Lots of great things
began with someone saying "How hard could it be?"
This is one case where the young have an advantage. They're more optimistic,
and even though one of the sources of their optimism is ignorance, in this
case ignorance can sometimes beat knowledge.
Try to finish what you start, though, even if it turns out to be more work
than you expected. Finishing things is not just an exercise in tidiness or
self-discipline. In many projects a lot of the best work happens in what was
meant to be the final stage.
Another permissible lie is to exaggerate the importance of what you're working
on, at least in your own mind. If that helps you discover something new, it
may turn out not to have been a lie after all.
Since there are two senses of starting work — per day and per project — there
are also two forms of procrastination. Per-project procrastination is far the
more dangerous. You put off starting that ambitious project from year to year
because the time isn't quite right. When you're procrastinating in units of
years, you can get a lot not done.
One reason per-project procrastination is so dangerous is that it usually
camouflages itself as work. You're not just sitting around doing nothing;
you're working industriously on something else. So per-project procrastination
doesn't set off the alarms that per-day procrastination does. You're too busy
to notice it.
The way to beat it is to stop occasionally and ask yourself: Am I working on
what I most want to work on? When you're young it's ok if the answer is
sometimes no, but this gets increasingly dangerous as you get older.
Great work usually entails spending what would seem to most people an
unreasonable amount of time on a problem. You can't think of this time as a
cost, or it will seem too high. You have to find the work sufficiently
engaging as it's happening.
There may be some jobs where you have to work diligently for years at things
you hate before you get to the good part, but this is not how great work
happens. Great work happens by focusing consistently on something you're
genuinely interested in. When you pause to take stock, you're surprised how
far you've come.
The reason we're surprised is that we underestimate the cumulative effect of
work. Writing a page a day doesn't sound like much, but if you do it every day
you'll write a book a year. That's the key: consistency. People who do great
things don't get a lot done every day. They get something done, rather than
nothing.
If you do work that compounds, you'll get exponential growth. Most people who
do this do it unconsciously, but it's worth stopping to think about. Learning,
for example, is an instance of this phenomenon: the more you learn about
something, the easier it is to learn more. Growing an audience is another: the
more fans you have, the more new fans they'll bring you.
The trouble with exponential growth is that the curve feels flat in the
beginning. It isn't; it's still a wonderful exponential curve. But we can't
grasp that intuitively, so we underrate exponential growth in its early
stages.
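The arithmetic is worth seeing once, with illustrative numbers of my own
rather than the essay's: improving something by 1% a day looks negligible for
the first month, yet compounds to roughly 37x over a year.

    ; Compounding at 1% a day: flat-feeling start, huge finish.
    (expt 1.01 30)   ; => ~1.35   after a month
    (expt 1.01 365)  ; => ~37.8   after a year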
Something that grows exponentially can become so valuable that it's worth
making an extraordinary effort to get it started. But since we underrate
exponential growth early on, this too is mostly done unconsciously: people
push through the initial, unrewarding phase of learning something new because
they know from experience that learning new things always takes an initial
push, or they grow their audience one fan at a time because they have nothing
better to do. If people consciously realized they could invest in exponential
growth, many more would do it.
Work doesn't just happen when you're trying to. There's a kind of undirected
thinking you do when walking or taking a shower or lying in bed that can be
very powerful. By letting your mind wander a little, you'll often solve
problems you were unable to solve by frontal attack.
You have to be working hard in the normal way to benefit from this phenomenon,
though. You can't just walk around daydreaming. The daydreaming has to be
interleaved with deliberate work that feeds it questions.
Everyone knows to avoid distractions at work, but it's also important to avoid
them in the other half of the cycle. When you let your mind wander, it wanders
to whatever you care about most at that moment. So avoid the kind of
distraction that pushes your work out of the top spot, or you'll waste this
valuable type of thinking on the distraction instead. (Exception: Don't avoid
love.)
Consciously cultivate your taste in the work done in your field. Until you
know which is the best and what makes it so, you don't know what you're aiming
for.
And that _is_ what you're aiming for, because if you don't try to be the best,
you won't even be good. This observation has been made by so many people in so
many different fields that it might be worth thinking about why it's true. It
could be because ambition is a phenomenon where almost all the error is in one
direction — where almost all the shells that miss the target miss by falling
short. Or it could be because ambition to be the best is a qualitatively
different thing from ambition to be good. Or maybe being good is simply too
vague a standard. Probably all three are true.
Fortunately there's a kind of economy of scale here. Though it might seem like
you'd be taking on a heavy burden by trying to be the best, in practice you
often end up net ahead. It's exciting, and also strangely liberating. It
simplifies things. In some ways it's easier to try to be the best than to try
merely to be good.
One way to aim high is to try to make something that people will care about in
a hundred years. Not because their opinions matter more than your
contemporaries', but because something that still seems good in a hundred
years is more likely to be genuinely good.
Don't try to work in a distinctive style. Just try to do the best job you can;
you won't be able to help doing it in a distinctive way.
Style is doing things in a distinctive way without trying to. Trying to is
affectation.
Affectation is in effect to pretend that someone other than you is doing the
work. You adopt an impressive but fake persona, and while you're pleased with
the impressiveness, the fakeness is what shows in the work.
The temptation to be someone else is greatest for the young. They often feel
like nobodies. But you never need to worry about that problem, because it's
self-solving if you work on sufficiently ambitious projects. If you succeed at
an ambitious project, you're not a nobody; you're the person who did it. So
just do the work and your identity will take care of itself.
"Avoid affectation" is a useful rule so far as it goes, but how would you
express this idea positively? How would you say what to be, instead of what
not to be? The best answer is earnest. If you're earnest you avoid not just
affectation but a whole set of similar vices.
The core of being earnest is being intellectually honest. We're taught as
children to be honest as an unselfish virtue — as a kind of sacrifice. But in
fact it's a source of power too. To see new ideas, you need an exceptionally
sharp eye for the truth. You're trying to see more truth than others have seen
so far. And how can you have a sharp eye for the truth if you're
intellectually dishonest?
One way to avoid intellectual dishonesty is to maintain a slight positive
pressure in the opposite direction. Be aggressively willing to admit that
you're mistaken. Once you've admitted you were mistaken about something,
you're free. Till then you have to carry it.
Another more subtle component of earnestness is informality. Informality is
much more important than its grammatically negative name implies. It's not
merely the absence of something. It means focusing on what matters instead of
what doesn't.
What formality and affectation have in common is that as well as doing the
work, you're trying to seem a certain way as you're doing it. But any energy
that goes into how you seem comes out of being good. That's one reason nerds
have an advantage in doing great work: they expend little effort on seeming
anything. In fact that's basically the definition of a nerd.
Nerds have a kind of innocent boldness that's exactly what you need in doing
great work. It's not learned; it's preserved from childhood. So hold onto it.
Be the one who puts things out there rather than the one who sits back and
offers sophisticated-sounding criticisms of them. "It's easy to criticize" is
true in the most literal sense, and the route to great work is never easy.
There may be some jobs where it's an advantage to be cynical and pessimistic,
but if you want to do great work it's an advantage to be optimistic, even
though that means you'll risk looking like a fool sometimes. There's an old
tradition of doing the opposite. The Old Testament says it's better to keep
quiet lest you look like a fool. But that's advice for _seeming_ smart. If you
actually want to discover new things, it's better to take the risk of telling
people your ideas.
Some people are naturally earnest, and with others it takes a conscious
effort. Either kind of earnestness will suffice. But I doubt it would be
possible to do great work without being earnest. It's so hard to do even if
you are. You don't have enough margin for error to accommodate the distortions
introduced by being affected, intellectually dishonest, orthodox, fashionable,
or cool.
Great work is consistent not only with who did it, but with itself. It's
usually all of a piece. So if you face a decision in the middle of working on
something, ask which choice is more consistent.
You may have to throw things away and redo them. You won't necessarily have
to, but you have to be willing to. And that can take some effort; when there's
something you need to redo, status quo bias and laziness will combine to keep
you in denial about it. To beat this ask: If I'd already made the change,
would I want to revert to what I have now?
Have the confidence to cut. Don't keep something that doesn't fit just because
you're proud of it, or because it cost you a lot of effort.
Indeed, in some kinds of work it's good to strip whatever you're doing to its
essence. The result will be more concentrated; you'll understand it better;
and you won't be able to lie to yourself about whether there's anything real
there.
Mathematical elegance may sound like a mere metaphor, drawn from the arts.
That's what I thought when I first heard the term "elegant" applied to a
proof. But now I suspect it's conceptually prior — that the main ingredient in
artistic elegance is mathematical elegance. At any rate it's a useful standard
well beyond math.
Elegance can be a long-term bet, though. Laborious solutions will often have
more prestige in the short term. They cost a lot of effort and they're hard to
understand, both of which impress people, at least temporarily.
Whereas some of the very best work will seem like it took comparatively little
effort, because it was in a sense already there. It didn't have to be built,
just seen. It's a very good sign when it's hard to say whether you're creating
something or discovering it.
When you're doing work that could be seen as either creation or discovery, err
on the side of discovery. Try thinking of yourself as a mere conduit through
which the ideas take their natural shape.
(Strangely enough, one exception is the problem of choosing a problem to work
on. This is usually seen as search, but in the best case it's more like
creating something. In the best case you create the field in the process of
exploring it.)
Similarly, if you're trying to build a powerful tool, make it gratuitously
unrestrictive. A powerful tool almost by definition will be used in ways you
didn't expect, so err on the side of eliminating restrictions, even if you
don't know what the benefit will be.
Great work will often be tool-like in the sense of being something others
build on. So it's a good sign if you're creating ideas that others could use,
or exposing questions that others could answer. The best ideas have
implications in many different areas.
If you express your ideas in the most general form, they'll be truer than you
intended.
True by itself is not enough, of course. Great ideas have to be true and new.
And it takes a certain amount of ability to see new ideas even once you've
learned enough to get to one of the frontiers of knowledge.
In English we give this ability names like originality, creativity, and
imagination. And it seems reasonable to give it a separate name, because it
does seem to some extent a separate skill. It's possible to have a great deal
of ability in other respects — to have a great deal of what's often called
"technical ability" — and yet not have much of this.
I've never liked the term "creative process." It seems misleading. Originality
isn't a process, but a habit of mind. Original thinkers throw off new ideas
about whatever they focus on, like an angle grinder throwing off sparks. They
can't help it.
If the thing they're focused on is something they don't understand very well,
these new ideas might not be good. One of the most original thinkers I know
decided to focus on dating after he got divorced. He knew roughly as much
about dating as the average 15-year-old, and the results were spectacularly
colorful. But to see originality separated from expertise like that made its
nature all the more clear.
I don't know if it's possible to cultivate originality, but there are
definitely ways to make the most of however much you have. For example, you're
much more likely to have original ideas when you're working on something.
Original ideas don't come from trying to have original ideas. They come from
trying to build or understand something slightly too difficult.
Talking or writing about the things you're interested in is a good way to
generate new ideas. When you try to put ideas into words, a missing idea
creates a sort of vacuum that draws it out of you. Indeed, there's a kind of
thinking that can only be done by writing.
Changing your context can help. If you visit a new place, you'll often find
you have new ideas there. The journey itself often dislodges them. But you may
not have to go far to get this benefit. Sometimes it's enough just to go for a
walk.
It also helps to travel in topic space. You'll have more new ideas if you
explore lots of different topics, partly because it gives the angle grinder
more surface area to work on, and partly because analogies are an especially
fruitful source of new ideas.
Don't divide your attention _evenly_ between many topics though, or you'll
spread yourself too thin. You want to distribute it according to something
more like a power law. Be professionally curious about a few topics and
idly curious about many more.
Curiosity and originality are closely related. Curiosity feeds originality by
giving it new things to work on. But the relationship is closer than that.
Curiosity is itself a kind of originality; it's roughly to questions what
originality is to answers. And since questions at their best are a big
component of answers, curiosity at its best is a creative force.
Having new ideas is a strange game, because it usually consists of seeing
things that were right under your nose. Once you've seen a new idea, it tends
to seem obvious. Why did no one think of this before?
When an idea seems simultaneously novel and obvious, it's probably a good one.
Seeing something obvious sounds easy. And yet empirically having new ideas is
hard. What's the source of this apparent contradiction? It's that seeing the
new idea usually requires you to change the way you look at the world. We see
the world through models that both help and constrain us. When you fix a
broken model, new ideas become obvious. But noticing and fixing a broken model
is hard. That's how new ideas can be both obvious and yet hard to discover:
they're easy to see after you do something hard.
One way to discover broken models is to be stricter than other people. Broken
models of the world leave a trail of clues where they bash against reality.
Most people don't want to see these clues. It would be an understatement to
say that they're attached to their current model; it's what they think in; so
they'll tend to ignore the trail of clues left by its breakage, however
conspicuous it may seem in retrospect.
To find new ideas you have to seize on signs of breakage instead of looking
away. That's what Einstein did. He was able to see the wild implications of
Maxwell's equations not so much because he was looking for new ideas as
because he was stricter.
The other thing you need is a willingness to break rules. Paradoxical as it
sounds, if you want to fix your model of the world, it helps to be the sort of
person who's comfortable breaking rules. From the point of view of the old
model, which everyone including you initially shares, the new model usually
breaks at least implicit rules.
Few understand the degree of rule-breaking required, because new ideas seem
much more conservative once they succeed. They seem perfectly reasonable once
you're using the new model of the world they brought with them. But they
didn't at the time; it took the greater part of a century for the heliocentric
model to be generally accepted, even among astronomers, because it felt so
wrong.
Indeed, if you think about it, a good new idea has to seem bad to most people,
or someone would have already explored it. So what you're looking for is ideas
that seem crazy, but the right kind of crazy. How do you recognize these? You
can't with certainty. Often ideas that seem bad are bad. But ideas that are
the right kind of crazy tend to be exciting; they're rich in implications;
whereas ideas that are merely bad tend to be depressing.
There are two ways to be comfortable breaking rules: to enjoy breaking them,
and to be indifferent to them. I call these two cases being aggressively and
passively independent-minded.
The aggressively independent-minded are the naughty ones. Rules don't merely
fail to stop them; breaking rules gives them additional energy. For this sort
of person, delight at the sheer audacity of a project sometimes supplies
enough activation energy to get it started.
The other way to break rules is not to care about them, or perhaps even to
know they exist. This is why novices and outsiders often make new discoveries;
their ignorance of a field's assumptions acts as a source of temporary passive
independent-mindedness. Aspies also seem to have a kind of immunity to
conventional beliefs. Several I know say that this helps them to have new
ideas.
Strictness plus rule-breaking sounds like a strange combination. In popular
culture they're opposed. But popular culture has a broken model in this
respect. It implicitly assumes that issues are trivial ones, and in trivial
matters strictness and rule-breaking _are_ opposed. But in questions that
really matter, only rule-breakers can be truly strict.
An overlooked idea often doesn't lose till the semifinals. You do see it,
subconsciously, but then another part of your subconscious shoots it down
because it would be too weird, too risky, too much work, too controversial.
This suggests an exciting possibility: if you could turn off such filters, you
could see more new ideas.
One way to do that is to ask what would be good ideas for _someone else_ to
explore. Then your subconscious won't shoot them down to protect you.
You could also discover overlooked ideas by working in the other direction: by
starting from what's obscuring them. Every cherished but mistaken principle is
surrounded by a dead zone of valuable ideas that are unexplored because they
contradict it.
Religions are collections of cherished but mistaken principles. So anything
that can be described either literally or metaphorically as a religion will
have valuable unexplored ideas in its shadow. Copernicus and Darwin both made
discoveries of this type.
What are people in your field religious about, in the sense of being too
attached to some principle that might not be as self-evident as they think?
What becomes possible if you discard it?
People show much more originality in solving problems than in deciding which
problems to solve. Even the smartest can be surprisingly conservative when
deciding what to work on. People who'd never dream of being fashionable in any
other way get sucked into working on fashionable problems.
One reason people are more conservative when choosing problems than solutions
is that problems are bigger bets. A problem could occupy you for years, while
exploring a solution might only take days. But even so I think most people are
too conservative. They're not merely responding to risk, but to fashion as
well. Unfashionable problems are undervalued.
One of the most interesting kinds of unfashionable problem is the problem that
people think has been fully explored, but hasn't. Great work often takes
something that already exists and shows its latent potential. Dürer and Watt
both did this. So if you're interested in a field that others think is tapped
out, don't let their skepticism deter you. People are often wrong about this.
Working on an unfashionable problem can be very pleasing. There's no hype or
hurry. Opportunists and critics are both occupied elsewhere. The existing work
often has an old-school solidity. And there's a satisfying sense of economy in
cultivating ideas that would otherwise be wasted.
But the most common type of overlooked problem is not explicitly unfashionable
in the sense of being out of fashion. It just doesn't seem to matter as much
as it actually does. How do you find these? By being self-indulgent — by
letting your curiosity have its way, and tuning out, at least temporarily, the
little voice in your head that says you should only be working on "important"
problems.
You do need to work on important problems, but almost everyone is too
conservative about what counts as one. And if there's an important but
overlooked problem in your neighborhood, it's probably already on your
subconscious radar screen. So try asking yourself: if you were going to take a
break from "serious" work to work on something just because it would be really
interesting, what would you do? The answer is probably more important than it
seems.
Originality in choosing problems seems to matter even more than originality in
solving them. That's what distinguishes the people who discover whole new
fields. So what might seem to be merely the initial step — deciding what to
work on — is in a sense the key to the whole game.
Few grasp this. One of the biggest misconceptions about new ideas is about the
ratio of question to answer in their composition. People think big ideas are
answers, but often the real insight was in the question.
Part of the reason we underrate questions is the way they're used in schools.
In schools they tend to exist only briefly before being answered, like
unstable particles. But a really good question can be much more than that. A
really good question is a partial discovery. How do new species arise? Is the
force that makes objects fall to earth the same as the one that keeps planets
in their orbits? By even asking such questions you were already in excitingly
novel territory.
Unanswered questions can be uncomfortable things to carry around with you. But
the more you're carrying, the greater the chance of noticing a solution — or
perhaps even more excitingly, noticing that two unanswered questions are the
same.
Sometimes you carry a question for a long time. Great work often comes from
returning to a question you first noticed years before — in your childhood,
even — and couldn't stop thinking about. People talk a lot about the
importance of keeping your youthful dreams alive, but it's just as important
to keep your youthful questions alive.
This is one of the places where actual expertise differs most from the popular
picture of it. In the popular picture, experts are certain. But actually the
more puzzled you are, the better, so long as (a) the things you're puzzled
about matter, and (b) no one else understands them either.
Think about what's happening at the moment just before a new idea is
discovered. Often someone with sufficient expertise is puzzled about
something. Which means that originality consists partly of puzzlement — of
confusion! You have to be comfortable enough with the world being full of
puzzles that you're willing to see them, but not so comfortable that you don't
want to solve them.
It's a great thing to be rich in unanswered questions. And this is one of
those situations where the rich get richer, because the best way to acquire
new questions is to try answering existing ones. Questions don't just lead to
answers, but also to more questions.
The best questions grow in the answering. You notice a thread protruding from
the current paradigm and try pulling on it, and it just gets longer and
longer. So don't require a question to be obviously big before you try
answering it. You can rarely predict that. It's hard enough even to notice the
thread, let alone to predict how much will unravel if you pull on it.
It's better to be promiscuously curious — to pull a little bit on a lot of
threads, and see what happens. Big things start small. The initial versions of
big things were often just experiments, or side projects, or talks, which then
grew into something bigger. So start lots of small things.
Being prolific is underrated. The more different things you try, the greater
the chance of discovering something new. Understand, though, that trying lots
of things will mean trying lots of things that don't work. You can't have a
lot of good ideas without also having a lot of bad ones.
Though it sounds more responsible to begin by studying everything that's been
done before, you'll learn faster and have more fun by trying stuff. And you'll
understand previous work better when you do look at it. So err on the side of
starting. Which is easier when starting means starting small; those two ideas
fit together like two puzzle pieces.
How do you get from starting small to doing something great? By making
successive versions. Great things are almost always made in successive
versions. You start with something small and evolve it, and the final version
is both cleverer and more ambitious than anything you could have planned.
It's particularly useful to make successive versions when you're making
something for people — to get an initial version in front of them quickly, and
then evolve it based on their response.
Begin by trying the simplest thing that could possibly work. Surprisingly
often, it does. If it doesn't, this will at least get you started.
Don't try to cram too much new stuff into any one version. There are names for
doing this with the first version (taking too long to ship) and the second
(the second system effect), but these are both merely instances of a more
general principle.
An early version of a new project will sometimes be dismissed as a toy. It's a
good sign when people do this. That means it has everything a new idea needs
except scale, and that tends to follow.
The alternative to starting with something small and evolving it is to plan in
advance what you're going to do. And planning does usually seem the more
responsible choice. It sounds more organized to say "we're going to do x and
then y and then z" than "we're going to try x and see what happens." And it is
more _organized_; it just doesn't work as well.
Planning per se isn't good. It's sometimes necessary, but it's a necessary
evil — a response to unforgiving conditions. It's something you have to do
because you're working with inflexible media, or because you need to
coordinate the efforts of a lot of people. If you keep projects small and use
flexible media, you don't have to plan as much, and your designs can evolve
instead.
Take as much risk as you can afford. In an efficient market, risk is
proportionate to reward, so don't look for certainty, but for a bet with high
expected value. If you're not failing occasionally, you're probably being too
conservative.
Though conservatism is usually associated with the old, it's the young who
tend to make this mistake. Inexperience makes them fear risk, but it's when
you're young that you can afford the most.
Even a project that fails can be valuable. In the process of working on it,
you'll have crossed territory few others have seen, and encountered questions
few others have asked. And there's probably no better source of questions than
the ones you encounter in trying to do something slightly too hard.
Use the advantages of youth when you have them, and the advantages of age once
you have those. The advantages of youth are energy, time, optimism, and
freedom. The advantages of age are knowledge, efficiency, money, and power.
With effort you can acquire some of the latter when young and keep some of the
former when old.
The old also have the advantage of knowing which advantages they have. The
young often have them without realizing it. The biggest is probably time. The
young have no idea how rich they are in time. The best way to turn this time
to advantage is to use it in slightly frivolous ways: to learn about something
you don't need to know about, just out of curiosity, or to try building
something just because it would be cool, or to become freakishly good at
something.
That "slightly" is an important qualification. Spend time lavishly when you're
young, but don't simply waste it. There's a big difference between doing
something you worry might be a waste of time and doing something you know for
sure will be. The former is at least a bet, and possibly a better one than you
think.
The most subtle advantage of youth, or more precisely of inexperience, is that
you're seeing everything with fresh eyes. When your brain embraces an idea for
the first time, sometimes the two don't fit together perfectly. Usually the
problem is with your brain, but occasionally it's with the idea. A piece of it
sticks out awkwardly and jabs you when you think about it. People who are used
to the idea have learned to ignore it, but you have the opportunity not to.
So when you're learning about something for the first time, pay attention to
things that seem wrong or missing. You'll be tempted to ignore them, since
there's a 99% chance the problem is with you. And you may have to set aside
your misgivings temporarily to keep progressing. But don't forget about them.
When you've gotten further into the subject, come back and check if they're
still there. If they're still viable in the light of your present knowledge,
they probably represent an undiscovered idea.
One of the most valuable kinds of knowledge you get from experience is to know
what you _don't_ have to worry about. The young know all the things that could
matter, but not their relative importance. So they worry equally about
everything, when they should worry much more about a few things and hardly at
all about the rest.
But what you don't know is only half the problem with inexperience. The other
half is what you do know that ain't so. You arrive at adulthood with your head
full of nonsense — bad habits you've acquired and false things you've been
taught — and you won't be able to do great work till you clear away at least
the nonsense in the way of whatever type of work you want to do.
Much of the nonsense left in your head is left there by schools. We're so used
to schools that we unconsciously treat going to school as identical with
learning, but in fact schools have all sorts of strange qualities that warp
our ideas about learning and thinking.
For example, schools induce passivity. Since you were a small child, there was
an authority at the front of the class telling all of you what you had to
learn and then measuring whether you did. But neither classes nor tests are
intrinsic to learning; they're just artifacts of the way schools are usually
designed.
The sooner you overcome this passivity, the better. If you're still in school,
try thinking of your education as your project, and your teachers as working
for you rather than vice versa. That may seem a stretch, but it's not merely
some weird thought experiment. It's the truth, economically, and in the best
case it's the truth intellectually as well. The best teachers don't want to be
your bosses. They'd prefer it if you pushed ahead, using them as a source of
advice, rather than being pulled by them through the material.
Schools also give you a misleading impression of what work is like. In school
they tell you what the problems are, and they're almost always soluble using
no more than you've been taught so far. In real life you have to figure out
what the problems are, and you often don't know if they're soluble at all.
But perhaps the worst thing schools do to you is train you to win by hacking
the test. You can't do great work by doing that. You can't trick God. So stop
looking for that kind of shortcut. The way to beat the system is to focus on
problems and solutions that others have overlooked, not to skimp on the work
itself.
Don't think of yourself as dependent on some gatekeeper giving you a "big
break." Even if this were true, the best way to get it would be to focus on
doing good work rather than chasing influential people.
And don't take rejection by committees to heart. The qualities that impress
admissions officers and prize committees are quite different from those
required to do great work. The decisions of selection committees are only
meaningful to the extent that they're part of a feedback loop, and very few
are.
People new to a field will often copy existing work. There's nothing
inherently bad about that. There's no better way to learn how something works
than by trying to reproduce it. Nor does copying necessarily make your work
unoriginal. Originality is the presence of new ideas, not the absence of old
ones.
There's a good way to copy and a bad way. If you're going to copy something,
do it openly instead of furtively, or worse still, unconsciously. This is
what's meant by the famously misattributed phrase "Great artists steal." The
really dangerous kind of copying, the kind that gives copying a bad name, is
the kind that's done without realizing it, because you're nothing more than a
train running on tracks laid down by someone else. But at the other extreme,
copying can be a sign of superiority rather than subordination.
In many fields it's almost inevitable that your early work will be in some
sense based on other people's. Projects rarely arise in a vacuum. They're
usually a reaction to previous work. When you're first starting out, you don't
have any previous work; if you're going to react to something, it has to be
someone else's. Once you're established, you can react to your own. But while
the former gets called derivative and the latter doesn't, structurally the two
cases are more similar than they seem.
Oddly enough, the very novelty of the most novel ideas sometimes makes them
seem at first to be more derivative than they are. New discoveries often have
to be conceived initially as variations of existing things, _even by their
discoverers_, because there isn't yet the conceptual vocabulary to express
them.
There are definitely some dangers to copying, though. One is that you'll tend
to copy old things — things that were in their day at the frontier of
knowledge, but no longer are.
And when you do copy something, don't copy every feature of it. Some will make
you ridiculous if you do. Don't copy the manner of an eminent 50 year old
professor if you're 18, for example, or the idiom of a Renaissance poem
hundreds of years later.
Some of the features of things you admire are flaws they succeeded despite.
Indeed, the features that are easiest to imitate are the most likely to be the
flaws.
This is particularly true for behavior. Some talented people are jerks, and
this sometimes makes it seem to the inexperienced that being a jerk is part of
being talented. It isn't; being talented is merely how they get away with it.
One of the most powerful kinds of copying is to copy something from one field
into another. History is so full of chance discoveries of this type that it's
probably worth giving chance a hand by deliberately learning about other kinds
of work. You can take ideas from quite distant fields if you let them be
metaphors.
Negative examples can be as inspiring as positive ones. In fact you can
sometimes learn more from things done badly than from things done well;
sometimes it only becomes clear what's needed when it's missing.
If a lot of the best people in your field are collected in one place, it's
usually a good idea to visit for a while. It will increase your ambition, and
also, by showing you that these people are human, increase your self-
confidence.
If you're earnest you'll probably get a warmer welcome than you might expect.
Most people who are very good at something are happy to talk about it with
anyone who's genuinely interested. If they're really good at their work, then
they probably have a hobbyist's interest in it, and hobbyists always want to
talk about their hobbies.
It may take some effort to find the people who are really good, though. Doing
great work has such prestige that in some places, particularly universities,
there's a polite fiction that everyone is engaged in it. And that is far from
true. People within universities can't say so openly, but the quality of the
work being done in different departments varies immensely. Some departments
have people doing great work; others have in the past; others never have.
Seek out the best colleagues. There are a lot of projects that can't be done
alone, and even if you're working on one that can be, it's good to have other
people to encourage you and to bounce ideas off.
Colleagues don't just affect your work, though; they also affect you. So work
with people you want to become like, because you will.
Quality is more important than quantity in colleagues. It's better to have one
or two great ones than a building full of pretty good ones. In fact it's not
merely better, but necessary, judging from history: the degree to which great
work happens in clusters suggests that one's colleagues often make the
difference between doing great work and not.
How do you know when you have sufficiently good colleagues? In my experience,
when you do, you know. Which means if you're unsure, you probably don't. But
it may be possible to give a more concrete answer than that. Here's an
attempt: sufficiently good colleagues offer _surprising_ insights. They can
see and do things that you can't. So if you have a handful of colleagues good
enough to keep you on your toes in this sense, you're probably over the
threshold.
Most of us can benefit from collaborating with colleagues, but some projects
require people on a larger scale, and starting one of those is not for
everyone. If you want to run a project like that, you'll have to become a
manager, and managing well takes aptitude and interest like any other kind of
work. If you don't have them, there is no middle path: you must either force
yourself to learn management as a second language, or avoid such projects.
Husband your morale. It's the basis of everything when you're working on
ambitious projects. You have to nurture and protect it like a living organism.
Morale starts with your view of life. You're more likely to do great work if
you're an optimist, and more likely to if you think of yourself as lucky than
if you think of yourself as a victim.
Indeed, work can to some extent protect you from your problems. If you choose
work that's pure, its very difficulties will serve as a refuge from the
difficulties of everyday life. If this is escapism, it's a very productive
form of it, and one that has been used by some of the greatest minds in
history.
Morale compounds via work: high morale helps you do good work, which increases
your morale and helps you do even better work. But this cycle also operates in
the other direction: if you're not doing good work, that can demoralize you
and make it even harder to. Since it matters so much for this cycle to be
running in the right direction, it can be a good idea to switch to easier work
when you're stuck, just so you start to get something done.
One of the biggest mistakes ambitious people make is to allow setbacks to
destroy their morale all at once, like a balloon bursting. You can inoculate
yourself against this by explicitly considering setbacks a part of your
process. Solving hard problems always involves some backtracking.
Doing great work is a depth-first search whose root node is the desire to. So
"If at first you don't succeed, try, try again" isn't quite right. It should
be: If at first you don't succeed, either try again, or backtrack and then try
again.
"Never give up" is also not quite right. Obviously there are times when it's
the right choice to eject. A more precise version would be: Never let setbacks
panic you into backtracking more than you need to. Corollary: Never abandon
the root node.
It's not necessarily a bad sign if work is a struggle, any more than it's a
bad sign to be out of breath while running. It depends how fast you're
running. So learn to distinguish good pain from bad. Good pain is a sign of
effort; bad pain is a sign of damage.
An audience is a critical component of morale. If you're a scholar, your
audience may be your peers; in the arts, it may be an audience in the
traditional sense. Either way it doesn't need to be big. The value of an
audience doesn't grow anything like linearly with its size. Which is bad news
if you're famous, but good news if you're just starting out, because it means
a small but dedicated audience can be enough to sustain you. If a handful of
people genuinely love what you're doing, that's enough.
To the extent you can, avoid letting intermediaries come between you and your
audience. In some types of work this is inevitable, but it's so liberating to
escape it that you might be better off switching to an adjacent type if that
will let you go direct.
The people you spend time with will also have a big effect on your morale.
You'll find there are some who increase your energy and others who decrease
it, and the effect someone has is not always what you'd expect. Seek out the
people who increase your energy and avoid those who decrease it. Though of
course if there's someone you need to take care of, that takes precedence.
Don't marry someone who doesn't understand that you need to work, or sees your
work as competition for your attention. If you're ambitious, you need to work;
it's almost like a medical condition; so someone who won't let you work either
doesn't understand you, or does and doesn't care.
Ultimately morale is physical. You think with your body, so it's important to
take care of it. That means exercising regularly, eating and sleeping well,
and avoiding the more dangerous kinds of drugs. Running and walking are
particularly good forms of exercise because they're good for thinking.
People who do great work are not necessarily happier than everyone else, but
they're happier than they'd be if they didn't. In fact, if you're smart and
ambitious, it's dangerous _not_ to be productive. People who are smart and
ambitious but don't achieve much tend to become bitter.
It's ok to want to impress other people, but choose the right people. The
opinion of people you respect is signal. Fame, which is the opinion of a much
larger group you might or might not respect, just adds noise.
The prestige of a type of work is at best a trailing indicator and sometimes
completely mistaken. If you do anything well enough, you'll make it
prestigious. So the question to ask about a type of work is not how much
prestige it has, but how well it could be done.
Competition can be an effective motivator, but don't let it choose the problem
for you; don't let yourself get drawn into chasing something just because
others are. In fact, don't let competitors make you do anything much more
specific than work harder.
Curiosity is the best guide. Your curiosity never lies, and it knows more than
you do about what's worth paying attention to.
Notice how often that word has come up. If you asked an oracle the secret to
doing great work and the oracle replied with a single word, my bet would be on
"curiosity."
That doesn't translate directly to advice. It's not enough just to be curious,
and you can't command curiosity anyway. But you can nurture it and let it
drive you.
Curiosity is the key to all four steps in doing great work: it will choose the
field for you, get you to the frontier, cause you to notice the gaps in it,
and drive you to explore them. The whole process is a kind of dance with
curiosity.
Believe it or not, I tried to make this essay as short as I could. But its
length at least means it acts as a filter. If you made it this far, you must
be interested in doing great work. And if so you're already further along than
you might realize, because the set of people willing to want to is small.
The factors in doing great work are factors in the literal, mathematical
sense, and they are: ability, interest, effort, and luck. Luck by definition
you can't do anything about, so we can ignore that. And we can assume effort,
if you do in fact want to do great work. So the problem boils down to ability
and interest. Can you find a kind of work where your ability and interest will
combine to yield an explosion of new ideas?
Here there are grounds for optimism. There are so many different ways to do
great work, and even more that are still undiscovered. Out of all those
different types of work, the one you're most suited for is probably a pretty
close match. Probably a comically close match. It's just a question of finding
it, and how far into it your ability and interest can take you. And you can
only answer that by trying.
Many more people could try to do great work than do. What holds them back is a
combination of modesty and fear. It seems presumptuous to try to be Newton or
Shakespeare. It also seems hard; surely if you tried something like that,
you'd fail. Presumably the calculation is rarely explicit. Few people
consciously decide not to try to do great work. But that's what's going on
subconsciously; they shy away from the question.
So I'm going to pull a sneaky trick on you. Do you want to do great work, or
not? Now you have to decide consciously. Sorry about that. I wouldn't have
done it to a general audience. But we already know you're interested.
Don't worry about being presumptuous. You don't have to tell anyone. And if
it's too hard and you fail, so what? Lots of people have worse problems than
that. In fact you'll be lucky if it's the worst problem you have.
Yes, you'll have to work hard. But again, lots of people have to work hard.
And if you're working on something you find very interesting, which you
necessarily will if you're on the right path, the work will probably feel less
burdensome than a lot of your peers'.
The discoveries are out there, waiting to be made. Why not by you?
July 2007
An investor wants to give you money for a certain percentage of your startup.
Should you take it? You're about to hire your first employee. How much stock
should you give him?
These are some of the hardest questions founders face. And yet both have the
same answer:
1/(1 - n)
Whenever you're trading stock in your company for anything, whether it's money
or an employee or a deal with another company, the test for whether to do it
is the same. You should give up n% of your company if what you trade it for
improves your average outcome enough that the (100 - n)% you have left is
worth more than the whole company was before.
For example, if an investor wants to buy half your company, how much does that
investment have to improve your average outcome for you to break even?
Obviously it has to double: if you trade half your company for something that
more than doubles the company's average outcome, you're net ahead. You have
half as big a share of something worth more than twice as much.
In the general case, if n is the fraction of the company you're giving up, the
deal is a good one if it makes the company worth more than 1/(1 - n).
For example, suppose Y Combinator offers to fund you in return for 7% of your
company. In this case, n is .07 and 1/(1 - n) is 1.075. So you should take the
deal if you believe we can improve your average outcome by more than 7.5%. If
we improve your outcome by 10%, you're net ahead, because the remaining .93
you hold is worth .93 x 1.1 = 1.023.
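The rule is mechanical enough to put in code. Here's a minimal sketch in Python (the function name is mine, not the essay's) that reproduces the arithmetic above:

```python
def worth_taking(n, multiplier):
    """True if giving up fraction n of the company, in exchange for
    something that multiplies its average outcome by `multiplier`,
    leaves you net ahead. Break-even is multiplier == 1/(1 - n)."""
    return (1 - n) * multiplier > 1

# YC example: n = .07, so break-even is 1/.93, about 1.075.
print(worth_taking(0.07, 1.10))  # True: .93 * 1.1 = 1.023
# Sequoia example: n = .30, so break-even is 1/.7, about 1.43.
print(worth_taking(0.30, 1.40))  # False: .7 * 1.4 = 0.98, not quite enough
print(worth_taking(0.30, 1.50))  # True:  .7 * 1.5 = 1.05
```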
One of the things the equity equation shows us is that, financially at least,
taking money from a top VC firm can be a really good deal. Greg Mcadoo from
Sequoia recently said at a YC dinner that when Sequoia invests alone they like
to take about 30% of a company. 1/.7 = 1.43, meaning that deal is worth taking
if they can improve your outcome by more than 43%. For the average startup,
that would be an extraordinary bargain. It would improve the average startup's
prospects by more than 43% just to be able to _say_ they were funded by
Sequoia, even if they never actually got the money.
The reason Sequoia is such a good deal is that the percentage of the company
they take is artificially low. They don't even try to get market price for
their investment; they limit their holdings to leave the founders enough stock
to feel the company is still theirs.
The catch is that Sequoia gets about 6000 business plans a year and funds
about 20 of them, so the odds of getting this great deal are 1 in 300. The
companies that make it through are not average startups.
Of course, there are other factors to consider in a VC deal. It's never just a
straight trade of money for stock. But if it were, taking money from a top
firm would generally be a bargain.
You can use the same formula when giving stock to employees, but it works in
the other direction. If i is the average outcome for the company with the
addition of some new person, then they're worth n such that i = 1/(1 - n).
Which means n = (i - 1)/i.
For example, suppose you're just two founders and you want to hire an
additional hacker who's so good you feel he'll increase the average outcome of
the whole company by 20%. n = (1.2 - 1)/1.2 = .167. So you'll break even if
you trade 16.7% of the company for him.
That doesn't mean 16.7% is the right amount of stock to give him. Stock is not
the only cost of hiring someone: there's usually salary and overhead as well.
And if the company merely breaks even on the deal, there's no reason to do it.
I think to translate salary and overhead into stock you should multiply the
annual rate by about 1.5. Most startups grow fast or die; if you die you don't
have to pay the guy, and if you grow fast you'll be paying next year's salary
out of next year's valuation, which should be 3x this year's. If your
valuation grows 3x a year, the total cost in stock of a new hire's salary and
overhead is 1.5 years' cost at the present valuation.
How much of an additional margin should the company need as the "activation
energy" for the deal? Since this is in effect the company's profit on a hire,
the market will determine that: if you're a hot opportunity, you can charge
more.
Let's run through an example. Suppose the company wants to make a "profit" of
50% on the new hire mentioned above. So subtract a third from 16.7% and we
have 11.1% as his "retail" price. Suppose further that he's going to cost $60k
a year in salary and overhead, x 1.5 = $90k total. If the company's valuation
is $2 million, $90k is 4.5%. 11.1% - 4.5% = an offer of 6.6%.
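The whole hire calculation chains together neatly. Here's a short sketch (the function and parameter names are mine) that reproduces the 6.6% offer; note that dividing by 1.5 is the same as subtracting a third:

```python
def hire_offer(i, profit_margin, salary_overhead, valuation):
    breakeven = (i - 1) / i                    # stock at which the hire merely breaks even
    retail = breakeven / (1 + profit_margin)   # dividing by 1.5 == subtracting a third
    salary_as_stock = 1.5 * salary_overhead / valuation  # 1.5 years' cost at today's valuation
    return retail - salary_as_stock

# The example above: 20% better outcome, 50% profit, $60k/year, $2 million valuation.
print(f"{hire_offer(1.2, 0.5, 60_000, 2_000_000):.1%}")  # 6.6%
```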
Incidentally, notice how important it is for early employees to take little
salary. It comes right out of stock that could otherwise be given to them.
Obviously there is a great deal of play in these numbers. I'm not claiming
that stock grants can now be reduced to a formula. Ultimately you always have
to guess. But at least know what you're guessing. If you choose a number based
on your gut feel, or a table of typical grant sizes supplied by a VC firm,
understand what those are estimates of.
And more generally, when you make any decision involving equity, run it
through 1/(1 - n) to see if it makes sense. You should always feel richer
after trading equity. If the trade didn't increase the value of your remaining
shares enough to put you net ahead, you wouldn't have (or shouldn't have) done
it.
February 2007
A few days ago I finally figured out something I've wondered about for 25
years: the relationship between wisdom and intelligence. Anyone can see
they're not the same by the number of people who are smart, but not very wise.
And yet intelligence and wisdom do seem related. How?
What is wisdom? I'd say it's knowing what to do in a lot of situations. I'm
not trying to make a deep point here about the true nature of wisdom, just to
figure out how we use the word. A wise person is someone who usually knows the
right thing to do.
And yet isn't being smart also knowing what to do in certain situations? For
example, knowing what to do when the teacher tells your elementary school
class to add all the numbers from 1 to 100?
Some say wisdom and intelligence apply to different types of problems—wisdom
to human problems and intelligence to abstract ones. But that isn't true. Some
wisdom has nothing to do with people: for example, the wisdom of the engineer
who knows certain structures are less prone to failure than others. And
certainly smart people can find clever solutions to human problems as well as
abstract ones.
Another popular explanation is that wisdom comes from experience while
intelligence is innate. But people are not simply wise in proportion to how
much experience they have. Other things must contribute to wisdom besides
experience, and some may be innate: a reflective disposition, for example.
Neither of the conventional explanations of the difference between wisdom and
intelligence stands up to scrutiny. So what is the difference? If we look at
how people use the words "wise" and "smart," what they seem to mean is
different shapes of performance.
**Curve**
"Wise" and "smart" are both ways of saying someone knows what to do. The
difference is that "wise" means one has a high average outcome across all
situations, and "smart" means one does spectacularly well in a few. That is,
if you had a graph in which the x axis represented situations and the y axis
the outcome, the graph of the wise person would be high overall, and the graph
of the smart person would have high peaks.
The distinction is similar to the rule that one should judge talent at its
best and character at its worst. Except you judge intelligence at its best,
and wisdom by its average. That's how the two are related: they're the two
different senses in which the same curve can be high.
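To make the distinction concrete, here's a toy rendering of the curve with invented numbers: the "wise" series is high on average, the "smart" one has higher peaks:

```python
# Invented outcome scores across the same eight situations.
wise  = [7, 7, 8, 7, 7, 8, 7, 7]    # high overall, no great spikes
smart = [3, 4, 10, 3, 9, 4, 10, 3]  # mediocre overall, high peaks

avg = lambda xs: sum(xs) / len(xs)
print(avg(wise),  max(wise))   # 7.25  8
print(avg(smart), max(smart))  # 5.75 10
```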
So a wise person knows what to do in most situations, while a smart person
knows what to do in situations where few others could. We need to add one more
qualification: we should ignore cases where someone knows what to do because
they have inside information. But aside from that, I don't think we can
get much more specific without starting to be mistaken.
Nor do we need to. Simple as it is, this explanation predicts, or at least
accords with, both of the conventional stories about the distinction between
wisdom and intelligence. Human problems are the most common type, so being
good at solving those is key in achieving a high average outcome. And it seems
natural that a high average outcome depends mostly on experience, but that
dramatic peaks can only be achieved by people with certain rare, innate
qualities; nearly anyone can learn to be a good swimmer, but to be an Olympic
swimmer you need a certain body type.
This explanation also suggests why wisdom is such an elusive concept: there's
no such thing. "Wise" means something—that one is on average good at making
the right choice. But giving the name "wisdom" to the supposed quality that
enables one to do that doesn't mean such a thing exists. To the extent
"wisdom" means anything, it refers to a grab-bag of qualities as various as
self-discipline, experience, and empathy.
Likewise, though "intelligent" means something, we're asking for trouble if we
insist on looking for a single thing called "intelligence." And whatever its
components, they're not all innate. We use the word "intelligent" as an
indication of ability: a smart person can grasp things few others could. It
does seem likely there's some inborn predisposition to intelligence (and
wisdom too), but this predisposition is not itself intelligence.
One reason we tend to think of intelligence as inborn is that people trying to
measure it have concentrated on the aspects of it that are most measurable. A
quality that's inborn will obviously be more convenient to work with than one
that's influenced by experience, and thus might vary in the course of a study.
The problem comes when we drag the word "intelligence" over onto what they're
measuring. If they're measuring something inborn, they can't be measuring
intelligence. Three-year-olds aren't smart. When we describe one as smart,
it's shorthand for "smarter than other three-year-olds."
**Split**
Perhaps it's a technicality to point out that a predisposition to intelligence
is not the same as intelligence. But it's an important technicality, because
it reminds us that we can become smarter, just as we can become wiser.
The alarming thing is that we may have to choose between the two.
If wisdom and intelligence are the average and peaks of the same curve, then
they converge as the number of points on the curve decreases. If there's just
one point, they're identical: the average and maximum are the same. But as the
number of points increases, wisdom and intelligence diverge. And historically
the number of points on the curve seems to have been increasing: our ability
is tested in an ever wider range of situations.
In the time of Confucius and Socrates, people seem to have regarded wisdom,
learning, and intelligence as more closely related than we do. Distinguishing
between "wise" and "smart" is a modern habit. And the reason we do is that
they've been diverging. As knowledge gets more specialized, there are more
points on the curve, and the distinction between the spikes and the average
becomes sharper, like a digital image rendered with more pixels.
One consequence is that some old recipes may have become obsolete. At the very
least we have to go back and figure out if they were really recipes for wisdom
or intelligence. But the really striking change, as intelligence and wisdom
drift apart, is that we may have to decide which we prefer. We may not be able
to optimize for both simultaneously.
Society seems to have voted for intelligence. We no longer admire the sage—not
the way people did two thousand years ago. Now we admire the genius. Because
in fact the distinction we began with has a rather brutal converse: just as
you can be smart without being very wise, you can be wise without being very
smart. That doesn't sound especially admirable. That gets you James Bond, who
knows what to do in a lot of situations, but has to rely on Q for the ones
involving math.
Intelligence and wisdom are obviously not mutually exclusive. In fact, a high
average may help support high peaks. But there are reasons to believe that at
some point you have to choose between them. One is the example of very smart
people, who are so often unwise that in popular culture this now seems to be
regarded as the rule rather than the exception. Perhaps the absent-minded
professor is wise in his way, or wiser than he seems, but he's not wise in the
way Confucius or Socrates wanted people to be.
**New**
For both Confucius and Socrates, wisdom, virtue, and happiness were
necessarily related. The wise man was someone who knew what the right choice
was and always made it; to be the right choice, it had to be morally right; he
was therefore always happy, knowing he'd done the best he could. I can't think
of many ancient philosophers who would have disagreed with that, so far as it
goes.
"The superior man is always happy; the small man sad," said Confucius.
Whereas a few years ago I read an interview with a mathematician who said that
most nights he went to bed discontented, feeling he hadn't made enough
progress. The Chinese and Greek words we translate as "happy" didn't mean
exactly what we do by it, but there's enough overlap that this remark
contradicts them.
Is the mathematician a small man because he's discontented? No; he's just
doing a kind of work that wasn't very common in Confucius's day.
Human knowledge seems to grow fractally. Time after time, something that
seemed a small and uninteresting area—experimental error, even—turns out, when
examined up close, to have as much in it as all knowledge up to that point.
Several of the fractal buds that have exploded since ancient times involve
inventing and discovering new things. Math, for example, used to be something
a handful of people did part-time. Now it's the career of thousands. And in
work that involves making new things, some old rules don't apply.
Recently I've spent some time advising people, and there I find the ancient
rule still works: try to understand the situation as well as you can, give the
best advice you can based on your experience, and then don't worry about it,
knowing you did all you could. But I don't have anything like this serenity
when I'm writing an essay. Then I'm worried. What if I run out of ideas? And
when I'm writing, four nights out of five I go to bed discontented, feeling I
didn't get enough done.
Advising people and writing are fundamentally different types of work. When
people come to you with a problem and you have to figure out the right thing
to do, you don't (usually) have to invent anything. You just weigh the
alternatives and try to judge which is the prudent choice. But _prudence_
can't tell me what sentence to write next. The search space is too big.
Someone like a judge or a military officer can in much of his work be guided
by duty, but duty is no guide in making things. Makers depend on something
more precarious: inspiration. And like most people who lead a precarious
existence, they tend to be worried, not contented. In that respect they're
more like the small man of Confucius's day, always one bad harvest (or ruler)
away from starvation. Except instead of being at the mercy of weather and
officials, they're at the mercy of their own imagination.
**Limits**
To me it was a relief just to realize it might be ok to be discontented. The
idea that a successful person should be happy has thousands of years of
momentum behind it. If I was any good, why didn't I have the easy confidence
winners are supposed to have? But that, I now believe, is like a runner asking
"If I'm such a good athlete, why do I feel so tired?" Good runners still get
tired; they just get tired at higher speeds.
People whose work is to invent or discover things are in the same position as
the runner. There's no way for them to do the best they can, because there's
no limit to what they could do. The closest you can come is to compare
yourself to other people. But the better you do, the less this matters. An
undergrad who gets something published feels like a star. But for someone at
the top of the field, what's the test of doing well? Runners can at least
compare themselves to others doing exactly the same thing; if you win an
Olympic gold medal, you can be fairly content, even if you think you could
have run a bit faster. But what is a novelist to do?
Whereas if you're doing the kind of work in which problems are presented to
you and you have to choose between several alternatives, there's an upper
bound on your performance: choosing the best every time. In ancient societies,
nearly all work seems to have been of this type. The peasant had to decide
whether a garment was worth mending, and the king whether or not to invade his
neighbor, but neither was expected to invent anything. In principle they could
have; the king could have invented firearms, then invaded his neighbor. But in
practice innovations were so rare that they weren't expected of you, any more
than goalkeepers are expected to score goals. In practice, it seemed as if
there was a correct decision in every situation, and if you made it you'd done
your job perfectly, just as a goalkeeper who prevents the other team from
scoring is considered to have played a perfect game.
In this world, wisdom seemed paramount. Even now, most people do work in
which problems are put before them and they have to choose the best
alternative. But as knowledge has grown more specialized, there are more and
more types of work in which people have to make up new things, and in which
performance is therefore unbounded. Intelligence has become increasingly
important relative to wisdom because there is more room for spikes.
**Recipes**
Another sign we may have to choose between intelligence and wisdom is how
different their recipes are. Wisdom seems to come largely from curing childish
qualities, and intelligence largely from cultivating them.
Recipes for wisdom, particularly ancient ones, tend to have a remedial
character. To achieve wisdom one must cut away all the debris that fills one's
head on emergence from childhood, leaving only the important stuff. Both self-
control and experience have this effect: to eliminate the random biases that
come from your own nature and from the circumstances of your upbringing
respectively. That's not all wisdom is, but it's a large part of it. Much of
what's in the sage's head is also in the head of every twelve-year-old. The
difference is that in the head of the twelve-year-old it's mixed together with
a lot of random junk.
The path to intelligence seems to be through working on hard problems. You
develop intelligence as you might develop muscles, through exercise. But there
can't be too much compulsion here. No amount of discipline can replace genuine
curiosity. So cultivating intelligence seems to be a matter of identifying
some bias in one's character—some tendency to be interested in certain types
of things—and nurturing it. Instead of obliterating your idiosyncrasies in an
effort to make yourself a neutral vessel for the truth, you select one and try
to grow it from a seedling into a tree.
The wise are all much alike in their wisdom, but very smart people tend to be
smart in distinctive ways.
Most of our educational traditions aim at wisdom. So perhaps one reason
schools work badly is that they're trying to make intelligence using recipes
for wisdom. Most recipes for wisdom have an element of subjection. At the very
least, you're supposed to do what the teacher says. The more extreme recipes
aim to break down your individuality the way basic training does. But that's
not the route to intelligence. Whereas wisdom comes through humility, it may
actually help, in cultivating intelligence, to have a mistakenly high opinion
of your abilities, because that encourages you to keep working. Ideally till
you realize how mistaken you were.
(The reason it's hard to learn new skills late in life is not just that one's
brain is less malleable. Another probably even worse obstacle is that one has
higher standards.)
I realize we're on dangerous ground here. I'm not proposing the primary goal
of education should be to increase students' "self-esteem." That just breeds
laziness. And in any case, it doesn't really fool the kids, not the smart
ones. They can tell at a young age that a contest where everyone wins is a
fraud.
A teacher has to walk a narrow path: you want to encourage kids to come up
with things on their own, but you can't simply applaud everything they
produce. You have to be a good audience: appreciative, but not too easily
impressed. And that's a lot of work. You have to have a good enough grasp of
kids' capacities at different ages to know when to be surprised.
That's the opposite of traditional recipes for education. Traditionally the
student is the audience, not the teacher; the student's job is not to invent,
but to absorb some prescribed body of material. (The use of the term
"recitation" for sections in some colleges is a fossil of this.) The problem
with these old traditions is that they're too much influenced by recipes for
wisdom.
**Different**
I deliberately gave this essay a provocative title; of course it's worth being
wise. But I think it's important to understand the relationship between
intelligence and wisdom, and particularly what seems to be the growing gap
between them. That way we can avoid applying rules and standards to
intelligence that are really meant for wisdom. These two senses of "knowing
what to do" are more different than most people realize. The path to wisdom is
through discipline, and the path to intelligence through carefully selected
self-indulgence. Wisdom is universal, and intelligence idiosyncratic. And
while wisdom yields calmness, intelligence much of the time leads to
discontentment.
That's particularly worth remembering. A physicist friend recently told me
half his department was on Prozac. Perhaps if we acknowledge that some amount
of frustration is inevitable in certain kinds of work, we can mitigate its
effects. Perhaps we can box it up and put it away some of the time, instead of
letting it flow together with everyday sadness to produce what seems an
alarmingly large pool. At the very least, we can avoid being discontented
about being discontented.
If you feel exhausted, it's not necessarily because there's something wrong
with you. Maybe you're just running fast.
March 2005
_(This essay is derived from a talk at the Harvard Computer Society.)_
You need three things to create a successful startup: to start with good
people, to make something customers actually want, and to spend as little
money as possible. Most startups that fail do it because they fail at one of
these. A startup that does all three will probably succeed.
And that's kind of exciting, when you think about it, because all three are
doable. Hard, but doable. And since a startup that succeeds ordinarily makes
its founders rich, that implies getting rich is doable too. Hard, but doable.
If there is one message I'd like to get across about startups, that's it.
There is no magically difficult step that requires brilliance to solve.
**The Idea**
In particular, you don't need a brilliant idea to start a startup around. The
way a startup makes money is to offer people better technology than they have
now. But what people have now is often so bad that it doesn't take brilliance
to do better.
Google's plan, for example, was simply to create a search site that didn't
suck. They had three new ideas: index more of the Web, use links to rank
search results, and have clean, simple web pages with unintrusive keyword-
based ads. Above all, they were determined to make a site that was good to
use. No doubt there are great technical tricks within Google, but the overall
plan was straightforward. And while they probably have bigger ambitions now,
this alone brings them a billion dollars a year.
There are plenty of other areas that are just as backward as search was before
Google. I can think of several heuristics for generating ideas for startups,
but most reduce to this: look at something people are trying to do, and figure
out how to do it in a way that doesn't suck.
For example, dating sites currently suck far worse than search did before
Google. They all use the same simple-minded model. They seem to have
approached the problem by thinking about how to do database matches instead of
how dating works in the real world. An undergrad could build something better
as a class project. And yet there's a lot of money at stake. Online dating is
a valuable business now, and it might be worth a hundred times as much if it
worked.
An idea for a startup, however, is only a beginning. A lot of would-be startup
founders think the key to the whole process is the initial idea, and from that
point all you have to do is execute. Venture capitalists know better. If you
go to VC firms with a brilliant idea that you'll tell them about if they sign
a nondisclosure agreement, most will tell you to get lost. That shows how much
a mere idea is worth. The market price is less than the inconvenience of
signing an NDA.
Another sign of how little the initial idea is worth is the number of startups
that change their plan en route. Microsoft's original plan was to make money
selling programming languages, of all things. Their current business model
didn't occur to them until IBM dropped it in their lap five years later.
Ideas for startups are worth something, certainly, but the trouble is, they're
not transferrable. They're not something you could hand to someone else to
execute. Their value is mainly as starting points: as questions for the people
who had them to continue thinking about.
What matters is not ideas, but the people who have them. Good people can fix
bad ideas, but good ideas can't save bad people.
**People**
What do I mean by good people? One of the best tricks I learned during our
startup was a rule for deciding who to hire. Could you describe the person as
an animal? It might be hard to translate that into another language, but I
think everyone in the US knows what it means. It means someone who takes their
work a little too seriously; someone who does what they do so well that they
pass right through professional and cross over into obsessive.
What it means specifically depends on the job: a salesperson who just won't
take no for an answer; a hacker who will stay up till 4:00 AM rather than go
to bed leaving code with a bug in it; a PR person who will cold-call _New York
Times_ reporters on their cell phones; a graphic designer who feels physical
pain when something is two millimeters out of place.
Almost everyone who worked for us was an animal at what they did. The woman in
charge of sales was so tenacious that I used to feel sorry for potential
customers on the phone with her. You could sense them squirming on the hook,
but you knew there would be no rest for them till they'd signed up.
If you think about people you know, you'll find the animal test is easy to
apply. Call the person's image to mind and imagine the sentence "so-and-so is
an animal." If you laugh, they're not. You don't need or perhaps even want
this quality in big companies, but you need it in a startup.
For programmers we had three additional tests. Was the person genuinely smart?
If so, could they actually get things done? And finally, since a few good
hackers have unbearable personalities, could we stand to have them around?
That last test filters out surprisingly few people. We could bear any amount
of nerdiness if someone was truly smart. What we couldn't stand were people
with a lot of attitude. But most of those weren't truly smart, so our third
test was largely a restatement of the first.
When nerds are unbearable it's usually because they're trying too hard to seem
smart. But the smarter they are, the less pressure they feel to act smart. So
as a rule you can recognize genuinely smart people by their ability to say
things like "I don't know," "Maybe you're right," and "I don't understand x
well enough."
This technique doesn't always work, because people can be influenced by their
environment. In the MIT CS department, there seems to be a tradition of acting
like a brusque know-it-all. I'm told it derives ultimately from Marvin Minsky,
in the same way the classic airline pilot manner is said to derive from Chuck
Yeager. Even genuinely smart people start to act this way there, so you have
to make allowances.
It helped us to have Robert Morris, who is one of the readiest to say "I don't
know" of anyone I've met. (At least, he was before he became a professor at
MIT.) No one dared put on attitude around Robert, because he was obviously
smarter than they were and yet had zero attitude himself.
Like most startups, ours began with a group of friends, and it was through
personal contacts that we got most of the people we hired. This is a crucial
difference between startups and big companies. Being friends with someone for
even a couple days will tell you more than companies could ever learn in
interviews.
It's no coincidence that startups start around universities, because that's
where smart people meet. It's not what people learn in classes at MIT and
Stanford that has made technology companies spring up around them. They could
sing campfire songs in the classes so long as admissions worked the same.
If you start a startup, there's a good chance it will be with people you know
from college or grad school. So in theory you ought to try to make friends
with as many smart people as you can in school, right? Well, no. Don't make a
conscious effort to schmooze; that doesn't work well with hackers.
What you should do in college is work on your own projects. Hackers should do
this even if they don't plan to start startups, because it's the only real way
to learn how to program. In some cases you may collaborate with other
students, and this is the best way to get to know good hackers. The project
may even grow into a startup. But once again, I wouldn't aim too directly at
either target. Don't force things; just work on stuff you like with people you
like.
Ideally you want between two and four founders. It would be hard to start with
just one. One person would find the moral weight of starting a company hard to
bear. Even Bill Gates, who seems to be able to bear a good deal of moral
weight, had to have a co-founder. But you don't want so many founders that the
company starts to look like a group photo. Partly because you don't need a lot
of people at first, but mainly because the more founders you have, the worse
disagreements you'll have. When there are just two or three founders, you know
you have to resolve disputes immediately or perish. If there are seven or
eight, disagreements can linger and harden into factions. You don't want mere
voting; you need unanimity.
In a technology startup, which most startups are, the founders should include
technical people. During the Internet Bubble there were a number of startups
founded by business people who then went looking for hackers to create their
product for them. This doesn't work well. Business people are bad at deciding
what to do with technology, because they don't know what the options are, or
which kinds of problems are hard and which are easy. And when business people
try to hire hackers, they can't tell which ones are good. Even other hackers
have a hard time doing that. For business people it's roulette.
Do the founders of a startup have to include business people? That depends. We
thought so when we started ours, and we asked several people who were said to
know about this mysterious thing called "business" if they would be the
president. But they all said no, so I had to do it myself. And what I
discovered was that business was no great mystery. It's not something like
physics or medicine that requires extensive study. You just try to get people
to pay you for stuff.
I think the reason I made such a mystery of business was that I was disgusted
by the idea of doing it. I wanted to work in the pure, intellectual world of
software, not deal with customers' mundane problems. People who don't want to
get dragged into some kind of work often develop a protective incompetence at
it. Paul Erdos was particularly good at this. By seeming unable even to cut a
grapefruit in half (let alone go to the store and buy one), he forced other
people to do such things for him, leaving all his time free for math. Erdos
was an extreme case, but most husbands use the same trick to some degree.
Once I was forced to discard my protective incompetence, I found that business
was neither so hard nor so boring as I feared. There are esoteric areas of
business that are quite hard, like tax law or the pricing of derivatives, but
you don't need to know about those in a startup. All you need to know about
business to run a startup are commonsense things people knew before there were
business schools, or even universities.
If you work your way down the Forbes 400 making an x next to the name of each
person with an MBA, you'll learn something important about business school.
After Warren Buffett, you don't hit another MBA till number 22, Phil Knight,
the CEO of Nike. There are only 5 MBAs in the top 50. What you notice in the
Forbes 400 are a lot of people with technical backgrounds. Bill Gates, Steve
Jobs, Larry Ellison, Michael Dell, Jeff Bezos, Gordon Moore. The rulers of the
technology business tend to come from technology, not business. So if you want
to invest two years in something that will help you succeed in business, the
evidence suggests you'd do better to learn how to hack than get an MBA.
There is one reason you might want to include business people in a startup,
though: because you have to have at least one person willing and able to focus
on what customers want. Some believe only business people can do this-- that
hackers can implement software, but not design it. That's nonsense. There's
nothing about knowing how to program that prevents hackers from understanding
users, or about not knowing how to program that magically enables business
people to understand them.
If you can't understand users, however, you should either learn how or find a
co-founder who can. That is the single most important issue for technology
startups, and the rock that sinks more of them than anything else.
**What Customers Want**
It's not just startups that have to worry about this. I think most businesses
that fail do it because they don't give customers what they want. Look at
restaurants. A large percentage fail, about a quarter in the first year. But
can you think of one restaurant that had really good food and went out of
business?
Restaurants with great food seem to prosper no matter what. A restaurant with
great food can be expensive, crowded, noisy, dingy, out of the way, and even
have bad service, and people will keep coming. It's true that a restaurant
with mediocre food can sometimes attract customers through gimmicks. But that
approach is very risky. It's more straightforward just to make the food good.
It's the same with technology. You hear all kinds of reasons why startups
fail. But can you think of one that had a massively popular product and still
failed?
In nearly every failed startup, the real problem was that customers didn't
want the product. For most, the cause of death is listed as "ran out of
funding," but that's only the immediate cause. Why couldn't they get more
funding? Probably because the product was a dog, or never seemed likely to be
done, or both.
When I was trying to think of the things every startup needed to do, I almost
included a fourth: get a version 1 out as soon as you can. But I decided not
to, because that's implicit in making something customers want. The only way
to make something customers want is to get a prototype in front of them and
refine it based on their reactions.
The other approach is what I call the "Hail Mary" strategy. You make elaborate
plans for a product, hire a team of engineers to develop it (people who do
this tend to use the term "engineer" for hackers), and then find after a year
that you've spent two million dollars to develop something no one wants. This
was not uncommon during the Bubble, especially in companies run by business
types, who thought of software development as something terrifying that
therefore had to be carefully planned.
We never even considered that approach. As a Lisp hacker, I come from the
tradition of rapid prototyping. I would not claim (at least, not here) that
this is the right way to write every program, but it's certainly the right way
to write software for a startup. In a startup, your initial plans are almost
certain to be wrong in some way, and your first priority should be to figure
out where. The only way to do that is to try implementing them.
Like most startups, we changed our plan on the fly. At first we expected our
customers to be Web consultants. But it turned out they didn't like us,
because our software was easy to use and we hosted the site. It would be too
easy for clients to fire them. We also thought we'd be able to sign up a lot
of catalog companies, because selling online was a natural extension of their
existing business. But in 1996 that was a hard sell. The middle managers we
talked to at catalog companies saw the Web not as an opportunity, but as
something that meant more work for them.
We did get a few of the more adventurous catalog companies. Among them was
Frederick's of Hollywood, which gave us valuable experience dealing with heavy
loads on our servers. But most of our users were small, individual merchants
who saw the Web as an opportunity to build a business. Some had retail stores,
but many only existed online. And so we changed direction to focus on these
users. Instead of concentrating on the features Web consultants and catalog
companies would want, we worked to make the software easy to use.
I learned something valuable from that. It's worth trying very, very hard to
make technology easy to use. Hackers are so used to computers that they have
no idea how horrifying software seems to normal people. Stephen Hawking's
editor told him that every equation he included in his book would cut sales in
half. When you work on making technology easier to use, you're riding that
curve up instead of down. A 10% improvement in ease of use doesn't just
increase your sales 10%. It's more likely to double your sales.
How do you figure out what customers want? Watch them. One of the best places
to do this was at trade shows. Trade shows didn't pay as a way of getting new
customers, but they were worth it as market research. We didn't just give
canned presentations at trade shows. We used to show people how to build real,
working stores. Which meant we got to watch as they used our software, and
talk to them about what they needed.
No matter what kind of startup you start, it will probably be a stretch for
you, the founders, to understand what users want. The only kind of software
you can build without studying users is the sort for which you are the typical
user. But this is just the kind that tends to be open source: operating
systems, programming languages, editors, and so on. So if you're developing
technology for money, you're probably not going to be developing it for people
like you. Indeed, you can use this as a way to generate ideas for startups:
what do people who are not like you want from technology?
When most people think of startups, they think of companies like Apple or
Google. Everyone knows these, because they're big consumer brands. But for
every startup like that, there are twenty more that operate in niche markets
or live quietly down in the infrastructure. So if you start a successful
startup, odds are you'll start one of those.
Another way to say that is, if you try to start the kind of startup that has
to be a big consumer brand, the odds against succeeding are steeper. The best
odds are in niche markets. Since startups make money by offering people
something better than they had before, the best opportunities are where things
suck most. And it would be hard to find a place where things suck more than in
corporate IT departments. You would not believe the amount of money companies
spend on software, and the crap they get in return. This imbalance equals
opportunity.
If you want ideas for startups, one of the most valuable things you could do
is find a middle-sized non-technology company and spend a couple weeks just
watching what they do with computers. Most good hackers have no more idea of
the horrors perpetrated in these places than rich Americans do of what goes on
in Brazilian slums.
Start by writing software for smaller companies, because it's easier to sell
to them. It's worth so much to sell stuff to big companies that the people
selling them the crap they currently use spend a lot of time and money to do
it. And while you can outhack Oracle with one frontal lobe tied behind your
back, you can't outsell an Oracle salesman. So if you want to win through
better technology, aim at smaller customers.
They're the more strategically valuable part of the market anyway. In
technology, the low end always eats the high end. It's easier to make an
inexpensive product more powerful than to make a powerful product cheaper. So
the products that start as cheap, simple options tend to gradually grow more
powerful till, like water rising in a room, they squash the "high-end"
products against the ceiling. Sun did this to mainframes, and Intel is doing
it to Sun. Microsoft Word did it to desktop publishing software like Interleaf
and Framemaker. Mass-market digital cameras are doing it to the expensive
models made for professionals. Avid did it to the manufacturers of specialized
video editing systems, and now Apple is doing it to Avid. _Henry Ford_ did it
to the car makers that preceded him. If you build the simple, inexpensive
option, you'll not only find it easier to sell at first, but you'll also be in
the best position to conquer the rest of the market.
It's very dangerous to let anyone fly under you. If you have the cheapest,
easiest product, you'll own the low end. And if you don't, you're in the
crosshairs of whoever does.
**Raising Money**
To make all this happen, you're going to need money. Some startups have been
self-funding-- Microsoft for example-- but most aren't. I think it's wise to
take money from investors. To be self-funding, you have to start as a
consulting company, and it's hard to switch from that to a product company.
Financially, a startup is like a pass/fail course. The way to get rich from a
startup is to maximize the company's chances of succeeding, not to maximize
the amount of stock you retain. So if you can trade stock for something that
improves your odds, it's probably a smart move.
To most hackers, getting investors seems like a terrifying and mysterious
process. Actually it's merely tedious. I'll try to give an outline of how it
works.
The first thing you'll need is a few tens of thousands of dollars to pay your
expenses while you develop a prototype. This is called seed capital. Because
so little money is involved, raising seed capital is comparatively easy-- at
least in the sense of getting a quick yes or no.
Usually you get seed money from individual rich people called "angels." Often
they're people who themselves got rich from technology. At the seed stage,
investors don't expect you to have an elaborate business plan. Most know that
they're supposed to decide quickly. It's not unusual to get a check within a
week based on a half-page agreement.
We started Viaweb with $10,000 of seed money from our friend Julian. But he
gave us a lot more than money. He's a former CEO and also a corporate lawyer,
so he gave us a lot of valuable advice about business, and also did all the
legal work of getting us set up as a company. Plus he introduced us to one of
the two angel investors who supplied our next round of funding.
Some angels, especially those with technology backgrounds, may be satisfied
with a demo and a verbal description of what you plan to do. But many will
want a copy of your business plan, if only to remind themselves what they
invested in.
Our angels asked for one, and looking back, I'm amazed how much worry it
caused me. "Business plan" has that word "business" in it, so I figured it had
to be something I'd have to read a book about business plans to write. Well,
it doesn't. At this stage, all most investors expect is a brief description of
what you plan to do and how you're going to make money from it, and the
resumes of the founders. If you just sit down and write out what you've been
saying to one another, that should be fine. It shouldn't take more than a
couple hours, and you'll probably find that writing it all down gives you more
ideas about what to do.
For the angel to have someone to make the check out to, you're going to have
to have some kind of company. Merely incorporating yourselves isn't hard. The
problem is, for the company to exist, you have to decide who the founders are,
and how much stock they each have. If there are two founders with the same
qualifications who are both equally committed to the business, that's easy.
But if you have a number of people who are expected to contribute in varying
degrees, arranging the proportions of stock can be hard. And once you've done
it, it tends to be set in stone.
I have no tricks for dealing with this problem. All I can say is, try hard to
do it right. I do have a rule of thumb for recognizing when you have, though.
When everyone feels they're getting a slightly bad deal, that they're doing
more than they should for the amount of stock they have, the stock is
optimally apportioned.
There is more to setting up a company than incorporating it, of course:
insurance, business license, unemployment compensation, various things with
the IRS. I'm not even sure what the list is, because we, ah, skipped all that.
When we got real funding near the end of 1996, we hired a great CFO, who fixed
everything retroactively. It turns out that no one comes and arrests you if
you don't do everything you're supposed to when starting a company. And a good
thing too, or a lot of startups would never get started.
It can be dangerous to delay turning yourself into a company, because one or
more of the founders might decide to split off and start another company doing
the same thing. This does happen. So when you set up the company, as well as
apportioning the stock, you should get all the founders to sign something
agreeing that everyone's ideas belong to this company, and that this company
is going to be everyone's only job.
[If this were a movie, ominous music would begin here.]
While you're at it, you should ask what else they've signed. One of the worst
things that can happen to a startup is to run into intellectual property
problems. We did, and it came closer to killing us than any competitor ever
did.
As we were in the middle of getting bought, we discovered that one of our
people had, early on, been bound by an agreement that said all his ideas
belonged to the giant company that was paying for him to go to grad school. In
theory, that could have meant someone else owned big chunks of our software.
So the acquisition came to a screeching halt while we tried to sort this out.
The problem was, since we'd been about to be acquired, we'd allowed ourselves
to run low on cash. Now we needed to raise more to keep going. But it's hard
to raise money with an IP cloud over your head, because investors can't judge
how serious it is.
Our existing investors, knowing that we needed money and had nowhere else to
get it, at this point attempted certain gambits which I will not describe in
detail, except to remind readers that the word "angel" is a metaphor. The
founders thereupon proposed to walk away from the company, after giving the
investors a brief tutorial on how to administer the servers themselves. And
while this was happening, the acquirers used the delay as an excuse to welch
on the deal.
Miraculously it all turned out ok. The investors backed down; we did another
round of funding at a reasonable valuation; the giant company finally gave us
a piece of paper saying they didn't own our software; and six months later we
were bought by Yahoo for much more than the earlier acquirer had agreed to
pay. So we were happy in the end, though the experience probably took several
years off my life.
Don't do what we did. Before you consummate a startup, ask everyone about
their previous IP history.
Once you've got a company set up, it may seem presumptuous to go knocking on
the doors of rich people and asking them to invest tens of thousands of
dollars in something that is really just a bunch of guys with some ideas. But
when you look at it from the rich people's point of view, the picture is more
encouraging. Most rich people are looking for good investments. If you really
think you have a chance of succeeding, you're doing them a favor by letting
them invest. Mixed with any annoyance they might feel about being approached
will be the thought: are these guys the next Google?
Usually angels are financially equivalent to founders. They get the same kind
of stock and get diluted the same amount in future rounds. How much stock
should they get? That depends on how ambitious you feel. When you offer x
percent of your company for y dollars, you're implicitly claiming a certain
value for the whole company. Venture investments are usually described in
terms of that number. If you give an investor new shares equal to 5% of those
already outstanding in return for $100,000, then you've done the deal at a
pre-money valuation of $2 million.
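Here's that arithmetic in a few lines of Python. The share count is a made-up
example, but the deal is the one just described:

```python
existing_shares = 1_000_000          # shares outstanding before the round (hypothetical)
investment = 100_000                 # dollars the angel puts in
new_shares = 0.05 * existing_shares  # new shares equal to 5% of those outstanding

price_per_share = investment / new_shares                     # $2.00 per share
pre_money = price_per_share * existing_shares                 # $2,000,000
post_money = pre_money + investment                           # $2,100,000
investor_stake = new_shares / (existing_shares + new_shares)  # about 4.8%

print(pre_money, post_money, investor_stake)
```

Note that the investor's stake comes out slightly under 5%, because the new
shares dilute everyone, including the investor.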
How do you decide what the value of the company should be? There is no
rational way. At this stage the company is just a bet. I didn't realize that
when we were raising money. Julian thought we ought to value the company at
several million dollars. I thought it was preposterous to claim that a couple
thousand lines of code, which was all we had at the time, were worth several
million dollars. Eventually we settled on one million, because Julian said no
one would invest in a company with a valuation any lower.
What I didn't grasp at the time was that the valuation wasn't just the value
of the code we'd written so far. It was also the value of our ideas, which
turned out to be right, and of all the future work we'd do, which turned out
to be a lot.
The next round of funding is the one in which you might deal with actual
venture capital firms. But don't wait till you've burned through your last
round of funding to start approaching them. VCs are slow to make up their
minds. They can take months. You don't want to be running out of money while
you're trying to negotiate with them.
Getting money from an actual VC firm is a bigger deal than getting money from
angels. The amounts of money involved are larger, millions usually. So the
deals take longer, dilute you more, and impose more onerous conditions.
Sometimes the VCs want to install a new CEO of their own choosing. Usually the
claim is that you need someone mature and experienced, with a business
background. Maybe in some cases this is true. And yet Bill Gates was young and
inexperienced and had no business background, and he seems to have done ok.
Steve Jobs got booted out of his own company by someone mature and
experienced, with a business background, who then proceeded to ruin the
company. So I think people who are mature and experienced, with a business
background, may be overrated. We used to call these guys "newscasters,"
because they had neat hair and spoke in deep, confident voices, and generally
didn't know much more than they read on the teleprompter.
We talked to a number of VCs, but eventually we ended up financing our startup
entirely with angel money. The main reason was that we feared a brand-name VC
firm would stick us with a newscaster as part of the deal. That might have
been ok if he was content to limit himself to talking to the press, but what
if he wanted to have a say in running the company? That would have led to
disaster, because our software was so complex. We were a company whose whole
m.o. was to win through better technology. The strategic decisions were mostly
decisions about technology, and we didn't need any help with those.
This was also one reason we didn't go public. Back in 1998 our CFO tried to
talk me into it. In those days you could go public as a dogfood portal, so as
a company with a real product and real revenues, we might have done well. But
I feared it would have meant taking on a newscaster-- someone who, as they
say, "can talk Wall Street's language."
I'm happy to see Google is bucking that trend. They didn't talk Wall Street's
language when they did their IPO, and Wall Street didn't buy. And now Wall
Street is collectively kicking itself. They'll pay attention next time. Wall
Street learns new languages fast when money is involved.
You have more leverage negotiating with VCs than you realize. The reason is
other VCs. I know a number of VCs now, and when you talk to them you realize
that it's a seller's market. Even now there is too much money chasing too few
good deals.
VCs form a pyramid. At the top are famous ones like Sequoia and Kleiner
Perkins, but beneath those are a huge number you've never heard of. What they
all have in common is that a dollar from them is worth one dollar. Most VCs
will tell you that they don't just provide money, but connections and advice.
If you're talking to Vinod Khosla or John Doerr or Mike Moritz, this is true.
But such advice and connections can come very expensive. And as you go down
the food chain the VCs get rapidly dumber. A few steps down from the top
you're basically talking to bankers who've picked up a few new vocabulary
words from reading _Wired_. (Does your product use _XML?_) So I'd advise you
to be skeptical about claims of experience and connections. Basically, a VC is
a source of money. I'd be inclined to go with whoever offered the most money
the soonest with the least strings attached.
You may wonder how much to tell VCs. And you should, because some of them may
one day be funding your competitors. I think the best plan is not to be
overtly secretive, but not to tell them everything either. After all, as most
VCs say, they're more interested in the people than the ideas. The main reason
they want to talk about your idea is to judge you, not the idea. So as long as
you seem like you know what you're doing, you can probably keep a few things
back from them.
Talk to as many VCs as you can, even if you don't want their money, because a)
they may be on the board of someone who will buy you, and b) if you seem
impressive, they'll be discouraged from investing in your competitors. The
most efficient way to reach VCs, especially if you only want them to know
about you and don't want their money, is at the conferences that are
occasionally organized for startups to present to them.
**Not Spending It**
When and if you get an infusion of real money from investors, what should you
do with it? Not spend it, that's what. In nearly every startup that fails, the
proximate cause is running out of money. Usually there is something deeper
wrong. But even a proximate cause of death is worth trying hard to avoid.
During the Bubble many startups tried to "get big fast." Ideally this meant
getting a lot of customers fast. But it was easy for the meaning to slide over
into hiring a lot of people fast.
Of the two versions, the one where you get a lot of customers fast is of
course preferable. But even that may be overrated. The idea is to get there
first and get all the users, leaving none for competitors. But I think in most
businesses the advantages of being first to market are not so overwhelmingly
great. Google is again a case in point. When they appeared it seemed as if
search was a mature market, dominated by big players who'd spent millions to
build their brands: Yahoo, Lycos, Excite, Infoseek, Altavista, Inktomi. Surely
1998 was a little late to arrive at the party.
But as the founders of Google knew, brand is worth next to nothing in the
search business. You can come along at any point and make something better,
and users will gradually seep over to you. As if to emphasize the point,
Google never did any advertising. They're like dealers; they sell the stuff,
but they know better than to use it themselves.
The competitors Google buried would have done better to spend those millions
improving their software. Future startups should learn from that mistake.
Unless you're in a market where products are as undifferentiated as cigarettes
or vodka or laundry detergent, spending a lot on brand advertising is a sign
of breakage. And few if any Web businesses are so undifferentiated. The dating
sites are running big ad campaigns right now, which is all the more evidence
they're ripe for the picking. (Fee, fie, fo, fum, I smell a company run by
marketing guys.)
We were compelled by circumstances to grow slowly, and in retrospect it was a
good thing. The founders all learned to do every job in the company. As well
as writing software, I had to do sales and customer support. At sales I was
not very good. I was persistent, but I didn't have the smoothness of a good
salesman. My message to potential customers was: you'd be stupid not to sell
online, and if you sell online you'd be stupid to use anyone else's software.
Both statements were true, but that's not the way to convince people.
I was great at customer support though. Imagine talking to a customer support
person who not only knew everything about the product, but would apologize
abjectly if there was a bug, and then fix it immediately, while you were on
the phone with them. Customers loved us. And we loved them, because when
you're growing slow by word of mouth, your first batch of users are the ones
who were smart enough to find you by themselves. There is nothing more
valuable, in the early stages of a startup, than smart users. If you listen to
them, they'll tell you exactly how to make a winning product. And not only
will they give you this advice for free, they'll pay you.
We officially launched in early 1996. By the end of that year we had about 70
users. Since this was the era of "get big fast," I worried about how small and
obscure we were. But in fact we were doing exactly the right thing. Once you
get big (in users or employees) it gets hard to change your product. That year
was effectively a laboratory for improving our software. By the end of it, we
were so far ahead of our competitors that they never had a hope of catching
up. And since all the hackers had spent many hours talking to users, we
understood online commerce way better than anyone else.
That's the key to success as a startup. There is nothing more important than
understanding your business. You might think that anyone in a business must,
ex officio, understand it. Far from it. Google's secret weapon was simply that
they understood search. I was working for Yahoo when Google appeared, and
Yahoo didn't understand search. I know because I once tried to convince the
powers that be that we had to make search better, and I got in reply what was
then the party line about it: that Yahoo was no longer a mere "search engine."
Search was now only a small percentage of our page views, less than one
month's growth, and now that we were established as a "media company," or
"portal," or whatever we were, search could safely be allowed to wither and
drop off, like an umbilical cord.
Well, a small fraction of page views they may be, but they are an important
fraction, because they are the page views that Web sessions start with. I
think Yahoo gets that now.
Google understands a few other things most Web companies still don't. The most
important is that you should put users before advertisers, even though the
advertisers are paying and users aren't. One of my favorite bumper stickers
reads "if the people lead, the leaders will follow." Paraphrased for the Web,
this becomes "get all the users, and the advertisers will follow." More
generally, design your product to please users first, and then think about how
to make money from it. If you don't put users first, you leave a gap for
competitors who do.
To make something users love, you have to understand them. And the bigger you
are, the harder that is. So I say "get big slow." The slower you burn through
your funding, the more time you have to learn.
The other reason to spend money slowly is to encourage a culture of cheapness.
That's something Yahoo did understand. David Filo's title was "Chief Yahoo,"
but he was proud that his unofficial title was "Cheap Yahoo." Soon after we
arrived at Yahoo, we got an email from Filo, who had been crawling around our
directory hierarchy, asking if it was really necessary to store so much of our
data on expensive RAID drives. I was impressed by that. Yahoo's market cap
then was already in the billions, and they were still worrying about wasting a
few gigs of disk space.
When you get a couple million dollars from a VC firm, you tend to feel rich.
It's important to realize you're not. A rich company is one with large
revenues. This money isn't revenue. It's money investors have given you in the
hope you'll be able to generate revenues. So despite those millions in the
bank, you're still poor.
For most startups the model should be grad student, not law firm. Aim for cool
and cheap, not expensive and impressive. For us the test of whether a startup
understood this was whether they had Aeron chairs. The Aeron came out during
the Bubble and was very popular with startups. Especially the type, all too
common then, that was like a bunch of kids playing house with money supplied
by VCs. We had office chairs so cheap that the arms all fell off. This was
slightly embarrassing at the time, but in retrospect the grad-studenty
atmosphere of our office was another of those things we did right without
knowing it.
Our offices were in a wooden triple-decker in Harvard Square. It had been an
apartment until about the 1970s, and there was still a claw-footed bathtub in
the bathroom. It must once have been inhabited by someone fairly eccentric,
because a lot of the chinks in the walls were stuffed with aluminum foil, as
if to protect against cosmic rays. When eminent visitors came to see us, we
were a bit sheepish about the low production values. But in fact that place
was the perfect space for a startup. We felt like our role was to be impudent
underdogs instead of corporate stuffed shirts, and that is exactly the spirit
you want.
An apartment is also the right kind of place for developing software. Cube
farms suck for that, as you've probably discovered if you've tried it. Ever
notice how much easier it is to hack at home than at work? So why not make
work more like home?
When you're looking for space for a startup, don't feel that it has to look
professional. Professional means doing good work, not elevators and glass
walls. I'd advise most startups to avoid corporate space at first and just
rent an apartment. You want to live at the office in a startup, so why not
have a place designed to be lived in as your office?
Besides being cheaper and better to work in, apartments tend to be in better
locations than office buildings. And for a startup location is very important.
The key to productivity is for people to come back to work after dinner. Those
hours after the phone stops ringing are by far the best for getting work done.
Great things happen when a group of employees go out to dinner together, talk
over ideas, and then come back to their offices to implement them. So you want
to be in a place where there are a lot of restaurants around, not some dreary
office park that's a wasteland after 6:00 PM. Once a company shifts over into
the model where everyone drives home to the suburbs for dinner, however late,
you've lost something extraordinarily valuable. God help you if you actually
start in that mode.
If I were going to start a startup today, there are only three places I'd
consider doing it: on the Red Line near Central, Harvard, or Davis Squares
(Kendall is too sterile); in Palo Alto on University or California Aves; and
in Berkeley immediately north or south of campus. These are the only places I
know that have the right kind of vibe.
The most important way to not spend money is by not hiring people. I may be an
extremist, but I think hiring people is the worst thing a company can do. To
start with, people are a recurring expense, which is the worst kind. They also
tend to cause you to grow out of your space, and perhaps even move to the sort
of uncool office building that will make your software worse. But worst of
all, they slow you down: instead of sticking your head in someone's office and
checking out an idea with them, eight people have to have a meeting about it.
So the fewer people you can hire, the better.
During the Bubble a lot of startups had the opposite policy. They wanted to
get "staffed up" as soon as possible, as if you couldn't get anything done
unless there was someone with the corresponding job title. That's big company
thinking. Don't hire people to fill the gaps in some a priori org chart. The
only reason to hire someone is to do something you'd like to do but can't.
If hiring unnecessary people is expensive and slows you down, why do nearly
all companies do it? I think the main reason is that people like the idea of
having a lot of people working for them. This weakness often extends right up
to the CEO. If you ever end up running a company, you'll find the most common
question people ask is how many employees you have. This is their way of
weighing you. It's not just random people who ask this; even reporters do. And
they're going to be a lot more impressed if the answer is a thousand than if
it's ten.
This is ridiculous, really. If two companies have the same revenues, it's the
one with fewer employees that's more impressive. When people used to ask me
how many people our startup had, and I answered "twenty," I could see them
thinking that we didn't count for much. I used to want to add "but our main
competitor, whose ass we regularly kick, has a hundred and forty, so can we
have credit for the larger of the two numbers?"
As with office space, the number of your employees is a choice between seeming
impressive, and being impressive. Any of you who were nerds in high school
know about this choice. Keep doing it when you start a company.
**Should You?**
But should you start a company? Are you the right sort of person to do it? If
you are, is it worth it?
More people are the right sort of person to start a startup than realize it.
That's the main reason I wrote this. There could be ten times more startups
than there are, and that would probably be a good thing.
I was, I now realize, exactly the right sort of person to start a startup. But
the idea terrified me at first. I was forced into it because I was a Lisp
hacker. The company I'd been consulting for seemed to be running into trouble,
and there were not a lot of other companies using Lisp. Since I couldn't bear
the thought of programming in another language (this was 1995, remember, when
"another language" meant C++) the only option seemed to be to start a new
company using Lisp.
I realize this sounds far-fetched, but if you're a Lisp hacker you'll know
what I mean. And if the idea of starting a startup frightened me so much that
I only did it out of necessity, there must be a lot of people who would be
good at it but who are too intimidated to try.
So who should start a startup? Someone who is a good hacker, between about 23
and 38, and who wants to solve the money problem in one shot instead of
getting paid gradually over a conventional working life.
I can't say precisely what a good hacker is. At a first-rate university this
might include the top half of computer science majors. Though of course you
don't have to be a CS major to be a hacker; I was a philosophy major in
college.
It's hard to tell whether you're a good hacker, especially when you're young.
Fortunately the process of starting startups tends to select them
automatically. What drives people to start startups is (or should be) looking
at existing technology and thinking, don't these guys realize they should be
doing x, y, and z? And that's also a sign that one is a good hacker.
I put the lower bound at 23 not because there's something that doesn't happen
to your brain till then, but because you need to see what it's like in an
existing business before you try running your own. The business doesn't have
to be a startup. I spent a year working for a software company to pay off my
college loans. It was the worst year of my adult life, but I learned, without
realizing it at the time, a lot of valuable lessons about the software
business. In this case they were mostly negative lessons: don't have a lot of
meetings; don't have chunks of code that multiple people own; don't have a
sales guy running the company; don't make a high-end product; don't let your
code get too big; don't leave finding bugs to QA people; don't go too long
between releases; don't isolate developers from users; don't move from
Cambridge to Route 128; and so on. But negative lessons are just as
valuable as positive ones. Perhaps even more valuable: it's hard to repeat a
brilliant performance, but it's straightforward to avoid errors.
The other reason it's hard to start a company before 23 is that people won't
take you seriously. VCs won't trust you, and will try to reduce you to a
mascot as a condition of funding. Customers will worry you're going to flake
out and leave them stranded. Even you yourself, unless you're very unusual,
will feel your age to some degree; you'll find it awkward to be the boss of
someone much older than you, and if you're 21, hiring only people younger
rather limits your options.
Some people could probably start a company at 18 if they wanted to. Bill Gates
was 19 when he and Paul Allen started Microsoft. (Paul Allen was 22, though,
and that probably made a difference.) So if you're thinking, I don't care what
he says, I'm going to start a company now, you may be the sort of person who
could get away with it.
The other cutoff, 38, has a lot more play in it. One reason I put it there is
that I don't think many people have the physical stamina much past that age. I
used to work till 2:00 or 3:00 AM every night, seven days a week. I don't know
if I could do that now.
Also, startups are a big risk financially. If you try something that blows up
and leaves you broke at 26, big deal; a lot of 26 year olds are broke. By 38
you can't take so many risks-- especially if you have kids.
My final test may be the most restrictive. Do you actually want to start a
startup? What it amounts to, economically, is compressing your working life
into the smallest possible space. Instead of working at an ordinary rate for
40 years, you work like hell for four. And maybe end up with nothing-- though
in that case it probably won't take four years.
During this time you'll do little but work, because when you're not working,
your competitors will be. My only leisure activities were running, which I
needed to do to keep working anyway, and about fifteen minutes of reading a
night. I had a girlfriend for a total of two months during that three year
period. Every couple weeks I would take a few hours off to visit a used
bookshop or go to a friend's house for dinner. I went to visit my family
twice. Otherwise I just worked.
Working was often fun, because the people I worked with were some of my best
friends. Sometimes it was even technically interesting. But only about 10% of
the time. The best I can say for the other 90% is that some of it is funnier
in hindsight than it seemed then. Like the time the power went off in
Cambridge for about six hours, and we made the mistake of trying to start a
gasoline powered generator inside our offices. I won't try that again.
I don't think the amount of bullshit you have to deal with in a startup is
more than you'd endure in an ordinary working life. It's probably less, in
fact; it just seems like a lot because it's compressed into a short period. So
mainly what a startup buys you is time. That's the way to think about it if
you're trying to decide whether to start one. If you're the sort of person who
would like to solve the money problem once and for all instead of working for
a salary for 40 years, then a startup makes sense.
For a lot of people the conflict is between startups and graduate school. Grad
students are just the age, and just the sort of people, to start software
startups. You may worry that if you do you'll blow your chances of an academic
career. But it's possible to be part of a startup and stay in grad school,
especially at first. Two of our three original hackers were in grad school the
whole time, and both got their degrees. There are few sources of energy so
powerful as a procrastinating grad student.
If you do have to leave grad school, in the worst case it won't be for too
long. If a startup fails, it will probably fail quickly enough that you can
return to academic life. And if it succeeds, you may find you no longer have
such a burning desire to be an assistant professor.
If you want to do it, do it. Starting a startup is not the great mystery it
seems from outside. It's not something you have to know about "business" to
do. Build something users love, and spend less than you make. How hard is
that?
September 2004
_(This essay is derived from an invited talk at ICFP 2004.)_
I had a front row seat for the Internet Bubble, because I worked at Yahoo
during 1998 and 1999. One day, when the stock was trading around $200, I sat
down and calculated what I thought the price should be. The answer I got was
$12. I went to the next cubicle and told my friend Trevor. "Twelve!" he said.
He tried to sound indignant, but he didn't quite manage it. He knew as well as
I did that our valuation was crazy.
Yahoo was a special case. It was not just our price to earnings ratio that was
bogus. Half our earnings were too. Not in the Enron way, of course. The
finance guys seemed scrupulous about reporting earnings. What made our
earnings bogus was that Yahoo was, in effect, the center of a Ponzi scheme.
Investors looked at Yahoo's earnings and said to themselves, here is proof
that Internet companies can make money. So they invested in new startups that
promised to be the next Yahoo. And as soon as these startups got the money,
what did they do with it? Buy millions of dollars worth of advertising on
Yahoo to promote their brand. Result: a capital investment in a startup this
quarter shows up as Yahoo earnings next quarter—stimulating another round of
investments in startups.
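If you want to watch the loop run, here's a toy simulation in Python. Every
number in it is invented; the point is only the shape of the feedback:

```python
# Capital recirculating through ad buys masquerades as earnings growth.
ad_fraction = 0.5    # share of each startup round spent on Yahoo ads (invented)
multiplier = 2.5     # new startup funding per dollar of Yahoo earnings (invented)
funding = 100.0      # this quarter's investment in startups, in $M

for quarter in range(1, 7):
    yahoo_earnings = ad_fraction * funding   # capital showing up as revenue
    funding = multiplier * yahoo_earnings    # ...stimulating the next round
    print(f"Q{quarter}: earnings ${yahoo_earnings:,.0f}M, next round ${funding:,.0f}M")
```

Earnings grow 25% a quarter, and not a dollar of it comes from actual
customers.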
As in a Ponzi scheme, what seemed to be the returns of this system were simply
the latest round of investments in it. What made it not a Ponzi scheme was
that it was unintentional. At least, I think it was. The venture capital
business is pretty incestuous, and there were presumably people in a position,
if not to create this situation, to realize what was happening and to milk it.
A year later the game was up. Starting in January 2000, Yahoo's stock price
began to crash, ultimately losing 95% of its value.
Notice, though, that even with all the fat trimmed off its market cap, Yahoo
was still worth a lot. Even at the morning-after valuations of March and April
2001, the people at Yahoo had managed to create a company worth about $8
billion in just six years.
The fact is, despite all the nonsense we heard during the Bubble about the
"new economy," there was a core of truth. You need that to get a really big
bubble: you need to have something solid at the center, so that even smart
people are sucked in. (Isaac Newton and Jonathan Swift both lost money in the
South Sea Bubble of 1720.)
Now the pendulum has swung the other way. Now anything that became fashionable
during the Bubble is ipso facto unfashionable. But that's a mistake—an even
bigger mistake than believing what everyone was saying in 1999. Over the long
term, what the Bubble got right will be more important than what it got wrong.
**1\. Retail VC**
After the excesses of the Bubble, it's now considered dubious to take
companies public before they have earnings. But there is nothing intrinsically
wrong with that idea. Taking a company public at an early stage is simply
retail VC: instead of going to venture capital firms for the last round of
funding, you go to the public markets.
By the end of the Bubble, companies going public with no earnings were being
derided as "concept stocks," as if it were inherently stupid to invest in
them. But investing in concepts isn't stupid; it's what VCs do, and the best
of them are far from stupid.
The stock of a company that doesn't yet have earnings is worth _something._ It
may take a while for the market to learn how to value such companies, just as
it had to learn to value common stocks in the early 20th century. But markets
are good at solving that kind of problem. I wouldn't be surprised if the
market ultimately did a better job than VCs do now.
Going public early will not be the right plan for every company. And it can of
course be disruptive—by distracting the management, or by making the early
employees suddenly rich. But just as the market will learn how to value
startups, startups will learn how to minimize the damage of going public.
**2\. The Internet**
The Internet genuinely is a big deal. That was one reason even smart people
were fooled by the Bubble. Obviously it was going to have a huge effect.
Enough of an effect to triple the value of Nasdaq companies in two years? No,
as it turned out. But it was hard to say for certain at the time.
The same thing happened during the Mississippi and South Sea Bubbles. What
drove them was the invention of organized public finance (the South Sea
Company, despite its name, was really a competitor of the Bank of England).
And that did turn out to be a big deal, in the long run.
Recognizing an important trend turns out to be easier than figuring out how to
profit from it. The mistake investors always seem to make is to take the trend
too literally. Since the Internet was the big new thing, investors supposed
that the more Internettish the company, the better. Hence such parodies as
Pets.Com.
In fact most of the money to be made from big trends is made indirectly. It
was not the railroads themselves that made the most money during the railroad
boom, but the companies on either side, like Carnegie's steelworks, which made
the rails, and Standard Oil, which used railroads to get oil to the East
Coast, where it could be shipped to Europe.
I think the Internet will have great effects, and that what we've seen so far
is nothing compared to what's coming. But most of the winners will only
indirectly be Internet companies; for every Google there will be ten JetBlues.
**3\. Choices**
Why will the Internet have great effects? The general argument is that new
forms of communication always do. They happen rarely (till industrial times
there were just speech, writing, and printing), but when they do, they always
cause a big splash.
The specific argument, or one of them, is that the Internet gives us more choices.
In the "old" economy, the high cost of presenting information to people meant
they had only a narrow range of options to choose from. The tiny, expensive
pipeline to consumers was tellingly named "the channel." Control the channel
and you could feed them what you wanted, on your terms. And it was not just
big corporations that depended on this principle. So, in their way, did labor
unions, the traditional news media, and the art and literary establishments.
Winning depended not on doing good work, but on gaining control of some
bottleneck.
There are signs that this is changing. Google has over 82 million unique users
a month and annual revenues of about three billion dollars. And yet have
you ever seen a Google ad? Something is going on here.
Admittedly, Google is an extreme case. It's very easy for people to switch to
a new search engine. It costs little effort and no money to try a new one, and
it's easy to see if the results are better. And so Google doesn't _have_ to
advertise. In a business like theirs, being the best is enough.
The exciting thing about the Internet is that it's shifting everything in that
direction. The hard part, if you want to win by making the best stuff, is the
beginning. Eventually everyone will learn by word of mouth that you're the
best, but how do you survive to that point? And it is in this crucial stage
that the Internet has the most effect. First, the Internet lets anyone find
you at almost zero cost. Second, it dramatically speeds up the rate at which
reputation spreads by word of mouth. Together these mean that in many fields
the rule will be: Build it, and they will come. Make something great and put
it online. That is a big change from the recipe for winning in the past
century.
**4\. Youth**
The aspect of the Internet Bubble that the press seemed most taken with was
the youth of some of the startup founders. This too is a trend that will last.
There is a huge standard deviation among 26 year olds. Some are fit only for
entry level jobs, but others are ready to rule the world if they can find
someone to handle the paperwork for them.
A 26 year old may not be very good at managing people or dealing with the SEC.
Those require experience. But those are also commodities, which can be handed
off to some lieutenant. The most important quality in a CEO is his vision for
the company's future. What will they build next? And in that department, there
are 26 year olds who can compete with anyone.
In 1970 a company president meant someone in his fifties, at least. If he had
technologists working for him, they were treated like a racing stable: prized,
but not powerful. But as technology has grown more important, the power of
nerds has grown to reflect it. Now it's not enough for a CEO to have someone
smart he can ask about technical matters. Increasingly, he has to be that
person himself.
As always, business has clung to old forms. VCs still seem to want to install
a legitimate-looking talking head as the CEO. But increasingly the founders of
the company are the real powers, and the grey-headed man installed by the VCs
is more like a music group's manager than a general.
**5\. Informality**
In New York, the Bubble had dramatic consequences: suits went out of fashion.
They made one seem old. So in 1998 powerful New York types were suddenly
wearing open-necked shirts and khakis and oval wire-rimmed glasses, just like
guys in Santa Clara.
The pendulum has swung back a bit, driven in part by a panicked reaction by
the clothing industry. But I'm betting on the open-necked shirts. And this is
not as frivolous a question as it might seem. Clothes are important, as all
nerds can sense, though they may not realize it consciously.
If you're a nerd, you can understand how important clothes are by asking
yourself how you'd feel about a company that made you wear a suit and tie to
work. The idea sounds horrible, doesn't it? In fact, horrible far out of
proportion to the mere discomfort of wearing such clothes. A company that made
programmers wear suits would have something deeply wrong with it.
And what would be wrong would be that how one presented oneself counted more
than the quality of one's ideas. _That's_ the problem with formality. Dressing
up is not so much bad in itself. The problem is the receptor it binds to:
dressing up is inevitably a substitute for good ideas. It is no coincidence
that technically inept business types are known as "suits."
Nerds don't just happen to dress informally. They do it too consistently.
Consciously or not, they dress informally as a prophylactic measure against
stupidity.
**6\. Nerds**
Clothing is only the most visible battleground in the war against formality.
Nerds tend to eschew formality of any sort. They're not impressed by one's job
title, for example, or any of the other appurtenances of authority.
Indeed, that's practically the definition of a nerd. I found myself talking
recently to someone from Hollywood who was planning a show about nerds. I
thought it would be useful if I explained what a nerd was. What I came up with
was: someone who doesn't expend any effort on marketing himself.
A nerd, in other words, is someone who concentrates on substance. So what's
the connection between nerds and technology? Roughly that you can't fool
mother nature. In technical matters, you have to get the right answers. If
your software miscalculates the path of a space probe, you can't finesse your
way out of trouble by saying that your code is patriotic, or avant-garde, or
any of the other dodges people use in nontechnical fields.
And as technology becomes increasingly important in the economy, nerd culture
is rising with it. Nerds are already a lot cooler than they were when I was a
kid. When I was in college in the mid-1980s, "nerd" was still an insult.
People who majored in computer science generally tried to conceal it. Now
women ask me where they can meet nerds. (The answer that springs to mind is
"Usenix," but that would be like drinking from a firehose.)
I have no illusions about why nerd culture is becoming more accepted. It's not
because people are realizing that substance is more important than marketing.
It's because the nerds are getting rich. But that is not going to change.
**7\. Options**
What makes the nerds rich, usually, is stock options. Now there are moves
afoot to make it harder for companies to grant options. To the extent there's
some genuine accounting abuse going on, by all means correct it. But don't
kill the golden goose. Equity is the fuel that drives technical innovation.
Options are a good idea because (a) they're fair, and (b) they work. Someone
who goes to work for a company is (one hopes) adding to its value, and it's
only fair to give them a share of it. And as a purely practical measure,
people work a _lot_ harder when they have options. I've seen that first hand.
The fact that a few crooks during the Bubble robbed their companies by
granting themselves options doesn't mean options are a bad idea. During the
railroad boom, some executives enriched themselves by selling watered stock—by
issuing more shares than they said were outstanding. But that doesn't make
common stock a bad idea. Crooks just use whatever means are available.
If there is a problem with options, it's that they reward slightly the wrong
thing. Not surprisingly, people do what you pay them to. If you pay them by
the hour, they'll work a lot of hours. If you pay them by the volume of work
done, they'll get a lot of work done (but only as you defined work). And if
you pay them to raise the stock price, which is what options amount to,
they'll raise the stock price.
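You can see the incentive in the payoff itself. An option pays max(0, market
price - strike) per share, so it responds to the market price at the moment
you can sell, not to the company's underlying value. A sketch, with made-up
numbers:

```python
def option_payoff(market_price: float, strike: float, shares: int) -> float:
    # Payoff depends only on the market price, not the underlying value.
    return max(0.0, market_price - strike) * shares

# Same company, same real value; the hyped price pays three times as much.
print(option_payoff(market_price=30.0, strike=10.0, shares=10_000))  # 200000.0
print(option_payoff(market_price=70.0, strike=10.0, shares=10_000))  # 600000.0
```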
But that's not quite what you want. What you want is to increase the actual
value of the company, not its market cap. Over time the two inevitably meet,
but not always as quickly as options vest. Which means options tempt
employees, if only unconsciously, to "pump and dump"—to do things that will
make the company _seem_ valuable. I found that when I was at Yahoo, I couldn't
help thinking, "how will this sound to investors?" when I should have been
thinking "is this a good idea?"
So maybe the standard option deal needs to be tweaked slightly. Maybe options
should be replaced with something tied more directly to earnings. It's still
early days.
**8\. Startups**
What made the options valuable, for the most part, is that they were options
on the stock of startups. Startups were not of course a creation of the
Bubble, but they were more visible during the Bubble than ever before.
One thing most people did learn about for the first time during the Bubble was
the startup created with the intention of selling it. Originally a startup
meant a small company that hoped to grow into a big one. But increasingly
startups are evolving into a vehicle for developing technology on spec.
As I wrote in Hackers & Painters, employees seem to be most productive when
they're paid in proportion to the wealth they generate. And the advantage of a
startup—indeed, almost its raison d'etre—is that it offers something otherwise
impossible to obtain: a way of _measuring_ that.
In many businesses, it just makes more sense for companies to get technology
by buying startups rather than developing it in house. You pay more, but there
is less risk, and risk is what big companies don't want. It makes the guys
developing the technology more accountable, because they only get paid if they
build the winner. And you end up with better technology, created faster,
because things are made in the innovative atmosphere of startups instead of
the bureaucratic atmosphere of big companies.
Our startup, Viaweb, was built to be sold. We were open with investors about
that from the start. And we were careful to create something that could slot
easily into a larger company. That is the pattern for the future.
**9\. California**
The Bubble was a California phenomenon. When I showed up in Silicon Valley in
1998, I felt like an immigrant from Eastern Europe arriving in America in
1900. Everyone was so cheerful and healthy and rich. It seemed a new and
improved world.
The press, ever eager to exaggerate small trends, now gives one the impression
that Silicon Valley is a ghost town. Not at all. When I drive down 101 from
the airport, I still feel a buzz of energy, as if there were a giant
transformer nearby. Real estate is still more expensive than just about
anywhere else in the country. The people still look healthy, and the weather
is still fabulous. The future is there. (I say "there" because I moved back to
the East Coast after Yahoo. I still wonder if this was a smart idea.)
What makes the Bay Area superior is the attitude of the people. I notice that
when I come home to Boston. The first thing I see when I walk out of the
airline terminal is the fat, grumpy guy in charge of the taxi line. I brace
myself for rudeness: _remember, you're back on the East Coast now._
The atmosphere varies from city to city, and fragile organisms like startups
are exceedingly sensitive to such variation. If it hadn't already been
hijacked as a new euphemism for liberal, the word to describe the atmosphere
in the Bay Area would be "progressive." People there are trying to build the
future. Boston has MIT and Harvard, but it also has a lot of truculent,
unionized employees like the police who recently held the Democratic National
Convention for ransom, and a lot of people trying to be Thurston Howell. Two
sides of an obsolete coin.
Silicon Valley may not be the next Paris or London, but it is at least the
next Chicago. For the next fifty years, that's where new wealth will come
from.
**10\. Productivity**
During the Bubble, optimistic analysts used to justify high price to earnings
ratios by saying that technology was going to increase productivity
dramatically. They were wrong about the specific companies, but not so wrong
about the underlying principle. I think one of the big trends we'll see in the
coming century is a huge increase in productivity.
Or more precisely, a huge increase in variation in productivity. Technology is
a lever. It doesn't add; it multiplies. If the present range of productivity
is 0 to 100, introducing a multiple of 10 increases the range from 0 to 1000.
One upshot of which is that the companies of the future may be surprisingly
small. I sometimes daydream about how big you could grow a company (in
revenues) without ever having more than ten people. What would happen if you
outsourced everything except product development? If you tried this
experiment, I think you'd be surprised at how far you could get. As Fred
Brooks pointed out, small groups are intrinsically more productive, because
the internal friction in a group grows as the square of its size.
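Brooks's point comes from counting pairwise communication channels: a group of
n people has n(n - 1)/2 of them, which grows roughly as the square of n. In
code:

```python
def channels(n: int) -> int:
    # Distinct pairs in a group of n people: n(n-1)/2.
    return n * (n - 1) // 2

for n in (2, 5, 10, 50, 100):
    print(n, channels(n))  # 1, 10, 45, 1225, 4950
```

Go from ten people to a hundred and the number of channels goes up by more
than a factor of a hundred.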
Till quite recently, running a major company meant managing an army of
workers. Our standards about how many employees a company should have are
still influenced by old patterns. Startups are perforce small, because they
can't afford to hire a lot of people. But I think it's a big mistake for
companies to loosen their belts as revenues increase. The question is not
whether you can afford the extra salaries. Can you afford the loss in
productivity that comes from making the company bigger?
The prospect of technological leverage will of course raise the specter of
unemployment. I'm surprised people still worry about this. After centuries of
supposedly job-killing innovations, the number of jobs is within ten percent
of the number of people who want them. This can't be a coincidence. There must
be some kind of balancing mechanism.
**What's New**
When one looks over these trends, is there any overall theme? There does seem
to be: that in the coming century, good ideas will count for more. That 26
year olds with good ideas will increasingly have an edge over 50 year olds
with powerful connections. That doing good work will matter more than dressing
up—or advertising, which is the same thing for companies. That people will be
rewarded a bit more in proportion to the value of what they create.
If so, this is good news indeed. Good ideas always tend to win eventually. The
problem is, it can take a very long time. It took decades for relativity to be
accepted, and the greater part of a century to establish that central planning
didn't work. So even a small increase in the rate at which good ideas win
would be a momentous change—big enough, probably, to justify a name like the
"new economy."
November 2004
_(This is a new essay for the Japanese edition of Hackers & Painters. It
tries to explain why Americans make some things well and others badly.)_
A few years ago an Italian friend of mine travelled by train from Boston to
Providence. She had only been in America for a couple weeks and hadn't seen
much of the country yet. She arrived looking astonished. "It's so _ugly!"_
People from other rich countries can scarcely imagine the squalor of the man-
made bits of America. In travel books they show you mostly natural
environments: the Grand Canyon, whitewater rafting, horses in a field. If you
see pictures with man-made things in them, it will be either a view of the New
York skyline shot from a discreet distance, or a carefully cropped image of a
seacoast town in Maine.
How can it be, visitors must wonder. How can the richest country in the world
look like this?
Oddly enough, it may not be a coincidence. Americans are good at some things
and bad at others. We're good at making movies and software, and bad at making
cars and cities. And I think we may be good at what we're good at for the same
reason we're bad at what we're bad at. We're impatient. In America, if you
want to do something, you don't worry that it might come out badly, or upset
delicate social balances, or that people might think you're getting above
yourself. If you want to do something, as Nike says, _just do it._
This works well in some fields and badly in others. I suspect it works in
movies and software because they're both messy processes. "Systematic" is the
last word I'd use to describe the way good programmers write software. Code is
not something they assemble painstakingly after careful planning, like the
pyramids. It's something they plunge into, working fast and constantly
changing their minds, like a charcoal sketch.
In software, paradoxical as it sounds, good craftsmanship means working fast.
If you work slowly and meticulously, you merely end up with a very fine
implementation of your initial, mistaken idea. Working slowly and meticulously
is premature optimization. Better to get a prototype done fast, and see what
new ideas it gives you.
It sounds like making movies works a lot like making software. Every movie is
a Frankenstein, full of imperfections and usually quite different from what
was originally envisioned. But interesting, and finished fairly quickly.
I think we get away with this in movies and software because they're both
malleable mediums. Boldness pays. And if at the last minute two parts don't
quite fit, you can figure out some hack that will at least conceal the
problem.
Not so with cars, or cities. They are all too physical. If the car business
worked like software or movies, you'd surpass your competitors by making a car
that weighed only fifty pounds, or folded up to the size of a motorcycle when
you wanted to park it. But with physical products there are more constraints.
You don't win by dramatic innovations so much as by good taste and attention
to detail.
The trouble is, the very word "taste" sounds slightly ridiculous to American
ears. It seems pretentious, or frivolous, or even effeminate. Blue staters
think it's "subjective," and red staters think it's for sissies. So anyone in
America who really cares about design will be sailing upwind.
Twenty years ago we used to hear that the problem with the US car industry was
the workers. We don't hear that any more now that Japanese companies are
building cars in the US. The problem with American cars is bad design. You can
see that just by looking at them.
All that extra sheet metal on the AMC Matador wasn't added by the workers. The
problem with this car, as with American cars today, is that it was designed by
marketing people instead of designers.
Why do the Japanese make better cars than us? Some say it's because their
culture encourages cooperation. That may come into it. But in this case it
seems more to the point that their culture prizes design and craftsmanship.
For centuries the Japanese have made finer things than we have in the West.
When you look at swords they made in 1200, you just can't believe the date on
the label is right. Presumably their cars fit together more precisely than
ours for the same reason their joinery always has. They're obsessed with
making things well.
Not us. When we make something in America, our aim is just to get the job
done. Once we reach that point, we take one of two routes. We can stop there,
and have something crude but serviceable, like a Vise-grip. Or we can improve
it, which usually means encrusting it with gratuitous ornament. When we want
to make a car "better," we stick tail fins on it, or make it longer, or make
the windows smaller, depending on the current fashion.
Ditto for houses. In America you can have either a flimsy box banged together
out of two by fours and drywall, or a McMansion-- a flimsy box banged together
out of two by fours and drywall, but larger, more dramatic-looking, and full
of expensive fittings. Rich people don't get better design or craftsmanship;
they just get a larger, more conspicuous version of the standard house.
We don't especially prize design or craftsmanship here. What we like is speed,
and we're willing to do something in an ugly way to get it done fast. In some
fields, like software or movies, this is a net win.
But it's not just that software and movies are malleable mediums. In those
businesses, the designers (though they're not generally called that) have more
power. Software companies, at least successful ones, tend to be run by
programmers. And in the film industry, though producers may second-guess
directors, the director controls most of what appears on the screen. And so
American software and movies, and Japanese cars, all have this in common: the
people in charge care about design-- the former because the designers are in
charge, and the latter because the whole culture cares about design.
I think most Japanese executives would be horrified at the idea of making a
bad car. Whereas American executives, in their hearts, still believe the most
important thing about a car is the image it projects. Make a good car? What's
"good?" It's so _subjective._ If you want to know how to design a car, ask a
focus group.
Instead of relying on their own internal design compass (like Henry Ford did),
American car companies try to make what marketing people think consumers want.
But it isn't working. American cars continue to lose market share. And the
reason is that the customer doesn't want what he thinks he wants.
Letting focus groups design your cars for you only wins in the short term. In
the long term, it pays to bet on good design. The focus group may say they
want the meretricious feature du jour, but what they want even more is to
imitate sophisticated buyers, and they, though a small minority, really do
care about good design. Eventually the pimps and drug dealers notice that the
doctors and lawyers have switched from Cadillac to Lexus, and do the same.
Apple is an interesting counterexample to the general American trend. If you
want to buy a nice CD player, you'll probably buy a Japanese one. But if you
want to buy an MP3 player, you'll probably buy an iPod. What happened? Why
doesn't Sony dominate MP3 players? Because Apple is in the consumer
electronics business now, and unlike other American companies, they're
obsessed with good design. Or more precisely, their CEO is.
I just got an iPod, and it's not just nice. It's _surprisingly_ nice. For it
to surprise me, it must be satisfying expectations I didn't know I had. No
focus group is going to discover those. Only a great designer can.
Cars aren't the worst thing we make in America. Where the just-do-it model
fails most dramatically is in our cities-- or rather, exurbs. If real estate
developers operated on a large enough scale, if they built whole towns, market
forces would compel them to build towns that didn't suck. But they only build
a couple office buildings or suburban streets at a time, and the result is so
depressing that the inhabitants consider it a great treat to fly to Europe and
spend a couple weeks living what is, for people there, just everyday life.
But the just-do-it model does have advantages. It seems the clear winner for
generating wealth and technical innovations (which are practically the same
thing). I think speed is the reason. It's hard to create wealth by making a
commodity. The real value is in things that are new, and if you want to be the
first to make something, it helps to work fast. For better or worse, the just-
do-it model is fast, whether you're Dan Bricklin writing the prototype of
VisiCalc in a weekend, or a real estate developer building a block of shoddy
condos in a month.
If I had to choose between the just-do-it model and the careful model, I'd
probably choose just-do-it. But do we have to choose? Could we have it both
ways? Could Americans have nice places to live without undermining the
impatient, individualistic spirit that makes us good at software? Could other
countries introduce more individualism into their technology companies and
research labs without having it metastasize as strip malls? I'm optimistic.
It's harder to say about other countries, but in the US, at least, I think we
can have both.
Apple is an encouraging example. They've managed to preserve enough of the
impatient, hackerly spirit you need to write software. And yet when you pick
up a new Apple laptop, well, it doesn't seem American. It's too perfect. It
seems as if it must have been made by a Swedish or a Japanese company.
In many technologies, version 2 has higher resolution. Why not in design
generally? I think we'll gradually see national characters superseded by
occupational characters: hackers in Japan will be allowed to behave with a
willfulness that would now seem unJapanese, and products in America will be
designed with an insistence on taste that would now seem unAmerican. Perhaps
the most successful countries, in the future, will be those most willing to
ignore what are now considered national characters, and do each kind of work
in the way that works best. Race you.
---
August 2004
In a recent talk I said something that upset a lot of people: that you could
get smarter programmers to work on a Python project than you could to work on
a Java project.
I didn't mean by this that Java programmers are dumb. I meant that Python
programmers are smart. It's a lot of work to learn a new programming language.
And people don't learn Python because it will get them a job; they learn it
because they genuinely like to program and aren't satisfied with the languages
they already know.
Which makes them exactly the kind of programmers companies should want to
hire. Hence what, for lack of a better name, I'll call the Python paradox: if
a company chooses to write its software in a comparatively esoteric language,
they'll be able to hire better programmers, because they'll attract only those
who cared enough to learn it. And for programmers the paradox is even more
pronounced: the language to learn, if you want to get a good job, is a
language that people don't learn merely to get a job.
Only a few companies have been smart enough to realize this so far. But there
is a kind of selection going on here too: they're exactly the companies
programmers would most like to work for. Google, for example. When they
advertise Java programming jobs, they also want Python experience.
A friend of mine who knows nearly all the widely used languages uses Python
for most of his projects. He says the main reason is that he likes the way
source code looks. That may seem a frivolous reason to choose one language
over another. But it is not so frivolous as it sounds: when you program, you
spend more time reading code than writing it. You push blobs of source code
around the way a sculptor does blobs of clay. So a language that makes source
code ugly is maddening to an exacting programmer, as clay full of lumps would
be to a sculptor.
At the mention of ugly source code, people will of course think of Perl. But
the superficial ugliness of Perl is not the sort I mean. Real ugliness is not
harsh-looking syntax, but having to build programs out of the wrong concepts.
Perl may look like a cartoon character swearing, but there are cases where it
surpasses Python conceptually.
So far, anyway. Both languages are of course moving targets. But they share,
along with Ruby (and Icon, and Joy, and J, and Lisp, and Smalltalk) the fact
that they're created by, and used by, people who really care about
programming. And those tend to be the ones who do it well.
---
May 2005
_(This essay is derived from a talk at the Berkeley CSUA.)_
The three big powers on the Internet now are Yahoo, Google, and Microsoft.
Average age of their founders: 24. So it is pretty well established now that
grad students can start successful companies. And if grad students can do it,
why not undergrads?
Like everything else in technology, the cost of starting a startup has
decreased dramatically. Now it's so low that it has disappeared into the
noise. The main cost of starting a Web-based startup is food and rent. Which
means it doesn't cost much more to start a company than to be a total slacker.
You can probably start a startup on ten thousand dollars of seed funding, if
you're prepared to live on ramen.
The less it costs to start a company, the less you need the permission of
investors to do it. So a lot of people will be able to start companies now who
never could have before.
The most interesting subset may be those in their early twenties. I'm not so
excited about founders who have everything investors want except intelligence,
or everything except energy. The most promising group to be liberated by the
new, lower threshold are those who have everything investors want except
experience.
**Market Rate**
I once claimed that nerds were unpopular in secondary school mainly because
they had better things to do than work full-time at being popular. Some said I
was just telling people what they wanted to hear. Well, I'm now about to do
that in a spectacular way: I think undergraduates are undervalued.
Or more precisely, I think few realize the huge spread in the value of 20 year
olds. Some, it's true, are not very capable. But others are more capable than
all but a handful of 30 year olds.
Till now the problem has always been that it's difficult to pick them out.
Every VC in the world, if they could go back in time, would try to invest in
Microsoft. But which of them would have invested then? How many would have understood that this
particular 19 year old was Bill Gates?
It's hard to judge the young because (a) they change rapidly, (b) there is
great variation between them, and (c) they're individually inconsistent. That
last one is a big problem. When you're young, you occasionally say and do
stupid things even when you're smart. So if the algorithm is to filter out
people who say stupid things, as many investors and employers unconsciously
do, you're going to get a lot of false positives.
Most organizations that hire people right out of college are only aware of the
average value of 22 year olds, which is not that high. And so the idea for
most of the twentieth century was that everyone had to begin as a trainee in
some entry-level job. Organizations realized there was a lot of variation in
the incoming stream, but instead of pursuing this thought they tended to
suppress it, in the belief that it was good for even the most promising kids
to start at the bottom, so they didn't get swelled heads.
The most productive young people will _always_ be undervalued by large
organizations, because the young have no performance to measure yet, and any
error in guessing their ability will tend toward the mean.
What's an especially productive 22 year old to do? One thing you can do is go
over the heads of organizations, directly to the users. Any company that hires
you is, economically, acting as a proxy for the customer. The rate at which
they value you (though they may not consciously realize it) is an attempt to
guess your value to the user. But there's a way to appeal their judgement. If
you want, you can opt to be valued directly by users, by starting your own
company.
The market is a lot more discerning than any employer. And it is completely
non-discriminatory. On the Internet, nobody knows you're a dog. And more to
the point, nobody knows you're 22. All users care about is whether your site
or software gives them what they want. They don't care if the person behind it
is a high school kid.
If you're really productive, why not make employers pay market rate for you?
Why go work as an ordinary employee for a big company, when you could start a
startup and make them buy it to get you?
When most people hear the word "startup," they think of the famous ones that
have gone public. But most startups that succeed do it by getting bought. And
usually the acquirer doesn't just want the technology, but the people who
created it as well.
Often big companies buy startups before they're profitable. Obviously in such
cases they're not after revenues. What they want is the development team and
the software they've built so far. When a startup gets bought for $2 or $3
million six months in, it's really more of a hiring bonus than an acquisition.
I think this sort of thing will happen more and more, and that it will be
better for everyone. It's obviously better for the people who start the
startup, because they get a big chunk of money up front. But I think it will
be better for the acquirers too. The central problem in big companies, and the
main reason they're so much less productive than small companies, is the
difficulty of valuing each person's work. Buying larval startups solves that
problem for them: the acquirer doesn't pay till the developers have proven
themselves. Acquirers are protected on the downside, but still get most of the
upside.
**Product Development**
Buying startups also solves another problem afflicting big companies: they
can't do product development. Big companies are good at extracting the value
from existing products, but bad at creating new ones.
Why? It's worth studying this phenomenon in detail, because this is the raison
d'etre of startups.
To start with, most big companies have some kind of turf to protect, and this
tends to warp their development decisions. For example, Web-based applications
are hot now, but within Microsoft there must be a lot of ambivalence about
them, because the very idea of Web-based software threatens the desktop. So
any Web-based application that Microsoft ends up with will probably, like
Hotmail, be something developed outside the company.
Another reason big companies are bad at developing new products is that the
kind of people who do that tend not to have much power in big companies
(unless they happen to be the CEO). Disruptive technologies are developed by
disruptive people. And they either don't work for the big company, or have
been outmaneuvered by yes-men and have comparatively little influence.
Big companies also lose because they usually only build one of each thing.
When you only have one Web browser, you can't do anything really risky with
it. If ten different startups design ten different Web browsers and you take
the best, you'll probably get something better.
The more general version of this problem is that there are too many new ideas
for companies to explore them all. There might be 500 startups right now who
think they're making something Microsoft might buy. Even Microsoft probably
couldn't manage 500 development projects in-house.
Big companies also don't pay people the right way. People developing a new
product at a big company get paid roughly the same whether it succeeds or
fails. People at a startup expect to get rich if the product succeeds, and get
nothing if it fails. So naturally the people at the startup work a lot
harder.
The mere bigness of big companies is an obstacle. In startups, developers are
often forced to talk directly to users, whether they want to or not, because
there is no one else to do sales and support. It's painful doing sales, but
you learn much more from trying to sell people something than reading what
they said in focus groups.
And then of course, big companies are bad at product development because
they're bad at everything. Everything happens slower in big companies than
small ones, and product development is something that has to happen fast,
because you have to go through a lot of iterations to get something good.
**Trend**
I think the trend of big companies buying startups will only accelerate. One
of the biggest remaining obstacles is pride. Most companies, at least
unconsciously, feel they ought to be able to develop stuff in house, and that
buying startups is to some degree an admission of failure. And so, as people
generally do with admissions of failure, they put it off for as long as
possible. That makes the acquisition very expensive when it finally happens.
What companies should do is go out and discover startups when they're young,
before VCs have puffed them up into something that costs hundreds of millions
to acquire. Much of what VCs add, the acquirer doesn't need anyway.
Why don't acquirers try to predict the companies they're going to have to buy
for hundreds of millions, and grab them early for a tenth or a twentieth of
that? Because they can't predict the winners in advance? If they're only
paying a twentieth as much, they only have to predict a twentieth as well.
Surely they can manage that.
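A toy calculation makes the ratio concrete (the prices below are invented, purely for illustration):

```typescript
// Hypothetical prices: a VC-inflated acquisition vs. the same startup early.
const latePrice = 200; // say, $200M after the VCs have puffed it up
const earlyPrice = 10; // $10M caught at an early stage

// Buying early breaks even so long as at least this fraction of your
// early bets turn into companies you'd otherwise have to buy late:
const breakEvenHitRate = earlyPrice / latePrice;
console.log(breakEvenHitRate); // 0.05, i.e. predicting "a twentieth as well"
```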
I think companies that acquire technology will gradually learn to go after
earlier stage startups. They won't necessarily buy them outright. The solution
may be some hybrid of investment and acquisition: for example, to buy a chunk
of the company and get an option to buy the rest later.
When companies buy startups, they're effectively fusing recruiting and product
development. And I think that's more efficient than doing the two separately,
because you always get people who are really committed to what they're working
on.
Plus this method yields teams of developers who already work well together.
Any conflicts between them have been ironed out under the very hot iron of
running a startup. By the time the acquirer gets them, they're finishing one
another's sentences. That's valuable in software, because so many bugs occur
at the boundaries between different people's code.
**Investors**
The increasing cheapness of starting a company doesn't just give hackers more
power relative to employers. It also gives them more power relative to
investors.
The conventional wisdom among VCs is that hackers shouldn't be allowed to run
their own companies. The founders are supposed to accept MBAs as their bosses,
and themselves take on some title like Chief Technical Officer. There may be
cases where this is a good idea. But I think founders will increasingly be
able to push back in the matter of control, because they just don't need the
investors' money as much as they used to.
Startups are a comparatively new phenomenon. Fairchild Semiconductor is
considered the first VC-backed startup, and they were founded in 1959, less
than fifty years ago. Measured on the time scale of social change, what we
have now is pre-beta. So we shouldn't assume the way startups work now is the
way they have to work.
Fairchild needed a lot of money to get started. They had to build actual
factories. What does the first round of venture funding for a Web-based
startup get spent on today? More money can't get software written faster; it
isn't needed for facilities, because those can now be quite cheap; all money
can really buy you is sales and marketing. A sales force is worth something,
I'll admit. But marketing is increasingly irrelevant. On the Internet,
anything genuinely good will spread by word of mouth.
Investors' power comes from money. When startups need less money, investors
have less power over them. So future founders may not have to accept new CEOs
if they don't want them. The VCs will have to be dragged kicking and screaming
down this road, but like many things people have to be dragged kicking and
screaming toward, it may actually be good for them.
Google is a sign of the way things are going. As a condition of funding, their
investors insisted they hire someone old and experienced as CEO. But from what
I've heard the founders didn't just give in and take whoever the VCs wanted.
They delayed for an entire year, and when they did finally take a CEO, they
chose a guy with a PhD in computer science.
It sounds to me as if the founders are still the most powerful people in the
company, and judging by Google's performance, their youth and inexperience
doesn't seem to have hurt them. Indeed, I suspect Google has done better than
they would have if the founders had given the VCs what they wanted, when they
wanted it, and let some MBA take over as soon as they got their first round of
funding.
I'm not claiming the business guys installed by VCs have no value. Certainly
they have. But they don't need to become the founders' bosses, which is what
that title CEO means. I predict that in the future the executives installed by
VCs will increasingly be COOs rather than CEOs. The founders will run
engineering directly, and the rest of the company through the COO.
**The Open Cage**
With both employers and investors, the balance of power is slowly shifting
towards the young. And yet they seem the last to realize it. Only the most
ambitious undergrads even consider starting their own company when they
graduate. Most just want to get a job.
Maybe this is as it should be. Maybe if the idea of starting a startup is
intimidating, you filter out the uncommitted. But I suspect the filter is set
a little too high. I think there are people who could, if they tried, start
successful startups, and who instead let themselves be swept into the intake
ducts of big companies.
Have you ever noticed that when animals are let out of cages, they don't
always realize at first that the door's open? Often they have to be poked with
a stick to get them out. Something similar happened with blogs. People could
have been publishing online in 1995, and yet blogging has only really taken
off in the last couple years. In 1995 we thought only professional writers
were entitled to publish their ideas, and that anyone else who did was a
crank. Now publishing online is becoming so popular that everyone wants to do
it, even print journalists. But blogging has not taken off recently because of
any technical innovation; it just took eight years for everyone to realize the
cage was open.
I think most undergrads don't realize yet that the economic cage is open. A
lot have been told by their parents that the route to success is to get a good
job. This was true when their parents were in college, but it's less true now.
The route to success is to build something valuable, and you don't have to be
working for an existing company to do that. Indeed, you can often do it better
if you're not.
When I talk to undergrads, what surprises me most about them is how
conservative they are. Not politically, of course. I mean they don't seem to
want to take risks. This is a mistake, because the younger you are, the more
risk you can take.
**Risk**
Risk and reward are always proportionate. For example, stocks are riskier than
bonds, and over time always have greater returns. So why does anyone invest in
bonds? The catch is that phrase "over time." Stocks will generate greater
returns over thirty years, but they might lose value from year to year. So
what you should invest in depends on how soon you need the money. If you're
young, you should take the riskiest investments you can find.
All this talk about investing may seem very theoretical. Most undergrads
probably have more debts than assets. They may feel they have nothing to
invest. But that's not true: they have their time to invest, and the same rule
about risk applies there. Your early twenties are exactly the time to take
insane career risks.
The reason risk is always proportionate to reward is that market forces make
it so. People will pay extra for stability. So if you choose stability-- by
buying bonds, or by going to work for a big company-- it's going to cost you.
Riskier career moves pay better on average, because there is less demand for
them. Extreme choices like starting a startup are so frightening that most
people won't even try. So you don't end up having as much competition as you
might expect, considering the prizes at stake.
The math is brutal. While perhaps 9 out of 10 startups fail, the one that
succeeds will pay the founders more than 10 times what they would have made in
an ordinary job. That's the sense in which startups pay better "on
average."
Remember that. If you start a startup, you'll probably fail. Most startups
fail. It's the nature of the business. But it's not necessarily a mistake to
try something that has a 90% chance of failing, if you can afford the risk.
Failing at 40, when you have a family to support, could be serious. But if you
fail at 22, so what? If you try to start a startup right out of college and it
tanks, you'll end up at 23 broke and a lot smarter. Which, if you think about
it, is roughly what you hope to get from a graduate program.
Even if your startup does tank, you won't harm your prospects with employers.
To make sure, I asked some friends who work for big companies. I asked managers
at Yahoo, Google, Amazon, Cisco and Microsoft how they'd feel about two
candidates, both 24, with equal ability, one who'd tried to start a startup
that tanked, and another who'd spent the two years since college working as a
developer at a big company. Every one responded that they'd prefer the guy
who'd tried to start his own company. Zod Nazem, who's in charge of
engineering at Yahoo, said:
> I actually put more value on the guy with the failed startup. And you can
> quote me!
So there you have it. Want to get hired by Yahoo? Start your own company.
**The Man is the Customer**
If even big employers think highly of young hackers who start companies, why
don't more do it? Why are undergrads so conservative? I think it's because
they've spent so much time in institutions.
The first twenty years of everyone's life consists of being piped from one
institution to another. You probably didn't have much choice about the
secondary schools you went to. And after high school it was probably
understood that you were supposed to go to college. You may have had a few
different colleges to choose between, but they were probably pretty similar.
So by this point you've been riding on a subway line for twenty years, and the
next stop seems to be a job.
Actually college is where the line ends. Superficially, going to work for a
company may feel like just the next in a series of institutions, but
underneath, everything is different. The end of school is the fulcrum of your
life, the point where you go from net consumer to net producer.
The other big change is that now, you're steering. You can go anywhere you
want. So it may be worth standing back and understanding what's going on,
instead of just doing the default thing.
All through college, and probably long before that, most undergrads have been
thinking about what employers want. But what really matters is what customers
want, because they're the ones who give employers the money to pay you.
So instead of thinking about what employers want, you're probably better off
thinking directly about what users want. To the extent there's any difference
between the two, you can even use that to your advantage if you start a
company of your own. For example, big companies like docile conformists. But
this is merely an artifact of their bigness, not something customers need.
**Grad School**
I didn't consciously realize all this when I was graduating from college--
partly because I went straight to grad school. Grad school can be a pretty
good deal, even if you think of one day starting a startup. You can start one
when you're done, or even pull the ripcord part way through, like the founders
of Yahoo and Google.
Grad school makes a good launch pad for startups, because you're collected
together with a lot of smart people, and you have bigger chunks of time to
work on your own projects than an undergrad or corporate employee would. As
long as you have a fairly tolerant advisor, you can take your time developing
an idea before turning it into a company. David Filo and Jerry Yang started
the Yahoo directory in February 1994 and were getting a million hits a day by
the fall, but they didn't actually drop out of grad school and start a company
till March 1995.
You could also try the startup first, and if it doesn't work, then go to grad
school. When startups tank they usually do it fairly quickly. Within a year
you'll know if you're wasting your time.
If it fails, that is. If it succeeds, you may have to delay grad school a
little longer. But you'll have a much more enjoyable life once there than you
would on a regular grad student stipend.
**Experience**
Another reason people in their early twenties don't start startups is that
they feel they don't have enough experience. Most investors feel the same.
I remember hearing a lot of that word "experience" when I was in college. What
do people really mean by it? Obviously it's not the experience itself that's
valuable, but something it changes in your brain. What's different about your
brain after you have "experience," and can you make that change happen faster?
I now have some data on this, and I can tell you what tends to be missing when
people lack experience. I've said that every startup needs three things: to
start with good people, to make something users want, and not to spend too
much money. It's the middle one you get wrong when you're inexperienced. There
are plenty of undergrads with enough technical skill to write good software,
and undergrads are not especially prone to waste money. If they get something
wrong, it's usually not realizing they have to make something people want.
This is not exclusively a failing of the young. It's common for startup
founders of all ages to build things no one wants.
Fortunately, this flaw should be easy to fix. If undergrads were all bad
programmers, the problem would be a lot harder. It can take years to learn how
to program. But I don't think it takes years to learn how to make things
people want. My hypothesis is that all you have to do is smack hackers on the
side of the head and tell them: Wake up. Don't sit here making up a priori
theories about what users need. Go find some users and see what they need.
Most successful startups not only do something very specific, but solve a
problem people already know they have.
The big change that "experience" causes in your brain is learning that you
need to solve people's problems. Once you grasp that, you advance quickly to
the next step, which is figuring out what those problems are. And that takes
some effort, because the way software actually gets used, especially by the
people who pay the most for it, is not at all what you might expect. For
example, the stated purpose of Powerpoint is to present ideas. Its real role
is to overcome people's fear of public speaking. It allows you to give an
impressive-looking talk about nothing, and it causes the audience to sit in a
dark room looking at slides, instead of a bright one looking at you.
This kind of thing is out there for anyone to see. The key is to know to look
for it-- to realize that having an idea for a startup is not like having an
idea for a class project. The goal in a startup is not to write a cool piece
of software. It's to make something people want. And to do that you have to
look at users-- forget about hacking, and just look at users. This can be
quite a mental adjustment, because little if any of the software you write in
school even has users.
A few steps before a Rubik's Cube is solved, it still looks like a mess. I
think there are a lot of undergrads whose brains are in a similar position:
they're only a few steps away from being able to start successful startups, if
they wanted to, but they don't realize it. They have more than enough
technical skill. They just haven't realized yet that the way to create wealth
is to make what users want, and that employers are just proxies for users in
which risk is pooled.
If you're young and smart, you don't need either of those. You don't need
someone else to tell you what users want, because you can figure it out
yourself. And you don't want to pool risk, because the younger you are, the
more risk you should take.
**A Public Service Message**
I'd like to conclude with a joint message from me and your parents. Don't drop
out of college to start a startup. There's no rush. There will be plenty of
time to start companies after you graduate. In fact, it may be just as well to
go work for an existing company for a couple years after you graduate, to
learn how companies work.
And yet, when I think about it, I can't imagine telling Bill Gates at 19 that
he should wait till he graduated to start a company. He'd have told me to get
lost. And could I have honestly claimed that he was harming his future-- that
he was learning less by working at ground zero of the microcomputer revolution
than he would have if he'd been taking classes back at Harvard? No, probably
not.
And yes, while it is probably true that you'll learn some valuable things by
going to work for an existing company for a couple years before starting your
own, you'd learn a thing or two running your own company during that time too.
The advice about going to work for someone else would get an even colder
reception from the 19 year old Bill Gates. So I'm supposed to finish college,
then go work for another company for two years, and then I can start my own? I
have to wait till I'm 23? That's _four years_. That's more than twenty percent
of my life so far. Plus in four years it will be way too late to make money
writing a Basic interpreter for the Altair.
And he'd be right. The Apple II was launched just two years later. In fact, if
Bill had finished college and gone to work for another company as we're
suggesting, he might well have gone to work for Apple. And while that would
probably have been better for all of us, it wouldn't have been better for him.
So while I stand by our responsible advice to finish college and then go work
for a while before starting a startup, I have to admit it's one of those
things the old tell the young, but don't expect them to listen to. We say this
sort of thing mainly so we can claim we warned you. So don't say I didn't warn
you.
** |
|
August 2007
A good programmer working intensively on his own code can hold it in his mind
the way a mathematician holds a problem he's working on. Mathematicians don't
answer questions by working them out on paper the way schoolchildren are
taught to. They do more in their heads: they try to understand a problem space
well enough that they can walk around it the way you can walk around the
memory of the house you grew up in. At its best programming is the same. You
hold the whole program in your head, and you can manipulate it at will.
That's particularly valuable at the start of a project, because initially the
most important thing is to be able to change what you're doing. Not just to
solve the problem in a different way, but to change the problem you're
solving.
Your code is your understanding of the problem you're exploring. So it's only
when you have your code in your head that you really understand the problem.
It's not easy to get a program into your head. If you leave a project for a
few months, it can take days to really understand it again when you return to
it. Even when you're actively working on a program it can take half an hour to
load into your head when you start work each day. And that's in the best case.
Ordinary programmers working in typical office conditions never enter this
mode. Or to put it more dramatically, ordinary programmers working in typical
office conditions never really understand the problems they're solving.
Even the best programmers don't always have the whole program they're working
on loaded into their heads. But there are things you can do to help:
1. **Avoid distractions.** Distractions are bad for many types of work, but especially bad for programming, because programmers tend to operate at the limit of the detail they can handle.
The danger of a distraction depends not on how long it is, but on how much it
scrambles your brain. A programmer can leave the office and go and get a
sandwich without losing the code in his head. But the wrong kind of
interruption can wipe your brain in 30 seconds.
Oddly enough, scheduled distractions may be worse than unscheduled ones. If
you know you have a meeting in an hour, you don't even start working on
something hard.
2. **Work in long stretches.** Since there's a fixed cost each time you start working on a program, it's more efficient to work in a few long sessions than many short ones. There will of course come a point where you get stupid because you're tired. This varies from person to person. I've heard of people hacking for 36 hours straight, but the most I've ever been able to manage is about 18, and I work best in chunks of no more than 12.
The optimum is not the limit you can physically endure. There's an advantage
as well as a cost of breaking up a project. Sometimes when you return to a
problem after a rest, you find your unconscious mind has left an answer
waiting for you.
3. **Use succinct languages.** More powerful programming languages make programs shorter. And programmers seem to think of programs at least partially in the language they're using to write them. The more succinct the language, the shorter the program, and the easier it is to load and keep in your head.
You can magnify the effect of a powerful language by using a style called
bottom-up programming, where you write programs in multiple layers, the lower
ones acting as programming languages for those above. If you do this right,
you only have to keep the topmost layer in your head. (There's a small sketch
of this layering after the list.)
4. **Keep rewriting your program.** Rewriting a program often yields a cleaner design. But it would have advantages even if it didn't: you have to understand a program completely to rewrite it, so there is no better way to get one loaded into your head.
5. **Write rereadable code.** All programmers know it's good to write readable code. But you yourself are the most important reader. Especially in the beginning; a prototype is a conversation with yourself. And when writing for yourself you have different priorities. If you're writing for other people, you may not want to make code too dense. Some parts of a program may be easiest to read if you spread things out, like an introductory textbook. Whereas if you're writing code to make it easy to reload into your head, it may be best to go for brevity.
6. **Work in small groups.** When you manipulate a program in your head, your vision tends to stop at the edge of the code you own. Other parts you don't understand as well, and more importantly, can't take liberties with. So the smaller the number of programmers, the more completely a project can mutate. If there's just one programmer, as there often is at first, you can do all-encompassing redesigns.
7. **Don't have multiple people editing the same piece of code.** You never understand other people's code as well as your own. No matter how thoroughly you've read it, you've only read it, not written it. So if a piece of code is written by multiple authors, none of them understand it as well as a single author would.
And of course you can't safely redesign something other people are working on.
It's not just that you'd have to ask permission. You don't even let yourself
think of such things. Redesigning code with several authors is like changing
laws; redesigning code you alone control is like seeing the other
interpretation of an ambiguous image.
If you want to put several people to work on a project, divide it into
components and give each to one person.
8. **Start small.** A program gets easier to hold in your head as you become familiar with it. You can start to treat parts as black boxes once you feel confident you've fully explored them. But when you first start working on a project, you're forced to see everything. If you start with too big a problem, you may never quite be able to encompass it. So if you need to write a big, complex program, the best way to begin may not be to write a spec for it, but to write a prototype that solves a subset of the problem. Whatever the advantages of planning, they're often outweighed by the advantages of being able to keep a program in your head.
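Here is the small sketch of point 3's layering promised above. The domain (report formatting) is invented; what matters is the shape: each layer acts as a little language for the one above it, so only the top layer has to stay in your head.

```typescript
// Layer 1: general-purpose string utilities.
const indent = (s: string, n: number): string =>
  s.split("\n").map(line => " ".repeat(n) + line).join("\n");

// Layer 2: a tiny vocabulary for documents, built on layer 1.
const heading = (title: string): string => title.toUpperCase() + "\n";
const section = (title: string, body: string): string =>
  heading(title) + indent(body, 2);

// Layer 3: the program proper, written almost entirely in layer 2's
// vocabulary. This is the only layer you have to keep in your head.
const report =
  section("Summary", "All tests passing.") + "\n" +
  section("Next", "Profile the indexer.");

console.log(report);
```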
It's striking how often programmers manage to hit all eight points by
accident. Someone has an idea for a new project, but because it's not
officially sanctioned, he has to do it in off hours—which turn out to be more
productive because there are no distractions. Driven by his enthusiasm for the
new project he works on it for many hours at a stretch. Because it's initially
just an experiment, instead of a "production" language he uses a mere
"scripting" language—which is in fact far more powerful. He completely
rewrites the program several times; that wouldn't be justifiable for an
official project, but this is a labor of love and he wants it to be perfect.
And since no one is going to see it except him, he omits any comments except
the note-to-self variety. He works in a small group perforce, because he
either hasn't told anyone else about the idea yet, or it seems so unpromising
that no one else is allowed to work on it. Even if there is a group, they
couldn't have multiple people editing the same code, because it changes too
fast for that to be possible. And the project starts small because the idea
_is_ small at first; he just has some cool hack he wants to try out.
Even more striking are the number of officially sanctioned projects that
manage to do _all eight things wrong_. In fact, if you look at the way
software gets written in most organizations, it's almost as if they were
deliberately trying to do things wrong. In a sense, they are. One of the
defining qualities of organizations since there have been such things is to
treat individuals as interchangeable parts. This works well for more
parallelizable tasks, like fighting wars. For most of history a well-drilled
army of professional soldiers could be counted on to beat an army of
individual warriors, no matter how valorous. But having ideas is not very
parallelizable. And that's what programs are: ideas.
It's not merely true that organizations dislike the idea of depending on
individual genius, it's a tautology. It's part of the definition of an
organization not to. Of our current concept of an organization, at least.
Maybe we could define a new kind of organization that combined the efforts of
individuals without requiring them to be interchangeable. Arguably a market is
such a form of organization, though it may be more accurate to describe a
market as a degenerate case—as what you get by default when organization isn't
possible.
Probably the best we'll do is some kind of hack, like making the programming
parts of an organization work differently from the rest. Perhaps the optimal
solution is for big companies not even to try to develop ideas in house, but
simply to buy them. But regardless of what the solution turns out to be, the
first step is to realize there's a problem. There is a contradiction in the
very phrase "software company." The two words are pulling in opposite
directions. Any good programmer in a large organization is going to be at odds
with it, because organizations are designed to prevent what programmers strive
for.
Good programmers manage to get a lot done anyway. But often it requires
practically an act of rebellion against the organizations that employ them.
Perhaps it will help if more people understand that the way programmers behave
is driven by the demands of the work they do. It's not because they're
irresponsible that they work in long binges during which they blow off all
other obligations, plunge straight into programming instead of writing specs
first, and rewrite code that already works. It's not because they're
unfriendly that they prefer to work alone, or growl at people who pop their
head in the door to say hello. This apparently random collection of annoying
habits has a single explanation: the power of holding a program in one's head.
Whether or not understanding this can help large organizations, it can
certainly help their competitors. The weakest point in big companies is that
they don't let individual programmers do great work. So if you're a little
startup, this is the place to attack them. Take on the kind of problems that
have to be solved in one big brain.
**Thanks** to Sam Altman, David Greenspan, Aaron Iba, Jessica Livingston,
Robert Morris, Peter Norvig, Lisa Randall, Emmett Shear, Sergei Tsarev, and
Stephen Wolfram for reading drafts of this.
---
April 2007
A few days ago I suddenly realized Microsoft was dead. I was talking to a
young startup founder about how Google was different from Yahoo. I said that
Yahoo had been warped from the start by their fear of Microsoft. That was why
they'd positioned themselves as a "media company" instead of a technology
company. Then I looked at his face and realized he didn't understand. It was
as if I'd told him how much girls liked Barry Manilow in the mid 80s. Barry
who?
Microsoft? He didn't say anything, but I could tell he didn't quite believe
anyone would be frightened of them.
Microsoft cast a shadow over the software world for almost 20 years starting
in the late 80s. I can remember when it was IBM before them. I mostly ignored
this shadow. I never used Microsoft software, so it only affected me
indirectly—for example, in the spam I got from botnets. And because I wasn't
paying attention, I didn't notice when the shadow disappeared.
But it's gone now. I can sense that. No one is even afraid of Microsoft
anymore. They still make a lot of money—so does IBM, for that matter. But
they're not dangerous.
When did Microsoft die, and of what? I know they seemed dangerous as late as
2001, because I wrote an essay then about how they were less dangerous than
they seemed. I'd guess they were dead by 2005. I know when we started Y
Combinator we didn't worry about Microsoft as competition for the startups we
funded. In fact, we've never even invited them to the demo days we organize
for startups to present to investors. We invite Yahoo and Google and some
other Internet companies, but we've never bothered to invite Microsoft. Nor
has anyone there ever even sent us an email. They're in a different world.
What killed them? Four things, I think, all of them occurring simultaneously
in the mid 2000s.
The most obvious is Google. There can only be one big man in town, and they're
clearly it. Google is the most dangerous company now by far, in both the good
and bad senses of the word. Microsoft can at best limp along afterward.
When did Google take the lead? There will be a tendency to push it back to
their IPO in August 2004, but they weren't setting the terms of the debate
then. I'd say they took the lead in 2005. Gmail was one of the things that
put them over the edge. Gmail showed they could do more than search.
Gmail also showed how much you could do with web-based software, if you took
advantage of what later came to be called "Ajax." And that was the second
cause of Microsoft's death: everyone can see the desktop is over. It now seems
inevitable that applications will live on the web—not just email, but
everything, right up to Photoshop. Even Microsoft sees that now.
Ironically, Microsoft unintentionally helped create Ajax. The x in Ajax is
from the XMLHttpRequest object, which lets the browser communicate with the
server in the background while displaying a page. (Originally the only way to
communicate with the server was to ask for a new page.) XMLHttpRequest was
created by Microsoft in the late 90s because they needed it for Outlook. What
they didn't realize was that it would be useful to a lot of other people
too—in fact, to anyone who wanted to make web apps work like desktop ones.
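For anyone who never saw the pre-Ajax web, here is a minimal sketch of what XMLHttpRequest made possible; the endpoint and element id are hypothetical:

```typescript
// Ask the server for data in the background, then update the page in
// place, with no full reload. "/api/messages" and "inbox" are made up.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/messages", true); // true = asynchronous
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4 && xhr.status === 200) { // 4 = request done
    const inbox = document.getElementById("inbox");
    if (inbox) inbox.textContent = xhr.responseText;
  }
};
xhr.send();
```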
The other critical component of Ajax is Javascript, the programming language
that runs in the browser. Microsoft saw the danger of Javascript and tried to
keep it broken for as long as they could. But eventually the open source
world won, by producing Javascript libraries that grew over the brokenness of
Explorer the way a tree grows over barbed wire.
The third cause of Microsoft's death was broadband Internet. Anyone who cares
can have fast Internet access now. And the bigger the pipe to the server, the
less you need the desktop.
The last nail in the coffin came, of all places, from Apple. Thanks to OS X,
Apple has come back from the dead in a way that is extremely rare in
technology. Their victory is so complete that I'm now surprised when I
come across a computer running Windows. Nearly all the people we fund at Y
Combinator use Apple laptops. It was the same in the audience at startup
school. All the computer people use Macs or Linux now. Windows is for
grandmas, like Macs used to be in the 90s. So not only does the desktop no
longer matter, no one who cares about computers uses Microsoft's anyway.
And of course Apple has Microsoft on the run in music too, with TV and phones
on the way.
I'm glad Microsoft is dead. They were like Nero or Commodus—evil in the way
only inherited power can make you. Because remember, the Microsoft monopoly
didn't begin with Microsoft. They got it from IBM. The software business was
overhung by a monopoly from about the mid-1950s to about 2005. For practically
its whole existence, that is. One of the reasons "Web 2.0" has such an air of
euphoria about it is the feeling, conscious or not, that this era of monopoly
may finally be over.
Of course, as a hacker I can't help thinking about how something broken could
be fixed. Is there some way Microsoft could come back? In principle, yes. To
see how, envision two things: (a) the amount of cash Microsoft now has on
hand, and (b) Larry and Sergey making the rounds of all the search engines ten
years ago trying to sell the idea for Google for a million dollars, and being
turned down by everyone.
The surprising fact is, brilliant hackers—dangerously brilliant hackers—can be
had very cheaply, by the standards of a company as rich as Microsoft. They
can't hire smart people anymore, but they could buy as many as they wanted for
only an order of magnitude more. So if they wanted to be a contender again,
this is how they could do it:
1. Buy all the good "Web 2.0" startups. They could get substantially all of them for less than they'd have to pay for Facebook.
2. Put them all in a building in Silicon Valley, surrounded by lead shielding to protect them from any contact with Redmond.
I feel safe suggesting this, because they'd never do it. Microsoft's biggest
weakness is that they still don't realize how much they suck. They still think
they can write software in house. Maybe they can, by the standards of the
desktop world. But that world ended a few years ago.
I already know what the reaction to this essay will be. Half the readers will
say that Microsoft is still an enormously profitable company, and that I
should be more careful about drawing conclusions based on what a few people
think in our insular little "Web 2.0" bubble. The other half, the younger
half, will complain that this is old news.
---
March 2024
Despite its title this isn't meant to be the best essay. My goal here is to
figure out what the best essay would be like.
It would be well-written, but you can write well about any topic. What made it
special would be what it was about.
Obviously some topics would be better than others. It probably wouldn't be
about this year's lipstick colors. But it wouldn't be vaporous talk about
elevated themes either. A good essay has to be surprising. It has to tell
people something they don't already know.
The best essay would be on the most important topic you could tell people
something surprising about.
That may sound obvious, but it has some unexpected consequences. One is that
science enters the picture like an elephant stepping into a rowboat. For
example, Darwin first described the idea of natural selection in an essay
written in 1844. Talk about an important topic you could tell people something
surprising about. If that's the test of a great essay, this was surely the
best one written in 1844. And indeed, the best possible essay at any given
time would usually be one describing the most important scientific or
technological discovery it was possible to make.
Another unexpected consequence: I imagined when I started writing this that
the best essay would be fairly timeless — that the best essay you could write
in 1844 would be much the same as the best one you could write now. But in
fact the opposite seems to be true. It might be true that the best painting
would be timeless in this sense. But it wouldn't be impressive to write an
essay introducing natural selection now. The best essay _now_ would be one
describing a great discovery we didn't yet know about.
If the question of how to write the best possible essay reduces to the
question of how to make great discoveries, then I started with the wrong
question. Perhaps what this exercise shows is that we shouldn't waste our time
writing essays but instead focus on making discoveries in some specific
domain. But I'm interested in essays and what can be done with them, so I want
to see if there's some other question I could have asked.
There is, and on the face of it, it seems almost identical to the one I
started with. Instead of asking _what would the best essay be?_ I should have
asked _how do you write essays well?_ Though these seem only phrasing apart,
their answers diverge. The answer to the first question, as we've seen, isn't
really about essay writing. The second question forces it to be.
Writing essays, at its best, is a way of discovering ideas. How do you do that
well? How do you discover by writing?
An essay should ordinarily start with what I'm going to call a question,
though I mean this in a very general sense: it doesn't have to be a question
grammatically, just something that acts like one in the sense that it spurs
some response.
How do you get this initial question? It probably won't work to choose some
important-sounding topic at random and go at it. Professional traders won't
even trade unless they have what they call an _edge_ — a convincing story
about why in some class of trades they'll win more than they lose. Similarly,
you shouldn't attack a topic unless you have a way in — some new insight about
it or way of approaching it.
You don't need to have a complete thesis; you just need some kind of gap you
can explore. In fact, merely having questions about something other people
take for granted can be edge enough.
If you come across a question that's sufficiently puzzling, it could be worth
exploring even if it doesn't seem very momentous. Many an important discovery
has been made by pulling on a thread that seemed insignificant at first. How
can they _all_ be finches?
Once you've got a question, then what? You start thinking out loud about it.
Not literally out loud, but you commit to a specific string of words in
response, as you would if you were talking. This initial response is usually
mistaken or incomplete. Writing converts your ideas from vague to bad. But
that's a step forward, because once you can see the brokenness, you can fix
it.
Perhaps beginning writers are alarmed at the thought of starting with
something mistaken or incomplete, but you shouldn't be, because this is why
essay writing works. Forcing yourself to commit to some specific string of
words gives you a starting point, and if it's wrong, you'll see that when you
reread it. At least half of essay writing is rereading what you've written and
asking _is this correct and complete?_ You have to be very strict when
rereading, not just because you want to keep yourself honest, but because a
gap between your response and the truth is often a sign of new ideas to be
discovered.
The prize for being strict with what you've written is not just refinement.
When you take a roughly correct answer and try to make it exactly right,
sometimes you find that you can't, and that the reason is that you were
depending on a false assumption. And when you discard it, the answer turns out
to be completely different.
Ideally the response to a question is two things: the first step in a process
that converges on the truth, and a source of additional questions (in my very
general sense of the word). So the process continues recursively, as response
spurs response.
Usually there are several possible responses to a question, which means you're
traversing a tree. But essays are linear, not tree-shaped, which means you
have to choose one branch to follow at each point. How do you choose? Usually
you should follow whichever offers the greatest combination of generality and
novelty. I don't consciously rank branches this way; I just follow whichever
seems most exciting; but generality and novelty are what make a branch
exciting.
If you're willing to do a lot of rewriting, you don't have to guess right. You
can follow a branch and see how it turns out, and if it isn't good enough, cut
it and backtrack. I do this all the time. In this essay I've already cut a
17-paragraph subtree, in addition to countless shorter ones. Maybe I'll
reattach it at the end, or boil it down to a footnote, or spin it off as its
own essay; we'll see.
In general you want to be quick to cut. One of the most dangerous temptations
in writing (and in software and painting) is to keep something that isn't
right, just because it contains a few good bits or cost you a lot of effort.
The most surprising new question being thrown off at this point is _does it
really matter what the initial question is?_ If the space of ideas is highly
connected, it shouldn't, because you should be able to get from any question
to the most valuable ones in a few hops. And we see evidence that it's highly
connected in the way, for example, that people who are obsessed with some
topic can turn any conversation toward it. But that only works if you know
where you want to go, and you don't in an essay. That's the whole point. You
don't want to be the obsessive conversationalist, or all your essays will be
about the same thing.
The other reason the initial question matters is that you usually feel
somewhat obliged to stick to it. I don't think about this when I decide which
branch to follow. I just follow novelty and generality. Sticking to the
question is enforced later, when I notice I've wandered too far and have to
backtrack. But I think this is the optimal solution. You don't want the hunt
for novelty and generality to be constrained in the moment. Go with it and see
what you get.
Since the initial question does constrain you, in the best case it sets an
upper bound on the quality of essay you'll write. If you do as well as you
possibly can on the chain of thoughts that follow from the initial question,
the initial question itself is the only place where there's room for
variation.
It would be a mistake to let this make you too conservative though, because
you can't predict where a question will lead. Not if you're doing things
right, because doing things right means making discoveries, and by definition
you can't predict those. So the way to respond to this situation is not to be
cautious about which initial question you choose, but to write a lot of
essays. Essays are for taking risks.
Almost any question can get you a good essay. Indeed, it took some effort to
think of a sufficiently unpromising topic in the third paragraph, because any
essayist's first impulse on hearing that the best essay couldn't be about x
would be to try to write it. But if most questions yield good essays, only
some yield great ones.
Can we predict which questions will yield great essays? Considering how long
I've been writing essays, it's alarming how novel that question feels.
One thing I like in an initial question is outrageousness. I love questions
that seem naughty in some way — for example, by seeming counterintuitive or
overambitious or heterodox. Ideally all three. This essay is an example.
Writing about the best essay implies there is such a thing, which pseudo-
intellectuals will dismiss as reductive, though it follows necessarily from
the possibility of one essay being better than another. And thinking about how
to do something so ambitious is close enough to doing it that it holds your
attention.
I like to start an essay with a gleam in my eye. This could be just a taste of
mine, but there's one aspect of it that probably isn't: to write a really good
essay on some topic, you have to be interested in it. A good writer can write
well about anything, but to stretch for the novel insights that are the raison
d'etre of the essay, you have to care.
If caring about it is one of the criteria for a good initial question, then
the optimal question varies from person to person. It also means you're more
likely to write great essays if you care about a lot of different things. The
more curious you are, the greater the probable overlap between the set of
things you're curious about and the set of topics that yield great essays.
What other qualities would a great initial question have? It's probably good
if it has implications in a lot of different areas. And I find it's a good
sign if it's one that people think has already been thoroughly explored. But
the truth is that I've barely thought about how to choose initial questions,
because I rarely do it. I rarely _choose_ what to write about; I just start
thinking about something, and sometimes it turns into an essay.
Am I going to stop writing essays about whatever I happen to be thinking about
and instead start working my way through some systematically generated list of
topics? That doesn't sound like much fun. And yet I want to write good essays,
and if the initial question matters, I should care about it.
Perhaps the answer is to go one step earlier: to write about whatever pops
into your head, but try to ensure that what pops into your head is good.
Indeed, now that I think about it, this has to be the answer, because a mere
list of topics wouldn't be any use if you didn't have edge with any of them.
To start writing an essay, you need a topic plus some initial insight about
it, and you can't generate those systematically. If only.
You can probably cause yourself to have more of them, though. The quality of
the ideas that come out of your head depends on what goes in, and you can
improve that in two dimensions, breadth and depth.
You can't learn everything, so getting breadth implies learning about topics
that are very different from one another. When I tell people about my book-
buying trips to Hay and they ask what I buy books about, I usually feel a bit
sheepish answering, because the topics seem like a laundry list of unrelated
subjects. But perhaps that's actually optimal in this business.
You can also get ideas by talking to people, by doing and building things, and
by going places and seeing things. I don't think it's important to talk to new
people so much as the sort of people who make you have new ideas. I get more
new ideas after talking for an afternoon with Robert Morris than from talking
to 20 new smart people. I know because that's what a block of office hours at
Y Combinator consists of.
While breadth comes from reading and talking and seeing, depth comes from
doing. The way to really learn about some domain is to have to solve problems
in it. Though this could take the form of writing, I suspect that to be a good
essayist you also have to do, or have done, some other kind of work. That may
not be true for most other fields, but essay writing is different. You could
spend half your time working on something else and be net ahead, so long as it
was hard.
I'm not proposing that as a recipe so much as an encouragement to those
already doing it. If you've spent all your life so far working on other
things, you're already halfway there. Though of course to be good at writing
you have to like it, and if you like writing you'd probably have spent at
least some time doing it.
Everything I've said about initial questions applies also to the questions you
encounter in writing the essay. They're the same thing; every subtree of an
essay is usually a shorter essay, just as every subtree of a Calder mobile is
a smaller mobile. So any technique that gets you good initial questions also
gets you good whole essays.
At some point the cycle of question and response reaches what feels like a
natural end. Which is a little suspicious; shouldn't every answer suggest more
questions? I think what happens is that you start to feel sated. Once you've
covered enough interesting ground, you start to lose your appetite for new
questions. Which is just as well, because the reader is probably feeling sated
too. And it's not lazy to stop asking questions, because you could instead be
asking the initial question of a new essay.
That's the ultimate source of drag on the connectedness of ideas: the
discoveries you make along the way. If you discover enough starting from
question A, you'll never make it to question B. Though if you keep writing
essays you'll gradually fix this problem by burning off such discoveries. So
bizarrely enough, writing lots of essays makes it as if the space of ideas
were more highly connected.
When a subtree comes to an end, you can do one of two things. You can either
stop, or pull the Cubist trick of laying separate subtrees end to end by
returning to a question you skipped earlier. Usually it requires some sleight
of hand to make the essay flow continuously at this point, but not this time.
This time I actually need an example of the phenomenon. For example, we
discovered earlier that the best possible essay wouldn't usually be timeless
in the way the best painting would. This seems surprising enough to be worth
investigating further.
There are two senses in which an essay can be timeless: to be about a matter
of permanent importance, and always to have the same effect on readers. With
art these two senses blend together. Art that looked beautiful to the ancient
Greeks still looks beautiful to us. But with essays the two senses diverge,
because essays teach, and you can't teach people something they already know.
Natural selection is certainly a matter of permanent importance, but an essay
explaining it couldn't have the same effect on us that it would have had on
Darwin's contemporaries, precisely because his ideas were so successful that
everyone already knows about them.
I imagined when I started writing this that the best possible essay would be
timeless in the stricter, evergreen sense: that it would contain some deep,
timeless wisdom that would appeal equally to Aristotle and Feynman. That
doesn't seem to be true. But if the best possible essay wouldn't usually be
timeless in this stricter sense, what would it take to write essays that were?
The answer to that turns out to be very strange: to be the evergreen kind of
timeless, an essay has to be ineffective, in the sense that its discoveries
aren't assimilated into our shared culture. Otherwise there will be nothing
new in it for the second generation of readers. If you want to surprise
readers not just now but in the future as well, you have to write essays that
won't stick — essays that, no matter how good they are, won't become part of
what people in the future learn before they read them.
I can imagine several ways to do that. One would be to write about things
people never learn. For example, it's a long-established pattern for ambitious
people to chase after various types of prizes, and only later, perhaps too
late, to realize that some of them weren't worth as much as they thought. If
you write about that, you can be confident of a conveyor belt of future
readers to be surprised by it.
Ditto if you write about the tendency of the inexperienced to overdo things —
of young engineers to produce overcomplicated solutions, for example. There
are some kinds of mistakes people never learn to avoid except by making them.
Any of those should be a timeless topic.
Sometimes when we're slow to grasp things it's not just because we're obtuse
or in denial but because we've been deliberately lied to. There are a lot of
things adults _lie_ to kids about, and when you reach adulthood, they don't
take you aside and hand you a list of them. They don't remember which lies
they told you, and most were implicit anyway. So contradicting such lies will
be a source of surprises for as long as adults keep telling them.
Sometimes it's systems that lie to you. For example, the educational systems
in most countries train you to win by _hacking the test_. But that's not how
you win at the most important real-world tests, and after decades of training,
this is hard for new arrivals in the real world to grasp. Helping them
overcome such institutional lies will work as long as the institutions remain
broken.
Another recipe for timelessness is to write about things readers already know,
but in much more detail than can be transmitted culturally. "Everyone knows,"
for example, that it can be rewarding to have _kids_. But till you have them
you don't know precisely what forms that takes, and even then much of what you
know you may never have put into words.
I've written about all these kinds of topics. But I didn't do it in a
deliberate attempt to write essays that were timeless in the stricter sense.
And indeed, the fact that this depends on one's ideas not sticking suggests
that it's not worth making a deliberate attempt to. You should write about
topics of timeless importance, yes, but if you do such a good job that your
conclusions stick and future generations find your essay obvious instead of
novel, so much the better. You've crossed into Darwin territory.
Writing about topics of timeless importance is an instance of something even
more general, though: breadth of applicability. And there are more kinds of
breadth than chronological — applying to lots of different fields, for
example. So breadth is the ultimate aim.
I already aim for it. Breadth and novelty are the two things I'm always
chasing. But I'm glad I understand where timelessness fits.
I understand better where a lot of things fit now. This essay has been a kind
of tour of essay writing. I started out hoping to get advice about topics; if
you assume good writing, the only thing left to differentiate the best essay
is its topic. And I did get advice about topics: discover natural selection.
Yeah, that would be nice. But when you step back and ask what's the best you
can do short of making some great discovery like that, the answer turns out to
be about procedure. Ultimately the quality of an essay is a function of the
ideas discovered in it, and the way you get them is by casting a wide net for
questions and then being very exacting with the answers.
The most striking feature of this map of essay writing is the alternating
stripes of inspiration and effort required. The questions depend on
inspiration, but the answers can be got by sheer persistence. You don't have
to get an answer right the first time, but there's no excuse for not getting
it right eventually, because you can keep rewriting till you do. And this is
not just a theoretical possibility. It's a pretty accurate description of the
way I work. I'm rewriting as we speak.
But although I wish I could say that writing great essays depends mostly on
effort, in the limit case it's inspiration that makes the difference. In the
limit case, the questions are the harder thing to get. That pool has no
bottom.
How to get more questions? That is the most important question of all.
February 2021
Before college the two main things I worked on, outside of school, were
writing and programming. I didn't write essays. I wrote what beginning writers
were supposed to write then, and probably still are: short stories. My stories
were awful. They had hardly any plot, just characters with strong feelings,
which I imagined made them deep.
The first programs I tried writing were on the IBM 1401 that our school
district used for what was then called "data processing." This was in 9th
grade, so I was 13 or 14. The school district's 1401 happened to be in the
basement of our junior high school, and my friend Rich Draves and I got
permission to use it. It was like a mini Bond villain's lair down there, with
all these alien-looking machines (CPU, disk drives, printer, card reader)
sitting up on a raised floor under bright fluorescent lights.
The language we used was an early version of Fortran. You had to type programs
on punch cards, then stack them in the card reader and press a button to load
the program into memory and run it. The result would ordinarily be to print
something on the spectacularly loud printer.
I was puzzled by the 1401. I couldn't figure out what to do with it. And in
retrospect there's not much I could have done with it. The only form of input
to programs was data stored on punched cards, and I didn't have any data
stored on punched cards. The only other option was to do things that didn't
rely on any input, like calculate approximations of pi, but I didn't know
enough math to do anything interesting of that type. So I'm not surprised I
can't remember any programs I wrote, because they can't have done much. My
clearest memory is of the moment I learned it was possible for programs not to
terminate, when one of mine didn't. On a machine without time-sharing, this
was a social as well as a technical error, as the data center manager's
expression made clear.
With microcomputers, everything changed. Now you could have a computer sitting
right in front of you, on a desk, that could respond to your keystrokes as it
was running instead of just churning through a stack of punch cards and then
stopping.
The first of my friends to get a microcomputer built it himself. It was sold
as a kit by Heathkit. I remember vividly how impressed and envious I felt
watching him sitting in front of it, typing programs right into the computer.
Computers were expensive in those days and it took me years of nagging before
I convinced my father to buy one, a TRS-80, in about 1980. The gold standard
then was the Apple II, but a TRS-80 was good enough. This was when I really
started programming. I wrote simple games, a program to predict how high my
model rockets would fly, and a word processor that my father used to write at
least one book. There was only room in memory for about 2 pages of text, so
he'd write 2 pages at a time and then print them out, but it was a lot better
than a typewriter.
Though I liked programming, I didn't plan to study it in college. In college I
was going to study philosophy, which sounded much more powerful. It seemed, to
my naive high school self, to be the study of the ultimate truths, compared to
which the things studied in other fields would be mere domain knowledge. What
I discovered when I got to college was that the other fields took up so much
of the space of ideas that there wasn't much left for these supposed ultimate
truths. All that seemed left for philosophy were edge cases that people in
other fields felt could safely be ignored.
I couldn't have put this into words when I was 18. All I knew at the time was
that I kept taking philosophy courses and they kept being boring. So I decided
to switch to AI.
AI was in the air in the mid 1980s, but there were two things especially that
made me want to work on it: a novel by Heinlein called _The Moon is a Harsh
Mistress_ , which featured an intelligent computer called Mike, and a PBS
documentary that showed Terry Winograd using SHRDLU. I haven't tried rereading
_The Moon is a Harsh Mistress_ , so I don't know how well it has aged, but
when I read it I was drawn entirely into its world. It seemed only a matter of
time before we'd have Mike, and when I saw Winograd using SHRDLU, it seemed
like that time would be a few years at most. All you had to do was teach
SHRDLU more words.
There weren't any classes in AI at Cornell then, not even graduate classes, so
I started trying to teach myself. Which meant learning Lisp, since in those
days Lisp was regarded as the language of AI. The commonly used programming
languages then were pretty primitive, and programmers' ideas correspondingly
so. The default language at Cornell was a Pascal-like language called PL/I,
and the situation was similar elsewhere. Learning Lisp expanded my concept of
a program so fast that it was years before I started to have a sense of where
the new limits were. This was more like it; this was what I had expected
college to do. It wasn't happening in a class, like it was supposed to, but
that was ok. For the next couple years I was on a roll. I knew what I was
going to do.
For my undergraduate thesis, I reverse-engineered SHRDLU. My God did I love
working on that program. It was a pleasing bit of code, but what made it even
more exciting was my belief (hard to imagine now, but not unique in 1985)
that it was already climbing the lower slopes of intelligence.
I had gotten into a program at Cornell that didn't make you choose a major.
You could take whatever classes you liked, and choose whatever you liked to
put on your degree. I of course chose "Artificial Intelligence." When I got
the actual physical diploma, I was dismayed to find that the quotes had been
included, which made them read as scare-quotes. At the time this bothered me,
but now it seems amusingly accurate, for reasons I was about to discover.
I applied to 3 grad schools: MIT and Yale, which were renowned for AI at the
time, and Harvard, which I'd visited because Rich Draves went there, and was
also home to Bill Woods, who'd invented the type of parser I used in my SHRDLU
clone. Only Harvard accepted me, so that was where I went.
I don't remember the moment it happened, or if there even was a specific
moment, but during the first year of grad school I realized that AI, as
practiced at the time, was a hoax. By which I mean the sort of AI in which a
program that's told "the dog is sitting on the chair" translates this into
some formal representation and adds it to the list of things it knows.
What these programs really showed was that there's a subset of natural
language that's a formal language. But a very proper subset. It was clear that
there was an unbridgeable gap between what they could do and actually
understanding natural language. It was not, in fact, simply a matter of
teaching SHRDLU more words. That whole way of doing AI, with explicit data
structures representing concepts, was not going to work. Its brokenness did,
as so often happens, generate a lot of opportunities to write papers about
various band-aids that could be applied to it, but it was never going to get
us Mike.
So I looked around to see what I could salvage from the wreckage of my plans,
and there was Lisp. I knew from experience that Lisp was interesting for its
own sake and not just for its association with AI, even though that was the
main reason people cared about it at the time. So I decided to focus on Lisp.
In fact, I decided to write a book about Lisp hacking. It's scary to think how
little I knew about Lisp hacking when I started writing that book. But there's
nothing like writing a book about something to help you learn it. The book,
_On Lisp_ , wasn't published till 1993, but I wrote much of it in grad school.
Computer Science is an uneasy alliance between two halves, theory and systems.
The theory people prove things, and the systems people build things. I wanted
to build things. I had plenty of respect for theory (indeed, a sneaking
suspicion that it was the more admirable of the two halves) but building
things seemed so much more exciting.
The problem with systems work, though, was that it didn't last. Any program
you wrote today, no matter how good, would be obsolete in a couple decades at
best. People might mention your software in footnotes, but no one would
actually use it. And indeed, it would seem very feeble work. Only people with
a sense of the history of the field would even realize that, in its time, it
had been good.
There were some surplus Xerox Dandelions floating around the computer lab at
one point. Anyone who wanted one to play around with could have one. I was
briefly tempted, but they were so slow by present standards; what was the
point? No one else wanted one either, so off they went. That was what happened
to systems work.
I wanted not just to build things, but to build things that would last.
In this dissatisfied state I went in 1988 to visit Rich Draves at CMU, where
he was in grad school. One day I went to visit the Carnegie Institute, where
I'd spent a lot of time as a kid. While looking at a painting there I realized
something that might seem obvious, but was a big surprise to me. There, right
on the wall, was something you could make that would last. Paintings didn't
become obsolete. Some of the best ones were hundreds of years old.
And moreover this was something you could make a living doing. Not as easily
as you could by writing software, of course, but I thought if you were really
industrious and lived really cheaply, it had to be possible to make enough to
survive. And as an artist you could be truly independent. You wouldn't have a
boss, or even need to get research funding.
I had always liked looking at paintings. Could I make them? I had no idea. I'd
never imagined it was even possible. I knew intellectually that people made
art (that it didn't just appear spontaneously) but it was as if the people
who made it were a different species. They either lived long ago or were
mysterious geniuses doing strange things in profiles in _Life_ magazine. The
idea of actually being able to make art, to put that verb before that noun,
seemed almost miraculous.
That fall I started taking art classes at Harvard. Grad students could take
classes in any department, and my advisor, Tom Cheatham, was very easy going.
If he even knew about the strange classes I was taking, he never said
anything.
So now I was in a PhD program in computer science, yet planning to be an
artist, yet also genuinely in love with Lisp hacking and working away at _On
Lisp_. In other words, like many a grad student, I was working energetically
on multiple projects that were not my thesis.
I didn't see a way out of this situation. I didn't want to drop out of grad
school, but how else was I going to get out? I remember when my friend Robert
Morris got kicked out of Cornell for writing the internet worm of 1988, I was
envious that he'd found such a spectacular way to get out of grad school.
Then one day in April 1990 a crack appeared in the wall. I ran into professor
Cheatham and he asked if I was far enough along to graduate that June. I
didn't have a word of my dissertation written, but in what must have been the
quickest bit of thinking in my life, I decided to take a shot at writing one
in the 5 weeks or so that remained before the deadline, reusing parts of _On
Lisp_ where I could, and I was able to respond, with no perceptible delay,
"Yes, I think so. I'll give you something to read in a few days."
I picked applications of continuations as the topic. In retrospect I should
have written about macros and embedded languages. There's a whole world there
that's barely been explored. But all I wanted was to get out of grad school,
and my rapidly written dissertation sufficed, just barely.
Meanwhile I was applying to art schools. I applied to two: RISD in the US, and
the Accademia di Belle Arti in Florence, which, because it was the oldest art
school, I imagined would be good. RISD accepted me, and I never heard back
from the Accademia, so off to Providence I went.
I'd applied for the BFA program at RISD, which meant in effect that I had to
go to college again. This was not as strange as it sounds, because I was only
25, and art schools are full of people of different ages. RISD counted me as a
transfer sophomore and said I had to do the foundation that summer. The
foundation means the classes that everyone has to take in fundamental subjects
like drawing, color, and design.
Toward the end of the summer I got a big surprise: a letter from the
Accademia, which had been delayed because they'd sent it to Cambridge England
instead of Cambridge Massachusetts, inviting me to take the entrance exam in
Florence that fall. This was now only weeks away. My nice landlady let me
leave my stuff in her attic. I had some money saved from consulting work I'd
done in grad school; there was probably enough to last a year if I lived
cheaply. Now all I had to do was learn Italian.
Only _stranieri_ (foreigners) had to take this entrance exam. In retrospect it
may well have been a way of excluding them, because there were so many
_stranieri_ attracted by the idea of studying art in Florence that the Italian
students would otherwise have been outnumbered. I was in decent shape at
painting and drawing from the RISD foundation that summer, but I still don't
know how I managed to pass the written exam. I remember that I answered the
essay question by writing about Cezanne, and that I cranked up the
intellectual level as high as I could to make the most of my limited
vocabulary.
I'm only up to age 25 and already there are such conspicuous patterns. Here I
was, yet again about to attend some august institution in the hopes of
learning about some prestigious subject, and yet again about to be
disappointed. The students and faculty in the painting department at the
Accademia were the nicest people you could imagine, but they had long since
arrived at an arrangement whereby the students wouldn't require the faculty to
teach anything, and in return the faculty wouldn't require the students to
learn anything. And at the same time all involved would adhere outwardly to
the conventions of a 19th century atelier. We actually had one of those little
stoves, fed with kindling, that you see in 19th century studio paintings, and
a nude model sitting as close to it as possible without getting burned. Except
hardly anyone else painted her besides me. The rest of the students spent
their time chatting or occasionally trying to imitate things they'd seen in
American art magazines.
Our model turned out to live just down the street from me. She made a living
from a combination of modelling and making fakes for a local antique dealer.
She'd copy an obscure old painting out of a book, and then he'd take the copy
and maltreat it to make it look old.
While I was a student at the Accademia I started painting still lives in my
bedroom at night. These paintings were tiny, because the room was, and because
I painted them on leftover scraps of canvas, which was all I could afford at
the time. Painting still lives is different from painting people, because the
subject, as its name suggests, can't move. People can't sit for more than
about 15 minutes at a time, and when they do they don't sit very still. So the
traditional m.o. for painting people is to know how to paint a generic person,
which you then modify to match the specific person you're painting. Whereas a
still life you can, if you want, copy pixel by pixel from what you're seeing.
You don't want to stop there, of course, or you get merely photographic
accuracy, and what makes a still life interesting is that it's been through a
head. You want to emphasize the visual cues that tell you, for example, that
the reason the color changes suddenly at a certain point is that it's the edge
of an object. By subtly emphasizing such things you can make paintings that
are more realistic than photographs not just in some metaphorical sense, but
in the strict information-theoretic sense.
I liked painting still lives because I was curious about what I was seeing. In
everyday life, we aren't consciously aware of much we're seeing. Most visual
perception is handled by low-level processes that merely tell your brain
"that's a water droplet" without telling you details like where the lightest
and darkest points are, or "that's a bush" without telling you the shape and
position of every leaf. This is a feature of brains, not a bug. In everyday
life it would be distracting to notice every leaf on every bush. But when you
have to paint something, you have to look more closely, and when you do
there's a lot to see. You can still be noticing new things after days of
trying to paint something people usually take for granted, just as you can
after days of trying to write an essay about something people usually take for
granted.
This is not the only way to paint. I'm not 100% sure it's even a good way to
paint. But it seemed a good enough bet to be worth trying.
Our teacher, professor Ulivi, was a nice guy. He could see I worked hard, and
gave me a good grade, which he wrote down in a sort of passport each student
had. But the Accademia wasn't teaching me anything except Italian, and my
money was running out, so at the end of the first year I went back to the US.
I wanted to go back to RISD, but I was now broke and RISD was very expensive,
so I decided to get a job for a year and then return to RISD the next fall. I
got one at a company called Interleaf, which made software for creating
documents. You mean like Microsoft Word? Exactly. That was how I learned that
low end software tends to eat high end software. But Interleaf still had a few
years to live yet.
Interleaf had done something pretty bold. Inspired by Emacs, they'd added a
scripting language, and even made the scripting language a dialect of Lisp.
Now they wanted a Lisp hacker to write things in it. This was the closest
thing I've had to a normal job, and I hereby apologize to my boss and
coworkers, because I was a bad employee. Their Lisp was the thinnest icing on
a giant C cake, and since I didn't know C and didn't want to learn it, I never
understood most of the software. Plus I was terribly irresponsible. This was
back when a programming job meant showing up every day during certain working
hours. That seemed unnatural to me, and on this point the rest of the world is
coming around to my way of thinking, but at the time it caused a lot of
friction. Toward the end of the year I spent much of my time surreptitiously
working on _On Lisp_ , which I had by this time gotten a contract to publish.
The good part was that I got paid huge amounts of money, especially by art
student standards. In Florence, after paying my part of the rent, my budget
for everything else had been $7 a day. Now I was getting paid more than 4
times that every hour, even when I was just sitting in a meeting. By living
cheaply I not only managed to save enough to go back to RISD, but also paid
off my college loans.
I learned some useful things at Interleaf, though they were mostly about what
not to do. I learned that it's better for technology companies to be run by
product people than sales people (though sales is a real skill and people who
are good at it are really good at it), that it leads to bugs when code is
edited by too many people, that cheap office space is no bargain if it's
depressing, that planned meetings are inferior to corridor conversations, that
big, bureaucratic customers are a dangerous source of money, and that there's
not much overlap between conventional office hours and the optimal time for
hacking, or conventional offices and the optimal place for it.
But the most important thing I learned, and which I used in both Viaweb and Y
Combinator, is that the low end eats the high end: that it's good to be the
"entry level" option, even though that will be less prestigious, because if
you're not, someone else will be, and will squash you against the ceiling.
Which in turn means that prestige is a danger sign.
When I left to go back to RISD the next fall, I arranged to do freelance work
for the group that did projects for customers, and this was how I survived for
the next several years. When I came back to visit for a project later on,
someone told me about a new thing called HTML, which was, as he described it,
a derivative of SGML. Markup language enthusiasts were an occupational hazard
at Interleaf and I ignored him, but this HTML thing later became a big part of
my life.
In the fall of 1992 I moved back to Providence to continue at RISD. The
foundation had merely been intro stuff, and the Accademia had been a (very
civilized) joke. Now I was going to see what real art school was like. But
alas it was more like the Accademia than not. Better organized, certainly, and
a lot more expensive, but it was now becoming clear that art school did not
bear the same relationship to art that medical school bore to medicine. At
least not the painting department. The textile department, which my next door
neighbor belonged to, seemed to be pretty rigorous. No doubt illustration and
architecture were too. But painting was post-rigorous. Painting students were
supposed to express themselves, which to the more worldly ones meant to try to
cook up some sort of distinctive signature style.
A signature style is the visual equivalent of what in show business is known
as a "schtick": something that immediately identifies the work as yours and no
one else's. For example, when you see a painting that looks like a certain
kind of cartoon, you know it's by Roy Lichtenstein. So if you see a big
painting of this type hanging in the apartment of a hedge fund manager, you
know he paid millions of dollars for it. That's not always why artists have a
signature style, but it's usually why buyers pay a lot for such work.
There were plenty of earnest students too: kids who "could draw" in high
school, and now had come to what was supposed to be the best art school in the
country, to learn to draw even better. They tended to be confused and
demoralized by what they found at RISD, but they kept going, because painting
was what they did. I was not one of the kids who could draw in high school,
but at RISD I was definitely closer to their tribe than the tribe of signature
style seekers.
I learned a lot in the color class I took at RISD, but otherwise I was
basically teaching myself to paint, and I could do that for free. So in 1993 I
dropped out. I hung around Providence for a bit, and then my college friend
Nancy Parmet did me a big favor. A rent-controlled apartment in a building her
mother owned in New York was becoming vacant. Did I want it? It wasn't much
more than my current place, and New York was supposed to be where the artists
were. So yes, I wanted it!
Asterix comics begin by zooming in on a tiny corner of Roman Gaul that turns
out not to be controlled by the Romans. You can do something similar on a map
of New York City: if you zoom in on the Upper East Side, there's a tiny corner
that's not rich, or at least wasn't in 1993. It's called Yorkville, and that
was my new home. Now I was a New York artist in the strictly technical sense
of making paintings and living in New York.
I was nervous about money, because I could sense that Interleaf was on the way
down. Freelance Lisp hacking work was very rare, and I didn't want to have to
program in another language, which in those days would have meant C++ if I was
lucky. So with my unerring nose for financial opportunity, I decided to write
another book on Lisp. This would be a popular book, the sort of book that
could be used as a textbook. I imagined myself living frugally off the
royalties and spending all my time painting. (The painting on the cover of
this book, _ANSI Common Lisp_ , is one that I painted around this time.)
The best thing about New York for me was the presence of Idelle and Julian
Weber. Idelle Weber was a painter, one of the early photorealists, and I'd
taken her painting class at Harvard. I've never known a teacher more beloved
by her students. Large numbers of former students kept in touch with her,
including me. After I moved to New York I became her de facto studio
assistant.
She liked to paint on big, square canvases, 4 to 5 feet on a side. One day in
late 1994 as I was stretching one of these monsters there was something on the
radio about a famous fund manager. He wasn't that much older than me, and was
super rich. The thought suddenly occurred to me: why don't I become rich? Then
I'll be able to work on whatever I want.
Meanwhile I'd been hearing more and more about this new thing called the World
Wide Web. Robert Morris showed it to me when I visited him in Cambridge, where
he was now in grad school at Harvard. It seemed to me that the web would be a
big deal. I'd seen what graphical user interfaces had done for the popularity
of microcomputers. It seemed like the web would do the same for the internet.
If I wanted to get rich, here was the next train leaving the station. I was
right about that part. What I got wrong was the idea. I decided we should
start a company to put art galleries online. I can't honestly say, after
reading so many Y Combinator applications, that this was the worst startup
idea ever, but it was up there. Art galleries didn't want to be online, and
still don't, not the fancy ones. That's not how they sell. I wrote some
software to generate web sites for galleries, and Robert wrote some to resize
images and set up an http server to serve the pages. Then we tried to sign up
galleries. To call this a difficult sale would be an understatement. It was
difficult to give away. A few galleries let us make sites for them for free,
but none paid us.
Then some online stores started to appear, and I realized that except for the
order buttons they were identical to the sites we'd been generating for
galleries. This impressive-sounding thing called an "internet storefront" was
something we already knew how to build.
So in the summer of 1995, after I submitted the camera-ready copy of _ANSI
Common Lisp_ to the publishers, we started trying to write software to build
online stores. At first this was going to be normal desktop software, which in
those days meant Windows software. That was an alarming prospect, because
neither of us knew how to write Windows software or wanted to learn. We lived
in the Unix world. But we decided we'd at least try writing a prototype store
builder on Unix. Robert wrote a shopping cart, and I wrote a new site
generator for stores (in Lisp, of course).
We were working out of Robert's apartment in Cambridge. His roommate was away
for big chunks of time, during which I got to sleep in his room. For some
reason there was no bed frame or sheets, just a mattress on the floor. One
morning as I was lying on this mattress I had an idea that made me sit up like
a capital L. What if we ran the software on the server, and let users control
it by clicking on links? Then we'd never have to write anything to run on
users' computers. We could generate the sites on the same server we'd serve
them from. Users wouldn't need anything more than a browser.
This kind of software, known as a web app, is common now, but at the time it
wasn't clear that it was even possible. To find out, we decided to try making
a version of our store builder that you could control through the browser. A
couple days later, on August 12, we had one that worked. The UI was horrible,
but it proved you could build a whole store through the browser, without any
client software or typing anything into the command line on the server.
Now we felt like we were really onto something. I had visions of a whole new
generation of software working this way. You wouldn't need versions, or ports,
or any of that crap. At Interleaf there had been a whole group called Release
Engineering that seemed to be at least as big as the group that actually wrote
the software. Now you could just update the software right on the server.
We started a new company we called Viaweb, after the fact that our software
worked via the web, and we got $10,000 in seed funding from Idelle's husband
Julian. In return for that and doing the initial legal work and giving us
business advice, we gave him 10% of the company. Ten years later this deal
became the model for Y Combinator's. We knew founders needed something like
this, because we'd needed it ourselves.
At this stage I had a negative net worth, because the thousand dollars or so I
had in the bank was more than counterbalanced by what I owed the government in
taxes. (Had I diligently set aside the proper proportion of the money I'd made
consulting for Interleaf? No, I had not.) So although Robert had his graduate
student stipend, I needed that seed funding to live on.
We originally hoped to launch in September, but we got more ambitious about
the software as we worked on it. Eventually we managed to build a WYSIWYG site
builder, in the sense that as you were creating pages, they looked exactly
like the static ones that would be generated later, except that instead of
leading to static pages, the links all referred to closures stored in a hash
table on the server.
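That hash-table-of-closures trick is easy to sketch. Here is a minimal
illustration in Common Lisp (not Viaweb's actual code; every name in it is
invented): each generated link embeds a fresh key, and the server maps keys
to closures that capture whatever state the page needs.

```lisp
;; A minimal sketch, not Viaweb's actual code: URLs carry a key into a
;; server-side hash table whose values are closures. "Clicking a link"
;; means the server looks up the key and calls the closure.

(defvar *handlers* (make-hash-table :test #'equal))
(defvar *next-id* 0)

(defun link-to (closure)
  "Store CLOSURE under a fresh key and return the URL that will run it."
  (let ((key (format nil "x~d" (incf *next-id*))))
    (setf (gethash key *handlers*) closure)
    (format nil "/do?k=~a" key)))

(defun handle-request (key)
  "Called by the web server: run the closure the clicked link refers to."
  (let ((handler (gethash key *handlers*)))
    (if handler (funcall handler) "dead link")))

;; Each generated page can close over per-user state, so the link
;; itself is all the "session" you need:
(defun edit-page-link (store-name)
  (link-to (lambda () (format nil "<h1>Editing ~a</h1>" store-name))))
```

Expiring stale entries, persistence across restarts, and so on are omitted;
the point is just that closures stored server-side can stand in for both
sessions and page state.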
It helped to have studied art, because the main goal of an online store
builder is to make users look legit, and the key to looking legit is high
production values. If you get page layouts and fonts and colors right, you can
make a guy running a store out of his bedroom look more legit than a big
company.
(If you're curious why my site looks so old-fashioned, it's because it's still
made with this software. It may look clunky today, but in 1996 it was the last
word in slick.)
In September, Robert rebelled. "We've been working on this for a month," he
said, "and it's still not done." This is funny in retrospect, because he would
still be working on it almost 3 years later. But I decided it might be prudent
to recruit more programmers, and I asked Robert who else in grad school with
him was really good. He recommended Trevor Blackwell, which surprised me at
first, because at that point I knew Trevor mainly for his plan to reduce
everything in his life to a stack of notecards, which he carried around with
him. But Rtm was right, as usual. Trevor turned out to be a frighteningly
effective hacker.
It was a lot of fun working with Robert and Trevor. They're the two most
_independent-minded_ people I know, and in completely different ways. If you
could see inside Rtm's brain it would look like a colonial New England church,
and if you could see inside Trevor's it would look like the worst excesses of
Austrian Rococo.
We opened for business, with 6 stores, in January 1996. It was just as well we
waited a few months, because although we worried we were late, we were
actually almost fatally early. There was a lot of talk in the press then about
ecommerce, but not many people actually wanted online stores.
There were three main parts to the software: the editor, which people used to
build sites and which I wrote, the shopping cart, which Robert wrote, and the
manager, which kept track of orders and statistics, and which Trevor wrote. In
its time, the editor was one of the best general-purpose site builders. I kept
the code tight and didn't have to integrate with any other software except
Robert's and Trevor's, so it was quite fun to work on. If all I'd had to do
was work on this software, the next 3 years would have been the easiest of my
life. Unfortunately I had to do a lot more, all of it stuff I was worse at
than programming, and the next 3 years were instead the most stressful.
There were a lot of startups making ecommerce software in the second half of
the 90s. We were determined to be the Microsoft Word, not the Interleaf. Which
meant being easy to use and inexpensive. It was lucky for us that we were
poor, because that caused us to make Viaweb even more inexpensive than we
realized. We charged $100 a month for a small store and $300 a month for a big
one. This low price was a big attraction, and a constant thorn in the sides of
competitors, but it wasn't because of some clever insight that we set the
price low. We had no idea what businesses paid for things. $300 a month seemed
like a lot of money to us.
We did a lot of things right by accident like that. For example, we did what's
now called "doing things that _don't scale_," although at the time we would
have described it as "being so lame that we're driven to the most desperate
measures to get users." The most common of which was building stores for them.
This seemed particularly humiliating, since the whole raison d'etre of our
software was that people could use it to make their own stores. But anything
to get users.
We learned a lot more about retail than we wanted to know. For example, that
if you could only have a small image of a man's shirt (and all images were
small then by present standards), it was better to have a closeup of the
collar than a picture of the whole shirt. The reason I remember learning this
was that it meant I had to rescan about 30 images of men's shirts. My first
set of scans was so beautiful too.
Though this felt wrong, it was exactly the right thing to be doing. Building
stores for users taught us about retail, and about how it felt to use our
software. I was initially both mystified and repelled by "business" and
thought we needed a "business person" to be in charge of it, but once we
started to get users, I was converted, in much the same way I was converted to
_fatherhood_ once I had kids. Whatever users wanted, I was all theirs. Maybe
one day we'd have so many users that I couldn't scan their images for them,
but in the meantime there was nothing more important to do.
Another thing I didn't get at the time is that _growth rate_ is the ultimate
test of a startup. Our growth rate was fine. We had about 70 stores at the end
of 1996 and about 500 at the end of 1997. I mistakenly thought the thing that
mattered was the absolute number of users. And that is the thing that matters
in the sense that that's how much money you're making, and if you're not
making enough, you might go out of business. But in the long term the growth
rate takes care of the absolute number. If we'd been a startup I was advising
at Y Combinator, I would have said: Stop being so stressed out, because you're
doing fine. You're growing 7x a year. Just don't hire too many more people and
you'll soon be profitable, and then you'll control your own destiny.
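Those store counts imply the growth multiple directly; here is the
back-of-the-envelope arithmetic, sketched in Common Lisp (the weekly
conversion is my own addition, not a figure from the text):

```lisp
;; ~70 stores at the end of 1996, ~500 at the end of 1997:
(/ 500 70.0)                     ; => ~7.14, the "7x a year" multiple
;; The same growth expressed as a compounded weekly rate:
(expt (/ 500 70.0) (/ 1 52.0))   ; => ~1.039, i.e. about 4% per week
```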
Alas I hired lots more people, partly because our investors wanted me to, and
partly because that's what startups did during the Internet Bubble. A company
with just a handful of employees would have seemed amateurish. So we didn't
reach breakeven until about when Yahoo bought us in the summer of 1998. Which
in turn meant we were at the mercy of investors for the entire life of the
company. And since both we and our investors were noobs at startups, the
result was a mess even by startup standards.
It was a huge relief when Yahoo bought us. In principle our Viaweb stock was
valuable. It was a share in a business that was profitable and growing
rapidly. But it didn't feel very valuable to me; I had no idea how to value a
business, but I was all too keenly aware of the near-death experiences we
seemed to have every few months. Nor had I changed my grad student lifestyle
significantly since we started. So when Yahoo bought us it felt like going
from rags to riches. Since we were going to California, I bought a car, a
yellow 1998 VW GTI. I remember thinking that its leather seats alone were by
far the most luxurious thing I owned.
The next year, from the summer of 1998 to the summer of 1999, must have been
the least productive of my life. I didn't realize it at the time, but I was
worn out from the effort and stress of running Viaweb. For a while after I got
to California I tried to continue my usual m.o. of programming till 3 in the
morning, but fatigue combined with Yahoo's prematurely aged _culture_ and grim
cube farm in Santa Clara gradually dragged me down. After a few months it felt
disconcertingly like working at Interleaf.
Yahoo had given us a lot of options when they bought us. At the time I thought
Yahoo was so overvalued that they'd never be worth anything, but to my
astonishment the stock went up 5x in the next year. I hung on till the first
chunk of options vested, then in the summer of 1999 I left. It had been so
long since I'd painted anything that I'd half forgotten why I was doing this.
My brain had been entirely full of software and men's shirts for 4 years. But
I had done this to get rich so I could paint, I reminded myself, and now I was
rich, so I should go paint.
When I said I was leaving, my boss at Yahoo had a long conversation with me
about my plans. I told him all about the kinds of pictures I wanted to paint.
At the time I was touched that he took such an interest in me. Now I realize
it was because he thought I was lying. My options at that point were worth
about $2 million a month. If I was leaving that kind of money on the table, it
could only be to go and start some new startup, and if I did, I might take
people with me. This was the height of the Internet Bubble, and Yahoo was
ground zero of it. My boss was at that moment a billionaire. Leaving then to
start a new startup must have seemed to him an insanely, and yet also
plausibly, ambitious plan.
But I really was quitting to paint, and I started immediately. There was no
time to lose. I'd already burned 4 years getting rich. Now when I talk to
founders who are leaving after selling their companies, my advice is always
the same: take a vacation. That's what I should have done, just gone off
somewhere and done nothing for a month or two, but the idea never occurred to
me.
So I tried to paint, but I just didn't seem to have any energy or ambition.
Part of the problem was that I didn't know many people in California. I'd
compounded this problem by buying a house up in the Santa Cruz Mountains, with
a beautiful view but miles from anywhere. I stuck it out for a few more
months, then in desperation I went back to New York, where unless you
understand about rent control you'll be surprised to hear I still had my
apartment, sealed up like a tomb of my old life. Idelle was in New York at
least, and there were other people trying to paint there, even though I didn't
know any of them.
When I got back to New York I resumed my old life, except now I was rich. It
was as weird as it sounds. I resumed all my old patterns, except now there
were doors where there hadn't been. Now when I was tired of walking, all I had
to do was raise my hand, and (unless it was raining) a taxi would stop to pick
me up. Now when I walked past charming little restaurants I could go in and
order lunch. It was exciting for a while. Painting started to go better. I
experimented with a new kind of still life where I'd paint one painting in the
old way, then photograph it and print it, blown up, on canvas, and then use
that as the underpainting for a second still life, painted from the same
objects (which hopefully hadn't rotted yet).
Meanwhile I looked for an apartment to buy. Now I could actually choose what
neighborhood to live in. Where, I asked myself and various real estate agents,
is the Cambridge of New York? Aided by occasional visits to actual Cambridge,
I gradually realized there wasn't one. Huh.
Around this time, in the spring of 2000, I had an idea. It was clear from our
experience with Viaweb that web apps were the future. Why not build a web app
for making web apps? Why not let people edit code on our server through the
browser, and then host the resulting applications for them? You could run
all sorts of services on the servers that these applications could use just by
making an API call: making and receiving phone calls, manipulating images,
taking credit card payments, etc.
I got so excited about this idea that I couldn't think about anything else. It
seemed obvious that this was the future. I didn't particularly want to start
another company, but it was clear that this idea would have to be embodied as
one, so I decided to move to Cambridge and start it. I hoped to lure Robert
into working on it with me, but there I ran into a hitch. Robert was now a
postdoc at MIT, and though he'd made a lot of money the last time I'd lured
him into working on one of my schemes, it had also been a huge time sink. So
while he agreed that it sounded like a plausible idea, he firmly refused to
work on it.
Hmph. Well, I'd do it myself then. I recruited Dan Giffin, who had worked for
Viaweb, and two undergrads who wanted summer jobs, and we got to work trying
to build what it's now clear is about twenty companies and several open source
projects worth of software. The language for defining applications would of
course be a dialect of Lisp. But I wasn't so naive as to assume I could spring
an overt Lisp on a general audience; we'd hide the parentheses, like Dylan
did.
By then there was a name for the kind of company Viaweb was, an "application
service provider," or ASP. This name didn't last long before it was replaced
by "software as a service," but it was current for long enough that I named
this new company after it: it was going to be called Aspra.
I started working on the application builder, Dan worked on network
infrastructure, and the two undergrads worked on the first two services
(images and phone calls). But about halfway through the summer I realized I
really didn't want to run a company, especially not a big one, which it was
looking like this would have to be. I'd only started Viaweb because I needed
the money. Now that I didn't need money anymore, why was I doing this? If this
vision had to be realized as a company, then screw the vision. I'd build a
subset that could be done as an open source project.
Much to my surprise, the time I spent working on this stuff was not wasted
after all. After we started Y Combinator, I would often encounter startups
working on parts of this new architecture, and it was very useful to have
spent so much time thinking about it and even trying to write some of it.
The subset I would build as an open source project was the new Lisp, whose
parentheses I now wouldn't even have to hide. A lot of Lisp hackers dream of
building a new Lisp, partly because one of the distinctive features of the
language is that it has dialects, and partly, I think, because we have in our
minds a Platonic form of Lisp that all existing dialects fall short of. I
certainly did. So at the end of the summer Dan and I switched to working on
this new dialect of Lisp, which I called Arc, in a house I bought in
Cambridge.
The following spring, lightning struck. I was invited to give a talk at a Lisp
conference, so I gave one about how we'd used Lisp at Viaweb. Afterward I put
a postscript file of this talk online, on paulgraham.com, which I'd created
years before using Viaweb but had never used for anything. In one day it got
30,000 page views. What on earth had happened? The referring urls showed that
someone had posted it on Slashdot.
Wow, I thought, there's an audience. If I write something and put it on the
web, anyone can read it. That may seem obvious now, but it was surprising
then. In the print era there was a narrow channel to readers, guarded by
fierce monsters known as editors. The only way to get an audience for anything
you wrote was to get it published as a book, or in a newspaper or magazine.
Now anyone could publish anything.
This had been possible in principle since 1993, but not many people had
realized it yet. I had been intimately involved with building the
infrastructure of the web for most of that time, and a writer as well, and it
had taken me 8 years to realize it. Even then it took me several years to
understand the implications. It meant there would be a whole new generation of
_essays_.
In the print era, the channel for publishing essays had been vanishingly
small. Except for a few officially anointed thinkers who went to the right
parties in New York, the only people allowed to publish essays were
specialists writing about their specialties. There were so many essays that
had never been written, because there had been no way to publish them. Now
they could be, and I was going to write them.
I've worked on several different things, but to the extent there was a turning
point where I figured out what to work on, it was when I started publishing
essays online. From then on I knew that whatever else I did, I'd always write
essays too.
I knew that online essays would be a _marginal_ medium at first. Socially
they'd seem more like rants posted by nutjobs on their GeoCities sites than
the genteel and beautifully typeset compositions published in _The New
Yorker_. But by this point I knew enough to find that encouraging instead of
discouraging.
One of the most conspicuous patterns I've noticed in my life is how well it
has worked, for me at least, to work on things that weren't prestigious. Still
life has always been the least prestigious form of painting. Viaweb and Y
Combinator both seemed lame when we started them. I still get the glassy eye
from strangers when they ask what I'm writing, and I explain that it's an
essay I'm going to publish on my web site. Even Lisp, though prestigious
intellectually in something like the way Latin is, also seems about as hip.
It's not that unprestigious types of work are good per se. But when you find
yourself drawn to some kind of work despite its current lack of prestige, it's
a sign both that there's something real to be discovered there, and that you
have the right kind of motives. Impure motives are a big danger for the
ambitious. If anything is going to lead you astray, it will be the desire to
impress people. So while working on things that aren't prestigious doesn't
guarantee you're on the right track, it at least guarantees you're not on the
most common type of wrong one.
Over the next several years I wrote lots of essays about all kinds of
different topics. O'Reilly reprinted a collection of them as a book, called
_Hackers & Painters_ after one of the essays in it. I also worked on spam
filters, and did some more painting. I used to have dinners for a group of
friends every Thursday night, which taught me how to cook for groups. And I
bought another building in Cambridge, a former candy factory (and later, twas
said, porn studio), to use as an office.
One night in October 2003 there was a big party at my house. It was a clever
idea of my friend Maria Daniels, who was one of the Thursday diners. Three
separate hosts would all invite their friends to one party. So for every
guest, two thirds of the other guests would be people they didn't know but
would probably like. One of the guests was someone I didn't know but would
turn out to like a lot: a woman called Jessica Livingston. A couple days later
I asked her out.
Jessica was in charge of marketing at a Boston investment bank. This bank
thought it understood startups, but over the next year, as she met friends of
mine from the startup world, she was surprised how different reality was. And
how colorful their stories were. So she decided to compile a book of
_interviews_ with startup founders.
When the bank had financial problems and she had to fire half her staff, she
started looking for a new job. In early 2005 she interviewed for a marketing
job at a Boston VC firm. It took them weeks to make up their minds, and during
this time I started telling her about all the things that needed to be fixed
about venture capital. They should make a larger number of smaller investments
instead of a handful of giant ones, they should be funding younger, more
technical founders instead of MBAs, they should let the founders remain as
CEO, and so on.
One of my tricks for writing essays had always been to give talks. The
prospect of having to stand up in front of a group of people and tell them
something that won't waste their time is a great spur to the imagination. When
the Harvard Computer Society, the undergrad computer club, asked me to give a
talk, I decided I would tell them how to start a startup. Maybe they'd be able
to avoid the worst of the mistakes we'd made.
So I gave this talk, in the course of which I told them that the best sources
of seed funding were successful startup founders, because then they'd be
sources of advice too. Whereupon it seemed they were all looking expectantly
at me. Horrified at the prospect of having my inbox flooded by business plans
(if I'd only known), I blurted out "But not me!" and went on with the talk.
But afterward it occurred to me that I should really stop procrastinating
about angel investing. I'd been meaning to since Yahoo bought us, and now it
was 7 years later and I still hadn't done one angel investment.
Meanwhile I had been scheming with Robert and Trevor about projects we could
work on together. I missed working with them, and it seemed like there had to
be something we could collaborate on.
As Jessica and I were walking home from dinner on March 11, at the corner of
Garden and Walker streets, these three threads converged. Screw the VCs who
were taking so long to make up their minds. We'd start our own investment firm
and actually implement the ideas we'd been talking about. I'd fund it, and
Jessica could quit her job and work for it, and we'd get Robert and Trevor as
partners too.
Once again, ignorance worked in our favor. We had no idea how to be angel
investors, and in Boston in 2005 there were no Ron Conways to learn from. So
we just made what seemed like the obvious choices, and some of the things we
did turned out to be novel.
There are multiple components to Y Combinator, and we didn't figure them all
out at once. The part we got first was to be an angel firm. In those days,
those two words didn't go together. There were VC firms, which were organized
companies with people whose job it was to make investments, but they only did
big, million dollar investments. And there were angels, who did smaller
investments, but these were individuals who were usually focused on other
things and made investments on the side. And neither of them helped founders
enough in the beginning. We knew how helpless founders were in some respects,
because we remembered how helpless we'd been. For example, one thing Julian
had done for us that seemed to us like magic was to get us set up as a
company. We were fine writing fairly difficult software, but actually getting
incorporated, with bylaws and stock and all that stuff, how on earth did you
do that? Our plan was not only to make seed investments, but to do for
startups everything Julian had done for us.
YC was not organized as a fund. It was cheap enough to run that we funded it
with our own money. That went right by 99% of readers, but professional
investors are thinking "Wow, that means they got all the returns." But once
again, this was not due to any particular insight on our part. We didn't know
how VC firms were organized. It never occurred to us to try to raise a fund,
and if it had, we wouldn't have known where to start.
The most distinctive thing about YC is the batch model: to fund a bunch of
startups all at once, twice a year, and then to spend three months focusing
intensively on trying to help them. That part we discovered by accident, not
merely implicitly but explicitly due to our ignorance about investing. We
needed to get experience as investors. What better way, we thought, than to
fund a whole bunch of startups at once? We knew undergrads got temporary jobs
at tech companies during the summer. Why not organize a summer program where
they'd start startups instead? We wouldn't feel guilty for being in a sense
fake investors, because they would in a similar sense be fake founders. So
while we probably wouldn't make much money out of it, we'd at least get to
practice being investors on them, and they for their part would probably have
a more interesting summer than they would working at Microsoft.
We'd use the building I owned in Cambridge as our headquarters. We'd all have
dinner there once a week, on Tuesdays (since I was already cooking for the
Thursday diners on Thursdays), and after dinner we'd bring in experts on
startups to give talks.
We knew undergrads were deciding then about summer jobs, so in a matter of
days we cooked up something we called the Summer Founders Program, and I
posted an _announcement_ on my site, inviting undergrads to apply. I had never
imagined that writing essays would be a way to get "deal flow," as investors
call it, but it turned out to be the perfect source. We got 225
applications for the Summer Founders Program, and we were surprised to find
that a lot of them were from people who'd already graduated, or were about to
that spring. Already this SFP thing was starting to feel more serious than
we'd intended.
We invited about 20 of the 225 groups to interview in person, and from those
we picked 8 to fund. They were an impressive group. That first batch included
reddit, Justin Kan and Emmett Shear, who went on to found Twitch, Aaron
Swartz, who had already helped write the RSS spec and would a few years later
become a martyr for open access, and Sam Altman, who would later become the
second president of YC. I don't think it was entirely luck that the first
batch was so good. You had to be pretty bold to sign up for a weird thing like
the Summer Founders Program instead of a summer job at a legit place like
Microsoft or Goldman Sachs.
The deal for startups was based on a combination of the deal we did with
Julian ($10k for 10%) and what Robert said MIT grad students got for the
summer ($6k). We invested $6k per founder, which in the typical two-founder
case was $12k, in return for 6%. That had to be fair, because it was twice as
good as the deal we ourselves had taken. Plus that first summer, which was
really hot, Jessica brought the founders free air conditioners.
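(A quick check of the "twice as good" claim: a deal's implied valuation is the investment divided by the fraction of the company sold. The sketch below is mine, treating both deals as simple percentage sales.)

```python
# Implied valuation = investment / fraction of company sold.
yc_deal = 12_000 / 0.06      # $12k for 6%: values the startup at $200k
julian_deal = 10_000 / 0.10  # $10k for 10%: valued Viaweb at $100k

print(yc_deal / julian_deal)  # 2.0, i.e. twice as good a deal for the founders
```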
Fairly quickly I realized that we had stumbled upon the way to scale startup
funding. Funding startups in batches was more convenient for us, because it
meant we could do things for a lot of startups at once, but being part of a
batch was better for the startups too. It solved one of the biggest problems
faced by founders: the isolation. Now you not only had colleagues, but
colleagues who understood the problems you were facing and could tell you how
they were solving them.
As YC grew, we started to notice other advantages of scale. The alumni became
a tight community, dedicated to helping one another, and especially the
current batch, whose shoes they remembered being in. We also noticed that the
startups were becoming one another's customers. We used to refer jokingly to
the "YC GDP," but as YC grows this becomes less and less of a joke. Now lots
of startups get their initial set of customers almost entirely from among
their batchmates.
I had not originally intended YC to be a full-time job. I was going to do
three things: hack, write essays, and work on YC. As YC grew, and I grew more
excited about it, it started to take up a lot more than a third of my
attention. But for the first few years I was still able to work on other
things.
In the summer of 2006, Robert and I started working on a new version of Arc.
This one was reasonably fast, because it was compiled into Scheme. To test
this new Arc, I wrote Hacker News in it. It was originally meant to be a news
aggregator for startup founders and was called Startup News, but after a few
months I got tired of reading about nothing but startups. Plus it wasn't
startup founders we wanted to reach. It was future startup founders. So I
changed the name to Hacker News and the topic to whatever engaged one's
intellectual curiosity.
HN was no doubt good for YC, but it was also by far the biggest source of
stress for me. If all I'd had to do was select and help founders, life would
have been so easy. And that implies that HN was a mistake. Surely the biggest
source of stress in one's work should at least be something close to the core
of the work. Whereas I was like someone who was in pain while running a
marathon not from the exertion of running, but because I had a blister from an
ill-fitting shoe. When I was dealing with some urgent problem during YC, there
was about a 60% chance it had to do with HN, and a 40% chance it had to do with
everything else combined.
As well as HN, I wrote all of YC's internal software in Arc. But while I
continued to work a good deal _in_ Arc, I gradually stopped working _on_ Arc,
partly because I didn't have time to, and partly because it was a lot less
attractive to mess around with the language now that we had all this
infrastructure depending on it. So now my three projects were reduced to two:
writing essays and working on YC.
YC was different from other kinds of work I've done. Instead of deciding for
myself what to work on, the problems came to me. Every 6 months there was a
new batch of startups, and their problems, whatever they were, became our
problems. It was very engaging work, because their problems were quite varied,
and the good founders were very effective. If you were trying to learn the
most you could about startups in the shortest possible time, you couldn't have
picked a better way to do it.
There were parts of the job I didn't like. Disputes between cofounders,
figuring out when people were lying to us, fighting with people who maltreated
the startups, and so on. But I worked hard even at the parts I didn't like. I
was haunted by something Kevin Hale once said about companies: "No one works
harder than the boss." He meant it both descriptively and prescriptively, and
it was the second part that scared me. I wanted YC to be good, so if how hard
I worked set the upper bound on how hard everyone else worked, I'd better work
very hard.
One day in 2010, when he was visiting California for interviews, Robert Morris
did something astonishing: he offered me unsolicited advice. I can only
remember him doing that once before. One day at Viaweb, when I was bent over
double from a kidney stone, he suggested that it would be a good idea for him
to take me to the hospital. That was what it took for Rtm to offer unsolicited
advice. So I remember his exact words very clearly. "You know," he said, "you
should make sure Y Combinator isn't the last cool thing you do."
At the time I didn't understand what he meant, but gradually it dawned on me
that he was saying I should quit. This seemed strange advice, because YC was
doing great. But if there was one thing rarer than Rtm offering advice, it was
Rtm being wrong. So this set me thinking. It was true that on my current
trajectory, YC would be the last thing I did, because it was only taking up
more of my attention. It had already eaten Arc, and was in the process of
eating essays too. Either YC was my life's work or I'd have to leave
eventually. And it wasn't, so I would.
In the summer of 2012 my mother had a stroke, and the cause turned out to be a
blood clot caused by colon cancer. The stroke destroyed her balance, and she
was put in a nursing home, but she really wanted to get out of it and back to
her house, and my sister and I were determined to help her do it. I used to
fly up to Oregon to visit her regularly, and I had a lot of time to think on
those flights. On one of them I realized I was ready to hand YC over to
someone else.
I asked Jessica if she wanted to be president, but she didn't, so we decided
we'd try to recruit Sam Altman. We talked to Robert and Trevor and we agreed
to make it a complete changing of the guard. Up till that point YC had been
controlled by the original LLC we four had started. But we wanted YC to last
for a long time, and to do that it couldn't be controlled by the founders. So
if Sam said yes, we'd let him reorganize YC. Robert and I would retire, and
Jessica and Trevor would become ordinary partners.
When we asked Sam if he wanted to be president of YC, initially he said no. He
wanted to start a startup to make nuclear reactors. But I kept at it, and in
October 2013 he finally agreed. We decided he'd take over starting with the
winter 2014 batch. For the rest of 2013 I left running YC more and more to
Sam, partly so he could learn the job, and partly because I was focused on my
mother, whose cancer had returned.
She died on January 15, 2014. We knew this was coming, but it was still hard
when it did.
I kept working on YC till March, to help get that batch of startups through
Demo Day, then I checked out pretty completely. (I still talk to alumni and to
new startups working on things I'm interested in, but that only takes a few
hours a week.)
What should I do next? Rtm's advice hadn't included anything about that. I
wanted to do something completely different, so I decided I'd paint. I wanted
to see how good I could get if I really focused on it. So the day after I
stopped working on YC, I started painting. I was rusty and it took a while to
get back into shape, but it was at least completely engaging.
I spent most of the rest of 2014 painting. I'd never been able to work so
uninterruptedly before, and I got to be better than I had been. Not good
enough, but better. Then in November, right in the middle of a painting, I ran
out of steam. Up till that point I'd always been curious to see how the
painting I was working on would turn out, but suddenly finishing this one
seemed like a chore. So I stopped working on it and cleaned my brushes and
haven't painted since. So far anyway.
I realize that sounds rather wimpy. But attention is a zero sum game. If you
can choose what to work on, and you choose a project that's not the best one
(or at least a good one) for you, then it's getting in the way of another
project that is. And at 50 there was some opportunity cost to screwing around.
I started writing essays again, and wrote a bunch of new ones over the next
few months. I even wrote a couple that _weren't_ about startups. Then in March
2015 I started working on Lisp again.
The distinctive thing about Lisp is that its core is a language defined by
writing an interpreter in itself. It wasn't originally intended as a
programming language in the ordinary sense. It was meant to be a formal model
of computation, an alternative to the Turing machine. If you want to write an
interpreter for a language in itself, what's the minimum set of predefined
operators you need? The Lisp that John McCarthy invented, or more accurately
discovered, is an answer to that question.
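If you're curious what answering that question looks like in practice, here's a rough sketch of a McCarthy-style evaluator, written in Python rather than Lisp. It is only an illustration of the idea, not McCarthy's actual 1960 definition; the representation (Python lists for pairs, strings for symbols) and the minimal lambda handling are simplifications of mine.

```python
# A sketch of an evaluator built from McCarthy's seven primitive operators:
# quote, atom, eq, car, cdr, cons, and cond. Expressions are nested Python
# lists; symbols are strings.

def eval_(e, env):
    if isinstance(e, str):                     # a symbol: look up its value
        return env[e]
    op = e[0]
    if op == "quote":                          # (quote x) returns x unevaluated
        return e[1]
    if op == "atom":                           # t if the value is a symbol or ()
        x = eval_(e[1], env)
        return "t" if isinstance(x, str) or x == [] else []
    if op == "eq":                             # t if two atoms are the same
        x, y = eval_(e[1], env), eval_(e[2], env)
        return "t" if x == y and (isinstance(x, str) or x == []) else []
    if op == "car":                            # first element of a list
        return eval_(e[1], env)[0]
    if op == "cdr":                            # the rest of the list
        return eval_(e[1], env)[1:]
    if op == "cons":                           # prepend an element to a list
        return [eval_(e[1], env)] + eval_(e[2], env)
    if op == "cond":                           # (cond (p1 e1) (p2 e2) ...)
        for p, r in e[1:]:
            if eval_(p, env) == "t":
                return eval_(r, env)
        return []
    if op == "lambda":                         # a lambda evaluates to itself
        return e
    fn = eval_(op, env)                        # apply a named or inline lambda
    params, body = fn[1], fn[2]
    args = [eval_(a, env) for a in e[1:]]
    return eval_(body, {**env, **dict(zip(params, args))})

print(eval_(["cons", ["quote", "a"], ["quote", ["b", "c"]]], {}))
# -> ['a', 'b', 'c']
```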
McCarthy didn't realize this Lisp could even be used to program computers till
his grad student Steve Russell suggested it. Russell translated McCarthy's
interpreter into IBM 704 machine language, and from that point Lisp started
also to be a programming language in the ordinary sense. But its origins as a
model of computation gave it a power and elegance that other languages
couldn't match. It was this that attracted me in college, though I didn't
understand why at the time.
McCarthy's 1960 Lisp did nothing more than interpret Lisp expressions. It was
missing a lot of things you'd want in a programming language. So these had to
be added, and when they were, they weren't defined using McCarthy's original
axiomatic approach. That wouldn't have been feasible at the time. McCarthy
tested his interpreter by hand-simulating the execution of programs. But it
was already getting close to the limit of interpreters you could test that
way; indeed, there was a bug in it that McCarthy had overlooked. To test a more
complicated interpreter, you'd have had to run it, and computers then weren't
powerful enough.
Now they are, though. Now you could continue using McCarthy's axiomatic
approach till you'd defined a complete programming language. And as long as
every change you made to McCarthy's Lisp was a discoveredness-preserving
transformation, you could, in principle, end up with a complete language that
had this quality. Harder to do than to talk about, of course, but if it was
possible in principle, why not try? So I decided to take a shot at it. It took
4 years, from March 26, 2015 to October 12, 2019. It was fortunate that I had
a precisely defined goal, or it would have been hard to keep at it for so
long.
I wrote this new Lisp, called _Bel_, in itself in Arc. That may sound like a
contradiction, but it's an indication of the sort of trickery I had to engage
in to make this work. By means of an egregious collection of hacks I managed
to make something close enough to an interpreter written in itself that it could
actually run. Not fast, but fast enough to test.
I had to ban myself from writing essays during most of this time, or I'd never
have finished. In late 2015 I spent 3 months writing essays, and when I went
back to working on Bel I could barely understand the code. Not so much because
it was badly written as because the problem is so convoluted. When you're
working on an interpreter written in itself, it's hard to keep track of what's
happening at what level, and errors can be practically encrypted by the time
you get them.
So I said no more essays till Bel was done. But I told few people about Bel
while I was working on it. So for years it must have seemed that I was doing
nothing, when in fact I was working harder than I'd ever worked on anything.
Occasionally after wrestling for hours with some gruesome bug I'd check
Twitter or HN and see someone asking "Does Paul Graham still code?"
Working on Bel was hard but satisfying. I worked on it so intensively that at
any given time I had a decent chunk of the code in my head and could write
more there. I remember taking the boys to the coast on a sunny day in 2015 and
figuring out how to deal with some problem involving continuations while I
watched them play in the tide pools. It felt like I was doing life right. I
remember that because I was slightly dismayed at how novel it felt. The good
news is that I had more moments like this over the next few years.
In the summer of 2016 we moved to England. We wanted our kids to see what it
was like living in another country, and since I was a British citizen by
birth, that seemed the obvious choice. We only meant to stay for a year, but
we liked it so much that we still live there. So most of Bel was written in
England.
In the fall of 2019, Bel was finally finished. Like McCarthy's original Lisp,
it's a spec rather than an implementation, although like McCarthy's Lisp it's
a spec expressed as code.
Now that I could write essays again, I wrote a bunch about topics I'd had
stacked up. I kept writing essays through 2020, but I also started to think
about other things I could work on. How should I choose what to do? Well, how
had I chosen what to work on in the past? I wrote an essay for myself to
answer that question, and I was surprised how long and messy the answer turned
out to be. If this surprised me, who'd lived it, then I thought perhaps it
would be interesting to other people, and encouraging to those with similarly
messy lives. So I wrote a more detailed version for others to read, and this
is the last sentence of it.
August 2020
Some politicians are proposing to introduce wealth taxes in addition to income
and capital gains taxes. Let's try modeling the effects of various levels of
wealth tax to see what they would mean in practice for a startup founder.
Suppose you start a successful startup in your twenties, and then live for
another 60 years. How much of your stock will a wealth tax consume?
If the wealth tax applies to all your assets, it's easy to calculate its
effect. A wealth tax of 1% means you get to keep 99% of your stock each year.
After 60 years the proportion of stock you'll have left will be .99^60, or
.547. So a straight 1% wealth tax means the government will over the course of
your life take 45% of your stock.
(Losing shares does not, obviously, mean becoming _net_ poorer unless the
value per share is increasing by less than the wealth tax rate.)
Here's how much stock the government would take over 60 years at various
levels of wealth tax:
| wealth tax | government takes |
|---|---|
| 0.1% | 6% |
| 0.5% | 26% |
| 1.0% | 45% |
| 2.0% | 70% |
| 3.0% | 84% |
| 4.0% | 91% |
| 5.0% | 95% |
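These numbers are easy to check: the fraction of stock you keep after 60 years at rate r is (1 - r)^60. A minimal sketch:

```python
# Fraction of stock taken over 60 years by a flat wealth tax at rate r.
for r in [0.001, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05]:
    kept = (1 - r) ** 60    # you keep (1 - r) of your shares each year
    print(f"{r:.1%}  government takes {1 - kept:.0%}")
```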
A wealth tax will usually have a threshold at which it starts. How much
difference would a high threshold make? To model that, we need to make some
assumptions about the initial value of your stock and the growth rate.
Suppose your stock is initially worth $2 million, and the company's trajectory
is as follows: the value of your stock grows 3x for 2 years, then 2x for 2
years, then 50% for 2 years, after which you just get a typical public company
growth rate, which we'll call 8%. Suppose the wealth tax threshold is $50
million. How much stock does the government take now?

| wealth tax | government takes |
|---|---|
| 0.1% | 5% |
| 0.5% | 23% |
| 1.0% | 41% |
| 2.0% | 65% |
| 3.0% | 79% |
| 4.0% | 88% |
| 5.0% | 93% |
It may at first seem surprising that such apparently small tax rates produce
such dramatic effects. A 2% wealth tax with a $50 million threshold takes
about two thirds of a successful founder's stock.
The reason wealth taxes have such dramatic effects is that they're applied
over and over to the same money. Income tax happens every year, but only to
that year's income. Whereas if you live for 60 years after acquiring some
asset, a wealth tax will tax that same asset 60 times. A wealth tax compounds.
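The mechanics of the threshold model aren't fully spelled out above, so the sketch below fills in assumptions of mine: the tax is assessed once a year after that year's growth, applies only to wealth above the threshold, and is paid by selling stock. Under those assumptions it comes out close to the second table.

```python
# Threshold model: $2M of stock growing 3x, 3x, 2x, 2x, 1.5x, 1.5x, then 8%/yr,
# with an annual wealth tax on the portion of wealth above the threshold.
# The assessment timing and "paid by selling stock" are assumptions of mine.
def government_take(rate, threshold=50e6, years=60):
    shares = 2e6                 # start with $2M of stock at $1 per share
    value_per_share = 1.0
    growth = [3, 3, 2, 2, 1.5, 1.5] + [1.08] * (years - 6)
    for g in growth:
        value_per_share *= g
        wealth = shares * value_per_share
        taxable = max(wealth - threshold, 0)
        shares -= rate * taxable / value_per_share   # sell shares to pay tax
    return 1 - shares / 2e6      # fraction of the original shares gone

for rate in [0.001, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05]:
    print(f"{rate:.1%}  government takes {government_take(rate):.0%}")
```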
**Note**
In practice, eventually some of this 8% would come in the form of
dividends, which are taxed as income at issue, so this model actually
represents the most optimistic case for the founder.
October 2020
One of the biggest things holding people back from doing great work is the
fear of making something lame. And this fear is not an irrational one. Many
great projects go through a stage early on where they don't seem very
impressive, even to their creators. You have to push through this stage to
reach the great work that lies beyond. But many people don't. Most people
don't even reach the stage of making something they're embarrassed by, let
alone continue past it. They're too frightened even to start.
Imagine if we could turn off the fear of making something lame. Imagine how
much more we'd do.
Is there any hope of turning it off? I think so. I think the habits at work
here are not very deeply rooted.
Making new things is itself a new thing for us as a species. It has always
happened, but till the last few centuries it happened so slowly as to be
invisible to individual humans. And since we didn't need customs for dealing
with new ideas, we didn't develop any.
We just don't have enough experience with early versions of ambitious projects
to know how to respond to them. We judge them as we would judge more finished
work, or less ambitious projects. We don't realize they're a special case.
Or at least, most of us don't. One reason I'm confident we can do better is
that it's already starting to happen. There are already a few places that are
living in the future in this respect. Silicon Valley is one of them: an
unknown person working on a strange-sounding idea won't automatically be
dismissed the way they would back home. In Silicon Valley, people have learned
how dangerous that is.
The right way to deal with new ideas is to treat them as a challenge to your
imagination: not just to have lower standards, but to _switch polarity_
entirely, from listing the reasons an idea won't work to trying to think of
ways it could. That's what I do when I meet people with new ideas. I've become
quite good at it, but I've had a lot of practice. Being a partner at Y
Combinator means being practically immersed in strange-sounding ideas proposed
by unknown people. Every six months you get thousands of new ones thrown at
you and have to sort through them, knowing that in a world with a power-law
distribution of outcomes, it will be painfully obvious if you miss the needle
in this haystack. Optimism becomes urgent.
But I'm hopeful that, with time, this kind of optimism can become widespread
enough that it becomes a social custom, not just a trick used by a few
specialists. It is after all an extremely lucrative trick, and those tend to
spread quickly.
Of course, inexperience is not the only reason people are too harsh on early
versions of ambitious projects. They also do it to seem clever. And in a field
where the new ideas are risky, like startups, those who dismiss them are in
fact more likely to be right. Just not when their predictions are _weighted by
outcome_.
But there is another more sinister reason people dismiss new ideas. If you try
something ambitious, many of those around you will hope, consciously or
unconsciously, that you'll fail. They worry that if you try something
ambitious and succeed, it will put you above them. In some countries this is
not just an individual failing but part of the national culture.
I wouldn't claim that people in Silicon Valley overcome these impulses because
they're morally better. The reason many hope you'll succeed is that they
hope to rise with you. For investors this incentive is particularly explicit.
They want you to succeed because they hope you'll make them rich in the
process. But many other people you meet can hope to benefit in some way from
your success. At the very least they'll be able to say, when you're famous,
that they've known you since way back.
But even if Silicon Valley's encouraging attitude is rooted in self-interest,
it has over time actually grown into a sort of benevolence. Encouraging
startups has been practiced for so long that it has become a custom. Now it
just seems that that's what one does with startups.
Maybe Silicon Valley is too optimistic. Maybe it's too easily fooled by
impostors. Many less optimistic journalists want to believe that. But the
lists of impostors they cite are suspiciously short, and plagued with
asterisks. If you use revenue as the test, Silicon Valley's optimism seems
better tuned than the rest of the world's. And because it works, it will
spread.
There's a lot more to new ideas than new startup ideas, of course. The fear of
making something lame holds people back in every field. But Silicon Valley
shows how quickly customs can evolve to support new ideas. And that in turn
proves that dismissing new ideas is not so deeply rooted in human nature that
it can't be unlearnt.
___________
Unfortunately, if you want to do new things, you'll face a force more powerful
than other people's skepticism: your own skepticism. You too will judge your
early work too harshly. How do you avoid that?
This is a difficult problem, because you don't want to completely eliminate
your horror of making something lame. That's what steers you toward doing good
work. You just want to turn it off temporarily, the way a painkiller
temporarily turns off pain.
People have already discovered several techniques that work. Hardy mentions
two in _A Mathematician's Apology_ :
> Good work is not done by "humble" men. It is one of the first duties of a
> professor, for example, in any subject, to exaggerate a little both the
> importance of his subject and his importance in it.
If you overestimate the importance of what you're working on, that will
compensate for your mistakenly harsh judgment of your initial results. If you
look at something that's 20% of the way to a goal worth 100 and conclude that
it's 10% of the way to a goal worth 200, your estimate of its expected value
is correct even though both components are wrong.
It also helps, as Hardy suggests, to be slightly overconfident. I've noticed
in many fields that the most successful people are slightly overconfident. On
the face of it this seems implausible. Surely it would be optimal to have
exactly the right estimate of one's abilities. How could it be an advantage to
be mistaken? Because this error compensates for other sources of error in the
opposite direction: being slightly overconfident armors you against both other
people's skepticism and your own.
Ignorance has a similar effect. It's safe to make the mistake of judging early
work as finished work if you're a sufficiently lax judge of finished work. I
doubt it's possible to cultivate this kind of ignorance, but empirically it's
a real advantage, especially for the young.
Another way to get through the lame phase of ambitious projects is to surround
yourself with the right people to create an eddy in the social headwind. But
it's not enough to collect people who are always encouraging. You'd learn to
discount that. You need colleagues who can actually tell an ugly duckling from
a baby swan. The people best able to do this are those working on similar
projects of their own, which is why university departments and research labs
work so well. You don't need institutions to collect colleagues. They
naturally coalesce, given the chance. But it's very much worth accelerating
this process by seeking out other people trying to do new things.
Teachers are in effect a special case of colleagues. It's a teacher's job both
to see the promise of early work and to encourage you to continue. But
teachers who are good at this are unfortunately quite rare, so if you have the
opportunity to learn from one, take it.
For some it might work to rely on sheer discipline: to tell yourself that you
just have to press on through the initial crap phase and not get discouraged.
But like a lot of "just tell yourself" advice, this is harder than it sounds.
And it gets still harder as you get older, because your standards rise. The
old do have one compensating advantage though: they've been through this
before.
It can help if you focus less on where you are and more on the rate of change.
You won't worry so much about doing bad work if you can see it improving.
Obviously the faster it improves, the easier this is. So when you start
something new, it's good if you can spend a lot of time on it. That's another
advantage of being young: you tend to have bigger blocks of time.
Another common trick is to start by considering new work to be of a different,
less exacting type. To start a painting saying that it's just a sketch, or a
new piece of software saying that it's just a quick hack. Then you judge your
initial results by a lower standard. Once the project is rolling you can
sneakily convert it to something more.
This will be easier if you use a medium that lets you work fast and doesn't
require too much commitment up front. It's easier to convince yourself that
something is just a sketch when you're drawing in a notebook than when you're
carving stone. Plus you get initial results faster.
It will be easier to try out a risky project if you think of it as a way to
learn and not just as a way to make something. Then even if the project truly
is a failure, you'll still have gained by it. If the problem is sharply enough
defined, failure itself is knowledge: if the theorem you're trying to prove
turns out to be false, or you use a structural member of a certain size and it
fails under stress, you've learned something, even if it isn't what you wanted
to learn.
One motivation that works particularly well for me is curiosity. I like to try
new things just to see how they'll turn out. We started Y Combinator in this
spirit, and it was one of the main things that kept me going while I was working
on _Bel_. Having worked for so long with various dialects of Lisp, I was very
curious to see what its inherent shape was: what you'd end up with if you
followed the axiomatic approach all the way.
But it's a bit strange that you have to play mind games with yourself to avoid
being discouraged by lame-looking early efforts. The thing you're trying to
trick yourself into believing is in fact the truth. A lame-looking early
version of an ambitious project truly is more valuable than it seems. So the
ultimate solution may be to teach yourself that.
One way to do it is to study the histories of people who've done great work.
What were they thinking early on? What was the very first thing they did? It
can sometimes be hard to get an accurate answer to this question, because
people are often embarrassed by their earliest work and make little effort to
publish it. (They too misjudge it.) But when you can get an accurate picture
of the first steps someone made on the path to some great work, they're often
pretty feeble.
Perhaps if you study enough such cases, you can teach yourself to be a better
judge of early work. Then you'll be immune both to other people's skepticism
and your own fear of making something lame. You'll see early work for what it
is.
Curiously enough, the solution to the problem of judging early work too
harshly is to realize that our attitudes toward it are themselves early work.
Holding everything to the same standard is a crude version 1. We're already
evolving better customs, and we can already see signs of how big the payoff
will be.
March 2021
The secret curse of the nonprofit world is restricted donations. If you
haven't been involved with nonprofits, you may never have heard this phrase
before. But if you have been, it probably made you wince.
Restricted donations mean donations where the donor limits what can be done
with the money. This is common with big donations, perhaps the default. And
yet it's usually a bad idea. Usually the way the donor wants the money spent
is not the way the nonprofit would have chosen. Otherwise there would have
been no need to restrict the donation. But who has a better understanding of
where money needs to be spent, the nonprofit or the donor?
If a nonprofit doesn't understand better than its donors where money needs to
be spent, then it's incompetent and you shouldn't be donating to it at all.
Which means a restricted donation is inherently suboptimal. It's either a
donation to a bad nonprofit, or a donation for the wrong things.
There are a couple exceptions to this principle. One is when the nonprofit is
an umbrella organization. It's reasonable to make a restricted donation to a
university, for example, because a university is only nominally a single
nonprofit. Another exception is when the donor actually does know as much as
the nonprofit about where money needs to be spent. The Gates Foundation, for
example, has specific goals and often makes restricted donations to individual
nonprofits to accomplish them. But unless you're a domain expert yourself or
donating to an umbrella organization, your donation would do more good if it
were unrestricted.
If restricted donations do less good than unrestricted ones, why do donors so
often make them? Partly because doing good isn't donors' only motive. They
often have other motives as well: to make a mark, or to generate good
publicity, or to comply with regulations or corporate policies. Many
donors may simply never have considered the distinction between restricted and
unrestricted donations. They may believe that donating money for some specific
purpose is just how donation works. And to be fair, nonprofits don't try very
hard to discourage such illusions. They can't afford to. People running
nonprofits are almost always anxious about money. They can't afford to talk
back to big donors.
You can't expect candor in a relationship so asymmetric. So I'll tell you what
nonprofits wish they could tell you. If you want to donate to a nonprofit,
donate unrestricted. If you trust them to spend your money, trust them to
decide how.
**Note**
Unfortunately restricted donations tend to generate more publicity than
unrestricted ones. "X donates money to build a school in Africa" is not only
more interesting than "X donates money to Y nonprofit to spend as Y chooses,"
but also focuses more attention on X.
**Thanks** to Chase Adam, Ingrid Bassett, Trevor Blackwell, and Edith Elliot
for reading drafts of this.
March 2024
_(This is a talk I gave to 14 and 15 year olds about what to do now if they
might want to start a startup later. Lots of schools think they should tell
students something about startups. This is what I think they should tell
them.)_
Most of you probably think that when you're released into the so-called real
world you'll eventually have to get some kind of job. That's not true, and
today I'm going to talk about a trick you can use to avoid ever having to get
a job.
The trick is to start your own company. So it's not a trick for avoiding
_work_ , because if you start your own company you'll work harder than you
would if you had an ordinary job. But you will avoid many of the annoying
things that come with a job, including a boss telling you what to do.
It's more exciting to work on your own project than someone else's. And you
can also get a lot richer. In fact, this is the standard way to get _really
rich_. If you look at the lists of the richest people that occasionally get
published in the press, nearly all of them did it by starting their own
companies.
Starting your own company can mean anything from starting a barber shop to
starting Google. I'm here to talk about one extreme end of that continuum. I'm
going to tell you how to start Google.
The companies at the Google end of the continuum are called startups when
they're young. The reason I know about them is that my wife Jessica and I
started something called Y Combinator that is basically a startup factory.
Since 2005, Y Combinator has funded over 4000 startups. So we know exactly
what you need to start a startup, because we've helped people do it for the
last 19 years.
You might have thought I was joking when I said I was going to tell you how to
start Google. You might be thinking "How could _we_ start Google?" But that's
effectively what the people who did start Google were thinking before they
started it. If you'd told Larry Page and Sergey Brin, the founders of Google,
that the company they were about to start would one day be worth over a
trillion dollars, their heads would have exploded.
All you can know when you start working on a startup is that it seems worth
pursuing. You can't know whether it will turn into a company worth billions or
one that goes out of business. So when I say I'm going to tell you how to
start Google, I mean I'm going to tell you how to get to the point where you
can start a company that has as much chance of being Google as Google had of
being Google.
How do you get from where you are now to the point where you can start a
successful startup? You need three things. You need to be good at some kind of
technology, you need an idea for what you're going to build, and you need
cofounders to start the company with.
How do you get good at technology? And how do you choose which technology to
get good at? Both of those questions turn out to have the same answer: work on
your own projects. Don't try to guess whether gene editing or LLMs or rockets
will turn out to be the most valuable technology to know about. No one can
predict that. Just work on whatever interests you the most. You'll work much
harder on something you're interested in than something you're doing because
you think you're supposed to.
If you're not sure what technology to get good at, get good at programming.
That has been the source of the median startup for the last 30 years, and this
is probably not going to change in the next 10.
Those of you who are taking computer science classes in school may at this
point be thinking, ok, we've got this sorted. We're already being taught all
about programming. But sorry, this is not enough. You have to be working on
your own projects, not just learning stuff in classes. You can do well in
computer science classes without ever really learning to program. In fact you
can graduate with a degree in computer science from a top university and still
not be any good at programming. That's why tech companies all make you take a
coding test before they'll hire you, regardless of where you went to
university or how well you did there. They know grades and exam results prove
nothing.
If you really want to learn to program, you have to work on your own projects.
You learn so much faster that way. Imagine you're writing a game and there's
something you want to do in it, and you don't know how. You're going to figure
out how a lot faster than you'd learn anything in a class.
You don't have to learn programming, though. If you're wondering what counts
as technology, it includes practically everything you could describe using the
words "make" or "build." So welding would count, or making clothes, or making
videos. Whatever you're most interested in. The critical distinction is
whether you're producing or just consuming. Are you writing computer games, or
just playing them? That's the cutoff.
Steve Jobs, the founder of Apple, spent time when he was a teenager studying
calligraphy — the sort of beautiful writing that you see in medieval
manuscripts. No one, including him, thought that this would help him in his
career. He was just doing it because he was interested in it. But it turned
out to help him a lot. The computer that made Apple really big, the Macintosh,
came out at just the moment when computers got powerful enough to make letters
like the ones in printed books instead of the computery-looking letters you
see in 8-bit games. Apple destroyed everyone else at this, and one reason was
that Steve was one of the few people in the computer business who really got
graphic design.
Don't feel like your projects have to be _serious_. They can be as frivolous
as you like, so long as you're building things you're excited about. Probably
90% of programmers start out building games. They and their friends like to
play games. So they build the kind of things they and their friends want. And
that's exactly what you should be doing at 15 if you want to start a startup
one day.
You don't have to do just one project. In fact it's good to learn about
multiple things. Steve Jobs didn't just learn calligraphy. He also learned
about electronics, which was even more valuable. Whatever you're interested
in. (Do you notice a theme here?)
So that's the first of the three things you need, to get good at some kind or
kinds of technology. You do it the same way you get good at the violin or
football: practice. If you start a startup at 22, and you start writing your
own programs now, then by the time you start the company you'll have spent at
least 7 years practicing writing code, and you can get pretty good at anything
after practicing it for 7 years.
Let's suppose you're 22 and you've succeeded: You're now really good at some
technology. How do you get _startup ideas_? It might seem like that's the hard
part. Even if you are a good programmer, how do you get the idea to start
Google?
Actually it's easy to get startup ideas once you're good at technology. Once
you're good at some technology, when you look at the world you see dotted
outlines around the things that are missing. You start to be able to see both
the things that are missing from the technology itself, and all the broken
things that could be fixed using it, and each one of these is a potential
startup.
In the town near our house there's a shop with a sign warning that the door is
hard to close. The sign has been there for several years. To the people in the
shop it must seem like this mysterious natural phenomenon that the door
sticks, and all they can do is put up a sign warning customers about it. But
any carpenter looking at this situation would think "why don't you just plane
off the part that sticks?"
Once you're good at programming, all the missing software in the world starts
to become as obvious as a sticking door to a carpenter. I'll give you a real
world example. Back in the 20th century, American universities used to publish
printed directories with all the students' names and contact info. When I tell
you what these directories were called, you'll know which startup I'm talking
about. They were called facebooks, because they usually had a picture of each
student next to their name.
So Mark Zuckerberg shows up at Harvard in 2002, and the university still
hasn't gotten the facebook online. Each individual house has an online
facebook, but there isn't one for the whole university. The university
administration has been diligently having meetings about this, and will
probably have solved the problem in another decade or so. Most of the students
don't consciously notice that anything is wrong. But Mark is a programmer. He
looks at this situation and thinks "Well, this is stupid. I could write a
program to fix this in one night. Just let people upload their own photos and
then combine the data into a new site for the whole university." So he does.
And almost literally overnight he has thousands of users.
Of course Facebook was not a startup yet. It was just a... project. There's
that word again. Projects aren't just the best way to learn about technology.
They're also the best source of startup ideas.
Facebook was not unusual in this respect. Apple and Google also began as
projects. Apple wasn't meant to be a company. Steve Wozniak just wanted to
build his own computer. It only turned into a company when Steve Jobs said
"Hey, I wonder if we could sell plans for this computer to other people."
That's how Apple started. They weren't even selling computers, just plans for
computers. Can you imagine how lame this company seemed?
Ditto for Google. Larry and Sergey weren't trying to start a company at first.
They were just trying to make search better. Before Google, most search
engines didn't try to sort the results they gave you in order of importance.
If you searched for "rugby" they just gave you every web page that contained
the word "rugby." And the web was so small in 1997 that this actually worked!
Kind of. There might only be 20 or 30 pages with the word "rugby," but the web
was growing exponentially, which meant this way of doing search was becoming
exponentially more broken. Most users just thought, "Wow, I sure have to look
through a lot of search results to find what I want." Door sticks. But like
Mark, Larry and Sergey were programmers. Like Mark, they looked at this
situation and thought "Well, this is stupid. Some pages about rugby matter
more than others. Let's figure out which those are and show them first."
It's obvious in retrospect that this was a great idea for a startup. It wasn't
obvious at the time. It's never obvious. If it was obviously a good idea to
start Apple or Google or Facebook, someone else would have already done it.
That's why the best startups grow out of projects that aren't meant to be
startups. You're not trying to start a company. You're just following your
instincts about what's interesting. And if you're young and good at
technology, then your unconscious instincts about what's interesting are
better than your conscious ideas about what would be a good company.
So it's critical, if you're a young founder, to build things for yourself and
your friends to use. The biggest mistake young founders make is to build
something for some mysterious group of other people. But if you can make
something that you and your friends truly want to use — something your friends
aren't just using out of loyalty to you, but would be really sad to lose if
you shut it down — then you almost certainly have the germ of a good startup
idea. It may not seem like a startup to you. It may not be obvious how to make
money from it. But trust me, there's a way.
What you need in a startup idea, and all you need, is something your friends
actually want. And those ideas aren't hard to see once you're good at
technology. There are sticking doors everywhere.
Now for the third and final thing you need: a cofounder, or cofounders. The
optimal startup has two or three founders, so you need one or two cofounders.
How do you find them? Can you predict what I'm going to say next? It's the
same thing: projects. You find cofounders by working on projects with them.
What you need in a cofounder is someone who's good at what they do and that
you work well with, and the only way to judge this is to work with them on
things.
At this point I'm going to tell you something you might not want to hear. It
really matters to do well in your classes, even the ones that are just
memorization or blathering about literature, because you need to do well in
your classes to get into a good university. And if you want to start a startup
you should try to get into the best university you can, because that's where
the best cofounders are. It's also where the best employees are. When Larry
and Sergey started Google, they began by just hiring all the smartest people
they knew out of Stanford, and this was a real advantage for them.
The empirical evidence is clear on this. If you look at where the largest
numbers of successful startups come from, it's pretty much the same as the
list of the most selective universities.
I don't think it's the prestigious names of these universities that cause more
good startups to come out of them. Nor do I think it's because the quality of
the teaching is better. What's driving this is simply the difficulty of
getting in. You have to be pretty smart and determined to get into MIT or
Cambridge, so if you do manage to get in, you'll find the other students
include a lot of smart and determined people.
You don't have to start a startup with someone you meet at university. The
founders of Twitch met when they were seven. The founders of Stripe, Patrick
and John Collison, met when John was born. But universities are the main
source of cofounders. And because they're where the cofounders are, they're
also where the ideas are, because the best ideas grow out of projects you do
with the people who become your cofounders.
So the list of what you need to do to get from here to starting a startup is
quite short. You need to get good at technology, and the way to do that is to
work on your own projects. And you need to do as well in school as you can, so
you can get into a good university, because that's where the cofounders and
the ideas are.
That's it, just two things, build stuff and do well in school.
March 2007
_(This essay is derived from talks at the 2007 Startup School and the
Berkeley CSUA.)_
We've now been doing Y Combinator long enough to have some data about success
rates. Our first batch, in the summer of 2005, had eight startups in it. Of
those eight, it now looks as if at least four succeeded. Three have been
acquired: Reddit was a merger of two, Reddit and Infogami, and a third was
acquired that we can't talk about yet. Another from that batch was Loopt,
which is doing so well they could probably be acquired in about ten minutes if
they wanted to.
So about half the founders from that first summer, less than two years ago,
are now rich, at least by their standards. (One thing you learn when you get
rich is that there are many degrees of it.)
I'm not ready to predict our success rate will stay as high as 50%. That first
batch could have been an anomaly. But we should be able to do better than the
oft-quoted (and probably made up) standard figure of 10%. I'd feel safe aiming
at 25%.
Even the founders who fail don't seem to have such a bad time. Of those first
eight startups, three are now probably dead. In two cases the founders just
went on to do other things at the end of the summer. I don't think they were
traumatized by the experience. The closest to a traumatic failure was Kiko,
whose founders kept working on their startup for a whole year before being
squashed by Google Calendar. But they ended up happy. They sold their software
on eBay for a quarter of a million dollars. After they paid back their angel
investors, they had about a year's salary each. Then they immediately went
on to start a new and much more exciting startup, Justin.TV.
So here is an even more striking statistic: 0% of that first batch had a
terrible experience. They had ups and downs, like every startup, but I don't
think any would have traded it for a job in a cubicle. And that statistic is
probably not an anomaly. Whatever our long-term success rate ends up being, I
think the rate of people who wish they'd gotten a regular job will stay close
to 0%.
The big mystery to me is: why don't more people start startups? If nearly
everyone who does it prefers it to a regular job, and a significant percentage
get rich, why doesn't everyone want to do this? A lot of people think we get
thousands of applications for each funding cycle. In fact we usually only get
several hundred. Why don't more people apply? And while it must seem to anyone
watching this world that startups are popping up like crazy, the number is
small compared to the number of people with the necessary skills. The great
majority of programmers still go straight from college to cubicle, and stay
there.
It seems like people are not acting in their own interest. What's going on?
Well, I can answer that. Because of Y Combinator's position at the very start
of the venture funding process, we're probably the world's leading experts on
the psychology of people who aren't sure if they want to start a company.
There's nothing wrong with being unsure. If you're a hacker thinking about
starting a startup and hesitating before taking the leap, you're part of a
grand tradition. Larry and Sergey seem to have felt the same before they
started Google, and so did Jerry and Filo before they started Yahoo. In fact,
I'd guess the most successful startups are the ones started by uncertain
hackers rather than gung-ho business guys.
We have some evidence to support this. Several of the most successful startups
we've funded told us later that they only decided to apply at the last moment.
Some decided only hours before the deadline.
The way to deal with uncertainty is to analyze it into components. Most people
who are reluctant to do something have about eight different reasons mixed
together in their heads, and don't know themselves which are biggest. Some
will be justified and some bogus, but unless you know the relative proportion
of each, you don't know whether your overall uncertainty is mostly justified
or mostly bogus.
So I'm going to list all the components of people's reluctance to start
startups, and explain which are real. Then would-be founders can use this as a
checklist to examine their own feelings.
I admit my goal is to increase your self-confidence. But there are two things
different here from the usual confidence-building exercise. One is that I'm
motivated to be honest. Most people in the confidence-building business have
already achieved their goal when you buy the book or pay to attend the seminar
where they tell you how great you are. Whereas if I encourage people to start
startups who shouldn't, I make my own life worse. If I encourage too many
people to apply to Y Combinator, it just means more work for me, because I
have to read all the applications.
The other thing that's going to be different is my approach. Instead of being
positive, I'm going to be negative. Instead of telling you "come on, you can
do it" I'm going to consider all the reasons you aren't doing it, and show why
most (but not all) should be ignored. We'll start with the one everyone's born
with.
**1\. Too young**
A lot of people think they're too young to start a startup. Many are right.
The median age worldwide is about 27, so probably a third of the population
can truthfully say they're too young.
What's too young? One of our goals with Y Combinator was to discover the lower
bound on the age of startup founders. It always seemed to us that investors
were too conservative here—that they wanted to fund professors, when really
they should be funding grad students or even undergrads.
The main thing we've discovered from pushing the edge of this envelope is not
where the edge is, but how fuzzy it is. The outer limit may be as low as 16.
We don't look below 18 because people younger than that can't legally enter
into contracts. But the most successful founder we've funded so far, Sam
Altman, was 19 at the time.
Sam Altman, however, is an outlying data point. When he was 19, he seemed like
he had a 40 year old inside him. There are other 19 year olds who are 12
inside.
There's a reason we have a distinct word "adult" for people over a certain
age. There is a threshold you cross. It's conventionally fixed at 21, but
different people cross it at greatly varying ages. You're old enough to start
a startup if you've crossed this threshold, whatever your age.
How do you tell? There are a couple tests adults use. I realized these tests
existed after meeting Sam Altman, actually. I noticed that I felt like I was
talking to someone much older. Afterward I wondered, what am I even measuring?
What made him seem older?
One test adults use is whether you still have the kid flake reflex. When
you're a little kid and you're asked to do something hard, you can cry and say
"I can't do it" and the adults will probably let you off. As a kid there's a
magic button you can press by saying "I'm just a kid" that will get you out of
most difficult situations. Whereas adults, by definition, are not allowed to
flake. They still do, of course, but when they do they're ruthlessly pruned.
The other way to tell an adult is by how they react to a challenge. Someone
who's not yet an adult will tend to respond to a challenge from an adult in a
way that acknowledges their dominance. If an adult says "that's a stupid
idea," a kid will either crawl away with his tail between his legs, or rebel.
But rebelling presumes inferiority as much as submission. The adult response
to "that's a stupid idea," is simply to look the other person in the eye and
say "Really? Why do you think so?"
There are a lot of adults who still react childishly to challenges, of course.
What you don't often find are kids who react to challenges like adults. When
you do, you've found an adult, whatever their age.
**2\. Too inexperienced**
I once wrote that startup founders should be at least 23, and that people
should work for another company for a few years before starting their own. I
no longer believe that, and what changed my mind is the example of the
startups we've funded.
I still think 23 is a better age than 21. But the best way to get experience
if you're 21 is to start a startup. So, paradoxically, if you're too
inexperienced to start a startup, what you should do is start one. That's a
way more efficient cure for inexperience than a normal job. In fact, getting a
normal job may actually make you less able to start a startup, by turning you
into a tame animal who thinks he needs an office to work in and a product
manager to tell him what software to write.
What really convinced me of this was the Kikos. They started a startup right
out of college. Their inexperience caused them to make a lot of mistakes. But
by the time we funded their second startup, a year later, they had become
extremely formidable. They were certainly not tame animals. And there is no
way they'd have grown so much if they'd spent that year working at Microsoft,
or even Google. They'd still have been diffident junior programmers.
So now I'd advise people to go ahead and start startups right out of college.
There's no better time to take risks than when you're young. Sure, you'll
probably fail. But even failure will get you to the ultimate goal faster than
getting a job.
It worries me a bit to be saying this, because in effect we're advising people
to educate themselves by failing at our expense, but it's the truth.
**3\. Not determined enough**
You need a lot of determination to succeed as a startup founder. It's probably
the single best predictor of success.
Some people may not be determined enough to make it. It's hard for me to say
for sure, because I'm so determined that I can't imagine what's going on in
the heads of people who aren't. But I know they exist.
Most hackers probably underestimate their determination. I've seen a lot
become visibly more determined as they get used to running a startup. I can
think of several we've funded who would have been delighted at first to be
bought for $2 million, but are now set on world domination.
How can you tell if you're determined enough, when Larry and Sergey themselves
were unsure at first about starting a company? I'm guessing here, but I'd say
the test is whether you're sufficiently driven to work on your own projects.
Though they may have been unsure whether they wanted to start a company, it
doesn't seem as if Larry and Sergey were meek little research assistants,
obediently doing their advisors' bidding. They started projects of their own.
**4\. Not smart enough**
You may need to be moderately smart to succeed as a startup founder. But if
you're worried about this, you're probably mistaken. If you're smart enough to
worry that you might not be smart enough to start a startup, you probably are.
And in any case, starting a startup just doesn't require that much
intelligence. Some startups do. You have to be good at math to write
Mathematica. But most companies do more mundane stuff where the decisive
factor is effort, not brains. Silicon Valley can warp your perspective on
this, because there's a cult of smartness here. People who aren't smart at
least try to act that way. But if you think it takes a lot of intelligence to
get rich, try spending a couple days in some of the fancier bits of New York
or LA.
If you don't think you're smart enough to start a startup doing something
technically difficult, just write enterprise software. Enterprise software
companies aren't technology companies, they're sales companies, and sales
depends mostly on effort.
**5\. Know nothing about business**
This is another variable whose coefficient should be zero. You don't need to
know anything about business to start a startup. The initial focus should be
the product. All you need to know in this phase is how to build things people
want. If you succeed, you'll have to think about how to make money from it.
But this is so easy you can pick it up on the fly.
I get a fair amount of flak for telling founders just to make something great
and not worry too much about making money. And yet all the empirical evidence
points that way: pretty much 100% of startups that make something popular
manage to make money from it. And acquirers tell me privately that revenue is
not what they buy startups for, but their strategic value. Which means,
because they made something people want. Acquirers know the rule holds for
them too: if users love you, you can always make money from that somehow, and
if they don't, the cleverest business model in the world won't save you.
So why do so many people argue with me? I think one reason is that they hate
the idea that a bunch of twenty year olds could get rich from building
something cool that doesn't make any money. They just don't want that to be
possible. But how possible it is doesn't depend on how much they want it to
be.
For a while it annoyed me to hear myself described as some kind of
irresponsible pied piper, leading impressionable young hackers down the road
to ruin. But now I realize this kind of controversy is a sign of a good idea.
The most valuable truths are the ones most people don't believe. They're like
undervalued stocks. If you start with them, you'll have the whole field to
yourself. So when you find an idea you know is good but most people disagree
with, you should not merely ignore their objections, but push aggressively in
that direction. In this case, that means you should seek out ideas that would
be popular but seem hard to make money from.
We'll bet a seed round you can't make something popular that we can't figure
out how to make money from.
**6\. No cofounder**
Not having a cofounder is a real problem. A startup is too much for one person
to bear. And though we differ from other investors on a lot of questions, we
all agree on this. All investors, without exception, are more likely to fund
you with a cofounder than without.
We've funded two single founders, but in both cases we suggested their first
priority should be to find a cofounder. Both did. But we'd have preferred them
to have cofounders before they applied. It's not super hard to get a cofounder
for a project that's just been funded, and we'd rather have cofounders
committed enough to sign up for something super hard.
If you don't have a cofounder, what should you do? Get one. It's more
important than anything else. If there's no one where you live who wants to
start a startup with you, move where there are people who do. If no one wants
to work with you on your current idea, switch to an idea people want to work
on.
If you're still in school, you're surrounded by potential cofounders. A few
years out it gets harder to find them. Not only do you have a smaller pool to
draw from, but most already have jobs, and perhaps even families to support.
So if you had friends in college you used to scheme about startups with, stay
in touch with them as well as you can. That may help keep the dream alive.
It's possible you could meet a cofounder through something like a users' group
or a conference. But I wouldn't be too optimistic. You need to work with
someone to know whether you want them as a cofounder.
The real lesson to draw from this is not how to find a cofounder, but that you
should start startups when you're young and there are lots of them around.
**7\. No idea**
In a sense, it's not a problem if you don't have a good idea, because most
startups change their idea anyway. In the average Y Combinator startup, I'd
guess 70% of the idea is new at the end of the first three months. Sometimes
it's 100%.
In fact, we're so sure the founders are more important than the initial idea
that we're going to try something new this funding cycle. We're going to let
people apply with no idea at all. If you want, you can answer the question on
the application form that asks what you're going to do with "We have no idea."
If you seem really good we'll accept you anyway. We're confident we can sit
down with you and cook up some promising project.
Really this just codifies what we do already. We put little weight on the
idea. We ask mainly out of politeness. The kind of question on the application
form that we really care about is the one where we ask what cool things you've
made. If what you've made is version one of a promising startup, so much the
better, but the main thing we care about is whether you're good at making
things. Being lead developer of a popular open source project counts almost as
much.
That solves the problem if you get funded by Y Combinator. What about in the
general case? Because in another sense, it is a problem if you don't have an
idea. If you start a startup with no idea, what do you do next?
So here's the brief recipe for getting startup ideas. Find something that's
missing in your own life, and supply that need—no matter how specific to you
it seems. Steve Wozniak built himself a computer; who knew so many other
people would want them? A need that's narrow but genuine is a better starting
point than one that's broad but hypothetical. So even if the problem is simply
that you don't have a date on Saturday night, if you can think of a way to fix
that by writing software, you're onto something, because a lot of other people
have the same problem.
**8\. No room for more startups**
A lot of people look at the ever-increasing number of startups and think "this
can't continue." Implicit in their thinking is a fallacy: that there is some
limit on the number of startups there could be. But this is false. No one
claims there's any limit on the number of people who can work for salary at
1000-person companies. Why should there be any limit on the number who can
work for equity at 5-person companies?
Nearly everyone who works is satisfying some kind of need. Breaking up
companies into smaller units doesn't make those needs go away. Existing needs
would probably get satisfied more efficiently by a network of startups than by
a few giant, hierarchical organizations, but I don't think that would mean
less opportunity, because satisfying current needs would lead to more.
Certainly this tends to be the case in individuals. Nor is there anything
wrong with that. We take for granted things that medieval kings would have
considered effeminate luxuries, like whole buildings heated to spring
temperatures year round. And if things go well, our descendants will take for
granted things we would consider shockingly luxurious. There is no absolute
standard for material wealth. Health care is a component of it, and that alone
is a black hole. For the foreseeable future, people will want ever more
material wealth, so there is no limit to the amount of work available for
companies, and for startups in particular.
Usually the limited-room fallacy is not expressed directly. Usually it's
implicit in statements like "there are only so many startups Google,
Microsoft, and Yahoo can buy." Maybe, though the list of acquirers is a lot
longer than that. And whatever you think of other acquirers, Google is not
stupid. The reason big companies buy startups is that they've created
something valuable. And why should there be any limit to the number of
valuable startups companies can acquire, any more than there is a limit to the
amount of wealth individual people want? Maybe there would be practical limits
on the number of startups any one acquirer could assimilate, but if there is
value to be had, in the form of upside that founders are willing to forgo in
return for an immediate payment, acquirers will evolve to consume it. Markets
are pretty smart that way.
**9\. Family to support**
This one is real. I wouldn't advise anyone with a family to start a startup.
I'm not saying it's a bad idea, just that I don't want to take responsibility
for advising it. I'm willing to take responsibility for telling 22 year olds
to start startups. So what if they fail? They'll learn a lot, and that job at
Microsoft will still be waiting for them if they need it. But I'm not prepared
to cross moms.
What you can do, if you have a family and want to start a startup, is start a
consulting business you can then gradually turn into a product business.
Empirically the chances of pulling that off seem very small. You're never
going to produce Google this way. But at least you'll never be without an
income.
Another way to decrease the risk is to join an existing startup instead of
starting your own. Being one of the first employees of a startup is a lot like
being a founder, in both the good ways and the bad. You'll be roughly 1/n^2
founder, where n is your employee number.
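To make that rough heuristic concrete, here's a minimal sketch in Python. (The 1/n^2 rule is a back-of-the-envelope guess, not a law; the numbers below are only meant to show how steeply the curve falls off.)

    # A rough illustration of the 1/n^2 heuristic: how much of a
    # "founder" each early employee is, by employee number n.
    def founder_fraction(n: int) -> float:
        return 1 / n ** 2

    for n in [1, 2, 3, 5, 10]:
        print(f"employee #{n}: ~{founder_fraction(n):.0%} of a founder")
    # employee #1: ~100% of a founder
    # employee #2: ~25% of a founder
    # employee #3: ~11% of a founder
    # employee #5: ~4% of a founder
    # employee #10: ~1% of a founder

By this measure even employee #5 gets only a few percent of the founder experience, which is why joining early is a way to decrease the risk, not eliminate it.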
As with the question of cofounders, the real lesson here is to start startups
when you're young.
**10\. Independently wealthy**
This is my excuse for not starting a startup. Startups are stressful. Why do
it if you don't need the money? For every "serial entrepreneur," there are
probably twenty sane ones who think "Start another company? Are you crazy?"
I've come close to starting new startups a couple times, but I always pull
back because I don't want four years of my life to be consumed by random
schleps. I know this business well enough to know you can't do it half-
heartedly. What makes a good startup founder so dangerous is his willingness
to endure infinite schleps.
There is a bit of a problem with retirement, though. Like a lot of people, I
like to work. And one of the many weird little problems you discover when you
get rich is that a lot of the interesting people you'd like to work with are
not rich. They need to work at something that pays the bills. Which means if
you want to have them as colleagues, you have to work at something that pays
the bills too, even though you don't need to. I think this is what drives a
lot of serial entrepreneurs, actually.
That's why I love working on Y Combinator so much. It's an excuse to work on
something interesting with people I like.
**11\. Not ready for commitment**
This was my reason for not starting a startup for most of my twenties. Like a
lot of people that age, I valued freedom most of all. I was reluctant to do
anything that required a commitment of more than a few months. Nor would I
have wanted to do anything that completely took over my life the way a startup
does. And that's fine. If you want to spend your time travelling around, or
playing in a band, or whatever, that's a perfectly legitimate reason not to
start a company.
If you start a startup that succeeds, it's going to consume at least three or
four years. (If it fails, you'll be done a lot quicker.) So you shouldn't do
it if you're not ready for commitments on that scale. Be aware, though, that
if you get a regular job, you'll probably end up working there for as long as
a startup would take, and you'll find you have much less spare time than you
might expect. So if you're ready to clip on that ID badge and go to that
orientation session, you may also be ready to start that startup.
**12\. Need for structure**
I'm told there are people who need structure in their lives. This seems to be
a nice way of saying they need someone to tell them what to do. I believe such
people exist. There's plenty of empirical evidence: armies, religious cults,
and so on. They may even be the majority.
If you're one of these people, you probably shouldn't start a startup. In
fact, you probably shouldn't even go to work for one. In a good startup, you
don't get told what to do very much. There may be one person whose job title
is CEO, but till the company has about twelve people no one should be telling
anyone what to do. That's too inefficient. Each person should just do what
they need to without anyone telling them.
If that sounds like a recipe for chaos, think about a soccer team. Eleven
people manage to work together in quite complicated ways, and yet only in
occasional emergencies does anyone tell anyone else what to do. A reporter
once asked David Beckham if there were any language problems at Real Madrid,
since the players were from about eight different countries. He said it was
never an issue, because everyone was so good they never had to talk. They all
just did the right thing.
How do you tell if you're independent-minded enough to start a startup? If
you'd bristle at the suggestion that you aren't, then you probably are.
**13\. Fear of uncertainty**
Perhaps some people are deterred from starting startups because they don't
like the uncertainty. If you go to work for Microsoft, you can predict fairly
accurately what the next few years will be like—all too accurately, in fact.
If you start a startup, anything might happen.
Well, if you're troubled by uncertainty, I can solve that problem for you: if
you start a startup, it will probably fail. Seriously, though, this is not a
bad way to think about the whole experience. Hope for the best, but expect the
worst. In the worst case, it will at least be interesting. In the best case
you might get rich.
No one will blame you if the startup tanks, so long as you made a serious
effort. There may once have been a time when employers would regard that as a
mark against you, but they wouldn't now. I asked managers at big companies,
and they all said they'd prefer to hire someone who'd tried to start a startup
and failed over someone who'd spent the same time working at a big company.
Nor will investors hold it against you, as long as you didn't fail out of
laziness or incurable stupidity. I'm told there's a lot of stigma attached to
failing in other places—in Europe, for example. Not here. In America,
companies, like practically everything else, are disposable.
**14\. Don't realize what you're avoiding**
One reason people who've been out in the world for a year or two make better
founders than people straight from college is that they know what they're
avoiding. If their startup fails, they'll have to get a job, and they know how
much jobs suck.
If you've had summer jobs in college, you may think you know what jobs are
like, but you probably don't. Summer jobs at technology companies are not real
jobs. If you get a summer job as a waiter, that's a real job. Then you have to
carry your weight. But software companies don't hire students for the summer
as a source of cheap labor. They do it in the hope of recruiting them when
they graduate. So while they're happy if you produce, they don't expect you
to.
That will change if you get a real job after you graduate. Then you'll have to
earn your keep. And since most of what big companies do is boring, you're
going to have to work on boring stuff. Easy, compared to college, but boring.
At first it may seem cool to get paid for doing easy stuff, after paying to do
hard stuff in college. But that wears off after a few months. Eventually it
gets demoralizing to work on dumb stuff, even if it's easy and you get paid a
lot.
And that's not the worst of it. The thing that really sucks about having a
regular job is the expectation that you're supposed to be there at certain
times. Even Google is afflicted with this, apparently. And what this means, as
everyone who's had a regular job can tell you, is that there are going to be
times when you have absolutely no desire to work on anything, and you're going
to have to go to work anyway and sit in front of your screen and pretend to.
To someone who likes work, as most good hackers do, this is torture.
In a startup, you skip all that. There's no concept of office hours in most
startups. Work and life just get mixed together. But the good thing about that
is that no one minds if you have a life at work. In a startup you can do
whatever you want most of the time. If you're a founder, what you want to do
most of the time is work. But you never have to pretend to.
If you took a nap in your office in a big company, it would seem
unprofessional. But if you're starting a startup and you fall asleep in the
middle of the day, your cofounders will just assume you were tired.
**15\. Parents want you to be a doctor**
A significant number of would-be startup founders are probably dissuaded from
doing it by their parents. I'm not going to say you shouldn't listen to them.
Families are entitled to their own traditions, and who am I to argue with
them? But I will give you a couple reasons why a safe career might not be what
your parents really want for you.
One is that parents tend to be more conservative for their kids than they
would be for themselves. This is actually a rational response to their
situation. Parents end up sharing more of their kids' ill fortune than good
fortune. Most parents don't mind this; it's part of the job; but it does tend
to make them excessively conservative. And erring on the side of conservatism
is still erring. In almost everything, reward is proportionate to risk. So by
protecting their kids from risk, parents are, without realizing it, also
protecting them from rewards. If they saw that, they'd want you to take more
risks.
The other reason parents may be mistaken is that, like generals, they're
always fighting the last war. If they want you to be a doctor, odds are it's
not just because they want you to help the sick, but also because it's a
prestigious and lucrative career. But not so lucrative or prestigious as
it was when their opinions were formed. When I was a kid in the seventies, a
doctor was _the_ thing to be. There was a sort of golden triangle involving
doctors, Mercedes 450SLs, and tennis. All three vertices now seem pretty
dated.
The parents who want you to be a doctor may simply not realize how much things
have changed. Would they be that unhappy if you were Steve Jobs instead? So I
think the way to deal with your parents' opinions about what you should do is
to treat them like feature requests. Even if your only goal is to please them,
the way to do that is not simply to give them what they ask for. Instead think
about why they're asking for something, and see if there's a better way to
give them what they need.
**16\. A job is the default**
This leads us to the last and probably most powerful reason people get regular
jobs: it's the default thing to do. Defaults are enormously powerful,
precisely because they operate without any conscious choice.
To almost everyone except criminals, it seems an axiom that if you need money,
you should get a job. Actually this tradition is not much more than a hundred
years old. Before that, the default way to make a living was by farming. It's
a bad plan to treat something only a hundred years old as an axiom. By
historical standards, that's something that's changing pretty rapidly.
We may be seeing another such change right now. I've read a lot of economic
history, and I understand the startup world pretty well, and it now seems to
me fairly likely that we're seeing the beginning of a change like the one from
farming to manufacturing.
And you know what? If you'd been around when that change began (around 1000 in
Europe) it would have seemed to nearly everyone that running off to the city
to make your fortune was a crazy thing to do. Though serfs were in principle
forbidden to leave their manors, it can't have been that hard to run away to a
city. There were no guards patrolling the perimeter of the village. What
prevented most serfs from leaving was that it seemed insanely risky. Leave
one's plot of land? Leave the people you'd spent your whole life with, to live
in a giant city of three or four thousand complete strangers? How would you
live? How would you get food, if you didn't grow it?
Frightening as it seemed to them, it's now the default with us to live by our
wits. So if it seems risky to you to start a startup, think how risky it once
seemed to your ancestors to live as we do now. Oddly enough, the people who
know this best are the very ones trying to get you to stick to the old model.
How can Larry and Sergey say you should come work as their employee, when they
didn't get jobs themselves?
Now we look back on medieval peasants and wonder how they stood it. How grim
it must have been to till the same fields your whole life with no hope of
anything better, under the thumb of lords and priests you had to give all your
surplus to and acknowledge as your masters. I wouldn't be surprised if one day
people look back on what we consider a normal job in the same way. How grim it
would be to commute every day to a cubicle in some soulless office complex,
and be told what to do by someone you had to acknowledge as a boss—someone who
could call you into their office and say "take a seat," and you'd sit! Imagine
having to ask _permission_ to release software to users. Imagine being sad on
Sunday afternoons because the weekend was almost over, and tomorrow you'd have
to get up and go to work. How did they stand it?
It's exciting to think we may be on the cusp of another shift like the one
from farming to manufacturing. That's why I care about startups. Startups
aren't interesting just because they're a way to make a lot of money. I
couldn't care less about other ways to do that, like speculating in
securities. At most those are interesting the way puzzles are. There's more
going on with startups. They may represent one of those rare, historic shifts
in the way wealth is created.
That's ultimately what drives us to work on Y Combinator. We want to make
money, if only so we don't have to stop doing it, but that's not the main
goal. There have only been a handful of these great economic shifts in human
history. It would be an amazing hack to make one happen faster.
July 2007
I have too much stuff. Most people in America do. In fact, the poorer people
are, the more stuff they seem to have. Hardly anyone is so poor that they
can't afford a front yard full of old cars.
It wasn't always this way. Stuff used to be rare and valuable. You can still
see evidence of that if you look for it. For example, in my house in
Cambridge, which was built in 1876, the bedrooms don't have closets. In those
days people's stuff fit in a chest of drawers. Even as recently as a few
decades ago there was a lot less stuff. When I look back at photos from the
1970s, I'm surprised how empty houses look. As a kid I had what I thought was
a huge fleet of toy cars, but they'd be dwarfed by the number of toys my
nephews have. All together my Matchboxes and Corgis took up about a third of
the surface of my bed. In my nephews' rooms the bed is the only clear space.
Stuff has gotten a lot cheaper, but our attitudes toward it haven't changed
correspondingly. We overvalue stuff.
That was a big problem for me when I had no money. I felt poor, and stuff
seemed valuable, so almost instinctively I accumulated it. Friends would leave
something behind when they moved, or I'd see something as I was walking down
the street on trash night (beware of anything you find yourself describing as
"perfectly good"), or I'd find something in almost new condition for a tenth
its retail price at a garage sale. And pow, more stuff.
In fact these free or nearly free things weren't bargains, because they were
worth even less than they cost. Most of the stuff I accumulated was worthless,
because I didn't need it.
What I didn't understand was that the value of some new acquisition wasn't the
difference between its retail price and what I paid for it. It was the value I
derived from it. Stuff is an extremely illiquid asset. Unless you have some
plan for selling that valuable thing you got so cheaply, what difference does
it make what it's "worth?" The only way you're ever going to extract any value
from it is to use it. And if you don't have any immediate use for it, you
probably never will.
Companies that sell stuff have spent huge sums training us to think stuff is
still valuable. But it would be closer to the truth to treat stuff as
worthless.
In fact, worse than worthless, because once you've accumulated a certain
amount of stuff, it starts to own you rather than the other way around. I know
of one couple who couldn't retire to the town they preferred because they
couldn't afford a place there big enough for all their stuff. Their house
isn't theirs; it's their stuff's.
And unless you're extremely organized, a house full of stuff can be very
depressing. A cluttered room saps one's spirits. One reason, obviously, is
that there's less room for people in a room full of stuff. But there's more
going on than that. I think humans constantly scan their environment to build
a mental model of what's around them. And the harder a scene is to parse, the
less energy you have left for conscious thoughts. A cluttered room is
literally exhausting.
(This could explain why clutter doesn't seem to bother kids as much as adults.
Kids are less perceptive. They build a coarser model of their surroundings,
and this consumes less energy.)
I first realized the worthlessness of stuff when I lived in Italy for a year.
All I took with me was one large backpack of stuff. The rest of my stuff I
left in my landlady's attic back in the US. And you know what? All I missed
were some of the books. By the end of the year I couldn't even remember what
else I had stored in that attic.
And yet when I got back I didn't discard so much as a box of it. Throw away a
perfectly good rotary telephone? I might need that one day.
The really painful thing to recall is not just that I accumulated all this
useless stuff, but that I often spent money I desperately needed on stuff that
I didn't.
Why would I do that? Because the people whose job is to sell you stuff are
really, really good at it. The average 25 year old is no match for companies
that have spent years figuring out how to get you to spend money on stuff.
They make the experience of buying stuff so pleasant that "shopping" becomes a
leisure activity.
How do you protect yourself from these people? It can't be easy. I'm a fairly
skeptical person, and their tricks worked on me well into my thirties. But one
thing that might work is to ask yourself, before buying something, "is this
going to make my life noticeably better?"
A friend of mine cured herself of a clothes buying habit by asking herself
before she bought anything "Am I going to wear this all the time?" If she
couldn't convince herself that something she was thinking of buying would
become one of those few things she wore all the time, she wouldn't buy it. I
think that would work for any kind of purchase. Before you buy anything, ask
yourself: will this be something I use constantly? Or is it just something
nice? Or worse still, a mere bargain?
The worst stuff in this respect may be stuff you don't use much because it's
too good. Nothing owns you like fragile stuff. For example, the "good china"
so many households have, and whose defining quality is not so much that it's
fun to use, but that one must be especially careful not to break it.
Another way to resist acquiring stuff is to think of the overall cost of
owning it. The purchase price is just the beginning. You're going to have to
_think_ about that thing for years—perhaps for the rest of your life. Every
thing you own takes energy away from you. Some give more than they take. Those
are the only things worth having.
I've now stopped accumulating stuff. Except books—but books are different.
Books are more like a fluid than individual objects. It's not especially
inconvenient to own several thousand books, whereas if you owned several
thousand random possessions you'd be a local celebrity. But except for books,
I now actively avoid stuff. If I want to spend money on some kind of treat,
I'll take services over goods any day.
I'm not claiming this is because I've achieved some kind of zenlike detachment
from material things. I'm talking about something more mundane. A historical
change has taken place, and I've now realized it. Stuff used to be valuable,
and now it's not.
In industrialized countries the same thing happened with food in the middle of
the twentieth century. As food got cheaper (or we got richer; they're
indistinguishable), eating too much started to be a bigger danger than eating
too little. We've now reached that point with stuff. For most people, rich or
poor, stuff has become a burden.
The good news is, if you're carrying a burden without knowing it, your life
could be better than you realize. Imagine walking around for years with
five-pound ankle weights, then suddenly having them removed.
April 2005
"Suits make a corporate comeback," says the _New York Times_. Why does this
sound familiar? Maybe because the suit was also back in February, September
2004, June 2004, March 2004, September 2003, November 2002, April 2002, and
February 2002.
Why do the media keep running stories saying suits are back? Because PR firms
tell them to. One of the most surprising things I discovered during my brief
business career was the existence of the PR industry, lurking like a huge,
quiet submarine beneath the news. Of the stories you read in traditional media
that aren't about politics, crimes, or disasters, more than half probably come
from PR firms.
I know because I spent years hunting such "press hits." Our startup spent its
entire marketing budget on PR: at a time when we were assembling our own
computers to save money, we were paying a PR firm $16,000 a month. And they
were worth it. PR is the news equivalent of search engine optimization;
instead of buying ads, which readers ignore, you get yourself inserted
directly into the stories.
Our PR firm was one of the best in the business. In 18 months, they got press
hits in over 60 different publications. And we weren't the only ones they did
great things for. In 1997 I got a call from another startup founder
considering hiring them to promote his company. I told him they were PR gods,
worth every penny of their outrageous fees. But I remember thinking his
company's name was odd. Why call an auction site "eBay"?
**Symbiosis**
PR is not dishonest. Not quite. In fact, the reason the best PR firms are so
effective is precisely that they aren't dishonest. They give reporters
genuinely valuable information. A good PR firm won't bug reporters just
because the client tells them to; they've worked hard to build their
credibility with reporters, and they don't want to destroy it by feeding them
mere propaganda.
If anyone is dishonest, it's the reporters. The main reason PR firms exist is
that reporters are lazy. Or, to put it more nicely, overworked. Really they
ought to be out there digging up stories for themselves. But it's so tempting
to sit in their offices and let PR firms bring the stories to them. After all,
they know good PR firms won't lie to them.
A good flatterer doesn't lie, but tells his victim selective truths (what a
nice color your eyes are). Good PR firms use the same strategy: they give
reporters stories that are true, but whose truth favors their clients.
For example, our PR firm often pitched stories about how the Web let small
merchants compete with big ones. This was perfectly true. But the reason
reporters ended up writing stories about this particular truth, rather than
some other one, was that small merchants were our target market, and we were
paying the piper.
Different publications vary greatly in their reliance on PR firms. At the
bottom of the heap are the trade press, who make most of their money from
advertising and would give the magazines away for free if advertisers would
let them. The average trade publication is a bunch of ads, glued together
by just enough articles to make it look like a magazine. They're so desperate
for "content" that some will print your press releases almost verbatim, if you
take the trouble to write them to read like articles.
At the other extreme are publications like the _New York Times_ and the _Wall
Street Journal_. Their reporters do go out and find their own stories, at
least some of the time. They'll listen to PR firms, but briefly and
skeptically. We managed to get press hits in almost every publication we
wanted, but we never managed to crack the print edition of the _Times_.
The weak point of the top reporters is not laziness, but vanity. You don't
pitch stories to them. You have to approach them as if you were a specimen
under their all-seeing microscope, and make it seem as if the story you want
them to run is something they thought of themselves.
Our greatest PR coup was a two-part one. We estimated, based on some fairly
informal math, that there were about 5000 stores on the Web. We got one paper
to print this number, which seemed neutral enough. But once this "fact" was
out there in print, we could quote it to other publications, and claim that
with 1000 users we had 20% of the online store market.
This was roughly true. We really did have the biggest share of the online
store market, and 5000 was our best guess at its size. But the way the story
appeared in the press sounded a lot more definite.
Reporters like definitive statements. For example, many of the stories about
Jeremy Jaynes's conviction say that he was one of the 10 worst spammers. This
"fact" originated in Spamhaus's ROKSO list, which I think even Spamhaus would
admit is a rough guess at the top spammers. The first stories about Jaynes
cited this source, but now it's simply repeated as if it were part of the
indictment.
All you can say with certainty about Jaynes is that he was a fairly big
spammer. But reporters don't want to print vague stuff like "fairly big." They
want statements with punch, like "top ten." And PR firms give them what they
want. Wearing suits, we're told, will make us 3.6 percent more productive.
**Buzz**
Where the work of PR firms really does get deliberately misleading is in the
generation of "buzz." They usually feed the same story to several different
publications at once. And when readers see similar stories in multiple places,
they think there is some important trend afoot. Which is exactly what they're
supposed to think.
When Windows 95 was launched, people waited outside stores at midnight to buy
the first copies. None of them would have been there without PR firms, who
generated such a buzz in the news media that it became self-reinforcing, like
a nuclear chain reaction.
I doubt PR firms realize it yet, but the Web makes it possible to track them
at work. If you search for the obvious phrases, you turn up several efforts
over the years to place stories about the return of the suit. For example, the
Reuters article that got picked up by USA Today in September 2004. "The suit
is back," it begins.
Trend articles like this are almost always the work of PR firms. Once you know
how to read them, it's straightforward to figure out who the client is. With
trend stories, PR firms usually line up one or more "experts" to talk about
the industry generally. In this case we get three: the NPD Group, the creative
director of GQ, and a research director at Smith Barney. When you get to
the end of the experts, look for the client. And bingo, there it is: The Men's
Wearhouse.
Not surprising, considering The Men's Wearhouse was at that moment running ads
saying "The Suit is Back." Talk about a successful press hit-- a wire service
article whose first sentence is your own ad copy.
The secret to finding other press hits from a given pitch is to realize that
they all started from the same document back at the PR firm. Search for a few
key phrases and the names of the clients and the experts, and you'll turn up
other variants of this story.
"Casual Fridays are out and dress codes are in," writes Diane E. Lewis in _The
Boston Globe_. In a remarkable coincidence, Ms. Lewis's industry contacts also
include the creative director of GQ.
"Ripped jeans and T-shirts are out," writes Mary Kathleen Flynn in _US News &
World Report_. And _she too_ knows the creative director of GQ.
"Men's suits are back," writes Nicole Ford in Sexbuzz.Com ("the ultimate men's
entertainment magazine").
"Dressing down loses appeal as men suit up at the office," writes Tenisha
Mercer of _The Detroit News_.
Now that so many news articles are online, I suspect you could find a similar
pattern for most trend stories placed by PR firms. I propose we call this new
sport "PR diving," and I'm sure there are far more striking examples out there
than this clump of five stories.
**Online**
After spending years chasing them, it's now second nature to me to recognize
press hits for what they are. But before we hired a PR firm I had no idea
where articles in the mainstream media came from. I could tell a lot of them
were crap, but I didn't realize why.
Remember the exercises in critical reading you did in school, where you had to
look at a piece of writing and step back and ask whether the author was
telling the whole truth? If you really want to be a critical reader, it turns
out you have to step back one step further, and ask not just whether the
author is telling the truth, but _why he's writing about this subject at all._
Online, the answer tends to be a lot simpler. Most people who publish online
write what they write for the simple reason that they want to. You can't see
the fingerprints of PR firms all over the articles, as you can in so many
print publications-- which is one of the reasons, though they may not
consciously realize it, that readers trust bloggers more than _Business Week_.
I was talking recently to a friend who works for a big newspaper. He thought
the print media were in serious trouble, and that they were still mostly in
denial about it. "They think the decline is cyclic," he said. "Actually it's
structural."
In other words, the readers are leaving, and they're not coming back.
Why? I think the main reason is that the writing online is more honest.
Imagine how incongruous the _New York Times_ article about suits would sound
if you read it in a blog:
> The urge to look corporate-- sleek, commanding, prudent, yet with just a
> touch of hubris on your well-cut sleeve-- is an unexpected development in a
> time of business disgrace.
The problem with this article is not just that it originated in a PR firm. The
whole tone is bogus. This is the tone of someone writing down to their
audience.
Whatever its flaws, the writing you find online is authentic. It's not mystery
meat cooked up out of scraps of pitch letters and press releases, and pressed
into molds of zippy journalese. It's people writing what they think.
I didn't realize, till there was an alternative, just how artificial most of
the writing in the mainstream media was. I'm not saying I used to believe what
I read in _Time_ and _Newsweek_. Since high school, at least, I've thought of
magazines like that more as guides to what ordinary people were being told to
think than as sources of information. But I didn't realize till the last few
years that writing for publication didn't have to mean writing that way. I
didn't realize you could write as candidly and informally as you would if you
were writing to a friend.
Readers aren't the only ones who've noticed the change. The PR industry has
too. A hilarious article on the site of the PR Society of America gets to the
heart of the matter:
> Bloggers are sensitive about becoming mouthpieces for other organizations
> and companies, which is the reason they began blogging in the first place.
PR people fear bloggers for the same reason readers like them. And that means
there may be a struggle ahead. As this new kind of writing draws readers away
from traditional media, we should be prepared for whatever PR mutates into to
compensate. When I think how hard PR firms work to score press hits in the
traditional media, I can't imagine they'll work any less hard to feed stories
to bloggers, if they can figure out how.
September 2004
Remember the essays you had to write in high school? Topic sentence,
introductory paragraph, supporting paragraphs, conclusion. The conclusion
being, say, that Ahab in _Moby Dick_ was a Christ-like figure.
Oy. So I'm going to try to give the other side of the story: what an essay
really is, and how you write one. Or at least, how I write one.
**Mods**
The most obvious difference between real essays and the things one has to
write in school is that real essays are not exclusively about English
literature. Certainly schools should teach students how to write. But due to a
series of historical accidents the teaching of writing has gotten mixed
together with the study of literature. And so all over the country students
are writing not about how a baseball team with a small budget might compete
with the Yankees, or the role of color in fashion, or what constitutes a good
dessert, but about symbolism in Dickens.
With the result that writing is made to seem boring and pointless. Who cares
about symbolism in Dickens? Dickens himself would be more interested in an
essay about color or baseball.
How did things get this way? To answer that we have to go back almost a
thousand years. Around 1100, Europe at last began to catch its breath after
centuries of chaos, and once its scholars had the luxury of curiosity they
rediscovered what we call "the classics." The effect was rather as if we were
visited by beings from another solar system. These earlier civilizations were
so much more sophisticated that for the next several centuries the main work
of European scholars, in almost every field, was to assimilate what they knew.
During this period the study of ancient texts acquired great prestige. It
seemed the essence of what scholars did. As European scholarship gained
momentum, the study of ancient texts became less and less important; by 1350 someone who wanted to
learn about science could find better teachers than Aristotle in his own era.
But schools change slower than scholarship. In the 19th century the study
of ancient texts was still the backbone of the curriculum.
The time was then ripe for the question: if the study of ancient texts is a
valid field for scholarship, why not modern texts? The answer, of course, is
that the original raison d'etre of classical scholarship was a kind of
intellectual archaeology that does not need to be done in the case of
contemporary authors. But for obvious reasons no one wanted to give that
answer. The archaeological work being mostly done, it implied that those
studying the classics were, if not wasting their time, at least working on
problems of minor importance.
And so began the study of modern literature. There was a good deal of
resistance at first. The first courses in English literature seem to have been
offered by the newer colleges, particularly American ones. Dartmouth, the
University of Vermont, Amherst, and University College, London taught English
literature in the 1820s. But Harvard didn't have a professor of English
literature until 1876, and Oxford not till 1885. (Oxford had a chair of
Chinese before it had one of English.)
What tipped the scales, at least in the US, seems to have been the idea that
professors should do research as well as teach. This idea (along with the PhD,
the department, and indeed the whole concept of the modern university) was
imported from Germany in the late 19th century. Beginning at Johns Hopkins in
1876, the new model spread rapidly.
Writing was one of the casualties. Colleges had long taught English
composition. But how do you do research on composition? The professors who
taught math could be required to do original math, the professors who taught
history could be required to write scholarly articles about history, but what
about the professors who taught rhetoric or composition? What should they do
research on? The closest thing seemed to be English literature.
And so in the late 19th century the teaching of writing was inherited by
English professors. This had two drawbacks: (a) an expert on literature need
not himself be a good writer, any more than an art historian has to be a good
painter, and (b) the subject of writing now tends to be literature, since
that's what the professor is interested in.
High schools imitate universities. The seeds of our miserable high school
experiences were sown in 1892, when the National Education Association
"formally recommended that literature and composition be unified in the high
school course." The 'riting component of the 3 Rs then morphed into
English, with the bizarre consequence that high school students now had to
write about English literature-- to write, without even realizing it,
imitations of whatever English professors had been publishing in their
journals a few decades before.
It's no wonder if this seems to the student a pointless exercise, because
we're now three steps removed from real work: the students are imitating
English professors, who are imitating classical scholars, who are merely the
inheritors of a tradition growing out of what was, 700 years ago, fascinating
and urgently needed work.
**No Defense**
The other big difference between a real essay and the things they make you
write in school is that a real essay doesn't take a position and then defend
it. That principle, like the idea that we ought to be writing about
literature, turns out to be another intellectual hangover of long forgotten
origins.
It's often mistakenly believed that medieval universities were mostly
seminaries. In fact they were more like law schools. And at least in our tradition
lawyers are advocates, trained to take either side of an argument and make as
good a case for it as they can. Whether cause or effect, this spirit pervaded
early universities. The study of rhetoric, the art of arguing persuasively,
was a third of the undergraduate curriculum. And after the lecture the
most common form of discussion was the disputation. This is at least nominally
preserved in our present-day thesis defense: most people treat the words
thesis and dissertation as interchangeable, but originally, at least, a thesis
was a position one took and the dissertation was the argument by which one
defended it.
Defending a position may be a necessary evil in a legal dispute, but it's not
the best way to get at the truth, as I think lawyers would be the first to
admit. It's not just that you miss subtleties this way. The real problem is
that you can't change the question.
And yet this principle is built into the very structure of the things they
teach you to write in high school. The topic sentence is your thesis, chosen
in advance, the supporting paragraphs the blows you strike in the conflict,
and the conclusion-- uh, what is the conclusion? I was never sure about that
in high school. It seemed as if we were just supposed to restate what we said
in the first paragraph, but in different enough words that no one could tell.
Why bother? But when you understand the origins of this sort of "essay," you
can see where the conclusion comes from. It's the concluding remarks to the
jury.
Good writing should be convincing, certainly, but it should be convincing
because you got the right answers, not because you did a good job of arguing.
When I give a draft of an essay to friends, there are two things I want to
know: which parts bore them, and which seem unconvincing. The boring bits can
usually be fixed by cutting. But I don't try to fix the unconvincing bits by
arguing more cleverly. I need to talk the matter over.
At the very least I must have explained something badly. In that case, in the
course of the conversation I'll be forced to come up with a clearer
explanation, which I can just incorporate in the essay. More often than not I
have to change what I was saying as well. But the aim is never to be
convincing per se. As the reader gets smarter, convincing and true become
identical, so if I can convince smart readers I must be near the truth.
The sort of writing that attempts to persuade may be a valid (or at least
inevitable) form, but it's historically inaccurate to call it an essay. An
essay is something else.
**Trying**
To understand what a real essay is, we have to reach back into history again,
though this time not so far. To Michel de Montaigne, who in 1580 published a
book of what he called "essais." He was doing something quite different from
what lawyers do, and the difference is embodied in the name. _Essayer_ is the
French verb meaning "to try" and an _essai_ is an attempt. An essay is
something you write to try to figure something out.
Figure out what? You don't know yet. And so you can't begin with a thesis,
because you don't have one, and may never have one. An essay doesn't begin
with a statement, but with a question. In a real essay, you don't take a
position and defend it. You notice a door that's ajar, and you open it and
walk in to see what's inside.
If all you want to do is figure things out, why do you need to write anything,
though? Why not just sit and think? Well, there precisely is Montaigne's great
discovery. Expressing ideas helps to form them. Indeed, helps is far too weak
a word. Most of what ends up in my essays I only thought of when I sat down to
write them. That's why I write them.
In the things you write in school you are, in theory, merely explaining
yourself to the reader. In a real essay you're writing for yourself. You're
thinking out loud.
But not quite. Just as inviting people over forces you to clean up your
apartment, writing something that other people will read forces you to think
well. So it does matter to have an audience. The things I've written just for
myself are no good. They tend to peter out. When I run into difficulties, I
find I conclude with a few vague questions and then drift off to get a cup of
tea.
Many published essays peter out in the same way. Particularly the sort written
by the staff writers of newsmagazines. Outside writers tend to supply
editorials of the defend-a-position variety, which make a beeline toward a
rousing (and foreordained) conclusion. But the staff writers feel obliged to
write something "balanced." Since they're writing for a popular magazine, they
start with the most radioactively controversial questions, from which--
because they're writing for a popular magazine-- they then proceed to recoil
in terror. Abortion, for or against? This group says one thing. That group
says another. One thing is certain: the question is a complex one. (But don't
get mad at us. We didn't draw any conclusions.)
**The River**
Questions aren't enough. An essay has to come up with answers. They don't
always, of course. Sometimes you start with a promising question and get
nowhere. But those you don't publish. Those are like experiments that get
inconclusive results. An essay you publish ought to tell the reader something
he didn't already know.
But _what_ you tell him doesn't matter, so long as it's interesting. I'm
sometimes accused of meandering. In defend-a-position writing that would be a
flaw. There you're not concerned with truth. You already know where you're
going, and you want to go straight there, blustering through obstacles, and
hand-waving your way across swampy ground. But that's not what you're trying
to do in an essay. An essay is supposed to be a search for truth. It would be
suspicious if it didn't meander.
The Meander (aka Menderes) is a river in Turkey. As you might expect, it winds
all over the place. But it doesn't do this out of frivolity. The path it has
discovered is the most economical route to the sea.
The river's algorithm is simple. At each step, flow down. For the essayist
this translates to: flow interesting. Of all the places to go next, choose the
most interesting. One can't have quite as little foresight as a river. I
always know generally what I want to write about. But not the specific
conclusions I want to reach; from paragraph to paragraph I let the ideas take
their course.
This doesn't always work. Sometimes, like a river, one runs up against a wall.
Then I do the same thing the river does: backtrack. At one point in this essay
I found that after following a certain thread I ran out of ideas. I had to go
back seven paragraphs and start over in another direction.
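(If you'll forgive a programmer's aside: you could caricature this process as a greedy search with backtracking. The sketch below is only a toy under invented assumptions-- the `next_ideas` graph and `interest` score are placeholders, not a real model of writing.)

```python
# A toy model of the essayist's river algorithm: at each step take
# the most interesting next idea, and backtrack when a thread runs
# out. The topic graph and scoring function are invented placeholders.

def write_essay(start, next_ideas, interest, length=20):
    """Greedy depth-first search: flow interesting, backtrack at walls."""
    def extend(path):
        if len(path) >= length:          # reached an ending
            return path
        # Of all the places to go next, try the most interesting first.
        for idea in sorted(next_ideas(path[-1]), key=interest, reverse=True):
            if idea in path:             # don't retrace old ground
                continue
            finished = extend(path + [idea])
            if finished:                 # this thread worked out
                return finished
        return None                      # ran out of ideas: backtrack
    return extend([start]) or [start]
```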
Fundamentally an essay is a train of thought-- but a cleaned-up train of
thought, as dialogue is cleaned-up conversation. Real thought, like real
conversation, is full of false starts. It would be exhausting to read. You
need to cut and fill to emphasize the central thread, like an illustrator
inking over a pencil drawing. But don't change so much that you lose the
spontaneity of the original.
Err on the side of the river. An essay is not a reference work. It's not
something you read looking for a specific answer, and feel cheated if you
don't find it. I'd much rather read an essay that went off in an unexpected
but interesting direction than one that plodded dutifully along a prescribed
course.
**Surprise**
So what's interesting? For me, interesting means surprise. Interfaces, as
Geoffrey James has said, should follow the principle of least astonishment. A
button that looks like it will make a machine stop should make it stop, not
speed up. Essays should do the opposite. Essays should aim for maximum
surprise.
I was afraid of flying for a long time and could only travel vicariously. When
friends came back from faraway places, it wasn't just out of politeness that I
asked what they saw. I really wanted to know. And I found the best way to get
information out of them was to ask what surprised them. How was the place
different from what they expected? This is an extremely useful question. You
can ask it of the most unobservant people, and it will extract information
they didn't even know they were recording.
Surprises are things that you not only didn't know, but that contradict things
you thought you knew. And so they're the most valuable sort of fact you can
get. They're like a food that's not merely healthy, but counteracts the
unhealthy effects of things you've already eaten.
How do you find surprises? Well, therein lies half the work of essay writing.
(The other half is expressing yourself well.) The trick is to use yourself as
a proxy for the reader. You should only write about things you've thought
about a lot. And anything you come across that surprises you, who've thought
about the topic a lot, will probably surprise most readers.
For example, in a recent essay I pointed out that because you can only judge
computer programmers by working with them, no one knows who the best
programmers are overall. I didn't realize this when I began that essay, and
even now I find it kind of weird. That's what you're looking for.
So if you want to write essays, you need two ingredients: a few topics you've
thought about a lot, and some ability to ferret out the unexpected.
What should you think about? My guess is that it doesn't matter-- that
anything can be interesting if you get deeply enough into it. One possible
exception might be things that have deliberately had all the variation sucked
out of them, like working in fast food. In retrospect, was there anything
interesting about working at Baskin-Robbins? Well, it was interesting how
important color was to the customers. Kids a certain age would point into the
case and say that they wanted yellow. Did they want French Vanilla or Lemon?
They would just look at you blankly. They wanted yellow. And then there was
the mystery of why the perennial favorite Pralines 'n' Cream was so appealing.
(I think now it was the salt.) And the difference in the way fathers and
mothers bought ice cream for their kids: the fathers like benevolent kings
bestowing largesse, the mothers harried, giving in to pressure. So, yes, there
does seem to be some material even in fast food.
I didn't notice those things at the time, though. At sixteen I was about as
observant as a lump of rock. I can see more now in the fragments of memory I
preserve of that age than I could see at the time from having it all happening
live, right in front of me.
**Observation**
So the ability to ferret out the unexpected must not merely be an inborn one.
It must be something you can learn. How do you learn it?
To some extent it's like learning history. When you first read history, it's
just a whirl of names and dates. Nothing seems to stick. But the more you
learn, the more hooks you have for new facts to stick onto-- which means you
accumulate knowledge at an exponential rate. Once you remember that Normans
conquered England in 1066, it will catch your attention when you hear that
other Normans conquered southern Italy at about the same time. Which will make
you wonder about Normandy, and take note when a third book mentions that
Normans were not, like most of what is now called France, tribes that flowed
in as the Roman empire collapsed, but Vikings (norman = north man) who arrived
four centuries later in 911. Which makes it easier to remember that Dublin was
also established by Vikings in the 840s. Etc, etc squared.
Collecting surprises is a similar process. The more anomalies you've seen, the
more easily you'll notice new ones. Which means, oddly enough, that as you
grow older, life should become more and more surprising. When I was a kid, I
used to think adults had it all figured out. I had it backwards. Kids are the
ones who have it all figured out. They're just mistaken.
When it comes to surprises, the rich get richer. But (as with wealth) there
may be habits of mind that will help the process along. It's good to have a
habit of asking questions, especially questions beginning with Why. But not in
the random way that three year olds ask why. There are an infinite number of
questions. How do you find the fruitful ones?
I find it especially useful to ask why about things that seem wrong. For
example, why should there be a connection between humor and misfortune? Why do
we find it funny when a character, even one we like, slips on a banana peel?
There's a whole essay's worth of surprises there for sure.
If you want to notice things that seem wrong, you'll find a degree of
skepticism helpful. I take it as an axiom that we're only achieving 1% of what
we could. This helps counteract the rule that gets beaten into our heads as
children: that things are the way they are because that is how things have to
be. For example, everyone I've talked to while writing this essay felt the
same about English classes-- that the whole process seemed pointless. But none
of us had the balls at the time to hypothesize that it was, in fact, all a
mistake. We all thought there was just something we weren't getting.
I have a hunch you want to pay attention not just to things that seem wrong,
but things that seem wrong in a humorous way. I'm always pleased when I see
someone laugh as they read a draft of an essay. But why should I be? I'm
aiming for good ideas. Why should good ideas be funny? The connection may be
surprise. Surprises make us laugh, and surprises are what one wants to
deliver.
I write down things that surprise me in notebooks. I never actually get around
to reading them and using what I've written, but I do tend to reproduce the
same thoughts later. So the main value of notebooks may be what writing things
down leaves in your head.
People trying to be cool will find themselves at a disadvantage when
collecting surprises. To be surprised is to be mistaken. And the essence of
cool, as any fourteen year old could tell you, is _nil admirari._ When you're
mistaken, don't dwell on it; just act like nothing's wrong and maybe no one
will notice.
One of the keys to coolness is to avoid situations where inexperience may make
you look foolish. If you want to find surprises you should do the opposite.
Study lots of different things, because some of the most interesting surprises
are unexpected connections between different fields. For example, jam, bacon,
pickles, and cheese, which are among the most pleasing of foods, were all
originally intended as methods of preservation. And so were books and
paintings.
Whatever you study, include history-- but social and economic history, not
political history. History seems to me so important that it's misleading to
treat it as a mere field of study. Another way to describe it is _all the data
we have so far._
Among other things, studying history gives one confidence that there are good
ideas waiting to be discovered right under our noses. Swords evolved during
the Bronze Age out of daggers, which (like their flint predecessors) had a
hilt separate from the blade. Because swords are longer the hilts kept
breaking off. But it took five hundred years before someone thought of casting
hilt and blade as one piece.
**Disobedience**
Above all, make a habit of paying attention to things you're not supposed to,
either because they're "inappropriate," or not important, or not what you're
supposed to be working on. If you're curious about something, trust your
instincts. Follow the threads that attract your attention. If there's
something you're really interested in, you'll find they have an uncanny way of
leading back to it anyway, just as the conversation of people who are
especially proud of something always tends to lead back to it.
For example, I've always been fascinated by comb-overs, especially the extreme
sort that make a man look as if he's wearing a beret made of his own hair.
Surely this is a lowly sort of thing to be interested in-- the sort of
superficial quizzing best left to teenage girls. And yet there is something
underneath. The key question, I realized, is how does the comber-over not see
how odd he looks? And the answer is that he got to look that way
_incrementally._ What began as combing his hair a little carefully over a thin
patch has gradually, over 20 years, grown into a monstrosity. Gradualness is
very powerful. And that power can be used for constructive purposes too: just
as you can trick yourself into looking like a freak, you can trick yourself
into creating something so grand that you would never have dared to _plan_
such a thing. Indeed, this is just how most good software gets created. You
start by writing a stripped-down kernel (how hard can it be?) and gradually it
grows into a complete operating system. Hence the next leap: could you do the
same thing in painting, or in a novel?
See what you can extract from a frivolous question? If there's one piece of
advice I would give about writing essays, it would be: don't do as you're
told. Don't believe what you're supposed to. Don't write the essay readers
expect; one learns nothing from what one expects. And don't write the way they
taught you to in school.
The most important sort of disobedience is to write essays at all.
Fortunately, this sort of disobedience shows signs of becoming rampant. It
used to be that only a tiny number of officially approved writers were allowed
to write essays. Magazines published few of them, and judged them less by what
they said than who wrote them; a magazine might publish a story by an unknown
writer if it was good enough, but if they published an essay on x it had to be
by someone who was at least forty and whose job title had x in it. Which is a
problem, because there are a lot of things insiders can't say precisely
because they're insiders.
The Internet is changing that. Anyone can publish an essay on the Web, and it
gets judged, as any writing should, by what it says, not who wrote it. Who are
you to write about x? You are whatever you wrote.
Popular magazines made the period between the spread of literacy and the
arrival of TV the golden age of the short story. The Web may well make this
the golden age of the essay. And that's certainly not something I realized
when I started writing this.
January 2005
_(I wrote this talk for a high school. I never actually gave it, because the
school authorities vetoed the plan to invite me.)_
When I said I was speaking at a high school, my friends were curious. What
will you say to high school students? So I asked them, what do you wish
someone had told you in high school? Their answers were remarkably similar. So
I'm going to tell you what we all wish someone had told us.
I'll start by telling you something you don't have to know in high school:
what you want to do with your life. People are always asking you this, so you
think you're supposed to have an answer. But adults ask this mainly as a
conversation starter. They want to know what sort of person you are, and this
question is just to get you talking. They ask it the way you might poke a
hermit crab in a tide pool, to see what it does.
If I were back in high school and someone asked about my plans, I'd say that
my first priority was to learn what the options were. You don't need to be in
a rush to choose your life's work. What you need to do is discover what you
like. You have to work on stuff you like if you want to be good at what you
do.
It might seem that nothing would be easier than deciding what you like, but it
turns out to be hard, partly because it's hard to get an accurate picture of
most jobs. Being a doctor is not the way it's portrayed on TV. Fortunately you
can also watch real doctors, by volunteering in hospitals.
But there are other jobs you can't learn about, because no one is doing them
yet. Most of the work I've done in the last ten years didn't exist when I was
in high school. The world changes fast, and the rate at which it changes is
itself speeding up. In such a world it's not a good idea to have fixed plans.
And yet every May, speakers all over the country fire up the Standard
Graduation Speech, the theme of which is: don't give up on your dreams. I know
what they mean, but this is a bad way to put it, because it implies you're
supposed to be bound by some plan you made early on. The computer world has a
name for this: premature optimization. And it is synonymous with disaster.
These speakers would do better to say simply, don't give up.
What they really mean is, don't get demoralized. Don't think that you can't do
what other people can. And I agree you shouldn't underestimate your potential.
People who've done great things tend to seem as if they were a race apart. And
most biographies only exaggerate this illusion, partly due to the worshipful
attitude biographers inevitably sink into, and partly because, knowing how the
story ends, they can't help streamlining the plot till it seems like the
subject's life was a matter of destiny, the mere unfolding of some innate
genius. In fact I suspect if you had the sixteen year old Shakespeare or
Einstein in school with you, they'd seem impressive, but not totally unlike
your other friends.
Which is an uncomfortable thought. If they were just like us, then they had to
work very hard to do what they did. And that's one reason we like to believe
in genius. It gives us an excuse for being lazy. If these guys were able to do
what they did only because of some magic Shakespeareness or Einsteinness, then
it's not our fault if we can't do something as good.
I'm not saying there's no such thing as genius. But if you're trying to choose
between two theories and one gives you an excuse for being lazy, the other one
is probably right.
So far we've cut the Standard Graduation Speech down from "don't give up on
your dreams" to "what someone else can do, you can do." But it needs to be cut
still further. There is _some_ variation in natural ability. Most people
overestimate its role, but it does exist. If I were talking to a guy four feet
tall whose ambition was to play in the NBA, I'd feel pretty stupid saying, you
can do anything if you really try.
We need to cut the Standard Graduation Speech down to, "what someone else with
your abilities can do, you can do; and don't underestimate your abilities."
But as so often happens, the closer you get to the truth, the messier your
sentence gets. We've taken a nice, neat (but wrong) slogan, and churned it up
like a mud puddle. It doesn't make a very good speech anymore. But worse
still, it doesn't tell you what to do anymore. Someone with your abilities?
What are your abilities?
**Upwind**
I think the solution is to work in the other direction. Instead of working
back from a goal, work forward from promising situations. This is what most
successful people actually do anyway.
In the graduation-speech approach, you decide where you want to be in twenty
years, and then ask: what should I do now to get there? I propose instead that
you don't commit to anything in the future, but just look at the options
available now, and choose those that will give you the most promising range of
options afterward.
It's not so important what you work on, so long as you're not wasting your
time. Work on things that interest you and increase your options, and worry
later about which you'll take.
Suppose you're a college freshman deciding whether to major in math or
economics. Well, math will give you more options: you can go into almost any
field from math. If you major in math it will be easy to get into grad school
in economics, but if you major in economics it will be hard to get into grad
school in math.
Flying a glider is a good metaphor here. Because a glider doesn't have an
engine, you can't fly into the wind without losing a lot of altitude. If you
let yourself get far downwind of good places to land, your options narrow
uncomfortably. As a rule you want to stay upwind. So I propose that as a
replacement for "don't give up on your dreams." Stay upwind.
How do you do that, though? Even if math is upwind of economics, how are you
supposed to know that as a high school student?
Well, you don't, and that's what you need to find out. Look for smart people
and hard problems. Smart people tend to clump together, and if you can find
such a clump, it's probably worthwhile to join it. But it's not
straightforward to find these, because there is a lot of faking going on.
To a newly arrived undergraduate, all university departments look much the
same. The professors all seem forbiddingly intellectual and publish papers
unintelligible to outsiders. But while in some fields the papers are
unintelligible because they're full of hard ideas, in others they're
deliberately written in an obscure way to seem as if they're saying something
important. This may seem a scandalous proposition, but it has been
experimentally verified, in the famous _Social Text_ affair. Suspecting that
the papers published by literary theorists were often just intellectual-
sounding nonsense, a physicist deliberately wrote a paper full of
intellectual-sounding nonsense, and submitted it to a literary theory journal,
which published it.
The best protection is always to be working on hard problems. Writing novels
is hard. Reading novels isn't. Hard means worry: if you're not worrying that
something you're making will come out badly, or that you won't be able to
understand something you're studying, then it isn't hard enough. There has to
be suspense.
Well, this seems a grim view of the world, you may think. What I'm telling you
is that you should worry? Yes, but it's not as bad as it sounds. It's
exhilarating to overcome worries. You don't see faces much happier than people
winning gold medals. And you know why they're so happy? Relief.
I'm not saying this is the only way to be happy. Just that some kinds of worry
are not as bad as they sound.
**Ambition**
In practice, "stay upwind" reduces to "work on hard problems." And you can
start today. I wish I'd grasped that in high school.
Most people like to be good at what they do. In the so-called real world this
need is a powerful force. But high school students rarely benefit from it,
because they're given a fake thing to do. When I was in high school, I let
myself believe that my job was to be a high school student. And so I let my
need to be good at what I did be satisfied by merely doing well in school.
If you'd asked me in high school what the difference was between high school
kids and adults, I'd have said it was that adults had to earn a living. Wrong.
It's that adults take responsibility for themselves. Making a living is only a
small part of it. Far more important is to take intellectual responsibility
for oneself.
If I had to go through high school again, I'd treat it like a day job. I don't
mean that I'd slack in school. Working at something as a day job doesn't mean
doing it badly. It means not being defined by it. I mean I wouldn't think of
myself as a high school student, just as a musician with a day job as a waiter
doesn't think of himself as a waiter. And when I wasn't working at my day
job I'd start trying to do real work.
When I ask people what they regret most about high school, they nearly all say
the same thing: that they wasted so much time. If you're wondering what you're
doing now that you'll regret most later, that's probably it.
Some people say this is inevitable — that high school students aren't capable
of getting anything done yet. But I don't think this is true. And the proof is
that you're bored. You probably weren't bored when you were eight. When you're
eight it's called "playing" instead of "hanging out," but it's the same thing.
And when I was eight, I was rarely bored. Give me a back yard and a few other
kids and I could play all day.
The reason this got stale in middle school and high school, I now realize, is
that I was ready for something else. Childhood was getting old.
I'm not saying you shouldn't hang out with your friends — that you should all
become humorless little robots who do nothing but work. Hanging out with
friends is like chocolate cake. You enjoy it more if you eat it occasionally
than if you eat nothing but chocolate cake for every meal. No matter how much
you like chocolate cake, you'll be pretty queasy after the third meal of it.
And that's what the malaise one feels in high school is: mental queasiness.
You may be thinking, we have to do more than get good grades. We have to have
_extracurricular activities._ But you know perfectly well how bogus most of
these are. Collecting donations for a charity is an admirable thing to do, but
it's not _hard._ It's not getting something done. What I mean by getting
something done is learning how to write well, or how to program computers, or
what life was really like in preindustrial societies, or how to draw the human
face from life. This sort of thing rarely translates into a line item on a
college application.
**Corruption**
It's dangerous to design your life around getting into college, because the
people you have to impress to get into college are not a very discerning
audience. At most colleges, it's not the professors who decide whether you get
in, but admissions officers, and they are nowhere near as smart. They're the
NCOs of the intellectual world. They can't tell how smart you are. The mere
existence of prep schools is proof of that.
Few parents would pay so much for their kids to go to a school that didn't
improve their admissions prospects. Prep schools openly say this is one of
their aims. But what that means, if you stop to think about it, is that they
can hack the admissions process: that they can take the very same kid and make
him seem a more appealing candidate than he would if he went to the local
public school.
Right now most of you feel your job in life is to be a promising college
applicant. But that means you're designing your life to satisfy a process so
mindless that there's a whole industry devoted to subverting it. No wonder you
become cynical. The malaise you feel is the same that a producer of reality TV
shows or a tobacco industry executive feels. And you don't even get paid a
lot.
So what do you do? What you should not do is rebel. That's what I did, and it
was a mistake. I didn't realize exactly what was happening to us, but I
smelled a major rat. And so I just gave up. Obviously the world sucked, so why
bother?
When I discovered that one of our teachers was herself using Cliff's Notes,
it seemed par for the course. Surely it meant nothing to get a good grade in
such a class.
March 2005
_(In the process of answering an email, I accidentally wrote a tiny essay
about writing. I usually spend weeks on an essay. This one took 67 minutes—23
of writing, and 44 of rewriting.)_
I think it's far more important to write well than most people realize.
Writing doesn't just communicate ideas; it generates them. If you're bad at
writing and don't like to do it, you'll miss out on most of the ideas writing
would have generated.
As for how to write well, here's the short version: Write a bad version 1 as
fast as you can; rewrite it over and over; cut ~~out~~ everything unnecessary;
write in a conversational tone; develop a nose for bad writing, so you can see
and fix it in yours; imitate writers you like; if you can't get started, tell
someone what you plan to write about, then write down what you said; expect
80% of the ideas in an essay to happen after you start writing it, and 50% of
those you start with to be wrong; be confident enough to cut; have friends you
trust read your stuff and tell you which bits are confusing or drag; don't
(always) make detailed outlines; mull ideas over for a few days before
writing; carry a small notebook or scrap paper with you; start writing when
you think of the first sentence; if a deadline forces you to start before
that, just say the most important sentence first; write about stuff you like;
don't try to sound impressive; don't hesitate to change the topic on the fly;
use footnotes to contain digressions; use anaphora to knit sentences together;
read your essays out loud to see (a) where you stumble over awkward phrases
and (b) which bits are boring (the paragraphs you dread reading); try to tell
the reader something new and useful; work in fairly big quanta of time; when
you restart, begin by rereading what you have so far; when you finish, leave
yourself something easy to start with; accumulate notes for topics you plan to
cover at the bottom of the file; don't feel obliged to cover any of them;
write for a reader who won't read the essay as carefully as you do, just as
pop songs are designed to sound ok on crappy car radios; if you say anything
mistaken, fix it immediately; ask friends which sentence you'll regret most;
go back and tone down harsh remarks; publish stuff online, because an audience
makes you write more, and thus generate more ideas; print out drafts instead
of just looking at them on the screen; use simple, germanic words; learn to
distinguish surprises from digressions; learn to recognize the approach of an
ending, and when one appears, grab it.
May 2004
_(This essay was originally published in Hackers & Painters.)_
If you wanted to get rich, how would you do it? I think your best bet would be
to start or join a startup. That's been a reliable way to get rich for
hundreds of years. The word "startup" dates from the 1960s, but what happens
in one is very similar to the venture-backed trading voyages of the Middle
Ages.
Startups usually involve technology, so much so that the phrase "high-tech
startup" is almost redundant. A startup is a small company that takes on a
hard technical problem.
Lots of people get rich knowing nothing more than that. You don't have to know
physics to be a good pitcher. But I think it could give you an edge to
understand the underlying principles. Why do startups have to be small? Will a
startup inevitably stop being a startup as it grows larger? And why do they so
often work on developing new technology? Why are there so many startups
selling new drugs or computer software, and none selling corn oil or laundry
detergent?
**The Proposition**
Economically, you can think of a startup as a way to compress your whole
working life into a few years. Instead of working at a low intensity for forty
years, you work as hard as you possibly can for four. This pays especially
well in technology, where you earn a premium for working fast.
Here is a brief sketch of the economic proposition. If you're a good hacker in
your mid twenties, you can get a job paying about $80,000 per year. So on
average such a hacker must be able to do at least $80,000 worth of work per
year for the company just to break even. You could probably work twice as many
hours as a corporate employee, and if you focus you can probably get three
times as much done in an hour. You should get another multiple of two, at
least, by eliminating the drag of the pointy-haired middle manager who would
be your boss in a big company. Then there is one more multiple: how much
smarter are you than your job description expects you to be? Suppose another
multiple of three. Combine all these multipliers, and I'm claiming you could
be 36 times more productive than you're expected to be in a random corporate
job. If a fairly good hacker is worth $80,000 a year at a big company,
then a smart hacker working very hard without any corporate bullshit to slow
him down should be able to do work worth about $3 million a year.
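(A minimal restatement of that arithmetic, for anyone who wants to poke at it. Every number is the essay's own guess, not data; change the multipliers and see where the total lands.)

```python
# The economic proposition as arithmetic. Each multiplier is the
# essay's back-of-the-envelope guess, not a measurement.
base_salary = 80_000          # good hacker, mid twenties, big company

multipliers = {
    "longer hours": 2,        # twice as many hours as a corporate employee
    "focus": 3,               # three times as much done in an hour
    "no middle managers": 2,  # eliminating the pointy-haired drag
    "smarter than the job": 3 # how much the job description underestimates you
}

total = 1
for m in multipliers.values():
    total *= m

print(total)                  # 36
print(base_salary * total)    # 2,880,000 -- "about $3 million a year"
```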
Like all back-of-the-envelope calculations, this one has a lot of wiggle room.
I wouldn't try to defend the actual numbers. But I stand by the structure of
the calculation. I'm not claiming the multiplier is precisely 36, but it is
certainly more than 10, and probably rarely as high as 100.
If $3 million a year seems high, remember that we're talking about the limit
case: the case where you not only have zero leisure time but indeed work so
hard that you endanger your health.
Startups are not magic. They don't change the laws of wealth creation. They
just represent a point at the far end of the curve. There is a conservation
law at work here: if you want to make a million dollars, you have to endure a
million dollars' worth of pain. For example, one way to make a million dollars
would be to work for the Post Office your whole life, and save every penny of
your salary. Imagine the stress of working for the Post Office for fifty
years. In a startup you compress all this stress into three or four years. You
do tend to get a certain bulk discount if you buy the economy-size pain, but
you can't evade the fundamental conservation law. If starting a startup were
easy, everyone would do it.
**Millions, not Billions**
If $3 million a year seems high to some people, it will seem low to others.
Three _million?_ How do I get to be a billionaire, like Bill Gates?
So let's get Bill Gates out of the way right now. It's not a good idea to use
famous rich people as examples, because the press only write about the very
richest, and these tend to be outliers. Bill Gates is a smart, determined, and
hardworking man, but you need more than that to make as much money as he has.
You also need to be very lucky.
There is a large random factor in the success of any company. So the guys you
end up reading about in the papers are the ones who are very smart, totally
dedicated, _and_ win the lottery. Certainly Bill is smart and dedicated, but
Microsoft also happens to have been the beneficiary of one of the most
spectacular blunders in the history of business: the licensing deal for DOS.
No doubt Bill did everything he could to steer IBM into making that blunder,
and he has done an excellent job of exploiting it, but if there had been one
person with a brain on IBM's side, Microsoft's future would have been very
different. Microsoft at that stage had little leverage over IBM. They were
effectively a component supplier. If IBM had required an exclusive license, as
they should have, Microsoft would still have signed the deal. It would still
have meant a lot of money for them, and IBM could easily have gotten an
operating system elsewhere.
Instead IBM ended up using all its power in the market to give Microsoft
control of the PC standard. From that point, all Microsoft had to do was
execute. They never had to bet the company on a bold decision. All they had to
do was play hardball with licensees and copy more innovative products
reasonably promptly.
If IBM hadn't made this mistake, Microsoft would still have been a successful
company, but it could not have grown so big so fast. Bill Gates would be rich,
but he'd be somewhere near the bottom of the Forbes 400 with the other guys
his age.
There are a lot of ways to get rich, and this essay is about only one of them.
This essay is about how to make money by creating wealth and getting paid for
it. There are plenty of other ways to get money, including chance,
speculation, marriage, inheritance, theft, extortion, fraud, monopoly, graft,
lobbying, counterfeiting, and prospecting. Most of the greatest fortunes have
probably involved several of these.
The advantage of creating wealth, as a way to get rich, is not just that it's
more legitimate (many of the other methods are now illegal) but that it's more
_straightforward._ You just have to do something people want.
**Money Is Not Wealth**
If you want to create wealth, it will help to understand what it is. Wealth is
not the same thing as money. Wealth is as old as human history. Far older,
in fact; ants have wealth. Money is a comparatively recent invention.
Wealth is the fundamental thing. Wealth is stuff we want: food, clothes,
houses, cars, gadgets, travel to interesting places, and so on. You can have
wealth without having money. If you had a magic machine that could on command
make you a car or cook you dinner or do your laundry, or do anything else you
wanted, you wouldn't need money. Whereas if you were in the middle of
Antarctica, where there is nothing to buy, it wouldn't matter how much money
you had.
Wealth is what you want, not money. But if wealth is the important thing, why
does everyone talk about making money? It is a kind of shorthand: money is a
way of moving wealth, and in practice they are usually interchangeable. But
they are not the same thing, and unless you plan to get rich by
counterfeiting, talking about _making money_ can make it harder to understand
how to make money.
Money is a side effect of specialization. In a specialized society, most of
the things you need, you can't make for yourself. If you want a potato or a
pencil or a place to live, you have to get it from someone else.
How do you get the person who grows the potatoes to give you some? By giving
him something he wants in return. But you can't get very far by trading things
directly with the people who need them. If you make violins, and none of the
local farmers wants one, how will you eat?
The solution societies find, as they get more specialized, is to make the
trade into a two-step process. Instead of trading violins directly for
potatoes, you trade violins for, say, silver, which you can then trade again
for anything else you need. The intermediate stuff-- the _medium of exchange_
-- can be anything that's rare and portable. Historically metals have been
the most common, but recently we've been using a medium of exchange, called
the _dollar_, that doesn't physically exist. It works as a medium of
exchange, however, because its rarity is guaranteed by the U.S. Government.
The advantage of a medium of exchange is that it makes trade work. The
disadvantage is that it tends to obscure what trade really means. People think
that what a business does is make money. But money is just the intermediate
stage-- just a shorthand-- for whatever people want. What most businesses
really do is make wealth. They do something people want.
**The Pie Fallacy**
A surprising number of people retain from childhood the idea that there is a
fixed amount of wealth in the world. There is, in any normal family, a fixed
amount of _money_ at any moment. But that's not the same thing.
When wealth is talked about in this context, it is often described as a pie.
"You can't make the pie larger," say politicians. When you're talking about
the amount of money in one family's bank account, or the amount available to a
government from one year's tax revenue, this is true. If one person gets more,
someone else has to get less.
I can remember believing, as a child, that if a few rich people had all the
money, it left less for everyone else. Many people seem to continue to believe
something like this well into adulthood. This fallacy is usually there in the
background when you hear someone talking about how x percent of the population
have y percent of the wealth. If you plan to start a startup, then whether you
realize it or not, you're planning to disprove the Pie Fallacy.
What leads people astray here is the abstraction of money. Money is not
wealth. It's just something we use to move wealth around. So although there
may be, in certain specific moments (like your family, this month) a fixed
amount of money available to trade with other people for things you want,
there is not a fixed amount of wealth in the world. _You can make more
wealth._ Wealth has been getting created and destroyed (but on balance,
created) for all of human history.
Suppose you own a beat-up old car. Instead of sitting on your butt next
summer, you could spend the time restoring your car to pristine condition. In
doing so you create wealth. The world is-- and you specifically are-- one
pristine old car the richer. And not just in some metaphorical way. If you
sell your car, you'll get more for it.
In restoring your old car you have made yourself richer. You haven't made
anyone else poorer. So there is obviously not a fixed pie. And in fact, when
you look at it this way, you wonder why anyone would think there was.
Kids know, without knowing they know, that they can create wealth. If you need
to give someone a present and don't have any money, you make one. But kids are
so bad at making things that they consider home-made presents to be a
distinct, inferior, sort of thing to store-bought ones-- a mere expression of
the proverbial thought that counts. And indeed, the lumpy ashtrays we made for
our parents did not have much of a resale market.
**Craftsmen**
The people most likely to grasp that wealth can be created are the ones who
are good at making things, the craftsmen. Their hand-made objects become
store-bought ones. But with the rise of industrialization there are fewer and
fewer craftsmen. One of the biggest remaining groups is computer programmers.
A programmer can sit down in front of a computer and _create wealth_. A good
piece of software is, in itself, a valuable thing. There is no manufacturing
to confuse the issue. Those characters you type are a complete, finished
product. If someone sat down and wrote a web browser that didn't suck (a fine
idea, by the way), the world would be that much richer.
Everyone in a company works together to create wealth, in the sense of making
more things people want. Many of the employees (e.g. the people in the
mailroom or the personnel department) work at one remove from the actual
making of stuff. Not the programmers. They literally think the product, one
line at a time. And so it's clearer to programmers that wealth is something
that's made, rather than being distributed, like slices of a pie, by some
imaginary Daddy.
It's also obvious to programmers that there are huge variations in the rate at
which wealth is created. At Viaweb we had one programmer who was a sort of
monster of productivity. I remember watching what he did one long day and
estimating that he had added several hundred thousand dollars to the market
value of the company. A great programmer, on a roll, could create a million
dollars worth of wealth in a couple weeks. A mediocre programmer over the same
period will generate zero or even negative wealth (e.g. by introducing bugs).
This is why so many of the best programmers are libertarians. In our world,
you sink or swim, and there are no excuses. When those far removed from the
creation of wealth-- undergraduates, reporters, politicians-- hear that the
richest 5% of the people have half the total wealth, they tend to think
_injustice!_ An experienced programmer would be more likely to think _is that
all?_ The top 5% of programmers probably write 99% of the good software.
Wealth can be created without being sold. Scientists, till recently at least,
effectively donated the wealth they created. We are all richer for knowing
about penicillin, because we're less likely to die from infections. Wealth is
whatever people want, and not dying is certainly something we want. Hackers
often donate their work by writing open source software that anyone can use
for free. I am much the richer for the operating system FreeBSD, which I'm
running on the computer I'm using now, and so is Yahoo, which runs it on all
their servers.
**What a Job Is**
In industrialized countries, people belong to one institution or another at
least until their twenties. After all those years you get used to the idea of
belonging to a group of people who all get up in the morning, go to some set
of buildings, and do things that they do not, ordinarily, enjoy doing.
Belonging to such a group becomes part of your identity: name, age, role,
institution. If you have to introduce yourself, or someone else describes you,
it will be as something like, John Smith, age 10, a student at such and such
elementary school, or John Smith, age 20, a student at such and such college.
When John Smith finishes school he is expected to get a job. And what getting
a job seems to mean is joining another institution. Superficially it's a lot
like college. You pick the companies you want to work for and apply to join
them. If one likes you, you become a member of this new group. You get up in
the morning and go to a new set of buildings, and do things that you do not,
ordinarily, enjoy doing. There are a few differences: life is not as much fun,
and you get paid, instead of paying, as you did in college. But the
similarities feel greater than the differences. John Smith is now John Smith,
22, a software developer at such and such corporation.
In fact John Smith's life has changed more than he realizes. Socially, a
company looks much like college, but the deeper you go into the underlying
reality, the more different it gets.
What a company does, and has to do if it wants to continue to exist, is earn
money. And the way most companies make money is by creating wealth. Companies
can be so specialized that this similarity is concealed, but it is not only
manufacturing companies that create wealth. A big component of wealth is
location. Remember that magic machine that could make you cars and cook you
dinner and so on? It would not be so useful if it delivered your dinner to a
random location in central Asia. If wealth means what people want, companies
that move things also create wealth. Ditto for many other kinds of companies
that don't make anything physical. Nearly all companies exist to do something
people want.
And that's what you do, as well, when you go to work for a company. But here
there is another layer that tends to obscure the underlying reality. In a
company, the work you do is averaged together with a lot of other people's.
You may not even be aware you're doing something people want. Your
contribution may be indirect. But the company as a whole must be giving people
something they want, or they won't make any money. And if they are paying you
x dollars a year, then on average you must be contributing at least x dollars
a year worth of work, or the company will be spending more than it makes, and
will go out of business.
Someone graduating from college thinks, and is told, that he needs to get a
job, as if the important thing were becoming a member of an institution. A
more direct way to put it would be: you need to start doing something people
want. You don't need to join a company to do that. All a company is is a group
of people working together to do something people want. It's doing something
people want that matters, not joining the group.
For most people the best plan probably is to go to work for some existing
company. But it is a good idea to understand what's happening when you do
this. A job means doing something people want, averaged together with everyone
else in that company.
**Working Harder**
That averaging gets to be a problem. I think the single biggest problem
afflicting large companies is the difficulty of assigning a value to each
person's work. For the most part they punt. In a big company you get paid a
fairly predictable salary for working fairly hard. You're expected not to be
obviously incompetent or lazy, but you're not expected to devote your whole
life to your work.
It turns out, though, that there are economies of scale in how much of your
life you devote to your work. In the right kind of business, someone who
really devoted himself to work could generate ten or even a hundred times as
much wealth as an average employee. A programmer, for example, instead of
chugging along maintaining and updating an existing piece of software, could
write a whole new piece of software, and with it create a new source of
revenue.
Companies are not set up to reward people who want to do this. You can't go to
your boss and say, I'd like to start working ten times as hard, so will you
please pay me ten times as much? For one thing, the official fiction is that
you are already working as hard as you can. But a more serious problem is that
the company has no way of measuring the value of your work.
Salesmen are an exception. It's easy to measure how much revenue they
generate, and they're usually paid a percentage of it. If a salesman wants to
work harder, he can just start doing it, and he will automatically get paid
proportionally more.
There is one other job besides sales where big companies can hire first-rate
people: in the top management jobs. And for the same reason: their performance
can be measured. The top managers are held responsible for the performance of
the entire company. Because an ordinary employee's performance can't usually
be measured, he is not expected to do more than put in a solid effort. Whereas
top management, like salespeople, have to actually come up with the numbers.
The CEO of a company that tanks cannot plead that he put in a solid effort. If
the company does badly, he's done badly.
A company that could pay all its employees so straightforwardly would be
enormously successful. Many employees would work harder if they could get paid
for it. More importantly, such a company would attract people who wanted to
work especially hard. It would crush its competitors.
Unfortunately, companies can't pay everyone like salesmen. Salesmen work
alone. Most employees' work is tangled together. Suppose a company makes some
kind of consumer gadget. The engineers build a reliable gadget with all kinds
of new features; the industrial designers design a beautiful case for it; and
then the marketing people convince everyone that it's something they've got to
have. How do you know how much of the gadget's sales are due to each group's
efforts? Or, for that matter, how much is due to the creators of past gadgets
that gave the company a reputation for quality? There's no way to untangle all
their contributions. Even if you could read the minds of the consumers, you'd
find these factors were all blurred together.
If you want to go faster, it's a problem to have your work tangled together
with a large number of other people's. In a large group, your performance is
not separately measurable-- and the rest of the group slows you down.
**Measurement and Leverage**
To get rich you need to get yourself in a situation with two things,
measurement and leverage. You need to be in a position where your performance
can be measured, or there is no way to get paid more by doing more. And you
have to have leverage, in the sense that the decisions you make have a big
effect.
Measurement alone is not enough. An example of a job with measurement but not
leverage is doing piecework in a sweatshop. Your performance is measured and
you get paid accordingly, but you have no scope for decisions. The only
decision you get to make is how fast you work, and that can probably only
increase your earnings by a factor of two or three.
An example of a job with both measurement and leverage would be lead actor in
a movie. Your performance can be measured in the gross of the movie. And you
have leverage in the sense that your performance can make or break it.
CEOs also have both measurement and leverage. They're measured, in that the
performance of the company is their performance. And they have leverage in
that their decisions set the whole company moving in one direction or another.
I think everyone who gets rich by their own efforts will be found to be in a
situation with measurement and leverage. Everyone I can think of does: CEOs,
movie stars, hedge fund managers, professional athletes. A good hint to the
presence of leverage is the possibility of failure. Upside must be balanced by
downside, so if there is big potential for gain there must also be a
terrifying possibility of loss. CEOs, stars, fund managers, and athletes all
live with the sword hanging over their heads; the moment they start to suck,
they're out. If you're in a job that feels safe, you are not going to get
rich, because if there is no danger there is almost certainly no leverage.
But you don't have to become a CEO or a movie star to be in a situation with
measurement and leverage. All you need to do is be part of a small group
working on a hard problem.
**Smallness = Measurement**
If you can't measure the value of the work done by individual employees, you
can get close. You can measure the value of the work done by small groups.
One level at which you can accurately measure the revenue generated by
employees is at the level of the whole company. When the company is small, you
are thereby fairly close to measuring the contributions of individual
employees. A viable startup might only have ten employees, which puts you
within a factor of ten of measuring individual effort.
Starting or joining a startup is thus as close as most people can get to
saying to one's boss, I want to work ten times as hard, so please pay me ten
times as much. There are two differences: you're not saying it to your boss,
but directly to the customers (for whom your boss is only a proxy after all),
and you're not doing it individually, but along with a small group of other
ambitious people.
It will, ordinarily, be a group. Except in a few unusual kinds of work, like
acting or writing books, you can't be a company of one person. And the people
you work with had better be good, because it's their work that yours is going
to be averaged with.
A big company is like a giant galley driven by a thousand rowers. Two things
keep the speed of the galley down. One is that individual rowers don't see any
result from working harder. The other is that, in a group of a thousand
people, the average rower is likely to be pretty average.
If you took ten people at random out of the big galley and put them in a boat
by themselves, they could probably go faster. They would have both carrot and
stick to motivate them. An energetic rower would be encouraged by the thought
that he could have a visible effect on the speed of the boat. And if someone
was lazy, the others would be more likely to notice and complain.
But the real advantage of the ten-man boat shows when you take the ten _best_
rowers out of the big galley and put them in a boat together. They will have
all the extra motivation that comes from being in a small group. But more
importantly, by selecting that small a group you can get the best rowers. Each
one will be in the top 1%. It's a much better deal for them to average their
work together with a small group of their peers than to average it with
everyone.
That's the real point of startups. Ideally, you are getting together with a
group of other people who also want to work a lot harder, and get paid a lot
more, than they would in a big company. And because startups tend to get
founded by self-selecting groups of ambitious people who already know one
another (at least by reputation), the level of measurement is more precise
than you get from smallness alone. A startup is not merely ten people, but ten
people like you.
Steve Jobs once said that the success or failure of a startup depends on the
first ten employees. I agree. If anything, it's more like the first five.
Being small is not, in itself, what makes startups kick butt, but rather that
small groups can be select. You don't want small in the sense of a village,
but small in the sense of an all-star team.
The larger a group, the closer its average member will be to the average for
the population as a whole. So all other things being equal, a very able person
in a big company is probably getting a bad deal, because his performance is
dragged down by the overall lower performance of the others. Of course, all
other things often are not equal: the able person may not care about money, or
may prefer the stability of a large company. But a very able person who does
care about money will ordinarily do better to go off and work with a small
group of peers.
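(For the statistically inclined, here is a minimal simulation of that averaging argument. It assumes, purely for illustration, that ability is normally distributed; nothing in the argument depends on that.)

```python
# Simulating the galley argument: the bigger the randomly chosen
# crew, the closer its average sits to the population average. Only
# a small *selected* crew escapes the averaging. The normal
# distribution of ability is an assumption made for illustration.
import random

random.seed(0)
population = [random.gauss(100, 15) for _ in range(100_000)]

def avg(xs):
    return sum(xs) / len(xs)

galley     = random.sample(population, 1000)        # big company
random_ten = random.sample(population, 10)          # small but unselected
star_ten   = sorted(population, reverse=True)[:10]  # the ten best rowers

print(f"galley of 1000:  {avg(galley):6.1f}")     # ~100, the population mean
print(f"ten at random:   {avg(random_ten):6.1f}") # noisier, still near 100
print(f"ten best rowers: {avg(star_ten):6.1f}")   # far above the mean
```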
**Technology = Leverage**
Startups offer anyone a way to be in a situation with measurement and
leverage. They allow measurement because they're small, and they offer
leverage because they make money by inventing new technology.
What is technology? It's _technique_. It's the way we all do things. And when
you discover a new way to do things, its value is multiplied by all the people
who use it. It is the proverbial fishing rod, rather than the fish. That's the
difference between a startup and a restaurant or a barber shop. You fry eggs
or cut hair one customer at a time. Whereas if you solve a technical problem
that a lot of people care about, you help everyone who uses your solution.
That's leverage.
If you look at history, it seems that most people who got rich by creating
wealth did it by developing new technology. You just can't fry eggs or cut
hair fast enough. What made the Florentines rich in 1200 was the discovery of
new techniques for making the high-tech product of the time, fine woven cloth.
What made the Dutch rich in 1600 was the discovery of shipbuilding and
navigation techniques that enabled them to dominate the seas of the Far East.
Fortunately there is a natural fit between smallness and solving hard
problems. The leading edge of technology moves fast. Technology that's
valuable today could be worthless in a couple years. Small companies are more
at home in this world, because they don't have layers of bureaucracy to slow
them down. Also, technical advances tend to come from unorthodox approaches,
and small companies are less constrained by convention.
Big companies can develop technology. They just can't do it quickly. Their
size makes them slow and prevents them from rewarding employees for the
extraordinary effort required. So in practice big companies only get to
develop technology in fields where large capital requirements prevent startups
from competing with them, like microprocessors, power plants, or passenger
aircraft. And even in those fields they depend heavily on startups for
components and ideas.
It's obvious that biotech or software startups exist to solve hard technical
problems, but I think it will also be found to be true in businesses that
don't seem to be about technology. McDonald's, for example, grew big by
designing a system, the McDonald's franchise, that could then be reproduced at
will all over the face of the earth. A McDonald's franchise is controlled by
rules so precise that it is practically a piece of software. Write once, run
everywhere. Ditto for Wal-Mart. Sam Walton got rich not by being a retailer,
but by designing a new kind of store.
Use difficulty as a guide not just in selecting the overall aim of your
company, but also at decision points along the way. At Viaweb one of our rules
of thumb was _run upstairs._ Suppose you are a little, nimble guy being chased
by a big, fat bully. You open a door and find yourself in a staircase. Do you
go up or down? I say up. The bully can probably run downstairs as fast as you
can. Going upstairs his bulk will be more of a disadvantage. Running upstairs
is hard for you but even harder for him.
What this meant in practice was that we deliberately sought hard problems. If
there were two features we could add to our software, both equally valuable in
proportion to their difficulty, we'd always take the harder one. Not just
because it was more valuable, but _because it was harder._ We delighted in
forcing bigger, slower competitors to follow us over difficult ground. Like
guerillas, startups prefer the difficult terrain of the mountains, where the
troops of the central government can't follow. I can remember times when we
were just exhausted after wrestling all day with some horrible technical
problem. And I'd be delighted, because something that was hard for us would be
impossible for our competitors.
This is not just a good way to run a startup. It's what a startup is. Venture
capitalists know about this and have a phrase for it: _barriers to entry._ If
you go to a VC with a new idea and ask him to invest in it, one of the first
things he'll ask is, how hard would this be for someone else to develop? That
is, how much difficult ground have you put between yourself and potential
pursuers? And you had better have a convincing explanation of why your
technology would be hard to duplicate. Otherwise as soon as some big company
becomes aware of it, they'll make their own, and with their brand name,
capital, and distribution clout, they'll take away your market overnight.
You'd be like guerillas caught in the open field by regular army forces.
One way to put up barriers to entry is through patents. But patents may not
provide much protection. Competitors commonly find ways to work around a
patent. And if they can't, they may simply violate it and invite you to sue
them. A big company is not afraid to be sued; it's an everyday thing for them.
They'll make sure that suing them is expensive and takes a long time. Ever
heard of Philo Farnsworth? He invented television. The reason you've never
heard of him is that his company was not the one to make money from it.
The company that did was RCA, and Farnsworth's reward for his efforts was a
decade of patent litigation.
Here, as so often, the best defense is a good offense. If you can develop
technology that's simply too hard for competitors to duplicate, you don't need
to rely on other defenses. Start by picking a hard problem, and then at every
decision point, take the harder choice.
**The Catch(es)**
If it were simply a matter of working harder than an ordinary employee and
getting paid proportionately, it would obviously be a good deal to start a
startup. Up to a point it would be more fun. I don't think many people like
the slow pace of big companies, the interminable meetings, the water-cooler
conversations, the clueless middle managers, and so on.
Unfortunately there are a couple catches. One is that you can't choose the
point on the curve that you want to inhabit. You can't decide, for example,
that you'd like to work just two or three times as hard, and get paid that
much more. When you're running a startup, your competitors decide how hard you
work. And they pretty much all make the same decision: as hard as you possibly
can.
The other catch is that the payoff is only on average proportionate to your
productivity. There is, as I said before, a large random multiplier in the
success of any company. So in practice the deal is not that you're 30 times as
productive and get paid 30 times as much. It is that you're 30 times as
productive, and get paid between zero and a thousand times as much. If the
mean is 30x, the median is probably zero. Most startups tank, and not just the
dogfood portals we all heard about during the Internet Bubble. It's common for
a startup to be developing a genuinely good product, take slightly too long to
do it, run out of money, and have to shut down.
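To make that distribution concrete, here is a minimal simulation. The numbers are invented: assume 5% of startups return 600x and the rest return nothing, figures chosen only so that the mean comes out to 30x:

```python
import random

# Hypothetical all-or-nothing payoff: 5% of startups return 600x,
# the rest return 0x. Both figures are made up, picked so that the
# mean is 0.05 * 600 = 30x while the median is zero.
def payoff():
    return 600 if random.random() < 0.05 else 0

samples = sorted(payoff() for _ in range(100_000))
mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
print(f"mean:   {mean:.0f}x")   # ~30x
print(f"median: {median}x")     # 0x
```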
A startup is like a mosquito. A bear can absorb a hit and a crab is armored
against one, but a mosquito is designed for one thing: to score. No energy is
wasted on defense. The defense of mosquitos, as a species, is that there are a
lot of them, but this is little consolation to the individual mosquito.
Startups, like mosquitos, tend to be an all-or-nothing proposition. And you
don't generally know which of the two you're going to get till the last
minute. Viaweb came close to tanking several times. Our trajectory was like a
sine wave. Fortunately we got bought at the top of the cycle, but it was
damned close. While we were visiting Yahoo in California to talk about selling
the company to them, we had to borrow a conference room to reassure an
investor who was about to back out of a new round of funding that we needed to
stay alive.
The all-or-nothing aspect of startups was not something we wanted. Viaweb's
hackers were all extremely risk-averse. If there had been some way just to
work super hard and get paid for it, without having a lottery mixed in, we
would have been delighted. We would have much preferred a 100% chance of $1
million to a 20% chance of $10 million, even though theoretically the second
is worth twice as much. Unfortunately, there is not currently any space in the
business world where you can get the first deal.
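Spelling out the arithmetic behind "worth twice as much":

$$0.20 \times \$10\,\text{million} = \$2\,\text{million}, \qquad 1.00 \times \$1\,\text{million} = \$1\,\text{million}$$

The risky deal has twice the expected value, but that is little comfort when the most likely single outcome is zero.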
The closest you can get is by selling your startup in the early stages, giving
up upside (and risk) for a smaller but guaranteed payoff. We had a chance to
do this, and stupidly, as we then thought, let it slip by. After that we
became comically eager to sell. For the next year or so, if anyone expressed
the slightest curiosity about Viaweb we would try to sell them the company.
But there were no takers, so we had to keep going.
It would have been a bargain to buy us at an early stage, but companies doing
acquisitions are not looking for bargains. A company big enough to acquire
startups will be big enough to be fairly conservative, and within the company
the people in charge of acquisitions will be among the more conservative,
because they are likely to be business school types who joined the company
late. They would rather overpay for a safe choice. So it is easier to sell an
established startup, even at a large premium, than an early-stage one.
**Get Users**
I think it's a good idea to get bought, if you can. Running a business is
different from growing one. It is just as well to let a big company take over
once you reach cruising altitude. It's also financially wiser, because selling
allows you to diversify. What would you think of a financial advisor who put
all his client's assets into one volatile stock?
How do you get bought? Mostly by doing the same things you'd do if you didn't
intend to sell the company. Being profitable, for example. But getting bought
is also an art in its own right, and one that we spent a lot of time trying to
master.
Potential buyers will always delay if they can. The hard part about getting
bought is getting them to act. For most people, the most powerful motivator is
not the hope of gain, but the fear of loss. For potential acquirers, the most
powerful motivator is the prospect that one of their competitors will buy you.
This, as we found, causes CEOs to take red-eyes. The second biggest is the
worry that, if they don't buy you now, you'll continue to grow rapidly and
will cost more to acquire later, or even become a competitor.
In both cases, what it all comes down to is users. You'd think that a company
about to buy you would do a lot of research and decide for themselves how
valuable your technology was. Not at all. What they go by is the number of
users you have.
In effect, acquirers assume the customers know who has the best technology.
And this is not as stupid as it sounds. Users are the only real proof that
you've created wealth. Wealth is what people want, and if people aren't using
your software, maybe it's not just because you're bad at marketing. Maybe it's
because you haven't made what they want.
Venture capitalists have a list of danger signs to watch out for. Near the top
is the company run by techno-weenies who are obsessed with solving interesting
technical problems, instead of making users happy. In a startup, you're not
just trying to solve problems. You're trying to solve problems _that users
care about._
So I think you should make users the test, just as acquirers do. Treat a
startup as an optimization problem in which performance is measured by number
of users. As anyone who has tried to optimize software knows, the key is
measurement. When you try to guess where your program is slow, and what would
make it faster, you almost always guess wrong.
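The software half of that analogy is easy to make concrete. Here is a minimal sketch using Python's built-in profiler; the function and output file names are invented for illustration:

```python
import cProfile
import pstats

def build_report(n=300):
    # A deliberately naive hot spot: repeated string concatenation
    # in a loop, which copies the whole string each time.
    out = ""
    for i in range(n):
        out += str(i) * 10_000
    return out

# Measure instead of guessing: profile the call, then list the top
# entries by cumulative time to see where the program is actually slow.
cProfile.run("build_report()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```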
Number of users may not be the perfect test, but it will be very close. It's
what acquirers care about. It's what revenues depend on. It's what makes
competitors unhappy. It's what impresses reporters, and potential new users.
Certainly it's a better test than your a priori notions of what problems are
important to solve, no matter how technically adept you are.
Among other things, treating a startup as an optimization problem will help
you avoid another pitfall that VCs worry about, and rightly so: taking a long
time to develop a product. Now we can recognize this as something hackers
already know to avoid: premature optimization. Get a version 1.0 out there as
soon as you can. Until you have some users to measure, you're optimizing based
on guesses.
The ball you need to keep your eye on here is the underlying principle that
wealth is what people want. If you plan to get rich by creating wealth, you
have to know what people want. So few businesses really pay attention to
making customers happy. How often do you walk into a store, or call a company
on the phone, with a feeling of dread in the back of your mind? When you hear
"your call is important to us, please stay on the line," do you think, oh
good, now everything will be all right?
A restaurant can afford to serve the occasional burnt dinner. But in
technology, you cook one thing and that's what everyone eats. So any
difference between what people want and what you deliver is multiplied. You
please or annoy customers wholesale. The closer you can get to what they want,
the more wealth you generate.
**Wealth and Power**
Making wealth is not the only way to get rich. For most of human history it
has not even been the most common. Until a few centuries ago, the main sources
of wealth were mines, slaves and serfs, land, and cattle, and the only ways to
acquire these rapidly were by inheritance, marriage, conquest, or
confiscation. Naturally wealth had a bad reputation.
Two things changed. The first was the rule of law. For most of the world's
history, if you did somehow accumulate a fortune, the ruler or his henchmen
would find a way to steal it. But in medieval Europe something new happened. A
new class of merchants and manufacturers began to collect in towns.
Together they were able to withstand the local feudal lord. So for the first
time in our history, the bullies stopped stealing the nerds' lunch money. This
was naturally a great incentive, and possibly indeed the main cause of the
second big change, industrialization.
A great deal has been written about the causes of the Industrial Revolution.
But surely a necessary, if not sufficient, condition was that people who made
fortunes be able to enjoy them in peace. One piece of evidence is what
happened to countries that tried to return to the old model, like the Soviet
Union, and to a lesser extent Britain under the labor governments of the 1960s
and early 1970s. Take away the incentive of wealth, and technical innovation
grinds to a halt.
Remember what a startup is, economically: a way of saying, I want to work
faster. Instead of accumulating money slowly by being paid a regular wage for
fifty years, I want to get it over with as soon as possible. So governments
that forbid you to accumulate wealth are in effect decreeing that you work
slowly. They're willing to let you earn $3 million over fifty years, but
they're not willing to let you work so hard that you can do it in two. They
are like the corporate boss that you can't go to and say, I want to work ten
times as hard, so please pay me ten times as much. Except this is not a boss
you can escape by starting your own company.
The problem with working slowly is not just that technical innovation happens
slowly. It's that it tends not to happen at all. It's only when you're
deliberately looking for hard problems, as a way to use speed to the greatest
advantage, that you take on this kind of project. Developing new technology is
a pain in the ass. It is, as Edison said, one percent inspiration and ninety-
nine percent perspiration. Without the incentive of wealth, no one wants to do
it. Engineers will work on sexy projects like fighter planes and moon rockets
for ordinary salaries, but more mundane technologies like light bulbs or
semiconductors have to be developed by entrepreneurs.
Startups are not just something that happened in Silicon Valley in the last
couple decades. Since it became possible to get rich by creating wealth,
everyone who has done it has used essentially the same recipe: measurement and
leverage, where measurement comes from working with a small group, and
leverage from developing new techniques. The recipe was the same in Florence
in 1200 as it is in Santa Clara today.
Understanding this may help to answer an important question: why Europe grew
so powerful. Was it something about the geography of Europe? Was it that
Europeans are somehow racially superior? Was it their religion? The answer (or
at least the proximate cause) may be that the Europeans rode on the crest of a
powerful new idea: allowing those who made a lot of money to keep it.
Once you're allowed to do that, people who want to get rich can do it by
generating wealth instead of stealing it. The resulting technological growth
translates not only into wealth but into military power. The theory that led
to the stealth plane was developed by a Soviet mathematician. But because the
Soviet Union didn't have a computer industry, it remained for them a theory;
they didn't have hardware capable of executing the calculations fast enough to
design an actual airplane.
In that respect the Cold War teaches the same lesson as World War II and, for
that matter, most wars in recent history. Don't let a ruling class of warriors
and politicians squash the entrepreneurs. The same recipe that makes
individuals rich makes countries powerful. Let the nerds keep their lunch
money, and you rule the world.
September 2007
In high school I decided I was going to study philosophy in college. I had
several motives, some more honorable than others. One of the less honorable
was to shock people. College was regarded as job training where I grew up, so
studying philosophy seemed an impressively impractical thing to do. Sort of
like slashing holes in your clothes or putting a safety pin through your ear,
which were other forms of impressive impracticality then just coming into
fashion.
But I had some more honest motives as well. I thought studying philosophy
would be a shortcut straight to wisdom. All the people majoring in other
things would just end up with a bunch of domain knowledge. I would be learning
what was really what.
I'd tried to read a few philosophy books. Not recent ones; you wouldn't find
those in our high school library. But I tried to read Plato and Aristotle. I
doubt I believed I understood them, but they sounded like they were talking
about something important. I assumed I'd learn what in college.
The summer before senior year I took some college classes. I learned a lot in
the calculus class, but I didn't learn much in Philosophy 101. And yet my
plan to study philosophy remained intact. It was my fault I hadn't learned
anything. I hadn't read the books we were assigned carefully enough. I'd give
Berkeley's _Principles of Human Knowledge_ another shot in college. Anything
so admired and so difficult to read must have something in it, if one could
only figure out what.
Twenty-six years later, I still don't understand Berkeley. I have a nice
edition of his collected works. Will I ever read it? Seems unlikely.
The difference between then and now is that now I understand why Berkeley is
probably not worth trying to understand. I think I see now what went wrong
with philosophy, and how we might fix it.
**Words**
I did end up being a philosophy major for most of college. It didn't work out
as I'd hoped. I didn't learn any magical truths compared to which everything
else was mere domain knowledge. But I do at least know now why I didn't.
Philosophy doesn't really have a subject matter in the way math or history or
most other university subjects do. There is no core of knowledge one must
master. The closest you come to that is a knowledge of what various individual
philosophers have said about different topics over the years. Few were
sufficiently correct that people have forgotten who discovered what they
discovered.
Formal logic has some subject matter. I took several classes in logic. I don't
know if I learned anything from them. It does seem to me very important to
be able to flip ideas around in one's head: to see when two ideas don't fully
cover the space of possibilities, or when one idea is the same as another but
with a couple things changed. But did studying logic teach me the importance
of thinking this way, or make me any better at it? I don't know.
There are things I know I learned from studying philosophy. The most dramatic
I learned immediately, in the first semester of freshman year, in a class
taught by Sydney Shoemaker. I learned that I don't exist. I am (and you are) a
collection of cells that lurches around driven by various forces, and calls
itself _I_. But there's no central, indivisible thing that your identity goes
with. You could conceivably lose half your brain and live. Which means your
brain could conceivably be split into two halves and each transplanted into
different bodies. Imagine waking up after such an operation. You have to
imagine being two people.
The real lesson here is that the concepts we use in everyday life are fuzzy,
and break down if pushed too hard. Even a concept as dear to us as _I_. It
took me a while to grasp this, but when I did it was fairly sudden, like
someone in the nineteenth century grasping evolution and realizing the story
of creation they'd been told as a child was all wrong. Outside of math
there's a limit to how far you can push words; in fact, it would not be a bad
definition of math to call it the study of terms that have precise meanings.
Everyday words are inherently imprecise. They work well enough in everyday
life that you don't notice. Words seem to work, just as Newtonian physics
seems to. But you can always make them break if you push them far enough.
I would say that this has been, unfortunately for philosophy, the central fact
of philosophy. Most philosophical debates are not merely afflicted by but
driven by confusions over words. Do we have free will? Depends what you mean
by "free." Do abstract ideas exist? Depends what you mean by "exist."
Wittgenstein is popularly credited with the idea that most philosophical
controversies are due to confusions over language. I'm not sure how much
credit to give him. I suspect a lot of people realized this, but reacted
simply by not studying philosophy, rather than becoming philosophy professors.
How did things get this way? Can something people have spent thousands of
years studying really be a waste of time? Those are interesting questions. In
fact, some of the most interesting questions you can ask about philosophy. The
most valuable way to approach the current philosophical tradition may be
neither to get lost in pointless speculations like Berkeley's, nor to shut them
down like Wittgenstein, but to study it as an example of reason gone wrong.
**History**
Western philosophy really begins with Socrates, Plato, and Aristotle. What we
know of their predecessors comes from fragments and references in later works;
their doctrines could be described as speculative cosmology that occasionally
strays into analysis. Presumably they were driven by whatever makes people in
every other society invent cosmologies.
With Socrates, Plato, and particularly Aristotle, this tradition turned a
corner. There started to be a lot more analysis. I suspect Plato and Aristotle
were encouraged in this by progress in math. Mathematicians had by then shown
that you could figure things out in a much more conclusive way than by making
up fine sounding stories about them.
People talk so much about abstractions now that we don't realize what a leap
it must have been when they first started to. It was presumably many thousands
of years between when people first started describing things as hot or cold
and when someone asked "what is heat?" No doubt it was a very gradual process.
We don't know if Plato or Aristotle were the first to ask any of the questions
they did. But their works are the oldest we have that do this on a large
scale, and there is a freshness (not to say naivete) about them that suggests
some of the questions they asked were new to them, at least.
Aristotle in particular reminds me of the phenomenon that happens when people
discover something new, and are so excited by it that they race through a huge
percentage of the newly discovered territory in one lifetime. If so, that's
evidence of how new this kind of thinking was.
This is all to explain how Plato and Aristotle can be very impressive and yet
naive and mistaken. It was impressive even to ask the questions they did. That
doesn't mean they always came up with good answers. It's not considered
insulting to say that ancient Greek mathematicians were naive in some
respects, or at least lacked some concepts that would have made their lives
easier. So I hope people will not be too offended if I propose that ancient
philosophers were similarly naive. In particular, they don't seem to have
fully grasped what I earlier called the central fact of philosophy: that words
break if you push them too far.
"Much to the surprise of the builders of the first digital computers," Rod
Brooks wrote, "programs written for them usually did not work." Something
similar happened when people first started trying to talk about abstractions.
Much to their surprise, they didn't arrive at answers they agreed upon. In
fact, they rarely seemed to arrive at answers at all.
They were in effect arguing about artifacts induced by sampling at too low a
resolution.
The proof of how useless some of their answers turned out to be is how little
effect they have. No one after reading Aristotle's _Metaphysics_ does anything
differently as a result.
Surely I'm not claiming that ideas have to have practical applications to be
interesting? No, they may not have to. Hardy's boast that number theory had no
use whatsoever wouldn't disqualify it. But he turned out to be mistaken. In
fact, it's suspiciously hard to find a field of math that truly has no
practical use. And Aristotle's explanation of the ultimate goal of philosophy
in Book A of the _Metaphysics_ implies that philosophy should be useful too.
**Theoretical Knowledge**
Aristotle's goal was to find the most general of general principles. The
examples he gives are convincing: an ordinary worker builds things a certain
way out of habit; a master craftsman can do more because he grasps the
underlying principles. The trend is clear: the more general the knowledge, the
more admirable it is. But then he makes a mistake—possibly the most important
mistake in the history of philosophy. He has noticed that theoretical
knowledge is often acquired for its own sake, out of curiosity, rather than
for any practical need. So he proposes there are two kinds of theoretical
knowledge: some that's useful in practical matters and some that isn't. Since
people interested in the latter are interested in it for its own sake, it must
be more noble. So he sets as his goal in the _Metaphysics_ the exploration of
knowledge that has no practical use. Which means no alarms go off when he
takes on grand but vaguely understood questions and ends up getting lost in a
sea of words.
His mistake was to confuse motive and result. Certainly, people who want a
deep understanding of something are often driven by curiosity rather than any
practical need. But that doesn't mean what they end up learning is useless.
It's very valuable in practice to have a deep understanding of what you're
doing; even if you're never called on to solve advanced problems, you can see
shortcuts in the solution of simple ones, and your knowledge won't break down
in edge cases, as it would if you were relying on formulas you didn't
understand. Knowledge is power. That's what makes theoretical knowledge
prestigious. It's also what causes smart people to be curious about certain
things and not others; our DNA is not so disinterested as we might think.
So while ideas don't have to have immediate practical applications to be
interesting, the kinds of things we find interesting will surprisingly often
turn out to have practical applications.
The reason Aristotle didn't get anywhere in the _Metaphysics_ was partly that
he set off with contradictory aims: to explore the most abstract ideas, guided
by the assumption that they were useless. He was like an explorer looking for
a territory to the north of him, starting with the assumption that it was
located to the south.
And since his work became the map used by generations of future explorers, he
sent them off in the wrong direction as well. Perhaps worst of all, he
protected them from both the criticism of outsiders and the promptings of
their own inner compass by establishing the principle that the most noble sort
of theoretical knowledge had to be useless.
The _Metaphysics_ is mostly a failed experiment. A few ideas from it turned
out to be worth keeping; the bulk of it has had no effect at all. The
_Metaphysics_ is among the least read of all famous books. It's hard to
understand, not the way Newton's _Principia_ is, but the way a garbled message is.
Arguably it's an interesting failed experiment. But unfortunately that was not
the conclusion Aristotle's successors derived from works like the
_Metaphysics_. Soon after, the western world fell on intellectual hard
times. Instead of version 1s to be superseded, the works of Plato and
Aristotle became revered texts to be mastered and discussed. And so things
remained for a shockingly long time. It was not till around 1600 (in Europe,
where the center of gravity had shifted by then) that one found people
confident enough to treat Aristotle's work as a catalog of mistakes. And even
then they rarely said so outright.
If it seems surprising that the gap was so long, consider how little progress
there was in math between Hellenistic times and the Renaissance.
In the intervening years an unfortunate idea took hold: that it was not only
acceptable to produce works like the _Metaphysics_ , but that it was a
particularly prestigious line of work, done by a class of people called
philosophers. No one thought to go back and debug Aristotle's motivating
argument. And so instead of correcting the problem Aristotle discovered by
falling into it—that you can easily get lost if you talk too loosely about
very abstract ideas—they continued to fall into it.
**The Singularity**
Curiously, however, the works they produced continued to attract new readers.
Traditional philosophy occupies a kind of singularity in this respect. If you
write in an unclear way about big ideas, you produce something that seems
tantalizingly attractive to inexperienced but intellectually ambitious
students. Till one knows better, it's hard to distinguish something that's
hard to understand because the writer was unclear in his own mind from
something like a mathematical proof that's hard to understand because the
ideas it represents are hard to understand. To someone who hasn't learned the
difference, traditional philosophy seems extremely attractive: as hard (and
therefore impressive) as math, yet broader in scope. That was what lured me in
as a high school student.
This singularity is even more singular in having its own defense built in.
When things are hard to understand, people who suspect they're nonsense
generally keep quiet. There's no way to prove a text is meaningless. The
closest you can get is to show that the official judges of some class of texts
can't distinguish them from placebos.
And so instead of denouncing philosophy, most people who suspected it was a
waste of time just studied other things. That alone is fairly damning
evidence, considering philosophy's claims. It's supposed to be about the
ultimate truths. Surely all smart people would be interested in it, if it
delivered on that promise.
Because philosophy's flaws turned away the sort of people who might have
corrected them, they tended to be self-perpetuating. Bertrand Russell wrote in
a letter in 1912:
> Hitherto the people attracted to philosophy have been mostly those who loved
> the big generalizations, which were all wrong, so that few people with exact
> minds have taken up the subject.
His response was to launch Wittgenstein at it, with dramatic results.
I think Wittgenstein deserves to be famous not for the discovery that most
previous philosophy was a waste of time, which judging from the circumstantial
evidence must have been made by every smart person who studied a little
philosophy and declined to pursue it further, but for how he acted in
response. Instead of quietly switching to another field, he made a fuss,
from inside. He was Gorbachev.
The field of philosophy is still shaken from the fright Wittgenstein gave it.
Later in life he spent a lot of time talking about how words worked.
Since that seems to be allowed, that's what a lot of philosophers do now.
Meanwhile, sensing a vacuum in the metaphysical speculation department, the
people who used to do literary criticism have been edging Kantward, under new
names like "literary theory," "critical theory," and when they're feeling
ambitious, plain "theory." The writing is the familiar word salad:
> Gender is not like some of the other grammatical modes which express
> precisely a mode of conception without any reality that corresponds to the
> conceptual mode, and consequently do not express precisely something in
> reality by which the intellect could be moved to conceive a thing the way it
> does, even where that motive is not something in the thing as such.
The singularity I've described is not going away. There's a market for writing
that sounds impressive and can't be disproven. There will always be both
supply and demand. So if one group abandons this territory, there will always
be others ready to occupy it.
**A Proposal**
We may be able to do better. Here's an intriguing possibility. Perhaps we
should do what Aristotle meant to do, instead of what he did. The goal he
announces in the _Metaphysics_ seems one worth pursuing: to discover the most
general truths. That sounds good. But instead of trying to discover them
because they're useless, let's try to discover them because they're useful.
I propose we try again, but that we use that heretofore despised criterion,
applicability, as a guide to keep us from wandering off into a swamp of
abstractions. Instead of trying to answer the question:
> What are the most general truths?
let's try to answer the question
> Of all the useful things we can say, which are the most general?
The test of utility I propose is whether we cause people who read what we've
written to do anything differently afterward. Knowing we have to give definite
(if implicit) advice will keep us from straying beyond the resolution of the
words we're using.
The goal is the same as Aristotle's; we just approach it from a different
direction.
As an example of a useful, general idea, consider that of the controlled
experiment. There's an idea that has turned out to be widely applicable. Some
might say it's part of science, but it's not part of any specific science;
it's literally meta-physics (in our sense of "meta"). The idea of evolution is
another. It turns out to have quite broad applications—for example, in genetic
algorithms and even product design. Frankfurt's distinction between lying and
bullshitting seems a promising recent example.
These seem to me what philosophy should look like: quite general observations
that would cause someone who understood them to do something differently.
Such observations will necessarily be about things that are imprecisely
defined. Once you start using words with precise meanings, you're doing math.
So starting from utility won't entirely solve the problem I described above—it
won't flush out the metaphysical singularity. But it should help. It gives
people with good intentions a new roadmap into abstraction. And they may
thereby produce things that make the writing of the people with bad intentions
look bad by comparison.
One drawback of this approach is that it won't produce the sort of writing
that gets you tenure. And not just because it's not currently the fashion. In
order to get tenure in any field you must not arrive at conclusions that
members of tenure committees can disagree with. In practice there are two
kinds of solutions to this problem. In math and the sciences, you can prove
what you're saying, or at any rate adjust your conclusions so you're not
claiming anything false ("6 of 8 subjects had lower blood pressure after the
treatment"). In the humanities you can either avoid drawing any definite
conclusions (e.g. conclude that an issue is a complex one), or draw
conclusions so narrow that no one cares enough to disagree with you.
The kind of philosophy I'm advocating won't be able to take either of these
routes. At best you'll be able to achieve the essayist's standard of proof,
not the mathematician's or the experimentalist's. And yet you won't be able to
meet the usefulness test without implying definite and fairly broadly
applicable conclusions. Worse still, the usefulness test will tend to produce
results that annoy people: there's no use in telling people things they
already believe, and people are often upset to be told things they don't.
Here's the exciting thing, though. Anyone can do this. Getting to general plus
useful by starting with useful and cranking up the generality may be
unsuitable for junior professors trying to get tenure, but it's better for
everyone else, including professors who already have it. This side of the
mountain is a nice gradual slope. You can start by writing things that are
useful but very specific, and then gradually make them more general. Joe's has
good burritos. What makes a good burrito? What makes good food? What makes
anything good? You can take as long as you want. You don't have to get all the
way to the top of the mountain. You don't have to tell anyone you're doing
philosophy.
If it seems like a daunting task to do philosophy, here's an encouraging
thought. The field is a lot younger than it seems. Though the first
philosophers in the western tradition lived about 2500 years ago, it would be
misleading to say the field is 2500 years old, because for most of that time
the leading practitioners weren't doing much more than writing commentaries on
Plato or Aristotle while watching over their shoulders for the next invading
army. In the times when they weren't, philosophy was hopelessly intermingled
with religion. It didn't shake itself free till a couple hundred years ago,
and even then was afflicted by the structural problems I've described above.
If I say this, some will say it's a ridiculously overbroad and uncharitable
generalization, and others will say it's old news, but here goes: judging from
their works, most philosophers up to the present have been wasting their time.
So in a sense the field is still at the first step.
That sounds a preposterous claim to make. It won't seem so preposterous in
10,000 years. Civilization always seems old, because it's always the oldest
it's ever been. The only way to say whether something is really old or not is
by looking at structural evidence, and structurally philosophy is young; it's
still reeling from the unexpected breakdown of words.
Philosophy is as young now as math was in 1500. There is a lot more to
discover.
October 2006
In the Q & A period after a recent talk, someone asked what made startups
fail. After standing there gaping for a few seconds I realized this was kind
of a trick question. It's equivalent to asking how to make a startup succeed —
if you avoid every cause of failure, you succeed — and that's too big a
question to answer on the fly.
Afterwards I realized it could be helpful to look at the problem from this
direction. If you have a list of all the things you shouldn't do, you can turn
that into a recipe for succeeding just by negating. And this form of list may
be more useful in practice. It's easier to catch yourself doing something you
shouldn't than always to remember to do something you should.
In a sense there's just one mistake that kills startups: not making something
users want. If you make something users want, you'll probably be fine,
whatever else you do or don't do. And if you don't make something users want,
then you're dead, whatever else you do or don't do. So really this is a list
of 18 things that cause startups not to make something users want. Nearly all
failure funnels through that.
**1. Single Founder**
Have you ever noticed how few successful startups were founded by just one
person? Even companies you think of as having one founder, like Oracle,
usually turn out to have more. It seems unlikely this is a coincidence.
What's wrong with having one founder? To start with, it's a vote of no
confidence. It probably means the founder couldn't talk any of his friends
into starting the company with him. That's pretty alarming, because his
friends are the ones who know him best.
But even if the founder's friends were all wrong and the company is a good
bet, he's still at a disadvantage. Starting a startup is too hard for one
person. Even if you could do all the work yourself, you need colleagues to
brainstorm with, to talk you out of stupid decisions, and to cheer you up when
things go wrong.
The last one might be the most important. The low points in a startup are so
low that few could bear them alone. When you have multiple founders, esprit de
corps binds them together in a way that seems to violate conservation laws.
Each thinks "I can't let my friends down." This is one of the most powerful
forces in human nature, and it's missing when there's just one founder.
**2. Bad Location**
Startups prosper in some places and not others. Silicon Valley dominates, then
Boston, then Seattle, Austin, Denver, and New York. After that there's not
much. Even in New York the number of startups per capita is probably a 20th of
what it is in Silicon Valley. In towns like Houston and Chicago and Detroit
it's too small to measure.
Why is the falloff so sharp? Probably for the same reason it is in other
industries. What's the sixth largest fashion center in the US? The sixth
largest center for oil, or finance, or publishing? Whatever they are they're
probably so far from the top that it would be misleading even to call them
centers.
It's an interesting question why cities become startup hubs, but the reason
startups prosper in them is probably the same as it is for any industry:
that's where the experts are. Standards are higher; people are more
sympathetic to what you're doing; the kind of people you want to hire want to
live there; supporting industries are there; the people you run into in chance
meetings are in the same business. Who knows exactly how these factors combine
to boost startups in Silicon Valley and squish them in Detroit, but it's clear
they do from the number of startups per capita in each.
**3. Marginal Niche**
Most of the groups that apply to Y Combinator suffer from a common problem:
choosing a small, obscure niche in the hope of avoiding competition.
If you watch little kids playing sports, you notice that below a certain age
they're afraid of the ball. When the ball comes near them their instinct is to
avoid it. I didn't make a lot of catches as an eight-year-old outfielder,
because whenever a fly ball came my way, I used to close my eyes and hold my
glove up more for protection than in the hope of catching it.
Choosing a marginal project is the startup equivalent of my eight-year-old
strategy for dealing with fly balls. If you make anything good, you're going
to have competitors, so you may as well face that. You can only avoid
competition by avoiding good ideas.
I think this shrinking from big problems is mostly unconscious. It's not that
people think of grand ideas but decide to pursue smaller ones because they
seem safer. Your unconscious won't even let you think of grand ideas. So the
solution may be to think about ideas without involving yourself. What would be
a great idea for _someone else_ to do as a startup?
**4. Derivative Idea**
Many of the applications we get are imitations of some existing company.
That's one source of ideas, but not the best. If you look at the origins of
successful startups, few were started in imitation of some other startup.
Where did they get their ideas? Usually from some specific, unsolved problem
the founders identified.
Our startup made software for making online stores. When we started it, there
wasn't any; the few sites you could order from were hand-made at great expense
by web consultants. We knew that if online shopping ever took off, these sites
would have to be generated by software, so we wrote some. Pretty
straightforward.
It seems like the best problems to solve are ones that affect you personally.
Apple happened because Steve Wozniak wanted a computer, Google because Larry
and Sergey couldn't find stuff online, Hotmail because Sabeer Bhatia and Jack
Smith couldn't exchange email at work.
So instead of copying the Facebook, with some variation that the Facebook
rightly ignored, look for ideas from the other direction. Instead of starting
from companies and working back to the problems they solved, look for problems
and imagine the company that might solve them. What do people complain
about? What do you wish there was?
**5. Obstinacy**
In some fields the way to succeed is to have a vision of what you want to
achieve, and to hold true to it no matter what setbacks you encounter.
Starting startups is not one of them. The stick-to-your-vision approach works
for something like winning an Olympic gold medal, where the problem is well-
defined. Startups are more like science, where you need to follow the trail
wherever it leads.
So don't get too attached to your original plan, because it's probably wrong.
Most successful startups end up doing something different than they originally
intended — often so different that it doesn't even seem like the same company.
You have to be prepared to see the better idea when it arrives. And the
hardest part of that is often discarding your old idea.
But openness to new ideas has to be tuned just right. Switching to a new idea
every week will be equally fatal. Is there some kind of external test you can
use? One is to ask whether the ideas represent some kind of progression. If in
each new idea you're able to re-use most of what you built for the previous
ones, then you're probably in a process that converges. Whereas if you keep
restarting from scratch, that's a bad sign.
Fortunately there's someone you can ask for advice: your users. If you're
thinking about turning in some new direction and your users seem excited about
it, it's probably a good bet.
**6. Hiring Bad Programmers**
I forgot to include this in the early versions of the list, because nearly all
the founders I know are programmers. This is not a serious problem for them.
They might accidentally hire someone bad, but it's not going to kill the
company. In a pinch they can do whatever's required themselves.
But when I think about what killed most of the startups in the e-commerce
business back in the 90s, it was bad programmers. A lot of those companies
were started by business guys who thought the way startups worked was that you
had some clever idea and then hired programmers to implement it. That's
actually much harder than it sounds — almost impossibly hard in fact — because
business guys can't tell which are the good programmers. They don't even get a
shot at the best ones, because no one really good wants a job implementing the
vision of a business guy.
In practice what happens is that the business guys choose people they think
are good programmers (it says here on his resume that he's a Microsoft
Certified Developer) but who aren't. Then they're mystified to find that their
startup lumbers along like a World War II bomber while their competitors
scream past like jet fighters. This kind of startup is in the same position as
a big company, but without the advantages.
So how do you pick good programmers if you're not a programmer? I don't think
there's an answer. I was about to say you'd have to find a good programmer to
help you hire people. But if you can't recognize good programmers, how would
you even do that?
**7. Choosing the Wrong Platform**
A related problem (since it tends to be done by bad programmers) is choosing
the wrong platform. For example, I think a lot of startups during the Bubble
killed themselves by deciding to build server-based applications on Windows.
Hotmail was still running on FreeBSD for years after Microsoft bought it,
presumably because Windows couldn't handle the load. If Hotmail's founders had
chosen to use Windows, they would have been swamped.
PayPal only just dodged this bullet. After they merged with X.com, the new CEO
wanted to switch to Windows — even after PayPal cofounder Max Levchin showed
that their software scaled only 1% as well on Windows as Unix. Fortunately for
PayPal they switched CEOs instead.
Platform is a vague word. It could mean an operating system, or a programming
language, or a "framework" built on top of a programming language. It implies
something that both supports and limits, like the foundation of a house.
The scary thing about platforms is that there are always some that seem to
outsiders to be fine, responsible choices and yet, like Windows in the 90s,
will destroy you if you choose them. Java applets were probably the most
spectacular example. This was supposed to be the new way of delivering
applications. Presumably it killed just about 100% of the startups who
believed that.
How do you pick the right platforms? The usual way is to hire good programmers
and let them choose. But there is a trick you could use if you're not a
programmer: visit a top computer science department and see what they use in
research projects.
**8. Slowness in Launching**
Companies of all sizes have a hard time getting software done. It's intrinsic
to the medium; software is always 85% done. It takes an effort of will to push
through this and get something released to users.
Startups make all kinds of excuses for delaying their launch. Most are
equivalent to the ones people use for procrastinating in everyday life.
There's something that needs to happen first. Maybe. But if the software were
100% finished and ready to launch at the push of a button, would they still be
waiting?
One reason to launch quickly is that it forces you to actually _finish_ some
quantum of work. Nothing is truly finished till it's released; you can see
that from the rush of work that's always involved in releasing anything, no
matter how finished you thought it was. The other reason you need to launch is
that it's only by bouncing your idea off users that you fully understand it.
Several distinct problems manifest themselves as delays in launching: working
too slowly; not truly understanding the problem; fear of having to deal with
users; fear of being judged; working on too many different things; excessive
perfectionism. Fortunately you can combat all of them by the simple expedient
of forcing yourself to launch _something_ fairly quickly.
**9. Launching Too Early**
Launching too slowly has probably killed a hundred times more startups than
launching too fast, but it is possible to launch too fast. The danger here is
that you ruin your reputation. You launch something, the early adopters try it
out, and if it's no good they may never come back.
So what's the minimum you need to launch? We suggest startups think about what
they plan to do, identify a core that's both (a) useful on its own and (b)
something that can be incrementally expanded into the whole project, and then
get that done as soon as possible.
This is the same approach I (and many other programmers) use for writing
software. Think about the overall goal, then start by writing the smallest
subset of it that does anything useful. If it's a subset, you'll have to write
it anyway, so in the worst case you won't be wasting your time. But more
likely you'll find that implementing a working subset is both good for morale
and helps you see more clearly what the rest should do.
The early adopters you need to impress are fairly tolerant. They don't expect
a newly launched product to do everything; it just has to do _something_.
**10. Having No Specific User in Mind**
You can't build things users like without understanding them. I mentioned
earlier that the most successful startups seem to have begun by trying to
solve a problem their founders had. Perhaps there's a rule here: perhaps you
create wealth in proportion to how well you understand the problem you're
solving, and the problems you understand best are your own.
That's just a theory. What's not a theory is the converse: if you're trying to
solve problems you don't understand, you're hosed.
And yet a surprising number of founders seem willing to assume that someone,
they're not sure exactly who, will want what they're building. Do the founders
want it? No, they're not the target market. Who is? Teenagers. People
interested in local events (that one is a perennial tarpit). Or "business"
users. What business users? Gas stations? Movie studios? Defense contractors?
You can of course build something for users other than yourself. We did. But
you should realize you're stepping into dangerous territory. You're flying on
instruments, in effect, so you should (a) consciously shift gears, instead of
assuming you can rely on your intuitions as you ordinarily would, and (b) look
at the instruments.
In this case the instruments are the users. When designing for other people
you have to be empirical. You can no longer guess what will work; you have to
find users and measure their responses. So if you're going to make something
for teenagers or "business" users or some other group that doesn't include
you, you have to be able to talk some specific ones into using what you're
making. If you can't, you're on the wrong track.
**11. Raising Too Little Money**
Most successful startups take funding at some point. Like having more than one
founder, it seems a good bet statistically. How much should you take, though?
Startup funding is measured in time. Every startup that isn't profitable
(meaning nearly all of them, initially) has a certain amount of time left
before the money runs out and they have to stop. This is sometimes referred to
as runway, as in "How much runway do you have left?" It's a good metaphor
because it reminds you that when the money runs out you're going to be
airborne or dead.
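The metaphor reduces to a one-line calculation; the figures below are hypothetical:

```python
def runway_months(cash: float, monthly_burn: float) -> float:
    # Months until the money runs out, ignoring any revenue.
    return cash / monthly_burn

# Made-up numbers: $500k in the bank, burning $40k a month.
print(runway_months(500_000, 40_000))  # 12.5
```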
Too little money means not enough to get airborne. What airborne means depends
on the situation. Usually you have to advance to a visibly higher level: if
all you have is an idea, a working prototype; if you have a prototype,
launching; if you're launched, significant growth. It depends on investors,
because until you're profitable that's who you have to convince.
So if you take money from investors, you have to take enough to get to the
next step, whatever that is. Fortunately you have some control over both
how much you spend and what the next step is. We advise startups to set both
low, initially: spend practically nothing, and make your initial goal simply
to build a solid prototype. This gives you maximum flexibility.
**12. Spending Too Much**
It's hard to distinguish spending too much from raising too little. If you run
out of money, you could say either was the cause. The only way to decide which
to call it is by comparison with other startups. If you raised five million
and ran out of money, you probably spent too much.
Burning through too much money is not as common as it used to be. Founders
seem to have learned that lesson. Plus it keeps getting cheaper to start a
startup. So as of this writing few startups spend too much. None of the ones
we've funded have. (And not just because we make small investments; many have
gone on to raise further rounds.)
The classic way to burn through cash is by hiring a lot of people. This bites
you twice: in addition to increasing your costs, it slows you down—so money
that's getting consumed faster has to last longer. Most hackers understand why
that happens; Fred Brooks explained it in The Mythical Man-Month.
We have three general suggestions about hiring: (a) don't do it if you can
avoid it, (b) pay people with equity rather than salary, not just to save
money, but because you want the kind of people who are committed enough to
prefer that, and (c) only hire people who are either going to write code or go
out and get users, because those are the only things you need at first.
**13. Raising Too Much Money**
It's obvious how too little money could kill you, but is there such a thing as
having too much?
Yes and no. The problem is not so much the money itself as what comes with it.
As one VC who spoke at Y Combinator said, "Once you take several million
dollars of my money, the clock is ticking." If VCs fund you, they're not going
to let you just put the money in the bank and keep operating as two guys
living on ramen. They want that money to go to work. At the very least
you'll move into proper office space and hire more people. That will change
the atmosphere, and not entirely for the better. Now most of your people will
be employees rather than founders. They won't be as committed; they'll need to
be told what to do; they'll start to engage in office politics.
When you raise a lot of money, your company moves to the suburbs and has kids.
Perhaps more dangerously, once you take a lot of money it gets harder to
change direction. Suppose your initial plan was to sell something to
companies. After taking VC money you hire a sales force to do that. What
happens now if you realize you should be making this for consumers instead of
businesses? That's a completely different kind of selling. What happens, in
practice, is that you don't realize that. The more people you have, the more
you stay pointed in the same direction.
Another drawback of large investments is the time they take. The time required
to raise money grows with the amount. When the amount rises into the
millions, investors get very cautious. VCs never quite say yes or no; they
just engage you in an apparently endless conversation. Raising VC-scale
investments is thus a huge time sink — more work, probably, than the startup
itself. And you don't want to be spending all your time talking to investors
while your competitors are spending theirs building things.
We advise founders who go on to seek VC money to take the first reasonable
deal they get. If you get an offer from a reputable firm at a reasonable
valuation with no unusually onerous terms, just take it and get on with
building the company. Who cares if you could get a 30% better deal
elsewhere? Economically, startups are an all-or-nothing game. Bargain-hunting
among investors is a waste of time.
**14\. Poor Investor Management**
As a founder, you have to manage your investors. You shouldn't ignore them,
because they may have useful insights. But neither should you let them run the
company. That's supposed to be your job. If investors had sufficient vision to
run the companies they fund, why didn't they start them?
Pissing off investors by ignoring them is probably less dangerous than caving
in to them. In our startup, we erred on the ignoring side. A lot of our energy
got drained away in disputes with investors instead of going into the product.
But this was less costly than giving in, which would probably have destroyed
the company. If the founders know what they're doing, it's better to have half
their attention focused on the product than the full attention of investors
who don't.
How hard you have to work on managing investors usually depends on how much
money you've taken. When you raise VC-scale money, the investors get a great
deal of control. If they have a board majority, they're literally your bosses.
In the more common case, where founders and investors are equally represented
and the deciding vote is cast by neutral outside directors, all the investors
have to do is convince the outside directors and they control the company.
If things go well, this shouldn't matter. So long as you seem to be advancing
rapidly, most investors will leave you alone. But things don't always go
smoothly in startups. Investors have made trouble even for the most successful
companies. One of the most famous examples is Apple, whose board made a nearly
fatal blunder in firing Steve Jobs. Apparently even Google got a lot of grief
from their investors early on.
**15\. Sacrificing Users to (Supposed) Profit**
When I said at the beginning that if you make something users want, you'll be
fine, you may have noticed I didn't mention anything about having the right
business model. That's not because making money is unimportant. I'm not
suggesting that founders start companies with no chance of making money in the
hope of unloading them before they tank. The reason we tell founders not to
worry about the business model initially is that making something people want
is so much harder.
I don't know why it's so hard to make something people want. It seems like it
should be straightforward. But you can tell it must be hard by how few
startups do it.
Because making something people want is so much harder than making money from
it, you should leave business models for later, just as you'd leave some
trivial but messy feature for version 2. In version 1, solve the core problem.
And the core problem in a startup is how to create wealth (= how much people
want something x the number who want it), not how to convert that wealth into
money.
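To make the parenthetical formula explicit (a minimal sketch; the symbols are mine, not the essay's):

$$\text{wealth created} \approx d \times n,$$

where $d$ is how much each person wants the thing and $n$ is the number of people who want it. Converting some fraction of that product into money is the part you can leave for version 2.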
The companies that win are the ones that put users first. Google, for example.
They made search work, then worried about how to make money from it. And yet
some startup founders still think it's irresponsible not to focus on the
business model from the beginning. They're often encouraged in this by
investors whose experience comes from less malleable industries.
It _is_ irresponsible not to think about business models. It's just ten times
more irresponsible not to think about the product.
**16\. Not Wanting to Get Your Hands Dirty**
Nearly all programmers would rather spend their time writing code and have
someone else handle the messy business of extracting money from it. And not
just the lazy ones. Larry and Sergey apparently felt this way too at first.
After developing their new search algorithm, the first thing they tried was to
get some other company to buy it.
Start a company? Yech. Most hackers would rather just have ideas. But as Larry
and Sergey found, there's not much of a market for ideas. No one trusts an
idea till you embody it in a product and use that to grow a user base. Then
they'll pay big time.
Maybe this will change, but I doubt it will change much. There's nothing like
users for convincing acquirers. It's not just that the risk is decreased. The
acquirers are human, and they have a hard time paying a bunch of young guys
millions of dollars just for being clever. When the idea is embodied in a
company with a lot of users, they can tell themselves they're buying the users
rather than the cleverness, and this is easier for them to swallow.
If you're going to attract users, you'll probably have to get up from your
computer and go find some. It's unpleasant work, but if you can make yourself
do it you have a much greater chance of succeeding. In the first batch of
startups we funded, in the summer of 2005, most of the founders spent all
their time building their applications. But there was one who was away half
the time talking to executives at cell phone companies, trying to arrange
deals. Can you imagine anything more painful for a hacker? But it paid
off, because this startup seems the most successful of that group by an order
of magnitude.
If you want to start a startup, you have to face the fact that you can't just
hack. At least one hacker will have to spend some of the time doing business
stuff.
**17\. Fights Between Founders**
Fights between founders are surprisingly common. About 20% of the startups
we've funded have had a founder leave. It happens so often that we've reversed
our attitude to vesting. We still don't require it, but now we advise founders
to vest so there will be an orderly way for people to quit.
A founder leaving doesn't necessarily kill a startup, though. Plenty of
successful startups have had that happen. Fortunately it's usually the
least committed founder who leaves. If there are three founders and one who
was lukewarm leaves, big deal. If you have two and one leaves, or a guy with
critical technical skills leaves, that's more of a problem. But even that is
survivable. Blogger got down to one person, and they bounced back.
Most of the disputes I've seen between founders could have been avoided if
they'd been more careful about who they started a company with. Most disputes
are not due to the situation but the people. Which means they're inevitable.
And most founders who've been burned by such disputes probably had misgivings,
which they suppressed, when they started the company. Don't suppress
misgivings. It's much easier to fix problems before the company is started
than after. So don't include your housemate in your startup because he'd feel
left out otherwise. Don't start a company with someone you dislike because
they have some skill you need and you worry you won't find anyone else. The
people are the most important ingredient in a startup, so don't compromise
there.
**18\. A Half-Hearted Effort**
The failed startups you hear most about are the spectacular flameouts. Those
are actually the elite of failures. The most common type is not the one that
makes spectacular mistakes, but the one that doesn't do much of anything — the
one we never even hear about, because it was some project a couple guys
started on the side while working on their day jobs, but which never got
anywhere and was gradually abandoned.
Statistically, if you want to avoid failure, it would seem like the most
important thing is to quit your day job. Most founders of failed startups
don't quit their day jobs, and most founders of successful ones do. If startup
failure were a disease, the CDC would be issuing bulletins warning people to
avoid day jobs.
Does that mean you should quit your day job? Not necessarily. I'm guessing
here, but I'd guess that many of these would-be founders may not have the kind
of determination it takes to start a company, and that in the back of their
minds, they know it. The reason they don't invest more time in their startup
is that they know it's a bad investment.
I'd also guess there's some band of people who could have succeeded if they'd
taken the leap and done it full-time, but didn't. I have no idea how wide this
band is, but if the winner/borderline/hopeless progression has the sort of
distribution you'd expect, the number of people who could have made it, if
they'd quit their day job, is probably an order of magnitude larger than the
number who do make it.
If that's true, most startups that could succeed fail because the founders
don't devote their whole efforts to them. That certainly accords with what I
see out in the world. Most startups fail because they don't make something
people want, and the reason most don't is that they don't try hard enough.
In other words, starting startups is just like everything else. The biggest
mistake you can make is not to try hard enough. To the extent there's a secret
to success, it's not to be in denial about that.
December 2005
The most impressive people I know are all terrible procrastinators. So could
it be that procrastination isn't always bad?
Most people who write about procrastination write about how to cure it. But
this is, strictly speaking, impossible. There are an infinite number of things
you could be doing. No matter what you work on, you're not working on
everything else. So the question is not how to avoid procrastination, but how
to procrastinate well.
There are three variants of procrastination, depending on what you do instead
of working on something: you could work on (a) nothing, (b) something less
important, or (c) something more important. That last type, I'd argue, is good
procrastination.
That's the "absent-minded professor," who forgets to shave, or eat, or even
perhaps look where he's going while he's thinking about some interesting
question. His mind is absent from the everyday world because it's hard at work
in another.
That's the sense in which the most impressive people I know are all
procrastinators. They're type-C procrastinators: they put off working on small
stuff to work on big stuff.
What's "small stuff?" Roughly, work that has zero chance of being mentioned in
your obituary. It's hard to say at the time what will turn out to be your best
work (will it be your magnum opus on Sumerian temple architecture, or the
detective thriller you wrote under a pseudonym?), but there's a whole class of
tasks you can safely rule out: shaving, doing your laundry, cleaning the
house, writing thank-you notes—anything that might be called an errand.
Good procrastination is avoiding errands to do real work.
Good in a sense, at least. The people who want you to do the errands won't
think it's good. But you probably have to annoy them if you want to get
anything done. The mildest-seeming people, if they want to do real work, all
have a certain degree of ruthlessness when it comes to avoiding errands.
Some errands, like replying to letters, go away if you ignore them (perhaps
taking friends with them). Others, like mowing the lawn, or filing tax
returns, only get worse if you put them off. In principle it shouldn't work to
put off the second kind of errand. You're going to have to do whatever it is
eventually. Why not (as past-due notices are always saying) do it now?
The reason it pays to put off even those errands is that real work needs two
things errands don't: big chunks of time, and the right mood. If you get
inspired by some project, it can be a net win to blow off everything you were
supposed to do for the next few days to work on it. Yes, those errands may
cost you more time when you finally get around to them. But if you get a lot
done during those few days, you will be net more productive.
In fact, it may not be a difference in degree, but a difference in kind. There
may be types of work that can only be done in long, uninterrupted stretches,
when inspiration hits, rather than dutifully in scheduled little slices.
Empirically it seems to be so. When I think of the people I know who've done
great things, I don't imagine them dutifully crossing items off to-do lists. I
imagine them sneaking off to work on some new idea.
Conversely, forcing someone to perform errands synchronously is bound to limit
their productivity. The cost of an interruption is not just the time it takes,
but that it breaks the time on either side in half. You probably only have to
interrupt someone a couple times a day before they're unable to work on hard
problems at all.
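(To put rough numbers on "a couple times a day," with figures that are mine and purely illustrative:

$$\frac{8 \text{ hours}}{2 + 1} \approx 2.7 \text{ hours per uninterrupted stretch},$$

since two interruptions cut a day into three pieces. If a hard problem needs stretches of four hours or more, two interruptions already rule it out.)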
I've wondered a lot about why startups are most productive at the very
beginning, when they're just a couple guys in an apartment. The main reason
may be that there's no one to interrupt them yet. In theory it's good when the
founders finally get enough money to hire people to do some of the work for
them. But it may be better to be overworked than interrupted. Once you dilute
a startup with ordinary office workers—with type-B procrastinators—the whole
company starts to resonate at their frequency. They're interrupt-driven, and
soon you are too.
Errands are so effective at killing great projects that a lot of people use
them for that purpose. Someone who has decided to write a novel, for example,
will suddenly find that the house needs cleaning. People who fail to write
novels don't do it by sitting in front of a blank page for days without
writing anything. They do it by feeding the cat, going out to buy something
they need for their apartment, meeting a friend for coffee, checking email. "I
don't have time to work," they say. And they don't; they've made sure of that.
(There's also a variant where one has no place to work. The cure is to visit
the places where famous people worked, and see how unsuitable they were.)
I've used both these excuses at one time or another. I've learned a lot of
tricks for making myself work over the last 20 years, but even now I don't win
consistently. Some days I get real work done. Other days are eaten up by
errands. And I know it's usually my fault: I _let_ errands eat up the day, to
avoid facing some hard problem.
The most dangerous form of procrastination is unacknowledged type-B
procrastination, because it doesn't feel like procrastination. You're "getting
things done." Just the wrong things.
Any advice about procrastination that concentrates on crossing things off your
to-do list is not only incomplete, but positively misleading, if it doesn't
consider the possibility that the to-do list is itself a form of type-B
procrastination. In fact, possibility is too weak a word. Nearly everyone's
is. Unless you're working on the biggest things you could be working on,
you're type-B procrastinating, no matter how much you're getting done.
In his famous essay You and Your Research (which I recommend to anyone
ambitious, no matter what they're working on), Richard Hamming suggests that
you ask yourself three questions:
1. What are the most important problems in your field?
2. Are you working on one of them?
3. Why not?
Hamming was at Bell Labs when he started asking such questions. In principle
anyone there ought to have been able to work on the most important problems in
their field. Perhaps not everyone can make an equally dramatic mark on the
world; I don't know; but whatever your capacities, there are projects that
stretch them. So Hamming's exercise can be generalized to:
> What's the best thing you could be working on, and why aren't you?
Most people will shy away from this question. I shy away from it myself; I see
it there on the page and quickly move on to the next sentence. Hamming used to
go around actually asking people this, and it didn't make him popular. But
it's a question anyone ambitious should face.
The trouble is, you may end up hooking a very big fish with this bait. To do
good work, you need to do more than find good projects. Once you've found
them, you have to get yourself to work on them, and that can be hard. The
bigger the problem, the harder it is to get yourself to work on it.
Of course, the main reason people find it difficult to work on a particular
problem is that they don't enjoy it. When you're young, especially, you often
find yourself working on stuff you don't really like—because it seems
impressive, for example, or because you've been assigned to work on it. Most
grad students are stuck working on big problems they don't really like, and
grad school is thus synonymous with procrastination.
But even when you like what you're working on, it's easier to get yourself to
work on small problems than big ones. Why? Why is it so hard to work on big
problems? One reason is that you may not get any reward in the foreseeable
future. If you work on something you can finish in a day or two, you can
expect to have a nice feeling of accomplishment fairly soon. If the reward is
indefinitely far in the future, it seems less real.
Another reason people don't work on big projects is, ironically, fear of
wasting time. What if they fail? Then all the time they spent on it will be
wasted. (In fact it probably won't be, because work on hard projects almost
always leads somewhere.)
But the trouble with big problems can't be just that they promise no immediate
reward and might cause you to waste a lot of time. If that were all, they'd be
no worse than going to visit your in-laws. There's more to it than that. Big
problems are _terrifying_. There's an almost physical pain in facing them.
It's like having a vacuum cleaner hooked up to your imagination. All your
initial ideas get sucked out immediately, and you don't have any more, and yet
the vacuum cleaner is still sucking.
You can't look a big problem too directly in the eye. You have to approach it
somewhat obliquely. But you have to adjust the angle just right: you have to
be facing the big problem directly enough that you catch some of the
excitement radiating from it, but not so much that it paralyzes you. You can
tighten the angle once you get going, just as a sailboat can sail closer to
the wind once it gets underway.
If you want to work on big things, you seem to have to trick yourself into
doing it. You have to work on small things that could grow into big things, or
work on successively larger things, or split the moral load with
collaborators. It's not a sign of weakness to depend on such tricks. The very
best work has been done this way.
When I talk to people who've managed to make themselves work on big things, I
find that they all blow off errands, and all feel guilty about it. I don't think
they should feel guilty. There's more to do than anyone could. So someone
doing the best work they can is inevitably going to leave a lot of errands
undone. It seems a mistake to feel bad about that.
I think the way to "solve" the problem of procrastination is to let delight
pull you instead of making a to-do list push you. Work on an ambitious project
you really enjoy, and sail as close to the wind as you can, and you'll leave
the right things undone.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this.
February 2007
A few days ago I finally figured out something I've wondered about for 25
years: the relationship between wisdom and intelligence. Anyone can see
they're not the same by the number of people who are smart, but not very wise.
And yet intelligence and wisdom do seem related. How?
What is wisdom? I'd say it's knowing what to do in a lot of situations. I'm
not trying to make a deep point here about the true nature of wisdom, just to
figure out how we use the word. A wise person is someone who usually knows the
right thing to do.
And yet isn't being smart also knowing what to do in certain situations? For
example, knowing what to do when the teacher tells your elementary school
class to add all the numbers from 1 to 100?
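(The classic answer, traditionally attributed to the young Gauss, is to notice that the numbers pair up from opposite ends, each pair summing to 101:

$$1 + 2 + \cdots + 100 = \frac{100 \times 101}{2} = 5050.$$

The attribution is anecdote, but the trick itself is a fair example of knowing what to do.)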
Some say wisdom and intelligence apply to different types of problems—wisdom
to human problems and intelligence to abstract ones. But that isn't true. Some
wisdom has nothing to do with people: for example, the wisdom of the engineer
who knows certain structures are less prone to failure than others. And
certainly smart people can find clever solutions to human problems as well as
abstract ones.
Another popular explanation is that wisdom comes from experience while
intelligence is innate. But people are not simply wise in proportion to how
much experience they have. Other things must contribute to wisdom besides
experience, and some may be innate: a reflective disposition, for example.
Neither of the conventional explanations of the difference between wisdom and
intelligence stands up to scrutiny. So what is the difference? If we look at
how people use the words "wise" and "smart," what they seem to mean is
different shapes of performance.
**Curve**
"Wise" and "smart" are both ways of saying someone knows what to do. The
difference is that "wise" means one has a high average outcome across all
situations, and "smart" means one does spectacularly well in a few. That is,
if you had a graph in which the x axis represented situations and the y axis
the outcome, the graph of the wise person would be high overall, and the graph
of the smart person would have high peaks.
The distinction is similar to the rule that one should judge talent at its
best and character at its worst. Except you judge intelligence at its best,
and wisdom by its average. That's how the two are related: they're the two
different senses in which the same curve can be high.
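Put in notation (a minimal sketch; the symbols are mine, not the essay's): if $f(s)$ is someone's outcome in situation $s$, then

$$\text{wise} \approx \text{high } \operatorname{mean}_s f(s), \qquad \text{smart} \approx \text{high } \max_s f(s).$$

Note for later that if there is only one situation, the mean and the max coincide, so the two words would pick out the same people.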
So a wise person knows what to do in most situations, while a smart person
knows what to do in situations where few others could. We need to add one more
qualification: we should ignore cases where someone knows what to do because
they have inside information. But aside from that, I don't think we can
get much more specific without starting to be mistaken.
Nor do we need to. Simple as it is, this explanation predicts, or at least
accords with, both of the conventional stories about the distinction between
wisdom and intelligence. Human problems are the most common type, so being
good at solving those is key in achieving a high average outcome. And it seems
natural that a high average outcome depends mostly on experience, but that
dramatic peaks can only be achieved by people with certain rare, innate
qualities; nearly anyone can learn to be a good swimmer, but to be an Olympic
swimmer you need a certain body type.
This explanation also suggests why wisdom is such an elusive concept: there's
no such thing. "Wise" means something—that one is on average good at making
the right choice. But giving the name "wisdom" to the supposed quality that
enables one to do that doesn't mean such a thing exists. To the extent
"wisdom" means anything, it refers to a grab-bag of qualities as various as
self-discipline, experience, and empathy.
Likewise, though "intelligent" means something, we're asking for trouble if we
insist on looking for a single thing called "intelligence." And whatever its
components, they're not all innate. We use the word "intelligent" as an
indication of ability: a smart person can grasp things few others could. It
does seem likely there's some inborn predisposition to intelligence (and
wisdom too), but this predisposition is not itself intelligence.
One reason we tend to think of intelligence as inborn is that people trying to
measure it have concentrated on the aspects of it that are most measurable. A
quality that's inborn will obviously be more convenient to work with than one
that's influenced by experience, and thus might vary in the course of a study.
The problem comes when we drag the word "intelligence" over onto what they're
measuring. If they're measuring something inborn, they can't be measuring
intelligence. Three-year-olds aren't smart. When we describe one as smart,
it's shorthand for "smarter than other three-year-olds."
**Split**
Perhaps it's a technicality to point out that a predisposition to intelligence
is not the same as intelligence. But it's an important technicality, because
it reminds us that we can become smarter, just as we can become wiser.
The alarming thing is that we may have to choose between the two.
If wisdom and intelligence are the average and peaks of the same curve, then
they converge as the number of points on the curve decreases. If there's just
one point, they're identical: the average and maximum are the same. But as the
number of points increases, wisdom and intelligence diverge. And historically
the number of points on the curve seems to have been increasing: our ability
is tested in an ever wider range of situations.
In the time of Confucius and Socrates, people seem to have regarded wisdom,
learning, and intelligence as more closely related than we do. Distinguishing
between "wise" and "smart" is a modern habit. And the reason we do is that
they've been diverging. As knowledge gets more specialized, there are more
points on the curve, and the distinction between the spikes and the average
becomes sharper, like a digital image rendered with more pixels.
One consequence is that some old recipes may have become obsolete. At the very
least we have to go back and figure out if they were really recipes for wisdom
or intelligence. But the really striking change, as intelligence and wisdom
drift apart, is that we may have to decide which we prefer. We may not be able
to optimize for both simultaneously.
Society seems to have voted for intelligence. We no longer admire the sage—not
the way people did two thousand years ago. Now we admire the genius. Because
in fact the distinction we began with has a rather brutal converse: just as
you can be smart without being very wise, you can be wise without being very
smart. That doesn't sound especially admirable. That gets you James Bond, who
knows what to do in a lot of situations, but has to rely on Q for the ones
involving math.
Intelligence and wisdom are obviously not mutually exclusive. In fact, a high
average may help support high peaks. But there are reasons to believe that at
some point you have to choose between them. One is the example of very smart
people, who are so often unwise that in popular culture this now seems to be
regarded as the rule rather than the exception. Perhaps the absent-minded
professor is wise in his way, or wiser than he seems, but he's not wise in the
way Confucius or Socrates wanted people to be.
**New**
For both Confucius and Socrates, wisdom, virtue, and happiness were
necessarily related. The wise man was someone who knew what the right choice
was and always made it; to be the right choice, it had to be morally right; he
was therefore always happy, knowing he'd done the best he could. I can't think
of many ancient philosophers who would have disagreed with that, so far as it
goes.
"The superior man is always happy; the small man sad," said Confucius.
Whereas a few years ago I read an interview with a mathematician who said that
most nights he went to bed discontented, feeling he hadn't made enough
progress. The Chinese and Greek words we translate as "happy" didn't mean
exactly what we do by it, but there's enough overlap that this remark
contradicts them.
Is the mathematician a small man because he's discontented? No; he's just
doing a kind of work that wasn't very common in Confucius's day.
Human knowledge seems to grow fractally. Time after time, something that
seemed a small and uninteresting area—experimental error, even—turns out, when
examined up close, to have as much in it as all knowledge up to that point.
Several of the fractal buds that have exploded since ancient times involve
inventing and discovering new things. Math, for example, used to be something
a handful of people did part-time. Now it's the career of thousands. And in
work that involves making new things, some old rules don't apply.
Recently I've spent some time advising people, and there I find the ancient
rule still works: try to understand the situation as well as you can, give the
best advice you can based on your experience, and then don't worry about it,
knowing you did all you could. But I don't have anything like this serenity
when I'm writing an essay. Then I'm worried. What if I run out of ideas? And
when I'm writing, four nights out of five I go to bed discontented, feeling I
didn't get enough done.
Advising people and writing are fundamentally different types of work. When
people come to you with a problem and you have to figure out the right thing
to do, you don't (usually) have to invent anything. You just weigh the
alternatives and try to judge which is the prudent choice. But _prudence_
can't tell me what sentence to write next. The search space is too big.
Someone like a judge or a military officer can in much of his work be guided
by duty, but duty is no guide in making things. Makers depend on something
more precarious: inspiration. And like most people who lead a precarious
existence, they tend to be worried, not contented. In that respect they're
more like the small man of Confucius's day, always one bad harvest (or ruler)
away from starvation. Except instead of being at the mercy of weather and
officials, they're at the mercy of their own imagination.
**Limits**
To me it was a relief just to realize it might be ok to be discontented. The
idea that a successful person should be happy has thousands of years of
momentum behind it. If I was any good, why didn't I have the easy confidence
winners are supposed to have? But that, I now believe, is like a runner asking
"If I'm such a good athlete, why do I feel so tired?" Good runners still get
tired; they just get tired at higher speeds.
People whose work is to invent or discover things are in the same position as
the runner. There's no way for them to do the best they can, because there's
no limit to what they could do. The closest you can come is to compare
yourself to other people. But the better you do, the less this matters. An
undergrad who gets something published feels like a star. But for someone at
the top of the field, what's the test of doing well? Runners can at least
compare themselves to others doing exactly the same thing; if you win an
Olympic gold medal, you can be fairly content, even if you think you could
have run a bit faster. But what is a novelist to do?
Whereas if you're doing the kind of work in which problems are presented to
you and you have to choose between several alternatives, there's an upper
bound on your performance: choosing the best every time. In ancient societies,
nearly all work seems to have been of this type. The peasant had to decide
whether a garment was worth mending, and the king whether or not to invade his
neighbor, but neither was expected to invent anything. In principle they could
have; the king could have invented firearms, then invaded his neighbor. But in
practice innovations were so rare that they weren't expected of you, any more
than goalkeepers are expected to score goals. In practice, it seemed as if
there was a correct decision in every situation, and if you made it you'd done
your job perfectly, just as a goalkeeper who prevents the other team from
scoring is considered to have played a perfect game.
In this world, wisdom seemed paramount. Even now, most people do work in
which problems are put before them and they have to choose the best
alternative. But as knowledge has grown more specialized, there are more and
more types of work in which people have to make up new things, and in which
performance is therefore unbounded. Intelligence has become increasingly
important relative to wisdom because there is more room for spikes.
**Recipes**
Another sign we may have to choose between intelligence and wisdom is how
different their recipes are. Wisdom seems to come largely from curing childish
qualities, and intelligence largely from cultivating them.
Recipes for wisdom, particularly ancient ones, tend to have a remedial
character. To achieve wisdom one must cut away all the debris that fills one's
head on emergence from childhood, leaving only the important stuff. Both self-
control and experience have this effect: to eliminate the random biases that
come from your own nature and from the circumstances of your upbringing
respectively. That's not all wisdom is, but it's a large part of it. Much of
what's in the sage's head is also in the head of every twelve year old. The
difference is that in the head of the twelve year old it's mixed together with
a lot of random junk.
The path to intelligence seems to be through working on hard problems. You
develop intelligence as you might develop muscles, through exercise. But there
can't be too much compulsion here. No amount of discipline can replace genuine
curiosity. So cultivating intelligence seems to be a matter of identifying
some bias in one's character—some tendency to be interested in certain types
of things—and nurturing it. Instead of obliterating your idiosyncrasies in an
effort to make yourself a neutral vessel for the truth, you select one and try
to grow it from a seedling into a tree.
The wise are all much alike in their wisdom, but very smart people tend to be
smart in distinctive ways.
Most of our educational traditions aim at wisdom. So perhaps one reason
schools work badly is that they're trying to make intelligence using recipes
for wisdom. Most recipes for wisdom have an element of subjection. At the very
least, you're supposed to do what the teacher says. The more extreme recipes
aim to break down your individuality the way basic training does. But that's
not the route to intelligence. Whereas wisdom comes through humility, it may
actually help, in cultivating intelligence, to have a mistakenly high opinion
of your abilities, because that encourages you to keep working. Ideally till
you realize how mistaken you were.
(The reason it's hard to learn new skills late in life is not just that one's
brain is less malleable. Another probably even worse obstacle is that one has
higher standards.)
I realize we're on dangerous ground here. I'm not proposing the primary goal
of education should be to increase students' "self-esteem." That just breeds
laziness. And in any case, it doesn't really fool the kids, not the smart
ones. They can tell at a young age that a contest where everyone wins is a
fraud.
A teacher has to walk a narrow path: you want to encourage kids to come up
with things on their own, but you can't simply applaud everything they
produce. You have to be a good audience: appreciative, but not too easily
impressed. And that's a lot of work. You have to have a good enough grasp of
kids' capacities at different ages to know when to be surprised.
That's the opposite of traditional recipes for education. Traditionally the
student is the audience, not the teacher; the student's job is not to invent,
but to absorb some prescribed body of material. (The use of the term
"recitation" for sections in some colleges is a fossil of this.) The problem
with these old traditions is that they're too much influenced by recipes for
wisdom.
**Different**
I deliberately gave this essay a provocative title; of course it's worth being
wise. But I think it's important to understand the relationship between
intelligence and wisdom, and particularly what seems to be the growing gap
between them. That way we can avoid applying rules and standards to
intelligence that are really meant for wisdom. These two senses of "knowing
what to do" are more different than most people realize. The path to wisdom is
through discipline, and the path to intelligence through carefully selected
self-indulgence. Wisdom is universal, and intelligence idiosyncratic. And
while wisdom yields calmness, intelligence much of the time leads to
discontentment.
That's particularly worth remembering. A physicist friend recently told me
half his department was on Prozac. Perhaps if we acknowledge that some amount
of frustration is inevitable in certain kinds of work, we can mitigate its
effects. Perhaps we can box it up and put it away some of the time, instead of
letting it flow together with everyday sadness to produce what seems an
alarmingly large pool. At the very least, we can avoid being discontented
about being discontented.
If you feel exhausted, it's not necessarily because there's something wrong
with you. Maybe you're just running fast.
May 2021
Noora Health, a nonprofit I've supported for years, just launched a new NFT.
It has a dramatic name, _Save Thousands of Lives_, because that's what the
proceeds will do.
Noora has been saving lives for 7 years. They run programs in hospitals in
South Asia to teach new mothers how to take care of their babies once they get
home. They're in 165 hospitals now. And because they know the numbers before
and after they start at a new hospital, they can measure the impact they have.
It is massive. For every 1000 live births, they save 9 babies.
This number comes from a _study_ of 133,733 families at 28 different hospitals
that Noora conducted in collaboration with the Better Birth team at Ariadne
Labs, a joint center for health systems innovation at Brigham and Women's
Hospital and Harvard T.H. Chan School of Public Health.
Noora is so effective that even if you measure their costs in the most
conservative way, by dividing their entire budget by the number of lives
saved, the cost of saving a life is the lowest I've seen. $1,235.
For this NFT, they're going to issue a public report tracking how this
specific tranche of money is spent, and estimating the number of lives saved
as a result.
NFTs are a new territory, and this way of using them is especially new, but
I'm excited about its potential. And I'm excited to see what happens with this
particular auction, because unlike an NFT representing something that has
already happened, this NFT gets better as the price gets higher.
The reserve price was about $2.5 million, because that's what it takes for the
name to be accurate: that's what it costs to save 2000 lives. But the higher
the price of this NFT goes, the more lives will be saved. What a sentence to
be able to write.
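(A quick check of the arithmetic, using the cost-per-life figure above:

$$2000 \times \$1{,}235 = \$2{,}470{,}000 \approx \$2.5 \text{ million}.$$

That's where the reserve price comes from.)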
April 2020
I recently saw a _video_ of TV journalists and politicians confidently saying
that the coronavirus would be no worse than the flu. What struck me about it
was not just how mistaken they seemed, but how daring. How could they feel
safe saying such things?
The answer, I realized, is that they didn't think they could get caught. They
didn't realize there was any danger in making false predictions. These people
constantly make false predictions, and get away with it, because the things
they make predictions about either have mushy enough outcomes that they can
bluster their way out of trouble, or happen so far in the future that few
remember what they said.
An epidemic is different. It falsifies your predictions rapidly and
unequivocally.
But epidemics are rare enough that these people clearly didn't realize this
was even a possibility. Instead they just continued to use their ordinary
m.o., which, as the epidemic has made clear, is to talk confidently about
things they don't understand.
An event like this is thus a uniquely powerful way of taking people's measure.
As Warren Buffett said, "It's only when the tide goes out that you learn who's
been swimming naked." And the tide has just gone out like never before.
Now that we've seen the results, let's remember what we saw, because this is
the most accurate test of credibility we're ever likely to have. I hope.
February 2020
What should an essay be? Many people would say persuasive. That's what a lot
of us were taught essays should be. But I think we can aim for something more
ambitious: that an essay should be useful.
To start with, that means it should be correct. But it's not enough merely to
be correct. It's easy to make a statement correct by making it vague. That's a
common flaw in academic writing, for example. If you know nothing at all about
an issue, you can't go wrong by saying that the issue is a complex one, that
there are many factors to be considered, that it's a mistake to take too
simplistic a view of it, and so on.
Though no doubt correct, such statements tell the reader nothing. Useful
writing makes claims that are as strong as they can be made without becoming
false.
For example, it's more useful to say that Pike's Peak is near the middle of
Colorado than merely somewhere in Colorado. But if I say it's in the exact
middle of Colorado, I've now gone too far, because it's a bit east of the
middle.
Precision and correctness are like opposing forces. It's easy to satisfy one
if you ignore the other. The converse of vaporous academic writing is the
bold, but false, rhetoric of demagogues. Useful writing is bold, but true.
It's also two other things: it tells people something important, and that at
least some of them didn't already know.
Telling people something they didn't know doesn't always mean surprising them.
Sometimes it means telling them something they knew unconsciously but had
never put into words. In fact those may be the more valuable insights, because
they tend to be more fundamental.
Let's put them all together. Useful writing tells people something true and
important that they didn't already know, and tells them as unequivocally as
possible.
Notice these are all a matter of degree. For example, you can't expect an idea
to be novel to everyone. Any insight that you have will probably have already
been had by at least one of the world's 7 billion people. But it's sufficient
if an idea is novel to a lot of readers.
Ditto for correctness, importance, and strength. In effect the four components
are like numbers you can multiply together to get a score for usefulness.
Which I realize is almost awkwardly reductive, but nonetheless true.
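Written out (a sketch; the letters are mine), the score would be

$$U = C \times N \times I \times S,$$

for correctness, novelty, importance, and strength. Multiplication rather than addition captures the right failure mode: a zero in any component, a false claim, say, zeroes out the whole essay.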
_____
How can you ensure that the things you say are true and novel and important?
Believe it or not, there is a trick for doing this. I learned it from my
friend Robert Morris, who has a horror of saying anything dumb. His trick is
not to say anything unless he's sure it's worth hearing. This makes it hard to
get opinions out of him, but when you do, they're usually right.
Translated into essay writing, what this means is that if you write a bad
sentence, you don't publish it. You delete it and try again. Often you abandon
whole branches of four or five paragraphs. Sometimes a whole essay.
You can't ensure that every idea you have is good, but you can ensure that
every one you publish is, by simply not publishing the ones that aren't.
In the sciences, this is called publication bias, and is considered bad. When
some hypothesis you're exploring gets inconclusive results, you're supposed to
tell people about that too. But with essay writing, publication bias is the
way to go.
My strategy is loose, then tight. I write the first draft of an essay fast,
trying out all kinds of ideas. Then I spend days rewriting it very carefully.
I've never tried to count how many times I proofread essays, but I'm sure
there are sentences I've read 100 times before publishing them. When I
proofread an essay, there are usually passages that stick out in an annoying
way, sometimes because they're clumsily written, and sometimes because I'm not
sure they're true. The annoyance starts out unconscious, but after the tenth
reading or so I'm saying "Ugh, that part" each time I hit it. They become like
briars that catch your sleeve as you walk past. Usually I won't publish an
essay till they're all gone — till I can read through the whole thing without
the feeling of anything catching.
I'll sometimes let through a sentence that seems clumsy, if I can't think of a
way to rephrase it, but I will never knowingly let through one that doesn't
seem correct. You never have to. If a sentence doesn't seem right, all you
have to do is ask why it doesn't, and you've usually got the replacement right
there in your head.
This is where essayists have an advantage over journalists. You don't have a
deadline. You can work for as long on an essay as you need to get it right.
You don't have to publish the essay at all, if you can't get it right.
Mistakes seem to lose courage in the face of an enemy with unlimited
resources. Or that's what it feels like. What's really going on is that you
have different expectations for yourself. You're like a parent saying to a
child "we can sit here all night till you eat your vegetables." Except you're
the child too.
I'm not saying no mistake gets through. For example, I added condition (c) in
_"A Way to Detect Bias"_ after readers pointed out that I'd omitted it. But in
practice you can catch nearly all of them.
There's a trick for getting importance too. It's like the trick I suggest to
young founders for getting startup ideas: to make something you yourself want.
You can use yourself as a proxy for the reader. The reader is not completely
unlike you, so if you write about topics that seem important to you, they'll
probably seem important to a significant number of readers as well.
Importance has two factors. It's the number of people something matters to,
times how much it matters to them. Which means of course that it's not a
rectangle, but a sort of ragged comb, like a Riemann sum.
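In notation (a minimal sketch; the symbols are mine): if reader $i$ cares amount $m_i$, then

$$\text{importance} = \sum_{i=1}^{n} m_i.$$

If every $m_i$ were the same, the sum would be the rectangle $n \times m$; since it varies by reader, the terms trace out the ragged comb.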
The way to get novelty is to write about topics you've thought about a lot.
Then you can use yourself as a proxy for the reader in this department too.
Anything you notice that surprises you, who've thought about the topic a lot,
will probably also surprise a significant number of readers. And here, as with
correctness and importance, you can use the Morris technique to ensure that
you will. If you don't learn anything from writing an essay, don't publish it.
You need humility to measure novelty, because acknowledging the novelty of an
idea means acknowledging your previous ignorance of it. Confidence and
humility are often seen as opposites, but in this case, as in many others,
confidence helps you to be humble. If you know you're an expert on some topic,
you can freely admit when you learn something you didn't know, because you can
be confident that most other people wouldn't know it either.
The fourth component of useful writing, strength, comes from two things:
thinking well, and the skillful use of qualification. These two counterbalance
each other, like the accelerator and clutch in a car with a manual
transmission. As you try to refine the expression of an idea, you adjust the
qualification accordingly. Something you're sure of, you can state baldly with
no qualification at all, as I did the four components of useful writing.
Whereas points that seem dubious have to be held at arm's length with
perhapses.
As you refine an idea, you're pushing in the direction of less qualification.
But you can rarely get it down to zero. Sometimes you don't even want to, if
it's a side point and a fully refined version would be too long.
Some say that qualifications weaken writing. For example, that you should
never begin a sentence in an essay with "I think," because if you're saying
it, then of course you think it. And it's true that "I think x" is a weaker
statement than simply "x." Which is exactly why you need "I think." You need
it to express your degree of certainty.
But qualifications are not scalars. They're not just experimental error. There
must be 50 things they can express: how broadly something applies, how you
know it, how happy you are it's so, even how it could be falsified. I'm not
going to try to explore the structure of qualification here. It's probably
more complex than the whole topic of writing usefully. Instead I'll just give
you a practical tip: Don't underestimate qualification. It's an important
skill in its own right, not just a sort of tax you have to pay in order to
avoid saying things that are false. So learn and use its full range. It may
not be fully half of having good ideas, but it's part of having them.
There's one other quality I aim for in essays: to say things as simply as
possible. But I don't think this is a component of usefulness. It's more a
matter of consideration for the reader. And it's a practical aid in getting
things right; a mistake is more obvious when expressed in simple language. But
I'll admit that the main reason I write simply is not for the reader's sake or
because it helps get things right, but because it bothers me to use more or
fancier words than I need to. It seems inelegant, like a program that's too
long.
I realize florid writing works for some people. But unless you're sure you're
one of them, the best advice is to write as simply as you can.
_____
I believe the formula I've given you, importance + novelty + correctness +
strength, is the recipe for a good essay. But I should warn you that it's also
a recipe for making people mad.
The root of the problem is novelty. When you tell people something they didn't
know, they don't always thank you for it. Sometimes the reason people don't
know something is because they don't want to know it. Usually because it
contradicts some cherished belief. And indeed, if you're looking for novel
ideas, popular but mistaken beliefs are a good place to find them. Every
popular mistaken belief creates a _dead zone_ of ideas around it that are
relatively unexplored because they contradict it.
The strength component just makes things worse. If there's anything that
annoys people more than having their cherished assumptions contradicted, it's
having them flatly contradicted.
Plus if you've used the Morris technique, your writing will seem quite
confident. Perhaps offensively confident, to people who disagree with you. The
reason you'll seem confident is that you are confident: you've cheated, by
only publishing the things you're sure of. It will seem to people who try to
disagree with you that you never admit you're wrong. In fact you constantly
admit you're wrong. You just do it before publishing instead of after.
And if your writing is as simple as possible, that just makes things worse.
Brevity is the diction of command. If you watch someone delivering unwelcome
news from a position of inferiority, you'll notice they tend to use lots of
words, to soften the blow. Whereas to be short with someone is more or less to
be rude to them.
It can sometimes work to deliberately phrase statements more weakly than you
mean. To put "perhaps" in front of something you're actually quite sure of.
But you'll notice that when writers do this, they usually do it with a wink.
I don't like to do this too much. It's cheesy to adopt an ironic tone for a
whole essay. I think we just have to face the fact that elegance and curtness
are two names for the same thing.
You might think that if you work sufficiently hard to ensure that an essay is
correct, it will be invulnerable to attack. That's sort of true. It will be
invulnerable to valid attacks. But in practice that's little consolation.
In fact, the strength component of useful writing will make you particularly
vulnerable to misrepresentation. If you've stated an idea as strongly as you
could without making it false, all anyone has to do is to exaggerate slightly
what you said, and now it is false.
Much of the time they're not even doing it deliberately. One of the most
surprising things you'll discover, if you start writing essays, is that people
who disagree with you rarely disagree with what you've actually written.
Instead they make up something you said and disagree with that.
For what it's worth, the countermove is to ask someone who does this to quote
a specific sentence or passage you wrote that they believe is false, and
explain why. I say "for what it's worth" because they never do. So although it
might seem that this could get a broken discussion back on track, the truth is
that it was never on track in the first place.
Should you explicitly forestall likely misinterpretations? Yes, if they're
misinterpretations a reasonably smart and well-intentioned person might make.
In fact it's sometimes better to say something slightly misleading and then
add the correction than to try to get an idea right in one shot. That can be
more efficient, and can also model the way such an idea would be discovered.
But I don't think you should explicitly forestall intentional
misinterpretations in the body of an essay. An essay is a place to meet honest
readers. You don't want to spoil your house by putting bars on the windows to
protect against dishonest ones. The place to protect against intentional
misinterpretations is in end-notes. But don't think you can predict them all.
People are as ingenious at misrepresenting you when you say something they
don't want to hear as they are at coming up with rationalizations for things
they want to do but know they shouldn't. I suspect it's the same skill.
_____
As with most other things, the way to get better at writing essays is to
practice. But how do you start? Now that we've examined the structure of
useful writing, we can rephrase that question more precisely. Which constraint
do you relax initially? The answer is, the first component of importance: the
number of people who care about what you write.
If you narrow the topic sufficiently, you can probably find something you're
an expert on. Write about that to start with. If you only have ten readers who
care, that's fine. You're helping them, and you're writing. Later you can
expand the breadth of topics you write about.
The other constraint you can relax is a little surprising: publication.
Writing essays doesn't have to mean publishing them. That may seem strange now
that the trend is to publish every random thought, but it worked for me. I
wrote what amounted to essays in notebooks for about 15 years. I never
published any of them and never expected to. I wrote them as a way of figuring
things out. But when the web came along I'd had a lot of practice.
Incidentally, _Steve Wozniak_ did the same thing. In high school he designed
computers on paper for fun. He couldn't build them because he couldn't afford
the components. But when Intel launched 4K DRAMs in 1975, he was ready.
_____
How many essays are there left to write though? The answer to that question is
probably the most exciting thing I've learned about essay writing. Nearly all
of them are left to write.
Although _the essay_ is an old form, it hasn't been assiduously cultivated. In
the print era, publication was expensive, and there wasn't enough demand for
essays to publish that many. You could publish essays if you were already well
known for writing something else, like novels. Or you could write book reviews
that you took over to express your own ideas. But there was not really a
direct path to becoming an essayist. Which meant few essays got written, and
those that did tended to be about a narrow range of subjects.
Now, thanks to the internet, there's a path. Anyone can publish essays online.
You start in obscurity, perhaps, but at least you can start. You don't need
anyone's permission.
It sometimes happens that an area of knowledge sits quietly for years, till
some change makes it explode. Cryptography did this to number theory. The
internet is doing it to the essay.
The exciting thing is not that there's a lot left to write, but that there's a
lot left to discover. There's a certain kind of idea that's best discovered by
writing essays. If most essays are still unwritten, most such ideas are still
undiscovered.
April 2021
When intellectuals talk about the death penalty, they talk about things like
whether it's permissible for the state to take someone's life, whether the
death penalty acts as a deterrent, and whether more death sentences are given
to some groups than others. But in practice the debate about the death penalty
is not about whether it's ok to kill murderers. It's about whether it's ok to
kill innocent people, because at least 4% of people on death row are
_innocent_.
When I was a kid I imagined that it was unusual for people to be convicted of
crimes they hadn't committed, and that in murder cases especially this must be
very rare. Far from it. Now, thanks to organizations like the _Innocence
Project_, we see a constant stream of stories about murder convictions being
overturned after new evidence emerges. Sometimes the police and prosecutors
were just very sloppy. Sometimes they were crooked, and knew full well they
were convicting an innocent person.
Kenneth Adams and three other men spent 18 years in prison on a murder
conviction. They were exonerated after DNA testing implicated three different
men, two of whom later confessed. The police had been told about the other men
early in the investigation, but never followed up the lead.
Keith Harward spent 33 years in prison on a murder conviction. He was
convicted because "experts" said his teeth matched photos of bite marks on one
victim. He was exonerated after DNA testing showed the murder had been
committed by another man, Jerry Crotty.
Ricky Jackson and two other men spent 39 years in prison after being convicted
of murder on the testimony of a 12-year-old boy, who later recanted and said
he'd been coerced by police. Multiple people have confirmed the boy was
elsewhere at the time. The three men were exonerated after the county
prosecutor dropped the charges, saying "The state is conceding the obvious."
Alfred Brown spent 12 years in prison on a murder conviction, including 10
years on death row. He was exonerated after it was discovered that the
assistant district attorney had concealed phone records proving he could not
have committed the crimes.
Glenn Ford spent 29 years on death row after having been convicted of murder.
He was exonerated after new evidence proved he was not even at the scene when
the murder occurred. The attorneys assigned to represent him had never tried a
jury case before.
Cameron Willingham was actually executed in 2004 by lethal injection. The
"expert" who testified that he deliberately set fire to his house has since
been discredited. A re-examination of the case ordered by the state of Texas
in 2009 concluded that "a finding of arson could not be sustained."
_Rich Glossip_ has spent 20 years on death row after being convicted of murder
on the testimony of the actual killer, who escaped with a life sentence in
return for implicating him. In 2015 he came within minutes of execution before
it emerged that Oklahoma had been planning to kill him with an illegal
combination of drugs. They still plan to go ahead with the execution, perhaps
as soon as this summer, despite _new evidence_ exonerating him.
I could go on. There are hundreds of similar cases. In Florida alone, 29 death
row prisoners have been exonerated so far.
Far from being rare, wrongful murder convictions are _very common_. Police are
under pressure to solve a crime that has gotten a lot of attention. When they
find a suspect, they want to believe he's guilty, and ignore or even destroy
evidence suggesting otherwise. District attorneys want to be seen as effective
and tough on crime, and in order to win convictions are willing to manipulate
witnesses and withhold evidence. Court-appointed defense attorneys are
overworked and often incompetent. There's a ready supply of criminals willing
to give false testimony in return for a lighter sentence, suggestible
witnesses who can be made to say whatever police want, and bogus "experts"
eager to claim that science proves the defendant is guilty. And juries want to
believe them, since otherwise some terrible crime remains unsolved.
This circus of incompetence and dishonesty is the real issue with the death
penalty. We don't even reach the point where theoretical questions about the
moral justification or effectiveness of capital punishment start to matter,
because so many of the people sentenced to death are actually innocent.
Whatever it means in theory, in practice capital punishment means killing
innocent people.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Don Knight for reading
drafts of this.
**Related:**
Will Florida Kill an Innocent Man?
Was Kevin Cooper Framed for Murder?
Did Texas execute an innocent man?