December 2019
Before I had kids, I was afraid of having kids. Up to that point I felt about
kids the way the young Augustine felt about living virtuously. I'd have been
sad to think I'd never have children. But did I want them now? No.
If I had kids, I'd become a parent, and parents, as I'd known since I was a
kid, were uncool. They were dull and responsible and had no fun. And while
it's not surprising that kids would believe that, to be honest I hadn't seen
much as an adult to change my mind. Whenever I'd noticed parents with kids,
the kids seemed to be terrors, and the parents pathetic harried creatures,
even when they prevailed.
When people had babies, I congratulated them enthusiastically, because that
seemed to be what one did. But I didn't feel it at all. "Better you than me,"
I was thinking.
Now when people have babies I congratulate them enthusiastically and I mean
it. Especially the first one. I feel like they just got the best gift in the
world.
What changed, of course, is that I had kids. Something I dreaded turned out to
be wonderful.
Partly, and I won't deny it, this is because of serious chemical changes that
happened almost instantly when our first child was born. It was like someone
flipped a switch. I suddenly felt protective not just toward our child, but
toward all children. As I was driving my wife and new son home from the
hospital, I approached a crosswalk full of pedestrians, and I found myself
thinking "I have to be really careful of all these people. Every one of them
is someone's child!"
So to some extent you can't trust me when I say having kids is great. To some
extent I'm like a religious cultist telling you that you'll be happy if you
join the cult too, but only because joining the cult will alter your mind in
a way that will make you happy to be a cult member.
But not entirely. There were some things about having kids that I clearly got
wrong before I had them.
For example, there was a huge amount of selection bias in my observations of
parents and children. Some parents may have noticed that I wrote "Whenever I'd
noticed parents with kids." Of course the times I noticed kids were when
things were going wrong. I only noticed them when they made noise. And where
was I when I noticed them? Ordinarily I never went to places with kids, so the
only times I encountered them were in shared bottlenecks like airplanes. Which
is not exactly a representative sample. Flying with a toddler is something
very few parents enjoy.
What I didn't notice, because they tend to be much quieter, were all the great
moments parents had with kids. People don't talk about these much (the magic
is hard to put into words, and all other parents know about them anyway), but
one of the great things about having kids is that there are so many times when
you feel there is nowhere else you'd rather be, and nothing else you'd rather
be doing. You don't have to be doing anything special. You could just be going
somewhere together, or putting them to bed, or pushing them on the swings at
the park. But you wouldn't trade these moments for anything. One doesn't tend
to associate kids with peace, but that's what you feel. You don't need to look
any further than where you are right now.
Before I had kids, I had moments of this kind of peace, but they were rarer.
With kids it can happen several times a day.
My other source of data about kids was my own childhood, and that was
similarly misleading. I was pretty bad, and was always in trouble for
something or other. So it seemed to me that parenthood was essentially law
enforcement. I didn't realize there were good times too.
I remember my mother telling me once when I was about 30 that she'd really
enjoyed having me and my sister. My god, I thought, this woman is a saint. She
not only endured all the pain we subjected her to, but actually enjoyed it?
Now I realize she was simply telling the truth.
She said that one reason she liked having us was that we'd been interesting to
talk to. That took me by surprise when I had kids. You don't just love them.
They become your friends too. They're really interesting. And while I admit
small children are disastrously fond of repetition (anything worth doing once
is worth doing fifty times) it's often genuinely fun to play with them. That
surprised me too. Playing with a 2 year old was fun when I was 2 and
definitely not fun when I was 6. Why would it become fun again later? But it
does.
There are of course times that are pure drudgery. Or worse still, terror.
Having kids is one of those intense types of experience that are hard to
imagine unless you've had them. But it is not, as I implicitly believed before
having kids, simply your DNA heading for the lifeboats.
Some of my worries about having kids were right, though. They definitely make
you less productive. I know having kids makes some people get their act
together, but if your act was already together, you're going to have less time
to do it in. In particular, you're going to have to work to a schedule. Kids
have schedules. I'm not sure if it's because that's how kids are, or because
it's the only way to integrate their lives with adults', but once you have
kids, you tend to have to work on their schedule.
You will have chunks of time to work. But you can't let work spill
promiscuously through your whole life, like I used to before I had kids.
You're going to have to work at the same time every day, whether inspiration
is flowing or not, and there are going to be times when you have to stop, even
if it is.
I've been able to adapt to working this way. Work, like love, finds a way. If
there are only certain times it can happen, it happens at those times. So
while I don't get as much done as before I had kids, I get enough done.
I hate to say this, because being ambitious has always been a part of my
identity, but having kids may make one less ambitious. It hurts to see that
sentence written down. I squirm to avoid it. But if there weren't something
real there, why would I squirm? The fact is, once you have kids, you're
probably going to care more about them than you do about yourself. And
attention is a zero-sum game. Only one idea at a time can be the _top idea in
your mind_. Once you have kids, it will often be your kids, and that means it
will less often be some project you're working on.
I have some hacks for sailing close to this wind. For example, when I write
essays, I think about what I'd want my kids to know. That drives me to get
things right. And when I was writing _Bel_, I told my kids that once I
finished it I'd take them to Africa. When you say that sort of thing to a
little kid, they treat it as a promise. Which meant I had to finish or I'd be
taking away their trip to Africa. Maybe if I'm really lucky such tricks could
put me net ahead. But the wind is there, no question.
On the other hand, what kind of wimpy ambition do you have if it won't survive
having kids? Do you have so little to spare?
And while having kids may be warping my present judgement, it hasn't
overwritten my memory. I remember perfectly well what life was like before.
Well enough to miss some things a lot, like the ability to take off for some
other country at a moment's notice. That was so great. Why did I never do
that?
See what I did there? The fact is, most of the freedom I had before kids, I
never used. I paid for it in loneliness, but I never used it.
I had plenty of happy times before I had kids. But if I count up happy
moments, not just potential happiness but actual happy moments, there are more
after kids than before. Now I practically have it on tap, almost any bedtime.
People's experiences as parents vary a lot, and I know I've been lucky. But I
think the worries I had before having kids must be pretty common, and judging
by other parents' faces when they see their kids, so must the happiness that
kids bring.
**Note**
Adults are sophisticated enough to see 2 year olds for the fascinatingly
complex characters they are, whereas to most 6 year olds, 2 year olds are just
defective 6 year olds.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this.
---
January 2006
To do something well you have to like it. That idea is not exactly novel.
We've got it down to four words: "Do what you love." But it's not enough just
to tell people that. Doing what you love is complicated.
The very idea is foreign to what most of us learn as kids. When I was a kid,
it seemed as if work and fun were opposites by definition. Life had two
states: some of the time adults were making you do things, and that was called
work; the rest of the time you could do what you wanted, and that was called
playing. Occasionally the things adults made you do were fun, just as,
occasionally, playing wasn't — for example, if you fell and hurt yourself. But
except for these few anomalous cases, work was pretty much defined as not-fun.
And it did not seem to be an accident. School, it was implied, was tedious
_because_ it was preparation for grownup work.
The world then was divided into two groups, grownups and kids. Grownups, like
some kind of cursed race, had to work. Kids didn't, but they did have to go to
school, which was a dilute version of work meant to prepare us for the real
thing. Much as we disliked school, the grownups all agreed that grownup work
was worse, and that we had it easy.
Teachers in particular all seemed to believe implicitly that work was not fun.
Which is not surprising: work wasn't fun for most of them. Why did we have to
memorize state capitals instead of playing dodgeball? For the same reason they
had to watch over a bunch of kids instead of lying on a beach. You couldn't
just do what you wanted.
I'm not saying we should let little kids do whatever they want. They may have
to be made to work on certain things. But if we make kids work on dull stuff,
it might be wise to tell them that tediousness is not the defining quality of
work, and indeed that the reason they have to work on dull stuff now is so
they can work on more interesting stuff later.
Once, when I was about 9 or 10, my father told me I could be whatever I wanted
when I grew up, so long as I enjoyed it. I remember that precisely because it
seemed so anomalous. It was like being told to use dry water. Whatever I
thought he meant, I didn't think he meant work could _literally_ be fun — fun
like playing. It took me years to grasp that.
**Jobs**
By high school, the prospect of an actual job was on the horizon. Adults would
sometimes come to speak to us about their work, or we would go to see them at
work. It was always understood that they enjoyed what they did. In retrospect
I think one may have: the private jet pilot. But I don't think the bank
manager really did.
The main reason they all acted as if they enjoyed their work was presumably
the upper-middle class convention that you're supposed to. It would not merely
be bad for your career to say that you despised your job, but a social faux pas.
Why is it conventional to pretend to like what you do? The first sentence of
this essay explains that. If you have to like something to do it well, then
the most successful people will all like what they do. That's where the
upper-middle class tradition comes from. Just as houses all over America are full of
chairs that are, without the owners even knowing it, nth-degree imitations of
chairs designed 250 years ago for French kings, conventional attitudes about
work are, without the owners even knowing it, nth-degree imitations of the
attitudes of people who've done great things.
What a recipe for alienation. By the time they reach an age to think about
what they'd like to do, most kids have been thoroughly misled about the idea
of loving one's work. School has trained them to regard work as an unpleasant
duty. Having a job is said to be even more onerous than schoolwork. And yet
all the adults claim to like what they do. You can't blame kids for thinking
"I am not like these people; I am not suited to this world."
Actually they've been told three lies: the stuff they've been taught to regard
as work in school is not real work; grownup work is not (necessarily) worse
than schoolwork; and many of the adults around them are lying when they say
they like what they do.
The most dangerous liars can be the kids' own parents. If you take a boring
job to give your family a high standard of living, as so many people do, you
risk infecting your kids with the idea that work is boring. Maybe it would
be better for kids in this one case if parents were not so unselfish. A parent
who set an example of loving their work might help their kids more than an
expensive house.
It was not till I was in college that the idea of work finally broke free from
the idea of making a living. Then the important question became not how to
make money, but what to work on. Ideally these coincided, but some spectacular
boundary cases (like Einstein in the patent office) proved they weren't
identical.
The definition of work was now to make some original contribution to the
world, and in the process not to starve. But after the habit of so many years
my idea of work still included a large component of pain. Work still seemed to
require discipline, because only hard problems yielded grand results, and hard
problems couldn't literally be fun. Surely one had to force oneself to work on
them.
If you think something's supposed to hurt, you're less likely to notice if
you're doing it wrong. That about sums up my experience of graduate school.
**Bounds**
_How much_ are you supposed to like what you do? Unless you know that, you
don't know when to stop searching. And if, like most people, you underestimate
it, you'll tend to stop searching too early. You'll end up doing something
chosen for you by your parents, or the desire to make money, or prestige — or
sheer inertia.
Here's an upper bound: Do what you love doesn't mean, do what you would like
to do most _this second_. Even Einstein probably had moments when he wanted to
have a cup of coffee, but told himself he ought to finish what he was working
on first.
It used to perplex me when I read about people who liked what they did so much
that there was nothing they'd rather do. There didn't seem to be any sort of
work I liked _that_ much. If I had a choice of (a) spending the next hour
working on something or (b) being teleported to Rome and spending the next hour
wandering about, was there any sort of work I'd prefer? Honestly, no.
But the fact is, almost anyone would rather, at any given moment, float about
in the Caribbean, or have sex, or eat some delicious food, than work on hard
problems. The rule about doing what you love assumes a certain length of time.
It doesn't mean, do what will make you happiest this second, but what will
make you happiest over some longer period, like a week or a month.
Unproductive pleasures pall eventually. After a while you get tired of lying
on the beach. If you want to stay happy, you have to do something.
As a lower bound, you have to like your work more than any unproductive
pleasure. You have to like what you do enough that the concept of "spare time"
seems mistaken. Which is not to say you have to spend all your time working.
You can only work so much before you get tired and start to screw up. Then you
want to do something else — even something mindless. But you don't regard this
time as the prize and the time you spend working as the pain you endure to
earn it.
I put the lower bound there for practical reasons. If your work is not your
favorite thing to do, you'll have terrible problems with procrastination.
You'll have to force yourself to work, and when you resort to that the results
are distinctly inferior.
To be happy I think you have to be doing something you not only enjoy, but
admire. You have to be able to say, at the end, wow, that's pretty cool. This
doesn't mean you have to make something. If you learn how to hang glide, or to
speak a foreign language fluently, that will be enough to make you say, for a
while at least, wow, that's pretty cool. What there has to be is a test.
So one thing that falls just short of the standard, I think, is reading books.
Except for some books in math and the hard sciences, there's no test of how
well you've read a book, and that's why merely reading books doesn't quite
feel like work. You have to do something with what you've read to feel
productive.
I think the best test is one Gino Lee taught me: to try to do things that
would make your friends say wow. But it probably wouldn't start to work
properly till about age 22, because most people haven't had a big enough
sample to pick friends from before then.
**Sirens**
What you should not do, I think, is worry about the opinion of anyone beyond
your friends. You shouldn't worry about prestige. Prestige is the opinion of
the rest of the world. When you can ask the opinions of people whose judgement
you respect, what does it add to consider the opinions of people you don't
even know?
This is easy advice to give. It's hard to follow, especially when you're
young. Prestige is like a powerful magnet that warps even your beliefs
about what you enjoy. It causes you to work not on what you like, but what
you'd like to like.
That's what leads people to try to write novels, for example. They like
reading novels. They notice that people who write them win Nobel prizes. What
could be more wonderful, they think, than to be a novelist? But liking the
idea of being a novelist is not enough; you have to like the actual work of
novel-writing if you're going to be good at it; you have to like making up
elaborate lies.
Prestige is just fossilized inspiration. If you do anything well enough,
you'll _make_ it prestigious. Plenty of things we now consider prestigious
were anything but at first. Jazz comes to mind — though almost any established
art form would do. So just do what you like, and let prestige take care of
itself.
Prestige is especially dangerous to the ambitious. If you want to make
ambitious people waste their time on errands, the way to do it is to bait the
hook with prestige. That's the recipe for getting people to give talks, write
forewords, serve on committees, be department heads, and so on. It might be a
good rule simply to avoid any prestigious task. If it didn't suck, they
wouldn't have had to make it prestigious.
Similarly, if you admire two kinds of work equally, but one is more
prestigious, you should probably choose the other. Your opinions about what's
admirable are always going to be slightly influenced by prestige, so if the
two seem equal to you, you probably have more genuine admiration for the less
prestigious one.
The other big force leading people astray is money. Money by itself is not
that dangerous. When something pays well but is regarded with contempt, like
telemarketing, or prostitution, or personal injury litigation, ambitious
people aren't tempted by it. That kind of work ends up being done by people
who are "just trying to make a living." (Tip: avoid any field whose
practitioners say this.) The danger is when money is combined with prestige,
as in, say, corporate law, or medicine. A comparatively safe and prosperous
career with some automatic baseline prestige is dangerously tempting to
someone young, who hasn't thought much about what they really like.
The test of whether people love what they do is whether they'd do it even if
they weren't paid for it — even if they had to work at another job to make a
living. How many corporate lawyers would do their current work if they had to
do it for free, in their spare time, and take day jobs as waiters to support
themselves?
This test is especially helpful in deciding between different kinds of
academic work, because fields vary greatly in this respect. Most good
mathematicians would work on math even if there were no jobs as math
professors, whereas in the departments at the other end of the spectrum, the
availability of teaching jobs is the driver: people would rather be English
professors than work in ad agencies, and publishing papers is the way you
compete for such jobs. Math would happen without math departments, but it is
the existence of English majors, and therefore jobs teaching them, that calls
into being all those thousands of dreary papers about gender and identity in
the novels of Conrad. No one does that kind of thing for fun.
The advice of parents will tend to err on the side of money. It seems safe to
say there are more undergrads who want to be novelists and whose parents want
them to be doctors than who want to be doctors and whose parents want them to
be novelists. The kids think their parents are "materialistic." Not
necessarily. All parents tend to be more conservative for their kids than they
would for themselves, simply because, as parents, they share risks more than
rewards. If your eight year old son decides to climb a tall tree, or your
teenage daughter decides to date the local bad boy, you won't get a share in
the excitement, but if your son falls, or your daughter gets pregnant, you'll
have to deal with the consequences.
**Discipline**
With such powerful forces leading us astray, it's not surprising we find it so
hard to discover what we like to work on. Most people are doomed in childhood
by accepting the axiom that work = pain. Those who escape this are nearly all
lured onto the rocks by prestige or money. How many even discover something
they love to work on? A few hundred thousand, perhaps, out of billions.
It's hard to find work you love; it must be, if so few do. So don't
underestimate this task. And don't feel bad if you haven't succeeded yet. In
fact, if you admit to yourself that you're discontented, you're a step ahead
of most people, who are still in denial. If you're surrounded by colleagues
who claim to enjoy work that you find contemptible, odds are they're lying to
themselves. Not necessarily, but probably.
Although doing great work takes less discipline than people think — because
the way to do great work is to find something you like so much that you don't
have to force yourself to do it — _finding_ work you love does usually require
discipline. Some people are lucky enough to know what they want to do when
they're 12, and just glide along as if they were on railroad tracks. But this
seems the exception. More often people who do great things have careers with
the trajectory of a ping-pong ball. They go to school to study A, drop out and
get a job doing B, and then become famous for C after taking it up on the
side.
Sometimes jumping from one sort of work to another is a sign of energy, and
sometimes it's a sign of laziness. Are you dropping out, or boldly carving a
new path? You often can't tell yourself. Plenty of people who will later do
great things seem to be disappointments early on, when they're trying to find
their niche.
Is there some test you can use to keep yourself honest? One is to try to do a
good job at whatever you're doing, even if you don't like it. Then at least
you'll know you're not using dissatisfaction as an excuse for being lazy.
Perhaps more importantly, you'll get into the habit of doing things well.
Another test you can use is: always produce. For example, if you have a day
job you don't take seriously because you plan to be a novelist, are you
producing? Are you writing pages of fiction, however bad? As long as you're
producing, you'll know you're not merely using the hazy vision of the grand
novel you plan to write one day as an opiate. The view of it will be
obstructed by the all too palpably flawed one you're actually writing.
"Always produce" is also a heuristic for finding the work you love. If you
subject yourself to that constraint, it will automatically push you away from
things you think you're supposed to work on, toward things you actually like.
"Always produce" will discover your life's work the way water, with the aid of
gravity, finds the hole in your roof.
Of course, figuring out what you like to work on doesn't mean you get to work
on it. That's a separate question. And if you're ambitious you have to keep
them separate: you have to make a conscious effort to keep your ideas about
what you want from being contaminated by what seems possible.
It's painful to keep them apart, because it's painful to observe the gap
between them. So most people pre-emptively lower their expectations. For
example, if you asked random people on the street if they'd like to be able to
draw like Leonardo, you'd find most would say something like "Oh, I can't
draw." This is more a statement of intention than fact; it means, I'm not
going to try. Because the fact is, if you took a random person off the street
and somehow got them to work as hard as they possibly could at drawing for the
next twenty years, they'd get surprisingly far. But it would require a great
moral effort; it would mean staring failure in the eye every day for years.
And so to protect themselves people say "I can't."
Another related line you often hear is that not everyone can do work they love
— that someone has to do the unpleasant jobs. Really? How do you make them? In
the US the only mechanism for forcing people to do unpleasant jobs is the
draft, and that hasn't been invoked for over 30 years. All we can do is
encourage people to do unpleasant work, with money and prestige.
If there's something people still won't do, it seems as if society just has to
make do without. That's what happened with domestic servants. For millennia
that was the canonical example of a job "someone had to do." And yet in the
mid twentieth century servants practically disappeared in rich countries, and
the rich have just had to do without.
So while there may be some things someone has to do, there's a good chance
anyone saying that about any particular job is mistaken. Most unpleasant jobs
would either get automated or go undone if no one were willing to do them.
**Two Routes**
There's another sense of "not everyone can do work they love" that's all too
true, however. One has to make a living, and it's hard to get paid for doing
work you love. There are two routes to that destination:
> The organic route: as you become more eminent, gradually to increase the
> parts of your job that you like at the expense of those you don't.
>
> The two-job route: to work at things you don't like to get money to work on
> things you do.
The organic route is more common. It happens naturally to anyone who does good
work. A young architect has to take whatever work he can get, but if he does
well he'll gradually be in a position to pick and choose among projects. The
disadvantage of this route is that it's slow and uncertain. Even tenure is not
real freedom.
The two-job route has several variants depending on how long you work for
money at a time. At one extreme is the "day job," where you work regular hours
at one job to make money, and work on what you love in your spare time. At the
other extreme you work at something till you make enough not to have to work
for money again.
The two-job route is less common than the organic route, because it requires a
deliberate choice. It's also more dangerous. Life tends to get more expensive
as you get older, so it's easy to get sucked into working longer than you
expected at the money job. Worse still, anything you work on changes you. If
you work too long on tedious stuff, it will rot your brain. And the best
paying jobs are most dangerous, because they require your full attention.
The advantage of the two-job route is that it lets you jump over obstacles.
The landscape of possible jobs isn't flat; there are walls of varying heights
between different kinds of work. The trick of maximizing the parts of your
job that you like can get you from architecture to product design, but not,
probably, to music. If you make money doing one thing and then work on
another, you have more freedom of choice.
Which route should you take? That depends on how sure you are of what you want
to do, how good you are at taking orders, how much risk you can stand, and the
odds that anyone will pay (in your lifetime) for what you want to do. If
you're sure of the general area you want to work in and it's something people
are likely to pay you for, then you should probably take the organic route.
But if you don't know what you want to work on, or don't like to take orders,
you may want to take the two-job route, if you can stand the risk.
Don't decide too soon. Kids who know early what they want to do seem
impressive, as if they got the answer to some math question before the other
kids. They have an answer, certainly, but odds are it's wrong.
A friend of mine who is a quite successful doctor complains constantly about
her job. When people applying to medical school ask her for advice, she wants
to shake them and yell "Don't do it!" (But she never does.) How did she get
into this fix? In high school she already wanted to be a doctor. And she is so
ambitious and determined that she overcame every obstacle along the way —
including, unfortunately, not liking it.
Now she has a life chosen for her by a high-school kid.
When you're young, you're given the impression that you'll get enough
information to make each choice before you need to make it. But this is
certainly not so with work. When you're deciding what to do, you have to
operate on ridiculously incomplete information. Even in college you get little
idea what various types of work are like. At best you may have a couple
internships, but not all jobs offer internships, and those that do don't teach
you much more about the work than being a batboy teaches you about playing
baseball.
In the design of lives, as in the design of most other things, you get better
results if you use flexible media. So unless you're fairly sure what you want
to do, your best bet may be to choose a type of work that could turn into
either an organic or two-job career. That was probably part of the reason I
chose computers. You can be a professor, or make a lot of money, or morph it
into any number of other kinds of work.
It's also wise, early on, to seek jobs that let you do many different things,
so you can learn faster what various kinds of work are like. Conversely, the
extreme version of the two-job route is dangerous because it teaches you so
little about what you like. If you work hard at being a bond trader for ten
years, thinking that you'll quit and write novels when you have enough money,
what happens when you quit and then discover that you don't actually like
writing novels?
Most people would say, I'd take that problem. Give me a million dollars and
I'll figure out what to do. But it's harder than it looks. Constraints give
your life shape. Remove them and most people have no idea what to do: look at
what happens to those who win lotteries or inherit money. Much as everyone
thinks they want financial security, the happiest people are not those who
have it, but those who like what they do. So a plan that promises freedom at
the expense of knowing what to do with it may not be as good as it seems.
Whichever route you take, expect a struggle. Finding work you love is very
difficult. Most people fail. Even if you succeed, it's rare to be free to work
on what you want till your thirties or forties. But if you have the
destination in sight you'll be more likely to arrive at it. If you know you
can love work, you're in the home stretch, and if you know what work you love,
you're practically there.
---
December 2006
I grew up believing that taste is just a matter of personal preference. Each
person has things they like, but no one's preferences are any better than
anyone else's. There is no such thing as _good_ taste.
Like a lot of things I grew up believing, this turns out to be false, and I'm
going to try to explain why.
One problem with saying there's no such thing as good taste is that it also
means there's no such thing as good art. If there were good art, then people
who liked it would have better taste than people who didn't. So if you discard
taste, you also have to discard the idea of art being good, and artists being
good at making it.
It was pulling on that thread that unravelled my childhood faith in
relativism. When you're trying to make things, taste becomes a practical
matter. You have to decide what to do next. Would it make the painting better
if I changed that part? If there's no such thing as better, it doesn't matter
what you do. In fact, it doesn't matter if you paint at all. You could just go
out and buy a ready-made blank canvas. If there's no such thing as good, that
would be just as great an achievement as the ceiling of the Sistine Chapel.
Less laborious, certainly, but if you can achieve the same level of
performance with less effort, surely that's more impressive, not less.
Yet that doesn't seem quite right, does it?
**Audience**
I think the key to this puzzle is to remember that art has an audience. Art
has a purpose, which is to interest its audience. Good art (like good
anything) is art that achieves its purpose particularly well. The meaning of
"interest" can vary. Some works of art are meant to shock, and others to
please; some are meant to jump out at you, and others to sit quietly in the
background. But all art has to work on an audience, and—here's the critical
point—members of the audience share things in common.
For example, nearly all humans find human faces engaging. It seems to be wired
into us. Babies can recognize faces practically from birth. In fact, faces
seem to have co-evolved with our interest in them; the face is the body's
billboard. So all other things being equal, a painting with faces in it will
interest people more than one without.
One reason it's easy to believe that taste is merely personal preference is
that, if it isn't, how do you pick out the people with better taste? There are
billions of people, each with their own opinion; on what grounds can you
prefer one to another?
But if audiences have a lot in common, you're not in a position of having to
choose one out of a random set of individual biases, because the set isn't
random. All humans find faces engaging—practically by definition: face
recognition is in our DNA. And so having a notion of good art, in the sense of
art that does its job well, doesn't require you to pick out a few individuals
and label their opinions as correct. No matter who you pick, they'll find
faces engaging.
Of course, space aliens probably wouldn't find human faces engaging. But there
might be other things they shared in common with us. The most likely source of
examples is math. I expect space aliens would agree with us most of the time
about which of two proofs was better. Erdos thought so. He called a maximally
elegant proof one out of God's book, and presumably God's book is universal.
Once you start talking about audiences, you don't have to argue simply that
there are or aren't standards of taste. Instead tastes are a series of
concentric rings, like ripples in a pond. There are some things that will
appeal to you and your friends, others that will appeal to most people your
age, others that will appeal to most humans, and perhaps others that would
appeal to most sentient beings (whatever that means).
The picture is slightly more complicated than that, because in the middle of
the pond there are overlapping sets of ripples. For example, there might be
things that appealed particularly to men, or to people from a certain culture.
If good art is art that interests its audience, then when you talk about art
being good, you also have to say for what audience. So is it meaningless to
talk about art simply being good or bad? No, because one audience is the set
of all possible humans. I think that's the audience people are implicitly
talking about when they say a work of art is good: they mean it would engage
any human.
And that is a meaningful test, because although, like any everyday concept,
"human" is fuzzy around the edges, there are a lot of things practically all
humans have in common. In addition to our interest in faces, there's something
special about primary colors for nearly all of us, because it's an artifact of
the way our eyes work. Most humans will also find images of 3D objects
engaging, because that also seems to be built into our visual perception.
And beneath that there's edge-finding, which makes images with definite shapes
more engaging than mere blur.
Humans have a lot more in common than this, of course. My goal is not to
compile a complete list, just to show that there's some solid ground here.
People's preferences aren't random. So an artist working on a painting and
trying to decide whether to change some part of it doesn't have to think "Why
bother? I might as well flip a coin." Instead he can ask "What would make the
painting more interesting to people?" And the reason you can't equal
Michelangelo by going out and buying a blank canvas is that the ceiling of the
Sistine Chapel is more interesting to people.
A lot of philosophers have had a hard time believing it was possible for there
to be objective standards for art. It seemed obvious that beauty, for example,
was something that happened in the head of the observer, not something that
was a property of objects. It was thus "subjective" rather than "objective."
But in fact if you narrow the definition of beauty to something that works a
certain way on humans, and you observe how much humans have in common, it
turns out to be a property of objects after all. You don't have to choose
between something being a property of the subject or the object if subjects
all react similarly. Being good art is thus a property of objects as much as,
say, being toxic to humans is: it's good art if it consistently affects humans
in a certain way.
**Error**
So could we figure out what the best art is by taking a vote? After all, if
appealing to humans is the test, we should be able to just ask them, right?
Well, not quite. For products of nature that might work. I'd be willing to eat
the apple the world's population had voted most delicious, and I'd probably be
willing to visit the beach they voted most beautiful, but having to look at
the painting they voted the best would be a crapshoot.
Man-made stuff is different. For one thing, artists, unlike apple trees, often
deliberately try to trick us. Some tricks are quite subtle. For example, any
work of art sets expectations by its level of finish. You don't expect
photographic accuracy in something that looks like a quick sketch. So one
widely used trick, especially among illustrators, is to intentionally make a
painting or drawing look like it was done faster than it was. The average
person looks at it and thinks: how amazingly skillful. It's like saying
something clever in a conversation as if you'd thought of it on the spur of
the moment, when in fact you'd worked it out the day before.
Another much less subtle influence is brand. If you go to see the Mona Lisa,
you'll probably be disappointed, because it's hidden behind a thick glass wall
and surrounded by a frenzied crowd taking pictures of themselves in front of
it. At best you can see it the way you see a friend across the room at a
crowded party. The Louvre might as well replace it with a copy; no one would be
able to tell. And yet the Mona Lisa is a small, dark painting. If you found
people who'd never seen an image of it and sent them to a museum in which it
was hanging among other paintings with a tag labelling it as a portrait by an
unknown fifteenth century artist, most would walk by without giving it a
second look.
For the average person, brand dominates all other factors in the judgement of
art. Seeing a painting they recognize from reproductions is so overwhelming
that their response to it as a painting is drowned out.
And then of course there are the tricks people play on themselves. Most adults
looking at art worry that if they don't like what they're supposed to, they'll
be thought uncultured. This doesn't just affect what they claim to like; they
actually make themselves like things they're supposed to.
That's why you can't just take a vote. Though appeal to people is a meaningful
test, in practice you can't measure it, just as you can't find north using a
compass with a magnet sitting next to it. There are sources of error so
powerful that if you take a vote, all you're measuring is the error.
We can, however, approach our goal from another direction, by using ourselves
as guinea pigs. You're human. If you want to know what the basic human
reaction to a piece of art would be, you can at least approach that by getting
rid of the sources of error in your own judgements.
For example, while anyone's reaction to a famous painting will be warped at
first by its fame, there are ways to decrease its effects. One is to come back
to the painting over and over. After a few days the fame wears off, and you
can start to see it as a painting. Another is to stand close. A painting
familiar from reproductions looks more familiar from ten feet away; close in
you see details that get lost in reproductions, and which you're therefore
seeing for the first time.
There are two main kinds of error that get in the way of seeing a work of art:
biases you bring from your own circumstances, and tricks played by the artist.
Tricks are straightforward to correct for. Merely being aware of them usually
prevents them from working. For example, when I was ten I used to be very
impressed by airbrushed lettering that looked like shiny metal. But once you
study how it's done, you see that it's a pretty cheesy trick—one of the sort
that relies on pushing a few visual buttons really hard to temporarily
overwhelm the viewer. It's like trying to convince someone by shouting at
them.
The way not to be vulnerable to tricks is to explicitly seek out and catalog
them. When you notice a whiff of dishonesty coming from some kind of art, stop
and figure out what's going on. When someone is obviously pandering to an
audience that's easily fooled, whether it's someone making shiny stuff to
impress ten year olds, or someone making conspicuously avant-garde stuff to
impress would-be intellectuals, learn how they do it. Once you've seen enough
examples of specific types of tricks, you start to become a connoisseur of
trickery in general, just as professional magicians are.
What counts as a trick? Roughly, it's something done with contempt for the
audience. For example, the guys designing Ferraris in the 1950s were probably
designing cars that they themselves admired. Whereas I suspect over at General
Motors the marketing people are telling the designers, "Most people who buy
SUVs do it to seem manly, not to drive off-road. So don't worry about the
suspension; just make that sucker as big and tough-looking as you can."
I think with some effort you can make yourself nearly immune to tricks. It's
harder to escape the influence of your own circumstances, but you can at least
move in that direction. The way to do it is to travel widely, in both time and
space. If you go and see all the different kinds of things people like in
other cultures, and learn about all the different things people have liked in
the past, you'll probably find it changes what you like. I doubt you could
ever make yourself into a completely universal person, if only because you can
only travel in one direction in time. But if you find a work of art that would
appeal equally to your friends, to people in Nepal, and to the ancient Greeks,
you're probably onto something.
My main point here is not how to have good taste, but that there can even be
such a thing. And I think I've shown that. There is such a thing as good art.
It's art that interests its human audience, and since humans have a lot in
common, what interests them is not random. Since there's such a thing as good
art, there's also such a thing as good taste, which is the ability to
recognize it.
If we were talking about the taste of apples, I'd agree that taste is just
personal preference. Some people like certain kinds of apples and others like
other kinds, but how can you say that one is right and the other wrong?
The thing is, art isn't apples. Art is man-made. It comes with a lot of
cultural baggage, and in addition the people who make it often try to trick
us. Most people's judgement of art is dominated by these extraneous factors;
they're like someone trying to judge the taste of apples in a dish made of
equal parts apples and jalapeno peppers. All they're tasting is the peppers.
So it turns out you can pick out some people and say that they have better
taste than others: they're the ones who actually taste art like apples.
Or to put it more prosaically, they're the people who (a) are hard to trick,
and (b) don't just like whatever they grew up with. If you could find people
who'd eliminated all such influences on their judgement, you'd probably still
see variation in what they liked. But because humans have so much in common,
you'd also find they agreed on a lot. They'd nearly all prefer the ceiling of
the Sistine Chapel to a blank canvas.
**Making It**
I wrote this essay because I was tired of hearing "taste is subjective" and
wanted to kill it once and for all. Anyone who makes things knows intuitively
that's not true. When you're trying to make art, the temptation to be lazy is
as great as in any other kind of work. Of course it matters to do a good job.
And yet you can see how great a hold "taste is subjective" has even in the art
world by how nervous it makes people to talk about art being good or bad.
Those whose jobs require them to judge art, like curators, mostly resort to
euphemisms like "significant" or "important" or (getting dangerously close)
"realized."
I don't have any illusions that being able to talk about art being good or bad
will cause the people who talk about it to have anything more useful to say.
Indeed, one of the reasons "taste is subjective" found such a receptive
audience is that, historically, the things people have said about good taste
have generally been such nonsense.
It's not for the people who talk about art that I want to free the idea of
good art, but for those who make it. Right now, ambitious kids going to art
school run smack into a brick wall. They arrive hoping one day to be as good
as the famous artists they've seen in books, and the first thing they learn is
that the concept of good has been retired. Instead everyone is just supposed
to explore their own personal vision.
When I was in art school, we were looking one day at a slide of some great
fifteenth century painting, and one of the students asked "Why don't artists
paint like that now?" The room suddenly got quiet. Though rarely asked out
loud, this question lurks uncomfortably in the back of every art student's
mind. It was as if someone had brought up the topic of lung cancer in a
meeting within Philip Morris.
"Well," the professor replied, "we're interested in different questions now."
He was a pretty nice guy, but at the time I couldn't help wishing I could send
him back to fifteenth century Florence to explain in person to Leonardo & Co.
how we had moved beyond their early, limited concept of art. Just imagine that
conversation.
In fact, one of the reasons artists in fifteenth century Florence made such
great things was that they believed you could make great things. They
were intensely competitive and were always trying to outdo one another, like
mathematicians or physicists today—maybe like anyone who has ever done
anything really well.
The idea that you could make great things was not just a useful illusion. They
were actually right. So the most important consequence of realizing there can
be good art is that it frees artists to try to make it. To the ambitious kids
arriving at art school this year hoping one day to make great things, I say:
don't believe it when they tell you this is a naive and outdated ambition.
There is such a thing as good art, and if you try to make it, there are people
who will notice.
---
October 2007
_(This essay is derived from a keynote at FOWA in October 2007.)_
There's something interesting happening right now. Startups are undergoing the
same transformation that technology does when it becomes cheaper.
It's a pattern we see over and over in technology. Initially there's some
device that's very expensive and made in small quantities. Then someone
discovers how to make them cheaply; many more get built; and as a result they
can be used in new ways.
Computers are a familiar example. When I was a kid, computers were big,
expensive machines built one at a time. Now they're a commodity. Now we can
stick computers in everything.
This pattern is very old. Most of the turning points in economic history are
instances of it. It happened to steel in the 1850s, and to power in the 1780s.
It happened to cloth manufacture in the thirteenth century, generating the
wealth that later brought about the Renaissance. Agriculture itself was an
instance of this pattern.
Now as well as being produced by startups, this pattern is happening _to_
startups. It's so cheap to start web startups that orders of magnitude more
will be started. If the pattern holds true, that should cause dramatic
changes.
**1\. Lots of Startups**
So my first prediction about the future of web startups is pretty
straightforward: there will be a lot of them. When starting a startup was
expensive, you had to get the permission of investors to do it. Now the only
threshold is courage.
Even that threshold is getting lower, as people watch others take the plunge
and survive. In the last batch of startups we funded, we had several founders
who said they'd thought of applying before, but weren't sure and got jobs
instead. It was only after hearing reports of friends who'd done it that they
decided to try it themselves.
Starting a startup is hard, but having a 9 to 5 job is hard too, and in some
ways a worse kind of hard. In a startup you have lots of worries, but you
don't have that feeling that your life is flying by like you do in a big
company. Plus in a startup you could make much more money.
As word spreads that startups work, the number may grow to a point that would
now seem surprising.
We now think of it as normal to have a job at a company, but this is the
thinnest of historical veneers. Just two or three lifetimes ago, most people
in what are now called industrialized countries lived by farming. So while it
may seem surprising to propose that large numbers of people will change the
way they make a living, it would be more surprising if they didn't.
**2\. Standardization**
When technology makes something dramatically cheaper, standardization always
follows. When you make things in large volumes you tend to standardize
everything that doesn't need to change.
At Y Combinator we still only have four people, so we try to standardize
everything. We could hire employees, but we want to be forced to figure out
how to scale investing.
We often tell startups to release a minimal version one quickly, then let the
needs of the users determine what to do next. In essence, let the market
design the product. We've done the same thing ourselves. We think of the
techniques we're developing for dealing with large numbers of startups as like
software. Sometimes it literally is software, like Hacker News and our
application system.
One of the most important things we've been working on standardizing is
investment terms. Till now investment terms have been individually negotiated.
This is a problem for founders, because it makes raising money take longer and
cost more in legal fees. So as well as using the same paperwork for every deal
we do, we've commissioned generic angel paperwork that all the startups we
fund can use for future rounds.
Some investors will still want to cook up their own deal terms. Series A
rounds, where you raise a million dollars or more, will be custom deals for
the foreseeable future. But I think angel rounds will start to be done mostly
with standardized agreements. An angel who wants to insert a bunch of
complicated terms into the agreement is probably not one you want anyway.
**3\. New Attitude to Acquisition**
Another thing I see starting to get standardized is acquisitions. As the
volume of startups increases, big companies will start to develop standardized
procedures that make acquisitions little more work than hiring someone.
Google is the leader here, as in so many areas of technology. They buy a lot
of startups—more than most people realize, because they only announce a
fraction of them. And being Google, they're figuring out how to do it
efficiently.
One problem they've solved is how to think about acquisitions. For most
companies, acquisitions still carry some stigma of inadequacy. Companies do
them because they have to, but there's usually some feeling they shouldn't
have to—that their own programmers should be able to build everything they
need.
Google's example should cure the rest of the world of this idea. Google has by
far the best programmers of any public technology company. If they don't have
a problem doing acquisitions, the others should have even less problem.
However many Google does, Microsoft should do ten times as many.
One reason Google doesn't have a problem with acquisitions is that they know
first-hand the quality of the people they can get that way. Larry and Sergey
only started Google after making the rounds of the search engines trying to
sell their idea and finding no takers. They've _been_ the guys coming in to
visit the big company, so they know who might be sitting across that
conference table from them.
**4\. Riskier Strategies are Possible**
Risk is always proportionate to reward. The way to get really big returns is
to do things that seem crazy, like starting a new search engine in 1998, or
turning down a billion dollar acquisition offer.
This has traditionally been a problem in venture funding. Founders and
investors have different attitudes to risk. Knowing that risk is on average
proportionate to reward, investors like risky strategies, while founders, who
don't have a big enough sample size to care what's true on average, tend to be
more conservative.
If startups are easy to start, this conflict goes away, because founders can
start them younger, when it's rational to take more risk, and can start more
startups total in their careers. When founders can do lots of startups, they
can start to look at the world in the same portfolio-optimizing way as
investors. And that means the overall amount of wealth created can be greater,
because strategies can be riskier.
**5\. Younger, Nerdier Founders**
If startups become a cheap commodity, more people will be able to have them,
just as more people could have computers once microprocessors made them cheap.
And in particular, the founders starting them will be able to be younger and
more technical than before.
Back when it cost a lot to start a startup, you had to convince investors to
let you do it. And that required very different skills from actually doing the
startup. If investors were perfect judges, the two would require exactly the
same skills. But unfortunately most investors are terrible judges. I know
because I see behind the scenes what an enormous amount of work it takes to
raise money, and the amount of selling required in an industry is always
inversely proportional to the judgement of the buyers.
Fortunately, if startups get cheaper to start, there's another way to convince
investors. Instead of going to venture capitalists with a business plan and
trying to convince them to fund it, you can get a product launched on a few
tens of thousands of dollars of seed money from us or your uncle, and approach
them with a working company instead of a plan for one. Then instead of having
to seem smooth and confident, you can just point them to Alexa.
This way of convincing investors is better suited to hackers, who often went
into technology in part because they felt uncomfortable with the amount of
fakeness required in other fields.
**6\. Startup Hubs Will Persist**
It might seem that if startups get cheap to start, it will mean the end of
startup hubs like Silicon Valley. If all you need to start a startup is rent
money, you should be able to do it anywhere.
This is kind of true and kind of false. It's true that you can now _start_ a
startup anywhere. But you have to do more with a startup than just start it.
You have to make it succeed. And that is more likely to happen in a startup
hub.
I've thought a lot about this question, and it seems to me the increasing
cheapness of web startups will if anything increase the importance of startup
hubs. The value of startup hubs, like centers for any kind of business, lies
in something very old-fashioned: face to face meetings. No technology in the
immediate future will replace walking down University Ave and running into a
friend who tells you how to fix a bug that's been bothering you all weekend,
or visiting a friend's startup down the street and ending up in a conversation
with one of their investors.
The question of whether to be in a startup hub is like the question of whether
to take outside investment. The question is not whether you _need_ it, but
whether it brings any advantage at all. Because anything that brings an
advantage will give your competitors an advantage over you if they do it and
you don't. So if you hear someone saying "we don't need to be in Silicon
Valley," that use of the word "need" is a sign they're not even thinking about
the question right.
And while startup hubs are as powerful magnets as ever, the increasing
cheapness of starting a startup means the particles they're attracting are
getting lighter. A startup now can be just a pair of 22-year-old guys. A
company like that can move much more easily than one with 10 people, half of
whom have kids.
We know because we make people move for Y Combinator, and it doesn't seem to
be a problem. The advantage of being able to work together face to face for
three months outweighs the inconvenience of moving. Ask anyone who's done it.
The mobility of seed-stage startups means that seed funding is a national
business. One of the most common emails we get is from people asking if we can
help them set up a local clone of Y Combinator. But this just wouldn't work.
Seed funding isn't regional, just as big research universities aren't.
Is seed funding not merely national, but international? Interesting question.
There are signs it may be. We've had an ongoing stream of founders from
outside the US, and they tend to do particularly well, because they're all
people who were so determined to succeed that they were willing to move to
another country to do it.
The more mobile startups get, the harder it will be to start new silicon
valleys. If startups are mobile, the best local talent will go to the real
Silicon Valley, and all they'll get at the local one will be the people who
didn't have the energy to move.
This is not a nationalistic idea, incidentally. It's cities that compete, not
countries. Atlanta is just as hosed as Munich.
**7\. Better Judgement Needed**
If the number of startups increases dramatically, then the people whose job is
to judge them are going to have to get better at it. I'm thinking particularly
of investors and acquirers. We now get on the order of 1000 applications a
year. What are we going to do if we get 10,000?
That's actually an alarming idea. But we'll figure out some kind of answer.
We'll have to. It will probably involve writing some software, but fortunately
we can do that.
Acquirers will also have to get better at picking winners. They generally do
better than investors, because they pick later, when there's more performance
to measure. But even at the most advanced acquirers, identifying companies to
buy is extremely ad hoc, and completing the acquisition often involves a great
deal of unnecessary friction.
I think acquirers may eventually have chief acquisition officers who will both
identify good acquisitions and make the deals happen. At the moment those two
functions are separate. Promising new startups are often discovered by
developers. If someone powerful enough wants to buy them, the deal is handed
over to corp dev guys to negotiate. It would be better if both were combined
in one group, headed by someone with a technical background and some vision of
what they wanted to accomplish. Maybe in the future big companies will have
both a VP of Engineering responsible for technology developed in-house, and a
CAO responsible for bringing technology in from outside.
At the moment, there is no one within big companies who gets in trouble when
they buy a startup for $200 million that they could have bought earlier for
$20 million. There should start to be someone who gets in trouble for that.
**8\. College Will Change**
If the best hackers start their own companies after college instead of getting
jobs, that will change what happens in college. Most of these changes will be
for the better. I think the experience of college is warped in a bad way by
the expectation that afterward you'll be judged by potential employers.
One change will be in the meaning of "after college," which will switch from
when one graduates from college to when one leaves it. If you're starting your
own company, why do you need a degree? We don't encourage people to start
startups during college, but the best founders are certainly capable of it.
Some of the most successful companies we've funded were started by undergrads.
I grew up in a time when college degrees seemed really important, so I'm
alarmed to be saying things like this, but there's nothing magical about a
degree. There's nothing that magically changes after you take that last exam.
The importance of degrees is due solely to the administrative needs of large
organizations. These can certainly affect your life—it's hard to get into grad
school, or to get a work visa in the US, without an undergraduate degree—but
tests like this will matter less and less.
As well as mattering less whether students get degrees, it will also start to
matter less where they go to college. In a startup you're judged by users, and
they don't care where you went to college. So in a world of startups, elite
universities will play less of a role as gatekeepers. In the US it's a
national scandal how easily children of rich parents game college admissions.
But the way this problem ultimately gets solved may not be by reforming the
universities but by going around them. We in the technology world are used to
that sort of solution: you don't beat the incumbents; you redefine the problem
to make them irrelevant.
The greatest value of universities is not the brand name or perhaps even the
classes so much as the people you meet. If it becomes common to start a
startup after college, students may start trying to maximize this. Instead of
focusing on getting internships at companies they want to work for, they may
start to focus on working with other students they want as cofounders.
What students do in their classes will change too. Instead of trying to get
good grades to impress future employers, students will try to learn things.
We're talking about some pretty dramatic changes here.
**9\. Lots of Competitors**
If it gets easier to start a startup, it's easier for competitors too. That
doesn't erase the advantage of increased cheapness, however. You're not all
playing a zero-sum game. There's not some fixed number of startups that can
succeed, regardless of how many are started.
In fact, I don't think there's any limit to the number of startups that could
succeed. Startups succeed by creating wealth, which is the satisfaction of
people's desires. And people's desires seem to be effectively infinite, at
least in the short term.
What the increasing number of startups does mean is that you won't be able to
sit on a good idea. Other people have your idea, and they'll be increasingly
likely to do something about it.
**10\. Faster Advances**
There's a good side to that, at least for consumers of technology. If people
get right to work implementing ideas instead of sitting on them, technology
will evolve faster.
Some kinds of innovations happen a company at a time, like the punctuated
equilibrium model of evolution. There are some kinds of ideas that are so
threatening that it's hard for big companies even to think of them. Look at
what a hard time Microsoft is having discovering web apps. They're like a
character in a movie that everyone in the audience can see something bad is
about to happen to, but who can't see it himself. The big innovations that
happen a company at a time will obviously happen faster if the rate of new
companies increases.
But in fact there will be a double speed increase. People won't wait as long
to act on new ideas, but also those ideas will increasingly be developed
within startups rather than big companies. Which means technology will evolve
faster per company as well.
Big companies are just not a good place to make things happen fast. I talked
recently to a founder whose startup had been acquired by a big company. He was
a precise sort of guy, so he'd measured their productivity before and after.
He counted lines of code, which can be a dubious measure, but in this case was
meaningful because it was the same group of programmers. He found they were
one thirteenth as productive after the acquisition.
The company that bought them was not a particularly stupid one. I think what
he was measuring was mostly the cost of bigness. I experienced this myself,
and his number sounds about right. There's something about big companies that
just sucks the energy out of you.
Imagine what all that energy could do if it were put to use. There is an
enormous latent capacity in the world's hackers that most people don't even
realize is there. That's the main reason we do Y Combinator: to let loose all
this energy by making it easy for hackers to start their own startups.
**A Series of Tubes**
The process of starting startups is currently like the plumbing in an old
house. The pipes are narrow and twisty, and there are leaks in every joint. In
the future this mess will gradually be replaced by a single, huge pipe. The
water will still have to get from A to B, but it will get there faster and
without the risk of spraying out through some random leak.
This will change a lot of things for the better. In a big, straight pipe like
that, the force of being measured by one's performance will propagate back
through the whole system. Performance is always the ultimate test, but there
are so many kinks in the plumbing now that most people are insulated from it
most of the time. So you end up with a world in which high school students
think they need to get good grades to get into elite colleges, and college
students think they need to get good grades to impress employers, within which
the employees waste most of their time in political battles, and from which
consumers have to buy anyway because there are so few choices. Imagine if that
sequence became a big, straight pipe. Then the effects of being measured by
performance would propagate all the way back to high school, flushing out all
the arbitrary stuff people are measured by now. That is the future of web
startups.
**Thanks** to Brian Oberkirch and Simon Willison for inviting me to speak, and
the crew at Carson Systems for making everything run smoothly.
---
April 2004
To the popular press, "hacker" means someone who breaks into computers. Among
programmers it means a good programmer. But the two meanings are connected. To
programmers, "hacker" connotes mastery in the most literal sense: someone who
can make a computer do what he wants—whether the computer wants to or not.
To add to the confusion, the noun "hack" also has two senses. It can be either
a compliment or an insult. It's called a hack when you do something in an ugly
way. But when you do something so clever that you somehow beat the system,
that's also called a hack. The word is used more often in the former than the
latter sense, probably because ugly solutions are more common than brilliant
ones.
Believe it or not, the two senses of "hack" are also connected. Ugly and
imaginative solutions have something in common: they both break the rules. And
there is a gradual continuum between rule breaking that's merely ugly (using
duct tape to attach something to your bike) and rule breaking that is
brilliantly imaginative (discarding Euclidean space).
Hacking predates computers. When he was working on the Manhattan Project,
Richard Feynman used to amuse himself by breaking into safes containing secret
documents. This tradition continues today. When we were in grad school, a
hacker friend of mine who spent too much time around MIT had his own lock
picking kit. (He now runs a hedge fund, a not unrelated enterprise.)
It is sometimes hard to explain to authorities why one would want to do such
things. Another friend of mine once got in trouble with the government for
breaking into computers. This had only recently been declared a crime, and the
FBI found that their usual investigative technique didn't work. Police
investigation apparently begins with a motive. The usual motives are few:
drugs, money, sex, revenge. Intellectual curiosity was not one of the motives
on the FBI's list. Indeed, the whole concept seemed foreign to them.
Those in authority tend to be annoyed by hackers' general attitude of
disobedience. But that disobedience is a byproduct of the qualities that make
them good programmers. They may laugh at the CEO when he talks in generic
corporate newspeech, but they also laugh at someone who tells them a certain
problem can't be solved. Suppress one, and you suppress the other.
This attitude is sometimes affected. Sometimes young programmers notice the
eccentricities of eminent hackers and decide to adopt some of their own in
order to seem smarter. The fake version is not merely annoying; the prickly
attitude of these posers can actually slow the process of innovation.
But even factoring in their annoying eccentricities, the disobedient attitude
of hackers is a net win. I wish its advantages were better understood.
For example, I suspect people in Hollywood are simply mystified by hackers'
attitudes toward copyrights. They are a perennial topic of heated discussion
on Slashdot. But why should people who program computers be so concerned about
copyrights, of all things?
Partly because some companies use _mechanisms_ to prevent copying. Show any
hacker a lock and his first thought is how to pick it. But there is a deeper
reason that hackers are alarmed by measures like copyrights and patents. They
see increasingly aggressive measures to protect "intellectual property" as a
threat to the intellectual freedom they need to do their job. And they are
right.
It is by poking about inside current technology that hackers get ideas for the
next generation. No thanks, intellectual homeowners may say, we don't need any
outside help. But they're wrong. The next generation of computer technology
has often—perhaps more often than not—been developed by outsiders.
In 1977 there was no doubt some group within IBM developing what they expected
to be the next generation of business computer. They were mistaken. The next
generation of business computer was being developed on entirely different
lines by two long-haired guys called Steve in a garage in Los Altos. At about
the same time, the powers that be were cooperating to develop the official
next generation operating system, Multics. But two guys who thought Multics
excessively complex went off and wrote their own. They gave it a name that was
a joking reference to Multics: Unix.
The latest intellectual property laws impose unprecedented restrictions on the
sort of poking around that leads to new ideas. In the past, a competitor might
use patents to prevent you from selling a copy of something they made, but
they couldn't prevent you from taking one apart to see how it worked. The
latest laws make this a crime. How are we to develop new technology if we
can't study current technology to figure out how to improve it?
Ironically, hackers have brought this on themselves. Computers are responsible
for the problem. The control systems inside machines used to be physical:
gears and levers and cams. Increasingly, the brains (and thus the value) of
products are in software. And by this I mean software in the general sense:
i.e. data. A song on an LP is physically stamped into the plastic. A song on
an iPod's disk is merely stored on it.
Data is by definition easy to copy. And the Internet makes copies easy to
distribute. So it is no wonder companies are afraid. But, as so often happens,
fear has clouded their judgement. The government has responded with draconian
laws to protect intellectual property. They probably mean well. But they may
not realize that such laws will do more harm than good.
Why are programmers so violently opposed to these laws? If I were a
legislator, I'd be interested in this mystery—for the same reason that, if I
were a farmer and suddenly heard a lot of squawking coming from my hen house
one night, I'd want to go out and investigate. Hackers are not stupid, and
unanimity is very rare in this world. So if they're all squawking, perhaps
there is something amiss.
Could it be that such laws, though intended to protect America, will actually
harm it? Think about it. There is something very _American_ about Feynman
breaking into safes during the Manhattan Project. It's hard to imagine the
authorities having a sense of humor about such things over in Germany at that
time. Maybe it's not a coincidence.
Hackers are unruly. That is the essence of hacking. And it is also the essence
of Americanness. It is no accident that Silicon Valley is in America, and not
France, or Germany, or England, or Japan. In those countries, people color
inside the lines.
I lived for a while in Florence. But after I'd been there a few months I
realized that what I'd been unconsciously hoping to find there was back in the
place I'd just left. The reason Florence is famous is that in 1450, it was New
York. In 1450 it was filled with the kind of turbulent and ambitious people
you find now in America. (So I went back to America.)
It is greatly to America's advantage that it is a congenial atmosphere for the
right sort of unruliness—that it is a home not just for the smart, but for
smart-alecks. And hackers are invariably smart-alecks. If we had a national
holiday, it would be April 1st. It says a great deal about our work that we
use the same word for a brilliant or a horribly cheesy solution. When we cook
one up we're not always 100% sure which kind it is. But as long as it has the
right sort of wrongness, that's a promising sign. It's odd that people think
of programming as precise and methodical. _Computers_ are precise and
methodical. Hacking is something you do with a gleeful laugh.
In our world some of the most characteristic solutions are not far removed
from practical jokes. IBM was no doubt rather surprised by the consequences of
the licensing deal for DOS, just as the hypothetical "adversary" must be when
Michael Rabin solves a problem by redefining it as one that's easier to solve.
Smart-alecks have to develop a keen sense of how much they can get away with.
And lately hackers have sensed a change in the atmosphere. Lately hackerliness
seems rather frowned upon.
To hackers the recent contraction in civil liberties seems especially ominous.
That must also mystify outsiders. Why should we care especially about civil
liberties? Why programmers, more than dentists or salesmen or landscapers?
Let me put the case in terms a government official would appreciate. Civil
liberties are not just an ornament, or a quaint American tradition. Civil
liberties make countries rich. If you made a graph of GNP per capita vs. civil
liberties, you'd notice a definite trend. Could civil liberties really be a
cause, rather than just an effect? I think so. I think a society in which
people can do and say what they want will also tend to be one in which the
most efficient solutions win, rather than those sponsored by the most
influential people. Authoritarian countries become corrupt; corrupt countries
become poor; and poor countries are weak. It seems to me there is a Laffer
curve for government power, just as for tax revenues. At least, it seems
likely enough that it would be stupid to try the experiment and find out.
Unlike high tax rates, you can't repeal totalitarianism if it turns out to be
a mistake.
This is why hackers worry. The government spying on people doesn't literally
make programmers write worse code. It just leads eventually to a world in
which bad ideas win. And because this is so important to hackers, they're
especially sensitive to it. They can sense totalitarianism approaching from a
distance, as animals can sense an approaching thunderstorm.
It would be ironic if, as hackers fear, recent measures intended to protect
national security and intellectual property turned out to be a missile aimed
right at what makes America successful. But it would not be the first time
that measures taken in an atmosphere of panic had the opposite of the intended
effect.
There is such a thing as Americanness. There's nothing like living abroad to
teach you that. And if you want to know whether something will nurture or
squash this quality, it would be hard to find a better focus group than
hackers, because they come closest of any group I know to embodying it.
Closer, probably, than the men running our government, who for all their talk
of patriotism remind me more of Richelieu or Mazarin than Thomas Jefferson or
George Washington.
When you read what the founding fathers had to say for themselves, they sound
more like hackers. "The spirit of resistance to government," Jefferson wrote,
"is so valuable on certain occasions, that I wish it always to be kept alive."
Imagine an American president saying that today. Like the remarks of an
outspoken old grandmother, the sayings of the founding fathers have
embarrassed generations of their less confident successors. They remind us
where we come from. They remind us that it is the people who break rules that
are the source of America's wealth and power.
Those in a position to impose rules naturally want them to be obeyed. But be
careful what you ask for. You might get it.
**Thanks** to Ken Anderson, Trevor Blackwell, Daniel Giffin, Sarah Harlin,
Shiro Kawai, Jessica Livingston, Matz, Jackie McDonough, Robert Morris, Eric
Raymond, Guido van Rossum, David Weinberger, and Steven Wolfram for reading
drafts of this essay.
(The image shows Steves Jobs and Wozniak with a "blue box." Photo by Margret
Wozniak. Reproduced by permission of Steve Wozniak.)
---
March 2005
_(Parts of this essay began as replies to students who wrote to me with
questions.)_
Recently I've had several emails from computer science undergrads asking what
to do in college. I might not be the best source of advice, because I was a
philosophy major in college. But I took so many CS classes that most CS majors
thought I was one. I was certainly a hacker, at least.
**Hacking**
What should you do in college to become a good hacker? There are two main
things you can do: become very good at programming, and learn a lot about
specific, cool problems. These turn out to be equivalent, because each drives
you to do the other.
The way to be good at programming is to work (a) a lot (b) on hard problems.
And the way to make yourself work on hard problems is to work on some very
engaging project.
Odds are this project won't be a class assignment. My friend Robert learned a
lot by writing network software when he was an undergrad. One of his projects
was to connect Harvard to the Arpanet; it had been one of the original nodes,
but by 1984 the connection had died. Not only was this work not for a
class, but because he spent all his time on it and neglected his studies, he
was kicked out of school for a year. It all evened out in the end, and now
he's a professor at MIT. But you'll probably be happier if you don't go to
that extreme; it caused him a lot of worry at the time.
Another way to be good at programming is to find other people who are good at
it, and learn what they know. Programmers tend to sort themselves into tribes
according to the type of work they do and the tools they use, and some tribes
are smarter than others. Look around you and see what the smart people seem to
be working on; there's usually a reason.
Some of the smartest people around you are professors. So one way to find
interesting work is to volunteer as a research assistant. Professors are
especially interested in people who can solve tedious system-administration
type problems for them, so that is a way to get a foot in the door. What they
fear are flakes and resume padders. It's all too common for an assistant to
result in a net increase in work. So you have to make it clear you'll mean a
net decrease.
Don't be put off if they say no. Rejection is almost always less personal than
the rejectee imagines. Just move on to the next. (This applies to dating too.)
Beware, because although most professors are smart, not all of them work on
interesting stuff. Professors have to publish novel results to advance their
careers, but there is more competition in more interesting areas of research.
So what less ambitious professors do is turn out a series of papers whose
conclusions are novel because no one else cares about them. You're better off
avoiding these.
I never worked as a research assistant, so I feel a bit dishonest recommending
that route. I learned to program by writing stuff of my own, particularly by
trying to reverse-engineer Winograd's SHRDLU. I was as obsessed with that
program as a mother with a new baby.
Whatever the disadvantages of working by yourself, the advantage is that the
project is all your own. You never have to compromise or ask anyone's
permission, and if you have a new idea you can just sit down and start
implementing it.
In your own projects you don't have to worry about novelty (as professors do)
or profitability (as businesses do). All that matters is how hard the project
is technically, and that has no correlation to the nature of the application.
"Serious" applications like databases are often trivial and dull technically
(if you ever suffer from insomnia, try reading the technical literature about
databases) while "frivolous" applications like games are often very
sophisticated. I'm sure there are game companies out there working on products
with more intellectual content than the research at the bottom nine tenths of
university CS departments.
If I were in college now I'd probably work on graphics: a network game, for
example, or a tool for 3D animation. When I was an undergrad there weren't
enough cycles around to make graphics interesting, but it's hard to imagine
anything more fun to work on now.
**Math**
When I was in college, a lot of the professors believed (or at least wished)
that computer science was a branch of math. This idea was strongest at
Harvard, where there wasn't even a CS major till the 1980s; till then one had
to major in applied math. But it was nearly as bad at Cornell. When I told the
fearsome Professor Conway that I was interested in AI (a hot topic then), he
told me I should major in math. I'm still not sure whether he thought AI
required math, or whether he thought AI was nonsense and that majoring in
something rigorous would cure me of such stupid ambitions.
In fact, the amount of math you need as a hacker is a lot less than most
university departments like to admit. I don't think you need much more than
high school math plus a few concepts from the theory of computation. (You have
to know what an n^2 algorithm is if you want to avoid writing one.) Unless
you're planning to write math applications, of course. Robotics, for example,
is all math.
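That n^2 point is easy to see in code. Here is a minimal sketch, not from the essay, in Python; the function names and sample list are invented for illustration:

```python
# Two ways to check a list for duplicates: a quadratic one and a linear one.

def has_duplicates_quadratic(items):
    # Compares every pair: about n^2 / 2 comparisons for n items.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # One pass with a set: about n lookups for n items.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

if __name__ == "__main__":
    small = [3, 1, 4, 1, 5]
    print(has_duplicates_quadratic(small), has_duplicates_linear(small))
    # Both print True here, but on 100,000 distinct items the quadratic
    # version does roughly 5 billion comparisons, while the linear one
    # does roughly 100,000 set lookups.
```

Both give the same answer; the difference only shows up as the input grows, which is exactly why you want to recognize the quadratic shape before you write it.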
But while you don't literally need math for most kinds of hacking, in the
sense of knowing 1001 tricks for differentiating formulas, math is very much
worth studying for its own sake. It's a valuable source of metaphors for
almost any kind of work. I wish I'd studied more math in college for that
reason.
Like a lot of people, I was mathematically abused as a child. I learned to
think of math as a collection of formulas that were neither beautiful nor had
any relation to my life (despite attempts to translate them into "word
problems"), but had to be memorized in order to do well on tests.
One of the most valuable things you could do in college would be to learn what
math is really about. This may not be easy, because a lot of good
mathematicians are bad teachers. And while there are many popular books on
math, few seem good. The best I can think of are W. W. Sawyer's. And of course
Euclid.
**Everything**
Thomas Huxley said "Try to learn something about everything and everything
about something." Most universities aim at this ideal.
But what's everything? To me it means, all that people learn in the course of
working honestly on hard problems. All such work tends to be related, in that
ideas and techniques from one field can often be transplanted successfully to
others. Even others that seem quite distant. For example, I write essays the
same way I write software: I sit down and blow out a lame version 1 as fast as
I can type, then spend several weeks rewriting it.
Working on hard problems is not, by itself, enough. Medieval alchemists were
working on a hard problem, but their approach was so bogus that there was
little to learn from studying it, except possibly about people's ability to
delude themselves. Unfortunately the sort of AI I was trying to learn in
college had the same flaw: a very hard problem, blithely approached with
hopelessly inadequate techniques. Bold? Closer to fraudulent.
The social sciences are also fairly bogus, because they're so much influenced
by intellectual fashions. If a physicist met a colleague from 100 years ago,
he could teach him some new things; if a psychologist met a colleague from 100
years ago, they'd just get into an ideological argument. Yes, of course,
you'll learn something by taking a psychology class. The point is, you'll
learn more by taking a class in another department.
The worthwhile departments, in my opinion, are math, the hard sciences,
engineering, history (especially economic and social history, and the history
of science), architecture, and the classics. A survey course in art history
may be worthwhile. Modern literature is important, but the way to learn about
it is just to read. I don't know enough about music to say.
You can skip the social sciences, philosophy, and the various departments
created recently in response to political pressures. Many of these fields talk
about important problems, certainly. But the way they talk about them is
useless. For example, philosophy talks, among other things, about our
obligations to one another; but you can learn more about this from a wise
grandmother or E. B. White than from an academic philosopher.
I speak here from experience. I should probably have been offended when people
laughed at Clinton for saying "It depends on what the meaning of the word 'is'
is." I took about five classes in college on what the meaning of "is" is.
Another way to figure out which fields are worth studying is to create the
_dropout graph._ For example, I know many people who switched from math to
computer science because they found math too hard, and no one who did the
opposite. People don't do hard things gratuitously; no one will work on a
harder problem unless it is proportionately (or at least log(n)) more
rewarding. So probably math is more worth studying than computer science. By
similar comparisons you can make a graph of all the departments in a
university. At the bottom you'll find the subjects with least intellectual
content.
If you use this method, you'll get roughly the same answer I just gave.
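If you wanted to actually draw the dropout graph rather than just imagine it, a rough sketch might look like the following. This is my illustration, not something from the essay: the sample switches are invented, and real data would have to come from registrar records or surveys.

```python
# Rank fields by net outflow of switchers: people tend to leave harder
# fields for easier ones, so high net outflow suggests more difficulty.
from collections import Counter

# Each pair is (field_left, field_joined), e.g. someone who found math
# too hard and switched to computer science.
switches = [
    ("math", "computer science"),
    ("math", "computer science"),
    ("physics", "economics"),
    ("computer science", "business"),
]

net_outflow = Counter()
for left, joined in switches:
    net_outflow[left] += 1
    net_outflow[joined] -= 1

# Highest net outflow first: by this crude measure, the hardest subjects.
for field, score in net_outflow.most_common():
    print(field, score)
```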
Language courses are an anomaly. I think they're better considered as
extracurricular activities, like pottery classes. They'd be far more useful
when combined with some time living in a country where the language is spoken.
On a whim I studied Arabic as a freshman. It was a lot of work, and the only
lasting benefits were a weird ability to identify semitic roots and some
insights into how people recognize words.
Studio art and creative writing courses are wildcards. Usually you don't get
taught much: you just work (or don't work) on whatever you want, and then sit
around offering "crits" of one another's creations under the vague supervision
of the teacher. But writing and art are both very hard problems that (some)
people work honestly at, so they're worth doing, especially if you can find a
good teacher.
**Jobs**
Of course college students have to think about more than just learning. There
are also two practical problems to consider: jobs, and graduate school.
In theory a liberal education is not supposed to supply job training. But
everyone knows this is a bit of a fib. Hackers at every college learn
practical skills, and not by accident.
What you should learn to get a job depends on the kind you want. If you want
to work in a big company, learn how to hack Blub on Windows. If you want to
work at a cool little company or research lab, you'll do better to learn Ruby
on Linux. And if you want to start your own company, which I think will be
more and more common, master the most powerful tools you can find, because
you're going to be in a race against your competitors, and those tools will be
your horse.
There is not a direct correlation between the skills you should learn in
college and those you'll use in a job. You should aim slightly high in
college.
In workouts a football player may bench press 300 pounds, even though he may
never have to exert anything like that much force in the course of a game.
Likewise, if your professors try to make you learn stuff that's more advanced
than you'll need in a job, it may not just be because they're academics,
detached from the real world. They may be trying to make you lift weights with
your brain.
The programs you write in classes differ in three critical ways from the ones
you'll write in the real world: they're small; you get to start from scratch;
and the problem is usually artificial and predetermined. In the real world,
programs are bigger, tend to involve existing code, and often require you to
figure out what the problem is before you can solve it.
You don't have to wait to leave (or even enter) college to learn these skills.
If you want to learn how to deal with existing code, for example, you can
contribute to open-source projects. The sort of employer you want to work for
will be as impressed by that as good grades on class assignments.
In existing open-source projects you don't get much practice at the third
skill, deciding what problems to solve. But there's nothing to stop you
starting new projects of your own. And good employers will be even more
impressed with that.
What sort of problem should you try to solve? One way to answer that is to ask
what you need as a user. For example, I stumbled on a good algorithm for spam
filtering because I wanted to stop getting spam. Now what I wish I had was a
mail reader that somehow prevented my inbox from filling up. I tend to use my
inbox as a todo list. But that's like using a screwdriver to open bottles;
what one really wants is a bottle opener.
**Grad School**
What about grad school? Should you go? And how do you get into a good one?
In principle, grad school is professional training in research, and you
shouldn't go unless you want to do research as a career. And yet half the
people who get PhDs in CS don't go into research. I didn't go to grad school
to become a professor. I went because I wanted to learn more.
So if you're mainly interested in hacking and you go to grad school, you'll
find a lot of other people who are similarly out of their element. And if half
the people around you are out of their element in the same way you are, are
you really out of your element?
There's a fundamental problem in "computer science," and it surfaces in
situations like this. No one is sure what "research" is supposed to be. A lot
of research is hacking that had to be crammed into the form of an academic
paper to yield one more quantum of publication.
So it's kind of misleading to ask whether you'll be at home in grad school,
because very few people are quite at home in computer science. The whole field
is uncomfortable in its own skin. So the fact that you're mainly interested in
hacking shouldn't deter you from going to grad school. Just be warned you'll
have to do a lot of stuff you don't like.
Number one will be your dissertation. Almost everyone hates their dissertation
by the time they're done with it. The process inherently tends to produce an
unpleasant result, like a cake made out of whole wheat flour and baked for
twelve hours. Few dissertations are read with pleasure, especially by their
authors.
But thousands before you have suffered through writing a dissertation. And
aside from that, grad school is close to paradise. Many people remember it as
the happiest time of their lives. And nearly all the rest, including me,
remember it as a period that would have been, if they hadn't had to write a
dissertation.
The danger with grad school is that you don't see the scary part upfront. PhD
programs start out as college part 2, with several years of classes. So by the
time you face the horror of writing a dissertation, you're already several
years in. If you quit now, you'll be a grad-school dropout, and you probably
won't like that idea. When Robert got kicked out of grad school for writing
the Internet worm of 1988, I envied him enormously for finding a way out
without the stigma of failure.
On the whole, grad school is probably better than most alternatives. You meet
a lot of smart people, and your glum procrastination will at least be a
powerful common bond. And of course you have a PhD at the end. I forgot about
that. I suppose that's worth something.
The greatest advantage of a PhD (besides being the union card of academia, of
course) may be that it gives you some baseline confidence. For example, the
Honeywell thermostats in my house have the most atrocious UI. My mother, who
has the same model, diligently spent a day reading the user's manual to learn
how to operate hers. She assumed the problem was with her. But I can think to
myself "If someone with a PhD in computer science can't understand this
thermostat, it _must_ be badly designed."
If you still want to go to grad school after this equivocal recommendation, I
can give you solid advice about how to get in. A lot of my friends are CS
professors now, so I have the inside story about admissions. It's quite
different from college. At most colleges, admissions officers decide who gets
in. For PhD programs, the professors do. And they try to do it well, because
the people they admit are going to be working for them.
Apparently only recommendations really matter at the best schools.
Standardized tests count for nothing, and grades for little. The essay is
mostly an opportunity to disqualify yourself by saying something stupid. The
only thing professors trust is recommendations, preferably from people they
know.
So if you want to get into a PhD program, the key is to impress your
professors. And from my friends who are professors I know what impresses them:
not merely trying to impress them. They're not impressed by students who get
good grades or want to be their research assistants so they can get into grad
school. They're impressed by students who get good grades and want to be their
research assistants because they're genuinely interested in the topic.
So the best thing you can do in college, whether you want to get into grad
school or just be good at hacking, is figure out what you truly like. It's
hard to trick professors into letting you into grad school, and impossible to
trick problems into letting you solve them. College is where faking stops
working. From this point, unless you want to go work for a big company, which
is like reverting to high school, the only way forward is through doing what
you love.
---
May 2004
When people care enough about something to do it well, those who do it best
tend to be far better than everyone else. There's a huge gap between Leonardo
and second-rate contemporaries like Borgognone. You see the same gap between
Raymond Chandler and the average writer of detective novels. A top-ranked
professional chess player could play ten thousand games against an ordinary
club player without losing once.
Like chess or painting or writing novels, making money is a very specialized
skill. But for some reason we treat this skill differently. No one complains
when a few people surpass all the rest at playing chess or writing novels, but
when a few people make more money than the rest, we get editorials saying this
is wrong.
Why? The pattern of variation seems no different than for any other skill.
What causes people to react so strongly when the skill is making money?
I think there are three reasons we treat making money as different: the
misleading model of wealth we learn as children; the disreputable way in
which, till recently, most fortunes were accumulated; and the worry that great
variations in income are somehow bad for society. As far as I can tell, the
first is mistaken, the second outdated, and the third empirically false. Could
it be that, in a modern democracy, variation in income is actually a sign of
health?
**The Daddy Model of Wealth**
When I was five I thought electricity was created by electric sockets. I
didn't realize there were power plants out there generating it. Likewise, it
doesn't occur to most kids that wealth is something that has to be generated.
It seems to be something that flows from parents.
Because of the circumstances in which they encounter it, children tend to
misunderstand wealth. They confuse it with money. They think that there is a
fixed amount of it. And they think of it as something that's distributed by
authorities (and so should be distributed equally), rather than something that
has to be created (and might be created unequally).
In fact, wealth is not money. Money is just a convenient way of trading one
form of wealth for another. Wealth is the underlying stuff—the goods and
services we buy. When you travel to a rich or poor country, you don't have to
look at people's bank accounts to tell which kind you're in. You can _see_
wealth—in buildings and streets, in the clothes and the health of the people.
Where does wealth come from? People make it. This was easier to grasp when
most people lived on farms, and made many of the things they wanted with their
own hands. Then you could see in the house, the herds, and the granary the
wealth that each family created. It was obvious then too that the wealth of
the world was not a fixed quantity that had to be shared out, like slices of a
pie. If you wanted more wealth, you could make it.
This is just as true today, though few of us create wealth directly for
ourselves (except for a few vestigial domestic tasks). Mostly we create wealth
for other people in exchange for money, which we then trade for the forms of
wealth we want.
Because kids are unable to create wealth, whatever they have has to be given
to them. And when wealth is something you're given, then of course it seems
that it should be distributed equally. As in most families it is. The kids
see to that. "Unfair," they cry, when one sibling gets more than another.
In the real world, you can't keep living off your parents. If you want
something, you either have to make it, or do something of equivalent value for
someone else, in order to get them to give you enough money to buy it. In the
real world, wealth is (except for a few specialists like thieves and
speculators) something you have to create, not something that's distributed by
Daddy. And since the ability and desire to create it vary from person to
person, it's not made equally.
You get paid by doing or making something people want, and those who make more
money are often simply better at doing what people want. Top actors make a lot
more money than B-list actors. The B-list actors might be almost as
charismatic, but when people go to the theater and look at the list of movies
playing, they want that extra oomph that the big stars have.
Doing what people want is not the only way to get money, of course. You could
also rob banks, or solicit bribes, or establish a monopoly. Such tricks
account for some variation in wealth, and indeed for some of the biggest
individual fortunes, but they are not the root cause of variation in income.
The root cause of variation in income, as Occam's Razor implies, is the same
as the root cause of variation in every other human skill.
In the United States, the CEO of a large public company makes about 100 times
as much as the average person. Basketball players make about 128 times as
much, and baseball players 72 times as much. Editorials quote this kind of
statistic with horror. But I have no trouble imagining that one person could
be 100 times as productive as another. In ancient Rome the price of _slaves_
varied by a factor of 50 depending on their skills. And that's without
considering motivation, or the extra leverage in productivity that you can get
from modern technology.
Editorials about athletes' or CEOs' salaries remind me of early Christian
writers, arguing from first principles about whether the Earth was round, when
they could just walk outside and check. How much someone's work is worth
is not a policy question. It's something the market already determines.
"Are they really worth 100 of us?" editorialists ask. Depends on what you mean
by worth. If you mean worth in the sense of what people will pay for their
skills, the answer is yes, apparently.
A few CEOs' incomes reflect some kind of wrongdoing. But are there not others
whose incomes really do reflect the wealth they generate? Steve Jobs saved a
company that was in a terminal decline. And not merely in the way a turnaround
specialist does, by cutting costs; he had to decide what Apple's next products
should be. Few others could have done it. And regardless of the case with
CEOs, it's hard to see how anyone could argue that the salaries of
professional basketball players don't reflect supply and demand.
It may seem unlikely in principle that one individual could really generate so
much more wealth than another. The key to this mystery is to revisit that
question, are they really worth 100 of us? _Would_ a basketball team trade one
of their players for 100 random people? What would Apple's next product look
like if you replaced Steve Jobs with a committee of 100 random people?
These things don't scale linearly. Perhaps the CEO or the professional athlete
has only ten times (whatever that means) the skill and determination of an
ordinary person. But it makes all the difference that it's concentrated in one
individual.
When we say that one kind of work is overpaid and another underpaid, what are
we really saying? In a free market, prices are determined by what buyers want.
People like baseball more than poetry, so baseball players make more than
poets. To say that a certain kind of work is underpaid is thus identical with
saying that people want the wrong things.
Well, of course people want the wrong things. It seems odd to be surprised by
that. And it seems even odder to say that it's _unjust_ that certain kinds of
work are underpaid. Then you're saying that it's unjust that people want
the wrong things. It's lamentable that people prefer reality TV and corndogs
to Shakespeare and steamed vegetables, but unjust? That seems like saying that
blue is heavy, or that up is circular.
The appearance of the word "unjust" here is the unmistakable spectral
signature of the Daddy Model. Why else would this idea occur in this odd
context? Whereas if the speaker were still operating on the Daddy Model, and
saw wealth as something that flowed from a common source and had to be shared
out, rather than something generated by doing what other people wanted, this
is exactly what you'd get on noticing that some people made much more than
others.
When we talk about "unequal distribution of income," we should also ask, where
does that income come from? Who made the wealth it represents? Because to
the extent that income varies simply according to how much wealth people
create, the distribution may be unequal, but it's hardly unjust.
**Stealing It**
The second reason we tend to find great disparities of wealth alarming is that
for most of human history the usual way to accumulate a fortune was to steal
it: in pastoral societies by cattle raiding; in agricultural societies by
appropriating others' estates in times of war, and taxing them in times of
peace.
In conflicts, those on the winning side would receive the estates confiscated
from the losers. In England in the 1060s, when William the Conqueror
distributed the estates of the defeated Anglo-Saxon nobles to his followers,
the conflict was military. By the 1530s, when Henry VIII distributed the
estates of the monasteries to his followers, it was mostly political. But
the principle was the same. Indeed, the same principle is at work now in
Zimbabwe.
In more organized societies, like China, the ruler and his officials used
taxation instead of confiscation. But here too we see the same principle: the
way to get rich was not to create wealth, but to serve a ruler powerful enough
to appropriate it.
This started to change in Europe with the rise of the middle class. Now we
think of the middle class as people who are neither rich nor poor, but
originally they were a distinct group. In a feudal society, there are just two
classes: a warrior aristocracy, and the serfs who work their estates. The
middle class were a new, third group who lived in towns and supported
themselves by manufacturing and trade.
Starting in the tenth and eleventh centuries, petty nobles and former serfs
banded together in towns that gradually became powerful enough to ignore the
local feudal lords. Like serfs, the middle class made a living largely by
creating wealth. (In port cities like Genoa and Pisa, they also engaged in
piracy.) But unlike serfs they had an incentive to create a lot of it. Any
wealth a serf created belonged to his master. There was not much point in
making more than you could hide. Whereas the independence of the townsmen
allowed them to keep whatever wealth they created.
Once it became possible to get rich by creating wealth, society as a whole
started to get richer very rapidly. Nearly everything we have was created by
the middle class. Indeed, the other two classes have effectively disappeared
in industrial societies, and their names been given to either end of the
middle class. (In the original sense of the word, Bill Gates is middle class.)
But it was not till the Industrial Revolution that wealth creation
definitively replaced corruption as the best way to get rich. In England, at
least, corruption only became unfashionable (and in fact only started to be
called "corruption") when there started to be other, faster ways to get rich.
Seventeenth-century England was much like the third world today, in that
government office was a recognized route to wealth. The great fortunes of that
time still derived more from what we would now call corruption than from
commerce. By the nineteenth century that had changed. There continued to
be bribes, as there still are everywhere, but politics had by then been left
to men who were driven more by vanity than greed. Technology had made it
possible to create wealth faster than you could steal it. The prototypical
rich man of the nineteenth century was not a courtier but an industrialist.
With the rise of the middle class, wealth stopped being a zero-sum game. Jobs
and Wozniak didn't have to make us poor to make themselves rich. Quite the
opposite: they created things that made our lives materially richer. They had
to, or we wouldn't have paid for them.
But since for most of the world's history the main route to wealth was to
steal it, we tend to be suspicious of rich people. Idealistic undergraduates
find their unconsciously preserved child's model of wealth confirmed by
eminent writers of the past. It is a case of the mistaken meeting the
outdated.
"Behind every great fortune, there is a crime," Balzac wrote. Except he
didn't. What he actually said was that a great fortune with no apparent cause
was probably due to a crime well enough executed that it had been forgotten.
If we were talking about Europe in 1000, or most of the third world today, the
standard misquotation would be spot on. But Balzac lived in nineteenth-century
France, where the Industrial Revolution was well advanced. He knew you could
make a fortune without stealing it. After all, he did himself, as a popular
novelist.
Only a few countries (by no coincidence, the richest ones) have reached this
stage. In most, corruption still has the upper hand. In most, the fastest way
to get wealth is by stealing it. And so when we see increasing differences in
income in a rich country, there is a tendency to worry that it's sliding back
toward becoming another Venezuela. I think the opposite is happening. I think
you're seeing a country a full step ahead of Venezuela.
**The Lever of Technology**
Will technology increase the gap between rich and poor? It will certainly
increase the gap between the productive and the unproductive. That's the whole
point of technology. With a tractor an energetic farmer could plow six times
as much land in a day as he could with a team of horses. But only if he
mastered a new kind of farming.
I've seen the lever of technology grow visibly in my own time. In high school
I made money by mowing lawns and scooping ice cream at Baskin-Robbins. This
was the only kind of work available at the time. Now high school kids could
write software or design web sites. But only some of them will; the rest will
still be scooping ice cream.
I remember very vividly when in 1985 improved technology made it possible for
me to buy a computer of my own. Within months I was using it to make money as
a freelance programmer. A few years before, I couldn't have done this. A few
years before, there was no such _thing_ as a freelance programmer. But Apple
created wealth, in the form of powerful, inexpensive computers, and
programmers immediately set to work using it to create more.
As this example suggests, the rate at which technology increases our
productive capacity is probably exponential, rather than linear. So we should
expect to see ever-increasing variation in individual productivity as time
goes on. Will that increase the gap between rich and poor? Depends which
gap you mean.
Technology should increase the gap in income, but it seems to decrease other
gaps. A hundred years ago, the rich led a different _kind_ of life from
ordinary people. They lived in houses full of servants, wore elaborately
uncomfortable clothes, and travelled about in carriages drawn by teams of
horses which themselves required their own houses and servants. Now, thanks to
technology, the rich live more like the average person.
Cars are a good example of why. It's possible to buy expensive, handmade cars
that cost hundreds of thousands of dollars. But there is not much point.
Companies make more money by building a large number of ordinary cars than a
small number of expensive ones. So a company making a mass-produced car can
afford to spend a lot more on its design. If you buy a custom-made car,
something will always be breaking. The only point of buying one now is to
advertise that you can.
Or consider watches. Fifty years ago, by spending a lot of money on a watch
you could get better performance. When watches had mechanical movements,
expensive watches kept better time. Not any more. Since the invention of the
quartz movement, an ordinary Timex is more accurate than a Patek Philippe
costing hundreds of thousands of dollars. Indeed, as with expensive cars,
if you're determined to spend a lot of money on a watch, you have to put up
with some inconvenience to do it: as well as keeping worse time, mechanical
watches have to be wound.
The only thing technology can't cheapen is brand. Which is precisely why we
hear ever more about it. Brand is the residue left as the substantive
differences between rich and poor evaporate. But what label you have on your
stuff is a much smaller matter than having it versus not having it. In 1900,
if you kept a carriage, no one asked what year or brand it was. If you had
one, you were rich. And if you weren't rich, you took the omnibus or walked.
Now even the poorest Americans drive cars, and it is only because we're so
well trained by advertising that we can even recognize the especially
expensive ones.
The same pattern has played out in industry after industry. If there is enough
demand for something, technology will make it cheap enough to sell in large
volumes, and the mass-produced versions will be, if not better, at least more
convenient. And there is nothing the rich like more than convenience. The
rich people I know drive the same cars, wear the same clothes, have the same
kind of furniture, and eat the same foods as my other friends. Their houses
are in different neighborhoods, or if in the same neighborhood are different
sizes, but within them life is similar. The houses are made using the same
construction techniques and contain much the same objects. It's inconvenient
to do something expensive and custom.
The rich spend their time more like everyone else too. Bertie Wooster seems
long gone. Now, most people who are rich enough not to work do anyway. It's
not just social pressure that makes them; idleness is lonely and demoralizing.
Nor do we have the social distinctions there were a hundred years ago. The
novels and etiquette manuals of that period read now like descriptions of some
strange tribal society. "With respect to the continuance of friendships..."
hints _Mrs. Beeton's Book of Household Management_ (1880), "it may be found
necessary, in some cases, for a mistress to relinquish, on assuming the
responsibility of a household, many of those commenced in the earlier part of
her life." A woman who married a rich man was expected to drop friends who
didn't. You'd seem a barbarian if you behaved that way today. You'd also have
a very boring life. People still tend to segregate themselves somewhat, but
much more on the basis of education than wealth.
Materially and socially, technology seems to be decreasing the gap between the
rich and the poor, not increasing it. If Lenin walked around the offices of a
company like Yahoo or Intel or Cisco, he'd think communism had won. Everyone
would be wearing the same clothes, have the same kind of office (or rather,
cubicle) with the same furnishings, and address one another by their first
names instead of by honorifics. Everything would seem exactly as he'd
predicted, until he looked at their bank accounts. Oops.
Is it a problem if technology increases that gap? It doesn't seem to be so
far. As it increases the gap in income, it seems to decrease most other gaps.
**Alternative to an Axiom**
One often hears a policy criticized on the grounds that it would increase the
income gap between rich and poor. As if it were an axiom that this would be
bad. It might be true that increased variation in income would be bad, but I
don't see how we can say it's _axiomatic._
Indeed, it may even be false, in industrial democracies. In a society of serfs
and warlords, certainly, variation in income is a sign of an underlying
problem. But serfdom is not the only cause of variation in income. A 747 pilot
doesn't make 40 times as much as a checkout clerk because he is a warlord who
somehow holds her in thrall. His skills are simply much more valuable.
I'd like to propose an alternative idea: that in a modern society, increasing
variation in income is a sign of health. Technology seems to increase the
variation in productivity at faster than linear rates. If we don't see
corresponding variation in income, there are three possible explanations: (a)
that technical innovation has stopped, (b) that the people who would create
the most wealth aren't doing it, or (c) that they aren't getting paid for it.
I think we can safely say that (a) and (b) would be bad. If you disagree, try
living for a year using only the resources available to the average Frankish
nobleman in 800, and report back to us. (I'll be generous and not send you
back to the stone age.)
The only option, if you're going to have an increasingly prosperous society
without increasing variation in income, seems to be (c), that people will
create a lot of wealth without being paid for it. That Jobs and Wozniak, for
example, will cheerfully work 20-hour days to produce the Apple computer for a
society that allows them, after taxes, to keep just enough of their income to
match what they would have made working 9 to 5 at a big company.
Will people create wealth if they can't get paid for it? Only if it's fun.
People will write operating systems for free. But they won't install them, or
take support calls, or train customers to use them. And at least 90% of the
work that even the highest tech companies do is of this second, unedifying
kind.
All the unfun kinds of wealth creation slow dramatically in a society that
confiscates private fortunes. We can confirm this empirically. Suppose you
hear a strange noise that you think may be due to a nearby fan. You turn the
fan off, and the noise stops. You turn the fan back on, and the noise starts
again. Off, quiet. On, noise. In the absence of other information, it would
seem the noise is caused by the fan.
At various times and places in history, whether you could accumulate a fortune
by creating wealth has been turned on and off. Northern Italy in 800, off
(warlords would steal it). Northern Italy in 1100, on. Central France in 1100,
off (still feudal). England in 1800, on. England in 1974, off (98% tax on
investment income). United States in 1974, on. We've even had a twin study:
West Germany, on; East Germany, off. In every case, the creation of wealth
seems to appear and disappear like the noise of a fan as you switch on and off
the prospect of keeping it.
There is some momentum involved. It probably takes at least a generation to
turn people into East Germans (luckily for England). But if it were merely a
fan we were studying, without all the extra baggage that comes from the
controversial topic of wealth, no one would have any doubt that the fan was
causing the noise.
If you suppress variations in income, whether by stealing private fortunes, as
feudal rulers used to do, or by taxing them away, as some modern governments
have done, the result always seems to be the same. Society as a whole ends up
poorer.
If I had a choice of living in a society where I was materially much better
off than I am now, but was among the poorest, or in one where I was the
richest, but much worse off than I am now, I'd take the first option. If I had
children, it would arguably be immoral not to. It's absolute poverty you want
to avoid, not relative poverty. If, as the evidence so far implies, you have
to have one or the other in your society, take relative poverty.
You need rich people in your society not so much because in spending their
money they create jobs, but because of what they have to do to _get_ rich. I'm
not talking about the trickle-down effect here. I'm not saying that if you let
Henry Ford get rich, he'll hire you as a waiter at his next party. I'm saying
that he'll make you a tractor to replace your horse.
March 2005
All the best hackers I know are gradually switching to Macs. My friend Robert
said his whole research group at MIT recently bought themselves Powerbooks.
These guys are not the graphic designers and grandmas who were buying Macs at
Apple's low point in the mid 1990s. They're about as hardcore OS hackers as
you can get.
The reason, of course, is OS X. Powerbooks are beautifully designed and run
FreeBSD. What more do you need to know?
I got a Powerbook at the end of last year. When my IBM Thinkpad's hard disk
died soon after, it became my only laptop. And when my friend Trevor showed up
at my house recently, he was carrying a Powerbook identical to mine.
For most of us, it's not a switch to Apple, but a return. Hard as this was to
believe in the mid 90s, the Mac was in its time the canonical hacker's
computer.
In the fall of 1983, the professor in one of my college CS classes got up and
announced, like a prophet, that there would soon be a computer with half a
MIPS of processing power that would fit under an airline seat and cost so
little that we could save enough to buy one from a summer job. The whole room
gasped. And when the Mac appeared, it was even better than we'd hoped. It was
small and powerful and cheap, as promised. But it was also something we'd
never considered a computer could be: fabulously well designed.
I had to have one. And I wasn't alone. In the mid to late 1980s, all the
hackers I knew were either writing software for the Mac, or wanted to. Every
futon sofa in Cambridge seemed to have the same fat white book lying open on
it. If you turned it over, it said "Inside Macintosh."
Then came Linux and FreeBSD, and hackers, who follow the most powerful OS
wherever it leads, found themselves switching to Intel boxes. If you cared
about design, you could buy a Thinkpad, which was at least not actively
repellent, if you could get the Intel and Microsoft stickers off the front.
With OS X, the hackers are back. When I walked into the Apple store in
Cambridge, it was like coming home. Much was changed, but there was still that
Apple coolness in the air, that feeling that the show was being run by someone
who really cared, instead of random corporate deal-makers.
So what, the business world may say. Who cares if hackers like Apple again?
How big is the hacker market, after all?
Quite small, but important out of proportion to its size. When it comes to
computers, what hackers are doing now, everyone will be doing in ten years.
Almost all technology, from Unix to bitmapped displays to the Web, became
popular first within CS departments and research labs, and gradually spread to
the rest of the world.
I remember telling my father back in 1986 that there was a new kind of
computer called a Sun that was a serious Unix machine, but so small and cheap
that you could have one of your own to sit in front of, instead of sitting in
front of a VT100 connected to a single central Vax. Maybe, I suggested, he
should buy some stock in this company. I think he really wishes he'd listened.
In 1994 my friend Koling wanted to talk to his girlfriend in Taiwan, and to
save long-distance bills he wrote some software that would convert sound to
data packets that could be sent over the Internet. We weren't sure at the time
whether this was a proper use of the Internet, which was still then a quasi-
government entity. What he was doing is now called VoIP, and it is a huge and
rapidly growing business.
If you want to know what ordinary people will be doing with computers in ten
years, just walk around the CS department at a good university. Whatever
they're doing, you'll be doing.
In the matter of "platforms" this tendency is even more pronounced, because
novel software originates with great hackers, and they tend to write it first
for whatever computer they personally use. And software sells hardware. Many
if not most of the initial sales of the Apple II came from people who bought
one to run VisiCalc. And why did Bricklin and Frankston write VisiCalc for the
Apple II? Because they personally liked it. They could have chosen any machine
to make into a star.
If you want to attract hackers to write software that will sell your hardware,
you have to make it something that they themselves use. It's not enough to
make it "open." It has to be open and good.
And open and good is what Macs are again, finally. The intervening years have
created a situation that is, as far as I know, without precedent: Apple is
popular at the low end and the high end, but not in the middle. My seventy
year old mother has a Mac laptop. My friends with PhDs in computer science
have Mac laptops. And yet Apple's overall market share is still small.
Though unprecedented, I predict this situation is also temporary.
So Dad, there's this company called Apple. They make a new kind of computer
that's as well designed as a Bang & Olufsen stereo system, and underneath is
the best Unix machine you can buy. Yes, the price to earnings ratio is kind of
high, but I think a lot of people are going to want these.
October 2006
_(This essay is derived from a talk at MIT.)_
Till recently graduating seniors had two choices: get a job or go to grad
school. I think there will increasingly be a third option: to start your own
startup. But how common will that be?
I'm sure the default will always be to get a job, but starting a startup could
well become as popular as grad school. In the late 90s my professor friends
used to complain that they couldn't get grad students, because all the
undergrads were going to work for startups. I wouldn't be surprised if that
situation returns, but with one difference: this time they'll be starting
their own instead of going to work for other people's.
The most ambitious students will at this point be asking: Why wait till you
graduate? Why not start a startup while you're in college? In fact, why go to
college at all? Why not start a startup instead?
A year and a half ago I gave a talk where I said that the average age of the
founders of Yahoo, Google, and Microsoft was 24, and that if grad students
could start startups, why not undergrads? I'm glad I phrased that as a
question, because now I can pretend it wasn't merely a rhetorical one. At the
time I couldn't imagine why there should be any lower limit for the age of
startup founders. Graduation is a bureaucratic change, not a biological one.
And certainly there are undergrads as competent technically as most grad
students. So why shouldn't undergrads be able to start startups as well as
grad students?
I now realize that something does change at graduation: you lose a huge excuse
for failing. Regardless of how complex your life is, you'll find that everyone
else, including your family and friends, will discard all the low bits and
regard you as having a single occupation at any given time. If you're in
college and have a summer job writing software, you still read as a student.
Whereas if you graduate and get a job programming, you'll be instantly
regarded by everyone as a programmer.
The problem with starting a startup while you're still in school is that
there's a built-in escape hatch. If you start a startup in the summer between
your junior and senior year, it reads to everyone as a summer job. So if it
goes nowhere, big deal; you return to school in the fall with all the other
seniors; no one regards you as a failure, because your occupation is student,
and you didn't fail at that. Whereas if you start a startup just one year
later, after you graduate, as long as you're not accepted to grad school in
the fall the startup reads to everyone as your occupation. You're now a
startup founder, so you have to do well at that.
For nearly everyone, the opinion of one's peers is the most powerful motivator
of all—more powerful even than the nominal goal of most startup founders,
getting rich. About a month into each funding cycle we have an event
called Prototype Day where each startup presents to the others what they've
got so far. You might think they wouldn't need any more motivation. They're
working on their cool new idea; they have funding for the immediate future;
and they're playing a game with only two outcomes: wealth or failure. You'd
think that would be motivation enough. And yet the prospect of a demo pushes
most of them into a rush of activity.
Even if you start a startup explicitly to get rich, the money you might get
seems pretty theoretical most of the time. What drives you day to day is not
wanting to look bad.
You probably can't change that. Even if you could, I don't think you'd want
to; someone who really, truly doesn't care what his peers think of him is
probably a psychopath. So the best you can do is consider this force like a
wind, and set up your boat accordingly. If you know your peers are going to
push you in some direction, choose good peers, and position yourself so they
push you in a direction you like.
Graduation changes the prevailing winds, and those make a difference. Starting
a startup is so hard that it's a close call even for the ones that succeed.
However high a startup may be flying now, it probably has a few leaves stuck
in the landing gear from those trees it barely cleared at the end of the
runway. In such a close game, the smallest increase in the forces against you
can be enough to flick you over the edge into failure.
When we first started Y Combinator we encouraged people to start startups
while they were still in college. That's partly because Y Combinator began as
a kind of summer program. We've kept the program shape—all of us having dinner
together once a week turns out to be a good idea—but we've decided now that
the party line should be to tell people to wait till they graduate.
Does that mean you can't start a startup in college? Not at all. Sam Altman,
the co-founder of Loopt, had just finished his sophomore year when we funded
them, and Loopt is probably the most promising of all the startups we've
funded so far. But Sam Altman is a very unusual guy. Within about three
minutes of meeting him, I remember thinking "Ah, so this is what Bill Gates
must have been like when he was 19."
If it can work to start a startup during college, why do we tell people not
to? For the same reason that the probably apocryphal violinist, whenever he
was asked to judge someone's playing, would always say they didn't have enough
talent to make it as a pro. Succeeding as a musician takes determination as
well as talent, so this answer works out to be the right advice for everyone.
The ones who are uncertain believe it and give up, and the ones who are
sufficiently determined think "screw that, I'll succeed anyway."
So our official policy now is only to fund undergrads we can't talk out of it.
And frankly, if you're not certain, you _should_ wait. It's not as if all the
opportunities to start companies are going to be gone if you don't do it now.
Maybe the window will close on some idea you're working on, but that won't be
the last idea you'll have. For every idea that times out, new ones become
feasible. Historically the opportunities to start startups have only increased
with time.
In that case, you might ask, why not wait longer? Why not go work for a while,
or go to grad school, and then start a startup? And indeed, that might be a
good idea. If I had to pick the sweet spot for startup founders, based on who
we're most excited to see applications from, I'd say it's probably the mid-
twenties. Why? What advantages does someone in their mid-twenties have over
someone who's 21? And why isn't it older? What can 25 year olds do that 32
year olds can't? Those turn out to be questions worth examining.
**Plus**
If you start a startup soon after college, you'll be a young founder by
present standards, so you should know what the relative advantages of young
founders are. They're not what you might think. As a young founder your
strengths are: stamina, poverty, rootlessness, colleagues, and ignorance.
The importance of stamina shouldn't be surprising. If you've heard anything
about startups you've probably heard about the long hours. As far as I can
tell these are universal. I can't think of any successful startups whose
founders worked 9 to 5. And it's particularly necessary for younger founders
to work long hours because they're probably not as efficient as they'll be
later.
Your second advantage, poverty, might not sound like an advantage, but it is a
huge one. Poverty implies you can live cheaply, and this is critically
important for startups. Nearly every startup that fails, fails by running out
of money. It's a little misleading to put it this way, because there's usually
some other underlying cause. But regardless of the source of your problems, a
low burn rate gives you more opportunity to recover from them. And since most
startups make all kinds of mistakes at first, room to recover from mistakes is
a valuable thing to have.
Most startups end up doing something different than they planned. The way the
successful ones find something that works is by trying things that don't. So
the worst thing you can do in a startup is to have a rigid, pre-ordained plan
and then start spending a lot of money to implement it. Better to operate
cheaply and give your ideas time to evolve.
Recent grads can live on practically nothing, and this gives you an edge over
older founders, because the main cost in software startups is people. The guys
with kids and mortgages are at a real disadvantage. This is one reason I'd bet
on the 25 year old over the 32 year old. The 32 year old probably is a better
programmer, but probably also has a much more expensive life. Whereas a 25
year old has some work experience (more on that later) but can live as cheaply
as an undergrad.
Robert Morris and I were 29 and 30 respectively when we started Viaweb, but
fortunately we still lived like 23 year olds. We both had roughly zero assets.
I would have loved to have a mortgage, since that would have meant I had a
_house_. But in retrospect having nothing turned out to be convenient. I
wasn't tied down and I was used to living cheaply.
Even more important than living cheaply, though, is thinking cheaply. One
reason the Apple II was so popular was that it was cheap. The computer itself
was cheap, and it used cheap, off-the-shelf peripherals like a cassette tape
recorder for data storage and a TV as a monitor. And you know why? Because Woz
designed this computer for himself, and he couldn't afford anything more.
We benefitted from the same phenomenon. Our prices were daringly low for the
time. The top level of service was $300 a month, which was an order of
magnitude below the norm. In retrospect this was a smart move, but we didn't
do it because we were smart. $300 a month seemed like a lot of money to us.
Like Apple, we created something inexpensive, and therefore popular, simply
because we were poor.
A lot of startups have that form: someone comes along and makes something for
a tenth or a hundredth of what it used to cost, and the existing players can't
follow because they don't even want to think about a world in which that's
possible. Traditional long distance carriers, for example, didn't even want to
think about VoIP. (It was coming, all the same.) Being poor helps in this
game, because your own personal bias points in the same direction technology
evolves in.
The advantages of rootlessness are similar to those of poverty. When you're
young you're more mobile—not just because you don't have a house or much
stuff, but also because you're less likely to have serious relationships. This
turns out to be important, because a lot of startups involve someone moving.
The founders of Kiko, for example, are now en route to the Bay Area to start
their next startup. It's a better place for what they want to do. And it was
easy for them to decide to go, because neither as far as I know has a serious
girlfriend, and everything they own will fit in one car—or more precisely,
will either fit in one car or is crappy enough that they don't mind leaving it
behind.
They at least were in Boston. What if they'd been in Nebraska, like Evan
Williams was at their age? Someone wrote recently that the drawback of Y
Combinator was that you had to move to participate. It couldn't be any other
way. The kind of conversations we have with founders, we have to have in
person. We fund a dozen startups at a time, and we can't be in a dozen places
at once. But even if we could somehow magically save people from moving, we
wouldn't. We wouldn't be doing founders a favor by letting them stay in
Nebraska. Places that aren't startup hubs are toxic to startups. You can tell
that from indirect evidence. You can tell how hard it must be to start a
startup in Houston or Chicago or Miami from the microscopically small number,
per capita, that succeed there. I don't know exactly what's suppressing all
the startups in these towns—probably a hundred subtle little things—but
something must be.
Maybe this will change. Maybe the increasing cheapness of startups will mean
they'll be able to survive anywhere, instead of only in the most hospitable
environments. Maybe 37signals is the pattern for the future. But maybe not.
Historically there have always been certain towns that were centers for
certain industries, and if you weren't in one of them you were at a
disadvantage. So my guess is that 37signals is an anomaly. We're looking at a
pattern much older than "Web 2.0" here.
Perhaps the reason more startups per capita happen in the Bay Area than Miami
is simply that there are more founder-type people there. Successful startups
are almost never started by one person. Usually they begin with a conversation
in which someone mentions that something would be a good idea for a company,
and his friend says, "Yeah, that is a good idea, let's try it." If you're
missing that second person who says "let's try it," the startup never happens.
And that is another area where undergrads have an edge. They're surrounded by
people willing to say that. At a good college you're concentrated together
with a lot of other ambitious and technically minded people—probably more
concentrated than you'll ever be again. If your nucleus spits out a neutron,
there's a good chance it will hit another nucleus.
The number one question people ask us at Y Combinator is: Where can I find a
co-founder? That's the biggest problem for someone starting a startup at 30.
When they were in school they knew a lot of good co-founders, but by 30
they've either lost touch with them or these people are tied down by jobs they
don't want to leave.
Viaweb was an anomaly in this respect too. Though we were comparatively old,
we weren't tied down by impressive jobs. I was trying to be an artist, which
is not very constraining, and Robert, though 29, was still in grad school due
to a little interruption in his academic career back in 1988. So arguably the
Worm made Viaweb possible. Otherwise Robert would have been a junior professor
at that age, and he wouldn't have had time to work on crazy speculative
projects with me.
Most of the questions people ask Y Combinator we have some kind of answer for,
but not the co-founder question. There is no good answer. Co-founders really
should be people you already know. And by far the best place to meet them is
school. You have a large sample of smart people; you get to compare how they
all perform on identical tasks; and everyone's life is pretty fluid. A lot of
startups grow out of schools for this reason. Google, Yahoo, and Microsoft,
among others, were all founded by people who met in school. (In Microsoft's
case, it was high school.)
Many students feel they should wait and get a little more experience before
they start a company. All other things being equal, they should. But all other
things are not quite as equal as they look. Most students don't realize how
rich they are in the scarcest ingredient in startups, co-founders. If you wait
too long, you may find that your friends are now involved in some project they
don't want to abandon. The better they are, the more likely this is to happen.
One way to mitigate this problem might be to actively plan your startup while
you're getting those n years of experience. Sure, go off and get jobs or go to
grad school or whatever, but get together regularly to scheme, so the idea of
starting a startup stays alive in everyone's brain. I don't know if this
works, but it can't hurt to try.
It would be helpful just to realize what an advantage you have as students.
Some of your classmates are probably going to be successful startup founders;
at a great technical university, that is a near certainty. So which ones? If I
were you I'd look for the people who are not just smart, but incurable
builders. Look for the people who keep starting projects, and finish at least
some of them. That's what we look for. Above all else, above academic
credentials and even the idea you apply with, we look for people who build
things.
The other place co-founders meet is at work. Fewer do than at school, but
there are things you can do to improve the odds. The most important,
obviously, is to work somewhere that has a lot of smart, young people. Another
is to work for a company located in a startup hub. It will be easier to talk a
co-worker into quitting with you in a place where startups are happening all
around you.
You might also want to look at the employment agreement you sign when you get
hired. Most will say that any ideas you think of while you're employed by the
company belong to them. In practice it's hard for anyone to prove what ideas
you had when, so the line gets drawn at code. If you're going to start a
startup, don't write any of the code while you're still employed. Or at least
discard any code you wrote while still employed and start over. It's not so
much that your employer will find out and sue you. It won't come to that;
investors or acquirers or (if you're so lucky) underwriters will nail you
first. Between t = 0 and when you buy that yacht, _someone_ is going to ask if
any of your code legally belongs to anyone else, and you need to be able to
say no.
The most overreaching employee agreement I've seen so far is Amazon's. In
addition to the usual clauses about owning your ideas, you also can't be a
founder of a startup that has another founder who worked at Amazon—even if you
didn't know them or even work there at the same time. I suspect they'd have a
hard time enforcing this, but it's a bad sign they even try. There are plenty
of other places to work; you may as well choose one that keeps more of your
options open.
Speaking of cool places to work, there is of course Google. But I notice
something slightly frightening about Google: zero startups come out of there.
In that respect it's a black hole. People seem to like working at Google too
much to leave. So if you hope to start a startup one day, the evidence so far
suggests you shouldn't work there.
I realize this seems odd advice. If they make your life so good that you don't
want to leave, why not work there? Because, in effect, you're probably settling for
a local maximum. You need a certain activation energy to start a startup. So
an employer who's fairly pleasant to work for can lull you into staying
indefinitely, even if it would be a net win for you to leave.
The best place to work, if you want to start a startup, is probably a startup.
In addition to being the right sort of experience, one way or another it will
be over quickly. You'll either end up rich, in which case problem solved, or
the startup will get bought, in which case it will start to suck to work
there and it will be easy to leave, or most likely, the thing will blow up and
you'll be free again.
Your final advantage, ignorance, may not sound very useful. I deliberately
used a controversial word for it; you might equally call it innocence. But it
seems to be a powerful force. My Y Combinator co-founder Jessica Livingston is
just about to publish a book of interviews with startup founders, and I
noticed a remarkable pattern in them. One after another said that if they'd
known how hard it would be, they would have been too intimidated to start.
Ignorance can be useful when it's a counterweight to other forms of stupidity.
It's useful in starting startups because you're capable of more than you
realize. Starting startups is harder than you expect, but you're also capable
of more than you expect, so they balance out.
Most people look at a company like Apple and think, how could I ever make such
a thing? Apple is an institution, and I'm just a person. But every institution
was at one point just a handful of people in a room deciding to start
something. Institutions are made up, and made up by people no different from
you.
I'm not saying everyone could start a startup. I'm sure most people couldn't;
I don't know much about the population at large. When you get to groups I know
well, like hackers, I can say more precisely. At the top schools, I'd guess as
many as a quarter of the CS majors could make it as startup founders if they
wanted.
That "if they wanted" is an important qualification—so important that it's
almost cheating to append it like that—because once you get over a certain
threshold of intelligence, which most CS majors at top schools are past, the
deciding factor in whether you succeed as a founder is how much you want to.
You don't have to be that smart. If you're not a genius, just start a startup
in some unsexy field where you'll have less competition, like software for
human resources departments. I picked that example at random, but I feel safe
in predicting that whatever they have now, it wouldn't take genius to do
better. There are a lot of people out there working on boring stuff who are
desperately in need of better software, so however short you think you fall of
Larry and Sergey, you can ratchet down the coolness of the idea far enough to
compensate.
As well as preventing you from being intimidated, ignorance can sometimes help
you discover new ideas. Steve Wozniak put this very strongly:
> All the best things that I did at Apple came from (a) not having money and
> (b) not having done it before, ever. Every single thing that we came out
> with that was really great, I'd never once done that thing in my life.
When you know nothing, you have to reinvent stuff for yourself, and if you're
smart your reinventions may be better than what preceded them. This is
especially true in fields where the rules change. All our ideas about software
were developed in a time when processors were slow, and memories and disks
were tiny. Who knows what obsolete assumptions are embedded in the
conventional wisdom? And the way these assumptions are going to get fixed is
not by explicitly deallocating them, but by something more akin to garbage
collection. Someone ignorant but smart will come along and reinvent
everything, and in the process simply fail to reproduce certain existing
ideas.
**Minus**
So much for the advantages of young founders. What about the disadvantages?
I'm going to start with what goes wrong and try to trace it back to the root
causes.
What goes wrong with young founders is that they build stuff that looks like
class projects. It was only recently that we figured this out ourselves. We
noticed a lot of similarities between the startups that seemed to be falling
behind, but we couldn't figure out how to put it into words. Then finally we
realized what it was: they were building class projects.
But what does that really mean? What's wrong with class projects? What's the
difference between a class project and a real startup? If we could answer that
question it would be useful not just to would-be startup founders but to
students in general, because we'd be a long way toward explaining the mystery
of the so-called real world.
There seem to be two big things missing in class projects: (1) an iterative
definition of a real problem and (2) intensity.
The first is probably unavoidable. Class projects will inevitably solve fake
problems. For one thing, real problems are rare and valuable. If a professor
wanted to have students solve real problems, he'd face the same paradox as
someone trying to give an example of whatever "paradigm" might succeed the
Standard Model of physics. There may well be something that does, but if you
could think of an example you'd be entitled to the Nobel Prize. Similarly,
good new problems are not to be had for the asking.
In technology the difficulty is compounded by the fact that real startups tend
to discover the problem they're solving by a process of evolution. Someone has
an idea for something; they build it; and in doing so (and probably only by
doing so) they realize the problem they should be solving is another one. Even
if the professor let you change your project description on the fly, there
isn't time enough to do that in a college class, or a market to supply
evolutionary pressures. So class projects are mostly about implementation,
which is the least of your problems in a startup.
It's not just that in a startup you work on the idea as well as
implementation. The very implementation is different. Its main purpose is to
refine the idea. Often the only value of most of the stuff you build in the
first six months is that it proves your initial idea was mistaken. And that's
extremely valuable. If you're free of a misconception that everyone else still
shares, you're in a powerful position. But you're not thinking that way about
a class project. Proving your initial plan was mistaken would just get you a
bad grade. Instead of building stuff to throw away, you tend to want every
line of code to go toward that final goal of showing you did a lot of work.
That leads to our second difference: the way class projects are measured.
Professors will tend to judge you by the distance between the starting point
and where you are now. If someone has achieved a lot, they should get a good
grade. But customers will judge you from the other direction: the distance
remaining between where you are now and the features they need. The market
doesn't give a shit how hard you worked. Users just want your software to do
what they need, and you get a zero otherwise. That is one of the most
distinctive differences between school and the real world: there is no reward
for putting in a good effort. In fact, the whole concept of a "good effort" is
a fake idea adults invented to encourage kids. It is not found in nature.
Such lies seem to be helpful to kids. But unfortunately when you graduate they
don't give you a list of all the lies they told you during your education. You
have to get them beaten out of you by contact with the real world. And this is
why so many jobs want work experience. I couldn't understand that when I was
in college. I knew how to program. In fact, I could tell I knew how to program
better than most people doing it for a living. So what was this mysterious
"work experience" and why did I need it?
Now I know what it is, and part of the confusion is grammatical. Describing it
as "work experience" implies it's like experience operating a certain kind of
machine, or using a certain programming language. But really what work
experience refers to is not some specific expertise, but the elimination of
certain habits left over from childhood.
One of the defining qualities of kids is that they flake. When you're a kid
and you face some hard test, you can cry and say "I can't" and they won't make
you do it. Of course, no one can make you do anything in the grownup world
either. What they do instead is fire you. And when motivated by that you find
you can do a lot more than you realized. So one of the things employers expect
from someone with "work experience" is the elimination of the flake reflex—the
ability to get things done, with no excuses.
The other thing you get from work experience is an understanding of what work
is, and in particular, how intrinsically horrible it is. Fundamentally the
equation is a brutal one: you have to spend most of your waking hours doing
stuff someone else wants, or starve. There are a few places where the work is
so interesting that this is concealed, because what other people want done
happens to coincide with what you want to work on. But you only have to
imagine what would happen if they diverged to see the underlying reality.
It's not so much that adults lie to kids about this as never explain it. They
never explain what the deal is with money. You know from an early age that
you'll have some sort of job, because everyone asks what you're going to "be"
when you grow up. What they don't tell you is that as a kid you're sitting on
the shoulders of someone else who's treading water, and that when you start
working you get thrown into the water on your own, and have to start treading
water yourself or sink. "Being" something is incidental; the immediate problem
is not to drown.
The relationship between work and money tends to dawn on you only gradually.
At least it did for me. One's first thought tends to be simply "This sucks.
I'm in debt. Plus I have to get up on Monday and go to work." Gradually you
realize that these two things are as tightly connected as only a market can
make them.
So the most important advantage 24 year old founders have over 20 year old
founders is that they know what they're trying to avoid. To the average
undergrad the idea of getting rich translates into buying Ferraris, or being
admired. To someone who has learned from experience about the relationship
between money and work, it translates to something way more important: it
means you get to opt out of the brutal equation that governs the lives of
99.9% of people. Getting rich means you can stop treading water.
Someone who gets this will work much harder at making a startup succeed—with
the proverbial energy of a drowning man, in fact. But understanding the
relationship between money and work also changes the way you work. You don't
get money just for working, but for doing things other people want. Someone
who's figured that out will automatically focus more on the user. And that
cures the other half of the class-project syndrome. After you've been working
for a while, you yourself tend to measure what you've done the same way the
market does.
Of course, you don't have to spend years working to learn this stuff. If
you're sufficiently perceptive you can grasp these things while you're still
in school. Sam Altman did. He must have, because Loopt is no class project.
And as his example suggests, this can be valuable knowledge. At a minimum, if
you get this stuff, you already have most of what you gain from the "work
experience" employers consider so desirable. But of course if you really get
it, you can use this information in a way that's more valuable to you than
that.
**Now**
So suppose you think you might start a startup at some point, either when you
graduate or a few years after. What should you do now? For both jobs and grad
school, there are ways to prepare while you're in college. If you want to get
a job when you graduate, you should get summer jobs at places you'd like to
work. If you want to go to grad school, it will help to work on research
projects as an undergrad. What's the equivalent for startups? How do you keep
your options maximally open?
One thing you can do while you're still in school is to learn how startups
work. Unfortunately that's not easy. Few if any colleges have classes about
startups. There may be business school classes on entrepreneurship, as they
call it over there, but these are likely to be a waste of time. Business
schools like to talk about startups, but philosophically they're at the
opposite end of the spectrum. Most books on startups also seem to be useless.
I've looked at a few and none get it right. Books in most fields are written
by people who know the subject from experience, but for startups there's a
unique problem: by definition the founders of successful startups don't need
to write books to make money. As a result most books on the subject end up
being written by people who don't understand it.
So I'd be skeptical of classes and books. The way to learn about startups is
by watching them in action, preferably by working at one. How do you do that
as an undergrad? Probably by sneaking in through the back door. Just hang
around a lot and gradually start doing things for them. Most startups are (or
should be) very cautious about hiring. Every hire increases the burn rate, and
bad hires early on are hard to recover from. However, startups usually have a
fairly informal atmosphere, and there's always a lot that needs to be done. If
you just start doing stuff for them, many will be too busy to shoo you away.
You can thus gradually work your way into their confidence, and maybe turn it
into an official job later, or not, whichever you prefer. This won't work for
all startups, but it would work for most I've known.
Number two, make the most of the great advantage of school: the wealth of co-
founders. Look at the people around you and ask yourself which you'd like to
work with. When you apply that test, you may find you get surprising results.
You may find you'd prefer the quiet guy you've mostly ignored to someone who
seems impressive but has an attitude to match. I'm not suggesting you suck up
to people you don't really like because you think one day they'll be
successful. Exactly the opposite, in fact: you should only start a startup
with someone you like, because a startup will put your friendship through a
stress test. I'm just saying you should think about who you really admire and
hang out with them, instead of whoever circumstances throw you together with.
Another thing you can do is learn skills that will be useful to you in a
startup. These may be different from the skills you'd learn to get a job. For
example, thinking about getting a job will make you want to learn programming
languages you think employers want, like Java and C++. Whereas if you start a
startup, you get to pick the language, so you have to think about which will
actually let you get the most done. If you use that test you might end up
learning Ruby or Python instead.
But the most important skill for a startup founder isn't a programming
technique. It's a knack for understanding users and figuring out how to give
them what they want. I know I repeat this, but that's because it's so
important. And it's a skill you can learn, though perhaps habit might be a
better word. Get into the habit of thinking of software as having users. What
do those users want? What would make them say wow?
This is particularly valuable for undergrads, because the concept of users is
missing from most college programming classes. The way you get taught
programming in college is like teaching writing as grammar, without
mentioning that its purpose is to communicate something to an audience.
Fortunately an audience for software is now only an http request away. So in
addition to the programming you do for your classes, why not build some kind
of website people will find useful? At the very least it will teach you how to
write software with users. In the best case, it might not just be preparation
for a startup, but the startup itself, like it was for Yahoo and Google.
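To make that concrete, here is a minimal sketch of how little it takes to put
software in front of users, using nothing but Python's standard library. The
handler, the port, and the echo behavior are arbitrary choices for
illustration, not a recipe; the point is only that the distance between a
class exercise and a program with an audience is a couple dozen lines.

```python
# A minimal sketch of a web page with users, using only Python's standard
# library. Everything here (the EchoHandler name, port 8000, the "q"
# parameter) is an arbitrary illustrative choice, not a recommendation.
import html
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pull a "q" parameter out of the query string, e.g. /?q=hello
        query = parse_qs(urlparse(self.path).query)
        message = html.escape(query.get("q", ["(nothing yet)"])[0])
        body = f"<html><body><p>You said: {message}</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    # Visit http://localhost:8000/?q=hello in a browser to see the response.
    HTTPServer(("", 8000), EchoHandler).serve_forever()
```

Whether anyone actually types anything into it is, of course, the real test,
and that's exactly the feedback a class project never gives you.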
September 2007
A few weeks ago I had a thought so heretical that it really surprised me. It
may not matter all that much where you go to college.
For me, as for a lot of middle class kids, getting into a good college was
more or less the meaning of life when I was growing up. What was I? A student.
To do that well meant to get good grades. Why did one have to get good grades?
To get into a good college. And why did one want to do that? There seemed to
be several reasons: you'd learn more, get better jobs, make more money. But it
didn't matter exactly what the benefits would be. College was a bottleneck
through which all your future prospects passed; everything would be better if
you went to a better college.
A few weeks ago I realized that somewhere along the line I had stopped
believing that.
What first set me thinking about this was the new trend of worrying
obsessively about what kindergarten your kids go to. It seemed to me this
couldn't possibly matter. Either it won't help your kid get into Harvard, or
if it does, getting into Harvard won't mean much anymore. And then I thought:
how much does it mean even now?
It turns out I have a lot of data about that. My three partners and I run a
seed stage investment firm called Y Combinator. We invest when the company is
just a couple guys and an idea. The idea doesn't matter much; it will change
anyway. Most of our decision is based on the founders. The average founder is
three years out of college. Many have just graduated; a few are still in
school. So we're in much the same position as a graduate program, or a company
hiring people right out of college. Except our choices are immediately and
visibly tested. There are two possible outcomes for a startup: success or
failure—and usually you know within a year which it will be.
The test applied to a startup is among the purest of real world tests. A
startup succeeds or fails depending almost entirely on the efforts of the
founders. Success is decided by the market: you only succeed if users like
what you've built. And users don't care where you went to college.
As well as having precisely measurable results, we have a lot of them. Instead
of doing a small number of large deals like a traditional venture capital
fund, we do a large number of small ones. We currently fund about 40 companies
a year, selected from about 900 applications representing a total of about
2000 people.
Between the volume of people we judge and the rapid, unequivocal test that's
applied to our choices, Y Combinator has been an unprecedented opportunity for
learning how to pick winners. One of the most surprising things we've learned
is how little it matters where people went to college.
I thought I'd already been cured of caring about that. There's nothing like
going to grad school at Harvard to cure you of any illusions you might have
about the average Harvard undergrad. And yet Y Combinator showed us we were
still overestimating people who'd been to elite colleges. We'd interview
people from MIT or Harvard or Stanford and sometimes find ourselves thinking:
they _must_ be smarter than they seem. It took us a few iterations to learn to
trust our senses.
Practically everyone thinks that someone who went to MIT or Harvard or
Stanford must be smart. Even people who hate you for it believe it.
But when you think about what it means to have gone to an elite college, how
could this be true? We're talking about a decision made by admissions
officers—basically, HR people—based on a cursory examination of a huge pile of
depressingly similar applications submitted by seventeen year olds. And what
do they have to go on? An easily gamed standardized test; a short essay
telling you what the kid thinks you want to hear; an interview with a random
alum; a high school record that's largely an index of obedience. Who would
rely on such a test?
And yet a lot of companies do. A lot of companies are very much influenced by
where applicants went to college. How could they be? I think I know the answer
to that.
There used to be a saying in the corporate world: "No one ever got fired for
buying IBM." You no longer hear this about IBM specifically, but the idea is
very much alive; there is a whole category of "enterprise" software companies
that exist to take advantage of it. People buying technology for large
organizations don't care if they pay a fortune for mediocre software. It's not
their money. They just want to buy from a supplier who seems safe—a company
with an established name, confident salesmen, impressive offices, and software
that conforms to all the current fashions. Not necessarily a company that will
deliver so much as one that, if they do let you down, will still seem to have
been a prudent choice. So companies have evolved to fill that niche.
A recruiter at a big company is in much the same position as someone buying
technology for one. If someone went to Stanford and is not obviously insane,
they're probably a safe bet. And a safe bet is enough. No one ever measures
recruiters by the later performance of people they turn down.
I'm not saying, of course, that elite colleges have evolved to prey upon the
weaknesses of large organizations the way enterprise software companies have.
But they work as if they had. In addition to the power of the brand name,
graduates of elite colleges have two critical qualities that plug right into
the way large organizations work. They're good at doing what they're asked,
since that's what it takes to please the adults who judge you at seventeen.
And having been to an elite college makes them more confident.
Back in the days when people might spend their whole career at one big
company, these qualities must have been very valuable. Graduates of elite
colleges would have been capable, yet amenable to authority. And since
individual performance is so hard to measure in large organizations, their own
confidence would have been the starting point for their reputation.
Things are very different in the new world of startups. We couldn't save
someone from the market's judgement even if we wanted to. And being charming
and confident counts for nothing with users. All users care about is whether
you make something they like. If you don't, you're dead.
Knowing that test is coming makes us work a lot harder to get the right
answers than anyone would if they were merely hiring people. We can't afford
to have any illusions about the predictors of success. And what we've found is
that the variation between schools is so much smaller than the variation
between individuals that it's negligible by comparison. We can learn more
about someone in the first minute of talking to them than by knowing where
they went to school.
It seems obvious when you put it that way. Look at the individual, not where
they went to college. But that's a weaker statement than the idea I began
with, that it doesn't matter much where a given individual goes to college.
Don't you learn things at the best schools that you wouldn't learn at lesser
places?
Apparently not. Obviously you can't prove this in the case of a single
individual, but you can tell from aggregate evidence: you can't, without
asking them, distinguish people who went to one school from those who went to
another three times as far down the _US News_ list. Try it and see.
How can this be? Because how much you learn in college depends a lot more on
you than the college. A determined party animal can get through the best
school without learning anything. And someone with a real thirst for knowledge
will be able to find a few smart people to learn from at a school that isn't
prestigious at all.
The other students are the biggest advantage of going to an elite college; you
learn more from them than the professors. But you should be able to reproduce
this at most colleges if you make a conscious effort to find smart friends. At
most colleges you can find at least a handful of other smart students, and
most people have only a handful of close friends in college anyway. The
odds of finding smart professors are even better. The curve for faculty is a
lot flatter than for students, especially in math and the hard sciences; you
have to go pretty far down the list of colleges before you stop finding smart
professors in the math department.
So it's not surprising that we've found the relative prestige of different
colleges useless in judging individuals. There's a lot of randomness in how
colleges select people, and what they learn there depends much more on them
than the college. Between these two sources of variation, the college someone
went to doesn't mean a lot. It is to some degree a predictor of ability, but
so weak that we regard it mainly as a source of error and try consciously to
ignore it.
I doubt what we've discovered is an anomaly specific to startups. Probably
people have always overestimated the importance of where one goes to college.
We're just finally able to measure it.
The unfortunate thing is not just that people are judged by such a superficial
test, but that so many judge themselves by it. A lot of people, probably the
majority of people in America, have some amount of insecurity about where, or
whether, they went to college. The tragedy of the situation is that by far the
greatest liability of not having gone to the college you'd have liked is your
own feeling that you're thereby lacking something. Colleges are a bit like
exclusive clubs in this respect. There is only one real advantage to being a
member of most exclusive clubs: you know you wouldn't be missing much if you
weren't. When you're excluded, you can only imagine the advantages of being an
insider. But invariably they're larger in your imagination than in real life.
So it is with colleges. Colleges differ, but they're nothing like the stamp of
destiny so many imagine them to be. People aren't what some admissions officer
decides about them at seventeen. They're what they make themselves.
Indeed, the great advantage of not caring where people went to college is not
just that you can stop judging them (and yourself) by superficial measures,
but that you can focus instead on what really matters. What matters is what
you make of yourself. I think that's what we should tell kids. Their job isn't
to get good grades so they can get into a good college, but to learn and do.
And not just because that's more rewarding than worldly success. That will
increasingly _be_ the route to worldly success.
March 2024
I met the Reddits before we even started Y Combinator. In fact they were one
of the reasons we started it.
YC grew out of a talk I gave to the Harvard Computer Society (the undergrad
computer club) about how to start a startup. Everyone else in the audience was
probably local, but Steve and Alexis came up on the train from the University
of Virginia, where they were seniors. Since they'd come so far I agreed to
meet them for coffee. They told me about the startup idea we'd later fund them
to drop: a way to order fast food on your cellphone.
This was before smartphones. They'd have had to make deals with cell carriers
and fast food chains just to get it launched. So it was not going to happen.
It still doesn't exist, 19 years later. But I was impressed with their brains
and their energy. In fact I was so impressed with them and some of the other
people I met at that talk that I decided to start something to fund them. A
few days later I told Steve and Alexis that we were starting Y Combinator, and
encouraged them to apply.
That first batch we didn't have any way to identify applicants, so we made up
nicknames for them. The Reddits were the "Cell food muffins." "Muffin" is a
term of endearment Jessica uses for things like small dogs and two year olds.
So that gives you some idea what kind of impression Steve and Alexis made in
those days. They had the look of slightly ruffled surprise that baby birds
have.
Their idea was bad though. And since we thought then that we were funding
ideas rather than founders, we rejected them. But we felt bad about it.
Jessica was sad that we'd rejected the muffins. And it seemed wrong to me to
turn down the people we'd been inspired to start YC to fund.
I don't think the startup sense of the word "pivot" had been invented yet, but
we wanted to fund Steve and Alexis, so if their idea was bad, they'd have to
work on something else. And I knew what else. In those days there was a site
called Delicious where you could save links. It had a page called
del.icio.us/popular that listed the most-saved links, and people were using
this page as a de facto Reddit. I knew because a lot of the traffic to my site
was coming from it. There needed to be something like del.icio.us/popular, but
designed for sharing links instead of being a byproduct of saving them.
So I called Steve and Alexis and said that we liked them, just not their idea,
so we'd fund them if they'd work on something else. They were on the train
home to Virginia at that point. They got off at the next station and got on
the next train north, and by the end of the day were committed to working on
what's now called Reddit.
They would have liked to call it Snoo, as in "What snoo?" But snoo.com was too
expensive, so they settled for calling the mascot Snoo and picked a name for
the site that wasn't registered. Early on Reddit was just a provisional name,
or so they told me at least, but it's probably too late to change it now.
As with all the really great startups, there's an uncannily close match
between the company and the founders. Steve in particular. Reddit has a
certain personality — curious, skeptical, ready to be amused — and that
personality is Steve's.
Steve will roll his eyes at this, but he's an intellectual; he's interested in
ideas for their own sake. That was how he came to be in that audience in
Cambridge in the first place. He knew me because he was interested in a
programming language I've written about called Lisp, and Lisp is one of those
languages few people learn except out of intellectual curiosity. Steve's kind
of vacuum-cleaner curiosity is exactly what you want when you're starting a
site that's a list of links to literally anything interesting.
Steve was not a big fan of authority, so he also liked the idea of a site
without editors. In those days the top forum for programmers was a site called
Slashdot. It was a lot like Reddit, except the stories on the frontpage were
chosen by human moderators. And though they did a good job, that one small
difference turned out to be a big difference. Being driven by user submissions
meant Reddit was fresher than Slashdot. News there was newer, and users will
always go where the newest news is.
I pushed the Reddits to launch fast. A version one didn't need to be more than
a couple hundred lines of code. How could that take more than a week or two to
build? And they did launch comparatively fast, about three weeks into the
first YC batch. The first users were Steve, Alexis, me, and some of their YC
batchmates and college friends. It turns out you don't need that many users to
collect a decent list of interesting links, especially if you have multiple
accounts per user.
Reddit got two more people from their YC batch: Chris Slowe and Aaron Swartz,
and they too were unusually smart. Chris was just finishing his PhD in physics
at Harvard. Aaron was younger, a college freshman, and even more anti-
authority than Steve. It's no exaggeration to describe him as a martyr for
what authority later did to him.
Slowly but inexorably Reddit's traffic grew. At first the numbers were so
small they were hard to distinguish from background noise. But within a few
weeks it was clear that there was a core of real users returning regularly to
the site. And although all kinds of things have happened to Reddit the company
in the years since, Reddit the _site_ never looked back.
Reddit the site (and now app) is such a fundamentally useful thing that it's
almost unkillable. Which is why, despite a long stretch after Steve left when
the management strategy ranged from benign neglect to spectacular blunders,
traffic just kept growing. You can't do that with most companies. Most
companies you take your eye off the ball for six months and you're in deep
trouble. But Reddit was special, and when Steve came back in 2015, I knew the
world was in for a surprise.
People thought they had Reddit's number: one of the players in Silicon Valley,
but not one of the big ones. But those who knew what had been going on behind
the scenes knew there was more to the story than this. If Reddit could grow to
the size it had with management that was harmless at best, what could it do if
Steve came back? We now know the answer to that question. Or at least a lower
bound on the answer. Steve is not out of ideas yet.
* * *
1993 _(This essay is from the introduction to On Lisp.)_
It's a long-standing principle of programming style that the functional
elements of a program should not be too large. If some component of a program
grows beyond the stage where it's readily comprehensible, it becomes a mass of
complexity which conceals errors as easily as a big city conceals fugitives.
Such software will be hard to read, hard to test, and hard to debug.
In accordance with this principle, a large program must be divided into
pieces, and the larger the program, the more it must be divided. How do you
divide a program? The traditional approach is called _top-down design:_ you
say "the purpose of the program is to do these seven things, so I divide it
into seven major subroutines. The first subroutine has to do these four
things, so it in turn will have four of its own subroutines," and so on. This
process continues until the whole program has the right level of granularity--
each part large enough to do something substantial, but small enough to be
understood as a single unit.
Experienced Lisp programmers divide up their programs differently. As well as
top-down design, they follow a principle which could be called _bottom-up
design_ -- changing the language to suit the problem. In Lisp, you don't just
write your program down toward the language, you also build the language up
toward your program. As you're writing a program you may think "I wish Lisp
had such-and-such an operator." So you go and write it. Afterward you realize
that using the new operator would simplify the design of another part of the
program, and so on. Language and program evolve together. Like the border
between two warring states, the boundary between language and program is drawn
and redrawn, until eventually it comes to rest along the mountains and rivers,
the natural frontiers of your problem. In the end your program will look as if
the language had been designed for it. And when language and program fit one
another well, you end up with code which is clear, small, and efficient.
It's worth emphasizing that bottom-up design doesn't mean just writing the
same program in a different order. When you work bottom-up, you usually end up
with a different program. Instead of a single, monolithic program, you will
get a larger language with more abstract operators, and a smaller program
written in it. Instead of a lintel, you'll get an arch.
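Here is a minimal sketch of the difference in Common Lisp (my own example,
not one from the book). Suppose several parts of a program need "the element
that scores highest under some function." Top-down, you would write that loop
again inside each subroutine; bottom-up, you promote it into the language
once:

    ;; A general-purpose utility: return the element of lst that scores
    ;; highest under fn. Written once, used everywhere after that.
    (defun most (fn lst)
      (when lst
        (let* ((wins (car lst))
               (best-score (funcall fn wins)))
          (dolist (obj (cdr lst))
            (let ((score (funcall fn obj)))
              (when (> score best-score)
                (setf wins obj
                      best-score score))))
          wins)))

    ;; Application code then shrinks to a statement of intent
    ;; (the data here is made up for illustration):
    (most #'length '((a b) (a b c) (a)))   ; => (A B C)

The names and the data are mine; the point is only the shape of the move: a
recurring pattern becomes an operator, and the program written on top of it
gets shorter.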
In typical code, once you abstract out the parts which are merely bookkeeping,
what's left is much shorter; the higher you build up the language, the less
distance you will have to travel from the top down to it. This brings several
advantages:
1. By making the language do more of the work, bottom-up design yields programs which are smaller and more agile. A shorter program doesn't have to be divided into so many components, and fewer components means programs which are easier to read or modify. Fewer components also means fewer connections between components, and thus less chance for errors there. As industrial designers strive to reduce the number of moving parts in a machine, experienced Lisp programmers use bottom-up design to reduce the size and complexity of their programs.
2. Bottom-up design promotes code re-use. When you write two or more programs, many of the utilities you wrote for the first program will also be useful in the succeeding ones. Once you've acquired a large substrate of utilities, writing a new program can take only a fraction of the effort it would require if you had to start with raw Lisp.
3. Bottom-up design makes programs easier to read. An instance of this type of abstraction asks the reader to understand a general-purpose operator; an instance of functional abstraction asks the reader to understand a special-purpose subroutine.
4. Because it causes you always to be on the lookout for patterns in your code, working bottom-up helps to clarify your ideas about the design of your program. If two distant components of a program are similar in form, you'll be led to notice the similarity and perhaps to redesign the program in a simpler way.
Bottom-up design is possible to a certain degree in languages other than Lisp.
Whenever you see library functions, bottom-up design is happening. However,
Lisp gives you much broader powers in this department, and augmenting the
language plays a proportionately larger role in Lisp style-- so much so that
Lisp is not just a different language, but a whole different way of
programming.
It's true that this style of development is better suited to programs which
can be written by small groups. However, at the same time, it extends the
limits of what can be done by a small group. In _The Mythical Man-Month_ ,
Frederick Brooks proposed that the productivity of a group of programmers does
not grow linearly with its size. As the size of the group increases, the
productivity of individual programmers goes down. The experience of Lisp
programming suggests a more cheerful way to phrase this law: as the size of
the group decreases, the productivity of individual programmers goes up. A
small group wins, relatively speaking, simply because it's smaller. When a
small group also takes advantage of the techniques that Lisp makes possible,
it can win outright.
**New:** Download On Lisp for Free.
* * *
"But no one can read the program without understanding all your new
utilities." To see why such statements are usually mistaken, see Section 4.8.
* * *
May 2002
| | "The quantity of meaning compressed into a small space by algebraic signs, is another circumstance that facilitates the reasonings we are accustomed to carry on by their aid."
\- Charles Babbage, quoted in Iverson's Turing Award Lecture
---
In the discussion about issues raised by Revenge of the Nerds on the LL1
mailing list, Paul Prescod wrote something that stuck in my mind.
> Python's goal is regularity and readability, not succinctness.
On the face of it, this seems a rather damning thing to claim about a
programming language. As far as I can tell, succinctness = power. If so, then
substituting, we get
> Python's goal is regularity and readability, not power.
and this doesn't seem a tradeoff (if it _is_ a tradeoff) that you'd want to
make. It's not far from saying that Python's goal is not to be effective as a
programming language.
Does succinctness = power? This seems to me an important question, maybe the
most important question for anyone interested in language design, and one that
it would be useful to confront directly. I don't feel sure yet that the answer
is a simple yes, but it seems a good hypothesis to begin with.
**Hypothesis**
My hypothesis is that succinctness is power, or is close enough that except in
pathological examples you can treat them as identical.
It seems to me that succinctness is what programming languages are _for._
Computers would be just as happy to be told what to do directly in machine
language. I think that the main reason we take the trouble to develop high-
level languages is to get leverage, so that we can say (and more importantly,
think) in 10 lines of a high-level language what would require 1000 lines of
machine language. In other words, the main point of high-level languages is to
make source code smaller.
If smaller source code is the purpose of high-level languages, and the power
of something is how well it achieves its purpose, then the measure of the
power of a programming language is how small it makes your programs.
Conversely, a language that doesn't make your programs small is doing a bad
job of what programming languages are supposed to do, like a knife that
doesn't cut well, or printing that's illegible.
**Metrics**
Small in what sense though? The most common measure of code size is lines of
code. But I think that this metric is the most common because it is the
easiest to measure. I don't think anyone really believes it is the true test
of the length of a program. Different languages have different conventions for
how much you should put on a line; in C a lot of lines have nothing on them
but a delimiter or two.
Another easy test is the number of characters in a program, but this is not
very good either; some languages (Perl, for example) just use shorter
identifiers than others.
I think a better measure of the size of a program would be the number of
elements, where an element is anything that would be a distinct node if you
drew a tree representing the source code. The name of a variable or function
is an element; an integer or a floating-point number is an element; a segment
of literal text is an element; an element of a pattern, or a format directive,
is an element; a new block is an element. There are borderline cases (is -5
two elements or one?) but I think most of them are the same for every
language, so they don't affect comparisons much.
This metric needs fleshing out, and it could require interpretation in the
case of specific languages, but I think it tries to measure the right thing,
which is the number of parts a program has. I think the tree you'd draw in
this exercise is what you have to make in your head in order to conceive of
the program, and so its size is proportionate to the amount of work you have
to do to write or read it.
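For concreteness, here is one way such a count could be computed for Lisp
source, where the code is already a tree (a rough sketch of my own, not
something from the essay; it handles only proper lists and ignores the
borderline cases mentioned above):

    ;; Count "elements": every atom is a node, and every compound
    ;; expression contributes one additional node for itself.
    (defun count-elements (form)
      (if (atom form)
          1
          (1+ (reduce #'+ (mapcar #'count-elements form)))))

    ;; Lines and characters aside, this definition has nine parts:
    (count-elements '(defun square (x) (* x x)))   ; => 9

For a language with more surface syntax you would count the nodes of its
parse tree instead, but the idea is the same.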
**Design**
This kind of metric would allow us to compare different languages, but that is
not, at least for me, its main value. The main value of the succinctness test
is as a guide in _designing_ languages. The most useful comparison between
languages is between two potential variants of the same language. What can I
do in the language to make programs shorter?
If the conceptual load of a program is proportionate to its complexity, and a
given programmer can tolerate a fixed conceptual load, then this is the same
as asking, what can I do to enable programmers to get the most done? And that
seems to me identical to asking, how can I design a good language?
(Incidentally, nothing makes it more patently obvious that the old chestnut
"all languages are equivalent" is false than designing languages. When you are
designing a new language, you're _constantly_ comparing two languages-- the
language if I did x, and if I didn't-- to decide which is better. If this were
really a meaningless question, you might as well flip a coin.)
Aiming for succinctness seems a good way to find new ideas. If you can do
something that makes many different programs shorter, it is probably not a
coincidence: you have probably discovered a useful new abstraction. You might
even be able to write a program to help by searching source code for repeated
patterns. Among other languages, those with a reputation for succinctness
would be the ones to look to for new ideas: Forth, Joy, Icon.
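As a sketch of what such a program might look like (my own, and deliberately
crude): walk the source, tabulate every compound subexpression, and report
the ones that recur, since each is a candidate for a new operator.

    ;; Report compound subexpressions that occur more than once in a
    ;; list of top-level forms (proper lists only; a crude heuristic).
    (defun repeated-patterns (forms)
      (let ((seen (make-hash-table :test #'equal)))
        (labels ((walk (form)
                   (unless (atom form)
                     (incf (gethash form seen 0))
                     (mapc #'walk form))))
          (mapc #'walk forms))
        (let (repeats)
          (maphash (lambda (form count)
                     (when (> count 1) (push form repeats)))
                   seen)
          repeats)))

A real tool would want to ignore variable names and match on shape rather
than literal equality, but even this much would surface some of the patterns
worth abstracting.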
**Comparison**
The first person to write about these issues, as far as I know, was Fred
Brooks in _The Mythical Man-Month_. He wrote that programmers seemed to
generate about the same amount of code per day regardless of the language.
When I first read this in my early twenties, it was a big surprise to me and
seemed to have huge implications. It meant that (a) the only way to get
software written faster was to use a more succinct language, and (b) someone
who took the trouble to do this could leave competitors who didn't in the
dust.
Brooks' hypothesis, if it's true, seems to be at the very heart of hacking. In
the years since, I've paid close attention to any evidence I could get on the
question, from formal studies to anecdotes about individual projects. I have
seen nothing to contradict him.
I have not yet seen evidence that seemed to me conclusive, and I don't expect
to. Studies like Lutz Prechelt's comparison of programming languages, while
generating the kind of results I expected, tend to use problems that are too
short to be meaningful tests. A better test of a language is what happens in
programs that take a month to write. And the only real test, if you believe as
I do that the main purpose of a language is to be good to think in (rather
than just to tell a computer what to do once you've thought of it) is what new
things you can write in it. So any language comparison where you have to meet
a predefined spec is testing slightly the wrong thing.
The true test of a language is how well you can discover and solve new
problems, not how well you can use it to solve a problem someone else has
already formulated. These two are quite different criteria. In art, mediums
like embroidery and mosaic work well if you know beforehand what you want to
make, but are absolutely lousy if you don't. When you want to discover the
image as you make it-- as you have to do with anything as complex as an image
of a person, for example-- you need to use a more fluid medium like pencil or
ink wash or oil paint. And indeed, the way tapestries and mosaics are made in
practice is to make a painting first, then copy it. (The word "cartoon" was
originally used to describe a painting intended for this purpose).
What this means is that we are never likely to have accurate comparisons of
the relative power of programming languages. We'll have precise comparisons,
but not accurate ones. In particular, explicit studies for the purpose of
comparing languages, because they will probably use small problems, and will
necessarily use predefined problems, will tend to underestimate the power of
the more powerful languages.
Reports from the field, though they will necessarily be less precise than
"scientific" studies, are likely to be more meaningful. For example, Ulf Wiger
of Ericsson did a study that concluded that Erlang was 4-10x more succinct
than C++, and proportionately faster to develop software in:
> Comparisons between Ericsson-internal development projects indicate similar
> line/hour productivity, including all phases of software development, rather
> independently of which language (Erlang, PLEX, C, C++, or Java) was used.
> What differentiates the different languages then becomes source code volume.
The study also deals explicitly with a point that was only implicit in Brooks'
book (since he measured lines of debugged code): programs written in more
powerful languages tend to have fewer bugs. That becomes an end in itself,
possibly more important than programmer productivity, in applications like
network switches.
**The Taste Test**
Ultimately, I think you have to go with your gut. What does it feel like to
program in the language? I think the way to find (or design) the best language
is to become hypersensitive to how well a language lets you think, then
choose/design the language that feels best. If some language feature is
awkward or restricting, don't worry, you'll know about it.
Such hypersensitivity will come at a cost. You'll find that you can't _stand_
programming in clumsy languages. I find it unbearably restrictive to program
in languages without macros, just as someone used to dynamic typing finds it
unbearably restrictive to have to go back to programming in a language where
you have to declare the type of every variable, and can't make a list of
objects of different types.
I'm not the only one. I know many Lisp hackers that this has happened to. In
fact, the most accurate measure of the relative power of programming languages
might be the percentage of people who know the language who will take any job
where they get to use that language, regardless of the application domain.
**Restrictiveness**
I think most hackers know what it means for a language to feel restrictive.
What's happening when you feel that? I think it's the same feeling you get
when the street you want to take is blocked off, and you have to take a long
detour to get where you wanted to go. There is something you want to say, and
the language won't let you.
What's really going on here, I think, is that a restrictive language is one
that isn't succinct enough. The problem is not simply that you can't say what
you planned to. It's that the detour the language makes you take is _longer._
Try this thought experiment. Suppose there were some program you wanted to
write, and the language wouldn't let you express it the way you planned to,
but instead forced you to write the program in some other way that was
_shorter._ For me at least, that wouldn't feel very restrictive. It would be
like the street you wanted to take being blocked off, and the policeman at the
intersection directing you to a shortcut instead of a detour. Great!
I think most (ninety percent?) of the feeling of restrictiveness comes from
being forced to make the program you write in the language longer than one you
have in your head. Restrictiveness is mostly lack of succinctness. So when a
language feels restrictive, what that (mostly) means is that it isn't succinct
enough, and when a language isn't succinct, it will feel restrictive.
**Readability**
The quote I began with mentions two other qualities, regularity and
readability. I'm not sure what regularity is, or what advantage, if any, code
that is regular and readable has over code that is merely readable. But I
think I know what is meant by readability, and I think it is also related to
succinctness.
We have to be careful here to distinguish between the readability of an
individual line of code and the readability of the whole program. It's the
second that matters. I agree that a line of Basic is likely to be more
readable than a line of Lisp. But a program written in Basic is going to
have more lines than the same program written in Lisp (especially once you
cross over into Greenspunland). The total effort of reading the Basic program
will surely be greater.
> total effort = effort per line x number of lines
I'm not as sure that readability is directly proportionate to succinctness as
I am that power is, but certainly succinctness is a factor (in the
mathematical sense; see equation above) in readability. So it may not even be
meaningful to say that the goal of a language is readability, not
succinctness; it could be like saying the goal was readability, not
readability.
What readability-per-line does mean, to the user encountering the language for
the first time, is that source code will _look unthreatening_. So readability-
per-line could be a good marketing decision, even if it is a bad design
decision. It's isomorphic to the very successful technique of letting people
pay in installments: instead of frightening them with a high upfront price,
you tell them the low monthly payment. Installment plans are a net lose for
the buyer, though, as mere readability-per-line probably is for the
programmer. The buyer is going to make a _lot_ of those low, low payments; and
the programmer is going to read a _lot_ of those individually readable lines.
This tradeoff predates programming languages. If you're used to reading novels
and newspaper articles, your first experience of reading a math paper can be
dismaying. It could take half an hour to read a single page. And yet, I am
pretty sure that the notation is not the problem, even though it may feel like
it is. The math paper is hard to read because the ideas are hard. If you
expressed the same ideas in prose (as mathematicians had to do before they
evolved succinct notations), they wouldn't be any easier to read, because the
paper would grow to the size of a book.
**To What Extent?**
A number of people have rejected the idea that succinctness = power. I think
it would be more useful, instead of simply arguing that they are the same or
aren't, to ask: to what _extent_ does succinctness = power? Because clearly
succinctness is a large part of what higher-level languages are for. If it is
not all they're for, then what else are they for, and how important,
relatively, are these other functions?
I'm not proposing this just to make the debate more civilized. I really want
to know the answer. When, if ever, is a language too succinct for its own
good?
The hypothesis I began with was that, except in pathological examples, I
thought succinctness could be considered identical with power. What I meant
was that in any language anyone would design, they would be identical, but
that if someone wanted to design a language explicitly to disprove this
hypothesis, they could probably do it. I'm not even sure of that, actually.
**Languages, not Programs**
We should be clear that we are talking about the succinctness of languages,
not of individual programs. It certainly is possible for individual programs
to be written too densely.
I wrote about this in On Lisp. A complex macro may have to save many times its
own length to be justified. If writing some hairy macro could save you ten
lines of code every time you use it, and the macro is itself ten lines of
code, then you get a net saving in lines if you use it more than once. But
that could still be a bad move, because macro definitions are harder to read
than ordinary code. You might have to use the macro ten or twenty times before
it yielded a net improvement in readability.
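A small Common Lisp example of the tradeoff (an anaphoric if, the kind of
macro On Lisp discusses; the usage sketched below is mine): it saves a line
or two at every call site, but only pays off once readers know what it means.

    ;; Anaphoric if: bind IT to the value of TEST for use in THEN.
    ;; The capture of IT is deliberate; it is the whole point.
    (defmacro aif (test then &optional else)
      `(let ((it ,test))
         (if it ,then ,else)))

    ;; Each use is shorter than writing the LET out by hand:
    ;; (aif (gethash key table)
    ;;      (process it)
    ;;      (warn "missing ~a" key))

Here the saving per use is small and the macro is easy to learn, so it clears
the bar quickly; a genuinely hairy macro has to clear a much higher one.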
I'm sure every language has such tradeoffs (though I suspect the stakes get
higher as the language gets more powerful). Every programmer must have seen
code that some clever person has made marginally shorter by using dubious
programming tricks.
So there is no argument about that-- at least, not from me. Individual
programs can certainly be too succinct for their own good. The question is,
can a language be? Can a language compel programmers to write code that's
short (in elements) at the expense of overall readability?
One reason it's hard to imagine a language being too succinct is that if there
were some excessively compact way to phrase something, there would probably
also be a longer way. For example, if you felt Lisp programs using a lot of
macros or higher-order functions were too dense, you could, if you preferred,
write code that was isomorphic to Pascal. If you don't want to express
factorial in Arc as a call to a higher-order function, (rec zero 1 * 1-), you
can also write out a recursive definition: (rfn fact (x) (if (zero x) 1 (* x (fact (1- x))))).
Though I can't off the top of my head think of any examples, I am interested in
the question of whether a language could be too succinct.
Are there languages that force you to write code in a way that is crabbed and
incomprehensible? If anyone has examples, I would be very interested to see
them.
(Reminder: What I'm looking for are programs that are very dense according to
the metric of "elements" sketched above, not merely programs that are short
because delimiters can be omitted and everything has a one-character name.)
Lutz Prechelt: Comparison of Seven Languages
Erann Gat: Lisp vs. Java
Peter Norvig Tries Prechelt's Test
Matthias Felleisen: Expressive Power of Languages
Kragen Sitaker: Redundancy and Power
Forth
Joy
Icon
J
K
* * *
July 2020
| | "Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions."
Einstein
---
There has been a lot of talk about privilege lately. Although the concept is
overused, there is something to it, and in particular to the idea that
privilege makes you blind: you can't see things that are visible to
someone whose life is very different from yours.
But one of the most pervasive examples of this kind of blindness is one that I
haven't seen mentioned explicitly. I'm going to call it _orthodox privilege_:
The more conventional-minded someone is, the more it seems to them that it's
safe for everyone to express their opinions.
It's safe for _them_ to express their opinions, because the source of their
opinions is whatever it's currently acceptable to believe. So it seems to them
that it must be safe for everyone. They literally can't imagine a true
statement that would get you in trouble.
And yet at every point in history, there _were_ true things that would get you
in trouble to say. Is ours the first where this isn't so? What an amazing
coincidence that would be.
Surely it should at least be the default assumption that our time is not
unique, and that there are true things you can't say now, just as there have
always been. You would think. But even in the face of such overwhelming
historical evidence, most people will go with their gut on this one.
In the most extreme cases, people suffering from orthodox privilege will not
only deny that there's anything true that you can't say, but will accuse you
of heresy merely for saying there is. Though if there's more than one heresy
current in your time, these accusations will be weirdly non-deterministic: you
must either be an xist or a yist.
Frustrating as it is to deal with these people, it's important to realize that
they're in earnest. They're not pretending they think it's impossible for an
idea to be both unorthodox and true. The world really looks that way to them.
Indeed, this is a uniquely tenacious form of privilege. People can overcome
the blindness induced by most forms of privilege by learning more about
whatever they're not. But they can't overcome orthodox privilege just by
learning more. They'd have to become more independent-minded. If that happens
at all, it doesn't happen on the time scale of one conversation.
It may be possible to convince some people that orthodox privilege must exist
even though they can't sense it, just as one can with, say, dark matter. There
may be some who could be convinced, for example, that it's very unlikely that
this is the first point in history at which there's nothing true you can't
say, even if they can't imagine specific examples.
But in general I don't think it will work to say "check your privilege" about
this type of privilege, because those in its demographic don't realize they're
in it. It doesn't seem to conventional-minded people that they're
conventional-minded. It just seems to them that they're right. Indeed, they
tend to be particularly sure of it.
Perhaps the solution is to appeal to politeness. If someone says they can hear
a high-pitched noise that you can't, it's only polite to take them at their
word, instead of demanding evidence that's impossible to produce, or simply
denying that they hear anything. Imagine how rude that would seem. Similarly,
if someone says they can think of things that are true but that cannot be
said, it's only polite to take them at their word, even if you can't think of
any yourself.
**Thanks** to Sam Altman, Trevor Blackwell, Patrick Collison, Antonio Garcia-
Martinez, Jessica Livingston, Robert Morris, Michael Nielsen, Geoff Ralston,
Max Roser, and Harj Taggar for reading drafts of this.
* * *
May 2021
There's one kind of opinion I'd be very afraid to express publicly. If someone
I knew to be both a domain expert and a reasonable person proposed an idea
that sounded preposterous, I'd be very reluctant to say "That will never
work."
Anyone who has studied the history of ideas, and especially the history of
science, knows that's how big things start. Someone proposes an idea that
sounds crazy, most people dismiss it, then it gradually takes over the world.
Most implausible-sounding ideas are in fact bad and could be safely dismissed.
But not when they're proposed by reasonable domain experts. If the person
proposing the idea is reasonable, then they know how implausible it sounds.
And yet they're proposing it anyway. That suggests they know something you
don't. And if they have deep domain expertise, that's probably the source of
it.
Such ideas are not merely unsafe to dismiss, but disproportionately likely to
be interesting. When the average person proposes an implausible-sounding idea,
its implausibility is evidence of their incompetence. But when a reasonable
domain expert does it, the situation is reversed. There's something like an
efficient market here: on average the ideas that seem craziest will, if
correct, have the biggest effect. So if you can eliminate the theory that the
person proposing an implausible-sounding idea is incompetent, its
implausibility switches from evidence that it's boring to evidence that it's
exciting.
Such ideas are not guaranteed to work. But they don't have to be. They just
have to be sufficiently good bets — to have sufficiently high expected value.
And I think on average they do. I think if you bet on the entire set of
implausible-sounding ideas proposed by reasonable domain experts, you'd end up
net ahead.
The reason is that everyone is too conservative. The word "paradigm" is
overused, but this is a case where it's warranted. Everyone is too much in the
grip of the current paradigm. Even the people who have the new ideas
undervalue them initially. Which means that before they reach the stage of
proposing them publicly, they've already subjected them to an excessively
strict filter.
The wise response to such an idea is not to make statements, but to ask
questions, because there's a real mystery here. Why has this smart and
reasonable person proposed an idea that seems so wrong? Are they mistaken, or
are you? One of you has to be. If you're the one who's mistaken, that would be
good to know, because it means there's a hole in your model of the world. But
even if they're mistaken, it should be interesting to learn why. A trap that
an expert falls into is one you have to worry about too.
This all seems pretty obvious. And yet there are clearly a lot of people who
don't share my fear of dismissing new ideas. Why do they do it? Why risk
looking like a jerk now and a fool later, instead of just reserving judgement?
One reason they do it is envy. If you propose a radical new idea and it
succeeds, your reputation (and perhaps also your wealth) will increase
proportionally. Some people would be envious if that happened, and this
potential envy propagates back into a conviction that you must be wrong.
Another reason people dismiss new ideas is that it's an easy way to seem
sophisticated. When a new idea first emerges, it usually seems pretty feeble.
It's a mere hatchling. Received wisdom is a full-grown eagle by comparison. So
it's easy to launch a devastating attack on a new idea, and anyone who does
will seem clever to those who don't understand this asymmetry.
This phenomenon is exacerbated by the difference between how those working on
new ideas and those attacking them are rewarded. The rewards for working on
new ideas are weighted by the value of the outcome. So it's worth working on
something that only has a 10% chance of succeeding if it would make things
more than 10x better. Whereas the rewards for attacking new ideas are roughly
constant; such attacks seem roughly equally clever regardless of the target.
People will also attack new ideas when they have a vested interest in the old
ones. It's not surprising, for example, that some of Darwin's harshest critics
were churchmen. People build whole careers on some ideas. When someone claims
they're false or obsolete, they feel threatened.
The lowest form of dismissal is mere factionalism: to automatically dismiss
any idea associated with the opposing faction. The lowest form of all is to
dismiss an idea because of who proposed it.
But the main thing that leads reasonable people to dismiss new ideas is the
same thing that holds people back from proposing them: the sheer pervasiveness
of the current paradigm. It doesn't just affect the way we think; it is the
Lego blocks we build thoughts out of. Popping out of the current paradigm is
something only a few people can do. And even they usually have to suppress
their intuitions at first, like a pilot flying through cloud who has to trust
his instruments over his sense of balance.
Paradigms don't just define our present thinking. They also vacuum up the
trail of crumbs that led to them, making our standards for new ideas
impossibly high. The current paradigm seems so perfect to us, its offspring,
that we imagine it must have been accepted completely as soon as it was
discovered — that whatever the church thought of the heliocentric model,
astronomers must have been convinced as soon as Copernicus proposed it. Far,
in fact, from it. Copernicus had worked out the heliocentric model by 1532 and
published it in 1543, but it wasn't till the mid seventeenth century that the
balance of scientific opinion
shifted in its favor.
Few understand how feeble new ideas look when they first appear. So if you
want to have new ideas yourself, one of the most valuable things you can do is
to learn what they look like when they're born. Read about how new ideas
happened, and try to get yourself into the heads of people at the time. How
did things look to them, when the new idea was only half-finished, and even
the person who had it was only half-convinced it was right?
But you don't have to stop at history. You can observe big new ideas being
born all around you right now. Just look for a reasonable domain expert
proposing something that sounds wrong.
If you're nice, as well as wise, you won't merely resist attacking such
people, but encourage them. Having new ideas is a lonely business. Only those
who've tried it know how lonely. These people need your help. And if you help
them, you'll probably learn something in the process.
* * *
May 2021
Most people think of nerds as quiet, diffident people. In ordinary social
situations they are — as quiet and diffident as the star quarterback would be
if he found himself in the middle of a physics symposium. And for the same
reason: they are fish out of water. But the apparent diffidence of nerds is an
illusion due to the fact that when non-nerds observe them, it's usually in
ordinary social situations. In fact some nerds are quite fierce.
The fierce nerds are a small but interesting group. They are as a rule
extremely competitive — more competitive, I'd say, than highly competitive
non-nerds. Competition is more personal for them. Partly perhaps because
they're not emotionally mature enough to distance themselves from it, but also
because there's less randomness in the kinds of competition they engage in,
and they are thus more justified in taking the results personally.
Fierce nerds also tend to be somewhat overconfident, especially when young. It
might seem like it would be a disadvantage to be mistaken about one's
abilities, but empirically it isn't. Up to a point, confidence is a self-
fulfilling prophecy.
Another quality you find in most fierce nerds is intelligence. Not all nerds
are smart, but the fierce ones are always at least moderately so. If they
weren't, they wouldn't have the confidence to be fierce.
There's also a natural connection between nerdiness and _independent-
mindedness_. It's hard to be independent-minded without being somewhat
socially awkward, because conventional beliefs are so often mistaken, or at
least arbitrary. No one who was both independent-minded and ambitious would
want to waste the effort it takes to fit in. And the independent-mindedness of
the fierce nerds will obviously be of the _aggressive_ rather than the passive
type: they'll be annoyed by rules, rather than dreamily unaware of them.
I'm less sure why fierce nerds are impatient, but most seem to be. You notice
it first in conversation, where they tend to interrupt you. This is merely
annoying, but in the more promising fierce nerds it's connected to a deeper
impatience about solving problems. Perhaps the competitiveness and impatience
of fierce nerds are not separate qualities, but two manifestations of a single
underlying drivenness.
When you combine all these qualities in sufficient quantities, the result is
quite formidable. The most vivid example of fierce nerds in action may be
James Watson's _The Double Helix_. The first sentence of the book is "I have
never seen Francis Crick in a modest mood," and the portrait he goes on to
paint of Crick is the quintessential fierce nerd: brilliant, socially awkward,
competitive, independent-minded, overconfident. But so is the implicit
portrait he paints of himself. Indeed, his lack of social awareness makes both
portraits that much more realistic, because he baldly states all sorts of
opinions and motivations that a smoother person would conceal. And moreover
it's clear from the story that Crick and Watson's fierce nerdiness was
integral to their success. Their independent-mindedness caused them to
consider approaches that most others ignored, their overconfidence allowed
them to work on problems they only half understood (they were literally
described as "clowns" by one eminent insider), and their impatience and
competitiveness got them to the answer ahead of two other groups that would
otherwise have found it within the next year, if not the next several months.
The idea that there could be fierce nerds is an unfamiliar one not just to
many normal people but even to some young nerds. Especially early on, nerds
spend so much of their time in ordinary social situations and so little doing
real work that they get a lot more evidence of their awkwardness than their
power. So there will be some who read this description of the fierce nerd and
realize "Hmm, that's me." And it is to you, young fierce nerd, that I now
turn.
I have some good news, and some bad news. The good news is that your
fierceness will be a great help in solving difficult problems. And not just
the kind of scientific and technical problems that nerds have traditionally
solved. As the world progresses, the number of things you can win at by
getting the right answer increases. Recently _getting rich_ became one of
them: 7 of the 8 richest people in America are now fierce nerds.
Indeed, being a fierce nerd is probably even more helpful in business than in
nerds' original territory of scholarship. Fierceness seems optional there.
Darwin for example doesn't seem to have been especially fierce. Whereas it's
impossible to be the CEO of a company over a certain size without being
fierce, so now that nerds can win at business, fierce nerds will increasingly
monopolize the really big successes.
The bad news is that if it's not exercised, your fierceness will turn to
bitterness, and you will become an intellectual playground bully: the grumpy
sysadmin, the forum troll, the _hater_, the shooter down of _new ideas_.
How do you avoid this fate? Work on ambitious projects. If you succeed, it
will bring you a kind of satisfaction that neutralizes bitterness. But you
don't need to have succeeded to feel this; merely working on hard projects
gives most fierce nerds some feeling of satisfaction. And those it doesn't, it
at least keeps busy.
Another solution may be to somehow turn off your fierceness, by devoting
yourself to meditation or psychotherapy or something like that. Maybe that's
the right answer for some people. I have no idea. But it doesn't seem the
optimal solution to me. If you're given a sharp knife, it seems to me better
to use it than to blunt its edge to avoid cutting yourself.
If you do choose the ambitious route, you'll have a tailwind behind you. There
has never been a better time to be a nerd. In the past century we've seen a
continuous transfer of power from dealmakers to technicians — from the
charismatic to the competent — and I don't see anything on the horizon that
will end it. At least not till the nerds end it themselves by bringing about
the singularity.
* * *
July 2020
One of the most revealing ways to classify people is by the degree and
aggressiveness of their conformism. Imagine a Cartesian coordinate system
whose horizontal axis runs from conventional-minded on the left to
independent-minded on the right, and whose vertical axis runs from passive at
the bottom to aggressive at the top. The resulting four quadrants define four
types of people. Starting in the upper left and going counter-clockwise:
aggressively conventional-minded, passively conventional-minded, passively
independent-minded, and aggressively independent-minded.
I think that you'll find all four types in most societies, and that which
quadrant people fall into depends more on their own personality than the
beliefs prevalent in their society.
Young children offer some of the best evidence for both points. Anyone who's
been to primary school has seen the four types, and the fact that school rules
are so arbitrary is strong evidence that which quadrant people fall into
depends more on them than the rules.
The kids in the upper left quadrant, the aggressively conventional-minded
ones, are the tattletales. They believe not only that rules must be obeyed,
but that those who disobey them must be punished.
The kids in the lower left quadrant, the passively conventional-minded, are
the sheep. They're careful to obey the rules, but when other kids break them,
their impulse is to worry that those kids will be punished, not to ensure that
they will.
The kids in the lower right quadrant, the passively independent-minded, are
the dreamy ones. They don't care much about rules and probably aren't 100%
sure what the rules even are.
And the kids in the upper right quadrant, the aggressively independent-minded,
are the naughty ones. When they see a rule, their first impulse is to question
it. Merely being told what to do makes them inclined to do the opposite.
When measuring conformism, of course, you have to say with respect to what,
and this changes as kids get older. For younger kids it's the rules set by
adults. But as kids get older, the source of rules becomes their peers. So a
pack of teenagers who all flout school rules in the same way are not
independent-minded; rather the opposite.
In adulthood we can recognize the four types by their distinctive calls, much
as you could recognize four species of birds. The call of the aggressively
conventional-minded is "Crush <outgroup>!" (It's rather alarming to see an
exclamation point after a variable, but that's the whole problem with the
aggressively conventional-minded.) The call of the passively conventional-
minded is "What will the neighbors think?" The call of the passively
independent-minded is "To each his own." And the call of the aggressively
independent-minded is "Eppur si muove."
The four types are not equally common. There are more passive people than
aggressive ones, and far more conventional-minded people than independent-
minded ones. So the passively conventional-minded are the largest group, and
the aggressively independent-minded the smallest.
Since one's quadrant depends more on one's personality than the nature of the
rules, most people would occupy the same quadrant even if they'd grown up in a
quite different society.
Princeton professor Robert George recently wrote:
> I sometimes ask students what their position on slavery would have been had
> they been white and living in the South before abolition. Guess what? They
> all would have been abolitionists! They all would have bravely spoken out
> against slavery, and worked tirelessly against it.
He's too polite to say so, but of course they wouldn't. And indeed, our
default assumption should not merely be that his students would, on average,
have behaved the same way people did at the time, but that the ones who are
aggressively conventional-minded today would have been aggressively
conventional-minded then too. In other words, that they'd not only not have
fought against slavery, but that they'd have been among its staunchest
defenders.
I'm biased, I admit, but it seems to me that aggressively conventional-minded
people are responsible for a disproportionate amount of the trouble in the
world, and that a lot of the customs we've evolved since the Enlightenment
have been designed to protect the rest of us from them. In particular, the
retirement of the concept of heresy and its replacement by the principle of
freely debating all sorts of different ideas, even ones that are currently
considered unacceptable, without any punishment for those who try them out to
see if they work.
Why do the independent-minded need to be protected, though? Because they have
all the new ideas. To be a successful scientist, for example, it's not enough
just to be right. You have to be right when everyone else is wrong.
Conventional-minded people can't do that. For similar reasons, all successful
startup CEOs are not merely independent-minded, but aggressively so. So it's
no coincidence that societies prosper only to the extent that they have
customs for keeping the conventional-minded at bay.
In the last few years, many of us have noticed that the customs protecting
free inquiry have been weakened. Some say we're overreacting: that they
haven't been weakened very much, or that they've been weakened in the service
of a greater good. The latter I'll dispose of immediately. When the
conventional-minded get the upper hand, they always say it's in the service of
a greater good. It just happens to be a different, incompatible greater good
each time.
As for the former worry, that the independent-minded are being oversensitive,
and that free inquiry hasn't been shut down that much, you can't judge that
unless you are yourself independent-minded. You can't know how much of the
space of ideas is being lopped off unless you have them, and only the
independent-minded have the ones at the edges. Precisely because of this, they
tend to be very sensitive to changes in how freely one can explore ideas.
They're the canaries in this coalmine.
The conventional-minded say, as they always do, that they don't want to shut
down the discussion of all ideas, just the bad ones.
You'd think it would be obvious just from that sentence what a dangerous game
they're playing. But I'll spell it out. There are two reasons why we need to
be able to discuss even "bad" ideas.
The first is that any process for deciding which ideas to ban is bound to make
mistakes. All the more so because no one intelligent wants to undertake that
kind of work, so it ends up being done by the stupid. And when a process makes
a lot of mistakes, you need to leave a margin for error. Which in this case
means you need to ban fewer ideas than you'd like to. But that's hard for the
aggressively conventional-minded to do, partly because they enjoy seeing
people punished, as they have since they were children, and partly because
they compete with one another. Enforcers of orthodoxy can't allow a borderline
idea to exist, because that gives other enforcers an opportunity to one-up
them in the moral purity department, and perhaps even to turn enforcer upon
them. So instead of getting the margin for error we need, we get the opposite:
a race to the bottom in which any idea that seems at all bannable ends up
being banned.
The second reason it's dangerous to ban the discussion of ideas is that ideas
are more closely related than they look. Which means if you restrict the
discussion of some topics, it doesn't only affect those topics. The
restrictions propagate back into any topic that yields implications in the
forbidden ones. And that is not an edge case. The best ideas do exactly that:
they have consequences in fields far removed from their origins. Having ideas
in a world where some ideas are banned is like playing soccer on a pitch that
has a minefield in one corner. You don't just play the same game you would
have, but on a different shaped pitch. You play a much more subdued game even
on the ground that's safe.
In the past, the way the independent-minded protected themselves was to
congregate in a handful of places, first in courts and later in universities,
where they could to some extent make their own rules. Places where people
work with ideas tend to have customs protecting free inquiry, for the same
reason wafer fabs have powerful air filters, or recording studios good sound
insulation. For the last couple centuries at least, when the aggressively
conventional-minded were on the rampage for whatever reason, universities were
the safest places to be.
That may not work this time though, due to the unfortunate fact that the
latest wave of intolerance began in universities. It began in the mid 1980s,
and by 2000 seemed to have died down, but it has recently flared up again with
the arrival of social media. This seems, unfortunately, to have been an own
goal by Silicon Valley. Though the people who run Silicon Valley are almost
all independent-minded, they've handed the aggressively conventional-minded a
tool such as they could only have dreamed of.
On the other hand, perhaps the decline in the spirit of free inquiry within
universities is as much the symptom of the departure of the independent-minded
as the cause. People who would have become professors 50 years ago have other
options now. Now they can become quants or start startups. You have to be
independent-minded to succeed at either of those. If these people had been
professors, they'd have put up a stiffer resistance on behalf of academic
freedom. So perhaps the picture of the independent-minded fleeing declining
universities is too gloomy. Perhaps the universities are declining because so
many have already left.
Though I've spent a lot of time thinking about this situation, I can't predict
how it plays out. Could some universities reverse the current trend and remain
places where the independent-minded want to congregate? Or will the
independent-minded gradually abandon them? I worry a lot about what we might
lose if that happened.
But I'm hopeful long term. The independent-minded are good at protecting
themselves. If existing institutions are compromised, they'll create new ones.
That may require some imagination. But imagination is, after all, their
specialty.
* * *
Kevin Kelleher suggested an interesting way to compare programming languages:
to describe each in terms of the problem it fixes. The surprising thing is how
many, and how well, languages can be described this way.
**Algol:** Assembly language is too low-level.
**Pascal:** Algol doesn't have enough data types.
**Modula:** Pascal is too wimpy for systems programming.
**Simula:** Algol isn't good enough at simulations.
**Smalltalk:** Not everything in Simula is an object.
**Fortran:** Assembly language is too low-level.
**Cobol:** Fortran is scary.
**PL/1:** Fortran doesn't have enough data types.
**Ada:** Every existing language is missing something.
**Basic:** Fortran is scary.
**APL:** Fortran isn't good enough at manipulating arrays.
**J:** APL requires its own character set.
**C:** Assembly language is too low-level.
**C++:** C is too low-level.
**Java:** C++ is a kludge. And Microsoft is going to crush us.
**C#:** Java is controlled by Sun.
**Lisp:** Turing Machines are an awkward way to describe computation.
**Scheme:** MacLisp is a kludge.
**T:** Scheme has no libraries.
**Common Lisp:** There are too many dialects of Lisp.
**Dylan:** Scheme has no libraries, and Lisp syntax is scary.
**Perl:** Shell scripts/awk/sed are not enough like programming languages.
**Python:** Perl is a kludge.
**Ruby:** Perl is a kludge, and Lisp syntax is scary.
**Prolog:** Programming is not enough like logic.
* * *
November 2016
If you're a California voter, there is an important proposition on your ballot
this year: Proposition 62, which bans the death penalty.
When I was younger I used to think the debate about the death penalty was
about when it's ok to take a human life. Is it ok to kill a killer?
But that is not the issue here.
The real world does not work like the version I was shown on TV growing up.
The police often arrest the wrong person. Defendants' lawyers are often
incompetent. And prosecutors are often motivated more by publicity than
justice.
In the real world, about 4% of people sentenced to death are innocent. So this
is not about whether it's ok to kill killers. This is about whether it's ok to
kill innocent people.
A child could answer that one for you.
This year, in California, you have a chance to end this, by voting yes on
Proposition 62. But beware, because there is another proposition, Proposition
66, whose goal is to make it easier to execute people. So yes on 62, no on 66.
It's time.
* * *
July 2010
When we sold our startup in 1998 I suddenly got a lot of money. I now had to
think about something I hadn't had to think about before: how not to lose it.
I knew it was possible to go from rich to poor, just as it was possible to go
from poor to rich. But while I'd spent a lot of the past several years
studying the paths from poor to rich, I knew practically nothing about the
paths from rich to poor. Now, in order to avoid them, I had to learn where
they were.
So I started to pay attention to how fortunes are lost. If you'd asked me as a
kid how rich people became poor, I'd have said by spending all their money.
That's how it happens in books and movies, because that's the colorful way to
do it. But in fact the way most fortunes are lost is not through excessive
expenditure, but through bad investments.
It's hard to spend a fortune without noticing. Someone with ordinary tastes
would find it hard to blow through more than a few tens of thousands of
dollars without thinking "wow, I'm spending a lot of money." Whereas if you
start trading derivatives, you can lose a million dollars (as much as you
want, really) in the blink of an eye.
In most people's minds, spending money on luxuries sets off alarms that making
investments doesn't. Luxuries seem self-indulgent. And unless you got the
money by inheriting it or winning a lottery, you've already been thoroughly
trained that self-indulgence leads to trouble. Investing bypasses those
alarms. You're not spending the money; you're just moving it from one asset to
another. Which is why people trying to sell you expensive things say "it's an
investment."
The solution is to develop new alarms. This can be a tricky business, because
while the alarms that prevent you from overspending are so basic that they may
even be in our DNA, the ones that prevent you from making bad investments have
to be learned, and are sometimes fairly counterintuitive.
A few days ago I realized something surprising: the situation with time is
much the same as with money. The most dangerous way to lose time is not to
spend it having fun, but to spend it doing fake work. When you spend time
having fun, you know you're being self-indulgent. Alarms start to go off
fairly quickly. If I woke up one morning and sat down on the sofa and watched
TV all day, I'd feel like something was terribly wrong. Just thinking about it
makes me wince. I'd start to feel uncomfortable after sitting on a sofa
watching TV for 2 hours, let alone a whole day.
And yet I've definitely had days when I might as well have sat in front of a
TV all day — days at the end of which, if I asked myself what I got done that
day, the answer would have been: basically, nothing. I feel bad after these
days too, but nothing like as bad as I'd feel if I spent the whole day on the
sofa watching TV. If I spent a whole day watching TV I'd feel like I was
descending into perdition. But the same alarms don't go off on the days when I
get nothing done, because I'm doing stuff that seems, superficially, like real
work. Dealing with email, for example. You do it sitting at a desk. It's not
fun. So it must be work.
With time, as with money, avoiding pleasure is no longer enough to protect
you. It probably was enough to protect hunter-gatherers, and perhaps all pre-
industrial societies. So nature and nurture combine to make us avoid self-
indulgence. But the world has gotten more complicated: the most dangerous
traps now are new behaviors that bypass our alarms about self-indulgence by
mimicking more virtuous types. And the worst thing is, they're not even fun.
**Thanks** to Sam Altman, Trevor Blackwell, Patrick Collison, Jessica
Livingston, and Robert Morris for reading drafts of this.
* * *
August 2007
_(This is a talk I gave at the last Y Combinator dinner of the summer.
Usually we don't have a speaker at the last dinner; it's more of a party. But
it seemed worth spoiling the atmosphere if I could save some of the startups
from preventable deaths. So at the last minute I cooked up this rather grim
talk. I didn't mean this as an essay; I wrote it down because I only had two
hours before dinner and think fastest while writing.)_
A couple days ago I told a reporter that we expected about a third of the
companies we funded to succeed. Actually I was being conservative. I'm hoping
it might be as much as a half. Wouldn't it be amazing if we could achieve a
50% success rate?
Another way of saying that is that half of you are going to die. Phrased that
way, it doesn't sound good at all. In fact, it's kind of weird when you think
about it, because our definition of success is that the founders get rich. If
half the startups we fund succeed, then half of you are going to get rich and
the other half are going to get nothing.
If you can just avoid dying, you get rich. That sounds like a joke, but it's
actually a pretty good description of what happens in a typical startup. It
certainly describes what happened in Viaweb. We avoided dying till we got
rich.
It was really close, too. When we were visiting Yahoo to talk about being
acquired, we had to interrupt everything and borrow one of their conference
rooms to talk down an investor who was about to back out of a new funding
round we needed to stay alive. So even in the middle of getting rich we were
fighting off the grim reaper.
You may have heard that quote about luck consisting of opportunity meeting
preparation. You've now done the preparation. The work you've done so far has,
in effect, put you in a position to get lucky: you can now get rich by not
letting your company die. That's more than most people have. So let's talk
about how not to die.
We've done this five times now, and we've seen a bunch of startups die. About
10 of them so far. We don't know exactly what happens when they die, because
they generally don't die loudly and heroically. Mostly they crawl off
somewhere and die.
For us the main indication of impending doom is when we don't hear from you.
When we haven't heard from, or about, a startup for a couple months, that's a
bad sign. If we send them an email asking what's up, and they don't reply,
that's a really bad sign. So far that is a 100% accurate predictor of death.
Whereas if a startup regularly does new deals and releases and either sends us
mail or shows up at YC events, they're probably going to live.
I realize this will sound naive, but maybe the linkage works in both
directions. Maybe if you can arrange that we keep hearing from you, you won't
die.
That may not be so naive as it sounds. You've probably noticed that having
dinners every Tuesday with us and the other founders causes you to get more
done than you would otherwise, because every dinner is a mini Demo Day. Every
dinner is a kind of a deadline. So the mere constraint of staying in regular
contact with us will push you to make things happen, because otherwise you'll
be embarrassed to tell us that you haven't done anything new since the last
time we talked.
If this works, it would be an amazing hack. It would be pretty cool if merely
by staying in regular contact with us you could get rich. It sounds crazy, but
there's a good chance that would work.
A variant is to stay in touch with other YC-funded startups. There is now a
whole neighborhood of them in San Francisco. If you move there, the peer
pressure that made you work harder all summer will continue to operate.
When startups die, the official cause of death is always either running out of
money or a critical founder bailing. Often the two occur simultaneously. But I
think the underlying cause is usually that they've become demoralized. You
rarely hear of a startup that's working around the clock doing deals and
pumping out new features, and dies because they can't pay their bills and
their ISP unplugs their server.
Startups rarely die in mid keystroke. So keep typing!
If so many startups get demoralized and fail when merely by hanging on they
could get rich, you have to assume that running a startup can be demoralizing.
That is certainly true. I've been there, and that's why I've never done
another startup. The low points in a startup are just unbelievably low. I bet
even Google had moments where things seemed hopeless.
Knowing that should help. If you know it's going to feel terrible sometimes,
then when it feels terrible you won't think "ouch, this feels terrible, I give
up." It feels that way for everyone. And if you just hang on, things will
probably get better. The metaphor people use to describe the way a startup
feels is at least a roller coaster and not drowning. You don't just sink and
sink; there are ups after the downs.
Another feeling that seems alarming but is in fact normal in a startup is the
feeling that what you're doing isn't working. The reason you can expect to
feel this is that what you do probably won't work. Startups almost never get
it right the first time. Much more commonly you launch something, and no one
cares. Don't assume when this happens that you've failed. That's normal for
startups. But don't sit around doing nothing. Iterate.
I like Paul Buchheit's suggestion of trying to make something that at least
someone really loves. As long as you've made something that a few users are
ecstatic about, you're on the right track. It will be good for your morale to
have even a handful of users who really love you, and startups run on morale.
But also it will tell you what to focus on. What is it about you that they
love? Can you do more of that? Where can you find more people who love that
sort of thing? As long as you have some core of users who love you, all you
have to do is expand it. It may take a while, but as long as you keep plugging
away, you'll win in the end. Both Blogger and Delicious did that. Both took
years to succeed. But both began with a core of fanatically devoted users, and
all Evan and Joshua had to do was grow that core incrementally. Wufoo is on
the same trajectory now.
So when you release something and it seems like no one cares, look more
closely. Are there zero users who really love you, or is there at least some
little group that does? It's quite possible there will be zero. In that case,
tweak your product and try again. Every one of you is working on a space that
contains at least one winning permutation somewhere in it. If you just keep
trying, you'll find it.
Let me mention some things not to do. The number one thing not to do is other
things. If you find yourself saying a sentence that ends with "but we're going
to keep working on the startup," you are in big trouble. Bob's going to grad
school, but we're going to keep working on the startup. We're moving back to
Minnesota, but we're going to keep working on the startup. We're taking on
some consulting projects, but we're going to keep working on the startup. You
may as well just translate these to "we're giving up on the startup, but we're
not willing to admit that to ourselves," because that's what it means most of
the time. A startup is so hard that working on it can't be preceded by "but."
In particular, don't go to graduate school, and don't start other projects.
Distraction is fatal to startups. Going to (or back to) school is a huge
predictor of death because in addition to the distraction it gives you
something to say you're doing. If you're only doing a startup, then if the
startup fails, you fail. If you're in grad school and your startup fails, you
can say later "Oh yeah, we had this startup on the side when I was in grad
school, but it didn't go anywhere."
You can't use euphemisms like "didn't go anywhere" for something that's your
only occupation. People won't let you.
One of the most interesting things we've discovered from working on Y
Combinator is that founders are more motivated by the fear of looking bad than
by the hope of getting millions of dollars. So if you want to get millions of
dollars, put yourself in a position where failure will be public and
humiliating.
When we first met the founders of Octopart, they seemed very smart, but not a
great bet to succeed, because they didn't seem especially committed. One of
the two founders was still in grad school. It was the usual story: he'd drop
out if it looked like the startup was taking off. Since then he has not only
dropped out of grad school, but appeared full length in Newsweek with the word
"Billionaire" printed across his chest. He just cannot fail now. Everyone he
knows has seen that picture. Girls who dissed him in high school have seen it.
His mom probably has it on the fridge. It would be unthinkably humiliating to
fail now. At this point he is committed to fight to the death.
I wish every startup we funded could appear in a Newsweek article describing
them as the next generation of billionaires, because then none of them would
be able to give up. The success rate would be 90%. I'm not kidding.
When we first knew the Octoparts they were lighthearted, cheery guys. Now when
we talk to them they seem grimly determined. The electronic parts distributors
are trying to squash them to keep their monopoly pricing. (If it strikes you
as odd that people still order electronic parts out of thick paper catalogs in
2007, there's a reason for that. The distributors want to prevent the
transparency that comes from having prices online.) I feel kind of bad that
we've transformed these guys from lighthearted to grimly determined. But that
comes with the territory. If a startup succeeds, you get millions of dollars,
and you don't get that kind of money just by asking for it. You have to assume
it takes some amount of pain.
And however tough things get for the Octoparts, I predict they'll succeed.
They may have to morph themselves into something totally different, but they
won't just crawl off and die. They're smart; they're working in a promising
field; and they just cannot give up.
All of you guys already have the first two. You're all smart and working on
promising ideas. Whether you end up among the living or the dead comes down to
the third ingredient, not giving up.
So I'll tell you now: bad shit is coming. It always is in a startup. The odds
of getting from launch to liquidity without some kind of disaster happening
are one in a thousand. So don't get demoralized. When the disaster strikes,
just say to yourself, ok, this was what Paul was talking about. What did he
say to do? Oh, yeah. Don't give up.
* * *
August 2006, rev. April 2007, September 2010
In a few days it will be Demo Day, when the startups we funded this summer
present to investors. Y Combinator funds startups twice a year, in January and
June. Ten weeks later we invite all the investors we know to hear them present
what they've built so far.
Ten weeks is not much time. The average startup probably doesn't have much to
show for itself after ten weeks. But the average startup fails. When you look
at the ones that went on to do great things, you find a lot that began with
someone pounding out a prototype in a week or two of nonstop work. Startups
are a counterexample to the rule that haste makes waste.
(Too much money seems to be as bad for startups as too much time, so we don't
give them much money either.)
A week before Demo Day, we have a dress rehearsal called Rehearsal Day. At
other Y Combinator events we allow outside guests, but not at Rehearsal Day.
No one except the other founders gets to see the rehearsals.
The presentations on Rehearsal Day are often pretty rough. But this is to be
expected. We try to pick founders who are good at building things, not ones
who are slick presenters. Some of the founders are just out of college, or
even still in it, and have never spoken to a group of people they didn't
already know.
So we concentrate on the basics. On Demo Day each startup will only get ten
minutes, so we encourage them to focus on just two goals: (a) explain what
you're doing, and (b) explain why users will want it.
That might sound easy, but it's not when the speakers have no experience
presenting, and they're explaining technical matters to an audience that's
mostly non-technical.
This situation is constantly repeated when startups present to investors:
people who are bad at explaining, talking to people who are bad at
understanding. Practically every successful startup, including stars like
Google, presented at some point to investors who didn't get it and turned them
down. Was it because the founders were bad at presenting, or because the
investors were obtuse? It's probably always some of both.
At the most recent Rehearsal Day, we four Y Combinator partners found
ourselves saying a lot of the same things we said at the last two. So at
dinner afterward we collected all our tips about presenting to investors. Most
startups face similar challenges, so we hope these will be useful to a wider
audience.
**1\. Explain what you're doing.**
Investors' main question when judging a very early startup is whether you've
made a compelling product. Before they can judge whether you've built a good
x, they have to understand what kind of x you've built. They will get very
frustrated if instead of telling them what you do, you make them sit through
some kind of preamble.
Say what you're doing as soon as possible, preferably in the first sentence.
"We're Jeff and Bob and we've built an easy to use web-based database. Now
we'll show it to you and explain why people need this."
If you're a great public speaker you may be able to violate this rule. Last
year one founder spent the whole first half of his talk on a fascinating
analysis of the limits of the conventional desktop metaphor. He got away with
it, but unless you're a captivating speaker, which most hackers aren't, it's
better to play it safe.
**2\. Get rapidly to demo.**
_This section is now obsolete for YC founders presenting at Demo Day, because
Demo Day presentations are now so short that they rarely include much if any
demo. They seem to work just as well without, however, which makes me think I
was wrong to emphasize demos so much before._
A demo explains what you've made more effectively than any verbal description.
The only thing worth talking about first is the problem you're trying to solve
and why it's important. But don't spend more than a tenth of your time on
that. Then demo.
When you demo, don't run through a catalog of features. Instead start with the
problem you're solving, and then show how your product solves it. Show
features in an order driven by some kind of purpose, rather than the order in
which they happen to appear on the screen.
If you're demoing something web-based, assume that the network connection will
mysteriously die 30 seconds into your presentation, and come prepared with a
copy of the server software running on your laptop.
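For what it's worth, here is one minimal way to prepare that fallback, offered as a sketch rather than a prescription: serve a static snapshot of your site from the laptop itself using Python's built-in web server. The `demo_snapshot` directory name is invented for the example; the point is just that the demo keeps working when the conference room network doesn't.

    # Hypothetical fallback: serve a local snapshot of the demo site so a dead
    # network connection doesn't kill the presentation. Assumes the pages have
    # been exported into ./demo_snapshot (an invented name for this sketch).
    import functools
    import webbrowser
    from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

    PORT = 8000
    handler = functools.partial(SimpleHTTPRequestHandler, directory="demo_snapshot")

    if __name__ == "__main__":
        server = ThreadingHTTPServer(("127.0.0.1", PORT), handler)
        webbrowser.open(f"http://127.0.0.1:{PORT}/")  # open the local copy in a browser
        server.serve_forever()                        # serve until interrupted (Ctrl-C)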
**3\. Better a narrow description than a vague one.**
One reason founders resist describing their projects concisely is that, at
this early stage, there are all kinds of possibilities. The most concise
descriptions seem misleadingly narrow. So for example a group that has built
an easy web-based database might resist calling their application that, because
it could be so much more. In fact, it could be anything...
The problem is, as you approach (in the calculus sense) a description of
something that could be anything, the content of your description approaches
zero. If you describe your web-based database as "a system to allow people to
collaboratively leverage the value of information," it will go in one investor
ear and out the other. They'll just discard that sentence as meaningless
boilerplate, and hope, with increasing impatience, that in the next sentence
you'll actually explain what you've made.
Your primary goal is not to describe everything your system might one day
become, but simply to convince investors you're worth talking to further. So
approach this like an algorithm that gets the right answer by successive
approximations. Begin with a description that's gripping but perhaps overly
narrow, then flesh it out to the extent you can. It's the same principle as
incremental development: start with a simple prototype, then add features, but
at every point have working code. In this case, "working code" means a working
description in the investor's head.
**4\. Don't talk and drive.**
Have one person talk while another uses the computer. If the same person does
both, they'll inevitably mumble downwards at the computer screen instead of
talking clearly at the audience.
As long as you're standing near the audience and looking at them, politeness
(and habit) compel them to pay attention to you. Once you stop looking at them
to fuss with something on your computer, their minds drift off to the errands
they have to run later.
**5\. Don't talk about secondary matters at length.**
If you only have a few minutes, spend them explaining what your product does
and why it's great. Second order issues like competitors or resumes should be
single slides you go through quickly at the end. If you have impressive
resumes, just flash them on the screen for 15 seconds and say a few words. For
competitors, list the top 3 and explain in one sentence each what they lack
that you have. And put this kind of thing at the end, after you've made it
clear what you've built.
**6\. Don't get too deeply into business models.**
It's good to talk about how you plan to make money, but mainly because it
shows you care about that and have thought about it. Don't go into detail
about your business model, because (a) that's not what smart investors care
about in a brief presentation, and (b) any business model you have at this
point is probably wrong anyway.
Recently a VC who came to speak at Y Combinator talked about a company he just
invested in. He said their business model was wrong and would probably change
three times before they got it right. The founders were experienced guys who'd
done startups before and who'd just succeeded in getting millions from one of
the top VC firms, and even their business model was crap. (And yet he invested
anyway, because he expected it to be crap at this stage.)
If you're solving an important problem, you're going to sound a lot smarter
talking about that than the business model. The business model is just a bunch
of guesses, and guesses about stuff that's probably not your area of
expertise. So don't spend your precious few minutes talking about crap when
you could be talking about solid, interesting things you know a lot about: the
problem you're solving and what you've built so far.
As well as being a bad use of time, if your business model seems spectacularly
wrong, that will push the stuff you want investors to remember out of their
heads. They'll just remember you as the company with the boneheaded plan for
making money, rather than the company that solved that important problem.
**7\. Talk slowly and clearly at the audience.**
Everyone at Rehearsal Day could see the difference between the people who'd
been out in the world for a while and had presented to groups, and those who
hadn't.
You need to use a completely different voice and manner talking to a roomful
of people than you would in conversation. Everyday life gives you no practice
in this. If you can't already do it, the best solution is to treat it as a
consciously artificial trick, like juggling.
However, that doesn't mean you should talk like some kind of announcer.
Audiences tune that out. What you need to do is talk in this artificial way,
and yet make it seem conversational. (Writing is the same. Good writing is an
elaborate effort to seem spontaneous.)
If you want to write out your whole presentation beforehand and memorize it,
that's ok. That has worked for some groups in the past. But make sure to write
something that sounds like spontaneous, informal speech, and deliver it that
way too.
Err on the side of speaking slowly. At Rehearsal Day, one of the founders
mentioned a rule actors use: if you feel you're speaking too slowly, you're
speaking at about the right speed.
**8\. Have one person talk.**
Startups often want to show that all the founders are equal partners. This is
a good instinct; investors dislike unbalanced teams. But trying to show it by
partitioning the presentation is going too far. It's distracting. You can
demonstrate your respect for one another in more subtle ways. For example,
when one of the groups presented at Demo Day, the more extroverted of the two
founders did most of the talking, but he described his co-founder as the best
hacker he'd ever met, and you could tell he meant it.
Pick the one or at most two best speakers, and have them do most of the
talking.
Exception: If one of the founders is an expert in some specific technical
field, it can be good for them to talk about that for a minute or so. This
kind of "expert witness" can add credibility, even if the audience doesn't
understand all the details. If Jobs and Wozniak had 10 minutes to present the
Apple II, it might be a good plan to have Jobs speak for 9 minutes and have
Woz speak for a minute in the middle about some of the technical feats he'd
pulled off in the design. (Though of course if it were actually those two,
Jobs would speak for the entire 10 minutes.)
**9\. Seem confident.**
Between the brief time available and their lack of technical background, many
in the audience will have a hard time evaluating what you're doing. Probably
the single biggest piece of evidence, initially, will be your own confidence
in it. You have to show you're impressed with what you've made.
And I mean show, not tell. Never say "we're passionate" or "our product is
great." People just ignore that—or worse, write you off as bullshitters. Such
messages must be implicit.
What you must not do is seem nervous and apologetic. If you've truly made
something good, you're doing investors a _favor_ by telling them about it. If
you don't genuinely believe that, perhaps you ought to change what your
company is doing. If you don't believe your startup has such promise that
you'd be doing them a favor by letting them invest, why are you investing your
time in it?
**10\. Don't try to seem more than you are.**
Don't worry if your company is just a few months old and doesn't have an
office yet, or your founders are technical people with no business experience.
Google was like that once, and they turned out ok. Smart investors can see
past such superficial flaws. They're not looking for finished, smooth
presentations. They're looking for raw talent. All you need to convince them
of is that you're smart and that you're onto something good. If you try too
hard to conceal your rawness—by trying to seem corporate, or pretending to
know about stuff you don't—you may just conceal your talent.
You can afford to be candid about what you haven't figured out yet. Don't go
out of your way to bring it up (e.g. by having a slide about what might go
wrong), but don't try to pretend either that you're further along than you
are. If you're a hacker and you're presenting to experienced investors,
they're probably better at detecting bullshit than you are at producing it.
**11\. Don't put too many words on slides.**
When there are a lot of words on a slide, people just skip reading it. So look
at your slides and ask of each word "could I cross this out?" This includes
gratuitous clip art. Try to get your slides under 20 words if you can.
Don't read your slides. They should be something in the background as you face
the audience and talk to them, not something you face and read to an audience
sitting behind you.
Cluttered sites don't do well in demos, especially when they're projected onto
a screen. At the very least, crank up the font size big enough to make all the
text legible. But cluttered sites are bad anyway, so perhaps you should use
this opportunity to make your design simpler.
**12\. Specific numbers are good.**
If you have any kind of data, however preliminary, tell the audience. Numbers
stick in people's heads. If you can claim that the median visitor generates 12
page views, that's great.
But don't give them more than four or five numbers, and only give them numbers
specific to you. You don't need to tell them the size of the market you're in.
Who cares, really, if it's 500 million or 5 billion a year? Talking about that
is like an actor at the beginning of his career telling his parents how much
Tom Hanks makes. Yeah, sure, but first you have to become Tom Hanks. The
important part is not whether he makes ten million a year or a hundred, but
how you get there.
**13\. Tell stories about users.**
The biggest fear of investors looking at early stage startups is that you've
built something based on your own a priori theories of what the world needs,
but that no one will actually want. So it's good if you can talk about
problems specific users have and how you solve them.
Greg Mcadoo said one thing Sequoia looks for is the "proxy for demand." What
are people doing now, using inadequate tools, that shows they need what you're
making?
Another sign of user need is when people pay a lot for something. It's easy to
convince investors there will be demand for a cheaper alternative to something
popular, if you preserve the qualities that made it popular.
The best stories about user needs are about your own. A remarkable number of
famous startups grew out of some need the founders had: Apple, Microsoft,
Yahoo, Google. Experienced investors know that, so stories of this type will
get their attention. The next best thing is to talk about the needs of people
you know personally, like your friends or siblings.
**14\. Make a soundbite stick in their heads.**
Professional investors hear a lot of pitches. After a while they all blur
together. The first cut is simply to be one of those they remember. And the
way to ensure that is to create a descriptive phrase about yourself that
sticks in their heads.
In Hollywood, these phrases seem to be of the form "x meets y." In the
startup world, they're usually "the x of y" or "the x y." Viaweb's was "the
Microsoft Word of ecommerce."
Find one and launch it clearly (but apparently casually) in your talk,
preferably near the beginning.
It's a good exercise for you, too, to sit down and try to figure out how to
describe your startup in one compelling phrase. If you can't, your plans may
not be sufficiently focused.
Image: Casey Muller: Trevor Blackwell at Rehearsal Day, summer 2006
* * *
April 2005
This summer, as an experiment, some friends and I are giving seed funding to a
bunch of new startups. It's an experiment because we're prepared to fund
younger founders than most investors would. That's why we're doing it during
the summer—so even college students can participate.
We know from Google and Yahoo that grad students can start successful
startups. And we know from experience that some undergrads are as capable as
most grad students. The accepted age for startup founders has been creeping
downward. We're trying to find the lower bound.
The deadline has now passed, and we're sifting through 227 applications. We
expected to divide them into two categories, promising and unpromising. But we
soon saw we needed a third: promising people with unpromising ideas.
**The Artix Phase**
We should have expected this. It's very common for a group of founders to go
through one lame idea before realizing that a startup has to make something
people will pay for. In fact, we ourselves did.
Viaweb wasn't the first startup Robert Morris and I started. In January 1995,
we and a couple friends started a company called Artix. The plan was to put
art galleries on the Web. In retrospect, I wonder how we could have wasted our
time on anything so stupid. Galleries are not especially excited about being
on the Web even now, ten years later. They don't want to have their stock
visible to any random visitor, like an antique store.
Besides which, art dealers are the most technophobic people on earth. They
didn't become art dealers after a difficult choice between that and a career
in the hard sciences. Most of them had never seen the Web before we came to
tell them why they should be on it. Some didn't even have computers. It
doesn't do justice to the situation to describe it as a hard _sell_; we soon
sank to building sites for free, and it was hard to convince galleries even to
do that.
Gradually it dawned on us that instead of trying to make Web sites for people
who didn't want them, we could make sites for people who did. In fact,
software that would let people who wanted sites make their own. So we ditched
Artix and started a new company, Viaweb, to make software for building online
stores. That one succeeded.
We're in good company here. Microsoft was not the first company Paul Allen and
Bill Gates started either. The first was called Traf-o-data. It does not seem
to have done as well as Micro-soft.
In Robert's defense, he was skeptical about Artix. I dragged him into it.
But there were moments when he was optimistic. And if we, who were 29 and 30
at the time, could get excited about such a thoroughly boneheaded idea, we
should not be surprised that hackers aged 21 or 22 are pitching us ideas with
little hope of making money.
**The Still Life Effect**
Why does this happen? Why do good hackers have bad business ideas?
Let's look at our case. One reason we had such a lame idea was that it was the
first thing we thought of. I was in New York trying to be a starving artist at
the time (the starving part is actually quite easy), so I was haunting
galleries anyway. When I learned about the Web, it seemed natural to mix the
two. Make Web sites for galleries—that's the ticket!
If you're going to spend years working on something, you'd think it might be
wise to spend at least a couple days considering different ideas, instead of
going with the first that comes into your head. You'd think. But people don't.
In fact, this is a constant problem when you're painting still lifes. You
plonk down a bunch of stuff on a table, and maybe spend five or ten minutes
rearranging it to look interesting. But you're so impatient to get started
painting that ten minutes of rearranging feels very long. So you start
painting. Three days later, having spent twenty hours staring at it, you're
kicking yourself for having set up such an awkward and boring composition, but
by then it's too late.
Part of the problem is that big projects tend to grow out of small ones. You
set up a still life to make a quick sketch when you have a spare hour, and
days later you're still working on it. I once spent a month painting three
versions of a still life I set up in about four minutes. At each point (a day,
a week, a month) I thought I'd already put in so much time that it was too
late to change.
So the biggest cause of bad ideas is the still life effect: you come up with a
random idea, plunge into it, and then at each point (a day, a week, a month)
feel you've put so much time into it that this must be _the_ idea.
How do we fix that? I don't think we should discard plunging. Plunging into an
idea is a good thing. The solution is at the other end: to realize that having
invested time in something doesn't make it good.
This is clearest in the case of names. Viaweb was originally called Webgen,
but we discovered someone else had a product called that. We were so attached
to our name that we offered him _5% of the company_ if he'd let us have it.
But he wouldn't, so we had to think of another. The best we could do was
Viaweb, which we disliked at first. It was like having a new mother. But
within three days we loved it, and Webgen sounded lame and old-fashioned.
If it's hard to change something so simple as a name, imagine how hard it is
to garbage-collect an idea. A name only has one point of attachment into your
head. An idea for a company gets woven into your thoughts. So you must
consciously discount for that. Plunge in, by all means, but remember later to
look at your idea in the harsh light of morning and ask: is this something
people will pay for? Is this, of all the things we could make, the thing
people will pay most for?
**Muck**
The second mistake we made with Artix is also very common. Putting galleries
on the Web seemed cool.
One of the most valuable things my father taught me is an old Yorkshire
saying: where there's muck, there's brass. Meaning that unpleasant work pays.
And more to the point here, vice versa. Work people like doesn't pay well, for
reasons of supply and demand. The most extreme case is developing programming
languages, which doesn't pay at all, because people like it so much they do it
for free.
When we started Artix, I was still ambivalent about business. I wanted to keep
one foot in the art world. Big, big, mistake. Going into business is like a
hang-glider launch: you'd better do it wholeheartedly, or not at all. The
purpose of a company, and a startup especially, is to make money. You can't
have divided loyalties.
Which is not to say that you have to do the most disgusting sort of work, like
spamming, or starting a company whose only purpose is patent litigation. What
I mean is, if you're starting a company that will do something cool, the aim
had better be to make money and maybe be cool, not to be cool and maybe make
money.
It's hard enough to make money that you can't do it by accident. Unless it's
your first priority, it's unlikely to happen at all.
**Hyenas**
When I probe our motives with Artix, I see a third mistake: timidity. If you'd
proposed at the time that we go into the e-commerce business, we'd have found
the idea terrifying. Surely a field like that would be dominated by fearsome
startups with five million dollars of VC money each. Whereas we felt pretty
sure that we could hold our own in the slightly less competitive business of
generating Web sites for art galleries.
We erred ridiculously far on the side of safety. As it turns out, VC-backed
startups are not that fearsome. They're too busy trying to spend all that
money to get software written. In 1995, the e-commerce business was very
competitive as measured in press releases, but not as measured in software.
And really it never was. The big fish like Open Market (rest their souls) were
just consulting companies pretending to be product companies, and the
offerings at our end of the market were a couple hundred lines of Perl
scripts. Or could have been implemented as a couple hundred lines of Perl; in
fact they were probably tens of thousands of lines of C++ or Java. Once we
actually took the plunge into e-commerce, it turned out to be surprisingly
easy to compete.
So why were we afraid? We felt we were good at programming, but we lacked
confidence in our ability to do a mysterious, undifferentiated thing we called
"business." In fact there is no such thing as "business." There's selling,
promotion, figuring out what people want, deciding how much to charge,
customer support, paying your bills, getting customers to pay you, getting
incorporated, raising money, and so on. And the combination is not as hard as
it seems, because some tasks (like raising money and getting incorporated) are
an O(1) pain in the ass, whether you're big or small, and others (like selling
and promotion) depend more on energy and imagination than any kind of special
training.
Artix was like a hyena, content to survive on carrion because we were afraid
of the lions. Except the lions turned out not to have any teeth, and the
business of putting galleries online barely qualified as carrion.
**A Familiar Problem**
Sum up all these sources of error, and it's no wonder we had such a bad idea
for a company. We did the first thing we thought of; we were ambivalent about
being in business at all; and we deliberately chose an impoverished market to
avoid competition.
Looking at the applications for the Summer Founders Program, I see signs of
all three. But the first is by far the biggest problem. Most of the groups
applying have not stopped to ask: of all the things we could do, is _this_ the
one with the best chance of making money?
If they'd already been through their Artix phase, they'd have learned to ask
that. After the reception we got from art dealers, we were ready to. This
time, we thought, let's make something people want.
Reading the _Wall Street Journal_ for a week should give anyone ideas for two
or three new startups. The articles are full of descriptions of problems that
need to be solved. But most of the applicants don't seem to have looked far
for ideas.
We expected the most common proposal to be for multiplayer games. We were not
far off: this was the second most common. The most common was some combination
of a blog, a calendar, a dating site, and Friendster. Maybe there is some new
killer app to be discovered here, but it seems perverse to go poking around in
this fog when there are valuable, unsolved problems lying about in the open
for anyone to see. Why did no one propose a new scheme for micropayments? An
ambitious project, perhaps, but I can't believe we've considered every
alternative. And newspapers and magazines are (literally) dying for a
solution.
Why did so few applicants really think about what customers want? I think the
problem with many, as with people in their early twenties generally, is that
they've been trained their whole lives to jump through predefined hoops.
They've spent 15-20 years solving problems other people have set for them. And
how much time deciding what problems would be good to solve? Two or three
course projects? They're good at solving problems, but bad at choosing them.
But that, I'm convinced, is just the effect of training. Or more precisely,
the effect of grading. To make grading efficient, everyone has to solve the
same problem, and that means it has to be decided in advance. It would be
great if schools taught students how to choose problems as well as how to
solve them, but I don't know how you'd run such a class in practice.
**Copper and Tin**
The good news is, choosing problems is something that can be learned. I know
that from experience. Hackers can learn to make things customers want.
This is a controversial view. One expert on "entrepreneurship" told me that
any startup had to include business people, because only they could focus on
what customers wanted. I'll probably alienate this guy forever by quoting him,
but I have to risk it, because his email was such a perfect example of this
view:
> 80% of MIT spinoffs succeed _provided_ they have at least one management
> person in the team at the start. The business person represents the "voice
> of the customer" and that's what keeps the engineers and product development
> on track.
This is, in my opinion, a crock. Hackers are perfectly capable of hearing the
voice of the customer without a business person to amplify the signal for
them. Larry Page and Sergey Brin were grad students in computer science, which
presumably makes them "engineers." Do you suppose Google is only good because
they had some business guy whispering in their ears what customers wanted? It
seems to me the business guys who did the most for Google were the ones who
obligingly flew Altavista into a hillside just as Google was getting started.
The hard part about figuring out what customers want is figuring out that you
need to figure it out. But that's something you can learn quickly. It's like
seeing the other interpretation of an ambiguous picture. As soon as someone
tells you there's a rabbit as well as a duck, it's hard not to see it.
And compared to the sort of problems hackers are used to solving, giving
customers what they want is easy. Anyone who can write an optimizing compiler
can design a UI that doesn't confuse users, once they _choose_ to focus on
that problem. And once you apply that kind of brain power to petty but
profitable questions, you can create wealth very rapidly.
That's the essence of a startup: having brilliant people do work that's
beneath them. Big companies try to hire the right person for the job. Startups
win because they don't—because they take people so smart that they would in a
big company be doing "research," and set them to work instead on problems of
the most immediate and mundane sort. Think Einstein designing refrigerators.
If you want to learn what people want, read Dale Carnegie's _How to Win
Friends and Influence People._ When a friend recommended this book, I
couldn't believe he was serious. But he insisted it was good, so I read it,
and he was right. It deals with the most difficult problem in human
experience: how to see things from other people's point of view, instead of
thinking only of yourself.
Most smart people don't do that very well. But adding this ability to raw
brainpower is like adding tin to copper. The result is bronze, which is so
much harder that it seems a different metal.
A hacker who has learned what to make, and not just how to make, is
extraordinarily powerful. And not just at making money: look what a small
group of volunteers has achieved with Firefox.
Doing an Artix teaches you to make something people want in the same way that
not drinking anything would teach you how much you depend on water. But it
would be more convenient for all involved if the Summer Founders didn't learn
this on our dime—if they could skip the Artix phase and go right on to make
something customers wanted. That, I think, is going to be the real experiment
this summer. How long will it take them to grasp this?
We decided we ought to have T-Shirts for the SFP, and we'd been thinking about
what to print on the back. Till now we'd been planning to use
> If you can read this, I should be working.
but now we've decided it's going to be
> Make something people want.
* * *
July 2004
_(This essay is derived from a talk at Oscon 2004.)_
A few months ago I finished a new book, and in reviews I keep noticing words
like "provocative'' and "controversial.'' To say nothing of "idiotic.''
I didn't mean to make the book controversial. I was trying to make it
efficient. I didn't want to waste people's time telling them things they
already knew. It's more efficient just to give them the diffs. But I suppose
that's bound to yield an alarming book.
**Edisons**
There's no controversy about which idea is most controversial: the suggestion
that variation in wealth might not be as big a problem as we think.
I didn't say in the book that variation in wealth was in itself a good thing.
I said in some situations it might be a sign of good things. A throbbing
headache is not a good thing, but it can be a sign of a good thing-- for
example, that you're recovering consciousness after being hit on the head.
Variation in wealth can be a sign of variation in productivity. (In a society
of one, they're identical.) And _that_ is almost certainly a good thing: if
your society has no variation in productivity, it's probably not because
everyone is Thomas Edison. It's probably because you have no Thomas Edisons.
In a low-tech society you don't see much variation in productivity. If you
have a tribe of nomads collecting sticks for a fire, how much more productive
is the best stick gatherer going to be than the worst? A factor of two?
Whereas when you hand people a complex tool like a computer, the variation in
what they can do with it is enormous.
That's not a new idea. Fred Brooks wrote about it in 1974, and the study he
quoted was published in 1968. But I think he underestimated the variation
between programmers. He wrote about productivity in lines of code: the best
programmers can solve a given problem in a tenth the time. But what if the
problem isn't given? In programming, as in many fields, the hard part isn't
solving problems, but deciding what problems to solve. Imagination is hard to
measure, but in practice it dominates the kind of productivity that's measured
in lines of code.
Productivity varies in any field, but there are few in which it varies so
much. The variation between programmers is so great that it becomes a
difference in kind. I don't think this is something intrinsic to programming,
though. In every field, technology magnifies differences in productivity. I
think what's happening in programming is just that we have a lot of
technological leverage. But in every field the lever is getting longer, so the
variation we see is something that more and more fields will see as time goes
on. And the success of companies, and countries, will depend increasingly on
how they deal with it.
If variation in productivity increases with technology, then the contribution
of the most productive individuals will not only be disproportionately large,
but will actually grow with time. When you reach the point where 90% of a
group's output is created by 1% of its members, you lose big if something
(whether Viking raids, or central planning) drags their productivity down to
the average.
If we want to get the most out of them, we need to understand these especially
productive people. What motivates them? What do they need to do their jobs?
How do you recognize them? How do you get them to come and work for you? And
then of course there's the question, how do you become one?
**More than Money**
I know a handful of super-hackers, so I sat down and thought about what they
have in common. Their defining quality is probably that they really love to
program. Ordinary programmers write code to pay the bills. Great hackers think
of it as something they do for fun, and which they're delighted to find people
will pay them for.
Great programmers are sometimes said to be indifferent to money. This isn't
quite true. It is true that all they really care about is doing interesting
work. But if you make enough money, you get to work on whatever you want, and
for that reason hackers _are_ attracted by the idea of making really large
amounts of money. But as long as they still have to show up for work every
day, they care more about what they do there than how much they get paid for
it.
Economically, this is a fact of the greatest importance, because it means you
don't have to pay great hackers anything like what they're worth. A great
programmer might be ten or a hundred times as productive as an ordinary one,
but he'll consider himself lucky to get paid three times as much. As I'll
explain later, this is partly because great hackers don't know how good they
are. But it's also because money is not the main thing they want.
What do hackers want? Like all craftsmen, hackers like good tools. In fact,
that's an understatement. Good hackers find it unbearable to use bad tools.
They'll simply refuse to work on projects with the wrong infrastructure.
At a startup I once worked for, one of the things pinned up on our bulletin
board was an ad from IBM. It was a picture of an AS400, and the headline read,
I think, "hackers despise it.''
When you decide what infrastructure to use for a project, you're not just
making a technical decision. You're also making a social decision, and this
may be the more important of the two. For example, if your company wants to
write some software, it might seem a prudent choice to write it in Java. But
when you choose a language, you're also choosing a community. The programmers
you'll be able to hire to work on a Java project won't be as smart as the ones
you could get to work on a project written in Python. And the quality of your
hackers probably matters more than the language you choose. Though, frankly,
the fact that good hackers prefer Python to Java should tell you something
about the relative merits of those languages.
Business types prefer the most popular languages because they view languages
as standards. They don't want to bet the company on Betamax. The thing about
languages, though, is that they're not just standards. If you have to move
bits over a network, by all means use TCP/IP. But a programming language isn't
just a format. A programming language is a medium of expression.
I've read that Java has just overtaken Cobol as the most popular language. As
a standard, you couldn't wish for more. But as a medium of expression, you
could do a lot better. Of all the great programmers I can think of, I know of
only one who would voluntarily program in Java. And of all the great
programmers I can think of who don't work for Sun, on Java, I know of zero.
Great hackers also generally insist on using open source software. Not just
because it's better, but because it gives them more control. Good hackers
insist on control. This is part of what makes them good hackers: when
something's broken, they need to fix it. You want them to feel this way about
the software they're writing for you. You shouldn't be surprised when they
feel the same way about the operating system.
A couple years ago a venture capitalist friend told me about a new startup he
was involved with. It sounded promising. But the next time I talked to him, he
said they'd decided to build their software on Windows NT, and had just hired
a very experienced NT developer to be their chief technical officer. When I
heard this, I thought, these guys are doomed. One, the CTO couldn't be a first
rate hacker, because to become an eminent NT developer he would have had to
use NT voluntarily, multiple times, and I couldn't imagine a great hacker
doing that; and two, even if he was good, he'd have a hard time hiring anyone
good to work for him if the project had to be built on NT.
**The Final Frontier**
After software, the most important tool to a hacker is probably his office.
Big companies think the function of office space is to express rank. But
hackers use their offices for more than that: they use their office as a place
to think in. And if you're a technology company, their thoughts are your
product. So making hackers work in a noisy, distracting environment is like
having a paint factory where the air is full of soot.
The cartoon strip Dilbert has a lot to say about cubicles, and with good
reason. All the hackers I know despise them. The mere prospect of being
interrupted is enough to prevent hackers from working on hard problems. If you
want to get real work done in an office with cubicles, you have two options:
work at home, or come in early or late or on a weekend, when no one else is
there. Don't companies realize this is a sign that something is broken? An
office environment is supposed to be something that _helps_ you work, not
something you work despite.
Companies like Cisco are proud that everyone there has a cubicle, even the
CEO. But they're not so advanced as they think; obviously they still view
office space as a badge of rank. Note too that Cisco is famous for doing very
little product development in house. They get new technology by buying the
startups that created it-- where presumably the hackers did have somewhere
quiet to work.
One big company that understands what hackers need is Microsoft. I once saw a
recruiting ad for Microsoft with a big picture of a door. Work for us, the
premise was, and we'll give you a place to work where you can actually get
work done. And you know, Microsoft is remarkable among big companies in that
they are able to develop software in house. Not well, perhaps, but well
enough.
If companies want hackers to be productive, they should look at what they do
at home. At home, hackers can arrange things themselves so they can get the
most done. And when they work at home, hackers don't work in noisy, open
spaces; they work in rooms with doors. They work in cosy, neighborhoody places
with people around and somewhere to walk when they need to mull something
over, instead of in glass boxes set in acres of parking lots. They have a sofa
they can take a nap on when they feel tired, instead of sitting in a coma at
their desk, pretending to work. There's no crew of people with vacuum cleaners
that roars through every evening during the prime hacking hours. There are no
meetings or, God forbid, corporate retreats or team-building exercises. And
when you look at what they're doing on that computer, you'll find it
reinforces what I said earlier about tools. They may have to use Java and
Windows at work, but at home, where they can choose for themselves, you're
more likely to find them using Perl and Linux.
Indeed, these statistics about Cobol or Java being the most popular language
can be misleading. What we ought to look at, if we want to know what tools are
best, is what hackers choose when they can choose freely-- that is, in
projects of their own. When you ask that question, you find that open source
operating systems already have a dominant market share, and the number one
language is probably Perl.
**Interesting**
Along with good tools, hackers want interesting projects. What makes a project
interesting? Well, obviously overtly sexy applications like stealth planes or
special effects software would be interesting to work on. But any application
can be interesting if it poses novel technical challenges. So it's hard to
predict which problems hackers will like, because some become interesting only
when the people working on them discover a new kind of solution. Before ITA
(who wrote the software inside Orbitz), the people working on airline fare
searches probably thought it was one of the most boring applications
imaginable. But ITA made it interesting by redefining the problem in a more
ambitious way.
I think the same thing happened at Google. When Google was founded, the
conventional wisdom among the so-called portals was that search was boring and
unimportant. But the guys at Google didn't think search was boring, and that's
why they do it so well.
This is an area where managers can make a difference. Like a parent saying to
a child, I bet you can't clean up your whole room in ten minutes, a good
manager can sometimes redefine a problem as a more interesting one. Steve Jobs
seems to be particularly good at this, in part simply by having high
standards. There were a lot of small, inexpensive computers before the Mac. He
redefined the problem as: make one that's beautiful. And that probably drove
the developers harder than any carrot or stick could.
They certainly delivered. When the Mac first appeared, you didn't even have to
turn it on to know it would be good; you could tell from the case. A few weeks
ago I was walking along the street in Cambridge, and in someone's trash I saw
what appeared to be a Mac carrying case. I looked inside, and there was a Mac
SE. I carried it home and plugged it in, and it booted. The happy Macintosh
face, and then the finder. My God, it was so simple. It was just like ...
Google.
Hackers like to work for people with high standards. But it's not enough just
to be exacting. You have to insist on the right things. Which usually means
that you have to be a hacker yourself. I've seen occasional articles about how
to manage programmers. Really there should be two articles: one about what to
do if you are yourself a programmer, and one about what to do if you're not.
And the second could probably be condensed into two words: give up.
The problem is not so much the day to day management. Really good hackers are
practically self-managing. The problem is, if you're not a hacker, you can't
tell who the good hackers are. A similar problem explains why American cars
are so ugly. I call it the _design paradox._ You might think that you could
make your products beautiful just by hiring a great designer to design them.
But if you yourself don't have good taste, how are you going to recognize a
good designer? By definition you can't tell from his portfolio. And you can't
go by the awards he's won or the jobs he's had, because in design, as in most
fields, those tend to be driven by fashion and schmoozing, with actual ability
a distant third. There's no way around it: you can't manage a process intended
to produce beautiful things without knowing what beautiful is. American cars
are ugly because American car companies are run by people with bad taste.
Many people in this country think of taste as something elusive, or even
frivolous. It is neither. To drive design, a manager must be the most
demanding user of a company's products. And if you have really good taste, you
can, as Steve Jobs does, make satisfying you the kind of problem that good
people like to work on.
**Nasty Little Problems**
It's pretty easy to say what kinds of problems are not interesting: those
where instead of solving a few big, clear problems, you have to solve a lot
of nasty little ones. One of the worst kinds of projects is writing an
interface to a piece of software that's full of bugs. Another is when you have
to customize something for an individual client's complex and ill-defined
needs. To hackers these kinds of projects are the death of a thousand cuts.
The distinguishing feature of nasty little problems is that you don't learn
anything from them. Writing a compiler is interesting because it teaches you
what a compiler is. But writing an interface to a buggy piece of software
doesn't teach you anything, because the bugs are random. So it's not just
fastidiousness that makes good hackers avoid nasty little problems. It's more
a question of self-preservation. Working on nasty little problems makes you
stupid. Good hackers avoid it for the same reason models avoid cheeseburgers.
Of course some problems inherently have this character. And because of supply
and demand, they pay especially well. So a company that found a way to get
great hackers to work on tedious problems would be very successful. How would
you do it?
One place this happens is in startups. At our startup we had Robert Morris
working as a system administrator. That's like having the Rolling Stones play
at a bar mitzvah. You can't hire that kind of talent. But people will do any
amount of drudgery for companies of which they're the founders.
Bigger companies solve the problem by partitioning the company. They get smart
people to work for them by establishing a separate R&D department where
employees don't have to work directly on customers' nasty little problems.
In this model, the research department functions like a mine. They produce new
ideas; maybe the rest of the company will be able to use them.
You may not have to go to this extreme. Bottom-up programming suggests another
way to partition the company: have the smart people work as toolmakers. If
your company makes software to do x, have one group that builds tools for
writing software of that type, and another that uses these tools to write the
applications. This way you might be able to get smart people to write 99% of
your code, but still keep them almost as insulated from users as they would be
in a traditional research department. The toolmakers would have users, but
they'd only be the company's own developers.
If Microsoft used this approach, their software wouldn't be so full of
security holes, because the less smart people writing the actual applications
wouldn't be doing low-level stuff like allocating memory. Instead of writing
Word directly in C, they'd be plugging together big Lego blocks of Word-
language. (Duplo, I believe, is the technical term.)
**Clumping**
Along with interesting problems, what good hackers like is other good hackers.
Great hackers tend to clump together-- sometimes spectacularly so, as at Xerox
Parc. So you won't attract good hackers in linear proportion to how good an
environment you create for them. The tendency to clump means it's more like
the square of the environment. So it's winner take all. At any given time,
there are only about ten or twenty places where hackers most want to work, and
if you aren't one of them, you won't just have fewer great hackers, you'll
have zero.
Having great hackers is not, by itself, enough to make a company successful.
It works well for Google and ITA, which are two of the hot spots right now,
but it didn't help Thinking Machines or Xerox. Sun had a good run for a while,
but their business model is a down elevator. In that situation, even the best
hackers can't save you.
I think, though, that all other things being equal, a company that can attract
great hackers will have a huge advantage. There are people who would disagree
with this. When we were making the rounds of venture capital firms in the
1990s, several told us that software companies didn't win by writing great
software, but through brand, and dominating channels, and doing the right
deals.
They really seemed to believe this, and I think I know why. I think what a lot
of VCs are looking for, at least unconsciously, is the next Microsoft. And of
course if Microsoft is your model, you shouldn't be looking for companies that
hope to win by writing great software. But VCs are mistaken to look for the
next Microsoft, because no startup can be the next Microsoft unless some other
company is prepared to bend over at just the right moment and be the next IBM.
It's a mistake to use Microsoft as a model, because their whole culture
derives from that one lucky break. Microsoft is a bad data point. If you throw
them out, you find that good products do tend to win in the market. What VCs
should be looking for is the next Apple, or the next Google.
I think Bill Gates knows this. What worries him about Google is not the power
of their brand, but the fact that they have better hackers.
**Recognition**
So who are the great hackers? How do you know when you meet one? That turns
out to be very hard. Even hackers can't tell. I'm pretty sure now that my
friend Trevor Blackwell is a great hacker. You may have read on Slashdot how
he made his own Segway. The remarkable thing about this project was that he
wrote all the software in one day (in Python, incidentally).
For Trevor, that's par for the course. But when I first met him, I thought he
was a complete idiot. He was standing in Robert Morris's office babbling at
him about something or other, and I remember standing behind him making
frantic gestures at Robert to shoo this nut out of his office so we could go
to lunch. Robert says he misjudged Trevor at first too. Apparently when Robert
first met him, Trevor had just begun a new scheme that involved writing down
everything about every aspect of his life on a stack of index cards, which he
carried with him everywhere. He'd also just arrived from Canada, and had a
strong Canadian accent and a mullet.
The problem is compounded by the fact that hackers, despite their reputation
for social obliviousness, sometimes put a good deal of effort into seeming
smart. When I was in grad school I used to hang around the MIT AI Lab
occasionally. It was kind of intimidating at first. Everyone there spoke so
fast. But after a while I learned the trick of speaking fast. You don't have
to think any faster; just use twice as many words to say everything.
With this amount of noise in the signal, it's hard to tell good hackers when
you meet them. I can't tell, even now. You also can't tell from their resumes.
It seems like the only way to judge a hacker is to work with him on something.
And this is the reason that high-tech areas only happen around universities.
The active ingredient here is not so much the professors as the students.
Startups grow up around universities because universities bring together
promising young people and make them work on the same projects. The smart ones
learn who the other smart ones are, and together they cook up new projects of
their own.
Because you can't tell a great hacker except by working with him, hackers
themselves can't tell how good they are. This is true to a degree in most
fields. I've found that people who are great at something are not so much
convinced of their own greatness as mystified at why everyone else seems so
incompetent.
But it's particularly hard for hackers to know how good they are, because it's
hard to compare their work. This is easier in most other fields. In the
hundred meters, you know in 10 seconds who's fastest. Even in math there seems
to be a general consensus about which problems are hard to solve, and what
constitutes a good solution. But hacking is like writing. Who can say which of
two novels is better? Certainly not the authors.
With hackers, at least, other hackers can tell. That's because, unlike
novelists, hackers collaborate on projects. When you get to hit a few
difficult problems over the net at someone, you learn pretty quickly how hard
they hit them back. But hackers can't watch themselves at work. So if you ask
a great hacker how good he is, he's almost certain to reply, I don't know.
He's not just being modest. He really doesn't know.
And none of us know, except about people we've actually worked with. Which
puts us in a weird situation: we don't know who our heroes should be. The
hackers who become famous tend to become famous by random accidents of PR.
Occasionally I need to give an example of a great hacker, and I never know who
to use. The first names that come to mind always tend to be people I know
personally, but it seems lame to use them. So, I think, maybe I should say
Richard Stallman, or Linus Torvalds, or Alan Kay, or someone famous like that.
But I have no idea if these guys are great hackers. I've never worked with
them on anything.
If there is a Michael Jordan of hacking, no one knows, including him.
**Cultivation**
Finally, the question the hackers have all been wondering about: how do you
become a great hacker? I don't know if it's possible to make yourself into
one. But it's certainly possible to do things that make you stupid, and if you
can make yourself stupid, you can probably make yourself smart too.
The key to being a good hacker may be to work on what you like. When I think
about the great hackers I know, one thing they have in common is the extreme
difficulty of making them work on anything they don't want to. I don't know if
this is cause or effect; it may be both.
To do something well you have to love it. So to the extent you can preserve
hacking as something you love, you're likely to do it well. Try to keep the
sense of wonder you had about programming at age 14. If you're worried that
your current job is rotting your brain, it probably is.
The best hackers tend to be smart, of course, but that's true in a lot of
fields. Is there some quality that's unique to hackers? I asked some friends,
and the number one thing they mentioned was curiosity. I'd always supposed
that all smart people were curious-- that curiosity was simply the first
derivative of knowledge. But apparently hackers are particularly curious,
especially about how things work. That makes sense, because programs are in
effect giant descriptions of how things work.
Several friends mentioned hackers' ability to concentrate-- their ability, as
one put it, to "tune out everything outside their own heads." I've certainly
noticed this. And I've heard several hackers say that after drinking even half
a beer they can't program at all. So maybe hacking does require some special
ability to focus. Perhaps great hackers can load a large amount of context
into their head, so that when they look at a line of code, they see not just
that line but the whole program around it. John McPhee wrote that Bill
Bradley's success as a basketball player was due partly to his extraordinary
peripheral vision. "Perfect" eyesight means about 47 degrees of vertical
peripheral vision. Bill Bradley had 70; he could see the basket when he was
looking at the floor. Maybe great hackers have some similar inborn ability. (I
cheat by using a very dense language, which shrinks the court.)
This could explain the disconnect over cubicles. Maybe the people in charge of
facilities, not having any concentration to shatter, have no idea that working
in a cubicle feels to a hacker like having one's brain in a blender. (Whereas
Bill, if the rumors of autism are true, knows all too well.)
One difference I've noticed between great hackers and smart people in general
is that hackers are more politically incorrect. To the extent there is a
secret handshake among good hackers, it's when they know one another well
enough to express opinions that would get them stoned to death by the general
public. And I can see why political incorrectness would be a useful quality in
programming. Programs are very complex and, at least in the hands of good
programmers, very fluid. In such situations it's helpful to have a habit of
questioning assumptions.
Can you cultivate these qualities? I don't know. But you can at least not
repress them. So here is my best shot at a recipe. If it is possible to make
yourself into a great hacker, the way to do it may be to make the following
deal with yourself: you never have to work on boring projects (unless your
family will starve otherwise), and in return, you'll never allow yourself to
do a half-assed job. All the great hackers I know seem to have made that deal,
though perhaps none of them had any choice in the matter.
August 2003
We may be able to improve the accuracy of Bayesian spam filters by having them
follow links to see what's waiting at the other end. Richard Jowsey of
death2spam now does this in borderline cases, and reports that it works well.
Why only do it in borderline cases? And why only do it once?
As I mentioned in Will Filters Kill Spam?, following all the urls in a spam
would have an amusing side-effect. If popular email clients did this in order
to filter spam, the spammer's servers would take a serious pounding. The more
I think about this, the better an idea it seems. This isn't just amusing; it
would be hard to imagine a more perfectly targeted counterattack on spammers.
So I'd like to suggest an additional feature to those working on spam filters:
a "punish" mode which, if turned on, would spider every url in a suspected
spam n times, where n could be set by the user.
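To make this concrete, here is a rough sketch in Python of what such a punish mode might look like, assuming the message has already been flagged as spam. The names, the url pattern, and the default n are placeholders for illustration, not part of any real filter:

```
import re
import urllib.request

URL_RE = re.compile(r'https?://[^\s<>"]+')

def punish(message_text, n=10, timeout=5):
    """Fetch every url in a suspected spam n times, ignoring errors."""
    for url in URL_RE.findall(message_text):
        for _ in range(n):
            try:
                urllib.request.urlopen(url, timeout=timeout).read()
            except Exception:
                pass  # an overloaded or dead server is the whole point
```

In practice you'd want to run this only against urls on the kind of blacklist described below, so that innocent sites never get hit.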
As many people have noted, one of the problems with the current email system
is that it's too passive. It does whatever you tell it. So far all the
suggestions for fixing the problem seem to involve new protocols. This one
wouldn't.
If widely used, auto-retrieving spam filters would make the email system
_rebound._ The huge volume of the spam, which has so far worked in the
spammer's favor, would now work against him, like a branch snapping back in
his face. Auto-retrieving spam filters would drive the spammer's costs up, and
his sales down: his bandwidth usage would go through the roof, and his servers
would grind to a halt under the load, which would make them unavailable to the
people who would have responded to the spam.
Pump out a million emails an hour, get a million hits an hour on your servers.
We would want to ensure that this is only done to suspected spams. As a rule,
any url sent to millions of people is likely to be a spam url, so submitting
every http request in every email would work fine nearly all the time. But
there are a few cases where this isn't true: the urls at the bottom of mails
sent from free email services like Yahoo Mail and Hotmail, for example.
To protect such sites, and to prevent abuse, auto-retrieval should be combined
with blacklists of spamvertised sites. Only sites on a blacklist would get
crawled, and sites would be blacklisted only after being inspected by humans.
The lifetime of a spam must be several hours at least, so it should be easy to
update such a list in time to interfere with a spam promoting a new site.
High-volume auto-retrieval would only be practical for users on high-bandwidth
connections, but there are enough of those to cause spammers serious trouble.
Indeed, this solution neatly mirrors the problem. The problem with spam is
that in order to reach a few gullible people the spammer sends mail to
everyone. The non-gullible recipients are merely collateral damage. But the
non-gullible majority won't stop getting spam until they can stop (or threaten
to stop) the gullible from responding to it. Auto-retrieving spam filters
offer them a way to do this.
Would that kill spam? Not quite. The biggest spammers could probably protect
their servers against auto-retrieving filters. However, the easiest and
cheapest way for them to do it would be to include working unsubscribe links
in their mails. And this would be a necessity for smaller fry, and for
"legitimate" sites that hired spammers to promote them. So if auto-retrieving
filters became widespread, they'd become auto-unsubscribing filters.
In this scenario, spam would, like OS crashes, viruses, and popups, become one
of those plagues that only afflict people who don't bother to use the right
software.
November 2005
In the next few years, venture capital funds will find themselves squeezed
from four directions. They're already stuck with a seller's market, because of
the huge amounts they raised at the end of the Bubble and still haven't
invested. This by itself is not the end of the world. In fact, it's just a
more extreme version of the norm in the VC business: too much money chasing
too few deals.
Unfortunately, those few deals now want less and less money, because it's
getting so cheap to start a startup. The four causes: open source, which makes
software free; Moore's law, which makes hardware geometrically closer to free;
the Web, which makes promotion free if you're good; and better languages,
which make development a lot cheaper.
When we started our startup in 1995, the first three were our biggest
expenses. We had to pay $5000 for the Netscape Commerce Server, the only
software that then supported secure http connections. We paid $3000 for a
server with a 90 MHz processor and 32 meg of memory. And we paid a PR firm
about $30,000 to promote our launch.
Now you could get all three for nothing. You can get the software for free;
people throw away computers more powerful than our first server; and if you
make something good you can generate ten times as much traffic by word of
mouth online as our first PR firm got through the print media.
And of course another big change for the average startup is that programming
languages have improved-- or rather, the median language has. At most startups
ten years ago, software development meant ten programmers writing code in C++.
Now the same work might be done by one or two using Python or Ruby.
During the Bubble, a lot of people predicted that startups would outsource
their development to India. I think a better model for the future is David
Heinemeier Hansson, who outsourced his development to a more powerful language
instead. A lot of well-known applications are now, like BaseCamp, written by
just one programmer. And one guy is more than 10x cheaper than ten, because
(a) he won't waste any time in meetings, and (b) since he's probably a
founder, he can pay himself nothing.
Because starting a startup is so cheap, venture capitalists now often want to
give startups more money than the startups want to take. VCs like to invest
several million at a time. But as one VC told me after a startup he funded
would only take about half a million, "I don't know what we're going to do.
Maybe we'll just have to give some of it back." Meaning give some of the fund
back to the institutional investors who supplied it, because it wasn't going
to be possible to invest it all.
Into this already bad situation comes the third problem: Sarbanes-Oxley.
Sarbanes-Oxley is a law, passed after the Bubble, that drastically increases
the regulatory burden on public companies. And in addition to the cost of
compliance, which is at least two million dollars a year, the law introduces
frightening legal exposure for corporate officers. An experienced CFO I know
said flatly: "I would not want to be CFO of a public company now."
You might think that responsible corporate governance is an area where you
can't go too far. But you can go too far in any law, and this remark convinced
me that Sarbanes-Oxley must have. This CFO is both the smartest and the most
upstanding money guy I know. If Sarbanes-Oxley deters people like him from
being CFOs of public companies, that's proof enough that it's broken.
Largely because of Sarbanes-Oxley, few startups go public now. For all
practical purposes, succeeding now equals getting bought. Which means VCs are
now in the business of finding promising little 2-3 man startups and pumping
them up into companies that cost $100 million to acquire. They didn't mean to
be in this business; it's just what their business has evolved into.
Hence the fourth problem: the acquirers have begun to realize they can buy
wholesale. Why should they wait for VCs to make the startups they want more
expensive? Most of what the VCs add, acquirers don't want anyway. The
acquirers already have brand recognition and HR departments. What they really
want is the software and the developers, and that's what the startup is in the
early phase: concentrated software and developers.
Google, typically, seems to have been the first to figure this out. "Bring us
your startups early," said Google's speaker at the Startup School. They're
quite explicit about it: they like to acquire startups at just the point where
they would do a Series A round. (The Series A round is the first round of real
VC funding; it usually happens in the first year.) It is a brilliant strategy,
and one that other big technology companies will no doubt try to duplicate.
Unless they want to have still more of their lunch eaten by Google.
Of course, Google has an advantage in buying startups: a lot of the people
there are rich, or expect to be when their options vest. Ordinary employees
find it very hard to recommend an acquisition; it's just too annoying to see a
bunch of twenty year olds get rich when you're still working for salary. Even
if it's the right thing for your company to do.
**The Solution(s)**
Bad as things look now, there is a way for VCs to save themselves. They need
to do two things, one of which won't surprise them, and another that will seem
an anathema.
Let's start with the obvious one: lobby to get Sarbanes-Oxley loosened. This
law was created to prevent future Enrons, not to destroy the IPO market. Since
the IPO market was practically dead when it passed, few saw what bad effects
it would have. But now that technology has recovered from the last bust, we
can see clearly what a bottleneck Sarbanes-Oxley has become.
Startups are fragile plants—seedlings, in fact. These seedlings are worth
protecting, because they grow into the trees of the economy. Much of the
economy's growth is their growth. I think most politicians realize that. But
they don't realize just how fragile startups are, and how easily they can
become collateral damage of laws meant to fix some other problem.
Still more dangerously, when you destroy startups, they make very little
noise. If you step on the toes of the coal industry, you'll hear about it. But
if you inadvertently squash the startup industry, all that happens is that the
founders of the next Google stay in grad school instead of starting a company.
My second suggestion will seem shocking to VCs: let founders cash out
partially in the Series A round. At the moment, when VCs invest in a startup,
all the stock they get is newly issued and all the money goes to the company.
They could buy some stock directly from the founders as well.
Most VCs have an almost religious rule against doing this. They don't want
founders to get a penny till the company is sold or goes public. VCs are
obsessed with control, and they worry that they'll have less leverage over the
founders if the founders have any money.
This is a dumb plan. In fact, letting the founders sell a little stock early
would generally be better for the company, because it would cause the
founders' attitudes toward risk to be aligned with the VCs'. As things
currently work, their attitudes toward risk tend to be diametrically opposed:
the founders, who have nothing, would prefer a 100% chance of $1 million to a
20% chance of $10 million, while the VCs can afford to be "rational" and
prefer the latter.
Whatever they say, the reason founders are selling their companies early
instead of doing Series A rounds is that they get paid up front. That first
million is just worth so much more than the subsequent ones. If founders could
sell a little stock early, they'd be happy to take VC money and bet the rest
on a bigger outcome.
So why not let the founders have that first million, or at least half million?
The VCs would get the same number of shares for the money. So what if some of the
money would go to the founders instead of the company?
Some VCs will say this is unthinkable—that they want all their money to be put
to work growing the company. But the fact is, the huge size of current VC
investments is dictated by the structure of VC funds, not the needs of
startups. Often as not these large investments go to work destroying the
company rather than growing it.
The angel investors who funded our startup let the founders sell some stock
directly to them, and it was a good deal for everyone. The angels made a huge
return on that investment, so they're happy. And for us founders it blunted
the terrifying all-or-nothingness of a startup, which in its raw form is more
a distraction than a motivator.
If VCs are frightened at the idea of letting founders partially cash out, let
me tell them something still more frightening: you are now competing directly
with Google.
**Thanks** to Trevor Blackwell, Sarah Harlin, Jessica Livingston, and Robert
Morris for reading drafts of this.
March 2006, rev August 2009
A couple days ago I found to my surprise that I'd been granted a patent. It
issued in 2003, but no one told me. I wouldn't know about it now except that a
few months ago, while visiting Yahoo, I happened to run into a Big Cheese I
knew from working there in the late nineties. He brought up something called
Revenue Loop, which Viaweb had been working on when they bought us.
The idea is basically that you sort search results not in order of textual
"relevance" (as search engines did then) nor in order of how much advertisers
bid (as Overture did) but in order of the bid times the number of
transactions. Ordinarily you'd do this for shopping searches, though in fact
one of the features of our scheme is that it automatically detects which
searches are shopping searches.
If you just order the results in order of bids, you can make the search
results useless, because the first results could be dominated by lame sites
that had bid the most. But if you order results by bid multiplied by
transactions, far from selling out, you're getting a _better_ measure of
relevance. What could be a better sign that someone was satisfied with a
search result than going to the site and buying something?
And, of course, this algorithm automatically maximizes the revenue of the
search engine.
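In code the core of the idea is tiny. Here is a sketch, with made-up fields standing in for whatever a real search engine would track per result:

```
def revenue_loop_order(results):
    """Sort results by bid times past transactions, highest first.

    Each result is a dict with (hypothetical) 'url', 'bid', and
    'transactions' fields.
    """
    return sorted(results, key=lambda r: r['bid'] * r['transactions'],
                  reverse=True)

results = [
    {'url': 'lamesite.com',  'bid': 2.00, 'transactions': 1},
    {'url': 'goodstore.com', 'bid': 0.50, 'transactions': 40},
]
print(revenue_loop_order(results)[0]['url'])   # goodstore.com: 20.0 beats 2.0
```

A site that bid less but actually converted searchers into buyers rises to the top, which is exactly the point.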
Everyone is focused on this type of approach now, but few were in 1998. In
1998 it was all about selling banner ads. We didn't know that, so we were
pretty excited when we figured out what seemed to us the optimal way of doing
shopping searches.
When Yahoo was thinking of buying us, we had a meeting with Jerry Yang in New
York. For him, I now realize, this was supposed to be one of those meetings
when you check out a company you've pretty much decided to buy, just to make
sure they're ok guys. We weren't expected to do more than chat and seem smart
and reasonable. He must have been dismayed when I jumped up to the whiteboard
and launched into a presentation of our exciting new technology.
I was just as dismayed when he didn't seem to care at all about it. At the
time I thought, "boy, is this guy poker-faced. We present to him what has to
be the optimal way of sorting product search results, and he's not even
curious." I didn't realize till much later why he didn't care. In 1998,
advertisers were overpaying enormously for ads on web sites. In 1998, if
advertisers paid the maximum that traffic was worth to them, Yahoo's revenues
would have _decreased._
Things are different now, of course. Now this sort of thing is all the rage.
So when I ran into the Yahoo exec I knew from the old days in the Yahoo
cafeteria a few months ago, the first thing he remembered was not
(fortunately) all the fights I had with him, but Revenue Loop.
"Well," I said, "I think we actually applied for a patent on it. I'm not sure
what happened to the application after I left."
"Really? That would be an important patent."
So someone investigated, and sure enough, that patent application had
continued in the pipeline for several years after, and finally issued in 2003.
The main thing that struck me on reading it, actually, is that lawyers at some
point messed up my nice clear writing. Some clever person with a spell checker
reduced one section to Zen-like incomprehensibility:
> Also, common spelling errors will tend to get fixed. For example, if users
> searching for "compact disc player" end up spending considerable money at
> sites offering compact disc players, then those pages will have a higher
> relevance for that search phrase, even though the phrase "compact disc
> player" is not present on those pages.
(That "compat disc player" wasn't a typo, guys.)
For the fine prose of the original, see the provisional application of
February 1998, back when we were still Viaweb and couldn't afford to pay
lawyers to turn every "a lot of" into "considerable."
January 2003
_(This article was given as a talk at the 2003 Spam Conference. It describes
the work I've done to improve the performance of the algorithm described in A
Plan for Spam, and what I plan to do in the future.)_
The first discovery I'd like to present here is an algorithm for lazy
evaluation of research papers. Just write whatever you want and don't cite any
previous work, and indignant readers will send you references to all the
papers you should have cited. I discovered this algorithm after ``A Plan for
Spam'' was on Slashdot.
Spam filtering is a subset of text classification, which is a well established
field, but the first papers about Bayesian spam filtering per se seem to have
been two given at the same conference in 1998, one by Pantel and Lin, and
another by a group from Microsoft Research.
When I heard about this work I was a bit surprised. If people had been onto
Bayesian filtering four years ago, why wasn't everyone using it? When I read
the papers I found out why. Pantel and Lin's filter was the more effective of
the two, but it only caught 92% of spam, with 1.16% false positives.
When I tried writing a Bayesian spam filter, it caught 99.5% of spam with less
than .03% false positives. It's always alarming when two people trying the
same experiment get widely divergent results. It's especially alarming here
because those two sets of numbers might yield opposite conclusions. Different
users have different requirements, but I think for many people a filtering
rate of 92% with 1.16% false positives means that filtering is not an
acceptable solution, whereas 99.5% with less than .03% false positives means
that it is.
So why did we get such different numbers? I haven't tried to reproduce Pantel
and Lin's results, but from reading the paper I see five things that probably
account for the difference.
One is simply that they trained their filter on very little data: 160 spam and
466 nonspam mails. Filter performance should still be climbing with data sets
that small. So their numbers may not even be an accurate measure of the
performance of their algorithm, let alone of Bayesian spam filtering in
general.
But I think the most important difference is probably that they ignored
message headers. To anyone who has worked on spam filters, this will seem a
perverse decision. And yet in the very first filters I tried writing, I
ignored the headers too. Why? Because I wanted to keep the problem neat. I
didn't know much about mail headers then, and they seemed to me full of random
stuff. There is a lesson here for filter writers: don't ignore data. You'd
think this lesson would be too obvious to mention, but I've had to learn it
several times.
Third, Pantel and Lin stemmed the tokens, meaning they reduced e.g. both
``mailing'' and ``mailed'' to the root ``mail''. They may have felt they were
forced to do this by the small size of their corpus, but if so this is a kind
of premature optimization.
Fourth, they calculated probabilities differently. They used all the tokens,
whereas I only use the 15 most significant. If you use all the tokens you'll
tend to miss longer spams, the type where someone tells you their life story
up to the point where they got rich from some multilevel marketing scheme. And
such an algorithm would be easy for spammers to spoof: just add a big chunk of
random text to counterbalance the spam terms.
Finally, they didn't bias against false positives. I think any spam filtering
algorithm ought to have a convenient knob you can twist to decrease the false
positive rate at the expense of the filtering rate. I do this by counting the
occurrences of tokens in the nonspam corpus double.
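For concreteness, here is roughly what that per-token calculation looks like in Python, following the approach of A Plan for Spam, with the nonspam counts doubled. The corpus representation and the exact clamping values are assumptions of this sketch:

```
def spam_probability(token, good, bad, ngood, nbad):
    """good/bad: dicts of token counts; ngood/nbad: messages per corpus."""
    g = 2 * good.get(token, 0)      # count nonspam occurrences double
    b = bad.get(token, 0)
    if g + b < 5:                   # too rare to trust
        return None
    p = min(1.0, b / nbad) / (min(1.0, g / ngood) + min(1.0, b / nbad))
    return max(0.01, min(0.99, p))  # never let one token decide by itself
```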
I don't think it's a good idea to treat spam filtering as a straight text
classification problem. You can use text classification techniques, but
solutions can and should reflect the fact that the text is email, and spam in
particular. Email is not just text; it has structure. Spam filtering is not
just classification, because false positives are so much worse than false
negatives that you should treat them as a different kind of error. And the
source of error is not just random variation, but a live human spammer working
actively to defeat your filter.
**Tokens**
Another project I heard about after the Slashdot article was Bill Yerazunis'
CRM114. This is the counterexample to the design principle I just
mentioned. It's a straight text classifier, but such a stunningly effective
one that it manages to filter spam almost perfectly without even knowing
that's what it's doing.
Once I understood how CRM114 worked, it seemed inevitable that I would
eventually have to move from filtering based on single words to an approach
like this. But first, I thought, I'll see how far I can get with single words.
And the answer is, surprisingly far.
Mostly I've been working on smarter tokenization. On current spam, I've been
able to achieve filtering rates that approach CRM114's. These techniques are
mostly orthogonal to Bill's; an optimal solution might incorporate both.
``A Plan for Spam'' uses a very simple definition of a token. Letters, digits,
dashes, apostrophes, and dollar signs are constituent characters, and
everything else is a token separator. I also ignored case.
Now I have a more complicated definition of a token:
1. Case is preserved.
2. Exclamation points are constituent characters.
3. Periods and commas are constituents if they occur between two digits. This lets me get ip addresses and prices intact.
4. A price range like $20-25 yields two tokens, $20 and $25.
5. Tokens that occur within the To, From, Subject, and Return-Path lines, or within urls, get marked accordingly. E.g. ``foo'' in the Subject line becomes ``Subject*foo''. (The asterisk could be any character you don't allow as a constituent.)
Such measures increase the filter's vocabulary, which makes it more
discriminating. For example, in the current filter, ``free'' in the Subject
line has a spam probability of 98%, whereas the same token in the body has a
spam probability of only 65%.
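As a rough illustration (not the actual filter), here is what a tokenizer following rules like these might look like in Python:

```
import re

# Letters, digits, dashes, apostrophes, dollar signs and exclamation
# points are constituents; periods and commas count only when they sit
# between digits, so ip addresses and prices survive intact.
TOKEN_RE = re.compile(r"[A-Za-z0-9$'!-]+(?:[.,]\d[A-Za-z0-9$'!-]*)*")

def tokenize(text, context=None):
    """Yield tokens, prefixed with their context, e.g. 'Subject*foo'."""
    for tok in TOKEN_RE.findall(text):
        m = re.fullmatch(r'\$(\d+)-(\d+)', tok)   # $20-25 -> $20, $25
        for t in (['$' + m.group(1), '$' + m.group(2)] if m else [tok]):
            yield context + '*' + t if context else t

list(tokenize('FREE!! Act now', context='Subject'))
# ['Subject*FREE!!', 'Subject*Act', 'Subject*now']
```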
Here are some of the current probabilities:
Subject*FREE 0.9999
free!! 0.9999
To*free 0.9998
Subject*free 0.9782
free! 0.9199
Free 0.9198
Url*free 0.9091
FREE 0.8747
From*free 0.7636
free 0.6546
In the Plan for Spam filter, all these tokens would have had the same
probability, .7602. That filter recognized about 23,000 tokens. The current
one recognizes about 187,000.
The disadvantage of having a larger universe of tokens is that there is more
chance of misses. Spreading your corpus out over more tokens has the same
effect as making it smaller. If you consider exclamation points as
constituents, for example, then you could end up not having a spam probability
for free with seven exclamation points, even though you know that free with
just two exclamation points has a probability of 99.99%.
One solution to this is what I call degeneration. If you can't find an exact
match for a token, treat it as if it were a less specific version. I consider
terminal exclamation points, uppercase letters, and occurring in one of the
five marked contexts as making a token more specific. For example, if I don't
find a probability for ``Subject*free!'', I look for probabilities for
``Subject*free'', ``free!'', and ``free'', and take whichever one is farthest
from .5.
Here are the alternatives considered if the filter sees ``FREE!!!'' in the
Subject line and doesn't have a probability for it.
Subject*Free!!!
Subject*free!!!
Subject*FREE!
Subject*Free!
Subject*free!
Subject*FREE
Subject*Free
Subject*free
FREE!!!
Free!!!
free!!!
FREE!
Free!
free!
FREE
Free
free
If you do this, be sure to consider versions with initial caps as well as all
uppercase and all lowercase. Spams tend to have more sentences in imperative
mood, and in those the first word is a verb. So verbs with initial caps have
higher spam probabilities than they would in all lowercase. In my filter, the
spam probability of ``Act'' is 98% and for ``act'' only 62%.
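Here is a sketch of degeneration in Python, under the assumption that probs maps tokens to spam probabilities. The variants it generates are roughly, not exactly, the list above:

```
def degenerate(token):
    """Generate less specific versions of a token to fall back on."""
    mark, _, word = token.rpartition('*')       # mark is '' without a context
    variants = []
    for w in (word, word.rstrip('!')):          # with and without trailing !
        for prefix in ([mark] if mark else []) + ['']:
            for case in (w, w.capitalize(), w.lower()):
                t = prefix + '*' + case if prefix else case
                if t != token and t not in variants:
                    variants.append(t)
    return variants

def probability(token, probs, unknown=0.4):     # 0.4 for never-seen tokens
    if token in probs:
        return probs[token]
    known = [probs[t] for t in degenerate(token) if t in probs]
    return max(known, key=lambda p: abs(p - 0.5)) if known else unknown

probs = {'free': 0.65, 'Subject*free': 0.9782, 'free!': 0.9199}
probability('Subject*FREE!!!', probs)           # -> 0.9782
```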
If you increase your filter's vocabulary, you can end up counting the same
word multiple times, according to your old definition of ``same''. Logically,
they're not the same token anymore. But if this still bothers you, let me add
from experience that the words you seem to be counting multiple times tend to
be exactly the ones you'd want to.
Another effect of a larger vocabulary is that when you look at an incoming
mail you find more interesting tokens, meaning those with probabilities far
from .5. I use the 15 most interesting to decide if mail is spam. But you can
run into a problem when you use a fixed number like this. If you find a lot of
maximally interesting tokens, the result can end up being decided by whatever
random factor determines the ordering of equally interesting tokens. One way
to deal with this is to treat some as more interesting than others.
For example, the token ``dalco'' occurs 3 times in my spam corpus and never in
my legitimate corpus. The token ``Url*optmails'' (meaning ``optmails'' within
a url) occurs 1223 times. And yet, as I used to calculate probabilities for
tokens, both would have the same spam probability, the threshold of .99.
That doesn't feel right. There are theoretical arguments for giving these two
tokens substantially different probabilities (Pantel and Lin do), but I
haven't tried that yet. It does seem at least that if we find more than 15
tokens that only occur in one corpus or the other, we ought to give priority
to the ones that occur a lot. So now there are two threshold values. For
tokens that occur only in the spam corpus, the probability is .9999 if they
occur more than 10 times and .9998 otherwise. Ditto at the other end of the
scale for tokens found only in the legitimate corpus.
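A sketch of those two ideas together: the two-threshold cap for tokens seen in only one corpus, and picking the most interesting tokens of a message. The values at the legitimate end are assumed to mirror the spam end:

```
def capped_probability(token, good, bad, base_probability):
    """good/bad are token-count dicts; base_probability is whatever the
    filter normally computes for tokens seen in both corpora."""
    g, b = good.get(token, 0), bad.get(token, 0)
    if b and not g:                             # seen only in spam
        return 0.9999 if b > 10 else 0.9998
    if g and not b:                             # seen only in legitimate mail
        return 0.0001 if g > 10 else 0.0002     # assumed mirror of the above
    return base_probability(token)

def most_interesting(tokens, prob, n=15):
    """The n tokens whose probabilities lie farthest from .5."""
    return sorted(tokens, key=lambda t: abs(prob(t) - 0.5), reverse=True)[:n]
```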
I may later scale token probabilities substantially, but this tiny amount of
scaling at least ensures that tokens get sorted the right way.
Another possibility would be to consider not just 15 tokens, but all the
tokens over a certain threshold of interestingness. Steven Hauser does this in
his statistical spam filter. If you use a threshold, make it very high, or
spammers could spoof you by packing messages with more innocent words.
Finally, what should one do about html? I've tried the whole spectrum of
options, from ignoring it to parsing it all. Ignoring html is a bad idea,
because it's full of useful spam signs. But if you parse it all, your filter
might degenerate into a mere html recognizer. The most effective approach
seems to be the middle course, to notice some tokens but not others. I look at
a, img, and font tags, and ignore the rest. Links and images you should
certainly look at, because they contain urls.
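A minimal sketch of that middle course, using Python's html.parser; which attributes of those tags are worth keeping is a guess (the urls certainly are):

```
from html.parser import HTMLParser

class SpamHTMLExtractor(HTMLParser):
    """Keep the text plus a, img and font tags; ignore all other markup."""
    KEEP = {'a': 'href', 'img': 'src', 'font': 'color'}

    def __init__(self):
        super().__init__()
        self.pieces = []

    def handle_starttag(self, tag, attrs):
        if tag in self.KEEP:
            self.pieces += [v for k, v in attrs if k == self.KEEP[tag] and v]

    def handle_data(self, data):
        self.pieces.append(data)

def strip_html(html):
    p = SpamHTMLExtractor()
    p.feed(html)
    return ' '.join(p.pieces)
```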
I could probably be smarter about dealing with html, but I don't think it's
worth putting a lot of time into this. Spams full of html are easy to filter.
The smarter spammers already avoid it. So performance in the future should not
depend much on how you deal with html.
**Performance**
Between December 10 2002 and January 10 2003 I got about 1750 spams. Of these,
4 got through. That's a filtering rate of about 99.75%.
Two of the four spams I missed got through because they happened to use words
that occur often in my legitimate email.
The third was one of those that exploit an insecure cgi script to send mail to
third parties. They're hard to filter based just on the content because the
headers are innocent and they're careful about the words they use. Even so I
can usually catch them. This one squeaked by with a probability of .88, just
under the threshold of .9.
Of course, looking at multiple token sequences would catch it easily. ``Below
is the result of your feedback form'' is an instant giveaway.
The fourth spam was what I call a spam-of-the-future, because this is what I
expect spam to evolve into: some completely neutral text followed by a url. In
this case it was from someone saying they had finally finished their
homepage and would I go look at it. (The page was of course an ad for a porn
site.)
If the spammers are careful about the headers and use a fresh url, there is
nothing in spam-of-the-future for filters to notice. We can of course counter
by sending a crawler to look at the page. But that might not be necessary. The
response rate for spam-of-the-future must be low, or everyone would be doing
it. If it's low enough, it won't pay for spammers to send it, and we won't
have to work too hard on filtering it.
Now for the really shocking news: during that same one-month period I got
_three_ false positives.
In a way it's a relief to get some false positives. When I wrote ``A Plan for
Spam'' I hadn't had any, and I didn't know what they'd be like. Now that I've
had a few, I'm relieved to find they're not as bad as I feared. False
positives yielded by statistical filters turn out to be mails that sound a lot
like spam, and these tend to be the ones you would least mind missing.
Two of the false positives were newsletters from companies I've bought things
from. I never asked to receive them, so arguably they were spams, but I count
them as false positives because I hadn't been deleting them as spams before.
The reason the filters caught them was that both companies in January switched
to commercial email senders instead of sending the mails from their own
servers, and both the headers and the bodies became much spammier.
The third false positive was a bad one, though. It was from someone in Egypt
and written in all uppercase. This was a direct result of making tokens case
sensitive; the Plan for Spam filter wouldn't have caught it.
It's hard to say what the overall false positive rate is, because we're up in
the noise, statistically. Anyone who has worked on filters (at least,
effective filters) will be aware of this problem. With some emails it's hard
to say whether they're spam or not, and these are the ones you end up looking
at when you get filters really tight. For example, so far the filter has
caught two emails that were sent to my address because of a typo, and one sent
to me in the belief that I was someone else. Arguably, these are neither my
spam nor my nonspam mail.
Another false positive was from a vice president at Virtumundo. I wrote to
them pretending to be a customer, and since the reply came back through
Virtumundo's mail servers it had the most incriminating headers imaginable.
Arguably this isn't a real false positive either, but a sort of Heisenberg
uncertainty effect: I only got it because I was writing about spam filtering.
Not counting these, I've had a total of five false positives so far, out of
about 7740 legitimate emails, a rate of .06%. The other two were a notice that
something I bought was back-ordered, and a party reminder from Evite.
I don't think this number can be trusted, partly because the sample is so
small, and partly because I think I can fix the filter not to catch some of
these.
False positives seem to me a different kind of error from false negatives.
Filtering rate is a measure of performance. False positives I consider more
like bugs. I approach improving the filtering rate as optimization, and
decreasing false positives as debugging.
So these five false positives are my bug list. For example, the mail from
Egypt got nailed because the uppercase text made it look to the filter like a
Nigerian spam. This really is kind of a bug. As with html, the email being all
uppercase is really conceptually _one_ feature, not one for each word. I need
to handle case in a more sophisticated way.
So what to make of this .06%? Not much, I think. You could treat it as an
upper bound, bearing in mind the small sample size. But at this stage it is
more a measure of the bugs in my implementation than some intrinsic false
positive rate of Bayesian filtering.
**Future**
What next? Filtering is an optimization problem, and the key to optimization
is profiling. Don't try to guess where your code is slow, because you'll guess
wrong. _Look_ at where your code is slow, and fix that. In filtering, this
translates to: look at the spams you miss, and figure out what you could have
done to catch them.
For example, spammers are now working aggressively to evade filters, and one
of the things they're doing is breaking up and misspelling words to prevent
filters from recognizing them. But working on this is not my first priority,
because I still have no trouble catching these spams.
There are two kinds of spams I currently do have trouble with. One is the type
that pretends to be an email from a woman inviting you to go chat with her or
see her profile on a dating site. These get through because they're the one
type of sales pitch you can make without using sales talk. They use the same
vocabulary as ordinary email.
The other kind of spams I have trouble filtering are those from companies in
e.g. Bulgaria offering contract programming services. These get through
because I'm a programmer too, and the spams are full of the same words as my
real mail.
I'll probably focus on the personal ad type first. I think if I look closer
I'll be able to find statistical differences between these and my real mail.
The style of writing is certainly different, though it may take multiword
filtering to catch that. Also, I notice they tend to repeat the url, and
someone including a url in a legitimate mail wouldn't do that.
The outsourcing type are going to be hard to catch. Even if you sent a crawler
to the site, you wouldn't find a smoking statistical gun. Maybe the only
answer is a central list of domains advertised in spams. But there can't
be that many of this type of mail. If the only spams left were unsolicited
offers of contract programming services from Bulgaria, we could all probably
move on to working on something else.
Will statistical filtering actually get us to that point? I don't know. Right
now, for me personally, spam is not a problem. But spammers haven't yet made a
serious effort to spoof statistical filters. What will happen when they do?
I'm not optimistic about filters that work at the network level. When
there is a static obstacle worth getting past, spammers are pretty efficient
at getting past it. There is already a company called Assurance Systems that
will run your mail through Spamassassin and tell you whether it will get
filtered out.
Network-level filters won't be completely useless. They may be enough to kill
all the "opt-in" spam, meaning spam from companies like Virtumundo and
Equalamail who claim that they're really running opt-in lists. You can filter
those based just on the headers, no matter what they say in the body. But
anyone willing to falsify headers or use open relays, presumably including
most porn spammers, should be able to get some message past network-level
filters if they want to. (By no means the message they'd like to send though,
which is something.)
The kind of filters I'm optimistic about are ones that calculate probabilities
based on each individual user's mail. These can be much more effective, not
only in avoiding false positives, but in filtering too: for example, finding
the recipient's email address base-64 encoded anywhere in a message is a very
good spam indicator.
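That particular check is easy to sketch; the address here is obviously a placeholder:

```
import base64

def contains_encoded_address(raw_message, address="user@example.com"):
    """True if the base-64 encoding of the address appears verbatim.
    (A real filter would also decode base-64 parts before tokenizing,
    since a different alignment produces different characters.)"""
    encoded = base64.b64encode(address.encode()).decode()
    return encoded in raw_message
```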
But the real advantage of individual filters is that they'll all be different.
If everyone's filters have different probabilities, it will make the spammers'
optimization loop, what programmers would call their edit-compile-test cycle,
appallingly slow. Instead of just tweaking a spam till it gets through a copy
of some filter they have on their desktop, they'll have to do a test mailing
for each tweak. It would be like programming in a language without an
interactive toplevel, and I wouldn't wish that on anyone.
February 2002
"...Copernicus' aesthetic objections to [equants] provided one essential motive for his rejection of the Ptolemaic system...."
- Thomas Kuhn, _The Copernican Revolution_
"All of us had been trained by Kelly Johnson and believed fanatically in his
insistence that an airplane that looked beautiful would fly the same way."
- Ben Rich, _Skunk Works_
"Beauty is the first test: there is no permanent place in this world for ugly
mathematics."
- G. H. Hardy, _A Mathematician's Apology_
---
I was talking recently to a friend who teaches at MIT. His field is hot now
and every year he is inundated by applications from would-be graduate
students. "A lot of them seem smart," he said. "What I can't tell is whether
they have any kind of taste."
Taste. You don't hear that word much now. And yet we still need the underlying
concept, whatever we call it. What my friend meant was that he wanted students
who were not just good technicians, but who could use their technical
knowledge to design beautiful things.
Mathematicians call good work "beautiful," and so, either now or in the past,
have scientists, engineers, musicians, architects, designers, writers, and
painters. Is it just a coincidence that they used the same word, or is there
some overlap in what they meant? If there is an overlap, can we use one
field's discoveries about beauty to help us in another?
For those of us who design things, these are not just theoretical questions.
If there is such a thing as beauty, we need to be able to recognize it. We
need good taste to make good things. Instead of treating beauty as an airy
abstraction, to be either blathered about or avoided depending on how one
feels about airy abstractions, let's try considering it as a practical
question: _how do you make good stuff?_
If you mention taste nowadays, a lot of people will tell you that "taste is
subjective." They believe this because it really feels that way to them. When
they like something, they have no idea why. It could be because it's
beautiful, or because their mother had one, or because they saw a movie star
with one in a magazine, or because they know it's expensive. Their thoughts
are a tangle of unexamined impulses.
Most of us are encouraged, as children, to leave this tangle unexamined. If
you make fun of your little brother for coloring people green in his coloring
book, your mother is likely to tell you something like "you like to do it your
way and he likes to do it his way."
Your mother at this point is not trying to teach you important truths about
aesthetics. She's trying to get the two of you to stop bickering.
Like many of the half-truths adults tell us, this one contradicts other things
they tell us. After dinning into you that taste is merely a matter of personal
preference, they take you to the museum and tell you that you should pay
attention because Leonardo is a great artist.
What goes through the kid's head at this point? What does he think "great
artist" means? After having been told for years that everyone just likes to do
things their own way, he is unlikely to head straight for the conclusion that
a great artist is someone whose work is _better_ than the others'. A far more
likely theory, in his Ptolemaic model of the universe, is that a great artist
is something that's good for you, like broccoli, because someone said so in a
book.
Saying that taste is just personal preference is a good way to prevent
disputes. The trouble is, it's not true. You feel this when you start to
design things.
Whatever job people do, they naturally want to do better. Football players
like to win games. CEOs like to increase earnings. It's a matter of pride, and
a real pleasure, to get better at your job. But if your job is to design
things, and there is no such thing as beauty, then there is _no way to get
better at your job._ If taste is just personal preference, then everyone's is
already perfect: you like whatever you like, and that's it.
As in any job, as you continue to design things, you'll get better at it. Your
tastes will change. And, like anyone who gets better at their job, you'll know
you're getting better. If so, your old tastes were not merely different, but
worse. Poof goes the axiom that taste can't be wrong.
Relativism is fashionable at the moment, and that may hamper you from thinking
about taste, even as yours grows. But if you come out of the closet and admit,
at least to yourself, that there is such a thing as good and bad design, then
you can start to study good design in detail. How has your taste changed? When
you made mistakes, what caused you to make them? What have other people
learned about design?
Once you start to examine the question, it's surprising how much different
fields' ideas of beauty have in common. The same principles of good design
crop up again and again.
**Good design is simple.** You hear this from math to painting. In math it
means that a shorter proof tends to be a better one. Where axioms are
concerned, especially, less is more. It means much the same thing in
programming. For architects and designers it means that beauty should depend
on a few carefully chosen structural elements rather than a profusion of
superficial ornament. (Ornament is not in itself bad, only when it's
camouflage on insipid form.) Similarly, in painting, a still life of a few
carefully observed and solidly modelled objects will tend to be more
interesting than a stretch of flashy but mindlessly repetitive painting of,
say, a lace collar. In writing it means: say what you mean and say it briefly.
It seems strange to have to emphasize simplicity. You'd think simple would be
the default. Ornate is more work. But something seems to come over people when
they try to be creative. Beginning writers adopt a pompous tone that doesn't
sound anything like the way they speak. Designers trying to be artistic resort
to swooshes and curlicues. Painters discover that they're expressionists. It's
all evasion. Underneath the long words or the "expressive" brush strokes,
there is not much going on, and that's frightening.
When you're forced to be simple, you're forced to face the real problem. When
you can't deliver ornament, you have to deliver substance.
**Good design is timeless.** In math, every proof is timeless unless it
contains a mistake. So what does Hardy mean when he says there is no permanent
place for ugly mathematics? He means the same thing Kelly Johnson did: if
something is ugly, it can't be the best solution. There must be a better one,
and eventually someone will discover it.
Aiming at timelessness is a way to make yourself find the best answer: if you
can imagine someone surpassing you, you should do it yourself. Some of the
greatest masters did this so well that they left little room for those who
came after. Every engraver since Durer has had to live in his shadow.
Aiming at timelessness is also a way to evade the grip of fashion. Fashions
almost by definition change with time, so if you can make something that will
still look good far into the future, then its appeal must derive more from
merit and less from fashion.
Strangely enough, if you want to make something that will appeal to future
generations, one way to do it is to try to appeal to past generations. It's
hard to guess what the future will be like, but we can be sure it will be like
the past in caring nothing for present fashions. So if you can make something
that appeals to people today and would also have appealed to people in 1500,
there is a good chance it will appeal to people in 2500.
**Good design solves the right problem.** The typical stove has four burners
arranged in a square, and a dial to control each. How do you arrange the
dials? The simplest answer is to put them in a row. But this is a simple
answer to the wrong question. The dials are for humans to use, and if you put
them in a row, the unlucky human will have to stop and think each time about
which dial matches which burner. Better to arrange the dials in a square like
the burners.
A lot of bad design is industrious, but misguided. In the mid twentieth
century there was a vogue for setting text in sans-serif fonts. These fonts
_are_ closer to the pure, underlying letterforms. But in text that's not the
problem you're trying to solve. For legibility it's more important that
letters be easy to tell apart. It may look Victorian, but a Times Roman
lowercase g is easy to tell from a lowercase y.
Problems can be improved as well as solutions. In software, an intractable
problem can usually be replaced by an equivalent one that's easy to solve.
Physics progressed faster as the problem became predicting observable
behavior, instead of reconciling it with scripture.
**Good design is suggestive.** Jane Austen's novels contain almost no
description; instead of telling you how everything looks, she tells her story
so well that you envision the scene for yourself. Likewise, a painting that
suggests is usually more engaging than one that tells. Everyone makes up their
own story about the Mona Lisa.
In architecture and design, this principle means that a building or object
should let you use it how you want: a good building, for example, will serve
as a backdrop for whatever life people want to lead in it, instead of making
them live as if they were executing a program written by the architect.
In software, it means you should give users a few basic elements that they can
combine as they wish, like Lego. In math it means a proof that becomes the
basis for a lot of new work is preferable to a proof that was difficult, but
doesn't lead to future discoveries; in the sciences generally, citation is
considered a rough indicator of merit.
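To take a toy example in Python (a sketch, not drawn from any particular system): a few small functions that know nothing about each other can be snapped together by the user into something their designer never specifically planned for.

```python
def compose(*fns):
    """Chain functions left to right: compose(f, g)(x) == g(f(x))."""
    def run(x):
        for fn in fns:
            x = fn(x)
        return x
    return run

def words(text):      return text.split()
def lengths(items):   return [len(x) for x in items]
def histogram(items): return {v: items.count(v) for v in set(items)}

# None of these knows about the others, but a user can snap them together
# like Lego bricks into something the designer never anticipated.
word_length_counts = compose(words, lengths, histogram)
print(word_length_counts("good design is simple and good design is suggestive"))
```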
**Good design is often slightly funny.** This one may not always be true. But
Durer's engravings and Saarinen's womb chair and the Pantheon and the original
Porsche 911 all seem to me slightly funny. Godel's incompleteness theorem
seems like a practical joke.
I think it's because humor is related to strength. To have a sense of humor is
to be strong: to keep one's sense of humor is to shrug off misfortunes, and to
lose one's sense of humor is to be wounded by them. And so the mark-- or at
least the prerogative-- of strength is not to take oneself too seriously. The
confident will often, like swallows, seem to be making fun of the whole
process slightly, as Hitchcock does in his films or Bruegel in his paintings--
or Shakespeare, for that matter.
Good design may not have to be funny, but it's hard to imagine something that
could be called humorless also being good design.
**Good design is hard.** If you look at the people who've done great work, one
thing they all seem to have in common is that they worked very hard. If you're
not working hard, you're probably wasting your time.
Hard problems call for great efforts. In math, difficult proofs require
ingenious solutions, and those tend to be interesting. Ditto in engineering.
When you have to climb a mountain you toss everything unnecessary out of your
pack. And so an architect who has to build on a difficult site, or a small
budget, will find that he is forced to produce an elegant design. Fashions and
flourishes get knocked aside by the difficult business of solving the problem
at all.
Not every kind of hard is good. There is good pain and bad pain. You want the
kind of pain you get from going running, not the kind you get from stepping on
a nail. A difficult problem could be good for a designer, but a fickle client
or unreliable materials would not be.
In art, the highest place has traditionally been given to paintings of people.
There is something to this tradition, and not just because pictures of faces
get to press buttons in our brains that other pictures don't. We are so good
at looking at faces that we force anyone who draws them to work hard to
satisfy us. If you draw a tree and you change the angle of a branch five
degrees, no one will know. When you change the angle of someone's eye five
degrees, people notice.
When Bauhaus designers adopted Sullivan's "form follows function," what they
meant was, form _should_ follow function. And if function is hard enough, form
is forced to follow it, because there is no effort to spare for error. Wild
animals are beautiful because they have hard lives.
**Good design looks easy.** Like great athletes, great designers make it look
easy. Mostly this is an illusion. The easy, conversational tone of good
writing comes only on the eighth rewrite.
In science and engineering, some of the greatest discoveries seem so simple
that you say to yourself, I could have thought of that. The discoverer is
entitled to reply, why didn't you?
Some Leonardo heads are just a few lines. You look at them and you think, all
you have to do is get eight or ten lines in the right place and you've made
this beautiful portrait. Well, yes, but you have to get them in _exactly_ the
right place. The slightest error will make the whole thing collapse.
Line drawings are in fact the most difficult visual medium, because they
demand near perfection. In math terms, they are a closed-form solution; lesser
artists literally solve the same problems by successive approximation. One of
the reasons kids give up drawing at ten or so is that they decide to start
drawing like grownups, and one of the first things they try is a line drawing
of a face. Smack!
In most fields the appearance of ease seems to come with practice. Perhaps
what practice does is train your unconscious mind to handle tasks that used to
require conscious thought. In some cases you literally train your body. An
expert pianist can play notes faster than the brain can send signals to his
hand. Likewise an artist, after a while, can make visual perception flow in
through his eye and out through his hand as automatically as someone tapping
his foot to a beat.
When people talk about being in "the zone," I think what they mean is that the
spinal cord has the situation under control. Your spinal cord is less
hesitant, and it frees conscious thought for the hard problems.
**Good design uses symmetry.** I think symmetry may just be one way to achieve
simplicity, but it's important enough to be mentioned on its own. Nature uses
it a lot, which is a good sign.
There are two kinds of symmetry, repetition and recursion. Recursion means
repetition in subelements, like the pattern of veins in a leaf.
Symmetry is unfashionable in some fields now, in reaction to excesses in the
past. Architects started consciously making buildings asymmetric in Victorian
times and by the 1920s asymmetry was an explicit premise of modernist
architecture. Even these buildings only tended to be asymmetric about major
axes, though; there were hundreds of minor symmetries.
In writing you find symmetry at every level, from the phrases in a sentence to
the plot of a novel. You find the same in music and art. Mosaics (and some
Cezannes) get extra visual punch by making the whole picture out of the same
atoms. Compositional symmetry yields some of the most memorable paintings,
especially when two halves react to one another, as in the _Creation of Adam_
or _American Gothic._
In math and engineering, recursion, especially, is a big win. Inductive proofs
are wonderfully short. In software, a problem that can be solved by recursion
is nearly always best solved that way. The Eiffel Tower looks striking partly
because it is a recursive solution, a tower on a tower.
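To give one small example in Python of a problem that recursion fits naturally: the depth of a nested list, the same veins-in-a-leaf pattern, falls out in a few lines.

```python
def depth(value):
    """Depth of an arbitrarily nested list: a non-list is depth 0,
    and a list is one level deeper than its deepest element."""
    if not isinstance(value, list):
        return 0
    return 1 + max((depth(item) for item in value), default=0)

assert depth(42) == 0
assert depth([1, [2, [3, [4]]], 5]) == 4
```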
The danger of symmetry, and repetition especially, is that it can be used as a
substitute for thought.
**Good design resembles nature.** It's not so much that resembling nature is
intrinsically good as that nature has had a long time to work on the problem.
It's a good sign when your answer resembles nature's.
It's not cheating to copy. Few would deny that a story should be like life.
Working from life is a valuable tool in painting too, though its role has
often been misunderstood. The aim is not simply to make a record. The point of
painting from life is that it gives your mind something to chew on: when your
eyes are looking at something, your hand will do more interesting work.
Imitating nature also works in engineering. Boats have long had spines and
ribs like an animal's ribcage. In some cases we may have to wait for better
technology: early aircraft designers were mistaken to design aircraft that
looked like birds, because they didn't have materials or power sources light
enough (the Wrights' engine weighed 152 lbs. and generated only 12 hp.) or
control systems sophisticated enough for machines that flew like birds, but I
could imagine little unmanned reconnaissance planes flying like birds in fifty
years.
Now that we have enough computer power, we can imitate nature's method as well
as its results. Genetic algorithms may let us create things too complex to
design in the ordinary sense.
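Here is a toy sketch in Python of what imitating nature's method means. Real genetic algorithms use populations and crossover; this keeps only the core loop of random variation plus selection.

```python
import random
import string

TARGET = "good design is redesign"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    """How many characters are already in the right place."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Copy the candidate with one randomly chosen character changed."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):   # selection: keep what works
        best = child
print(best)   # eventually: "good design is redesign"
```

Nobody designs the path the program takes; it is discovered by mutation and selection, which is the sense in which the method, and not just the result, resembles nature's.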
**Good design is redesign.** It's rare to get things right the first time.
Experts expect to throw away some early work. They plan for plans to change.
It takes confidence to throw work away. You have to be able to think, _there's
more where that came from._ When people first start drawing, for example,
they're often reluctant to redo parts that aren't right; they feel they've
been lucky to get that far, and if they try to redo something, it will turn
out worse. Instead they convince themselves that the drawing is not that bad,
really-- in fact, maybe they meant it to look that way.
Dangerous territory, that; if anything you should cultivate dissatisfaction.
In Leonardo's drawings there are often five or six attempts to get a line
right. The distinctive back of the Porsche 911 only appeared in the redesign
of an awkward prototype. In Wright's early plans for the Guggenheim, the right
half was a ziggurat; he inverted it to get the present shape.
Mistakes are natural. Instead of treating them as disasters, make them easy to
acknowledge and easy to fix. Leonardo more or less invented the sketch, as a
way to make drawing bear a greater weight of exploration. Open-source software
has fewer bugs because it admits the possibility of bugs.
It helps to have a medium that makes change easy. When oil paint replaced
tempera in the fifteenth century, it helped painters to deal with difficult
subjects like the human figure because, unlike tempera, oil can be blended and
overpainted.
**Good design can copy.** Attitudes to copying often make a round trip. A
novice imitates without knowing it; next he tries consciously to be original;
finally, he decides it's more important to be right than original.
Unknowing imitation is almost a recipe for bad design. If you don't know where
your ideas are coming from, you're probably imitating an imitator. Raphael so
pervaded mid-nineteenth century taste that almost anyone who tried to draw was
imitating him, often at several removes. It was this, more than Raphael's own
work, that bothered the Pre-Raphaelites.
The ambitious are not content to imitate. The second phase in the growth of
taste is a conscious attempt at originality.
I think the greatest masters go on to achieve a kind of selflessness. They
just want to get the right answer, and if part of the right answer has already
been discovered by someone else, that's no reason not to use it. They're
confident enough to take from anyone without feeling that their own vision
will be lost in the process.
**Good design is often strange.** Some of the very best work has an uncanny
quality: Euler's Formula, Bruegel's _Hunters in the Snow,_ the SR-71, Lisp.
They're not just beautiful, but strangely beautiful.
I'm not sure why. It may just be my own stupidity. A can-opener must seem
miraculous to a dog. Maybe if I were smart enough it would seem the most
natural thing in the world that e^(i*pi) = -1. It is after all necessarily true.
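The identity is at least easy to check numerically, for whatever reassurance that's worth:

```python
import cmath

# e^(i*pi): the imaginary part is only floating-point noise.
print(cmath.exp(1j * cmath.pi))   # roughly (-1+1.2e-16j)
```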
Most of the qualities I've mentioned are things that can be cultivated, but I
don't think it works to cultivate strangeness. The best you can do is not
squash it if it starts to appear. Einstein didn't try to make relativity
strange. He tried to make it true, and the truth turned out to be strange.
At an art school where I once studied, the students wanted most of all to
develop a personal style. But if you just try to make good things, you'll
inevitably do it in a distinctive way, just as each person walks in a
distinctive way. Michelangelo was not trying to paint like Michelangelo. He
was just trying to paint well; he couldn't help painting like Michelangelo.
The only style worth having is the one you can't help. And this is especially
true for strangeness. There is no shortcut to it. The Northwest Passage that
the Mannerists, the Romantics, and two generations of American high school
students have searched for does not seem to exist. The only way to get there
is to go through good and come out the other side.
**Good design happens in chunks.** The inhabitants of fifteenth century
Florence included Brunelleschi, Ghiberti, Donatello, Masaccio, Filippo Lippi,
Fra Angelico, Verrocchio, Botticelli, Leonardo, and Michelangelo. Milan at the
time was as big as Florence. How many fifteenth century Milanese artists can
you name?
Something was happening in Florence in the fifteenth century. And it can't
have been heredity, because it isn't happening now. You have to assume that
whatever inborn ability Leonardo and Michelangelo had, there were people born
in Milan with just as much. What happened to the Milanese Leonardo?
There are roughly a thousand times as many people alive in the US right now as
lived in Florence during the fifteenth century. A thousand Leonardos and a
thousand Michelangelos walk among us. If DNA ruled, we should be greeted daily
by artistic marvels. We aren't, and the reason is that to make Leonardo you
need more than his innate ability. You also need Florence in 1450.
Nothing is more powerful than a community of talented people working on
related problems. Genes count for little by comparison: being a genetic
Leonardo was not enough to compensate for having been born near Milan instead
of Florence. Today we move around more, but great work still comes
disproportionately from a few hotspots: the Bauhaus, the Manhattan Project,
the _New Yorker,_ Lockheed's Skunk Works, Xerox Parc.
At any given time there are a few hot topics and a few groups doing great work
on them, and it's nearly impossible to do good work yourself if you're too far
removed from one of these centers. You can push or pull these trends to some
extent, but you can't break away from them. (Maybe _you_ can, but the Milanese
Leonardo couldn't.)
**Good design is often daring.** At every period of history, people have
believed things that were just ridiculous, and believed them so strongly that
you risked ostracism or even violence by saying otherwise.
If our own time were any different, that would be remarkable. As far as I can
tell it isn't.
This problem afflicts not just every era, but in some degree every field. Much
Renaissance art was in its time considered shockingly secular: according to
Vasari, Botticelli repented and gave up painting, and Fra Bartolommeo and
Lorenzo di Credi actually burned some of their work. Einstein's theory of
relativity offended many contemporary physicists, and was not fully accepted
for decades-- in France, not until the 1950s.
Today's experimental error is tomorrow's new theory. If you want to discover
great new things, then instead of turning a blind eye to the places where
conventional wisdom and truth don't quite meet, you should pay particular
attention to them.
As a practical matter, I think it's easier to see ugliness than to imagine
beauty. Most of the people who've made beautiful things seem to have done it
by fixing something that they thought ugly. Great work usually seems to happen
because someone sees something and thinks, _I could do better than that._
Giotto saw traditional Byzantine madonnas painted according to a formula that
had satisfied everyone for centuries, and to him they looked wooden and
unnatural. Copernicus was so troubled by a hack that all his contemporaries
could tolerate that he felt there must be a better solution.
Intolerance for ugliness is not in itself enough. You have to understand a
field well before you develop a good nose for what needs fixing. You have to
do your homework. But as you become expert in a field, you'll start to hear
little voices saying, _What a hack! There must be a better way._ Don't ignore
those voices. Cultivate them. The recipe for great work is: very exacting
taste, plus the ability to gratify it.
---
October 2022
If there were intelligent beings elsewhere in the universe, they'd share
certain truths in common with us. The truths of mathematics would be the same,
because they're true by definition. Ditto for the truths of physics; the mass
of a carbon atom would be the same on their planet. But I think we'd share
other truths with aliens besides the truths of math and physics, and that it
would be worthwhile to think about what these might be.
For example, I think we'd share the principle that a controlled experiment
testing some hypothesis entitles us to have proportionally increased belief in
it. It seems fairly likely, too, that it would be true for aliens that one can
get better at something by practicing. We'd probably share Occam's razor.
There doesn't seem to be anything specifically human about any of these ideas.
We can only guess, of course. We can't say for sure what forms intelligent
life might take. Nor is it my goal here to explore that question, interesting
though it is. The point of the idea of alien truth is not that it gives us a
way to speculate about what forms intelligent life might take, but that it
gives us a threshold, or more precisely a target, for truth. If you're trying
to find the most general truths short of those of math or physics, then
presumably they'll be those we'd share in common with other forms of
intelligent life.
Alien truth will work best as a heuristic if we err on the side of generosity.
If an idea might plausibly be relevant to aliens, that's enough. Justice, for
example. I wouldn't want to bet that all intelligent beings would understand
the concept of justice, but I wouldn't want to bet against it either.
The idea of alien truth is related to Erdos's idea of God's book. He used to
describe a particularly good proof as being in God's book, the implication
being (a) that a sufficiently good proof was more discovered than invented,
and (b) that its goodness would be universally recognized. If there's such a
thing as alien truth, then there's more in God's book than math.
What should we call the search for alien truth? The obvious choice is
"philosophy." Whatever else philosophy includes, it should probably include
this. I'm fairly sure Aristotle would have thought so. One could even make the
case that the search for alien truth is, if not an accurate description _of_
philosophy, a good definition _for_ it. I.e. that it's what people who call
themselves philosophers should be doing, whether or not they currently are.
But I'm not wedded to that; doing it is what matters, not what we call it.
We may one day have something like alien life among us in the form of AIs. And
that may in turn allow us to be precise about what truths an intelligent being
would have to share with us. We might find, for example, that it's impossible
to create something we'd consider intelligent that doesn't use Occam's razor.
We might one day even be able to prove that. But though this sort of research
would be very interesting, it's not necessary for our purposes, or even the
same field; the goal of philosophy, if we're going to call it that, would be
to see what ideas we come up with using alien truth as a target, not to say
precisely where the threshold of it is. Those two questions might one day
converge, but they'll converge from quite different directions, and till they
do, it would be too constraining to restrict ourselves to thinking only about
things we're certain would be alien truths. Especially since this will
probably be one of those areas where the best guesses turn out to be
surprisingly close to optimal. (Let's see if that one does.)
Whatever we call it, the attempt to discover alien truths would be a
worthwhile undertaking. And curiously enough, that is itself probably an alien
truth.
**Thanks** to Trevor Blackwell, Greg Brockman, Patrick Collison, Robert
Morris, and Michael Nielsen for reading drafts of this.
---
January 2020
_(I originally intended this for startup founders, who are often surprised by
the attention they get as their companies grow, but it applies equally to
anyone who becomes famous.)_
If you become sufficiently famous, you'll acquire some fans who like you too
much. These people are sometimes called "fanboys," and though I dislike that
term, I'm going to have to use it here. We need some word for them, because
this is a distinct phenomenon from someone simply liking your work.
A fanboy is obsessive and uncritical. Liking you becomes part of their
identity, and they create an image of you in their own head that is much
better than reality. Everything you do is good, because you do it. If you do
something bad, they find a way to see it as good. And their love for you is
not, usually, a quiet, private one. They want everyone to know how great you
are.
Well, you may be thinking, I could do without this kind of obsessive fan, but
I know there are all kinds of people in the world, and if this is the worst
consequence of fame, that's not so bad.
Unfortunately this is not the worst consequence of fame. As well as fanboys,
you'll have haters.
A hater is obsessive and uncritical. Disliking you becomes part of their
identity, and they create an image of you in their own head that is much worse
than reality. Everything you do is bad, because you do it. If you do something
good, they find a way to see it as bad. And their dislike for you is not,
usually, a quiet, private one. They want everyone to know how awful you are.
If you're thinking of checking, I'll save you the trouble. The second and
fifth paragraphs are identical except for "good" being switched to "bad" and
so on.
I spent years puzzling about haters. What are they, and where do they come
from? Then one day it dawned on me. Haters are just fanboys with the sign
switched.
Note that by haters, I don't simply mean trolls. I'm not talking about people
who say bad things about you and then move on. I'm talking about the much
smaller group of people for whom this becomes a kind of obsession and who do
it repeatedly over a long period.
Like fans, haters seem to be an automatic consequence of fame. Anyone
sufficiently famous will have them. And like fans, haters are energized by the
fame of whoever they hate. They hear a song by some pop singer. They don't
like it much. If the singer were an obscure one, they'd just forget about it.
But instead they keep hearing her name, and this seems to drive some people
crazy. Everyone's always going on about this singer, but she's no good! She's
a fraud!
That word "fraud" is an important one. It's the spectral signature of a hater
to regard the object of their hatred as a _fraud_. They can't deny their fame.
Indeed, their fame is if anything exaggerated in the hater's mind. They notice
every mention of the singer's name, because every mention makes them angrier.
In their own minds they exaggerate both the singer's fame and her lack of
talent, and the only way to reconcile those two ideas is to conclude that she
has tricked everyone.
What sort of people become haters? Can anyone become one? I'm not sure about
this, but I've noticed some patterns. Haters are generally losers in a very
specific sense: although they are occasionally talented, they have never
achieved much. And indeed, anyone successful enough to have achieved
significant fame would be unlikely to regard another famous person as a fraud
on that account, because anyone famous knows how random fame is.
But haters are not always complete losers. They are not always the proverbial
guy living in his mom's basement. Many are, but some have some amount of
talent. In fact I suspect that a sense of frustrated talent is what drives
some people to become haters. They're not just saying "It's unfair that so-
and-so is famous," but "It's unfair that so-and-so is famous, and not me."
Could a hater be cured if they achieved something impressive? My guess is
that's a moot point, because they _never will_. I've been able to observe for
long enough that I'm fairly confident the pattern works both ways: not only do
people who do great work never become haters, haters never do great work.
Although I dislike the word "fanboy," it's evocative of something important
about both haters and fanboys. It implies that the fanboy is so slavishly
predictable in his admiration that he's diminished as a result, that he's less
than a man.
Haters seem even more diminished. I can imagine being a fanboy. I can think of
people whose work I admire so much that I could abase myself before them out
of sheer gratitude. If P. G. Wodehouse were still alive, I could see myself
being a Wodehouse fanboy. But I could not imagine being a hater.
Knowing that haters are just fanboys with the sign bit flipped makes it much
easier to deal with them. We don't need a separate theory of haters. We can
just use existing techniques for dealing with obsessive fans.
The most important of which is simply not to think much about them. If you're
like most people who become famous enough to acquire haters, your initial
reaction will be one of mystification. Why does this guy seem to have it in
for me? Where does his obsessive energy come from, and what makes him so
appallingly nasty? What did I do to set him off? Is it something I can fix?
The mistake here is to think of the hater as someone you have a dispute with.
When you have a dispute with someone, it's usually a good idea to try to
understand why they're upset and then fix things if you can. Disputes are
distracting. But it's a false analogy to think of a hater as someone you have
a dispute with. It's an understandable mistake, if you've never encountered
haters before. But when you realize that you're dealing with a hater, and what
a hater is, it's clear that it's a waste of time even to think about them. If
you have obsessive fans, do you spend any time wondering what makes them love
you so much? No, you just think "some people are kind of crazy," and that's
the end of it.
Since haters are equivalent to fanboys, that's the way to deal with them too.
There may have been something that set them off. But it's not something that
would have set off a normal person, so there's no reason to spend any time
thinking about it. It's not you, it's them.
---
January 2020
When I was young, I thought old people had everything figured out. Now that
I'm old, I know this isn't true.
I constantly feel like a noob. It seems like I'm always talking to some
startup working in a new field I know nothing about, or reading a book about a
topic I don't understand well enough, or visiting some new country where I
don't know how things work.
It's not pleasant to feel like a noob. And the word "noob" is certainly not a
compliment. And yet today I realized something encouraging about being a noob:
the more of a noob you are locally, the less of a noob you are globally.
For example, if you stay in your home country, you'll feel less of a noob than
if you move to Farawavia, where everything works differently. And yet you'll
know more if you move. So the feeling of being a noob is inversely correlated
with actual ignorance.
But if the feeling of being a noob is good for us, why do we dislike it? What
evolutionary purpose could such an aversion serve?
I think the answer is that there are two sources of feeling like a noob: being
stupid, and doing something novel. Our dislike of feeling like a noob is our
brain telling us "Come on, come on, figure this out." Which was the right
thing to be thinking for most of human history. The life of hunter-gatherers
was complex, but it didn't change as much as life does now. They didn't
suddenly have to figure out what to do about cryptocurrency. So it made sense
to be biased toward competence at existing problems over the discovery of new
ones. It made sense for humans to dislike the feeling of being a noob, just
as, in a world where food was scarce, it made sense for them to dislike the
feeling of being hungry.
Now that too much food is more of a problem than too little, our dislike of
feeling hungry leads us astray. And I think our dislike of feeling like a noob
does too.
Though it feels unpleasant, and people will sometimes ridicule you for it, the
more you feel like a noob, the better.
---
November 2022
Since I was about 9 I've been puzzled by the apparent contradiction between
being made of matter that behaves in a predictable way, and the feeling that I
could choose to do whatever I wanted. At the time I had a self-interested
motive for exploring the question. At that age (like most succeeding ages) I
was always in trouble with the authorities, and it seemed to me that there
might possibly be some way to get out of trouble by arguing that I wasn't
responsible for my actions. I gradually lost hope of that, but the puzzle
remained: How do you reconcile being a machine made of matter with the feeling
that you're free to choose what you do?
The best way to explain the answer may be to start with a slightly wrong
version, and then fix it. The wrong version is: You can do what you want, but
you can't want what you want. Yes, you can control what you do, but you'll do
what you want, and you can't control that.
The reason this is mistaken is that people do sometimes change what they want.
People who don't want to want something — drug addicts, for example — can
sometimes make themselves stop wanting it. And people who want to want
something — who want to like classical music, or broccoli — sometimes succeed.
So we modify our initial statement: You can do what you want, but you can't
want to want what you want.
That's still not quite true. It's possible to change what you want to want. I
can imagine someone saying "I decided to stop wanting to like classical
music." But we're getting closer to the truth. It's rare for people to change
what they want to want, and the more "want to"s we add, the rarer it gets.
We can get arbitrarily close to a true statement by adding more "want to"s in
much the same way we can get arbitrarily close to 1 by adding more 9s to a
string of 9s following a decimal point. In practice three or four "want to"s
must surely be enough. It's hard even to envision what it would mean to change
what you want to want to want to want, let alone actually do it.
So one way to express the correct answer is to use a regular expression. You
can do what you want, but there's some statement of the form "you can't (want
to)* want what you want" that's true. Ultimately you get back to a want that
you don't control.
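Spelled out as an actual regular expression (a small sketch in Python), one pattern does match the whole family of statements:

```python
import re

# Same pattern as in the text, with the space folded into the group so the
# strings match exactly.
pattern = re.compile(r"you can't (want to )*want what you want")

def statement(n):
    """The n-th statement in the family: n copies of 'want to'."""
    return "you can't " + "want to " * n + "want what you want"

for n in range(4):
    assert pattern.fullmatch(statement(n))
    print(statement(n))
```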
---
There is a kind of mania for object-oriented programming at the moment, but
some of the smartest programmers I know are some of the least excited about
it.
My own feeling is that object-oriented programming is a useful technique in
some cases, but it isn't something that has to pervade every program you
write. You should be able to define new types, but you shouldn't have to
express every program as the definition of new types.
I think there are five reasons people like object-oriented programming, and
three and a half of them are bad:
1. Object-oriented programming is exciting if you have a statically-typed language without lexical closures or macros. To some degree, it offers a way around these limitations. (See Greenspun's Tenth Rule.)
2. Object-oriented programming is popular in big companies, because it suits the way they write software. At big companies, software tends to be written by large (and frequently changing) teams of mediocre programmers. Object-oriented programming imposes a discipline on these programmers that prevents any one of them from doing too much damage. The price is that the resulting code is bloated with protocols and full of duplication. This is not too high a price for big companies, because their software is probably going to be bloated and full of duplication anyway.
3. Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.
4. If a language is itself an object-oriented program, it can be extended by users. Well, maybe. Or maybe you can do even better by offering the sub-concepts of object-oriented programming a la carte. Overloading, for example, is not intrinsically tied to classes. We'll see.
5. Object-oriented abstractions map neatly onto the domains of certain specific kinds of programs, like simulations and CAD systems.
I personally have never needed object-oriented abstractions. Common Lisp has
an enormously powerful object system and I've never used it once. I've done a
lot of things (e.g. making hash tables full of closures) that would have
required object-oriented techniques to do in wimpier languages, but I have
never had to use CLOS.
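Here, for illustration, is the hash-table-of-closures pattern sketched in Python rather than Lisp: each closure carries its own state, and the table does the dispatching that a class hierarchy would otherwise be conscripted for.

```python
def make_counter(label):
    """Build a handler that remembers how many times it has been called."""
    count = 0
    def handler(payload):
        nonlocal count
        count += 1
        return f"{label} #{count}: {payload}"
    return handler

# A hash table full of closures: no classes, no methods, just dispatch.
handlers = {
    "click": make_counter("click"),
    "key":   make_counter("key"),
}

print(handlers["click"]("button A"))   # click #1: button A
print(handlers["click"]("button B"))   # click #2: button B
print(handlers["key"]("escape"))       # key #1: escape
```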
Maybe I'm just stupid, or have worked on some limited subset of applications.
There is a danger in designing a language based on one's own experience of
programming. But it seems more dangerous to put stuff in that you've never
needed because it's thought to be a good idea.
---
January 2003
_(This article is derived from a keynote talk at the fall 2002 meeting of
NEPLS.)_
Visitors to this country are often surprised to find that Americans like to
begin a conversation by asking "what do you do?" I've never liked this
question. I've rarely had a neat answer to it. But I think I have finally
solved the problem. Now, when someone asks me what I do, I look them straight
in the eye and say "I'm designing a new dialect of Lisp." I recommend this
answer to anyone who doesn't like being asked what they do. The conversation
will turn immediately to other topics.
I don't consider myself to be doing research on programming languages. I'm
just designing one, in the same way that someone might design a building or a
chair or a new typeface. I'm not trying to discover anything new. I just want
to make a language that will be good to program in. In some ways, this
assumption makes life a lot easier.
The difference between design and research seems to be a question of new
versus good. Design doesn't have to be new, but it has to be good. Research
doesn't have to be good, but it has to be new. I think these two paths
converge at the top: the best design surpasses its predecessors by using new
ideas, and the best research solves problems that are not only new, but
actually worth solving. So ultimately we're aiming for the same destination,
just approaching it from different directions.
What I'm going to talk about today is what your target looks like from the
back. What do you do differently when you treat programming languages as a
design problem instead of a research topic?
The biggest difference is that you focus more on the user. Design begins by
asking, who is this for and what do they need from it? A good architect, for
example, does not begin by creating a design that he then imposes on the
users, but by studying the intended users and figuring out what they need.
Notice I said "what they need," not "what they want." I don't mean to give the
impression that working as a designer means working as a sort of short-order
cook, making whatever the client tells you to. This varies from field to field
in the arts, but I don't think there is any field in which the best work is
done by the people who just make exactly what the customers tell them to.
The customer _is_ always right in the sense that the measure of good design is
how well it works for the user. If you make a novel that bores everyone, or a
chair that's horribly uncomfortable to sit in, then you've done a bad job,
period. It's no defense to say that the novel or the chair is designed
according to the most advanced theoretical principles.
And yet, making what works for the user doesn't mean simply making what the
user tells you to. Users don't know what all the choices are, and are often
mistaken about what they really want.
The answer to the paradox, I think, is that you have to design for the user,
but you have to design what the user needs, not simply what he says he wants.
It's much like being a doctor. You can't just treat a patient's symptoms. When
a patient tells you his symptoms, you have to figure out what's actually wrong
with him, and treat that.
This focus on the user is a kind of axiom from which most of the practice of
good design can be derived, and around which most design issues center.
If good design must do what the user needs, who is the user? When I say that
design must be for users, I don't mean to imply that good design aims at some
kind of lowest common denominator. You can pick any group of users you want.
If you're designing a tool, for example, you can design it for anyone from
beginners to experts, and what's good design for one group might be bad for
another. The point is, you have to pick some group of users. I don't think you
can even talk about good or bad design except with reference to some intended
user.
You're most likely to get good design if the intended users include the
designer himself. When you design something for a group that doesn't include
you, it tends to be for people you consider to be less sophisticated than you,
not more sophisticated.
That's a problem, because looking down on the user, however benevolently,
seems inevitably to corrupt the designer. I suspect that very few housing
projects in the US were designed by architects who expected to live in them.
You can see the same thing in programming languages. C, Lisp, and Smalltalk
were created for their own designers to use. Cobol, Ada, and Java, were
created for other people to use.
If you think you're designing something for idiots, the odds are that you're
not designing something good, even for idiots.
Even if you're designing something for the most sophisticated users, though,
you're still designing for humans. It's different in research. In math you
don't choose abstractions because they're easy for humans to understand; you
choose whichever make the proof shorter. I think this is true for the sciences
generally. Scientific ideas are not meant to be ergonomic.
Over in the arts, things are very different. Design is all about people. The
human body is a strange thing, but when you're designing a chair, that's what
you're designing for, and there's no way around it. All the arts have to
pander to the interests and limitations of humans. In painting, for example,
all other things being equal a painting with people in it will be more
interesting than one without. It is not merely an accident of history that the
great paintings of the Renaissance are all full of people. If they hadn't
been, painting as a medium wouldn't have the prestige that it does.
Like it or not, programming languages are also for people, and I suspect the
human brain is just as lumpy and idiosyncratic as the human body. Some ideas
are easy for people to grasp and some aren't. For example, we seem to have a
very limited capacity for dealing with detail. It's this fact that makes
programming languages a good idea in the first place; if we could handle the
detail, we could just program in machine language.
Remember, too, that languages are not primarily a form for finished programs,
but something that programs have to be developed in. Anyone in the arts could
tell you that you might want different mediums for the two situations. Marble,
for example, is a nice, durable medium for finished ideas, but a hopelessly
inflexible one for developing new ideas.
A program, like a proof, is a pruned version of a tree that in the past has
had false starts branching off all over it. So the test of a language is not
simply how clean the finished program looks in it, but how clean the path to
the finished program was. A design choice that gives you elegant finished
programs may not give you an elegant design process. For example, I've written
a few macro-defining macros full of nested backquotes that look now like
little gems, but writing them took hours of the ugliest trial and error, and
frankly, I'm still not entirely sure they're correct.
We often act as if the test of a language were how good finished programs look
in it. It seems so convincing when you see the same program written in two
languages, and one version is much shorter. When you approach the problem from
the direction of the arts, you're less likely to depend on this sort of test.
You don't want to end up with a programming language like marble.
For example, it is a huge win in developing software to have an interactive
toplevel, what in Lisp is called a read-eval-print loop. And when you have one
this has real effects on the design of the language. It would not work well
for a language where you have to declare variables before using them, for
example. When you're just typing expressions into the toplevel, you want to be
able to set x to some value and then start doing things to x. You don't want
to have to declare the type of x first. You may dispute either of the
premises, but if a language has to have a toplevel to be convenient, and
mandatory type declarations are incompatible with a toplevel, then no language
that makes type declarations mandatory could be convenient to program in.
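Here is what that feels like in a dynamically typed toplevel, using Python's for illustration (the session is hypothetical but representative): you bind x and immediately start doing things to it, with no declarations in the way.

```python
>>> x = [3, 1, 4, 1, 5]          # no declaration, no type to announce
>>> sorted(x)
[1, 1, 3, 4, 5]
>>> x = "now it's a string"      # rebind x to something else entirely
>>> x.upper()
"NOW IT'S A STRING"
```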
In practice, to get good design you have to get close, and stay close, to your
users. You have to calibrate your ideas on actual users constantly, especially
in the beginning. One of the reasons Jane Austen's novels are so good is that
she read them out loud to her family. That's why she never sinks into self-
indulgently arty descriptions of landscapes, or pretentious philosophizing.
(The philosophy's there, but it's woven into the story instead of being pasted
onto it like a label.) If you open an average "literary" novel and imagine
reading it out loud to your friends as something you'd written, you'll feel
all too keenly what an imposition that kind of thing is upon the reader.
In the software world, this idea is known as Worse is Better. Actually, there
are several ideas mixed together in the concept of Worse is Better, which is
why people are still arguing about whether worse is actually better or not.
But one of the main ideas in that mix is that if you're building something
new, you should get a prototype in front of users as soon as possible.
The alternative approach might be called the Hail Mary strategy. Instead of
getting a prototype out quickly and gradually refining it, you try to create
the complete, finished, product in one long touchdown pass. As far as I know,
this is a recipe for disaster. Countless startups destroyed themselves this
way during the Internet bubble. I've never heard of a case where it worked.
What people outside the software world may not realize is that Worse is Better
is found throughout the arts. In drawing, for example, the idea was discovered
during the Renaissance. Now almost every drawing teacher will tell you that
the right way to get an accurate drawing is not to work your way slowly around
the contour of an object, because errors will accumulate and you'll find at
the end that the lines don't meet. Instead you should draw a few quick lines
in roughly the right place, and then gradually refine this initial sketch.
In most fields, prototypes have traditionally been made out of different
materials. Typefaces to be cut in metal were initially designed with a brush
on paper. Statues to be cast in bronze were modelled in wax. Patterns to be
embroidered on tapestries were drawn on paper with ink wash. Buildings to be
constructed from stone were tested on a smaller scale in wood.
What made oil paint so exciting, when it first became popular in the fifteenth
century, was that you could actually make the finished work _from_ the
prototype. You could make a preliminary drawing if you wanted to, but you
weren't held to it; you could work out all the details, and even make major
changes, as you finished the painting.
You can do this in software too. A prototype doesn't have to be just a model;
you can refine it into the finished product. I think you should always do this
when you can. It lets you take advantage of new insights you have along the
way. But perhaps even more important, it's good for morale.
Morale is key in design. I'm surprised people don't talk more about it. One of
my first drawing teachers told me: if you're bored when you're drawing
something, the drawing will look boring. For example, suppose you have to draw
a building, and you decide to draw each brick individually. You can do this if
you want, but if you get bored halfway through and start making the bricks
mechanically instead of observing each one, the drawing will look worse than
if you had merely suggested the bricks.
Building something by gradually refining a prototype is good for morale
because it keeps you engaged. In software, my rule is: always have working
code. If you're writing something that you'll be able to test in an hour, then
you have the prospect of an immediate reward to motivate you. The same is true
in the arts, and particularly in oil painting. Most painters start with a
blurry sketch and gradually refine it. If you work this way, then in principle
you never have to end the day with something that actually looks unfinished.
Indeed, there is even a saying among painters: "A painting is never finished,
you just stop working on it." This idea will be familiar to anyone who has
worked on software.
Morale is another reason that it's hard to design something for an
unsophisticated user. It's hard to stay interested in something you don't like
yourself. To make something good, you have to be thinking, "wow, this is
really great," not "what a piece of shit; those fools will love it."
Design means making things for humans. But it's not just the user who's human.
The designer is human too.
Notice all this time I've been talking about "the designer." Design usually
has to be under the control of a single person to be any good. And yet it
seems to be possible for several people to collaborate on a research project.
This seems to me one of the most interesting differences between research and
design.
There have been famous instances of collaboration in the arts, but most of
them seem to have been cases of molecular bonding rather than nuclear fusion.
In an opera it's common for one person to write the libretto and another to
write the music. And during the Renaissance, journeymen from northern Europe
were often employed to do the landscapes in the backgrounds of Italian
paintings. But these aren't true collaborations. They're more like examples of
Robert Frost's "good fences make good neighbors." You can stick instances of
good design together, but within each individual project, one person has to be
in control.
I'm not saying that good design requires that one person think of everything.
There's nothing more valuable than the advice of someone whose judgement you
trust. But after the talking is done, the decision about what to do has to
rest with one person.
Why is it that research can be done by collaborators and design can't? This is
an interesting question. I don't know the answer. Perhaps, if design and
research converge, the best research is also good design, and in fact can't be
done by collaborators. A lot of the most famous scientists seem to have worked
alone. But I don't know enough to say whether there is a pattern here. It
could be simply that many famous scientists worked when collaboration was less
common.
Whatever the story is in the sciences, true collaboration seems to be
vanishingly rare in the arts. Design by committee is a synonym for bad design.
Why is that so? Is there some way to beat this limitation?
I'm inclined to think there isn't-- that good design requires a dictator. One
reason is that good design has to be all of a piece. Design is not just for
humans, but for individual humans. If a design represents an idea that fits in
one person's head, then the idea will fit in the user's head too.
---
March 2006, rev August 2009
Yesterday one of the founders we funded asked me why we started Y Combinator.
Or more precisely, he asked if we'd started YC mainly for fun.
Kind of, but not quite. It is enormously fun to be able to work with Rtm and
Trevor again. I missed that after we sold Viaweb, and for all the years after
I always had a background process running, looking for something we could do
together. There is definitely an aspect of a band reunion to Y Combinator.
Every couple days I slip and call it "Viaweb."
Viaweb we started very explicitly to make money. I was sick of living from one
freelance project to the next, and decided to just work as hard as I could
till I'd made enough to solve the problem once and for all. Viaweb was
sometimes fun, but it wasn't designed for fun, and mostly it wasn't. I'd be
surprised if any startup is. All startups are mostly schleps.
The real reason we started Y Combinator is neither selfish nor virtuous. We
didn't start it mainly to make money; we have no idea what our average returns
might be, and won't know for years. Nor did we start YC mainly to help out
young would-be founders, though we do like the idea, and comfort ourselves
occasionally with the thought that if all our investments tank, we will thus
have been doing something unselfish. (It's oddly nondeterministic.)
The real reason we started Y Combinator is one probably only a hacker would
understand. We did it because it seems such a great hack. There are thousands
of smart people who could start companies and don't, and with a relatively
small amount of force applied at just the right place, we can spring on the
world a stream of new startups that might otherwise not have existed.
In a way this is virtuous, because I think startups are a good thing. But
really what motivates us is the completely amoral desire that would motivate
any hacker who looked at some complex device and realized that with a tiny
tweak he could make it run more efficiently. In this case, the device is the
world's economy, which fortunately happens to be open source.
---
October 2005
_(This essay is derived from a talk at the 2005 Startup School.)_
How do you get good ideas for startups? That's probably the number one
question people ask me.
I'd like to reply with another question: why do people think it's hard to come
up with ideas for startups?
That might seem a stupid thing to ask. Why do they _think_ it's hard? If
people can't do it, then it _is_ hard, at least for them. Right?
Well, maybe not. What people usually say is not that they can't think of
ideas, but that they don't have any. That's not quite the same thing. It could
be the reason they don't have any is that they haven't tried to generate them.
I think this is often the case. I think people believe that coming up with
ideas for startups is very hard-- that it _must_ be very hard-- and so they
don't try to do it. They assume ideas are like miracles: they either pop into
your head or they don't.
I also have a theory about why people think this. They overvalue ideas. They
think creating a startup is just a matter of implementing some fabulous
initial idea. And since a successful startup is worth millions of dollars, a
good idea is therefore a million dollar idea.
If coming up with an idea for a startup equals coming up with a million dollar
idea, then of course it's going to seem hard. Too hard to bother trying. Our
instincts tell us something so valuable would not be just lying around for
anyone to discover.
Actually, startup ideas are not million dollar ideas, and here's an experiment
you can try to prove it: just try to sell one. Nothing evolves faster than
markets. The fact that there's no market for startup ideas suggests there's no
demand. Which means, in the narrow sense of the word, that startup ideas are
worthless.
**Questions**
The fact is, most startups end up nothing like the initial idea. It would be
closer to the truth to say the main value of your initial idea is that, in the
process of discovering it's broken, you'll come up with your real idea.
The initial idea is just a starting point-- not a blueprint, but a question.
It might help if they were expressed that way. Instead of saying that your
idea is to make a collaborative, web-based spreadsheet, say: could one make a
collaborative, web-based spreadsheet? A few grammatical tweaks, and a woefully
incomplete idea becomes a promising question to explore.
There's a real difference, because an assertion provokes objections in a way a
question doesn't. If you say: I'm going to build a web-based spreadsheet, then
critics-- the most dangerous of which are in your own head-- will immediately
reply that you'd be competing with Microsoft, that you couldn't give people
the kind of UI they expect, that users wouldn't want to have their data on
your servers, and so on.
A question doesn't seem so challenging. It becomes: let's try making a web-
based spreadsheet and see how far we get. And everyone knows that if you tried
this you'd be able to make _something_ useful. Maybe what you'd end up with
wouldn't even be a spreadsheet. Maybe it would be some kind of new spreadsheet-
like collaboration tool that doesn't even have a name yet. You wouldn't have
thought of something like that except by implementing your way toward it.
Treating a startup idea as a question changes what you're looking for. If an
idea is a blueprint, it has to be right. But if it's a question, it can be
wrong, so long as it's wrong in a way that leads to more ideas.
One valuable way for an idea to be wrong is to be only a partial solution.
When someone's working on a problem that seems too big, I always ask: is there
some way to bite off some subset of the problem, then gradually expand from
there? That will generally work unless you get trapped on a local maximum,
like 1980s-style AI, or C.
**Upwind**
So far, we've reduced the problem from thinking of a million dollar idea to
thinking of a mistaken question. That doesn't seem so hard, does it?
To generate such questions you need two things: to be familiar with promising
new technologies, and to have the right kind of friends. New technologies are
the ingredients startup ideas are made of, and conversations with friends are
the kitchen they're cooked in.
Universities have both, and that's why so many startups grow out of them.
They're filled with new technologies, because they're trying to produce
research, and only things that are new count as research. And they're full of
exactly the right kind of people to have ideas with: the other students, who
will be not only smart but elastic-minded to a fault.
The opposite extreme would be a well-paying but boring job at a big company.
Big companies are biased against new technologies, and the people you'd meet
there would be wrong too.
In an essay I wrote for high school students, I said a good rule of thumb was
to stay upwind-- to work on things that maximize your future options. The
principle applies for adults too, though perhaps it has to be modified to:
stay upwind for as long as you can, then cash in the potential energy you've
accumulated when you need to pay for kids.
I don't think people consciously realize this, but one reason downwind jobs
like churning out Java for a bank pay so well is precisely that they are
downwind. The market price for that kind of work is higher because it gives
you fewer options for the future. A job that lets you work on exciting new
stuff will tend to pay less, because part of the compensation is in the form
of the new skills you'll learn.
Grad school is the other end of the spectrum from a coding job at a big
company: the pay's low but you spend most of your time working on new stuff.
And of course, it's called "school," which makes that clear to everyone,
though in fact all jobs are some percentage school.
The right environment for having startup ideas need not be a university per
se. It just has to be a situation with a large percentage of school.
It's obvious why you want exposure to new technology, but why do you need
other people? Can't you just think of new ideas yourself? The empirical answer
is: no. Even Einstein needed people to bounce ideas off. Ideas get developed
in the process of explaining them to the right kind of person. You need that
resistance, just as a carver needs the resistance of the wood.
This is one reason Y Combinator has a rule against investing in startups with
only one founder. Practically every successful company has at least two. And
because startup founders work under great pressure, it's critical they be
friends.
I didn't realize it till I was writing this, but that may help explain why
there are so few female startup founders. I read on the Internet (so it must
be true) that only 1.7% of VC-backed startups are founded by women. The
percentage of female hackers is small, but not that small. So why the
discrepancy?
When you realize that successful startups tend to have multiple founders who
were already friends, a possible explanation emerges. People's best friends
are likely to be of the same sex, and if one group is a minority in some
population, _pairs_ of them will be a minority squared.
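To see the arithmetic, here is a quick back-of-the-envelope sketch. The 10% figure in it is purely an assumption for illustration, not a statistic from anywhere:

```python
# "Minority squared": if women are fraction p of the pool of would-be founders
# and co-founder pairs form between same-sex friends, all-female pairs make up
# p^2 of the p^2 + (1 - p)^2 same-sex pairs.
p = 0.10   # assumed share of women among would-be founders, for illustration only
female_pair_share = p**2 / (p**2 + (1 - p)**2)
print(f"{female_pair_share:.1%}")   # ~1.2% -- the same order of magnitude as the 1.7% above
```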
**Doodling**
What these groups of co-founders do together is more complicated than just
sitting down and trying to think of ideas. I suspect the most productive setup
is a kind of together-alone-together sandwich. Together you talk about some
hard problem, probably getting nowhere. Then, the next morning, one of you has
an idea in the shower about how to solve it. He runs eagerly to tell the
others, and together they work out the kinks.
What happens in that shower? It seems to me that ideas just pop into my head.
But can we say more than that?
Taking a shower is like a form of meditation. You're alert, but there's
nothing to distract you. It's in a situation like this, where your mind is
free to roam, that it bumps into new ideas.
What happens when your mind wanders? It may be like doodling. Most people have
characteristic ways of doodling. This habit is unconscious, but not random: I
found my doodles changed after I started studying painting. I started to make
the kind of gestures I'd make if I were drawing from life. They were atoms of
drawing, but arranged randomly.
Perhaps letting your mind wander is like doodling with ideas. You have certain
mental gestures you've learned in your work, and when you're not paying
attention, you keep making these same gestures, but somewhat randomly. In
effect, you call the same functions on random arguments. That's what a
metaphor is: a function applied to an argument of the wrong type.
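As a toy illustration of that last sentence (mine, not the essay's): in a dynamically typed language you can literally apply a gesture learned on one type of argument to another, and sometimes the result is meaningful in a new way.

```python
def stretch(x, factor):
    """A mental gesture learned on numbers: scale something up."""
    return x * factor

print(stretch(3, 4))      # 12 -- the kind of argument the gesture was learned on
print(stretch("ab", 4))   # 'abababab' -- the same gesture applied to the "wrong"
                          # type still produces something, which is roughly what
                          # a metaphor does
```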
Conveniently, as I was writing this, my mind wandered: would it be useful to
have metaphors in a programming language? I don't know; I don't have time to
think about this. But it's convenient because this is an example of what I
mean by habits of mind. I spend a lot of time thinking about language design,
and my habit of always asking "would x be useful in a programming language"
just got invoked.
If new ideas arise like doodles, this would explain why you have to work at
something for a while before you have any. It's not just that you can't judge
ideas till you're an expert in a field. You won't even generate ideas, because
you won't have any habits of mind to invoke.
Of course the habits of mind you invoke on some field don't have to be derived
from working in that field. In fact, it's often better if they're not. You're
not just looking for good ideas, but for good _new_ ideas, and you have a
better chance of generating those if you combine stuff from distant fields. As
hackers, one of our habits of mind is to ask, could one open-source x? For
example, what if you made an open-source operating system? A fine idea, but
not very novel. Whereas if you ask, could you make an open-source play? you
might be onto something.
Are some kinds of work better sources of habits of mind than others? I suspect
harder fields may be better sources, because to attack hard problems you need
powerful solvents. I find math is a good source of metaphors-- good enough
that it's worth studying just for that. Related fields are also good sources,
especially when they're related in unexpected ways. Everyone knows computer
science and electrical engineering are related, but precisely because everyone
knows it, importing ideas from one to the other doesn't yield great profits.
It's like importing something from Wisconsin to Michigan. Whereas (I claim)
hacking and painting are also related, in the sense that hackers and painters
are both makers, and this source of new ideas is practically virgin territory.
**Problems**
In theory you could stick together ideas at random and see what you came up
with. What if you built a peer-to-peer dating site? Would it be useful to have
an automatic book? Could you turn theorems into a commodity? When you assemble
ideas at random like this, they may not be just stupid, but semantically ill-
formed. What would it even mean to make theorems a commodity? You got me. I
didn't think of that idea, just its name.
You might come up with something useful this way, but I never have. It's like
knowing a fabulous sculpture is hidden inside a block of marble, and all you
have to do is remove the marble that isn't part of it. It's an encouraging
thought, because it reminds you there is an answer, but it's not much use in
practice because the search space is too big.
I find that to have good ideas I need to be working on some problem. You can't
start with randomness. You have to start with a problem, then let your mind
wander just far enough for new ideas to form.
In a way, it's harder to see problems than their solutions. Most people prefer
to remain in denial about problems. It's obvious why: problems are irritating.
They're problems! Imagine if people in 1700 saw their lives the way we'd see
them. It would have been unbearable. This denial is such a powerful force
that, even when presented with possible solutions, people often prefer to
believe they wouldn't work.
I saw this phenomenon when I worked on spam filters. In 2002, most people
preferred to ignore spam, and most of those who didn't preferred to believe
the heuristic filters then available were the best you could do.
I found spam intolerable, and I felt it had to be possible to recognize it
statistically. And it turns out that was all you needed to solve the problem.
The algorithm I used was ridiculously simple. Anyone who'd really tried to
solve the problem would have found it. It was just that no one had really
tried to solve the problem.
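The essay doesn't give the algorithm here (it's described in "A Plan for Spam"), so what follows is only a minimal sketch of the statistical idea: score each token by how much more often it appears in spam than in legitimate mail, then combine the scores. The tokenizer, the 0.4 probability for unseen tokens, and the clamping bounds are my assumptions for illustration, not the author's recipe.

```python
from collections import Counter
import math
import re

def tokenize(text):
    return re.findall(r"[a-z$']+", text.lower())

def train(spam_texts, ham_texts):
    spam, ham = Counter(), Counter()
    for t in spam_texts:
        spam.update(tokenize(t))
    for t in ham_texts:
        ham.update(tokenize(t))
    n_spam, n_ham = max(sum(spam.values()), 1), max(sum(ham.values()), 1)
    probs = {}
    for tok in set(spam) | set(ham):
        s, h = spam[tok] / n_spam, ham[tok] / n_ham
        probs[tok] = s / (s + h)          # crude per-token spam probability
    return probs

def spam_score(text, probs, unknown=0.4):
    # Combine per-token probabilities as a sum of log odds (a naive-Bayes-style
    # combination); clamping keeps any single token from deciding everything.
    logodds = 0.0
    for tok in tokenize(text):
        p = min(max(probs.get(tok, unknown), 0.01), 0.99)
        logodds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-logodds))

probs = train(["win money now", "cheap pills now"],
              ["meeting notes attached", "lunch tomorrow?"])
print(spam_score("win cheap money", probs))   # close to 1.0
```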
Let me repeat that recipe: finding the problem intolerable and feeling it must
be possible to solve it. Simple as it seems, that's the recipe for a lot of
startup ideas.
**Wealth**
So far most of what I've said applies to ideas in general. What's special
about startup ideas? Startup ideas are ideas for companies, and companies have
to make money. And the way to make money is to make something people want.
Wealth is what people want. I don't mean that as some kind of philosophical
statement; I mean it as a tautology.
So an idea for a startup is an idea for something people want. Wouldn't any
good idea be something people want? Unfortunately not. I think new theorems
are a fine thing to create, but there is no great demand for them. Whereas
there appears to be great demand for celebrity gossip magazines. Wealth is
defined democratically. Good ideas and valuable ideas are not quite the same
thing; the difference is individual tastes.
But valuable ideas are very close to good ideas, especially in technology. I
think they're so close that you can get away with working as if the goal were
to discover good ideas, so long as, in the final stage, you stop and ask: will
people actually pay for this? Only a few ideas are likely to make it that far
and then get shot down; RPN calculators might be one example.
One way to make something people want is to look at stuff people use now
that's broken. Dating sites are a prime example. They have millions of users,
so they must be promising something people want. And yet they work horribly.
Just ask anyone who uses them. It's as if they used the worse-is-better
approach but stopped after the first stage and handed the thing over to
marketers.
Of course, the most obvious breakage in the average computer user's life is
Windows itself. But this is a special case: you can't defeat a monopoly by a
frontal attack. Windows can and will be overthrown, but not by giving people a
better desktop OS. The way to kill it is to redefine the problem as a superset
of the current one. The problem is not, what operating system should people
use on desktop computers? but how should people use applications? There are
answers to that question that don't even involve desktop computers.
Everyone thinks Google is going to solve this problem, but it is a very subtle
one, so subtle that a company as big as Google might well get it wrong. I
think the odds are better than 50-50 that the Windows killer-- or more
accurately, Windows transcender-- will come from some little startup.
Another classic way to make something people want is to take a luxury and make
it into a commodity. People must want something if they pay a lot for it. And
it is a very rare product that can't be made dramatically cheaper if you try.
This was Henry Ford's plan. He made cars, which had been a luxury item, into a
commodity. But the idea is much older than Henry Ford. Water mills transformed
mechanical power from a luxury into a commodity, and they were used in the
Roman empire. Arguably pastoralism transformed a luxury into a commodity.
When you make something cheaper you can sell more of them. But if you make
something dramatically cheaper you often get qualitative changes, because
people start to use it in different ways. For example, once computers get so
cheap that most people can have one of their own, you can use them as
communication devices.
Often to make something dramatically cheaper you have to redefine the problem.
The Model T didn't have all the features previous cars did. It only came in
black, for example. But it solved the problem people cared most about, which
was getting from place to place.
One of the most useful mental habits I know I learned from Michael Rabin: that
the best way to solve a problem is often to redefine it. A lot of people use
this technique without being consciously aware of it, but Rabin was
spectacularly explicit. You need a big prime number? Those are pretty
expensive. How about if I give you a big number that only has a 10 to the
minus 100 chance of not being prime? Would that do? Well, probably; I mean,
that's probably smaller than the chance that I'm imagining all this anyway.
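Rabin's name is attached to exactly this kind of test. Here is a minimal sketch of a Miller-Rabin-style probabilistic primality check, my illustration rather than anything from the essay: each round a composite number survives with probability at most 1/4, so enough rounds push the error below whatever bound you care about.

```python
import random

def probably_prime(n, rounds=64):
    """Miller-Rabin-style test. A composite n survives any one round with
    probability at most 1/4, so 'rounds' rounds leave roughly a 4**-rounds
    chance of a false positive; around 170 rounds gets you into the
    10**-100 territory mentioned above."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # found a witness: n is definitely composite
    return True               # no witness found: n is almost certainly prime

print(probably_prime(2**127 - 1))   # True (a known Mersenne prime)
```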
Redefining the problem is a particularly juicy heuristic when you have
competitors, because it's so hard for rigid-minded people to follow. You can
work in plain sight and they don't realize the danger. Don't worry about us.
We're just working on search. Do one thing and do it well, that's our motto.
Making things cheaper is actually a subset of a more general technique: making
things easier. For a long time it was most of making things easier, but now
that the things we build are so complicated, there's another rapidly growing
subset: making things easier to _use_.
This is an area where there's great room for improvement. What you want to be
able to say about technology is: it just works. How often do you say that now?
Simplicity takes effort-- genius, even. The average programmer seems to
produce UI designs that are almost willfully bad. I was trying to use the
stove at my mother's house a couple weeks ago. It was a new one, and instead
of physical knobs it had buttons and an LED display. I tried pressing some
buttons I thought would cause it to get hot, and you know what it said? "Err."
Not even "Error." "Err." You can't just say "Err" to the user of a _stove_.
You should design the UI so that errors are impossible. And the boneheads who
designed this stove even had an example of such a UI to work from: the old
one. You turn one knob to set the temperature and another to set the timer.
What was wrong with that? It just worked.
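For what it's worth, the same principle translates directly into software interfaces. A rough sketch, with a hypothetical Stove class and made-up temperature settings: expose only operations that always succeed, the way a physical knob does, and there is nothing left to say "Err" about.

```python
from dataclasses import dataclass

@dataclass
class Stove:
    """Toy illustration (not from the essay) of making errors impossible:
    the only operations offered are valid ones, so no input produces "Err"."""
    temperature_f: int = 0                  # 0 means off

    VALID_SETTINGS = range(150, 551, 25)    # hypothetical supported settings

    def set_temperature(self, requested: int) -> int:
        # Snap any request to the nearest supported setting instead of failing,
        # the way a physical knob simply stops at its end positions.
        self.temperature_f = min(self.VALID_SETTINGS,
                                 key=lambda t: abs(t - requested))
        return self.temperature_f

stove = Stove()
print(stove.set_temperature(9999))   # 550 -- clamped, never "Err"
```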
It seems that, for the average engineer, more options just means more rope to
hang yourself. So if you want to start a startup, you can take almost any
existing technology produced by a big company, and assume you could build
something way easier to use.
**Design for Exit**
Success for a startup approximately equals getting bought. You need some kind
of exit strategy, because you can't get the smartest people to work for you
without giving them options likely to be worth something. Which means you
either have to get bought or go public, and the number of startups that go
public is very small.
If success probably means getting bought, should you make that a conscious
goal? The old answer was no: you were supposed to pretend that you wanted to
create a giant, public company, and act surprised when someone made you an
offer. Really, you want to buy us? Well, I suppose we'd consider it, for the
right price.
I think things are changing. If 98% of the time success means getting bought,
why not be open about it? If 98% of the time you're doing product development
on spec for some big company, why not think of that as your task? One
advantage of this approach is that it gives you another source of ideas: look
at big companies, think what they should be doing, and do it yourself. Even if
they already know it, you'll probably be done faster.
Just be sure to make something multiple acquirers will want. Don't fix
Windows, because the only potential acquirer is Microsoft, and when there's
only one acquirer, they don't have to hurry. They can take their time and copy
you instead of buying you. If you want to get market price, work on something
where there's competition.
If an increasing number of startups are created to do product development on
spec, it will be a natural counterweight to monopolies. Once some type of
technology is captured by a monopoly, it will only evolve at big company rates
instead of startup rates, whereas alternatives will evolve with especial
speed. A free market interprets monopoly as damage and routes around it.
**The Woz Route**
The most productive way to generate startup ideas is also the most unlikely-
sounding: by accident. If you look at how famous startups got started, a lot
of them weren't initially supposed to be startups. Lotus began with a program
Mitch Kapor wrote for a friend. Apple got started because Steve Wozniak wanted
to build microcomputers, and his employer, Hewlett-Packard, wouldn't let him
do it at work. Yahoo began as David Filo's personal collection of links.
This is not the only way to start startups. You can sit down and consciously
come up with an idea for a company; we did. But measured in total market cap,
the build-stuff-for-yourself model might be more fruitful. It certainly has to
be the most fun way to come up with startup ideas. And since a startup ought
to have multiple founders who were already friends before they decided to
start a company, the rather surprising conclusion is that the best way to
generate startup ideas is to do what hackers do for fun: cook up amusing hacks
with your friends.
It seems like it violates some kind of conservation law, but there it is: the
best way to get a "million dollar idea" is just to do what hackers enjoy doing
anyway.
* * *
January 2004
Have you ever seen an old photo of yourself and been embarrassed at the way
you looked? _Did we actually dress like that?_ We did. And we had no idea how
silly we looked. It's the nature of fashion to be invisible, in the same way
the movement of the earth is invisible to all of us riding on it.
What scares me is that there are moral fashions too. They're just as
arbitrary, and just as invisible to most people. But they're much more
dangerous. Fashion is mistaken for good design; moral fashion is mistaken for
good. Dressing oddly gets you laughed at. Violating moral fashions can get you
fired, ostracized, imprisoned, or even killed.
If you could travel back in a time machine, one thing would be true no matter
where you went: you'd have to watch what you said. Opinions we consider
harmless could have gotten you in big trouble. I've already said at least one
thing that would have gotten me in big trouble in most of Europe in the
seventeenth century, and did get Galileo in big trouble when he said it-- that
the earth moves.
It seems to be a constant throughout history: In every period, people believed
things that were just ridiculous, and believed them so strongly that you would
have gotten in terrible trouble for saying otherwise.
Is our time any different? To anyone who has read any amount of history, the
answer is almost certainly no. It would be a remarkable coincidence if ours
were the first era to get everything just right.
It's tantalizing to think we believe things that people in the future will
find ridiculous. What _would_ someone coming back to visit us in a time
machine have to be careful not to say? That's what I want to study here. But I
want to do more than just shock everyone with the heresy du jour. I want to
find general recipes for discovering what you can't say, in any era.
**The Conformist Test**
Let's start with a test: Do you have any opinions that you would be reluctant
to express in front of a group of your peers?
If the answer is no, you might want to stop and think about that. If
everything you believe is something you're supposed to believe, could that
possibly be a coincidence? Odds are it isn't. Odds are you just think what
you're told.
The other alternative would be that you independently considered every
question and came up with the exact same answers that are now considered
acceptable. That seems unlikely, because you'd also have to make the same
mistakes. Mapmakers deliberately put slight mistakes in their maps so they can
tell when someone copies them. If another map has the same mistake, that's
very convincing evidence.
Like every other era in history, our moral map almost certainly contains a few
mistakes. And anyone who makes the same mistakes probably didn't do it by
accident. It would be like someone claiming they had independently decided in
1972 that bell-bottom jeans were a good idea.
If you believe everything you're supposed to now, how can you be sure you
wouldn't also have believed everything you were supposed to if you had grown
up among the plantation owners of the pre-Civil War South, or in Germany in
the 1930s or among the Mongols in 1200, for that matter? Odds are you would
have.
Back in the era of terms like "well-adjusted," the idea seemed to be that
there was something wrong with you if you thought things you didn't dare say
out loud. This seems backward. Almost certainly, there is something wrong with
you if you _don't_ think things you don't dare say out loud.
**Trouble**
What can't we say? One way to find these ideas is simply to look at things
people do say, and get in trouble for.
Of course, we're not just looking for things we can't say. We're looking for
things we can't say that are true, or at least have enough chance of being
true that the question should remain open. But many of the things people get
in trouble for saying probably do make it over this second, lower threshold.
No one gets in trouble for saying that 2 + 2 is 5, or that people in
Pittsburgh are ten feet tall. Such obviously false statements might be treated
as jokes, or at worst as evidence of insanity, but they are not likely to make
anyone mad. The statements that make people mad are the ones they worry might
be believed. I suspect the statements that make people maddest are those they
worry might be true.
If Galileo had said that people in Padua were ten feet tall, he would have
been regarded as a harmless eccentric. Saying the earth orbited the sun was
another matter. The church knew this would set people thinking.
Certainly, as we look back on the past, this rule of thumb works well. A lot
of the statements people got in trouble for seem harmless now. So it's likely
that visitors from the future would agree with at least some of the statements
that get people in trouble today. Do we have no Galileos? Not likely.
To find them, keep track of opinions that get people in trouble, and start
asking, could this be true? Ok, it may be heretical (or whatever modern
equivalent), but might it also be true?
**Heresy**
This won't get us all the answers, though. What if no one happens to have
gotten in trouble for a particular idea yet? What if some idea would be so
radioactively controversial that no one would dare express it in public? How
can we find these too?
Another approach is to follow that word, heresy. In every period of history,
there seem to have been labels that got applied to statements to shoot them
down before anyone had a chance to ask if they were true or not. "Blasphemy",
"sacrilege", and "heresy" were such labels for a good part of western history,
as in more recent times "indecent", "improper", and "unamerican" have been. By
now these labels have lost their sting. They always do. By now they're mostly
used ironically. But in their time, they had real force.
The word "defeatist", for example, has no particular political connotations
now. But in Germany in 1917 it was a weapon, used by Ludendorff in a purge of
those who favored a negotiated peace. At the start of World War II it was used
extensively by Churchill and his supporters to silence their opponents. In
1940, any argument against Churchill's aggressive policy was "defeatist". Was
it right or wrong? Ideally, no one got far enough to ask that.
We have such labels today, of course, quite a lot of them, from the all-
purpose "inappropriate" to the dreaded "divisive." In any period, it should be
easy to figure out what such labels are, simply by looking at what people call
ideas they disagree with besides untrue. When a politician says his opponent
is mistaken, that's a straightforward criticism, but when he attacks a
statement as "divisive" or "racially insensitive" instead of arguing that it's
false, we should start paying attention.
So another way to figure out which of our taboos future generations will laugh
at is to start with the labels. Take a label-- "sexist", for example-- and try
to think of some ideas that would be called that. Then for each ask, might
this be true?
Just start listing ideas at random? Yes, because they won't really be random.
The ideas that come to mind first will be the most plausible ones. They'll be
things you've already noticed but didn't let yourself think.
In 1989 some clever researchers tracked the eye movements of radiologists as
they scanned chest images for signs of lung cancer. They found that even
when the radiologists missed a cancerous lesion, their eyes had usually paused
at the site of it. Part of their brain knew there was something there; it just
didn't percolate all the way up into conscious knowledge. I think many
interesting heretical thoughts are already mostly formed in our minds. If we
turn off our self-censorship temporarily, those will be the first to emerge.
**Time and Space**
If we could look into the future it would be obvious which of our taboos
they'd laugh at. We can't do that, but we can do something almost as good: we
can look into the past. Another way to figure out what we're getting wrong is
to look at what used to be acceptable and is now unthinkable.
Changes between the past and the present sometimes do represent progress. In a
field like physics, if we disagree with past generations it's because we're
right and they're wrong. But this becomes rapidly less true as you move away
from the certainty of the hard sciences. By the time you get to social
questions, many changes are just fashion. The age of consent fluctuates like
hemlines.
We may imagine that we are a great deal smarter and more virtuous than past
generations, but the more history you read, the less likely this seems. People
in past times were much like us. Not heroes, not barbarians. Whatever their
ideas were, they were ideas reasonable people could believe.
So here is another source of interesting heresies. Diff present ideas against
those of various past cultures, and see what you get. Some will be
shocking by present standards. Ok, fine; but which might also be true?
You don't have to look into the past to find big differences. In our own time,
different societies have wildly varying ideas of what's ok and what isn't. So
you can try diffing other cultures' ideas against ours as well. (The best way
to do that is to visit them.) Any idea that's considered harmless in a
significant percentage of times and places, and yet is taboo in ours, is a
candidate for something we're mistaken about.
For example, at the high water mark of political correctness in the early
1990s, Harvard distributed to its faculty and staff a brochure saying, among
other things, that it was inappropriate to compliment a colleague or student's
clothes. No more "nice shirt." I think this principle is rare among the
world's cultures, past or present. There are probably more where it's
considered especially polite to compliment someone's clothing than where it's
considered improper. Odds are this is, in a mild form, an example of one of
the taboos a visitor from the future would have to be careful to avoid if he
happened to set his time machine for Cambridge, Massachusetts, 1992.
**Prigs**
Of course, if they have time machines in the future they'll probably have a
separate reference manual just for Cambridge. This has always been a fussy
place, a town of i dotters and t crossers, where you're liable to get both
your grammar and your ideas corrected in the same conversation. And that
suggests another way to find taboos. Look for prigs, and see what's inside
their heads.
Kids' heads are repositories of all our taboos. It seems fitting to us that
kids' ideas should be bright and clean. The picture we give them of the world
is not merely simplified, to suit their developing minds, but sanitized as
well, to suit our ideas of what kids ought to think.
You can see this on a small scale in the matter of dirty words. A lot of my
friends are starting to have children now, and they're all trying not to use
words like "fuck" and "shit" within baby's hearing, lest baby start using
these words too. But these words are part of the language, and adults use them
all the time. So parents are giving their kids an inaccurate idea of the
language by not using them. Why do they do this? Because they don't think it's
fitting that kids should use the whole language. We like children to seem
innocent.
Most adults, likewise, deliberately give kids a misleading view of the world.
One of the most obvious examples is Santa Claus. We think it's cute for little
kids to believe in Santa Claus. I myself think it's cute for little kids to
believe in Santa Claus. But one wonders, do we tell them this stuff for their
sake, or for ours?
I'm not arguing for or against this idea here. It is probably inevitable that
parents should want to dress up their kids' minds in cute little baby outfits.
I'll probably do it myself. The important thing for our purposes is that, as a
result, a well brought-up teenage kid's brain is a more or less complete
collection of all our taboos-- and in mint condition, because they're
untainted by experience. Whatever we think that will later turn out to be
ridiculous, it's almost certainly inside that head.
How do we get at these ideas? By the following thought experiment. Imagine a
kind of latter-day Conrad character who has worked for a time as a mercenary
in Africa, for a time as a doctor in Nepal, for a time as the manager of a
nightclub in Miami. The specifics don't matter-- just someone who has seen a
lot. Now imagine comparing what's inside this guy's head with what's inside
the head of a well-behaved sixteen year old girl from the suburbs. What does
he think that would shock her? He knows the world; she knows, or at least
embodies, present taboos. Subtract one from the other, and the result is what
we can't say.
**Mechanism**
I can think of one more way to figure out what we can't say: to look at how
taboos are created. How do moral fashions arise, and why are they adopted? If
we can understand this mechanism, we may be able to see it at work in our own
time.
Moral fashions don't seem to be created the way ordinary fashions are.
Ordinary fashions seem to arise by accident when everyone imitates the whim of
some influential person. The fashion for broad-toed shoes in late fifteenth
century Europe began because Charles VIII of France had six toes on one foot.
The fashion for the name Gary began when the actor Frank Cooper adopted the
name of a tough mill town in Indiana. Moral fashions more often seem to be
created deliberately. When there's something we can't say, it's often because
some group doesn't want us to.
The prohibition will be strongest when the group is nervous. The irony of
Galileo's situation was that he got in trouble for repeating Copernicus's
ideas. Copernicus himself didn't. In fact, Copernicus was a canon of a
cathedral, and dedicated his book to the pope. But by Galileo's time the
church was in the throes of the Counter-Reformation and was much more worried
about unorthodox ideas.
To launch a taboo, a group has to be poised halfway between weakness and
power. A confident group doesn't need taboos to protect it. It's not
considered improper to make disparaging remarks about Americans, or the
English. And yet a group has to be powerful enough to enforce a taboo.
Coprophiles, as of this writing, don't seem to be numerous or energetic enough
to have had their interests promoted to a lifestyle.
I suspect the biggest source of moral taboos will turn out to be power
struggles in which one side only barely has the upper hand. That's where
you'll find a group powerful enough to enforce taboos, but weak enough to need
them.
Most struggles, whatever they're really about, will be cast as struggles
between competing ideas. The English Reformation was at bottom a struggle for
wealth and power, but it ended up being cast as a struggle to preserve the
souls of Englishmen from the corrupting influence of Rome. It's easier to get
people to fight for an idea. And whichever side wins, their ideas will also be
considered to have triumphed, as if God wanted to signal his agreement by
selecting that side as the victor.
We often like to think of World War II as a triumph of freedom over
totalitarianism. We conveniently forget that the Soviet Union was also one of
the winners.
I'm not saying that struggles are never about ideas, just that they will
always be made to seem to be about ideas, whether they are or not. And just as
there is nothing so unfashionable as the last, discarded fashion, there is
nothing so wrong as the principles of the most recently defeated opponent.
Representational art is only now recovering from the approval of both Hitler
and Stalin.
Although moral fashions tend to arise from different sources than fashions in
clothing, the mechanism of their adoption seems much the same. The early
adopters will be driven by ambition: self-consciously cool people who want to
distinguish themselves from the common herd. As the fashion becomes
established they'll be joined by a second, much larger group, driven by fear.
This second group adopts the fashion not because they want to stand out but
because they are afraid of standing out.
So if you want to figure out what we can't say, look at the machinery of
fashion and try to predict what it would make unsayable. What groups are
powerful but nervous, and what ideas would they like to suppress? What ideas
were tarnished by association when they ended up on the losing side of a
recent struggle? If a self-consciously cool person wanted to differentiate
himself from preceding fashions (e.g. from his parents), which of their ideas
would he tend to reject? What are conventional-minded people afraid of saying?
This technique won't find us all the things we can't say. I can think of some
that aren't the result of any recent struggle. Many of our taboos are rooted
deep in the past. But this approach, combined with the preceding four, will
turn up a good number of unthinkable ideas.
**Why**
Some would ask, why would one want to do this? Why deliberately go poking
around among nasty, disreputable ideas? Why look under rocks?
I do it, first of all, for the same reason I did look under rocks as a kid:
plain curiosity. And I'm especially curious about anything that's forbidden.
Let me see and decide for myself.
Second, I do it because I don't like the idea of being mistaken. If, like
other eras, we believe things that will later seem ridiculous, I want to know
what they are so that I, at least, can avoid believing them.
Third, I do it because it's good for the brain. To do good work you need a
brain that can go anywhere. And you especially need a brain that's in the
habit of going where it's not supposed to.
Great work tends to grow out of ideas that others have overlooked, and no idea
is so overlooked as one that's unthinkable. Natural selection, for example.
It's so simple. Why didn't anyone think of it before? Well, that is all too
obvious. Darwin himself was careful to tiptoe around the implications of his
theory. He wanted to spend his time thinking about biology, not arguing with
people who accused him of being an atheist.
In the sciences, especially, it's a great advantage to be able to question
assumptions. The m.o. of scientists, or at least of the good ones, is
precisely that: look for places where conventional wisdom is broken, and then
try to pry apart the cracks and see what's underneath. That's where new
theories come from.
A good scientist, in other words, does not merely ignore conventional wisdom,
but makes a special effort to break it. Scientists go looking for trouble.
This should be the m.o. of any scholar, but scientists seem much more willing
to look under rocks.
Why? It could be that the scientists are simply smarter; most physicists
could, if necessary, make it through a PhD program in French literature, but
few professors of French literature could make it through a PhD program in
physics. Or it could be because it's clearer in the sciences whether theories
are true or false, and this makes scientists bolder. (Or it could be that,
because it's clearer in the sciences whether theories are true or false, you
have to be smart to get jobs as a scientist, rather than just a good
politician.)
Whatever the reason, there seems a clear correlation between intelligence and
willingness to consider shocking ideas. This isn't just because smart people
actively work to find holes in conventional thinking. I think conventions also
have less hold over them to start with. You can see that in the way they
dress.
It's not only in the sciences that heresy pays off. In any competitive field,
you can win big by seeing things that others daren't. And in every field there
are probably heresies few dare utter. Within the US car industry there is a
lot of hand-wringing now about declining market share. Yet the cause is so
obvious that any observant outsider could explain it in a second: they make
bad cars. And they have for so long that by now the US car brands are
antibrands-- something you'd buy a car despite, not because of. Cadillac
stopped being the Cadillac of cars in about 1970. And yet I suspect no one
dares say this. Otherwise these companies would have tried to fix the
problem.
Training yourself to think unthinkable thoughts has advantages beyond the
thoughts themselves. It's like stretching. When you stretch before running,
you put your body into positions much more extreme than any it will assume
during the run. If you can think things so outside the box that they'd make
people's hair stand on end, you'll have no trouble with the small trips
outside the box that people call innovative.
**_Pensieri Stretti_**
When you find something you can't say, what do you do with it? My advice is,
don't say it. Or at least, pick your battles.
Suppose in the future there is a movement to ban the color yellow. Proposals
to paint anything yellow are denounced as "yellowist", as is anyone suspected
of liking the color. People who like orange are tolerated but viewed with
suspicion. Suppose you realize there is nothing wrong with yellow. If you go
around saying this, you'll be denounced as a yellowist too, and you'll find
yourself having a lot of arguments with anti-yellowists. If your aim in life
is to rehabilitate the color yellow, that may be what you want. But if you're
mostly interested in other questions, being labelled as a yellowist will just
be a distraction. Argue with idiots, and you become an idiot.
The most important thing is to be able to think what you want, not to say what
you want. And if you feel you have to say everything you think, it may inhibit
you from thinking improper thoughts. I think it's better to follow the
opposite policy. Draw a sharp line between your thoughts and your speech.
Inside your head, anything is allowed. Within my head I make a point of
encouraging the most outrageous thoughts I can imagine. But, as in a secret
society, nothing that happens within the building should be told to outsiders.
The first rule of Fight Club is, you do not talk about Fight Club.
When Milton was going to visit Italy in the 1630s, Sir Henry Wootton, who had
been ambassador to Venice, told him his motto should be _"i pensieri stretti &
il viso sciolto."_ Closed thoughts and an open face. Smile at everyone, and
don't tell them what you're thinking. This was wise advice. Milton was an
argumentative fellow, and the Inquisition was a bit restive at that time. But
I think the difference between Milton's situation and ours is only a matter of
degree. Every era has its heresies, and if you don't get imprisoned for them
you will at least get in enough trouble that it becomes a complete
distraction.
I admit it seems cowardly to keep quiet. When I read about the harassment to
which the Scientologists subject their critics , or that pro-Israel groups
are "compiling dossiers" on those who speak out against Israeli human rights
abuses , or about people being sued for violating the DMCA , part of
me wants to say, "All right, you bastards, bring it on." The problem is, there
are so many things you can't say. If you said them all you'd have no time left
for your real work. You'd have to turn into Noam Chomsky.
The trouble with keeping your thoughts secret, though, is that you lose the
advantages of discussion. Talking about an idea leads to more ideas. So the
optimal plan, if you can manage it, is to have a few trusted friends you can
speak openly to. This is not just a way to develop ideas; it's also a good
rule of thumb for choosing friends. The people you can say heretical things to
without getting jumped on are also the most interesting to know.
**_Viso Sciolto?_**
I don't think we need the _viso sciolto_ so much as the _pensieri stretti._
Perhaps the best policy is to make it plain that you don't agree with whatever
zealotry is current in your time, but not to be too specific about what you
disagree with. Zealots will try to draw you out, but you don't have to answer
them. If they try to force you to treat a question on their terms by asking
"are you with us or against us?" you can always just answer "neither".
Better still, answer "I haven't decided." That's what Larry Summers did when a
group tried to put him in this position. Explaining himself later, he said "I
don't do litmus tests." A lot of the questions people get hot about are
actually quite complicated. There is no prize for getting the answer quickly.
If the anti-yellowists seem to be getting out of hand and you want to fight
back, there are ways to do it without getting yourself accused of being a
yellowist. Like skirmishers in an ancient army, you want to avoid directly
engaging the main body of the enemy's troops. Better to harass them with
arrows from a distance.
One way to do this is to ratchet the debate up one level of abstraction. If
you argue against censorship in general, you can avoid being accused of
whatever heresy is contained in the book or film that someone is trying to
censor. You can attack labels with meta-labels: labels that refer to the use
of labels to prevent discussion. The spread of the term "political
correctness" meant the beginning of the end of political correctness, because
it enabled one to attack the phenomenon as a whole without being accused of
any of the specific heresies it sought to suppress.
Another way to counterattack is with metaphor. Arthur Miller undermined the
House Un-American Activities Committee by writing a play, "The Crucible,"
about the Salem witch trials. He never referred directly to the committee and
so gave them no way to reply. What could HUAC do, defend the Salem witch
trials? And yet Miller's metaphor stuck so well that to this day the
activities of the committee are often described as a "witch-hunt."
Best of all, probably, is humor. Zealots, whatever their cause, invariably
lack a sense of humor. They can't reply in kind to jokes. They're as unhappy
on the territory of humor as a mounted knight on a skating rink. Victorian
prudishness, for example, seems to have been defeated mainly by treating it as
a joke. Likewise its reincarnation as political correctness. "I am glad that I
managed to write 'The Crucible,'" Arthur Miller wrote, "but looking back I
have often wished I'd had the temperament to do an absurd comedy, which is
what the situation deserved."
**ABQ**
A Dutch friend says I should use Holland as an example of a tolerant society.
It's true they have a long tradition of comparative open-mindedness. For
centuries the low countries were the place to go to say things you couldn't
say anywhere else, and this helped to make the region a center of scholarship
and industry (which have been closely tied for longer than most people
realize). Descartes, though claimed by the French, did much of his thinking in
Holland.
And yet, I wonder. The Dutch seem to live their lives up to their necks in
rules and regulations. There's so much you can't do there; is there really
nothing you can't say?
Certainly the fact that they value open-mindedness is no guarantee. Who thinks
they're not open-minded? Our hypothetical prim miss from the suburbs thinks
she's open-minded. Hasn't she been taught to be? Ask anyone, and they'll say
the same thing: they're pretty open-minded, though they draw the line at
things that are really wrong. (Some tribes may avoid "wrong" as judgemental,
and may instead use a more neutral sounding euphemism like "negative" or
"destructive".)
When people are bad at math, they know it, because they get the wrong answers
on tests. But when people are bad at open-mindedness they don't know it. In
fact they tend to think the opposite. Remember, it's the nature of fashion to
be invisible. It wouldn't work otherwise. Fashion doesn't seem like fashion to
someone in the grip of it. It just seems like the right thing to do. It's only
by looking from a distance that we see oscillations in people's idea of the
right thing to do, and can identify them as fashions.
Time gives us such distance for free. Indeed, the arrival of new fashions
makes old fashions easy to see, because they seem so ridiculous by contrast.
From one end of a pendulum's swing, the other end seems especially far away.
To see fashion in your own time, though, requires a conscious effort. Without
time to give you distance, you have to create distance yourself. Instead of
being part of the mob, stand as far away from it as you can and watch what
it's doing. And pay especially close attention whenever an idea is being
suppressed. Web filters for children and employees often ban sites containing
pornography, violence, and hate speech. What counts as pornography and
violence? And what, exactly, is "hate speech?" This sounds like a phrase out
of _1984._
Labels like that are probably the biggest external clue. If a statement is
false, that's the worst thing you can say about it. You don't need to say that
it's heretical. And if it isn't false, it shouldn't be suppressed. So when you
see statements being attacked as x-ist or y-ic (substitute your current values
of x and y), whether in 1630 or 2030, that's a sure sign that something is
wrong. When you hear such labels being used, ask why.
Especially if you hear yourself using them. It's not just the mob you need to
learn to watch from a distance. You need to be able to watch your own thoughts
from a distance. That's not a radical idea, by the way; it's the main
difference between children and adults. When a child gets angry because he's
tired, he doesn't know what's happening. An adult can distance himself enough
from the situation to say "never mind, I'm just tired." I don't see why one
couldn't, by a similar process, learn to recognize and discount the effects of
moral fashions.
You have to take that extra step if you want to think clearly. But it's
harder, because now you're working against social customs instead of with
them. Everyone encourages you to grow up to the point where you can discount
your own bad moods. Few encourage you to continue to the point where you can
discount society's bad moods.
How can you see the wave, when you're the water? Always be questioning. That's
the only defence. What can't you say? And why?
* * *
July 2006
I've discovered a handy test for figuring out what you're addicted to. Imagine
you were going to spend the weekend at a friend's house on a little island off
the coast of Maine. There are no shops on the island and you won't be able to
leave while you're there. Also, you've never been to this house before, so you
can't assume it will have more than any house might.
What, besides clothes and toiletries, do you make a point of packing? That's
what you're addicted to. For example, if you find yourself packing a bottle of
vodka (just in case), you may want to stop and think about that.
For me the list is four things: books, earplugs, a notebook, and a pen.
There are other things I might bring if I thought of it, like music, or tea,
but I can live without them. I'm not so addicted to caffeine that I wouldn't
risk the house not having any tea, just for a weekend.
Quiet is another matter. I realize it seems a bit eccentric to take earplugs
on a trip to an island off the coast of Maine. If anywhere should be quiet,
that should. But what if the person in the next room snored? What if there was
a kid playing basketball? (Thump, thump, thump... thump.) Why risk it?
Earplugs are small.
Sometimes I can think with noise. If I already have momentum on some project,
I can work in noisy places. I can edit an essay or debug code in an airport.
But airports are not so bad: most of the noise is whitish. I couldn't work
with the sound of a sitcom coming through the wall, or a car in the street
playing thump-thump music.
And of course there's another kind of thinking, when you're starting something
new, that requires complete quiet. You never know when this will strike. It's
just as well to carry plugs.
The notebook and pen are professional equipment, as it were. Though actually
there is something druglike about them, in the sense that their main purpose
is to make me feel better. I hardly ever go back and read stuff I write down
in notebooks. It's just that if I can't write things down, worrying about
remembering one idea gets in the way of having the next. Pen and paper wick
ideas.
The best notebooks I've found are made by a company called Miquelrius. I use
their smallest size, which is about 2.5 x 4 in. The secret to writing on such
narrow pages is to break words only when you run out of space, like a Latin
inscription. I use the cheapest plastic Bic ballpoints, partly because their
gluey ink doesn't seep through pages, and partly so I don't worry about losing
them.
I only started carrying a notebook about three years ago. Before that I used
whatever scraps of paper I could find. But the problem with scraps of paper is
that they're not ordered. In a notebook you can guess what a scribble means by
looking at the pages around it. In the scrap era I was constantly finding
notes I'd written years before that might say something I needed to remember,
if I could only figure out what.
As for books, I know the house would probably have something to read. On the
average trip I bring four books and only read one of them, because I find new
books to read en route. Really, bringing books is insurance.
I realize this dependence on books is not entirely good—that what I need them
for is distraction. The books I bring on trips are often quite virtuous, the
sort of stuff that might be assigned reading in a college class. But I know my
motives aren't virtuous. I bring books because if the world gets boring I need
to be able to slip into another, distilled by some writer. It's like eating jam
when you know you should be eating fruit.
There is a point where I'll do without books. I was walking in some steep
mountains once, and decided I'd rather just think, if I was bored, rather than
carry a single unnecessary ounce. It wasn't so bad. I found I could entertain
myself by having ideas instead of reading other people's. If you stop eating
jam, fruit starts to taste better.
So maybe I'll try not bringing books on some future trip. They're going to
have to pry the plugs out of my cold, dead ears, however.
* * *
March 2006
_(This essay is derived from a talk at Google.)_
A few weeks ago I found to my surprise that I'd been granted four patents.
This was all the more surprising because I'd only applied for three. The
patents aren't mine, of course. They were assigned to Viaweb, and became
Yahoo's when they bought us. But the news set me thinking about the question
of software patents generally.
Patents are a hard problem. I've had to advise most of the startups we've
funded about them, and despite years of experience I'm still not always sure
I'm giving the right advice.
One thing I do feel pretty certain of is that if you're against software
patents, you're against patents in general. Gradually our machines consist
more and more of software. Things that used to be done with levers and cams
and gears are now done with loops and trees and closures. There's nothing
special about physical embodiments of control systems that should make them
patentable, and the software equivalent not.
Unfortunately, patent law is inconsistent on this point. Patent law in most
countries says that algorithms aren't patentable. This rule is left over from
a time when "algorithm" meant something like the Sieve of Eratosthenes. In
1800, people could not see as readily as we can that a great many patents on
mechanical objects were really patents on the algorithms they embodied.
Patent lawyers still have to pretend that's what they're doing when they
patent algorithms. You must not use the word "algorithm" in the title of a
patent application, just as you must not use the word "essays" in the title of
a book. If you want to patent an algorithm, you have to frame it as a computer
system executing that algorithm. Then it's mechanical; phew. The default
euphemism for algorithm is "system and method." Try a patent search for that
phrase and see how many results you get.
Since software patents are no different from hardware patents, people who say
"software patents are evil" are saying simply "patents are evil." So why do so
many people complain about software patents specifically?
I think the problem is more with the patent office than the concept of
software patents. Whenever software meets government, bad things happen,
because software changes fast and government changes slow. The patent office
has been overwhelmed by both the volume and the novelty of applications for
software patents, and as a result they've made a lot of mistakes.
The most common is to grant patents that shouldn't be granted. To be
patentable, an invention has to be more than new. It also has to be non-
obvious. And this, especially, is where the USPTO has been dropping the ball.
Slashdot has an icon that expresses the problem vividly: a knife and fork with
the words "patent pending" superimposed.
The scary thing is, this is the _only_ icon they have for patent stories.
Slashdot readers now take it for granted that a story about a patent will be
about a bogus patent. That's how bad the problem has become.
The problem with Amazon's notorious one-click patent, for example, is not that
it's a software patent, but that it's obvious. Any online store that kept
people's shipping addresses would have implemented this. The reason Amazon did
it first was not that they were especially smart, but because they were one of
the earliest sites with enough clout to force customers to log in before they
could buy something.
We, as hackers, know the USPTO is letting people patent the knives and forks
of our world. The problem is, the USPTO are not hackers. They're probably good
at judging new inventions for casting steel or grinding lenses, but they don't
understand software yet.
At this point an optimist would be tempted to add "but they will eventually."
Unfortunately that might not be true. The problem with software patents is an
instance of a more general one: the patent office takes a while to understand
new technology. If so, this problem will only get worse, because the rate of
technological change seems to be increasing. In thirty years, the patent
office may understand the sort of things we now patent as software, but there
will be other new types of inventions they understand even less.
Applying for a patent is a negotiation. You generally apply for a broader
patent than you think you'll be granted, and the examiners reply by throwing
out some of your claims and granting others. So I don't really blame Amazon
for applying for the one-click patent. The big mistake was the patent
office's, for not insisting on something narrower, with real technical
content. By granting such an over-broad patent, the USPTO in effect slept with
Amazon on the first date. Was Amazon supposed to say no?
Where Amazon went over to the dark side was not in applying for the patent,
but in enforcing it. A lot of companies (Microsoft, for example) have been
granted large numbers of preposterously over-broad patents, but they keep them
mainly for defensive purposes. Like nuclear weapons, the main role of big
companies' patent portfolios is to threaten anyone who attacks them with a
counter-suit. Amazon's suit against Barnes & Noble was thus the equivalent of
a nuclear first strike.
That suit probably hurt Amazon more than it helped them. Barnes & Noble was a
lame site; Amazon would have crushed them anyway. To attack a rival they could
have ignored, Amazon put a lasting black mark on their own reputation. Even
now I think if you asked hackers to free-associate about Amazon, the one-click
patent would turn up in the first ten topics.
Google clearly doesn't feel that merely holding patents is evil. They've
applied for a lot of them. Are they hypocrites? Are patents evil?
There are really two variants of that question, and people answering it often
aren't clear in their own minds which they're answering. There's a narrow
variant: is it bad, given the current legal system, to apply for patents? and
also a broader one: is it bad that the current legal system allows patents?
These are separate questions. For example, in preindustrial societies like
medieval Europe, when someone attacked you, you didn't call the police. There
were no police. When attacked, you were supposed to fight back, and there were
conventions about how to do it. Was this wrong? That's two questions: was it
wrong to take justice into your own hands, and was it wrong that you had to?
We tend to say yes to the second, but no to the first. If no one else will
defend you, you have to defend yourself.
The situation with patents is similar. Business is a kind of ritualized
warfare. Indeed, it evolved from actual warfare: most early traders switched
on the fly from merchants to pirates depending on how strong you seemed. In
business there are certain rules describing how companies may and may not
compete with one another, and someone deciding that they're going to play by
their own rules is missing the point. Saying "I'm not going to apply for
patents just because everyone else does" is not like saying "I'm not going to
lie just because everyone else does." It's more like saying "I'm not going to
use TCP/IP just because everyone else does." Oh yes you are.
A closer comparison might be someone seeing a hockey game for the first time,
realizing with shock that the players were _deliberately_ bumping into one
another, and deciding that one would on no account be so rude when playing
hockey oneself.
Hockey allows checking. It's part of the game. If your team refuses to do it,
you simply lose. So it is in business. Under the present rules, patents are
part of the game.
What does that mean in practice? We tell the startups we fund not to worry
about infringing patents, because startups rarely get sued for patent
infringement. There are only two reasons someone might sue you: for money, or
to prevent you from competing with them. Startups are too poor to be worth
suing for money. And in practice they don't seem to get sued much by
competitors, either. They don't get sued by other startups because (a) patent
suits are an expensive distraction, and (b) since the other startups are as
young as they are, their patents probably haven't issued yet. Nor do
startups, at least in the software business, seem to get sued much by
established competitors. Despite all the patents Microsoft holds, I don't know
of an instance where they sued a startup for patent infringement. Companies
like Microsoft and Oracle don't win by winning lawsuits. That's too uncertain.
They win by locking competitors out of their sales channels. If you do manage
to threaten them, they're more likely to buy you than sue you.
When you read of big companies filing patent suits against smaller ones, it's
usually a big company on the way down, grasping at straws. For example,
Unisys's attempts to enforce their patent on LZW compression. When you see a
big company threatening patent suits, sell. When a company starts fighting
over IP, it's a sign they've lost the real battle, for users.
A company that sues competitors for patent infringement is like a defender who
has been beaten so thoroughly that he turns to plead with the referee. You
don't do that if you can still reach the ball, even if you genuinely believe
you've been fouled. So a company threatening patent suits is a company in
trouble.
When we were working on Viaweb, a bigger company in the e-commerce business
was granted a patent on online ordering, or something like that. I got a call
from a VP there asking if we'd like to license it. I replied that I thought
the patent was completely bogus, and would never hold up in court. "Ok," he
replied. "So, are you guys hiring?"
If your startup grows big enough, however, you'll start to get sued, no matter
what you do. If you go public, for example, you'll be sued by multiple patent
trolls who hope you'll pay them off to go away. More on them later.
In other words, no one will sue you for patent infringement till you have
money, and once you have money, people will sue you whether they have grounds
to or not. So I advise fatalism. Don't waste your time worrying about patent
infringement. You're probably violating a patent every time you tie your
shoelaces. At the start, at least, just worry about making something great and
getting lots of users. If you grow to the point where anyone considers you
worth attacking, you're doing well.
We do advise the companies we fund to apply for patents, but not so they can
sue competitors. Successful startups either get bought or grow into big
companies. If a startup wants to grow into a big company, they should apply
for patents to build up the patent portfolio they'll need to maintain an armed
truce with other big companies. If they want to get bought, they should apply
for patents because patents are part of the mating dance with acquirers.
Most startups that succeed do it by getting bought, and most acquirers care
about patents. Startup acquisitions are usually a build-vs-buy decision for
the acquirer. Should we buy this little startup or build our own? And two
things, especially, make them decide not to build their own: if you already
have a large and rapidly growing user base, and if you have a fairly solid
patent application on critical parts of your software.
There's a third reason big companies should prefer buying to building: that if
they built their own, they'd screw it up. But few big companies are smart
enough yet to admit this to themselves. It's usually the acquirer's engineers
who are asked how hard it would be for the company to build their own, and
they overestimate their abilities. A patent seems to change the balance.
It gives the acquirer an excuse to admit they couldn't copy what you're doing.
It may also help them to grasp what's special about your technology.
Frankly, it surprises me how small a role patents play in the software
business. It's kind of ironic, considering all the dire things experts say
about software patents stifling innovation, but when one looks closely at the
software business, the most striking thing is how little patents seem to
matter.
In other fields, companies regularly sue competitors for patent infringement.
For example, the airport baggage scanning business was for many years a cozy
duopoly shared between two companies, InVision and L-3. In 2002 a startup
called Reveal appeared, with new technology that let them build scanners a
third the size. They were sued for patent infringement before they'd even
released a product.
You rarely hear that kind of story in our world. The one example I've found
is, embarrassingly enough, Yahoo, which filed a patent suit against a gaming
startup called Xfire in 2005. Xfire doesn't seem to be a very big deal, and
it's hard to say why Yahoo felt threatened. Xfire's VP of engineering had
worked at Yahoo on similar stuff-- in fact, he was listed as an inventor on
the patent Yahoo sued over-- so perhaps there was something personal about it.
My guess is that someone at Yahoo goofed. At any rate they didn't pursue the
suit very vigorously.
Why do patents play so small a role in software? I can think of three possible
reasons.
One is that software is so complicated that patents by themselves are not
worth very much. I may be maligning other fields here, but it seems that in
most types of engineering you can hand the details of some new technique to a
group of medium-high quality people and get the desired result. For example,
if someone develops a new process for smelting ore that gets a better yield,
and you assemble a team of qualified experts and tell them about it, they'll
be able to get the same yield. This doesn't seem to work in software. Software
is so subtle and unpredictable that "qualified experts" don't get you very
far.
That's why we rarely hear phrases like "qualified expert" in the software
business. What that level of ability can get you is, say, to make your
software compatible with some other piece of software-- in eight months, at
enormous cost. To do anything harder you need individual brilliance. If you
assemble a team of qualified experts and tell them to make a new web-based
email program, they'll get their asses kicked by a team of inspired nineteen
year olds.
Experts can implement, but they can't design. Or rather, expertise in
implementation is the only kind most people, including the experts themselves,
can measure.
But design is a definite skill. It's not just an airy intangible. Things
always seem intangible when you don't understand them. Electricity seemed an
airy intangible to most people in 1800. Who knew there was so much to know
about it? So it is with design. Some people are good at it and some people are
bad at it, and there's something very tangible they're good or bad at.
The reason design counts so much in software is probably that there are fewer
constraints than on physical things. Building physical things is expensive and
dangerous. The space of possible choices is smaller; you tend to have to work
as part of a larger group; and you're subject to a lot of regulations. You
don't have any of that if you and a couple friends decide to create a new web-
based application.
Because there's so much scope for design in software, a successful application
tends to be way more than the sum of its patents. What protects little
companies from being copied by bigger competitors is not just their patents,
but the thousand little things the big company will get wrong if they try.
The second reason patents don't count for much in our world is that startups
rarely attack big companies head-on, the way Reveal did. In the software
business, startups beat established companies by transcending them. Startups
don't build desktop word processing programs to compete with Microsoft Word.
They build Writely. If this paradigm is crowded, just wait for the next
one; they run pretty frequently on this route.
Fortunately for startups, big companies are extremely good at denial. If you
take the trouble to attack them from an oblique angle, they'll meet you half-
way and maneuver to keep you in their blind spot. To sue a startup would mean
admitting it was dangerous, and that often means seeing something the big
company doesn't want to see. IBM used to sue its mainframe competitors
regularly, but they didn't bother much about the microcomputer industry
because they didn't want to see the threat it posed. Companies building web
based apps are similarly protected from Microsoft, which even now doesn't want
to imagine a world in which Windows is irrelevant.
The third reason patents don't seem to matter very much in software is public
opinion-- or rather, hacker opinion. In a recent interview, Steve Ballmer
coyly left open the possibility of attacking Linux on patent grounds. But I
doubt Microsoft would ever be so stupid. They'd face the mother of all
boycotts. And not just from the technical community in general; a lot of their
own people would rebel.
Good hackers care a lot about matters of principle, and they are highly
mobile. If a company starts misbehaving, smart people won't work there. For
some reason this seems to be more true in software than other businesses. I
don't think it's because hackers have intrinsically higher principles so much
as that their skills are easily transferrable. Perhaps we can split the
difference and say that mobility gives hackers the luxury of being principled.
Google's "don't be evil" policy may for this reason be the most valuable thing
they've discovered. It's very constraining in some ways. If Google does do
something evil, they get doubly whacked for it: once for whatever they did,
and again for hypocrisy. But I think it's worth it. It helps them to hire the
best people, and it's better, even from a purely selfish point of view, to be
constrained by principles than by stupidity.
(I wish someone would get this point across to the present administration.)
I'm not sure what the proportions are of the preceding three ingredients, but
the custom among the big companies seems to be not to sue the small ones, and
the startups are mostly too busy and too poor to sue one another. So despite
the huge number of software patents there's not a lot of suing going on. With
one exception: patent trolls.
Patent trolls are companies consisting mainly of lawyers whose whole business
is to accumulate patents and threaten to sue companies who actually make
things. Patent trolls, it seems safe to say, are evil. I feel a bit stupid
saying that, because when you're saying something that Richard Stallman and
Bill Gates would both agree with, you must be perilously close to tautologies.
The CEO of Forgent, one of the most notorious patent trolls, says that what
his company does is "the American way." Actually that's not true. The American
way is to make money by creating wealth, not by suing people. What
companies like Forgent do is actually the proto-industrial way. In the period
just before the industrial revolution, some of the greatest fortunes in
countries like England and France were made by courtiers who extracted some
lucrative right from the crown-- like the right to collect taxes on the import
of silk-- and then used this to squeeze money from the merchants in that
business. So when people compare patent trolls to the mafia, they're more
right than they know, because the mafia too are not merely bad, but bad
specifically in the sense of being an obsolete business model.
Patent trolls seem to have caught big companies by surprise. In the last
couple years they've extracted hundreds of millions of dollars from them.
Patent trolls are hard to fight precisely because they create nothing. Big
companies are safe from being sued by other big companies because they can
threaten a counter-suit. But because patent trolls don't make anything,
there's nothing they can be sued for. I predict this loophole will get closed
fairly quickly, at least by legal standards. It's clearly an abuse of the
system, and the victims are powerful.
But evil as patent trolls are, I don't think they hamper innovation much. They
don't sue till a startup has made money, and by that point the innovation that
generated it has already happened. I can't think of a startup that avoided
working on some problem because of patent trolls.
So much for hockey as the game is played now. What about the more theoretical
question of whether hockey would be a better game without checking? Do patents
encourage or discourage innovation?
This is a very hard question to answer in the general case. People write whole
books on the topic. One of my main hobbies is the history of technology, and
even though I've studied the subject for years, it would take me several weeks
of research to be able to say whether patents have in general been a net win.
One thing I can say is that 99.9% of the people who express opinions on the
subject do it not based on such research, but out of a kind of religious
conviction. At least, that's the polite way of putting it; the colloquial
version involves speech coming out of organs not designed for that purpose.
Whether they encourage innovation or not, patents were at least intended to.
You don't get a patent for nothing. In return for the exclusive right to use
an idea, you have to _publish_ it, and it was largely to encourage such
openness that patents were established.
Before patents, people protected ideas by keeping them secret. With patents,
central governments said, in effect, if you tell everyone your idea, we'll
protect it for you. There is a parallel here to the rise of civil order, which
happened at roughly the same time. Before central governments were powerful
enough to enforce order, rich people had private armies. As governments got
more powerful, they gradually compelled magnates to cede most responsibility
for protecting them. (Magnates still have bodyguards, but no longer to protect
them from other magnates.)
Patents, like police, are involved in many abuses. But in both cases the
default is something worse. The choice is not "patents or freedom?" any more
than it is "police or freedom?" The actual questions are respectively "patents
or secrecy?" and "police or gangs?"
As with gangs, we have some idea what secrecy would be like, because that's
how things used to be. The economy of medieval Europe was divided up into
little tribes, each jealously guarding their privileges and secrets. In
Shakespeare's time, "mystery" was synonymous with "craft." Even today we can
see an echo of the secrecy of medieval guilds, in the now pointless secrecy of
the Masons.
The most memorable example of medieval industrial secrecy is probably Venice,
which forbade glassblowers to leave the city, and sent assassins after those
who tried. We might like to think we wouldn't go so far, but the movie
industry has already tried to pass laws prescribing three year prison terms
just for putting movies on public networks. Want to try a frightening thought
experiment? If the movie industry could have any law they wanted, where would
they stop? Short of the death penalty, one assumes, but how close would they
get?
Even worse than the spectacular abuses might be the overall decrease in
efficiency that would accompany increased secrecy. As anyone who has dealt
with organizations that operate on a "need to know" basis can attest, dividing
information up into little cells is terribly inefficient. The flaw in the
"need to know" principle is that you don't _know_ who needs to know something.
An idea from one area might spark a great discovery in another. But the
discoverer doesn't know he needs to know it.
If secrecy were the only protection for ideas, companies wouldn't just have to
be secretive with other companies; they'd have to be secretive internally.
This would encourage what is already the worst trait of big companies.
I'm not saying secrecy would be worse than patents, just that we couldn't
discard patents for free. Businesses would become more secretive to
compensate, and in some fields this might get ugly. Nor am I defending the
current patent system. There is clearly a lot that's broken about it. But the
breakage seems to affect software less than most other fields.
In the software business I know from experience whether patents encourage or
discourage innovation, and the answer is the type that people who like to
argue about public policy least like to hear: they don't affect innovation
much, one way or the other. Most innovation in the software business happens
in startups, and startups should simply ignore other companies' patents. At
least, that's what we advise, and we bet money on that advice.
The only real role of patents, for most startups, is as an element of the
mating dance with acquirers. There patents do help a little. And so they do
encourage innovation indirectly, in that they give more power to startups,
which is where, pound for pound, the most innovation happens. But even in the
mating dance, patents are of secondary importance. It matters more to make
something great and get a lot of users.
---
November 2005
Venture funding works like gears. A typical startup goes through several
rounds of funding, and at each round you want to take just enough money to
reach the speed where you can shift into the next gear.
Few startups get it quite right. Many are underfunded. A few are overfunded,
which is like trying to start driving in third gear.
I think it would help founders to understand funding better—not just the
mechanics of it, but what investors are thinking. I was surprised recently
when I realized that all the worst problems we faced in our startup were due
not to competitors, but investors. Dealing with competitors was easy by
comparison.
I don't mean to suggest that our investors were nothing but a drag on us. They
were helpful in negotiating deals, for example. I mean more that conflicts
with investors are particularly nasty. Competitors punch you in the jaw, but
investors have you by the balls.
Apparently our situation was not unusual. And if trouble with investors is one
of the biggest threats to a startup, managing them is one of the most
important skills founders need to learn.
Let's start by talking about the five sources of startup funding. Then we'll
trace the life of a hypothetical (very fortunate) startup as it shifts gears
through successive rounds.
**Friends and Family**
A lot of startups get their first funding from friends and family. Excite did,
for example: after the founders graduated from college, they borrowed $15,000
from their parents to start a company. With the help of some part-time jobs
they made it last 18 months.
If your friends or family happen to be rich, the line blurs between them and
angel investors. At Viaweb we got our first $10,000 of seed money from our
friend Julian, but he was sufficiently rich that it's hard to say whether he
should be classified as a friend or angel. He was also a lawyer, which was
great, because it meant we didn't have to pay legal bills out of that initial
small sum.
The advantage of raising money from friends and family is that they're easy to
find. You already know them. There are three main disadvantages: you mix
together your business and personal life; they will probably not be as well
connected as angels or venture firms; and they may not be accredited
investors, which could complicate your life later.
The SEC defines an "accredited investor" as someone with over a million
dollars in liquid assets or an income of over $200,000 a year. The regulatory
burden is much lower if a company's shareholders are all accredited investors.
Once you take money from the general public you're more restricted in what you
can do.
A startup's life will be more complicated, legally, if any of the investors
aren't accredited. In an IPO, it might not merely add expense, but change the
outcome. A lawyer I asked about it said:
> When the company goes public, the SEC will carefully study all prior
> issuances of stock by the company and demand that it take immediate action
> to cure any past violations of securities laws. Those remedial actions can
> delay, stall or even kill the IPO.
Of course the odds of any given startup doing an IPO are small. But not as
small as they might seem. A lot of startups that end up going public didn't
seem likely to at first. (Who could have guessed that the company Wozniak and
Jobs started in their spare time selling plans for microcomputers would yield
one of the biggest IPOs of the decade?) Much of the value of a startup
consists of that tiny probability multiplied by the huge outcome.
It wasn't because they weren't accredited investors that I didn't ask my
parents for seed money, though. When we were starting Viaweb, I didn't know
about the concept of an accredited investor, and didn't stop to think about
the value of investors' connections. The reason I didn't take money from my
parents was that I didn't want them to lose it.
**Consulting**
Another way to fund a startup is to get a job. The best sort of job is a
consulting project in which you can build whatever software you wanted to sell
as a startup. Then you can gradually transform yourself from a consulting
company into a product company, and have your clients pay your development
expenses.
This is a good plan for someone with kids, because it takes most of the risk
out of starting a startup. There never has to be a time when you have no
revenues. Risk and reward are usually proportionate, however: you should
expect a plan that cuts the risk of starting a startup also to cut the average
return. In this case, you trade decreased financial risk for increased risk
that your company won't succeed as a startup.
But isn't the consulting company itself a startup? No, not generally. A
company has to be more than small and newly founded to be a startup. There are
millions of small businesses in America, but only a few thousand are startups.
To be a startup, a company has to be a product business, not a service
business. By which I mean not that it has to make something physical, but that
it has to have one thing it sells to many people, rather than doing custom
work for individual clients. Custom work doesn't scale. To be a startup you
need to be the band that sells a million copies of a song, not the band that
makes money by playing at individual weddings and bar mitzvahs.
The trouble with consulting is that clients have an awkward habit of calling
you on the phone. Most startups operate close to the margin of failure, and
the distraction of having to deal with clients could be enough to put you over
the edge. Especially if you have competitors who get to work full time on just
being a startup.
So you have to be very disciplined if you take the consulting route. You have
to work actively to prevent your company growing into a "weed tree," dependent
on this source of easy but low-margin money.
Indeed, the biggest danger of consulting may be that it gives you an excuse
for failure. In a startup, as in grad school, a lot of what ends up driving
you are the expectations of your family and friends. Once you start a startup
and tell everyone that's what you're doing, you're now on a path labelled "get
rich or bust." You now have to get rich, or you've failed.
Fear of failure is an extraordinarily powerful force. Usually it prevents
people from starting things, but once you publish some definite ambition, it
switches directions and starts working in your favor. I think it's a pretty
clever piece of jiujitsu to set this irresistible force against the slightly
less immovable object of becoming rich. You won't have it driving you if your
stated ambition is merely to start a consulting company that you will one day
morph into a startup.
An advantage of consulting, as a way to develop a product, is that you know
you're making something at least one customer wants. But if you have what it
takes to start a startup you should have sufficient vision not to need this
crutch.
**Angel Investors**
_Angels_ are individual rich people. The word was first used for backers of
Broadway plays, but now applies to individual investors generally. Angels
who've made money in technology are preferable, for two reasons: they
understand your situation, and they're a source of contacts and advice.
The contacts and advice can be more important than the money. When del.icio.us
took money from investors, they took money from, among others, Tim O'Reilly.
The amount he put in was small compared to the VCs who led the round, but Tim
is a smart and influential guy and it's good to have him on your side.
You can do whatever you want with money from consulting or friends and family.
With angels we're now talking about venture funding proper, so it's time to
introduce the concept of _exit strategy_. Younger would-be founders are often
surprised that investors expect them either to sell the company or go public.
The reason is that investors need to get their capital back. They'll only
consider companies that have an exit strategy—meaning companies that could get
bought or go public.
This is not as selfish as it sounds. There are few large, private technology
companies. Those that don't fail all seem to get bought or go public. The
reason is that employees are investors too—of their time—and they want just as
much to be able to cash out. If your competitors offer employees stock options
that might make them rich, while you make it clear you plan to stay private,
your competitors will get the best people. So the principle of an "exit" is
not just something forced on startups by investors, but part of what it means
to be a startup.
Another concept we need to introduce now is valuation. When someone buys
shares in a company, that implicitly establishes a value for it. If someone
pays $20,000 for 10% of a company, the company is in theory worth $200,000. I
say "in theory" because in early stage investing, valuations are voodoo. As a
company gets more established, its valuation gets closer to an actual market
value. But in a newly founded startup, the valuation number is just an
artifact of the respective contributions of everyone involved.
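(To make that arithmetic explicit, here is a one-function Python sketch; the function name is mine, nothing standard, and it simply divides the price paid by the fraction of the company bought.)

    def implied_valuation(price_paid, fraction_bought):
        """Post-money valuation implied by a stock purchase."""
        return price_paid / fraction_bought

    print(implied_valuation(20_000, 0.10))   # 200000.0, the example above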
Startups often "pay" investors who will help the company in some way by
letting them invest at low valuations. If I had a startup and Steve Jobs
wanted to invest in it, I'd give him the stock for $10, just to be able to
brag that he was an investor. Unfortunately, it's impractical (if not illegal)
to adjust the valuation of the company up and down for each investor.
Startups' valuations are supposed to rise over time. So if you're going to
sell cheap stock to eminent angels, do it early, when it's natural for the
company to have a low valuation.
Some angel investors join together in syndicates. Any city where people start
startups will have one or more of them. In Boston the biggest is the Common
Angels. In the Bay Area it's the Band of Angels. You can find groups near you
through the Angel Capital Association. However, most angel investors don't
belong to these groups. In fact, the more prominent the angel, the less likely
they are to belong to a group.
Some angel groups charge you money to pitch your idea to them. Needless to
say, you should never do this.
One of the dangers of taking investment from individual angels, rather than
through an angel group or investment firm, is that they have less reputation
to protect. A big-name VC firm will not screw you too outrageously, because
other founders would avoid them if word got out. With individual angels you
don't have this protection, as we found to our dismay in our own startup. In
many startups' lives there comes a point when you're at the investors'
mercy—when you're out of money and the only place to get more is your existing
investors. When we got into such a scrape, our investors took advantage of it
in a way that a name-brand VC probably wouldn't have.
Angels have a corresponding advantage, however: they're also not bound by all
the rules that VC firms are. And so they can, for example, allow founders to
cash out partially in a funding round, by selling some of their stock directly
to the investors. I think this will become more common; the average founder is
eager to do it, and selling, say, half a million dollars worth of stock will
not, as VCs fear, cause most founders to be any less committed to the
business.
The same angels who tried to screw us also let us do this, and so on balance
I'm grateful rather than angry. (As in families, relations between founders
and investors can be complicated.)
The best way to find angel investors is through personal introductions. You
could try to cold-call angel groups near you, but angels, like VCs, will pay
more attention to deals recommended by someone they respect.
Deal terms with angels vary a lot. There are no generally accepted standards.
Sometimes angels' deal terms are as fearsome as VCs'. Other angels,
particularly in the earliest stages, will invest based on a two-page
agreement.
Angels who only invest occasionally may not themselves know what terms they
want. They just want to invest in this startup. What kind of anti-dilution
protection do they want? Hell if they know. In these situations, the deal
terms tend to be random: the angel asks his lawyer to create a vanilla
agreement, and the terms end up being whatever the lawyer considers vanilla.
Which in practice usually means, whatever existing agreement he finds lying
around his firm. (Few legal documents are created from scratch.)
These heaps o' boilerplate are a problem for small startups, because they tend
to grow into the union of all preceding documents. I know of one startup that
got from an angel investor what amounted to a five hundred pound handshake:
after deciding to invest, the angel presented them with a 70-page agreement.
The startup didn't have enough money to pay a lawyer even to read it, let
alone negotiate the terms, so the deal fell through.
One solution to this problem would be to have the startup's lawyer produce the
agreement, instead of the angel's. Some angels might balk at this, but others
would probably welcome it.
Inexperienced angels often get cold feet when the time comes to write that big
check. In our startup, one of the two angels in the initial round took months
to pay us, and only did after repeated nagging from our lawyer, who was also,
fortunately, his lawyer.
It's obvious why investors delay. Investing in startups is risky! When a
company is only two months old, every _day_ you wait gives you 1.7% more data
about their trajectory (one more day on top of the roughly sixty you already
have). But the investor is already being compensated for that risk in the low
price of the stock, so it is unfair to delay.
Fair or not, investors do it if you let them. Even VCs do it. And funding
delays are a big distraction for founders, who ought to be working on their
company, not worrying about investors. What's a startup to do? With both
investors and acquirers, the only leverage you have is competition. If an
investor knows you have other investors lined up, he'll be a lot more eager to
close-- and not just because he'll worry about losing the deal, but because if
other investors are interested, you must be worth investing in. It's the same
with acquisitions. No one wants to buy you till someone else wants to buy you,
and then everyone wants to buy you.
The key to closing deals is never to stop pursuing alternatives. When an
investor says he wants to invest in you, or an acquirer says they want to buy
you, _don't believe it till you get the check._ Your natural tendency when an
investor says yes will be to relax and go back to writing code. Alas, you
can't; you have to keep looking for more investors, if only to get this one to
act.
**Seed Funding Firms**
Seed firms are like angels in that they invest relatively small amounts at
early stages, but like VCs in that they're companies that do it as a business,
rather than individuals making occasional investments on the side.
Till now, nearly all seed firms have been so-called "incubators," so Y
Combinator gets called one too, though the only thing we have in common is
that we invest in the earliest phase.
According to the National Association of Business Incubators, there are about
800 incubators in the US. This is an astounding number, because I know the
founders of a lot of startups, and I can't think of one that began in an
incubator.
What is an incubator? I'm not sure myself. The defining quality seems to be
that you work in their space. That's where the name "incubator" comes from.
They seem to vary a great deal in other respects. At one extreme is the sort
of pork-barrel project where a town gets money from the state government to
renovate a vacant building as a "high-tech incubator," as if it were merely
lack of the right sort of office space that had till now prevented the town
from becoming a startup hub. At the other extreme are places like Idealab,
which generates ideas for new startups internally and hires people to work for
them.
The classic Bubble incubators, most of which now seem to be dead, were like VC
firms except that they took a much bigger role in the startups they funded. In
addition to working in their space, you were supposed to use their office
staff, lawyers, accountants, and so on.
Whereas incubators tend (or tended) to exert more control than VCs, Y
Combinator exerts less. And we think it's better if startups operate out of
their own premises, however crappy, than the offices of their investors. So
it's annoying that we keep getting called an "incubator," but perhaps
inevitable, because there's only one of us so far and no word yet for what we
are. If we have to be called something, the obvious name would be "excubator."
(The name is more excusable if one considers it as meaning that we enable
people to escape cubicles.)
Because seed firms are companies rather than individual people, reaching them
is easier than reaching angels. Just go to their web site and send them an
email. The importance of personal introductions varies, but is less than with
angels or VCs.
The fact that seed firms are companies also means the investment process is
more standardized. (This is generally true with angel groups too.) Seed firms
will probably have set deal terms they use for every startup they fund. The
fact that the deal terms are standard doesn't mean they're favorable to you,
but if other startups have signed the same agreements and things went well for
them, it's a sign the terms are reasonable.
Seed firms differ from angels and VCs in that they invest exclusively in the
earliest phases—often when the company is still just an idea. Angels and even
VC firms occasionally do this, but they also invest at later stages.
The problems are different in the early stages. For example, in the first
couple months a startup may completely redefine their idea. So seed investors
usually care less about the idea than the people. This is true of all venture
funding, but especially so in the seed stage.
Like VCs, one of the advantages of seed firms is the advice they offer. But
because seed firms operate in an earlier phase, they need to offer different
kinds of advice. For example, a seed firm should be able to give advice about
how to approach VCs, which VCs obviously don't need to do; whereas VCs should
be able to give advice about how to hire an "executive team," which is not an
issue in the seed stage.
In the earliest phases, a lot of the problems are technical, so seed firms
should be able to help with technical as well as business problems.
Seed firms and angel investors generally want to invest in the initial phases
of a startup, then hand them off to VC firms for the next round. Occasionally
startups go from seed funding direct to acquisition, however, and I expect
this to become increasingly common.
Google has been aggressively pursuing this route, and now Yahoo is too. Both
now compete directly with VCs. And this is a smart move. Why wait for further
funding rounds to jack up a startup's price? When a startup reaches the point
where VCs have enough information to invest in it, the acquirer should have
enough information to buy it. More information, in fact; with their technical
depth, the acquirers should be better at picking winners than VCs.
**Venture Capital Funds**
VC firms are like seed firms in that they're actual companies, but they invest
other people's money, and much larger amounts of it. VC investments average
several million dollars. So they tend to come later in the life of a startup,
are harder to get, and come with tougher terms.
The word "venture capitalist" is sometimes used loosely for any venture
investor, but there is a sharp difference between VCs and other investors: VC
firms are organized as _funds_ , much like hedge funds or mutual funds. The
fund managers, who are called "general partners," get about 2% of the fund
annually as a management fee, plus about 20% of the fund's gains.
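As a rough illustration of that fee structure, here is a short Python sketch with entirely made-up numbers (a $100 million fund returning $300 million over ten years); only the 2% and 20% figures come from the paragraph above, and real funds compute carry in more complicated ways.

    def gp_compensation(fund_size, total_returned, years, mgmt_fee=0.02, carry=0.20):
        """Rough general-partner take: annual management fees plus carry on gains."""
        fees = mgmt_fee * fund_size * years          # 2% of the fund per year
        gains = max(total_returned - fund_size, 0)   # profit above committed capital
        return fees + carry * gains

    # Hypothetical: a $100M fund that returns $300M over a 10-year life.
    print(gp_compensation(100_000_000, 300_000_000, 10))   # 60000000.0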
There is a very sharp dropoff in performance among VC firms, because in the VC
business both success and failure are self-perpetuating. When an investment
scores spectacularly, as Google did for Kleiner and Sequoia, it generates a
lot of good publicity for the VCs. And many founders prefer to take money from
successful VC firms, because of the legitimacy it confers. Hence a vicious
(for the losers) cycle: VC firms that have been doing badly will only get the
deals the bigger fish have rejected, causing them to continue to do badly.
As a result, of the thousand or so VC funds in the US now, only about 50 are
likely to make money, and it is very hard for a new fund to break into this
group.
In a sense, the lower-tier VC firms are a bargain for founders. They may not
be quite as smart or as well connected as the big-name firms, but they are
much hungrier for deals. This means you should be able to get better terms
from them.
Better how? The most obvious is valuation: they'll take less of your company.
But as well as money, there's power. I think founders will increasingly be
able to stay on as CEO, and on terms that will make it fairly hard to fire
them later.
The most dramatic change, I predict, is that VCs will allow founders to cash
out partially by selling some of their stock direct to the VC firm. VCs have
traditionally resisted letting founders get anything before the ultimate
"liquidity event." But they're also desperate for deals. And since I know from
my own experience that the rule against buying stock from founders is a stupid
one, this is a natural place for things to give as venture funding becomes
more and more a seller's market.
The disadvantage of taking money from less known firms is that people will
assume, correctly or not, that you were turned down by the more exalted ones.
But, like where you went to college, the name of your VC stops mattering once
you have some performance to measure. So the more confident you are, the less
you need a brand-name VC. We funded Viaweb entirely with angel money; it never
occurred to us that the backing of a well known VC firm would make us seem
more impressive.
Another danger of less known firms is that, like angels, they have less
reputation to protect. I suspect it's the lower-tier firms that are
responsible for most of the tricks that have given VCs such a bad reputation
among hackers. They are doubly hosed: the general partners themselves are less
able, and yet they have harder problems to solve, because the top VCs skim off
all the best deals, leaving the lower-tier firms exactly the startups that are
likely to blow up.
For example, lower-tier firms are much more likely to pretend to want to do a
deal with you just to lock you up while they decide if they really want to.
One experienced CFO said:
> The better ones usually will not give a term sheet unless they really want
> to do a deal. The second or third tier firms have a much higher break
> rate—it could be as high as 50%.
It's obvious why: the lower-tier firms' biggest fear, when chance throws them
a bone, is that one of the big dogs will notice and take it away. The big dogs
don't have to worry about that.
Falling victim to this trick could really hurt you. As one VC told me:
> If you were talking to four VCs, told three of them that you accepted a term
> sheet, and then have to call them back to tell them you were just kidding,
> you are absolutely damaged goods.
Here's a partial solution: when a VC offers you a term sheet, ask how many of
their last 10 term sheets turned into deals. This will at least force them to
lie outright if they want to mislead you.
Not all the people who work at VC firms are partners. Most firms also have a
handful of junior employees called something like associates or analysts. If
you get a call from a VC firm, go to their web site and check whether the
person you talked to is a partner. Odds are it will be a junior person; they
scour the web looking for startups their bosses could invest in. The junior
people will tend to seem very positive about your company. They're not
pretending; they _want_ to believe you're a hot prospect, because it would be
a huge coup for them if their firm invested in a company they discovered.
Don't be misled by this optimism. It's the partners who decide, and they view
things with a colder eye.
Because VCs invest large amounts, the money comes with more restrictions. Most
only come into effect if the company gets into trouble. For example, VCs
generally write it into the deal that in any sale, they get their investment
back first. So if the company gets sold at a low price, the founders could get
nothing. Some VCs now require that in any sale they get 4x their investment
back before the common stock holders (that is, you) get anything, but this is
an abuse that should be resisted.
Another difference with large investments is that the founders are usually
required to accept "vesting"—to surrender their stock and earn it back over
the next 4-5 years. VCs don't want to invest millions in a company the
founders could just walk away from. Financially, vesting has little effect,
but in some situations it could mean founders will have less power. If VCs got
de facto control of the company and fired one of the founders, he'd lose any
unvested stock unless there was specific protection against this. So vesting
would in that situation force founders to toe the line.
The most noticeable change when a startup takes serious funding is that the
founders will no longer have complete control. Ten years ago VCs used to
insist that founders step down as CEO and hand the job over to a business guy
they supplied. This is less the rule now, partly because the disasters of the
Bubble showed that generic business guys don't make such great CEOs.
But while founders will increasingly be able to stay on as CEO, they'll have
to cede some power, because the board of directors will become more powerful.
In the seed stage, the board is generally a formality; if you want to talk to
the other board members, you just yell into the next room. This stops with VC-
scale money. In a typical VC funding deal, the board of directors might be
composed of two VCs, two founders, and one outside person acceptable to both.
The board will have ultimate power, which means the founders now have to
convince instead of commanding.
This is not as bad as it sounds, however. Bill Gates is in the same position;
he doesn't have majority control of Microsoft; in principle he also has to
convince instead of commanding. And yet he seems pretty commanding, doesn't
he? As long as things are going smoothly, boards don't interfere much. The
danger comes when there's a bump in the road, as happened to Steve Jobs at
Apple.
Like angels, VCs prefer to invest in deals that come to them through people
they know. So while nearly all VC funds have some address you can send your
business plan to, VCs privately admit the chance of getting funding by this
route is near zero. One recently told me that he did not know a single startup
that got funded this way.
I suspect VCs accept business plans "over the transom" more as a way to keep
tabs on industry trends than as a source of deals. In fact, I would strongly
advise against mailing your business plan randomly to VCs, because they treat
this as evidence of laziness. Do the extra work of getting personal
introductions. As one VC put it:
> I'm not hard to find. I know a lot of people. If you can't find some way to
> reach me, how are you going to create a successful company?
One of the most difficult problems for startup founders is deciding when to
approach VCs. You really only get one chance, because they rely heavily on
first impressions. And you can't approach some and save others for later,
because (a) they ask who else you've talked to and when and (b) they talk
among themselves. If you're talking to one VC and he finds out that you were
rejected by another several months ago, you'll definitely seem shopworn.
So when do you approach VCs? When you can convince them. If the founders have
impressive resumes and the idea isn't hard to understand, you could approach
VCs quite early. Whereas if the founders are unknown and the idea is very
novel, you might have to launch the thing and show that users loved it before
VCs would be convinced.
If several VCs are interested in you, they will sometimes be willing to split
the deal between them. They're more likely to do this if they're close in the
VC pecking order. Such deals may be a net win for founders, because you get
multiple VCs interested in your success, and you can ask each for advice about
the other. One founder I know wrote:
> Two-firm deals are great. It costs you a little more equity, but being able
> to play the two firms off each other (as well as ask one if the other is
> being out of line) is invaluable.
When you do negotiate with VCs, remember that they've done this a lot more
than you have. They've invested in dozens of startups, whereas this is
probably the first you've founded. But don't let them or the situation
intimidate you. The average founder is smarter than the average VC. So just do
what you'd do in any complex, unfamiliar situation: proceed deliberately, and
question anything that seems odd.
It is, unfortunately, common for VCs to put terms in an agreement whose
consequences surprise founders later, and also common for VCs to defend things
they do by saying that they're standard in the industry. Standard, schmandard;
the whole industry is only a few decades old, and rapidly evolving. The
concept of "standard" is a useful one when you're operating on a small scale
(Y Combinator uses identical terms for every deal because for tiny seed-stage
investments it's not worth the overhead of negotiating individual deals), but
it doesn't apply at the VC level. On that scale, every negotiation is unique.
Most successful startups get money from more than one of the preceding five
sources. And, confusingly, the names of funding sources also tend to be
used as the names of different rounds. The best way to explain how it all
works is to follow the case of a hypothetical startup.
**Stage 1: Seed Round**
Our startup begins when a group of three friends have an idea-- either an idea
for something they might build, or simply the idea "let's start a company."
Presumably they already have some source of food and shelter. But if you have
food and shelter, you probably also have something you're supposed to be
working on: either classwork, or a job. So if you want to work full-time on a
startup, your money situation will probably change too.
A lot of startup founders say they started the company without any idea of
what they planned to do. This is actually less common than it seems: many have
to claim they thought of the idea after quitting because otherwise their
former employer would own it.
The three friends decide to take the leap. Since most startups are in
competitive businesses, you not only want to work full-time on them, but more
than full-time. So some or all of the friends quit their jobs or leave school.
(Some of the founders in a startup can stay in grad school, but at least one
has to make the company his full-time job.)
They're going to run the company out of one of their apartments at first, and
since they don't have any users they don't have to pay much for
infrastructure. Their main expenses are setting up the company, which costs a
couple thousand dollars in legal work and registration fees, and the living
expenses of the founders.
The phrase "seed investment" covers a broad range. To some VC firms it means
$500,000, but to most startups it means several months' living expenses. We'll
suppose our group of friends start with $15,000 from their friend's rich
uncle, who they give 5% of the company in return. There's only common stock at
this stage. They leave 20% as an options pool for later employees (but they
set things up so that they can issue this stock to themselves if they get
bought early and most is still unissued), and the three founders each get 25%.
By living really cheaply they think they can make the remaining money last
five months. When you have five months' runway left, how soon do you need to
start looking for your next round? Answer: immediately. It takes time to find
investors, and time (always more than you expect) for the deal to close even
after they say yes. So if our group of founders know what they're doing
they'll start sniffing around for angel investors right away. But of course
their main job is to build version 1 of their software.
The friends might have liked to have more money in this first phase, but being
slightly underfunded teaches them an important lesson. For a startup,
cheapness is power. The lower your costs, the more options you have—not just
at this stage, but at every point till you're profitable. When you have a high
"burn rate," you're always under time pressure, which means (a) you don't have
time for your ideas to evolve, and (b) you're often forced to take deals you
don't like.
Every startup's rule should be: spend little, and work fast.
After ten weeks' work the three friends have built a prototype that gives one
a taste of what their product will do. It's not what they originally set out
to do—in the process of writing it, they had some new ideas. And it only does
a fraction of what the finished product will do, but that fraction includes
stuff that no one else has done before.
They've also written at least a skeleton business plan, addressing the five
fundamental questions: what they're going to do, why users need it, how large
the market is, how they'll make money, and who the competitors are and why
this company is going to beat them. (That last has to be more specific than
"they suck" or "we'll work really hard.")
If you have to choose between spending time on the demo or the business plan,
spend most on the demo. Software is not only more convincing, but a better way
to explore ideas.
**Stage 2: Angel Round**
While writing the prototype, the group has been traversing their network of
friends in search of angel investors. They find some just as the prototype is
demoable. When they demo it, one of the angels is willing to invest. Now the
group is looking for more money: they want enough to last for a year, and
maybe to hire a couple friends. So they're going to raise $200,000.
The angel agrees to invest at a pre-money valuation of $1 million. The company
issues $200,000 worth of new shares to the angel; if there were 1000 shares
before the deal, this means 200 additional shares. The angel now owns 200/1200
shares, or a sixth of the company, and all the previous shareholders'
percentage ownership is diluted by a sixth. After the deal, the capitalization
table looks like this:

    shareholder      shares   percent
    ---------------------------------
    angel               200     16.7
    uncle                50      4.2
    each founder        250     20.8
    option pool         200     16.7
                       ----    -----
    total              1200    100

To keep things simple, I
had the angel do a straight cash for stock deal. In reality the angel might be
more likely to make the investment in the form of a convertible loan. A
convertible loan is a loan that can be converted into stock later; it works
out the same as a stock purchase in the end, but gives the angel more
protection against being squashed by VCs in future rounds.
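The arithmetic behind that cap table is worth seeing once, because every priced round works the same way. Here's a minimal Python sketch (the helper and holder names are mine, purely for illustration):

```python
# Issue new shares to an investor at a given pre-money valuation.
# Existing holders keep their share counts; only their percentages shrink.
def priced_round(cap_table, investor, pre_money, investment):
    existing = sum(cap_table.values())
    price_per_share = pre_money / existing      # $1M / 1000 shares = $1000
    new_shares = investment / price_per_share   # $200k / $1000 = 200 shares
    table = dict(cap_table)
    table[investor] = table.get(investor, 0) + new_shares
    return table

before = {"uncle": 50, "founders": 750, "option pool": 200}
after = priced_round(before, "angel", pre_money=1_000_000, investment=200_000)
total = sum(after.values())
for holder, shares in after.items():
    print(f"{holder:12} {shares:6.0f} {100 * shares / total:6.1f}%")
# The angel ends up with 200/1200 = 16.7%; everyone else is diluted by a sixth.
```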
Who pays the legal bills for this deal? The startup, remember, only has a
couple thousand left. In practice this turns out to be a sticky problem that
usually gets solved in some improvised way. Maybe the startup can find lawyers
who will do it cheaply in the hope of future work if the startup succeeds.
Maybe someone has a lawyer friend. Maybe the angel pays for his lawyer to
represent both sides. (Make sure if you take the latter route that the lawyer
is _representing_ you rather than merely advising you, or his only duty is to
the investor.)
An angel investing $200k would probably expect a seat on the board of
directors. He might also want preferred stock, meaning a special class of
stock that has some additional rights over the common stock everyone else has.
Typically these rights include vetoes over major strategic decisions,
protection against being diluted in future rounds, and the right to get one's
investment back first if the company is sold.
Some investors might expect the founders to accept vesting for a sum this
size, and others wouldn't. VCs are more likely to require vesting than angels.
At Viaweb we managed to raise $2.5 million from angels without ever accepting
vesting, largely because we were so inexperienced that we were appalled at the
idea. In practice this turned out to be good, because it made us harder to
push around.
Our experience was unusual; vesting is the norm for amounts that size. Y
Combinator doesn't require vesting, because (a) we invest such small amounts,
and (b) we think it's unnecessary, and that the hope of getting rich is enough
motivation to keep founders at work. But maybe if we were investing millions
we would think differently.
I should add that vesting is also a way for founders to protect themselves
against one another. It solves the problem of what to do if one of the
founders quits. So some founders impose it on themselves when they start the
company.
The angel deal takes two weeks to close, so we are now three months into the
life of the company.
The point after you get the first big chunk of angel money will usually be the
happiest phase in a startup's life. It's a lot like being a postdoc: you have
no immediate financial worries, and few responsibilities. You get to work on
juicy kinds of work, like designing software. You don't have to spend time on
bureaucratic stuff, because you haven't hired any bureaucrats yet. Enjoy it
while it lasts, and get as much done as you can, because you will never again
be so productive.
With an apparently inexhaustible sum of money sitting safely in the bank, the
founders happily set to work turning their prototype into something they can
release. They hire one of their friends—at first just as a consultant, so they
can try him out—and then a month later as employee #1. They pay him the
smallest salary he can live on, plus 3% of the company in restricted stock,
vesting over four years. (So after this the option pool is down to 13.7%).
They also spend a little money on a freelance graphic designer.
How much stock do you give early employees? That varies so much that there's
no conventional number. If you get someone really good, really early, it might
be wise to give him as much stock as the founders. The one universal rule is
that the amount of stock an employee gets decreases polynomially with the age
of the company. In other words, you get rich as a power of how early you were.
So if some friends want you to come work for their startup, don't wait several
months before deciding.
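There is no actual formula here, but the shape of the claim is easy to see. A purely illustrative Python toy (the exponent and starting grant are invented; only the power-law shape is the point):

```python
# Purely illustrative: an equity grant that falls off as a power of how
# late you join. The constants are made up; the shape is what matters.
def illustrative_grant(months_after_founding, first_grant=3.0, exponent=1.5):
    return first_grant / (months_after_founding ** exponent)

for month in (1, 3, 6, 12, 24):
    print(month, round(illustrative_grant(month), 2))
# 1 3.0, 3 0.58, 6 0.2, 12 0.07, 24 0.03 -- a few months' delay costs a lot
```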
A month later, at the end of month four, our group of founders have something
they can launch. Gradually through word of mouth they start to get users.
Seeing the system in use by real users—people they don't know—gives them lots
of new ideas. Also they find they now worry obsessively about the status of
their server. (How relaxing founders' lives must have been when startups wrote
VisiCalc.)
By the end of month six, the system is starting to have a solid core of
features, and a small but devoted following. People start to write about it,
and the founders are starting to feel like experts in their field.
We'll assume that their startup is one that could put millions more to use.
Perhaps they need to spend a lot on marketing, or build some kind of expensive
infrastructure, or hire highly paid salesmen. So they decide to start talking
to VCs. They get introductions to VCs from various sources: their angel
investor connects them with a couple; they meet a few at conferences; a couple
VCs call them after reading about them.
**Stage 3: Series A Round**
Armed with their now somewhat fleshed-out business plan and able to demo a
real, working system, the founders visit the VCs they have introductions to.
They find the VCs intimidating and inscrutable. They all ask the same
question: who else have you pitched to? (VCs are like high school girls:
they're acutely aware of their position in the VC pecking order, and their
interest in a company is a function of the interest other VCs show in it.)
One of the VC firms says they want to invest and offers the founders a term
sheet. A term sheet is a summary of what the deal terms will be when and if
they do a deal; lawyers will fill in the details later. By accepting the term
sheet, the startup agrees to turn away other VCs for some set amount of time
while this firm does the "due diligence" required for the deal. Due diligence
is the corporate equivalent of a background check: the purpose is to uncover
any hidden bombs that might sink the company later, like serious design flaws
in the product, pending lawsuits against the company, intellectual property
issues, and so on. VCs' legal and financial due diligence is pretty thorough,
but the technical due diligence is generally a joke.
The due diligence discloses no ticking bombs, and six weeks later they go
ahead with the deal. Here are the terms: a $2 million investment at a pre-
money valuation of $4 million, meaning that after the deal closes the VCs will
own a third of the company (2 / (4 + 2)). The VCs also insist that prior to
the deal the option pool be enlarged by an additional hundred shares. So the
total number of new shares issued is 750, and the cap table becomes:

    shareholder      shares   percent
    ---------------------------------
    VCs                 650     33.3
    angel               200     10.3
    uncle                50      2.6
    each founder        250     12.8
    employee             36*     1.8
    option pool         264     13.5
                       ----    -----
    total              1950    100

    * unvested

This picture is unrealistic in
several respects. For example, while the percentages might end up looking like
this, it's unlikely that the VCs would keep the existing numbers of shares. In
fact, every bit of the startup's paperwork would probably be replaced, as if
the company were being founded anew. Also, the money might come in several
tranches, the later ones subject to various conditions—though this is
apparently more common in deals with lower-tier VCs (whose lot in life is to
fund more dubious startups) than with the top firms.
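The Series A numbers follow the same logic as the angel round, with one wrinkle: the option pool is enlarged before the VCs' stake is computed, so the enlargement dilutes the existing holders rather than the VCs. A quick Python sketch using this example's numbers (the function name is mine):

```python
# How many shares must the VCs receive to own a third of the company,
# given that the option pool is enlarged by 100 shares first?
def vc_shares_for_fraction(shares_before, pool_increase, vc_fraction):
    existing = shares_before + pool_increase           # 1200 + 100 = 1300
    # solve vc / (existing + vc) = vc_fraction for vc
    return vc_fraction * existing / (1 - vc_fraction)

vc = vc_shares_for_fraction(1200, 100, 1/3)            # 650 shares
total = 1200 + 100 + vc                                # 1950 shares
print(vc, round(vc / total, 3))                        # 650.0 0.333
```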
And of course any VCs reading this are probably rolling on the floor laughing
at how my hypothetical VCs let the angel keep his 10.3% of the company. I
admit, this is the Bambi version; in simplifying the picture, I've also made
everyone nicer. In the real world, VCs regard angels the way a jealous husband
feels about his wife's previous boyfriends. To them the company didn't exist
before they invested in it.
I don't want to give the impression you have to do an angel round before going
to VCs. In this example I stretched things out to show multiple sources of
funding in action. Some startups could go directly from seed funding to a VC
round; several of the companies we've funded have.
The founders are required to vest their shares over four years, and the board
is now reconstituted to consist of two VCs, two founders, and a fifth person
acceptable to both. The angel investor cheerfully surrenders his board seat.
At this point there is nothing new our startup can teach us about funding—or
at least, nothing good. The startup will almost certainly hire more
people at this point; those millions must be put to work, after all. The
company may do additional funding rounds, presumably at higher valuations.
They may, if they are extraordinarily fortunate, do an IPO, which we should
remember is also in principle a round of funding, regardless of its de facto
purpose. But that, if not beyond the bounds of possibility, is beyond the
scope of this article.
**Deals Fall Through**
Anyone who's been through a startup will find the preceding portrait to be
missing something: disasters. If there's one thing all startups have in
common, it's that something is always going wrong. And nowhere more than in
matters of funding.
For example, our hypothetical startup never spent more than half of one round
before securing the next. That's more ideal than typical. Many startups—even
successful ones—come close to running out of money at some point. Terrible
things happen to startups when they run out of money, because they're designed
for growth, not adversity.
But the most unrealistic thing about the series of deals I've described is
that they all closed. In the startup world, closing is not what deals do. What
deals do is fall through. If you're starting a startup you would do well to
remember that. Birds fly; fish swim; deals fall through.
Why? Partly the reason deals seem to fall through so often is that you lie to
yourself. You want the deal to close, so you start to believe it will. But
even correcting for this, startup deals fall through alarmingly often—far more
often than, say, deals to buy real estate. The reason is that it's such a
risky environment. People about to fund or acquire a startup are prone to
wicked cases of buyer's remorse. They don't really grasp the risk they're
taking till the deal's about to close. And then they panic. And not just
inexperienced angel investors, but big companies too.
So if you're a startup founder wondering why some angel investor isn't
returning your phone calls, you can at least take comfort in the thought that
the same thing is happening to other deals a hundred times the size.
The example of a startup's history that I've presented is like a
skeleton—accurate so far as it goes, but needing to be fleshed out to be a
complete picture. To get a complete picture, just add in every possible
disaster.
A frightening prospect? In a way. And yet also in a way encouraging. The very
uncertainty of startups frightens away almost everyone. People overvalue
stability—especially young people, who ironically need it least. And so in
starting a startup, as in any really bold undertaking, merely deciding to do
it gets you halfway there. On the day of the race, most of the other runners
won't show up.
September 2001
_(This article explains why much of the next generation of software may be
server-based, what that will mean for programmers, and why this new kind of
software is a great opportunity for startups. It's derived from a talk at BBN
Labs.)_
In the summer of 1995, my friend Robert Morris and I decided to start a
startup. The PR campaign leading up to Netscape's IPO was running full blast
then, and there was a lot of talk in the press about online commerce. At the
time there might have been thirty actual stores on the Web, all made by hand.
If there were going to be a lot of online stores, there would need to be
software for making them, so we decided to write some.
For the first week or so we intended to make this an ordinary desktop
application. Then one day we had the idea of making the software run on our
Web server, using the browser as an interface. We tried rewriting the software
to work over the Web, and it was clear that this was the way to go. If we
wrote our software to run on the server, it would be a lot easier for the
users and for us as well.
This turned out to be a good plan. Now, as Yahoo Store, this software is the
most popular online store builder, with about 14,000 users.
When we started Viaweb, hardly anyone understood what we meant when we said
that the software ran on the server. It was not until Hotmail was launched a
year later that people started to get it. Now everyone knows that this is a
valid approach. There is a name now for what we were: an Application Service
Provider, or ASP.
I think that a lot of the next generation of software will be written on this
model. Even Microsoft, who have the most to lose, seem to see the inevitability
of moving some things off the desktop. If software moves off the desktop and
onto servers, it will mean a very different world for developers. This article
describes the surprising things we saw, as some of the first visitors to this
new world. To the extent software does move onto servers, what I'm describing
here is the future.
**The Next Thing?**
When we look back on the desktop software era, I think we'll marvel at the
inconveniences people put up with, just as we marvel now at what early car
owners put up with. For the first twenty or thirty years, you had to be a car
expert to own a car. But cars were such a big win that lots of people who
weren't car experts wanted to have them as well.
Computers are in this phase now. When you own a desktop computer, you end up
learning a lot more than you wanted to know about what's happening inside it.
But more than half the households in the US own one. My mother has a computer
that she uses for email and for keeping accounts. About a year ago she was
alarmed to receive a letter from Apple, offering her a discount on a new
version of the operating system. There's something wrong when a sixty-five
year old woman who wants to use a computer for email and accounts has to think
about installing new operating systems. Ordinary users shouldn't even know the
words "operating system," much less "device driver" or "patch."
There is now another way to deliver software that will save users from
becoming system administrators. Web-based applications are programs that run
on Web servers and use Web pages as the user interface. For the average user
this new kind of software will be easier, cheaper, more mobile, more reliable,
and often more powerful than desktop software.
With Web-based software, most users won't have to think about anything except
the applications they use. All the messy, changing stuff will be sitting on a
server somewhere, maintained by the kind of people who are good at that kind
of thing. And so you won't ordinarily need a computer, per se, to use
software. All you'll need will be something with a keyboard, a screen, and a
Web browser. Maybe it will have wireless Internet access. Maybe it will also
be your cell phone. Whatever it is, it will be consumer electronics: something
that costs about $200, and that people choose mostly based on how the case
looks. You'll pay more for Internet services than you do for the hardware,
just as you do now with telephones.
It will take about a tenth of a second for a click to get to the server and
back, so users of heavily interactive software, like Photoshop, will still
want to have the computations happening on the desktop. But if you look at the
kind of things most people use computers for, a tenth of a second latency
would not be a problem. My mother doesn't really need a desktop computer, and
there are a lot of people like her.
**The Win for Users**
Near my house there is a car with a bumper sticker that reads "death before
inconvenience." Most people, most of the time, will take whatever choice
requires least work. If Web-based software wins, it will be because it's more
convenient. And it looks as if it will be, for users and developers both.
To use a purely Web-based application, all you need is a browser connected to
the Internet. So you can use a Web-based application anywhere. When you
install software on your desktop computer, you can only use it on that
computer. Worse still, your files are trapped on that computer. The
inconvenience of this model becomes more and more evident as people get used
to networks.
The thin end of the wedge here was Web-based email. Millions of people now
realize that you should have access to email messages no matter where you are.
And if you can see your email, why not your calendar? If you can discuss a
document with your colleagues, why can't you edit it? Why should any of your
data be trapped on some computer sitting on a faraway desk?
The whole idea of "your computer" is going away, and being replaced with "your
data." You should be able to get at your data from any computer. Or rather,
any client, and a client doesn't have to be a computer.
Clients shouldn't store data; they should be like telephones. In fact they may
become telephones, or vice versa. And as clients get smaller, you have another
reason not to keep your data on them: something you carry around with you can
be lost or stolen. Leaving your PDA in a taxi is like a disk crash, except
that your data is handed to someone else instead of being vaporized.
With purely Web-based software, neither your data nor the applications are
kept on the client. So you don't have to install anything to use it. And when
there's no installation, you don't have to worry about installation going
wrong. There can't be incompatibilities between the application and your
operating system, because the software doesn't run on your operating system.
Because it needs no installation, it will be easy, and common, to try Web-
based software before you "buy" it. You should expect to be able to test-drive
any Web-based application for free, just by going to the site where it's
offered. At Viaweb our whole site was like a big arrow pointing users to the
test drive.
After trying the demo, signing up for the service should require nothing more
than filling out a brief form (the briefer the better). And that should be the
last work the user has to do. With Web-based software, you should get new
releases without paying extra, or doing any work, or possibly even knowing
about it.
Upgrades won't be the big shocks they are now. Over time applications will
quietly grow more powerful. This will take some effort on the part of the
developers. They will have to design software so that it can be updated
without confusing the users. That's a new problem, but there are ways to solve
it.
With Web-based applications, everyone uses the same version, and bugs can be
fixed as soon as they're discovered. So Web-based software should have far
fewer bugs than desktop software. At Viaweb, I doubt we ever had ten known
bugs at any one time. That's orders of magnitude better than desktop software.
Web-based applications can be used by several people at the same time. This is
an obvious win for collaborative applications, but I bet users will start to
want this in most applications once they realize it's possible. It will often
be useful to let two people edit the same document, for example. Viaweb let
multiple users edit a site simultaneously, more because that was the right way
to write the software than because we expected users to want to, but it turned
out that many did.
When you use a Web-based application, your data will be safer. Disk crashes
won't be a thing of the past, but users won't hear about them anymore. They'll
happen within server farms. And companies offering Web-based applications will
actually do backups-- not only because they'll have real system administrators
worrying about such things, but because an ASP that does lose people's data
will be in big, big trouble. When people lose their own data in a disk crash,
they can't get that mad, because they only have themselves to be mad at. When
a company loses their data for them, they'll get a lot madder.
Finally, Web-based software should be less vulnerable to viruses. If the
client doesn't run anything except a browser, there's less chance of running
viruses, and no data locally to damage. And a program that attacked the
servers themselves should find them very well defended.
For users, Web-based software will be _less stressful._ I think if you looked
inside the average Windows user you'd find a huge and pretty much untapped
desire for software meeting that description. Unleashed, it could be a
powerful force.
**City of Code**
To developers, the most conspicuous difference between Web-based and desktop
software is that a Web-based application is not a single piece of code. It
will be a collection of programs of different types rather than a single big
binary. And so designing Web-based software is like designing a city rather
than a building: as well as buildings you need roads, street signs, utilities,
police and fire departments, and plans for both growth and various kinds of
disasters.
At Viaweb, software included fairly big applications that users talked to
directly, programs that those programs used, programs that ran constantly in
the background looking for problems, programs that tried to restart things if
they broke, programs that ran occasionally to compile statistics or build
indexes for searches, programs we ran explicitly to garbage-collect resources
or to move or restore data, programs that pretended to be users (to measure
performance or expose bugs), programs for diagnosing network troubles,
programs for doing backups, interfaces to outside services, software that
drove an impressive collection of dials displaying real-time server statistics
(a hit with visitors, but indispensable for us too), modifications (including
bug fixes) to open-source software, and a great many configuration files and
settings. Trevor Blackwell wrote a spectacular program for moving stores to
new servers across the country, without shutting them down, after we were
bought by Yahoo. Programs paged us, sent faxes and email to users, conducted
transactions with credit card processors, and talked to one another through
sockets, pipes, http requests, ssh, udp packets, shared memory, and files.
Some of Viaweb even consisted of the absence of programs, since one of the
keys to Unix security is not to run unnecessary utilities that people might
use to break into your servers.
It did not end with software. We spent a lot of time thinking about server
configurations. We built the servers ourselves, from components-- partly to
save money, and partly to get exactly what we wanted. We had to think about
whether our upstream ISP had fast enough connections to all the backbones. We
serially dated RAID suppliers.
But hardware is not just something to worry about. When you control it you can
do more for users. With a desktop application, you can specify certain minimum
hardware, but you can't add more. If you administer the servers, you can in
one step enable all your users to page people, or send faxes, or send commands
by phone, or process credit cards, etc, just by installing the relevant
hardware. We always looked for new ways to add features with hardware, not
just because it pleased users, but also as a way to distinguish ourselves from
competitors who (either because they sold desktop software, or resold Web-
based applications through ISPs) didn't have direct control over the hardware.
Because the software in a Web-based application will be a collection of
programs rather than a single binary, it can be written in any number of
different languages. When you're writing desktop software, you're practically
forced to write the application in the same language as the underlying
operating system-- meaning C and C++. And so these languages (especially among
nontechnical people like managers and VCs) got to be considered as the
languages for "serious" software development. But that was just an artifact of
the way desktop software had to be delivered. For server-based software you
can use any language you want. Today a lot of the top hackers are using
languages far removed from C and C++: Perl, Python, and even Lisp.
With server-based software, no one can tell you what language to use, because
you control the whole system, right down to the hardware. Different languages
are good for different tasks. You can use whichever is best for each. And when
you have competitors, "you can" means "you must" (we'll return to this later),
because if you don't take advantage of this possibility, your competitors
will.
Most of our competitors used C and C++, and this made their software visibly
inferior because (among other things) they had no way around the
statelessness of CGI scripts. If you were going to change something, all the
changes had to happen on one page, with an Update button at the bottom. As
I've written elsewhere, by using Lisp, which many people still consider a
research language, we could make the Viaweb editor behave more like desktop
software.
**Releases**
One of the most important changes in this new world is the way you do
releases. In the desktop software business, doing a release is a huge trauma,
in which the whole company sweats and strains to push out a single, giant
piece of code. Obvious comparisons suggest themselves, both to the process and
the resulting product.
With server-based software, you can make changes almost as you would in a
program you were writing for yourself. You release software as a series of
incremental changes instead of an occasional big explosion. A typical desktop
software company might do one or two releases a year. At Viaweb we often did
three to five releases a day.
When you switch to this new model, you realize how much software development
is affected by the way it is released. Many of the nastiest problems you see
in the desktop software business are due to the catastrophic nature of releases.
When you release only one new version a year, you tend to deal with bugs
wholesale. Some time before the release date you assemble a new version in
which half the code has been torn out and replaced, introducing countless
bugs. Then a squad of QA people step in and start counting them, and the
programmers work down the list, fixing them. They do not generally get to the
end of the list, and indeed, no one is sure where the end is. It's like
fishing rubble out of a pond. You never really know what's happening inside
the software. At best you end up with a statistical sort of correctness.
With server-based software, most of the change is small and incremental. That
in itself is less likely to introduce bugs. It also means you know what to
test most carefully when you're about to release software: the last thing you
changed. You end up with a much firmer grip on the code. As a general rule,
you do know what's happening inside it. You don't have the source code
memorized, of course, but when you read the source you do it like a pilot
scanning the instrument panel, not like a detective trying to unravel some
mystery.
Desktop software breeds a certain fatalism about bugs. You know that you're
shipping something loaded with bugs, and you've even set up mechanisms to
compensate for it (e.g. patch releases). So why worry about a few more? Soon
you're releasing whole features you know are broken. Apple did this earlier
this year. They felt under pressure to release their new OS, whose release
date had already slipped four times, but some of the software (support for CDs
and DVDs) wasn't ready. The solution? They released the OS without the
unfinished parts, and users will have to install them later.
With Web-based software, you never have to release software before it works,
and you can release it as soon as it does work.
The industry veteran may be thinking, it's a fine-sounding idea to say that
you never have to release software before it works, but what happens when
you've promised to deliver a new version of your software by a certain date?
With Web-based software, you wouldn't make such a promise, because there are
no versions. Your software changes gradually and continuously. Some changes
might be bigger than others, but the idea of versions just doesn't naturally
fit onto Web-based software.
If anyone remembers Viaweb this might sound odd, because we were always
announcing new versions. This was done entirely for PR purposes. The trade
press, we learned, thinks in version numbers. They will give you major
coverage for a major release, meaning a new first digit on the version number,
and generally a paragraph at most for a point release, meaning a new digit
after the decimal point.
Some of our competitors were offering desktop software and actually had
version numbers. And these releases, the mere fact of which seemed to us
evidence of their backwardness, got them all kinds of publicity. We
didn't want to miss out, so we started giving version numbers to our software
too. When we wanted some publicity, we'd make a list of all the features we'd
added since the last "release," stick a new version number on the software,
and issue a press release saying that the new version was available
immediately. Amazingly, no one ever called us on it.
By the time we were bought, we had done this three times, so we were on
Version 4. Version 4.1 if I remember correctly. After Viaweb became Yahoo
Store, there was no longer such a desperate need for publicity, so although
the software continued to evolve, the whole idea of version numbers was
quietly dropped.
**Bugs**
The other major technical advantage of Web-based software is that you can
reproduce most bugs. You have the users' data right there on your disk. If
someone breaks your software, you don't have to try to guess what's going on,
as you would with desktop software: you should be able to reproduce the error
while they're on the phone with you. You might even know about it already, if
you have code for noticing errors built into your application.
Web-based software gets used round the clock, so everything you do is
immediately put through the wringer. Bugs turn up quickly.
Software companies are sometimes accused of letting the users debug their
software. And that is just what I'm advocating. For Web-based software it's
actually a good plan, because the bugs are fewer and transient. When you
release software gradually you get far fewer bugs to start with. And when you
can reproduce errors and release changes instantly, you can find and fix most
bugs as soon as they appear. We never had enough bugs at any one time to
bother with a formal bug-tracking system.
You should test changes before you release them, of course, so no major bugs
should get released. Those few that inevitably slip through will involve
borderline cases and will only affect the few users that encounter them before
someone calls in to complain. As long as you fix bugs right away, the net
effect, for the average user, is far fewer bugs. I doubt the average Viaweb
user ever saw a bug.
Fixing fresh bugs is easier than fixing old ones. It's usually fairly quick to
find a bug in code you just wrote. When it turns up you often know what's
wrong before you even look at the source, because you were already worrying
about it subconsciously. Fixing a bug in something you wrote six months ago
(the average case if you release once a year) is a lot more work. And since
you don't understand the code as well, you're more likely to fix it in an ugly
way, or even introduce more bugs.
When you catch bugs early, you also get fewer compound bugs. Compound bugs are
two separate bugs that interact: you trip going downstairs, and when you reach
for the handrail it comes off in your hand. In software this kind of bug is
the hardest to find, and also tends to have the worst consequences. The
traditional "break everything and then filter out the bugs" approach
inherently yields a lot of compound bugs. And software that's released in a
series of small changes inherently tends not to. The floors are constantly
being swept clean of any loose objects that might later get stuck in
something.
It helps if you use a technique called functional programming. Functional
programming means avoiding side-effects. It's something you're more likely to
see in research papers than commercial software, but for Web-based
applications it turns out to be really useful. It's hard to write entire
programs as purely functional code, but you can write substantial chunks this
way. It makes those parts of your software easier to test, because they have
no state, and that is very convenient in a situation where you are constantly
making and testing small modifications. I wrote much of Viaweb's editor in
this style, and we made our scripting language, RTML, a purely functional
language.
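To make that concrete, here's a small sketch in Python (the names and markup are invented for illustration) of the kind of side-effect-free chunk that's easy to test precisely because it has no state:

```python
# A side-effect-free chunk: rendering a page as a pure function of its
# inputs. Given the same data it always returns the same string, so it can
# be tested with plain assertions -- no server, no database, no setup.
def render_item(item):
    return "<li>{} - ${:.2f}</li>".format(item["name"], item["price"])

def render_page(title, items):
    body = "\n".join(render_item(i) for i in items)
    return "<html><h1>{}</h1><ul>\n{}\n</ul></html>".format(title, body)

assert "$9.99" in render_page("Store", [{"name": "Mug", "price": 9.99}])
```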
People from the desktop software business will find this hard to credit, but
at Viaweb bugs became almost a game. Since most released bugs involved
borderline cases, the users who encountered them were likely to be advanced
users, pushing the envelope. Advanced users are more forgiving about bugs,
especially since you probably introduced them in the course of adding some
feature they were asking for. In fact, because bugs were rare and you had to
be doing sophisticated things to see them, advanced users were often proud to
catch one. They would call support in a spirit more of triumph than anger, as
if they had scored points off us.
**Support**
When you can reproduce errors, it changes your approach to customer support.
At most software companies, support is offered as a way to make customers feel
better. They're either calling you about a known bug, or they're just doing
something wrong and you have to figure out what. In either case there's not
much you can learn from them. And so you tend to view support calls as a pain
in the ass that you want to isolate from your developers as much as possible.
This was not how things worked at Viaweb. At Viaweb, support was free, because
we wanted to hear from customers. If someone had a problem, we wanted to know
about it right away so that we could reproduce the error and release a fix.
So at Viaweb the developers were always in close contact with support. The
customer support people were about thirty feet away from the programmers, and
knew that they could always interrupt anything with a report of a genuine bug.
We would leave a board meeting to fix a serious bug.
Our approach to support made everyone happier. The customers were delighted.
Just imagine how it would feel to call a support line and be treated as
someone bringing important news. The customer support people liked it because
it meant they could help the users, instead of reading scripts to them. And
the programmers liked it because they could reproduce bugs instead of just
hearing vague second-hand reports about them.
Our policy of fixing bugs on the fly changed the relationship between customer
support people and hackers. At most software companies, support people are
underpaid human shields, and hackers are little copies of God the Father,
creators of the world. Whatever the procedure for reporting bugs, it is likely
to be one-directional: support people who hear about bugs fill out some form
that eventually gets passed on (possibly via QA) to programmers, who put it on
their list of things to do. It was very different at Viaweb. Within a minute
of hearing about a bug from a customer, the support people could be standing
next to a programmer hearing him say "Shit, you're right, it's a bug." It
delighted the support people to hear that "you're right" from the hackers.
They used to bring us bugs with the same expectant air as a cat bringing you a
mouse it has just killed. It also made them more careful in judging the
seriousness of a bug, because now their honor was on the line.
After we were bought by Yahoo, the customer support people were moved far away
from the programmers. It was only then that we realized that they were
effectively QA and to some extent marketing as well. In addition to catching
bugs, they were the keepers of the knowledge of vaguer, buglike things, like
features that confused users. They were also a kind of proxy focus group;
we could ask them which of two new features users wanted more, and they were
always right.
**Morale**
Being able to release software immediately is a big motivator. Often as I was
walking to work I would think of some change I wanted to make to the software,
and do it that day. This worked for bigger features as well. Even if something
was going to take two weeks to write (few projects took longer), I knew I
could see the effect in the software as soon as it was done.
If I'd had to wait a year for the next release, I would have shelved most of
these ideas, for a while at least. The thing about ideas, though, is that they
lead to more ideas. Have you ever noticed that when you sit down to write
something, half the ideas that end up in it are ones you thought of while
writing it? The same thing happens with software. Working to implement one
idea gives you more ideas. So shelving an idea costs you not only that delay
in implementing it, but also all the ideas that implementing it would have led
to. In fact, shelving an idea probably even inhibits new ideas: as you start
to think of some new feature, you catch sight of the shelf and think "but I
already have a lot of new things I want to do for the next release."
What big companies do instead of implementing features is plan them. At Viaweb
we sometimes ran into trouble on this account. Investors and analysts would
ask us what we had planned for the future. The truthful answer would have
been, we didn't have any plans. We had general ideas about things we wanted to
improve, but if we knew how we would have done it already. What were we going
to do in the next six months? Whatever looked like the biggest win. I don't
know if I ever dared give this answer, but that was the truth. Plans are just
another word for ideas on the shelf. When we thought of good ideas, we
implemented them.
At Viaweb, as at many software companies, most code had one definite owner.
But when you owned something you really owned it: no one except the owner of a
piece of software had to approve (or even know about) a release. There was no
protection against breakage except the fear of looking like an idiot to one's
peers, and that was more than enough. I may have given the impression that we
just blithely plowed forward writing code. We did go fast, but we thought very
carefully before we released software onto those servers. And paying attention
is more important to reliability than moving slowly. Because he pays close
attention, a Navy pilot can land a 40,000 lb. aircraft at 140 miles per hour
on a pitching carrier deck, at night, more safely than the average teenager
can cut a bagel.
This way of writing software is a double-edged sword of course. It works a lot
better for a small team of good, trusted programmers than it would for a big
company of mediocre ones, where bad ideas are caught by committees instead of
the people that had them.
**Brooks in Reverse**
Fortunately, Web-based software does require fewer programmers. I once worked
for a medium-sized desktop software company that had over 100 people working
in engineering as a whole. Only 13 of these were in product development. All
the rest were working on releases, ports, and so on. With Web-based software,
all you need (at most) are the 13 people, because there are no releases,
ports, and so on.
Viaweb was written by just three people. I was always under pressure to
hire more, because we wanted to get bought, and we knew that buyers would have
a hard time paying a high price for a company with only three programmers.
(Solution: we hired more, but created new projects for them.)
When you can write software with fewer programmers, it saves you more than
money. As Fred Brooks pointed out in _The Mythical Man-Month,_ adding people
to a project tends to slow it down. The number of possible connections between
developers grows roughly with the square of the size of the group. The larger the
group, the more time they'll spend in meetings negotiating how their software
will work together, and the more bugs they'll get from unforeseen
interactions. Fortunately, this process also works in reverse: as groups get
smaller, software development gets disproportionately more efficient. I can't
remember the programmers at Viaweb ever having an actual meeting. We never had
more to say at any one time than we could say as we were walking to lunch.
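The arithmetic Brooks is pointing at is just the count of pairwise communication channels, which you can check in one line:

```python
# Possible communication channels in a team of n people: n choose 2.
def channels(n):
    return n * (n - 1) // 2

for n in (3, 13, 100):
    print(n, channels(n))   # 3 -> 3, 13 -> 78, 100 -> 4950
```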
If there is a downside here, it is that all the programmers have to be to some
degree system administrators as well. When you're hosting software, someone
has to be watching the servers, and in practice the only people who can do
this properly are the ones who wrote the software. At Viaweb our system had so
many components and changed so frequently that there was no definite border
between software and infrastructure. Arbitrarily declaring such a border would
have constrained our design choices. And so although we were constantly hoping
that one day ("in a couple months") everything would be stable enough that we
could hire someone whose job was just to worry about the servers, it never
happened.
I don't think it could be any other way, as long as you're still actively
developing the product. Web-based software is never going to be something you
write, check in, and go home. It's a live thing, running on your servers right
now. A bad bug might not just crash one user's process; it could crash them
all. If a bug in your code corrupts some data on disk, you have to fix it. And
so on. We found that you don't have to watch the servers every minute (after
the first year or so), but you definitely want to keep an eye on things you've
changed recently. You don't release code late at night and then go home.
**Watching Users**
With server-based software, you're in closer touch with your code. You can
also be in closer touch with your users. Intuit is famous for introducing
themselves to customers at retail stores and asking to follow them home. If
you've ever watched someone use your software for the first time, you know
what surprises must have awaited them.
Software should do what users think it will. But you can't have any idea what
users will be thinking, believe me, until you watch them. And server-based
software gives you unprecedented information about their behavior. You're not
limited to small, artificial focus groups. You can see every click made by
every user. You have to consider carefully what you're going to look at,
because you don't want to violate users' privacy, but even the most general
statistical sampling can be very useful.
When you have the users on your server, you don't have to rely on benchmarks,
for example. Benchmarks are simulated users. With server-based software, you
can watch actual users. To decide what to optimize, just log into a server and
see what's consuming all the CPU. And you know when to stop optimizing too: we
eventually got the Viaweb editor to the point where it was memory-bound rather
than CPU-bound, and since there was nothing we could do to decrease the size
of users' data (well, nothing easy), we knew we might as well stop there.
Efficiency matters for server-based software, because you're paying for the
hardware. The number of users you can support per server is the divisor of
your capital cost, so if you can make your software very efficient you can
undersell competitors and still make a profit. At Viaweb we got the capital
cost per user down to about $5. It would be less now, probably less than the
cost of sending them the first month's bill. Hardware is free now, if your
software is reasonably efficient.
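The economics are a single division, but it's the division that decides whether you can undercut competitors. A hedged sketch, with invented server numbers (only the $5-per-user figure comes from the text):

```python
# Capital cost per user: server cost divided by users supported per server.
# The server price and capacity below are invented, purely to show the shape.
def capital_cost_per_user(server_cost, users_per_server):
    return server_cost / users_per_server

print(capital_cost_per_user(server_cost=2_500, users_per_server=500))  # 5.0
```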
Watching users can guide you in design as well as optimization. Viaweb had a
scripting language called RTML that let advanced users define their own page
styles. We found that RTML became a kind of suggestion box, because users only
used it when the predefined page styles couldn't do what they wanted.
Originally the editor put button bars across the page, for example, but after
a number of users used RTML to put buttons down the left side, we made that an
option (in fact the default) in the predefined page styles.
Finally, by watching users you can often tell when they're in trouble. And
since the customer is always right, that's a sign of something you need to
fix. At Viaweb the key to getting users was the online test drive. It was not
just a series of slides built by marketing people. In our test drive, users
actually used the software. It took about five minutes, and at the end of it
they had built a real, working store.
The test drive was the way we got nearly all our new users. I think it will be
the same for most Web-based applications. If users can get through a test
drive successfully, they'll like the product. If they get confused or bored,
they won't. So anything we could do to get more people through the test drive
would increase our growth rate.
I studied click trails of people taking the test drive and found that at a
certain step they would get confused and click on the browser's Back button.
(If you try writing Web-based applications, you'll find that the Back button
becomes one of your most interesting philosophical problems.) So I added a
message at that point, telling users that they were nearly finished, and
reminding them not to click on the Back button. Another great thing about Web-
based software is that you get instant feedback from changes: the number of
people completing the test drive rose immediately from 60% to 90%. And since
the number of new users was a function of the number of completed test drives,
our revenue growth increased by 50%, just from that change.
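The revenue effect follows directly from the completion rate, since new users were a function of completed test drives. A quick check of the arithmetic:

```python
# Raising test-drive completion from 60% to 90% raises signups by the
# same ratio, since new users were proportional to completed test drives.
starts = 1000                    # arbitrary number of test-drive starts
before, after = starts * 0.60, starts * 0.90
print(after / before - 1)        # 0.5, i.e. a 50% increase
```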
**Money**
In the early 1990s I read an article in which someone said that software was a
subscription business. At first this seemed a very cynical statement. But
later I realized that it reflects reality: software development is an ongoing
process. I think it's cleaner if you openly charge subscription fees, instead
of forcing people to keep buying and installing new versions so that they'll
keep paying you. And fortunately, subscriptions are the natural way to bill
for Web-based applications.
Hosting applications is an area where companies will play a role that is not
likely to be filled by freeware. Hosting applications is a lot of stress, and
has real expenses. No one is going to want to do it for free.
For companies, Web-based applications are an ideal source of revenue. Instead
of starting each quarter with a blank slate, you have a recurring revenue
stream. Because your software evolves gradually, you don't have to worry that
a new model will flop; there never need be a new model, per se, and if you do
something to the software that users hate, you'll know right away. You have no
trouble with uncollectable bills; if someone won't pay you can just turn off
the service. And there is no possibility of piracy.
That last "advantage" may turn out to be a problem. Some amount of piracy is
to the advantage of software companies. If some user really would not have
bought your software at any price, you haven't lost anything if he uses a
pirated copy. In fact you gain, because he is one more user helping to make
your software the standard-- or who might buy a copy later, when he graduates
from high school.
When they can, companies like to do something called price discrimination,
which means charging each customer as much as they can afford. Software is
particularly suitable for price discrimination, because the marginal cost is
close to zero. This is why some software costs more to run on Suns than on
Intel boxes: a company that uses Suns is not interested in saving money and
can safely be charged more. Piracy is effectively the lowest tier of price
discrimination. I think that software companies understand this and
deliberately turn a blind eye to some kinds of piracy. With server-based
software they are going to have to come up with some other solution.
Web-based software sells well, especially in comparison to desktop software,
because it's easy to buy. You might think that people decide to buy something,
and then buy it, as two separate steps. That's what I thought before Viaweb,
to the extent I thought about the question at all. In fact the second step can
propagate back into the first: if something is hard to buy, people will change
their mind about whether they wanted it. And vice versa: you'll sell more of
something when it's easy to buy. I buy more books because Amazon exists. Web-
based software is just about the easiest thing in the world to buy, especially
if you have just done an online demo. Users should not have to do much more
than enter a credit card number. (Make them do more at your peril.)
Sometimes Web-based software is offered through ISPs acting as resellers. This
is a bad idea. You have to be administering the servers, because you need to
be constantly improving both hardware and software. If you give up direct
control of the servers, you give up most of the advantages of developing Web-
based applications.
Several of our competitors shot themselves in the foot this way-- usually, I
think, because they were overrun by suits who were excited about this huge
potential channel, and didn't realize that it would ruin the product they
hoped to sell through it. Selling Web-based software through ISPs is like
selling sushi through vending machines.
**Customers**
Who will the customers be? At Viaweb they were initially individuals and
smaller companies, and I think this will be the rule with Web-based
applications. These are the users who are ready to try new things, partly
because they're more flexible, and partly because they want the lower costs of
new technology.
Web-based applications will often be the best thing for big companies too
(though they'll be slow to realize it). The best intranet is the Internet. If
a company uses true Web-based applications, the software will work better, the
servers will be better administered, and employees will have access to the
system from anywhere.
The argument against this approach usually hinges on security: if access is
easier for employees, it will be for bad guys too. Some larger merchants were
reluctant to use Viaweb because they thought customers' credit card
information would be safer on their own servers. It was not easy to make this
point diplomatically, but in fact the data was almost certainly safer in our
hands than theirs. Who can hire better people to manage security, a technology
startup whose whole business is running servers, or a clothing retailer? Not
only did we have better people worrying about security, we worried more about
it. If someone broke into the clothing retailer's servers, it would affect at
most one merchant, could probably be hushed up, and in the worst case might
get one person fired. If someone broke into ours, it could affect thousands of
merchants, would probably end up as news on CNet, and could put us out of
business.
If you want to keep your money safe, do you keep it under your mattress at
home, or put it in a bank? This argument applies to every aspect of server
administration: not just security, but uptime, bandwidth, load management,
backups, etc. Our existence depended on doing these things right. Server
problems were the big no-no for us, like a dangerous toy would be for a toy
maker, or a salmonella outbreak for a food processor.
A big company that uses Web-based applications is to that extent outsourcing
IT. Drastic as it sounds, I think this is generally a good idea. Companies are
likely to get better service this way than they would from in-house system
administrators. System administrators can become cranky and unresponsive
because they're not directly exposed to competitive pressure: a salesman has
to deal with customers, and a developer has to deal with competitors'
software, but a system administrator, like an old bachelor, has few external
forces to keep him in line. At Viaweb we had external forces in plenty to
keep us in line. The people calling us were customers, not just co-workers. If
a server got wedged, we jumped; just thinking about it gives me a jolt of
adrenaline, years later.
So Web-based applications will ordinarily be the right answer for big
companies too. They will be the last to realize it, however, just as they were
with desktop computers. And partly for the same reason: it will be worth a lot
of money to convince big companies that they need something more expensive.
There is always a tendency for rich customers to buy expensive solutions, even
when cheap solutions are better, because the people offering expensive
solutions can spend more to sell them. At Viaweb we were always up against
this. We lost several high-end merchants to Web consulting firms who convinced
them they'd be better off if they paid half a million dollars for a custom-
made online store on their own server. They were, as a rule, not better off,
as more than one discovered when Christmas shopping season came around and
loads rose on their server. Viaweb was a lot more sophisticated than what most
of these merchants got, but we couldn't afford to tell them. At $300 a month,
we couldn't afford to send a team of well-dressed and authoritative-sounding
people to make presentations to customers.
A large part of what big companies pay extra for is the cost of selling
expensive things to them. (If the Defense Department pays a thousand dollars
for toilet seats, it's partly because it costs a lot to sell toilet seats for
a thousand dollars.) And this is one reason intranet software will continue to
thrive, even though it is probably a bad idea. It's simply more expensive.
There is nothing you can do about this conundrum, so the best plan is to go
for the smaller customers first. The rest will come in time.
**Son of Server**
Running software on the server is nothing new. In fact it's the old model:
mainframe applications are all server-based. If server-based software is such
a good idea, why did it lose last time? Why did desktop computers eclipse
mainframes?
At first desktop computers didn't look like much of a threat. The first users
were all hackers-- or hobbyists, as they were called then. They liked
microcomputers because they were cheap. For the first time, you could have
your own computer. The phrase "personal computer" is part of the language now,
but when it was first used it had a deliberately audacious sound, like the
phrase "personal satellite" would today.
Why did desktop computers take over? I think it was because they had better
software. And I think the reason microcomputer software was better was that it
could be written by small companies.
I don't think many people realize how fragile and tentative startups are in
the earliest stage. Many startups begin almost by accident-- as a couple guys,
either with day jobs or in school, writing a prototype of something that
might, if it looks promising, turn into a company. At this larval stage, any
significant obstacle will stop the startup dead in its tracks. Writing
mainframe software required too much commitment up front. Development machines
were expensive, and because the customers would be big companies, you'd need
an impressive-looking sales force to sell it to them. Starting a startup to
write mainframe software would be a much more serious undertaking than just
hacking something together on your Apple II in the evenings. And so you didn't
get a lot of startups writing mainframe applications.
The arrival of desktop computers inspired a lot of new software, because
writing applications for them seemed an attainable goal to larval startups.
Development was cheap, and the customers would be individual people that you
could reach through computer stores or even by mail-order.
The application that pushed desktop computers out into the mainstream was
VisiCalc, the first spreadsheet. It was written by two guys working in an
attic, and yet did things no mainframe software could do. VisiCalc was
such an advance, in its time, that people bought Apple IIs just to run it. And
this was the beginning of a trend: desktop computers won because startups
wrote software for them.
It looks as if server-based software will be good this time around, because
startups will write it. Computers are so cheap now that you can get started,
as we did, using a desktop computer as a server. Inexpensive processors have
eaten the workstation market (you rarely even hear the word now) and are most
of the way through the server market; Yahoo's servers, which deal with loads
as high as any on the Internet, all have the same inexpensive Intel processors
that you have in your desktop machine. And once you've written the software,
all you need to sell it is a Web site. Nearly all our users came direct to our
site through word of mouth and references in the press.
Viaweb was a typical larval startup. We were terrified of starting a company,
and for the first few months comforted ourselves by treating the whole thing
as an experiment that we might call off at any moment. Fortunately, there were
few obstacles except technical ones. While we were writing the software, our
Web server was the same desktop machine we used for development, connected to
the outside world by a dialup line. Our only expenses in that phase were food
and rent.
There is all the more reason for startups to write Web-based software now,
because writing desktop software has become a lot less fun. If you want to
write desktop software now you do it on Microsoft's terms, calling their APIs
and working around their buggy OS. And if you manage to write something that
takes off, you may find that you were merely doing market research for
Microsoft.
If a company wants to make a platform that startups will build on, they have
to make it something that hackers themselves will want to use. That means it
has to be inexpensive and well-designed. The Mac was popular with hackers when
it first came out, and a lot of them wrote software for it. You see this
less with Windows, because hackers don't use it. The kind of people who are
good at writing software tend to be running Linux or FreeBSD now.
I don't think we would have started a startup to write desktop software,
because desktop software has to run on Windows, and before we could write
software for Windows we'd have to use it. The Web let us do an end-run around
Windows, and deliver software running on Unix direct to users through the
browser. That is a liberating prospect, a lot like the arrival of PCs twenty-
five years ago.
**Microsoft**
Back when desktop computers arrived, IBM was the giant that everyone was
afraid of. It's hard to imagine now, but I remember the feeling very well. Now
the frightening giant is Microsoft, and I don't think they are as blind to the
threat facing them as IBM was. After all, Microsoft deliberately built their
business in IBM's blind spot.
I mentioned earlier that my mother doesn't really need a desktop computer.
Most users probably don't. That's a problem for Microsoft, and they know it.
If applications run on remote servers, no one needs Windows. What will
Microsoft do? Will they be able to use their control of the desktop to
prevent, or constrain, this new generation of software?
My guess is that Microsoft will develop some kind of server/desktop hybrid,
where the operating system works together with servers they control. At a
minimum, files will be centrally available for users who want that. I don't
expect Microsoft to go all the way to the extreme of doing the computations on
the server, with only a browser for a client, if they can avoid it. If you
only need a browser for a client, you don't need Microsoft on the client, and
if Microsoft doesn't control the client, they can't push users towards their
server-based applications.
I think Microsoft will have a hard time keeping the genie in the bottle. There
will be too many different types of clients for them to control them all. And
if Microsoft's applications only work with some clients, competitors will be
able to trump them by offering applications that work from any client.
In a world of Web-based applications, there is no automatic place for
Microsoft. They may succeed in making themselves a place, but I don't think
they'll dominate this new world as they did the world of desktop applications.
It's not so much that a competitor will trip them up as that they will trip
over themselves. With the rise of Web-based software, they will be facing not
just technical problems but their own wishful thinking. What they need to do
is cannibalize their existing business, and I can't see them facing that. The
same single-mindedness that has brought them this far will now be working
against them. IBM was in exactly the same situation, and they could not master
it. IBM made a late and half-hearted entry into the microcomputer business
because they were ambivalent about threatening their cash cow, mainframe
computing. Microsoft will likewise be hampered by wanting to save the desktop.
A cash cow can be a damned heavy monkey on your back.
I'm not saying that no one will dominate server-based applications. Someone
probably will eventually. But I think that there will be a good long period of
cheerful chaos, just as there was in the early days of microcomputers. That
was a good time for startups. Lots of small companies flourished, and did it
by making cool things.
**Startups but More So**
The classic startup is fast and informal, with few people and little money.
Those few people work very hard, and technology magnifies the effect of the
decisions they make. If they win, they win big.
In a startup writing Web-based applications, everything you associate with
startups is taken to an extreme. You can write and launch a product with even
fewer people and even less money. You have to be even faster, and you can get
away with being more informal. You can literally launch your product as three
guys sitting in the living room of an apartment, and a server collocated at an
ISP. We did.
Over time the teams have gotten smaller, faster, and more informal. In 1960,
software development meant a roomful of men with horn-rimmed glasses and
narrow black neckties, industriously writing ten lines of code a day on IBM
coding forms. In 1980, it was a team of eight to ten people wearing jeans to
the office and typing into vt100s. Now it's a couple of guys sitting in a
living room with laptops. (And jeans turn out not to be the last word in
informality.)
Startups are stressful, and this, unfortunately, is also taken to an extreme
with Web-based applications. Many software companies, especially at the
beginning, have had periods when the developers slept under their desks and so
on. The alarming thing about Web-based software is that there is nothing to
prevent this becoming the default. The stories about sleeping under desks
usually end: then at last we shipped it and we all went home and slept for a
week. Web-based software never ships. You can work 16-hour days for as long as
you want to. And because you can, and your competitors can, you tend to be
forced to. You can, so you must. It's Parkinson's Law running in reverse.
The worst thing is not the hours but the responsibility. Programmers and
system administrators traditionally each have their own separate worries.
Programmers have to worry about bugs, and system administrators have to worry
about infrastructure. Programmers may spend a long day up to their elbows in
source code, but at some point they get to go home and forget about it. System
administrators never quite leave the job behind, but when they do get paged at
4:00 AM, they don't usually have to do anything very complicated. With Web-
based applications, these two kinds of stress get combined. The programmers
become system administrators, but without the sharply defined limits that
ordinarily make the job bearable.
At Viaweb we spent the first six months just writing software. We worked the
usual long hours of an early startup. In a desktop software company, this
would have been the part where we were working hard, but it felt like a
vacation compared to the next phase, when we took users onto our server. The
second biggest benefit of selling Viaweb to Yahoo (after the money) was to be
able to dump ultimate responsibility for the whole thing onto the shoulders of
a big company.
Desktop software forces users to become system administrators. Web-based
software forces programmers to. There is less stress in total, but more for
the programmers. That's not necessarily bad news. If you're a startup
competing with a big company, it's good news. Web-based applications
offer a straightforward way to outwork your competitors. No startup asks for
more.
**Just Good Enough**
One thing that might deter you from writing Web-based applications is the
lameness of Web pages as a UI. That is a problem, I admit. There were a few
things we would have _really_ liked to add to HTML and HTTP. What matters,
though, is that Web pages are just good enough.
There is a parallel here with the first microcomputers. The processors in
those machines weren't actually intended to be the CPUs of computers. They
were designed to be used in things like traffic lights. But guys like Ed
Roberts, who designed the Altair, realized that they were just good enough.
You could combine one of these chips with some memory (256 bytes in the first
Altair), and front panel switches, and you'd have a working computer. Being
able to have your own computer was so exciting that there were plenty of
people who wanted to buy them, however limited.
Web pages weren't designed to be a UI for applications, but they're just good
enough. And for a significant number of users, software that you can use from
any browser will be enough of a win in itself to outweigh any awkwardness in
the UI. Maybe you can't write the best-looking spreadsheet using HTML, but you
can write a spreadsheet that several people can use simultaneously from
different locations without special client software, or that can incorporate
live data feeds, or that can page you when certain conditions are triggered.
More importantly, you can write new kinds of applications that don't even have
names yet. VisiCalc was not merely a microcomputer version of a mainframe
application, after all-- it was a new type of application.
Of course, server-based applications don't have to be Web-based. You could
have some other kind of client. But I'm pretty sure that's a bad idea. It
would be very convenient if you could assume that everyone would install your
client-- so convenient that you could easily convince yourself that they all
would-- but if they don't, you're hosed. Because Web-based software assumes
nothing about the client, it will work anywhere the Web works. That's a big
advantage already, and the advantage will grow as new Web devices proliferate.
Users will like you because your software just works, and your life will be
easier because you won't have to tweak it for every new client.
I feel like I've watched the evolution of the Web as closely as anyone, and I
can't predict what's going to happen with clients. Convergence is probably
coming, but where? I can't pick a winner. One thing I can predict is conflict
between AOL and Microsoft. Whatever Microsoft's .NET turns out to be, it will
probably involve connecting the desktop to servers. Unless AOL fights back,
they will either be pushed aside or turned into a pipe between Microsoft
client and server software. If Microsoft and AOL get into a client war, the
only thing sure to work on both will be browsing the Web, meaning Web-based
applications will be the only kind that work everywhere.
How will it all play out? I don't know. And you don't have to know if you bet
on Web-based applications. No one can break that without breaking browsing.
The Web may not be the only way to deliver software, but it's one that works
now and will continue to work for a long time. Web-based applications are
cheap to develop, and easy for even the smallest startup to deliver. They're a
lot of work, and of a particularly stressful kind, but that only makes the
odds better for startups.
**Why Not?**
E. B. White was amused to learn from a farmer friend that many electrified
fences don't have any current running through them. The cows apparently learn
to stay away from them, and after that you don't need the current. "Rise up,
cows!" he wrote, "Take your liberty while despots snore!"
If you're a hacker who has thought of one day starting a startup, there are
probably two things keeping you from doing it. One is that you don't know
anything about business. The other is that you're afraid of competition.
Neither of these fences has any current in it.
There are only two things you have to know about business: build something
users love, and make more than you spend. If you get these two right, you'll
be ahead of most startups. You can figure out the rest as you go.
You may not at first make more than you spend, but as long as the gap is
closing fast enough you'll be ok. If you start out underfunded, it will at
least encourage a habit of frugality. The less you spend, the easier it is to
make more than you spend. Fortunately, it can be very cheap to launch a Web-
based application. We launched on under $10,000, and it would be even cheaper
today. We had to spend thousands on a server, and thousands more to get SSL.
(The only company selling SSL software at the time was Netscape.) Now you can
rent a much more powerful server, with SSL included, for less than we paid for
bandwidth alone. You could launch a Web-based application now for less than
the cost of a fancy office chair.
As for building something users love, here are some general tips. Start by
making something clean and simple that you would want to use yourself. Get a
version 1.0 out fast, then continue to improve the software, listening closely
to the users as you do. The customer is always right, but different customers
are right about different things; the least sophisticated users show you what
you need to simplify and clarify, and the most sophisticated tell you what
features you need to add. The best thing software can be is easy, but the way
to do this is to get the defaults right, not to limit users' choices. Don't
get complacent if your competitors' software is lame; the standard to compare
your software to is what it could be, not what your current competitors happen
to have. Use your software yourself, all the time. Viaweb was supposed to be
an online store builder, but we used it to make our own site too. Don't listen
to marketing people or designers or product managers just because of their job
titles. If they have good ideas, use them, but it's up to you to decide;
software has to be designed by hackers who understand design, not designers
who know a little about software. If you can't design software as well as
implement it, don't start a startup.
Now let's talk about competition. What you're afraid of is not presumably
groups of hackers like you, but actual companies, with offices and business
plans and salesmen and so on, right? Well, they are more afraid of you than
you are of them, and they're right. It's a lot easier for a couple of hackers
to figure out how to rent office space or hire sales people than it is for a
company of any size to get software written. I've been on both sides, and I
know. When Viaweb was bought by Yahoo, I suddenly found myself working for a
big company, and it was like trying to run through waist-deep water.
I don't mean to disparage Yahoo. They had some good hackers, and the top
management were real butt-kickers. For a big company, they were exceptional.
But they were still only about a tenth as productive as a small startup. No
big company can do much better than that. What's scary about Microsoft is that
a company so big can develop software at all. They're like a mountain that can
walk.
Don't be intimidated. You can do as much that Microsoft can't as they can do
that you can't. And no one can stop you. You don't have to ask anyone's
permission to develop Web-based applications. You don't have to do licensing
deals, or get shelf space in retail stores, or grovel to have your application
bundled with the OS. You can deliver software right to the browser, and no one
can get between you and potential users without preventing them from browsing
the Web.
You may not believe it, but I promise you, Microsoft is scared of you. The
complacent middle managers may not be, but Bill is, because he was you once,
back in 1975, the last time a new way of delivering software appeared.
May 2002
| "We were after the C++ programmers. We managed to drag a lot of them about
halfway to Lisp."
\- Guy Steele, co-author of the Java spec
---
In the software business there is an ongoing struggle between the pointy-
headed academics, and another equally formidable force, the pointy-haired
bosses. Everyone knows who the pointy-haired boss is, right? I think most
people in the technology world not only recognize this cartoon character, but
know the actual person in their company that he is modelled upon.
The pointy-haired boss miraculously combines two qualities that are common by
themselves, but rarely seen together: (a) he knows nothing whatsoever about
technology, and (b) he has very strong opinions about it.
Suppose, for example, you need to write a piece of software. The pointy-haired
boss has no idea how this software has to work, and can't tell one programming
language from another, and yet he knows what language you should write it in.
Exactly. He thinks you should write it in Java.
Why does he think this? Let's take a look inside the brain of the pointy-
haired boss. What he's thinking is something like this. Java is a standard. I
know it must be, because I read about it in the press all the time. Since it
is a standard, I won't get in trouble for using it. And that also means there
will always be lots of Java programmers, so if the programmers working for me
now quit, as programmers working for me mysteriously always do, I can easily
replace them.
Well, this doesn't sound that unreasonable. But it's all based on one unspoken
assumption, and that assumption turns out to be false. The pointy-haired boss
believes that all programming languages are pretty much equivalent. If that
were true, he would be right on target. If languages are all equivalent, sure,
use whatever language everyone else is using.
But all languages are not equivalent, and I think I can prove this to you
without even getting into the differences between them. If you asked the
pointy-haired boss in 1992 what language software should be written in, he
would have answered with as little hesitation as he does today. Software
should be written in C++. But if languages are all equivalent, why should the
pointy-haired boss's opinion ever change? In fact, why should the developers
of Java have even bothered to create a new language?
Presumably, if you create a new language, it's because you think it's better
in some way than what people already had. And in fact, Gosling makes it clear
in the first Java white paper that Java was designed to fix some problems with
C++. So there you have it: languages are not all equivalent. If you follow the
trail through the pointy-haired boss's brain to Java and then back through
Java's history to its origins, you end up holding an idea that contradicts the
assumption you started with.
So, who's right? James Gosling, or the pointy-haired boss? Not surprisingly,
Gosling is right. Some languages _are_ better, for certain problems, than
others. And you know, that raises some interesting questions. Java was
designed to be better, for certain problems, than C++. What problems? When is
Java better and when is C++? Are there situations where other languages are
better than either of them?
Once you start considering this question, you have opened a real can of worms.
If the pointy-haired boss had to think about the problem in its full
complexity, it would make his brain explode. As long as he considers all
languages equivalent, all he has to do is choose the one that seems to have
the most momentum, and since that is more a question of fashion than
technology, even he can probably get the right answer. But if languages vary,
he suddenly has to solve two simultaneous equations, trying to find an optimal
balance between two things he knows nothing about: the relative suitability of
the twenty or so leading languages for the problem he needs to solve, and the
odds of finding programmers, libraries, etc. for each. If that's what's on the
other side of the door, it is no surprise that the pointy-haired boss doesn't
want to open it.
The disadvantage of believing that all programming languages are equivalent is
that it's not true. But the advantage is that it makes your life a lot
simpler. And I think that's the main reason the idea is so widespread. It is a
_comfortable_ idea.
We know that Java must be pretty good, because it is the cool, new programming
language. Or is it? If you look at the world of programming languages from a
distance, it looks like Java is the latest thing. (From far enough away, all
you can see is the large, flashing billboard paid for by Sun.) But if you look
at this world up close, you find that there are degrees of coolness. Within
the hacker subculture, there is another language called Perl that is
considered a lot cooler than Java. Slashdot, for example, is generated by
Perl. I don't think you would find those guys using Java Server Pages. But
there is another, newer language, called Python, whose users tend to look down
on Perl, and more waiting in the wings.
If you look at these languages in order, Java, Perl, Python, you notice an
interesting pattern. At least, you notice this pattern if you are a Lisp
hacker. Each one is progressively more like Lisp. Python copies even features
that many Lisp hackers consider to be mistakes. You could translate simple
Lisp programs into Python line for line. It's 2002, and programming languages
have almost caught up with 1958.
**Catching Up with Math**
What I mean is that Lisp was first discovered by John McCarthy in 1958, and
popular programming languages are only now catching up with the ideas he
developed then.
Now, how could that be true? Isn't computer technology something that changes
very rapidly? I mean, in 1958, computers were refrigerator-sized behemoths
with the processing power of a wristwatch. How could any technology that old
even be relevant, let alone superior to the latest developments?
I'll tell you how. It's because Lisp was not really designed to be a
programming language, at least not in the sense we mean today. What we mean by
a programming language is something we use to tell a computer what to do.
McCarthy did eventually intend to develop a programming language in this
sense, but the Lisp that we actually ended up with was based on something
separate that he did as a theoretical exercise\-- an effort to define a more
convenient alternative to the Turing Machine. As McCarthy said later,
> Another way to show that Lisp was neater than Turing machines was to write a
> universal Lisp function and show that it is briefer and more comprehensible
> than the description of a universal Turing machine. This was the Lisp
> function _eval_..., which computes the value of a Lisp expression....
> Writing _eval_ required inventing a notation representing Lisp functions as
> Lisp data, and such a notation was devised for the purposes of the paper
> with no thought that it would be used to express Lisp programs in practice.
What happened next was that, some time in late 1958, Steve Russell, one of
McCarthy's grad students, looked at this definition of _eval_ and realized
that if he translated it into machine language, the result would be a Lisp
interpreter.
This was a big surprise at the time. Here is what McCarthy said about it later
in an interview:
> Steve Russell said, look, why don't I program this _eval_..., and I said to
> him, ho, ho, you're confusing theory with practice, this _eval_ is intended
> for reading, not for computing. But he went ahead and did it. That is, he
> compiled the _eval_ in my paper into [IBM] 704 machine code, fixing bugs,
> and then advertised this as a Lisp interpreter, which it certainly was. So
> at that point Lisp had essentially the form that it has today....
Suddenly, in a matter of weeks I think, McCarthy found his theoretical
exercise transformed into an actual programming language-- and a more powerful
one than he had intended.
So the short explanation of why this 1950s language is not obsolete is that it
was not technology but math, and math doesn't get stale. The right thing to
compare Lisp to is not 1950s hardware, but, say, the Quicksort algorithm,
which was discovered in 1960 and is still the fastest general-purpose sort.
There is one other language still surviving from the 1950s, Fortran, and it
represents the opposite approach to language design. Lisp was a piece of
theory that unexpectedly got turned into a programming language. Fortran was
developed intentionally as a programming language, but what we would now
consider a very low-level one.
Fortran I, the language that was developed in 1956, was a very different
animal from present-day Fortran. Fortran I was pretty much assembly language
with math. In some ways it was less powerful than more recent assembly
languages; there were no subroutines, for example, only branches. Present-day
Fortran is now arguably closer to Lisp than to Fortran I.
Lisp and Fortran were the trunks of two separate evolutionary trees, one
rooted in math and one rooted in machine architecture. These two trees have
been converging ever since. Lisp started out powerful, and over the next
twenty years got fast. So-called mainstream languages started out fast, and
over the next forty years gradually got more powerful, until now the most
advanced of them are fairly close to Lisp. Close, but they are still missing a
few things....
**What Made Lisp Different**
When it was first developed, Lisp embodied nine new ideas. Some of these we
now take for granted, others are only seen in more advanced languages, and two
are still unique to Lisp. The nine ideas are, in order of their adoption by
the mainstream,
1. Conditionals. A conditional is an if-then-else construct. We take these for granted now, but Fortran I didn't have them. It had only a conditional goto closely based on the underlying machine instruction.
2. A function type. In Lisp, functions are a data type just like integers or strings. They have a literal representation, can be stored in variables, can be passed as arguments, and so on.
3. Recursion. Lisp was the first programming language to support it.
4. Dynamic typing. In Lisp, all variables are effectively pointers. Values are what have types, not variables, and assigning or binding variables means copying pointers, not what they point to.
5. Garbage-collection.
6. Programs composed of expressions. Lisp programs are trees of expressions, each of which returns a value. This is in contrast to Fortran and most succeeding languages, which distinguish between expressions and statements.
It was natural to have this distinction in Fortran I because you could not
nest statements. And so while you needed expressions for math to work, there
was no point in making anything else return a value, because there could not
be anything waiting for it.
This limitation went away with the arrival of block-structured languages, but
by then it was too late. The distinction between expressions and statements
was entrenched. It spread from Fortran into Algol and then to both their
descendants.
7. A symbol type. Symbols are effectively pointers to strings stored in a hash table. So you can test equality by comparing a pointer, instead of comparing each character.
8. A notation for code using trees of symbols and constants.
9. The whole language there all the time. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.
Running code at read-time lets users reprogram Lisp's syntax; running code at
compile-time is the basis of macros; compiling at runtime is the basis of
Lisp's use as an extension language in programs like Emacs; and reading at
runtime enables programs to communicate using s-expressions, an idea recently
reinvented as XML.
When Lisp first appeared, these ideas were far removed from ordinary
programming practice, which was dictated largely by the hardware available in
the late 1950s. Over time, the default language, embodied in a succession of
popular languages, has gradually evolved toward Lisp. Ideas 1-5 are now
widespread. Number 6 is starting to appear in the mainstream. Python has a
form of 7, though there doesn't seem to be any syntax for it.
As for number 8, this may be the most interesting of the lot. Ideas 8 and 9
only became part of Lisp by accident, because Steve Russell implemented
something McCarthy had never intended to be implemented. And yet these ideas
turn out to be responsible for both Lisp's strange appearance and its most
distinctive features. Lisp looks strange not so much because it has a strange
syntax as because it has no syntax; you express programs directly in the parse
trees that get built behind the scenes when other languages are parsed, and
these trees are made of lists, which are Lisp data structures.
Expressing the language in its own data structures turns out to be a very
powerful feature. Ideas 8 and 9 together mean that you can write programs that
write programs. That may sound like a bizarre idea, but it's an everyday thing
in Lisp. The most common way to do it is with something called a _macro._
The term "macro" does not mean in Lisp what it means in other languages. A
Lisp macro can be anything from an abbreviation to a compiler for a new
language. If you want to really understand Lisp, or just expand your
programming horizons, I would learn more about macros.
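As a minimal sketch of what that can mean, here is a hypothetical `while` construct, which standard Common Lisp does not define, written as a macro; at compile time each call is rewritten into the `do` loop the language already has:

    ;; WHILE is not part of standard Common Lisp; this macro adds it.
    ;; A call of the form (while test . body) expands into an equivalent DO loop.
    (defmacro while (test &body body)
      `(do ()
           ((not ,test))
         ,@body))

    ;; Used like any built-in construct:
    ;; (let ((i 0))
    ;;   (while (< i 3)
    ;;     (print i)
    ;;     (incf i)))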
Macros (in the Lisp sense) are still, as far as I know, unique to Lisp. This
is partly because in order to have macros you probably have to make your
language look as strange as Lisp. It may also be because if you do add that
final increment of power, you can no longer claim to have invented a new
language, but only a new dialect of Lisp.
I mention this mostly as a joke, but it is quite true. If you define a
language that has car, cdr, cons, quote, cond, atom, eq, and a notation for
functions expressed as lists, then you can build all the rest of Lisp out of
it. That is in fact the defining quality of Lisp: it was in order to make this
so that McCarthy gave Lisp the shape it has.
**Where Languages Matter**
So suppose Lisp does represent a kind of limit that mainstream languages are
approaching asymptotically-- does that mean you should actually use it to
write software? How much do you lose by using a less powerful language? Isn't
it wiser, sometimes, not to be at the very edge of innovation? And isn't
popularity to some extent its own justification? Isn't the pointy-haired boss
right, for example, to want to use a language for which he can easily hire
programmers?
There are, of course, projects where the choice of programming language
doesn't matter much. As a rule, the more demanding the application, the more
leverage you get from using a powerful language. But plenty of projects are
not demanding at all. Most programming probably consists of writing little
glue programs, and for little glue programs you can use any language that
you're already familiar with and that has good libraries for whatever you need
to do. If you just need to feed data from one Windows app to another, sure,
use Visual Basic.
You can write little glue programs in Lisp too (I use it as a desktop
calculator), but the biggest win for languages like Lisp is at the other end
of the spectrum, where you need to write sophisticated programs to solve hard
problems in the face of fierce competition. A good example is the airline fare
search program that ITA Software licenses to Orbitz. These guys entered a
market already dominated by two big, entrenched competitors, Travelocity and
Expedia, and seem to have just humiliated them technologically.
The core of ITA's application is a 200,000 line Common Lisp program that
searches many orders of magnitude more possibilities than their competitors,
who apparently are still using mainframe-era programming techniques. (Though
ITA is also in a sense using a mainframe-era programming language.) I have
never seen any of ITA's code, but according to one of their top hackers they
use a lot of macros, and I am not surprised to hear it.
**Centripetal Forces**
I'm not saying there is no cost to using uncommon technologies. The pointy-
haired boss is not completely mistaken to worry about this. But because he
doesn't understand the risks, he tends to magnify them.
I can think of three problems that could arise from using less common
languages. Your programs might not work well with programs written in other
languages. You might have fewer libraries at your disposal. And you might have
trouble hiring programmers.
How much of a problem is each of these? The importance of the first varies
depending on whether you have control over the whole system. If you're writing
software that has to run on a remote user's machine on top of a buggy, closed
operating system (I mention no names), there may be advantages to writing your
application in the same language as the OS. But if you control the whole
system and have the source code of all the parts, as ITA presumably does, you
can use whatever languages you want. If any incompatibility arises, you can
fix it yourself.
In server-based applications you can get away with using the most advanced
technologies, and I think this is the main cause of what Jonathan Erickson
calls the "programming language renaissance." This is why we even hear about
new languages like Perl and Python. We're not hearing about these languages
because people are using them to write Windows apps, but because people are
using them on servers. And as software shifts off the desktop and onto servers
(a future even Microsoft seems resigned to), there will be less and less
pressure to use middle-of-the-road technologies.
As for libraries, their importance also depends on the application. For less
demanding problems, the availability of libraries can outweigh the intrinsic
power of the language. Where is the breakeven point? Hard to say exactly, but
wherever it is, it is short of anything you'd be likely to call an
application. If a company considers itself to be in the software business, and
they're writing an application that will be one of their products, then it
will probably involve several hackers and take at least six months to write.
In a project of that size, powerful languages probably start to outweigh the
convenience of pre-existing libraries.
The third worry of the pointy-haired boss, the difficulty of hiring
programmers, I think is a red herring. How many hackers do you need to hire,
after all? Surely by now we all know that software is best developed by teams
of less than ten people. And you shouldn't have trouble hiring hackers on that
scale for any language anyone has ever heard of. If you can't find ten Lisp
hackers, then your company is probably based in the wrong city for developing
software.
In fact, choosing a more powerful language probably decreases the size of the
team you need, because (a) if you use a more powerful language you probably
won't need as many hackers, and (b) hackers who work in more advanced
languages are likely to be smarter.
I'm not saying that you won't get a lot of pressure to use what are perceived
as "standard" technologies. At Viaweb (now Yahoo Store), we raised some
eyebrows among VCs and potential acquirers by using Lisp. But we also raised
eyebrows by using generic Intel boxes as servers instead of "industrial
strength" servers like Suns, for using a then-obscure open-source Unix variant
called FreeBSD instead of a real commercial OS like Windows NT, for ignoring a
supposed e-commerce standard called SET that no one now even remembers, and so
on.
You can't let the suits make technical decisions for you. Did it alarm some
potential acquirers that we used Lisp? Some, slightly, but if we hadn't used
Lisp, we wouldn't have been able to write the software that made them want to
buy us. What seemed like an anomaly to them was in fact cause and effect.
If you start a startup, don't design your product to please VCs or potential
acquirers. _Design your product to please the users._ If you win the users,
everything else will follow. And if you don't, no one will care how
comfortingly orthodox your technology choices were.
**The Cost of Being Average**
How much do you lose by using a less powerful language? There is actually some
data out there about that.
The most convenient measure of power is probably code size. The point of high-
level languages is to give you bigger abstractions-- bigger bricks, as it
were, so you don't need as many to build a wall of a given size. So the more
powerful the language, the shorter the program (not simply in characters, of
course, but in distinct elements).
How does a more powerful language enable you to write shorter programs? One
technique you can use, if the language will let you, is something called
bottom-up programming. Instead of simply writing your application in the base
language, you build on top of the base language a language for writing
programs like yours, then write your program in it. The combined code can be
much shorter than if you had written your whole program in the base language--
indeed, this is how most compression algorithms work. A bottom-up program
should be easier to modify as well, because in many cases the language layer
won't have to change at all.
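As a rough sketch of the idea in Common Lisp (the order-processing domain and all the names here are invented for illustration), you first grow a small vocabulary for the kind of program you're writing, then write the program in that vocabulary:

    ;; A tiny language layer for a hypothetical order-processing program.
    (defun in-stock-p (item)
      (plusp (getf item :stock)))

    (defun item-total (items)
      (reduce #'+ items
              :key (lambda (item) (getf item :price))
              :initial-value 0))

    ;; The application, written in that layer rather than in raw Lisp.
    ;; When requirements change, mostly this top layer changes.
    (defun order-summary (items)
      (let ((available (remove-if-not #'in-stock-p items)))
        (list :count (length available)
              :total (item-total available))))

    ;; (order-summary '((:stock 2 :price 10) (:stock 0 :price 99)))
    ;;   =>  (:COUNT 1 :TOTAL 10)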
Code size is important, because the time it takes to write a program depends
mostly on its length. If your program would be three times as long in another
language, it will take three times as long to write-- and you can't get around
this by hiring more people, because beyond a certain size new hires are
actually a net lose. Fred Brooks described this phenomenon in his famous book
_The Mythical Man-Month,_ and everything I've seen has tended to confirm what
he said.
So how much shorter are your programs if you write them in Lisp? Most of the
numbers I've heard for Lisp versus C, for example, have been around 7-10x. But
a recent article about ITA in _New Architect_ magazine said that "one line of
Lisp can replace 20 lines of C," and since this article was full of quotes
from ITA's president, I assume they got this number from ITA. If so then we
can put some faith in it; ITA's software includes a lot of C and C++ as well
as Lisp, so they are speaking from experience.
My guess is that these multiples aren't even constant. I think they increase
when you face harder problems and also when you have smarter programmers. A
really good hacker can squeeze more out of better tools.
As one data point on the curve, at any rate, if you were to compete with ITA
and chose to write your software in C, they would be able to develop software
twenty times faster than you. If you spent a year on a new feature, they'd be
able to duplicate it in less than three weeks. Whereas if they spent just
three months developing something new, it would be _five years_ before you had
it too.
And you know what? That's the best-case scenario. When you talk about code-
size ratios, you're implicitly assuming that you can actually write the
program in the weaker language. But in fact there are limits on what
programmers can do. If you're trying to solve a hard problem with a language
that's too low-level, you reach a point where there is just too much to keep
in your head at once.
So when I say it would take ITA's imaginary competitor five years to duplicate
something ITA could write in Lisp in three months, I mean five years if
nothing goes wrong. In fact, the way things work in most companies, any
development project that would take five years is likely never to get finished
at all.
I admit this is an extreme case. ITA's hackers seem to be unusually smart, and
C is a pretty low-level language. But in a competitive market, even a
differential of two or three to one would be enough to guarantee that you'd
always be behind.
**A Recipe**
This is the kind of possibility that the pointy-haired boss doesn't even want
to think about. And so most of them don't. Because, you know, when it comes
down to it, the pointy-haired boss doesn't mind if his company gets their ass
kicked, so long as no one can prove it's his fault. The safest plan for him
personally is to stick close to the center of the herd.
Within large organizations, the phrase used to describe this approach is
"industry best practice." Its purpose is to shield the pointy-haired boss from
responsibility: if he chooses something that is "industry best practice," and
the company loses, he can't be blamed. He didn't choose, the industry did.
I believe this term was originally used to describe accounting methods and so
on. What it means, roughly, is _don't do anything weird._ And in accounting
that's probably a good idea. The terms "cutting-edge" and "accounting" do not
sound good together. But when you import this criterion into decisions about
technology, you start to get the wrong answers.
Technology often _should_ be cutting-edge. In programming languages, as Erann
Gat has pointed out, what "industry best practice" actually gets you is not
the best, but merely the average. When a decision causes you to develop
software at a fraction of the rate of more aggressive competitors, "best
practice" is a misnomer.
So here we have two pieces of information that I think are very valuable. In
fact, I know it from my own experience. Number 1, languages vary in power.
Number 2, most managers deliberately ignore this. Between them, these two
facts are literally a recipe for making money. ITA is an example of this
recipe in action. If you want to win in a software business, just take on the
hardest problem you can find, use the most powerful language you can get, and
wait for your competitors' pointy-haired bosses to revert to the mean.
* * *
**Appendix: Power**
As an illustration of what I mean about the relative power of programming
languages, consider the following problem. We want to write a function that
generates accumulators-- a function that takes a number n, and returns a
function that takes another number i and returns n incremented by i.
(That's _incremented by_ , not plus. An accumulator has to accumulate.)
In Common Lisp this would be

    (defun foo (n)
      (lambda (i) (incf n i)))

and in Perl 5,

    sub foo {
      my ($n) = @_;
      sub {$n += shift}
    }

which has more elements than the Lisp version because you have to extract
parameters manually in Perl.
In Smalltalk the code is slightly longer than in Lisp

    foo: n
      |s|
      s := n.
      ^[:i| s := s+i. ]

because although in general lexical variables work, you can't do an assignment
to a parameter, so you have to create a new variable s.
In Javascript the example is, again, slightly longer, because Javascript
retains the distinction between statements and expressions, so you need
explicit `return` statements to return values:

    function foo(n) {
      return function (i) {
        return n += i
      }
    }

(To be fair, Perl also retains this
distinction, but deals with it in typical Perl fashion by letting you omit
`return`s.)
If you try to translate the Lisp/Perl/Smalltalk/Javascript code into Python
you run into some limitations. Because Python doesn't fully support lexical
variables, you have to create a data structure to hold the value of n. And
although Python does have a function data type, there is no literal
representation for one (unless the body is only a single expression) so you
need to create a named function to return. This is what you end up with:

    def foo(n):
        s = [n]
        def bar(i):
            s[0] += i
            return s[0]
        return bar

Python users might legitimately ask why they can't just write

    def foo(n):
        return lambda i: return n += i

or even

    def foo(n):
        lambda i: n += i

and my guess is that
they probably will, one day. (But if they don't want to wait for Python to
evolve the rest of the way into Lisp, they could always just...)
In OO languages, you can, to a limited extent, simulate a closure (a function
that refers to variables defined in enclosing scopes) by defining a class with
one method and a field to replace each variable from an enclosing scope. This
makes the programmer do the kind of code analysis that would be done by the
compiler in a language with full support for lexical scope, and it won't work
if more than one function refers to the same variable, but it is enough in
simple cases like this.
Python experts seem to agree that this is the preferred way to solve the
problem in Python, writing either

    def foo(n):
        class acc:
            def __init__(self, s):
                self.s = s
            def inc(self, i):
                self.s += i
                return self.s
        return acc(n).inc

or

    class foo:
        def __init__(self, n):
            self.n = n
        def __call__(self, i):
            self.n += i
            return self.n

I include these because I wouldn't want Python advocates
to say I was misrepresenting the language, but both seem to me more complex
than the first version. You're doing the same thing, setting up a separate
place to hold the accumulator; it's just a field in an object instead of the
head of a list. And the use of these special, reserved field names, especially
`__call__`, seems a bit of a hack.
In the rivalry between Perl and Python, the claim of the Python hackers seems
to be that Python is a more elegant alternative to Perl, but what this
case shows is that power is the ultimate elegance: the Perl program is simpler
(has fewer elements), even if the syntax is a bit uglier.
How about other languages? In the other languages mentioned in this talk--
Fortran, C, C++, Java, and Visual Basic-- it is not clear whether you can
actually solve this problem. Ken Anderson says that the following code is
about as close as you can get in Java:

    public interface Inttoint {
        public int call(int i);
    }

    public static Inttoint foo(final int n) {
        return new Inttoint() {
            int s = n;
            public int call(int i) {
                s = s + i;
                return s;
            }
        };
    }
This falls short of the spec because it only works for integers. After many
email exchanges with Java hackers, I would say that writing a properly
polymorphic version that behaves like the preceding examples is somewhere
between damned awkward and impossible. If anyone wants to write one I'd be
very curious to see it, but I personally have timed out.
It's not literally true that you can't solve this problem in other languages,
of course. The fact that all these languages are Turing-equivalent means that,
strictly speaking, you can write any program in any of them. So how would you
do it? In the limit case, by writing a Lisp interpreter in the less powerful
language.
That sounds like a joke, but it happens so often to varying degrees in large
programming projects that there is a name for the phenomenon, Greenspun's
Tenth Rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc
> informally-specified bug-ridden slow implementation of half of Common Lisp.
If you try to solve a hard problem, the question is not whether you will use a
powerful enough language, but whether you will (a) use a powerful language,
(b) write a de facto interpreter for one, or (c) yourself become a human
compiler for one. We see this already beginning to happen in the Python
example, where we are in effect simulating the code that a compiler would
generate to implement a lexical variable.
This practice is not only common, but institutionalized. For example, in the
OO world you hear a good deal about "patterns". I wonder if these patterns are
not sometimes evidence of case (c), the human compiler, at work. When I see
patterns in my programs, I consider it a sign of trouble. The shape of a
program should reflect only the problem it needs to solve. Any other
regularity in the code is a sign, to me at least, that I'm using abstractions
that aren't powerful enough-- often that I'm generating by hand the expansions
of some macro that I need to write.
September 2022
I recently told applicants to Y Combinator that the best advice I could give
for getting in, per word, was
> Explain what you've learned from users.
That tests a lot of things: whether you're paying attention to users, how well
you understand them, and even how much they need what you're making.
Afterward I asked myself the same question. What have I learned from YC's
users, the startups we've funded?
The first thing that came to mind was that most startups have the same
problems. No two have exactly the same problems, but it's surprising how much
the problems remain the same, regardless of what they're making. Once you've
advised 100 startups all doing different things, you rarely encounter problems
you haven't seen before.
This fact is one of the things that makes YC work. But I didn't know it when
we started YC. I only had a few data points: our own startup, and those
started by friends. It was a surprise to me how often the same problems recur
in different forms. Many later stage investors might never realize this,
because later stage investors might not advise 100 startups in their whole
career, but a YC partner will get this much experience in the first year or
two.
That's one advantage of funding large numbers of early stage companies rather
than smaller numbers of later-stage ones. You get a lot of data. Not just
because you're looking at more companies, but also because more goes wrong.
But knowing (nearly) all the problems startups can encounter doesn't mean that
advising them can be automated, or reduced to a formula. There's no substitute
for individual office hours with a YC partner. Each startup is unique, which
means they have to be advised by specific partners who know them well.
We learned that the hard way, in the notorious "batch that broke YC" in the
summer of 2012. Up till that point we treated the partners as a pool. When a
startup requested office hours, they got the next available slot posted by any
partner. That meant every partner had to know every startup. This worked fine
up to 60 startups, but when the batch grew to 80, everything broke. The
founders probably didn't realize anything was wrong, but the partners were
confused and unhappy because halfway through the batch they still didn't know
all the companies yet.
At first I was puzzled. How could things be fine at 60 startups and broken at
80? It was only a third more. Then I realized what had happened. We were using
an _O(n^2)_ algorithm. So of course it blew up.
The solution we adopted was the classic one in these situations. We sharded
the batch into smaller groups of startups, each overseen by a dedicated group
of partners. That fixed the problem, and has worked fine ever since. But the
batch that broke YC was a powerful demonstration of how individualized the
process of advising startups has to be.
Another related surprise is how bad founders can be at realizing what their
problems are. Founders will sometimes come in to talk about some problem, and
we'll discover another much bigger one in the course of the conversation. For
example (and this case is all too common), founders will come in to talk about
the difficulties they're having raising money, and after digging into their
situation, it turns out the reason is that the company is doing badly, and
investors can tell. Or founders will come in worried that they still haven't
cracked the problem of user acquisition, and the reason turns out to be that
their product isn't good enough. There have been times when I've asked "Would
you use this yourself, if you hadn't built it?" and the founders, on thinking
about it, said "No." Well, there's the reason you're having trouble getting
users.
Often founders know what their problems are, but not their relative
importance. They'll come in to talk about three problems they're worrying
about. One is of moderate importance, one doesn't matter at all, and one will
kill the company if it isn't addressed immediately. It's like watching one of
those horror movies where the heroine is deeply upset that her boyfriend
cheated on her, and only mildly curious about the door that's mysteriously
ajar. You want to say: never mind about your boyfriend, think about that door!
Fortunately in office hours you can. So while startups still die with some
regularity, it's rarely because they wandered into a room containing a
murderer. The YC partners can warn them where the murderers are.
Not that founders listen. That was another big surprise: how often founders
don't listen to us. A couple weeks ago I talked to a partner who had been
working for YC for a couple batches and was starting to see the pattern. "They
come back a year later," she said, "and say 'We wish we'd listened to you.'"
It took me a long time to figure out why founders don't listen. At first I
thought it was mere stubbornness. That's part of the reason, but another and
probably more important reason is that so much about startups is
counterintuitive. And when you tell someone something counterintuitive, what
it sounds to them is wrong. So the reason founders don't listen to us is that
they don't _believe_ us. At least not till experience teaches them otherwise.
The reason startups are so counterintuitive is that they're so different from
most people's other experiences. No one knows what it's like except those
who've done it. Which is why YC partners should usually have been founders
themselves. But strangely enough, the counterintuitiveness of startups turns
out to be another of the things that make YC work. If it weren't
counterintuitive, founders wouldn't need our advice about how to do it.
Focus is doubly important for early stage startups, because not only do they
have a hundred different problems, they don't have anyone to work on them
except the founders. If the founders focus on things that don't matter,
there's no one focusing on the things that do. So the essence of what happens
at YC is to figure out which problems matter most, then cook up ideas for
solving them — ideally at a resolution of a week or less — and then try those
ideas and measure how well they worked. The focus is on action, with
measurable, near-term results.
This doesn't imply that founders should rush forward regardless of the
consequences. If you correct course at a high enough frequency, you can be
simultaneously decisive at a micro scale and tentative at a macro scale. The
result is a somewhat winding path, but executed very rapidly, like the path a
running back takes downfield. And in practice there's less backtracking than
you might expect. Founders usually guess right about which direction to run
in, especially if they have someone experienced like a YC partner to bounce
their hypotheses off. And when they guess wrong, they notice fast, because
they'll talk about the results at office hours the next week.
A small improvement in navigational ability can make you a lot faster, because
it has a double effect: the path is shorter, and you can travel faster along
it when you're more certain it's the right one. That's where a lot of YC's
value lies, in helping founders get an extra increment of focus that lets them
move faster. And since moving fast is the essence of a startup, YC in effect
makes startups more startup-like.
Speed defines startups. Focus enables speed. YC improves focus.
Why are founders uncertain about what to do? Partly because startups almost by
definition are doing something new, which means no one knows how to do it yet,
or in most cases even what "it" is. Partly because startups are so
counterintuitive generally. And partly because many founders, especially young
and ambitious ones, have been trained to win the wrong way. That took me years
to figure out. The educational system in most countries trains you to win by
hacking the test instead of actually doing whatever it's supposed to measure.
But that stops working when you start a startup. So part of what YC does is to
retrain founders to stop trying to hack the test. (It takes a surprisingly
long time. A year in, you still see them reverting to their old habits.)
YC is not simply more experienced founders passing on their knowledge. It's
more like specialization than apprenticeship. The knowledge of the YC partners
and the founders have different shapes: It wouldn't be worthwhile for a
founder to acquire the encyclopedic knowledge of startup problems that a YC
partner has, just as it wouldn't be worthwhile for a YC partner to acquire the
depth of domain knowledge that a founder has. That's why it can still be
valuable for an experienced founder to do YC, just as it can still be valuable
for an experienced athlete to have a coach.
The other big thing YC gives founders is colleagues, and this may be even more
important than the advice of partners. If you look at history, great work
clusters around certain places and institutions: Florence in the late 15th
century, the University of Göttingen in the late 19th, _The New Yorker_ under
Ross, Bell Labs, Xerox PARC. However good you are, good colleagues make you
better. Indeed, very ambitious people probably need colleagues more than
anyone else, because they're so starved for them in everyday life.
Whether or not YC manages one day to be listed alongside those famous
clusters, it won't be for lack of trying. We were very aware of this
historical phenomenon and deliberately designed YC to be one. By this point
it's not bragging to say that it's the biggest cluster of great startup
founders. Even people trying to attack YC concede that.
Colleagues and startup founders are two of the most powerful forces in the
world, so you'd expect it to have a big effect to combine them. Before YC, to
the extent people thought about the question at all, most assumed they
couldn't be combined — that loneliness was the price of independence. That was
how it felt to us when we started our own startup in Boston in the 1990s. We
had a handful of older people we could go to for advice (of varying quality),
but no peers. There was no one we could commiserate with about the misbehavior
of investors, or speculate with about the future of technology. I often tell
founders to make something they themselves want, and YC is certainly that: it
was designed to be exactly what we wanted when we were starting a startup.
One thing we wanted was to be able to get seed funding without having to make
the rounds of random rich people. That has become a commodity now, at least in
the US. But great colleagues can never become a commodity, because the fact
that they cluster in some places means they're proportionally absent from the
rest.
Something magical happens where they do cluster though. The energy in the room
at a YC dinner is like nothing else I've experienced. We would have been happy
just to have one or two other startups to talk to. When you have a whole
roomful it's another thing entirely.
YC founders aren't just inspired by one another. They also help one another.
That's the happiest thing I've learned about startup founders: how generous
they can be in helping one another. We noticed this in the first batch and
consciously designed YC to magnify it. The result is something far more
intense than, say, a university. Between the partners, the alumni, and their
batchmates, founders are surrounded by people who want to help them, and can.
April 2022
One of the most surprising things I've witnessed in my lifetime is the rebirth
of the concept of heresy.
In his excellent biography of Newton, Richard Westfall writes about the moment
when he was elected a fellow of Trinity College:
> Supported comfortably, Newton was free to devote himself wholly to whatever
> he chose. To remain on, he had only to avoid the three unforgivable sins:
> crime, heresy, and marriage.
The first time I read that, in the 1990s, it sounded amusingly medieval. How
strange, to have to avoid committing heresy. But when I reread it 20 years
later it sounded like a description of contemporary employment.
There are an ever-increasing number of opinions you can be fired for. Those
doing the firing don't use the word "heresy" to describe them, but
structurally they're equivalent. Structurally there are two distinctive things
about heresy: (1) that it takes priority over the question of truth or
falsity, and (2) that it outweighs everything else the speaker has done.
For example, when someone calls a statement "x-ist," they're also implicitly
saying that this is the end of the discussion. They do not, having said this,
go on to consider whether the statement is true or not. Using such labels is
the conversational equivalent of signalling an exception. That's one of the
reasons they're used: to end a discussion.
If you find yourself talking to someone who uses these labels a lot, it might
be worthwhile to ask them explicitly if they believe any babies are being
thrown out with the bathwater. Can a statement be x-ist, for whatever value of
x, and also true? If the answer is yes, then they're admitting to banning the
truth. That's obvious enough that I'd guess most would answer no. But if they
answer no, it's easy to show that they're mistaken, and that in practice such
labels are applied to statements regardless of their truth or falsity.
The clearest evidence of this is that whether a statement is considered x-ist
often depends on who said it. Truth doesn't work that way. The same statement
can't be true when one person says it, but x-ist, and therefore false, when
another person does.
The other distinctive thing about heresies, compared to ordinary opinions, is
that the public expression of them outweighs everything else the speaker has
done. In ordinary matters, like knowledge of history, or taste in music,
you're judged by the average of your opinions. A heresy is qualitatively
different. It's like dropping a chunk of uranium onto the scale.
Back in the day (and still, in some places) the punishment for heresy was
death. You could have led a life of exemplary goodness, but if you publicly
doubted, say, the divinity of Christ, you were going to burn. Nowadays, in
civilized countries, heretics only get fired in the metaphorical sense, by
losing their jobs. But the structure of the situation is the same: the heresy
outweighs everything else. You could have spent the last ten years saving
children's lives, but if you express certain opinions, you're automatically
fired.
It's much the same as if you committed a crime. No matter how virtuously
you've lived, if you commit a crime, you must still suffer the penalty of the
law. Having lived a previously blameless life might mitigate the punishment,
but it doesn't affect whether you're guilty or not.
A heresy is an opinion whose expression is treated like a crime — one that
makes some people feel not merely that you're mistaken, but that you should be
punished. Indeed, their desire to see you punished is often stronger than it
would be if you'd committed an actual crime. There are many on the far left
who believe strongly in the reintegration of felons (as I do myself), and yet
seem to feel that anyone guilty of certain heresies should never work again.
There are always some heresies — some opinions you'd be punished for
expressing. But there are a lot more now than there were a few decades ago,
and even those who are happy about this would have to agree that it's so.
Why? Why has this antiquated-sounding religious concept come back in a secular
form? And why now?
You need two ingredients for a wave of intolerance: intolerant people, and an
ideology to guide them. The intolerant people are always there. They exist in
every sufficiently large society. That's why waves of intolerance can arise so
suddenly; all they need is something to set them off.
I've already written an _essay_ describing the aggressively conventional-
minded. The short version is that people can be classified in two dimensions
according to (1) how independent- or conventional-minded they are, and (2) how
aggressive they are about it. The aggressively conventional-minded are the
enforcers of orthodoxy.
Normally they're only locally visible. They're the grumpy, censorious people
in a group — the ones who are always first to complain when something violates
the current rules of propriety. But occasionally, like a vector field whose
elements become aligned, a large number of aggressively conventional-minded
people unite behind some ideology all at once. Then they become much more of a
problem, because a mob dynamic takes over, where the enthusiasm of each
participant is increased by the enthusiasm of the others.
The most notorious 20th century case may have been the Cultural Revolution.
Though initiated by Mao to undermine his rivals, the Cultural Revolution was
otherwise mostly a grass-roots phenomenon. Mao said in essence: There are
heretics among us. Seek them out and punish them. And that's all the
aggressively conventional-minded ever need to hear. They went at it with the
delight of dogs chasing squirrels.
To unite the conventional-minded, an ideology must have many of the features
of a religion. In particular it must have strict and arbitrary rules that
adherents can demonstrate their _purity_ by obeying, and its adherents must
believe that anyone who obeys these rules is ipso facto morally superior to
anyone who doesn't.
In the late 1980s a new ideology of this type appeared in US universities. It
had a very strong component of moral purity, and the aggressively
conventional-minded seized upon it with their usual eagerness — all the more
because the relaxation of social norms in the preceding decades meant there
had been less and less to forbid. The resulting wave of intolerance has been
eerily similar in form to the Cultural Revolution, though fortunately much
smaller in magnitude.
I've deliberately avoided mentioning any specific heresies here. Partly
because one of the universal tactics of heretic hunters, now as in the past,
is to accuse those who disapprove of the way in which they suppress ideas of
being heretics themselves. Indeed, this tactic is so consistent that you could
use it as a way of detecting witch hunts in any era.
The second reason I've avoided mentioning any specific heresies is that I
want this essay to work in the future, not just now. And unfortunately it
probably will. The aggressively conventional-minded will always be among us,
looking for things to forbid. All they need is an ideology to tell them what.
And it's unlikely the current one will be the last.
There are aggressively conventional-minded people on both the right and the
left. The reason the current wave of intolerance comes from the left is simply
because the new unifying ideology happened to come from the left. The next one
might come from the right. Imagine what that would be like.
Fortunately in western countries the suppression of heresies is nothing like
as bad as it used to be. Though the window of opinions you can express
publicly has narrowed in the last decade, it's still much wider than it was a
few hundred years ago. The problem is the derivative. Up till about 1985 the
window had been growing ever wider. Anyone looking into the future in 1985
would have expected freedom of expression to continue to increase. Instead it
has decreased.
The situation is similar to what's happened with infectious diseases like
measles. Anyone looking into the future in 2010 would have expected the number
of measles cases in the US to continue to decrease. Instead, thanks to anti-
vaxxers, it has increased. The absolute number is still not that high. The
problem is the derivative.
In both cases it's hard to know how much to worry. Is it really dangerous to
society as a whole if a handful of extremists refuse to get their kids
vaccinated, or shout down speakers at universities? The point to start
worrying is presumably when their efforts start to spill over into everyone
else's lives. And in both cases that does seem to be happening.
So it's probably worth spending some amount of effort on pushing back to keep
open the window of free expression. My hope is that this essay will help form
social antibodies not just against current efforts to suppress ideas, but
against the concept of heresy in general. That's the real prize. How do you
disable the concept of heresy? Since the Enlightenment, western societies have
discovered many techniques for doing that, but there are surely more to be
discovered.
Overall I'm optimistic. Though the trend in freedom of expression has been bad
over the last decade, it's been good over the longer term. And there are signs
that the current wave of intolerance is peaking. Independent-minded people I
talk to seem more confident than they did a few years ago. On the other side,
even some of the _leaders_ are starting to wonder if things have gone too far.
And popular culture among the young has already moved on. All we have to do is
keep pushing back, and the wave collapses. And then we'll be net ahead,
because as well as having defeated this wave, we'll also have developed new
tactics for resisting the next one.
August 2002
_(This article describes the spam-filtering techniques used in the spamproof
web-based mail reader we built to exercise Arc. An improved algorithm is
described in Better Bayesian Filtering.)_
I think it's possible to stop spam, and that content-based filters are the way
to do it. The Achilles heel of the spammers is their message. They can
circumvent any other barrier you set up. They have so far, at least. But they
have to deliver their message, whatever it is. If we can write software that
recognizes their messages, there is no way they can get around that.
_ _ _
To the recipient, spam is easily recognizable. If you hired someone to read
your mail and discard the spam, they would have little trouble doing it. How
much do we have to do, short of AI, to automate this process?
I think we will be able to solve the problem with fairly simple algorithms. In
fact, I've found that you can filter present-day spam acceptably well using
nothing more than a Bayesian combination of the spam probabilities of
individual words. Using a slightly tweaked (as described below) Bayesian
filter, we now miss less than 5 per 1000 spams, with 0 false positives.
The statistical approach is not usually the first one people try when they
write spam filters. Most hackers' first instinct is to try to write software
that recognizes individual properties of spam. You look at spams and you
think, the gall of these guys to try sending me mail that begins "Dear Friend"
or has a subject line that's all uppercase and ends in eight exclamation
points. I can filter out that stuff with about one line of code.
And so you do, and in the beginning it works. A few simple rules will take a
big bite out of your incoming spam. Merely looking for the word "click" will
catch 79.7% of the emails in my spam corpus, with only 1.2% false positives.
I spent about six months writing software that looked for individual spam
features before I tried the statistical approach. What I found was that
recognizing that last few percent of spams got very hard, and that as I made
the filters stricter I got more false positives.
False positives are innocent emails that get mistakenly identified as spams.
For most users, missing legitimate email is an order of magnitude worse than
receiving spam, so a filter that yields false positives is like an acne cure
that carries a risk of death to the patient.
The more spam a user gets, the less likely he'll be to notice one innocent
mail sitting in his spam folder. And strangely enough, the better your spam
filters get, the more dangerous false positives become, because when the
filters are really good, users will be more likely to ignore everything they
catch.
I don't know why I avoided trying the statistical approach for so long. I
think it was because I got addicted to trying to identify spam features
myself, as if I were playing some kind of competitive game with the spammers.
(Nonhackers don't often realize this, but most hackers are very competitive.)
When I did try statistical analysis, I found immediately that it was much
cleverer than I had been. It discovered, of course, that terms like
"virtumundo" and "teens" were good indicators of spam. But it also discovered
that "per" and "FL" and "ff0000" are good indicators of spam. In fact,
"ff0000" (html for bright red) turns out to be as good an indicator of spam as
any pornographic term.
_ _ _
Here's a sketch of how I do statistical filtering. I start with one corpus of
spam and one of nonspam mail. At the moment each one has about 4000 messages
in it. I scan the entire text, including headers and embedded html and
javascript, of each message in each corpus. I currently consider alphanumeric
characters, dashes, apostrophes, and dollar signs to be part of tokens, and
everything else to be a token separator. (There is probably room for
improvement here.) I ignore tokens that are all digits, and I also ignore html
comments, not even considering them as token separators.
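To make that concrete, here is a minimal sketch of such a tokenizer in Common Lisp. It is not the code the essay is based on; token-char-p and tokenize are made-up names, and the handling of case and html comments described above is left out.

(defun token-char-p (c)
  ;; Alphanumerics, dashes, apostrophes, and dollar signs are part of
  ;; tokens; everything else is a separator.
  (or (alphanumericp c) (member c '(#\- #\' #\$))))

(defun tokenize (text)
  (let ((tokens '()) (current '()))
    (flet ((flush ()
             (when current
               (let ((tok (coerce (nreverse current) 'string)))
                 ;; Ignore tokens that are all digits.
                 (unless (every #'digit-char-p tok)
                   (push tok tokens)))
               (setf current '()))))
      (loop for c across text
            do (if (token-char-p c) (push c current) (flush))
            finally (flush))
      (nreverse tokens))))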
I count the number of times each token (ignoring case, currently) occurs in
each corpus. At this stage I end up with two large hash tables, one for each
corpus, mapping tokens to number of occurrences.
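The counting step might then look like the following sketch: count-corpus (another made-up name) would be called once on the spam corpus and once on the nonspam corpus to build the two tables, downcasing tokens since case is ignored.

(defun count-corpus (messages)
  (let ((counts (make-hash-table :test #'equal)))
    (dolist (msg messages counts)
      (dolist (tok (tokenize msg))
        ;; Map each (downcased) token to its number of occurrences.
        (incf (gethash (string-downcase tok) counts 0))))))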
Next I create a third hash table, this time mapping each token to the
probability that an email containing it is a spam, which I calculate as
follows:

(let ((g (* 2 (or (gethash word good) 0)))
      (b (or (gethash word bad) 0)))
  (unless (< (+ g b) 5)
    (max .01 (min .99 (float (/ (min 1 (/ b nbad))
                                (+ (min 1 (/ g ngood))
                                   (min 1 (/ b nbad)))))))))

where word is the token whose probability we're calculating, good and bad are
the hash tables I created in the first step, and ngood and nbad are the number
of nonspam and spam messages respectively.
I explained this as code to show a couple of important details. I want to bias
the probabilities slightly to avoid false positives, and by trial and error
I've found that a good way to do it is to double all the numbers in good. This
helps to distinguish between words that occasionally do occur in legitimate
email and words that almost never do. I only consider words that occur more
than five times in total (actually, because of the doubling, occurring three
times in nonspam mail would be enough). And then there is the question of what
probability to assign to words that occur in one corpus but not the other.
Again by trial and error I chose .01 and .99. There may be room for tuning
here, but as the corpus grows such tuning will happen automatically anyway.
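To see how the numbers come out, take a hypothetical word (not one from the essay's corpus) that occurs once in the 4000 nonspam mails and ten times in the 4000 spams. Doubling makes g 2 and b 10, the combined count of 12 clears the threshold of 5, and the expression above reduces to:

(float (/ (min 1 (/ 10 4000))
          (+ (min 1 (/ 2 4000)) (min 1 (/ 10 4000)))))
;; => roughly .83

so the word ends up a fairly strong, though not damning, indicator of spam.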
The especially observant will notice that while I consider each corpus to be a
single long stream of text for purposes of counting occurrences, I use the
number of emails in each, rather than their combined length, as the divisor in
calculating spam probabilities. This adds another slight bias to protect
against false positives.
When new mail arrives, it is scanned into tokens, and the most interesting
fifteen tokens, where interesting is measured by how far their spam
probability is from a neutral .5, are used to calculate the probability that
the mail is spam. If probs is a list of the fifteen individual probabilities,
you calculate the combined probability thus:

(let ((prod (apply #'* probs)))
  (/ prod (+ prod (apply #'* (mapcar #'(lambda (x) (- 1 x)) probs)))))

One question that arises in practice is what probability to assign to a word
you've never seen, i.e. one that doesn't occur in the hash table of word
probabilities. I've found, again by trial and error, that .4 is a good number
to use. If you've never seen a word before, it is probably fairly innocent;
spam words tend to be all too familiar.
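Here is a rough sketch of how the fifteen most interesting tokens might be picked, with unseen words getting the .4 just mentioned. interesting-tokens and prob-table are made-up names; the result is a list of probabilities that could be handed to the combining expression above as probs.

(defun interesting-tokens (tokens prob-table &optional (n 15))
  (let ((scored (mapcar #'(lambda (tok)
                            ;; Unknown words get the neutral-ish .4.
                            (or (gethash tok prob-table) .4))
                        tokens)))
    ;; Keep the n probabilities farthest from a neutral .5.
    (subseq (sort scored #'> :key #'(lambda (p) (abs (- p .5))))
            0 (min n (length scored)))))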
There are examples of this algorithm being applied to actual emails in an
appendix at the end.
I treat mail as spam if the algorithm above gives it a probability of more
than .9 of being spam. But in practice it would not matter much where I put
this threshold, because few probabilities end up in the middle of the range.
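Putting the earlier sketches together (tokenize and interesting-tokens from above; spam-p is another made-up name, and the real code presumably differs), classifying a mail against the .9 threshold might look like this:

(defun spam-p (mail prob-table)
  (let* ((probs (interesting-tokens (tokenize mail) prob-table))
         (prod (apply #'* probs)))
    (> (/ prod (+ prod (apply #'* (mapcar #'(lambda (x) (- 1 x)) probs))))
       .9)))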
_ _ _
One great advantage of the statistical approach is that you don't have to read
so many spams. Over the past six months, I've read literally thousands of
spams, and it is really kind of demoralizing. Norbert Wiener said if you
compete with slaves you become a slave, and there is something similarly
degrading about competing with spammers. To recognize individual spam features
you have to try to get into the mind of the spammer, and frankly I want to
spend as little time inside the minds of spammers as possible.
But the real advantage of the Bayesian approach, of course, is that you know
what you're measuring. Feature-recognizing filters like SpamAssassin assign a
spam "score" to email. The Bayesian approach assigns an actual probability.
The problem with a "score" is that no one knows what it means. The user
doesn't know what it means, but worse still, neither does the developer of the
filter. How many _points_ should an email get for having the word "sex" in it?
A probability can of course be mistaken, but there is little ambiguity about
what it means, or how evidence should be combined to calculate it. Based on my
corpus, "sex" indicates a .97 probability of the containing email being a
spam, whereas "sexy" indicates .99 probability. And Bayes' Rule, equally
unambiguous, says that an email containing both words would, in the (unlikely)
absence of any other evidence, have a 99.97% chance of being a spam.
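That 99.97% is just the combining rule from earlier applied to two probabilities instead of fifteen:

(let ((probs '(.97 .99)))
  (/ (apply #'* probs)
     (+ (apply #'* probs)
        (apply #'* (mapcar #'(lambda (x) (- 1 x)) probs)))))
;; => about .9997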
Because it is measuring probabilities, the Bayesian approach considers all the
evidence in the email, both good and bad. Words that occur disproportionately
_rarely_ in spam (like "though" or "tonight" or "apparently") contribute as
much to decreasing the probability as bad words like "unsubscribe" and "opt-
in" do to increasing it. So an otherwise innocent email that happens to
include the word "sex" is not going to get tagged as spam.
Ideally, of course, the probabilities should be calculated individually for
each user. I get a lot of email containing the word "Lisp", and (so far) no
spam that does. So a word like that is effectively a kind of password for
sending mail to me. In my earlier spam-filtering software, the user could set
up a list of such words and mail containing them would automatically get past
the filters. On my list I put words like "Lisp" and also my zipcode, so that
(otherwise rather spammy-sounding) receipts from online orders would get
through. I thought I was being very clever, but I found that the Bayesian
filter did the same thing for me, and moreover discovered a lot of words I
hadn't thought of.
When I said at the start that our filters let through less than 5 spams per
1000 with 0 false positives, I'm talking about filtering my mail based on a
corpus of my mail. But these numbers are not misleading, because that is the
approach I'm advocating: filter each user's mail based on the spam and nonspam
mail he receives. Essentially, each user should have two delete buttons,
ordinary delete and delete-as-spam. Anything deleted as spam goes into the
spam corpus, and everything else goes into the nonspam corpus.
You could start users with a seed filter, but ultimately each user should have
his own per-word probabilities based on the actual mail he receives. This (a)
makes the filters more effective, (b) lets each user decide their own precise
definition of spam, and (c) perhaps best of all makes it hard for spammers to
tune mails to get through the filters. If a lot of the brain of the filter is
in the individual databases, then merely tuning spams to get through the seed
filters won't guarantee anything about how well they'll get through individual
users' varying and much more trained filters.
Content-based spam filtering is often combined with a whitelist, a list of
senders whose mail can be accepted with no filtering. One easy way to build
such a whitelist is to keep a list of every address the user has ever sent
mail to. If a mail reader has a delete-as-spam button then you could also add
the from address of every email the user has deleted as ordinary trash.
I'm an advocate of whitelists, but more as a way to save computation than as a
way to improve filtering. I used to think that whitelists would make filtering
easier, because you'd only have to filter email from people you'd never heard
from, and someone sending you mail for the first time is constrained by
convention in what they can say to you. Someone you already know might send
you an email talking about sex, but someone sending you mail for the first
time would not be likely to. The problem is, people can have more than one
email address, so a new from-address doesn't guarantee that the sender is
writing to you for the first time. It is not unusual for an old friend
(especially if he is a hacker) to suddenly send you an email with a new from-
address, so you can't risk false positives by filtering mail from unknown
addresses especially stringently.
In a sense, though, my filters do themselves embody a kind of whitelist (and
blacklist) because they are based on entire messages, including the headers.
So to that extent they "know" the email addresses of trusted senders and even
the routes by which mail gets from them to me. And they know the same about
spam, including the server names, mailer versions, and protocols.
_ _ _
If I thought that I could keep up current rates of spam filtering, I would
consider this problem solved. But it doesn't mean much to be able to filter
out most present-day spam, because spam evolves. Indeed, most antispam
techniques so far have been like pesticides that do nothing more than create a
new, resistant strain of bugs.
I'm more hopeful about Bayesian filters, because they evolve with the spam. So
as spammers start using "c0ck" instead of "cock" to evade simple-minded spam
filters based on individual words, Bayesian filters automatically notice.
Indeed, "c0ck" is far more damning evidence than "cock", and Bayesian filters
know precisely how much more.
Still, anyone who proposes a plan for spam filtering has to be able to answer
the question: if the spammers knew exactly what you were doing, how well could
they get past you? For example, I think that if checksum-based spam filtering
becomes a serious obstacle, the spammers will just switch to mad-lib
techniques for generating message bodies.
To beat Bayesian filters, it would not be enough for spammers to make their
emails unique or to stop using individual naughty words. They'd have to make
their mails indistinguishable from your ordinary mail. And this I think would
severely constrain them. Spam is mostly sales pitches, so unless your regular
mail is all sales pitches, spams will inevitably have a different character.
And the spammers would also, of course, have to change (and keep changing)
their whole infrastructure, because otherwise the headers would look as bad to
the Bayesian filters as ever, no matter what they did to the message body. I
don't know enough about the infrastructure that spammers use to know how hard
it would be to make the headers look innocent, but my guess is that it would
be even harder than making the message look innocent.
Assuming they could solve the problem of the headers, the spam of the future
will probably look something like this: Hey there. Thought you should check
out the following: http://www.27meg.com/foo because that is about as much
sales pitch as content-based filtering will leave the spammer room to make.
(Indeed, it will be hard even to get this past filters, because if everything
else in the email is neutral, the spam probability will hinge on the url, and
it will take some effort to make that look neutral.)
Spammers range from businesses running so-called opt-in lists who don't even
try to conceal their identities, to guys who hijack mail servers to send out
spams promoting porn sites. If we use filtering to whittle their options down
to mails like the one above, that should pretty much put the spammers on the
"legitimate" end of the spectrum out of business; they feel obliged by various
state laws to include boilerplate about why their spam is not spam, and how to
cancel your "subscription," and that kind of text is easy to recognize.
(I used to think it was naive to believe that stricter laws would decrease
spam. Now I think that while stricter laws may not decrease the amount of spam
that spammers _send,_ they can certainly help filters to decrease the amount
of spam that recipients actually see.)
All along the spectrum, if you restrict the sales pitches spammers can make,
you will inevitably tend to put them out of business. That word _business_ is
an important one to remember. The spammers are businessmen. They send spam
because it works. It works because although the response rate is abominably
low (at best 15 per million, vs 3000 per million for a catalog mailing), the
cost, to them, is practically nothing. The cost is enormous for the
recipients, about 5 man-weeks for each million recipients who spend a second
to delete the spam, but the spammer doesn't have to pay that.
Sending spam does cost the spammer something, though. So the lower we can
get the response rate-- whether by filtering, or by using filters to force
spammers to dilute their pitches-- the fewer businesses will find it worth
their while to send spam.
The reason the spammers use the kinds of sales pitches that they do is to
increase response rates. This is possibly even more disgusting than getting
inside the mind of a spammer, but let's take a quick look inside the mind of
someone who _responds_ to a spam. This person is either astonishingly
credulous or deeply in denial about their sexual interests. In either case,
repulsive or idiotic as the spam seems to us, it is exciting to them. The
spammers wouldn't say these things if they didn't sound exciting. And "thought
you should check out the following" is just not going to have nearly the pull
with the spam recipient as the kinds of things that spammers say now. Result:
if it can't contain exciting sales pitches, spam becomes less effective as a
marketing vehicle, and fewer businesses want to use it.
That is the big win in the end. I started writing spam filtering software
because I didn't want to have to look at the stuff anymore. But if we get good
enough at filtering out spam, it will stop working, and the spammers will
actually stop sending it.
_ _ _
Of all the approaches to fighting spam, from software to laws, I believe
Bayesian filtering will be the single most effective. But I also think that
the more different kinds of antispam efforts we undertake, the better, because
any measure that constrains spammers will tend to make filtering easier. And
even within the world of content-based filtering, I think it will be a good
thing if there are many different kinds of software being used simultaneously.
The more different filters there are, the harder it will be for spammers to
tune spams to get through them.
**Appendix: Examples of Filtering**
Here is an example of a spam that arrived while I was writing this article.
The fifteen most interesting words in this spam are:

qvp0045 indira mx-05 intimail $7500 freeyankeedom cdo bluefoxmedia jpg
unsecured platinum 3d0 qves 7c5 7c266675

The words are a mix of stuff from the headers and from the message body, which
is typical of spam. Also typical of spam is that every one of these words has
a spam probability, in my database, of .99. In fact there are more than
fifteen words with probabilities of .99, and these are just the first fifteen
seen.
Unfortunately that makes this email a boring example of the use of Bayes'
Rule. To see an interesting variety of probabilities we have to look at this
actually quite atypical spam.
The fifteen most interesting words in this spam, with their probabilities,
are:

madam 0.99
promotion 0.99
republic 0.99
shortest 0.047225013
mandatory 0.047225013
standardization 0.07347802
sorry 0.08221981
supported 0.09019077
people's 0.09019077
enter 0.9075001
quality 0.8921298
organization 0.12454646
investment 0.8568143
very 0.14758544
valuable 0.82347786

This time the evidence is a mix of good and bad. A word like "shortest" is
almost as much evidence for innocence as a word like "madam" or "promotion" is
for guilt. But still the case for guilt is stronger. If you combine these
numbers according to Bayes' Rule, the resulting probability is .9027.
"Madam" is obviously from spams beginning "Dear Sir or Madam." They're not
very common, but the word "madam" _never_ occurs in my legitimate email, and
it's all about the ratio.
"Republic" scores high because it often shows up in Nigerian scam emails, and
also occurs once or twice in spams referring to Korea and South Africa. You
might say that it's an accident that it thus helps identify this spam. But
I've found when examining spam probabilities that there are a lot of these
accidents, and they have an uncanny tendency to push things in the right
direction rather than the wrong one. In this case, it is not entirely a
coincidence that the word "Republic" occurs in Nigerian scam emails and this
spam. There is a whole class of dubious business propositions involving less
developed countries, and these in turn are more likely to have names that
specify explicitly (because they aren't) that they are republics.
On the other hand, "enter" is a genuine miss. It occurs mostly in unsubscribe
instructions, but here is used in a completely innocent way. Fortunately the
statistical approach is fairly robust, and can tolerate quite a lot of misses
before the results start to be thrown off.
For comparison, here is an example of that rare bird, a spam that gets through
the filters. Why? Because by sheer chance it happens to be loaded with words
that occur in my actual email:

perl 0.01
python 0.01
tcl 0.01
scripting 0.01
morris 0.01
graham 0.01491078
guarantee 0.9762507
cgi 0.9734398
paul 0.027040077
quite 0.030676773
pop3 0.042199217
various 0.06080265
prices 0.9359873
managed 0.06451222
difficult 0.071706355

There are a couple pieces of good news here. First, this mail probably
wouldn't get through the filters of someone who didn't happen to specialize in
programming languages and have a good friend called Morris. For the average
user, all the top five words here would be neutral and would not contribute to
the spam probability.
Second, I think filtering based on word pairs (see below) might well catch
this one: "cost effective", "setup fee", "money back" -- pretty incriminating
stuff. And of course if they continued to spam me (or a network I was part
of), "Hostex" itself would be recognized as a spam term.
Finally, here is an innocent email. Its fifteen most interesting words are as
follows:

continuation 0.01
describe 0.01
continuations 0.01
example 0.033600237
programming 0.05214485
i'm 0.055427782
examples 0.07972858
color 0.9189189
localhost 0.09883721
hi 0.116539136
california 0.84421706
same 0.15981844
spot 0.1654587
us-ascii 0.16804294
what 0.19212411

Most of the words here indicate the mail is an innocent one. There are two bad
smelling words, "color" (spammers love colored fonts) and "California" (which
occurs in testimonials and also in menus in forms), but they are not enough to
outweigh obviously innocent words like "continuation" and "example".
It's interesting that "describe" rates as so thoroughly innocent. It hasn't
occurred in a single one of my 4000 spams. The data turns out to be full of
such surprises. One of the things you learn when you analyze spam texts is how
narrow a subset of the language spammers operate in. It's that fact, together
with the equally characteristic vocabulary of any individual user's mail, that
makes Bayesian filtering a good bet.
**Appendix: More Ideas**
One idea that I haven't tried yet is to filter based on word pairs, or even
triples, rather than individual words. This should yield a much sharper
estimate of the probability. For example, in my current database, the word
"offers" has a probability of .96. If you based the probabilities on word
pairs, you'd end up with "special offers" and "valuable offers" having
probabilities of .99 and, say, "approach offers" (as in "this approach
offers") having a probability of .1 or less.
The reason I haven't done this is that filtering based on individual words
already works so well. But it does mean that there is room to tighten the
filters if spam gets harder to detect. (Curiously, a filter based on word
pairs would be in effect a Markov-chaining text generator running in reverse.)
Specific spam features (e.g. not seeing the recipient's address in the to:
field) do of course have value in recognizing spam. They can be considered in
this algorithm by treating them as virtual words. I'll probably do this in
future versions, at least for a handful of the most egregious spam indicators.
Feature-recognizing spam filters are right in many details; what they lack is
an overall discipline for combining evidence.
Recognizing nonspam features may be more important than recognizing spam
features. False positives are such a worry that they demand extraordinary
measures. I will probably in future versions add a second level of testing
designed specifically to avoid false positives. If a mail triggers this second
level of filters it will be accepted even if its spam probability is above the
threshold.
I don't expect this second level of filtering to be Bayesian. It will
inevitably be not only ad hoc, but based on guesses, because the number of
false positives will not tend to be large enough to notice patterns. (It is
just as well, anyway, if a backup system doesn't rely on the same technology
as the primary system.)
Another thing I may try in the future is to focus extra attention on specific
parts of the email. For example, about 95% of current spam includes the url of
a site they want you to visit. (The remaining 5% want you to call a phone
number, reply by email or to a US mail address, or in a few cases to buy a
certain stock.) The url is in such cases practically enough by itself to
determine whether the email is spam.
Domain names differ from the rest of the text in a (non-German) email in that
they often consist of several words stuck together. Though computationally
expensive in the general case, it might be worth trying to decompose them. If
a filter has never seen the token "xxxporn" before it will have an individual
spam probability of .4, whereas "xxx" and "porn" individually have
probabilities (in my corpus) of .9889 and .99 respectively, and a combined
probability of .9998.
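A crude sketch of such a decomposition, limited to splitting a token into two known pieces (split-into-known and known-p are made-up names; known-p might simply check the table of word probabilities):

(defun split-into-known (word known-p)
  ;; Try every split point; return the first whose halves are both
  ;; known tokens, or NIL if there is none.
  (loop for i from 1 below (length word)
        for head = (subseq word 0 i)
        for tail = (subseq word i)
        when (and (funcall known-p head) (funcall known-p tail))
          return (list head tail)))

With "xxx" and "porn" both in the table, "xxxporn" would split into the two
tokens, whose probabilities could then be combined as described above.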
I expect decomposing domain names to become more important as spammers are
gradually forced to stop using incriminating words in the text of their
messages. (A url with an ip address is of course an extremely incriminating
sign, except in the mail of a few sysadmins.)
It might be a good idea to have a cooperatively maintained list of urls
promoted by spammers. We'd need a trust metric of the type studied by Raph
Levien to prevent malicious or incompetent submissions, but if we had such a
thing it would provide a boost to any filtering software. It would also be a
convenient basis for boycotts.
Another way to test dubious urls would be to send out a crawler to look at the
site before the user looked at the email mentioning it. You could use a
Bayesian filter to rate the site just as you would an email, and whatever was
found on the site could be included in calculating the probability of the
email being a spam. A url that led to a redirect would of course be especially
suspicious.
One cooperative project that I think really would be a good idea would be to
accumulate a giant corpus of spam. A large, clean corpus is the key to making
Bayesian filtering work well. Bayesian filters could actually use the corpus
as input. But such a corpus would be useful for other kinds of filters too,
because it could be used to test them.
Creating such a corpus poses some technical problems. We'd need trust metrics
to prevent malicious or incompetent submissions, of course. We'd also need
ways of erasing personal information (not just to-addresses and ccs, but also
e.g. the arguments to unsubscribe urls, which often encode the to-address)
from mails in the corpus. If anyone wants to take on this project, it would be
a good thing for the world.
**Appendix: Defining Spam**
I think there is a rough consensus on what spam is, but it would be useful to
have an explicit definition. We'll need to do this if we want to establish a
central corpus of spam, or even to compare spam filtering rates meaningfully.
To start with, spam is not unsolicited commercial email. If someone in my
neighborhood heard that I was looking for an old Raleigh three-speed in good
condition, and sent me an email offering to sell me one, I'd be delighted, and
yet this email would be both commercial and unsolicited. The defining feature
of spam (in fact, its _raison d'etre_) is not that it is unsolicited, but that
it is automated.
It is merely incidental, too, that spam is usually commercial. If someone
started sending mass email to support some political cause, for example, it
would be just as much spam as email promoting a porn site.
I propose we define spam as **unsolicited automated email**. This definition
thus includes some email that many legal definitions of spam don't. Legal
definitions of spam, influenced presumably by lobbyists, tend to exclude mail
sent by companies that have an "existing relationship" with the recipient. But
buying something from a company, for example, does not imply that you have
solicited ongoing email from them. If I order something from an online store,
and they then send me a stream of spam, it's still spam.
Companies sending spam often give you a way to "unsubscribe," or ask you to go
to their site and change your "account preferences" if you want to stop
getting spam. This is not enough to stop the mail from being spam. Not opting
out is not the same as opting in. Unless the recipient explicitly checked a
clearly labelled box (whose default was no) asking to receive the email, then
it is spam.
In some business relationships, you do implicitly solicit certain kinds of
mail. When you order online, I think you implicitly solicit a receipt, and
notification when the order ships. I don't mind when Verisign sends me mail
warning that a domain name is about to expire (at least, if they are the
actual registrar for it). But when Verisign sends me email offering a FREE
Guide to Building My E-Commerce Web Site, that's spam.
December 2001 (rev. May 2002)

_(This article came about in response to some questions on the LL1 mailing
list. It is now incorporated in Revenge of the Nerds.)_
When McCarthy designed Lisp in the late 1950s, it was a radical departure from
existing languages, the most important of which was Fortran.
Lisp embodied nine new ideas:
* * *
**1\. Conditionals.** A conditional is an if-then-else construct. We take
these for granted now. They were invented by McCarthy in the course of
developing Lisp. (Fortran at that time only had a conditional goto, closely
based on the branch instruction in the underlying hardware.) McCarthy, who was
on the Algol committee, got conditionals into Algol, whence they spread to
most other languages.
**2\. A function type.** In Lisp, functions are first class objects-- they're
a data type just like integers, strings, etc, and have a literal
representation, can be stored in variables, can be passed as arguments, and so
on.
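A minimal illustration, in Common Lisp syntax: a function stored in a variable and passed as an argument like any other value.

(let ((double #'(lambda (x) (* 2 x))))
  (mapcar double '(1 2 3)))   ; => (2 4 6)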
**3\. Recursion.** Recursion existed as a mathematical concept before Lisp of
course, but Lisp was the first programming language to support it. (It's
arguably implicit in making functions first class objects.)
**4\. A new concept of variables.** In Lisp, all variables are effectively
pointers. Values are what have types, not variables, and assigning or binding
variables means copying pointers, not what they point to.
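For example (again in Common Lisp syntax): binding a second variable to a list copies the pointer, so both variables see a change made through either one.

(let* ((x (list 1 2 3))
       (y x))            ; y points to the same list as x
  (setf (car y) 99)
  x)                     ; => (99 2 3)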
**5\. Garbage-collection.**
**6\. Programs composed of expressions.** Lisp programs are trees of
expressions, each of which returns a value. (In some Lisps expressions can
return multiple values.) This is in contrast to Fortran and most succeeding
languages, which distinguish between expressions and statements.
It was natural to have this distinction in Fortran because (not surprisingly
in a language where the input format was punched cards) the language was line-
oriented. You could not nest statements. And so while you needed expressions
for math to work, there was no point in making anything else return a value,
because there could not be anything waiting for it.
This limitation went away with the arrival of block-structured languages, but
by then it was too late. The distinction between expressions and statements
was entrenched. It spread from Fortran into Algol and thence to both their
descendants.
When a language is made entirely of expressions, you can compose expressions
however you want. You can say either (using Arc syntax)
(if foo (= x 1) (= x 2))
or
(= x (if foo 1 2))
**7\. A symbol type.** Symbols differ from strings in that you can test
equality by comparing a pointer.
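For example, in Common Lisp (with the caveat that whether two identical string literals end up as one object or two is implementation-dependent):

(eq 'foo 'foo)       ; => T -- a single pointer comparison
(eq "foo" "foo")     ; typically NIL -- two distinct strings
(equal "foo" "foo")  ; => T -- compared character by character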
**8\. A notation for code** using trees of symbols.
**9\. The whole language always available.** There is no real distinction
between read-time, compile-time, and runtime. You can compile or run code
while reading, read or run code while compiling, and read or compile code at
runtime.
Running code at read-time lets users reprogram Lisp's syntax; running code at
compile-time is the basis of macros; compiling at runtime is the basis of
Lisp's use as an extension language in programs like Emacs; and reading at
runtime enables programs to communicate using s-expressions, an idea recently
reinvented as XML.
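As a small illustration of 8 and 9 together, here is a macro in Common Lisp: ordinary Lisp code that runs at compile time and rewrites the tree of symbols representing the program. (unless* is a made-up name, since Common Lisp already has unless; probable-spam-p and deliver are made up too.)

(defmacro unless* (test &body body)
  `(if ,test nil (progn ,@body)))

;; (unless* (probable-spam-p mail) (deliver mail))
;; expands, before compilation, into
;; (if (probable-spam-p mail) nil (progn (deliver mail)))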
* * *
When Lisp was first invented, all these ideas were far removed from ordinary
programming practice, which was dictated largely by the hardware available in
the late 1950s.
Over time, the default language, embodied in a succession of popular
languages, has gradually evolved toward Lisp. 1-5 are now widespread. 6 is
starting to appear in the mainstream. Python has a form of 7, though there
doesn't seem to be any syntax for it. 8, which (with 9) is what makes Lisp
macros possible, is so far still unique to Lisp, perhaps because (a) it
requires those parens, or something just as bad, and (b) if you add that final
increment of power, you can no longer claim to have invented a new language,
but only to have designed a new dialect of Lisp ;-)
Though useful to present-day programmers, it's strange to describe Lisp in
terms of its variation from the random expedients other languages adopted.
That was not, probably, how McCarthy thought of it. Lisp wasn't designed to
fix the mistakes in Fortran; it came about more as the byproduct of an attempt
to axiomatize computation.
November 2005
Does "Web 2.0" mean anything? Till recently I thought it didn't, but the truth
turns out to be more complicated. Originally, yes, it was meaningless. Now it
seems to have acquired a meaning. And yet those who dislike the term are
probably right, because if it means what I think it does, we don't need it.
I first heard the phrase "Web 2.0" in the name of the Web 2.0 conference in
2004. At the time it was supposed to mean using "the web as a platform," which
I took to refer to web-based applications.
So I was surprised at a conference this summer when Tim O'Reilly led a session
intended to figure out a definition of "Web 2.0." Didn't it already mean using
the web as a platform? And if it didn't already mean something, why did we
need the phrase at all?
**Origins**
Tim says the phrase "Web 2.0" first arose in "a brainstorming session between
O'Reilly and Medialive International." What is Medialive International?
"Producers of technology tradeshows and conferences," according to their site.
So presumably that's what this brainstorming session was about. O'Reilly
wanted to organize a conference about the web, and they were wondering what to
call it.
I don't think there was any deliberate plan to suggest there was a new
_version_ of the web. They just wanted to make the point that the web mattered
again. It was a kind of semantic deficit spending: they knew new things were
coming, and the "2.0" referred to whatever those might turn out to be.
And they were right. New things were coming. But the new version number led to
some awkwardness in the short term. In the process of developing the pitch for
the first conference, someone must have decided they'd better take a stab at
explaining what that "2.0" referred to. Whatever it meant, "the web as a
platform" was at least not too constricting.
The story about "Web 2.0" meaning the web as a platform didn't live much past
the first conference. By the second conference, what "Web 2.0" seemed to mean
was something about democracy. At least, it did when people wrote about it
online. The conference itself didn't seem very grassroots. It cost $2800, so
the only people who could afford to go were VCs and people from big companies.
And yet, oddly enough, Ryan Singel's article about the conference in _Wired
News_ spoke of "throngs of geeks." When a friend of mine asked Ryan about
this, it was news to him. He said he'd originally written something like
"throngs of VCs and biz dev guys" but had later shortened it just to
"throngs," and that this must have in turn been expanded by the editors into
"throngs of geeks." After all, a Web 2.0 conference would presumably be full
of geeks, right?
Well, no. There were about 7. Even Tim O'Reilly was wearing a suit, a sight so
alien I couldn't parse it at first. I saw him walk by and said to one of the
O'Reilly people "that guy looks just like Tim."
"Oh, that's Tim. He bought a suit." I ran after him, and sure enough, it was.
He explained that he'd just bought it in Thailand.
The 2005 Web 2.0 conference reminded me of Internet trade shows during the
Bubble, full of prowling VCs looking for the next hot startup. There was that
same odd atmosphere created by a large number of people determined not to miss
out. Miss out on what? They didn't know. Whatever was going to happen—whatever
Web 2.0 turned out to be.
I wouldn't quite call it "Bubble 2.0" just because VCs are eager to invest
again. The Internet is a genuinely big deal. The bust was as much an
overreaction as the boom. It's to be expected that once we started to pull out
of the bust, there would be a lot of growth in this area, just as there was in
the industries that spiked the sharpest before the Depression.
The reason this won't turn into a second Bubble is that the IPO market is
gone. Venture investors are driven by exit strategies. The reason they were
funding all those laughable startups during the late 90s was that they hoped
to sell them to gullible retail investors; they hoped to be laughing all the
way to the bank. Now that route is closed. Now the default exit strategy is to
get bought, and acquirers are less prone to irrational exuberance than IPO
investors. The closest you'll get to Bubble valuations is Rupert Murdoch
paying $580 million for Myspace. That's only off by a factor of 10 or so.
**1\. Ajax**
Does "Web 2.0" mean anything more than the name of a conference yet? I don't
like to admit it, but it's starting to. When people say "Web 2.0" now, I have
some idea what they mean. And the fact that I both despise the phrase and
understand it is the surest proof that it has started to mean something.
One ingredient of its meaning is certainly Ajax, which I can still only just
bear to use without scare quotes. Basically, what "Ajax" means is "Javascript
now works." And that in turn means that web-based applications can now be made
to work much more like desktop ones.
As you read this, a whole new generation of software is being written to take
advantage of Ajax. There hasn't been such a wave of new applications since
microcomputers first appeared. Even Microsoft sees it, but it's too late for
them to do anything more than leak "internal" documents designed to give the
impression they're on top of this new trend.
In fact the new generation of software is being written way too fast for
Microsoft even to channel it, let alone write their own in house. Their only
hope now is to buy all the best Ajax startups before Google does. And even
that's going to be hard, because Google has as big a head start in buying
microstartups as it did in search a few years ago. After all, Google Maps, the
canonical Ajax application, was the result of a startup they bought.
So ironically the original description of the Web 2.0 conference turned out to
be partially right: web-based applications are a big component of Web 2.0. But
I'm convinced they got this right by accident. The Ajax boom didn't start till
early 2005, when Google Maps appeared and the term "Ajax" was coined.
**2\. Democracy**
The second big element of Web 2.0 is democracy. We now have several examples
to prove that amateurs can surpass professionals, when they have the right
kind of system to channel their efforts. Wikipedia may be the most famous.
Experts have given Wikipedia middling reviews, but they miss the critical
point: it's good enough. And it's free, which means people actually read it.
On the web, articles you have to pay for might as well not exist. Even if you
were willing to pay to read them yourself, you can't link to them. They're not
part of the conversation.
Another place democracy seems to win is in deciding what counts as news. I
never look at any news site now except Reddit. I know if something major
happens, or someone writes a particularly interesting article, it will show up
there. Why bother checking the front page of any specific paper or magazine?
Reddit's like an RSS feed for the whole web, with a filter for quality.
Similar sites include Digg, a technology news site that's rapidly approaching
Slashdot in popularity, and del.icio.us, the collaborative bookmarking network
that set off the "tagging" movement. And whereas Wikipedia's main appeal is
that it's good enough and free, these sites suggest that voters do a
significantly better job than human editors.
The most dramatic example of Web 2.0 democracy is not in the selection of
ideas, but their production. I've noticed for a while that the stuff I read
on individual people's sites is as good as or better than the stuff I read in
newspapers and magazines. And now I have independent evidence: the top links
on Reddit are generally links to individual people's sites rather than to
magazine articles or news stories.
My experience of writing for magazines suggests an explanation. Editors. They
control the topics you can write about, and they can generally rewrite
whatever you produce. The result is to damp extremes. Editing yields 95th
percentile writing—95% of articles are improved by it, but 5% are dragged
down. 5% of the time you get "throngs of geeks."
On the web, people can publish whatever they want. Nearly all of it falls
short of the editor-damped writing in print publications. But the pool of
writers is very, very large. If it's large enough, the lack of damping means
the best writing online should surpass the best in print. And now that the
web has evolved mechanisms for selecting good stuff, the web wins net.
Selection beats damping, for the same reason market economies beat centrally
planned ones.
Even the startups are different this time around. They are to the startups of
the Bubble what bloggers are to the print media. During the Bubble, a startup
meant a company headed by an MBA that was blowing through several million
dollars of VC money to "get big fast" in the most literal sense. Now it means
a smaller, younger, more technical group that just decided to make something
great. They'll decide later if they want to raise VC-scale funding, and if
they take it, they'll take it on their terms.
**3\. Don't Maltreat Users**
I think everyone would agree that democracy and Ajax are elements of "Web
2.0." I also see a third: not to maltreat users. During the Bubble a lot of
popular sites were quite high-handed with users. And not just in obvious ways,
like making them register, or subjecting them to annoying ads. The very design
of the average site in the late 90s was an abuse. Many of the most popular
sites were loaded with obtrusive branding that made them slow to load and sent
the user the message: this is our site, not yours. (There's a physical analog
in the Intel and Microsoft stickers that come on some laptops.)
I think the root of the problem was that sites felt they were giving something
away for free, and till recently a company giving anything away for free could
be pretty high-handed about it. Sometimes it reached the point of economic
sadism: site owners assumed that the more pain they caused the user, the more
benefit it must be to them. The most dramatic remnant of this model may be at
salon.com, where you can read the beginning of a story, but to get the rest
you have to sit through a _movie_.
At Y Combinator we advise all the startups we fund never to lord it over
users. Never make users register, unless you need to in order to store
something for them. If you do make users register, never make them wait for a
confirmation link in an email; in fact, don't even ask for their email address
unless you need it for some reason. Don't ask them any unnecessary questions.
Never send them email unless they explicitly ask for it. Never frame pages you
link to, or open them in new windows. If you have a free version and a pay
version, don't make the free version too restricted. And if you find yourself
asking "should we allow users to do x?" just answer "yes" whenever you're
unsure. Err on the side of generosity.
In How to Start a Startup I advised startups never to let anyone fly under
them, meaning never to let any other company offer a cheaper, easier solution.
Another way to fly low is to give users more power. Let users do what they
want. If you don't and a competitor does, you're in trouble.
iTunes is Web 2.0ish in this sense. Finally you can buy individual songs
instead of having to buy whole albums. The recording industry hated the idea
and resisted it as long as possible. But it was obvious what users wanted, so
Apple flew under the labels. Though really it might be better to describe
iTunes as Web 1.5. Web 2.0 applied to music would probably mean individual
bands giving away DRMless songs for free.
The ultimate way to be nice to users is to give them something for free that
competitors charge for. During the 90s a lot of people probably thought we'd
have some working system for micropayments by now. In fact things have gone in
the other direction. The most successful sites are the ones that figure out
new ways to give stuff away for free. Craigslist has largely destroyed the
classified ad sites of the 90s, and OkCupid looks likely to do the same to the
previous generation of dating sites.
Serving web pages is very, very cheap. If you can make even a fraction of a
cent per page view, you can make a profit. And technology for targeting ads
continues to improve. I wouldn't be surprised if ten years from now eBay had
been supplanted by an ad-supported freeBay (or, more likely, gBay).
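To make that arithmetic concrete, here's a back-of-envelope sketch with invented numbers; the serving cost and ad rate below are assumptions for illustration, not measurements:

```python
# Back-of-envelope: why "a fraction of a cent per page view" can be a profit.
# All figures are made up for illustration.
monthly_pageviews = 10_000_000
serving_cost_per_page = 0.0002   # assumed: bandwidth + servers, in dollars
ad_revenue_per_page = 0.002      # assumed: a fifth of a cent per view

cost = monthly_pageviews * serving_cost_per_page
revenue = monthly_pageviews * ad_revenue_per_page
print(f"cost ${cost:,.0f}, revenue ${revenue:,.0f}, margin ${revenue - cost:,.0f}")
```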
Odd as it might sound, we tell startups that they should try to make as little
money as possible. If you can figure out a way to turn a billion dollar
industry into a fifty million dollar industry, so much the better, if all
fifty million go to you. Though indeed, making things cheaper often turns out
to generate more money in the end, just as automating things often turns out
to generate more jobs.
The ultimate target is Microsoft. What a bang that balloon is going to make
when someone pops it by offering a free web-based alternative to MS Office.
Who will? Google? They seem to be taking their time. I suspect the pin
will be wielded by a couple of 20 year old hackers who are too naive to be
intimidated by the idea. (How hard can it be?)
**The Common Thread**
Ajax, democracy, and not dissing users. What do they all have in common? I
didn't realize they had anything in common till recently, which is one of the
reasons I disliked the term "Web 2.0" so much. It seemed that it was being
used as a label for whatever happened to be new—that it didn't predict
anything.
But there is a common thread. Web 2.0 means using the web the way it's meant
to be used. The "trends" we're seeing now are simply the inherent nature of
the web emerging from under the broken models that got imposed on it during
the Bubble.
I realized this when I read an interview with Joe Kraus, the co-founder of
Excite.
> Excite really never got the business model right at all. We fell into the
> classic problem of how when a new medium comes out it adopts the practices,
> the content, the business models of the old medium—which fails, and then the
> more appropriate models get figured out.
It may have seemed as if not much was happening during the years after the
Bubble burst. But in retrospect, something was happening: the web was finding
its natural angle of repose. The democracy component, for example—that's not
an innovation, in the sense of something someone made happen. That's what the
web naturally tends to produce.
Ditto for the idea of delivering desktop-like applications over the web. That
idea is almost as old as the web. But the first time around it was co-opted by
Sun, and we got Java applets. Java has since been remade into a generic
replacement for C++, but in 1996 the story about Java was that it represented
a new model of software. Instead of desktop applications, you'd run Java
"applets" delivered from a server.
This plan collapsed under its own weight. Microsoft helped kill it, but it
would have died anyway. There was no uptake among hackers. When you find PR
firms promoting something as the next development platform, you can be sure
it's not. If it were, you wouldn't need PR firms to tell you, because hackers
would already be writing stuff on top of it, the way sites like Busmonster
used Google Maps as a platform before Google even meant it to be one.
The proof that Ajax is the next hot platform is that thousands of hackers have
spontaneously started building things on top of it. Mikey likes it.
There's another thing all three components of Web 2.0 have in common. Here's a
clue. Suppose you approached investors with the following idea for a Web 2.0
startup:
> Sites like del.icio.us and flickr allow users to "tag" content with
> descriptive tokens. But there is also a huge source of _implicit_ tags that
> they ignore: the text within web links. Moreover, these links represent a
> social network connecting the individuals and organizations who created the
> pages, and by using graph theory we can compute from this network an
> estimate of the reputation of each member. We plan to mine the web for these
> implicit tags, and use them together with the reputation hierarchy they
> embody to enhance web searches.
How long do you think it would take them on average to realize that it was a
description of Google?
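For the curious, here is a minimal sketch of the two mechanisms that pitch describes: treating the text of links as implicit tags, and computing a reputation score from the link graph by power iteration (the idea behind PageRank). The pages, links, and damping factor are invented for illustration; this is not Google's actual algorithm.

```python
# A toy version of "implicit tags" (anchor text) plus link-graph reputation.
import re
from collections import defaultdict

pages = {
    "a.com": '<a href="b.com">great bike maps</a>',
    "b.com": '<a href="a.com">bike blog</a> <a href="c.com">city maps</a>',
    "c.com": '<a href="b.com">maps</a>',
}

# 1. Implicit tags: the text inside links pointing at each page.
tags = defaultdict(list)
links = defaultdict(list)
for src, html in pages.items():
    for dst, text in re.findall(r'<a href="([^"]+)">([^<]+)</a>', html):
        tags[dst].append(text)
        links[src].append(dst)

# 2. Reputation: iterate "a page matters if pages that matter link to it."
d = 0.85                                  # damping factor, the usual choice
rank = {p: 1 / len(pages) for p in pages}
for _ in range(50):
    new = {p: (1 - d) / len(pages) for p in pages}
    for src, outs in links.items():
        for dst in outs:
            new[dst] += d * rank[src] / len(outs)
    rank = new

for p in sorted(pages, key=rank.get, reverse=True):
    print(p, round(rank[p], 3), tags[p])
```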
Google was a pioneer in all three components of Web 2.0: their core business
sounds crushingly hip when described in Web 2.0 terms, "Don't maltreat users"
is a subset of "Don't be evil," and of course Google set off the whole Ajax
boom with Google Maps.
Web 2.0 means using the web as it was meant to be used, and Google does.
That's their secret. They're sailing with the wind, instead of sitting
becalmed praying for a business model, like the print media, or trying to tack
upwind by suing their customers, like Microsoft and the record labels.
Google doesn't try to force things to happen their way. They try to figure out
what's going to happen, and arrange to be standing there when it does. That's
the way to approach technology—and as business includes an ever larger
technological component, the right way to do business.
The fact that Google is a "Web 2.0" company shows that, while meaningful, the
term is also rather bogus. It's like the word "allopathic." It just means
doing things right, and it's a bad sign when you have a special word for that.
* * *
April 2006, rev August 2009
Plato quotes Socrates as saying "the unexamined life is not worth living."
Part of what he meant was that the proper role of humans is to think, just as
the proper role of anteaters is to poke their noses into anthills.
A lot of ancient philosophy had the quality — and I don't mean this in an
insulting way — of the kind of conversations freshmen have late at night in
common rooms:
> What is our purpose? Well, we humans are as conspicuously different from
> other animals as the anteater. In our case the distinguishing feature is the
> ability to reason. So obviously that is what we should be doing, and a human
> who doesn't is doing a bad job of being human — is no better than an animal.
Now we'd give a different answer. At least, someone Socrates's age would. We'd
ask why we even suppose we have a "purpose" in life. We may be better adapted
for some things than others; we may be happier doing things we're adapted for;
but why assume purpose?
The history of ideas is a history of gradually discarding the assumption that
it's all about us. No, it turns out, the earth is not the center of the
universe — not even the center of the solar system. No, it turns out, humans
are not created by God in his own image; they're just one species among many,
descended not merely from apes, but from microorganisms. Even the concept of
"me" turns out to be fuzzy around the edges if you examine it closely.
The idea that we're the center of things is difficult to discard. So difficult
that there's probably room to discard more. Richard Dawkins made another step
in that direction only in the last several decades, with the idea of the
selfish gene. No, it turns out, we're not even the protagonists: we're just
the latest model vehicle our genes have constructed to travel around in. And
having kids is our genes heading for the lifeboats. Reading that book snapped
my brain out of its previous way of thinking the way Darwin's must have when
it first appeared.
(Few people can experience now what Darwin's contemporaries did when _The
Origin of Species_ was first published, because everyone now is raised either
to take evolution for granted, or to regard it as a heresy. No one encounters
the idea of natural selection for the first time as an adult.)
So if you want to discover things that have been overlooked till now, one
really good place to look is in our blind spot: in our natural, naive belief
that it's all about us. And expect to encounter ferocious opposition if you
do.
Conversely, if you have to choose between two theories, prefer the one that
doesn't center on you.
This principle isn't only for big ideas. It works in everyday life, too. For
example, suppose you're saving a piece of cake in the fridge, and you come
home one day to find your housemate has eaten it. Two possible theories:
> a) Your housemate did it deliberately to upset you. He _knew_ you were
> saving that piece of cake.
>
> b) Your housemate was hungry.
I say pick b. No one knows who said "never attribute to malice what can be
explained by incompetence," but it is a powerful idea. Its more general
version is our answer to the Greeks:
> Don't see purpose where there isn't.
Or better still, the positive version:
> See randomness.
* * *
July 2006
When I was in high school I spent a lot of time imitating bad writers. What we
studied in English classes was mostly fiction, so I assumed that was the
highest form of writing. Mistake number one. The stories that seemed to be
most admired were ones in which people suffered in complicated ways. Anything
funny or gripping was ipso facto suspect, unless it was old enough to be hard
to understand, like Shakespeare or Chaucer. Mistake number two. The ideal
medium seemed the short story, which I've since learned had quite a brief
life, roughly coincident with the peak of magazine publishing. But since their
size made them perfect for use in high school classes, we read a lot of them,
which gave us the impression the short story was flourishing. Mistake number
three. And because they were so short, nothing really had to happen; you could
just show a randomly truncated slice of life, and that was considered
advanced. Mistake number four. The result was that I wrote a lot of stories in
which nothing happened except that someone was unhappy in a way that seemed
deep.
For most of college I was a philosophy major. I was very impressed by the
papers published in philosophy journals. They were so beautifully typeset, and
their tone was just captivating—alternately casual and buffer-overflowingly
technical. A fellow would be walking along a street and suddenly modality qua
modality would spring upon him. I didn't ever quite understand these papers,
but I figured I'd get around to that later, when I had time to reread them
more closely. In the meantime I tried my best to imitate them. This was, I can
now see, a doomed undertaking, because they weren't really saying anything. No
philosopher ever refuted another, for example, because no one said anything
definite enough to refute. Needless to say, my imitations didn't say anything
either.
In grad school I was still wasting time imitating the wrong things. There was
then a fashionable type of program called an expert system, at the core of
which was something called an inference engine. I looked at what these things
did and thought "I could write that in a thousand lines of code." And yet
eminent professors were writing books about them, and startups were selling
them for a year's salary a copy. What an opportunity, I thought; these
impressive things seem easy to me; I must be pretty sharp. Wrong. It was
simply a fad. The books the professors wrote about expert systems are now
ignored. They were not even on a _path_ to anything interesting. And the
customers paying so much for them were largely the same government agencies
that paid thousands for screwdrivers and toilet seats.
How do you avoid copying the wrong things? Copy only what you genuinely like.
That would have saved me in all three cases. I didn't enjoy the short stories
we had to read in English classes; I didn't learn anything from philosophy
papers; I didn't use expert systems myself. I believed these things were good
because they were admired.
It can be hard to separate the things you like from the things you're
impressed with. One trick is to ignore presentation. Whenever I see a painting
impressively hung in a museum, I ask myself: how much would I pay for this if
I found it at a garage sale, dirty and frameless, and with no idea who painted
it? If you walk around a museum trying this experiment, you'll find you get
some truly startling results. Don't ignore this data point just because it's
an outlier.
Another way to figure out what you like is to look at what you enjoy as guilty
pleasures. Many things people like, especially if they're young and ambitious,
they like largely for the feeling of virtue in liking them. 99% of people
reading _Ulysses_ are thinking "I'm reading _Ulysses_" as they do it. A
guilty pleasure is at least a pure one. What do you read when you don't feel
up to being virtuous? What kind of book do you read and feel sad that there's
only half of it left, instead of being impressed that you're half way through?
That's what you really like.
Even when you find genuinely good things to copy, there's another pitfall to
be avoided. Be careful to copy what makes them good, rather than their flaws.
It's easy to be drawn into imitating flaws, because they're easier to see, and
of course easier to copy too. For example, most painters in the eighteenth and
nineteenth centuries used brownish colors. They were imitating the great
painters of the Renaissance, whose paintings by that time were brown with
dirt. Those paintings have since been cleaned, revealing brilliant colors;
their imitators are of course still brown.
It was painting, incidentally, that cured me of copying the wrong things.
Halfway through grad school I decided I wanted to try being a painter, and the
art world was so manifestly corrupt that it snapped the leash of credulity.
These people made philosophy professors seem as scrupulous as mathematicians.
It was so clearly a choice of doing good work xor being an insider that I was
forced to see the distinction. It's there to some degree in almost every
field, but I had till then managed to avoid facing it.
That was one of the most valuable things I learned from painting: you have to
figure out for yourself what's good. You can't trust authorities. They'll lie
to you on this one.
* * *
June 2006
_(This essay is derived from talks at Usenix 2006 and Railsconf 2006.)_
A couple years ago my friend Trevor and I went to look at the Apple garage. As
we stood there, he said that as a kid growing up in Saskatchewan he'd been
amazed at the dedication Jobs and Wozniak must have had to work in a garage.
"Those guys must have been freezing!"
That's one of California's hidden advantages: the mild climate means there's
lots of marginal space. In cold places that margin gets trimmed off. There's a
sharper line between outside and inside, and only projects that are officially
sanctioned — by organizations, or parents, or wives, or at least by oneself —
get proper indoor space. That raises the activation energy for new ideas. You
can't just tinker. You have to justify.
Some of Silicon Valley's most famous companies began in garages: Hewlett-
Packard in 1938, Apple in 1976, Google in 1998. In Apple's case the garage
story is a bit of an urban legend. Woz says all they did there was assemble
some computers, and that he did all the actual design of the Apple I and Apple
II in his apartment or his cube at HP. This was apparently too marginal
even for Apple's PR people.
By conventional standards, Jobs and Wozniak were marginal people too.
Obviously they were smart, but they can't have looked good on paper. They were
at the time a pair of college dropouts with about three years of school
between them, and hippies to boot. Their previous business experience
consisted of making "blue boxes" to hack into the phone system, a business
with the rare distinction of being both illegal and unprofitable.
**Outsiders**
Now a startup operating out of a garage in Silicon Valley would feel part of
an exalted tradition, like the poet in his garret, or the painter who can't
afford to heat his studio and thus has to wear a beret indoors. But in 1976 it
didn't seem so cool. The world hadn't yet realized that starting a computer
company was in the same category as being a writer or a painter. It hadn't
been for long. Only in the preceding couple years had the dramatic fall in the
cost of hardware allowed outsiders to compete.
In 1976, everyone looked down on a company operating out of a garage,
including the founders. One of the first things Jobs did when they got some
money was to rent office space. He wanted Apple to seem like a real company.
They already had something few real companies ever have: a fabulously well
designed product. You'd think they'd have had more confidence. But I've talked
to a lot of startup founders, and it's always this way. They've built
something that's going to change the world, and they're worried about some nit
like not having proper business cards.
That's the paradox I want to explore: great new things often come from the
margins, and yet the people who discover them are looked down on by everyone,
including themselves.
It's an old idea that new things come from the margins. I want to examine its
internal structure. Why do great ideas come from the margins? What kind of
ideas? And is there anything we can do to encourage the process?
**Insiders**
One reason so many good ideas come from the margin is simply that there's so
much of it. There have to be more outsiders than insiders, if insider means
anything. If the number of outsiders is huge it will always seem as if a lot
of ideas come from them, even if few do per capita. But I think there's more
going on than this. There are real disadvantages to being an insider, and in
some kinds of work they can outweigh the advantages.
Imagine, for example, what would happen if the government decided to
commission someone to write an official Great American Novel. First there'd be
a huge ideological squabble over who to choose. Most of the best writers would
be excluded for having offended one side or the other. Of the remainder, the
smart ones would refuse such a job, leaving only a few with the wrong sort of
ambition. The committee would choose one at the height of his career — that
is, someone whose best work was behind him — and hand over the project with
copious free advice about how the book should show in positive terms the
strength and diversity of the American people, etc, etc.
The unfortunate writer would then sit down to work with a huge weight of
expectation on his shoulders. Not wanting to blow such a public commission,
he'd play it safe. This book had better command respect, and the way to ensure
that would be to make it a tragedy. Audiences have to be enticed to laugh, but
if you kill people they feel obliged to take you seriously. As everyone knows,
America plus tragedy equals the Civil War, so that's what it would have to be
about. When finally completed twelve years later, the book would be a 900-page
pastiche of existing popular novels — roughly _Gone with the Wind_ plus
_Roots_. But its bulk and celebrity would make it a bestseller for a few
months, until blown out of the water by a talk-show host's autobiography. The
book would be made into a movie and thereupon forgotten, except by the more
waspish sort of reviewers, among whom it would be a byword for bogusness like
Milli Vanilli or _Battlefield Earth_.
Maybe I got a little carried away with this example. And yet is this not at
each point the way such a project would play out? The government knows better
than to get into the novel business, but in other fields where they have a
natural monopoly, like nuclear waste dumps, aircraft carriers, and regime
change, you'd find plenty of projects isomorphic to this one — and indeed,
plenty that were less successful.
This little thought experiment suggests a few of the disadvantages of insider
projects: the selection of the wrong kind of people, the excessive scope, the
inability to take risks, the need to seem serious, the weight of expectations,
the power of vested interests, the undiscerning audience, and perhaps most
dangerous, the tendency of such work to become a duty rather than a pleasure.
**Tests**
A world with outsiders and insiders implies some kind of test for
distinguishing between them. And the trouble with most tests for selecting
elites is that there are two ways to pass them: to be good at what they try to
measure, and to be good at hacking the test itself.
So the first question to ask about a field is how honest its tests are,
because this tells you what it means to be an outsider. This tells you how
much to trust your instincts when you disagree with authorities, whether it's
worth going through the usual channels to become one yourself, and perhaps
whether you want to work in this field at all.
Tests are least hackable when there are consistent standards for quality, and
the people running the test really care about its integrity. Admissions to PhD
programs in the hard sciences are fairly honest, for example. The professors
will get whoever they admit as their own grad students, so they try hard to
choose well, and they have a fair amount of data to go on. Whereas
undergraduate admissions seem to be much more hackable.
One way to tell whether a field has consistent standards is the overlap
between the leading practitioners and the people who teach the subject in
universities. At one end of the scale you have fields like math and physics,
where nearly all the teachers are among the best practitioners. In the middle
are medicine, law, history, architecture, and computer science, where many
are. At the bottom are business, literature, and the visual arts, where
there's almost no overlap between the teachers and the leading practitioners.
It's this end that gives rise to phrases like "those who can't do, teach."
Incidentally, this scale might be helpful in deciding what to study in
college. When I was in college the rule seemed to be that you should study
whatever you were most interested in. But in retrospect you're probably better
off studying something moderately interesting with someone who's good at it
than something very interesting with someone who isn't. You often hear people
say that you shouldn't major in business in college, but this is actually an
instance of a more general rule: don't learn things from teachers who are bad
at them.
How much you should worry about being an outsider depends on the quality of
the insiders. If you're an amateur mathematician and think you've solved a
famous open problem, better go back and check. When I was in grad school, a
friend in the math department had the job of replying to people who sent in
proofs of Fermat's last theorem and so on, and it did not seem as if he saw it
as a valuable source of tips — more like manning a mental health hotline.
Whereas if the stuff you're writing seems different from what English
professors are interested in, that's not necessarily a problem.
**Anti-Tests**
Where the method of selecting the elite is thoroughly corrupt, most of the
good people will be outsiders. In art, for example, the image of the poor,
misunderstood genius is not just one possible image of a great artist: it's
the _standard_ image. I'm not saying it's correct, incidentally, but it is
telling how well this image has stuck. You couldn't make a rap like that stick
to math or medicine.
If it's corrupt enough, a test becomes an anti-test, filtering out the people
it should select by making them do things only the wrong people would do.
Popularity in high school seems to be such a test. There are plenty of similar
ones in the grownup world. For example, rising up through the hierarchy of the
average big company demands an attention to politics few thoughtful people
could spare. Someone like Bill Gates can grow a company under him, but
it's hard to imagine him having the patience to climb the corporate ladder at
General Electric — or Microsoft, actually.
It's kind of strange when you think about it, because lord-of-the-flies
schools and bureaucratic companies are both the default. There are probably a
lot of people who go from one to the other and never realize the whole world
doesn't work this way.
I think that's one reason big companies are so often blindsided by startups.
People at big companies don't realize the extent to which they live in an
environment that is one large, ongoing test for the wrong qualities.
If you're an outsider, your best chances for beating insiders are obviously in
fields where corrupt tests select a lame elite. But there's a catch: if the
tests are corrupt, your victory won't be recognized, at least in your
lifetime. You may feel you don't need that, but history suggests it's
dangerous to work in fields with corrupt tests. You may beat the insiders, and
yet not do as good work, on an absolute scale, as you would in a field that
was more honest.
Standards in art, for example, were almost as corrupt in the first half of the
eighteenth century as they are today. This was the era of those fluffy
idealized portraits of countesses with their lapdogs. Chardin decided to skip
all that and paint ordinary things as he saw them. He's now considered the
best of that period — and yet not the equal of Leonardo or Bellini or Memling,
who all had the additional encouragement of honest standards.
It can be worth participating in a corrupt contest, however, if it's followed
by another that isn't corrupt. For example, it would be worth competing with a
company that can spend more than you on marketing, as long as you can survive
to the next round, when customers compare your actual products. Similarly, you
shouldn't be discouraged by the comparatively corrupt test of college
admissions, because it's followed immediately by less hackable tests.
**Risk**
Even in a field with honest tests, there are still advantages to being an
outsider. The most obvious is that outsiders have nothing to lose. They can do
risky things, and if they fail, so what? Few will even notice.
The eminent, on the other hand, are weighed down by their eminence. Eminence
is like a suit: it impresses the wrong people, and it constrains the wearer.
Outsiders should realize the advantage they have here. Being able to take
risks is hugely valuable. Everyone values safety too much, both the obscure
and the eminent. No one wants to look like a fool. But it's very useful to be
able to. If most of your ideas aren't stupid, you're probably being too
conservative. You're not bracketing the problem.
Lord Acton said we should judge talent at its best and character at its worst.
For example, if you write one great book and ten bad ones, you still count as
a great writer — or at least, a better writer than someone who wrote eleven
that were merely good. Whereas if you're a quiet, law-abiding citizen most of
the time but occasionally cut someone up and bury them in your backyard,
you're a bad guy.
Almost everyone makes the mistake of treating ideas as if they were
indications of character rather than talent — as if having a stupid idea made
you stupid. There's a huge weight of tradition advising us to play it safe.
"Even a fool is thought wise if he keeps silent," says the Old Testament
(Proverbs 17:28).
Well, that may be fine advice for a bunch of goatherds in Bronze Age
Palestine. There conservatism would be the order of the day. But times have
changed. It might still be reasonable to stick with the Old Testament in
political questions, but materially the world now has a lot more state.
Tradition is less of a guide, not just because things change faster, but
because the space of possibilities is so large. The more complicated the world
gets, the more valuable it is to be willing to look like a fool.
**Delegation**
And yet the more successful people become, the more heat they get if they
screw up — or even seem to screw up. In this respect, as in many others, the
eminent are prisoners of their own success. So the best way to understand the
advantages of being an outsider may be to look at the disadvantages of being
an insider.
If you ask eminent people what's wrong with their lives, the first thing
they'll complain about is the lack of time. A friend of mine at Google is
fairly high up in the company and went to work for them long before they went
public. In other words, he's now rich enough not to have to work. I asked him
if he could still endure the annoyances of having a job, now that he didn't
have to. And he said that there weren't really any annoyances, except — and he
got a wistful look when he said this — that he got _so much email_.
The eminent feel like everyone wants to take a bite out of them. The problem
is so widespread that people pretending to be eminent do it by pretending to
be overstretched.
The lives of the eminent become scheduled, and that's not good for thinking.
One of the great advantages of being an outsider is long, uninterrupted blocks
of time. That's what I remember about grad school: apparently endless supplies
of time, which I spent worrying about, but not writing, my dissertation.
Obscurity is like health food — unpleasant, perhaps, but good for you. Whereas
fame tends to be like the alcohol produced by fermentation. When it reaches a
certain concentration, it kills off the yeast that produced it.
The eminent generally respond to the shortage of time by turning into
managers. They don't have time to work. They're surrounded by junior people
they're supposed to help or supervise. The obvious solution is to have the
junior people do the work. Some good stuff happens this way, but there are
problems it doesn't work so well for: the kind where it helps to have
everything in one head.
For example, it recently emerged that the famous glass artist Dale Chihuly
hasn't actually blown glass for 27 years. He has assistants do the work for
him. But one of the most valuable sources of ideas in the visual arts is the
resistance of the medium. That's why oil paintings look so different from
watercolors. In principle you could make any mark in any medium; in practice
the medium steers you. And if you're no longer doing the work yourself, you
stop learning from this.
So if you want to beat those eminent enough to delegate, one way to do it is
to take advantage of direct contact with the medium. In the arts it's obvious
how: blow your own glass, edit your own films, stage your own plays. And in
the process pay close attention to accidents and to new ideas you have on the
fly. This technique can be generalized to any sort of work: if you're an
outsider, don't be ruled by plans. Planning is often just a weakness forced on
those who delegate.
Is there a general rule for finding problems best solved in one head? Well,
you can manufacture them by taking any project usually done by multiple people
and trying to do it all yourself. Wozniak's work was a classic example: he did
everything himself, hardware and software, and the result was miraculous. He
claims not one bug was ever found in the Apple II, in either hardware or
software.
Another way to find good problems to solve in one head is to focus on the
grooves in the chocolate bar — the places where tasks are divided when they're
split between several people. If you want to beat delegation, focus on a
vertical slice: for example, be both writer and editor, or both design
buildings and construct them.
One especially good groove to span is the one between tools and things made
with them. For example, programming languages and applications are usually
written by different people, and this is responsible for a lot of the worst
flaws in programming languages. I think every language should be designed
simultaneously with a large application written in it, the way C was with
Unix.
Techniques for competing with delegation translate well into business, because
delegation is endemic there. Instead of avoiding it as a drawback of senility,
many companies embrace it as a sign of maturity. In big companies software is
often designed, implemented, and sold by three separate types of people. In
startups one person may have to do all three. And though this feels stressful,
it's one reason startups win. The needs of customers and the means of
satisfying them are all in one head.
**Focus**
The very skill of insiders can be a weakness. Once someone is good at
something, they tend to spend all their time doing that. This kind of focus is
very valuable, actually. Much of the skill of experts is the ability to ignore
false trails. But focus has drawbacks: you don't learn from other fields, and
when a new approach arrives, you may be the last to notice.
For outsiders this translates into two ways to win. One is to work on a
variety of things. Since you can't derive as much benefit (yet) from a narrow
focus, you may as well cast a wider net and derive what benefit you can from
similarities between fields. Just as you can compete with delegation by
working on larger vertical slices, you can compete with specialization by
working on larger horizontal slices — by both writing and illustrating your
book, for example.
The second way to compete with focus is to see what focus overlooks. In
particular, new things. So if you're not good at anything yet, consider
working on something so new that no one else is either. It won't have any
prestige yet, if no one is good at it, but you'll have it all to yourself.
The potential of a new medium is usually underestimated, precisely because no
one has yet explored its possibilities. Before Durer tried making engravings,
no one took them very seriously. Engraving was for making little devotional
images — basically fifteenth century baseball cards of saints. Trying to make
masterpieces in this medium must have seemed to Durer's contemporaries the way
that, say, making masterpieces in comics might seem to the average person
today.
In the computer world we get not new mediums but new platforms: the
minicomputer, the microprocessor, the web-based application. At first they're
always dismissed as being unsuitable for real work. And yet someone always
decides to try anyway, and it turns out you can do more than anyone expected.
So in the future when you hear people say of a new platform: yeah, it's
popular and cheap, but not ready yet for real work, jump on it.
As well as being more comfortable working on established lines, insiders
generally have a vested interest in perpetuating them. The professor who made
his reputation by discovering some new idea is not likely to be the one to
discover its replacement. This is particularly true with companies, who have
not only skill and pride anchoring them to the status quo, but money as well.
The Achilles heel of successful companies is their inability to cannibalize
themselves. Many innovations consist of replacing something with a cheaper
alternative, and companies just don't want to see a path whose immediate
effect is to cut an existing source of revenue.
So if you're an outsider you should actively seek out contrarian projects.
Instead of working on things the eminent have made prestigious, work on things
that could steal that prestige.
The really juicy new approaches are not the ones insiders reject as
impossible, but those they ignore as undignified. For example, after Wozniak
designed the Apple II he offered it first to his employer, HP. They passed.
One of the reasons was that, to save money, he'd designed the Apple II to use
a TV as a monitor, and HP felt they couldn't produce anything so declasse.
**Less**
Wozniak used a TV as a monitor for the simple reason that he couldn't afford a
monitor. Outsiders are not merely free but compelled to make things that are
cheap and lightweight. And both are good bets for growth: cheap things spread
faster, and lightweight things evolve faster.
The eminent, on the other hand, are almost forced to work on a large scale.
Instead of garden sheds they must design huge art museums. One reason they
work on big things is that they can: like our hypothetical novelist, they're
flattered by such opportunities. They also know that big projects will by
their sheer bulk impress the audience. A garden shed, however lovely, would be
easy to ignore; a few might even snicker at it. You can't snicker at a giant
museum, no matter how much you dislike it. And finally, there are all those
people the eminent have working for them; they have to choose projects that
can keep them all busy.
Outsiders are free of all this. They can work on small things, and there's
something very pleasing about small things. Small things can be perfect; big
ones always have something wrong with them. But there's a magic in small
things that goes beyond such rational explanations. All kids know it. Small
things have more personality.
Plus making them is more fun. You can do what you want; you don't have to
satisfy committees. And perhaps most important, small things can be done fast.
The prospect of seeing the finished project hangs in the air like the smell of
dinner cooking. If you work fast, maybe you could have it done tonight.
Working on small things is also a good way to learn. The most important kinds
of learning happen one project at a time. ("Next time, I won't...") The faster
you cycle through projects, the faster you'll evolve.
Plain materials have a charm like small scale. And in addition there's the
challenge of making do with less. Every designer's ears perk up at the mention
of that game, because it's a game you can't lose. Like the JV playing the
varsity, if you even tie, you win. So paradoxically there are cases where
fewer resources yield better results, because the designers' pleasure at their
own ingenuity more than compensates.
So if you're an outsider, take advantage of your ability to make small and
inexpensive things. Cultivate the pleasure and simplicity of that kind of
work; one day you'll miss it.
**Responsibility**
When you're old and eminent, what will you miss about being young and obscure?
What people seem to miss most is the lack of responsibilities.
Responsibility is an occupational disease of eminence. In principle you could
avoid it, just as in principle you could avoid getting fat as you get old, but
few do. I sometimes suspect that responsibility is a trap and that the most
virtuous route would be to shirk it, but regardless it's certainly
constraining.
When you're an outsider you're constrained too, of course. You're short of
money, for example. But that constrains you in different ways. How does
responsibility constrain you? The worst thing is that it allows you not to
focus on real work. Just as the most dangerous forms of procrastination are
those that seem like work, the danger of responsibilities is not just that
they can consume a whole day, but that they can do it without setting off the
kind of alarms you'd set off if you spent a whole day sitting on a park bench.
A lot of the pain of being an outsider is being aware of one's own
procrastination. But this is actually a good thing. You're at least close
enough to work that the smell of it makes you hungry.
As an outsider, you're just one step away from getting things done. A huge
step, admittedly, and one that most people never seem to make, but only one
step. If you can summon up the energy to get started, you can work on projects
with an intensity (in both senses) that few insiders can match. For insiders
work turns into a duty, laden with responsibilities and expectations. It's
never so pure as it was when they were young.
Work like a dog being taken for a walk, instead of an ox being yoked to the
plow. That's what they miss.
**Audience**
A lot of outsiders make the mistake of doing the opposite; they admire the
eminent so much that they copy even their flaws. Copying is a good way to
learn, but copy the right things. When I was in college I imitated the pompous
diction of famous professors. But this wasn't what _made_ them eminent — it
was more a flaw their eminence had allowed them to sink into. Imitating it was
like pretending to have gout in order to seem rich.
Half the distinguishing qualities of the eminent are actually disadvantages.
Imitating these is not only a waste of time, but will make you seem a fool to
your models, who are often well aware of it.
What are the genuine advantages of being an insider? The greatest is an
audience. It often seems to outsiders that the great advantage of insiders is
money — that they have the resources to do what they want. But so do people
who inherit money, and that doesn't seem to help, not as much as an audience.
It's good for morale to know people want to see what you're making; it draws
work out of you.
If I'm right that the defining advantage of insiders is an audience, then we
live in exciting times, because just in the last ten years the Internet has
made audiences a lot more liquid. Outsiders don't have to content themselves
anymore with a proxy audience of a few smart friends. Now, thanks to the
Internet, they can start to grow themselves actual audiences. This is great
news for the marginal, who retain the advantages of outsiders while
increasingly being able to siphon off what had till recently been the
prerogative of the elite.
Though the Web has been around for more than ten years, I think we're just
beginning to see its democratizing effects. Outsiders are still learning how
to steal audiences. But more importantly, audiences are still learning how to
be stolen — they're still just beginning to realize how much deeper bloggers
can dig than journalists, how much more interesting a democratic news site can
be than a front page controlled by editors, and how much funnier a bunch of
kids with webcams can be than mass-produced sitcoms.
The big media companies shouldn't worry that people will post their
copyrighted material on YouTube. They should worry that people will post their
own stuff on YouTube, and audiences will watch that instead.
**Hacking**
If I had to condense the power of the marginal into one sentence it would be:
just try hacking something together. That phrase draws in most threads I've
mentioned here. Hacking something together means deciding what to do as you're
doing it, not a subordinate executing the vision of his boss. It implies the
result won't be pretty, because it will be made quickly out of inadequate
materials. It may work, but it won't be the sort of thing the eminent would
want to put their name on. Something hacked together means something that
barely solves the problem, or maybe doesn't solve the problem at all, but
another you discovered en route. But that's ok, because the main value of that
initial version is not the thing itself, but what it leads to. Insiders who
daren't walk through the mud in their nice clothes will never make it to the
solid ground on the other side.
The word "try" is an especially valuable component. I disagree here with Yoda,
who said there is no try. There is try. It implies there's no punishment if
you fail. You're driven by curiosity instead of duty. That means the wind of
procrastination will be in your favor: instead of avoiding this work, this
will be what you do as a way of avoiding other work. And when you do it,
you'll be in a better mood. The more the work depends on imagination, the more
that matters, because most people have more ideas when they're happy.
If I could go back and redo my twenties, that would be one thing I'd do more
of: just try hacking things together. Like many people that age, I spent a lot
of time worrying about what I should do. I also spent some time trying to
build stuff. I should have spent less time worrying and more time building. If
you're not sure what to do, make something.
Raymond Chandler's advice to thriller writers was "When in doubt, have a man
come through a door with a gun in his hand." He followed that advice. Judging
from his books, he was often in doubt. But though the result is occasionally
cheesy, it's never boring. In life, as in books, action is underrated.
Fortunately the number of things you can just hack together keeps increasing.
People fifty years ago would be astonished that one could just hack together a
movie, for example. Now you can even hack together distribution. Just make
stuff and put it online.
**Inappropriate**
If you really want to score big, the place to focus is the margin of the
margin: the territories only recently captured from the insiders. That's where
you'll find the juiciest projects still undone, either because they seemed too
risky, or simply because there were too few insiders to explore everything.
This is why I spend most of my time writing essays lately. The writing of
essays used to be limited to those who could get them published. In principle
you could have written them and just shown them to your friends; in practice
that didn't work. An essayist needs the resistance of an audience, just as
an engraver needs the resistance of the plate.
Up till a few years ago, writing essays was the ultimate insider's game.
Domain experts were allowed to publish essays about their field, but the pool
allowed to write on general topics was about eight people who went to the
right parties in New York. Now the reconquista has overrun this territory,
and, not surprisingly, found it sparsely cultivated. There are so many essays
yet unwritten. They tend to be the naughtier ones; the insiders have pretty
much exhausted the motherhood and apple pie topics.
This leads to my final suggestion: a technique for determining when you're on
the right track. You're on the right track when people complain that you're
unqualified, or that you've done something inappropriate. If people are
complaining, that means you're doing something rather than sitting around,
which is the first step. And if they're driven to such empty forms of
complaint, that means you've probably done something good.
If you make something and people complain that it doesn't _work_, that's a
problem. But if the worst thing they can hit you with is your own status as an
outsider, that implies that in every other respect you've succeeded. Pointing
out that someone is unqualified is as desperate as resorting to racial slurs.
It's just a legitimate sounding way of saying: we don't like your type around
here.
But the best thing of all is when people call what you're doing inappropriate.
I've been hearing this word all my life and I only recently realized that it
is, in fact, the sound of the homing beacon. "Inappropriate" is the null
criticism. It's merely the adjective form of "I don't like it."
So that, I think, should be the highest goal for the marginal. Be
inappropriate. When you hear people saying that, you're golden. And they,
incidentally, are busted.
* * *
October 2005
The first Summer Founders Program has just finished. We were surprised how
well it went. Overall only about 10% of startups succeed, but if I had to
guess now, I'd predict three or four of the eight startups we funded will make
it.
Of the startups that needed further funding, I believe all have either closed
a round or are likely to soon. Two have already turned down (lowball)
acquisition offers.
We would have been happy if just one of the eight seemed promising by the end
of the summer. What's going on? Did some kind of anomaly make this summer's
applicants especially good? We worry about that, but we can't think of one.
We'll find out this winter.
The whole summer was full of surprises. The best was that the hypothesis we
were testing seems to be correct. Young hackers can start viable companies.
This is good news for two reasons: (a) it's an encouraging thought, and (b) it
means that Y Combinator, which is predicated on the idea, is not hosed.
**Age**
More precisely, the hypothesis was that success in a startup depends mainly on
how smart and energetic you are, and much less on how old you are or how much
business experience you have. The results so far bear this out. The 2005
summer founders ranged in age from 18 to 28 (average 23), and there is no
correlation between their ages and how well they're doing.
This should not really be surprising. Bill Gates and Michael Dell were both 19
when they started the companies that made them famous. Young founders are not
a new phenomenon: the trend began as soon as computers got cheap enough for
college kids to afford them.
Another of our hypotheses was that you can start a startup on less money than
most people think. Other investors were surprised to hear the most we gave any
group was $20,000. But we knew it was possible to start on that little because
we started Viaweb on $10,000.
And so it proved this summer. Three months' funding is enough to get into
second gear. We had a demo day for potential investors ten weeks in, and seven
of the eight groups had a prototype ready by that time. One, Reddit, had
already launched, and was able to give a demo of their live site.
A researcher who studied the SFP startups said the one thing they had in
common was that they all worked ridiculously hard. People this age are
commonly seen as lazy. I think in some cases it's not so much that they lack
the appetite for work, but that the work they're offered is unappetizing.
The experience of the SFP suggests that if you let motivated people do real
work, they work hard, whatever their age. As one of the founders said "I'd
read that starting a startup consumed your life, but I had no idea what that
meant until I did it."
I'd feel guilty if I were a boss making people work this hard. But we're not
these people's bosses. They're working on their own projects. And what makes
them work is not us but their competitors. Like good athletes, they don't work
hard because the coach yells at them, but because they want to win.
We have less power than bosses, and yet the founders work harder than
employees. It seems like a win for everyone. The only catch is that we get on
average only about 5-7% of the upside, while an employer gets nearly all of
it. (We're counting on it being 5-7% of a much larger number.)
As well as working hard, the groups all turned out to be extraordinarily
responsible. I can't think of a time when one failed to do something they'd
promised to, even by being late for an appointment. This is another lesson the
world has yet to learn. One of the founders discovered that the hardest part
of arranging a meeting with executives at a big cell phone carrier was getting
a rental company to rent him a car, because he was too young.
I think the problem here is much the same as with the apparent laziness of
people this age. They seem lazy because the work they're given is pointless,
and they act irresponsible because they're not given any power. Some of them,
anyway. We only have a sample size of about twenty, but it seems so far that
if you let people in their early twenties be their own bosses, they rise to
the occasion.
**Morale**
The summer founders were as a rule very idealistic. They also wanted very much
to get rich. These qualities might seem incompatible, but they're not. These
guys want to get rich, but they want to do it by changing the world. They
wouldn't (well, seven of the eight groups wouldn't) be interested in making
money by speculating in stocks. They want to make something people use.
I think this makes them more effective as founders. As hard as people will
work for money, they'll work harder for a cause. And since success in a
startup depends so much on motivation, the paradoxical result is that the
people likely to make the most money are those who aren't in it just for the
money.
The founders of Kiko, for example, are working on an Ajax calendar. They want
to get rich, but they pay more attention to design than they would if that
were their only motivation. You can tell just by looking at it.
I never considered it till this summer, but this might be another reason
startups run by hackers tend to do better than those run by MBAs. Perhaps it's
not just that hackers understand technology better, but that they're driven by
more powerful motivations. Microsoft, as I've said before, is a dangerously
misleading example. Their mean corporate culture only works for monopolies.
Google is a better model.
Considering that the summer founders are the sharks in this ocean, we were
surprised how frightened most of them were of competitors. But now that I
think of it, we were just as frightened when we started Viaweb. For the first
year, our initial reaction to news of a competitor was always: we're doomed.
Just as a hypochondriac magnifies his symptoms till he's convinced he has some
terrible disease, when you're not used to competitors you magnify them into
monsters.
Here's a handy rule for startups: competitors are rarely as dangerous as they
seem. Most will self-destruct before you can destroy them. And it certainly
doesn't matter how many of them there are, any more than it matters to the
winner of a marathon how many runners are behind him.
"It's a crowded market," I remember one founder saying worriedly.
"Are you the current leader?" I asked.
"Yes."
"Is anyone able to develop software faster than you?"
"Probably not."
"Well, if you're ahead now, and you're the fastest, then you'll stay ahead.
What difference does it make how many others there are?"
Another group was worried when they realized they had to rewrite their
software from scratch. I told them it would be a bad sign if they didn't. The
main function of your initial version is to be rewritten.
That's why we advise groups to ignore issues like scalability,
internationalization, and heavy-duty security at first. I can imagine an
advocate of "best practices" saying these ought to be considered from the
start. And he'd be right, except that they interfere with the primary function
of software in a startup: to be a vehicle for experimenting with its own
design. Having to retrofit internationalization or scalability is a pain,
certainly. The only bigger pain is not needing to, because your initial
version was too big and rigid to evolve into something users wanted.
I suspect this is another reason startups beat big companies. Startups can be
irresponsible and release version 1s that are light enough to evolve. In big
companies, all the pressure is in the direction of over-engineering.
**What Got Learned**
One thing we were curious about this summer was where these groups would need
help. That turned out to vary a lot. Some we helped with technical advice--
for example, about how to set up an application to run on multiple servers.
Most we helped with strategy questions, like what to patent, and what to
charge for and what to give away. Nearly all wanted advice about dealing with
future investors: how much money should they take and what kind of terms
should they expect?
However, all the groups quickly learned how to deal with stuff like patents
and investors. These problems aren't intrinsically difficult, just unfamiliar.
It was surprising-- slightly frightening even-- how fast they learned. The
weekend before the demo day for investors, we had a practice session where all
the groups gave their presentations. They were all terrible. We tried to
explain how to make them better, but we didn't have much hope. So on demo day
I told the assembled angels and VCs that these guys were hackers, not MBAs,
and so while their software was good, we should not expect slick presentations
from them.
The groups then proceeded to give fabulously slick presentations. Gone were
the mumbling recitations of lists of features. It was as if they'd spent the
past week at acting school. I still don't know how they did it.
Perhaps watching each other's presentations helped them see what they'd been
doing wrong. Just as happens in college, the summer founders learned a lot
from one another-- maybe more than they learned from us. A lot of the problems
they face are the same, from dealing with investors to hacking Javascript.
I don't want to give the impression there were no problems this summer. A lot
went wrong, as usually happens with startups. One group got an "exploding
term-sheet" from some VCs. Pretty much all the groups who had dealings with
big companies found that big companies do everything infinitely slowly. (This
is to be expected. If big companies weren't incapable, there would be no room
for startups to exist.) And of course there were the usual nightmares
associated with servers.
In short, the disasters this summer were just the usual childhood diseases.
Some of this summer's eight startups will probably die eventually; it would be
extraordinary if all eight succeeded. But what kills them will not be a
dramatic, external threat, but a mundane, internal one: not getting enough
done.
So far, though, the news is all good. In fact, we were surprised how much fun
the summer was for us. The main reason was how much we liked the founders.
They're so earnest and hard-working. They seem to like us too. And this
illustrates another advantage of investing over hiring: our relationship with
them is way better than it would be between a boss and an employee. Y
Combinator ends up being more like an older brother than a parent.
I was surprised how much time I spent making introductions. Fortunately I
discovered that when a startup needed to talk to someone, I could usually get
to the right person by at most one hop. I remember wondering, how did my
friends get to be so eminent? and a second later realizing: shit, I'm forty.
Another surprise was that the three-month batch format, which we were forced
into by the constraints of the summer, turned out to be an advantage. When we
started Y Combinator, we planned to invest the way other venture firms do: as
proposals came in, we'd evaluate them and decide yes or no. The SFP was just
an experiment to get things started. But it worked so well that we plan to do
all our investing this way, one cycle in the summer and one in winter. It's
more efficient for us, and better for the startups too.
Several groups said our weekly dinners saved them from a common problem
afflicting startups: working so hard that one has no social life. (I remember
that part all too well.) This way, they were guaranteed a social event at
least once a week.
**Independence**
I've heard Y Combinator described as an "incubator." Actually we're the
opposite: incubators exert more control than ordinary VCs, and we make a point
of exerting less. Among other things, incubators usually make you work in
their office-- that's where the word "incubator" comes from. That seems the
wrong model. If investors get too involved, they smother one of the most
powerful forces in a startup: the feeling that it's your own company.
Incubators were conspicuous failures during the Bubble. There's still debate
about whether this was because of the Bubble, or because they're a bad idea.
My vote is they're a bad idea. I think they fail because they select for the
wrong people. When we were starting a startup, we would never have taken
funding from an "incubator." We can find office space, thanks; just give us
the money. And people with that attitude are the ones likely to succeed in
startups.
Indeed, one quality all the founders shared this summer was a spirit of
independence. I've been wondering about that. Are some people just a lot more
independent than others, or would everyone be this way if they were allowed
to?
As with most nature/nurture questions, the answer is probably: some of each.
But my main conclusion from the summer is that there's more environment in the
mix than most people realize. I could see that from how the founders'
attitudes _changed_ during the summer. Most were emerging from twenty or so
years of being told what to do. They seemed a little surprised at having total
freedom. But they grew into it really quickly; some of these guys now seem
about four inches taller (metaphorically) than they did at the beginning of
the summer.
When we asked the summer founders what surprised them most about starting a
company, one said "the most shocking thing is that it worked."
It will take more experience to know for sure, but my guess is that a lot of
hackers could do this-- that if you put people in a position of independence,
they develop the qualities they need. Throw them off a cliff, and most will
find on the way down that they have wings.
The reason this is news to anyone is that the same forces work in the other
direction too. Most hackers are employees, and this molds you into someone to
whom starting a startup seems impossible as surely as starting a startup molds
you into someone who can handle it.
If I'm right, "hacker" will mean something different in twenty years than it
does now. Increasingly it will mean the people who run the company. Y
Combinator is just accelerating a process that would have happened anyway.
Power is shifting from the people who deal with money to the people who create
technology, and if our experience this summer is any guide, this will be a
good thing.
April 2007
_(This essay is derived from a keynote talk at the 2007 ASES Summit at
Stanford.)_
The world of investors is a foreign one to most hackers—partly because
investors are so unlike hackers, and partly because they tend to operate in
secret. I've been dealing with this world for many years, both as a founder
and an investor, and I still don't fully understand it.
In this essay I'm going to list some of the more surprising things I've
learned about investors. Some I only learned in the past year.
Teaching hackers how to deal with investors is probably the second most
important thing we do at Y Combinator. The most important thing for a startup
is to make something good. But everyone knows that's important. The dangerous
thing about investors is that hackers don't know how little they know about
this strange world.
**1\. The investors are what make a startup hub.**
About a year ago I tried to figure out what you'd need to reproduce Silicon
Valley. I decided the critical ingredients were rich people and
nerds—investors and founders. People are all you need to make technology, and
all the other people will move.
If I had to narrow that down, I'd say investors are the limiting factor. Not
because they contribute more to the startup, but simply because they're least
willing to move. They're rich. They're not going to move to Albuquerque just
because there are some smart hackers there they could invest in. Whereas
hackers will move to the Bay Area to find investors.
**2\. Angel investors are the most critical.**
There are several types of investors. The two main categories are angels and
VCs: VCs invest other people's money, and angels invest their own.
Though they're less well known, the angel investors are probably the more
critical ingredient in creating a silicon valley. Most companies that VCs
invest in would never have made it that far if angels hadn't invested first.
VCs say between half and three quarters of companies that raise series A
rounds have taken some outside investment already.
Angels are willing to fund riskier projects than VCs. They also give valuable
advice, because (unlike VCs) many have been startup founders themselves.
Google's story shows the key role angels play. A lot of people know Google
raised money from Kleiner and Sequoia. What most don't realize is how late.
That VC round was a series B round; the premoney valuation was $75 million.
Google was already a successful company at that point. Really, Google was
funded with angel money.
It may seem odd that the canonical Silicon Valley startup was funded by
angels, but this is not so surprising. Risk is always proportionate to reward.
So the most successful startup of all is likely to have seemed an extremely
risky bet at first, and that is exactly the kind VCs won't touch.
Where do angel investors come from? From other startups. So startup hubs like
Silicon Valley benefit from something like the marketplace effect, but shifted
in time: startups are there because startups were there.
**3\. Angels don't like publicity.**
If angels are so important, why do we hear more about VCs? Because VCs like
publicity. They need to market themselves to the investors who are their
"customers"—the endowments and pension funds and rich families whose money
they invest—and also to founders who might come to them for funding.
Angels don't need to market themselves to investors because they invest their
own money. Nor do they want to market themselves to founders: they don't want
random people pestering them with business plans. Actually, neither do VCs.
Both angels and VCs get deals almost exclusively through personal
introductions.
The reason VCs want a strong brand is not to draw in more business plans over
the transom, but so they win deals when competing against other VCs. Whereas
angels are rarely in direct competition, because (a) they do fewer deals, (b)
they're happy to split them, and (c) they invest at a point where the stream
is broader.
**4\. Most investors, especially VCs, are not like founders.**
Some angels are, or were, hackers. But most VCs are a different type of
people: they're dealmakers.
If you're a hacker, here's a thought experiment you can run to understand why
there are basically no hacker VCs: How would you like a job where you never
got to make anything, but instead spent all your time listening to other
people pitch (mostly terrible) projects, deciding whether to fund them, and
sitting on their boards if you did? That would not be fun for most hackers.
Hackers like to make things. This would be like being an administrator.
Because most VCs are a different species of people from founders, it's hard to
know what they're thinking. If you're a hacker, the last time you had to deal
with these guys was in high school. Maybe in college you walked past their
fraternity on your way to the lab. But don't underestimate them. They're as
expert in their world as you are in yours. What they're good at is reading
people, and making deals work to their advantage. Think twice before you try
to beat them at that.
**5\. Most investors are momentum investors.**
Because most investors are dealmakers rather than technology people, they
generally don't understand what you're doing. I knew as a founder that most
VCs didn't get technology. I also knew some made a lot of money. And yet it
never occurred to me till recently to put those two ideas together and ask
"How can VCs make money by investing in stuff they don't understand?"
The answer is that they're like momentum investors. You can (or could once)
make a lot of money by noticing sudden changes in stock prices. When a stock
jumps upward, you buy, and when it suddenly drops, you sell. In effect you're
insider trading, without knowing what you know. You just know someone knows
something, and that's making the stock move.
This is how most venture investors operate. They don't try to look at
something and predict whether it will take off. They win by noticing that
something _is_ taking off a little sooner than everyone else. That generates
almost as good returns as actually being able to pick winners. They may have
to pay a little more than they would if they got in at the very beginning, but
only a little.
Investors always say what they really care about is the team. Actually what
they care most about is your traffic, then what other investors think, then
the team. If you don't yet have any traffic, they fall back on number 2, what
other investors think. And this, as you can imagine, produces wild
oscillations in the "stock price" of a startup. One week everyone wants you,
and they're begging not to be cut out of the deal. But all it takes is for one
big investor to cool on you, and the next week no one will return your phone
calls. We regularly have startups go from hot to cold or cold to hot in a
matter of days, and literally nothing has changed.
There are two ways to deal with this phenomenon. If you're feeling really
confident, you can try to ride it. You can start by asking a comparatively
lowly VC for a small amount of money, and then after generating interest
there, ask more prestigious VCs for larger amounts, stirring up a crescendo of
buzz, and then "sell" at the top. This is extremely risky, and takes months
even if you succeed. I wouldn't try it myself. My advice is to err on the side
of safety: when someone offers you a decent deal, just take it and get on with
building the company. Startups win or lose based on the quality of their
product, not the quality of their funding deals.
**6\. Most investors are looking for big hits.**
Venture investors like companies that could go public. That's where the big
returns are. They know the odds of any individual startup going public are
small, but they want to invest in those that at least have a _chance_ of going
public.
Currently the way VCs seem to operate is to invest in a bunch of companies,
most of which fail, and one of which is Google. Those few big wins compensate
for losses on their other investments. What this means is that most VCs will
only invest in you if you're a potential Google. They don't care about
companies that are a safe bet to be acquired for $20 million. There needs to
be a chance, however small, of the company becoming really big.
Angels are different in this respect. They're happy to invest in a company
where the most likely outcome is a $20 million acquisition if they can do it
at a low enough valuation. But of course they like companies that could go
public too. So having an ambitious long-term plan pleases everyone.
If you take VC money, you have to mean it, because the structure of VC deals
prevents early acquisitions. If you take VC money, they won't let you sell
early.
**7\. VCs want to invest large amounts.**
The fact that they're running investment funds makes VCs want to invest large
amounts. A typical VC fund is now hundreds of millions of dollars. If $400
million has to be invested by 10 partners, they have to invest $40 million
each. VCs usually sit on the boards of companies they fund. If the average
deal size was $1 million, each partner would have to sit on 40 boards, which
would not be fun. So they prefer bigger deals, where they can put a lot of
money to work at once.
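Here's that arithmetic as a quick sketch. The fund size, partner count, and deal sizes are the figures above; the assumption that each partner deploys an equal share and sits on every board is just for illustration.

```
def partner_load(fund_size, num_partners, avg_deal_size):
    # Capital each partner must deploy, and how many boards that implies,
    # assuming an equal split and one board seat per deal.
    per_partner = fund_size / num_partners
    boards = per_partner / avg_deal_size
    return per_partner, boards

fund, partners = 400_000_000, 10
for deal in (1_000_000, 10_000_000):
    capital, boards = partner_load(fund, partners, deal)
    print(f"${deal:,} deals: ${capital:,.0f} per partner, {boards:.0f} boards each")

# $1,000,000 deals: $40,000,000 per partner, 40 boards each
# $10,000,000 deals: $40,000,000 per partner, 4 boards each
```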
VCs don't regard you as a bargain if you don't need a lot of money. That may
even make you less attractive, because it means their investment creates less
of a barrier to entry for competitors.
Angels are in a different position because they're investing their own money.
They're happy to invest small amounts—sometimes as little as $20,000—as long
as the potential returns look good enough. So if you're doing something
inexpensive, go to angels.
**8\. Valuations are fiction.**
VCs admit that valuations are an artifact. They decide how much money you need
and how much of the company they want, and those two constraints yield a
valuation.
Valuations increase as the size of the investment does. A company that an
angel is willing to put $50,000 into at a valuation of a million can't take $6
million from VCs at that valuation. That would leave the founders less than a
seventh of the company between them (since the option pool would also come out
of that seventh). Most VCs wouldn't want that, which is why you never hear of
deals where a VC invests $6 million at a premoney valuation of $1 million.
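A quick sketch of the ownership math behind that hypothetical deal, using the numbers above:

```
# $6M invested at a $1M premoney valuation.
investment, premoney = 6_000_000, 1_000_000
post = premoney + investment
investor_share = investment / post
founders_and_pool = 1 - investor_share   # founders plus option pool
print(f"post-money ${post:,}: investors {investor_share:.0%}, "
      f"founders plus option pool {founders_and_pool:.0%}")

# post-money $7,000,000: investors 86%, founders plus option pool 14%
```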
If valuations change depending on the amount invested, that shows how far they
are from reflecting any kind of value of the company.
Since valuations are made up, founders shouldn't care too much about them.
That's not the part to focus on. In fact, a high valuation can be a bad thing.
If you take funding at a premoney valuation of $10 million, you won't be
selling the company for 20. You'll have to sell for over 50 for the VCs to get
even a 5x return, which is low to them. More likely they'll want you to hold
out for 100. But needing to get a high price decreases the chance of getting
bought at all; many companies can buy you for $10 million, but only a handful
for 100. And since a startup is like a pass/fail course for the founders, what
you want to optimize is your chance of a good outcome, not the percentage of
the company you keep.
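Here's a sketch of why a high valuation raises the bar for an acceptable exit. The $10 million premoney is the figure above; the $2 million round size is an invented, illustrative number.

```
premoney = 10_000_000      # the figure above
investment = 2_000_000     # assumed round size, purely illustrative
post = premoney + investment
stake = investment / post
for multiple in (5, 10):
    # Exit price at which the investors' stake is worth multiple x their money.
    exit_needed = multiple * investment / stake    # equals multiple * post-money
    print(f"{multiple}x for the VCs means selling for about ${exit_needed:,.0f}")

# 5x for the VCs means selling for about $60,000,000
# 10x for the VCs means selling for about $120,000,000
```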
So why do founders chase high valuations? They're tricked by misplaced
ambition. They feel they've achieved more if they get a higher valuation. They
usually know other founders, and if they get a higher valuation they can say
"mine is bigger than yours." But funding is not the real test. The real test
is the final outcome for the founder, and getting too high a valuation may
just make a good outcome less likely.
The one advantage of a high valuation is that you get less dilution. But there
is another less sexy way to achieve that: just take less money.
**9\. Investors look for founders like the current stars.**
Ten years ago investors were looking for the next Bill Gates. This was a
mistake, because Microsoft was a very anomalous startup. They started almost
as a contract programming operation, and the reason they became huge was that
IBM happened to drop the PC standard in their lap.
Now all the VCs are looking for the next Larry and Sergey. This is a good
trend, because Larry and Sergey are closer to the ideal startup founders.
Historically investors thought it was important for a founder to be an expert
in business. So they were willing to fund teams of MBAs who planned to use the
money to pay programmers to build their product for them. This is like funding
Steve Ballmer in the hope that the programmer he'll hire is Bill Gates—kind of
backward, as the events of the Bubble showed. Now most VCs know they should be
funding technical guys. This is more pronounced among the very top funds; the
lamer ones still want to fund MBAs.
If you're a hacker, it's good news that investors are looking for Larry and
Sergey. The bad news is, the only investors who can do it right are the ones
who knew them when they were a couple of CS grad students, not the confident
media stars they are today. What investors still don't get is how clueless and
tentative great founders can seem at the very beginning.
**10\. The contribution of investors tends to be underestimated.**
Investors do more for startups than give them money. They're helpful in doing
deals and arranging introductions, and some of the smarter ones, particularly
angels, can give good advice about the product.
In fact, I'd say what separates the great investors from the mediocre ones is
the quality of their advice. Most investors give advice, but the top ones give
_good_ advice.
Whatever help investors give a startup tends to be underestimated. It's to
everyone's advantage to let the world think the founders thought of
everything. The goal of the investors is for the company to become valuable,
and the company seems more valuable if it seems like all the good ideas came
from within.
This trend is compounded by the obsession that the press has with founders. In
a company founded by two people, 10% of the ideas might come from the first
guy they hire. Arguably they've done a bad job of hiring otherwise. And yet
this guy will be almost entirely overlooked by the press.
I say this as a founder: the contribution of founders is always overestimated.
The danger here is that new founders, looking at existing founders, will think
that they're supermen that one couldn't possibly equal oneself. Actually they
have a hundred different types of support people just offscreen making the
whole show possible.
**11\. VCs are afraid of looking bad.**
I've been very surprised to discover how timid most VCs are. They seem to be
afraid of looking bad to their partners, and perhaps also to the limited
partners—the people whose money they invest.
You can measure this fear in how much less risk VCs are willing to take. You
can tell they won't make investments for their fund that they might be willing
to make themselves as angels. Though it's not quite accurate to say that VCs
are less willing to take risks. They're less willing to do things that might
look bad. That's not the same thing.
For example, most VCs would be very reluctant to invest in a startup founded
by a pair of 18 year old hackers, no matter how brilliant, because if the
startup failed their partners could turn on them and say "What, you invested
$x million of our money in a pair of 18 year olds?" Whereas if a VC invested
in a startup founded by three former banking executives in their 40s who
planned to outsource their product development—which to my mind is actually a
lot riskier than investing in a pair of really smart 18 year olds—he couldn't
be faulted, if it failed, for making such an apparently prudent investment.
As a friend of mine said, "Most VCs can't do anything that would sound bad to
the kind of doofuses who run pension funds." Angels can take greater risks
because they don't have to answer to anyone.
**12\. Being turned down by investors doesn't mean much.**
Some founders are quite dejected when they get turned down by investors. They
shouldn't take it so much to heart. To start with, investors are often wrong.
It's hard to think of a successful startup that wasn't turned down by
investors at some point. Lots of VCs rejected Google. So obviously the
reaction of investors is not a very meaningful test.
Investors will often reject you for what seem to be superficial reasons. I
read of one VC who turned down a startup simply because they'd given away so
many little bits of stock that the deal required too many signatures to close.
The reason investors can get away with this is that they see so many
deals. It doesn't matter if they underestimate you because of some surface
imperfection, because the next best deal will be almost as good. Imagine
picking out apples at a grocery store. You grab one with a little bruise.
Maybe it's just a surface bruise, but why even bother checking when there are
so many other unbruised apples to choose from?
Investors would be the first to admit they're often wrong. So when you get
rejected by investors, don't think "we suck," but instead ask "do we suck?"
Rejection is a question, not an answer.
**13\. Investors are emotional.**
I've been surprised to discover how emotional investors can be. You'd expect
them to be cold and calculating, or at least businesslike, but often they're
not. I'm not sure if it's their position of power that makes them this way, or
the large sums of money involved, but investment negotiations can easily turn
personal. If you offend investors, they'll leave in a huff.
A while ago an eminent VC firm offered a series A round to a startup we'd seed
funded. Then they heard a rival VC firm was also interested. They were so
afraid that they'd be rejected in favor of this other firm that they gave the
startup what's known as an "exploding termsheet." They had, I think, 24 hours
to say yes or no, or the deal was off. Exploding termsheets are a somewhat
dubious device, but not uncommon. What surprised me was their reaction when I
called to talk about it. I asked if they'd still be interested in the startup
if the rival VC didn't end up making an offer, and they said no. What rational
basis could they have had for saying that? If they thought the startup was
worth investing in, what difference should it make what some other VC thought?
Surely it was their duty to their limited partners simply to invest in the
best opportunities they found; they should be delighted if the other VC said
no, because it would mean they'd overlooked a good opportunity. But of course
there was no rational basis for their decision. They just couldn't stand the
idea of taking this rival firm's rejects.
In this case the exploding termsheet was not (or not only) a tactic to
pressure the startup. It was more like the high school trick of breaking up
with someone before they can break up with you. In an earlier essay I said
that VCs were a lot like high school girls. A few VCs have joked about that
characterization, but none have disputed it.
**14\. The negotiation never stops till the closing.**
Most deals, for investment or acquisition, happen in two phases. There's an
initial phase of negotiation about the big questions. If this succeeds you get
a termsheet, so called because it outlines the key terms of a deal. A
termsheet is not legally binding, but it is a definite step. It's supposed to
mean that a deal is going to happen, once the lawyers work out all the
details. In theory these details are minor ones; by definition all the
important points are supposed to be covered in the termsheet.
Inexperience and wishful thinking combine to make founders feel that when they
have a termsheet, they have a deal. They want there to be a deal; everyone
acts like they have a deal; so there must be a deal. But there isn't and may
not be for several months. A lot can change for a startup in several months.
It's not uncommon for investors and acquirers to get buyer's remorse. So you
have to keep pushing, keep selling, all the way to the close. Otherwise all
the "minor" details left unspecified in the termsheet will be interpreted to
your disadvantage. The other side may even break the deal; if they do that,
they'll usually seize on some technicality or claim you misled them, rather
than admitting they changed their minds.
It can be hard to keep the pressure on an investor or acquirer all the way to
the closing, because the most effective pressure is competition from other
investors or acquirers, and these tend to drop away when you get a termsheet.
You should try to stay as close friends as you can with these rivals, but the
most important thing is just to keep up the momentum in your startup. The
investors or acquirers chose you because you seemed hot. Keep doing whatever
made you seem hot. Keep releasing new features; keep getting new users; keep
getting mentioned in the press and in blogs.
**15\. Investors like to co-invest.**
I've been surprised how willing investors are to split deals. You might think
that if they found a good deal they'd want it all to themselves, but they seem
positively eager to syndicate. This is understandable with angels; they invest
on a smaller scale and don't like to have too much money tied up in any one
deal. But VCs also share deals a lot. Why?
Partly I think this is an artifact of the rule I quoted earlier: after
traffic, VCs care most what other VCs think. A deal that has multiple VCs
interested in it is more likely to close, so of deals that close, more will
have multiple investors.
There is one rational reason to want multiple VCs in a deal: Any investor who
co-invests with you is one less investor who could fund a competitor.
Apparently Kleiner and Sequoia didn't like splitting the Google deal, but it
did at least have the advantage, from each one's point of view, that there
probably wouldn't be a competitor funded by the other. Splitting deals thus
has similar advantages to confusing paternity.
But I think the main reason VCs like splitting deals is the fear of looking
bad. If another firm shares the deal, then in the event of failure it will
seem to have been a prudent choice—a consensus decision, rather than just the
whim of an individual partner.
**16\. Investors collude.**
Investing is not covered by antitrust law. At least, it better not be, because
investors regularly do things that would be illegal otherwise. I know
personally of cases where one investor has talked another out of making a
competitive offer, using the promise of sharing future deals.
In principle investors are all competing for the same deals, but the spirit of
cooperation is stronger than the spirit of competition. The reason, again, is
that there are so many deals. Though a professional investor may have a closer
relationship with a founder he invests in than with other investors, his
relationship with the founder is only going to last a couple years, whereas
his relationship with other firms will last his whole career. There isn't so
much at stake in his interactions with other investors, but there will be a
lot of them. Professional investors are constantly trading little favors.
Another reason investors stick together is to preserve the power of investors
as a whole. So you will not, as of this writing, be able to get investors into
an auction for your series A round. They'd rather lose the deal than establish
a precedent of VCs competitively bidding against one another. An efficient
startup funding market may be coming in the distant future; things tend to
move in that direction; but it's certainly not here now.
**17\. Large-scale investors care about their portfolio, not any individual
company.**
The reason startups work so well is that everyone with power also has equity.
The only way any of them can succeed is if they all do. This makes everyone
naturally pull in the same direction, subject to differences of opinion about
tactics.
The problem is, larger scale investors don't have exactly the same motivation.
Close, but not identical. They don't need any given startup to succeed the way
founders do; they just need their portfolio as a whole to succeed. So in borderline cases the
rational thing for them to do is to sacrifice unpromising startups.
Large-scale investors tend to put startups in three categories: successes,
failures, and the "living dead"—companies that are plugging along but don't
seem likely in the immediate future to get bought or go public. To the
founders, "living dead" sounds harsh. These companies may be far from failures
by ordinary standards. But they might as well be from a venture investor's
point of view, and they suck up just as much time and attention as the
successes. So if such a company has two possible strategies, a conservative
one that's slightly more likely to work in the end, or a risky one that within
a short time will either yield a giant success or kill the company, VCs will
push for the kill-or-cure option. To them the company is already a write-off.
Better to have resolution, one way or the other, as soon as possible.
If a startup gets into real trouble, instead of trying to save it VCs may just
sell it at a low price to another of their portfolio companies. Philip
Greenspun said in _Founders at Work_ that ArsDigita's VCs did this to them.
**18\. Investors have different risk profiles from founders.**
Most people would rather a 100% chance of $1 million than a 20% chance of $10
million. Investors are rich enough to be rational and prefer the latter. So
they'll always tend to encourage founders to keep rolling the dice. If a
company is doing well, investors will want founders to turn down most
acquisition offers. And indeed, most startups that turn down acquisition
offers ultimately do better. But it's still hair-raising for the founders,
because they might end up with nothing. When someone's offering to buy you for
a price at which your stock is worth $5 million, saying no is equivalent to
having $5 million and betting it all on one spin of the roulette wheel.
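One way to see the difference in risk profiles is to compare expected value with a simple risk-averse utility, using the numbers above. The square-root utility below is a standard textbook stand-in for diminishing returns to wealth, not anything from this essay.

```
import math

sure = 1_000_000                      # the certain outcome
gamble = [(0.2, 10_000_000), (0.8, 0)]  # 20% chance of $10M, else nothing

ev_gamble = sum(p * x for p, x in gamble)
u = math.sqrt                          # illustrative risk-averse utility
u_sure = u(sure)
u_gamble = sum(p * u(x) for p, x in gamble)

print(f"expected value: sure ${sure:,} vs gamble ${ev_gamble:,.0f}")
print(f"sqrt utility:   sure {u_sure:,.0f} vs gamble {u_gamble:,.0f}")

# expected value: sure $1,000,000 vs gamble $2,000,000
# sqrt utility:   sure 1,000 vs gamble 632
```

A diversified investor effectively collects the expected value across many such bets, which is why they'll keep pushing you to roll the dice; a founder with everything in one company feels the concave side of that curve.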
Investors will tell you the company is worth more. And they may be right. But
that doesn't mean it's wrong to sell. Any financial advisor who put all his
client's assets in the stock of a single, private company would probably lose
his license for it.
More and more, investors are letting founders cash out partially. That should
correct the problem. Most founders have such low standards that they'll feel
rich with a sum that doesn't seem huge to investors. But this custom is
spreading too slowly, because VCs are afraid of seeming irresponsible. No one
wants to be the first VC to give someone fuck-you money and then actually get
told "fuck you." But until this does start to happen, we know VCs are being
too conservative.
**19\. Investors vary greatly.**
Back when I was a founder I used to think all VCs were the same. And in fact
they do all look the same. They're all what hackers call "suits." But since
I've been dealing with VCs more I've learned that some suits are smarter than
others.
They're also in a business where winners tend to keep winning and losers to
keep losing. When a VC firm has been successful in the past, everyone wants
funding from them, so they get the pick of all the new deals. The self-
reinforcing nature of the venture funding market means that the top ten firms
live in a completely different world from, say, the hundredth. As well as
being smarter, they tend to be calmer and more upstanding; they don't need to
do iffy things to get an edge, and don't want to because they have more brand
to protect.
There are only two kinds of VCs you want to take money from, if you have the
luxury of choosing: the "top tier" VCs, meaning about the top 20 or so firms,
plus a few new ones that are not among the top 20 only because they haven't
been around long enough.
It's particularly important to raise money from a top firm if you're a hacker,
because they're more confident. That means they're less likely to stick you
with a business guy as CEO, like VCs used to do in the 90s. If you seem smart
and want to do it, they'll let you run the company.
**20\. Investors don't realize how much it costs to raise money from them.**
Raising money is a huge time suck at just the point where startups can least
afford it. It's not unusual for it to take five or six months to close a
funding round. Six weeks is fast. And raising money is not just something you
can leave running as a background process. When you're raising money, it's
inevitably the main focus of the company. Which means building the product
isn't.
Suppose a Y Combinator company starts talking to VCs after demo day, and is
successful in raising money from them, closing the deal after a comparatively
short 8 weeks. Since demo day occurs after 10 weeks, the company is now 18
weeks old. Raising money, rather than working on the product, has been the
company's main focus for 44% of its existence. And mind you, this is an example
where things turned out _well_.
When a startup does return to working on the product after a funding round
finally closes, it's as if they were returning to work after a months-long
illness. They've lost most of their momentum.
Investors have no idea how much they damage the companies they invest in by
taking so long to do it. But companies do. So there is a big opportunity here
for a new kind of venture fund that invests smaller amounts at lower
valuations, but promises to either close or say no very quickly. If there were
such a firm, I'd recommend it to startups in preference to any other, no
matter how prestigious. Startups live on speed and momentum.
**21\. Investors don't like to say no.**
The reason funding deals take so long to close is mainly that investors can't
make up their minds. VCs are not big companies; they can do a deal in 24 hours
if they need to. But they usually let the initial meetings stretch out over a
couple weeks. The reason is the selection algorithm I mentioned earlier. Most
don't try to predict whether a startup will win, but to notice quickly that it
already is winning. They care what the market thinks of you and what other VCs
think of you, and they can't judge those just from meeting you.
Because they're investing in things that (a) change fast and (b) they don't
understand, a lot of investors will reject you in a way that can later be
claimed not to have been a rejection. Unless you know this world, you may not
even realize you've been rejected. Here's a VC saying no:
> We're really excited about your project, and we want to keep in close touch
> as you develop it further.
Translated into more straightforward language, this means: We're not investing
in you, but we may change our minds if it looks like you're taking off.
Sometimes they're more candid and say explicitly that they need to "see some
traction." They'll invest in you if you start to get lots of users. But so
would any VC. So all they're saying is that you're still at square 1.
Here's a test for deciding whether a VC's response was yes or no. Look down at
your hands. Are you holding a termsheet?
**22\. You need investors.**
Some founders say "Who needs investors?" Empirically the answer seems to be:
everyone who wants to succeed. Practically every successful startup takes
outside investment at some point.
Why? What the people who think they don't need investors forget is that they
will have competitors. The question is not whether you _need_ outside
investment, but whether it could help you at all. If the answer is yes, and
you don't take investment, then competitors who do will have an advantage over
you. And in the startup world a little advantage can expand into a lot.
Mike Moritz famously said that he invested in Yahoo because he thought they
had a few weeks' lead over their competitors. That may not have mattered quite
so much as he thought, because Google came along three years later and kicked
Yahoo's ass. But there is something in what he said. Sometimes a small lead
can grow into the yes half of a binary choice.
Maybe as it gets cheaper to start a startup, it will start to be possible to
succeed in a competitive market without outside funding. There are certainly
costs to raising money. But as of this writing the empirical evidence says
it's a net win.
**23\. Investors like it when you don't need them.**
A lot of founders approach investors as if they needed their permission to
start a company—as if it were like getting into college. But you don't need
investors to start most companies; they just make it easier.
And in fact, investors greatly prefer it if you don't need them. What excites
them, both consciously and unconsciously, is the sort of startup that
approaches them saying "the train's leaving the station; are you in or out?"
not the one saying "please can we have some money to start a company?"
Most investors are "bottoms" in the sense that the startups they like most are
those that are rough with them. When Google stuck Kleiner and Sequoia with a
$75 million premoney valuation, their reaction was probably "Ouch! That feels
so good." And they were right, weren't they? That deal probably made them more
than any other they've done.
The thing is, VCs are pretty good at reading people. So don't try to act tough
with them unless you really are the next Google, or they'll see through you in
a second. Instead of acting tough, what most startups should do is simply
always have a backup plan. Always have some alternative plan for getting
started if any given investor says no. Having one is the best insurance
against needing one.
So you shouldn't start a startup that's expensive to start, because then
you'll be at the mercy of investors. If you ultimately want to do something
that will cost a lot, start by doing a cheaper subset of it, and expand your
ambitions when and if you raise more money.
Apparently the most likely animals to be left alive after a nuclear war are
cockroaches, because they're so hard to kill. That's what you want to be as a
startup, initially. Instead of a beautiful but fragile flower that needs to
have its stem in a plastic tube to support itself, better to be small, ugly,
and indestructible.
November 2022
In the science fiction books I read as a kid, reading had often been replaced
by some more efficient way of acquiring knowledge. Mysterious "tapes" would
load it into one's brain like a program being loaded into a computer.
That sort of thing is unlikely to happen anytime soon. Not just because it
would be hard to build a replacement for reading, but because even if one
existed, it would be insufficient. Reading about x doesn't just teach you
about x; it also teaches you how to write.
Would that matter? If we replaced reading, would anyone need to be good at
writing?
The reason it would matter is that writing is not just a way to convey ideas,
but also a way to have them.
A good writer doesn't just think, and then write down what he thought, as a
sort of transcript. A good writer will almost always discover new things in
the process of writing. And there is, as far as I know, no substitute for this
kind of discovery. Talking about your ideas with other people is a good way to
develop them. But even after doing this, you'll find you still discover new
things when you sit down to write. There is a kind of thinking that can only
be done by _writing_.
There are of course kinds of thinking that can be done without writing. If you
don't need to go too deeply into a problem, you can solve it without writing.
If you're thinking about how two pieces of machinery should fit together,
writing about it probably won't help much. And when a problem can be described
formally, you can sometimes solve it in your head. But if you need to solve a
complicated, ill-defined problem, it will almost always help to write about
it. Which in turn means that someone who's not good at writing will almost
always be at a disadvantage in solving such problems.
You can't think well without writing well, and you can't write well without
reading well. And I mean that last "well" in both senses. You have to be good
at reading, and read good things.
People who just want information may find other ways to get it. But people who
want to have ideas can't afford to.
May 2001
_(This article was written as a kind of business plan for a new language. So
it is missing (because it takes for granted) the most important feature of a
good programming language: very powerful abstractions.)_
A friend of mine once told an eminent operating systems expert that he wanted
to design a really good programming language. The expert told him that it
would be a waste of time, that programming languages don't become popular or
unpopular based on their merits, and so no matter how good his language was,
no one would use it. At least, that was what had happened to the language _he_
had designed.
What does make a language popular? Do popular languages deserve their
popularity? Is it worth trying to define a good programming language? How
would you do it?
I think the answers to these questions can be found by looking at hackers, and
learning what they want. Programming languages are _for_ hackers, and a
programming language is good as a programming language (rather than, say, an
exercise in denotational semantics or compiler design) if and only if hackers
like it.
**1 The Mechanics of Popularity**
It's true, certainly, that most people don't choose programming languages
simply based on their merits. Most programmers are told what language to use
by someone else. And yet I think the effect of such external factors on the
popularity of programming languages is not as great as it's sometimes thought
to be. I think a bigger problem is that a hacker's idea of a good programming
language is not the same as most language designers'.
Between the two, the hacker's opinion is the one that matters. Programming
languages are not theorems. They're tools, designed for people, and they have
to be designed to suit human strengths and weaknesses as much as shoes have to
be designed for human feet. If a shoe pinches when you put it on, it's a bad
shoe, however elegant it may be as a piece of sculpture.
It may be that the majority of programmers can't tell a good language from a
bad one. But that's no different with any other tool. It doesn't mean that
it's a waste of time to try designing a good language. Expert hackers can tell
a good language when they see one, and they'll use it. Expert hackers are a
tiny minority, admittedly, but that tiny minority write all the good software,
and their influence is such that the rest of the programmers will tend to use
whatever language they use. Often, indeed, it is not merely influence but
command: often the expert hackers are the very people who, as their bosses or
faculty advisors, tell the other programmers what language to use.
The opinion of expert hackers is not the only force that determines the
relative popularity of programming languages — legacy software (Cobol) and
hype (Ada, Java) also play a role — but I think it is the most powerful force
over the long term. Given an initial critical mass and enough time, a
programming language probably becomes about as popular as it deserves to be.
And popularity further separates good languages from bad ones, because
feedback from real live users always leads to improvements. Look at how much
any popular language has changed during its life. Perl and Fortran are extreme
cases, but even Lisp has changed a lot. Lisp 1.5 didn't have macros, for
example; these evolved later, after hackers at MIT had spent a couple years
using Lisp to write real programs.
So whether or not a language has to be good to be popular, I think a language
has to be popular to be good. And it has to stay popular to stay good. The
state of the art in programming languages doesn't stand still. And yet the
Lisps we have today are still pretty much what they had at MIT in the
mid-1980s, because that's the last time Lisp had a sufficiently large and
demanding user base.
Of course, hackers have to know about a language before they can use it. How
are they to hear? From other hackers. But there has to be some initial group
of hackers using the language for others even to hear about it. I wonder how
large this group has to be; how many users make a critical mass? Off the top
of my head, I'd say twenty. If a language had twenty separate users, meaning
twenty users who decided on their own to use it, I'd consider it to be real.
Getting there can't be easy. I would not be surprised if it is harder to get
from zero to twenty than from twenty to a thousand. The best way to get those
initial twenty users is probably to use a trojan horse: to give people an
application they want, which happens to be written in the new language.
**2 External Factors**
Let's start by acknowledging one external factor that does affect the
popularity of a programming language. To become popular, a programming
language has to be the scripting language of a popular system. Fortran and
Cobol were the scripting languages of early IBM mainframes. C was the
scripting language of Unix, and so, later, was Perl. Tcl is the scripting
language of Tk. Java and Javascript are intended to be the scripting languages
of web browsers.
Lisp is not a massively popular language because it is not the scripting
language of a massively popular system. What popularity it retains dates back
to the 1960s and 1970s, when it was the scripting language of MIT. A lot of
the great programmers of the day were associated with MIT at some point. And
in the early 1970s, before C, MIT's dialect of Lisp, called MacLisp, was one
of the only programming languages a serious hacker would want to use.
Today Lisp is the scripting language of two moderately popular systems, Emacs
and Autocad, and for that reason I suspect that most of the Lisp programming
done today is done in Emacs Lisp or AutoLisp.
Programming languages don't exist in isolation. To hack is a transitive verb —
hackers are usually hacking something — and in practice languages are judged
relative to whatever they're used to hack. So if you want to design a popular
language, you either have to supply more than a language, or you have to
design your language to replace the scripting language of some existing
system.
Common Lisp is unpopular partly because it's an orphan. It did originally come
with a system to hack: the Lisp Machine. But Lisp Machines (along with
parallel computers) were steamrollered by the increasing power of general
purpose processors in the 1980s. Common Lisp might have remained popular if it
had been a good scripting language for Unix. It is, alas, an atrociously bad
one.
One way to describe this situation is to say that a language isn't judged on
its own merits. Another view is that a programming language really isn't a
programming language unless it's also the scripting language of something.
This only seems unfair if it comes as a surprise. I think it's no more unfair
than expecting a programming language to have, say, an implementation. It's
just part of what a programming language is.
A programming language does need a good implementation, of course, and this
must be free. Companies will pay for software, but individual hackers won't,
and it's the hackers you need to attract.
A language also needs to have a book about it. The book should be thin, well-
written, and full of good examples. K&R is the ideal here. At the moment I'd
almost say that a language has to have a book published by O'Reilly. That's
becoming the test of mattering to hackers.
There should be online documentation as well. In fact, the book can start as
online documentation. But I don't think that physical books are outmoded yet.
Their format is convenient, and the de facto censorship imposed by publishers
is a useful if imperfect filter. Bookstores are one of the most important
places for learning about new languages.
**3 Brevity**
Given that you can supply the three things any language needs — a free
implementation, a book, and something to hack — how do you make a language
that hackers will like?
One thing hackers like is brevity. Hackers are lazy, in the same way that
mathematicians and modernist architects are lazy: they hate anything
extraneous. It would not be far from the truth to say that a hacker about to
write a program decides what language to use, at least subconsciously, based
on the total number of characters he'll have to type. If this isn't precisely
how hackers think, a language designer would do well to act as if it were.
It is a mistake to try to baby the user with long-winded expressions that are
meant to resemble English. Cobol is notorious for this flaw. A hacker would
consider being asked to write
add x to y giving z
instead of
z = x+y
as something between an insult to his intelligence and a sin against God.
It has sometimes been said that Lisp should use first and rest instead of car
and cdr, because it would make programs easier to read. Maybe for the first
couple hours. But a hacker can learn quickly enough that car means the first
element of a list and cdr means the rest. Using first and rest means 50% more
typing. And they are also different lengths, meaning that the arguments won't
line up when they're called, as car and cdr often are, in successive lines.
I've found that it matters a lot how code lines up on the page. I can barely
read Lisp code when it is set in a variable-width font, and friends say this
is true for other languages too.
Brevity is one place where strongly typed languages lose. All other things
being equal, no one wants to begin a program with a bunch of declarations.
Anything that can be implicit, should be.
The individual tokens should be short as well. Perl and Common Lisp occupy
opposite poles on this question. Perl programs can be almost cryptically
dense, while the names of built-in Common Lisp operators are comically long.
The designers of Common Lisp probably expected users to have text editors that
would type these long names for them. But the cost of a long name is not just
the cost of typing it. There is also the cost of reading it, and the cost of
the space it takes up on your screen.
**4 Hackability**
There is one thing more important than brevity to a hacker: being able to do
what you want. In the history of programming languages a surprising amount of
effort has gone into preventing programmers from doing things considered to be
improper. This is a dangerously presumptuous plan. How can the language
designer know what the programmer is going to need to do? I think language
designers would do better to consider their target user to be a genius who
will need to do things they never anticipated, rather than a bumbler who needs
to be protected from himself. The bumbler will shoot himself in the foot
anyway. You may save him from referring to variables in another package, but
you can't save him from writing a badly designed program to solve the wrong
problem, and taking forever to do it.
Good programmers often want to do dangerous and unsavory things. By unsavory I
mean things that go behind whatever semantic facade the language is trying to
present: getting hold of the internal representation of some high-level
abstraction, for example. Hackers like to hack, and hacking means getting
inside things and second guessing the original designer.
_Let yourself be second guessed._ When you make any tool, people use it in
ways you didn't intend, and this is especially true of a highly articulated
tool like a programming language. Many a hacker will want to tweak your
semantic model in a way that you never imagined. I say, let them; give the
programmer access to as much internal stuff as you can without endangering
runtime systems like the garbage collector.
In Common Lisp I have often wanted to iterate through the fields of a struct —
to comb out references to a deleted object, for example, or find fields that
are uninitialized. I know the structs are just vectors underneath. And yet I
can't write a general purpose function that I can call on any struct. I can
only access the fields by name, because that's what a struct is supposed to
mean.
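For what it's worth, here's a rough analogue in Python rather than Lisp of the kind of generic field access described above; dataclasses happen to expose their fields as data, which is the ability the struct example is asking for.

```
from dataclasses import dataclass, fields

@dataclass
class Point:
    x: float = None
    y: float = None
    label: str = None

def uninitialized_fields(obj):
    # Works on any dataclass instance, not just Point: the fields are
    # available as data, so one general-purpose function suffices.
    return [f.name for f in fields(obj) if getattr(obj, f.name) is None]

p = Point(x=1.0)
print(uninitialized_fields(p))   # ['y', 'label']
```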
A hacker may only want to subvert the intended model of things once or twice
in a big program. But what a difference it makes to be able to. And it may be
more than a question of just solving a problem. There is a kind of pleasure
here too. Hackers share the surgeon's secret pleasure in poking about in gross
innards, the teenager's secret pleasure in popping zits. For boys, at
least, certain kinds of horrors are fascinating. Maxim magazine publishes an
annual volume of photographs, containing a mix of pin-ups and grisly
accidents. They know their audience.
Historically, Lisp has been good at letting hackers have their way. The
political correctness of Common Lisp is an aberration. Early Lisps let you get
your hands on everything. A good deal of that spirit is, fortunately,
preserved in macros. What a wonderful thing, to be able to make arbitrary
transformations on the source code.
Classic macros are a real hacker's tool — simple, powerful, and dangerous.
It's so easy to understand what they do: you call a function on the macro's
arguments, and whatever it returns gets inserted in place of the macro call.
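To make that rule concrete, here's a toy model in Python, with nested lists standing in for s-expressions. A real Lisp macro operates on actual code at compile time, but the principle is the one just stated: the macro is an ordinary function from argument expressions to a replacement expression.

```
def unless_macro(condition, body):
    # (unless c body)  ->  (if (not c) body nil)
    return ["if", ["not", condition], body, "nil"]

MACROS = {"unless": unless_macro}

def expand(expr):
    # Recursively replace macro calls with whatever the macro function returns.
    if not isinstance(expr, list):
        return expr
    head, *args = expr
    if head in MACROS:
        return expand(MACROS[head](*args))
    return [head] + [expand(a) for a in args]

print(expand(["unless", ["ready?", "x"], ["launch", "x"]]))
# ['if', ['not', ['ready?', 'x']], ['launch', 'x'], 'nil']
```

The expander neither knows nor cares what the macro does with its arguments, which is exactly what makes classic macros simple, powerful, and dangerous.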
Hygienic macros embody the opposite principle. They try to protect you from
understanding what they're doing. I have never heard hygienic macros explained
in one sentence. And they are a classic example of the dangers of deciding
what programmers are allowed to want. Hygienic macros are intended to protect
me from variable capture, among other things, but variable capture is exactly
what I want in some macros.
A really good language should be both clean and dirty: cleanly designed, with
a small core of well understood and highly orthogonal operators, but dirty in
the sense that it lets hackers have their way with it. C is like this. So were
the early Lisps. A real hacker's language will always have a slightly raffish
character.
A good programming language should have features that make the kind of people
who use the phrase "software engineering" shake their heads disapprovingly. At
the other end of the continuum are languages like Ada and Pascal, models of
propriety that are good for teaching and not much else.
**5 Throwaway Programs**
To be attractive to hackers, a language must be good for writing the kinds of
programs they want to write. And that means, perhaps surprisingly, that it has
to be good for writing throwaway programs.
A throwaway program is a program you write quickly for some limited task: a
program to automate some system administration task, or generate test data for
a simulation, or convert data from one format to another. The surprising thing
about throwaway programs is that, like the "temporary" buildings built at so
many American universities during World War II, they often don't get thrown
away. Many evolve into real programs, with real features and real users.
I have a hunch that the best big programs begin life this way, rather than
being designed big from the start, like the Hoover Dam. It's terrifying to
build something big from scratch. When people take on a project that's too
big, they become overwhelmed. The project either gets bogged down, or the
result is sterile and wooden: a shopping mall rather than a real downtown,
Brasilia rather than Rome, Ada rather than C.
Another way to get a big program is to start with a throwaway program and keep
improving it. This approach is less daunting, and the design of the program
benefits from evolution. I think, if one looked, that this would turn out to
be the way most big programs were developed. And those that did evolve this
way are probably still written in whatever language they were first written
in, because it's rare for a program to be ported, except for political
reasons. And so, paradoxically, if you want to make a language that is used
for big systems, you have to make it good for writing throwaway programs,
because that's where big systems come from.
Perl is a striking example of this idea. It was not only designed for writing
throwaway programs, but was pretty much a throwaway program itself. Perl began
life as a collection of utilities for generating reports, and only evolved
into a programming language as the throwaway programs people wrote in it grew
larger. It was not until Perl 5 (if then) that the language was suitable for
writing serious programs, and yet it was already massively popular.
What makes a language good for throwaway programs? To start with, it must be
readily available. A throwaway program is something that you expect to write
in an hour. So the language probably must already be installed on the computer
you're using. It can't be something you have to install before you use it. It
has to be there. C was there because it came with the operating system. Perl
was there because it was originally a tool for system administrators, and
yours had already installed it.
Being available means more than being installed, though. An interactive
language, with a command-line interface, is more available than one that you
have to compile and run separately. A popular programming language should be
interactive, and start up fast.
Another thing you want in a throwaway program is brevity. Brevity is always
attractive to hackers, and never more so than in a program they expect to turn
out in an hour.
**6 Libraries**
Of course the ultimate in brevity is to have the program already written for
you, and merely to call it. And this brings us to what I think will be an
increasingly important feature of programming languages: library functions.
Perl wins because it has large libraries for manipulating strings. This class
of library functions is especially important for throwaway programs, which
are often originally written for converting or extracting data. Many Perl
programs probably begin as just a couple library calls stuck together.
I think a lot of the advances that happen in programming languages in the next
fifty years will have to do with library functions. I think future programming
languages will have libraries that are as carefully designed as the core
language. Programming language design will not be about whether to make your
language strongly or weakly typed, or object oriented, or functional, or
whatever, but about how to design great libraries. The kind of language
designers who like to think about how to design type systems may shudder at
this. It's almost like writing applications! Too bad. Languages are for
programmers, and libraries are what programmers need.
It's hard to design good libraries. It's not simply a matter of writing a lot
of code. Once the libraries get too big, it can sometimes take longer to find
the function you need than to write the code yourself. Libraries need to be
designed using a small set of orthogonal operators, just like the core
language. It ought to be possible for the programmer to guess what library
call will do what he needs.
Libraries are one place Common Lisp falls short. There are only rudimentary
libraries for manipulating strings, and almost none for talking to the
operating system. For historical reasons, Common Lisp tries to pretend that
the OS doesn't exist. And because you can't talk to the OS, you're unlikely to
be able to write a serious program using only the built-in operators in Common
Lisp. You have to use some implementation-specific hacks as well, and in
practice these tend not to give you everything you want. Hackers would think a
lot more highly of Lisp if Common Lisp had powerful string libraries and good
OS support.
**7 Syntax**
Could a language with Lisp's syntax, or more precisely, lack of syntax, ever
become popular? I don't know the answer to this question. I do think that
syntax is not the main reason Lisp isn't currently popular. Common Lisp has
worse problems than unfamiliar syntax. I know several programmers who are
comfortable with prefix syntax and yet use Perl by default, because it has
powerful string libraries and can talk to the OS.
There are two possible problems with prefix notation: that it is unfamiliar to
programmers, and that it is not dense enough. The conventional wisdom in the
Lisp world is that the first problem is the real one. I'm not so sure. Yes,
prefix notation makes ordinary programmers panic. But I don't think ordinary
programmers' opinions matter. Languages become popular or unpopular based on
what expert hackers think of them, and I think expert hackers might be able to
deal with prefix notation. Perl syntax can be pretty incomprehensible, but
that has not stood in the way of Perl's popularity. If anything it may have
helped foster a Perl cult.
A more serious problem is the diffuseness of prefix notation. For expert
hackers, that really is a problem. No one wants to write (aref a x y) when
they could write a[x,y].
In this particular case there is a way to finesse our way out of the problem.
If we treat data structures as if they were functions on indexes, we could
write (a x y) instead, which is even shorter than the Perl form. Similar
tricks may shorten other types of expressions.
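A minimal sketch of the trick in present-day Common Lisp, wrapping the array in a closure so that indexing becomes an ordinary call:

```lisp
;; A sketch of treating a data structure as a function on its indexes. In a
;; Lisp with this convention built in, (a x y) would index directly; here we
;; approximate it by wrapping the array in a closure.
(defun as-function (array)
  (lambda (&rest indexes)
    (apply #'aref array indexes)))

;; (let ((a (as-function #2A((1 2) (3 4)))))
;;   (funcall a 1 0))   ; => 3
```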
We can get rid of (or make optional) a lot of parentheses by making
indentation significant. That's how programmers read code anyway: when
indentation says one thing and delimiters say another, we go by the
indentation. Treating indentation as significant would eliminate this common
source of bugs as well as making programs shorter.
Sometimes infix syntax is easier to read. This is especially true for math
expressions. I've used Lisp my whole programming life and I still don't find
prefix math expressions natural. And yet it is convenient, especially when
you're generating code, to have operators that take any number of arguments.
So if we do have infix syntax, it should probably be implemented as some kind
of read-macro.
I don't think we should be religiously opposed to introducing syntax into
Lisp, as long as it translates in a well-understood way into underlying
s-expressions. There is already a good deal of syntax in Lisp. It's not
necessarily bad to introduce more, as long as no one is forced to use it. In
Common Lisp, some delimiters are reserved for the language, suggesting that at
least some of the designers intended to have more syntax in the future.
One of the most egregiously unlispy pieces of syntax in Common Lisp occurs in
format strings; format is a language in its own right, and that language is
not Lisp. If there were a plan for introducing more syntax into Lisp, format
specifiers might be able to be included in it. It would be a good thing if
macros could generate format specifiers the way they generate any other kind
of code.
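A one-line taste of that embedded language:

```lisp
;; Three format directives doing the work of a loop: ~{ ~} iterates over a
;; list, ~a prints an element, ~^ suppresses the separator after the last.
(format nil "~{~a~^, ~}" '(1 2 3))   ; => "1, 2, 3"
```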
An eminent Lisp hacker told me that his copy of CLTL falls open to the section
on format. Mine too. This probably indicates room for improvement. It may also
mean that programs do a lot of I/O.
**8 Efficiency**
A good language, as everyone knows, should generate fast code. But in practice
I don't think fast code comes primarily from things you do in the design of
the language. As Knuth pointed out long ago, speed only matters in certain
critical bottlenecks. And as many programmers have observed since, one is very
often mistaken about where these bottlenecks are.
So, in practice, the way to get fast code is to have a very good profiler,
rather than by, say, making the language strongly typed. You don't need to
know the type of every argument in every call in the program. You do need to
be able to declare the types of arguments in the bottlenecks. And even more,
you need to be able to find out where the bottlenecks are.
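A minimal sketch of what that looks like in Common Lisp: declarations confined to the one function the profiler pointed at, not scattered through the whole program.

```lisp
;; Types declared only in the bottleneck the profiler found.
(defun dot-product (xs ys)
  (declare (type (simple-array double-float (*)) xs ys)
           (optimize (speed 3)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length xs) sum)
      (incf sum (* (aref xs i) (aref ys i))))))
```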
One complaint people have had with Lisp is that it's hard to tell what's
expensive. This might be true. It might also be inevitable, if you want to
have a very abstract language. And in any case I think good profiling would go
a long way toward fixing the problem: you'd soon learn what was expensive.
Part of the problem here is social. Language designers like to write fast
compilers. That's how they measure their skill. They think of the profiler as
an add-on, at best. But in practice a good profiler may do more to improve the
speed of actual programs written in the language than a compiler that
generates fast code. Here, again, language designers are somewhat out of touch
with their users. They do a really good job of solving slightly the wrong
problem.
It might be a good idea to have an active profiler — to push performance data
to the programmer instead of waiting for him to come asking for it. For
example, the editor could display bottlenecks in red when the programmer edits
the source code. Another approach would be to somehow represent what's
happening in running programs. This would be an especially big win in server-
based applications, where you have lots of running programs to look at. An
active profiler could show graphically what's happening in memory as a
program's running, or even make sounds that tell what's happening.
Sound is a good cue to problems. In one place I worked, we had a big board of
dials showing what was happening to our web servers. The hands were moved by
little servomotors that made a slight noise when they turned. I couldn't see
the board from my desk, but I found that I could tell immediately, by the
sound, when there was a problem with a server.
It might even be possible to write a profiler that would automatically detect
inefficient algorithms. I would not be surprised if certain patterns of memory
access turned out to be sure signs of bad algorithms. If there were a little
guy running around inside the computer executing our programs, he would
probably have as long and plaintive a tale to tell about his job as a federal
government employee. I often have a feeling that I'm sending the processor on
a lot of wild goose chases, but I've never had a good way to look at what it's
doing.
A number of Lisps now compile into byte code, which is then executed by an
interpreter. This is usually done to make the implementation easier to port,
but it could be a useful language feature. It might be a good idea to make the
byte code an official part of the language, and to allow programmers to use
inline byte code in bottlenecks. Then such optimizations would be portable
too.
The nature of speed, as perceived by the end-user, may be changing. With the
rise of server-based applications, more and more programs may turn out to be
I/O-bound. It will be worth making I/O fast. The language can help with
straightforward measures like simple, fast, formatted output functions, and
also with deep structural changes like caching and persistent objects.
Users are interested in response time. But another kind of efficiency will be
increasingly important: the number of simultaneous users you can support per
processor. Many of the interesting applications written in the near future
will be server-based, and the number of users per server is the critical
question for anyone hosting such applications. In the capital cost of a
business offering a server-based application, this is the divisor.
For years, efficiency hasn't mattered much in most end-user applications.
Developers have been able to assume that each user would have an increasingly
powerful processor sitting on their desk. And by Parkinson's Law, software has
expanded to use the resources available. That will change with server-based
applications. In that world, the hardware and software will be supplied
together. For companies that offer server-based applications, it will make a
very big difference to the bottom line how many users they can support per
server.
In some applications, the processor will be the limiting factor, and execution
speed will be the most important thing to optimize. But often memory will be
the limit; the number of simultaneous users will be determined by the amount
of memory you need for each user's data. The language can help here too. Good
support for threads will enable all the users to share a single heap. It may
also help to have persistent objects and/or language level support for lazy
loading.
**9 Time**
The last ingredient a popular language needs is time. No one wants to write
programs in a language that might go away, as so many programming languages
do. So most hackers will tend to wait until a language has been around for a
couple years before even considering using it.
Inventors of wonderful new things are often surprised to discover this, but
you need time to get any message through to people. A friend of mine rarely
does anything the first time someone asks him. He knows that people sometimes
ask for things that they turn out not to want. To avoid wasting his time, he
waits till the third or fourth time he's asked to do something; by then,
whoever's asking him may be fairly annoyed, but at least they probably really
do want whatever they're asking for.
Most people have learned to do a similar sort of filtering on new things they
hear about. They don't even start paying attention until they've heard about
something ten times. They're perfectly justified: the majority of hot new
whatevers do turn out to be a waste of time, and eventually go away. By
delaying learning VRML, I avoided having to learn it at all.
So anyone who invents something new has to expect to keep repeating their
message for years before people will start to get it. We wrote what was, as
far as I know, the first web-server based application, and it took us years to
get it through to people that it didn't have to be downloaded. It wasn't that
they were stupid. They just had us tuned out.
The good news is, simple repetition solves the problem. All you have to do is
keep telling your story, and eventually people will start to hear. It's not
when people notice you're there that they pay attention; it's when they notice
you're still there.
It's just as well that it usually takes a while to gain momentum. Most
technologies evolve a good deal even after they're first launched —
programming languages especially. Nothing could be better, for a new
technology, than a few years of being used only by a small number of early
adopters. Early adopters are sophisticated and demanding, and quickly flush
out whatever flaws remain in your technology. When you only have a few users
you can be in close contact with all of them. And early adopters are forgiving
when you improve your system, even if this causes some breakage.
There are two ways new technology gets introduced: the organic growth method,
and the big bang method. The organic growth method is exemplified by the
classic seat-of-the-pants underfunded garage startup. A couple guys, working
in obscurity, develop some new technology. They launch it with no marketing
and initially have only a few (fanatically devoted) users. They continue to
improve the technology, and meanwhile their user base grows by word of mouth.
Before they know it, they're big.
The other approach, the big bang method, is exemplified by the VC-backed,
heavily marketed startup. They rush to develop a product, launch it with great
publicity, and immediately (they hope) have a large user base.
Generally, the garage guys envy the big bang guys. The big bang guys are
smooth and confident and respected by the VCs. They can afford the best of
everything, and the PR campaign surrounding the launch has the side effect of
making them celebrities. The organic growth guys, sitting in their garage,
feel poor and unloved. And yet I think they are often mistaken to feel sorry
for themselves. Organic growth seems to yield better technology and richer
founders than the big bang method. If you look at the dominant technologies
today, you'll find that most of them grew organically.
This pattern doesn't only apply to companies. You see it in sponsored research
too. Multics and Common Lisp were big-bang projects, and Unix and MacLisp were
organic growth projects.
**10 Redesign**
"The best writing is rewriting," wrote E. B. White. Every good writer knows
this, and it's true for software too. The most important part of design is
redesign. Programming languages, especially, don't get redesigned enough.
To write good software you must simultaneously keep two opposing ideas in your
head. You need the young hacker's naive faith in his abilities, and at the
same time the veteran's skepticism. You have to be able to think how hard can
it be? with one half of your brain while thinking it will never work with the
other.
The trick is to realize that there's no real contradiction here. You want to
be optimistic and skeptical about two different things. You have to be
optimistic about the possibility of solving the problem, but skeptical about
the value of whatever solution you've got so far.
People who do good work often think that whatever they're working on is no
good. Others see what they've done and are full of wonder, but the creator is
full of worry. This pattern is no coincidence: it is the worry that made the
work good.
If you can keep hope and worry balanced, they will drive a project forward the
same way your two legs drive a bicycle forward. In the first phase of the two-
cycle innovation engine, you work furiously on some problem, inspired by your
confidence that you'll be able to solve it. In the second phase, you look at
what you've done in the cold light of morning, and see all its flaws very
clearly. But as long as your critical spirit doesn't outweigh your hope,
you'll be able to look at your admittedly incomplete system, and think, how
hard can it be to get the rest of the way?, thereby continuing the cycle.
It's tricky to keep the two forces balanced. In young hackers, optimism
predominates. They produce something, are convinced it's great, and never
improve it. In old hackers, skepticism predominates, and they won't even dare
to take on ambitious projects.
Anything you can do to keep the redesign cycle going is good. Prose can be
rewritten over and over until you're happy with it. But software, as a rule,
doesn't get redesigned enough. Prose has readers, but software has _users._ If
a writer rewrites an essay, people who read the old version are unlikely to
complain that their thoughts have been broken by some newly introduced
incompatibility.
Users are a double-edged sword. They can help you improve your language, but
they can also deter you from improving it. So choose your users carefully, and
be slow to grow their number. Having users is like optimization: the wise
course is to delay it. Also, as a general rule, you can at any given time get
away with changing more than you think. Introducing change is like pulling off
a bandage: the pain is a memory almost as soon as you feel it.
Everyone knows that it's not a good idea to have a language designed by a
committee. Committees yield bad design. But I think the worst danger of
committees is that they interfere with redesign. It is so much work to
introduce changes that no one wants to bother. Whatever a committee decides
tends to stay that way, even if most of the members don't like it.
Even a committee of two gets in the way of redesign. This happens particularly
in the interfaces between pieces of software written by two different people.
To change the interface both have to agree to change it at once. And so
interfaces tend not to change at all, which is a problem because they tend to
be one of the most ad hoc parts of any system.
One solution here might be to design systems so that interfaces are horizontal
instead of vertical — so that modules are always vertically stacked strata of
abstraction. Then the interface will tend to be owned by one of them. The
lower of two levels will either be a language in which the upper is written,
in which case the lower level will own the interface, or it will be a slave,
in which case the interface can be dictated by the upper level.
**11 Lisp**
What all this implies is that there is hope for a new Lisp. There is hope for
any language that gives hackers what they want, including Lisp. I think we may
have made a mistake in thinking that hackers are turned off by Lisp's
strangeness. This comforting illusion may have prevented us from seeing the
real problem with Lisp, or at least Common Lisp, which is that it sucks for
doing what hackers want to do. A hacker's language needs powerful libraries
and something to hack. Common Lisp has neither. A hacker's language is terse
and hackable. Common Lisp is not.
The good news is, it's not Lisp that sucks, but Common Lisp. If we can develop
a new Lisp that is a real hacker's language, I think hackers will use it. They
will use whatever language does the job. All we have to do is make sure this
new Lisp does some important job better than other languages.
History offers some encouragement. Over time, successive new programming
languages have taken more and more features from Lisp. There is no longer much
left to copy before the language you've made is Lisp. The latest hot language,
Python, is a watered-down Lisp with infix syntax and no macros. A new Lisp
would be a natural step in this progression.
I sometimes think that it would be a good marketing trick to call it an
improved version of Python. That sounds hipper than Lisp. To many people, Lisp
is a slow AI language with a lot of parentheses. Fritz Kunze's official
biography carefully avoids mentioning the L-word. But my guess is that we
shouldn't be afraid to call the new Lisp Lisp. Lisp still has a lot of latent
respect among the very best hackers — the ones who took 6.001 and understood
it, for example. And those are the users you need to win.
In "How to Become a Hacker," Eric Raymond describes Lisp as something like
Latin or Greek — a language you should learn as an intellectual exercise, even
though you won't actually use it:
> Lisp is worth learning for the profound enlightenment experience you will
> have when you finally get it; that experience will make you a better
> programmer for the rest of your days, even if you never actually use Lisp
> itself a lot.
If I didn't know Lisp, reading this would set me asking questions. A language
that would make me a better programmer, if it means anything at all, means a
language that would be better for programming. And that is in fact the
implication of what Eric is saying.
As long as that idea is still floating around, I think hackers will be
receptive enough to a new Lisp, even if it is called Lisp. But this Lisp must
be a hacker's language, like the classic Lisps of the 1970s. It must be terse,
simple, and hackable. And it must have powerful libraries for doing what
hackers want to do now.
In the matter of libraries I think there is room to beat languages like Perl
and Python at their own game. A lot of the new applications that will need to
be written in the coming years will be server-based applications. There's no
reason a new Lisp shouldn't have string libraries as good as Perl, and if this
new Lisp also had powerful libraries for server-based applications, it could
be very popular. Real hackers won't turn up their noses at a new tool that
will let them solve hard problems with a few library calls. Remember, hackers
are lazy.
It could be an even bigger win to have core language support for server-based
applications. For example, explicit support for programs with multiple users,
or data ownership at the level of type tags.
Server-based applications also give us the answer to the question of what this
new Lisp will be used to hack. It would not hurt to make Lisp better as a
scripting language for Unix. (It would be hard to make it worse.) But I think
there are areas where existing languages would be easier to beat. I think it
might be better to follow the model of Tcl, and supply the Lisp together with
a complete system for supporting server-based applications. Lisp is a natural
fit for server-based applications. Lexical closures provide a way to get the
effect of subroutines when the UI is just a series of web pages. S-expressions
map nicely onto HTML, and macros are good at generating it. There need to be
better tools for writing server-based applications, and there needs to be a
new Lisp, and the two would work very well together.
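A minimal sketch of what mapping s-expressions onto HTML could look like; real systems (CL-WHO, for instance) go much further, but the shape is the point.

```lisp
;; S-expressions as HTML: (tag attribute-plist children...) trees rendered
;; to strings. Macros can then generate these trees like any other code.
(defun render (node)
  (if (atom node)
      (princ-to-string node)
      (destructuring-bind (tag attrs &rest children) node
        (format nil "<~(~a~)~{ ~(~a~)=\"~a\"~}>~{~a~}</~(~a~)>"
                tag attrs (mapcar #'render children) tag))))

;; (render '(p () "Hello, " (b () "world")))
;; => "<p>Hello, <b>world</b></p>"
```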
**12 The Dream Language**
By way of summary, let's try describing the hacker's dream language. The dream
language is beautiful, clean, and terse. It has an interactive toplevel that
starts up fast. You can write programs to solve common problems with very
little code. Nearly all the code in any program you write is code that's
specific to your application. Everything else has been done for you.
The syntax of the language is brief to a fault. You never have to type an
unnecessary character, or even to use the shift key much.
Using big abstractions you can write the first version of a program very
quickly. Later, when you want to optimize, there's a really good profiler that
tells you where to focus your attention. You can make inner loops blindingly
fast, even writing inline byte code if you need to.
There are lots of good examples to learn from, and the language is intuitive
enough that you can learn how to use it from examples in a couple minutes. You
don't need to look in the manual much. The manual is thin, and has few
warnings and qualifications.
The language has a small core, and powerful, highly orthogonal libraries that
are as carefully designed as the core language. The libraries all work well
together; everything in the language fits together like the parts in a fine
camera. Nothing is deprecated, or retained for compatibility. The source code
of all the libraries is readily available. It's easy to talk to the operating
system and to applications written in other languages.
The language is built in layers. The higher-level abstractions are built in a
very transparent way out of lower-level abstractions, which you can get hold
of if you want.
Nothing is hidden from you that doesn't absolutely have to be. The language
offers abstractions only as a way of saving you work, rather than as a way of
telling you what to do. In fact, the language encourages you to be an equal
participant in its design. You can change everything about it, including even
its syntax, and anything you write has, as much as possible, the same status
as what comes predefined.
** |
|
April 2003
_(This essay is derived from a keynote talk at PyCon 2003.)_
It's hard to predict what life will be like in a hundred years. There are only
a few things we can say with certainty. We know that everyone will drive
flying cars, that zoning laws will be relaxed to allow buildings hundreds of
stories tall, that it will be dark most of the time, and that women will all
be trained in the martial arts. Here I want to zoom in on one detail of this
picture. What kind of programming language will they use to write the software
controlling those flying cars?
This is worth thinking about not so much because we'll actually get to use
these languages as because, if we're lucky, we'll use languages on the path
from this point to that.
I think that, like species, languages will form evolutionary trees, with dead-
ends branching off all over. We can see this happening already. Cobol, for all
its sometime popularity, does not seem to have any intellectual descendants.
It is an evolutionary dead-end-- a Neanderthal language.
I predict a similar fate for Java. People sometimes send me mail saying, "How
can you say that Java won't turn out to be a successful language? It's already
a successful language." And I admit that it is, if you measure success by
shelf space taken up by books on it (particularly individual books on it), or
by the number of undergrads who believe they have to learn it to get a job.
When I say Java won't turn out to be a successful language, I mean something
more specific: that Java will turn out to be an evolutionary dead-end, like
Cobol.
This is just a guess. I may be wrong. My point here is not to dis Java, but to
raise the issue of evolutionary trees and get people asking, where on the tree
is language X? The reason to ask this question isn't just so that our ghosts
can say, in a hundred years, I told you so. It's because staying close to the
main branches is a useful heuristic for finding languages that will be good to
program in now.
At any given time, you're probably happiest on the main branches of an
evolutionary tree. Even when there were still plenty of Neanderthals, it must
have sucked to be one. The Cro-Magnons would have been constantly coming over
and beating you up and stealing your food.
The reason I want to know what languages will be like in a hundred years is so
that I know what branch of the tree to bet on now.
The evolution of languages differs from the evolution of species because
branches can converge. The Fortran branch, for example, seems to be merging
with the descendants of Algol. In theory this is possible for species too, but
it's not likely to have happened to any bigger than a cell.
Convergence is more likely for languages partly because the space of
possibilities is smaller, and partly because mutations are not random.
Language designers deliberately incorporate ideas from other languages.
It's especially useful for language designers to think about where the
evolution of programming languages is likely to lead, because they can steer
accordingly. In that case, "stay on a main branch" becomes more than a way to
choose a good language. It becomes a heuristic for making the right decisions
about language design.
Any programming language can be divided into two parts: some set of
fundamental operators that play the role of axioms, and the rest of the
language, which could in principle be written in terms of these fundamental
operators.
I think the fundamental operators are the most important factor in a
language's long term survival. The rest you can change. It's like the rule
that in buying a house you should consider location first of all. Everything
else you can fix later, but you can't fix the location.
I think it's important not just that the axioms be well chosen, but that there
be few of them. Mathematicians have always felt this way about axioms-- the
fewer, the better-- and I think they're onto something.
At the very least, it has to be a useful exercise to look closely at the core
of a language to see if there are any axioms that could be weeded out. I've
found in my long career as a slob that cruft breeds cruft, and I've seen this
happen in software as well as under beds and in the corners of rooms.
I have a hunch that the main branches of the evolutionary tree pass through
the languages that have the smallest, cleanest cores. The more of a language
you can write in itself, the better.
Of course, I'm making a big assumption in even asking what programming
languages will be like in a hundred years. Will we even be writing programs in
a hundred years? Won't we just tell computers what we want them to do?
There hasn't been a lot of progress in that department so far. My guess is
that a hundred years from now people will still tell computers what to do
using programs we would recognize as such. There may be tasks that we solve
now by writing programs and which in a hundred years you won't have to write
programs to solve, but I think there will still be a good deal of programming
of the type that we do today.
It may seem presumptuous to think anyone can predict what any technology will
look like in a hundred years. But remember that we already have almost fifty
years of history behind us. Looking forward a hundred years is a graspable
idea when we consider how slowly languages have evolved in the past fifty.
Languages evolve slowly because they're not really technologies. Languages are
notation. A program is a formal description of the problem you want a computer
to solve for you. So the rate of evolution in programming languages is more
like the rate of evolution in mathematical notation than, say, transportation
or communications. Mathematical notation does evolve, but not with the giant
leaps you see in technology.
Whatever computers are made of in a hundred years, it seems safe to predict
they will be much faster than they are now. If Moore's Law continues to put
out, they will be 74 quintillion (73,786,976,294,838,206,464) times faster.
That's kind of hard to imagine. And indeed, the most likely prediction in the
speed department may be that Moore's Law will stop working. Anything that is
supposed to double every eighteen months seems likely to run up against some
kind of fundamental limit eventually. But I have no trouble believing that
computers will be very much faster. Even if they only end up being a paltry
million times faster, that should change the ground rules for programming
languages substantially. Among other things, there will be more room for what
would now be considered slow languages, meaning languages that don't yield
very efficient code.
And yet some applications will still demand speed. Some of the problems we
want to solve with computers are created by computers; for example, the rate
at which you have to process video images depends on the rate at which another
computer can generate them. And there is another class of problems which
inherently have an unlimited capacity to soak up cycles: image rendering,
cryptography, simulations.
If some applications can be increasingly inefficient while others continue to
demand all the speed the hardware can deliver, faster computers will mean that
languages have to cover an ever wider range of efficiencies. We've seen this
happening already. Current implementations of some popular new languages are
shockingly wasteful by the standards of previous decades.
This isn't just something that happens with programming languages. It's a
general historical trend. As technologies improve, each generation can do
things that the previous generation would have considered wasteful. People
thirty years ago would be astonished at how casually we make long distance
phone calls. People a hundred years ago would be even more astonished that a
package would one day travel from Boston to New York via Memphis.
I can already tell you what's going to happen to all those extra cycles that
faster hardware is going to give us in the next hundred years. They're nearly
all going to be wasted.
I learned to program when computer power was scarce. I can remember taking all
the spaces out of my Basic programs so they would fit into the memory of a 4K
TRS-80. The thought of all this stupendously inefficient software burning up
cycles doing the same thing over and over seems kind of gross to me. But I
think my intuitions here are wrong. I'm like someone who grew up poor, and
can't bear to spend money even for something important, like going to the
doctor.
Some kinds of waste really are disgusting. SUVs, for example, would arguably
be gross even if they ran on a fuel which would never run out and generated no
pollution. SUVs are gross because they're the solution to a gross problem.
(How to make minivans look more masculine.) But not all waste is bad. Now that
we have the infrastructure to support it, counting the minutes of your long-
distance calls starts to seem niggling. If you have the resources, it's more
elegant to think of all phone calls as one kind of thing, no matter where the
other person is.
There's good waste, and bad waste. I'm interested in good waste-- the kind
where, by spending more, we can get simpler designs. How will we take
advantage of the opportunities to waste cycles that we'll get from new, faster
hardware?
The desire for speed is so deeply engrained in us, with our puny computers,
that it will take a conscious effort to overcome it. In language design, we
should be consciously seeking out situations where we can trade efficiency for
even the smallest increase in convenience.
Most data structures exist because of speed. For example, many languages today
have both strings and lists. Semantically, strings are more or less a subset
of lists in which the elements are characters. So why do you need a separate
data type? You don't, really. Strings only exist for efficiency. But it's lame
to clutter up the semantics of the language with hacks to make programs run
faster. Having strings in a language seems to be a case of premature
optimization.
If we think of the core of a language as a set of axioms, surely it's gross to
have additional axioms that add no expressive power, simply for the sake of
efficiency. Efficiency is important, but I don't think that's the right way to
get it.
The right way to solve that problem, I think, is to separate the meaning of a
program from the implementation details. Instead of having both lists and
strings, have just lists, with some way to give the compiler optimization
advice that will allow it to lay out strings as contiguous bytes if necessary.
Since speed doesn't matter in most of a program, you won't ordinarily need to
bother with this sort of micromanagement. This will be more and more true as
computers get faster.
Saying less about implementation should also make programs more flexible.
Specifications change while a program is being written, and this is not only
inevitable, but desirable.
The word "essay" comes from the French verb "essayer", which means "to try".
An essay, in the original sense, is something you write to try to figure
something out. This happens in software too. I think some of the best programs
were essays, in the sense that the authors didn't know when they started
exactly what they were trying to write.
Lisp hackers already know about the value of being flexible with data
structures. We tend to write the first version of a program so that it does
everything with lists. These initial versions can be so shockingly inefficient
that it takes a conscious effort not to think about what they're doing, just
as, for me at least, eating a steak requires a conscious effort not to think
where it came from.
What programmers in a hundred years will be looking for, most of all, is a
language where you can throw together an unbelievably inefficient version 1 of
a program with the least possible effort. At least, that's how we'd describe
it in present-day terms. What they'll say is that they want a language that's
easy to program in.
Inefficient software isn't gross. What's gross is a language that makes
programmers do needless work. Wasting programmer time is the true
inefficiency, not wasting machine time. This will become ever more clear as
computers get faster.
I think getting rid of strings is already something we could bear to think
about. We did it in Arc, and it seems to be a win; some operations that would
be awkward to describe as regular expressions can be described easily as
recursive functions.
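One illustrative case: balanced parentheses, which are beyond regular expressions entirely, fall out naturally as a recursive function over a list of characters. A sketch in Common Lisp:

```lisp
;; Treat the string as a list of characters and recurse, instead of
;; reaching for a regular expression.
(defun balancedp (chars &optional (depth 0))
  (cond ((null chars) (zerop depth))
        ((char= (first chars) #\() (balancedp (rest chars) (1+ depth)))
        ((char= (first chars) #\)) (and (plusp depth)
                                        (balancedp (rest chars) (1- depth))))
        (t (balancedp (rest chars) depth))))

;; (balancedp (coerce "(a (b) c)" 'list))   ; => T
;; (balancedp (coerce "(a))" 'list))        ; => NIL
```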
How far will this flattening of data structures go? I can think of
possibilities that shock even me, with my conscientiously broadened mind. Will
we get rid of arrays, for example? After all, they're just a subset of hash
tables where the keys are vectors of integers. Will we replace hash tables
themselves with lists?
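The claim about arrays, at least, is easy to make concrete. A sketch in Common Lisp:

```lisp
;; An "array" as a hash table whose keys are lists of integer indexes.
;; Semantically the same thing; only the efficiency differs.
(defun make-table-array () (make-hash-table :test #'equal))

(defun tref (table &rest indexes)
  (gethash indexes table))

(defun (setf tref) (value table &rest indexes)
  (setf (gethash indexes table) value))

;; (let ((a (make-table-array)))
;;   (setf (tref a 1 0) 3)
;;   (tref a 1 0))   ; => 3
```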
There are more shocking prospects even than that. The Lisp that McCarthy
described in 1960, for example, didn't have numbers. Logically, you don't need
to have a separate notion of numbers, because you can represent them as lists:
the integer n could be represented as a list of n elements. You can do math
this way. It's just unbearably inefficient.
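A sketch of what that unary arithmetic would look like:

```lisp
;; The integer n represented as a list of n elements. Addition is append;
;; multiplication repeats one list once per element of the other.
(defun int->list (n) (make-list n))
(defun list->int (l) (length l))

(defun add (a b) (append a b))
(defun mul (a b)
  (mapcan (lambda (x) (declare (ignore x)) (copy-list b)) a))

;; (list->int (add (int->list 2) (int->list 3)))   ; => 5
;; (list->int (mul (int->list 2) (int->list 3)))   ; => 6
```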
No one actually proposed implementing numbers as lists in practice. In fact,
McCarthy's 1960 paper was not, at the time, intended to be implemented at all.
It was a theoretical exercise, an attempt to create a more elegant alternative
to the Turing Machine. When someone did, unexpectedly, take this paper and
translate it into a working Lisp interpreter, numbers certainly weren't
represented as lists; they were represented in binary, as in every other
language.
Could a programming language go so far as to get rid of numbers as a
fundamental data type? I ask this not so much as a serious question as a
way to play chicken with the future. It's like the hypothetical case of an
irresistible force meeting an immovable object-- here, an unimaginably
inefficient implementation meeting unimaginably great resources. I don't see
why not. The future is pretty long. If there's something we can do to decrease
the number of axioms in the core language, that would seem to be the side to
bet on as t approaches infinity. If the idea still seems unbearable in a
hundred years, maybe it won't in a thousand.
Just to be clear about this, I'm not proposing that all numerical calculations
would actually be carried out using lists. I'm proposing that the core
language, prior to any additional notations about implementation, be defined
this way. In practice any program that wanted to do any amount of math would
probably represent numbers in binary, but this would be an optimization, not
part of the core language semantics.
Another way to burn up cycles is to have many layers of software between the
application and the hardware. This too is a trend we see happening already:
many recent languages are compiled into byte code. Bill Woods once told me
that, as a rule of thumb, each layer of interpretation costs a factor of 10 in
speed. This extra cost buys you flexibility.
The very first version of Arc was an extreme case of this sort of multi-level
slowness, with corresponding benefits. It was a classic "metacircular"
interpreter written on top of Common Lisp, with a definite family resemblance
to the eval function defined in McCarthy's original Lisp paper. The whole
thing was only a couple hundred lines of code, so it was very easy to
understand and change. The Common Lisp we used, CLisp, itself runs on top of a
byte code interpreter. So here we had two levels of interpretation, one of
them (the top one) shockingly inefficient, and the language was usable. Barely
usable, I admit, but usable.
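Not that code, but a sketch of the same shape: a toy eval in the style of McCarthy's paper, written in the host Lisp. Lambda expressions evaluate to themselves and the environment is supplied at application time, so binding is dynamic, as it was in 1960.

```lisp
(defun toy-eval (expr env)
  (cond ((symbolp expr) (cdr (assoc expr env)))
        ((atom expr) expr)                     ; numbers, strings, ...
        (t (case (first expr)
             (quote  (second expr))
             (if     (if (toy-eval (second expr) env)
                         (toy-eval (third expr) env)
                         (toy-eval (fourth expr) env)))
             (lambda expr)
             (t (toy-apply (toy-eval (first expr) env)
                           (mapcar (lambda (arg) (toy-eval arg env))
                                   (rest expr))
                           env))))))

(defun toy-apply (fn args env)
  (if (and (consp fn) (eq (first fn) 'lambda))
      (toy-eval (third fn)
                (append (mapcar #'cons (second fn) args) env))
      (apply fn args)))                        ; primitives come from the host

;; (toy-eval '((lambda (x y) (+ x y)) 2 3) (list (cons '+ #'+)))   ; => 5
```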
Writing software as multiple layers is a powerful technique even within
applications. Bottom-up programming means writing a program as a series of
layers, each of which serves as a language for the one above. This approach
tends to yield smaller, more flexible programs. It's also the best route to
that holy grail, reusability. A language is by definition reusable. The more
of your application you can push down into a language for writing that type of
application, the more of your software will be reusable.
Somehow the idea of reusability got attached to object-oriented programming in
the 1980s, and no amount of evidence to the contrary seems to be able to shake
it free. But although some object-oriented software is reusable, what makes it
reusable is its bottom-upness, not its object-orientedness. Consider
libraries: they're reusable because they're language, whether they're written
in an object-oriented style or not.
I don't predict the demise of object-oriented programming, by the way. Though
I don't think it has much to offer good programmers, except in certain
specialized domains, it is irresistible to large organizations. Object-
oriented programming offers a sustainable way to write spaghetti code. It lets
you accrete programs as a series of patches. Large organizations always tend
to develop software this way, and I expect this to be as true in a hundred
years as it is today.
As long as we're talking about the future, we had better talk about parallel
computation, because that's where this idea seems to live. That is, no matter
when you're talking, parallel computation seems to be something that is going
to happen in the future.
Will the future ever catch up with it? People have been talking about parallel
computation as something imminent for at least 20 years, and it hasn't
affected programming practice much so far. Or hasn't it? Already chip
designers have to think about it, and so must people trying to write systems
software on multi-CPU computers.
The real question is, how far up the ladder of abstraction will parallelism
go? In a hundred years will it affect even application programmers? Or will it
be something that compiler writers think about, but which is usually invisible
in the source code of applications?
One thing that does seem likely is that most opportunities for parallelism
will be wasted. This is a special case of my more general prediction that most
of the extra computer power we're given will go to waste. I expect that, as
with the stupendous speed of the underlying hardware, parallelism will be
something that is available if you ask for it explicitly, but ordinarily not
used. This implies that the kind of parallelism we have in a hundred years
will not, except in special applications, be massive parallelism. I expect for
ordinary programmers it will be more like being able to fork off processes
that all end up running in parallel.
And this will, like asking for specific implementations of data structures, be
something that you do fairly late in the life of a program, when you try to
optimize it. Version 1s will ordinarily ignore any advantages to be got from
parallel computation, just as they will ignore advantages to be got from
specific representations of data.
Except in special kinds of applications, parallelism won't pervade the
programs that are written in a hundred years. It would be premature
optimization if it did.
How many programming languages will there be in a hundred years? There seem to
be a huge number of new programming languages lately. Part of the reason is
that faster hardware has allowed programmers to make different tradeoffs
between speed and convenience, depending on the application. If this is a real
trend, the hardware we'll have in a hundred years should only increase it.
And yet there may be only a few widely-used languages in a hundred years. Part
of the reason I say this is optimism: it seems that, if you did a really good
job, you could make a language that was ideal for writing a slow version 1,
and yet with the right optimization advice to the compiler, would also yield
very fast code when necessary. So, since I'm optimistic, I'm going to predict
that despite the huge gap they'll have between acceptable and maximal
efficiency, programmers in a hundred years will have languages that can span
most of it.
As this gap widens, profilers will become increasingly important. Little
attention is paid to profiling now. Many people still seem to believe that the
way to get fast applications is to write compilers that generate fast code. As
the gap between acceptable and maximal performance widens, it will become
increasingly clear that the way to get fast applications is to have a good
guide from one to the other.
When I say there may only be a few languages, I'm not including domain-
specific "little languages". I think such embedded languages are a great idea,
and I expect them to proliferate. But I expect them to be written as thin
enough skins that users can see the general-purpose language underneath.
Who will design the languages of the future? One of the most exciting trends
in the last ten years has been the rise of open-source languages like Perl,
Python, and Ruby. Language design is being taken over by hackers. The results
so far are messy, but encouraging. There are some stunningly novel ideas in
Perl, for example. Many are stunningly bad, but that's always true of
ambitious efforts. At its current rate of mutation, God knows what Perl might
evolve into in a hundred years.
It's not true that those who can't do, teach (some of the best hackers I know
are professors), but it is true that there are a lot of things that those who
teach can't do. Research imposes constraining caste restrictions. In any
academic field there are topics that are ok to work on and others that aren't.
Unfortunately the distinction between acceptable and forbidden topics is
usually based on how intellectual the work sounds when described in research
papers, rather than how important it is for getting good results. The extreme
case is probably literature; people studying literature rarely say anything
that would be of the slightest use to those producing it.
Though the situation is better in the sciences, the overlap between the kind
of work you're allowed to do and the kind of work that yields good languages
is distressingly small. (Olin Shivers has grumbled eloquently about this.) For
example, types seem to be an inexhaustible source of research papers, despite
the fact that static typing seems to preclude true macros-- without which, in
my opinion, no language is worth using.
The trend is not merely toward languages being developed as open-source
projects rather than "research", but toward languages being designed by the
application programmers who need to use them, rather than by compiler writers.
This seems a good trend and I expect it to continue.
Unlike physics in a hundred years, which is almost necessarily impossible to
predict, I think it may be possible in principle to design a language now that
would appeal to users in a hundred years.
One way to design a language is to just write down the program you'd like to
be able to write, regardless of whether there is a compiler that can translate
it or hardware that can run it. When you do this you can assume unlimited
resources. It seems like we ought to be able to imagine unlimited resources as
well today as in a hundred years.
What program would one like to write? Whatever is least work. Except not
quite: whatever _would be_ least work if your ideas about programming weren't
already influenced by the languages you're currently used to. Such influence
can be so pervasive that it takes a great effort to overcome it. You'd think
it would be obvious to creatures as lazy as us how to express a program with
the least effort. In fact, our ideas about what's possible tend to be so
limited by whatever language we think in that easier formulations of programs
seem very surprising. They're something you have to discover, not something
you naturally sink into.
One helpful trick here is to use the length of the program as an approximation
for how much work it is to write. Not the length in characters, of course, but
the length in distinct syntactic elements-- basically, the size of the parse
tree. It may not be quite true that the shortest program is the least work to
write, but it's close enough that you're better off aiming for the solid
target of brevity than the fuzzy, nearby one of least work. Then the algorithm
for language design becomes: look at a program and ask, is there any way to
write this that's shorter?
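For s-expressions that metric takes only a few lines to compute. A sketch:

```lisp
;; Program length measured in syntactic elements: the size of the parse
;; tree, not the number of characters.
(defun tree-size (form)
  (if (atom form)
      1
      (reduce #'+ (mapcar #'tree-size form) :initial-value 1)))

;; (tree-size '(defun square (x) (* x x)))   ; => 9
```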
In practice, writing programs in an imaginary hundred-year language will work
to varying degrees depending on how close you are to the core. Sort routines
you can write now. But it would be hard to predict now what kinds of libraries
might be needed in a hundred years. Presumably many libraries will be for
domains that don't even exist yet. If SETI@home works, for example, we'll need
libraries for communicating with aliens. Unless of course they are
sufficiently advanced that they already communicate in XML.
At the other extreme, I think you might be able to design the core language
today. In fact, some might argue that it was already mostly designed in 1958.
If the hundred year language were available today, would we want to program in
it? One way to answer this question is to look back. If present-day
programming languages had been available in 1960, would anyone have wanted to
use them?
In some ways, the answer is no. Languages today assume infrastructure that
didn't exist in 1960. For example, a language in which indentation is
significant, like Python, would not work very well on printer terminals. But
putting such problems aside-- assuming, for example, that programs were all
just written on paper-- would programmers of the 1960s have liked writing
programs in the languages we use now?
I think so. Some of the less imaginative ones, who had artifacts of early
languages built into their ideas of what a program was, might have had
trouble. (How can you manipulate data without doing pointer arithmetic? How
can you implement flow charts without gotos?) But I think the smartest
programmers would have had no trouble making the most of present-day
languages, if they'd had them.
If we had the hundred-year language now, it would at least make a great
pseudocode. What about using it to write software? Since the hundred-year
language will need to generate fast code for some applications, presumably it
could generate code efficient enough to run acceptably well on our hardware.
We might have to give more optimization advice than users in a hundred years,
but it still might be a net win.
Now we have two ideas that, if you combine them, suggest interesting
possibilities: (1) the hundred-year language could, in principle, be designed
today, and (2) such a language, if it existed, might be good to program in
today. When you see these ideas laid out like that, it's hard not to think,
why not try writing the hundred-year language now?
When you're working on language design, I think it is good to have such a
target and to keep it consciously in mind. When you learn to drive, one of the
principles they teach you is to align the car not by lining up the hood with
the stripes painted on the road, but by aiming at some point in the distance.
Even if all you care about is what happens in the next ten feet, this is the
right answer. I think we can and should do the same thing with programming
languages.
** |
|
August 2021
When people say that in their experience all programming languages are
basically equivalent, they're making a statement not about languages but about
the kind of programming they've done.
99.5% of programming consists of gluing together calls to library functions.
All popular languages are equally good at this. So one can easily spend one's
whole career operating in the intersection of popular programming languages.
But the other .5% of programming is disproportionately interesting. If you
want to learn what it consists of, the weirdness of weird languages is a good
clue to follow.
Weird languages aren't weird by accident. Not the good ones, at least. The
weirdness of the good ones usually implies the existence of some form of
programming that's not just the usual gluing together of library calls.
A concrete example: Lisp macros. Lisp macros seem weird even to many Lisp
programmers. They're not only not in the intersection of popular languages,
but by their nature would be hard to implement properly in a language without
turning it into a dialect of Lisp. And macros are definitely evidence of
techniques that go beyond glue programming. For example, solving problems by
first writing a language for problems of that type, and then writing your
specific application in it. Nor is this all you can do with macros; it's just
one region in a space of program-manipulating techniques that even now is far
from fully explored.
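To make that concrete with a minimal sketch (my own illustration, not an example from the essay), here is what a toy "language for problems of that type" can look like in Common Lisp. The macro below adds a small retry construct to the language; the application code states the policy, and the macro writes the loop and error handling behind it.

```lisp
;; A tiny problem-specific construct: "evaluate BODY, retrying up to N
;; times if it signals an error."  Callers say what they want; the
;; macro generates the loop and the error handling.
(defmacro with-retries ((n) &body body)
  (let ((i (gensym)) (done (gensym)))
    `(block ,done
       (dotimes (,i ,n)
         (handler-case (return-from ,done (progn ,@body))
           (error () nil))))))

;; Usage: the call site reads like a sentence in the new mini-language.
(with-retries (3)
  (parse-integer "42"))   ; => 42
```

The point is not retries in particular, but that the macro lets you extend the language toward the problem before writing the program in the extended language.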
So if you want to expand your concept of what programming can be, one way to
do it is by learning weird languages. Pick a language that most programmers
consider weird but whose median user is smart, and then focus on the
differences between this language and the intersection of popular languages.
What can you say in this language that would be impossibly inconvenient to say
in others? In the process of learning how to say things you couldn't
previously say, you'll probably be learning how to think things you couldn't
previously think.
**Thanks** to Trevor Blackwell, Patrick Collison, Daniel Gackle, Amjad Masad,
and Robert Morris for reading drafts of this.
October 2021
If you asked people what was special about Einstein, most would say that he
was really smart. Even the ones who tried to give you a more sophisticated-
sounding answer would probably think this first. Till a few years ago I would
have given the same answer myself. But that wasn't what was special about
Einstein. What was special about him was that he had important new ideas.
Being very smart was a necessary precondition for having those ideas, but the
two are not identical.
It may seem a hair-splitting distinction to point out that intelligence and
its consequences are not identical, but it isn't. There's a big gap between
them. Anyone who's spent time around universities and research labs knows how
big. There are a lot of genuinely smart people who don't achieve very much.
I grew up thinking that being smart was the thing most to be desired. Perhaps
you did too. But I bet it's not what you really want. Imagine you had a choice
between being really smart but discovering nothing new, and being less smart
but discovering lots of new ideas. Surely you'd take the latter. I would. The
choice makes me uncomfortable, but when you see the two options laid out
explicitly like that, it's obvious which is better.
The reason the choice makes me uncomfortable is that being smart still feels
like the thing that matters, even though I know intellectually that it isn't.
I spent so many years thinking it was. The circumstances of childhood are a
perfect storm for fostering this illusion. Intelligence is much easier to
measure than the value of new ideas, and you're constantly being judged by it.
Whereas even the kids who will ultimately discover new things aren't usually
discovering them yet. For kids that way inclined, intelligence is the only
game in town.
There are more subtle reasons too, which persist long into adulthood.
Intelligence wins in conversation, and thus becomes the basis of the dominance
hierarchy. Plus having new ideas is such a new thing historically, and
even now done by so few people, that society hasn't yet assimilated the fact
that this is the actual destination, and intelligence merely a means to an
end.
Why do so many smart people fail to discover anything new? Viewed from that
direction, the question seems a rather depressing one. But there's another way
to look at it that's not just more optimistic, but more interesting as well.
Clearly intelligence is not the only ingredient in having new ideas. What are
the other ingredients? Are they things we could cultivate?
Because the trouble with intelligence, they say, is that it's mostly inborn.
The evidence for this seems fairly convincing, especially considering that
most of us don't want it to be true, and the evidence thus has to face a stiff
headwind. But I'm not going to get into that question here, because it's the
other ingredients in new ideas that I care about, and it's clear that many of
them can be cultivated.
That means the truth is excitingly different from the story I got as a kid. If
intelligence is what matters, and also mostly inborn, the natural consequence
is a sort of _Brave New World_ fatalism. The best you can do is figure out
what sort of work you have an "aptitude" for, so that whatever intelligence
you were born with will at least be put to the best use, and then work as hard
as you can at it. Whereas if intelligence isn't what matters, but only one of
several ingredients in what does, and many of those aren't inborn, things get
more interesting. You have a lot more control, but the problem of how to
arrange your life becomes that much more complicated.
So what are the other ingredients in having new ideas? The fact that I can
even ask this question proves the point I raised earlier — that society hasn't
assimilated the fact that it's this and not intelligence that matters.
Otherwise we'd all know the answers to such a fundamental question.
I'm not going to try to provide a complete catalogue of the other ingredients
here. This is the first time I've posed the question to myself this way, and I
think it may take a while to answer. But I wrote recently about one of the
most important: an obsessive _interest_ in a particular topic. And this can
definitely be cultivated.
Another quality you need in order to discover new ideas is _independent-
mindedness_. I wouldn't want to claim that this is distinct from intelligence
— I'd be reluctant to call someone smart who wasn't independent-minded — but
though largely inborn, this quality seems to be something that can be
cultivated to some extent.
There are general techniques for having new ideas — for example, for working
on your own _projects_ and for overcoming the obstacles you face with _early_
work — and these can all be learned. Some of them can be learned by societies.
And there are also collections of techniques for generating specific types of
new ideas, like startup ideas and essay topics.
And of course there are a lot of fairly mundane ingredients in discovering new
ideas, like _working hard_, getting enough sleep, avoiding certain kinds of
stress, having the right colleagues, and finding tricks for working on what
you want even when it's not what you're supposed to be working on. Anything
that prevents people from doing great work has an inverse that helps them to.
And this class of ingredients is not as boring as it might seem at first. For
example, having new ideas is generally associated with youth. But perhaps it's
not youth per se that yields new ideas, but specific things that come with
youth, like good health and lack of responsibilities. Investigating this might
lead to strategies that will help people of any age to have better ideas.
One of the most surprising ingredients in having new ideas is writing ability.
There's a class of new ideas that are best discovered by writing essays and
books. And that "by" is deliberate: you don't think of the ideas first, and
then merely write them down. There is a kind of thinking that one does by
writing, and if you're clumsy at writing, or don't enjoy doing it, that will
get in your way if you try to do this kind of thinking.
I predict the gap between intelligence and new ideas will turn out to be an
interesting place. If we think of this gap merely as a measure of unrealized
potential, it becomes a sort of wasteland that we try to hurry through with
our eyes averted. But if we flip the question, and start inquiring into the
other ingredients in new ideas that it implies must exist, we can mine this
gap for discoveries about discovery.
February 2003
When we were in junior high school, my friend Rich and I made a map of the
school lunch tables according to popularity. This was easy to do, because kids
only ate lunch with others of about the same popularity. We graded them from A
to E. A tables were full of football players and cheerleaders and so on. E
tables contained the kids with mild cases of Down's Syndrome, what in the
language of the time we called "retards."
We sat at a D table, as low as you could get without looking physically
different. We were not being especially candid to grade ourselves as D. It
would have taken a deliberate lie to say otherwise. Everyone in the school
knew exactly how popular everyone else was, including us.
My stock gradually rose during high school. Puberty finally arrived; I became
a decent soccer player; I started a scandalous underground newspaper. So I've
seen a good part of the popularity landscape.
I know a lot of people who were nerds in school, and they all tell the same
story: there is a strong correlation between being smart and being a nerd, and
an even stronger inverse correlation between being a nerd and being popular.
Being smart seems to _make_ you unpopular.
Why? To someone in school now, that may seem an odd question to ask. The mere
fact is so overwhelming that it may seem strange to imagine that it could be
any other way. But it could. Being smart doesn't make you an outcast in
elementary school. Nor does it harm you in the real world. Nor, as far as I
can tell, is the problem so bad in most other countries. But in a typical
American secondary school, being smart is likely to make your life difficult.
Why?
The key to this mystery is to rephrase the question slightly. Why don't smart
kids make themselves popular? If they're so smart, why don't they figure out
how popularity works and beat the system, just as they do for standardized
tests?
One argument says that this would be impossible, that the smart kids are
unpopular because the other kids envy them for being smart, and nothing they
could do could make them popular. I wish. If the other kids in junior high
school envied me, they did a great job of concealing it. And in any case, if
being smart were really an enviable quality, the girls would have broken
ranks. The guys that guys envy, girls like.
In the schools I went to, being smart just didn't matter much. Kids didn't
admire it or despise it. All other things being equal, they would have
preferred to be on the smart side of average rather than the dumb side, but
intelligence counted far less than, say, physical appearance, charisma, or
athletic ability.
So if intelligence in itself is not a factor in popularity, why are smart kids
so consistently unpopular? The answer, I think, is that they don't really want
to be popular.
If someone had told me that at the time, I would have laughed at him. Being
unpopular in school makes kids miserable, some of them so miserable that they
commit suicide. Telling me that I didn't want to be popular would have seemed
like telling someone dying of thirst in a desert that he didn't want a glass
of water. Of course I wanted to be popular.
But in fact I didn't, not enough. There was something else I wanted more: to
be smart. Not simply to do well in school, though that counted for something,
but to design beautiful rockets, or to write well, or to understand how to
program computers. In general, to make great things.
At the time I never tried to separate my wants and weigh them against one
another. If I had, I would have seen that being smart was more important. If
someone had offered me the chance to be the most popular kid in school, but
only at the price of being of average intelligence (humor me here), I wouldn't
have taken it.
Much as they suffer from their unpopularity, I don't think many nerds would.
To them the thought of average intelligence is unbearable. But most kids would
take that deal. For half of them, it would be a step up. Even for someone in
the eightieth percentile (assuming, as everyone seemed to then, that
intelligence is a scalar), who wouldn't drop thirty points in exchange for
being loved and admired by everyone?
And that, I think, is the root of the problem. Nerds serve two masters. They
want to be popular, certainly, but they want even more to be smart. And
popularity is not something you can do in your spare time, not in the fiercely
competitive environment of an American secondary school.
Alberti, arguably the archetype of the Renaissance Man, writes that "no art,
however minor, demands less than total dedication if you want to excel in it."
I wonder if anyone in the world works harder at anything than American school
kids work at popularity. Navy SEALs and neurosurgery residents seem slackers
by comparison. They occasionally take vacations; some even have hobbies. An
American teenager may work at being popular every waking hour, 365 days a
year.
I don't mean to suggest they do this consciously. Some of them truly are
little Machiavellis, but what I really mean here is that teenagers are always
on duty as conformists.
For example, teenage kids pay a great deal of attention to clothes. They don't
consciously dress to be popular. They dress to look good. But to whom? To the
other kids. Other kids' opinions become their definition of right, not just
for clothes, but for almost everything they do, right down to the way they
walk. And so every effort they make to do things "right" is also, consciously
or not, an effort to be more popular.
Nerds don't realize this. They don't realize that it takes work to be popular.
In general, people outside some very demanding field don't realize the extent
to which success depends on constant (though often unconscious) effort. For
example, most people seem to consider the ability to draw as some kind of
innate quality, like being tall. In fact, most people who "can draw" like
drawing, and have spent many hours doing it; that's why they're good at it.
Likewise, popular isn't just something you are or you aren't, but something
you make yourself.
The main reason nerds are unpopular is that they have other things to think
about. Their attention is drawn to books or the natural world, not fashions
and parties. They're like someone trying to play soccer while balancing a
glass of water on his head. Other players who can focus their whole attention
on the game beat them effortlessly, and wonder why they seem so incapable.
Even if nerds cared as much as other kids about popularity, being popular
would be more work for them. The popular kids learned to be popular, and to
want to be popular, the same way the nerds learned to be smart, and to want to
be smart: from their parents. While the nerds were being trained to get the
right answers, the popular kids were being trained to please.
So far I've been finessing the relationship between smart and nerd, using them
as if they were interchangeable. In fact it's only the context that makes them
so. A nerd is someone who isn't socially adept enough. But "enough" depends on
where you are. In a typical American school, standards for coolness are so
high (or at least, so specific) that you don't have to be especially awkward
to look awkward by comparison.
Few smart kids can spare the attention that popularity requires. Unless they
also happen to be good-looking, natural athletes, or siblings of popular kids,
they'll tend to become nerds. And that's why smart people's lives are worst
between, say, the ages of eleven and seventeen. Life at that age revolves far
more around popularity than before or after.
Before that, kids' lives are dominated by their parents, not by other kids.
Kids do care what their peers think in elementary school, but this isn't their
whole life, as it later becomes.
Around the age of eleven, though, kids seem to start treating their family as
a day job. They create a new world among themselves, and standing in this
world is what matters, not standing in their family. Indeed, being in trouble
in their family can win them points in the world they care about.
The problem is, the world these kids create for themselves is at first a very
crude one. If you leave a bunch of eleven-year-olds to their own devices, what
you get is _Lord of the Flies._ Like a lot of American kids, I read this book
in school. Presumably it was not a coincidence. Presumably someone wanted to
point out to us that we were savages, and that we had made ourselves a cruel
and stupid world. This was too subtle for me. While the book seemed entirely
believable, I didn't get the additional message. I wish they had just told us
outright that we were savages and our world was stupid.
Nerds would find their unpopularity more bearable if it merely caused them to
be ignored. Unfortunately, to be unpopular in school is to be actively
persecuted.
Why? Once again, anyone currently in school might think this a strange
question to ask. How could things be any other way? But they could be. Adults
don't normally persecute nerds. Why do teenage kids do it?
Partly because teenagers are still half children, and many children are just
intrinsically cruel. Some torture nerds for the same reason they pull the legs
off spiders. Before you develop a conscience, torture is amusing.
Another reason kids persecute nerds is to make themselves feel better. When
you tread water, you lift yourself up by pushing water down. Likewise, in any
social hierarchy, people unsure of their own position will try to emphasize it
by maltreating those they think rank below. I've read that this is why poor
whites in the United States are the group most hostile to blacks.
But I think the main reason other kids persecute nerds is that it's part of
the mechanism of popularity. Popularity is only partially about individual
attractiveness. It's much more about alliances. To become more popular, you
need to be constantly doing things that bring you close to other popular
people, and nothing brings people closer than a common enemy.
Like a politician who wants to distract voters from bad times at home, you can
create an enemy if there isn't a real one. By singling out and persecuting a
nerd, a group of kids from higher in the hierarchy create bonds between
themselves. Attacking an outsider makes them all insiders. This is why the
worst cases of bullying happen with groups. Ask any nerd: you get much worse
treatment from a group of kids than from any individual bully, however
sadistic.
If it's any consolation to the nerds, it's nothing personal. The group of kids
who band together to pick on you are doing the same thing, and for the same
reason, as a bunch of guys who get together to go hunting. They don't actually
hate you. They just need something to chase.
Because they're at the bottom of the scale, nerds are a safe target for the
entire school. If I remember correctly, the most popular kids don't persecute
nerds; they don't need to stoop to such things. Most of the persecution comes
from kids lower down, the nervous middle classes.
The trouble is, there are a lot of them. The distribution of popularity is not
a pyramid, but tapers at the bottom like a pear. The least popular group is
quite small. (I believe we were the only D table in our cafeteria map.) So
there are more people who want to pick on nerds than there are nerds.
As well as gaining points by distancing oneself from unpopular kids, one loses
points by being close to them. A woman I know says that in high school she
liked nerds, but was afraid to be seen talking to them because the other girls
would make fun of her. Unpopularity is a communicable disease; kids too nice
to pick on nerds will still ostracize them in self-defense.
It's no wonder, then, that smart kids tend to be unhappy in middle school and
high school. Their other interests leave them little attention to spare for
popularity, and since popularity resembles a zero-sum game, this in turn makes
them targets for the whole school. And the strange thing is, this nightmare
scenario happens without any conscious malice, merely because of the shape of
the situation.
For me the worst stretch was junior high, when kid culture was new and harsh,
and the specialization that would later gradually separate the smarter kids
had barely begun. Nearly everyone I've talked to agrees: the nadir is
somewhere between eleven and fourteen.
In our school it was eighth grade, which was ages twelve and thirteen for me.
There was a brief sensation that year when one of our teachers overheard a
group of girls waiting for the school bus, and was so shocked that the next
day she devoted the whole class to an eloquent plea not to be so cruel to one
another.
It didn't have any noticeable effect. What struck me at the time was that she
was surprised. You mean she doesn't know the kind of things they say to one
another? You mean this isn't normal?
It's important to realize that, no, the adults don't know what the kids are
doing to one another. They know, in the abstract, that kids are monstrously
cruel to one another, just as we know in the abstract that people get tortured
in poorer countries. But, like us, they don't like to dwell on this depressing
fact, and they don't see evidence of specific abuses unless they go looking
for it.
Public school teachers are in much the same position as prison wardens.
Wardens' main concern is to keep the prisoners on the premises. They also need
to keep them fed, and as far as possible prevent them from killing one
another. Beyond that, they want to have as little to do with the prisoners as
possible, so they leave them to create whatever social organization they want.
From what I've read, the society that the prisoners create is warped, savage,
and pervasive, and it is no fun to be at the bottom of it.
In outline, it was the same at the schools I went to. The most important thing
was to stay on the premises. While there, the authorities fed you, prevented
overt violence, and made some effort to teach you something. But beyond that
they didn't want to have too much to do with the kids. Like prison wardens,
the teachers mostly left us to ourselves. And, like prisoners, the culture we
created was barbaric.
Why is the real world more hospitable to nerds? It might seem that the answer
is simply that it's populated by adults, who are too mature to pick on one
another. But I don't think this is true. Adults in prison certainly pick on
one another. And so, apparently, do society wives; in some parts of Manhattan,
life for women sounds like a continuation of high school, with all the same
petty intrigues.
I think the important thing about the real world is not that it's populated by
adults, but that it's very large, and the things you do have real effects.
That's what school, prison, and ladies-who-lunch all lack. The inhabitants of
all those worlds are trapped in little bubbles where nothing they do can have
more than a local effect. Naturally these societies degenerate into savagery.
They have no function for their form to follow.
When the things you do have real effects, it's no longer enough just to be
pleasing. It starts to be important to get the right answers, and that's where
nerds show to advantage. Bill Gates will of course come to mind. Though
notoriously lacking in social skills, he gets the right answers, at least as
measured in revenue.
The other thing that's different about the real world is that it's much
larger. In a large enough pool, even the smallest minorities can achieve a
critical mass if they clump together. Out in the real world, nerds collect in
certain places and form their own societies where intelligence is the most
important thing. Sometimes the current even starts to flow in the other
direction: sometimes, particularly in university math and science departments,
nerds deliberately exaggerate their awkwardness in order to seem smarter. John
Nash so admired Norbert Wiener that he adopted his habit of touching the wall
as he walked down a corridor.
As a thirteen-year-old kid, I didn't have much more experience of the world
than what I saw immediately around me. The warped little world we lived in
was, I thought, _the world._ The world seemed cruel and boring, and I'm not
sure which was worse.
Because I didn't fit into this world, I thought that something must be wrong
with me. I didn't realize that the reason we nerds didn't fit in was that in
some ways we were a step ahead. We were already thinking about the kind of
things that matter in the real world, instead of spending all our time playing
an exacting but mostly pointless game like the others.
We were a bit like an adult would be if he were thrust back into middle
school. He wouldn't know the right clothes to wear, the right music to like,
the right slang to use. He'd seem to the kids a complete alien. The thing is,
he'd know enough not to care what they thought. We had no such confidence.
A lot of people seem to think it's good for smart kids to be thrown together
with "normal" kids at this stage of their lives. Perhaps. But in at least some
cases the reason the nerds don't fit in really is that everyone else is crazy.
I remember sitting in the audience at a "pep rally" at my high school,
watching as the cheerleaders threw an effigy of an opposing player into the
audience to be torn to pieces. I felt like an explorer witnessing some bizarre
tribal ritual.
If I could go back and give my thirteen-year-old self some advice, the main
thing I'd tell him would be to stick his head up and look around. I didn't
really grasp it at the time, but the whole world we lived in was as fake as a
Twinkie. Not just school, but the entire town. Why do people move to suburbia?
To have kids! So no wonder it seemed boring and sterile. The whole place was a
giant nursery, an artificial town created explicitly for the purpose of
breeding children.
Where I grew up, it felt as if there was nowhere to go, and nothing to do.
This was no accident. Suburbs are deliberately designed to exclude the outside
world, because it contains things that could endanger children.
And as for the schools, they were just holding pens within this fake world.
Officially the purpose of schools is to teach kids. In fact their primary
purpose is to keep kids locked up in one place for a big chunk of the day so
adults can get things done. And I have no problem with this: in a specialized
industrial society, it would be a disaster to have kids running around loose.
What bothers me is not that the kids are kept in prisons, but that (a) they
aren't told about it, and (b) the prisons are run mostly by the inmates. Kids
are sent off to spend six years memorizing meaningless facts in a world ruled
by a caste of giants who run after an oblong brown ball, as if this were the
most natural thing in the world. And if they balk at this surreal cocktail,
they're called misfits.
Life in this twisted world is stressful for the kids. And not just for the
nerds. Like any war, it's damaging even to the winners.
Adults can't avoid seeing that teenage kids are tormented. So why don't they
do something about it? Because they blame it on puberty. The reason kids are
so unhappy, adults tell themselves, is that monstrous new chemicals,
_hormones_ , are now coursing through their bloodstream and messing up
everything. There's nothing wrong with the system; it's just inevitable that
kids will be miserable at that age.
This idea is so pervasive that even the kids believe it, which probably
doesn't help. Someone who thinks his feet naturally hurt is not going to stop
to consider the possibility that he is wearing the wrong size shoes.
I'm suspicious of this theory that thirteen-year-old kids are intrinsically
messed up. If it's physiological, it should be universal. Are Mongol nomads
all nihilists at thirteen? I've read a lot of history, and I have not seen a
single reference to this supposedly universal fact before the twentieth
century. Teenage apprentices in the Renaissance seem to have been cheerful and
eager. They got in fights and played tricks on one another of course
(Michelangelo had his nose broken by a bully), but they weren't crazy.
As far as I can tell, the concept of the hormone-crazed teenager is coeval
with suburbia. I don't think this is a coincidence. I think teenagers are
driven crazy by the life they're made to lead. Teenage apprentices in the
Renaissance were working dogs. Teenagers now are neurotic lapdogs. Their
craziness is the craziness of the idle everywhere.
When I was in school, suicide was a constant topic among the smarter kids. No
one I knew did it, but several planned to, and some may have tried. Mostly
this was just a pose. Like other teenagers, we loved the dramatic, and suicide
seemed very dramatic. But partly it was because our lives were at times
genuinely miserable.
Bullying was only part of the problem. Another problem, and possibly an even
worse one, was that we never had anything real to work on. Humans like to
work; in most of the world, your work is your identity. And all the work we
did was pointless, or seemed so at the time.
At best it was practice for real work we might do far in the future, so far
that we didn't even know at the time what we were practicing for. More often
it was just an arbitrary series of hoops to jump through, words without
content designed mainly for testability. (The three main causes of the Civil
War were.... Test: List the three main causes of the Civil War.)
And there was no way to opt out. The adults had agreed among themselves that
this was to be the route to college. The only way to escape this empty life
was to submit to it.
Teenage kids used to have a more active role in society. In pre-industrial
times, they were all apprentices of one sort or another, whether in shops or
on farms or even on warships. They weren't left to create their own societies.
They were junior members of adult societies.
Teenagers seem to have respected adults more then, because the adults were the
visible experts in the skills they were trying to learn. Now most kids have
little idea what their parents do in their distant offices, and see no
connection (indeed, there is precious little) between schoolwork and the work
they'll do as adults.
And if teenagers respected adults more, adults also had more use for
teenagers. After a couple years' training, an apprentice could be a real help.
Even the newest apprentice could be made to carry messages or sweep the
workshop.
Now adults have no immediate use for teenagers. They would be in the way in an
office. So they drop them off at school on their way to work, much as they
might drop the dog off at a kennel if they were going away for the weekend.
What happened? We're up against a hard one here. The cause of this problem is
the same as the cause of so many present ills: specialization. As jobs become
more specialized, we have to train longer for them. Kids in pre-industrial
times started working at about 14 at the latest; kids on farms, where most
people lived, began far earlier. Now kids who go to college don't start
working full-time till 21 or 22. With some degrees, like MDs and PhDs, you may
not finish your training till 30.
Teenagers now are useless, except as cheap labor in industries like fast food,
which evolved to exploit precisely this fact. In almost any other kind of
work, they'd be a net loss. But they're also too young to be left
unsupervised. Someone has to watch over them, and the most efficient way to do
this is to collect them together in one place. Then a few adults can watch all
of them.
If you stop there, what you're describing is literally a prison, albeit a
part-time one. The problem is, many schools practically do stop there. The
stated purpose of schools is to educate the kids. But there is no external
pressure to do this well. And so most schools do such a bad job of teaching
that the kids don't really take it seriously-- not even the smart kids. Much
of the time we were all, students and teachers both, just going through the
motions.
In my high school French class we were supposed to read Hugo's _Les
Miserables._ I don't think any of us knew French well enough to make our way
through this enormous book. Like the rest of the class, I just skimmed the
Cliff's Notes.
April 2001
This essay developed out of conversations I've had with several other
programmers about why Java smelled suspicious. It's not a critique of Java! It
is a case study of hacker's radar.
Over time, hackers develop a nose for good (and bad) technology. I thought it
might be interesting to try and write down what made Java seem suspect to me.
Some people who've read this think it's an interesting attempt to write about
something that hasn't been written about before. Others say I will get in
trouble for appearing to be writing about things I don't understand. So, just
in case it does any good, let me clarify that I'm not writing here about Java
(which I have never used) but about hacker's radar (which I have thought about
a lot).
* * *
The aphorism "you can't tell a book by its cover" originated in the times when
books were sold in plain cardboard covers, to be bound by each purchaser
according to his own taste. In those days, you couldn't tell a book by its
cover. But publishing has advanced since then: present-day publishers work
hard to make the cover something you can tell a book by.
I spend a lot of time in bookshops and I feel as if I have by now learned to
understand everything publishers mean to tell me about a book, and perhaps a
bit more. The time I haven't spent in bookshops I've spent mostly in front of
computers, and I feel as if I've learned, to some degree, to judge technology
by its cover as well. It may be just luck, but I've saved myself from a few
technologies that turned out to be real stinkers.
So far, Java seems like a stinker to me. I've never written a Java program,
never more than glanced over reference books about it, but I have a hunch that
it won't be a very successful language. I may turn out to be mistaken; making
predictions about technology is a dangerous business. But for what it's worth,
as a sort of time capsule, here's why I don't like the look of Java:
1. It has been so energetically hyped. Real standards don't have to be
promoted. No one had to promote C, or Unix, or HTML. A real standard tends to
be already established by the time most people hear about it. On the hacker
radar screen, Perl is as big as Java, or bigger, just on the strength of its
own merits.
2. It's aimed low. In the original Java white paper, Gosling explicitly says
Java was designed not to be too difficult for programmers used to C. It was
designed to be another C++: C plus a few ideas taken from more advanced
languages. Like the creators of sitcoms or junk food or package tours, Java's
designers were consciously designing a product for people not as smart as
them. Historically, languages designed for other people to use have been bad:
Cobol, PL/I, Pascal, Ada, C++. The good languages have been those that were
designed for their own creators: C, Perl, Smalltalk, Lisp.
3. It has ulterior motives. Someone once said that the world would be a
better place if people only wrote books because they had something to say,
rather than because they wanted to write a book. Likewise, the reason we hear
about Java all the time is not because it has something to say about
programming languages. We hear about Java as part of a plan by Sun to
undermine Microsoft.
4. No one loves it. C, Perl, Python, Smalltalk, and Lisp programmers love
their languages. I've never heard anyone say that they loved Java.
5. People are forced to use it. A lot of the people I know using Java are
using it because they feel they have to. Either it's something they felt they
had to do to get funded, or something they thought customers would want, or
something they were told to do by management. These are smart people; if the
technology was good, they'd have used it voluntarily.
6. It has too many cooks. The best programming languages have been developed
by small groups. Java seems to be run by a committee. If it turns out to be a
good language, it will be the first time in history that a committee has
designed a good language.
7. It's bureaucratic. From what little I know about Java, there seem to be a
lot of protocols for doing things. Really good languages aren't like that.
They let you do what you want and get out of the way.
8. It's pseudo-hip. Sun now pretends that Java is a grassroots, open-source
language effort like Perl or Python. This one just happens to be controlled by
a giant company. So the language is likely to have the same drab clunkiness as
anything else that comes out of a big company.
9. It's designed for large organizations. Large organizations have different
aims from hackers. They want languages that are (believed to be) suitable for
use by large teams of mediocre programmers-- languages with features that,
like the speed limiters in U-Haul trucks, prevent fools from doing too much
damage. Hackers don't like a language that talks down to them. Hackers just
want power. Historically, languages designed for large organizations (PL/I,
Ada) have lost, while hacker languages (C, Perl) have won. The reason: today's
teenage hacker is tomorrow's CTO.
10. The wrong people like it. The programmers I admire most are not, on the
whole, captivated by Java. Who does like Java? Suits, who don't know one
language from another, but know that they keep hearing about Java in the
press; programmers at big companies, who are amazed to find that there is
something even better than C++; and plug-and-chug undergrads, who are ready to
like anything that might get them a job (will this be on the test?). These
people's opinions change with every wind.
11. Its daddy is in a pinch. Sun's business model is being undermined on two
fronts. Cheap Intel processors, of the same type used in desktop machines, are
now more than fast enough for servers. And FreeBSD seems to be at least as
good an OS for servers as Solaris. Sun's advertising implies that you need Sun
servers for industrial strength applications. If this were true, Yahoo would
be first in line to buy Suns; but when I worked there, the servers were all
Intel boxes running FreeBSD. This bodes ill for Sun's future. If Sun runs into
trouble, they could drag Java down with them.
12. The DoD likes it. The Defense Department is encouraging developers to use
Java. This seems to me the most damning sign of all. The Defense Department
does a fine (though expensive) job of defending the country, but they love
plans and procedures and protocols. Their culture is the opposite of hacker
culture; on questions of software they will tend to bet wrong. The last time
the DoD really liked a programming language, it was Ada.
Bear in mind, this is not a critique of Java, but a critique of its cover. I
don't know Java well enough to like it or dislike it. This is just an
explanation of why I don't find that I'm eager to learn it.
It may seem cavalier to dismiss a language before you've even tried writing
programs in it. But this is something all programmers have to do. There are
too many technologies out there to learn them all. You have to learn to judge
by outward signs which will be worth your time. I have likewise cavalierly
dismissed Cobol, Ada, Visual Basic, the IBM AS400, VRML, ISO 9000, the SET
protocol, VMS, Novell Netware, and CORBA, among others. They just smelled
wrong.
It could be that in Java's case I'm mistaken. It could be that a language
promoted by one big company to undermine another, designed by a committee for
a "mainstream" audience, hyped to the skies, and beloved of the DoD, happens
nonetheless to be a clean, beautiful, powerful language that I would love
programming in. It could be, but it seems very unlikely.
January 2023
_(Someone fed my essays into GPT to make something that could answer
questions based on them, then asked it where good ideas come from. The answer
was ok, but not what I would have said. This is what I would have said.)_
The way to get new ideas is to notice anomalies: what seems strange, or
missing, or broken? You can see anomalies in everyday life (much of standup
comedy is based on this), but the best place to look for them is at the
frontiers of knowledge.
Knowledge grows fractally. From a distance its edges look smooth, but when you
learn enough to get close to one, you'll notice it's full of gaps. These gaps
will seem obvious; it will seem inexplicable that no one has tried x or
wondered about y. In the best case, exploring such gaps yields whole new
fractal buds.
April 2007
There are two different ways people judge you. Sometimes judging you correctly
is the end goal. But there's a second much more common type of judgement where
it isn't. We tend to regard all judgements of us as the first type. We'd
probably be happier if we realized which are and which aren't.
The first type of judgement, the type where judging you is the end goal,
includes court cases, grades in classes, and most competitions. Such judgements
can of course be mistaken, but because the goal is to judge you correctly,
there's usually some kind of appeals process. If you feel you've been
misjudged, you can protest that you've been treated unfairly.
Nearly all the judgements made on children are of this type, so we get into
the habit early in life of thinking that all judgements are.
But in fact there is a second much larger class of judgements where judging
you is only a means to something else. These include college admissions,
hiring and investment decisions, and of course the judgements made in dating.
This kind of judgement is not really about you.
Put yourself in the position of someone selecting players for a national team.
Suppose for the sake of simplicity that this is a game with no positions, and
that you have to select 20 players. There will be a few stars who clearly
should make the team, and many players who clearly shouldn't. The only place
your judgement makes a difference is in the borderline cases. Suppose you
screw up and underestimate the 20th best player, causing him not to make the
team, and his place to be taken by the 21st best. You've still picked a good
team. If the players have the usual distribution of ability, the 21st best
player will be only slightly worse than the 20th best. Probably the difference
between them will be less than the measurement error.
The 20th best player may feel he has been misjudged. But your goal here wasn't
to provide a service estimating people's ability. It was to pick a team, and
if the difference between the 20th and 21st best players is less than the
measurement error, you've still done that optimally.
It's a false analogy even to use the word unfair to describe this kind of
misjudgement. It's not aimed at producing a correct estimate of any given
individual, but at selecting a reasonably optimal set.
One thing that leads us astray here is that the selector seems to be in a
position of power. That makes him seem like a judge. If you regard someone
judging you as a customer instead of a judge, the expectation of fairness goes
away. The author of a good novel wouldn't complain that readers were _unfair_
for preferring a potboiler with a racy cover. Stupid, perhaps, but not unfair.
Our early training and our self-centeredness combine to make us believe that
every judgement of us is about us. In fact most aren't. This is a rare case
where being less self-centered will make people more confident. Once you
realize how little most people judging you care about judging you
accurately—once you realize that because of the normal distribution of most
applicant pools, it matters least to judge accurately in precisely the cases
where judgement has the most effect—you won't take rejection so personally.
And curiously enough, taking rejection less personally may help you to get
rejected less often. If you think someone judging you will work hard to judge
you correctly, you can afford to be passive. But the more you realize that
most judgements are greatly influenced by random, extraneous factors—that most
people judging you are more like a fickle novel buyer than a wise and
perceptive magistrate—the more you realize you can do things to influence the
outcome.
One good place to apply this principle is in college applications. Most high
school students applying to college do it with the usual child's mix of
inferiority and self-centeredness: inferiority in that they assume that
admissions committees must be all-seeing; self-centeredness in that they
assume admissions committees care enough about them to dig down into their
application and figure out whether they're good or not. These combine to make
applicants passive in applying and hurt when they're rejected. If college
applicants realized how quick and impersonal most selection processes are,
they'd make more effort to sell themselves, and take the outcome less
personally.
August 2005
_(This essay is derived from a talk at Defcon 2005.)_
Suppose you wanted to get rid of economic inequality. There are two ways to do
it: give money to the poor, or take it away from the rich. But they amount to
the same thing, because if you want to give money to the poor, you have to get
it from somewhere. You can't get it from the poor, or they just end up where
they started. You have to get it from the rich.
There is of course a way to make the poor richer without simply shifting money
from the rich. You could help the poor become more productive — for example,
by improving access to education. Instead of taking money from engineers and
giving it to checkout clerks, you could enable people who would have become
checkout clerks to become engineers.
This is an excellent strategy for making the poor richer. But the evidence of
the last 200 years shows that it doesn't reduce economic inequality, because
it makes the rich richer too. If there are more engineers, then there are more
opportunities to hire them and to sell them things. Henry Ford couldn't have
made a fortune building cars in a society in which most people were still
subsistence farmers; he would have had neither workers nor customers.
If you want to reduce economic inequality instead of just improving the
overall standard of living, it's not enough just to raise up the poor. What if
one of your newly minted engineers gets ambitious and goes on to become
another Bill Gates? Economic inequality will be as bad as ever. If you
actually want to compress the gap between rich and poor, you have to push down
on the top as well as pushing up on the bottom.
How do you push down on the top? You could try to decrease the productivity of
the people who make the most money: make the best surgeons operate with their
left hands, force popular actors to overeat, and so on. But this approach is
hard to implement. The only practical solution is to let people do the best
work they can, and then (either by taxation or by limiting what they can
charge) to confiscate whatever you deem to be surplus.
So let's be clear what reducing economic inequality means. It is identical
with taking money from the rich.
When you transform a mathematical expression into another form, you often
notice new things. So it is in this case. Taking money from the rich turns out
to have consequences one might not foresee when one phrases the same idea in
terms of "reducing inequality."
The problem is, risk and reward have to be proportionate. A bet with only a
10% chance of winning has to pay more than one with a 50% chance of winning,
or no one will take it. So if you lop off the top of the possible rewards, you
thereby decrease people's willingness to take risks.
Transposing into our original expression, we get: decreasing economic
inequality means decreasing the risk people are willing to take.
There are whole classes of risks that are no longer worth taking if the
maximum return is decreased. One reason high tax rates are disastrous is that
this class of risks includes starting new companies.
**Investors**
Startups are intrinsically risky. A startup is like a small boat in the open
sea. One big wave and you're sunk. A competing product, a downturn in the
economy, a delay in getting funding or regulatory approval, a patent suit,
changing technical standards, the departure of a key employee, the loss of a
big account — any one of these can destroy you overnight. It seems only about
1 in 10 startups succeeds.
Our startup paid its first round of outside investors 36x. Which meant, with
current US tax rates, that it made sense to invest in us if we had better than
a 1 in 24 chance of succeeding. That sounds about right. That's probably
roughly how we looked when we were a couple of nerds with no business
experience operating out of an apartment.
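To spell out the arithmetic behind that figure (a rough sketch; the roughly one-third effective tax rate is my assumption, not a number from the essay): an investment is worth making when the chance of success times the after-tax multiple exceeds one, so

$$p_{\text{break-even}} \approx \frac{1}{36\,(1 - t)} \approx \frac{1}{24} \quad \text{when } t \approx \tfrac{1}{3}.$$

Cut the allowed multiple and the break-even probability rises in proportion, which is the link between taxing returns and willingness to take risk.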
If that kind of risk doesn't pay, venture investing, as we know it, doesn't
happen.
That might be ok if there were other sources of capital for new companies. Why
not just have the government, or some large almost-government organization
like Fannie Mae, do the venture investing instead of private funds?
I'll tell you why that wouldn't work. Because then you're asking government or
almost-government employees to do the one thing they are least able to do:
take risks.
As anyone who has worked for the government knows, the important thing is not
to make the right choices, but to make choices that can be justified later if
they fail. If there is a safe option, that's the one a bureaucrat will choose.
But that is exactly the wrong way to do venture investing. The nature of the
business means that you want to make terribly risky choices, if the upside
looks good enough.
VCs are currently paid in a way that makes them focus on the upside: they get
a percentage of the fund's gains. And that helps overcome their understandable
fear of investing in a company run by nerds who look like (and perhaps are)
college students.
If VCs weren't allowed to get rich, they'd behave like bureaucrats. Without
hope of gain, they'd have only fear of loss. And so they'd make the wrong
choices. They'd turn down the nerds in favor of the smooth-talking MBA in a
suit, because that investment would be easier to justify later if it failed.
**Founders**
But even if you could somehow redesign venture funding to work without
allowing VCs to become rich, there's another kind of investor you simply
cannot replace: the startups' founders and early employees.
What they invest is their time and ideas. But these are equivalent to money;
the proof is that investors are willing (if forced) to treat them as
interchangeable, granting the same status to "sweat equity" and the equity
they've purchased with cash.
The fact that you're investing time doesn't change the relationship between
risk and reward. If you're going to invest your time in something with a small
chance of succeeding, you'll only do it if there is a proportionately large
payoff. If large payoffs aren't allowed, you may as well play it safe.
Like many startup founders, I did it to get rich. But not because I wanted to
buy expensive things. What I wanted was security. I wanted to make enough
money that I didn't have to worry about money. If I'd been forbidden to make
enough from a startup to do this, I would have sought security by some other
means: for example, by going to work for a big, stable organization from which
it would be hard to get fired. Instead of busting my ass in a startup, I would
have tried to get a nice, low-stress job at a big research lab, or tenure at a
university.
That's what everyone does in societies where risk isn't rewarded. If you can't
ensure your own security, the next best thing is to make a nest for yourself
in some large organization where your status depends mostly on seniority.
Even if we could somehow replace investors, I don't see how we could replace
founders. Investors mainly contribute money, which in principle is the same no
matter what the source. But the founders contribute ideas. You can't replace
those.
Let's rehearse the chain of argument so far. I'm heading for a conclusion to
which many readers will have to be dragged kicking and screaming, so I've
tried to make each link unbreakable. Decreasing economic inequality means
taking money from the rich. Since risk and reward are equivalent, decreasing
potential rewards automatically decreases people's appetite for risk. Startups
are intrinsically risky. Without the prospect of rewards proportionate to the
risk, founders will not invest their time in a startup. Founders are
irreplaceable. So eliminating economic inequality means eliminating startups.
Economic inequality is not just a consequence of startups. It's the engine
that drives them, in the same way a fall of water drives a water mill. People
start startups in the hope of becoming much richer than they were before. And
if your society tries to prevent anyone from being much richer than anyone
else, it will also prevent one person from being much richer at t2 than t1.
**Growth**
This argument applies proportionately. It's not just that if you eliminate
economic inequality, you get no startups. To the extent you reduce economic
inequality, you decrease the number of startups. Increase taxes, and
willingness to take risks decreases in proportion.
And that seems bad for everyone. New technology and new jobs both come
disproportionately from new companies. Indeed, if you don't have startups,
pretty soon you won't have established companies either, just as, if you stop
having kids, pretty soon you won't have any adults.
It sounds benevolent to say we ought to reduce economic inequality. When you
phrase it that way, who can argue with you? _Inequality_ has to be bad, right?
It sounds a good deal less benevolent to say we ought to reduce the rate at
which new companies are founded. And yet the one implies the other.
Indeed, it may be that reducing investors' appetite for risk doesn't merely
kill off larval startups, but kills off the most promising ones especially.
Startups yield faster growth at greater risk than established companies. Does
this trend also hold among startups? That is, are the riskiest startups the
ones that generate most growth if they succeed? I suspect the answer is yes.
And that's a chilling thought, because it means that if you cut investors'
appetite for risk, the most beneficial startups are the first to go.
Not all rich people got that way from startups, of course. What if we let
people get rich by starting startups, but taxed away all other surplus wealth?
Wouldn't that at least decrease inequality?
Less than you might think. If you made it so that people could only get rich
by starting startups, people who wanted to get rich would all start startups.
And that might be a great thing. But I don't think it would have much effect
on the distribution of wealth. People who want to get rich will do whatever
they have to. If startups are the only way to do it, you'll just get far more
people starting startups. (If you write the laws very carefully, that is. More
likely, you'll just get a lot of people doing things that can be made to look
on paper like startups.)
If we're determined to eliminate economic inequality, there is still one way
out: we could say that we're willing to go ahead and do without startups. What
would happen if we did?
At a minimum, we'd have to accept lower rates of technological growth. If you
believe that large, established companies could somehow be made to develop new
technology as fast as startups, the ball is in your court to explain how. (If
you can come up with a remotely plausible story, you can make a fortune
writing business books and consulting for large companies.)
Ok, so we get slower growth. Is that so bad? Well, one reason it's bad in
practice is that other countries might not agree to slow down with us. If
you're content to develop new technologies at a slower rate than the rest of
the world, what happens is that you don't invent anything at all. Anything you
might discover has already been invented elsewhere. And the only thing you can
offer in return is raw materials and cheap labor. Once you sink that low,
other countries can do whatever they like with you: install puppet
governments, siphon off your best workers, use your women as prostitutes, dump
their toxic waste on your territory — all the things we do to poor countries
now. The only defense is to isolate yourself, as communist countries did in
the twentieth century. But the problem then is, you have to become a police
state to enforce it.
**Wealth and Power**
I realize startups are not the main target of those who want to eliminate
economic inequality. What they really dislike is the sort of wealth that
becomes self-perpetuating through an alliance with power. For example,
construction firms that fund politicians' campaigns in return for government
contracts, or rich parents who get their children into good colleges by
sending them to expensive schools designed for that purpose. But if you try to
attack this type of wealth through _economic_ policy, it's hard to hit without
destroying startups as collateral damage.
The problem here is not wealth, but corruption. So why not go after
corruption?
We don't need to prevent people from being rich if we can prevent wealth from
translating into power. And there has been progress on that front. Before he
died of drink in 1925, Commodore Vanderbilt's wastrel grandson Reggie ran down
pedestrians on five separate occasions, killing two of them. By 1969, when Ted
Kennedy drove off the bridge at Chappaquiddick, the limit seemed to be down to
one. Today it may well be zero. But what's changed is not variation in wealth.
What's changed is the ability to translate wealth into power.
How do you break the connection between wealth and power? Demand transparency.
Watch closely how power is exercised, and demand an account of how decisions
are made. Why aren't all police interrogations videotaped? Why did 36% of
Princeton's class of 2007 come from prep schools, when only 1.7% of American
kids attend them? Why did the US really invade Iraq? Why don't government
officials disclose more about their finances, and why only during their term
of office?
A friend of mine who knows a lot about computer security says the single most
important step is to log everything. Back when he was a kid trying to break
into computers, what worried him most was the idea of leaving a trail. He was
more inconvenienced by the need to avoid that than by any obstacle
deliberately put in his path.
Like all illicit connections, the connection between wealth and power
flourishes in secret. Expose all transactions, and you will greatly reduce it.
Log everything. That's a strategy that already seems to be working, and it
doesn't have the side effect of making your whole country poor.
I don't think many people realize there is a connection between economic
inequality and risk. I didn't fully grasp it till recently. I'd known for
years of course that if one didn't score in a startup, the other alternative
was to get a cozy, tenured research job. But I didn't understand the equation
governing my behavior. Likewise, it's obvious empirically that a country that
doesn't let people get rich is headed for disaster, whether it's Diocletian's
Rome or Harold Wilson's Britain. But I did not till recently understand the
role risk played.
If you try to attack wealth, you end up nailing risk as well, and with it
growth. If we want a fairer world, I think we're better off attacking one step
downstream, where wealth turns into power.
May 2006
_(This essay is derived from a keynote at Xtech.)_
Startups happen in clusters. There are a lot of them in Silicon Valley and
Boston, and few in Chicago or Miami. A country that wants startups will
probably also have to reproduce whatever makes these clusters form.
I've claimed that the recipe is a great university near a town smart people
like. If you set up those conditions within the US, startups will form as
inevitably as water droplets condense on a cold piece of metal. But when I
consider what it would take to reproduce Silicon Valley in another country,
it's clear the US is a particularly humid environment. Startups condense more
easily here.
It is by no means a lost cause to try to create a silicon valley in another
country. There's room not merely to equal Silicon Valley, but to surpass it.
But if you want to do that, you have to understand the advantages startups get
from being in America.
**1\. The US Allows Immigration.**
For example, I doubt it would be possible to reproduce Silicon Valley in
Japan, because one of Silicon Valley's most distinctive features is
immigration. Half the people there speak with accents. And the Japanese don't
like immigration. When they think about how to make a Japanese silicon valley,
I suspect they unconsciously frame it as how to make one consisting only of
Japanese people. This way of framing the question probably guarantees failure.
A silicon valley has to be a mecca for the smart and the ambitious, and you
can't have a mecca if you don't let people into it.
Of course, it's not saying much that America is more open to immigration than
Japan. Immigration policy is one area where a competitor could do better.
**2\. The US Is a Rich Country.**
I could see India one day producing a rival to Silicon Valley. Obviously they
have the right people: you can tell that by the number of Indians in the
current Silicon Valley. The problem with India itself is that it's still so
poor.
In poor countries, things we take for granted are missing. A friend of mine
visiting India sprained her ankle falling down the steps in a railway station.
When she turned to see what had happened, she found the steps were all
different heights. In industrialized countries we walk down steps our whole
lives and never think about this, because there's an infrastructure that
prevents such a staircase from being built.
The US has never been so poor as some countries are now. There have never been
swarms of beggars in the streets of American cities. So we have no data about
what it takes to get from the swarms-of-beggars stage to the silicon-valley
stage. Could you have both at once, or does there have to be some baseline
prosperity before you get a silicon valley?
I suspect there is some speed limit to the evolution of an economy. Economies
are made out of people, and attitudes can only change a certain amount per
generation.
**3\. The US Is Not (Yet) a Police State.**
Another country I could see wanting to have a silicon valley is China. But I
doubt they could do it yet either. China still seems to be a police state, and
although present rulers seem enlightened compared to the last, even
enlightened despotism can probably only get you part way toward being a great
economic power.
It can get you factories for building things designed elsewhere. Can it get
you the designers, though? Can imagination flourish where people can't
criticize the government? Imagination means having odd ideas, and it's hard to
have odd ideas about technology without also having odd ideas about politics.
And in any case, many technical ideas do have political implications. So if
you squash dissent, the back pressure will propagate into technical fields.
Singapore would face a similar problem. Singapore seems very aware of the
importance of encouraging startups. But while energetic government
intervention may be able to make a port run efficiently, it can't coax
startups into existence. A state that bans chewing gum has a long way to go
before it could create a San Francisco.
Do you need a San Francisco? Might there not be an alternate route to
innovation that goes through obedience and cooperation instead of
individualism? Possibly, but I'd bet not. Most imaginative people seem to
share a certain prickly independence, whenever and wherever they lived. You
see it in Diogenes telling Alexander to get out of his light and two thousand
years later in Feynman breaking into safes at Los Alamos. Imaginative
people don't want to follow or lead. They're most productive when everyone
gets to do what they want.
Ironically, of all rich countries the US has lost the most civil liberties
recently. But I'm not too worried yet. I'm hoping once the present
administration is out, the natural openness of American culture will reassert
itself.
**4\. American Universities Are Better.**
You need a great university to seed a silicon valley, and so far there are few
outside the US. I asked a handful of American computer science professors
which universities in Europe were most admired, and they all basically said
"Cambridge" followed by a long pause while they tried to think of others.
There don't seem to be many universities elsewhere that compare with the best
in America, at least in technology.
In some countries this is the result of a deliberate policy. The German and
Dutch governments, perhaps from fear of elitism, try to ensure that all
universities are roughly equal in quality. The downside is that none are
especially good. The best professors are spread out, instead of being
concentrated as they are in the US. This probably makes them less productive,
because they don't have good colleagues to inspire them. It also means no one
university will be good enough to act as a mecca, attracting talent from
abroad and causing startups to form around it.
The case of Germany is a strange one. The Germans invented the modern
university, and up till the 1930s theirs were the best in the world. Now they
have none that stand out. As I was mulling this over, I found myself thinking:
"I can understand why German universities declined in the 1930s, after they
excluded Jews. But surely they should have bounced back by now." Then I
realized: maybe not. There are few Jews left in Germany and most Jews I know
would not want to move there. And if you took any great American university
and removed the Jews, you'd have some pretty big gaps. So maybe it would be a
lost cause trying to create a silicon valley in Germany, because you couldn't
establish the level of university you'd need as a seed.
It's natural for US universities to compete with one another because so many
are private. To reproduce the quality of American universities you probably
also have to reproduce this. If universities are controlled by the central
government, log-rolling will pull them all toward the mean: the new Institute
of X will end up at the university in the district of a powerful politician,
instead of where it should be.
**5\. You Can Fire People in America.**
I think one of the biggest obstacles to creating startups in Europe is the
attitude toward employment. The famously rigid labor laws hurt every company,
but startups especially, because startups have the least time to spare for
bureaucratic hassles.
The difficulty of firing people is a particular problem for startups because
they have no redundancy. Every person has to do their job well.
But the problem is more than just that some startup might have a problem
firing someone they needed to. Across industries and countries, there's a
strong inverse correlation between performance and job security. Actors and
directors are fired at the end of each film, so they have to deliver every
time. Junior professors are fired by default after a few years unless the
university chooses to grant them tenure. Professional athletes know they'll be
pulled if they play badly for just a couple games. At the other end of the
scale (at least in the US) are auto workers, New York City schoolteachers, and
civil servants, who are all nearly impossible to fire. The trend is so clear
that you'd have to be willfully blind not to see it.
Performance isn't everything, you say? Well, are auto workers, schoolteachers,
and civil servants _happier_ than actors, professors, and professional
athletes?
European public opinion will apparently tolerate people being fired in
industries where they really care about performance. Unfortunately the only
industry they care enough about so far is soccer. But that is at least a
precedent.
**6\. In America Work Is Less Identified with Employment.**
The problem in more traditional places like Europe and Japan goes deeper than
the employment laws. More dangerous is the attitude they reflect: that an
employee is a kind of servant, whom the employer has a duty to protect. It
used to be that way in America too. In 1970 you were still supposed to get a
job with a big company, for whom ideally you'd work your whole career. In
return the company would take care of you: they'd try not to fire you, cover
your medical expenses, and support you in old age.
Gradually employment has been shedding such paternalistic overtones and
becoming simply an economic exchange. But the importance of the new model is
not just that it makes it easier for startups to grow. More important, I
think, is that it makes it easier for people to _start_ startups.
Even in the US most kids graduating from college still think they're supposed
to get jobs, as if you couldn't be productive without being someone's
employee. But the less you identify work with employment, the easier it
becomes to start a startup. When you see your career as a series of different
types of work, instead of a lifetime's service to a single employer, there's
less risk in starting your own company, because you're only replacing one
segment instead of discarding the whole thing.
The old ideas are so powerful that even the most successful startup founders
have had to struggle against them. A year after the founding of Apple, Steve
Wozniak still hadn't quit HP. He still planned to work there for life. And
when Jobs found someone to give Apple serious venture funding, on the
condition that Woz quit, he initially refused, arguing that he'd designed both
the Apple I and the Apple II while working at HP, and there was no reason he
couldn't continue.
**7\. America Is Not Too Fussy.**
If there are any laws regulating businesses, you can assume larval startups
will break most of them, because they don't know what the laws are and don't
have time to find out.
For example, many startups in America begin in places where it's not really
legal to run a business. Hewlett-Packard, Apple, and Google were all run out
of garages. Many more startups, including ours, were initially run out of
apartments. If the laws against such things were actually enforced, most
startups wouldn't happen.
That could be a problem in fussier countries. If Hewlett and Packard tried
running an electronics company out of their garage in Switzerland, the old
lady next door would report them to the municipal authorities.
But the worst problem in other countries is probably the effort required just
to start a company. A friend of mine started a company in Germany in the early
90s, and was shocked to discover, among many other regulations, that you
needed $20,000 in capital to incorporate. That's one reason I'm not typing
this on an Apfel laptop. Jobs and Wozniak couldn't have come up with that kind
of money in a company financed by selling a VW bus and an HP calculator. We
couldn't have started Viaweb either.
Here's a tip for governments that want to encourage startups: read the stories
of existing startups, and then try to simulate what would have happened in
your country. When you hit something that would have killed Apple, prune it
off.
_Startups are marginal._ They're started by the poor and the timid; they begin
in marginal space and spare time; they're started by people who are supposed
to be doing something else; and though businesses, their founders often know
nothing about business. Young startups are fragile. A society that trims its
margins sharply will kill them all.
**8\. America Has a Large Domestic Market.**
What sustains a startup in the beginning is the prospect of getting their
initial product out. The successful ones therefore make the first version as
simple as possible. In the US they usually begin by making something just for
the local market.
This works in America, because the local market is 300 million people. It
wouldn't work so well in Sweden. In a small country, a startup has a harder
task: they have to sell internationally from the start.
The EU was designed partly to simulate a single, large domestic market. The
problem is that the inhabitants still speak many different languages. So a
software startup in Sweden is still at a disadvantage relative to one in the
US, because they have to deal with internationalization from the beginning.
It's significant that the most famous recent startup in Europe, Skype, worked
on a problem that was intrinsically international.
However, for better or worse it looks as if Europe will in a few decades speak
a single language. When I was a student in Italy in 1990, few Italians spoke
English. Now all educated people seem to be expected to-- and Europeans do not
like to seem uneducated. This is presumably a taboo subject, but if present
trends continue, French and German will eventually go the way of Irish and
Luxembourgish: they'll be spoken in homes and by eccentric nationalists.
**9\. America Has Venture Funding.**
Startups are easier to start in America because funding is easier to get.
There are now a few VC firms outside the US, but startup funding doesn't only
come from VC firms. A more important source, because it's more personal and
comes earlier in the process, is money from individual angel investors. Google
might never have got to the point where they could raise millions from VC
funds if they hadn't first raised a hundred thousand from Andy Bechtolsheim.
And he could help them because he was one of the founders of Sun. This pattern
is repeated constantly in startup hubs. It's this pattern that _makes_ them
startup hubs.
The good news is, all you have to do to get the process rolling is get those
first few startups successfully launched. If they stick around after they get
rich, startup founders will almost automatically fund and encourage new
startups.
The bad news is that the cycle is slow. It probably takes five years, on
average, before a startup founder can make angel investments. And while
governments _might_ be able to set up local VC funds by supplying the money
themselves and recruiting people from existing firms to run them, only organic
growth can produce angel investors.
Incidentally, America's private universities are one reason there's so much
venture capital. A lot of the money in VC funds comes from their endowments.
So another advantage of private universities is that a good chunk of the
country's wealth is managed by enlightened investors.
**10\. America Has Dynamic Typing for Careers.**
Compared to other industrialized countries the US is disorganized about
routing people into careers. For example, in America people often don't decide
to go to medical school till they've finished college. In Europe they
generally decide in high school.
The European approach reflects the old idea that each person has a single,
definite occupation-- which is not far from the idea that each person has a
natural "station" in life. If this were true, the most efficient plan would be
to discover each person's station as early as possible, so they could receive
the training appropriate to it.
In the US things are more haphazard. But that turns out to be an advantage as
an economy gets more liquid, just as dynamic typing turns out to work better
than static for ill-defined problems. This is particularly true with startups.
"Startup founder" is not the sort of career a high school student would
choose. If you ask at that age, people will choose conservatively. They'll
choose well-understood occupations like engineer, or doctor, or lawyer.
Startups are the kind of thing people don't plan, so you're more likely to get
them in a society where it's ok to make career decisions on the fly.
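For readers who don't program, here is a minimal sketch of the dynamic typing analogy above. It's only an illustration; the names (annual_output, Employee, Founder) are invented for the example. The point is that a duck-typed function never declares up front what kind of thing it accepts, so it keeps working when a kind of input nobody planned for turns up later.

```python
# A minimal sketch of the dynamic typing analogy (illustrative only; the names
# annual_output, Employee, and Founder are invented for this example).

def annual_output(worker):
    # No declared type: anything with a produce() method qualifies, including
    # kinds of worker that didn't exist when this function was written.
    return worker.produce()

class Employee:
    def produce(self):
        return "version 3.2 of the company's software"

class Founder:
    # A case nobody planned for up front; the duck-typed function takes it anyway.
    def produce(self):
        return "a new company"

print(annual_output(Employee()))
print(annual_output(Founder()))
```

A statically typed version would have had to say, before anything ran, exactly what a worker was allowed to be.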
For example, in theory the purpose of a PhD program is to train you to do
research. But fortunately in the US this is another rule that isn't very
strictly enforced. In the US most people in CS PhD programs are there simply
because they wanted to learn more. They haven't decided what they'll do
afterward. So American grad schools spawn a lot of startups, because students
don't feel they're failing if they don't go into research.
Those worried about America's "competitiveness" often suggest spending more on
public schools. But perhaps America's lousy public schools have a hidden
advantage. Because they're so bad, the kids adopt an attitude of waiting for
college. I did; I knew I was learning so little that I wasn't even learning
what the choices were, let alone which to choose. This is demoralizing, but it
does at least make you keep an open mind.
Certainly if I had to choose between bad high schools and good universities,
like the US, and good high schools and bad universities, like most other
industrialized countries, I'd take the US system. Better to make everyone feel
like a late bloomer than a failed child prodigy.
**Attitudes**
There's one item conspicuously missing from this list: American attitudes.
Americans are said to be more entrepreneurial, and less afraid of risk. But
America has no monopoly on this. Indians and Chinese seem plenty
entrepreneurial, perhaps more than Americans.
Some say Europeans are less energetic, but I don't believe it. I think the
problem with Europe is not that they lack balls, but that they lack examples.
Even in the US, the most successful startup founders are often technical
people who are quite timid, initially, about the idea of starting their own
company. Few are the sort of backslapping extroverts one thinks of as
typically American. They can usually only summon up the activation energy to
start a startup when they meet people who've done it and realize they could
too.
I think what holds back European hackers is simply that they don't meet so
many people who've done it. You see that variation even within the US.
Stanford students are more entrepreneurial than Yale students, but not because
of some difference in their characters; the Yale students just have fewer
examples.
I admit there seem to be different attitudes toward ambition in Europe and the
US. In the US it's ok to be overtly ambitious, and in most of Europe it's not.
But this can't be an intrinsically European quality; previous generations of
Europeans were as ambitious as Americans. What happened? My hypothesis is that
ambition was discredited by the terrible things ambitious people did in the
first half of the twentieth century. Now swagger is out. (Even now the image
of a very ambitious German presses a button or two, doesn't it?)
It would be surprising if European attitudes weren't affected by the disasters
of the twentieth century. It takes a while to be optimistic after events like
that. But ambition is human nature. Gradually it will re-emerge.
**How To Do Better**
I don't mean to suggest by this list that America is the perfect place for
startups. It's the best place so far, but the sample size is small, and "so
far" is not very long. On historical time scales, what we have now is just a
prototype.
So let's look at Silicon Valley the way you'd look at a product made by a
competitor. What weaknesses could you exploit? How could you make something
users would like better? The users in this case are those critical few
thousand people you'd like to move to your silicon valley.
To start with, Silicon Valley is too far from San Francisco. Palo Alto, the
original ground zero, is about thirty miles away, and the present center more
like forty. So people who come to work in Silicon Valley face an unpleasant
choice: either live in the boring sprawl of the valley proper, or live in San
Francisco and endure an hour commute each way.
The best thing would be if the silicon valley were not merely closer to the
interesting city, but interesting itself. And there is a lot of room for
improvement here. Palo Alto is not so bad, but everything built since is the
worst sort of strip development. You can measure how demoralizing it is by the
number of people who will sacrifice two hours a day commuting rather than live
there.
Another area in which you could easily surpass Silicon Valley is public
transportation. There is a train running the length of it, and by American
standards it's not bad. Which is to say that to Japanese or Europeans it would
seem like something out of the third world.
The kind of people you want to attract to your silicon valley like to get
around by train, bicycle, and on foot. So if you want to beat America, design
a town that puts cars last. It will be a while before any American city can
bring itself to do that.
**Capital Gains**
There are also a couple things you could do to beat America at the national
level. One would be to have lower capital gains taxes. It doesn't seem
critical to have the lowest _income_ taxes, because to take advantage of
those, people have to move. But if capital gains rates vary, you move
assets, not yourself, so changes are reflected at market speeds. The lower the
rate, the cheaper it is to buy stock in growing companies as opposed to real
estate, or bonds, or stocks bought for the dividends they pay.
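To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers: a $10,000 stake that triples, and rates of 0%, 15%, and 35% chosen only as round illustrations, not as anyone's actual tax schedule.

```python
# A minimal sketch with hypothetical numbers: how the capital gains rate
# changes what an investor keeps from a stock held for growth.

def after_tax_profit(purchase, sale, cap_gains_rate):
    """Profit kept after capital gains tax on an asset held for its growth."""
    return (sale - purchase) * (1 - cap_gains_rate)

# $10,000 of stock in a growing company that triples in value.
for rate in (0.00, 0.15, 0.35):
    kept = after_tax_profit(10_000, 30_000, rate)
    print(f"capital gains rate {rate:.0%}: investor keeps ${kept:,.0f}")
```

At 0% the investor keeps the whole $20,000 gain; at 35%, only $13,000. That difference is what makes growth stock more or less attractive relative to assets bought for the income they throw off.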
So if you want to encourage startups you should have a low rate on capital
gains. Politicians are caught between a rock and a hard place here, however:
make the capital gains rate low and be accused of creating "tax breaks for the
rich," or make it high and starve growing companies of investment capital. As
Galbraith said, politics is a matter of choosing between the unpalatable and
the disastrous. A lot of governments experimented with the disastrous in the
twentieth century; now the trend seems to be toward the merely unpalatable.
Oddly enough, the leaders now are European countries like Belgium, which has a
capital gains tax rate of zero.
**Immigration**
The other place you could beat the US would be with smarter immigration
policy. There are huge gains to be made here. Silicon valleys are made of
people, remember.
Like a company whose software runs on Windows, those in the current Silicon
Valley are all too aware of the shortcomings of the INS, but there's little
they can do about it. They're hostages of the platform.
America's immigration system has never been well run, and since 2001 there has
been an additional admixture of paranoia. What fraction of the smart people
who want to come to America can even get in? I doubt even half. Which means if
you made a competing technology hub that let in all smart people, you'd
immediately get more than half the world's top talent, for free.
US immigration policy is particularly ill-suited to startups, because it
reflects a model of work from the 1970s. It assumes good technical people have
college degrees, and that work means working for a big company.
If you don't have a college degree you can't get an H1B visa, the type usually
issued to programmers. But a test that excludes Steve Jobs, Bill Gates, and
Michael Dell can't be a good one. Plus you can't get a visa for working on
your own company, only for working as an employee of someone else's. And if
you want to apply for citizenship you daren't work for a startup at all,
because if your sponsor goes out of business, you have to start over.
American immigration policy keeps out most smart people, and channels the rest
into unproductive jobs. It would be easy to do better. Imagine if, instead,
you treated immigration like recruiting-- if you made a conscious effort to
seek out the smartest people and get them to come to your country.
A country that got immigration right would have a huge advantage. At this
point you could become a mecca for smart people simply by having an
immigration system that let them in.
**A Good Vector**
If you look at the kinds of things you have to do to create an environment
where startups condense, none are great sacrifices. Great universities?
Livable towns? Civil liberties? Flexible employment laws? Immigration policies
that let in smart people? Tax laws that encourage growth? It's not as if you
have to risk destroying your country to get a silicon valley; these are all
good things in their own right.
And then of course there's the question, can you afford not to? I can imagine
a future in which the default choice of ambitious young people is to start
their own company rather than work for someone else's. I'm not sure that will
happen, but it's where the trend points now. And if that is the future, places
that don't have startups will be a whole step behind, like those that missed
the Industrial Revolution.
November 2004, corrected June 2006
Occam's razor says we should prefer the simpler of two explanations. I begin
by reminding readers of this principle because I'm about to propose a theory
that will offend both liberals and conservatives. But Occam's razor means, in
effect, that if you want to disagree with it, you have a hell of a coincidence
to explain.
Theory: In US presidential elections, the more charismatic candidate wins.
People who write about politics, whether on the left or the right, have a
consistent bias: they take politics seriously. When one candidate beats
another they look for political explanations. The country is shifting to the
left, or the right. And that sort of shift can certainly be the result of a
presidential election, which makes it easy to believe it was the cause.
But when I think about why I voted for Clinton over the first George Bush, it
wasn't because I was shifting to the left. Clinton just seemed more dynamic.
He seemed to want the job more. Bush seemed old and tired. I suspect it was
the same for a lot of voters.
Clinton didn't represent any national shift leftward. He was just more
charismatic than George Bush or (God help us) Bob Dole. In 2000 we practically
got a controlled experiment to prove it: Gore had Clinton's policies, but not
his charisma, and he suffered proportionally. Same story in 2004. Kerry
was smarter and more articulate than Bush, but rather a stiff. And Kerry lost.
As I looked further back, I kept finding the same pattern. Pundits said Carter
beat Ford because the country distrusted the Republicans after Watergate. And
yet it also happened that Carter was famous for his big grin and folksy ways,
and Ford for being a boring klutz. Four years later, pundits said the country
had lurched to the right. But Reagan, a former actor, also happened to be even
more charismatic than Carter (whose grin was somewhat less cheery after four
stressful years in office). In 1984 the charisma gap between Reagan and
Mondale was like that between Clinton and Dole, with similar results. The
first George Bush managed to win in 1988, though he would later be vanquished
by one of the most charismatic presidents ever, because in 1988 he was up
against the notoriously uncharismatic Michael Dukakis.
These are the elections I remember personally, but apparently the same pattern
played out in 1964 and 1972. The most recent counterexample appears to be
1968, when Nixon beat the more charismatic Hubert Humphrey. But when you
examine that election, it tends to support the charisma theory more than
contradict it. As Joe McGinnis recounts in his famous book _The Selling of the
President 1968_, Nixon knew he had less charisma than Humphrey, and thus
simply refused to debate him on TV. He knew he couldn't afford to let the two
of them be seen side by side.
Now a candidate probably couldn't get away with refusing to debate. But in
1968 the custom of televised debates was still evolving. In effect, Nixon won
in 1968 because voters were never allowed to see the real Nixon. All they saw
were carefully scripted campaign spots.
Oddly enough, the most recent true counterexample is probably 1960. Though
this election is usually given as an example of the power of TV, Kennedy
apparently would not have won without fraud by party machines in Illinois and
Texas. But TV was still young in 1960; only 87% of households had it.
Undoubtedly TV helped Kennedy, so historians are correct in regarding this
election as a watershed. TV required a new kind of candidate. There would be
no more Calvin Coolidges.
The charisma theory may also explain why Democrats tend to lose presidential
elections. The core of the Democrats' ideology seems to be a belief in
government. Perhaps this tends to attract people who are earnest, but dull.
Dukakis, Gore, and Kerry were so similar in that respect that they might have
been brothers. Good thing for the Democrats that their screen lets through an
occasional Clinton, even if some scandal results.
One would like to believe elections are won and lost on issues, if only fake
ones like Willie Horton. And yet, if they are, we have a remarkable
coincidence to explain. In every presidential election since TV became
widespread, the apparently more charismatic candidate has won. Surprising,
isn't it, that voters' opinions on the issues have lined up with charisma for
11 elections in a row?
The political commentators who come up with shifts to the left or right in
their morning-after analyses are like the financial reporters stuck writing
stories day after day about the random fluctuations of the stock market. Day
ends, market closes up or down, reporter looks for good or bad news
respectively, and writes that the market was up on news of Intel's earnings,
or down on fears of instability in the Middle East. Suppose we could somehow
feed these reporters false information about market closes, but give them all
the other news intact. Does anyone believe they would notice the anomaly, and
not simply write that stocks were up (or down) on whatever good (or bad) news
there was that day? That they would say, hey, wait a minute, how can stocks be
up with all this unrest in the Middle East?
I'm not saying that issues don't matter to voters. Of course they do. But the
major parties know so well which issues matter how much to how many voters,
and adjust their message so precisely in response, that they tend to split the
difference on the issues, leaving the election to be decided by the one factor
they can't control: charisma.
If the Democrats had been running a candidate as charismatic as Clinton in the
2004 election, he'd have won. And we'd be reading that the election was a
referendum on the war in Iraq, instead of that the Democrats are out of touch
with evangelical Christians in middle America.
During the 1992 election, the Clinton campaign staff had a big sign in their
office saying "It's the economy, stupid." Perhaps it was even simpler than
they thought.
**Postscript**
Opinions seem to be divided about the charisma theory. Some say it's
impossible, others say it's obvious. This seems a good sign. Perhaps it's in
the sweet spot midway between.
As for it being impossible, I reply: here's the data; here's the theory;
theory explains data 100%. To a scientist, at least, that means it deserves
attention, however implausible it seems.
You can't believe voters are so superficial that they just choose the most
charismatic guy? My theory doesn't require that. I'm not proposing that
charisma is the only factor, just that it's the only one _left_ after the
efforts of the two parties cancel one another out.
As for the theory being obvious, as far as I know, no one has proposed it
before. Election forecasters are proud when they can achieve the same results
with much more complicated models.
Finally, to the people who say that the theory is probably true, but rather
depressing: it's not so bad as it seems. The phenomenon is like a pricing
anomaly; once people realize it's there, it will disappear. Once both parties
realize it's a waste of time to nominate uncharismatic candidates, they'll
tend to nominate only the most charismatic ones. And if the candidates are
equally charismatic, charisma will cancel out, and elections will be decided
on issues, as political commentators like to think they are now.
August 2005
_(This essay is derived from a talk at Oscon 2005.)_
Lately companies have been paying more attention to open source. Ten years ago
there seemed a real danger Microsoft would extend its monopoly to servers. It
seems safe to say now that open source has prevented that. A recent survey
found 52% of companies are replacing Windows servers with Linux servers.
More significant, I think, is _which_ 52% they are. At this point, anyone
proposing to run Windows on servers should be prepared to explain what they
know about servers that Google, Yahoo, and Amazon don't.
But the biggest thing business has to learn from open source is not about
Linux or Firefox, but about the forces that produced them. Ultimately these
will affect a lot more than what software you use.
We may be able to get a fix on these underlying forces by triangulating from
open source and blogging. As you've probably noticed, they have a lot in
common.
Like open source, blogging is something people do themselves, for free,
because they enjoy it. Like open source hackers, bloggers compete with people
working for money, and often win. The method of ensuring quality is also the
same: Darwinian. Companies ensure quality through rules to prevent employees
from screwing up. But you don't need that when the audience can communicate
with one another. People just produce whatever they want; the good stuff
spreads, and the bad gets ignored. And in both cases, feedback from the
audience improves the best work.
Another thing blogging and open source have in common is the Web. People have
always been willing to do great work for free, but before the Web it was
harder to reach an audience or collaborate on projects.
**Amateurs**
I think the most important of the new principles business has to learn is that
people work a lot harder on stuff they like. Well, that's news to no one. So
how can I claim business has to learn it? When I say business doesn't know
this, I mean the structure of business doesn't reflect it.
Business still reflects an older model, exemplified by the French word for
working: _travailler_. It has an English cousin, travail, and what it means is
torture.
This turns out not to be the last word on work, however. As societies get
richer, they learn something about work that's a lot like what they learn
about diet. We know now that the healthiest diet is the one our peasant
ancestors were forced to eat because they were poor. Like rich food, idleness
only seems desirable when you don't get enough of it. I think we were designed
to work, just as we were designed to eat a certain amount of fiber, and we
feel bad if we don't.
There's a name for people who work for the love of it: amateurs. The word now
has such bad connotations that we forget its etymology, though it's staring us
in the face. "Amateur" was originally rather a complimentary word. But the
thing to be in the twentieth century was professional, which amateurs, by
definition, are not.
That's why the business world was so surprised by one lesson from open source:
that people working for love often surpass those working for money. Users
don't switch from Explorer to Firefox because they want to hack the source.
They switch because it's a better browser.
It's not that Microsoft isn't trying. They know controlling the browser is one
of the keys to retaining their monopoly. The problem is the same they face in
operating systems: they can't pay people enough to build something better than
a group of inspired hackers will build for free.
I suspect professionalism was always overrated-- not just in the literal sense
of working for money, but also connotations like formality and detachment.
Inconceivable as it would have seemed in, say, 1970, I think professionalism
was largely a fashion, driven by conditions that happened to exist in the
twentieth century.
One of the most powerful of those was the existence of "channels."
Revealingly, the same term was used for both products and information: there
were distribution channels, and TV and radio channels.
It was the narrowness of such channels that made professionals seem so
superior to amateurs. There were only a few jobs as professional journalists,
for example, so competition ensured the average journalist was fairly good.
Whereas anyone can express opinions about current events in a bar. And so the
average person expressing his opinions in a bar sounds like an idiot compared
to a journalist writing about the subject.
On the Web, the barrier for publishing your ideas is even lower. You don't
have to buy a drink, and they even let kids in. Millions of people are
publishing online, and the average level of what they're writing, as you might
expect, is not very good. This has led some in the media to conclude that
blogs don't present much of a threat-- that blogs are just a fad.
Actually, the fad is the word "blog," at least the way the print media now use
it. What they mean by "blogger" is not someone who publishes in a weblog
format, but anyone who publishes online. That's going to become a problem as
the Web becomes the default medium for publication. So I'd like to suggest an
alternative word for someone who publishes online. How about "writer?"
Those in the print media who dismiss the writing online because of its low
average quality are missing an important point: no one reads the _average_
blog. In the old world of channels, it meant something to talk about average
quality, because that's what you were getting whether you liked it or not. But
now you can read any writer you want. So the average quality of writing online
isn't what the print media are competing against. They're competing against
the best writing online. And, like Microsoft, they're losing.
I know that from my own experience as a reader. Though most print publications
are online, I probably read two or three articles on individual people's sites
for every one I read on the site of a newspaper or magazine.
And when I read, say, New York Times stories, I never reach them through the
Times front page. Most I find through aggregators like Google News or Slashdot
or Delicious. Aggregators show how much better you can do than the channel.
The New York Times front page is a list of articles written by people who work
for the New York Times. Delicious is a list of articles that are interesting.
And it's only now that you can see the two side by side that you notice how
little overlap there is.
Most articles in the print media are boring. For example, the president
notices that a majority of voters now think invading Iraq was a mistake, so he
makes an address to the nation to drum up support. Where is the man bites dog
in that? I didn't hear the speech, but I could probably tell you exactly what
he said. A speech like that is, in the most literal sense, not news: there is
nothing _new_ in it.
Nor is there anything new, except the names and places, in most "news" about
things going wrong. A child is abducted; there's a tornado; a ferry sinks;
someone gets bitten by a shark; a small plane crashes. And what do you learn
about the world from these stories? Absolutely nothing. They're outlying data
points; what makes them gripping also makes them irrelevant.
As in software, when professionals produce such crap, it's not surprising if
amateurs can do better. Live by the channel, die by the channel: if you depend
on an oligopoly, you sink into bad habits that are hard to overcome when you
suddenly get competition.
**Workplaces**
Another thing blogs and open source software have in common is that they're
often made by people working at home. That may not seem surprising. But it
should be. It's the architectural equivalent of a home-made aircraft shooting
down an F-18. Companies spend millions to build office buildings for a single
purpose: to be a place to work. And yet people working in their own homes,
which aren't even designed to be workplaces, end up being more productive.
This proves something a lot of us have suspected. The average office is a
miserable place to get work done. And a lot of what makes offices bad are the
very qualities we associate with professionalism. The sterility of offices is
supposed to suggest efficiency. But suggesting efficiency is a different thing
from actually being efficient.
The atmosphere of the average workplace is to productivity what flames painted
on the side of a car are to speed. And it's not just the way offices look
that's bleak. The way people act is just as bad.
Things are different in a startup. Often as not a startup begins in an
apartment. Instead of matching beige cubicles they have an assortment of
furniture they bought used. They work odd hours, wearing the most casual of
clothing. They look at whatever they want online without worrying whether it's
"work safe." The cheery, bland language of the office is replaced by wicked
humor. And you know what? The company at this stage is probably the most
productive it's ever going to be.
Maybe it's not a coincidence. Maybe some aspects of professionalism are
actually a net lose.
To me the most demoralizing aspect of the traditional office is that you're
supposed to be there at certain times. There are usually a few people in a
company who really have to, but the reason most employees work fixed hours is
that the company can't measure their productivity.
The basic idea behind office hours is that if you can't make people work, you
can at least prevent them from having fun. If employees have to be in the
building a certain number of hours a day, and are forbidden to do non-work
things while there, then they must be working. In theory. In practice they
spend a lot of their time in a no-man's land, where they're neither working
nor having fun.
If you could measure how much work people did, many companies wouldn't need
any fixed workday. You could just say: this is what you have to do. Do it
whenever you like, wherever you like. If your work requires you to talk to
other people in the company, then you may need to be here a certain amount.
Otherwise we don't care.
That may seem utopian, but it's what we told people who came to work for our
company. There were no fixed office hours. I never showed up before 11 in the
morning. But we weren't saying this to be benevolent. We were saying: if you
work here we expect you to get a lot done. Don't try to fool us just by being
here a lot.
The problem with the facetime model is not just that it's demoralizing, but
that the people pretending to work interrupt the ones actually working. I'm
convinced the facetime model is the main reason large organizations have so
many meetings. Per capita, large organizations accomplish very little. And yet
all those people have to be on site at least eight hours a day. When so much
time goes in one end and so little achievement comes out the other, something
has to give. And meetings are the main mechanism for taking up the slack.
For one year I worked at a regular nine to five job, and I remember well the
strange, cozy feeling that comes over one during meetings. I was very aware,
because of the novelty, that I was being paid for programming. It seemed just
amazing, as if there was a machine on my desk that spat out a dollar bill
every two minutes no matter what I did. Even while I was in the bathroom! But
because the imaginary machine was always running, I felt I always ought to be
working. And so meetings felt wonderfully relaxing. They counted as work, just
like programming, but they were so much easier. All you had to do was sit and
look attentive.
Meetings are like an opiate with a network effect. So is email, on a smaller
scale. And in addition to the direct cost in time, there's the cost in
fragmentation-- breaking people's day up into bits too small to be useful.
You can see how dependent you've become on something by removing it suddenly.
So for big companies I propose the following experiment. Set aside one day
where meetings are forbidden-- where everyone has to sit at their desk all day
and work without interruption on things they can do without talking to anyone
else. Some amount of communication is necessary in most jobs, but I'm sure
many employees could find eight hours worth of stuff they could do by
themselves. You could call it "Work Day."
The other problem with pretend work is that it often looks better than real
work. When I'm writing or hacking I spend as much time just thinking as I do
actually typing. Half the time I'm sitting drinking a cup of tea, or walking
around the neighborhood. This is a critical phase-- this is where ideas come
from-- and yet I'd feel guilty doing this in most offices, with everyone else
looking busy.
It's hard to see how bad some practice is till you have something to compare
it to. And that's one reason open source, and even blogging in some cases, are
so important. They show us what real work looks like.
We're funding eight new startups at the moment. A friend asked what they were
doing for office space, and seemed surprised when I said we expected them to
work out of whatever apartments they found to live in. But we didn't propose
that to save money. We did it because we want their software to be good.
Working in crappy informal spaces is one of the things startups do right
without realizing it. As soon as you get into an office, work and life start
to drift apart.
That is one of the key tenets of professionalism. Work and life are supposed
to be separate. But that part, I'm convinced, is a mistake.
**Bottom-Up**
The third big lesson we can learn from open source and blogging is that ideas
can bubble up from the bottom, instead of flowing down from the top. Open
source and blogging both work bottom-up: people make what they want, and the
best stuff prevails.
Does this sound familiar? It's the principle of a market economy. Ironically,
though open source and blogs are done for free, those worlds resemble market
economies, while most companies, for all their talk about the value of free
markets, are run internally like communist states.
There are two forces that together steer design: ideas about what to do next,
and the enforcement of quality. In the channel era, both flowed down from the
top. For example, newspaper editors assigned stories to reporters, then edited
what they wrote.
Open source and blogging show us things don't have to work that way. Ideas and
even the enforcement of quality can flow bottom-up. And in both cases the
results are not merely acceptable, but better. For example, open source
software is more reliable precisely because it's open source; anyone can find
mistakes.
The same happens with writing. As we got close to publication, I found I was
very worried about the essays in Hackers & Painters that hadn't been online.
Once an essay has had a couple thousand page views I feel reasonably confident
about it. But these had had literally orders of magnitude less scrutiny. It
felt like releasing software without testing it.
That's what all publishing used to be like. If you got ten people to read a
manuscript, you were lucky. But I'd become so used to publishing online that
the old method now seemed alarmingly unreliable, like navigating by dead
reckoning once you'd gotten used to a GPS.
The other thing I like about publishing online is that you can write what you
want and publish when you want. Earlier this year I wrote something that
seemed suitable for a magazine, so I sent it to an editor I know. As I was
waiting to hear back, I found to my surprise that I was hoping they'd reject
it. Then I could put it online right away. If they accepted it, it wouldn't be
read by anyone for months, and in the meantime I'd have to fight word-by-word
to save it from being mangled by some twenty five year old copy editor.
Many employees would _like_ to build great things for the companies they work
for, but more often than not management won't let them. How many of us have
heard stories of employees going to management and saying, please let us build
this thing to make money for you-- and the company saying no? The most famous
example is probably Steve Wozniak, who originally wanted to build
microcomputers for his then-employer, HP. And they turned him down. On the
blunderometer, this episode ranks with IBM accepting a non-exclusive license
for DOS. But I think this happens all the time. We just don't hear about it
usually, because to prove yourself right you have to quit and start your own
company, like Wozniak did.
**Startups**
So these, I think, are the three big lessons open source and blogging have to
teach business: (1) that people work harder on stuff they like, (2) that the
standard office environment is very unproductive, and (3) that bottom-up often
works better than top-down.
I can imagine managers at this point saying: what is this guy talking about?
What good does it do me to know that my programmers would be more productive
working at home on their own projects? I need their asses in here working on
version 3.2 of our software, or we're never going to make the release date.
And it's true, the benefit that specific manager could derive from the forces
I've described is near zero. When I say business can learn from open source, I
don't mean any specific business can. I mean business can learn about new
conditions the same way a gene pool does. I'm not claiming companies can get
smarter, just that dumb ones will die.
So what will business look like when it has assimilated the lessons of open
source and blogging? I think the big obstacle preventing us from seeing the
future of business is the assumption that people working for you have to be
employees. But think about what's going on underneath: the company has some
money, and they pay it to the employee in the hope that he'll make something
worth more than they paid him. Well, there are other ways to arrange that
relationship. Instead of paying the guy money as a salary, why not give it to
him as investment? Then instead of coming to your office to work on your
projects, he can work wherever he wants on projects of his own.
Because few of us know any alternative, we have no idea how much better we
could do than the traditional employer-employee relationship. Such customs
evolve with glacial slowness. Our employer-employee relationship still retains
a big chunk of master-servant DNA.
I dislike being on either end of it. I'll work my ass off for a customer, but
I resent being told what to do by a boss. And being a boss is also horribly
frustrating; half the time it's easier just to do stuff yourself than to get
someone else to do it for you. I'd rather do almost anything than give or
receive a performance review.
On top of its unpromising origins, employment has accumulated a lot of cruft
over the years. The list of what you can't ask in job interviews is now so
long that for convenience I assume it's infinite. Within the office you now
have to walk on eggshells lest anyone say or do something that makes the
company prey to a lawsuit. And God help you if you fire anyone.
Nothing shows more clearly that employment is not an ordinary economic
relationship than companies being sued for firing people. In any purely
economic relationship you're free to do what you want. If you want to stop
buying steel pipe from one supplier and start buying it from another, you
don't have to explain why. No one can accuse you of _unjustly_ switching pipe
suppliers. Justice implies some kind of paternal obligation that isn't there
in transactions between equals.
Most of the legal restrictions on employers are intended to protect employees.
But you can't have action without an equal and opposite reaction. You can't
expect employers to have some kind of paternal responsibility toward employees
without putting employees in the position of children. And that seems a bad
road to go down.
Next time you're in a moderately large city, drop by the main post office and
watch the body language of the people working there. They have the same sullen
resentment as children made to do something they don't want to. Their union
has exacted pay increases and work restrictions that would have been the envy
of previous generations of postal workers, and yet they don't seem any happier
for it. It's demoralizing to be on the receiving end of a paternalistic
relationship, no matter how cozy the terms. Just ask any teenager.
I see the disadvantages of the employer-employee relationship because I've
been on both sides of a better one: the investor-founder relationship. I
wouldn't claim it's painless. When I was running a startup, the thought of our
investors used to keep me up at night. And now that I'm an investor, the
thought of our startups keeps me up at night. All the pain of whatever problem
you're trying to solve is still there. But the pain hurts less when it isn't
mixed with resentment.
I had the misfortune to participate in what amounted to a controlled
experiment to prove that. After Yahoo bought our startup I went to work for
them. I was doing exactly the same work, except with bosses. And to my horror
I started acting like a child. The situation pushed buttons I'd forgotten I
had.
The big advantage of investment over employment, as the examples of open
source and blogging suggest, is that people working on projects of their own
are enormously more productive. And a startup is a project of one's own in two
senses, both of them important: it's creatively one's own, and also
economically one's own.
Google is a rare example of a big company in tune with the forces I've
described. They've tried hard to make their offices less sterile than the
usual cube farm. They give employees who do great work large grants of stock
to simulate the rewards of a startup. They even let hackers spend 20% of their
time on their own projects.
Why not let people spend 100% of their time on their own projects, and instead
of trying to approximate the value of what they create, give them the actual
market value? Impossible? That is in fact what venture capitalists do.
So am I claiming that no one is going to be an employee anymore-- that
everyone should go and start a startup? Of course not. But more people could
do it than do it now. At the moment, even the smartest students leave school
thinking they have to get a job. Actually what they need to do is make
something valuable. A job is one way to do that, but the more ambitious ones
will ordinarily be better off taking money from an investor than an employer.
Hackers tend to think business is for MBAs. But business administration is not
what you're doing in a startup. What you're doing is business _creation_. And
the first phase of that is mostly product creation-- that is, hacking. That's
the hard part. It's a lot harder to create something people love than to take
something people love and figure out how to make money from it.
Another thing that keeps people away from starting startups is the risk.
Someone with kids and a mortgage should think twice before doing it. But most
young hackers have neither.
And as the example of open source and blogging suggests, you'll enjoy it more,
even if you fail. You'll be working on your own thing, instead of going to
some office and doing what you're told. There may be more pain in your own
company, but it won't hurt as much.
That may be the greatest effect, in the long run, of the forces underlying
open source and blogging: finally ditching the old paternalistic employer-
employee relationship, and replacing it with a purely economic one, between
equals.
* * *
April 2006
_(This essay is derived from a talk at the 2006 Startup School.)_
The startups we've funded so far are pretty quick, but they seem quicker to
learn some lessons than others. I think it's because some things about
startups are kind of counterintuitive.
We've now invested in enough companies that I've learned a trick for
determining which points are the counterintuitive ones: they're the ones I
have to keep repeating.
So I'm going to number these points, and maybe with future startups I'll be
able to pull off a form of Huffman coding. I'll make them all read this, and
then instead of nagging them in detail, I'll just be able to say: _number
four!_
**1\. Release Early.**
The thing I probably repeat most is this recipe for a startup: get a version 1
out fast, then improve it based on users' reactions.
By "release early" I don't mean you should release something full of bugs, but
that you should release something minimal. Users hate bugs, but they don't
seem to mind a minimal version 1, if there's more coming soon.
There are several reasons it pays to get version 1 done fast. One is that this
is simply the right way to write software, whether for a startup or not. I've
been repeating that since 1993, and I haven't seen much since to contradict
it. I've seen a lot of startups die because they were too slow to release
stuff, and none because they were too quick.
One of the things that will surprise you if you build something popular is
that you won't know your users. Reddit now has almost half a million unique
visitors a month. Who are all those people? They have no idea. No web startup
does. And since you don't know your users, it's dangerous to guess what
they'll like. Better to release something and let them tell you.
Wufoo took this to heart and released their form-builder before the underlying
database. You can't even drive the thing yet, but 83,000 people came to sit in
the driver's seat and hold the steering wheel. And Wufoo got valuable feedback
from it: Linux users complained they used too much Flash, so they rewrote
their software not to. If they'd waited to release everything at once, they
wouldn't have discovered this problem till it was more deeply wired in.
Even if you had no users, it would still be important to release quickly,
because for a startup the initial release acts as a shakedown cruise. If
anything major is broken-- if the idea's no good, for example, or the founders
hate one another-- the stress of getting that first version out will expose
it. And if you have such problems you want to find them early.
Perhaps the most important reason to release early, though, is that it makes
you work harder. When you're working on something that isn't released,
problems are intriguing. In something that's out there, problems are alarming.
There is a lot more urgency once you release. And I think that's precisely why
people put it off. They know they'll have to work a lot harder once they do.
**2\. Keep Pumping Out Features.**
Of course, "release early" has a second component, without which it would be
bad advice. If you're going to start with something that doesn't do much, you
better improve it fast.
What I find myself repeating is "pump out features." And this rule isn't just
for the initial stages. This is something all startups should do for as long
as they want to be considered startups.
I don't mean, of course, that you should make your application ever more
complex. By "feature" I mean one unit of hacking-- one quantum of making
users' lives better.
As with exercise, improvements beget improvements. If you run every day,
you'll probably feel like running tomorrow. But if you skip running for a
couple weeks, it will be an effort to drag yourself out. So it is with
hacking: the more ideas you implement, the more ideas you'll have. You should
make your system better at least in some small way every day or two.
This is not just a good way to get development done; it is also a form of
marketing. Users love a site that's constantly improving. In fact, users
expect a site to improve. Imagine if you visited a site that seemed very good,
and then returned two months later and not one thing had changed. Wouldn't it
start to seem lame?
They'll like you even better when you improve in response to their comments,
because customers are used to companies ignoring them. If you're the rare
exception-- a company that actually listens-- you'll generate fanatical
loyalty. You won't need to advertise, because your users will do it for you.
This seems obvious too, so why do I have to keep repeating it? I think the
problem here is that people get used to how things are. Once a product gets
past the stage where it has glaring flaws, you start to get used to it, and
gradually whatever features it happens to have become its identity. For
example, I doubt many people at Yahoo (or Google for that matter) realized how
much better web mail could be till Paul Buchheit showed them.
I think the solution is to assume that anything you've made is far short of
what it could be. Force yourself, as a sort of intellectual exercise, to keep
thinking of improvements. Ok, sure, what you have is perfect. But if you had
to change something, what would it be?
If your product seems finished, there are two possible explanations: (a) it is
finished, or (b) you lack imagination. Experience suggests (b) is a thousand
times more likely.
**3\. Make Users Happy.**
Improving constantly is an instance of a more general rule: make users happy.
One thing all startups have in common is that they can't force anyone to do
anything. They can't force anyone to use their software, and they can't force
anyone to do deals with them. A startup has to sing for its supper. That's why
the successful ones make great things. They have to, or die.
When you're running a startup you feel like a little bit of debris blown about
by powerful winds. The most powerful wind is users. They can either catch you
and loft you up into the sky, as they did with Google, or leave you flat on
the pavement, as they do with most startups. Users are a fickle wind, but more
powerful than any other. If they take you up, no competitor can keep you down.
As a little piece of debris, the rational thing for you to do is not to lie
flat, but to curl yourself into a shape the wind will catch.
I like the wind metaphor because it reminds you how impersonal the stream of
traffic is. The vast majority of people who visit your site will be casual
visitors. It's them you have to design your site for. The people who really
care will find what they want by themselves.
The median visitor will arrive with their finger poised on the Back button.
Think about your own experience: most links you follow lead to something lame.
Anyone who has used the web for more than a couple weeks has been _trained_ to
click on Back after following a link. So your site has to say "Wait! Don't
click on Back. This site isn't lame. Look at this, for example."
There are two things you have to do to make people pause. The most important
is to explain, as concisely as possible, what the hell your site is about. How
often have you visited a site that seemed to assume you already knew what they
did? For example, the corporate site that says the company makes
> enterprise content management solutions for business that enable
> organizations to unify people, content and processes to minimize business
> risk, accelerate time-to-value and sustain lower total cost of ownership.
An established company may get away with such an opaque description, but no
startup can. A startup should be able to explain in one or two sentences
exactly what it does. And not just to users. You need this for everyone:
investors, acquirers, partners, reporters, potential employees, and even
current employees. You probably shouldn't even start a company to do something
that can't be described compellingly in one or two sentences.
The other thing I repeat is to give people everything you've got, right away.
If you have something impressive, try to put it on the front page, because
that's the only one most visitors will see. Though indeed there's a paradox
here: the more you push the good stuff toward the front, the more likely
visitors are to explore further.
In the best case these two suggestions get combined: you tell visitors what
your site is about by _showing_ them. One of the standard pieces of advice in
fiction writing is "show, don't tell." Don't say that a character's angry;
have him grind his teeth, or break his pencil in half. Nothing will explain
what your site does so well as using it.
The industry term here is "conversion." The job of your site is to convert
casual visitors into users-- whatever your definition of a user is. You can
measure this in your growth rate. Either your site is catching on, or it
isn't, and you must know which. If you have decent growth, you'll win in the
end, no matter how obscure you are now. And if you don't, you need to fix
something.
**4\. Fear the Right Things.**
Another thing I find myself saying a lot is "don't worry." Actually, it's more
often "don't worry about this; worry about that instead." Startups are right
to be paranoid, but they sometimes fear the wrong things.
Most visible disasters are not so alarming as they seem. Disasters are normal
in a startup: a founder quits, you discover a patent that covers what you're
doing, your servers keep crashing, you run into an insoluble technical
problem, you have to change your name, a deal falls through-- these are all
par for the course. They won't kill you unless you let them.
Nor will most competitors. A lot of startups worry "what if Google builds
something like us?" Actually big companies are not the ones you have to worry
about-- not even Google. The people at Google are smart, but no smarter than
you; they're not as motivated, because Google is not going to go out of
business if this one product fails; and even at Google they have a lot of
bureaucracy to slow them down.
What you should fear, as a startup, is not the established players, but other
startups you don't know exist yet. They're way more dangerous than Google
because, like you, they're cornered animals.
Looking just at existing competitors can give you a false sense of security.
You should compete against what someone else _could_ be doing, not just what
you can see people doing. A corollary is that you shouldn't relax just because
you have no visible competitors yet. No matter what your idea, there's someone
else out there working on the same thing.
That's the downside of it being easier to start a startup: more people are
doing it. But I disagree with Caterina Fake when she says that makes this a
bad time to start a startup. More people are starting startups, but not as
many more as could. Most college graduates still think they have to get a job.
The average person can't ignore something that's been beaten into their head
since they were three just because serving web pages recently got a lot
cheaper.
And in any case, competitors are not the biggest threat. Way more startups
hose themselves than get crushed by competitors. There are a lot of ways to do
it, but the three main ones are internal disputes, inertia, and ignoring
users. Each is, by itself, enough to kill you. But if I had to pick the worst,
it would be ignoring users. If you want a recipe for a startup that's going to
die, here it is: a couple of founders who have some great idea they know
everyone is going to love, and that's what they're going to build, no matter
what.
Almost everyone's initial plan is broken. If companies stuck to their initial
plans, Microsoft would be selling programming languages, and Apple would be
selling printed circuit boards. In both cases their customers told them what
their business should be-- and they were smart enough to listen.
As Richard Feynman said, the imagination of nature is greater than the
imagination of man. You'll find more interesting things by looking at the
world than you could ever produce just by thinking. This principle is very
powerful. It's why the best abstract painting still falls short of Leonardo,
for example. And it applies to startups too. No idea for a product could ever
be so clever as the ones you can discover by smashing a beam of prototypes
into a beam of users.
**5\. Commitment Is a Self-Fulfilling Prophecy.**
I now have enough experience with startups to be able to say what the most
important quality is in a startup founder, and it's not what you might think.
The most important quality in a startup founder is determination. Not
intelligence-- determination.
This is a little depressing. I'd like to believe Viaweb succeeded because we
were smart, not merely determined. A lot of people in the startup world want
to believe that. Not just founders, but investors too. They like the idea of
inhabiting a world ruled by intelligence. And you can tell they really believe
this, because it affects their investment decisions.
Time after time VCs invest in startups founded by eminent professors. This may
work in biotech, where a lot of startups simply commercialize existing
research, but in software you want to invest in students, not professors.
Microsoft, Yahoo, and Google were all founded by people who dropped out of
school to do it. What students lack in experience they more than make up in
dedication.
Of course, if you want to get rich, it's not enough merely to be determined.
You have to be smart too, right? I'd like to think so, but I've had an
experience that convinced me otherwise: I spent several years living in New
York.
You can lose quite a lot in the brains department and it won't kill you. But
lose even a little bit in the commitment department, and that will kill you
very rapidly.
Running a startup is like walking on your hands: it's possible, but it
requires extraordinary effort. If an ordinary employee were asked to do the
things a startup founder has to, he'd be very indignant. Imagine if you were
hired at some big company, and in addition to writing software ten times
faster than you'd ever had to before, they expected you to answer support
calls, administer the servers, design the web site, cold-call customers, find
the company office space, and go out and get everyone lunch.
And to do all this not in the calm, womb-like atmosphere of a big company, but
against a backdrop of constant disasters. That's the part that really demands
determination. In a startup, there's always some disaster happening. So if
you're the least bit inclined to find an excuse to quit, there's always one
right there.
But if you lack commitment, chances are it will have been hurting you long
before you actually quit. Everyone who deals with startups knows how important
commitment is, so if they sense you're ambivalent, they won't give you much
attention. If you lack commitment, you'll just find that for some mysterious
reason good things happen to your competitors but not to you. If you lack
commitment, it will seem to you that you're unlucky.
Whereas if you're determined to stick around, people will pay attention to
you, because odds are they'll have to deal with you later. You're a local, not
just a tourist, so everyone has to come to terms with you.
At Y Combinator we sometimes mistakenly fund teams who have the attitude that
they're going to give this startup thing a shot for three months, and if
something great happens, they'll stick with it-- "something great" meaning
either that someone wants to buy them or invest millions of dollars in them.
But if this is your attitude, "something great" is very unlikely to happen to
you, because both acquirers and investors judge you by your level of
commitment.
If an acquirer thinks you're going to stick around no matter what, they'll be
more likely to buy you, because if they don't and you stick around, you'll
probably grow, your price will go up, and they'll be left wishing they'd
bought you earlier. Ditto for investors. What really motivates investors, even
big VCs, is not the hope of good returns, but the fear of missing out. So
if you make it clear you're going to succeed no matter what, and the only
reason you need them is to make it happen a little faster, you're much more
likely to get money.
You can't fake this. The only way to convince everyone that you're ready to
fight to the death is actually to be ready to.
You have to be the right kind of determined, though. I carefully chose the
word determined rather than stubborn, because stubbornness is a disastrous
quality in a startup. You have to be determined, but flexible, like a running
back. A successful running back doesn't just put his head down and try to run
through people. He improvises: if someone appears in front of him, he runs
around them; if someone tries to grab him, he spins out of their grip; he'll
even run in the wrong direction briefly if that will help. The one thing he'll
never do is stand still.
**6\. There Is Always Room.**
I was talking recently to a startup founder about whether it might be good to
add a social component to their software. He said he didn't think so, because
the whole social thing was tapped out. Really? So in a hundred years the only
social networking sites will be the Facebook, MySpace, Flickr, and
Del.icio.us? Not likely.
There is always room for new stuff. At every point in history, even the
darkest bits of the dark ages, people were discovering things that made
everyone say "why didn't anyone think of that before?" We know this continued
to be true up till 2004, when the Facebook was founded-- though strictly
speaking someone else did think of that.
The reason we don't see the opportunities all around us is that we adjust to
however things are, and assume that's how things have to be. For example, it
would seem crazy to most people to try to make a better search engine than
Google. Surely that field, at least, is tapped out. Really? In a hundred
years-- or even twenty-- are people still going to search for information
using something like the current Google? Even Google probably doesn't think
that.
In particular, I don't think there's any limit to the number of startups.
Sometimes you hear people saying "All these guys starting startups now are
going to be disappointed. How many little startups are Google and Yahoo going
to buy, after all?" That sounds cleverly skeptical, but I can prove it's
mistaken. No one proposes that there's some limit to the number of people who
can be employed in an economy consisting of big, slow-moving companies with a
couple thousand people each. Why should there be any limit to the number who
could be employed by small, fast-moving companies with ten each? It seems to
me the only limit would be the number of people who want to work that hard.
The limit on the number of startups is not the number that can get acquired by
Google and Yahoo-- though it seems even that should be unlimited, if the
startups were actually worth buying-- but the amount of wealth that can be
created. And I don't think there's any limit on that, except cosmological
ones.
So for all practical purposes, there is no limit to the number of startups.
Startups make wealth, which means they make things people want, and if there's
a limit on the number of things people want, we are nowhere near it. I still
don't even have a flying car.
**7\. Don't Get Your Hopes Up.**
This is another one I've been repeating since long before Y Combinator. It was
practically the corporate motto at Viaweb.
Startup founders are naturally optimistic. They wouldn't do it otherwise. But
you should treat your optimism the way you'd treat the core of a nuclear
reactor: as a source of power that's also very dangerous. You have to build a
shield around it, or it will fry you.
The shielding of a reactor is not uniform; the reactor would be useless if it
were. It's pierced in a few places to let pipes in. An optimism shield has to
be pierced too. I think the place to draw the line is between what you expect
of yourself, and what you expect of other people. It's ok to be optimistic
about what you can do, but assume the worst about machines and other people.
This is particularly necessary in a startup, because you tend to be pushing
the limits of whatever you're doing. So things don't happen in the smooth,
predictable way they do in the rest of the world. Things change suddenly, and
usually for the worse.
Shielding your optimism is nowhere more important than with deals. If your
startup is doing a deal, just assume it's not going to happen. The VCs who say
they're going to invest in you aren't. The company that says they're going to
buy you isn't. The big customer who wants to use your system in their whole
company won't. Then if things work out you can be pleasantly surprised.
The reason I warn startups not to get their hopes up is not to save them from
being _disappointed_ when things fall through. It's for a more practical
reason: to prevent them from leaning their company against something that's
going to fall over, taking them with it.
For example, if someone says they want to invest in you, there's a natural
tendency to stop looking for other investors. That's why people proposing
deals seem so positive: they _want_ you to stop looking. And you want to stop
too, because doing deals is a pain. Raising money, in particular, is a huge
time sink. So you have to consciously force yourself to keep looking.
Even if you ultimately do the first deal, it will be to your advantage to have
kept looking, because you'll get better terms. Deals are dynamic; unless
you're negotiating with someone unusually honest, there's not a single point
where you shake hands and the deal's done. There are usually a lot of
subsidiary questions to be cleared up after the handshake, and if the other
side senses weakness-- if they sense you need this deal-- they will be very
tempted to screw you in the details.
VCs and corp dev guys are professional negotiators. They're trained to take
advantage of weakness. So while they're often nice guys, they just can't
help it. And as pros they do this more than you. So don't even try to bluff
them. The only way a startup can have any leverage in a deal is genuinely not
to need it. And if you don't believe in a deal, you'll be less likely to
depend on it.
So I want to plant a hypnotic suggestion in your heads: when you hear someone
say the words "we want to invest in you" or "we want to acquire you," I want
the following phrase to appear automatically in your head: _don't get your
hopes up._ Just continue running your company as if this deal didn't exist.
Nothing is more likely to make it close.
The way to succeed in a startup is to focus on the goal of getting lots of
users, and keep walking swiftly toward it while investors and acquirers scurry
alongside trying to wave money in your face.
**Speed, not Money**
The way I've described it, starting a startup sounds pretty stressful. It is.
When I talk to the founders of the companies we've funded, they all say the
same thing: I knew it would be hard, but I didn't realize it would be this
hard.
So why do it? It would be worth enduring a lot of pain and stress to do
something grand or heroic, but just to make money? Is making money really that
important?
No, not really. It seems ridiculous to me when people take business too
seriously. I regard making money as a boring errand to be got out of the way
as soon as possible. There is nothing grand or heroic about starting a startup
per se.
So why do I spend so much time thinking about startups? I'll tell you why.
Economically, a startup is best seen not as a way to get rich, but as a way to
work faster. You have to make a living, and a startup is a way to get that
done quickly, instead of letting it drag on through your whole life.
We take it for granted most of the time, but human life is fairly miraculous.
It is also palpably short. You're given this marvellous thing, and then poof,
it's taken away. You can see why people invent gods to explain it. But even to
people who don't believe in gods, life commands respect. There are times in
most of our lives when the days go by in a blur, and almost everyone has a
sense, when this happens, of wasting something precious. As Ben Franklin said,
if you love life, don't waste time, because time is what life is made of.
So no, there's nothing particularly grand about making money. That's not what
makes startups worth the trouble. What's important about startups is the
speed. By compressing the dull but necessary task of making a living into the
smallest possible time, you show respect for life, and there is something
grand about that.
* * *
May 2003
If Lisp is so great, why don't more people use it? I was asked this question
by a student in the audience at a talk I gave recently. Not for the first
time, either.
In languages, as in so many things, there's not much correlation between
popularity and quality. Why does John Grisham (_King of Torts_ sales rank, 44)
outsell Jane Austen (_Pride and Prejudice_ sales rank, 6191)? Would even
Grisham claim that it's because he's a better writer?
Here's the first sentence of _Pride and Prejudice:_
> It is a truth universally acknowledged, that a single man in possession of a
> good fortune must be in want of a wife.
"It is a truth universally acknowledged?" Long words for the first sentence of
a love story.
Like Jane Austen, Lisp looks hard. Its syntax, or lack of syntax, makes it
look completely unlike the languages most people are used to. Before I learned
Lisp, I was afraid of it too. I recently came across a notebook from 1983 in
which I'd written:
> I suppose I should learn Lisp, but it seems so foreign.
Fortunately, I was 19 at the time and not too resistant to learning new
things. I was so ignorant that learning almost anything meant learning new
things.
People frightened by Lisp make up other reasons for not using it. The standard
excuse, back when C was the default language, was that Lisp was too slow. Now
that Lisp dialects are among the faster languages available, that excuse has
gone away. Now the standard excuse is openly circular: that other languages
are more popular.
(Beware of such reasoning. It gets you Windows.)
Popularity is always self-perpetuating, but it's especially so in programming
languages. More libraries get written for popular languages, which makes them
still more popular. Programs often have to work with existing programs, and
this is easier if they're written in the same language, so languages spread
from program to program like a virus. And managers prefer popular languages,
because they give them more leverage over developers, who can more easily be
replaced.
Indeed, if programming languages were all more or less equivalent, there would
be little justification for using any but the most popular. But they aren't
all equivalent, not by a long shot. And that's why less popular languages,
like Jane Austen's novels, continue to survive at all. When everyone else is
reading the latest John Grisham novel, there will always be a few people
reading Jane Austen instead.
* * *
May 2001
_(These are some notes I made for a panel discussion on programming language design at MIT on May 10, 2001.)_
**1\. Programming Languages Are for People.**
Programming languages are how people talk to computers. The computer would be
just as happy speaking any language that was unambiguous. The reason we have
high level languages is because people can't deal with machine language. The
point of programming languages is to prevent our poor frail human brains from
being overwhelmed by a mass of detail.
Architects know that some kinds of design problems are more personal than
others. One of the cleanest, most abstract design problems is designing
bridges. There your job is largely a matter of spanning a given distance with
the least material. The other end of the spectrum is designing chairs. Chair
designers have to spend their time thinking about human butts.
Software varies in the same way. Designing algorithms for routing data through
a network is a nice, abstract problem, like designing bridges. Whereas
designing programming languages is like designing chairs: it's all about
dealing with human weaknesses.
Most of us hate to acknowledge this. Designing systems of great mathematical
elegance sounds a lot more appealing to most of us than pandering to human
weaknesses. And there is a role for mathematical elegance: some kinds of
elegance make programs easier to understand. But elegance is not an end in
itself.
And when I say languages have to be designed to suit human weaknesses, I don't
mean that languages have to be designed for bad programmers. In fact I think
you ought to design for the best programmers, but even the best programmers
have limitations. I don't think anyone would like programming in a language
where all the variables were the letter x with integer subscripts.
**2\. Design for Yourself and Your Friends.**
If you look at the history of programming languages, a lot of the best ones
were languages designed for their own authors to use, and a lot of the worst
ones were designed for other people to use.
When languages are designed for other people, it's always a specific group of
other people: people not as smart as the language designer. So you get a
language that talks down to you. Cobol is the most extreme case, but a lot of
languages are pervaded by this spirit.
It has nothing to do with how abstract the language is. C is pretty low-level,
but it was designed for its authors to use, and that's why hackers like it.
The argument for designing languages for bad programmers is that there are
more bad programmers than good programmers. That may be so. But those few good
programmers write a disproportionately large percentage of the software.
I'm interested in the question, how do you design a language that the very
best hackers will like? I happen to think this is identical to the question,
how do you design a good programming language?, but even if it isn't, it is at
least an interesting question.
**3\. Give the Programmer as Much Control as Possible.**
Many languages (especially the ones designed for other people) have the
attitude of a governess: they try to prevent you from doing things that they
think aren't good for you. I like the opposite approach: give the programmer
as much control as you can.
When I first learned Lisp, what I liked most about it was that it considered
me an equal partner. In the other languages I had learned up till then, there
was the language and there was my program, written in the language, and the
two were very separate. But in Lisp the functions and macros I wrote were just
like those that made up the language itself. I could rewrite the language if I
wanted. It had the same appeal as open-source software.
**4\. Aim for Brevity.**
Brevity is underestimated and even scorned. But if you look into the hearts of
hackers, you'll see that they really love it. How many times have you heard
hackers speak fondly of how in, say, APL, they could do amazing things with
just a couple lines of code? I think anything that really smart people really
love is worth paying attention to.
I think almost anything you can do to make programs shorter is good. There
should be lots of library functions; anything that can be implicit should be;
the syntax should be terse to a fault; even the names of things should be
short.
And it's not only programs that should be short. The manual should be thin as
well. A good part of manuals is taken up with clarifications and reservations
and warnings and special cases. If you force yourself to shorten the manual,
in the best case you do it by fixing the things in the language that required
so much explanation.
**5\. Admit What Hacking Is.**
A lot of people wish that hacking was mathematics, or at least something like
a natural science. I think hacking is more like architecture. Architecture is
related to physics, in the sense that architects have to design buildings that
don't fall down, but the actual goal of architects is to make great buildings,
not to make discoveries about statics.
What hackers like to do is make great programs. And I think, at least in our
own minds, we have to remember that it's an admirable thing to write great
programs, even when this work doesn't translate easily into the conventional
intellectual currency of research papers. Intellectually, it is just as
worthwhile to design a language programmers will love as it is to design a
horrible one that embodies some idea you can publish a paper about.
**1\. How to Organize Big Libraries?**
Libraries are becoming an increasingly important component of programming
languages. They're also getting bigger, and this can be dangerous. If it takes
longer to find the library function that will do what you want than it would
take to write it yourself, then all that code is doing nothing but make your
manual thick. (The Symbolics manuals were a case in point.) So I think we will
have to work on ways to organize libraries. The ideal would be to design them
so that the programmer could guess what library call would do the right thing.
**2\. Are People Really Scared of Prefix Syntax?**
This is an open problem in the sense that I have wondered about it for years
and still don't know the answer. Prefix syntax seems perfectly natural to me,
except possibly for math. But it could be that a lot of Lisp's unpopularity is
simply due to having an unfamiliar syntax. Whether to do anything about it, if
it is true, is another question.
**3\. What Do You Need for Server-Based Software?**
I think a lot of the most exciting new applications that get written in the
next twenty years will be Web-based applications, meaning programs that sit on
the server and talk to you through a Web browser. And to write these kinds of
programs we may need some new things.
One thing we'll need is support for the new way that server-based apps get
released. Instead of having one or two big releases a year, like desktop
software, server-based apps get released as a series of small changes. You may
have as many as five or ten releases a day. And as a rule everyone will always
use the latest version.
You know how you can design programs to be debuggable? Well, server-based
software likewise has to be designed to be changeable. You have to be able to
change it easily, or at least to know what is a small change and what is a
momentous one.
Another thing that might turn out to be useful for server-based software,
surprisingly, is continuations. In Web-based software you can use something
like continuation-passing style to get the effect of subroutines in the
inherently stateless world of a Web session. Maybe it would be worthwhile
having actual continuations, if it was not too expensive.
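To make that concrete, here is a minimal sketch in Python (mine, not from the talk; the session store and handler names are hypothetical) of how storing an explicit continuation can give a stateless request/response cycle the feel of a subroutine call:

```python
# Hypothetical sketch: each handler returns a response and stores "what to
# do next" as a continuation keyed by session id, so a later request can
# resume the computation even though the server keeps no call stack.

sessions = {}  # assumed in-memory store: session id -> continuation

def ask_name(session_id):
    # First request: ask a question and record the continuation that
    # should consume the user's answer.
    sessions[session_id] = lambda answer: greet(answer)
    return "What is your name?"

def greet(name):
    return f"Hello, {name}!"

def handle_reply(session_id, answer):
    # Later request: pop the stored continuation and resume it.
    resume = sessions.pop(session_id)
    return resume(answer)

print(ask_name("s1"))             # -> What is your name?
print(handle_reply("s1", "Ada"))  # -> Hello, Ada!
```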
**4\. What New Abstractions Are Left to Discover?**
I'm not sure how reasonable a hope this is, but one thing I would really love
to do, personally, is discover a new abstraction-- something that would make
as much of a difference as having first class functions or recursion or even
keyword parameters. This may be an impossible dream. These things don't get
discovered that often. But I am always looking.
**1\. You Can Use Whatever Language You Want.**
Writing application programs used to mean writing desktop software. And in
desktop software there is a big bias toward writing the application in the
same language as the operating system. And so ten years ago, writing software
pretty much meant writing software in C. Eventually a tradition evolved:
application programs must not be written in unusual languages. And this
tradition had so long to develop that nontechnical people like managers and
venture capitalists also learned it.
Server-based software blows away this whole model. With server-based software
you can use any language you want. Almost nobody understands this yet
(especially not managers and venture capitalists). A few hackers understand
it, and that's why we even hear about new, indy languages like Perl and
Python. We're not hearing about Perl and Python because people are using them
to write Windows apps.
What this means for us, as people interested in designing programming
languages, is that there is now potentially an actual audience for our work.
**2\. Speed Comes from Profilers.**
Language designers, or at least language implementors, like to write compilers
that generate fast code. But I don't think this is what makes languages fast
for users. Knuth pointed out long ago that speed only matters in a few
critical bottlenecks. And anyone who's tried it knows that you can't guess
where these bottlenecks are. Profilers are the answer.
Language designers are solving the wrong problem. Users don't need benchmarks
to run fast. What they need is a language that can show them what parts of
their own programs need to be rewritten. That's where speed comes from in
practice. So maybe it would be a net win if language implementors took half
the time they would have spent doing compiler optimizations and spent it
writing a good profiler instead.
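As a rough illustration of that division of labor (my example, not part of the talk), Python's built-in profiler already does the job of showing you where the time actually goes:

```python
# Minimal sketch: measure where time goes instead of guessing.
import cProfile

def slow_square_sum(n):
    # Deliberately naive inner loop -- the likely bottleneck.
    total = 0
    for i in range(n):
        total += i * i
    return total

def program():
    return sum(slow_square_sum(10_000) for _ in range(1_000))

# Prints call counts and cumulative time per function, pointing at the
# real hot spot rather than the one you guessed.
cProfile.run("program()")
```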
**3\. You Need an Application to Drive the Design of a Language.**
This may not be an absolute rule, but it seems like the best languages all
evolved together with some application they were being used to write. C was
written by people who needed it for systems programming. Lisp was developed
partly to do symbolic differentiation, and McCarthy was so eager to get
started that he was writing differentiation programs even in the first paper
on Lisp, in 1960.
It's especially good if your application solves some new problem. That will
tend to drive your language to have new features that programmers need. I
personally am interested in writing a language that will be good for writing
server-based applications.
[During the panel, Guy Steele also made this point, with the additional
suggestion that the application should not consist of writing the compiler for
your language, unless your language happens to be intended for writing
compilers.]
**4\. A Language Has to Be Good for Writing Throwaway Programs.**
You know what a throwaway program is: something you write quickly for some
limited task. I think if you looked around you'd find that a lot of big,
serious programs started as throwaway programs. I would not be surprised if
_most_ programs started as throwaway programs. And so if you want to make a
language that's good for writing software in general, it has to be good for
writing throwaway programs, because that is the larval stage of most software.
**5\. Syntax Is Connected to Semantics.**
It's traditional to think of syntax and semantics as being completely
separate. This will sound shocking, but it may be that they aren't. I think
that what you want in your language may be related to how you express it.
I was talking recently to Robert Morris, and he pointed out that operator
overloading is a bigger win in languages with infix syntax. In a language with
prefix syntax, any function you define is effectively an operator. If you want
to define a plus for a new type of number you've made up, you can just define
a new function to add them. If you do that in a language with infix syntax,
there's a big difference in appearance between the use of an overloaded
operator and a function call.
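Here is a rough Python illustration of the difference (mine, not Robert Morris's; the number type is made up): with infix syntax you need operator overloading before a user-defined type looks native, while a plain prefix-style function already has the same shape as every other call.

```python
# Hypothetical toy type to show why overloading matters with infix syntax.
class Gaussian:
    """A made-up Gaussian-integer-like number."""
    def __init__(self, re, im):
        self.re, self.im = re, im

    # Without this overload, users are stuck writing add(a, b),
    # which looks nothing like the built-in a + b.
    def __add__(self, other):
        return Gaussian(self.re + other.re, self.im + other.im)

def add(a, b):
    # In a prefix-syntax language this plain function would already look
    # just like every other call, so no special mechanism is needed.
    return Gaussian(a.re + b.re, a.im + b.im)

a, b = Gaussian(1, 2), Gaussian(3, 4)
c1 = a + b      # overloaded infix operator
c2 = add(a, b)  # ordinary prefix-style call, same result
```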
**1\. New Programming Languages.**
Back in the 1970s it was fashionable to design new programming languages.
Recently it hasn't been. But I think server-based software will make new
languages fashionable again. With server-based software, you can use any
language you want, so if someone does design a language that actually seems
better than others that are available, there will be people who take a risk
and use it.
**2\. Time-Sharing.**
Richard Kelsey gave this as an idea whose time has come again in the last
panel, and I completely agree with him. My guess (and Microsoft's guess, it
seems) is that much computing will move from the desktop onto remote servers.
In other words, time-sharing is back. And I think there will need to be
support for it at the language level. For example, I know that Richard and
Jonathan Rees have done a lot of work implementing process scheduling within
Scheme 48.
**3\. Efficiency.**
Recently it was starting to seem that computers were finally fast enough. More
and more we were starting to hear about byte code, which implies to me at
least that we feel we have cycles to spare. But I don't think we will, with
server-based software. Someone is going to have to pay for the servers that
the software runs on, and the number of users they can support per machine
will be the divisor of their capital cost.
So I think efficiency will matter, at least in computational bottlenecks. It
will be especially important to do i/o fast, because server-based applications
do a lot of i/o.
It may turn out that byte code is not a win, in the end. Sun and Microsoft
seem to be facing off in a kind of a battle of the byte codes at the moment.
But they're doing it because byte code is a convenient place to insert
themselves into the process, not because byte code is in itself a good idea.
It may turn out that this whole battleground gets bypassed. That would be kind
of amusing.
**1\. Clients.**
This is just a guess, but my guess is that the winning model for most
applications will be purely server-based. Designing software that works on the
assumption that everyone will have your client is like designing a society on
the assumption that everyone will just be honest. It would certainly be
convenient, but you have to assume it will never happen.
I think there will be a proliferation of devices that have some kind of Web
access, and all you'll be able to assume about them is that they can support
simple html and forms. Will you have a browser on your cell phone? Will there
be a phone in your palm pilot? Will your blackberry get a bigger screen? Will
you be able to browse the Web on your gameboy? Your watch? I don't know. And I
don't have to know if I bet on everything just being on the server. It's just
so much more robust to have all the brains on the server.
**2\. Object-Oriented Programming.**
I realize this is a controversial one, but I don't think object-oriented
programming is such a big deal. I think it is a fine model for certain kinds
of applications that need that specific kind of data structure, like window
systems, simulations, and cad programs. But I don't see why it ought to be the
model for all programming.
I think part of the reason people in big companies like object-oriented
programming is because it yields a lot of what looks like work. Something that
might naturally be represented as, say, a list of integers, can now be
represented as a class with all kinds of scaffolding and hustle and bustle.
Another attraction of object-oriented programming is that methods give you
some of the effect of first class functions. But this is old news to Lisp
programmers. When you have actual first class functions, you can just use them
in whatever way is appropriate to the task at hand, instead of forcing
everything into a mold of classes and methods.
What this means for language design, I think, is that you shouldn't build
object-oriented programming in too deeply. Maybe the answer is to offer more
general, underlying stuff, and let people design whatever object systems they
want as libraries.
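To illustrate the earlier point about first-class functions (a sketch of mine, not from the talk), passing a function directly does the job that a class and a method would otherwise be invented for:

```python
# With first-class functions, a behavior is just a value you pass around.
def apply_twice(f, x):
    return f(f(x))

print(apply_twice(lambda n: n + 3, 10))  # 16

# The class-and-method version of the same idea adds scaffolding
# just to get something callable.
class AddThree:
    def call(self, n):
        return n + 3

def apply_twice_oo(op, x):
    return op.call(op.call(x))

print(apply_twice_oo(AddThree(), 10))  # 16
```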
**3\. Design by Committee.**
Having your language designed by a committee is a big pitfall, and not just
for the reasons everyone knows about. Everyone knows that committees tend to
yield lumpy, inconsistent designs. But I think a greater danger is that they
won't take risks. When one person is in charge he can take risks that a
committee would never agree on.
Is it necessary to take risks to design a good language though? Many people
might suspect that language design is something where you should stick fairly
close to the conventional wisdom. I bet this isn't true. In everything else
people do, reward is proportionate to risk. Why should language design be any
different?
* * *
December 2020
Jessica and I have certain words that have special significance when we're
talking about startups. The highest compliment we can pay to founders is to
describe them as "earnest." This is not by itself a guarantee of success. You
could be earnest but incapable. But when founders are both formidable (another
of our words) and earnest, they're as close to unstoppable as you get.
Earnestness sounds like a boring, even Victorian virtue. It seems a bit of an
anachronism that people in Silicon Valley would care about it. Why does this
matter so much?
When you call someone earnest, you're making a statement about their motives.
It means both that they're doing something for the right reasons, and that
they're trying as hard as they can. If we imagine motives as vectors, it means
both the direction and the magnitude are right. Though these are of course
related: when people are doing something for the right reasons, they try
harder.
The reason motives matter so much in Silicon Valley is that so many people
there have the wrong ones. Starting a successful startup makes you rich and
famous. So a lot of the people trying to start them are doing it for those
reasons. Instead of what? Instead of interest in the problem for its own sake.
That is the root of earnestness.
It's also the hallmark of a nerd. Indeed, when people describe themselves as
"x nerds," what they mean is that they're interested in x for its own sake,
and not because it's cool to be interested in x, or because of what they can
get from it. They're saying they care so much about x that they're willing to
sacrifice seeming cool for its sake.
A _genuine interest_ in something is a very powerful motivator for some
people, the most powerful motivator of all. Which is why it's what Jessica
and I look for in founders. But as well as being a source of strength, it's
also a source of vulnerability. Caring constrains you. The earnest can't
easily reply in kind to mocking banter, or put on a cool facade of nihil
admirari. They care too much. They are doomed to be the straight man. That's a
real disadvantage in your _teenage years_, when mocking banter and nihil
admirari often have the upper hand. But it becomes an advantage later.
It's a commonplace now that the kids who were nerds in high school become the
cool kids' bosses later on. But people misunderstand why this happens. It's
not just because the nerds are smarter, but also because they're more earnest.
When the problems get harder than the fake ones you're given in high school,
caring about them starts to matter.
Does it always matter? Do the earnest always win? Not always. It probably
doesn't matter much in politics, or in crime, or in certain types of business
that are similar to crime, like gambling, personal injury law, patent
trolling, and so on. Nor does it matter in academic fields at the more _bogus_
end of the spectrum. And though I don't know enough to say for sure, it may
not matter in some kinds of humor: it may be possible to be completely cynical
and still be very funny.
Looking at the list of fields I mentioned, there's an obvious pattern. Except
possibly for humor, these are all types of work I'd avoid like the plague. So
that could be a useful heuristic for deciding which fields to work in: how
much does earnestness matter? Which can in turn presumably be inferred from
the prevalence of nerds at the top.
Along with "nerd," another word that tends to be associated with earnestness
is "naive." The earnest often seem naive. It's not just that they don't have
the motives other people have. They often don't fully grasp that such motives
exist. Or they may know intellectually that they do, but because they don't
feel them, they forget about them.
It works to be slightly naive not just about motives but also, believe it or
not, about the problems you're working on. Naive optimism can compensate for
the bit rot that _rapid change_ causes in established beliefs. You plunge into
some problem saying "How hard can it be?", and then after solving it you learn
that it was till recently insoluble.
Naivete is an obstacle for anyone who wants to seem sophisticated, and this is
one reason would-be intellectuals find it so difficult to understand Silicon
Valley. It hasn't been safe for such people to use the word "earnest" outside
scare quotes since Oscar Wilde wrote "The Importance of Being Earnest" in
1895. And yet when you zoom in on Silicon Valley, right into _Jessica
Livingston's brain_, that's what her x-ray vision is seeking out in founders.
Earnestness! Who'd have guessed? Reporters literally can't believe it when
founders making piles of money say that they started their companies to make
the world better. The situation seems made for mockery. How can these founders
be so naive as not to realize how implausible they sound?
Though those asking this question don't realize it, that's not a rhetorical
question.
A lot of founders are faking it, of course, particularly the smaller fry, and
the soon to be smaller fry. But not all of them. There are a significant
number of founders who really are interested in the problem they're solving
mainly for its own sake.
Why shouldn't there be? We have no difficulty believing that people would be
interested in history or math or even old bus tickets for their own sake. Why
can't there be people interested in self-driving cars or social networks for
their own sake? When you look at the question from this side, it seems obvious
there would be. And isn't it likely that having a deep interest in something
would be a source of great energy and resilience? It is in every other field.
The question really is why we have a blind spot about business. And the answer
to that is obvious if you know enough history. For most of history, making
large amounts of money has not been very intellectually interesting. In
preindustrial times it was never far from robbery, and some areas of business
still retain that character, except using lawyers instead of soldiers.
But there are other areas of business where the work is genuinely interesting.
Henry Ford got to spend much of his time working on interesting technical
problems, and for the last several decades the trend in that direction has
been accelerating. It's much easier now to make a lot of money by working on
something you're interested in than it was _50 years ago_. And that, rather
than how fast they grow, may be the most important change that startups
represent. Though indeed, the fact that the work is genuinely interesting is a
big part of why it gets done so fast.
Can you imagine a more important change than one in the relationship between
intellectual curiosity and money? These are two of the most powerful forces in
the world, and in my lifetime they've become significantly more aligned. How
could you not be fascinated to watch something like this happening in real
time?
I meant this essay to be about earnestness generally, and now I've gone and
talked about startups again. But I suppose at least it serves as an example of
an x nerd in the wild.
* * *
February 2022
Writing about something, even something you know well, usually shows you that
you didn't know it as well as you thought. Putting ideas into words is a
severe test. The first words you choose are usually wrong; you have to rewrite
sentences over and over to get them exactly right. And your ideas won't just
be imprecise, but incomplete too. Half the ideas that end up in an essay will
be ones you thought of while you were writing it. Indeed, that's why I write
them.
Once you publish something, the convention is that whatever you wrote was what
you thought before you wrote it. These were your ideas, and now you've
expressed them. But you know this isn't true. You know that putting your ideas
into words changed them. And not just the ideas you published. Presumably
there were others that turned out to be too broken to fix, and those you
discarded instead.
It's not just having to commit your ideas to specific words that makes writing
so exacting. The real test is reading what you've written. You have to pretend
to be a neutral reader who knows nothing of what's in your head, only what you
wrote. When he reads what you wrote, does it seem correct? Does it seem
complete? If you make an effort, you can read your writing as if you were a
complete stranger, and when you do the news is usually bad. It takes me many
cycles before I can get an essay past the stranger. But the stranger is
rational, so you always can, if you ask him what he needs. If he's not
satisfied because you failed to mention x or didn't qualify some sentence
sufficiently, then you mention x or add more qualifications. Happy now? It may
cost you some nice sentences, but you have to resign yourself to that. You
just have to make them as good as you can and still satisfy the stranger.
This much, I assume, won't be that controversial. I think it will accord with
the experience of anyone who has tried to write about anything nontrivial.
There may exist people whose thoughts are so perfectly formed that they just
flow straight into words. But I've never known anyone who could do this, and
if I met someone who said they could, it would seem evidence of their
limitations rather than their ability. Indeed, this is a trope in movies: the
guy who claims to have a plan for doing some difficult thing, and who when
questioned further, taps his head and says "It's all up here." Everyone
watching the movie knows what that means. At best the plan is vague and
incomplete. Very likely there's some undiscovered flaw that invalidates it
completely. At best it's a plan for a plan.
In precisely defined domains it's possible to form complete ideas in your
head. People can play chess in their heads, for example. And mathematicians
can do some amount of math in their heads, though they don't seem to feel sure
of a proof over a certain length till they write it down. But this only seems
possible with ideas you can express in a formal language. Arguably what
such people are doing is putting ideas into words in their heads. I can to
some extent write essays in my head. I'll sometimes think of a paragraph while
walking or lying in bed that survives nearly unchanged in the final version.
But really I'm writing when I do this. I'm doing the mental part of writing;
my fingers just aren't moving as I do it.
You can know a great deal about something without writing about it. Can you
ever know so much that you wouldn't learn more from trying to explain what you
know? I don't think so. I've written about at least two subjects I know well —
Lisp hacking and startups — and in both cases I learned a lot from writing
about them. In both cases there were things I didn't consciously realize till
I had to explain them. And I don't think my experience was anomalous. A great
deal of knowledge is unconscious, and experts have if anything a higher
proportion of unconscious knowledge than beginners.
I'm not saying that writing is the best way to explore all ideas. If you have
ideas about architecture, presumably the best way to explore them is to build
actual buildings. What I'm saying is that however much you learn from
exploring ideas in other ways, you'll still learn new things from writing
about them.
Putting ideas into words doesn't have to mean writing, of course. You can also
do it the old way, by talking. But in my experience, writing is the stricter
test. You have to commit to a single, optimal sequence of words. Less can go
unsaid when you don't have tone of voice to carry meaning. And you can focus
in a way that would seem excessive in conversation. I'll often spend 2 weeks
on an essay and reread drafts 50 times. If you did that in conversation it
would seem evidence of some kind of mental disorder. If you're lazy, of
course, writing and talking are equally useless. But if you want to push
yourself to get things right, writing is the steeper hill.
The reason I've spent so long establishing this rather obvious point is that
it leads to another that many people will find shocking. If writing down your
ideas always makes them more precise and more complete, then no one who hasn't
written about a topic has fully formed ideas about it. And someone who never
writes has no fully formed ideas about anything nontrivial.
It feels to them as if they do, especially if they're not in the habit of
critically examining their own thinking. Ideas can feel complete. It's only
when you try to put them into words that you discover they're not. So if you
never subject your ideas to that test, you'll not only never have fully formed
ideas, but also never realize it.
Putting ideas into words is certainly no guarantee that they'll be right. Far
from it. But though it's not a sufficient condition, it is a necessary one.
** |
|
November 2021
_(This essay is derived from a talk at the Cambridge Union.)_
When I was a kid, I'd have said there was no such thing as good taste. My
father told me so. Some
people like some things, and other people like other things, and who's to say
who's right?
It seemed so obvious that there was no such thing as good taste that it was
only through indirect evidence that I realized my father was wrong. And that's
what I'm going to give you here: a proof by reductio ad absurdum. If we start
from the premise that there's no such thing as good taste, we end up with
conclusions that are obviously false, and therefore the premise must be wrong.
We'd better start by saying what good taste is. There's a narrow sense in
which it refers to aesthetic judgements and a broader one in which it refers
to preferences of any kind. The strongest proof would be to show that taste
exists in the narrowest sense, so I'm going to talk about taste in art. You
have better taste than me if the art you like is better than the art I like.
If there's no such thing as good taste, then there's no such thing as _good
art_. Because if there is such a thing as good art, it's easy to tell which of
two people has better taste. Show them a lot of works by artists they've never
seen before and ask them to choose the best, and whoever chooses the better
art has better taste.
So if you want to discard the concept of good taste, you also have to discard
the concept of good art. And that means you have to discard the possibility of
people being good at making it. Which means there's no way for artists to be
good at their jobs. And not just visual artists, but anyone who is in any
sense an artist. You can't have good actors, or novelists, or composers, or
dancers either. You can have popular novelists, but not good ones.
We don't realize how far we'd have to go if we discarded the concept of good
taste, because we don't even debate the most obvious cases. But it doesn't
just mean we can't say which of two famous painters is better. It means we
can't say that any painter is better than a randomly chosen eight year old.
That was how I realized my father was wrong. I started studying painting. And
it was just like other kinds of work I'd done: you could do it well, or badly,
and if you tried hard, you could get better at it. And it was obvious that
Leonardo and Bellini were much better at it than me. That gap between us was
not imaginary. They were so good. And if they could be good, then art could be
good, and there was such a thing as good taste after all.
Now that I've explained how to show there is such a thing as good taste, I
should also explain why people think there isn't. There are two reasons. One
is that there's always so much disagreement about taste. Most people's
response to art is a tangle of unexamined impulses. Is the artist famous? Is
the subject attractive? Is this the sort of art they're supposed to like? Is
it hanging in a famous museum, or reproduced in a big, expensive book? In
practice most people's response to art is dominated by such extraneous
factors.
And the people who do claim to have good taste are so often mistaken. The
paintings admired by the so-called experts in one generation are often so
different from those admired a few generations later. It's easy to conclude
there's nothing real there at all. It's only when you isolate this force, for
example by trying to paint and comparing your work to Bellini's, that you can
see that it does in fact exist.
The other reason people doubt that art can be good is that there doesn't seem
to be any room in the art for this goodness. The argument goes like this.
Imagine several people looking at a work of art and judging how good it is. If
being good art really is a property of objects, it should be in the object
somehow. But it doesn't seem to be; it seems to be something happening in the
heads of each of the observers. And if they disagree, how do you choose
between them?
The solution to this puzzle is to realize that the purpose of art is to work
on its human audience, and humans have a lot in common. And to the extent the
things an object acts upon respond in the same way, that's arguably what it
means for the object to have the corresponding property. If everything a
particle interacts with behaves as if the particle had a mass of _m_, then it
has a mass of _m_. So the distinction between "objective" and "subjective" is
not binary, but a matter of degree, depending on how much the subjects have in
common. Particles interacting with one another are at one pole, but people
interacting with art are not all the way at the other; their reactions aren't
_random_.
Because people's responses to art aren't random, art can be designed to
operate on people, and be good or bad depending on how effectively it does so.
Much as a vaccine can be. If someone were talking about the ability of a
vaccine to confer immunity, it would seem very frivolous to object that
conferring immunity wasn't really a property of vaccines, because acquiring
immunity is something that happens in the immune system of each individual
person. Sure, people's immune systems vary, and a vaccine that worked on one
might not work on another, but that doesn't make it meaningless to talk about
the effectiveness of a vaccine.
The situation with art is messier, of course. You can't measure effectiveness
by simply taking a vote, as you do with vaccines. You have to imagine the
responses of subjects with a deep knowledge of art, and enough clarity of mind
to be able to ignore extraneous influences like the fame of the artist. And
even then you'd still see some disagreement. People do vary, and judging art
is hard, especially recent art. There is definitely not a total order either
of works or of people's ability to judge them. But there is equally definitely
a partial order of both. So while it's not possible to have perfect taste, it
is possible to have good taste.
**Thanks** to the Cambridge Union for inviting me, and to Trevor Blackwell,
Jessica Livingston, and Robert Morris for reading drafts of this.
---
* * *
--- |
|
December 2020
As I was deciding what to write about next, I was surprised to find that two
separate essays I'd been planning to write were actually the same.
The first is about how to ace your Y Combinator interview. There has been so
much nonsense written about this topic that I've been meaning for years to
write something telling founders the truth.
The second is about something politicians sometimes say, that the only way to
become a billionaire is by exploiting people, and why this is mistaken.
Keep reading, and you'll learn both simultaneously.
I know the politicians are mistaken because it was my job to predict which
people will become billionaires. I think I can truthfully say that I know as
much about how to do this as anyone. If the key to becoming a billionaire,
the defining feature of billionaires, was to exploit people, then I, as a
professional billionaire scout, would surely realize this and look for people
who would be good at it, just as an NFL scout looks for speed in wide
receivers.
But aptitude for exploiting people is not what Y Combinator looks for at all.
In fact, it's the opposite of what they look for. I'll tell you what they do
look for, by explaining how to convince Y Combinator to fund you, and you can
see for yourself.
What YC looks for, above all, is founders who understand some group of users
and can make what they want. This is so important that it's YC's motto: "Make
something people want."
A big company can to some extent force unsuitable products on unwilling
customers, but a startup doesn't have the power to do that. A startup must
sing for its supper, by making things that genuinely delight its customers.
Otherwise it will never get off the ground.
Here's where things get difficult, both for you as a founder and for the YC
partners trying to decide whether to fund you. In a market economy, it's hard
to make something people want that they don't already have. That's the great
thing about market economies. If other people both knew about this need and
were able to satisfy it, they already would be, and there would be no room for
your startup.
Which means the conversation during your YC interview will have to be about
something new: either a new need, or a new way to satisfy one. And not just
new, but uncertain. If it were certain that the need existed and that you
could satisfy it, that certainty would be reflected in large and rapidly
growing revenues, and you wouldn't be seeking seed funding.
So the YC partners have to guess both whether you've discovered a real need,
and whether you'll be able to satisfy it. That's what they are, at least in
this part of their job: professional guessers. They have 1001 heuristics for
doing this, and I'm not going to tell you all of them, but I'm happy to tell
you the most important ones, because these can't be faked; the only way to
"hack" them would be to do what you should be doing anyway as a founder.
The first thing the partners will try to figure out, usually, is whether what
you're making will ever be something a lot of people want. It doesn't have to
be something a lot of people want now. The product and the market will both
evolve, and will influence each other's evolution. But in the end there has to
be something with a huge market. That's what the partners will be trying to
figure out: is there a path to a huge market?
Sometimes it's obvious there will be a huge market. If _Boom_ manages to ship
an airliner at all, international airlines will have to buy it. But usually
it's not obvious. Usually the path to a huge market is by growing a small
market. This idea is important enough that it's worth coining a phrase for, so
let's call one of these small but growable markets a "larval market."
The perfect example of a larval market might be Apple's market when they were
founded in 1976. In 1976, not many people wanted their own computer. But more
and more started to want one, till now every 10 year old on the planet wants a
computer (but calls it a "phone").
The ideal combination is the group of founders who are _"living in the
future"_ in the sense of being at the leading edge of some kind of change, and
who are building something they themselves want. Most super-successful
startups are of this type. Steve Wozniak wanted a computer. Mark Zuckerberg
wanted to engage online with his college friends. Larry and Sergey wanted to
find things on the web. All these founders were building things they and their
peers wanted, and the fact that they were at the leading edge of change meant
that more people would want these things in the future.
But although the ideal larval market is oneself and one's peers, that's not
the only kind. A larval market might also be regional, for example. You build
something to serve one location, and then expand to others.
The crucial feature of the initial market is that it exist. That may seem like
an obvious point, but the lack of it is the biggest flaw in most startup
ideas. There have to be some people who want what you're building right now,
and want it so urgently that they're willing to use it, bugs and all, even
though you're a small company they've never heard of. There don't have to be
many, but there have to be some. As long as you have some users, there are
straightforward ways to get more: build new features they want, seek out more
people like them, get them to refer you to their friends, and so on. But these
techniques all require some initial seed group of users.
So this is one thing the YC partners will almost certainly dig into during
your interview. Who are your first users going to be, and how do you know they
want this? If I had to decide whether to fund startups based on a single
question, it would be "How do you know people want this?"
The most convincing answer is "Because we and our friends want it." It's even
better when this is followed by the news that you've already built a
prototype, and even though it's very crude, your friends are using it, and
it's spreading by word of mouth. If you can say that and you're not lying, the
partners will switch from default no to default yes. Meaning you're in unless
there's some other disqualifying flaw.
That is a hard standard to meet, though. Airbnb didn't meet it. They had the
first part. They had made something they themselves wanted. But it wasn't
spreading. So don't feel bad if you don't hit this gold standard of
convincingness. If Airbnb didn't hit it, it must be too high.
In practice, the YC partners will be satisfied if they feel that you have a
deep understanding of your users' needs. And the Airbnbs did have that. They
were able to tell us all about what motivated hosts and guests. They knew from
first-hand experience, because they'd been the first hosts. We couldn't ask
them a question they didn't know the answer to. We ourselves were not very
excited about the idea as users, but we knew this didn't prove anything,
because there were lots of successful startups we hadn't been excited about as
users. We were able to say to ourselves "They seem to know what they're
talking about. Maybe they're onto something. It's not growing yet, but maybe
they can figure out how to make it grow during YC." Which they did, about
three weeks into the batch.
The best thing you can do in a YC interview is to teach the partners about
your users. So if you want to prepare for your interview, one of the best ways
to do it is to go talk to your users and find out exactly what they're
thinking. Which is what you should be doing anyway.
This may sound strangely credulous, but the YC partners want to rely on the
founders to tell them about the market. Think about how VCs typically judge
the potential market for an idea. They're not ordinarily domain experts
themselves, so they forward the idea to someone who is, and ask for their
opinion. YC doesn't have time to do this, but if the YC partners can convince
themselves that the founders both (a) know what they're talking about and (b)
aren't lying, they don't need outside domain experts. They can use the
founders themselves as domain experts when evaluating their own idea.
This is why YC interviews aren't pitches. To give as many founders as possible
a chance to get funded, we made interviews as short as we could: 10 minutes.
That is not enough time for the partners to figure out, through the indirect
evidence in a pitch, whether you know what you're talking about and aren't
lying. They need to dig in and ask you questions. There's not enough time for
sequential access. They need random access.
The worst advice I ever heard about how to succeed in a YC interview is that
you should take control of the interview and make sure to deliver the message
you want to. In other words, turn the interview into a pitch. ⟨elaborate
expletive⟩. It is so annoying when people try to do that. You ask them a
question, and instead of answering it, they deliver some obviously
prefabricated blob of pitch. It eats up 10 minutes really fast.
There is no one who can give you accurate advice about what to do in a YC
interview except a current or former YC partner. People who've merely been
interviewed, even successfully, have no idea of this, but interviews take all
sorts of different forms depending on what the partners want to know about
most. Sometimes they're all about the founders, other times they're all about
the idea. Sometimes some very narrow aspect of the idea. Founders sometimes
walk away from interviews complaining that they didn't get to explain their
idea completely. True, but they explained enough.
Since a YC interview consists of questions, the way to do it well is to answer
them well. Part of that is answering them candidly. The partners don't expect
you to know everything. But if you don't know the answer to a question, don't
try to bullshit your way out of it. The partners, like most experienced
investors, are professional bullshit detectors, and you are (hopefully) an
amateur bullshitter. And if you try to bullshit them and fail, they may not
even tell you that you failed. So it's better to be honest than to try to sell
them. If you don't know the answer to a question, say you don't, and tell them
how you'd go about finding it, or tell them the answer to some related
question.
If you're asked, for example, what could go wrong, the worst possible answer
is "nothing." Instead of convincing them that your idea is bullet-proof, this
will convince them that you're a fool or a liar. Far better to go into
gruesome detail. That's what experts do when you ask what could go wrong. The
partners know that your idea is risky. That's what a good bet looks like at
this stage: a tiny probability of a huge outcome.
Ditto if they ask about competitors. Competitors are rarely what kills
startups. Poor execution does. But you should know who your competitors are,
and tell the YC partners candidly what your relative strengths and weaknesses
are. Because the YC partners know that competitors don't kill startups, they
won't hold competitors against you too much. They will, however, hold it
against you if you seem either to be unaware of competitors, or to be
minimizing the threat they pose. They may not be sure whether you're clueless
or lying, but they don't need to be.
The partners don't expect your idea to be perfect. This is seed investing. At
this stage, all they can expect are promising hypotheses. But they do expect
you to be thoughtful and honest. So if trying to make your idea seem perfect
causes you to come off as glib or clueless, you've sacrificed something you
needed for something you didn't.
If the partners are sufficiently convinced that there's a path to a big
market, the next question is whether you'll be able to find it. That in turn
depends on three things: the general qualities of the founders, their specific
expertise in this domain, and the relationship between them. How determined
are the founders? Are they good at building things? Are they resilient enough
to keep going when things go wrong? How strong is their friendship?
Though the Airbnbs only did ok in the idea department, they did spectacularly
well in this department. The story of how they'd funded themselves by making
Obama- and McCain-themed breakfast cereal was the single most important factor
in our decision to fund them. They didn't realize it at the time, but what
seemed to them an irrelevant story was in fact fabulously good evidence of
their qualities as founders. It showed they were resourceful and determined,
and could work together.
It wasn't just the cereal story that showed that, though. The whole interview
showed that they cared. They weren't doing this just for the money, or because
startups were cool. The reason they were working so hard on this company was
because it was their project. They had discovered an interesting new idea, and
they just couldn't let it go.
Mundane as it sounds, that's the most powerful motivator of all, not just in
startups, but in most ambitious undertakings: to be _genuinely interested_ in
what you're building. This is what really drives billionaires, or at least the
ones who become billionaires from starting companies. The company is their
project.
One thing few people realize about billionaires is that all of them could have
stopped sooner. They could have gotten acquired, or found someone else to run
the company. Many founders do. The ones who become really rich are the ones
who keep working. And what makes them keep working is not just money. What
keeps them working is the same thing that keeps anyone else working when they
could stop if they wanted to: that there's nothing else they'd rather do.
That, not exploiting people, is the defining quality of people who become
billionaires from starting companies. So that's what YC looks for in founders:
authenticity. People's motives for starting startups are usually mixed.
They're usually doing it from some combination of the desire to make money,
the desire to seem cool, genuine interest in the problem, and unwillingness to
work for someone else. The last two are more powerful motivators than the
first two. It's ok for founders to want to make money or to seem cool. Most
do. But if the founders seem like they're doing it _just_ to make money or
_just_ to seem cool, they're not likely to succeed on a big scale. The
founders who are doing it for the money will take the first sufficiently large
acquisition offer, and the ones who are doing it to seem cool will rapidly
discover that there are much less painful ways of seeming cool.
Y Combinator certainly sees founders whose m.o. is to exploit people. YC is a
magnet for them, because they want the YC brand. But when the YC partners
detect someone like that, they reject them. If bad people made good founders,
the YC partners would face a moral dilemma. Fortunately they don't, because
bad people make bad founders. This exploitative type of founder is not going
to succeed on a large scale, and in fact probably won't even succeed on a
small one, because they're always going to be taking shortcuts. They see YC
itself as a shortcut.
Their exploitation usually begins with their own cofounders, which is
disastrous, since the cofounders' relationship is the foundation of the
company. Then it moves on to the users, which is also disastrous, because the
sort of early adopters a successful startup wants as its initial users are the
hardest to fool. The best this kind of founder can hope for is to keep the
edifice of deception tottering along until some acquirer can be tricked into
buying it. But that kind of acquisition is never very big.
If professional billionaire scouts know that exploiting people is not the
skill to look for, why do some politicians think this is the defining quality
of billionaires?
I think they start from the feeling that it's wrong that one person could have
so much more money than another. It's understandable where that feeling comes
from. It's in our DNA, and even in the DNA of other species.
If they limited themselves to saying that it made them feel bad when one
person had so much more money than other people, who would disagree? It makes
me feel bad too, and I think people who make a lot of money have a moral
obligation to use it for the common good. The mistake they make is to jump
from feeling bad that some people are much richer than others to the
conclusion that there's no legitimate way to make a very large amount of
money. Now we're getting into statements that are not only falsifiable, but
false.
There are certainly some people who become rich by doing bad things. But there
are also plenty of people who behave badly and don't make that much from it.
There is no correlation (in fact, probably an inverse correlation) between
how badly you behave and how much money you make.
The greatest danger of this nonsense may not even be that it sends policy
astray, but that it misleads ambitious people. Can you imagine a better way to
destroy social mobility than by telling poor kids that the way to get rich is
by exploiting people, while the rich kids know, from having watched the
preceding generation do it, how it's really done?
I'll tell you how it's really done, so you can at least tell your own kids the
truth. It's all about users. The most reliable way to become a billionaire is
to start a company that _grows fast_, and the way to grow fast is to make what
users want. Newly started startups have no choice but to delight users, or
they'll never even get rolling. But this never stops being the lodestar, and
bigger companies take their eye off it at their peril. Stop delighting users,
and eventually someone else will.
Users are what the partners want to know about in YC interviews, and what I
want to know about when I talk to founders that we funded ten years ago and
who are billionaires now. What do users want? What new things could you build
for them? Founders who've become billionaires are always eager to talk about
that topic. That's how they became billionaires.
** |
|
May 2001 _(I wrote this article to help myself understand exactly what
McCarthy discovered. You don't need to know this stuff to program in Lisp, but
it should be helpful to anyone who wants to understand the essence of Lisp
both in the sense of its origins and its semantic core. The fact that it has
such a core is one of Lisp's distinguishing features, and the reason why,
unlike other languages, Lisp has dialects.)_
In 1960, John McCarthy published a remarkable paper in which he did for
programming something like what Euclid did for geometry. He showed how, given
a handful of simple operators and a notation for functions, you can build a
whole programming language. He called this language Lisp, for "List
Processing," because one of his key ideas was to use a simple data structure
called a _list_ for both code and data.
It's worth understanding what McCarthy discovered, not just as a landmark in
the history of computers, but as a model for what programming is tending to
become in our own time. It seems to me that there have been two really clean,
consistent models of programming so far: the C model and the Lisp model. These
two seem points of high ground, with swampy lowlands between them. As
computers have grown more powerful, the new languages being developed have
been moving steadily toward the Lisp model. A popular recipe for new
programming languages in the past 20 years has been to take the C model of
computing and add to it, piecemeal, parts taken from the Lisp model, like
runtime typing and garbage collection.
In this article I'm going to try to explain in the simplest possible terms
what McCarthy discovered. The point is not just to learn about an interesting
theoretical result someone figured out forty years ago, but to show where
languages are heading. The unusual thing about Lisp, in fact the defining
quality of Lisp, is that it can be written in itself. To understand what
McCarthy meant by this, we're going to retrace his steps, with his
mathematical notation translated into running Common Lisp code.
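To make the "list for both code and data" idea concrete before going further, here is a minimal sketch of my own in Common Lisp (the language the rest of the article uses). It is not McCarthy's notation or the article's own code, just an illustration:

```lisp
;; A quoted list is ordinary data: you can take it apart like any list.
;; Handed to EVAL, the very same list is treated as code.
(defparameter *expr* '(+ 1 2 3))

(first *expr*)   ; => +        (the list's first element, viewed as data)
(rest *expr*)    ; => (1 2 3)
(eval *expr*)    ; => 6        (the same list, viewed as code and run)

;; Because programs are just lists, a program can build another program
;; the way it would build any other list, then run it.
(eval (cons '* (rest *expr*)))   ; => 6, i.e. (* 1 2 3)
```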
---
---
* * *
--- |
|
May 2003
_(This essay is derived from a guest lecture at Harvard, which incorporated
an earlier talk at Northeastern.)_
When I finished grad school in computer science I went to art school to study
painting. A lot of people seemed surprised that someone interested in
computers would also be interested in painting. They seemed to think that
hacking and painting were very different kinds of work-- that hacking was
cold, precise, and methodical, and that painting was the frenzied expression
of some primal urge.
Both of these images are wrong. Hacking and painting have a lot in common. In
fact, of all the different types of people I've known, hackers and painters
are among the most alike.
What hackers and painters have in common is that they're both makers. Along
with composers, architects, and writers, what hackers and painters are trying
to do is make good things. They're not doing research per se, though if in the
course of trying to make good things they discover some new technique, so much
the better.
I've never liked the term "computer science." The main reason I don't like it
is that there's no such thing. Computer science is a grab bag of tenuously
related areas thrown together by an accident of history, like Yugoslavia. At
one end you have people who are really mathematicians, but call what they're
doing computer science so they can get DARPA grants. In the middle you have
people working on something like the natural history of computers-- studying
the behavior of algorithms for routing data through networks, for example. And
then at the other extreme you have the hackers, who are trying to write
interesting software, and for whom computers are just a medium of expression,
as concrete is for architects or paint for painters. It's as if
mathematicians, physicists, and architects all had to be in the same
department.
Sometimes what the hackers do is called "software engineering," but this term
is just as misleading. Good software designers are no more engineers than
architects are. The border between architecture and engineering is not sharply
defined, but it's there. It falls between what and how: architects decide what
to do, and engineers figure out how to do it.
What and how should not be kept too separate. You're asking for trouble if you
try to decide what to do without understanding how to do it. But hacking can
certainly be more than just deciding how to implement some spec. At its best,
it's creating the spec-- though it turns out the best way to do that is to
implement it.
Perhaps one day "computer science" will, like Yugoslavia, get broken up into
its component parts. That might be a good thing. Especially if it meant
independence for my native land, hacking.
Bundling all these different types of work together in one department may be
convenient administratively, but it's confusing intellectually. That's the
other reason I don't like the name "computer science." Arguably the people in
the middle are doing something like an experimental science. But the people at
either end, the hackers and the mathematicians, are not actually doing
science.
The mathematicians don't seem bothered by this. They happily set to work
proving theorems like the other mathematicians over in the math department,
and probably soon stop noticing that the building they work in says ``computer
science'' on the outside. But for the hackers this label is a problem. If what
they're doing is called science, it makes them feel they ought to be acting
scientific. So instead of doing what they really want to do, which is to
design beautiful software, hackers in universities and research labs feel they
ought to be writing research papers.
In the best case, the papers are just a formality. Hackers write cool
software, and then write a paper about it, and the paper becomes a proxy for
the achievement represented by the software. But often this mismatch causes
problems. It's easy to drift away from building beautiful things toward
building ugly things that make more suitable subjects for research papers.
Unfortunately, beautiful things don't always make the best subjects for
papers. Number one, research must be original-- and as anyone who has written
a PhD dissertation knows, the way to be sure that you're exploring virgin
territory is to stake out a piece of ground that no one wants. Number two,
research must be substantial-- and awkward systems yield meatier papers,
because you can write about the obstacles you have to overcome in order to get
things done. Nothing yields meaty problems like starting with the wrong
assumptions. Most of AI is an example of this rule; if you assume that
knowledge can be represented as a list of predicate logic expressions whose
arguments represent abstract concepts, you'll have a lot of papers to write
about how to make this work. As Ricky Ricardo used to say, "Lucy, you got a
lot of explaining to do."
The way to create something beautiful is often to make subtle tweaks to
something that already exists, or to combine existing ideas in a slightly new
way. This kind of work is hard to convey in a research paper.
So why do universities and research labs continue to judge hackers by
publications? For the same reason that "scholastic aptitude" gets measured by
simple-minded standardized tests, or the productivity of programmers gets
measured in lines of code. These tests are easy to apply, and there is nothing
so tempting as an easy test that kind of works.
Measuring what hackers are actually trying to do, designing beautiful
software, would be much more difficult. You need a good sense of design to
judge good design. And there is no correlation, except possibly a negative
one, between people's ability to recognize good design and their confidence
that they can.
The only external test is time. Over time, beautiful things tend to thrive,
and ugly things tend to get discarded. Unfortunately, the amounts of time
involved can be longer than human lifetimes. Samuel Johnson said it took a
hundred years for a writer's reputation to converge. You have to wait for the
writer's influential friends to die, and then for all their followers to die.
I think hackers just have to resign themselves to having a large random
component in their reputations. In this they are no different from other
makers. In fact, they're lucky by comparison. The influence of fashion is not
nearly so great in hacking as it is in painting.
There are worse things than having people misunderstand your work. A worse
danger is that you will yourself misunderstand your work. Related fields are
where you go looking for ideas. If you find yourself in the computer science
department, there is a natural temptation to believe, for example, that
hacking is the applied version of what theoretical computer science is the
theory of. All the time I was in graduate school I had an uncomfortable
feeling in the back of my mind that I ought to know more theory, and that it
was very remiss of me to have forgotten all that stuff within three weeks of
the final exam.
Now I realize I was mistaken. Hackers need to understand the theory of
computation about as much as painters need to understand paint chemistry. You
need to know how to calculate time and space complexity and about Turing
completeness. You might also want to remember at least the concept of a state
machine, in case you have to write a parser or a regular expression library.
Painters in fact have to remember a good deal more about paint chemistry than
that.
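In case even the concept of a state machine has faded, here is a minimal sketch of my own (not from the essay): a tiny recognizer, written in Common Lisp, for strings of one or more a's followed by one or more b's. The whole idea is that the current state is just a value, and each input character either moves you to the next state or rejects the input.

```lisp
;; A hand-rolled state machine with three states: :START, :AS, :BS.
(defun accepts-a+b+ (string)
  "Return T if STRING is one or more a's followed by one or more b's."
  (let ((state :start))
    (loop for ch across string do
      (setf state
            (case state
              (:start (if (char= ch #\a) :as (return-from accepts-a+b+ nil)))
              (:as    (cond ((char= ch #\a) :as)
                            ((char= ch #\b) :bs)
                            (t (return-from accepts-a+b+ nil))))
              (:bs    (if (char= ch #\b)
                          :bs
                          (return-from accepts-a+b+ nil))))))
    (eq state :bs)))

;; (accepts-a+b+ "aaabb") => T
;; (accepts-a+b+ "abab")  => NIL
;; (accepts-a+b+ "bbb")   => NIL
```

A real parser or regular expression engine is mostly this pattern scaled up: more states, with the transitions generated from a grammar or a pattern instead of written by hand.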
I've found that the best sources of ideas are not the other fields that have
the word "computer" in their names, but the other fields inhabited by makers.
Painting has been a much richer source of ideas than the theory of
computation.
For example, I was taught in college that one ought to figure out a program
completely on paper before even going near a computer. I found that I did not
program this way. I found that I liked to program sitting in front of a
computer, not a piece of paper. Worse still, instead of patiently writing out
a complete program and assuring myself it was correct, I tended to just spew
out code that was hopelessly broken, and gradually beat it into shape.
Debugging, I was taught, was a kind of final pass where you caught typos and
oversights. The way I worked, it seemed like programming consisted of
debugging.
For a long time I felt bad about this, just as I once felt bad that I didn't
hold my pencil the way they taught me to in elementary school. If I had only
looked over at the other makers, the painters or the architects, I would have
realized that there was a name for what I was doing: sketching. As far as I
can tell, the way they taught me to program in college was all wrong. You
should figure out programs as you're writing them, just as writers and
painters and architects do.
Realizing this has real implications for software design. It means that a
programming language should, above all, be malleable. A programming language
is for thinking of programs, not for expressing programs you've already
thought of. It should be a pencil, not a pen. Static typing would be a fine
idea if people actually did write programs the way they taught me to in
college. But that's not how any of the hackers I know write programs. We need
a language that lets us scribble and smudge and smear, not a language where
you have to sit with a teacup of types balanced on your knee and make polite
conversation with a strict old aunt of a compiler.
While we're on the subject of static typing, identifying with the makers will
save us from another problem that afflicts the sciences: math envy. Everyone
in the sciences secretly believes that mathematicians are smarter than they
are. I think mathematicians also believe this. At any rate, the result is that
scientists tend to make their work look as mathematical as possible. In a
field like physics this probably doesn't do much harm, but the further you get
from the natural sciences, the more of a problem it becomes.
A page of formulas just looks so impressive. (Tip: for extra impressiveness,
use Greek variables.) And so there is a great temptation to work on problems
you can treat formally, rather than problems that are, say, important.
If hackers identified with other makers, like writers and painters, they
wouldn't feel tempted to do this. Writers and painters don't suffer from math
envy. They feel as if they're doing something completely unrelated. So are
hackers, I think.
If universities and research labs keep hackers from doing the kind of work
they want to do, perhaps the place for them is in companies. Unfortunately,
most companies won't let hackers do what they want either. Universities and
research labs force hackers to be scientists, and companies force them to be
engineers.
I only discovered this myself quite recently. When Yahoo bought Viaweb, they
asked me what I wanted to do. I had never liked the business side very much,
and said that I just wanted to hack. When I got to Yahoo, I found that what
hacking meant to them was implementing software, not designing it. Programmers
were seen as technicians who translated the visions (if that is the word) of
product managers into code.
This seems to be the default plan in big companies. They do it because it
decreases the standard deviation of the outcome. Only a small percentage of
hackers can actually design software, and it's hard for the people running a
company to pick these out. So instead of entrusting the future of the software
to one brilliant hacker, most companies set things up so that it is designed
by committee, and the hackers merely implement the design.
If you want to make money at some point, remember this, because this is one of
the reasons startups win. Big companies want to decrease the standard
deviation of design outcomes because they want to avoid disasters. But when
you damp oscillations, you lose the high points as well as the low. This is
not a problem for big companies, because they don't win by making great
products. Big companies win by sucking less than other big companies.
So if you can figure out a way to get in a design war with a company big
enough that its software is designed by product managers, they'll never be
able to keep up with you. These opportunities are not easy to find, though.
It's hard to engage a big company in a design war, just as it's hard to engage
an opponent inside a castle in hand to hand combat. It would be pretty easy to
write a better word processor than Microsoft Word, for example, but Microsoft,
within the castle of their operating system monopoly, probably wouldn't even
notice if you did.
The place to fight design wars is in new markets, where no one has yet managed
to establish any fortifications. That's where you can win big by taking the
bold approach to design, and having the same people both design and implement
the product. Microsoft themselves did this at the start. So did Apple. And
Hewlett-Packard. I suspect almost every successful startup has.
So one way to build great software is to start your own startup. There are two
problems with this, though. One is that in a startup you have to do so much
besides write software. At Viaweb I considered myself lucky if I got to hack a
quarter of the time. And the things I had to do the other three quarters of
the time ranged from tedious to terrifying. I have a benchmark for this,
because I once had to leave a board meeting to have some cavities filled. I
remember sitting back in the dentist's chair, waiting for the drill, and
feeling like I was on vacation.
The other problem with startups is that there is not much overlap between the
kind of software that makes money and the kind that's interesting to write.
Programming languages are interesting to write, and Microsoft's first product
was one, in fact, but no one will pay for programming languages now. If you
want to make money, you tend to be forced to work on problems that are too
nasty for anyone to solve for free.
All makers face this problem. Prices are determined by supply and demand, and
there is just not as much demand for things that are fun to work on as there
is for things that solve the mundane problems of individual customers. Acting
in off-Broadway plays just doesn't pay as well as wearing a gorilla suit in
someone's booth at a trade show. Writing novels doesn't pay as well as writing
ad copy for garbage disposals. And hacking programming languages doesn't pay
as well as figuring out how to connect some company's legacy database to their
Web server.
I think the answer to this problem, in the case of software, is a concept
known to nearly all makers: the day job. This phrase began with musicians, who
perform at night. More generally, it means that you have one kind of work you
do for money, and another for love.
Nearly all makers have day jobs early in their careers. Painters and writers
notoriously do. If you're lucky you can get a day job that's closely related
to your real work. Musicians often seem to work in record stores. A hacker
working on some programming language or operating system might likewise be
able to get a day job using it.
When I say that the answer is for hackers to have day jobs, and work on
beautiful software on the side, I'm not proposing this as a new idea. This is
what open-source hacking is all about. What I'm saying is that open-source is
probably the right model, because it has been independently confirmed by all
the other makers.
It seems surprising to me that any employer would be reluctant to let hackers
work on open-source projects. At Viaweb, we would have been reluctant to hire
anyone who didn't. When we interviewed programmers, the main thing we cared
about was what kind of software they wrote in their spare time. You can't do
anything really well unless you love it, and if you love to hack you'll
inevitably be working on projects of your own.
Because hackers are makers rather than scientists, the right place to look for
metaphors is not in the sciences, but among other kinds of makers. What else
can painting teach us about hacking?
One thing we can learn, or at least confirm, from the example of painting is
how to learn to hack. You learn to paint mostly by doing it. Ditto for
hacking. Most hackers don't learn to hack by taking college courses in
programming. They learn to hack by writing programs of their own at age
thirteen. Even in college classes, you learn to hack mostly by hacking.
Because painters leave a trail of work behind them, you can watch them learn
by doing. If you look at the work of a painter in chronological order, you'll
find that each painting builds on things that have been learned in previous
ones. When there's something in a painting that works very well, you can
usually find version 1 of it in a smaller form in some earlier painting.
I think most makers work this way. Writers and architects seem to as well.
Maybe it would be good for hackers to act more like painters, and regularly
start over from scratch, instead of continuing to work for years on one
project, and trying to incorporate all their later ideas as revisions.
The fact that hackers learn to hack by doing it is another sign of how
different hacking is from the sciences. Scientists don't learn science by
doing it, but by doing labs and problem sets. Scientists start out doing work
that's perfect, in the sense that they're just trying to reproduce work
someone else has already done for them. Eventually, they get to the point
where they can do original work. Whereas hackers, from the start, are doing
original work; it's just very bad. So hackers start original, and get good,
and scientists start good, and get original.
The other way makers learn is from examples. For a painter, a museum is a
reference library of techniques. For hundreds of years it has been part of the
traditional education of painters to copy the works of the great masters,
because copying forces you to look closely at the way a painting is made.
Writers do this too. Benjamin Franklin learned to write by summarizing the
points in the essays of Addison and Steele and then trying to reproduce them.
Raymond Chandler did the same thing with detective stories.
Hackers, likewise, can learn to program by looking at good programs-- not just
at what they do, but the source code too. One of the less publicized benefits
of the open-source movement is that it has made it easier to learn to program.
When I learned to program, we had to rely mostly on examples in books. The one
big chunk of code available then was Unix, but even this was not open source.
Most of the people who read the source read it in illicit photocopies of John
Lions' book, which though written in 1977 was not allowed to be published
until 1996.
Another example we can take from painting is the way that paintings are
created by gradual refinement. Paintings usually begin with a sketch.
Gradually the details get filled in. But it is not merely a process of filling
in. Sometimes the original plans turn out to be mistaken. Countless paintings,
when you look at them in x-rays, turn out to have limbs that have been moved or
facial features that have been readjusted.
Here's a case where we can learn from painting. I think hacking should work
this way too. It's unrealistic to expect that the specifications for a program
will be perfect. You're better off if you admit this up front, and write
programs in a way that allows specifications to change on the fly.
(The structure of large companies makes this hard for them to do, so here is
another place where startups have an advantage.)
Everyone by now presumably knows about the danger of premature optimization. I
think we should be just as worried about premature design-- deciding too early
what a program should do.
The right tools can help us avoid this danger. A good programming language
should, like oil paint, make it easy to change your mind. Dynamic typing is a
win here because you don't have to commit to specific data representations up
front. But the key to flexibility, I think, is to make the language very
abstract. The easiest program to change is one that's very short.
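As a concrete illustration (a tiny sketch of my own, not something from the essay), here is the sort of thing that means in a dynamically typed language like Lisp: nothing in this function commits to how the amounts are represented, so you can change that decision later without touching the code.

```lisp
;; TOTAL never says what kind of numbers these are, or what kind of
;; sequence holds them, so those choices remain open.
(defun total (amounts)
  (reduce #'+ amounts))

(total '(1 2 3))      ; a list of integers   => 6
(total #(1.5 2.5))    ; a vector of floats   => 4.0
(total '(1/3 2/3))    ; exact rationals      => 1
```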
This sounds like a paradox, but a great painting has to be better than it has
to be. For example, when Leonardo painted the portrait of Ginevra de Benci in
the National Gallery, he put a juniper bush behind her head. In it he
carefully painted each individual leaf. Many painters might have thought, this
is just something to put in the background to frame her head. No one will look
that closely at it.
Not Leonardo. How hard he worked on part of a painting didn't depend at all on
how closely he expected anyone to look at it. He was like Michael Jordan.
Relentless.
Relentlessness wins because, in the aggregate, unseen details become visible.
When people walk by the portrait of Ginevra de Benci, their attention is often
immediately arrested by it, even before they look at the label and notice that
it says Leonardo da Vinci. All those unseen details combine to produce
something that's just stunning, like a thousand barely audible voices all
singing in tune.
Great software, likewise, requires a fanatical devotion to beauty. If you look
inside good software, you find that parts no one is ever supposed to see are
beautiful too. I'm not claiming I write great software, but I know that when
it comes to code I behave in a way that would make me eligible for
prescription drugs if I approached everyday life the same way. It drives me
crazy to see code that's badly indented, or that uses ugly variable names.
If a hacker were a mere implementor, turning a spec into code, then he could
just work his way through it from one end to the other like someone digging a
ditch. But if the hacker is a creator, we have to take inspiration into
account.
In hacking, like painting, work comes in cycles. Sometimes you get excited
about some new project and you want to work sixteen hours a day on it. Other
times nothing seems interesting.
To do good work you have to take these cycles into account, because they're
affected by how you react to them. When you're driving a car with a manual
transmission on a hill, you have to back off the clutch sometimes to avoid
stalling. Backing off can likewise prevent ambition from stalling. In both
painting and hacking there are some tasks that are terrifyingly ambitious, and
others that are comfortingly routine. It's a good idea to save some easy tasks
for moments when you would otherwise stall.
In hacking, this can literally mean saving up bugs. I like debugging: it's the
one time that hacking is as straightforward as people think it is. You have a
totally constrained problem, and all you have to do is solve it. Your program
is supposed to do x. Instead it does y. Where does it go wrong? You know
you're going to win in the end. It's as relaxing as painting a wall.
The example of painting can teach us not only how to manage our own work, but
how to work together. A lot of the great art of the past is the work of
multiple hands, though there may only be one name on the wall next to it in
the museum. Leonardo was an apprentice in the workshop of Verrocchio and
painted one of the angels in his Baptism of Christ. This sort of thing was the
rule, not the exception. Michelangelo was considered especially dedicated for
insisting on painting all the figures on the ceiling of the Sistine Chapel
himself.
As far as I know, when painters worked together on a painting, they never
worked on the same parts. It was common for the master to paint the principal
figures and for assistants to paint the others and the background. But you
never had one guy painting over the work of another.
I think this is the right model for collaboration in software too. Don't push
it too far. When a piece of code is being hacked by three or four different
people, no one of whom really owns it, it will end up being like a common-room.
It will tend to feel bleak and abandoned, and accumulate cruft. The
right way to collaborate, I think, is to divide projects into sharply defined
modules, each with a definite owner, and with interfaces between them that are
as carefully designed and, if possible, as articulated as programming
languages.
Like painting, most software is intended for a human audience. And so hackers,
like painters, must have empathy to do really great work. You have to be able
to see things from the user's point of view.
When I was a kid I was always being told to look at things from someone else's
point of view. What this always meant in practice was to do what someone else
wanted, instead of what I wanted. This of course gave empathy a bad name, and
I made a point of not cultivating it.
Boy, was I wrong. It turns out that looking at things from other people's
point of view is practically the secret of success. It doesn't necessarily
mean being self-sacrificing. Far from it. Understanding how someone else sees
things doesn't imply that you'll act in his interest; in some situations-- in
war, for example-- you want to do exactly the opposite.
Most makers make things for a human audience. And to engage an audience you
have to understand what they need. Nearly all the greatest paintings are
paintings of people, for example, because people are what people are
interested in.
Empathy is probably the single most important difference between a good hacker
and a great one. Some hackers are quite smart, but when it comes to empathy
are practically solipsists. It's hard for such people to design great software,
because they can't see things from the user's point of view.
One way to tell how good people are at empathy is to watch them explain a
technical question to someone without a technical background. We probably all
know people who, though otherwise smart, are just comically bad at this. If
someone asks them at a dinner party what a programming language is, they'll
say something like ``Oh, a high-level language is what the compiler uses as
input to generate object code.'' High-level language? Compiler? Object code?
Someone who doesn't know what a programming language is obviously doesn't know
what these things are, either.
Part of what software has to do is explain itself. So to write good software
you have to understand how little users understand. They're going to walk up
to the software with no preparation, and it had better do what they guess it
will, because they're not going to read the manual. The best system I've ever
seen in this respect was the original Macintosh, in 1985. It did what software
almost never does: it just worked.
Source code, too, should explain itself. If I could get people to remember
just one quote about programming, it would be the one at the beginning of
_Structure and Interpretation of Computer Programs._
> Programs should be written for people to read, and only incidentally for
> machines to execute.
You need to have empathy not just for your users, but for your readers. It's
in your interest, because you'll be one of them. Many a hacker has written a
program only to find on returning to it six months later that he has no idea
how it works. I know several people who've sworn off Perl after such
experiences.
Lack of empathy is associated with intelligence, to the point that there is
even something of a fashion for it in some places. But I don't think there's
any correlation. You can do well in math and the natural sciences without
having to learn empathy, and people in these fields tend to be smart, so the
two qualities have come to be associated. But there are plenty of dumb people
who are bad at empathy too. Just listen to the people who call in with
questions on talk shows. They ask whatever it is they're asking in such a
roundabout way that the hosts often have to rephrase the question for them.
So, if hacking works like painting and writing, is it as cool? After all, you
only get one life. You might as well spend it working on something great.
Unfortunately, the question is hard to answer. There is always a big time lag
in prestige. It's like light from a distant star. Painting has prestige now
because of great work people did five hundred years ago. At the time, no one
thought these paintings were as important as we do today. It would have seemed
very odd to people at the time that Federico da Montefeltro, the Duke of
Urbino, would one day be known mostly as the guy with the strange nose in a
painting by Piero della Francesca.
So while I admit that hacking doesn't seem as cool as painting now, we should
remember that painting itself didn't seem as cool in its glory days as it does
now.
What we can say with some confidence is that these are the glory days of
hacking. In most fields the great work is done early on. The paintings made
between 1430 and 1500 are still unsurpassed. Shakespeare appeared just as
professional theater was being born, and pushed the medium so far that every
playwright since has had to live in his shadow. Albrecht Durer did the same
thing with engraving, and Jane Austen with the novel.
Over and over we see the same pattern. A new medium appears, and people are so
excited about it that they explore most of its possibilities in the first
couple generations. Hacking seems to be in this phase now.
Painting was not, in Leonardo's time, as cool as his work helped make it. How
cool hacking turns out to be will depend on what we can do with this new
medium.
May 2006
_(This essay is derived from a keynote at Xtech.)_
Could you reproduce Silicon Valley elsewhere, or is there something unique
about it?
It wouldn't be surprising if it were hard to reproduce in other countries,
because you couldn't reproduce it in most of the US either. What does it take
to make a silicon valley even here?
What it takes is the right people. If you could get the right ten thousand
people to move from Silicon Valley to Buffalo, Buffalo would become Silicon
Valley.
That's a striking departure from the past. Up till a couple decades ago,
geography was destiny for cities. All great cities were located on waterways,
because cities made money by trade, and water was the only economical way to
ship.
Now you could make a great city anywhere, if you could get the right people to
move there. So the question of how to make a silicon valley becomes: who are
the right people, and how do you get them to move?
**Two Types**
I think you only need two kinds of people to create a technology hub: rich
people and nerds. They're the limiting reagents in the reaction that produces
startups, because they're the only ones present when startups get started.
Everyone else will move.
Observation bears this out: within the US, towns have become startup hubs if
and only if they have both rich people and nerds. Few startups happen in
Miami, for example, because although it's full of rich people, it has few
nerds. It's not the kind of place nerds like.
Whereas Pittsburgh has the opposite problem: plenty of nerds, but no rich
people. The top US Computer Science departments are said to be MIT, Stanford,
Berkeley, and Carnegie-Mellon. MIT yielded Route 128. Stanford and Berkeley
yielded Silicon Valley. But Carnegie-Mellon? The record skips at that point.
Lower down the list, the University of Washington yielded a high-tech
community in Seattle, and the University of Texas at Austin yielded one in
Austin. But what happened in Pittsburgh? And in Ithaca, home of Cornell, which
is also high on the list?
I grew up in Pittsburgh and went to college at Cornell, so I can answer for
both. The weather is terrible, particularly in winter, and there's no
interesting old city to make up for it, as there is in Boston. Rich people
don't want to live in Pittsburgh or Ithaca. So while there are plenty of
hackers who could start startups, there's no one to invest in them.
**Not Bureaucrats**
Do you really need the rich people? Wouldn't it work to have the government
invest in the nerds? No, it would not. Startup investors are a distinct type
of rich people. They tend to have a lot of experience themselves in the
technology business. This (a) helps them pick the right startups, and (b)
means they can supply advice and connections as well as money. And the fact
that they have a personal stake in the outcome makes them really pay
attention.
Bureaucrats by their nature are the exact opposite sort of people from startup
investors. The idea of them making startup investments is comic. It would be
like mathematicians running _Vogue_ -- or perhaps more accurately, _Vogue_
editors running a math journal.
Though indeed, most things bureaucrats do, they do badly. We just don't notice
usually, because they only have to compete against other bureaucrats. But as
startup investors they'd have to compete against pros with a great deal more
experience and motivation.
Even corporations that have in-house VC groups generally forbid them to make
their own investment decisions. Most are only allowed to invest in deals where
some reputable private VC firm is willing to act as lead investor.
**Not Buildings**
If you go to see Silicon Valley, what you'll see are buildings. But it's the
people that make it Silicon Valley, not the buildings. I read occasionally
about attempts to set up "technology parks" in other places, as if the active
ingredient of Silicon Valley were the office space. An article about Sophia
Antipolis bragged that companies there included Cisco, Compaq, IBM, NCR, and
Nortel. Don't the French realize these aren't startups?
Building office buildings for technology companies won't get you a silicon
valley, because the key stage in the life of a startup happens before they
want that kind of space. The key stage is when they're three guys operating
out of an apartment. Wherever the startup is when it gets funded, it will
stay. The defining quality of Silicon Valley is not that Intel or Apple or
Google have offices there, but that they were _started_ there.
So if you want to reproduce Silicon Valley, what you need to reproduce is
those two or three founders sitting around a kitchen table deciding to start a
company. And to reproduce that you need those people.
**Universities**
The exciting thing is, _all_ you need are the people. If you could attract a
critical mass of nerds and investors to live somewhere, you could reproduce
Silicon Valley. And both groups are highly mobile. They'll go where life is
good. So what makes a place good to them?
What nerds like is other nerds. Smart people will go wherever other smart
people are. And in particular, to great universities. In theory there could be
other ways to attract them, but so far universities seem to be indispensable.
Within the US, there are no technology hubs without first-rate universities--
or at least, first-rate computer science departments.
So if you want to make a silicon valley, you not only need a university, but
one of the top handful in the world. It has to be good enough to act as a
magnet, drawing the best people from thousands of miles away. And that means
it has to stand up to existing magnets like MIT and Stanford.
This sounds hard. Actually it might be easy. My professor friends, when
they're deciding where they'd like to work, consider one thing above all: the
quality of the other faculty. What attracts professors is good colleagues. So
if you managed to recruit, en masse, a significant number of the best young
researchers, you could create a first-rate university from nothing overnight.
And you could do that for surprisingly little. If you paid 200 people hiring
bonuses of $3 million apiece, you could put together a faculty that would bear
comparison with any in the world. And from that point the chain reaction would
be self-sustaining. So whatever it costs to establish a mediocre university,
for an additional half billion or so you could have a great one.
**Personality**
However, merely creating a new university would not be enough to start a
silicon valley. The university is just the seed. It has to be planted in the
right soil, or it won't germinate. Plant it in the wrong place, and you just
create Carnegie-Mellon.
To spawn startups, your university has to be in a town that has attractions
other than the university. It has to be a place where investors want to live,
and students want to stay after they graduate.
The two like much the same things, because most startup investors are nerds
themselves. So what do nerds look for in a town? Their tastes aren't
completely different from other people's, because a lot of the towns they like
most in the US are also big tourist destinations: San Francisco, Boston,
Seattle. But their tastes can't be quite mainstream either, because they
dislike other big tourist destinations, like New York, Los Angeles, and Las
Vegas.
There has been a lot written lately about the "creative class." The thesis
seems to be that as wealth derives increasingly from ideas, cities will
prosper only if they attract those who have them. That is certainly true; in
fact it was the basis of Amsterdam's prosperity 400 years ago.
A lot of nerd tastes they share with the creative class in general. For
example, they like well-preserved old neighborhoods instead of cookie-cutter
suburbs, and locally-owned shops and restaurants instead of national chains.
Like the rest of the creative class, they want to live somewhere with
personality.
What exactly is personality? I think it's the feeling that each building is
the work of a distinct group of people. A town with personality is one that
doesn't feel mass-produced. So if you want to make a startup hub-- or any town
to attract the "creative class"-- you probably have to ban large development
projects. When a large tract has been developed by a single organization, you
can always tell.
Most towns with personality are old, but they don't have to be. Old towns have
two advantages: they're denser, because they were laid out before cars, and
they're more varied, because they were built one building at a time. You could
have both now. Just have building codes that ensure density, and ban large
scale developments.
A corollary is that you have to keep out the biggest developer of all: the
government. A government that asks "How can we build a silicon valley?" has
probably ensured failure by the way they framed the question. You don't build
a silicon valley; you let one grow.
**Nerds**
If you want to attract nerds, you need more than a town with personality. You
need a town with the right personality. Nerds are a distinct subset of the
creative class, with different tastes from the rest. You can see this most
clearly in New York, which attracts a lot of creative people, but few nerds.
What nerds like is the kind of town where people walk around smiling. This
excludes LA, where no one walks at all, and also New York, where people walk,
but not smiling. When I was in grad school in Boston, a friend came to visit
from New York. On the subway back from the airport she asked "Why is everyone
smiling?" I looked and they weren't smiling. They just looked like they were
compared to the facial expressions she was used to.
If you've lived in New York, you know where these facial expressions come
from. It's the kind of place where your mind may be excited, but your body
knows it's having a bad time. People don't so much enjoy living there as
endure it for the sake of the excitement. And if you like certain kinds of
excitement, New York is incomparable. It's a hub of glamour, a magnet for all
the shorter half-life isotopes of style and fame.
Nerds don't care about glamour, so to them the appeal of New York is a
mystery. People who like New York will pay a fortune for a small, dark, noisy
apartment in order to live in a town where the cool people are really cool. A
nerd looks at that deal and sees only: pay a fortune for a small, dark, noisy
apartment.
Nerds _will_ pay a premium to live in a town where the smart people are really
smart, but you don't have to pay as much for that. It's supply and demand:
glamour is popular, so you have to pay a lot for it.
Most nerds like quieter pleasures. They like cafes instead of clubs; used
bookshops instead of fashionable clothing shops; hiking instead of dancing;
sunlight instead of tall buildings. A nerd's idea of paradise is Berkeley or
Boulder.
**Youth**
It's the young nerds who start startups, so it's those specifically the city
has to appeal to. The startup hubs in the US are all young-feeling towns. This
doesn't mean they have to be new. Cambridge has the oldest town plan in
America, but it feels young because it's full of students.
What you can't have, if you want to create a silicon valley, is a large,
existing population of stodgy people. It would be a waste of time to try to
reverse the fortunes of a declining industrial town like Detroit or
Philadelphia by trying to encourage startups. Those places have too much
momentum in the wrong direction. You're better off starting with a blank slate
in the form of a small town. Or better still, if there's a town young people
already flock to, that one.
The Bay Area was a magnet for the young and optimistic for decades before it
was associated with technology. It was a place people went in search of
something new. And so it became synonymous with California nuttiness. There's
still a lot of that there. If you wanted to start a new fad-- a new way to
focus one's "energy," for example, or a new category of things not to eat--
the Bay Area would be the place to do it. But a place that tolerates oddness
in the search for the new is exactly what you want in a startup hub, because
economically that's what startups are. Most good startup ideas seem a little
crazy; if they were obviously good ideas, someone would have done them
already.
(How many people are going to want computers in their _houses_? What,
_another_ search engine?)
That's the connection between technology and liberalism. Without exception the
high-tech cities in the US are also the most liberal. But it's not because
liberals are smarter that this is so. It's because liberal cities tolerate odd
ideas, and smart people by definition have odd ideas.
Conversely, a town that gets praised for being "solid" or representing
"traditional values" may be a fine place to live, but it's never going to
succeed as a startup hub. The 2004 presidential election, though a disaster in
other respects, conveniently supplied us with a county-by-county map of such
places.
To attract the young, a town must have an intact center. In most American
cities the center has been abandoned, and the growth, if any, is in the
suburbs. Most American cities have been turned inside out. But none of the
startup hubs has: not San Francisco, or Boston, or Seattle. They all have
intact centers. My guess is that no city with a dead center could be
turned into a startup hub. Young people don't want to live in the suburbs.
Within the US, the two cities I think could most easily be turned into new
silicon valleys are Boulder and Portland. Both have the kind of effervescent
feel that attracts the young. They're each only a great university short of
becoming a silicon valley, if they wanted to.
**Time**
A great university near an attractive town. Is that all it takes? That was all
it took to make the original Silicon Valley. Silicon Valley traces its origins
to William Shockley, one of the inventors of the transistor. He did the
research that won him the Nobel Prize at Bell Labs, but when he started his
own company in 1956 he moved to Palo Alto to do it. At the time that was an
odd thing to do. Why did he? Because he had grown up there and remembered how
nice it was. Now Palo Alto is suburbia, but then it was a charming college
town-- a charming college town with perfect weather and San Francisco only an
hour away.
The companies that rule Silicon Valley now are all descended in various ways
from Shockley Semiconductor. Shockley was a difficult man, and in 1957 his top
people-- "the traitorous eight"-- left to start a new company, Fairchild
Semiconductor. Among them were Gordon Moore and Robert Noyce, who went on to
found Intel, and Eugene Kleiner, who founded the VC firm Kleiner Perkins.
Forty-two years later, Kleiner Perkins funded Google, and the partner
responsible for the deal was John Doerr, who came to Silicon Valley in 1974 to
work for Intel.
So although a lot of the newest companies in Silicon Valley don't make
anything out of silicon, there always seem to be multiple links back to
Shockley. There's a lesson here: startups beget startups. People who work for
startups start their own. People who get rich from startups fund new ones. I
suspect this kind of organic growth is the only way to produce a startup hub,
because it's the only way to grow the expertise you need.
That has two important implications. The first is that you need time to grow a
silicon valley. The university you could create in a couple years, but the
startup community around it has to grow organically. The cycle time is limited
by the time it takes a company to succeed, which probably averages about five
years.
The other implication of the organic growth hypothesis is that you can't be
somewhat of a startup hub. You either have a self-sustaining chain reaction,
or not. Observation confirms this too: cities either have a startup scene, or
they don't. There is no middle ground. Chicago has the third largest
metropolitan area in America. As a source of startups it's negligible compared
to Seattle, number 15.
The good news is that the initial seed can be quite small. Shockley
Semiconductor, though itself not very successful, was big enough. It brought a
critical mass of experts in an important new technology together in a place
they liked enough to stay.
**Competing**
Of course, a would-be silicon valley faces an obstacle the original one
didn't: it has to compete with Silicon Valley. Can that be done? Probably.
One of Silicon Valley's biggest advantages is its venture capital firms. This
was not a factor in Shockley's day, because VC funds didn't exist. In fact,
Shockley Semiconductor and Fairchild Semiconductor were not startups at all in
our sense. They were subsidiaries-- of Beckman Instruments and Fairchild
Camera and Instrument respectively. Those companies were apparently willing to
establish subsidiaries wherever the experts wanted to live.
Venture investors, however, prefer to fund startups within an hour's drive.
For one, they're more likely to notice startups nearby. But when they do
notice startups in other towns they prefer them to move. They don't want to
have to travel to attend board meetings, and in any case the odds of
succeeding are higher in a startup hub.
The centralizing effect of venture firms is a double one: they cause startups
to form around them, and those draw in more startups through acquisitions. And
although the first may be weakening because it's now so cheap to start some
startups, the second seems as strong as ever. Three of the most admired "Web
2.0" companies were started outside the usual startup hubs, but two of them
have already been reeled in through acquisitions.
Such centralizing forces make it harder for new silicon valleys to get
started. But by no means impossible. Ultimately power rests with the founders.
A startup with the best people will beat one with funding from famous VCs, and
a startup that was sufficiently successful would never have to move. So a town
that could exert enough pull over the right people could resist and perhaps
even surpass Silicon Valley.
For all its power, Silicon Valley has a great weakness: the paradise Shockley
found in 1956 is now one giant parking lot. San Francisco and Berkeley are
great, but they're forty miles away. Silicon Valley proper is soul-crushing
suburban sprawl. It has fabulous weather, which makes it significantly better
than the soul-crushing sprawl of most other American cities. But a competitor
that managed to avoid sprawl would have real leverage. All a city needs is to
be the kind of place the next traitorous eight look at and say "I want to stay
here," and that would be enough to get the chain reaction started.
August 2005
Thirty years ago, one was supposed to work one's way up the corporate ladder.
That's less the rule now. Our generation wants to get paid up front. Instead
of developing a product for some big company in the expectation of getting job
security in return, we develop the product ourselves, in a startup, and sell
it to the big company. At the very least we want options.
Among other things, this shift has created the appearance of a rapid increase
in economic inequality. But really the two cases are not as different as they
look in economic statistics.
Economic statistics are misleading because they ignore the value of safe jobs.
An easy job from which one can't be fired is worth money; exchanging the two
is one of the commonest forms of corruption. A sinecure is, in effect, an
annuity. Except sinecures don't appear in economic statistics. If they did, it
would be clear that in practice socialist countries have nontrivial
disparities of wealth, because they usually have a class of powerful
bureaucrats who are paid mostly by seniority and can never be fired.
While not a sinecure, a position on the corporate ladder was genuinely
valuable, because big companies tried not to fire people, and promoted from
within based largely on seniority. A position on the corporate ladder had a
value analogous to the "goodwill" that is a very real element in the valuation
of companies. It meant one could expect future high paying jobs.
One of the main causes of the decay of the corporate ladder is the trend for
takeovers that began in the 1980s. Why waste your time climbing a ladder that
might disappear before you reach the top?
And, by no coincidence, the corporate ladder was one of the reasons the early
corporate raiders were so successful. It's not only economic statistics that
ignore the value of safe jobs. Corporate balance sheets do too. One reason it
was profitable to carve up 1980s companies and sell them for parts was that
they hadn't formally acknowledged their implicit debt to employees who had
done good work and expected to be rewarded with high-paying executive jobs
when their time came.
In the movie _Wall Street_, Gordon Gekko ridicules a company overloaded with
vice presidents. But the company may not be as corrupt as it seems; those VPs'
cushy jobs were probably payment for work done earlier.
I like the new model better. For one thing, it seems a bad plan to treat jobs
as rewards. Plenty of good engineers got made into bad managers that way. And
the old system meant people had to deal with a lot more corporate politics, in
order to protect the work they'd invested in a position on the ladder.
The big disadvantage of the new system is that it involves more risk. If you
develop ideas in a startup instead of within a big company, any number of
random factors could sink you before you can finish. But maybe the older
generation would laugh at me for saying that the way we do things is riskier.
After all, projects within big companies were always getting cancelled as a
result of arbitrary decisions from higher up. My father's entire industry
(breeder reactors) disappeared that way.
For better or worse, the idea of the corporate ladder is probably gone for
good. The new model seems more liquid, and more efficient. But it is less of a
change, financially, than one might think. Our fathers weren't _that_ stupid.
November 2004
A lot of people are writing now about why Kerry lost. Here I want to examine a
more specific question: why were the exit polls so wrong?
In Ohio, which Kerry ultimately lost 49-51, exit polls gave him a 52-48
victory. And this wasn't just random error. In every swing state they
overestimated the Kerry vote. In Florida, which Bush ultimately won 52-47,
exit polls predicted a dead heat.
(These are not early numbers. They're from about midnight eastern time, long
after polls closed in Ohio and Florida. And yet by the next afternoon the exit
poll numbers online corresponded to the returns. The only way I can imagine
this happening is if those in charge of the exit polls cooked the books after
seeing the actual returns. But that's another issue.)
What happened? The source of the problem may be a variant of the Bradley
Effect. This term was invented after Tom Bradley, the black mayor of Los
Angeles, lost an election for governor of California despite a comfortable
lead in the polls. Apparently voters were afraid to say they planned to vote
against him, lest their motives be (perhaps correctly) suspected.
It seems likely that something similar happened in exit polls this year. In
theory, exit polls ought to be very accurate. You're not asking people what
they would do. You're asking what they just did.
How can you get errors asking that? Because some people don't respond. To get
a truly random sample, pollsters ask, say, every 20th person leaving the
polling place who they voted for. But not everyone wants to answer. And the
pollsters can't simply ignore those who won't, or their sample isn't random
anymore. So what they do, apparently, is note down the age and race and sex of
the person, and guess from that who they voted for.
This works so long as there is no _correlation_ between who people vote for
and whether they're willing to talk about it. But this year there may have
been. It may be that a significant number of those who voted for Bush didn't
want to say so.
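To see how much such a correlation could matter, here is a toy simulation; it is not anything the pollsters actually ran, and the response rates are invented purely for illustration. Even a modest gap in willingness to answer shifts the estimate by several points.

```python
import random

# Toy model of exit-poll non-response bias (hypothetical numbers).
# Suppose 51% of voters actually chose Bush, but Bush voters are
# somewhat less willing to tell the pollster how they voted.
random.seed(0)
N = 100_000
TRUE_BUSH_SHARE = 0.51
RESPONSE_RATE = {"Bush": 0.50, "Kerry": 0.65}  # invented for illustration

answers = []
for _ in range(N):
    vote = "Bush" if random.random() < TRUE_BUSH_SHARE else "Kerry"
    if random.random() < RESPONSE_RATE[vote]:
        answers.append(vote)

estimate = answers.count("Bush") / len(answers)
print(f"true Bush share: {TRUE_BUSH_SHARE:.0%}, poll estimate: {estimate:.0%}")
# Prints roughly 44% for Bush: the poll shows a Kerry lead that isn't there.
```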
Why not? Because people in the US are more conservative than they're willing
to admit. The values of the elite in this country, at least at the moment, are
NPR values. The average person, as I think both Republicans and Democrats
would agree, is more socially conservative. But while some openly flaunt the
fact that they don't share the opinions of the elite, others feel a little
nervous about it, as if they had bad table manners.
For example, according to current NPR values, you can't say anything that
might be perceived as disparaging towards homosexuals. To do so is
"homophobic." And yet a large number of Americans are deeply religious, and
the Bible is quite explicit on the subject of homosexuality. What are they to
do? I think what many do is keep their opinions, but keep them to themselves.
They know what they believe, but they also know what they're supposed to
believe. And so when a stranger (for example, a pollster) asks them their
opinion about something like gay marriage, they will not always say what they
really think.
When the values of the elite are liberal, polls will tend to underestimate the
conservativeness of ordinary voters. This seems to me the leading theory to
explain why the exit polls were so far off this year. NPR values said one
ought to vote for Kerry. So all the people who voted for Kerry felt virtuous
for doing so, and were eager to tell pollsters they had. No one who voted for
Kerry did it as an act of quiet defiance.
October 2007
After the last talk I gave, one of the organizers got up on the stage to
deliver an impromptu rebuttal. That never happened before. I only heard the
first few sentences, but that was enough to tell what I said that upset him:
that startups would do better if they moved to Silicon Valley.
This conference was in London, and most of the audience seemed to be from the
UK. So saying startups should move to Silicon Valley seemed like a
nationalistic remark: an obnoxious American telling them that if they wanted
to do things right they should all just move to America.
Actually I'm less American than I seem. I didn't say so, but I'm British by
birth. And just as Jews are ex officio allowed to tell Jewish jokes, I don't
feel like I have to bother being diplomatic with a British audience.
The idea that startups would do better to move to Silicon Valley is not even a
nationalistic one. It's the same thing I say to startups in the US. Y
Combinator alternates between coasts every 6 months. Every other funding cycle
is in Boston. And even though Boston is the second biggest startup hub in the
US (and the world), we tell the startups from those cycles that their best bet
is to move to Silicon Valley. If that's true of Boston, it's even more true of
every other city.
This is about cities, not countries.
And I think I can prove I'm right. You can easily reduce the opposing argument
ad what most people would agree was absurdum. Few would be willing to claim
that it doesn't matter at all where a startup is—that a startup operating out
of a small agricultural town wouldn't benefit from moving to a startup hub.
Most people could see how it might be helpful to be in a place where there was
infrastructure for startups, accumulated knowledge about how to make them
work, and other people trying to do it. And yet whatever argument you use to
prove that startups don't need to move from London to Silicon Valley could
equally well be used to prove startups don't need to move from smaller towns
to London.
The difference between cities is a matter of degree. And if, as nearly
everyone who knows agrees, startups are better off in Silicon Valley than
Boston, then they're better off in Silicon Valley than everywhere else too.
I realize I might seem to have a vested interest in this conclusion, because
startups that move to the US might do it through Y Combinator. But the
American startups we've funded will attest that I say the same thing to them.
I'm not claiming of course that every startup has to go to Silicon Valley to
succeed. Just that all other things being equal, the more of a startup hub a
place is, the better startups will do there. But other considerations can
outweigh the advantages of moving. I'm not saying founders with families
should uproot them to move halfway around the world; that might be too much of
a distraction.
Immigration difficulties might be another reason to stay put. Dealing with
immigration problems is like raising money: for some reason it seems to
consume all your attention. A startup can't afford much of that. One Canadian
startup we funded spent about 6 months working on moving to the US. Eventually
they just gave up, because they couldn't afford to take so much time away from
working on their software.
(If another country wanted to establish a rival to Silicon Valley, the single
best thing they could do might be to create a special visa for startup
founders. US immigration policy is one of Silicon Valley's biggest
weaknesses.)
If your startup is connected to a specific industry, you may be better off in
one of its centers. A startup doing something related to entertainment might
want to be in New York or LA.
And finally, if a good investor has committed to fund you if you stay where
you are, you should probably stay. Finding investors is hard. You generally
shouldn't pass up a definite funding offer to move.
In fact, the quality of the investors may be the main advantage of startup
hubs. Silicon Valley investors are noticeably more aggressive than Boston
ones. Over and over, I've seen startups we've funded snatched by west coast
investors out from under the noses of Boston investors who saw them first but
acted too slowly. At this year's Boston Demo Day, I told the audience that
this happened every year, so if they saw a startup they liked, they should
make them an offer. And yet within a month it had happened again: an
aggressive west coast VC who had met the founder of a YC-funded startup a week
before beat out a Boston VC who had known him for years. By the time the
Boston VC grasped what was happening, the deal was already gone.
Boston investors will admit they're more conservative. Some want to believe
this comes from the city's prudent Yankee character. But Occam's razor
suggests the truth is less flattering. Boston investors are probably more
conservative than Silicon Valley investors for the same reason Chicago
investors are more conservative than Boston ones. They don't understand
startups as well.
West coast investors aren't bolder because they're irresponsible cowboys, or
because the good weather makes them optimistic. They're bolder because they
know what they're doing. They're the skiers who ski on the diamond slopes.
Boldness is the essence of venture investing. The way you get big returns is
not by trying to avoid losses, but by trying to ensure you get some of the big
hits. And the big hits often look risky at first.
Like Facebook. Facebook was started in Boston. Boston VCs had the first shot
at them. But they said no, so Facebook moved to Silicon Valley and raised
money there. The partner who turned them down now says that "may turn out to
have been a mistake."
Empirically, boldness wins. If the aggressive ways of west coast investors are
going to come back to bite them, it has been a long time coming. Silicon
Valley has been pulling ahead of Boston since the 1970s. If there was going to
be a comeuppance for the west coast investors, the bursting of the Bubble
would have been it. But since then the west coast has just pulled further
ahead.
West coast investors are confident enough of their judgement to act boldly;
east coast investors, not so much; but anyone who thinks east coast investors
act that way out of prudence should see the frantic reactions of an east coast
VC in the process of losing a deal to a west coast one.
In addition to the concentration that comes from specialization, startup hubs
are also markets. And markets are usually centralized. Even now, when traders
could be anywhere, they cluster in a few cities. It's hard to say exactly what
it is about face to face contact that makes deals happen, but whatever it is,
it hasn't yet been duplicated by technology.
Walk down University Ave at the right time, and you might overhear five
different people talking on the phone about deals. In fact, this is part of
the reason Y Combinator is in Boston half the time: it's hard to stand that
year round. But though it can sometimes be annoying to be surrounded by people
who only think about one thing, it's the place to be if that one thing is what
you're trying to do.
I was talking recently to someone who works on search at Google. He knew a lot
of people at Yahoo, so he was in a good position to compare the two companies.
I asked him why Google was better at search. He said it wasn't anything
specific Google did, but simply that they understood search so much better.
And that's why startups thrive in startup hubs like Silicon Valley. Startups
are a very specialized business, as specialized as diamond cutting. And in
startup hubs they understand it.
April 2009
Om Malik is the most recent of many people to ask why Twitter is such a big
deal.
The reason is that it's a new messaging protocol, where you don't specify the
recipients. New protocols are rare. Or more precisely, new protocols that take
off are. There are only a handful of commonly used ones: TCP/IP (the
Internet), SMTP (email), HTTP (the web), and so on. So any new protocol is a
big deal. But Twitter is a protocol owned by a private company. That's even
rarer.
Curiously, the fact that the founders of Twitter have been slow to monetize it
may in the long run prove to be an advantage. Because they haven't tried to
control it too much, Twitter feels to everyone like previous protocols. One
forgets it's owned by a private company. That must have made it easier for
Twitter to spread.
September 2009
I bet you the current issue of _Cosmopolitan_ has an article whose title
begins with a number. "7 Things He Won't Tell You about Sex," or something
like that. Some popular magazines feature articles of this type on the cover
of every issue. That can't be happening by accident. Editors must know they
attract readers.
Why do readers like the list of n things so much? Mainly because it's easier
to read than a regular article. Structurally, the list of n things is a
degenerate case of essay. An essay can go anywhere the writer wants. In a list
of n things the writer agrees to constrain himself to a collection of points
of roughly equal importance, and he tells the reader explicitly what they are.
Some of the work of reading an article is understanding its structure—figuring
out what in high school we'd have called its "outline." Not explicitly, of
course, but someone who really understands an article probably has something
in his brain afterward that corresponds to such an outline. In a list of n
things, this work is done for you. Its structure is an exoskeleton.
As well as being explicit, the structure is guaranteed to be of the simplest
possible type: a few main points with few to no subordinate ones, and no
particular connection between them.
Because the main points are unconnected, the list of n things is random
access. There's no thread of reasoning you have to follow. You could read the
list in any order. And because the points are independent of one another, they
work like watertight compartments in an unsinkable ship. If you get bored
with, or can't understand, or don't agree with one point, you don't have to
give up on the article. You can just abandon that one and skip to the next. A
list of n things is parallel and therefore fault tolerant.
There are times when this format is what a writer wants. One, obviously, is
when what you have to say actually is a list of n things. I once wrote an
essay about the mistakes that kill startups, and a few people made fun of me
for writing something whose title began with a number. But in that case I
really was trying to make a complete catalog of a number of independent
things. In fact, one of the questions I was trying to answer was how many
there were.
There are other less legitimate reasons for using this format. For example, I
use it when I get close to a deadline. If I have to give a talk and I haven't
started it a few days beforehand, I'll sometimes play it safe and make the
talk a list of n things.
The list of n things is easier for writers as well as readers. When you're
writing a real essay, there's always a chance you'll hit a dead end. A real
essay is a train of thought, and some trains of thought just peter out. That's
an alarming possibility when you have to give a talk in a few days. What if
you run out of ideas? The compartmentalized structure of the list of n things
protects the writer from his own stupidity in much the same way it protects
the reader. If you run out of ideas on one point, no problem: it won't kill
the essay. You can take out the whole point if you need to, and the essay will
still survive.
Writing a list of n things is so relaxing. You think of n/2 of them in the
first 5 minutes. So bang, there's the structure, and you just have to fill it
in. As you think of more points, you just add them to the end. Maybe you take
out or rearrange or combine a few, but at every stage you have a valid (though
initially low-res) list of n things. It's like the sort of programming where
you write a version 1 very quickly and then gradually modify it, but at every
point have working code—or the style of painting where you begin with a
complete but very blurry sketch done in an hour, then spend a week cranking up
the resolution.
Because the list of n things is easier for writers too, it's not always a
damning sign when readers prefer it. It's not necessarily evidence readers are
lazy; it could also mean they don't have much confidence in the writer. The
list of n things is in that respect the cheeseburger of essay forms. If you're
eating at a restaurant you suspect is bad, your best bet is to order the
cheeseburger. Even a bad cook can make a decent cheeseburger. And there are
pretty strict conventions about what a cheeseburger should look like. You can
assume the cook isn't going to try something weird and artistic. The list of n
things similarly limits the damage that can be done by a bad writer. You know
it's going to be about whatever the title says, and the format prevents the
writer from indulging in any flights of fancy.
Because the list of n things is the easiest essay form, it should be a good
one for beginning writers. And in fact it is what most beginning writers are
taught. The classic 5 paragraph essay is really a list of n things for n = 3.
But the students writing them don't realize they're using the same structure
as the articles they read in _Cosmopolitan_. They're not allowed to include
the numbers, and they're expected to spackle over the gaps with gratuitous
transitions ("Furthermore...") and cap the thing at either end with
introductory and concluding paragraphs so it will look superficially like a
real essay.
It seems a fine plan to start students off with the list of n things. It's the
easiest form. But if we're going to do that, why not do it openly? Let them
write lists of n things like the pros, with numbers and no transitions or
"conclusion."
There is one case where the list of n things is a dishonest format: when you
use it to attract attention by falsely claiming the list is an exhaustive one.
I.e. if you write an article that purports to be about _the_ 7 secrets of
success. That kind of title is the same sort of reflexive challenge as a
whodunit. You have to at least look at the article to check whether they're
the same 7 you'd list. Are you overlooking one of the secrets of success?
Better check.
It's fine to put "The" before the number if you really believe you've made an
exhaustive list. But evidence suggests most things with titles like this are
linkbait.
The greatest weakness of the list of n things is that there's so little room
for new thought. The main point of essay writing, when done right, is the new
ideas you have while doing it. A real essay, as the name implies, is dynamic:
you don't know what you're going to write when you start. It will be about
whatever you discover in the course of writing it.
This can only happen in a very limited way in a list of n things. You make the
title first, and that's what it's going to be about. You can't have more new
ideas in the writing than will fit in the watertight compartments you set up
initially. And your brain seems to know this: because you don't have room for
new ideas, you don't have them.
Another advantage of admitting to beginning writers that the 5 paragraph essay
is really a list of n things is that we can warn them about this. It only lets
you experience the defining characteristic of essay writing on a small scale:
in thoughts of a sentence or two. And it's particularly dangerous that the 5
paragraph essay buries the list of n things within something that looks like a
more sophisticated type of essay. If you don't know you're using this form,
you don't know you need to escape it.
July 2010
When we sold our startup in 1998 I suddenly got a lot of money. I now had to
think about something I hadn't had to think about before: how not to lose it.
I knew it was possible to go from rich to poor, just as it was possible to go
from poor to rich. But while I'd spent a lot of the past several years
studying the paths from poor to rich, I knew practically nothing about the
paths from rich to poor. Now, in order to avoid them, I had to learn where
they were.
So I started to pay attention to how fortunes are lost. If you'd asked me as a
kid how rich people became poor, I'd have said by spending all their money.
That's how it happens in books and movies, because that's the colorful way to
do it. But in fact the way most fortunes are lost is not through excessive
expenditure, but through bad investments.
It's hard to spend a fortune without noticing. Someone with ordinary tastes
would find it hard to blow through more than a few tens of thousands of
dollars without thinking "wow, I'm spending a lot of money." Whereas if you
start trading derivatives, you can lose a million dollars (as much as you
want, really) in the blink of an eye.
In most people's minds, spending money on luxuries sets off alarms that making
investments doesn't. Luxuries seem self-indulgent. And unless you got the
money by inheriting it or winning a lottery, you've already been thoroughly
trained that self-indulgence leads to trouble. Investing bypasses those
alarms. You're not spending the money; you're just moving it from one asset to
another. Which is why people trying to sell you expensive things say "it's an
investment."
The solution is to develop new alarms. This can be a tricky business, because
while the alarms that prevent you from overspending are so basic that they may
even be in our DNA, the ones that prevent you from making bad investments have
to be learned, and are sometimes fairly counterintuitive.
A few days ago I realized something surprising: the situation with time is
much the same as with money. The most dangerous way to lose time is not to
spend it having fun, but to spend it doing fake work. When you spend time
having fun, you know you're being self-indulgent. Alarms start to go off
fairly quickly. If I woke up one morning and sat down on the sofa and watched
TV all day, I'd feel like something was terribly wrong. Just thinking about it
makes me wince. I'd start to feel uncomfortable after sitting on a sofa
watching TV for 2 hours, let alone a whole day.
And yet I've definitely had days when I might as well have sat in front of a
TV all day — days at the end of which, if I asked myself what I got done that
day, the answer would have been: basically, nothing. I feel bad after these
days too, but nothing like as bad as I'd feel if I spent the whole day on the
sofa watching TV. If I spent a whole day watching TV I'd feel like I was
descending into perdition. But the same alarms don't go off on the days when I
get nothing done, because I'm doing stuff that seems, superficially, like real
work. Dealing with email, for example. You do it sitting at a desk. It's not
fun. So it must be work.
With time, as with money, avoiding pleasure is no longer enough to protect
you. It probably was enough to protect hunter-gatherers, and perhaps all pre-industrial societies. So nature and nurture combine to make us avoid self-indulgence. But the world has gotten more complicated: the most dangerous
traps now are new behaviors that bypass our alarms about self-indulgence by
mimicking more virtuous types. And the worst thing is, they're not even fun.
**Thanks** to Sam Altman, Trevor Blackwell, Patrick Collison, Jessica
Livingston, and Robert Morris for reading drafts of this.
July 2013
One of the most common types of advice we give at Y Combinator is to do things
that don't scale. A lot of would-be founders believe that startups either take
off or don't. You build something, make it available, and if you've made a
better mousetrap, people beat a path to your door as promised. Or they don't,
in which case the market must not exist.
Actually startups take off because the founders make them take off. There may
be a handful that just grew by themselves, but usually it takes some sort of
push to get them going. A good metaphor would be the cranks that car engines
had before they got electric starters. Once the engine was going, it would
keep going, but there was a separate and laborious process to get it going.
**Recruit**
The most common unscalable thing founders have to do at the start is to
recruit users manually. Nearly all startups have to. You can't wait for users
to come to you. You have to go out and get them.
Stripe is one of the most successful startups we've funded, and the problem
they solved was an urgent one. If anyone could have sat back and waited for
users, it was Stripe. But in fact they're famous within YC for aggressive
early user acquisition.
Startups building things for other startups have a big pool of potential users
in the other companies we've funded, and none took better advantage of it than
Stripe. At YC we use the term "Collison installation" for the technique they
invented. More diffident founders ask "Will you try our beta?" and if the
answer is yes, they say "Great, we'll send you a link." But the Collison
brothers weren't going to wait. When anyone agreed to try Stripe they'd say
"Right then, give me your laptop" and set them up on the spot.
There are two reasons founders resist going out and recruiting users
individually. One is a combination of shyness and laziness. They'd rather sit
at home writing code than go out and talk to a bunch of strangers and probably
be rejected by most of them. But for a startup to succeed, at least one
founder (usually the CEO) will have to spend a lot of time on sales and
marketing.
The other reason founders ignore this path is that the absolute numbers seem
so small at first. This can't be how the big, famous startups got started,
they think. The mistake they make is to underestimate the power of compound
growth. We encourage every startup to measure their progress by weekly growth
rate. If you have 100 users, you need to get 10 more next week to grow 10% a
week. And while 110 may not seem much better than 100, if you keep growing at
10% a week you'll be surprised how big the numbers get. After a year you'll
have 14,000 users, and after 2 years you'll have 2 million.
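That arithmetic is easy to check. A minimal sketch of 10% weekly compounding from 100 users (the function and the exact week counts are mine, not from the essay):

```python
# Quick check of the compound growth numbers above (a sketch, not from the essay).

def users_after(weeks, start=100, weekly_growth=0.10):
    """Users after `weeks` weeks at a constant weekly growth rate (default 10%)."""
    return start * (1 + weekly_growth) ** weeks

print(round(users_after(52)))   # ~14,200 after one year
print(round(users_after(104)))  # ~2,000,000 after two years
```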
You'll be doing different things when you're acquiring users a thousand at a
time, and growth has to slow down eventually. But if the market exists you can
usually start by recruiting users manually and then gradually switch to less
manual methods.
Airbnb is a classic example of this technique. Marketplaces are so hard to get
rolling that you should expect to take heroic measures at first. In Airbnb's
case, these consisted of going door to door in New York, recruiting new users
and helping existing ones improve their listings. When I remember the Airbnbs
during YC, I picture them with rolly bags, because when they showed up for
Tuesday dinners they'd always just flown back from somewhere.
**Fragile**
Airbnb now seems like an unstoppable juggernaut, but early on it was so
fragile that about 30 days of going out and engaging in person with users made
the difference between success and failure.
That initial fragility was not a unique feature of Airbnb. Almost all startups
are fragile initially. And that's one of the biggest things inexperienced
founders and investors (and reporters and know-it-alls on forums) get wrong
about them. They unconsciously judge larval startups by the standards of
established ones. They're like someone looking at a newborn baby and
concluding "there's no way this tiny creature could ever accomplish anything."
It's harmless if reporters and know-it-alls dismiss your startup. They always
get things wrong. It's even ok if investors dismiss your startup; they'll
change their minds when they see growth. The big danger is that you'll dismiss
your startup yourself. I've seen it happen. I often have to encourage founders
who don't see the full potential of what they're building. Even Bill Gates
made that mistake. He returned to Harvard for the fall semester after starting
Microsoft. He didn't stay long, but he wouldn't have returned at all if he'd
realized Microsoft was going to be even a fraction of the size it turned out
to be.
The question to ask about an early stage startup is not "is this company
taking over the world?" but "how big could this company get if the founders
did the right things?" And the right things often seem both laborious and
inconsequential at the time. Microsoft can't have seemed very impressive when
it was just a couple guys in Albuquerque writing Basic interpreters for a
market of a few thousand hobbyists (as they were then called), but in
retrospect that was the optimal path to dominating microcomputer software. And
I know Brian Chesky and Joe Gebbia didn't feel like they were en route to the
big time as they were taking "professional" photos of their first hosts'
apartments. They were just trying to survive. But in retrospect that too was
the optimal path to dominating a big market.
How do you find users to recruit manually? If you build something to solve
your own problems, then you only have to find your peers, which is usually
straightforward. Otherwise you'll have to make a more deliberate effort to
locate the most promising vein of users. The usual way to do that is to get
some initial set of users by doing a comparatively untargeted launch, and then
to observe which kind seem most enthusiastic, and seek out more like them. For
example, Ben Silbermann noticed that a lot of the earliest Pinterest users
were interested in design, so he went to a conference of design bloggers to
recruit users, and that worked well.
**Delight**
You should take extraordinary measures not just to acquire users, but also to
make them happy. For as long as they could (which turned out to be
surprisingly long), Wufoo sent each new user a hand-written thank you note.
Your first users should feel that signing up with you was one of the best
choices they ever made. And you in turn should be racking your brains to think
of new ways to delight them.
Why do we have to teach startups this? Why is it counterintuitive for
founders? Three reasons, I think.
One is that a lot of startup founders are trained as engineers, and customer
service is not part of the training of engineers. You're supposed to build
things that are robust and elegant, not be slavishly attentive to individual
users like some kind of salesperson. Ironically, part of the reason
engineering is traditionally averse to handholding is that its traditions date
from a time when engineers were less powerful — when they were only in charge
of their narrow domain of building things, rather than running the whole show.
You can be ornery when you're Scotty, but not when you're Kirk.
Another reason founders don't focus enough on individual customers is that
they worry it won't scale. But when founders of larval startups worry about
this, I point out that in their current state they have nothing to lose. Maybe
if they go out of their way to make existing users super happy, they'll one
day have too many to do so much for. That would be a great problem to have.
See if you can make it happen. And incidentally, when it does, you'll find
that delighting customers scales better than you expected. Partly because you
can usually find ways to make anything scale more than you would have
predicted, and partly because delighting customers will by then have permeated
your culture.
I have never once seen a startup lured down a blind alley by trying too hard
to make their initial users happy.
But perhaps the biggest thing preventing founders from realizing how attentive
they could be to their users is that they've never experienced such attention
themselves. Their standards for customer service have been set by the
companies they've been customers of, which are mostly big ones. Tim Cook
doesn't send you a hand-written note after you buy a laptop. He can't. But you
can. That's one advantage of being small: you can provide a level of service
no big company can.
Once you realize that existing conventions are not the upper bound on user
experience, it's interesting in a very pleasant way to think about how far you
could go to delight your users.
**Experience**
I was trying to think of a phrase to convey how extreme your attention to
users should be, and I realized Steve Jobs had already done it: insanely
great. Steve wasn't just using "insanely" as a synonym for "very." He meant it
more literally — that one should focus on quality of execution to a degree
that in everyday life would be considered pathological.
All the most successful startups we've funded have, and that probably doesn't
surprise would-be founders. What novice founders don't get is what insanely
great translates to in a larval startup. When Steve Jobs started using that
phrase, Apple was already an established company. He meant the Mac (and its
documentation and even packaging — such is the nature of obsession) should be
insanely well designed and manufactured. That's not hard for engineers to
grasp. It's just a more extreme version of designing a robust and elegant
product.
What founders have a hard time grasping (and Steve himself might have had a
hard time grasping) is what insanely great morphs into as you roll the time
slider back to the first couple months of a startup's life. It's not the
product that should be insanely great, but the experience of being your user.
The product is just one component of that. For a big company it's necessarily
the dominant one. But you can and should give users an insanely great
experience with an early, incomplete, buggy product, if you make up the
difference with attentiveness.
Can, perhaps, but should? Yes. Over-engaging with early users is not just a
permissible technique for getting growth rolling. For most successful startups
it's a necessary part of the feedback loop that makes the product good. Making
a better mousetrap is not an atomic operation. Even if you start the way most
successful startups have, by building something you yourself need, the first
thing you build is never quite right. And except in domains with big penalties
for making mistakes, it's often better not to aim for perfection initially. In
software, especially, it usually works best to get something in front of users
as soon as it has a quantum of utility, and then see what they do with it.
Perfectionism is often an excuse for procrastination, and in any case your
initial model of users is always inaccurate, even if you're one of them.
The feedback you get from engaging directly with your earliest users will be
the best you ever get. When you're so big you have to resort to focus groups,
you'll wish you could go over to your users' homes and offices and watch them
use your stuff like you did when there were only a handful of them.
**Fire**
Sometimes the right unscalable trick is to focus on a deliberately narrow
market. It's like keeping a fire contained at first to get it really hot
before adding more logs.
That's what Facebook did. At first it was just for Harvard students. In that
form it only had a potential market of a few thousand people, but because they
felt it was really for them, a critical mass of them signed up. After Facebook
stopped being for Harvard students, it remained for students at specific
colleges for quite a while. When I interviewed Mark Zuckerberg at Startup
School, he said that while it was a lot of work creating course lists for each
school, doing that made students feel the site was their natural home.
Any startup that could be described as a marketplace usually has to start in a
subset of the market, but this can work for other startups as well. It's
always worth asking if there's a subset of the market in which you can get a
critical mass of users quickly.
Most startups that use the contained fire strategy do it unconsciously. They
build something for themselves and their friends, who happen to be the early
adopters, and only realize later that they could offer it to a broader market.
The strategy works just as well if you do it unconsciously. The biggest danger
of not being consciously aware of this pattern is for those who naively
discard part of it. E.g. if you don't build something for yourself and your
friends, or even if you do, but you come from the corporate world and your
friends are not early adopters, you'll no longer have a perfect initial market
handed to you on a platter.
Among companies, the best early adopters are usually other startups. They're
more open to new things both by nature and because, having just been started,
they haven't made all their choices yet. Plus when they succeed they grow
fast, and you with them. It was one of many unforeseen advantages of the YC
model (and specifically of making YC big) that B2B startups now have an
instant market of hundreds of other startups ready at hand.
**Meraki**
For hardware startups there's a variant of doing things that don't scale that
we call "pulling a Meraki." Although we didn't fund Meraki, the founders were
Robert Morris's grad students, so we know their history. They got started by
doing something that really doesn't scale: assembling their routers
themselves.
Hardware startups face an obstacle that software startups don't. The minimum
order for a factory production run is usually several hundred thousand
dollars. Which can put you in a catch-22: without a product you can't generate
the growth you need to raise the money to manufacture your product. Back when
hardware startups had to rely on investors for money, you had to be pretty
convincing to overcome this. The arrival of crowdfunding (or more precisely,
preorders) has helped a lot. But even so I'd advise startups to pull a Meraki
initially if they can. That's what Pebble did. The Pebbles assembled the first
several hundred watches themselves. If they hadn't gone through that phase,
they probably wouldn't have sold $10 million worth of watches when they did go
on Kickstarter.
Like paying excessive attention to early customers, fabricating things
yourself turns out to be valuable for hardware startups. You can tweak the
design faster when you're the factory, and you learn things you'd never have
known otherwise. Eric Migicovsky of Pebble said one of the things he learned
was "how valuable it was to source good screws." Who knew?
**Consult**
Sometimes we advise founders of B2B startups to take over-engagement to an
extreme, and to pick a single user and act as if they were consultants
building something just for that one user. The initial user serves as the form
for your mold; keep tweaking till you fit their needs perfectly, and you'll
usually find you've made something other users want too. Even if there aren't
many of them, there are probably adjacent territories that have more. As long
as you can find just one user who really needs something and can act on that
need, you've got a toehold in making something people want, and that's as much
as any startup needs initially.
Consulting is the canonical example of work that doesn't scale. But (like
other ways of bestowing one's favors liberally) it's safe to do it so long as
you're not being paid to. That's where companies cross the line. So long as
you're a product company that's merely being extra attentive to a customer,
they're very grateful even if you don't solve all their problems. But when
they start paying you specifically for that attentiveness — when they start
paying you by the hour — they expect you to do everything.
Another consulting-like technique for recruiting initially lukewarm users is
to use your software yourselves on their behalf. We did that at Viaweb. When
we approached merchants asking if they wanted to use our software to make
online stores, some said no, but they'd let us make one for them. Since we
would do anything to get users, we did. We felt pretty lame at the time.
Instead of organizing big strategic e-commerce partnerships, we were trying to
sell luggage and pens and men's shirts. But in retrospect it was exactly the
right thing to do, because it taught us how it would feel to merchants to use
our software. Sometimes the feedback loop was near instantaneous: in the
middle of building some merchant's site I'd find I needed a feature we didn't
have, so I'd spend a couple hours implementing it and then resume building the
site.
**Manual**
There's a more extreme variant where you don't just use your software, but are
your software. When you only have a small number of users, you can sometimes
get away with doing by hand things that you plan to automate later. This lets
you launch faster, and when you do finally automate yourself out of the loop,
you'll know exactly what to build because you'll have muscle memory from doing
it yourself.
When manual components look to the user like software, this technique starts
to have aspects of a practical joke. For example, the way Stripe delivered
"instant" merchant accounts to its first users was that the founders manually
signed them up for traditional merchant accounts behind the scenes.
Some startups could be entirely manual at first. If you can find someone with
a problem that needs solving and you can solve it manually, go ahead and do
that for as long as you can, and then gradually automate the bottlenecks. It
would be a little frightening to be solving users' problems in a way that
wasn't yet automatic, but less frightening than the far more common case of
having something automatic that doesn't yet solve anyone's problems.
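To make the pattern concrete, here is a minimal sketch, in Python, of what "being your software" can look like. Everything in it is hypothetical — the names, the queue file, the confirmation message — and it isn't how Stripe or anyone else actually did it; the point is only that the user-facing call looks automated while the real work lands on a to-do list a founder clears by hand, and you automate each step only when it becomes the bottleneck.

```python
# A minimal sketch (all names hypothetical) of the "be your software" pattern:
# the user-facing call looks automated, but behind the scenes it just files
# a task for a founder to do by hand.

import json
import uuid
from pathlib import Path

TASK_QUEUE = Path("manual_tasks.jsonl")  # founders work through this file by hand


def request_merchant_account(user_email: str) -> str:
    """Looks instant to the user; actually queues a manual task."""
    task = {
        "id": str(uuid.uuid4()),
        "kind": "set_up_merchant_account",  # a founder does this step by hand
        "user_email": user_email,
        "status": "pending",
    }
    with TASK_QUEUE.open("a") as f:
        f.write(json.dumps(task) + "\n")
    # The user just sees a normal confirmation message.
    return f"Your account is being set up. We'll email {user_email} shortly."


if __name__ == "__main__":
    print(request_merchant_account("merchant@example.com"))
```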
**Big**
I should mention one sort of initial tactic that usually doesn't work: the Big
Launch. I occasionally meet founders who seem to believe startups are
projectiles rather than powered aircraft, and that they'll make it big if and
only if they're launched with sufficient initial velocity. They want to launch
simultaneously in 8 different publications, with embargoes. And on a Tuesday,
of course, since they read somewhere that's the optimum day to launch
something.
It's easy to see how little launches matter. Think of some successful
startups. How many of their launches do you remember? All you need from a
launch is some initial core of users. How well you're doing a few months later
will depend more on how happy you made those users than how many there were of
them.
So why do founders think launches matter? A combination of solipsism and
laziness. They think what they're building is so great that everyone who hears
about it will immediately sign up. Plus it would be so much less work if you
could get users merely by broadcasting your existence, rather than recruiting
them one at a time. But even if what you're building really is great, getting
users will always be a gradual process — partly because great things are
usually also novel, but mainly because users have other things to think about.
Partnerships too usually don't work. They don't work for startups in general,
but they especially don't work as a way to get growth started. It's a common
mistake among inexperienced founders to believe that a partnership with a big
company will be their big break. Six months later they're all saying the same
thing: that was way more work than we expected, and we ended up getting
practically nothing out of it.
It's not enough just to do something extraordinary initially. You have to make
an extraordinary _effort_ initially. Any strategy that omits the effort —
whether it's expecting a big launch to get you users, or a big partner — is
ipso facto suspect.
**Vector**
The need to do something unscalably laborious to get started is so nearly
universal that it might be a good idea to stop thinking of startup ideas as
scalars. Instead we should try thinking of them as pairs of what you're going
to build, plus the unscalable thing(s) you're going to do initially to get the
company going.
It could be interesting to start viewing startup ideas this way, because now
that there are two components you can try to be imaginative about the second
as well as the first. But in most cases the second component will be what it
usually is — recruit users manually and give them an overwhelmingly good
experience — and the main benefit of treating startups as vectors will be to
remind founders they need to work hard in two dimensions.
In the best case, both components of the vector contribute to your company's
DNA: the unscalable things you have to do to get started are not merely a
necessary evil, but change the company permanently for the better. If you have
to be aggressive about user acquisition when you're small, you'll probably
still be aggressive when you're big. If you have to manufacture your own
hardware, or use your software on users' behalf, you'll learn things you
couldn't have learned otherwise. And most importantly, if you have to work
hard to delight users when you only have a handful of them, you'll keep doing
it when you have a lot.
** |
|
December 2014
Many startups go through a point a few months before they die where although
they have a significant amount of money in the bank, they're also losing a lot
each month, and revenue growth is either nonexistent or mediocre. The company
has, say, 6 months of runway. Or to put it more brutally, 6 months before
they're out of business. They expect to avoid that by raising more from
investors.
That last sentence is the fatal one.
There may be nothing founders are so prone to delude themselves about as how
interested investors will be in giving them additional funding. It's hard to
convince investors the first time too, but founders expect that. What bites
them the second time is a confluence of three forces:
1. The company is spending more now than it did the first time it raised money.
2. Investors have much higher standards for companies that have already raised money.
3. The company is now starting to read as a failure. The first time it raised money, it was neither a success nor a failure; it was too early to ask. Now it's possible to ask that question, and the default answer is failure, because at this point that is the default outcome.
I'm going to call the situation I described in the first paragraph "the fatal
pinch." I try to resist coining phrases, but making up a name for this
situation may snap founders into realizing when they're in it.
One of the things that makes the fatal pinch so dangerous is that it's self-
reinforcing. Founders overestimate their chances of raising more money, and so
are slack about reaching profitability, which further decreases their chances
of raising money.
Now that you know about the fatal pinch, how do you avoid it? Y Combinator
tells founders who raise money to act as if it's the last they'll ever get.
Because the self-reinforcing nature of this situation works the other way too:
the less you need further investment, the easier it is to get.
What do you do if you're already in the fatal pinch? The first step is to re-
evaluate the probability of raising more money. I will now, by an amazing feat
of clairvoyance, do this for you: the probability is zero.
Three options remain: you can shut down the company, you can increase how much
you make, and you can decrease how much you spend.
You should shut down the company if you're certain it will fail no matter what
you do. Then at least you can give back the money you have left, and save
yourself however many months you would have spent riding it down.
Companies rarely _have_ to fail though. What I'm really doing here is giving
you the option of admitting you've already given up.
If you don't want to shut down the company, that leaves increasing revenues
and decreasing expenses. In most startups, expenses = people, and decreasing
expenses = firing people. Deciding to fire people is usually hard, but
there's one case in which it shouldn't be: when there are people you already
know you should fire but you're in denial about it. If so, now's the time.
If that makes you profitable, or will enable you to make it to profitability
on the money you have left, you've avoided the immediate danger.
Otherwise you have three options: you either have to fire good people, get
some or all of the employees to take less salary for a while, or increase
revenues.
Getting people to take less salary is a weak solution that will only work when
the problem isn't too bad. If your current trajectory won't quite get you to
profitability but you can get over the threshold by cutting salaries a little,
you might be able to make the case to everyone for doing it. Otherwise you're
probably just postponing the problem, and that will be obvious to the people
whose salaries you're proposing to cut.
Which leaves two options, firing good people and making more money. While
trying to balance them, keep in mind the eventual goal: to be a successful
product company in the sense of having a single thing lots of people use.
You should lean more toward firing people if the source of your trouble is
overhiring. If you went out and hired 15 people before you even knew what you
were building, you've created a broken company. You need to figure out what
you're building, and it will probably be easier to do that with a handful of
people than 15. Plus those 15 people might not even be the ones you need for
whatever you end up building. So the solution may be to shrink and then figure
out what direction to grow in. After all, you're not doing those 15 people any
favors if you fly the company into the ground with them aboard. They'll all lose
their jobs eventually, along with all the time they expended on this doomed
company.
Whereas if you only have a handful of people, it may be better to focus on
trying to make more money. It may seem facile to suggest a startup make more
money, as if that could be done for the asking. Usually a startup is already
trying as hard as it can to sell whatever it sells. What I'm suggesting here
is not so much to try harder to make money but to try to make money in a
different way. For example, if you have only one person selling while the rest
are writing code, consider having everyone work on selling. What good will
more code do you when you're out of business? If you have to write code to
close a certain deal, go ahead; that follows from everyone working on selling.
But only work on whatever will get you the most revenue the soonest.
Another way to make money differently is to sell different things, and in
particular to do more consultingish work. I say consultingish because there is
a long slippery slope from making products to pure consulting, and you don't
have to go far down it before you start to offer something really attractive
to customers. Although your product may not be very appealing yet, if you're a
startup your programmers will often be way better than the ones your customers
have. Or you may have expertise in some new field they don't understand. So if
you change your sales conversations just a little from "do you want to buy our
product?" to "what do you need that you'd pay a lot for?" you may find it's
suddenly a lot easier to extract money from customers.
Be ruthlessly mercenary when you start doing this, though. You're trying to
save your company from death here, so make customers pay a lot, quickly. And
to the extent you can, try to avoid the worst pitfalls of consulting. The
ideal thing might be if you built a precisely defined derivative version of
your product for the customer, and it was otherwise a straight product sale.
You keep the IP and no billing by the hour.
In the best case, this consultingish work may not be just something you do to
survive, but may turn out to be the thing-that-doesn't-scale that defines your
company. Don't expect it to be, but as you dive into individual users' needs,
keep your eyes open for narrow openings that have wide vistas beyond.
There is usually so much demand for custom work that unless you're really
incompetent there has to be some point down the slope of consulting at which
you can survive. But I didn't use the term slippery slope by accident;
customers' insatiable demand for custom work will always be pushing you toward
the bottom. So while you'll probably survive, the problem now becomes to
survive with the least damage and distraction.
The good news is, plenty of successful startups have passed through near-death
experiences and gone on to flourish. You just have to realize in time that
you're near death. And if you're in the fatal pinch, you are.
** |
|
November 2014
It struck me recently how few of the most successful people I know are mean.
There are exceptions, but remarkably few.
Meanness isn't rare. In fact, one of the things the internet has shown us is
how mean people can be. A few decades ago, only famous people and professional
writers got to publish their opinions. Now everyone can, and we can all see
the long tail of meanness that had previously been hidden.
And yet while there are clearly a lot of mean people out there, there are next
to none among the most successful people I know. What's going on here? Are
meanness and success inversely correlated?
Part of what's going on, of course, is selection bias. I only know people who
work in certain fields: startup founders, programmers, professors. I'm willing
to believe that successful people in other fields are mean. Maybe successful
hedge fund managers are mean; I don't know enough to say. It seems quite
likely that most successful drug lords are mean. But there are at least big
chunks of the world that mean people don't rule, and that territory seems to
be growing.
My wife and Y Combinator cofounder Jessica is one of those rare people who
have x-ray vision for character. Being married to her is like standing next to
an airport baggage scanner. She came to the startup world from investment
banking, and she has always been struck both by how consistently successful
startup founders turn out to be good people, and how consistently bad people
fail as startup founders.
Why? I think there are several reasons. One is that being mean makes you
stupid. That's why I hate fights. You never do your best work in a fight,
because fights are not sufficiently general. Winning is always a function of
the situation and the people involved. You don't win fights by thinking of big
ideas but by thinking of tricks that work in one particular case. And yet
fighting is just as much work as thinking about real problems. Which is
particularly painful to someone who cares how their brain is used: your brain
goes fast but you get nowhere, like a car spinning its wheels.
Startups don't win by attacking. They win by transcending. There are
exceptions of course, but usually the way to win is to race ahead, not to stop
and fight.
Another reason mean founders lose is that they can't get the best people to
work for them. They can hire people who will put up with them because they
need a job. But the best people have other options. A mean person can't
convince the best people to work for him unless he is super convincing. And
while having the best people helps any organization, it's critical for
startups.
There is also a complementary force at work: if you want to build great
things, it helps to be driven by a spirit of benevolence. The startup founders
who end up richest are not the ones driven by money. The ones driven by money
take the big acquisition offer that nearly every successful startup gets en
route. The ones who keep going are driven by something else. They may not
say so explicitly, but they're usually trying to improve the world. Which
means people with a desire to improve the world have a natural advantage.
The exciting thing is that startups are not just one random type of work in
which meanness and success are inversely correlated. This kind of work is the
future.
For most of history success meant control of scarce resources. One got that by
fighting, whether literally in the case of pastoral nomads driving hunter-
gatherers into marginal lands, or metaphorically in the case of Gilded Age
financiers contending with one another to assemble railroad monopolies. For
most of history, success meant success at zero-sum games. And in most of them
meanness was not a handicap but probably an advantage.
That is changing. Increasingly the games that matter are not zero-sum.
Increasingly you win not by fighting to get control of a scarce resource, but
by having new ideas and building new things.
There have long been games where you won by having new ideas. In the third
century BC, Archimedes won by doing that. At least until an invading Roman
army killed him. Which illustrates why this change is happening: for new ideas
to matter, you need a certain degree of civil order. And not just not being at
war. You also need to prevent the sort of economic violence that nineteenth
century magnates practiced against one another and communist countries
practiced against their citizens. People need to feel that what they create
can't be stolen.
That has always been the case for thinkers, which is why this trend began with
them. When you think of successful people from history who weren't ruthless,
you get mathematicians and writers and artists. The exciting thing is that
their m.o. seems to be spreading. The games played by intellectuals are
leaking into the real world, and this is reversing the historical polarity of
the relationship between meanness and success.
So I'm really glad I stopped to think about this. Jessica and I have always
worked hard to teach our kids not to be mean. We tolerate noise and mess and
junk food, but not meanness. And now I have both an additional reason to crack
down on it, and an additional argument to use when I do: that being mean makes
you fail.
** |
|
June 2013
_(This talk was written for an audience of investors.)_
Y Combinator has now funded 564 startups including the current batch, which
has 53. The total valuation of the 287 that have valuations (either by raising
an equity round, getting acquired, or dying) is about $11.7 billion, and the
511 prior to the current batch have collectively raised about $1.7 billion.
As usual those numbers are dominated by a few big winners. The top 10 startups
account for 8.6 of that 11.7 billion. But there is a peloton of younger
startups behind them. There are about 40 more that have a shot at being really
big.
Things got a little out of hand last summer when we had 84 companies in the
batch, so we tightened up our filter to decrease the batch size. Several
journalists have tried to interpret that as evidence for some macro story they
were telling, but the reason had nothing to do with any external trend. The
reason was that we discovered we were using an n² algorithm, and we needed to
buy time to fix it. Fortunately we've come up with several techniques for
sharding YC, and the problem now seems to be fixed. With a new more scaleable
model and only 53 companies, the current batch feels like a walk in the park.
I'd guess we can grow another 2 or 3x before hitting the next bottleneck.
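The essay doesn't say what the n² algorithm was. One plausible reading — an assumption on my part, not something stated here — is that anything involving pairwise interactions among n companies grows as n(n-1)/2, and sharding the batch into smaller groups caps the quadratic term at the group size. A back-of-the-envelope sketch:

```python
# Rough sketch of why sharding helps with anything that grows as n^2.
# The "pairwise interactions" model is an assumption for illustration,
# not something the essay specifies.

def pairwise(n: int) -> int:
    """Number of pairs among n companies: n(n-1)/2."""
    return n * (n - 1) // 2


def sharded(n: int, shard_size: int) -> int:
    """Pairs remaining if the batch is split into groups of shard_size."""
    full_groups, remainder = divmod(n, shard_size)
    return full_groups * pairwise(shard_size) + pairwise(remainder)


for n in (53, 84):
    print(n, "companies:", pairwise(n), "pairs unsharded,",
          sharded(n, 20), "if split into groups of about 20")
# 84 companies: 3486 pairs unsharded vs. 766 when split into groups of about 20.
```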
One consequence of funding such a large number of startups is that we see
trends early. And since fundraising is one of the main things we help startups
with, we're in a good position to notice trends in investing.
I'm going to take a shot at describing where these trends are leading. Let's
start with the most basic question: will the future be better or worse than
the past? Will investors, in the aggregate, make more money or less?
I think more. There are multiple forces at work, some of which will decrease
returns, and some of which will increase them. I can't predict for sure which
forces will prevail, but I'll describe them and you can decide for yourself.
There are two big forces driving change in startup funding: it's becoming
cheaper to start a startup, and startups are becoming a more normal thing to
do.
When I graduated from college in 1986, there were essentially two options: get
a job or go to grad school. Now there's a third: start your own company.
That's a big change. In principle it was possible to start your own company in
1986 too, but it didn't seem like a real possibility. It seemed possible to
start a consulting company, or a niche product company, but it didn't seem
possible to start a company that would become big.
That kind of change, from 2 paths to 3, is the sort of big social shift that
only happens once every few generations. I think we're still at the beginning
of this one. It's hard to predict how big a deal it will be. As big a deal as
the Industrial Revolution? Maybe. Probably not. But it will be a big enough
deal that it takes almost everyone by surprise, because those big social
shifts always do.
One thing we can say for sure is that there will be a lot more startups. The
monolithic, hierarchical companies of the mid 20th century are being replaced
by networks of smaller companies. This process is not just something happening
now in Silicon Valley. It started decades ago, and it's happening as far
afield as the car industry. It has a long way to run.
The other big driver of change is that startups are becoming cheaper to start.
And in fact the two forces are related: the decreasing cost of starting a
startup is one of the reasons startups are becoming a more normal thing to do.
The fact that startups need less money means founders will increasingly have
the upper hand over investors. You still need just as much of their energy and
imagination, but they don't need as much of your money. Because founders have
the upper hand, they'll retain an increasingly large share of the stock in,
and control of, their companies. Which means investors will get less stock and
less control.
Does that mean investors will make less money? Not necessarily, because there
will be more good startups. The total amount of desirable startup stock
available to investors will probably increase, because the number of desirable
startups will probably grow faster than the percentage they sell to investors
shrinks.
There's a rule of thumb in the VC business that there are about 15 companies a
year that will be really successful. Although a lot of investors unconsciously
treat this number as if it were some sort of cosmological constant, I'm
certain it isn't. There are probably limits on the rate at which technology
can develop, but that's not the limiting factor now. If it were, each
successful startup would be founded the month it became possible, and that is
not the case. Right now the limiting factor on the number of big hits is the
number of sufficiently good founders starting companies, and that number can
and will increase. There are still a lot of people who'd make great founders
who never end up starting a company. You can see that from how randomly some
of the most successful startups got started. So many of the biggest startups
almost didn't happen that there must be a lot of equally good startups that
actually didn't happen.
There might be 10x or even 50x more good founders out there. As more of them
go ahead and start startups, those 15 big hits a year could easily become 50
or even 100.
What about returns, though? Are we heading for a world in which returns will
be pinched by increasingly high valuations? I think the top firms will
actually make more money than they have in the past. High returns don't come
from investing at low valuations. They come from investing in the companies
that do really well. So if there are more of those to be had each year, the
best pickers should have more hits.
This means there should be more variability in the VC business. The firms that
can recognize and attract the best startups will do even better, because there
will be more of them to recognize and attract. Whereas the bad firms will get
the leftovers, as they do now, and yet pay a higher price for them.
Nor do I think it will be a problem that founders keep control of their
companies for longer. The empirical evidence on that is already clear:
investors make more money as founders' bitches than their bosses. Though
somewhat humiliating, this is actually good news for investors, because it
takes less time to serve founders than to micromanage them.
What about angels? I think there is a lot of opportunity there. It used to
suck to be an angel investor. You couldn't get access to the best deals,
unless you got lucky like Andy Bechtolsheim, and when you did invest in a
startup, VCs might try to strip you of your stock when they arrived later. Now
an angel can go to something like Demo Day or AngelList and have access to the
same deals VCs do. And the days when VCs could wash angels out of the cap
table are long gone.
I think one of the biggest unexploited opportunities in startup investing
right now is angel-sized investments made quickly. Few investors understand
the cost that raising money from them imposes on startups. When the company
consists only of the founders, everything grinds to a halt during fundraising,
which can easily take 6 weeks. The current high cost of fundraising means
there is room for low-cost investors to undercut the rest. And in this
context, low-cost means deciding quickly. If there were a reputable investor
who invested $100k on good terms and promised to decide yes or no within 24
hours, they'd get access to almost all the best deals, because every good
startup would approach them first. It would be up to them to pick, because
every bad startup would approach them first too, but at least they'd see
everything. Whereas if an investor is notorious for taking a long time to make
up their mind or negotiating a lot about valuation, founders will save them
for last. And in the case of the most promising startups, which tend to have
an easy time raising money, last can easily become never.
Will the number of big hits grow linearly with the total number of new
startups? Probably not, for two reasons. One is that the scariness of starting
a startup in the old days was a pretty effective filter. Now that the cost of
failing is becoming lower, we should expect founders to do it more. That's not
a bad thing. It's common in technology for an innovation that decreases the
cost of failure to increase the number of failures and yet leave you net
ahead.
The other reason the number of big hits won't grow proportionately to the
number of startups is that there will start to be an increasing number of idea
clashes. Although the finiteness of the number of good ideas is not the reason
there are only 15 big hits a year, the number has to be finite, and the more
startups there are, the more we'll see multiple companies doing the same thing
at the same time. It will be interesting, in a bad way, if idea clashes become
a lot more common.
Mostly because of the increasing number of early failures, the startup
business of the future won't simply be the same shape, scaled up. What used to
be an obelisk will become a pyramid. It will be a little wider at the top, but
a lot wider at the bottom.
What does that mean for investors? One thing it means is that there will be
more opportunities for investors at the earliest stage, because that's where
the volume of our imaginary solid is growing fastest. Imagine the obelisk of
investors that corresponds to the obelisk of startups. As it widens out into a
pyramid to match the startup pyramid, all the contents are adhering to the
top, leaving a vacuum at the bottom.
That opportunity for investors mostly means an opportunity for new investors,
because the degree of risk an existing investor or firm is comfortable taking
is one of the hardest things for them to change. Different types of investors
are adapted to different degrees of risk, but each has its specific degree of
risk deeply imprinted on it, not just in the procedures they follow but in the
personalities of the people who work there.
I think the biggest danger for VCs, and also the biggest opportunity, is at
the series A stage. Or rather, what used to be the series A stage before
series As turned into de facto series B rounds.
Right now, VCs often knowingly invest too much money at the series A stage.
They do it because they feel they need to get a big chunk of each series A
company to compensate for the opportunity cost of the board seat it consumes.
Which means when there is a lot of competition for a deal, the number that
moves is the valuation (and thus amount invested) rather than the percentage
of the company being sold. Which means, especially in the case of more
promising startups, that series A investors often make companies take more
money than they want.
Some VCs lie and claim the company really needs that much. Others are more
candid, and admit their financial models require them to own a certain
percentage of each company. But we all know the amounts being raised in series
A rounds are not determined by asking what would be best for the companies.
They're determined by VCs starting from the amount of the company they want to
own, and the market setting the valuation and thus the amount invested.
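As a toy illustration of this mechanism (the numbers are mine, not from the essay): if the VC's target ownership is fixed and competition moves the valuation, the check size moves with it, whether or not the company wants that much money. Run the arithmetic from the amount the company needs instead, and it's the percentage that floats.

```python
# Toy numbers, chosen only to illustrate the mechanism described above.

target_ownership = 0.20  # fraction the VC's financial model says it must own

# VC-driven round: ownership is fixed, so a hotter deal means a bigger check.
for post_money in (10_000_000, 20_000_000):
    check = target_ownership * post_money
    print(f"post-money ${post_money:,}: company must take ${check:,.0f}")

# Company-driven round: the amount needed is fixed, so ownership floats.
amount_needed = 2_000_000
for post_money in (10_000_000, 20_000_000):
    print(f"post-money ${post_money:,}: VC ends up with {amount_needed / post_money:.0%}")
```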
Like a lot of bad things, this didn't happen intentionally. The VC business
backed into it as their initial assumptions gradually became obsolete. The
traditions and financial models of the VC business were established when
founders needed investors more. In those days it was natural for founders to
sell VCs a big chunk of their company in the series A round. Now founders
would prefer to sell less, and VCs are digging in their heels because they're
not sure if they can make money buying less than 20% of each series A company.
The reason I describe this as a danger is that series A investors are
increasingly at odds with the startups they supposedly serve, and that tends
to come back to bite you eventually. The reason I describe it as an
opportunity is that there is now a lot of potential energy built up, as the
market has moved away from VCs' traditional business model. Which means the
first VC to break ranks and start to do series A rounds for as much equity as
founders want to sell (and with no "option pool" that comes only from the
founders' shares) stands to reap huge benefits.
What will happen to the VC business when that happens? Hell if I know. But I
bet that particular firm will end up ahead. If one top-tier VC firm started to
do series A rounds that started from the amount the company needed to raise
and let the percentage acquired vary with the market, instead of the other way
around, they'd instantly get almost all the best startups. And that's where
the money is.
You can't fight market forces forever. Over the last decade we've seen the
percentage of the company sold in series A rounds creep inexorably downward.
40% used to be common. Now VCs are fighting to hold the line at 20%. But I am
daily waiting for the line to collapse. It's going to happen. You may as well
anticipate it, and look bold.
Who knows, maybe VCs will make more money by doing the right thing. It
wouldn't be the first time that happened. Venture capital is a business where
occasional big successes generate hundredfold returns. How much confidence can
you really have in financial models for something like that anyway? The big
successes only have to get a tiny bit less occasional to compensate for a 2x
decrease in the stock sold in series A rounds.
If you want to find new opportunities for investing, look for things founders
complain about. Founders are your customers, and the things they complain
about are unsatisfied demand. I've given two examples of things founders
complain about most—investors who take too long to make up their minds, and
excessive dilution in series A rounds—so those are good places to look now.
But the more general recipe is: do something founders want.
** |
|
July 2010
I realized recently that what one thinks about in the shower in the morning is
more important than I'd thought. I knew it was a good time to have ideas. Now
I'd go further: now I'd say it's hard to do a really good job on anything you
don't think about in the shower.
Everyone who's worked on difficult problems is probably familiar with the
phenomenon of working hard to figure something out, failing, and then suddenly
seeing the answer a bit later while doing something else. There's a kind of
thinking you do without trying to. I'm increasingly convinced this type of
thinking is not merely helpful in solving hard problems, but necessary. The
tricky part is, you can only control it indirectly.
I think most people have one top idea in their mind at any given time. That's
the idea their thoughts will drift toward when they're allowed to drift
freely. And this idea will thus tend to get all the benefit of that type of
thinking, while others are starved of it. Which means it's a disaster to let
the wrong idea become the top one in your mind.
What made this clear to me was having an idea I didn't want as the top one in
my mind for two long stretches.
I'd noticed startups got way less done when they started raising money, but it
was not till we ourselves raised money that I understood why. The problem is
not the actual time it takes to meet with investors. The problem is that once
you start raising money, raising money becomes the top idea in your mind. That
becomes what you think about when you take a shower in the morning. And that
means other questions aren't.
I'd hated raising money when I was running Viaweb, but I'd forgotten why I
hated it so much. When we raised money for Y Combinator, I remembered. Money
matters are particularly likely to become the top idea in your mind. The
reason is that they have to be. It's hard to get money. It's not the sort of
thing that happens by default. It's not going to happen unless you let it
become the thing you think about in the shower. And then you'll make little
progress on anything else you'd rather be working on.
(I hear similar complaints from friends who are professors. Professors
nowadays seem to have become professional fundraisers who do a little research
on the side. It may be time to fix that.)
The reason this struck me so forcibly is that for most of the preceding 10
years I'd been able to think about what I wanted. So the contrast when I
couldn't was sharp. But I don't think this problem is unique to me, because
just about every startup I've seen grinds to a halt when they start raising
money or talking to acquirers.
You can't directly control where your thoughts drift. If you're controlling
them, they're not drifting. But you can control them indirectly, by
controlling what situations you let yourself get into. That has been the
lesson for me: be careful what you let become critical to you. Try to get
yourself into situations where the most urgent problems are ones you want to
think about.
You don't have complete control, of course. An emergency could push other
thoughts out of your head. But barring emergencies you have a good deal of
indirect control over what becomes the top idea in your mind.
I've found there are two types of thoughts especially worth avoiding:
thoughts like the Nile Perch in the way they push out more interesting ideas.
One I've already mentioned: thoughts about money. Getting money is almost by
definition an attention sink. The other is disputes. These too are engaging in
the wrong way: they have the same velcro-like shape as genuinely interesting
ideas, but without the substance. So avoid disputes if you want to get real
work done.
Even Newton fell into this trap. After publishing his theory of colors in 1672
he found himself distracted by disputes for years, finally concluding that the
only solution was to stop publishing:
> I see I have made myself a slave to Philosophy, but if I get free of Mr
> Linus's business I will resolutely bid adew to it eternally, excepting what
> I do for my privat satisfaction or leave to come out after me. For I see a
> man must either resolve to put out nothing new or become a slave to defend
> it.
Linus and his students at Liege were among the more tenacious critics.
Newton's biographer Westfall seems to feel he was overreacting:
> Recall that at the time he wrote, Newton's "slavery" consisted of five
> replies to Liege, totalling fourteen printed pages, over the course of a
> year.
I'm more sympathetic to Newton. The problem was not the 14 pages, but the pain
of having this stupid controversy constantly reintroduced as the top idea in a
mind that wanted so eagerly to think about other things.
Turning the other cheek turns out to have selfish advantages. Someone who does
you an injury hurts you twice: first by the injury itself, and second by
taking up your time afterward thinking about it. If you learn to ignore
injuries you can at least avoid the second half. I've found I can to some
extent avoid thinking about nasty things people have done to me by telling
myself: this doesn't deserve space in my head. I'm always delighted to find
I've forgotten the details of disputes, because that means I hadn't been
thinking about them. My wife thinks I'm more forgiving than she is, but my
motives are purely selfish.
I suspect a lot of people aren't sure what's the top idea in their mind at any
given time. I'm often mistaken about it. I tend to think it's the idea I'd
want to be the top one, rather than the one that is. But it's easy to figure
this out: just take a shower. What topic do your thoughts keep returning to?
If it's not what you want to be thinking about, you may want to change
something.
** |
|
September 2009
Publishers of all types, from news to music, are unhappy that consumers won't
pay for content anymore. At least, that's how they see it.
In fact consumers never really were paying for content, and publishers weren't
really selling it either. If the content was what they were selling, why has
the price of books or music or movies always depended mostly on the format?
Why didn't better content cost more?
A copy of _Time_ costs $5 for 58 pages, or 8.6 cents a page. _The Economist_
costs $7 for 86 pages, or 8.1 cents a page. Better journalism is actually
slightly cheaper.
Almost every form of publishing has been organized as if the medium was what
they were selling, and the content was irrelevant. Book publishers, for
example, set prices based on the cost of producing and distributing books.
They treat the words printed in the book the same way a textile manufacturer
treats the patterns printed on its fabrics.
Economically, the print media are in the business of marking up paper. We can
all imagine an old-style editor getting a scoop and saying "this will sell a
lot of papers!" Cross out that final S and you're describing their business
model. The reason they make less money now is that people don't need as much
paper.
A few months ago I ran into a friend in a cafe. I had a copy of the _New York
Times_ , which I still occasionally buy on weekends. As I was leaving I
offered it to him, as I've done countless times before in the same situation.
But this time something new happened. I felt that sheepish feeling you get
when you offer someone something worthless. "Do you, er, want a printout of
yesterday's news?" I asked. (He didn't.)
Now that the medium is evaporating, publishers have nothing left to sell. Some
seem to think they're going to sell content—that they were always in the
content business, really. But they weren't, and it's unclear whether anyone
could be.
**Selling**
There have always been people in the business of selling information, but that
has historically been a distinct business from publishing. And the business of
selling information to consumers has always been a marginal one. When I was a
kid there were people who used to sell newsletters containing stock tips,
printed on colored paper that made them hard for the copiers of the day to
reproduce. That is a different world, both culturally and economically, from
the one publishers currently inhabit.
People will pay for information they think they can make money from. That's
why they paid for those stock tip newsletters, and why companies pay now for
Bloomberg terminals and Economist Intelligence Unit reports. But will people
pay for information otherwise? History offers little encouragement.
If audiences were willing to pay more for better content, why wasn't anyone
already selling it to them? There was no reason you couldn't have done that in
the era of physical media. So were the print media and the music labels simply
overlooking this opportunity? Or is it, rather, nonexistent?
What about iTunes? Doesn't that show people will pay for content? Well, not
really. iTunes is more of a tollbooth than a store. Apple controls the default
path onto the iPod. They offer a convenient list of songs, and whenever you
choose one they ding your credit card for a small amount, just below the
threshold of attention. Basically, iTunes makes money by taxing people, not
selling them stuff. You can only do that if you own the channel, and even then
you don't make much from it, because a toll has to be ignorable to work. Once
a toll becomes painful, people start to find ways around it, and that's pretty
easy with digital content.
The situation is much the same with digital books. Whoever controls the device
sets the terms. It's in their interest for content to be as cheap as possible,
and since they own the channel, there's a lot they can do to drive prices
down. Prices will fall even further once writers realize they don't need
publishers. Getting a book printed and distributed is a daunting prospect for
a writer, but most can upload a file.
Is software a counterexample? People pay a lot for desktop software, and
that's just information. True, but I don't think publishers can learn much
from software. Software companies can charge a lot because (a) many of the
customers are businesses, who get in trouble if they use pirated versions, and
(b) though in form merely information, software is treated by both maker and
purchaser as a different type of thing from a song or an article. A Photoshop
user needs Photoshop in a way that no one needs a particular song or article.
That's why there's a separate word, "content," for information that's not
software. Software is a different business. Software and content blur together
in some of the most lightweight software, like casual games. But those are
usually free. To make money the way software companies do, publishers would
have to become software companies, and being publishers gives them no
particular head start in that domain.
The most promising countertrend is the premium cable channel. People still pay
for those. But broadcasting isn't publishing: you're not selling a copy of
something. That's one reason the movie business hasn't seen their revenues
decline the way the news and music businesses have. They only have one foot in
publishing.
To the extent the movie business can avoid becoming publishers, they may avoid
publishing's problems. But there are limits to how well they'll be able to do
that. Once publishing—giving people copies—becomes the most natural way of
distributing your content, it probably doesn't work to stick to old forms of
distribution just because you make more that way. If free copies of your
content are available online, then you're competing with publishing's form of
distribution, and that's just as bad as being a publisher.
Apparently some people in the music business hope to retroactively convert it
away from publishing, by getting listeners to pay for subscriptions. It seems
unlikely that will work if they're just streaming the same files you can get
as mp3s.
**Next**
What happens to publishing if you can't sell content? You have two choices:
give it away and make money from it indirectly, or find ways to embody it in
things people will pay for.
The first is probably the future of most current media. Give music away and
make money from concerts and t-shirts. Publish articles for free and make
money from one of a dozen permutations of advertising. Both publishers and
investors are down on advertising at the moment, but it has more potential
than they realize.
I'm not claiming that potential will be realized by the existing players. The
optimal ways to make money from the written word probably require different
words written by different people.
It's harder to say what will happen to movies. They could evolve into ads. Or
they could return to their roots and make going to the theater a treat. If
they made the experience good enough, audiences might start to prefer it to
watching pirated movies at home. Or maybe the movie business will dry up,
and the people working in it will go to work for game developers.
I don't know how big embodying information in physical form will be. It may be
surprisingly large; people overvalue physical stuff. There should remain some
market for printed books, at least.
I can see the evolution of book publishing in the books on my shelves. Clearly
at some point in the 1960s the big publishing houses started to ask: how
cheaply can we make books before people refuse to buy them? The answer turned
out to be one step short of phonebooks. As long as it isn't floppy, consumers
still perceive it as a book.
That worked as long as buying printed books was the only way to read them. If
printed books are optional, publishers will have to work harder to entice
people to buy them. There should be some market, but it's hard to foresee how
big, because its size will depend not on macro trends like the amount people
read, but on the ingenuity of individual publishers.
Some magazines may thrive by focusing on the magazine as a physical object.
Fashion magazines could be made lush in a way that would be hard to match
digitally, at least for a while. But this is probably not an option for most
magazines.
I don't know exactly what the future will look like, but I'm not too worried
about it. This sort of change tends to create as many good things as it kills.
Indeed, the really interesting question is not what will happen to existing
forms, but what new forms will appear.
The reason I've been writing about existing forms is that I don't _know_ what
new forms will appear. But though I can't predict specific winners, I can
offer a recipe for recognizing them. When you see something that's taking
advantage of new technology to give people something they want that they
couldn't have before, you're probably looking at a winner. And when you see
something that's merely reacting to new technology in an attempt to preserve
some existing source of revenue, you're probably looking at a loser.
** |
|
April 2009
Recently I realized I'd been holding two ideas in my head that would explode
if combined.
The first is that startups may represent a new economic phase, on the scale of
the Industrial Revolution. I'm not sure of this, but there seems a decent
chance it's true. People are dramatically more productive as founders or early
employees of startups—imagine how much less Larry and Sergey would have
achieved if they'd gone to work for a big company—and that scale of
improvement can change social customs.
The second idea is that startups are a type of business that flourishes in
certain places that specialize in it—that Silicon Valley specializes in
startups in the same way Los Angeles specializes in movies, or New York in
finance.
What if both are true? What if startups are both a new economic phase and also
a type of business that only flourishes in certain centers?
If so, this revolution is going to be particularly revolutionary. All previous
revolutions have spread. Agriculture, cities, and industrialization all spread
widely. If startups end up being like the movie business, with just a handful
of centers and one dominant one, that's going to have novel consequences.
There are already signs that startups may not spread particularly well. The
spread of startups seems to be proceeding slower than the spread of the
Industrial Revolution, despite the fact that communication is so much faster
now.
Within a few decades of the founding of Boulton & Watt there were steam
engines scattered over northern Europe and North America. Industrialization
didn't spread much beyond those regions for a while. It only spread to places
where there was a strong middle class—countries where a private citizen could
make a fortune without having it confiscated. Otherwise it wasn't worth
investing in factories. But in a country with a strong middle class it was
easy for industrial techniques to take root. An individual mine or factory
owner could decide to install a steam engine, and within a few years he could
probably find someone local to make him one. So steam engines spread fast. And
they spread widely, because the locations of mines and factories were
determined by features like rivers, harbors, and sources of raw materials.
Startups don't seem to spread so well, partly because they're more a social
than a technical phenomenon, and partly because they're not tied to geography.
An individual European manufacturer could import industrial techniques and
they'd work fine. This doesn't seem to work so well with startups: you need a
community of expertise, as you do in the movie business. Plus there aren't
the same forces driving startups to spread. Once railroads or electric power
grids were invented, every region had to have them. An area without railroads
or power was a rich potential market. But this isn't true with startups.
There's no need for a Microsoft of France or Google of Germany.
Governments may decide they want to encourage startups locally, but government
policy can't call them into being the way a genuine need could.
How will this all play out? If I had to predict now, I'd say that startups
will spread, but very slowly, because their spread will be driven not by
government policies (which won't work) or by market need (which doesn't exist)
but, to the extent that it happens at all, by the same random factors that
have caused startup culture to spread thus far. And such random factors will
increasingly be outweighed by the pull of existing startup hubs.
Silicon Valley is where it is because William Shockley wanted to move back to
Palo Alto, where he grew up, and the experts he lured west to work with him
liked it so much they stayed. Seattle owes much of its position as a tech
center to the same cause: Gates and Allen wanted to move home. Otherwise
Albuquerque might have Seattle's place in the rankings. Boston is a tech
center because it's the intellectual capital of the US and probably the world.
And if Battery Ventures hadn't turned down Facebook, Boston would be
significantly bigger now on the startup radar screen.
But of course it's not a coincidence that Facebook got funded in the Valley
and not Boston. There are more and bolder investors in Silicon Valley than in
Boston, and even undergrads know it.
Boston's case illustrates the difficulty you'd have establishing a new startup
hub this late in the game. If you wanted to create a startup hub by
reproducing the way existing ones happened, the way to do it would be to
establish a first-rate research university in a place so nice that rich people
wanted to live there. Then the town would be hospitable to both groups you
need: both founders and investors. That's the combination that yielded Silicon
Valley. But Silicon Valley didn't have Silicon Valley to compete with. If you
tried now to create a startup hub by planting a great university in a nice
place, it would have a harder time getting started, because many of the best
startups it produced would be sucked away to existing startup hubs.
Recently I suggested a potential shortcut: pay startups to move. Once you had
enough good startups in one place, it would create a self-sustaining chain
reaction. Founders would start to move there without being paid, because that
was where their peers were, and investors would appear too, because that was
where the deals were.
In practice I doubt any government would have the balls to try this, or the
brains to do it right. I didn't mean it as a practical suggestion, but more as
an exploration of the lower bound of what it would take to create a startup
hub deliberately.
The most likely scenario is (1) that no government will successfully establish
a startup hub, and (2) that the spread of startup culture will thus be driven
by the random factors that have driven it so far, but (3) that these factors
will be increasingly outweighed by the pull of existing startup hubs. Result:
this revolution, if it is one, will be unusually localized.
* * *
February 2008
The fiery reaction to the release of Arc had an unexpected consequence: it
made me realize I had a design philosophy. The main complaint of the more
articulate critics was that Arc seemed so flimsy. After years of working on
it, all I had to show for myself were a few thousand lines of macros? Why
hadn't I worked on more substantial problems?
As I was mulling over these remarks it struck me how familiar they seemed.
This was exactly the kind of thing people said at first about Viaweb, and Y
Combinator, and most of my essays.
When we launched Viaweb, it seemed laughable to VCs and e-commerce "experts."
We were just a couple guys in an apartment, which did not seem cool in 1995
the way it does now. And the thing we'd built, as far as they could tell,
wasn't even software. Software, to them, equalled big, honking Windows apps.
Since Viaweb was the first web-based app they'd seen, it seemed to be nothing
more than a website. They were even more contemptuous when they discovered
that Viaweb didn't process credit card transactions (we didn't for the whole
first year). Transaction processing seemed to them what e-commerce was all
about. It sounded serious and difficult.
And yet, mysteriously, Viaweb ended up crushing all its competitors.
The initial reaction to Y Combinator was almost identical. It seemed laughably
lightweight. Startup funding meant series A rounds: millions of dollars given
to a small number of startups founded by people with established credentials
after months of serious, businesslike meetings, on terms described in a
document a foot thick. Y Combinator seemed inconsequential. It's too early to
say yet whether Y Combinator will turn out like Viaweb, but judging from the
number of imitations, a lot of people seem to think we're on to something.
I can't measure whether my essays are successful, except in page views, but
the reaction to them is at least different from when I started. At first the
default reaction of the Slashdot trolls was (translated into articulate
terms): "Who is this guy and what authority does he have to write about these
topics? I haven't read the essay, but there's no way anything so short and
written in such an informal style could have anything useful to say about such
and such topic, when people with degrees in the subject have already written
many thick books about it." Now there's a new generation of trolls on a new
generation of sites, but they have at least started to omit the initial "Who
is this guy?"
Now people are saying the same things about Arc that they said at first about
Viaweb and Y Combinator and most of my essays. Why the pattern? The answer, I
realized, is that my m.o. for all four has been the same.
Here it is: I like to find (a) simple solutions (b) to overlooked problems (c)
that actually need to be solved, and (d) deliver them as informally as
possible, (e) starting with a very crude version 1, then (f) iterating
rapidly.
When I first laid out these principles explicitly, I noticed something
striking: this is practically a recipe for generating a contemptuous initial
reaction. Though simple solutions are better, they don't seem as impressive as
complex ones. Overlooked problems are by definition problems that most people
think don't matter. Delivering solutions in an informal way means that instead
of judging something by the way it's presented, people have to actually
understand it, which is more work. And starting with a crude version 1 means
your initial effort is always small and incomplete.
I'd noticed, of course, that people never seemed to grasp new ideas at first.
I thought it was just because most people were stupid. Now I see there's more
to it than that. Like a contrarian investment fund, someone following this
strategy will almost always be doing things that seem wrong to the average
person.
As with contrarian investment strategies, that's exactly the point. This
technique is successful (in the long term) because it gives you all the
advantages other people forgo by trying to seem legit. If you work on
overlooked problems, you're more likely to discover new things, because you
have less competition. If you deliver solutions informally, you (a) save all
the effort you would have had to expend to make them look impressive, and (b)
avoid the danger of fooling yourself as well as your audience. And if you
release a crude version 1 then iterate, your solution can benefit from the
imagination of nature, which, as Feynman pointed out, is more powerful than
your own.
In the case of Viaweb, the simple solution was to make the software run on the
server. The overlooked problem was to generate web sites automatically; in
1995, online stores were all made by hand by human designers, but we knew this
wouldn't scale. The part that actually mattered was graphic design, not
transaction processing. The informal delivery mechanism was me, showing up in
jeans and a t-shirt at some retailer's office. And the crude version 1 was, if
I remember correctly, less than 10,000 lines of code when we launched.
The power of this technique extends beyond startups and programming languages
and essays. It probably extends to any kind of creative work. Certainly it can
be used in painting: this is exactly what Cezanne and Klee did.
At Y Combinator we bet money on it, in the sense that we encourage the
startups we fund to work this way. There are always new ideas right under your
nose. So look for simple things that other people have overlooked—things
people will later claim were "obvious"—especially when they've been led astray
by obsolete conventions, or by trying to do things that are superficially
impressive. Figure out what the real problem is, and make sure you solve that.
Don't worry about trying to look corporate; the product is what wins in the
long term. And launch as soon as you can, so you start learning from users
what you should have been making.
Reddit is a classic example of this approach. When Reddit first launched, it
seemed like there was nothing to it. To the graphically unsophisticated its
deliberately minimal design seemed like no design at all. But Reddit solved
the real problem, which was to tell people what was new and otherwise stay out
of the way. As a result it became massively successful. Now that conventional
ideas have caught up with it, it seems obvious. People look at Reddit and
think the founders were lucky. Like all such things, it was harder than it
looked. The Reddits pushed so hard against the current that they reversed it;
now it looks like they're merely floating downstream.
So when you look at something like Reddit and think "I wish I could think of
an idea like that," remember: ideas like that are all around you. But you
ignore them because they look wrong.
* * *
September 2009
Like all investors, we spend a lot of time trying to learn how to predict
which startups will succeed. We probably spend more time thinking about it
than most, because we invest the earliest. Prediction is usually all we have
to rely on.
We learned quickly that the most important predictor of success is
determination. At first we thought it might be intelligence. Everyone likes to
believe that's what makes startups succeed. It makes a better story that a
company won because its founders were so smart. The PR people and reporters
who spread such stories probably believe them themselves. But while it
certainly helps to be smart, it's not the deciding factor. There are plenty of
people as smart as Bill Gates who achieve nothing.
In most domains, talent is overrated compared to determination—partly because
it makes a better story, partly because it gives onlookers an excuse for being
lazy, and partly because after a while determination starts to look like
talent.
I can't think of any field in which determination is overrated, but the
relative importance of determination and talent probably does vary somewhat.
Talent probably matters more in types of work that are purer, in the sense
that one is solving mostly a single type of problem instead of many different
types. I suspect determination would not take you as far in math as it would
in, say, organized crime.
I don't mean to suggest by this comparison that types of work that depend more
on talent are always more admirable. Most people would agree it's more
admirable to be good at math than at memorizing long strings of digits, even
though the latter depends more on natural ability.
Perhaps one reason people believe startup founders win by being smarter is
that intelligence does matter more in technology startups than it used to in
earlier types of companies. You probably do need to be a bit smarter to
dominate Internet search than you had to be to dominate railroads or hotels or
newspapers. And that's probably an ongoing trend. But even in the highest of
high tech industries, success still depends more on determination than brains.
If determination is so important, can we isolate its components? Are some more
important than others? Are there some you can cultivate?
The simplest form of determination is sheer willfulness. When you want
something, you must have it, no matter what.
A good deal of willfulness must be inborn, because it's common to see families
where one sibling has much more of it than another. Circumstances can alter
it, but at the high end of the scale, nature seems to be more important than
nurture. Bad circumstances can break the spirit of a strong-willed person, but
I don't think there's much you can do to make a weak-willed person stronger-
willed.
Being strong-willed is not enough, however. You also have to be hard on
yourself. Someone who was strong-willed but self-indulgent would not be called
determined. Determination implies your willfulness is balanced by discipline.
That word balance is a significant one. The more willful you are, the more
disciplined you have to be. The stronger your will, the less anyone will be
able to argue with you except yourself. And someone has to argue with you,
because everyone has base impulses, and if you have more will than discipline
you'll just give in to them and end up on a local maximum like drug addiction.
We can imagine will and discipline as two fingers squeezing a slippery melon
seed. The harder they squeeze, the further the seed flies, but they must both
squeeze equally or the seed spins off sideways.
If this is true it has interesting implications, because discipline can be
cultivated, and in fact does tend to vary quite a lot in the course of an
individual's life. If determination is effectively the product of will and
discipline, then you can become more determined by being more disciplined.
Another consequence of the melon seed model is that the more willful you are,
the more dangerous it is to be undisciplined. There seem to be plenty of
examples to confirm that. In some very energetic people's lives you see
something like wing flutter, where they alternate between doing great work and
doing absolutely nothing. Externally this would look a lot like bipolar
disorder.
The melon seed model is inaccurate in at least one respect, however: it's
static. In fact the dangers of indiscipline increase with temptation. Which
means, interestingly, that determination tends to erode itself. If you're
sufficiently determined to achieve great things, this will probably increase
the number of temptations around you. Unless you become proportionally more
disciplined, willfulness will then get the upper hand, and your achievement
will revert to the mean.
That's why Shakespeare's Caesar thought thin men so dangerous. They weren't
tempted by the minor perquisites of power.
The melon seed model implies it's possible to be too disciplined. Is it? I
think there probably are people whose willfulness is crushed down by excessive
discipline, and who would achieve more if they weren't so hard on themselves.
One reason the young sometimes succeed where the old fail is that they don't
realize how incompetent they are. This lets them do a kind of deficit
spending. When they first start working on something, they overrate their
achievements. But that gives them confidence to keep working, and their
performance improves. Whereas someone clearer-eyed would see their initial
incompetence for what it was, and perhaps be discouraged from continuing.
There's one other major component of determination: ambition. If willfulness
and discipline are what get you to your destination, ambition is how you
choose it.
I don't know if it's exactly right to say that ambition is a component of
determination, but they're not entirely orthogonal. It would seem a misnomer
if someone said they were very determined to do something trivially easy.
And fortunately ambition seems to be quite malleable; there's a lot you can do
to increase it. Most people don't know how ambitious to be, especially when
they're young. They don't know what's hard, or what they're capable of. And
this problem is exacerbated by having few peers. Ambitious people are rare, so
if everyone is mixed together randomly, as they tend to be early in people's
lives, then the ambitious ones won't have many ambitious peers. When you take
people like this and put them together with other ambitious people, they bloom
like dying plants given water. Probably most ambitious people are starved for
the sort of encouragement they'd get from ambitious peers, whatever their age.
Achievements also tend to increase your ambition. With each step you gain
confidence to stretch further next time.
So here in sum is how determination seems to work: it consists of willfulness
balanced with discipline, aimed by ambition. And fortunately at least two of
these three qualities can be cultivated. You may be able to increase your
strength of will somewhat; you can definitely learn self-discipline; and
almost everyone is practically malnourished when it comes to ambition.
I feel like I understand determination a bit better now. But only a bit:
willfulness, discipline, and ambition are all concepts almost as complicated
as determination.
Note too that determination and talent are not the whole story. There's a
third factor in achievement: how much you like the work. If you really love
working on something, you don't need determination to drive you; it's what
you'd do anyway. But most types of work have aspects one doesn't like, because
most types of work consist of doing things for other people, and it's very
unlikely that the tasks imposed by their needs will happen to align exactly
with what you want to do.
Indeed, if you want to create the most wealth, the way to do it is to focus
more on their needs than your interests, and make up the difference with
determination.
* * *
April 2009
I usually avoid politics, but since we now seem to have an administration
that's open to suggestions, I'm going to risk making one. The single biggest
thing the government could do to increase the number of startups in this
country is a policy that would cost nothing: establish a new class of visa for
startup founders.
The biggest constraint on the number of new startups that get created in the
US is not tax policy or employment law or even Sarbanes-Oxley. It's that we
won't let the people who want to start them into the country.
Letting just 10,000 startup founders into the country each year could have a
visible effect on the economy. If we assume 4 people per startup, which is
probably an overestimate, that's 2500 new companies. _Each year._ They
wouldn't all grow as big as Google, but out of 2500 some would come close.
By definition these 10,000 founders wouldn't be taking jobs from Americans: it
could be part of the terms of the visa that they couldn't work for existing
companies, only new ones they'd founded. In fact they'd cause there to be more
jobs for Americans, because the companies they started would hire more
employees as they grew.
The tricky part might seem to be how one defined a startup. But that could be
solved quite easily: let the market decide. Startup investors work hard to
find the best startups. The government could not do better than to piggyback
on their expertise, and use investment by recognized startup investors as the
test of whether a company was a real startup.
How would the government decide who's a startup investor? The same way they
decide what counts as a university for student visas. We'll establish our own
accreditation procedure. We know who one another are.
10,000 people is a drop in the bucket by immigration standards, but would
represent a huge increase in the pool of startup founders. I think this would
have such a visible effect on the economy that it would make the legislator
who introduced the bill famous. The only way to know for sure would be to try
it, and that would cost practically nothing.
**Thanks** to Trevor Blackwell, Paul Buchheit, Jeff Clavier, David Hornik,
Jessica Livingston, Greg Mcadoo, Aydin Senkut, and Fred Wilson for reading
drafts of this.
* * *
December 2014
I've read Villehardouin's chronicle of the Fourth Crusade at least two times,
maybe three. And yet if I had to write down everything I remember from it, I
doubt it would amount to much more than a page. Multiply this times several
hundred, and I get an uneasy feeling when I look at my bookshelves. What use
is it to read all these books if I remember so little from them?
A few months ago, as I was reading Constance Reid's excellent biography of
Hilbert, I figured out if not the answer to this question, at least something
that made me feel better about it. She writes:
> Hilbert had no patience with mathematical lectures which filled the students
> with facts but did not teach them how to frame a problem and solve it. He
> often used to tell them that "a perfect formulation of a problem is already
> half its solution."
That has always seemed to me an important point, and I was even more convinced
of it after hearing it confirmed by Hilbert.
But how had I come to believe in this idea in the first place? A combination
of my own experience and other things I'd read. None of which I could at that
moment remember! And eventually I'd forget that Hilbert had confirmed it too.
But my increased belief in the importance of this idea would remain something
I'd learned from this book, even after I'd forgotten I'd learned it.
Reading and experience train your model of the world. And even if you forget
the experience or what you read, its effect on your model of the world
persists. Your mind is like a compiled program you've lost the source of. It
works, but you don't know why.
The place to look for what I learned from Villehardouin's chronicle is not
what I remember from it, but my mental models of the crusades, Venice,
medieval culture, siege warfare, and so on. Which doesn't mean I couldn't have
read more attentively, but at least the harvest of reading is not so miserably
small as it might seem.
This is one of those things that seem obvious in retrospect. But it was a
surprise to me and presumably would be to anyone else who felt uneasy about
(apparently) forgetting so much they'd read.
Realizing it does more than make you feel a little better about forgetting,
though. There are specific implications.
For example, reading and experience are usually "compiled" at the time they
happen, using the state of your brain at that time. The same book would get
compiled differently at different points in your life. Which means it is very
much worth reading important books multiple times. I always used to feel some
misgivings about rereading books. I unconsciously lumped reading together with
work like carpentry, where having to do something again is a sign you did it
wrong the first time. Whereas now the phrase "already read" seems almost ill-
formed.
Intriguingly, this implication isn't limited to books. Technology will
increasingly make it possible to relive our experiences. When people do that
today it's usually to enjoy them again (e.g. when looking at pictures of a
trip) or to find the origin of some bug in their compiled code (e.g. when
Stephen Fry succeeded in remembering the childhood trauma that prevented him
from singing). But as technologies for recording and playing back your life
improve, it may become common for people to relive experiences without any
goal in mind, simply to learn from them again as one might when rereading a
book.
Eventually we may be able not just to play back experiences but also to index
and even edit them. So although not knowing how you know things may seem part
of being human, it may not be.
**Thanks** to Sam Altman, Jessica Livingston, and Robert Morris for reading
drafts of this.
* * *
August 2013
When people hurt themselves lifting heavy things, it's usually because they
try to lift with their back. The right way to lift heavy things is to let your
legs do the work. Inexperienced founders make the same mistake when trying to
convince investors. They try to convince with their pitch. Most would be
better off if they let their startup do the work — if they started by
understanding why their startup is worth investing in, then simply explained
this well to investors.
Investors are looking for startups that will be very successful. But that test
is not as simple as it sounds. In startups, as in a lot of other domains, the
distribution of outcomes follows a power law, but in startups the curve is
startlingly steep. The big successes are so big they dwarf the rest. And since
there are only a handful each year (the conventional wisdom is 15), investors
treat "big success" as if it were binary. Most are interested in you if you
seem like you have a chance, however small, of being one of the 15 big
successes, and otherwise not.
(There are a handful of angels who'd be interested in a company with a high
probability of being moderately successful. But angel investors like big
successes too.)
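To make that steepness concrete, here is a minimal sketch (my illustration, not something from the essay) that draws hypothetical outcome values from a heavy-tailed Pareto distribution. Every number in it is made up; the only point is the shape of the distribution.

```python
# A minimal illustration of a steep power-law distribution of outcomes.
# All values are hypothetical; random.paretovariate gives heavy-tailed draws,
# and smaller alpha means a heavier tail.
import random
import statistics

random.seed(0)
outcomes = sorted((random.paretovariate(1.1) for _ in range(1000)), reverse=True)

print(f"median outcome:        {statistics.median(outcomes):10.1f}")
print(f"largest outcome:       {outcomes[0]:10.1f}")
print(f"top 15 share of total: {sum(outcomes[:15]) / sum(outcomes):.0%}")
# The biggest draws come out far larger than the typical one, which is why
# investors end up treating "big success" as roughly binary.
```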
How do you seem like you'll be one of the big successes? You need three
things: formidable founders, a promising market, and (usually) some evidence
of success so far.
**Formidable**
The most important ingredient is formidable founders. Most investors decide in
the first few minutes whether you seem like a winner or a loser, and once
their opinion is set it's hard to change. Every startup has reasons both
to invest and not to invest. If investors think you're a winner they focus on
the former, and if not they focus on the latter. For example, it might be a
rich market, but with a slow sales cycle. If investors are impressed with you
as founders, they say they want to invest because it's a rich market, and if
not, they say they can't invest because of the slow sales cycle.
They're not necessarily trying to mislead you. Most investors are genuinely
unclear in their own minds why they like or dislike startups. If you seem like
a winner, they'll like your idea more. But don't be too smug about this
weakness of theirs, because you have it too; almost everyone does.
There is a role for ideas of course. They're fuel for the fire that starts
with liking the founders. Once investors like you, you'll see them reaching
for ideas: they'll be saying "yes, and you could also do x." (Whereas when
they don't like you, they'll be saying "but what about y?")
But the foundation of convincing investors is to seem formidable, and since
this isn't a word most people use in conversation much, I should explain what
it means. A formidable person is one who seems like they'll get what they
want, regardless of whatever obstacles are in the way. Formidable is close to
confident, except that someone could be confident and mistaken. Formidable is
roughly justifiably confident.
There are a handful of people who are really good at seeming formidable — some
because they actually are very formidable and just let it show, and others
because they are more or less con artists. But most founders, including
many who will go on to start very successful companies, are not that good at
seeming formidable the first time they try fundraising. What should they do?
What they should not do is try to imitate the swagger of more experienced
founders. Investors are not always that good at judging technology, but
they're good at judging confidence. If you try to act like something you're
not, you'll just end up in an uncanny valley. You'll depart from sincere, but
never arrive at convincing.
**Truth**
The way to seem most formidable as an inexperienced founder is to stick to the
truth. How formidable you seem isn't a constant. It varies depending on what
you're saying. Most people can seem confident when they're saying "one plus
one is two," because they know it's true. The most diffident person would be
puzzled and even slightly contemptuous if they told a VC "one plus one is two"
and the VC reacted with skepticism. The magic ability of people who are good
at seeming formidable is that they can do this with the sentence "we're going
to make a billion dollars a year." But you can do the same, if not with that
sentence with some fairly impressive ones, so long as you convince yourself
first.
That's the secret. Convince yourself that your startup is worth investing in,
and then when you explain this to investors they'll believe you. And by
convince yourself, I don't mean play mind games with yourself to boost your
confidence. I mean truly evaluate whether your startup is worth investing in.
If it isn't, don't try to raise money. But if it is, you'll be telling the
truth when you tell investors it's worth investing in, and they'll sense that.
You don't have to be a smooth presenter if you understand something well and
tell the truth about it.
To evaluate whether your startup is worth investing in, you have to be a
domain expert. If you're not a domain expert, you can be as convinced as you
like about your idea, and it will seem to investors no more than an instance
of the Dunning-Kruger effect. Which in fact it will usually be. And investors
can tell fairly quickly whether you're a domain expert by how well you answer
their questions. Know everything about your market.
Why do founders persist in trying to convince investors of things they're not
convinced of themselves? Partly because we've all been trained to.
When my friends Robert Morris and Trevor Blackwell were in grad school, one of
their fellow students was on the receiving end of a question from their
faculty advisor that we still quote today. When the unfortunate fellow got to
his last slide, the professor burst out:
> Which one of these conclusions do you actually believe?
One of the artifacts of the way schools are organized is that we all get
trained to talk even when we have nothing to say. If you have a ten page paper
due, then ten pages you must write, even if you only have one page of ideas.
Even if you have no ideas. You have to produce something. And all too many
startups go into fundraising in the same spirit. When they think it's time to
raise money, they try gamely to make the best case they can for their startup.
Most never think of pausing beforehand to ask whether what they're saying is
actually convincing, because they've all been trained to treat the need to
present as a given — as an area of fixed size, over which however much truth
they have must needs be spread, however thinly.
The time to raise money is not when you need it, or when you reach some
artificial deadline like a Demo Day. It's when you can convince investors, and
not before.
And unless you're a good con artist, you'll never convince investors if you're
not convinced yourself. They're far better at detecting bullshit than you are
at producing it, even if you're producing it unknowingly. If you try to
convince investors before you've convinced yourself, you'll be wasting both
your time.
But pausing first to convince yourself will do more than save you from wasting
your time. It will force you to organize your thoughts. To convince yourself
that your startup is worth investing in, you'll have to figure out why it's
worth investing in. And if you can do that you'll end up with more than added
confidence. You'll also have a provisional roadmap of how to succeed.
**Market**
Notice I've been careful to talk about whether a startup is worth investing
in, rather than whether it's going to succeed. No one knows whether a startup
is going to succeed. And it's a good thing for investors that this is so,
because if you could know in advance whether a startup would succeed, the
stock price would already be the future price, and there would be no room for
investors to make money. Startup investors know that every investment is a
bet, and against pretty long odds.
So to prove you're worth investing in, you don't have to prove you're going to
succeed, just that you're a sufficiently good bet. What makes a startup a
sufficiently good bet? In addition to formidable founders, you need a
plausible path to owning a big piece of a big market. Founders think of
startups as ideas, but investors think of them as markets. If there are x
number of customers who'd pay an average of $y per year for what you're
making, then the total addressable market, or TAM, of your company is $xy.
Investors don't expect you to collect all that money, but it's an upper bound
on how big you can get.
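As a minimal sketch of that arithmetic, with purely hypothetical figures for the customer count and price:

```python
# TAM = (number of potential customers) x (average revenue per customer per year).
# The figures below are hypothetical, purely to illustrate the calculation.

def total_addressable_market(num_customers: int, avg_revenue_per_year: float) -> float:
    return num_customers * avg_revenue_per_year

# e.g. 200,000 potential customers paying an average of $50 per year puts a
# $10M/year upper bound on revenue.
print(total_addressable_market(200_000, 50.0))  # 10000000.0
```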
Your target market has to be big, and it also has to be capturable by you. But
the market doesn't have to be big yet, nor do you necessarily have to be in it
yet. Indeed, it's often better to start in a small market that will either
turn into a big one or from which you can move into a big one. There just has
to be some plausible sequence of hops that leads to dominating a big market a
few years down the line.
The standard of plausibility varies dramatically depending on the age of the
startup. A three month old company at Demo Day only needs to be a promising
experiment that's worth funding to see how it turns out. Whereas a two year
old company raising a series A round needs to be able to show the experiment
worked.
But every company that gets really big is "lucky" in the sense that their
growth is due mostly to some external wave they're riding, so to make a
convincing case for becoming huge, you have to identify some specific trend
you'll benefit from. Usually you can find this by asking "why now?" If this is
such a great idea, why hasn't someone else already done it? Ideally the answer
is that it only recently became a good idea, because something changed, and no
one else has noticed yet.
Microsoft for example was not going to grow huge selling Basic interpreters.
But by starting there they were perfectly poised to expand up the stack of
microcomputer software as microcomputers grew powerful enough to support one.
And microcomputers turned out to be a really huge wave, bigger than even the
most optimistic observers would have predicted in 1975.
But while Microsoft did really well and there is thus a temptation to think
they would have seemed a great bet a few months in, they probably didn't.
Good, but not great. No company, however successful, ever looks more than a
pretty good bet a few months in. Microcomputers turned out to be a big deal,
and Microsoft both executed well and got lucky. But it was by no means obvious
that this was how things would play out. Plenty of companies seem as good a
bet a few months in. I don't know about startups in general, but at least half
the startups we fund could make as good a case as Microsoft could have for
being on a path to dominating a large market. And who can reasonably expect
more of a startup than that?
**Rejection**
If you can make as good a case as Microsoft could have, will you convince
investors? Not always. A lot of VCs would have rejected Microsoft.
Certainly some rejected Google. And getting rejected will put you in a
slightly awkward position, because as you'll see when you start fundraising,
the most common question you'll get from investors will be "who else is
investing?" What do you say if you've been fundraising for a while and no one
has committed yet?
The people who are really good at acting formidable often solve this problem
by giving investors the impression that while no investors have committed yet,
several are about to. This is arguably a permissible tactic. It's slightly
dickish of investors to care more about who else is investing than any other
aspect of your startup, and misleading them about how far along you are with
other investors seems the complementary countermove. It's arguably an instance
of scamming a scammer. But I don't recommend this approach to most founders,
because most founders wouldn't be able to carry it off. This is the single
most common lie told to investors, and you have to be really good at lying to
tell members of some profession the most common lie they're told.
If you're not a master of negotiation (and perhaps even if you are) the best
solution is to tackle the problem head-on, and to explain why investors have
turned you down and why they're mistaken. If you know you're on the right
track, then you also know why investors were wrong to reject you. Experienced
investors are well aware that the best ideas are also the scariest. They all
know about the VCs who rejected Google. If instead of seeming evasive and
ashamed about having been turned down (and thereby implicitly agreeing with
the verdict) you talk candidly about what scared investors about you, you'll
seem more confident, which they like, and you'll probably also do a better job
of presenting that aspect of your startup. At the very least, that worry will
now be out in the open instead of being a gotcha left to be discovered by the
investors you're currently talking to, who will be proud of and thus attached
to their discovery.
This strategy will work best with the best investors, who are both hard to
bluff and who already believe most other investors are conventional-minded
drones doomed always to miss the big outliers. Raising money is not like
applying to college, where you can assume that if you can get into MIT, you
can also get into Foobar State. Because the best investors are much smarter
than the rest, and the best startup ideas look initially like bad ideas, it's
not uncommon for a startup to be rejected by all the VCs except the best ones.
That's what happened to Dropbox. Y Combinator started in Boston, and for the
first 3 years we ran alternating batches in Boston and Silicon Valley. Because
Boston investors were so few and so timid, we used to ship Boston batches out
for a second Demo Day in Silicon Valley. Dropbox was part of a Boston batch,
which means all those Boston investors got the first look at Dropbox, and none
of them closed the deal. Yet another backup and syncing thing, they all
thought. A couple weeks later, Dropbox raised a series A round from Sequoia.
**Different**
Not understanding that investors view investments as bets combines with the
ten page paper mentality to prevent founders from even considering the
possibility of being certain of what they're saying. They think they're trying
to convince investors of something very uncertain — that their startup will be
huge — and convincing anyone of something like that must obviously entail some
wild feat of salesmanship. But in fact when you raise money you're trying to
convince investors of something so much less speculative — whether the company
has all the elements of a good bet — that you can approach the problem in a
qualitatively different way. You can convince yourself, then convince them.
And when you convince them, use the same matter-of-fact language you used to
convince yourself. You wouldn't use vague, grandiose marketing-speak among
yourselves. Don't use it with investors either. It not only doesn't work on
them, but seems a mark of incompetence. Just be concise. Many investors
explicitly use that as a test, reasoning (correctly) that if you can't explain
your plans concisely, you don't really understand them. But even investors who
don't have a rule about this will be bored and frustrated by unclear
explanations.
So here's the recipe for impressing investors when you're not already good at
seeming formidable:
1. Make something worth investing in.
2. Understand why it's worth investing in.
3. Explain that clearly to investors.
If you're saying something you know is true, you'll seem confident when you're
saying it. Conversely, never let pitching draw you into bullshitting. As long
as you stay on the territory of truth, you're strong. Make the truth good,
then just tell it.
* * *
February 2015
One of the most valuable exercises you can try if you want to understand
startups is to look at the most successful companies and explain why they were
not as lame as they seemed when they first launched. Because they practically
all seemed lame at first. Not just small, lame. Not just the first step up a
big mountain. More like the first step into a swamp.
A Basic interpreter for the Altair? How could that ever grow into a giant
company? People sleeping on airbeds in strangers' apartments? A web site for
college students to stalk one another? A wimpy little single-board computer
for hobbyists that used a TV as a monitor? A new search engine, when there
were already about 10, and they were all trying to de-emphasize search? These
ideas didn't just seem small. They seemed wrong. They were the kind of ideas
you could not merely ignore, but ridicule.
Often the founders themselves didn't know why their ideas were promising. They
were attracted to these ideas by instinct, because they were living in the
future and they sensed that something was missing. But they could not have put
into words exactly how their ugly ducklings were going to grow into big,
beautiful swans.
Most people's first impulse when they hear about a lame-sounding new startup
idea is to make fun of it. Even a lot of people who should know better.
When I encounter a startup with a lame-sounding idea, I ask "What Microsoft is
this the Altair Basic of?" Now it's a puzzle, and the burden is on me to solve
it. Sometimes I can't think of an answer, especially when the idea is a made-
up one. But it's remarkable how often there does turn out to be an answer.
Often it's one the founders themselves hadn't seen yet.
Intriguingly, there are sometimes multiple answers. I talked to a startup a
few days ago that could grow into 3 distinct Microsofts. They'd probably vary
in size by orders of magnitude. But you can never predict how big a Microsoft
is going to be, so in cases like that I encourage founders to follow whichever
path is most immediately exciting to them. Their instincts got them this far.
Why stop now?
* * *
August 2015
If you have a US startup called X and you don't have x.com, you should
probably change your name.
The reason is not just that people can't find you. For companies with mobile
apps, especially, having the right domain name is not as critical as it used
to be for getting users. The problem with not having the .com of your name is
that it signals weakness. Unless you're so big that your reputation precedes
you, a marginal domain suggests you're a marginal company. Whereas (as Stripe
shows) having x.com signals strength even if it has no relation to what you
do.
Even good founders can be in denial about this. Their denial derives from two
very powerful forces: identity, and lack of imagination.
X is what we _are_, founders think. There's no other name as good. Both of
which are false.
You can fix the first by stepping back from the problem. Imagine you'd called
your company something else. If you had, surely you'd be just as attached to
that name as you are to your current one. The idea of switching to your
current name would seem repellent.
There's nothing intrinsically great about your current name. Nearly all your
attachment to it comes from it being attached to you.
The way to neutralize the second source of denial, your inability to think of
other potential names, is to acknowledge that you're bad at naming. Naming is
a completely separate skill from those you need to be a good founder. You can
be a great startup founder but hopeless at thinking of names for your company.
Once you acknowledge that, you stop believing there is nothing else you could
be called. There are lots of other potential names that are as good or better;
you just can't think of them.
How do you find them? One answer is the default way to solve problems you're
bad at: find someone else who can think of names. But with company names there
is another possible approach. It turns out almost any word or word pair that
is not an obviously bad name is a sufficiently good one, and the number of
such domains is so large that you can find plenty that are cheap or even
untaken. So make a list and try to buy some. That's what Stripe did. (Their
search also turned up parse.com, which their friends at Parse took.)
The reason I know that naming companies is a distinct skill orthogonal to the
others you need in a startup is that I happen to have it. Back when I was
running YC and did more office hours with startups, I would often help them
find new names. 80% of the time we could find at least one good name in a 20
minute office hour slot.
Now when I do office hours I have to focus on more important questions, like
what the company is doing. I tell them when they need to change their name.
But I know the power of the forces that have them in their grip, so I know
most won't listen.
There are of course examples of startups that have succeeded without having
the .com of their name. There are startups that have succeeded despite any
number of different mistakes. But this mistake is less excusable than most.
It's something that can be fixed in a couple days if you have sufficient
discipline to acknowledge the problem.
100% of the top 20 YC companies by valuation have the .com of their name. 94%
of the top 50 do. But only 66% of companies in the current batch have the .com
of their name. Which suggests there are lessons ahead for most of the rest,
one way or another.
* * *
August 2013
The biggest component in most investors' opinion of you is the opinion of
other investors. Which is of course a recipe for exponential growth. When one
investor wants to invest in you, that makes other investors want to, which
makes others want to, and so on.
Sometimes inexperienced founders mistakenly conclude that manipulating these
forces is the essence of fundraising. They hear stories about stampedes to
invest in successful startups, and think it's therefore the mark of a
successful startup to have this happen. But actually the two are not that
highly correlated. Lots of startups that cause stampedes end up flaming out
(in extreme cases, partly as a result of the stampede), and lots of very
successful startups were only moderately popular with investors the first time
they raised money.
So the point of this essay is not to explain how to create a stampede, but
merely to explain the forces that generate them. These forces are always at
work to some degree in fundraising, and they can cause surprising situations.
If you understand them, you can at least avoid being surprised.
One reason investors like you more when other investors like you is that you
actually become a better investment. Raising money decreases the risk of
failure. Indeed, although investors hate it, you are for this reason justified
in raising your valuation for later investors. The investors who invested when
you had no money were taking more risk, and are entitled to higher returns.
Plus a company that has raised money is literally more valuable. After you
raise the first million dollars, the company is at least a million dollars
more valuable, because it's the same company as before, plus it has a million
dollars in the bank.
Beware, though, because later investors so hate to have the price raised on
them that they resist even this self-evident reasoning. Only raise the price
on an investor you're comfortable with losing, because some will angrily
refuse.
The second reason investors like you more when you've had some success at
fundraising is that it makes you more confident, and an investor's opinion of
you is the foundation of their opinion of your company. Founders are often
surprised how quickly investors seem to know when they start to succeed at
raising money. And while there are in fact lots of ways for such information
to spread among investors, the main vector is probably the founders
themselves. Though they're often clueless about technology, most investors are
pretty good at reading people. When fundraising is going well, investors are
quick to sense it in your increased confidence. (This is one case where the
average founder's inability to remain poker-faced works to your advantage.)
But frankly the most important reason investors like you more when you've
started to raise money is that they're bad at judging startups. Judging
startups is hard even for the best investors. The mediocre ones might as well
be flipping coins. So when mediocre investors see that lots of other people
want to invest in you, they assume there must be a reason. This leads to the
phenomenon known in the Valley as the "hot deal," where you have more interest
from investors than you can handle.
The best investors aren't influenced much by the opinion of other investors.
It would only dilute their own judgment to average it together with other
people's. But they are indirectly influenced in the practical sense that
interest from other investors imposes a deadline. This is the fourth way in
which offers beget offers. If you start to get far along the track toward an
offer with one firm, it will sometimes provoke other firms, even good ones, to
make up their minds, lest they lose the deal.
Unless you're a wizard at negotiation (and if you're not sure, you're not) be
very careful about exaggerating this to push a good investor to decide.
Founders try this sort of thing all the time, and investors are very sensitive
to it. If anything oversensitive. But you're safe so long as you're telling
the truth. If you're getting far along with investor B, but you'd rather raise
money from investor A, you can tell investor A that this is happening. There's
no manipulation in that. You're genuinely in a bind, because you really would
rather raise money from A, but you can't safely reject an offer from B when
it's still uncertain what A will decide.
Do not, however, tell A who B is. VCs will sometimes ask which other VCs
you're talking to, but you should never tell them. Angels you can sometimes
tell about other angels, because angels cooperate more with one another. But
if VCs ask, just point out that they wouldn't want you telling other firms
about your conversations, and you feel obliged to do the same for any firm you
talk to. If they push you, point out that you're inexperienced at fundraising
— which is always a safe card to play — and you feel you have to be extra
cautious.
While few startups will experience a stampede of interest, almost all will at
least initially experience the other side of this phenomenon, where the herd
remains clumped together at a distance. The fact that investors are so much
influenced by other investors' opinions means you always start out in
something of a hole. So don't be demoralized by how hard it is to get the
first commitment, because much of the difficulty comes from this external
force. The second will be easier.
* * *
December 2014
If the world were static, we could have monotonically increasing confidence in
our beliefs. The more (and more varied) experience a belief survived, the less
likely it would be false. Most people implicitly believe something like this
about their opinions. And they're justified in doing so with opinions about
things that don't change much, like human nature. But you can't trust your
opinions in the same way about things that change, which could include
practically everything else.
When experts are wrong, it's often because they're experts on an earlier
version of the world.
Is it possible to avoid that? Can you protect yourself against obsolete
beliefs? To some extent, yes. I spent almost a decade investing in early stage
startups, and curiously enough protecting yourself against obsolete beliefs is
exactly what you have to do to succeed as a startup investor. Most really good
startup ideas look like bad ideas at first, and many of those look bad
specifically because some change in the world just switched them from bad to
good. I spent a lot of time learning to recognize such ideas, and the
techniques I used may be applicable to ideas in general.
The first step is to have an explicit belief in change. People who fall victim
to a monotonically increasing confidence in their opinions are implicitly
concluding the world is static. If you consciously remind yourself it isn't,
you start to look for change.
Where should one look for it? Beyond the moderately useful generalization that
human nature doesn't change much, the unfortunate fact is that change is hard
to predict. This is largely a tautology but worth remembering all the same:
change that matters usually comes from an unforeseen quarter.
So I don't even try to predict it. When I get asked in interviews to predict
the future, I always have to struggle to come up with something plausible-
sounding on the fly, like a student who hasn't prepared for an exam. But
it's not out of laziness that I haven't prepared. It seems to me that beliefs
about the future are so rarely correct that they usually aren't worth the
extra rigidity they impose, and that the best strategy is simply to be
aggressively open-minded. Instead of trying to point yourself in the right
direction, admit you have no idea what the right direction is, and try instead
to be super sensitive to the winds of change.
It's ok to have working hypotheses, even though they may constrain you a bit,
because they also motivate you. It's exciting to chase things and exciting to
try to guess answers. But you have to be disciplined about not letting your
hypotheses harden into anything more.
I believe this passive m.o. works not just for evaluating new ideas but also
for having them. The way to come up with new ideas is not to try explicitly
to, but to try to solve problems and simply not discount weird hunches you
have in the process.
The winds of change originate in the unconscious minds of domain experts. If
you're sufficiently expert in a field, any weird idea or apparently irrelevant
question that occurs to you is ipso facto worth exploring. Within Y
Combinator, when an idea is described as crazy, it's a compliment—in fact, on
average probably a higher compliment than when an idea is described as good.
Startup investors have extraordinary incentives for correcting obsolete
beliefs. If they can realize before other investors that some apparently
unpromising startup isn't, they can make a huge amount of money. But the
incentives are more than just financial. Investors' opinions are explicitly
tested: startups come to them and they have to say yes or no, and then, fairly
quickly, they learn whether they guessed right. The investors who say no to a
Google (and there were several) will remember it for the rest of their lives.
Anyone who must in some sense bet on ideas rather than merely commenting on
them has similar incentives. Which means anyone who wants such incentives can
have them, by turning their comments into bets: if you write about a topic in
some fairly durable and public form, you'll find you worry much more about
getting things right than most people would in a casual conversation.
Another trick I've found to protect myself against obsolete beliefs is to
focus initially on people rather than ideas. Though the nature of future
discoveries is hard to predict, I've found I can predict quite well what sort
of people will make them. Good new ideas come from earnest, energetic,
independent-minded people.
Betting on people over ideas saved me countless times as an investor. We
thought Airbnb was a bad idea, for example. But we could tell the founders
were earnest, energetic, and independent-minded. (Indeed, almost
pathologically so.) So we suspended disbelief and funded them.
This too seems a technique that should be generally applicable. Surround
yourself with the sort of people new ideas come from. If you want to notice
quickly when your beliefs become obsolete, you can't do better than to be
friends with the people whose discoveries will make them so.
It's hard enough already not to become the prisoner of your own expertise, but
it will only get harder, because change is accelerating. That's not a recent
trend; change has been accelerating since the paleolithic era. Ideas beget
ideas. I don't expect that to change. But I could be wrong.
* * *
April 2009
_Inc_ recently asked me who I thought were the 5 most interesting startup
founders of the last 30 years. How do you decide who's the most interesting?
The best test seemed to be influence: who are the 5 who've influenced me most?
Who do I use as examples when I'm talking to companies we fund? Who do I find
myself quoting?
**1. Steve Jobs**
I'd guess Steve is the most influential founder not just for me but for most
people you could ask. A lot of startup culture is Apple culture. He was the
original young founder. And while the concept of "insanely great" already
existed in the arts, it was a novel idea to introduce into a company in the
1980s.
More remarkable still, he's stayed interesting for 30 years. People await new
Apple products the way they'd await new books by a popular novelist. Steve may
not literally design them, but they wouldn't happen if he weren't CEO.
Steve is clever and driven, but so are a lot of people in the Valley. What
makes him unique is his sense of design. Before him, most companies treated
design as a frivolous extra. Apple's competitors now know better.
**2. TJ Rodgers**
TJ Rodgers isn't as famous as Steve Jobs, but he may be the best writer among
Silicon Valley CEOs. I've probably learned more from him about the startup way
of thinking than from anyone else. Not so much from specific things he's
written as by reconstructing the mind that produced them: brutally candid;
aggressively garbage-collecting outdated ideas; and yet driven by pragmatism
rather than ideology.
The first essay of his that I read was so electrifying that I remember exactly
where I was at the time. It was High Technology Innovation: Free Markets or
Government Subsidies? and I was downstairs in the Harvard Square T Station. It
felt as if someone had flipped on a light switch inside my head.
**3. Larry & Sergey**
I'm sorry to treat Larry and Sergey as one person. I've always thought that
was unfair to them. But it does seem as if Google was a collaboration.
Before Google, companies in Silicon Valley already knew it was important to
have the best hackers. So they claimed, at least. But Google pushed this idea
further than anyone had before. Their hypothesis seems to have been that, in
the initial stages at least, _all_ you need is good hackers: if you hire all
the smartest people and put them to work on a problem where their success can
be measured, you win. All the other stuff—which includes all the stuff that
business schools think business consists of—you can figure out along the way.
The results won't be perfect, but they'll be optimal. If this was their
hypothesis, it's now been verified experimentally.
**4. Paul Buchheit**
Few know this, but one person, Paul Buchheit, is responsible for three of the
best things Google has done. He was the original author of GMail, which is the
most impressive thing Google has after search. He also wrote the first
prototype of AdSense, and was the author of Google's mantra "Don't be evil."
PB made a point in a talk once that I now mention to every startup we fund:
that it's better, initially, to make a small number of users really love you
than a large number kind of like you. If I could tell startups only ten
sentences, this would be one of them.
Now he's cofounder of a startup called Friendfeed. It's only a year old, but
already everyone in the Valley is watching them. Someone responsible for three
of the biggest ideas at Google is going to come up with more.
**5\. Sam Altman**
I was told I shouldn't mention founders of YC-funded companies in this list.
But Sam Altman can't be stopped by such flimsy rules. If he wants to be on
this list, he's going to be.
Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm
advising startups. On questions of design, I ask "What would Steve do?" but on
questions of strategy or ambition I ask "What would Sama do?"
What I learned from meeting Sama is that the doctrine of the elect applies to
startups. It applies way less than most people think: startup investing does
not consist of trying to pick winners the way you might in a horse race. But
there are a few people with such force of will that they're going to get
whatever they want.
---
August 2009
Kate Courteau is the architect who designed Y Combinator's office. Recently we
managed to recruit her to help us run YC when she's not busy with
architectural projects. Though she'd heard a lot about YC since the beginning,
the last 9 months have been a total immersion.
I've been around the startup world for so long that it seems normal to me, so
I was curious to hear what had surprised her most about it. This was her list:
**1\. How many startups fail.**
Kate knew in principle that startups were very risky, but she was surprised to
see how constant the threat of failure was — not just for the minnows, but
even for the famous startups whose founders came to speak at YC dinners.
**2\. How much startups' ideas change.**
As usual, by Demo Day about half the startups were doing something
significantly different from what they'd started with. We encourage that. Starting a
startup is like science in that you have to follow the truth wherever it
leads. In the rest of the world, people don't start things till they're sure
what they want to do, and once started they tend to continue on their initial
path even if it's mistaken.
**3\. How little money it can take to start a startup.**
In Kate's world, everything is still physical and expensive. You can barely
renovate a bathroom for the cost of starting a startup.
**4\. How scrappy founders are.**
That was her actual word. I agree with her, but till she mentioned this it
never occurred to me how little this quality is appreciated in most of the
rest of the world. It wouldn't be a compliment in most organizations to call
someone scrappy.
What does it mean, exactly? It's basically the diminutive form of belligerent.
Someone who's scrappy manages to be both threatening and undignified at the
same time. Which seems to me exactly what one would want to be, in any kind of
work. If you're not threatening, you're probably not doing anything new, and
dignity is merely a sort of plaque.
**5\. How tech-saturated Silicon Valley is.**
"It seems like everybody here is in the industry." That isn't literally true,
but there is a qualitative difference between Silicon Valley and other places.
You tend to keep your voice down, because there's a good chance the person at
the next table would know some of the people you're talking about. I never
felt that in Boston. The good news is, there's also a good chance the person
at the next table could help you in some way.
**6\. That the speakers at YC were so consistent in their advice.**
Actually, I've noticed this too. I always worry the speakers will put us in an
embarrassing position by contradicting what we tell the startups, but it
happens surprisingly rarely.
When I asked her what specific things she remembered speakers always saying,
she mentioned: that the way to succeed was to launch something fast, listen to
users, and then iterate; that startups required resilience because they were
always an emotional rollercoaster; and that most VCs were sheep.
I've been impressed by how consistently the speakers advocate launching fast
and iterating. That was contrarian advice 10 years ago, but it's clearly now
the established practice.
**7\. How casual successful startup founders are.**
Most of the famous founders in Silicon Valley are people you'd overlook on the
street. It's not merely that they don't dress up. They don't project any kind
of aura of power either. "They're not trying to impress anyone."
Interestingly, while Kate said that she could never pick out successful
founders, she could recognize VCs, both by the way they dressed and the way
they carried themselves.
**8\. How important it is for founders to have people to ask for advice.**
(I swear I didn't prompt this one.) Without advice "they'd just be sort of
lost." Fortunately, there are a lot of people to help them. There's a strong
tradition within YC of helping other YC-funded startups. But we didn't invent
that idea: it's just a slightly more concentrated form of existing Valley
culture.
**9\. What a solitary task startups are.**
Architects are constantly interacting face to face with other people, whereas
doing a technology startup, at least, tends to require long stretches of
uninterrupted time to work. "You could do it in a box."
By inverting this list, we can get a portrait of the "normal" world. It's
populated by people who talk a lot with one another as they work slowly but
harmoniously on conservative, expensive projects whose destinations are
decided in advance, and who carefully adjust their manner to reflect their
position in the hierarchy.
That's also a fairly accurate description of the past. So startup culture may
not merely be different in the way you'd expect any subculture to be, but a
leading indicator.