Jason Fried's Blog, page 5
March 2, 2020
Keep digging
I’m reviewing transcripts from interviews we did with customers last year and came across a nice example of interview technique.
The hardest thing about customer interviews is knowing where to dig. An effective interview is more like a friendly interrogation. We don’t want to learn what customers think about the product, or what they like or dislike — we want to know what happened and how they chose. What was the chain of cause and effect that made them decide to use Basecamp? To get those answers we can’t just ask surface questions, we have to keep digging back behind the answers to find out what really happened.
Here’s a small example.
Doris (name changed) works in the office at a construction company. She had “looked for a way to have everything [about their projects] compiled in one area” for a long time. All the construction-specific software she tried was too “in depth.” She gave up her search. Some months later, she and her co-worker tried software outside the construction domain: Monday and ClickUp.
I asked: How did you get to these options?
She said she and her co-worker did Google searches.
I asked: Why start searching again? You tried looking for software a few months before. What happened to make you look again?
She said they had quite a few new employees. And “needed a place for everything to be.”
That sounds reasonable enough. We could have moved on and started talking about how she got to Basecamp. But instead of accepting that answer, I kept digging.
Ok so you hired more employees. Why not just keep doing things the same way?
“It was an outdated system. It’s all paper based. And this isn’t a paper world.”
We have our answer, right? “Paper based” is the problem. No, not yet. That answer doesn’t tell us enough about what to do.
As designers that’s what we need to know. We need to understand the problem enough to actually decide what specifically to build. “Paper based” sounds like a problem, but what does it tell us? If we had to design a software fix for her right now, where would we start? What would we leave out? How would we know when we made the situation better enough to stop and move on to something else?
So I asked one of my favorite questions:
What was happening that showed you the way you were doing things wasn’t working anymore?
This question is extremely targeted and causal. It’s a very simple question that invites her to describe the problem in a way that is hard, factual, time-bound, contextual, and specific — without any analysis, interpretation, speculation or rationalization. Just: What happened. What did you see. What was wrong.
“The guys would just come ask for the same information over and over again. And it was taking up time for me. . . . They shouldn’t have to ask me some of these questions. You get asked 20, 30 stupid questions and try to go back to something you have to pay attention to . . . you’re working numbers and money you need to be paying attention to what you’re doing.”
Aha. Now we’re getting somewhere. She holds all the information. The guys in the field need the information. She needs to give them access to the information so they can look it up themselves. Then she’ll stop getting interrupted and she can focus on her own work.
This dramatically narrows down the supply side and begins to paint the outlines of actionable design requirements.
I checked to see if I understood the causality here.
Was the number of interruptions worse after you hired more people?
“Oh yeah, absolutely.”
Because we kept digging for causality, we got to an understanding of the situation — what caused it, what went wrong, what progress meant to her, and why she was saying “yes” and “no” to different options as she evaluated them.
For more on this interview approach I recommend checking out Competing Against Luck by Clay Christensen.
February 27, 2020
The books I read in 2019
Here are all my extracted answers from our monthly Basecamp check-in question of What are you reading? for 2019. (See also my answers from 2016, 2017, and 2018).
The Sane Society
Another Fromm tome! This one starts from the premise of evaluating the different social characters of various societies. But not from the abstract, pretend objectiveness of “everything is equal; everything is just different”, but from a bold perspective of “some societies truly do better than others at promoting human health and flourishing”. That’s potentially a dynamite perspective, but Fromm handles it with utter grace and respect.
I particularly enjoyed the concept of mental illness or malaise as an act of rebellion against societal pressures and norms. Particularly the refutation that “the sane person” is whoever is performing their productive function in society. Yeah, fuck that.
I also really liked the depiction of a country’s social character to be an expression of what that society thought it needed. A German reputation for stinginess/savings as virtuous? A reflection of what the country needed in order to rebuild after 20th century devastation. It’s a fascinating recast of “stereotype” as something intellectually productive, and not just lazy othering.
Greatness and Limitations of Freud’s Thought
It’s fair to say that I’ve been on an Erich Fromm kick ever since discovering Escape From Freedom. His analysis of the human condition is deep, profound, yet utterly approachable. He writes in a plain, well-sourced, and fluid manner that makes it hard to stop, but you really should. Fromm’s thoughts are so provoking that I often need to force myself to take breaks to truly digest the lessons.
This book is no different. It provides a guided tour, in great depth, through the psychoanalytical method that Freud invented: the exploration of the unconscious, dream analysis, the Oedipus Complex, and character classes. But what’s so brilliant about Fromm’s tour is that it’s a critical presentation. Fromm clearly has great respect for Freud’s discoveries, but has no time for his patriarchal, bourgeois, 19th century nonsense – nor his obsession with explaining everything from a root of sexual drives.
What follows is a master class in the critical reading of a great master, neither overcome with disgust at his fallacies nor trying to excuse or paper over them. Really good.
Sapiens
A whirlwind tour of the history of our species. I’m liking it, but not uncritically so. There are a lot of definitions, like that of religion, that seem hurried, if not outright glib. But I do really like the repeated emphasis on just how much of human society is a collection of shared myths that we’ve simply all decided to believe. And that things got the way they are through an endless series of historical accidents, unlikely events, and forks in the road. Not through some deterministic path. That’s a great story of both hope – and fear! – that we can make a better society, yet that history does not “bend” or “arc” towards that intrinsically. You have to do the work.
Kubernetes in Action
I don’t read a lot of new technical books these days, but Kubernetes seems like enough of a fundamental step in cloud computing that it’s worth being literate in its basics: understanding the differences between images, pods, and how they’re coordinated. It’s pretty good!
The Great Mortality
The story of the plague that ravaged Europe from, primarily, 1347–1351. It’s like a real-life 28 Days Later. A third of Europe’s population was wiped out. Just an unfathomable scale of societal destruction.
It’s really well-told too. Even if there’s a bit of repetitiveness to the “and then the plague hit the next city, and the result was DEATH”. It’s a constant reminder of just how fragile humanity actually is.
The insight into the Catholic Church’s management of affairs is scathing too. And the pogroms that blamed the Jews for the plague, and led to mass murders, are a sober reminder of how genocide is never too far away when a society is brought to the brink.
Really enjoyed this one.
Making Sense of the Troubles
I remember seeing stories about the IRA in the 1980s on Danish television. The bombings, the conflict. But I never really understood the underlying dynamics. This book lays it all bare.
And the story it tells might be from Northern Ireland, but it could just as well be set in Iraq or Israel. A religious group takes control of politics, uses the force of government to subdue the other group, and refuses to engage in power sharing until after years of bloodshed.
And this was all very recently. Right there in Europe. Long-running campaigns of insurgence, counter-insurgence, and a fragile peace. Given what’s going on with Brexit right now, it feels like just the time where you want to understand the history of Northern Ireland.
When Prophecy Fails
On the surface an exploration of cults, but beneath, really an exploration of how the mind bends to rationalize beliefs of any sort. We can learn a lot about our own stubbornness and filter bubbles and segregation of society by studying these cults and what made them double down on end-of-the-world claims, even after proven wrong.
Life without Principle, Henry David Thoreau
Thoreau’s thoughts on work and calm are right up the Basecamp alley. Reading through this short book, a transcription of a lecture he gave, I can see the root source material for much of our opposition to overwork and our protection of attention.
An enjoyable reminder that there is little new under the sun, and that much of what we must do is to continue repackaging eternal truths for a new context.
Winners Take All: The Elite Charade of Changing the World
Economic win-win thinking has taken over the world of politics and charity, and we’re all worse off for it. Three-plus decades of reverence for McKinsey-type thinking, the abandonment of faith in government to fix big problems, and an elite establishment bent on peddling Everything Is Actually Great You Know stories is coming to a clash with reality. A reality where 90% of the population has seen stagnant wages and shrinking opportunity.
You might feel like you’ve heard that story before in a NY Times piece on “let’s understand why rural America voted Trump”, but this is a much broader and much more interesting story. Told in large part by examining not just the plight of the dispossessed, but the complicity of “the globalists”. Even just examining that term from outside a right-wing media slant is fascinating.
Anand Giridharadas uses a series of vignettes with doing-well-by-doing-good insiders who share their doubts about whether they’ll really dismantle the master’s house using the master’s tools. It’s a brave exposition of friends and acquaintances, and you occasionally cringe at the savage moral verdicts, however gently they’re delivered in terms of disappointment rather than rage.
Really good.
Thus Spoke Zarathustra, Friedrich Nietzsche
A lot of my reading list as of late has come from Eric Dodson’s philosophy channel on Youtube, and the recommendations he offers. The majority has been great, but I’m having quite some trouble with this major work by Nietzsche.
It’s funny, because of all these older philosophical texts, it’s somehow quite modern in that it almost reads like a series of blog posts, at times even tweet storms. But it’s so all over the place. Yes, there are some general themes of striving for a better humanity (the superman), but it’s wrapped with all sorts of seemingly trivial or banal observations. It’s not an easy treasure chest to open.
It’s also just long. Anyway, nibbling at it. Maybe it’s just an acquired taste that’ll click. But this is the opposite of the “oh, older works are so accessible and immediately poignant” experience I’ve had otherwise.
The Uninhabitable Earth by Wallace-Wells
After three years of wildfires near our home in Malibu, it’s intimately clear that climate change isn’t some far away, far future phenomenon. The effects are here, they’re scary, devastating, and yet, so utterly minor compared to what we have in store.
I thought I was pretty up to date on climate change. I follow the news, read articles, and remember watching An Inconvenient Truth when it came out. But still, I was shocked by the most recent data, science, and projections presented in this book. Just the fact that HALF of all the greenhouse gasses that are warming the earth were released since Seinfeld aired on TV. That’s my lifetime!
The consequences of climate change are already destined to be profound and dire. That’s just based on where we are now. But as this book dives into what a world of not just +2C, but +4C or +6C or even +8C looks like, the towering calamity of our own making becomes both utterly real and surreal at the same time.
Discipline and Punish by Foucault
Tracing the history of punishment from the Middle Ages forward tickles an interest in history, philosophy, and the concept of punishment alike. Why did we stop torturing people in public? When did intent and mental state become such a big part of the picture? Foucault explores all of it. If you liked the Hardcore History episode on Painfotainment, this gives that show a much deeper ballast.
Permanent Record by Snowden
Snowden’s memoir is at once both gratifying and slightly frustrating. His stories of growing up with technology, “hacking” bedtime, and discovering the early internet overlaps almost entirely with my own timeline. But there’s also a little too much “just so” justification for the anecdotes and Snowden’s later heroic acts. Either way, the description of how the NSA/CIA inner world actually works, the role and freedom given to contractors, the implosion of accountability, and just what you can do with nation-state level surveillance systems is stunning, almost required reading. May a future president see the courage, wisdom, and self-sacrifice Snowden committed to and give him the pardon and the parade that he so rightly deserves.
The Tyranny of Metrics by Muller
This is one of those books where the title is worth more than the content. I just love the concept and the taste of those words put together. And, as someone prone to overanalyzing, it was a welcome reminder that not all that can be measured is worth measuring, and much that is worth measuring can’t be. But this should have been a blog post. It’s repetitive and the examples somehow appear weak and overworked at the same time. I didn’t make it all the way to the end.
The Divide by Hickel
Three books have opened my eyes to the fallacy of the neoliberal economics program that I was thoroughly indoctrinated with as a business school graduate and long-term Economist reader: Debt: The First 5,000 Years by Graeber, Capital in the 21st Century by Piketty, and this book. Wow.
Hickel debunks the entire concept of “developing nations” by examining both the era of colonization and, in particular, the coups and interventions of the 20th century. It exposes with unique clarity the hypocritical way the global north has dispossessed the global south with the bludgeoning hammer of “free trade”. How both the EU and America kept protectionist barriers for their own economies and industries, while systematically opposing and stripping them from the so-called “developing nations”. He traces the money flows and exposes how the global south continues to send much more money out of its economies than it receives back. How the idea of international aid serves as a justification to keep a corrupt status quo in place. And how, while corrupt indeed, the strongmen (that the global north mostly installed and supported!) might be plundering their own economies, but nothing on the scale of what’s being done via trade, transfer mispricing, and other shenanigans. It’s eye-opening reading, and it explains so much. Hickel was on the Citations Needed podcast for one of my all-time favorite episodes: The Neoliberal Optimism Industry. A good place to get a teaser for his work.
Cutting Through Spiritual Materialism by Trungpa
Stoicism and Buddhism share a lot in their diagnosis of the human condition: how it is our ego, desires, and wants that lead us astray and into misery. But I actually came to this book by an off-hand recommendation in one of Fromm’s books, and the title immediately resonated. This sense of escaping the materialism of things with the materialism of beliefs struck me as a profound idea that I wanted to explore deeper.
It’s a bit of an uneven journey with this book, though. The endless tales of Buddhist masters and their cruel student selection process, the weird euphemisms like “my spiritual friend”, and a lot of other baggage that seems pretty foreign in 2019 are difficult to navigate. But as soon as I want to put it down, I keep getting to a passage that does seem relevant and apt, and I keep going.
I particularly like the emphasis on developing “personal truths” in interaction with teachings. The idea that you can’t just read something profound and then expect to be profoundly changed. That you have to engage with the material, stretch it, push it, and make it your own. There are some very strong ideas in the notion of teacher/student collaborations.
To Have Or To Be? by Fromm
Fromm’s diagnosis of the modern predicament is unrivaled in my readings so far, and this book strikes directly at the material obsession with things, achievements, and competition. He places this in opposition with a development of the self, a theme that echoes the stoic teachings directly.
But the way Fromm manages to combine his diagnosis of our predicament with a historical critique of both capitalism and communism is something else. His disappointment with communism is particularly potent for anyone sympathetic to Marx’s own diagnosis of what’s wrong with the world. The fact that communism still ended up focusing on production, consumption, and the material life, rather than embracing the socialist ideals of community and its flourishing. As Fromm puts it, is the worker doing mindless assembly work at a factory really any better off whether the plant is owned by a capitalist or the state?
Another fascinating section of the book is the contrast of conformity with community. The idea that the two are not the same, and that overly conforming communities can be just as suffocating as the hyper-individualist pursuit, is fascinating, if a bit flimsily argued.
Man for Himself by Fromm
Yes, another Fromm book! This one dives deep into the idea of character orientations and a defense of self-love. I particularly like his premonition of The Marketing Orientation, and how it commodifies life and persons. It’s almost like he knew that the Influencer would be a 2019 phenomenon, writing, as he was, in 1947! And yet, he rejects the supposed virtues of complete self-sacrifice as simply another escape. He redeems the idea of having self-love. Of being capable of loving who you are as a precondition for being capable of loving others. And to express that love for yourself by making the most of the human powers you are endowed with. It’s a beautiful, simple, yet counterintuitive notion.
Radical Candor by Scott
This is another modern management book that I wish had just been a blog post. I really like the fundamental premise, which is expressed as an opposition of two terms: Radical Candor vs Ruinous Empathy. The idea that you aren’t helping someone you work with by shielding them from criticism and feedback. If you pack all of that into so much soft cotton that they miss what’s being said, and actually think things are going well, you’re doing them a terrible disservice.
At the same time, Radical Candor isn’t just “brutal honesty”. It’s empathetic honesty. Telling people where they stand from a position of care, not disinterest. It’s a great frame for thinking about feedback in the workplace (and, really, life!).
Unfortunately this lovely dichotomy is tortured to book length with a series of ever more fawning anecdotes from the halls of Google and Apple. The constant name dropping, the constant excuses for bad behavior, and the inherent corporate worshipping that goes on in these anecdotes is simply too much. So too is the fact that the anecdotes just don’t really add much to the basic framework. Anyway, lovely dichotomy, would have made a great blog post, can’t recommend the book.
Simulacra and Simulation by Baudrillard
I’ve only just started on this, but already wrote a bunch of notes that have me eager to finish it. Here are a few choice ones: “Our entire linear and accumulative culture collapses if we cannot stockpile the past in plain view” and “Prisons are there to hide that it is the social in its entirety, in its banal omnipresence, that is carceral” and “Disneyland is presented as imaginary in order to make us believe that the rest is real”.
February 10, 2020
Mailing list software should stop spying on subscribers
The internet is finally coming out of its long haze on privacy, but it’s with one hell of a hangover. So many practices that were once taken for granted are now getting a second, more critical look. One of those is the practice of spying on whether recipients of marketing emails open them or not.
Back in August, we vowed to stop using such spying pixels in our Basecamp emails. And do you know what? It’s been fine! Not being able to track open rates, and fret over whether that meant our subject lines weren’t providing just the right HOOK, has actually been a relief.
But whether these open rates are “useful” or not is irrelevant. They’re invasive, they’re extracted without consent, and they break the basic assumptions most people have about email. There’s a general understanding that if you take actions on the internet, like clicking a link or visiting a site, there’s some tracking associated with that. We might not like it, but at least we have a vague understanding of it. Not so with email spy pixels.
Just about every normal person (i.e. someone not working in internet marketing) has been surprised, pissed, or at least dismayed when I tell them about spy pixels in emails. The idea that simply opening an email subjects you to tracking is a completely foreign one to most people.
When I’ve raised this concern in conversations with people in the marketing industry, a lot of them have taken offense at the term “spy pixels”. Affixing the spying label made a lot of them uncomfortable, because they were just trying to help! I get that nobody wants to think of themselves as the bad guy (Eilish notwithstanding), but using the word “spy” isn’t exactly a reach.
Here’s the dictionary definition of a spy: “a person who secretly collects and reports information on the activities, movements, and plans”. That describes a spy pixel pretty well: it tracks whether you open an email or not, without your knowledge or consent!
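To make the mechanics concrete, here’s a minimal sketch of how such a pixel typically works (the URL and field names are hypothetical, purely for illustration): the sender embeds a unique, invisible one-by-one image per recipient, and the moment a mail client fetches that image, the sender’s server logs an “open” for that specific person.

```python
# Illustrative sketch of an email "spy pixel" (all names and URLs are
# hypothetical). Each recipient gets a unique image URL; when their mail
# client fetches it, the sender's server records an "open" -- no click,
# no consent, no visible indication to the reader.
import uuid

def make_spy_pixel(recipient_email, base_url="https://track.example.com/open"):
    # A unique token per recipient makes each open attributable.
    token = uuid.uuid4().hex
    pixel_url = f"{base_url}?r={token}"
    # The 1x1 transparent image is invisible in the rendered email.
    html = f'<img src="{pixel_url}" width="1" height="1" alt="">'
    return token, html

token, html = make_spy_pixel("doris@example.com")
print(html)
```

That’s the whole trick: the “tracking” is just an image request the recipient never sees and never agreed to.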
So. Let’s stop doing that. Collectively. And the best place to instigate reform is with the mailing list software we use. A modest proposal for a basic ethics reform:
1) Mailing list software should not have spy pixels turned on by default. This is the most important step, because users will follow the lead of their software. It must be OK to spy on whether people open my marketing emails if the software I’m using provides that by default.
2) Mailing list software can ask for explicit consent when the sender really does want to track open rates. Let the sender include a disclaimer at the bottom of their email: “[The sender] would like to know when you open this email to help improve their newsletter. If that’s OK with you, [please opt-in to providing read receipts]. Thanks!”.
That’s it. Don’t do it by default, ask for informed consent if you must. Being respectful of someone’s privacy isn’t rocket science.
And remember, you can still tag your links in those emails with ?source=newsletter or whatever to see whether your call-to-action is working. As we discussed, people have a basic understanding that clicking links and visiting websites – explicit actions they take! – has some tracking involved.
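That click-based attribution is trivial to do mechanically. Here’s a small sketch of tagging outbound links with a source parameter (the parameter name and URLs are just examples), which tells you whether a call-to-action worked without tracking opens:

```python
# A minimal sketch of consent-respecting link tagging: append a source
# parameter to outbound links so that clicks -- explicit actions the
# reader takes -- can be attributed. Parameter names and URLs are
# illustrative only.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_link(url, source="newsletter"):
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # keep any existing parameters
    query["source"] = source              # e.g. ?source=newsletter
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_link("https://example.com/pricing"))
# -> https://example.com/pricing?source=newsletter
```

The receiving site then counts visits arriving with that parameter, which is all the campaign feedback most senders actually need.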
This isn’t going to magically make everything better. It’s not going to fix all the issues we have with privacy online or even all the deceptive practices around mailing lists. But it’s going to make things a little better. And if we keep making things a little better, we’ll eventually wake up to a world that’s a lot better.
January 31, 2020
Integrated systems for integrated programmers
One of the great tragedies of modern web development over the last five years or so has been the irrational exuberance for microservices. The idea that making a single great web application had simply become too hard, but if we broke that app up into many smaller apps, it’d all be much easier. Turned out, surprise-surprise, that it mostly wasn’t.
As Kelsey Hightower searingly put the fallacy: “We’re gonna break it up and somehow find the engineering discipline we never had in the first place”.
But it’s one of those hard lessons that nobody actually wants to hear. You don’t want to hear that the reason your monolith is a spaghetti monster is because you let it become that way, one commit at a time, due to weak habits, pressurized deadlines, or simply sheer lack of competence. No, what you want to hear is that none of that mess is your fault. That it was simply because of the oppressive monolithic architecture. And that, really, you’re just awesome, and if you take your dirty code and stick it into this new microservices tumbler, it’s going to come out sparkling clean, smelling like fucking daffodils.
The great thing about such delusions is that they can keep you warm for quite a while. Ah yeah, sure, maybe the complexities of your new microservices monstrosity are plain as day right from the get-go, but you can always excuse them with “it’s really going to pay off once we…” bullshit. And it’ll work! For a while. Because, who knows? Maybe this is better? But it’s not. And the day you really have to admit it’s not, you’re probably not even still there. On to the next thing.
Microservices as an architectural gold rush appealed to developers for the same reason TDD appeals to developers: it’s the pseudoscientific promise of a diet. The absolution of a new paradigm to wash away and forgive our sins. Who doesn’t want that?
Well, maybe you? Now, after you’ve walked through the intellectual desert of a microservice approach to a problem that didn’t remotely warrant it (i.e., almost all of them), maybe you’re ready to hear a different story. There’s a slot in your brain for a counterargument that just wasn’t there before.
So here’s the counterargument: Integrated systems are good. Integrated developers are good. Being able to wrap your mind around the whole application, and have developers who are able to make whole features, is good! The road to madness and despair lies in specialization and compartmentalization.
The galaxy brain takes it all in.
But of course, you cry, what if the system is too large to fit in my brain? Won’t it just swap and swap until I hit a kernel panic? Yes, if you try to stick in a bloated beast of an application, sure.
So the work is to shrink the conceptual surface area of your application until it fits in a normal, but capable and competent, programmer’s brain. Using conceptual compression, sheer good code writing, a productive and succinct environment, using shortcuts and patterns. That’s the work.
But the payoff is glorious. Magnificent. SUBLIME. The magic of working on an integrated system together with integrated programmers is a line without limits, arbitrary boundaries, or surly gatekeepers.
Forget frontend or backend. The answer is all of it. At the same time. In the same mind.
This sounds impossible if you’ve cooked your noodle too long in the stew of modern astronautic abstractions. If you turn down the temperature, you’ll see that the web is actually much the same as it always was. Sure, a few expectations increased here, and a couple of breakthrough techniques appeared there, but fundamentally, it’s the same. What changed was us. And mostly not in ways for the better.
If your lived experience still hasn’t hit the inevitable wall of defeat on the question of microservices, then be my guest: sit there with your folded arms and your smug pout. It’s ok. I get it. There’s not an open slot for this argument in your brain just yet. It’s ok. I’m patient! I’ll still be here in a couple of years when there’s room. And then I’ll send you a link to this article on Twitter.
Peace. Love. Integration.
January 17, 2020
Testimony before the House Antitrust Subcommittee
My name is David Heinemeier Hansson, and I’m the CTO and co-founder of Basecamp, a small internet company from Chicago that sells project-management and team-collaboration software.
When we launched our main service back in 2004, the internet provided a largely free, fair, and open marketplace. We could reach customers and provide them with our software without having to ask any technology company for permission or pay them for the privilege.
Today, this is practically no longer true. The internet has been colonized by a handful of big tech companies that wield their monopoly power without restraint. This power allows them to bully, extort, or, should they please, even destroy our business – unless we accept their often onerous, exploitative, and ever-changing terms and conditions.
These big tech companies control whether customers are able to find us online, whether customers can access our software using their mobile devices, and even define the questionable ethics of what a competitive marketing campaign must look like.
A small company like ours simply has no real agency to reject or resist the rules set by big tech. And neither do consumers. The promise that the internet was going to cut out the middleman has been broken.
We’re all left to accept that these companies can and do alter the deal, any deal, however they please. And whenever they do, our only recourse is to pray that they do not alter it any further.
Let’s start with Google. Their monopoly in internet search is near total, and their multi-billion-dollar bribes to browser makers like Apple ensure no fair competition will ever have a chance to emerge.
Google uses this monopoly to extort businesses like ours into paying for the privilege of having consumers who search for our trademarked brand name find us. Because if we don’t, they will sell our brand name as misdirection to our competitors. Google feigns interest in recognizing trademark law by banning the use of trademarked terms in ad copy, but puts the onus of enforcement on the victims and does nothing to stop repeat offenders. Unless, of course, the trademarked terms are those belonging to Google itself. Then enforcement is swift and automatic. You will not find any competitor ads for Google’s own important properties.
Google would never have been able to capture a monopoly in search by acting like this from the start. Misdirecting consumers, blanketing search results with ads, and shaking down small businesses. In the absence of meaningful regulation, they’ll continue to extract absurd monopoly rents, while bribing browser makers to ensure nothing changes.
Apple too enjoys the spoils of monopoly pricing power. With the App Store, they own one of the only two mobile application stores that matter (the other belongs to Google!). This cozy duopoly has allowed Apple to keep fees on payment processing for application makers like us exorbitantly high. Whereas a competitive market like that for credit-card processing is only able to sustain around a 2% fee for merchants, Apple, along with Google, has been able to charge an outrageous 30% for years on end.
Apple may claim that they do more than payment processing for this fee, such as hosting applications and providing discovery, but the company undercuts this argument by giving these services away for free to application makers who do not charge for their apps.
But worse still are the draconian restrictions and merciless retribution that Apple brings to bear on application makers who dare to decline using Apple’s payment services. Even a mere link to an external webpage, one that explains how to sign up for a service that doesn’t use Apple’s payment system, can get your application rejected.
Every application maker using Apple’s App Store lives in fear that their next update will be denied, or even that their application will be removed. All it takes is being assigned the wrong review clerk, one who chooses to interpret the often vague and confusing rules differently than the last. Then you’ll be stuck in an appeals process that would make Kafka blush.
Finally, Facebook’s industrial-scale vacuuming of everyone’s personal data has created an ad-targeting machine so devastatingly effective that the company, together with – guess who! – Google, is currently capturing virtually all growth in internet advertising. I quote a report in my written testimony that put that combined capture, between Facebook and Google, at 99% in 2016. Not even Putin would dare brag of an approval rating that high!
Facebook is able to maintain this iron grip on the collection of personal data by continuing to buy any promising competitor. The acquisitions of Instagram and WhatsApp should never have been approved by regulators, and need to be urgently undone.
This creates a marketplace where companies that wish not to partake in the wholesale violation of consumer privacy are at a grave disadvantage. If you choose not to take advantage of this terrifying and devastatingly effective ad machine, your competitors surely will.
This has been but a brief taste of what it’s like to live as a small tech company in a digital world owned and operated by big tech. And I didn’t even touch on the misery of attempting direct, head-on competition with any of these conglomerates. But at some point, all businesses will be competing against big tech, simply because big tech is bent on expanding until it does absolutely everything! The aforementioned companies already do payment processing, credit card issuing, music distribution, TV production, advertising networks, map making, navigation services, alarm systems, cameras, computers, medical devices, and about a billion other things.
Help us, congress. You’re our only hope.
This testimony was delivered before the House Antitrust Subcommittee’s hearing on Online Platforms and Market Power in Part 5: Competitors in the Digital Economy on January 17th, 2020.
Expanded Written Testimony Submitted to the House Antitrust Subcommittee (download)
Basecamp is hiring a Programmer
We’re hiring a programmer to join our Research & Fidelity team to help shape the front end of our Rails applications and expand our suite of open-source JavaScript frameworks. We’re accepting applications for the next two weeks with a start date in early April.
We strongly encourage candidates of all different backgrounds and identities to apply. Each new hire is an opportunity for us to bring in a different perspective, and we are always eager to further diversify our company. Basecamp is committed to building an inclusive, supportive place for you to do the best and most rewarding work of your career.
ABOUT THE JOB
The Research & Fidelity team consists of two people, Sam Stephenson and Javan Makhmali, whose work has given rise to Stimulus, Turbolinks, and Trix—projects that exemplify our approach to building web applications. You’ll join the team and work with them closely.
In broad terms, Research & Fidelity is responsible for the following:
- Designing, implementing, documenting, and maintaining front-end systems for multiple high-traffic applications
- Building high-fidelity user interface components with JavaScript, HTML, and CSS
- Assisting product teams with front-end decisions and participating in code reviews
- Tracking evergreen browser changes and keeping our applications up-to-date
- Extracting internal systems and processes into open-source software and evolving them over time
As a member of the R&F team at Basecamp, you’ll fend off complexity and find a simpler path. You’ll fix bugs. You’ll go deep. You’ll learn from us and we’ll learn from you. You’ll have the freedom and autonomy to do your best work, and plenty of support along the way.
Our team approaches front-end work from an unorthodox perspective:
- Our architecture is best described as “HTML over the wire.” In contrast to most of the industry, we embrace server-side rendered HTML augmented with minimal JavaScript behavior.
- We implement features on a continuum of progressive enhancement. That means we have a baseline of semantic, accessible HTML, layered with JavaScript and CSS enhancements for desktop, mobile web, and our hybrid Android and iOS applications.
- We believe designers and programmers should build UI together, and that HTML is a common language and shared responsibility. Our tools and processes are manifestations of this belief.
- We are framework builders. We approach intractable problems from first principles to make tools that help make Basecamp’s product development process possible.
Here are some things we’ve worked on recently that might give you a better sense of what you’ll be doing day to day:
- Working with a designer during Office Hours (our weekly open invitation) to review and revise their code
- Researching Service Workers and building a proof-of-concept offline mode for an existing application
- Creating a Stimulus controller to manage “infinite” pagination using IntersectionObserver
- Investigating a Safari crash when interacting with elements and filing a detailed report on WebKit’s issue tracker
- Extracting Rails’ Action Text framework from the rich text system in Basecamp 3
- Working with programmers from the iOS and Android teams to co-develop a feature across platforms
- Porting Turbolinks from CoffeeScript to TypeScript and refactoring its test suite
- Responding to a security report for our Electron-based desktop app and implementing a fix
ABOUT YOU
We’re looking for someone with strong front-end JavaScript experience. You should be well-versed in modern browser APIs, HTML, and CSS. Back-end programming experience, especially with Ruby, is a plus but not a requirement. You won’t know how all the systems work on day one, and we don’t expect you to. Nobody hits the ground running. Solid fundamentals with software development, systems, troubleshooting, and teamwork pave the way.
You might have a CS degree. You might not. That’s not what we’re looking for. We care about what you can do and how you do it, not about how you got here. A strong track record of conscientious, thoughtful work speaks volumes.
This is a remote job. You’re free to work where you work best, anywhere in the world: home office, coworking space, coffeeshops. While we currently have an office in Chicago, you should be comfortable working remotely—most of the company does!
Managers of One thrive at Basecamp. We’re committed generalists, eager learners, conscientious workers, and curators of what’s essential. We’re quick to trust. We see things through. We’re kind to each other, look up to each other, and support each other. We achieve together. We are colleagues, here to do our best work.
We value people who can take a stand yet commit even when they disagree, and who understand the value in others being heard. We subject ideas to rigorous consideration and challenge each other, but we all remember that we’re here for the same purpose: to do good work together. That comes with direct feedback, openness to each other’s experience, and willingness to show up for each other as well as for the technical work at hand. We’re in this for the long term.
PAY AND BENEFITS
Basecamp pays in the top 10% of the industry based on San Francisco rates. Same position, same pay, no matter where you live. The salary for this position is either $149,442 (Programmer) or $186,850 (Senior Programmer). We assess seniority relative to the team at Basecamp during the interviewing process.
Benefits at Basecamp are all about helping you lead a healthy life outside of work. We won’t treat your life as dead code to be optimized away with free dinners and dry cleaning. You won’t find lures to keep you coding ever longer. Quality time to focus on work starts with quality time to think, exercise, cook a meal, be with family and friends—time to yourself.
Work can wait. We offer fully-paid parental leave. We work 4-day weeks through the summer (northern hemisphere), enjoy a yearly paid vacation, and take a one-month sabbatical every three years. We subsidize coworking, home offices, and continuing education, whether professional or hobbyist. We match your charitable contributions. All on a foundation of top-shelf health insurance and a retirement plan with a generous match. See the full list.
HOW TO APPLY
Please send an application that speaks directly to this position. Show us your role in Basecamp’s future and Basecamp’s role in yours. Address some of the work we do. Tell us about a newer (less than five years old) web technology you like and why.
We’re accepting applications until Sunday, February 2, 2020, at 9:00PM US-Central time. There’s no benefit to filing early or writing a novel. Keep it sharp, short, and get across what matters to you. We value great writers, so take your time with the application. We’re giving you our full attention.
We expect to take two weeks to review all applications. You’ll hear from us by February 14 about whether you’ve advanced to the written code review part of the application process. If so, you’ll submit some code you’re proud of, review it, and tell its story. Then on to an interview. Our interviews are one hour, all remote, with your future colleagues, on your schedule. We’ll talk through some of your code and some of ours. No gotchas, brainteasers, or whiteboard coding. We aim to make an offer by March 20 with a start date in early April.
We look forward to hearing from you! ✌️
January 14, 2020
My polyglot Advent of Code
At Basecamp we have an internal project called “Your proudest moments”. My colleague Dan set it up so that people at Basecamp could share anything we’re proud of. So far people have shared impressive, really feel-good accomplishments, such as performing complicated house renovations without professional help, writing books, or taking their parents on an unforgettable vacation.
This post comes from my first contribution to that project. As I told my colleagues, it went to “Your proudest moments” only because we don’t have a “Your most useless and pointless self-inflicted programming hours” project, which would have been a better fit. Still, this rather ridiculous undertaking made me super proud, and I also had a lot of fun doing it.
All the 50 stars!
Advent of Code is an advent calendar of programming puzzles that has been happening every year since 2015, made by Eric Wastl. Every day from December 1st to 25th, a new puzzle with two parts gets released, and for each part solved you get a star. The goal is to collect all 50 stars to save Christmas. The problems follow a story, normally involving space, a spaceship, elves, reindeer, and Santa. The difficulty increases as the days pass: the first ones are simple, but then they start getting complicated and laborious. Some of them are pretty tricky! They aren’t necessarily super hard algorithmically – binary search, BFS, Dijkstra, Floyd–Warshall, A*… might be all you need – but each one can easily take me several hours to finish. This year there was also a bit of modular arithmetic, which I loved, and a little bit of trigonometry. The problems are quite amazing. This year, for example, included a Pong game written in Intcode, a made-up assembly language (you had to program the joystick movements and feed them to the program), and a text-based adventure game in Intcode as well. It’s seriously cool.
the quality and thought behind the problems in AoC is really outstanding, you can feel the amount of work and dedication behind their preparation, and the result is challenging, engaging, and fun (CAVEAT: marriages may be harmed) https://t.co/CWnSgfTsTh
— Xavier Noria (@fxn) December 12, 2019
I’ve done it in 2016, 2017, and 2018 – the first time in Ruby, then the last two years in Elixir. I never finish by December 25th, because for me it’s impossible to work, take care of life stuff, and also spend several hours programming these puzzles every day.
January 12, 2020
I went to see a movie, and instead I saw the future
A few days ago my wife and I went to see Uncut Gems at a Regal theater in Chicago.
We booked our ticket online, reserved our seats, showed up 15 minutes ahead of time, and settled in.
After the coil of previews, and jaunty, animated ads for sugary snacks, the movie started.
About 20 minutes in, a loud, irritating buzzing started coming from one corner of the theater. No one was sure what to make of it. Was it part of the movie? We all just let it go.
But it didn’t stop. Something was wrong with the audio. It was dark, so you couldn’t see, but you could sense people wondering what happens now. Was someone from the theater company going to come in? Did they even know? Is there anyone up in the booth watching? Did we have to get someone?
We sent a search party. A few people stood up and walked out to go get help. The empty hallways were cavernous, no one in sight.
Eventually someone found someone from the staff to report the issue. Then they came back into the theater to settle in and keep watching the movie.
No one from the staff came in to explain what was going on. The sound continued for about 10 more minutes until the screen abruptly went black. Nothingness. At least the sound was gone.
Again, no one from the theater company came in to say what was going on. We were all on our own.
The nervous, respectfully quiet giggle chatter started. Now what?
A few minutes later, the movie started again. From the beginning. No warning. Were they going to jump forward to right before they cut it off? Or were we going to have to watch the same 25 minutes again?
No one from the theater company appeared, no one said anything. The cost of the ticket apparently doesn’t include being in the loop.
Eventually people started walking out. My wife and I included.
As we walked out into the bright hallway, we squinted and noticed a small congregation of people way at the end of the hall. It felt like finally spotting land after having been at sea for a while.
We walked up. There were about eight of us, and two of them. They worked here. We asked what was going on, they didn’t know. They didn’t know how to fix the sound, there was no technical staff on duty, and all they could think of was to restart that movie to see if that fixed it.
We asked if they were planning on telling the people in the theater what was going on. It never occurred to them. They dealt with movies, they didn’t deal with people.
We asked for a refund. They pointed us to the box office. We went there and asked for a refund. The guy told us no problem, but he didn’t have the power to do that. So he called for a manager. The call echoed. Everyone looked around.
Finally a manager came over. We asked for a refund, he said he could do that. We told him we purchased the tickets through Fandango, which complicated things. Dozens of people lined up behind us. The refund process took a few minutes.
Never a sorry from anyone. Never even an acknowledgment that what happened wasn’t supposed to happen. Not even a comforting “gosh, that’s never happened before” lie. It was all purely transactional. From the tickets themselves, to the problem at hand, to the refund process. Humanity nowhere.
We left feeling sorry for the whole thing. The people who worked at the theater weren’t trained to deal with the problem. They probably weren’t empowered to do anything about it anyway. The technical staff apparently doesn’t work on the premises. The guy at the box office wanted to help, but wasn’t granted the power to do anything. And the manager, last in the line of misery, had to manually, and slowly, process dozens of refunds on his own. No smiles entered the picture.
This is the future, I’m afraid. A future that plans on everything going right so no one has to think about what happens when things go wrong. Because computers don’t make mistakes. An automated future where no one actually knows how things work. A future where people are so far removed from the process that they stand around powerless, unable to take the reins. A future where people don’t remember how to help one another in person. A future where corporations are so obsessed with efficiency that it doesn’t make sense to staff a theater with technical help, because things only go wrong sometimes. A future with a friendlier past.
I even imagine an executive somewhere looking down on the situation saying “That was well handled. Something went wrong, people told us, someone tried to restart it, it didn’t work. People got their refunds. What’s the problem?” If you don’t know, you’ll never know.
January 10, 2020
AWS S3: You’re out of order.
Back in November, we noticed something odd happening with large uploads to Amazon S3. Uploads would pause for 10 seconds at a time and then resume. It had us baffled. When we started to dig, what we found left us with more questions than answers about S3 and AWS networking.
We use Amazon S3 for file storage. Each Basecamp product stores files in a primary region, which is replicated to a secondary region. This ensures that if any AWS region becomes unavailable, we can switch to the other region, with little impact to users uploading and downloading files.
Back in November, we started to notice some really long latencies when uploading large files to S3 in us-west-2, Basecamp 2’s primary S3 region. When uploading files over 100MB, we use S3’s multipart API to upload the file in multiple 5MB segments. These uploads normally take a few seconds at most. But we saw segments take 40 to 60 seconds to upload. There were no retries logged, and eventually the file would be uploaded successfully.
[AWS S3 200 0.327131 0 retries]
[AWS S3 200 61.354978 0 retries]
[AWS S3 200 1.18382 0 retries]
[AWS S3 200 43.891385 0 retries]
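The 5MB segmenting behind those multipart uploads can be sketched roughly like this (the helper and its names are illustrative, not our production code; in the real flow each chunk would go through the S3 multipart API: create_multipart_upload, then upload_part per chunk, then complete_multipart_upload):

```ruby
require "stringio"

# Illustrative sketch of splitting a file into the 5MB parts an S3
# multipart upload works with. Each entry stands in for one upload_part
# call; the last part is allowed to be smaller than the rest.
PART_SIZE = 5 * 1024 * 1024 # the 5MB segment size mentioned above

def each_part(io, part_size: PART_SIZE)
  parts = []
  part_number = 1 # S3 part numbers start at 1
  while (chunk = io.read(part_size))
    parts << { part_number: part_number, bytes: chunk.bytesize }
    part_number += 1
  end
  parts
end
```

A 200MB file splits into 40 parts this way, which is why a handful of 40-to-60-second parts turns a few-second upload into a multi-minute one.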
For our applications that run on-premise in our Ashburn, VA datacenter, we push all S3 traffic over redundant 10GB Amazon Direct Connects. For our Chicago, IL datacenter, we push S3 over public internet. To our surprise, when testing uploads from our Chicago datacenter, we didn’t see any increased upload time. Since we only saw horrible upload times going to us-west-2, and not our secondary region in us-east-2, we made the decision to temporarily promote us-east-2 to our primary region.
Now that we were using S3 in us-east-2, our users were no longer feeling the pain of high upload time. But we still needed to get to the bottom of this, so we opened a support case.
Our initial indication was that our direct connections were causing slowness when pushing uploads to S3. However, after testing with mtr, we were able to rule out direct connect packet loss and latency as the culprit. As AWS escalated our case internally, we started to analyze the TCP exchanges while we upload files to S3.
The first thing we needed was a repeatable and easy way to upload files to S3. Taking the time to build and set up proper tooling when diagnosing an issue really pays off in the long run. In this case, we built a simple tool that uses the same Ruby libraries as our production applications. This ensured that our testing would be as close to production as possible. It also included support for multiple S3 regions and benchmarking for the actual uploads. Just as we expected, uploads to both us-west regions were slow.
irb(main):023:0> S3Monitor.benchmark_upload_all_regions_via_ruby(200000000)
region user system total real
us-east-1: 1.894525 0.232932 2.127457 ( 3.220910)
us-east-2: 1.801710 0.271458 2.073168 ( 13.369083)
us-west-1: 1.807547 0.270757 2.078304 ( 98.301068)
us-west-2: 1.849375 0.258619 2.107994 (130.012703)
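A minimal harness in the spirit of that tool might look like the following sketch (the S3Monitor name comes from the session above; the injected uploader is a stand-in for the aws-sdk-s3 calls our applications make, so the timing logic here runs without credentials):

```ruby
require "benchmark"

# Illustrative sketch of a region-by-region upload benchmark. The
# uploader lambda is a stand-in for the production S3 client, injected
# so the harness itself stays testable.
module S3Monitor
  REGIONS = %w[us-east-1 us-east-2 us-west-1 us-west-2].freeze

  # Returns a hash of region => wall-clock seconds for one upload each.
  def self.benchmark_upload_all_regions(bytes, uploader:)
    payload = "x" * bytes
    REGIONS.each_with_object({}) do |region, results|
      results[region] = Benchmark.measure { uploader.call(region, payload) }.real
    end
  end
end
```

In the real tool the uploader performs the same multipart upload our applications do, and the `real` (wall-clock) column is what exposed the regional differences.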
While we were running these upload tests, we used tcpdump to output the TCP traffic so we could read it with Wireshark and TShark.
$ tcpdump -i eth0 dst port 443 -s 65535 -w /tmp/S3_tcp_dump.log
When analyzing the tcpdump using Wireshark, we found something very interesting: TCP retransmissions. Now we were getting somewhere!
TCP Retransmissions
Analysis with TShark gave us the full story of why we were seeing so many retransmissions. During the transfer of 200MB to S3, we would see thousands of out-of-order packets, causing thousands of retransmissions. Even though we saw out-of-order packets to all US S3 regions, these retransmissions, compounded with the increased round-trip time to the us-west regions, are why those regions were so much worse than the us-east ones.
# tshark -r S3_tcp_dump.log -q -z io,stat,1,"COUNT(tcp.analysis.retransmission) tcp.analysis.retransmission","COUNT(tcp.analysis.duplicate_ack)tcp.analysis.duplicate_ack","COUNT(tcp.analysis.lost_segment) tcp.analysis.lost_segment","COUNT(tcp.analysis.fast_retransmission) tcp.analysis.fast_retransmission","COUNT(tcp.analysis.out_of_order) tcp.analysis.out_of_order"
Running as user "root" and group "root". This could be dangerous.
===================================================================================
| IO Statistics |
| |
| Duration: 13.743352 secs |
| Interval: 1 secs |
| |
| Col 1: COUNT(tcp.analysis.retransmission) tcp.analysis.retransmission |
| 2: COUNT(tcp.analysis.duplicate_ack)tcp.analysis.duplicate_ack |
| 3: COUNT(tcp.analysis.lost_segment) tcp.analysis.lost_segment |
| 4: COUNT(tcp.analysis.fast_retransmission) tcp.analysis.fast_retransmission |
| 5: COUNT(tcp.analysis.out_of_order) tcp.analysis.out_of_order |
|---------------------------------------------------------------------------------|
| |1 |2 |3 |4 |5 | |
| Interval | COUNT | COUNT | COUNT | COUNT | COUNT | |
|--------------------------------------------------| |
| 0 <> 1 | 28 | 11 | 0 | 0 | 0 | |
| 1 <> 2 | 3195 | 0 | 0 | 0 | 5206 | |
| 2 <> 3   |  413  |   0   |   0   |   0   | 1962  |         |
...
| 13 <> Dur| 0 | 0 | 0 | 0 | 0 | |
===================================================================================
What’s interesting here is that we see thousands of out-of-order packets when traversing our direct connections. However, when going over the public internet, there are no retransmissions or out-of-order packets. When we brought these findings to AWS support, their internal teams reported back that “out-of-order packets are not a bug or some issue with AWS Networking. In general, the out-of-order packets are common in any network.” It was clear to us that out-of-order packets were something we’d have to deal with if we were going to continue to use S3 over our direct connections.
“You’re out of order! You’re out of order! This whole network is out of order!”
Thankfully, TCP has tools for better handling of dropped or out-of-order packets. Selective Acknowledgment (SACK) is a TCP feature that allows a receiver to acknowledge non-consecutive data, so the sender can retransmit only the missing packets, not the out-of-order ones. SACK is nothing new and is enabled on all modern operating systems. I didn’t have to look far to find why SACK was disabled on all of our hosts: back in June, the details of SACK Panic were released, a group of vulnerabilities that allowed a remotely triggered denial of service or kernel panic on Linux and FreeBSD systems.
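On Linux, SACK is controlled by a sysctl. Checking and re-enabling it looks roughly like this (a configuration sketch; the file name under /etc/sysctl.d is our choice for illustration, not a standard):

```shell
# Check whether SACK is currently enabled (1 means on)
sysctl net.ipv4.tcp_sack

# Re-enable it at runtime
sysctl -w net.ipv4.tcp_sack=1

# Persist the setting across reboots
echo "net.ipv4.tcp_sack = 1" > /etc/sysctl.d/99-tcp-sack.conf
```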
In testing, the benefits of enabling SACK were immediately apparent. The out-of-order packets still existed, but they did not cause a cascade of retransmissions. Our upload time to us-west-2 was more than 22 times faster than with SACK disabled. This is exactly what we needed!
irb(main):023:0> S3Monitor.benchmark_upload_all_regions_via_ruby(200000000)
region user system total real
us-east-1: 1.837095 0.315635 2.152730 ( 2.734997)
us-east-2: 1.800079 0.269220 2.069299 ( 3.834752)
us-west-1: 1.812679 0.274270 2.086949 ( 5.612054)
us-west-2: 1.862457 0.186364 2.048821 ( 5.679409)
The solution would not be as simple as just re-enabling SACK, though. The majority of our hosts were on kernels new enough to have the SACK Panic patch in place, but we had a few hosts that could not be upgraded and were running vulnerable kernel versions. Our solution was to use iptables to drop connections with a low MSS value, which allowed SACK to be enabled while still blocking the attack.
$ iptables -A INPUT -p tcp -m tcpmss --mss 1:500 -j DROP
After almost a month of back-and-forth with AWS support, we did not get any indication why packets from S3 are horribly out of order. But thanks to our own detective work, some helpful tools, and SACK, we were able to address the problem ourselves.
January 6, 2020
The last tracker was just removed from Basecamp.com
Can you believe we used to willingly tell Google about every single visitor to basecamp.com by way of Google Analytics? Letting them collect every last byte of information possible through the spying eye of their tracking pixel. Ugh.
But 2020 isn’t 2010. Our naiveté around data, who captures it, and what they do with it has collectively been put to shame. Most people now have a basic understanding that using the internet leaves behind a data trail, and quite a few people have begun to question just how deep that trail should be, and who should have the right to follow it.
In this new world, it feels like an obligation to make sure we’re not aiding and abetting those who seek to exploit our data – those who hoard every little clue in order to piece together a puzzle that’ll ultimately reveal all our weakest points and moments, then sell that picture to the highest bidder.
The internet needs to know less about us, not more. Just because it’s possible to track someone doesn’t mean we should.
That’s the ethos we’re trying to live at Basecamp. It’s not a straight path. Two decades of just doing as you did takes a while to unwind. But we’re here for that work.
Every request is now served from our own domains
Last year we stopped using pixel trackers in our Basecamp emails. This year we’re celebrating the start of a new decade by dropping the last third-party tracking pixel on basecamp.com. Now when you visit our marketing page, you only have to trust that we won’t abuse that data – not a laundry list of third parties you have no reasonable chance of vetting.
We still track that someone visited our page, but it’s really only the basics that interest us. How many people visited the page? Did a new pitch work better than the old? How many people signed up? Basic stuff like that. And basic stuff doesn’t require overly sophisticated tooling, so it’s fine that our homegrown package isn’t nearly as fancy or as piercing as offerings like Google Analytics. It doesn’t need to be.
We still aren’t entirely free of Google’s long data arm, though. You can still sign in with Google, though we’d encourage you to switch to our new two-factor authenticated, WebAuthn-capable in-house system. We’ll be deprecating the Sign-In with Google path entirely soon enough.
We also still use a variety of other data processors, like Customer.io, for onboarding emails. But going forward, the analysis for when that makes sense has absolutely changed. It’s no longer enough for something to be slightly more convenient or slightly cheaper for us to send data out of the house. Fewer dependencies, fewer processors, fewer eyes on our data and that of our customers is a powerful consideration all of its own.
Untangling yourself from the old paradigm of data is neither quick, easy, nor free. But it’s worth doing, even if you can only do it one step at a time. Think about what steps you could take in 2020.