The ‘Rework’ Podcast Goes Dark

Sorry, one more Basecamp link: Jason Fried and DHH (and other 37signals/Basecamp folks, allegedly) wrote a book called Rework about “the better way to work and run your business,” which has a spinoff podcast covering a mix of Basecamp behind-the-scenes and overall thought leadership, hosted by Wailin Wong and Shaun Hildner. It’s basically a nice business podcast that folks from Basecamp occasionally take over to talk about how great they are.

Or, rather, Rework did have a spinoff podcast. Wong and Hildner posted a 90-second “quick update” announcing a hiatus in response to the company’s changes. And it’s a pretty intense clip: Hildner couldn’t even finish the show’s standard intro, and Wong admitted it would be weird to carry on as normal.

And one side note, from the “shoulda seen this coming” department: despite Wong and Hildner serving as regular, weekly hosts for this podcast, their names are nowhere to be found on the show website. But you can probably guess whose names are prominently mentioned:

Rework is a podcast by the makers of Basecamp about a better way to work and run your business. While the prevailing narrative around successful entrepreneurship tells you to scale fast and raise money, we think there’s a better way. We’ll take you behind the scenes at Basecamp with co-founders Jason Fried and David Heinemeier Hansson and bring you stories from business owners who have embraced bootstrapping, staying small, and growing slow.

There’s also this line:

REWORK is proudly hosted by Transistor Podcasting Company

They credit their hosting platform, but not their hosts.

What really happened at Basecamp

In response to the Basecamp partners’ “full Coinbase” heel turn that I wrote about yesterday, Casey Newton spoke with Jason Fried, DHH, and numerous employees and reported on what’s been going on behind the scenes:

The controversy that embroiled enterprise software maker Basecamp this week began more than a decade ago, with a simple list of customers.

Around 2009, Basecamp customer service representatives began keeping a list of names that they found funny. More than a decade later, current employees were so mortified by the practice that none of them would give me a single example of a name on the list. … Many of the names were of American or European origin. But others were Asian, or African, and eventually the list — titled “Best Names Ever” — began to make people uncomfortable.

The series of events that led to yesterday’s policy change — which Newton confirms was not fully discussed internally before it was dropped like a bomb via the founders’ personal blogs — began with things like this, but grew to include a broader question of how the company handles diversity and inclusion. Between the lines of it all, it sure seems like the “committees” Fried railed against are a reference to an employee-led DEI Council, as is the new ban against “societal and political discussions” on company forums.

As I called out yesterday, unlike most tech companies with enough of a public profile to trigger this much discussion, Basecamp is an LLC under the near-total control of its two managing partners. That in itself is not that unusual; with the rise of “founder friendly” equity structures in the last decade, there are plenty of VC-backed businesses that also have dictatorial leaders and toxic work environments. The main difference is that unlike (say) Coinbase’s CEO Brian Armstrong (who tweeted a high five at Basecamp yesterday), the Basecamp partners aren’t accountable to investors or a board of directors — the buck really does stop with them.

“There’s always been this kind of unwritten rule at Basecamp that the company basically exists for David and Jason’s enjoyment,” one employee told me. “At the end of the day, they are not interested in seeing things in their work timeline that make them uncomfortable, or distracts them from what they’re interested in. And this is the culmination of that.”

How To Blog

Sometimes I wonder if I’ve simply forgotten how to blog. Little (shit)posts about whatever interested me used to come so easily — I’m sitting on over 1,000 entries from 2002-2008, from my original Movable Type blog, along with a few hundred from my old Tumblr. Most of those were gushing about some tech company or […]

Where The Web Fonts Go

Self-hosting web fonts can be easy; just add the font files somewhere in your site’s directory structure and reference them from your CSS. But if your site’s source code is stored in a GitHub repo, and you want your code to be public (or just forget to make it private), you may accidentally be violating the fonts’ license terms! Roel Nieskens called GitHub “the web’s largest font piracy site” due to web developers storing font files in publicly-viewable repos:

Let’s use the Github search API and see if we can find the most ubiquitous commercial font on the planet: Helvetica. And yep, more than 100,000 copies are findable on Github

What if you search for MyFonts’ products on Github? That’s exactly what I did. I skipped generic names that could result in false positives: names like Black, Latin or Text and fed the rest to the Github search API. The result? Of the deduped list of 29,951 fonts, 7,617 were present on Github – that’s a quarter of the entire MyFonts collection. Of their fonts labeled “bestseller”, 39 out of 49 can be found on Github, as well as 28 of the 30 labeled “top webfont”.

For a while now, I’ve kept my site’s source code private (even though I’d prefer it be public) so that I can store fonts there — it’s just so simple and straightforward to keep fonts and other assets with my code, and by keeping the repo private I can stay in compliance with all my font licenses.

But beyond that, having fonts in a Git repo is an anti-pattern because font files are relatively big binaries, which Git is not super-efficient at tracking or storing. And, because Git remembers everything, every font file I’ve ever used in any version of the site will remain part of the repo forever. Any time I (or Netlify’s build servers) clone a fresh copy, it’ll have to pull down a megabyte or so of font files, only a fraction of which it actually needs.
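You can watch this happen in a throwaway repo. This sketch uses random bytes as a stand-in for a font file (the file name is made up); deleting and committing doesn’t remove the blob from history:

```shell
#!/bin/sh
# Sketch: deleting a binary from a Git repo doesn't remove it from history.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
head -c 100000 /dev/urandom > font.woff2    # stand-in for a ~100 KB font binary
git add font.woff2
git -c user.email=me@example.com -c user.name=me commit -qm "add font"
git rm -q font.woff2
git -c user.email=me@example.com -c user.name=me commit -qm "remove font"
# The working tree is clean, but the blob still lives in the object store:
git cat-file -e HEAD~1:font.woff2 && echo "blob still in history"
```

Every clone pays for that blob forever, whether or not the current version of the site uses it.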

IMHO, the best idea is to not store web fonts in Git if you don’t have to, but where should they go instead?

My friend Stephen Nixon — who made the excellent typefaces Recursive and Name Sans — wrote up a nice post explaining why and how he securely hosts web fonts on AWS S3:

With the S3 Buckets feature of Amazon Web Services (AWS), this is relatively easy & very inexpensive – unless you are making a hugely-popular website, perhaps. You can (and should) configure it to only work on specific web domains, so you don’t break your licensing or end up paying for other people to use your font hosting!
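For fonts specifically, that domain restriction comes down to CORS: browsers won’t use a cross-origin `@font-face` file unless the response carries a matching `Access-Control-Allow-Origin` header. On S3 that’s a bucket-level CORS configuration, along these lines (the domain is a placeholder; the shape follows S3’s JSON CORS rule format):

```json
[
  {
    "AllowedOrigins": ["https://www.example.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 86400
  }
]
```

With a rule like this in place, other sites that hotlink your fonts get blocked by the browser, not just by the honor system.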

S3 is great — one of very few internet things that is fast, cheap, and good for most use cases, and it’s been that way for more than a decade. Amazon offers a powerful web control panel for working with S3 buckets and data, and there are also many excellent third-party and open source apps that can upload to S3. My favorites are Transmit, Panic’s venerable file-transfer client for macOS, and s3cmd, a Python-based open source command line tool.

For me, the main drawback to S3 is that it can be annoying to serve fonts or other files over SSL. All S3 buckets have default s3.amazonaws.com URLs that can be accessed over HTTP or HTTPS, which is great. But S3’s static website hosting features (which you may not need for this, but idk) are only available over regular HTTP, and if you want to leverage those or use a custom domain you’ll have to set up CloudFront, Amazon’s CDN service, which is extremely powerful but also complicated and rather expensive.

Another drawback to S3, less important for small projects but still worth thinking about, is that without CloudFront all your data is served from your chosen AWS datacenter, not from Amazon’s CDN. Some users may see latency or slower downloads, which is exactly what you don’t want with larger assets like fonts. Slow font downloads can block page rendering or exacerbate problems like FOUT, the flash of unstyled text.

So, for the fonts on this site, I decided to use DigitalOcean Spaces, an “object” (aka file) storage service that’s patterned after S3, and compatible with S3’s API so that apps like Transmit will work with it. It’s a lot simpler, both in the product itself (nice web UI, easy-to-understand settings) and in its pricing model (a flat $5/month fee), and it has a built-in CDN that can integrate with DigitalOcean’s DNS servers to effortlessly configure custom domains and SSL certificates.

DigitalOcean’s control panel makes it easy to set up and configure Spaces, including custom domains, SSL, and CORS rules

I keep all my fonts in the same directory of the same Spaces bucket, which I manage using Transmit:

My web fonts in their directory on my Spaces-powered CDN

Each subdirectory is named after the fonts’ CSS font-family name, so that my “API” for using the fonts is consistent. To enable the Söhne fonts, I add a link to fonts/soehne/index.css, and then I can use font-family: soehne, … in my CSS. Nice and simple.
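Each of those index.css files is just a set of @font-face rules pointing at the font files alongside it. A sketch of what one might contain (the weights and file names here are illustrative, not the actual contents of my CSS):

```css
/* fonts/soehne/index.css — a sketch; file names and weights are made up */
@font-face {
  font-family: soehne;
  src: url("./soehne-regular.woff2") format("woff2");
  font-weight: 400;
  font-style: normal;
  font-display: swap;
}

@font-face {
  font-family: soehne;
  src: url("./soehne-semibold.woff2") format("woff2");
  font-weight: 600;
  font-style: normal;
  font-display: swap;
}
```

Because every family follows the same pattern, adding a new typeface to a project is just one `<link>` tag and one `font-family` declaration.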

Because these directory names and URLs follow a nice, regular structure, I can lightly automate adding these links in my Hugo templates, providing a list of family slugs that are turned into <link> tags. These are hard-coded, but could just as easily be set as front matter data on a page or post.

CDN-hosted web fonts, integrated into my site’s Hugo templates
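In Hugo, that automation can be as small as ranging over a slice of slugs. A sketch (the partial name and CDN hostname are placeholders, not my actual setup):

```html
{{/* layouts/partials/font-links.html — sketch; hostname and slugs are placeholders */}}
{{ $fontFamilies := slice "soehne" "recursive" }}
{{ range $fontFamilies }}
  <link rel="stylesheet" href="https://cdn.example.com/fonts/{{ . }}/index.css">
{{ end }}
```

Swapping the hard-coded slice for a front matter param is a one-line change whenever I want per-page font loading.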

Now that these fonts are up in The Cloud, I can easily reference them in test pages and experiments without having to copy them over from another project.

Parcel Post

I dunno about you, but I’ve been missing the old days when we could try out some new web technique or think through some code by just opening up an editor, making a fresh index.html, and getting to work.

I’m generally a fan of frameworks that let you get to work with a bare minimum of boilerplate code or setup, and I’m particularly fond of tools that leverage the filesystem and/or the native syntax of the web, so that web development feels like it did back when uploading PHP scripts to an FTP site felt magical.

This is a rare feeling these days; in our rush to give developers the tools to build powerful, scalable web apps, it feels like we’ve neglected or even forgotten how to make web pages. I miss the simplicity and immediacy — the feeling of magic — that made web development so fun when I was starting out.

Next.js has some of this magic. It’s a React-based app framework that uses file and directory names to set up URL routes; given a file named about/index.js, Next will create a web page whose URL is /about. This isn’t quite the old web I loved in the 2000s, because React is involved. That file isn’t a web page, it’s a JavaScript file that exports a component, and there are things that are stupidly hard to do without layering on ever more libraries and boilerplate. But what’s nice about Next is that once you install it and its dependencies, you can just create a couple of files, run next dev, and you’re off to the races.


This weekend I wanted to play around with Chroma.js, a library for manipulating colors and scales. I started out trying it in CodePen and Glitch — both great tools for trying things out — but found myself wanting to write code in my favorite editor, not a browser.

Parcel made it possible for me to have my cake and eat it too — to write code like I was building a totally local, static web page, but enjoy all the benefits of modern build tools.

Parcel’s website describes it as “a compiler for all your code, regardless of the language or toolchain… (it) takes all of your files and dependencies, transforms them, and merges them together into a smaller set of output files that can be used to run your code.” All of which is true, but I think it obscures the most important part: Parcel does all of this with little or no setup, configuration, or boilerplate code.

This may seem remarkable in different ways depending on your experience with the modern JavaScript world.

If you’re familiar with compiled languages or frameworks, or with other bundler tools like Webpack, Parcel’s big pitch is that it can simplify your life. Whenever I use Webpack, it takes me dozens of minutes to write (or rather copy-paste) a configuration file and install packages just to make my code run. Even for an experienced JS programmer who’s used to this pain, Parcel is a valuable time-saver.

But what’s really great about Parcel is that it’s a Webpack-like tool you can use without any prior knowledge of Webpack-like tools: it uses your own code to configure itself.

Take an HTML document like this:

<!-- index.html -->
<html>
  <head>
    <title>A throwaway web page experiment</title>
    <link href="./styles.css" rel="stylesheet" />
  </head>
  <body>
    <h1>Time to code!</h1>
    <div id="vue-app"></div>
    <script src="./app.js"></script>
  </body>
</html>

In a bygone era, with all your HTML, JavaScript, and CSS code hand-crafted as static files, you could just load this into a browser and go. In fact, let me tell you a secret: that way of making web pages still works. The modern web platform still supports simple ways of working; it just doesn’t give you an easy way to use preprocessors if you want them.

But Parcel does! Once it’s installed, just run this command:

parcel index.html

Reading your HTML, Parcel will see that it depends on two other assets — styles.css and app.js — and build those, preprocessing them according to their file extensions. It’ll (re-)build your HTML too, replacing references to these source code files with the built asset files it generates.

What’s more, these don’t have to be plain CSS or JS files. If you want to use (say) Sass and TypeScript, you could do this and it will Just Work:

<!-- index.html -->
<html>
  <head>
    <title>A throwaway web page experiment</title>
    <link href="./styles.scss" rel="stylesheet" />
  </head>
  <body>
    <h1>Time to code!</h1>
    <div id="vue-app"></div>
    <script src="./app.ts"></script>
  </body>
</html>

Beyond that, Parcel brings a web server and hot reloading to the party — you give it some files, it gives you a local development URL, and that URL auto-magically refreshes as you edit code. Hot reloading has been a revolution in how I approach web design: beyond just reloading pages, seeing code and style changes applied seamlessly makes designing in the browser responsive and delightful. Hot reloading with Webpack usually requires a framework or complicated setup; in Parcel it too Just Works.


So what’s the catch? Well, Parcel may make the JS ecosystem much simpler and easier to use, but it is still part of that ecosystem. Simple things tend to work very simply, but if you push the limits of what Parcel is good at it can require some know-how to get back on track.

For my color theming experiment I wanted to use a couple of my favorite libraries: Tailwind CSS to apply styles to a web page, and Vue to set up data-driven templates. But it turns out the current release of Tailwind, v2.0, requires PostCSS 8. Parcel 1.x doesn’t work with PostCSS 8, so I needed to switch to a nightly build of Parcel 2, which isn’t out yet.

Parcel 2, meanwhile, doesn’t support single-file components with the current version of Vue — for those I had to upgrade to the beta of Vue 3. For my “simple” web page to hack on, I had to use pre-release, bleeding-edge versions of two JavaScript tools just to get things to work.

BTW, this is the NPM incantation to install the stack I ended up using:

npm install --save parcel@nightly vue@next \
  tailwindcss@latest postcss@latest

Now, I did have another option: stick with versions of these libraries that work together, and only use features that work with those versions. Tailwind 1.x is nice, as are Vue components that don’t live in single files. I’m the one who chose to live dangerously.

And even with the JS dependency whack-a-mole, it was (and is) nice to set up a project by just writing code and having it work. I don’t feel like I wasted an hour setting up a throwaway project, and the steps to get going with some code are simple enough to keep in my head.