Recently, April 20, 2025

Angelina caught a cold, so the past week has been largely lying low and sleeping 9+ hours a night trying not to catch it myself. Not the worst life.


Elevating this to top fish recipe: Rockfish, Garlic, Shallots, Tomatoes & a lotta Herbs.


Using ChatGPT’s Web Search is ok. “Find me articles, marketing posts, and conference talks about [something]”. I have to follow up several times slightly differently (“anything else? What about lightning talks?”) and copy resulting links into a separate doc to organize to have something approaching comprehensive…. But pretty good and better than what I can get out of either Kagi or Google. I ignore the summaries and chatty nonsense and just copy the links and read them myself. Sorry climate and future generations.


I cut ~30 seconds from my GitHub Actions build times by replacing my apt-get install step with a caching action, awalsh128/cache-apt-pkgs-action; there are a couple of options but this one had the most stars in the marketplace:

# Before
- name: "Install packages"
  run: |
    sudo apt-get -yqq update
    sudo apt-get -yqq install libvips-dev

# After
- name: "Install packages"
  uses: awalsh128/cache-apt-pkgs-action@7ca5f46d061ad9aa95863cd9b214dd48edef361d
  with:
    packages: libvips-dev
    version: 1 # cache version, change to manually invalidate cache

Turbo/Hotwire stuff: I’ve been gradually replacing more granular broadcasts like prepend/update/remove with page refresh events for their simplicity. The challenge I have is when there is a form plus refreshable content on the same page (sometimes with the form in the middle, or multiple forms). If the content refreshes, I don’t want to refresh the form. But I do want the form to refresh itself when submitted (show validation messages, reset, etc.). I can wrap the form in data-turbo-permanent for the first part, but then the form doesn’t update when it’s submitted.
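
For context, the refresh side is just a model-level broadcast; here’s a minimal sketch with turbo-rails (the Message/phone names are hypothetical):

class Message < ApplicationRecord
  belongs_to :phone

  # Broadcast a single "refresh" Turbo Stream action to the phone's stream on
  # create/update/destroy, instead of granular prepend/update/remove streams.
  broadcasts_refreshes_to ->(message) { message.phone }
end

The page subscribes with turbo_stream_from @phone and reloads (or morphs) itself on each refresh event.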

My workaround is a Stimulus controller that wraps the form and removes data-turbo-permanent when the form is submitted, inspired by this. Is there a better way to do it?

import { Controller } from "@hotwired/stimulus"

// To be used to wrap a form to allow the form to be permanent during
// Turbo Stream refresh events but to update normally when submitting the form.
// Example:
//  <div data-turbo-permanent id="<%= dom_id(@phone, :message_form) %>" data-controller="permanent-form">
//    <%= form_with ...
//  </div>

export default class extends Controller {
  connect() {
    this.submitHandler = this.submitForm.bind(this);
    this.element.addEventListener("submit", this.submitHandler);
  }

  disconnect() {
    this.element.removeEventListener("submit", this.submitHandler);
  }

  submitForm(event) {
    if (event.target.matches("form")) {
      this.element.removeAttribute("data-turbo-permanent");
    }
  }
}

I finished Spinning Silver. Now reading The Space Between Worlds.

I bought JavaScript for Rails Developers, largely because I like the posts on Rails Designer.

I started the demo for Unbeatable (“where music is illegal and you do crimes”); I like the art style, but is it fun? I dunno.


I had to go to the shipping warehouse to pick up my new mechanical keyboard because I kept missing the delivery person, but it otherwise arrived no problem.

Recently, April 14, 2025

Last week I tried out a lot of coworking spaces: Canopy, Tandem, Temescal Works. We’re trying to find a space between Oakland and SF with nice outdoor walks.


I’m having a great time being a technical cofounder to my (everything else!) cofounder. It’s fun explaining what I am doing. And we have fun shouting “Monolith!” and “Skateboard [MVP]” all day long.

An example of an explanation I gave: one of our client advocate tools is a Twilio-powered Voice Conference Bridge where we can dial in any number of participants which helps shadow and assist our clients in their welfare application journey. We wanted to add DTMF tones for dialing extensions and navigating IVR systems. Unfortunately, the Twilio API that I used initially (Create a Conference Bridge, then create a Participant Call) doesn’t support DTMF tones so I had to flip the logic to a different API (Create a Call, then add it to a Conference Bridge as a Participant). Figuring that out was a couple hours of reading docs and SDK code, feeling confident I wasn’t overlooking something, creating a runner script to bench test it, and finally putting the pieces into their production-ready places which was only like 20 lines of code at the end. That’s where the time goes.
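
The flipped version looks roughly like this; a sketch with twilio-ruby (the numbers, digits, and conference name are hypothetical):

require "twilio-ruby"

client = Twilio::REST::Client.new(ENV["TWILIO_ACCOUNT_SID"], ENV["TWILIO_AUTH_TOKEN"])

# Call-first: create the outbound call with send_digits, and have its TwiML
# place the answered call into the conference as a participant.
client.calls.create(
  from: "+15555550100",
  to: "+15555550123",
  send_digits: "wwww1234#", # each "w" waits 0.5s, then dials extension 1234
  twiml: "<Response><Dial><Conference>advocate-bridge</Conference></Dial></Response>"
)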


I had several conversations about “the AI memo”. I’ll paste the two themes I talked about, in the words I put into the Rails Perf Slack:

I don’t know what Shopify’s culture is, but I imagine the pronouncement itself could be useful, for Tobi.

As a leader, you say “everyone must… unless you get an exception from me” to learn by forcing exceptions to roll up to you directly. It’s a shitty way to learn, but power is shitty. (I mean “learn” in the very personal sense). It’s a tactic. The flip side is then as a leader you debug the need for the exceptions and that leads to a better policy.

GitHub’s CEO said something similar (internally, not published) 2 weeks before I left. I sweated it for a day, then DMed him and said “as a manager, I’m not aware of any LLM api that is approved for my use for internal admin stuff?” and he pointed me to the GitHub Models product that was totally unreferenced on any of the internal docs about staff AI tools. I poked that enablement team to add it, and I dunno if the CEO actually followed up with anyone to debug the low awareness (the story of my DM got retold at a different meeting as one about security, but it was really about my complete unawareness and its absence from any of the tool lists that were intended to be the starting place for staff to integrate AI into their work).

TLDR: in a culture of openness (safety to DM the CEO about the policy) and learning (the policy is the start, not the end, of discussion), I could see the pronouncement being catalytic.

And

I appreciate that FOMO hype (“don’t be left behind”) has been largely absent [in this Slack community], though I find it elsewhere and a huge distraction.

I think a lot in this thread could have the word “AI” replaced with “Rubymine” and it would be an equally familiar discussion between folks who use it, folks who are curious, and folks who are happy with their current code editor and wish others would stop pushing Rubymine cause it’s slow and costs money and makes developers lazy, analogously.

I share that because I don’t think it’s a new experience to be like: “both of us are producing software but our moment-to-moment experience is wildly materially different” (eg “here is my elaborate process for naming and organizing methods so I can find them later” vs “I cmd-click on it and I go there”). … and then people debate whether that difference matters or not in the end.

When I think of my own experience in The Editor Wars I think the only meaningful thing is to go pair with somebody and observe their material experience producing software in situ, fumbles and all.

I did my first Deep Research this week; it was good A1.


Week one of my startup journey and I already made a successful Rails PR with a bug fix. I didn’t think it was a big deal but it got backported too 💪


On Saturday I did what I’m trying to make my standard 10-mile hike: Stinson Beach to Muir Woods loop (Steep Ravine up, Bootjack down, Ben Johnson up, Dipsea back down). Shandy and fries at the end.

Sunday was a swim (the Bay was a balmy 57F/14C!) and the treat of a Warriors day game with Angelina’s geospatial colleagues, and dinner and ice cream and showing them all our favorite park walks.


I’m still reading Spinning Silver; it’s good and long! I have not played Witcher 3 since writing about it last time, or really anything.

Recently, April 7, 2025

  • I had my last day at old job. I got locked out of all my GitHub accounts at noon on Friday. At 2pm I did a tour of a coworking space for my new job. We’re looking at several spaces between where I live (SF) and my cofounder (Oakland). Both of us are looking forward to regularly being in the same space with a big whiteboard adjacent to somewhere nice to walk around outside.
  • I helped publish the monthly April Newsletter for the Alliance of Civic Technologists. I’ve stepped back mostly to focus on website tasks, though I’m proud that the comms stuff I previously pushed on (“what if we just regularly re-published stuff from the network without committing to a lot of other words?”) seems to have been taken up. I also feel like my involvement has been good training for my conviction of like “the reason we’re doing it this way is because I’m responsible for it.” Not that I expected to defend a five-page website with Jekyll on GitHub Pages in 2023 (when I put it together with Bill Hunt and Molly McLeod), but the only way some people know how to engage is by aggressively wondering why you didn’t do it differently.
  • I tried not to think about (new) work all weekend. Saturday we got up before 5am to volunteer at a Bay Bridge swim; we worked registration and body marking (TIL some people are immune to sharpie). We took a dip ourselves, cafe for breakfast, then farmer’s market, cleaned up at home, met friends for tea (one of whom I’m trying to recruit to work with me; so it goes now), then to the protest, then a wine bar where we picked up some more friends in civic tech, then a gallery showing for some other friends from the swim club, then scrambled eggs at home for dinner. Saturday! Sunday was more sedate of swim, cafe, walk to Trader Joes, a different wine bar where I found agreement with a neighbor that being run over by a car is one’s most likely fate in SF.
  • I got up a LinkedIn post about my job change:

    Today was my last day at GitHub. I’m really proud of the last 3 years helping build and support the Rails and Ruby developer community inside of GitHub and beyond.

    I also couldn’t pass on the new opportunity to work again on improving America’s social safety net. It’s been 3 years since I left Code for America and I’m excited about new options that have opened up with tech, telephony, and AI. I’m optimistic that we can fully close the loop in assisting, advocating, and escalating for people throughout their welfare journey and achieve significantly higher approval rates than was possible before. And do so sustainably; that’s the challenge!

    Here’s a nice write-up about what my cofounder and I are hoping to achieve.

  • I participated in totally normal global commerce by ordering a mechanical keyboard (75% Alice brown). It’s currently in Guangzhou; we shall see what happens now.
  • I finished reading Polostan. It’s better than his last… 4 books, despite containing the phrase “girls’ bottoms in riding breeches” two times too many. I started Naomi Novik’s Spinning Silver.
  • I started playing “The Witcher 3” which is neither cozy nor casual. I don’t know how many of the Witcher books I read previously because all evidence points to it being prior to 2014 when Pantheon’s new-hire perk was a Kindle. Seems like there are more books now.
  • We watched the White Lotus finale 🤷
  • On my first day as CTO, I reviewed all of our seat-based SaaS costs. $8 here, $4 there, $15 jeez 🫠 I’m already annoyed that my former employer charges for Branch Protection rules to block force-pushes on main 🙃

Recently, April 2, 2025

  • I’ve been away from work for the past week hosting family, including a 9 and 11 year old. In that week, we did: Ferry Building Farmer’s Market, Exploratorium and Tactile Dome, Alcatraz, “Dear San Francisco” at Club Fugazi, swam in the Bay and at the YMCA, rode a cable car, rode some buses, walked the Golden Gate Bridge, hiked Muir Woods, ate House of Prime Rib, Mama’s, Fish, Tailor’s Son, and Cafe de Casa. We had the kids for a night so their parents could do Napa and overnight at Indian Springs. I dropped them off at the airport yesterday and it is blessedly quiet and the cats are decompressing.
  • For the kids we opened up The Big Bag of Quest Headsets that we have accumulated because Angelina works on them. Lots of charging and battery swapping and then Beat Saber. The kids also played Threes and Tiny Wings on iPhones.
  • During downtime we watched through “Wolf King”, and I got to provide adult commentary of “do you think they are a werelord?” about everyone; I had fun.
  • I finished reading Wicked; I won’t be doing the trilogy. I reluctantly started reading Polostan; the past several Neal Stephenson books have not been my thing but I am a suffering optimist.
  • I started playing Anodyne. Please suggest casual uncomplicated metroidvanias and open-world wander-arounders.

Wide Models and Active Record custom validation contexts

This post is a brief description of a pattern I use a lot when building features in Ruby on Rails apps and that I think needed a name:

Wide Models have many attributes (columns in the database) that are updated in multiple places in the application, but not always all at once, i.e. different forms will update different subsets of attributes on the same model.

How is that not a “fat model”?

As you add more intrinsic complexity (read: features!) to your application, the goal is to spread it across a coordinated set of small, encapsulated objects (and, at a higher level, modules) just as you might spread cake batter across the bottom of a pan. Fat models are like the big clumps you get when you first pour the batter in. Refactor to break them down and spread out the logic evenly. Repeat this process and you’ll end up with a set of simple objects with well defined interfaces working together in a veritable symphony.

I dunno. I’ve seen teams take Wide Models pretty far (80+ attributes in a model) while still maintaining cohesion and developer productivity. And I’ve seen the opposite, where there is a profusion of tiny service objects and any functional change must be threaded not just through a model, view, and controller but also a form object and a decorator and several command objects, or where there is a large number of narrow models that all have to be joined or included nearly all of the time in the app—and it sucks to work with. I mean, find the right size for you and your team, but the main thrust here is that bigger doesn’t inherently mean worse.

This all came to mind while reading Paweł Świątkowski’s “On validations and the nature of commands”:

Recently I took part in a discussion about where to put validations. What jarred me was how some people inadvertently try to get all the validation in one fell swoop, even though the things they validate are clearly not one family of problems.

The post goes on to suggest differentiating between:

  • “input validation”, which I take to mean user-facing validation that is only necessary when the user is editing some fields concretely on a form in the app. Example: that an account’s email address is appropriately constructed.
  • “domain checks”, which I take to mean as more fundamental invariants/constraints of the system. Example: that an account is uniquely identified by its email address.

I didn’t entirely agree with this advice though:

In Rails world you could use dry-validation for input validations and ActiveRecord validation for domain checks. Another approach would be to heavily use form objects (input validation) and limit model validations to actual business invariants.

My disagreement is because Active Record validations have a built-in feature to selectively apply validations: Validation Contexts (the on: keyword) and specifically custom validation contexts:

You can define your own custom validation contexts for callbacks, which is useful when you want to perform validations based on specific scenarios or group certain callbacks together and run them in a specific context. A common scenario for custom contexts is when you have a multi-step form and want to perform validations per step.

I use custom validation contexts a lot. I don’t intend for this to be a tutorial on custom validation contexts, but just to give a quick example:

  • Imagine you have an Account model
  • A person can register for an account with just an email address so they can sign in with a magic link.
  • An account holder can later add a password to their account if they want to optionally sign in with a password
  • An account holder can later add a username to their account which will be displayed next to their posts and comments.

You might set up the Account model validations like this:

class Account < ApplicationRecord
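  # (email_structure and password_complexity assume custom validators, e.g.
  # EmailStructureValidator and PasswordComplexityValidator, defined elsewhere)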
  validates :email, uniqueness: true, presence: true
  # also set up uniqueness/not-null constraints in the database too
  validates :email, email_structure: true, on: [:signup_form, :update_email_form]

  validates :password, password_complexity: true, allow_blank: true
  validates :password, presence: true, password_complexity: true, on: [:add_password_form, :edit_password_form]

  validates :username, uniqueness: true, allow_blank: true
  validates :username, presence: true, on: [:add_username_form, :edit_username_form]
end

Note: it’s possible to use custom validation contexts with before_validation and after_validation callbacks, but not with others; before_save, after_commit, etc. only take the built-in contexts like on: :create.
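
Each form then opts into its context when validating or saving; a quick sketch:

account = Account.new(email: "person@example.com")

account.valid?(:signup_form)        # runs the always-on validations plus the :signup_form ones
account.save(context: :signup_form) # same, when saving

# Later, from the add-password form:
account.assign_attributes(password: "s3cret-enough")
account.save(context: :add_password_form)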

So to wrap it up: sure, maybe it can all go in the Active Record model.

Recently, March 26, 2025

  • I am on a new work adventure. I gave my notice at GitHub and will be doing this full-time starting in April. The new job should be a nice combination of a cozy “this again” and some thrilling new.
  • I finished reading Careless People; recommend as a good sequence of business trainwrecks that will leave you wondering if this one is the penultimate trainwreck (spoiler: it’s not). Now I’m reading Wicked; I didn’t really like the beginning but it’s gotten more interesting.
  • I finished Severance. Hopefully without spoilers, the consistent plot driver seems to be “Mark (yes) sucks”. So now it’s just White Lotus, with palate cleansers of Say Yes to the Dress.
  • I have been desultorily playing Bracket City; the scoring system generates no motivation for me but it’s fun to have found another use for the decades spent training my brain to parse deeply nested hierarchical syntax. I was also told that LinkedIn has games, and other than being “Faster than 95% of CEOs” at Queens, I have already lost my streak.
  • I asked on Rails Performance Slack how to better delegate Rails model association accessors and got some good ideas.
  • My RailsConf session proposal was accepted! See you there 🙌

Recently, March 16, 2025

  • We have promoted another cat to fostering: Merlin, the cat formerly known as Gray Cat.
  • I finished the latest Bruno, Chief of Police book. I read it for the food and culture, but it has some bad descriptions of hacking in this one. I started The Midnight Library, which is as close as you can imagine to a TED talk while actually being a novel. Next is Careless People, which I’m looking forward to; hopefully as exhilarating/vicariously-traumatic as Exit Interview.
  • At work the latest is that all planning must snap to 1-month objectives. “If you don’t produce a plan, someone will produce one for you” is the advice. Super proud of the work: doing Pitchfork, kicking the tires on ruby/json, adding more knobs to Active Record. My Incident Commander shift was this week too; 2pm - 8pm really destroys the possibility of pre-dinner errands. I did go to a Mustache Harbor show while on secondary Friday night and nothing bad happened (though I was still 15 minutes from home should something have happened).
  • I bought a RailsConf supporter ticket. I submitted a panel discussion talk, so even if that’s accepted, I think I’ll still need a ticket. It’s for a good cause.
  • Some new Playdate games. Echo: The Oracle’s Scroll was the only one I’ve beaten so far, despite a fiddly jumping puzzle, and the surprise ending, by which I mean I was surprised when I went to chat with an owl-person and then the credits rolled.
  • I am recovering from pink eye, again. Reinfecting yourself is a thing that can happen. The last six months or so have not been my favorite, minor ailments-wise.
  • Folks on Rails Performance Slack asked about Cursor rules, which was an opportunity for me to consolidate mine from several projects. I dunno, it’s ok.
  • After about a month, I think I’m an iPad Mini person. The screen is very not good, but I guess “the best screen is the screen you have” when said screen is bigger than a phone, but smaller than 11 inches.
  • This is the most SF press release and I can’t wait.

Addressing it directly

Lost to time in my Code for America email’s sent folder was a list of reasons why deferring to software engineers can be problematic. It included this theme, from Will Larson’s “Building personal and organizational prestige”:

In my experience, engineers confronted with a new problem often leap to creating a system to solve that problem rather than addressing it directly. I’ve found this particularly true when engineers approach a problem domain they don’t yet understand well, including building prestige.

For example, when an organization decides to invest into its engineering brand, the initial plan will often focus on project execution. It’ll include a goal for publishing frequency, ensuring content is representationally accurate across different engineering sub-domains, and how to incentivize participants to contribute. If you follow the project plan carefully, you will technically have built an engineering brand, but my experience is that it’ll be both more work and less effective than a less systematic approach.

Sometimes you just do stuff.

Flattening the curve for the safety net, five years later

It’s been 5 years since the start of the COVID-19 pandemic. From my notebook, I found a brief presentation I gave at Code for America in April, 2020 about that first month of the pandemic and the positive impact that GetCalFresh had during the initial lockdown and economic turmoil. There’s a contemporary postscript at the end too.

The idea of flattening the curve is to create time and space to build up the system capacity and avoid a catastrophic failure leading to greater social disruption and deaths.

Within the social safety net, like the healthcare system, there is a limited systemic capacity to help people. Within the social safety net, catastrophic failure is not only that people aren’t able to apply for or receive benefits because the systems to receive and process their applications are overloaded, but also that they lose trust in society and government entirely as a result.

Demand for CalFresh / SNAP / Food Stamps has massively increased over the past month. Our digital assister, GetCalFresh.org, has seen 6x the number of applicants, with a peak of over 9,000 applications per day.

The government and their contractors are beefing up the capacity of their own systems to deal with the increased volume but it’s taken them several weeks to marshal those resources.

During this time period of massive demand, these government-managed systems have suffered, leading to client-facing error messages, timeouts and service degradations.

GetCalFresh, independently operated by Code for America and funded by CDSS (California Department of Social Services) and private philanthropy, has been online, stable and accepting applications this entire time, giving CalFresh applicants a path for submitting their applications regardless of the stability or availability of the underlying government systems. GetCalFresh is able to accept and hold those applications until they can be successfully processed through the government systems, once their outage is fixed or during non-peak usage times like overnight.

GetCalFresh is a fantastic resource for Californians. And we’re seeing heavy promotion of GetCalFresh, likely because of the quality and stability of our system.

GetCalFresh is now assisting two-thirds of all statewide CalFresh applications.

And we’re maybe starting to see the government systems stabilize. Over the past 3 days we’ve observed a decrease in error rates and an increase in stability when interfacing with these government systems, which should be comparable to how applicants experience these government websites too. This implies that the government is successfully growing their capacity to address the increased volume of applicants.

GetCalFresh has been a critical resource in ensuring that people-in-need can get safety-net resources during this unprecedented pandemic and maintain trust between themselves, society, and government. 👍


Postscript (2025)

Here we are, 5 years later. From what I remember of putting this presentation together, it came out of a desperation to find a story, a meaning, in the grief and fear and exhaustion of that first month. It creates a narrative arc: that things were fucked, and through the specificity of our efforts, they became unfucked. I believe that discovering the tidy stories in what we have done is inarguably a necessary comfort. And such stories are, inarguably too, inadequate at giving certainty to what we must do next.

I’m immensely proud of what we accomplished during this time. It strengthens my conviction of what small, durable, cross-functional teams, supported by stable, well-funded organizations with long-term goals, can accomplish together. And every act and decision I see leading up to that, during the good times: every boring technology decision, every generalist full-stack hire, every retrospective and team norms and career ladder conversation… it was worth it, because we performed how we had previously practiced together: exemplary.

And what the fuck! I have to reflect on this in the contemporary context of DOGE and the gutting of 18F and USDS and everyone else and any sense of stability or generative capacity in our federal government and the trickle down it will have everywhere. My original presentation is rather bland in calling them “Government Systems” but in reality these are systems that have already been outsourced, for decades, to private enterprise. They fell over, badly. And us, some stupid nonprofit geeks playing house in silicon valley, we happened to be there to hold things together for 60 million Californians until the safety-net could be stood back up again. Whatever the fuck DOGE is doing is bad. To face the dangers of an uncertain world, we need more capacity in-house in government, not less. I am angry, still.

There’s so much more that must be done.

Ruby “Thread Contention” is simply GVL Queuing

There’s been a ton of fantastic posts from Jean Boussier recently explaining application shapes, instrumenting the GVL (Global VM Lock), and thoughts on removing the GVL. They’re great reads!

For the longest time, I’ve misunderstood the phrase “thread contention”. It’s a little embarrassing, given that I’m the author of GoodJob (👍) and a maintainer of Concurrent Ruby and have been doing Ruby and Rails stuff for more than a decade. But true.

I’ve been reading about thread contention for quite a while.

Through all of this, I perceived thread contention as contention: a struggle, a bunch of threads all elbowing each other to run and stomping all over each other in an inefficient, disagreeable, disorganized dogpile. But that’s not what happens at all!

Instead: when you have any number of threads in Ruby, each thread waits in an orderly queue to be handed the Ruby GVL, then they gently hold the GVL until they graciously give it up or it’s politely taken from them, and then the thread goes to the back of the queue, where they patiently wait again.

That’s what “thread contention” is in Ruby: in-order queuing for the GVL. It’s not that wild.

Let’s go deeper

I came to this realization when researching whether I should reduce GoodJob’s thread priority (I did). This came up after some exploration at GitHub, my day job, where we have a maintenance background thread that would occasionally blow out our performance target for a particular web request if the background thread happened to run at the same time that the web server (Unicorn) was responding to the web request.

Ruby threads are OS (operating system) threads. And OS threads are preemptive, meaning the OS is responsible for switching CPU execution among active threads. But Ruby controls its GVL. Ruby itself takes a strong role in determining which threads are active for the OS by choosing which Ruby thread to hand the GVL to and when to take it back.

(Aside: Ruby 3.3 introduced M:N threads which decouples how Ruby threads map to OS threads, but ignore that wrinkle here.)

There’s a very good C-level explanation of what happens inside the Ruby VM in The Ruby Hacking Guide. But I’ll do my best to explain briefly here:

When you create a Ruby thread (Thread.new), that thread goes into the back of a queue in the Ruby VM. The thread waits until the threads ahead of it in the queue have their chance to use the GVL.

When the thread gets to the front of the queue and gets the GVL, the thread will start running its Ruby code until it gives up the GVL. That can happen for one of two reasons:

  • When the thread goes from executing Ruby to doing IO, it releases the GVL (usually; it’s mostly considered a bug in the IO library if it doesn’t). When the thread is done with its IO operation, the thread goes to the back of the queue.
  • When the thread has been executing for longer than the length of the thread “quantum”, the Ruby VM takes back the GVL and the thread steps to the back of the queue again. The Ruby thread quantum default is 100ms (this is configurable via Thread#priority or directly as of Ruby 3.4).

That second scenario is rather interesting. When a Ruby thread starts running, the Ruby VM uses yet another background thread (at the VM level) that sleeps for 10ms (the “tick”) and then checks how long the Ruby thread has been running. If the thread has been running for longer than the quantum, the Ruby VM takes back the GVL from the active thread (“preemption”) and gives the GVL to the next thread waiting in the GVL queue. The thread that was previously executing now goes to the back of the queue. In other words: the thread quantum determines how quickly threads shuffle through the queue, and never at a finer grain than the tick.
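
Reducing a thread’s priority is what I ended up doing for GoodJob; a minimal sketch of the shape of that change (the maintenance work is hypothetical, and my understanding is that in CRuby each negative priority step halves the quantum, so -3 yields ~12.5ms slices):

# A lower-priority background thread gets a smaller quantum, so it is
# preempted sooner whenever other threads are waiting in the GVL queue.
maintenance_thread = Thread.new do
  Thread.current.priority = -3 # ~12.5ms slices instead of the default 100ms
  loop do
    serialize_and_send_metrics # hypothetical CPU-heavy maintenance work
    sleep 60
  end
end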

That’s it! That’s what happens with Ruby thread contention. It’s all very orderly, it just might take longer than expected or desired.

What’s the problem

The dreaded “Tail Latency” of multithreaded behavior can happen, related to the Ruby Thread Quantum, when you have what might otherwise be a very short request, for example:

  • A request that could be 10ms because it’s making ten 1ms calls to Memcached/Redis to fetch some cached values and then returns them (IO-bound Thread)

…but when it’s running in a thread next to:

  • A request that takes 1,000ms and largely spends its time doing string manipulation, for example a background thread that is taking a bunch of complex hashes and arrays and serializing them into a payload to send to a metrics server. Or rendering slow/big/complex views for Turbo Broadcasts (CPU-bound Thread)

In this scenario, the CPU-bound thread will be very greedy with holding the GVL and it will look like this:

  1. IO-bound Thread: Starts 1ms network request and releases GVL
  2. CPU-bound Thread: Does 100ms of work on the CPU before the GVL is taken back
  3. IO-bound Thread: Gets GVL again and starts next 1ms network request and releases GVL
  4. CPU-bound Thread: Does 100ms of work on the CPU before the GVL is taken back
  5. Repeat … 8 more times…
  6. Now 1,000 ms later, the IO-bound Thread, which ideally would have taken 10ms is finally done. That’s not good!

That’s the worst case in this simple scenario with only two threads. With more threads of different workloads, you have the potential for even more of a problem. Ivo Anjo wrote about this too. You could speed this up by lowering the overall thread quantum, or by reducing the priority of the CPU-bound thread (which lowers its quantum). That would slice the CPU-bound work more finely, but because the minimum slice is governed by the tick (10ms), the IO-bound request’s worst case would still be ~100ms; 10x more than optimal.
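
Here’s a toy reproduction of the two-thread scenario, if you want to watch the queuing yourself (a sketch; exact numbers will vary by machine, with sleep standing in for network IO):

require "benchmark"

# CPU-bound thread: pure Ruby work that never voluntarily releases the GVL;
# it only gives it up when the VM preempts it at the quantum (default 100ms).
cpu_bound = Thread.new do
  loop { 10_000.times { |i| i * i } }
end

# The main thread plays the IO-bound request: ten short sleeps standing in
# for ten 1ms network calls. Each sleep releases the GVL, but after each one
# the thread waits in the queue behind the CPU-bound thread's quantum.
elapsed = Benchmark.realtime do
  10.times { sleep 0.001 }
end
puts "IO-bound work took #{(elapsed * 1000).round}ms (ideal: ~10ms)"

cpu_bound.kill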

