Centering Civic Tech

From Cyd Harrell’s excellent “A Civic Technologist’s Practice Guide”, via a Twitter convo, reformatted by me:

Because its goal is change, civic tech embodies an interesting split between demonstrating and operationalizing the potential of modern tech. I like to call these two branches showing what’s possible and doing what’s necessary. Many projects are a mix of the two, but they require different mindsets.

“Showing what’s possible” is about speed, prototyping, design, public feedback, and data. These are often web projects because web tools are great for those purposes.

“Doing what’s necessary,” on the other hand, is about shifting the underlying practices and systems: back-end systems, security, and procurement; hiring and team composition; even shifting budget priorities.


But our job as civic technologists isn’t to be the hero of the stories we stumble into halfway through; it’s to understand and support the people who have already been in place doing the work, and who want to use tech to make improvements.

They line up with Code for America’s pillars of “Show what’s possible”, “Help people do it themselves” and “Build a movement” (though the latter is rather more grandiose than understand and support).

As we called these in my previous career: Direct Service, Capacity Building, and Roy Johnson (“Put some gratitude in your attitude!”).

GoodJob v1.3: Web dashboard and full documentation

GoodJob version 1.3 is released. GoodJob is a multithreaded, Postgres-based, ActiveJob backend for Ruby on Rails. If you’re new to GoodJob, read the introductory blog post.

GoodJob’s v1.3 release adds a mountable Web Dashboard, improved README documentation, and complete code-level YARD documentation.

Version 1.3’s release comes five weeks after v1.2 and three months after GoodJob’s initial v1.0 release.

Shoutouts 🙌

GoodJob has accepted contributions from 9 people total, and currently has 559 stars on Github. The project just passed 150 combined Issues and Pull Requests.

I’m grateful for everyone who has reached out to me on Ruby on Rails Link Slack (@bensheldon) and Twitter (@bensheldon).


Mountable web dashboard

GoodJob v1.3 adds a web dashboard for exploring and visualizing job status and queue health.

GoodJob Dashboard MVP

The web dashboard is implemented as an optional Rails::Engine and includes charts and lists of pending jobs.

I expect the web dashboard to be a hot area of ongoing improvement. This initial release contains a minimum functional interface, a chart, and some necessities like keyset pagination. The dashboard is built with familiar tools (Rails controllers, ERB views, and ActiveRecord), and I’ve adopted Bootstrap CSS and Chartist to ease and speed development.
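
Because it’s a Rails engine, mounting the dashboard is one line in your routes file (the mount path below is an example; pick whatever path suits your application):

```ruby
# config/routes.rb — mount the GoodJob dashboard engine.
# The "good_job" path is an example; choose any path you like.
Rails.application.routes.draw do
  mount GoodJob::Engine => "good_job"
end
```

In production you would likely wrap the mount in an authentication constraint so the dashboard isn’t publicly visible.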

Improved documentation

GoodJob’s README has been edited and rewritten for clarity and comprehensiveness, and GoodJob’s implementation code is now thoroughly documented with YARD.

Good documentation is vital for open source projects like GoodJob. I worked with Rob Brackett, who consults on complex software and open source challenges.


In the next release, v1.4, I plan to continue adding views and charts to the web dashboard and improving thread management.


Code, documentation, and curiosity-based contributions are welcome! Check out the GoodJob Backlog, comment on or open a Github Issue, or make a Pull Request.

I also have a GitHub Sponsors Profile if you’re able to support GoodJob and me monetarily. It helps me stay in touch and send you project updates too.

Performance facilitators not supervisors

Doublespeak from ProPublica’s “Meet the Customer Service Reps for Disney and Airbnb Who Have to Pay to Talk to You”, via Pluralistic’s ongoing chickenization coverage:

Arise carefully monitors the language agents use to reinforce that it does not have an employment relationship with them. Stung by lawsuits that claimed Arise had actually employed agents but didn’t pay them fairly, Arise’s legal department has become a kind of word police, one former staffer told ProPublica.

“You don’t schedule ‘hours,’ you schedule ‘intervals,’” the former staffer said. Agents were not to be addressed as “you,” but “your business.” They were not “working,” they were “servicing.” There were no “supervisors,” only “performance facilitators.” Agents were not “coached.” Rather, their services were “enhanced.”

Once, an Arise manager, testifying in an arbitration hearing, was asked about meetings that performance facilitators have with agents. “They’re not meetings,” he said. “They’re informational sessions, or hosts.”

In an internal announcement in 2012, Arise listed “new terminology” for eight terms to avoid “the misconception” that agents are Arise employees. The corporate link between Arise and the agents went from being called Virtual Services Corporations to Independent Businesses. Service Fees became Service Revenue. Central Operations became Support Operations.

Arise seems particularly unable to settle on a term for the agents. Early on, the company called each a CyberAgent. Later came Arise Certified Professional. In 2012, that was changed to Client Support Professional. Nowadays, Arise’s website calls agents “onshore brand advocates or Service Partners.”

“Arise-speak,” as one opposing attorney called it in legal proceedings, could be a wonder to behold. Client Support Professionals (CSPs) would work with Quality Assurance Performance Facilitators (QAPFs) in a Performance Enhancement Session (PES), or they might reach out to Chat Performance Facilitators or Escalation Performance Facilitators, and none would be an Arise employee, all would be independent contractors.

And a disturbing exchange:

Rice said he worked out of his bedroom, in his mother’s home, helping customers for Arise’s clients, including Barnes & Noble, Disney and Sears. While testifying, Rice referred to performance facilitators in the Arise network as supervisors. This elicited a challenge from a lawyer for Arise.

“Where’d you get that term from, supervisor?” the lawyer asked.

“Growing up in America,” Rice said. “That’s the term people use for people that are above you.”

“… You never referred to them as supervisors while you were providing services, did you?”

“Well, yeah,” Rice said.

“You did? To who?”

“Well, obviously I’m on the phone with a customer, I’m not going to say, ‘OK, let me go check my chat performance facilitator.’ Usually I just said, you know, ‘Let me just talk to my supervisor.’”

Respect our vendors

Costco Values

“Respect our vendors” is foreign enough to me in software engineering that I took this picture of Costco’s values. The opposite of respect, perhaps “contempt for vendors and tools,” seems endemic.

At one job, memorably, a coworker was fired over it. Our engineering team had a shared email list used when setting up root accounts on various 3rd-party services, including our primary infrastructure vendor, for whom we were one of the largest customers.

The infrastructure vendor sent a Net-Promoter Score-like survey to our shared email list. Receiving this kind of marketing junk was frequent enough that I ignored it, but my colleague filled it out:

On a scale of 1-10, how likely are you to recommend our service?: “1”

Why?: “I hate you.”

This survey response led to the vendor’s account executive making a frantic and fearful call to our leadership team. The blowback of that led to my coworker’s termination. (This was not my colleague’s first warning; my team and I were also targets of their trolling and bullying.)

This is a funny story to reflect on. And it’s terribly toxic how many things engineers will despise, wear on their sleeves, and eagerly share at the slightest opportunity, myself included.

GoodJob v1.2: Multithreaded queue isolation and LISTEN/NOTIFY

GoodJob version 1.2 has been released. GoodJob is a multithreaded, Postgres-based, ActiveJob backend for Ruby on Rails. If you’re new to GoodJob, read the introductory blog post.

GoodJob’s v1.2 release adds multithreaded queue isolation for easier congestion management, and usage of Postgres LISTEN/NOTIFY to greatly reduce queue latency.

Version 1.2 comes out 2 weeks after GoodJob v1.1, and 5 weeks after GoodJob’s initial v1.0 release.

Multithreaded queue isolation

GoodJob v1.2 adds multithreaded queue isolation for easier congestion management. Queue isolation ensures that slow, long-running jobs do not block the execution of higher priority jobs.

Achieving queue isolation has always been possible by running multiple processes, but GoodJob v1.2 makes it easy to configure multiple isolated thread-pools within a single process.

For example, to create a pool of 2 threads working from the mice queue, and 1 thread working from the elephants queue:

$ bundle exec good_job --queues="mice:2;elephants:1"

Or via an environment variable:

$ GOOD_JOB_QUEUES="mice:2;elephants:1" bundle exec good_job

Additional examples and syntax:

  • --queues=*:2;mice,sparrows:1 will create two thread-pools, one running jobs on any queue, and another dedicated to mice and sparrows queued jobs.
  • --queues=-elephants,whales:2;elephants,whales:1 will create two thread-pools, one running jobs from any queue except the elephants or whales, and another dedicated to elephants and whales queued jobs.
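
To make the syntax concrete, here is a simplified sketch of how such a queue string decomposes into thread-pools (an illustration only, not GoodJob’s actual parser):

```ruby
# Simplified sketch (not GoodJob's actual implementation):
# each semicolon-delimited segment describes one thread-pool,
# as "comma,separated,queues:max_threads".
def parse_queues(string)
  string.split(";").map do |segment|
    queues, threads = segment.split(":")
    {
      queues: queues.split(","),
      max_threads: (threads || 1).to_i,
    }
  end
end

parse_queues("mice:2;elephants:1")
# => [{ queues: ["mice"], max_threads: 2 },
#     { queues: ["elephants"], max_threads: 1 }]
```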


GoodJob now uses Postgres LISTEN/NOTIFY to push newly enqueued jobs for immediate execution. LISTEN/NOTIFY greatly reduces queue latency, the time between when a job is enqueued and execution begins.

LISTEN/NOTIFY works alongside GoodJob’s polling mechanism. Together, jobs queued for immediate execution (ExampleJob.perform_later) are executed immediately, while future scheduled jobs (ExampleJob.set(wait: 1.hour).perform_later) are executed at (or near) their set time.
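
The effect of push-based wakeups can be sketched in plain Ruby (a toy stand-in, not GoodJob’s implementation — the real mechanism is a Postgres NOTIFY on enqueue and a LISTENing background thread): a blocking queue wakes the worker the moment a job arrives, rather than on the next poll tick.

```ruby
# Toy push-based dispatch: a blocking Queue stands in for LISTEN/NOTIFY.
# The worker thread wakes the moment a job is "notified" instead of
# sleeping until the next polling interval elapses.
class ToyDispatcher
  def initialize
    @inbox = Queue.new # stands in for the LISTEN channel
    @performed = []
    @worker = Thread.new do
      while (job = @inbox.pop) # blocks until a notification arrives
        @performed << "performed #{job}"
      end
    end
  end

  def enqueue(job) # stands in for INSERT + NOTIFY
    @inbox << job
  end

  def shutdown
    @inbox << nil # nil wakes the worker and ends its loop
    @worker.join
    @performed
  end
end
```

A real implementation still needs polling as a fallback, as GoodJob retains, because NOTIFY messages are not durable: a process that wasn’t listening at the moment of enqueue would otherwise miss the job.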


In the next release, v1.3, I plan to include a simple web dashboard for inspecting job execution performance, and focus on improving GoodJob’s documentation.


Code, documentation, and curiosity-based contributions are welcome! Check out the GoodJob Backlog, comment on or open a Github Issue, or make a Pull Request.

I’ve also set up a GitHub Sponsors Profile if you’re able to support me and GoodJob monetarily. It helps me stay in touch and send you project updates too.

GoodJob v1.1: async and improved documentation

GoodJob version 1.1 has been released. GoodJob is a multithreaded, Postgres-based, ActiveJob backend for Ruby on Rails. If you’re new to GoodJob, read the introductory blog post.

GoodJob’s v1.1 release contains a new, economical execution mode called “async” to execute jobs within the webserver process with the same reliability as a separate job worker process.

This release also contains more in-depth documentation based on feedback and questions I’ve received since the v1.0 release.

Version 1.1 comes out 3 weeks after GoodJob v1.0. The initial release of GoodJob was featured on Ruby Weekly, A Fresh Cup, Awesome Ruby, and was as high as #8 on Hacker News. GoodJob has since received nearly 500 stars on Github.

Async mode

In addition to the $ good_job executable, GoodJob can now execute jobs inside the webserver process itself. For light workloads and simple applications, combining web and worker into a single process is very economical, especially when running on Heroku’s free or hobby plans.

GoodJob’s async execution is compatible with Puma, in multithreaded (RAILS_MAX_THREADS), multi-process (WEB_CONCURRENCY), and memory efficient preload_app! configurations. GoodJob is built with Concurrent Ruby which offers excellent thread and process-forking safety guarantees. Read the GoodJob async documentation for more details.
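
Opting in is a configuration change; one way is to instantiate the adapter directly (a sketch — check the GoodJob README for the authoritative, current interface):

```ruby
# config/environments/production.rb — illustrative configuration;
# see the GoodJob README for the current options and defaults.
config.active_job.queue_adapter = GoodJob::Adapter.new(execution_mode: :async)
```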

On a personal level, I’m very excited to have this feature in GoodJob. Async execution was the compelling reason I had previously adopted Que, another Postgres-based backend, in multiple projects and I was heartbroken when Que dropped support for async execution.

Improved documentation

Since GoodJob was released 3 weeks ago, the documentation has been significantly expanded. It contains more code and examples for ensuring reliability and handling job errors. I’ve had dozens of people ask questions through Github Issues and Ruby on Rails Link Slack.


In the next release, v1.2, I plan to simplify the creation of multiple dedicated threadpools within a single process. The goal is an economical solution to congestion: when a number of slow, low-priority jobs (elephants) are executing, no execution resources remain for newly enqueued fast, high-priority jobs (mice) until the currently executing elephants complete.

A proposed configuration, for example:

$ bundle exec good_job --queues="mice:2;elephants:4"

…would allocate 2 dedicated threads for jobs enqueued on the mice queue, and 4 threads for the elephants queue. Learn more in the feature’s Github Issue.


GoodJob continues to be enjoyable to develop and build upon Rails’ ActiveJob and Concurrent Ruby. Contributions are welcomed: check out the GoodJob Backlog, comment on or open a Github Issue, or make a Pull Request.

Introducing GoodJob 1.0, a new Postgres-based, multithreaded, ActiveJob backend for Ruby on Rails

GoodJob is a new Postgres-based, multithreaded, second-generation ActiveJob backend for Ruby on Rails.

Inspired by Delayed::Job and Que, GoodJob is designed for maximum compatibility with Ruby on Rails, ActiveJob, and Postgres to be simple and performant for most workloads.

  • Designed for ActiveJob. Complete support for async, queues, delays, priorities, timeouts, and retries with near-zero configuration.
  • Built for Rails. Fully adopts Ruby on Rails threading and code execution guidelines with Concurrent::Ruby.
  • Backed by Postgres. Relies upon Postgres integrity and session-level Advisory Locks to provide run-once safety and stay within the limits of schema.rb.
  • For most workloads. Targets full-stack teams, economy-minded solo developers, and applications that enqueue less than 1-million jobs/day.

Visit Github for instructions on adding GoodJob to your Rails application, or read on for the story behind GoodJob.

A “Second-generation” ActiveJob backend

Why “second-generation”*? GoodJob is designed from the beginning to be an ActiveJob backend in a conventional Ruby on Rails application.

First-generation ActiveJob backends, like Delayed::Job and Que, all predate ActiveJob and support non-Rails applications. First-generation ActiveJob backends are significantly more complex than GoodJob because they separately maintain a lot of functionality that comes with a conventional Rails installation (ActiveRecord, ActiveSupport, Concurrent::Ruby) and re-implement job lifecycle hooks so they can work apart from ActiveJob. I’ve observed that this can make them slow to keep up with major Rails changes. An impetus for GoodJob was reviewing the number of outages, blocked upgrades, and forks of first-generation backends I’ve managed during both major and minor Rails upgrades over the years.

As a second-generation ActiveJob backend, GoodJob can draft off of all the advances and solved problems of ActiveJob and Ruby on Rails. For example, rescue_from, retry_on, and discard_on are all already implemented by ActiveJob.
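
For instance, a job class gets retry and discard behavior straight from ActiveJob, with no backend-specific code (an illustrative sketch: SyncOrdersJob and ApiError are hypothetical names, not from GoodJob):

```ruby
# Illustrative sketch — SyncOrdersJob and ApiError are hypothetical.
# Both retry_on and discard_on come from ActiveJob itself, so any
# ActiveJob backend (including GoodJob) gets them for free.
class SyncOrdersJob < ApplicationJob
  queue_as :default

  retry_on ApiError, wait: :exponentially_longer, attempts: 5
  discard_on ActiveJob::DeserializationError

  def perform(order_id)
    # ...fetch and sync the order...
  end
end
```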

GoodJob is significantly thinner than first-generation backends, and over the long run hopefully easier to maintain and keep up with changes to Ruby on Rails. For example, GoodJob is currently ~600 lines of code, whereas Que is ~1,200 lines, and Delayed::Job is ~2,300 lines (2,000 for delayed_job, and an additional 300 for delayed_job_active_record).

*“Second generation” was coined for me by Daniel Lopez on Ruby on Rails Link Slack.


I love Postgres. Postgres offers a lot of features, has safety and integrity guarantees, and simply running fewer services (skipping Redis) means less complexity in development and production.

GoodJob builds atop ActiveRecord. It’s numbingly boring, in a good way.

GoodJob uses session-level Advisory Locks to provide run-once guarantees with little performance impact for most workloads.

GoodJob’s session-level Advisory Lock implementation is perhaps the only “novel” aspect; it comes from my experience orchestrating complex web-driving of government systems (“the browser is the API”) for Code for America. GoodJob uses a Common Table Expression (CTE) to find, lock, and return the next workable job in a single query. Session-level Advisory Locks are gracefully relinquished if the session is interrupted, without having to hold a transaction open for the duration of the job.
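
The shape of such a query might look like the following (an illustrative sketch only — this is not GoodJob’s exact SQL, and the table and column names are assumptions):

```ruby
# Illustrative only — not GoodJob's exact SQL. A CTE orders candidate
# jobs, and pg_try_advisory_lock takes a session-level lock on the
# first candidate not already locked by another session, so the outer
# query returns at most one workable, newly locked job.
NEXT_JOB_SQL = <<~SQL
  WITH candidates AS (
    SELECT id
    FROM good_jobs
    WHERE scheduled_at IS NULL OR scheduled_at <= now()
    ORDER BY priority DESC NULLS LAST
  )
  SELECT *
  FROM good_jobs
  WHERE id = (
    SELECT id
    FROM candidates
    WHERE pg_try_advisory_lock(hashtext(id::text))
    LIMIT 1
  )
SQL
```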


GoodJob uses Concurrent::Ruby to scale and manage jobs across multiple threads. “Concurrent Ruby makes one of the strongest thread-safety guarantees of any Ruby concurrency library.” Ruby on Rails has adopted Concurrent Ruby, and GoodJob follows its lead on thread-execution and safety guidelines.

In building GoodJob I leaned heavily on my positive experiences running Que, another multithreaded backend, on Heroku. Threads are great for balancing simplicity, economy, and performance for typical IO-bound workloads like heavy database queries, API requests, Selenium web-driving, or sending emails.

A feature that won’t be in GoodJob 1.0, but I hope to implement soon, is the ability to run the GoodJob scheduler inside the webserver process (“async mode”). This was a feature withdrawn from Que, but I believe it can be safely implemented with Concurrent Ruby. An async mode would offer even greater economy, for example, in Heroku’s constrained environment.

GoodJob is right for me

GoodJob’s design is based directly on my experience on 2-pizza, full-stack teams and as an economy-minded solo developer. GoodJob already powers Day of the Shirt and Brompt, performing tens of thousands of real-world jobs a day.

Is GoodJob right for you?

Try it out and let me know.

Retail politics

I will quote anything that reinforces the necessity of showing up. From SF Weekly’s “The Many Faces of Leland Yee: A Politician’s Calculated Rise and Dramatic Fall” :

Upon reflection, Yee’s principles may be ever-shifting and his policies may be decorative, but he found a way around this: by being omnipresent.

He knew the name of every neighborhood stalwart from every neighborhood club; he cleaned hundreds of plates at hundreds of Chinatown banquets; he sat through countless community meetings, gathering hundreds of converts at a time: “In local politics,” says one longtime player, “a cup of coffee and a handshake can win you a friend for life.”

Yee showed up at your kid’s bar mitzvah or high school graduation; he showed up at your community gathering; he showed up at your neighborhood bazaar — in short, he showed up. His staff returned your phone call. And he read your letters: A former associate says Yee never failed to leave the office at the end of a long day toting a thick stack of mail that he made a point of poring through. In insider jargon, this is known as “retail politics.” Few worked harder or did it better.

Engineering Operations is not the same as Development

I wrote this memo several years ago when I joined GetCalFresh as the first outside engineering hire. An early focus of mine was helping the team move more confidently into an operational mindset: this memo reframes the team’s existing values that drive development as values that also support operations. It also overlaps greatly with a talk I gave at Code for America’s 2018 Summit, “Keeping Users at the Forefront While Scaling Services”.

Over the past month GetCalFresh has tripled the number of food stamp applications we’re processing. We often talk about “build the right thing”, but I wanted to focus on what it means to “operate a thing safely”.

Understanding operational failure

GetCalFresh collects foodstamp applicants’ information via a series of webforms, and then submits that applicant information to the county to begin the foodstamp eligibility process.

The website and webforms being offline or unavailable is bad.

Failing to submit application information to the county in a timely manner is awful. Foodstamp benefits are prorated to the day the client’s application arrives at the county, provided it arrives before 5pm. Failing to deliver a client’s application in a timely manner literally means less food on the table for a hungry family.

Our system is operationally “safe” when it ensures that client information is transmitted to the county in a timely manner. Our system experiences an operational “failure” when information is not submitted in a timely manner. Our system carries operational “risk” wherever safety is degraded and the potential for an operational failure exists.

Risks in complicated, complex and chaotic systems

Keeping a website online is complicated, but can be addressed with good practice. We use boring technologies (Ruby on Rails, SQL, AWS) that scale and respond predictably and are part of a mature ecosystem of monitoring tools and practices.

Submitting client information to the county is complex and sometimes chaotic. Because county systems often have no API, we have a queue of jobworkers that use Selenium Webdriver to click through and type into a “virtualized” headless Firefox browser. Automating this leads to emergent and novel problems. Client data must be transformed into a series of scripted actions to be performed across multiple county webpages, with dynamic forms and data fields. The county websites may be offline or degraded, and occasionally their structure and content change. Additional client documents may need to be faxed, emailed, or uploaded to the county, and those systems can be degraded as well.

Our applicants themselves can create operational risks. As we target new populations and demographics (e.g., seniors, students, military families, homeless, low-literacy, or non-English-speaking applicants), we discover new usability issues and challenges in collecting and transforming data from our webforms into county systems. For example, different county systems have different optional and required fields and expect names and addresses to be sanitized and tokenized differently.
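
As a toy illustration of the tokenization problem (not GetCalFresh code — real county field names and rules varied), the same mailing address may need to be split differently depending on whether it is a PO Box or a street address:

```ruby
# Toy illustration (not GetCalFresh code): a PO Box splits into its own
# fields, while a street address splits into number + street name.
def tokenize_address(address)
  if (po_box = address.match(/\bP\.?O\.?\s*Box\s*(\d+)/i))
    { street_number: nil, street_name: "PO BOX", unit: po_box[1] }
  else
    number, *rest = address.split
    { street_number: number, street_name: rest.join(" "), unit: nil }
  end
end

tokenize_address("PO Box 4120")
# => { street_number: nil, street_name: "PO BOX", unit: "4120" }
tokenize_address("123 Main St")
# => { street_number: "123", street_name: "Main St", unit: nil }
```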

We cannot reliably (or affordably, given time and resources) predict how this system will respond as it scales to new users or integrates with new counties.

Creating safety with staff and time

We ensure that foodstamp applications are submitted in a timely manner through existing staff and dedicated time. Because we cannot reliably predict how our system scales or responds to changes, we have systems that alert us to the risk of operational failure and engineers who are available to respond, remediate, and harden against similar circumstances in the future.

Every day, engineers block out 4pm to 5pm as “Apps & Docs”. We use this time to review any food stamp applications that failed our automated submission process to ensure the applications are submitted to the county by the daily deadline. Problems are documented and potential improvements are added to or reprioritized within the team’s backlog. We create safety by sometimes reaching out to clients for clarification or correction. In the event of an operational failure (we are not able to submit their application that day), we try to make things right; sometimes offering a gift card the client can use to purchase food.

Examples of problems identified during our hour of Apps & Docs:

  • Services not allowing multiple parallel sessions using the same credentials
  • Inconsistent address tokenization for college campuses, military bases, PO boxes, and Private Mail Boxes
  • The frequency of people uploading iexplore.exe instead of their intended document
  • Forms that do not allow non-ASCII characters
  • Forever optimizing headless Firefox, writing flexible and reliable Selenium scripts, and managing an increasing fleet of specialized jobworkers

Trade operational risk for speed of learning

We can’t predict the exact operational issues we’ll experience during a given day, but by scheduling and protecting one hour per day for operational tasks, we can deliberately trade risk for flexibility. Flexibility comes because we can accept small risks by introducing incomplete or manual-intervention-required workflows into the system. We do not have to build for every edge case or automate every action. We can develop features faster and create more opportunities to learn with real users in a real operational environment. This is an operationalization of our engineering principle “don’t argue, ship”.


  • Define operational failure: Leaving failure ambiguous can lead to fire-drills on every bad experience and exception, even if they may not have a material impact on business process or metrics. Defining service level objectives helps everyone self-organize, prioritize and understand the impact of their work.
  • Operationalize operations: Unexpected things happen all the time, but merely saying “high priority interrupt” does not expose the actual cost of response and remediation. Blocking out explicit times and spaces helps measure, and thus manage, work that might otherwise be overlooked.
  • Protect developers’ time only so much: “Any improvement not made at the constraint is an illusion.” Approaching automation as an iterative and forever-incomplete process enables our team to move quickly in optimizing the system as a whole. When manual remediation is at risk of overflowing our time block, we dedicate time to greater automation; when we perceive sufficient tolerance, we can push product features faster by manually handling edge cases.
  • Operations is a practice: Product Design and Development principles and practice provide a strong foundation and an experienced team can greatly reduce the risk of technical and market failure… but they can’t eliminate it. Operations is a field and practice that can reinforce and elevate Product Design and Development.

Decade in Review 2010-2019

In loose categories and no particular order, other than that I think they warrant mentioning.


  • Communications and mental health. Two things that greatly influenced me were reading Nonviolent Communication and doing Mood Gym.
  • Inclusion (continuation). Compared to last decade I’ve practiced in larger groups and communities, from workplace to church. Two books that stick with me are White Fragility and Dear Church: A Love Letter From a Black Preacher to the Whitest Denomination in the US.
  • Business. I incorporated my own business, Day of the Shirt, for which I’ve been filing taxes, hiring contractors, and businessing since 2011.
  • Fiction. Malazan Book of the Fallen. Jemisin’s Inheritance and Broken Earth trilogies. Up to book 26 of The Cat Who…. Remembrance of Earth’s Past trilogy. The Dark Tower series. And the entirety of Discworld.
  • Many deaths. Dottie Stephens. Many folks from Church: Dale, Clifton, Sam, Kirsten.
  • Affluence and finance. The move to software engineering has produced a four-fold increase in my income, as well as the matters of founder’s stock, options, shares, RSUs, etc. We bought a new car.


  • Marriage. Angelina and I got married in 2014 in San Francisco. We’ve also been together for the entirety of the decade.
  • Membership organizations (continuation). I became a member of St. Francis Lutheran Church, the South End Rowing Club, Golden Gate Angling and Casting Club, and numerous museums.
  • Cat changes. We lost Jose Pierpont, but gained Sally Ride and Billie Jean King.
  • Extended family. Living near a lot of extended family has been a new experience and we’ve gained many new nephews and nieces around the country.
  • Spending time together. This decade has been marked by a ramping up of weekend trips and travel, from Calistoga to Australia.


  • San Francisco. 8 years of this decade have been spent in San Francisco, longer even than my time in Boston, where I spent the majority of the noughties.
  • Transition from community-based work to software/tech. Shutting down the Transmission Project and Digital Arts Service Corps was hard. Software/Tech is fine.
  • I have great appreciation for friends and colleagues who have introduced me to the body of work on ergonomics in software development: for example, DevOps, Extreme Programming, TDD, and Christopher Alexander.
  • Facilitation, Coaching, and Sponsorship (continuation). Still doing it.