If you replaced “accessibility” with “responsive design” in 2018, this is what you’d get

I took all of the conversations and experiences I’ve had over the years advocating accessibility features, and imagined them applied to responsive design.


Imagine a product organization where no one has a smartphone or knows how to resize their browser.

A forward-thinking engineer says “Hey folks, I know everyone is busy but I was playing around with this browser draggy thing and I dunno but I think our website isn’t very good on smartphone. Can I help?”

Their manager replies “That’s the kind of initiative we love. Let’s put together a plan.”

Here’s what their plan for “responsive” looks like

  • Summarize industry standards for responsive design. It’s a “growing field”.
  • Do demographic research: What are common viewports? How many people have smartphones?
  • Standardize on CSS rules for projects: Every CSS file must have at least 1 media query. All container elements must have at least 2 breakpoints. No fixed-width element can be wider than 320px. All buttons must be larger than 16 x 16px.
  • Write some automated tests and QA tools to ensure those rules are followed.
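The last two bullets might start out as something like this Python sketch of a rule checker (the rule thresholds come from the plan above, but the function name and the regex-based approach are purely illustrative; a real tool would use a proper CSS parser or a linter like stylelint):

```python
import re

def check_css(css: str) -> list[str]:
    """Return a list of violations of the team's hypothetical CSS rules."""
    violations = []
    # Rule: every CSS file must have at least one media query.
    if not re.search(r"@media\b", css):
        violations.append("no media query found")
    # Rule: no fixed-width element wider than 320px.
    for match in re.finditer(r"width:\s*(\d+)px", css):
        if int(match.group(1)) > 320:
            violations.append(f"fixed width {match.group(1)}px exceeds 320px")
    return violations
```

Wired into CI, a script like this fails the build when a stylesheet breaks the standard, which is exactly the kind of measurable, automatable rule the plan calls for.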

That’s a good plan, right?

The plan is standards based, actionable, measurable, and integrated into the development process. And engineering is championing it. That’s great!

So things move along and then…

  • New customers are increasingly focused on “responsive best practices” and the company isn’t sure where they stand in the competitive landscape.
  • An important stakeholder got a smartphone and begins poking the CEO about some weirdness.
  • Product managers and designers grumble about deadlines and bolted-on changes when developer and QA questions get kicked back upstream.
  • That forward-thinking engineer occasionally sighs and pushes through odd-looking code and CSS changes and it’s just kinda ok fine, but isn’t sure how it’ll fit into their next performance review.

So the company decides to hire you.

Imagine you’re hired to fix this

What do you say? (Remember, it’s 2018.)

“Let’s get some goddamn smartphones. And let’s train our people how to resize their browsers. And then we’ll talk about our usability testing practices.”

Time to get to work! But there are still some objections to overcome:

  • “Getting ‘good’ at resizing browsers is gonna take a lot of time and training. Is it really that important?”
  • “Not all designers will want to specialize in responsive. Can we really make them?”
  • “We have a lot of other product needs too. How do we prioritize ‘responsive’ against the other business and UX problems we already know about and need to fix?”
  • “Airdropping some smartphones won’t make us native users. We can’t really understand a smartphone user’s experience enough to make it perfect. Our company just isn’t the culture for smartphone people anyway.”
  • “Making existing projects perfectly responsive will be like a total rewrite. Maybe next time we could really bake it in from the beginning.”

This is tough, right? It’s a slow and steady push. It’s amazing anyone manages to do responsive at all.

Yet here we are for accessibility

Over and over again I see organizations fall into an approach that places accessibility as an implementation detail to be addressed at the end of the product development process.

Because so many accessibility errors relating to assistive technologies are markup errors, and because markup errors are so easy to identify, we’ve grown up in an accessibility remediation culture that is assistive technology obsessed and focused on discrete code errors. 

Inclusive design has a different take. It acknowledges the importance of markup in making semantic, robust interfaces, but it is the user’s ability to actually get things done that it makes the object. The quality of the markup is measured by what it can offer in terms of UX.

Inclusive Design Patterns by Heydon Pickering

When talking to people, I’ve found the responsive-design analogy helps to reframe their approach to accessibility and inclusion. I follow up with technical recommendations, but it opens a door to having broader impact on product and UX practices.

Like responsive and mobile-first design, integrating accessibility and inclusion into the entire product development process offers another powerful opportunity and perspective to distill user needs, focus product value and intent, and yes, verify the implementation’s delivery.

What challenges have you seen in producing organizational and process change around accessibility and inclusion? How are you overcoming them? I’d love to hear from you and continue sharing what I’ve learned.

To test god

From Steven Erikson’s “Toll the Hounds”:

“I tried to tell him what I am sensing from the Redeemer. Sir, your friend is missed.” She sighed, turning away. “If all who worship did so without need. If all came to their saviour unmindful of that title and its burden, if they came as friends—” she glanced back at him, “what would happen then, do you think? I wonder…”

[…much later…]

Seerdomin glared at the god, who now offered a faint smile. After a moment, Seerdomin hissed and stepped back. “You ask this of me? Are you mad? I am not one of your pilgrims! Not one of your mob of would-be priests and priestesses! I do not worship you!

“Precisely, [Seerdomin]. It is the curse of believers that they seek to second-guess the one they claim to worship.”

“In your silence what choice do they have?”

The Redeemer’s smile broadened. “Every choice in the world, my friend.”

From Dan Simmons’ The Fall of Hyperion:

With a sudden clarity which went beyond the immediacy of his pain or sorrow, Sol Weintraub suddenly understood perfectly why Abraham had agreed to sacrifice Isaac, his son, when the Lord commanded him to do so.

It was not obedience.

It was not even to put the love of God above the love of his son.

Abraham was testing God.

By denying the sacrifice at the last moment, by stopping the knife, God had earned the right—in Abraham’s eyes and the hearts of his offspring — to become the God of Abraham.

Sol shuddered as he thought of how no posturing on Abraham’s part, no shamming of his willingness to sacrifice the boy, could have served to forge that bond between greater power and humankind. Abraham had to know in his own heart that he would kill his son. The Deity, whatever form it then took, had to know Abraham’s determination, had to feel that sorrow and commitment to destroy what was to Abraham the most precious thing in the universe.

Abraham came not to sacrifice, but to know once and for all whether this God was a god to be trusted and obeyed. No other test would do.

Design Principles are insights made actionable

From Chris Risdon, quoted in Laura Klein’s Build Better Products:

“Design principles must be based in research,” Chris explains. “You need to do some research where you have multiple inputs, such as quantitative metrics, stakeholder interviews, ethnography, or usability studies. You then converge on a set of insights–those are the things you’ve learned and that you wouldn’t have learned with only one input. Design principles are the output when you take those insights and make them actionable.”

For example, let’s say that you ran a usability test and got the insight that people weren’t reading all the necessary information before starting an onboarding process. You might turn that into a principle like “Learn while doing.”

“Learn while doing” may not seem like a more actionable insight than “take away the text on step one of the onboarding process,” but the thing that makes it useful is that it can be applied across the entire product. “Insights are tactical,” Chris says. “Principles are wider, but not so wide that you can’t judge design against them.”

When you adopt a principle like “learn while doing,” you give yourself a standard against which you can judge all future designs. When a new feature is built, you can ask yourself and your team whether it violates any of the principles you’ve adopted.

By making sure that all of the design principles are being followed, you give yourself a better chance of creating a consistent and cohesive user experience. Even if the product is being created by several teams working independently, you all have a single yardstick you can use to measure your design.

Taking responsibility for safety on the line

From Sidney Dekker’s The Field Guide to Understanding ‘Human Error’:

To take responsibility for safety on the line, you should first and foremost look at people’s work, more than (just) at people’s safety.

  • What does it take to get the job done on a daily basis? What are the “workarounds,” innovations or improvisations that people have to engage in in order to meet the various demands imposed on them?
  • What are the daily “frustrations” that people encounter in getting a piece of machinery, or technology, or even a team of people (for example, contractors), to work the way they expect?
  • What do your people believe is “dodgy” about the operation? Ask them that question directly, and you may get some surprising results.
  • What do your people have to do to “finish the design” of the tools and technologies that the organization has given them to work with? Finishing the design may be obvious from little post-it notes with reminders for particular switches or settings, or more “advanced” jury-rigged solutions (like an upside-down paper coffee cup on the flap handle of the 60-million dollar jet I flew, so as to not forget to set the flaps under certain circumstances). Such finishing the design can be a marker of resilience: people adapt their tools and technologies to forestall or contain the risks they know about. But it can also be a pointer to places where your system may be more brittle than you think.
  • How often do your people have to say to each other: “here’s how to make it work” when they discuss a particular technology or portion of your operation? What is the informal teaching and “coaching” that is going on in order to make that happen?

And on goal conflict:

Production pressure and goal conflicts are the essence of most operational systems. Though safety is a (stated) priority, these systems do not exist to be safe. They exist to provide a service or product, to achieve economic gain, to maximize capacity utilization. But still they have to be safe. One starting point, then, for understanding a driver behind routine deviations, is to look deeper into these goal interactions, these basic incompatibilities in what people need to strive for in their work. If you want to understand ‘human error,’ you need to find out how people themselves view these conflicts from inside their operational reality, and how this contrasts with other views of the same activities (for example, management, regulator, public).

People do not need or await permission to move

From China Mieville’s October: The Story of the Russian Revolution:

To be a radical was to lead others, surely, to change their ideas, to persuade them to follow you; to go neither too far nor too fast, nor to lag behind. ‘To patiently explain.’ How easy to forget that people do not need or await permission to move.

The Three Ways Explained

The Phoenix Project, by Gene Kim, Kevin Behr, and George Spafford, has been my most recommended book of the past several years. This is from the appendix, which explains one of the key frameworks of the book:

The First Way is about the left-to-right flow of work from Development to IT Operations to the customer. In order to maximize flow, we need small batch sizes and intervals of work, never passing defects to downstream work centers, and to constantly optimize for the global goals…


The Second Way is about the constant flow of fast feedback from right-to-left at all stages of the value stream, amplifying it to ensure that we can prevent problems from happening again or enable faster detection and recovery. By doing this, we create quality at the source, creating or embedding knowledge where we need it.


The Third Way is about creating a culture that fosters two things: continual experimentation, which requires taking risks and learning from success and failure, and understanding that repetition and practice is the prerequisite to mastery.

Having and losing effective Crew Resource Management

From Sidney Dekker’s The Field Guide to Understanding ‘Human Error’ on Crew Resource Management (CRM):

Judith Orasanu at NASA has done research to find out what effective CRM is about:

  • shared understanding of the situation, the nature of the problem, the cause of the problem, the meaning of available cues, and what is likely to happen in the future, with or without action by the team members;
  • shared understanding of the goal or desired outcome;
  • shared understanding of the solution strategy: what will be done, by whom, when, and why?


In his work for the Australian Transport Safety Bureau, for example, Maurice Nevile has operationalized loss of effective CRM as follows:

  • unequal turn-taking where one person does much more of the talking than others;
  • missing responses where responses are expected or standard, with one person regularly withholding talk, opting out, or replying only in a clipped fashion;
  • overlapping talk where another person still has talk of substance to utter but is stepped on by someone else;
  • repair of talk done by others. We often engage in repair of our own talk (to correct or clarify our own speech). But if other people repair our talk, this can point to problems in the interaction or hierarchy between them.

Smart smart cities

Some text from Accenture Australia’s smart cities practice lead, Janine Griffiths, on “Smart city ‘killer use case’ doesn’t exist” (edited for ease of reading):

Smart cities are about improving the liveability of their citizens [and] this may or may not be supported by technology. … It is critical that leaders of cities understand this and look at technologies as a tool to deliver an outcome for the community as opposed to being the actual panacea.

Each city is unique in its own right - with even neighbouring councils having different economies, community demographics, geography, industries, and political priorities - and the needs of the communities are different too. … Innovation should be around how the City understands the needs of its citizens and intelligently utilises technology to develop a capability, and it is vital to understand that capability is not just technology but is a combination with processes and people.

Adopting the latest technology with limited understanding of local context and not making changes to ‘ways of working’ will lead to unpredictable results.

Sales side:

By engaging in a wider ecosystem with other industries and digital partners, local councils can develop a design-based, citizen-centric and outcome-driven strategy and, consequently, a seamless experience for their communities.

By considering the entire ecosystem around them, Australian cities will get an outside-in view of the people in the ecosystem, the places in which the service is experienced, the products used by everyone, the processes that people follow and the performance.

High school app advisor

My brother organizes an entrepreneurship academy at the high school where he is principal. I play the role of technical advisor to teams that want to build apps. Last year, the team I advised won first place. This is the initial advice I’ve given:

When I talk to people about building apps I usually focus on two things:

  1. Can you prove out the idea without building the app at all? In other words, use a combination of low-tech tools (spreadsheets, phone calls, emails, google survey, etc) to create your first 10 customer transactions. This is really helpful to understand your customers and it’s much easier to tweak your product idea early on. To learn more, look up “Lean Startup”.

  2. Can you draw out how the app will work? Just using paper and pencil, you can draw out all of the screens. The best way to learn is to download a fresh app from the App Store (something you’ve never used before) and draw out the entire experience of a user. Try to recreate it on paper in such a way that you can explain to someone who has never seen the real app. To learn more, look up “Paper app prototyping”

Those are my top 2 most important things to do. For your business plan, I understand you’ll also need to have a solid budget and costs. I can’t tell you exactly how much your app would cost without knowing how complex it needs to be. The more time you can take in thinking through the details, the better prepared you’ll be when you take your concept to a local design and development agency.

Engineering Practice Ad nauseam

I came up with this list to spark peer-led initiatives on an engineering team. Originally I used it to work with a team to define “ideal practice”, “current practice”, and then identify distinct “projects” to close the gap. The examples here are to help explain what I’m describing and are probably not the ideal state.


Frontend engineering is unique because the engineer has relatively little control over the environment or human context in which their applications are run and used.

  • Browser compatibility: Having a standard; reviewed regularly? Based on real user data from GA. SCSS Autoprefixer
  • Real user monitoring (performance, bandwidth/api latency): using Fullstory and New Relic. Having a performance budget / SLO. Having “key transactions” for monitoring
  • Browser exception monitoring: Sentry with sourcemaps
  • Viewport / responsive design: Standards, device lab, browserstack assertions, etc.
  • Usability: Heatmaps, User observation, Usability interviews, clear activation metrics, etc
  • Accessibility: Accessibility Standards and Linting. People care about accessibility. Technical staff are proficient with VoiceOver
  • Customer interaction: Confidence and knowledge to have public conversations with customers and support them. Self-directed teams that understand the product well enough that they can take and manage customer feedback


The services that do the work.

  • Services, Responsibilities and Boundaries: Monoliths, SOA, Microservices, how you slice the problem.
  • Data storage: SQL/NoSQL, cache (Redis), index (ES, Solr), etc
  • Database Querying: Getting the data, N+1s, Views, indexing
  • Asynchronous Operations: job workers, notifications/messaging, ETLs, scheduled emails, generated reports

Development and Collaboration

A technically complex team sport.

  • Packaging and build pipeline: Up-to-date build tooling. Containerized and artifacted, roll-backable, and promotable. Live code reloading. JS/CSS packaging
  • Source code management and sharing: The source is committed to a Git repository and uploaded to GitHub. Code Review on PRs. File naming and organization conventions; practice of removing unused code.
  • Dependency management: Dependencies are regularly updated. Considering breaking out things into modules. Having a standard for out-of-date updates. Dependency risk review (availability, security, do-we-need-it). Gemnasium, etc.
  • API Design: Client systems (front and backend) have something to talk to. Available and complete documentation, messages attached to API calls should be coherent. Design is part of some standard process
  • Continuous Integration: CI server; automated tests. Regularly review runtime. Regularly reviews and automates manual tasks
  • Testing: Testing is easy. Visual regression. Acceptance testing as part of a team. Code coverage is tracked and managed.
  • Philosophy of personal productivity: Automation vs just doing it. Consensus on that XKCD cartoon about how long something takes. “Maker hours” and review of happiness/productivity. Time tracking, etc.
  • Philosophy of team productivity: Meetings and collaboration across departments and functions. Process for managing the process of change.

Project Management and Acceptance

Within complex systems - both human and technical - it’s difficult to ensure that the work being done is the right work to be done. On the business level, certain parts of the system, especially user-facing ones, may unduly affect perception of the overall system.

  • Management philosophy: Scrum, kanban, theory of constraints
  • Management practice (running standup, demos, retros, architectural planning): Concise agenda, clear outcomes, well-facilitated. Come prepared. Meeting templates. Clear reasons for being in the meeting and for having it at all
  • Management tools (Jira, Trello, sticky notes): Visually track the flow of work through the organization
  • Code and standards reviews: Enforce standards with automation. Regularly scheduled as part of project
  • Collateral and hand-offs: Collab with Product, Design and Ops. Clear templates and cross-checks; Wire frames over pixel-perfect mockups.
  • Acceptance and demos: Feature Flagging; Staging Environment. Standard handoff process.
  • Estimating and Prediction: Have a high-level view of process. Be able to make and communicate estimates satisfactorily and/or accurately.
  • Managing and prioritizing product feedback: Can receive feedback efficiently and route/evaluate/prioritize. Balances functions between Product, Design and Engineering.

Delivery and Operations

  • Asset hosting and CDNs: Quick, reliable, and caches don’t ruin your day (“you’ll have to clear your cache”)
  • Service monitoring (deliverability, consistency, verification): Healthchecks; SLO; Pagerduty; thresholds; frog boils.
  • Security: https, XSS, Pentests, Bugbounties, responsible disclosure, Human/asset management
  • Business Continuity: Backups, Domain name and certificate renewals, multi-datacenter, hit-by-a-bus problems
  • Dependency and vulnerability management philosophy: Scheduling reviews/updates
  • Incident Command and Management: Roles and responsibilities, SLAs, incident reporting, play/run-books
  • Bugs and regression management: identification, prioritization, prevention
  • Marketing/activity/metrics tracking: Practice (tag management) and Process (can easily report out biz analysis numbers without heroics)

Standardization and Innovation

Web engineering, as the intersection of several different domains and technologies (e.g. HTML, CSS, JavaScript, browsers, backends, PaaS, etc.) rapidly innovates along multiple dimensions. These are strategies for managing change.

  • Frameworks (e.g. Rails) & conventions (e.g. BEM): Use open source community maintained framework/standards vs NIH; unique business case attitude
  • Standards and style guides: Have coding practice/styleguides and ratification process
  • Linting: Automated enforcement of the agreed coding standards and styleguides
  • Proof of Concepting: Use quick prototypes to validate ideas and overcome decision avoidance. Track the quantity of work that is thrown away.
  • Architecture and system strategy: Long term technical vision and alignment with external forces and opportunities

Stewardship and Advancement

Ensure a healthy and growing environment exists for technical practitioners to professionally advance and spread the good news.

  • Onboarding and orientation of new hires and practitioners
  • Career advancement path
  • Industry Leadership: public speaking, publishing, being a leader in the field