Lighthouse Labs retrospective note

For more than a year I’ve been attending monthly Lighthouse Labs meetups at the LightHouse for the Blind and Visually Impaired in San Francisco. Each month Lighthouse Labs holds an open forum for accessibility technologists to present and receive feedback. These are my notes from an audience retrospective of a year of these presentations:

  • Working with new learners:
    • What tech do you currently have?
    • What do you want to do that you currently can’t? “Based on life.”
    • “It’s ok to say Blind. How exhausting is it to hear people talk around it for 15 minutes trying not to say the B-word?”
  • Advice for inventors and presenters:
    • No more remote assistance
    • No more buzzing wearables
    • No more cane solutions
    • Keep it simple.
    • Make your website accessible
    • If they think they have a product for the blind, ask them what research they’ve done. Should we be patient zero? And if so, they should be aware of what that means.
    • What is the end goal of what you’re presenting? How do you intend to affect a person’s life? Example: “be more independent”, but in what way? That answer is too open-ended and predictable to show where the idea might go. A better question to answer: “How do you intend to enrich someone’s life?”
    • Why do you think your solution is better than others? E.g., why is it better than a cane?
    • Don’t: Try to solve a problem that blind people don’t need solved.
    • Have a list of how blind people already do stuff.
    • Bring appropriate audio-media, and understand it well enough to connect it to the presentation room.
    • “Yeah, that idea was terrible. But they were young, full of ideas and open to feedback”
    • “It is exhausting to say every month, we have canes and they’re fine.”

Goal Evaluation Practices

From Appraising Performance Appraisal by Steven Sinofsky:

The following are ten of the most common attributes that must be considered and balanced when developing a performance review system: …

3. Measuring against goals. It is entirely possible to base a system of evaluation and compensation on pre-determined goals. Doing so will guarantee two things. First, however much time you think you save on the review process you will spend up front on an elaborate process of goal-setting. Second, in any effort of any complexity there is no way to have goals that are self-contained and so failure to meet goals becomes an exercise in documenting what went wrong. Once everyone realizes their compensation depends on others, the whole work process becomes clouded by constant discussion about accountability, expectation setting, and other efforts not directly related to actually working things out. And worse, management will always have the out of saying “you had the goal so you should have worked it out”. There’s nothing more challenging in the process of evaluation than actually setting goals and all of this is compounded enormously when the endeavor is a creative one where agility, pivots, and learning are part of the overall process.

Best practice: let individuals and their manager arrive at goals that foster a sense of mastery of skills and success of the project, while focusing evaluation on the relative (and qualitative) contribution to the broader mission.


Best Practice Practices

I like how Rapid Development: Taming Wild Software Schedules by Steve McConnell lays out exactly how “Best Practices” were selected or rejected:

Summary of Best-Practice Candidates

Each practice described in a best-practice chapter has been chosen for one of the following reasons:

  • Reduction of development schedules
  • Reduction of perceived development schedules by making progress more visible
  • Reduction of schedule volatility, thus reducing the chance of a runaway project

Some of the best practices are described in Part I of this book, and those best practices are merely summarized in this part of the book.

You might ask, “Why did you ignore Object-Structured FooBar Charts, which happen to be my favorite practice?” That’s a fair question and one that I struggled with throughout the creation of this book. A candidate for best-practice status could have been excluded for any of several reasons.

Fundamental development practices. Many best-practice candidates fell into the category of fundamental development practices. One of the challenges in writing this book has been to keep it from turning into a general software-engineering handbook. In order to keep the book to a manageable size, I introduce those practices in Chapter 2, “Software Development Fundamentals” and provide references to other sources of information. A lot of information is available from other sources on the fundamental practices.

In a few cases, you might rightly consider a practice to be a fundamental one, but if it has a profound impact on development speed, I included it as a best-practice chapter anyway.

Best philosophy, but not best practice. Some best-practice candidates seemed to be more like theories or philosophies than practices. The distinction between theory, practice, and philosophy in software development is not clear, and so an approach that I call a “philosophy” you might call a “practice” and vice versa. Regardless of what it’s called, if I considered it to be “best,” I discussed it in the book somewhere. But if I considered it to be a philosophy, it’s in the first or second part of the book. (See Table III-1 for a list of where each best philosophy is discussed.)

Best practice, maybe, but not for development speed. Some best-practice candidates might very well be best practices for their effect on quality or usability, but they could not pass the tests of improving actual development schedules, perceived schedules, or schedule volatility. Those practices were not included in this book.

Insufficient evidence for a practice’s efficacy. A few promising practices were not supported by enough evidence to deem them to be best practices. If the development community has not yet had enough experience with a practice to publish a handful of experiments or experience reports about it, I didn’t include it. Some of the practices that fell into this category will no doubt someday prove that they have large speed benefits, and I’ll include those in a future edition of this book.

In a few instances in which published support by itself was not sufficient to justify treating a practice as a best practice, I had personal experience with the practice that convinced me that it was indeed a best practice. I included those in spite of the lack of published support from other sources.

Questionable evidence for a practice’s efficacy. A few best-practice candidates seemed promising, but the only published information I could find was from vendors or other parties who had vested interests in promoting the practices, so I excluded them.

Not a best practice. A few best-practice candidates are highly regarded (even zealously regarded) in some quarters, but that does not make them best practices. In some cases, experience reports indicated that a well-regarded practice typically failed to live up to expectations. In some, a practice is a good practice, but not a best practice. And in some, the practice works fabulously when it works, but it fails too often to be considered a best practice.

In one case (RAD), the candidate practice consisted of a combination of many of the other practices described in this book. That might very well be an effective combination in some circumstances. But because this book advocates selecting rapid-development practices that meet the needs of your specific project, that specific pre-fab combination of practices was not itself considered to be a best practice.


The concrete sumo

This paper on “The Concrete Sumo: Exigent Decision-Making in Engineering” by Taft H. Broome, Jr. is a difficult read because it tells a story first, and then explains who the characters are; read it twice back-to-back.

In the Johnny-on-the-Spot, Tubby was the first to speak to me: “No court in the land,” he said, “would blame you for letting the sumo dump the concrete in the entrance way. It’s not your fault that they left you alone on your first day!” Then, Roebling began to speak: “You are an engineer, and engineers sacrifice all for their responsibilities to the business of engineering!” Finally, Uncle Roy, the engineer after whom I had patterned my career, spoke to me: “This job belongs as much to you as to anyone else. So, you have a duty to either move this project along, or resign!”

My last day on the job was occasioned by my acceptance to graduate school, and by a lunch given to me by the superintendent and the project manager. We exchanged pleasantries before I recalled for them the elevator pit task left to me on my first day. I expected the superintendent to say that the carpenter foreman was alerted to the plot and instructed to prevent any catastrophe. Instead, he recalled for me that on his first day he was likewise abandoned and thus laid out a church, not only in the wrong direction, but also on the wrong lot! Without any apology at all he said: “When it comes to rookie engineers, it is better to pay early, than to pay later.”

The afterword explains the simplified procedure:

A year ago, I agreed to instruct an ethics workshop for undergraduate engineering students in preparation for the Fundamentals of Engineering Examination (FEE). The FEE is the first step toward licensure. The workshop was scheduled for ninety minutes. I convened the workshop by passing out a trial examination in professional ethics. Instead of lecturing on ethics as I had planned, it occurred to me to ask the students to take the examination. Fifteen minutes later, they had finished. Then I asked them to think of an aged, highly mature person: a family member or some legendary character; someone who exhibited great wisdom and caring for others. Then I asked the students to re-do the examination, but this time putting their sage in the position of test taker. Finally, I gave them the solution to the examination and asked them to grade both responses, theirs and the responses of the sages. The results were surprising: the first responses were either failures or marginal passes; the second responses maximized the examination! I then adjourned what turned out to be a forty-minute workshop.

The following semester, one of the students informed me that he had taken the FEE and passed it, and had done very well on its ethics portion.

Perhaps the literary approach to problem solving in ethics and deference to the old yet have places in engineering, in practice as well as in the classroom, today.


There Is Often A Crisis

Some reflections from Matt Webb on their accelerator’s office hours:

4. There is often a crisis. Fixing the issue is not my job.

A special type of Office Hours is when there’s a crisis. I would characterise a crisis as any time the founder brings urgency into the room–whether it’s good or bad. There are times when sales are going just too well! “A great problem to have” can trigger a panicked response just as much as a more existential crisis such as an unhappy team.

I have to remind myself that fixing the issue is not my primary job. Participating in panic validates panic as a response. But if a startup responded to every crisis with panic, nothing would get done. (I would characterise panic as short-termist thinking, accompanied by a stressed and unpleasant emotional state.)

What makes this challenging is that I often know what they’re going through. Sometimes I recognise a situation and my own emotional memories well up. There have been sessions where my heart races, or my palms sweat, or I look from team member to team member and wonder if they realise the dynamic they’ve found themselves in.

So before we talk about the issue, I try to find the appropriate emotional response: enthusiastically cheer first sales (but don’t sit back on laurels); get pissed off about bad news but move on with good humour; treat obstacles with seriousness but don’t over-generalise. It’s a marathon not a sprint, and so on.

Then use the situation to talk tactics and build some habits. I like to encourage:

  1. Writing things down. Startups are not about product, they are about operationalising sales of that product. Operationalising means there is a machine. The minimum viable machine is a Google Doc with a checklist. The sales process can be a checklist. HR can be a checklist. Bookkeeping can be a checklist. When things don’t work, revise the checklist. Eventually, turn it into software and people following specific job objectives. This is how (a) the startup can scale where revenue scales faster than cost of sale; and (b) the founder can one day take a holiday.
  2. A habit of momentum. I forget who said to me “first we figure out how to row the boat, then we choose the direction” but movement is a team habit. If, in every meeting, I respond to a business update with “so, what are you doing about that” then that expectation of action will eventually get internalised.
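The “minimum viable machine” idea can be sketched in code. This is a hypothetical illustration, not from Webb’s post: a checklist as plain data, easy to revise when things don’t work, and a natural seed for the software it eventually becomes. All names and steps are illustrative.

```python
# A checklist as data: the "minimum viable machine" made revisable.
# Steps are just strings; completion is tracked by index.

from dataclasses import dataclass, field


@dataclass
class Checklist:
    name: str
    steps: list[str]
    done: set[int] = field(default_factory=set)

    def complete(self, index: int) -> None:
        """Mark one step as finished."""
        self.done.add(index)

    def remaining(self) -> list[str]:
        """Steps not yet completed, in order."""
        return [s for i, s in enumerate(self.steps) if i not in self.done]


# The sales process as a checklist (illustrative steps):
sales = Checklist("Sales process", [
    "Qualify the lead",
    "Send the demo invite",
    "Follow up within 48 hours",
])
sales.complete(0)
```

Revising the checklist is editing the list; “turning it into software” starts by giving the same structure a home in code.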

I find these viewpoints sink in better when they’re used in responding to a crisis.

I also like to encourage self-honesty. Sometimes my job is to say out loud things which are unsaid. Founders are very good at being convincing (to both themselves and others), otherwise they wouldn’t be founders. Sometimes the data that doesn’t fit the narrative is left out… to others and to themselves. So I can help break that down.

There will be crises and crises and crises. But we only have these Office Hours for 12 weeks. If we concentrate on fixing just today’s issue, we miss the opportunity to build habits that can handle tomorrow’s.


Japanese processes

Jugyō Kenkyū (“Lesson Study”)

“Everything we do in the U.S. is focused on the effectiveness of the individual. ‘Is this teacher effective?’ Not, ‘Are the methods they’re using effective, and could they use other methods?’” — James Hiebert

From American RadioWorks’ “A different approach to teacher learning: Lesson study”:

A group of teachers comes together and identifies a teaching problem they want to solve. Maybe their students are struggling with adding fractions.

Next, the teachers do some research on why students struggle with adding fractions. They read the latest education literature and look at lessons other teachers have tried. Typically they have an “outside adviser.” This person is usually an expert or researcher who does not work at the school but who’s invited to advise the group and help them with things like identifying articles and studies to read.

After they’ve done the research, the teachers design a lesson plan together. The lesson plan is like their hypothesis: If we teach this lesson in this way, we think students will understand fractions better.

Then, one of the teachers teaches the lesson to students, and the other teachers in the group observe. Often other teachers in the school will come watch, and sometimes educators from other schools too. It’s called a public research lesson.

During the public research lesson, the observers don’t focus on the teacher; they focus on the students. How are the students reacting to the lesson? What are they understanding or misunderstanding? The purpose is to improve the lesson, not to critique the teacher.

Shuhari

Via “Scrum” by Jeff Sutherland:

  1. Shu: Know all the rules and forms and repeat them; don’t deviate at all
  2. Ha: Having mastered the forms, you make innovations
  3. Ri: You’re able to discard the forms entirely and be creative in an unhindered way

Waste

Via Toyota Production Systems and Kaizen processes:

  • Muri: waste through unreasonableness
  • Mura: waste through inconsistency
  • Muda: waste through outcomes

2017 Professional Goals Reviewed

I changed jobs in March 2017. It was a tough decision. I went into the job with some very specific goals to accomplish.

The Goals

Accessibility & Inclusion

I started attending Lighthouse Labs and doing some organizational advocacy. It was difficult presenting in an engineering role because I wasn’t able to develop strong design and product allies on my team. I made some presentations, but any success came from seeding ideas to other teams and helping support others.

A/B & Split Testing

A/B testing progress, like accessibility, was hampered by the absence of champions on the design and product front. Though I made some presentations, identified some opportunities, and demoed the ease and possibilities, it was difficult to champion from an engineering role. A few months into 2018, we’ve now run some successful tests.

Ops / Kaizen

The new job had already defined some values (“No blame postmortems”) but I wanted to introduce some more practices: for example, collecting “3 things that would have prevented, 3 things that would have detected faster, 3 things that would have helped to fix faster”, risk inventories, and service level objectives. These are moving forward pretty well.

Career Ladders

One of the last things I championed at my last job was the adoption of engineering career ladders. At my new job, I pushed heavily on this again. The entire organization adopted them and we got salary bands too. I dunno how much credit I can take, but I sure mentioned it a lot and there it was.

Facilitation

I opened the new job by running a 90-minute timeline activity that I’ve referred back to multiple times over the past year. I’ve also run several full-day planning sessions and gotten feedback that they’re very productive and satisfying. It’s easy to forget that practices that seem formulaic to me can look like magic to people who don’t know the process. I’ve since gotten several people to attend Technology of Participation trainings.

Growth

Growth was the biggest bummer. The most exciting part of the new job was the emphasis, during interviews, on growing usage by 100x. Many of my personal goals came out of my expectation that the primary challenge would be reorganizing operations around these business goals. Unfortunately (like my last job, oddly), growth became a trailing indicator rather than a leading and aligning goal. This manifested as a lot of wasted effort because people were pulling in different directions and optimizing for individual (or role-based) throughput rather than whole-team throughput. For a brief moment we had a clear growth plan, which I championed, but once we hit the first milestone it was set aside rather than built upon.


If you replaced “accessibility” with “responsive design” in 2018, this is what you’d get

I took all of the conversations and experiences I’ve had over the years advocating accessibility features, and imagined them applied to responsive design.

Imagine

Imagine a product organization where no one has a smartphone or knows how to resize their browser.

A forward-thinking engineer says “Hey folks, I know everyone is busy but I was playing around with this browser draggy thing and I dunno but I think our website isn’t very good on smartphone. Can I help?”

Their manager replies “That’s the kind of initiative we love. Let’s put together a plan.”

Here’s what their plan for “responsive” looks like

  • Summarize industry standards for responsive design. It’s a “growing field”.
  • Do demographic research: What are common viewports? How many people have smartphones?
  • Standardize on CSS rules for projects: Every CSS file must have at least 1 media query. All container elements must have at least 2 breakpoints. No fixed width element can be wider than 320px. All buttons must be larger than 16 x 16px.
  • Write some automated tests and QA tools to ensure those rules are followed.
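One of those QA tools could be as small as a text scan. The sketch below is hypothetical, checking only the first rule from the imagined plan (every CSS file must have at least one media query); the rule and the sample stylesheets are invented for this scenario.

```python
# A minimal QA check: does a stylesheet contain at least one media query?
# This only detects the "@media" token; it does not parse CSS properly.

import re

MEDIA_QUERY = re.compile(r"@media\b")


def has_media_query(css_text: str) -> bool:
    """Return True if the stylesheet declares at least one media query."""
    return bool(MEDIA_QUERY.search(css_text))


# Illustrative stylesheets:
responsive_css = "body { margin: 0 } @media (max-width: 320px) { body { font-size: 14px } }"
fixed_css = "body { width: 960px }"
```

A real tool would use a CSS parser and check the other rules (breakpoint counts, widths, button sizes) too, but the shape is the same: rules as code, run against every file.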

That’s a good plan, right?

The plan is standards based, actionable, measurable, and integrated into the development process. And engineering is championing it. That’s great!

So things move along and then…

  • New customers are increasingly focused on “responsive best practices” and the company isn’t sure where they stand in the competitive landscape.
  • An important stakeholder got a smartphone and begins poking the CEO about some weirdness.
  • Product managers and designers grumble about deadlines and bolting-on changes when developer and QA questions get kicked back upstream.
  • That forward-thinking engineer occasionally sighs and pushes through odd-looking code and CSS changes and it’s just kinda ok fine, but isn’t sure how it’ll fit into their next performance review.

So the company decides to hire you.

Imagine you’re hired to fix this

What do you say? (Remember, it’s 2018.)

“Let’s get some goddamn smartphones. And let’s train our people how to resize their browsers. And then we’ll talk about our usability testing practices.”

Time to get to work! But there are still some objections to overcome:

  • “Getting ‘good’ at resizing browsers is gonna take a lot of time and training. Is it really that important?”
  • “Not all designers will want to specialize in responsive. Can we really make them?”
  • “We have a lot of other product needs too. How do we prioritize ‘responsive’ against the other business and UX problems we already know about and need to fix?”
  • “Airdropping some smartphones won’t make us native users. We can’t really understand a smartphone user’s experience enough to make it perfect. Our company just isn’t the culture for smartphone people anyway.”
  • “Making existing projects perfectly responsive will be like a total rewrite. Maybe next time we could really bake it in from the beginning.”

This is tough, right? It’s a slow and steady push. It’s amazing anyone manages to do responsive at all.

Yet here we are for accessibility

Over and over again I see organizations fall into an approach that places accessibility as an implementation detail to be addressed at the end of the product development process.

Because so many accessibility errors relating to assistive technologies are markup errors, and because markup errors are so easy to identify, we’ve grown up in an accessibility remediation culture that is assistive technology obsessed and focused on discrete code errors. 

Inclusive design has a different take. It acknowledges the importance of markup in making semantic, robust interfaces, but it is the user’s ability to actually get things done that it makes the object. The quality of the markup is measured by what it can offer in terms of UX.

Inclusive Design Patterns by Heydon Pickering
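Part of why remediation culture fixates on discrete markup errors is that they are trivially machine-checkable. As a minimal illustration (my own sketch, not from Pickering’s book), a standard-library parser can flag `<img>` tags that are missing an `alt` attribute:

```python
# Flag <img> tags with no alt attribute at all.
# Note: alt="" counts as present; this check finds only the missing attribute,
# not whether the alternative text is actually useful to a person.

from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # (line, column) of each <img> without alt

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())


checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="chart"><img src="b.png"></p>')
```

That ease is exactly the trap: a tool like this can prove the attribute exists, but only watching a user can tell you whether the interface lets them get things done.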

When talking to people, I’ve found the responsive-design analogy helps to reframe their approach to accessibility and inclusion. I follow up with technical recommendations, but it opens a door to having broader impact on product and UX practices.

Like responsive and mobile-first design, integrating accessibility and inclusion into the entire product development process offers another powerful opportunity and perspective to distill user needs, focus product value and intent, and yes, verify the implementation’s delivery.

What challenges have you seen in producing organizational and process change around accessibility and inclusion? How are you overcoming them? I’d love to hear from you and continue sharing what I’ve learned.


To test god

From Steven Erikson’s “Toll the Hounds”:

“I tried to tell him what I am sensing from the Redeemer. Sir, your friend is missed.” She sighed, turning away. “If all who worship did so without need. If all came to their saviour unmindful of that title and its burden, if they came as friends—” she glanced back at him, “what would happen then, do you think? I wonder…”

[…much later…]

Seerdomin glared at the god, who now offered a faint smile. After a moment, Seerdomin hissed and stepped back. “You ask this of me? Are you mad? I am not one of your pilgrims! Not one of your mob of would-be priests and priestesses! I do not worship you!”

“Precisely, [Seerdomin]. It is the curse of believers that they seek to second-guess the one they claim to worship.”

“In your silence what choice do they have?”

The Redeemer’s smile broadened. “Every choice in the world, my friend.”

From Dan Simmons’ The Fall of Hyperion:

With a sudden clarity which went beyond the immediacy of his pain or sorrow, Sol Weintraub suddenly understood perfectly why Abraham had agreed to sacrifice Isaac, his son, when the Lord commanded him to do so.

It was not obedience.

It was not even to put the love of God above the love of his son.

Abraham was testing God.

By denying the sacrifice at the last moment, by stopping the knife, God had earned the right—in Abraham’s eyes and the hearts of his offspring — to become the God of Abraham.

Sol shuddered as he thought of how no posturing on Abraham’s part, no shamming of his willingness to sacrifice the boy, could have served to forge that bond between greater power and humankind. Abraham had to know in his own heart that he would kill his son. The Deity, whatever form it then took, had to know Abraham’s determination, had to feel that sorrow and commitment to destroy what was to Abraham the most precious thing in the universe.

Abraham came not to sacrifice, but to know once and for all whether this God was a god to be trusted and obeyed. No other test would do.


Design Principles are insights made actionable

From Chris Risdon, quoted in Laura Klein’s Build Better Products:

“Design principles must be based in research,” Chris explains. “You need to do some research where you have multiple inputs, such as quantitative metrics, stakeholder interviews, ethnography, or usability studies. You then converge on a set of insights–those are the things you’ve learned and that you wouldn’t have learned with only one input. Design principles are the output when you take those insights and make them actionable.”

For example, let’s say that you ran a usability test and got the insight that people weren’t reading all the necessary information before starting an onboarding process. You might turn that into a principle like “Learn while doing.”

“Learn while doing” may not seem like a more actionable insight than “take away the text on step one of the onboarding process,” but the thing that makes it useful is that it can be applied across the entire product. “Insights are tactical,” Chris says. “Principles are wider, but not so wide that you can’t judge design against them.”

When you adopt a principle like “learn while doing,” you give yourself a standard against which you can judge all future designs. When a new feature is built, you can ask yourself and your team whether it violates any of the principles you’ve adopted.

By making sure that all of the design principles are being followed, you give yourself a better chance of creating a consistent and cohesive user experience. Even if the product is being created by several teams working independently, you all have a single yardstick you can use to measure your design.