Japanese processes

Jugyō Kenkyū (“Lesson Study”)

“Everything we do in the U.S. is focused on the effectiveness of the individual. ‘Is this teacher effective?’ Not, ‘Are the methods they’re using effective, and could they use other methods?’” — James Hiebert

From American RadioWorks’ “A different approach to teacher learning: Lesson study”:

A group of teachers comes together and identifies a teaching problem they want to solve. Maybe their students are struggling with adding fractions.

Next, the teachers do some research on why students struggle with adding fractions. They read the latest education literature and look at lessons other teachers have tried. Typically they have an “outside adviser.” This person is usually an expert or researcher who does not work at the school but who’s invited to advise the group and help them with things like identifying articles and studies to read.

After they’ve done the research, the teachers design a lesson plan together. The lesson plan is like their hypothesis: If we teach this lesson in this way, we think students will understand fractions better.

Then, one of the teachers teaches the lesson to students, and the other teachers in the group observe. Often other teachers in the school will come watch, and sometimes educators from other schools too. It’s called a public research lesson.

During the public research lesson, the observers don’t focus on the teacher; they focus on the students. How are the students reacting to the lesson? What are they understanding or misunderstanding? The purpose is to improve the lesson, not to critique the teacher.

Shuhari

Via “Scrum” by Jeff Sutherland:

  1. Shu: know all the rules and forms and repeat them; don’t deviate at all
  2. Ha: having mastered the forms, make innovations
  3. Ri: discard the forms entirely and be creative in an unhindered way

Waste

Via the Toyota Production System and Kaizen processes:

  • Muri: waste through unreasonableness
  • Mura: waste through inconsistency
  • Muda: waste through outcomes

2017 Professional Goals Reviewed

I changed jobs in March 2017. It was a tough decision. I went into the new job with some very specific goals to accomplish.

The Goals

Accessibility & Inclusion

I started attending Lighthouse Labs and doing some organizational advocacy. Advocating from an engineering role was difficult because I wasn’t able to develop strong design and product allies on my team. I made some presentations, but any success came from seeding ideas with other teams and helping support others.

A/B & Split Testing

A/B testing progress, like accessibility, was hampered by the absence of champions on the design and product front. Having made some presentations, identified some opportunities, and demonstrated how easy testing could be, I still found it difficult to champion from an engineering role. A few months into 2018, we’ve now run some successful tests.

Ops / Kaizen

The new job had already defined some values (“No-blame postmortems”), but I wanted to introduce some more practices: for example, collecting “three things that would have prevented the incident, three things that would have detected it faster, three things that would have helped fix it faster,” plus risk inventories and service level objectives. They’re moving forward pretty well.

Career Ladders

One of the last things I championed at my last job was the adoption of engineering career ladders. At my new job, I pushed heavily on this again. The entire organization adopted them, and we got salary bands too. I dunno how much credit I can take, but I sure mentioned it a lot and there it was.

Facilitation

I opened the new job by running a 90-minute timeline activity that I’ve referred to multiple times over the past year. I’ve also run several full-time planning sessions and gotten feedback that they’re very productive and satisfying. It’s easy to forget that practices that seem formulaic to me look like magic to people who don’t know the process. I’ve since gotten several people to attend Technology of Participation trainings.

Growth

Growth was the biggest bummer. The most exciting part of the new job was the emphasis, during interviews, on growing usage by 100x. Many of my personal goals came out of my expectation that the primary challenge would be reorganizing operations around these business goals. Unfortunately (like at my last job, oddly), growth became a trailing indicator rather than a leading and aligning goal. This manifests as a lot of wasted effort: people pull in different directions and optimize for individual (or role-based) throughput rather than whole-team throughput. For a brief moment we had a clear growth plan, which I championed, but once we hit the first milestone it was set aside rather than built upon.


If you replaced “accessibility” with “responsive design” in 2018, this is what you’d get

I took all of the conversations and experiences I’ve had over the years advocating accessibility features, and imagined them applied to responsive design.

Imagine

Imagine a product organization where no one has a smartphone or knows how to resize their browser.

A forward-thinking engineer says “Hey folks, I know everyone is busy but I was playing around with this browser draggy thing and I dunno but I think our website isn’t very good on smartphone. Can I help?”

Their manager replies “That’s the kind of initiative we love. Let’s put together a plan.”

Here’s what their plan for “responsive” looks like

  • Summarize industry standards for responsive design. It’s a “growing field”.
  • Do demographic research: What are common viewports? How many people have smartphones?
  • Standardize on CSS rules for projects: Every CSS file must have at least 1 media query. All container elements must have at least 2 breakpoints. No fixed-width element can be wider than 320px. All buttons must be larger than 16 x 16px.
  • Write some automated tests and QA tools to ensure those rules are followed.
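The last bullet could be sketched as a small lint script. This is a hypothetical example I’ve written to illustrate the idea, checking two of the (satirical) rules above: every stylesheet needs at least one media query, and no fixed width may exceed 320px. The function name and rule set are my own assumptions, not anyone’s real tooling.

```python
# Hypothetical sketch of an automated CSS-rule check, assuming the
# (satirical) standards above: at least 1 media query per file, and
# no fixed width wider than 320px.
import re

def lint_css(css_text):
    """Return a list of rule violations for one stylesheet's text."""
    violations = []
    # Rule: every CSS file must have at least 1 media query.
    if "@media" not in css_text:
        violations.append("no media queries found (need at least 1)")
    # Rule: no fixed pixel width wider than 320px.
    for match in re.finditer(r"width\s*:\s*(\d+)px", css_text):
        if int(match.group(1)) > 320:
            violations.append(f"fixed width {match.group(1)}px exceeds 320px")
    return violations

print(lint_css("div { width: 400px; }"))
```

A real QA tool would parse the CSS properly rather than regex-match it, but even this crude version is enough to wire into CI and enforce the plan mechanically.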

That’s a good plan, right?

The plan is standards based, actionable, measurable, and integrated into the development process. And engineering is championing it. That’s great!

So things move along and then…

  • New customers are increasingly focused on “responsive best practices” and the company isn’t sure where they stand in the competitive landscape.
  • An important stakeholder got a smartphone and begins poking the CEO about some weirdness.
  • Product managers and designers grumble about deadlines and bolted-on changes when developer and QA questions get kicked back upstream.
  • That forward-thinking engineer occasionally sighs and pushes through odd-looking code and CSS changes and it’s just kinda ok fine, but isn’t sure how it’ll fit into their next performance review.

So the company decides to hire you.

Imagine you’re hired to fix this

What do you say? (Remember, it’s 2018.)

“Let’s get some goddamn smartphones. And let’s train our people how to resize their browsers. And then we’ll talk about our usability testing practices.”

Time to get to work! But there are still some objections to overcome:

  • “Getting ‘good’ at resizing browsers is gonna take a lot of time and training. Is it really that important?”
  • “Not all designers will want to specialize in responsive. Can we really make them?”
  • “We have a lot of other product needs too. How do we prioritize ‘responsive’ against the other business and UX problems we already know about and need to fix?”
  • “Airdropping some smartphones won’t make us native users. We can’t really understand a smartphone user’s experience enough to make it perfect. Our company just isn’t the culture for smartphone people anyway.”
  • “Making existing projects perfectly responsive will be like a total rewrite. Maybe next time we could really bake it in from the beginning.”

This is tough, right? It’s a slow and steady push. It’s amazing anyone manages to do responsive at all.

Yet here we are for accessibility

Over and over again I see organizations fall into an approach that places accessibility as an implementation detail to be addressed at the end of the product development process.

Because so many accessibility errors relating to assistive technologies are markup errors, and because markup errors are so easy to identify, we’ve grown up in an accessibility remediation culture that is assistive technology obsessed and focused on discrete code errors. 

Inclusive design has a different take. It acknowledges the importance of markup in making semantic, robust interfaces, but it is the user’s ability to actually get things done that it makes the object. The quality of the markup is measured by what it can offer in terms of UX.

Inclusive Design Patterns by Heydon Pickering

When talking to people, I’ve found the responsive-design analogy helps to reframe their approach to accessibility and inclusion. I follow up with technical recommendations, but it opens a door to having broader impact on product and UX practices.

Like responsive and mobile-first design, integrating accessibility and inclusion into the entire product development process offers another powerful opportunity and perspective to distill user needs, focus product value and intent, and yes, verify the implementation’s delivery.

What challenges have you seen in producing organizational and process change around accessibility and inclusion? How are you overcoming them? I’d love to hear from you and continue sharing what I’ve learned.


To test god

From Steven Erikson’s “Toll the Hounds”:

“I tried to tell him what I am sensing from the Redeemer. Sir, your friend is missed.” She sighed, turning away. “If all who worship did so without need. If all came to their saviour unmindful of that title and its burden, if they came as friends—” she glanced back at him, “what would happen then, do you think? I wonder…”

[…much later…]

Seerdomin glared at the god, who now offered a faint smile. After a moment, Seerdomin hissed and stepped back. “You ask this of me? Are you mad? I am not one of your pilgrims! Not one of your mob of would-be priests and priestesses! I do not worship you!

“Precisely, [Seerdomin]. It is the curse of believers that they seek to second-guess the one they claim to worship.”

“In your silence what choice do they have?”

The Redeemer’s smile broadened. “Every choice in the world, my friend.”

From Dan Simmons’ The Fall of Hyperion:

With a sudden clarity which went beyond the immediacy of his pain or sorrow, Sol Weintraub suddenly understood perfectly why Abraham had agreed to sacrifice Isaac, his son, when the Lord commanded him to do so.

It was not obedience.

It was not even to put the love of God above the love of his son.

Abraham was testing God.

By denying the sacrifice at the last moment, by stopping the knife, God had earned the right—in Abraham’s eyes and the hearts of his offspring — to become the God of Abraham.

Sol shuddered as he thought of how no posturing on Abraham’s part, no shamming of his willingness to sacrifice the boy, could have served to forge that bond between greater power and humankind. Abraham had to know in his own heart that he would kill his son. The Deity, whatever form it then took, had to know Abraham’s determination, had to feel that sorrow and commitment to destroy what was to Abraham the most precious thing in the universe.

Abraham came not to sacrifice, but to know once and for all whether this God was a god to be trusted and obeyed. No other test would do.


Design Principles are insights made actionable

From Chris Risdon, quoted in Laura Klein’s Build Better Products:

“Design principles must be based in research,” Chris explains. “You need to do some research where you have multiple inputs, such as quantitative metrics, stakeholder interviews, ethnography, or usability studies. You then converge on a set of insights–those are the things you’ve learned and that you wouldn’t have learned with only one input. Design principles are the output when you take those insights and make them actionable.”

For example, let’s say that you ran a usability test and got the insight that people weren’t reading all the necessary information before starting an onboarding process. You might turn that into a principle like “Learn while doing.”

“Learn while doing” may not seem like a more actionable insight than “take away the text on step one of the onboarding process,” but the thing that makes it useful is that it can be applied across the entire product. “Insights are tactical,” Chris says. “Principles are wider, but not so wide that you can’t judge design against them.”

When you adopt a principle like “learn while doing,” you give yourself a standard against which you can judge all future designs. When a new feature is built, you can ask yourself and your team whether it violates any of the principles you’ve adopted.

By making sure that all of the design principles are being followed, you give yourself a better chance of creating a consistent and cohesive user experience. Even if the product is being created by several teams working independently, you all have a single yardstick you can use to measure your design.


Taking responsibility for safety on the line

From Sidney Dekker’s The Field Guide to Understanding ‘Human Error’:

To take responsibility for safety on the line, you should first and foremost look at people’s work, more than (just) at people’s safety.

  • What does it take to get the job done on a daily basis? What are the “workarounds,” innovations or improvisations that people have to engage in in order to meet the various demands imposed on them?
  • What are the daily “frustrations” that people encounter in getting a piece of machinery, or technology, or even a team of people (for example, contractors), to work the way they expect?
  • What do your people believe is “dodgy” about the operation? Ask them that question directly, and you may get some surprising results.
  • What do your people have to do to “finish the design” of the tools and technologies that the organization has given them to work with? Finishing the design may be obvious from little post-it notes with reminders for particular switches or settings, or more “advanced” jury-rigged solutions (like an upside-down paper coffee cup on the flap handle of the 60-million dollar jet I flew, so as to not forget to set the flaps under certain circumstances). Such finishing the design can be a marker of resilience: people adapt their tools and technologies to forestall or contain the risks they know about. But it can also be a pointer to places where your system may be more brittle than you think.
  • How often do your people have to say to each other: “here’s how to make it work” when they discuss a particular technology or portion of your operation? What is the informal teaching and “coaching” that is going on in order to make that happen?

And on goal conflict:

Production pressure and goal conflicts are the essence of most operational systems. Though safety is a (stated) priority, these systems do not exist to be safe. They exist to provide a service or product, to achieve economic gain, to maximize capacity utilization. But still they have to be safe. One starting point, then, for understanding a driver behind routine deviations, is to look deeper into these goal interactions, these basic incompatibilities in what people need to strive for in their work. If you want to understand ‘human error,’ you need to find out how people themselves view these conflicts from inside their operational reality, and how this contrasts with other views of the same activities (for example, management, regulator, public).


People do not need or await permission to move

From China Mieville’s October: The Story of the Russian Revolution:

To be a radical was to lead others, surely, to change their ideas, to persuade them to follow you; to go neither too far or too fast, nor to lag behind. ‘To patiently explain.’ How easy to forget that people do not need or await permission to move.


The Three Ways Explained

The Phoenix Project, by Gene Kim, Kevin Behr, and George Spafford, has been my most recommended book of the past several years. This is from the appendix, which explains one of the key frameworks of the book:

The First Way is about the left-to-right flow of work from Development to IT Operations to the customer. In order to maximize flow, we need small batch sizes and intervals of work, never passing defects to downstream work centers, and to constantly optimize for the global goals…

[…]

The Second Way is about the constant flow of fast feedback from right-to-left at all stages of the value stream, amplifying it to ensure that we can prevent problems from happening again or enable faster detection and recovery. By doing this, we create quality at the source, creating or embedding knowledge where we need it.

[…]

The Third Way is about creating a culture that fosters two things: continual experimentation, which requires taking risks and learning from success and failure, and understanding that repetition and practice is the prerequisite to mastery.


Having and losing effective Crew Resource Management

From Sidney Dekker’s The Field Guide to Understanding ‘Human Error’ on Crew Resource Management (CRM):

Judith Orasanu at NASA has done research to find out what effective CRM is about.

  • shared understanding of the situation, the nature of the problem, the cause of the problem, the meaning of available cues, and what is likely to happen in the future, with or without action by the team members;
  • shared understanding of the goal or desired outcome;
  • shared understanding of the solution strategy: what will be done, by whom, when, and why?

[…]

In his work for the Australian Transport Safety Bureau, for example, Maurice Nevile has operationalized loss of effective CRM as follows:

  • unequal turn-taking where one person does much more of the talking than others;
  • missing responses where responses are expected or standard, with one person regularly withholding talk, opting out, or replying only in a clipped fashion;
  • overlapping talk where another person still has talk of substance to utter but is stepped on by someone else;
  • repair of talk done by others. We often engage in repair of our own talk (to correct or clarify our own speech). But if other people repair our talk, this can point to problems in the interaction or hierarchy between them.

Smart smart cities

Some text from Accenture Australia’s smart cities practice lead, Janine Griffiths, on “Smart city ‘killer use case’ doesn’t exist” (edited for ease of reading):

Smart cities are about improving the liveability of their citizens [and] this may or may not be supported by technology. … It is critical that leaders of cities understand this and look at technologies as a tool to deliver an outcome for the community as opposed to being the actual panacea.

Each city is unique in its own right - with even neighbouring councils having different economies, community demographics, geography, industries, and political priorities - and the needs of the communities are different too. … Innovation should be around how the City understands the needs of its citizens and intelligently utilises technology to develop a capability, and it is vital to understand that capability is not just technology but is a combination with processes and people.

Adopting the latest technology with limited understanding of local context and not making changes to ‘ways of working’ will lead to unpredictable results.

Sales side:

By engaging in a wider ecosystem with other industries and digital partners, local councils can develop a design-based, citizen-centric and outcome-driven strategy and, consequently, a seamless experience for their communities.

By considering the entire ecosystem around them, Australian cities will get an outside-in view of the people in the ecosystem, the places in which the service is experienced, the products used by everyone, the processes that people follow and the performance.