[Tool] The C-Squared Learning Loops
Achieve exponential learning by pursuing both content and capability traction

In practice, teams often slack on capability building. But it's a critical, ongoing priority for innovation work.

TL;DR

Learning about user pain points, and about the solutions and businesses that might address those problems, is good.

But it leaves your team treading water.

You will only meaningfully improve your capabilities if you make Capability Learning just as much of a priority.

So:

🔭 "Orient" to top team capability gaps.

🧑‍🔬 Find natural ways to "test" fixes in daily work.

🧑‍🎓 "Learn" lessons and check whether it mattered.

🧱 "Embed" what worked in your regular approach.

But watch out. This is harder in reality than in theory!


Why isn't a "normal" corporate learning plan enough?

Innovators need team-wide capability building

Most orgs want their people to get better. And so they provide "developmental feedback" and may even demand "individual learning plans" or the like.

But that's not good enough for innovation work.

For one thing, the team, not the individual, is the priority in innovation. It's typically impossible to segment innovation efforts cleanly by function. It takes the whole team. And we cannot raise team capabilities without coordinating people's individual learning efforts.

For another thing, the capabilities we need for our work change with context. They have little to do with team members being "good" or "bad" at their work. What matters more is having "no regrets" capabilities alongside the right "world-class" specialties in the areas that happen to matter today. I've summarized this before in the mandate to "be selectively awesome."

Capability building only happens when it's a priority

Of course, one might suggest the cliché that our teams are "always learning." Sure. But material improvement and course changes for evolved conditions don't just happen in passing. They are pretty darn hard and take conscious effort. And because they don't offer a direct path toward achieving a team's top goals, objectives, or KPIs, they (really, really) get punted to such a low priority that they often don't happen in practice.

Don't think so? I'd ask you to reflect on how often you have seen:

  • Static team toolkits that don't meaningfully change or adapt to project needs, other than minimal tweaks?
  • Unread books with cool ideas that have not yet been learned, let alone implemented, years after someone first became aware of them?
  • Minimal change in a team's learning pace (either faster completion or much greater learning in the same time) from one project to the next?

Look, I'm not hating here. I've been in this situation myself more than once. It's just hard to prioritize individual and collective learning on top of official priorities and the distractions that any org throws at teams daily. But it's still a problem.

The good news here is that it actually doesn't take much to get started on capability building. Explicit agreement among team members to prioritize xyz capability during the current sprint, combined with a quick review of lessons learned and follow-on actions at the end, can give you outsized results.

Can teams even do better?

In short, yes, teams can do better.

The bummer of consulting life is that I can rarely talk specifics. But even keeping it general, I've seen teams:

  • Build capabilities overnight
  • Get executives to value and support new capabilities
  • Change their toolkits to include entire new disciplines, far beyond individual tools
  • Coach up junior team members fast, so they produce results far beyond the normal requirements for their pay grade

It's definitely possible, though it does reveal whether the team truly has the trust and intellectual humility that almost any team claims to possess.

Hasn't this been fixed yet?

Maybe?

My online searches have found gobs of "learning loops" and similar frameworks. But many (1) focus on individual learning, not group capability building, and (2) zoom in too far for our needs, into the behavioral, psychological, and neurological specifics of learning.

⚠️
What I haven't found is a simple framework showing that capability building must happen in the work itself (not separately), as a team, and based on a long-term learning agenda.

Got anything better?

Enter what I have named the "C-Squared Learning Loops" for now.

The Lean Startup learning loop could use a tweak

First, there is the "standard" Lean Startup learning loop. There's nothing wrong with it per se. I simply call out that it achieves "content learning" (vs. "capability learning").

OK, one edit. A lot of versions of the Lean Startup learning loop leave out the "Hypothesize" step. It bugs me. People inevitably claim that thoughtful, scientific method-style observation, realization, and hypothesis development still happen even if the step isn't called out explicitly. But ... I've seen team after team after team basically skip over it. A bit of lip service maybe. But definitely no rigorous experiment strategy. This has to be called out. (Phew, end rant. 😅)

The Capability Learning Loop lets you achieve exponential learning

Capability learning differs a bit from content learning.

In content learning, we have one goal (traction), and we test to optimize that. Within any one project, our goal is fairly steady. Even across a portfolio, our learning effort may stay fairly focused.

But in capability learning, we have to consider the long-term direction of our org, the role that our team may need to play, the current composition of the team, individual learning interests, and more. These learning goals can be more volatile.

In capability learning, our goals are also more opportunistic and cumulative than in content learning. In content learning, we move on from one topic to the next: Found the problem to solve? Great! Now move on to solutions, value props, channels, and so on. But in capability learning, we may, e.g., always need financial insights, no matter the project. So if someone offers us access to new data, we may grab it ahead of need. On the next project, we might create efficient ways to retrieve and synthesize the data. On yet another project we might raise our capability further by creating ready dashboards or easy queries. And so on. It's a different, longer-term mode of learning.

And the key is that a team that focuses on such capability building will get exponentially better over time. Done right, small improvements pile up, with little impact at first but ever greater impact over time.
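To make the "exponential" claim a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are hypothetical, chosen only to show the shape of compounding; the 1% capability gain per sprint and the sprint counts are my assumptions, not part of the framework.

```python
# Purely illustrative: how small, compounding capability gains add up.
# The 1% gain per sprint and the sprint counts below are hypothetical.

def compounded_capability(gain_per_sprint: float, sprints: int) -> float:
    """Relative capability level after `sprints` iterations of compounding gains."""
    return (1 + gain_per_sprint) ** sprints

for sprints in (6, 26, 52):
    level = compounded_capability(0.01, sprints)
    print(f"After {sprints:>2} sprints at 1% each: {level:.2f}x the starting level")

# After  6 sprints at 1% each: 1.06x the starting level
# After 26 sprints at 1% each: 1.30x the starting level
# After 52 sprints at 1% each: 1.68x the starting level
```

The exact figures don't matter. What matters is the shape of the curve: roughly flat at first, then pulling away, which is exactly the "little impact at first but ever greater impact over time" pattern.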

Activating the Capability Learning Loop takes 4 (plus 2) steps

I see four core steps to this learning cycle:

🔭 "Orient" to top team capability gaps. They might relate to past problems or future ones. They might result from team member changes or priority changes. They are not obvious. They take conscious consideration.

🧑‍🔬 Find natural ways to "test" fixes in daily work. Capabilities are about the work, so building them happens as part of the work. Of course, we are normally under deadline pressure. So we need to be ready for the extra mental load that such new tests bring and find ways to sneak them into project plans without harming our delivery of results.

🧑‍🎓 "Learn" lessons and check whether it mattered. Things might be a bit of a mess while we work. Trying something new alongside our expertise tends to feel clunky and may even stress us out. So reflecting on what happened takes an after-action-conversation.

What's particularly hard here is the learning curve, which is easy to overlook: When we go from never having done something to trying it for the first time, we may feel a great sense of accomplishment. But further iterations will run into the slog of building up real capabilities. That stage will feel hard, with little apparent "progress." We can't give up then. Only after significant repetition will we become as good as our ambitions demand.

🧱 "Embed" what worked in your regular approach. Too often do I see teams who try–and even achieve–something only to abandon it right afterward. They never connected the new capability into their regular way of working. It remains a one-off. Knowledge fades away. Soon, the team sees it as an old distraction or forgets it outright. So claiming that progress explicitly and making it part of "the way work is done around here" is the key step for ensuring that you actually benefit from what you have learned.

There are also two more steps that you may have already anticipated. They just clog up the diagram a bit. So I didn't include them in the version shown above. ☝️

A more complete diagram includes the steps of "envisioning" and "assessing" your learning strategy and roadmap, like so:

A colorful graphic of two overlapping word circles for "Content learning" and "Capability learning," in a circle of arrows
The same C-Squared Learning Loop, enhanced with zoomed-out steps for reflection

🏔️ "Envisioning" a learning agenda ties together content and capability directions. We can only take on the kind of projects we want in the future if we actually have the capabilities that it takes to succeed.

📈 "Assessing" progress must happen explicitly. We know from psychology just how important conscious learning is. We need explicit assessments both to ensure we push far enough and to recognize the progress we have already made. Beyond that, our assessments must also involve re-evaluation. Agendas often need to change. We don't want to forge forward blindly. So checking whether to change course must also be part of our assessments.

Beware the side effects

It'd be easy to see Capability Learning as a simple "good" thing. But it comes with watch-outs.

I've seen the following problems especially often and found the solutions listed here to help:

The better you get (through reps, reflection, and optimization), ...

  • the more you feel like you are falling behind. That's because the better you get, the more opportunities you unlock: opportunities you could never have made progress on before, and that now sit completely untapped.
  • To counter the resulting risk of decision paralysis or "spinning too many plates at once," prioritize and focus. (That's the "selective" part of the mandate in Credible Innovation to be "selectively awesome.")

The more you then prioritize and focus, ...

  • the more you become irrelevant in some areas. That's because you didn't prioritize specializing in those areas, even though it would have been reasonable to do so. By contrast, the areas in which you do specialize may now over-serve some clients. So you can come across as "space cadet-y" or "too geeky" on those fronts.
  • To counter that irrelevance, re-assess to ensure that your learning direction still points toward end states that help you be as useful and irreplaceable as you can be to your primary stakeholders. If others still see you as irrelevant, that's not ideal, but it is unavoidable.

The more you keep your end state/vision top-of-mind, ...

  • the more you might find the challenge of reaching it daunting. That's because focus and practice give you a true appreciation for the difficulty of what you attempt.
  • To counter that sense of everything being too much and too hard, track the progress that you have already made, and assess what level of competence is both realistic (given your other priorities) and necessary (given what your stakeholders do or don't demand from you).

In other words, capability building sounds good in theory but is hard and causes unintended side effects in practice.

But if you know why you are taking on that extra labor, it can help your team achieve exponential results.

T.I.S.C.