TL;DR
I found a great "behind the scenes" tell-all about the research paper on which much of the current AI generation rests.
Sound dry? No, super interesting!
I highly recommend this article by Steven Levy of WIRED (with illustrations by Magda Antoniuk). Just go and give it a read.
Yeah, it's also a bit long. But not terribly so. Oh, and after reading it, come back for my commentary.
Of course, the article teaches us loads about the origins of today's generative AI. But for our purposes, just skip the talk about long short-term memory, transformers, and all the rest.
Instead, check out the many lessons for doing innovation work credibly that the article is bursting with.
It turns out, even some of the most lauded innovators in the world face massive challenges when they pursue breakthrough innovation. Even they, with all of Google's smarts, cash, and might, run into issues that you might find very familiar.
In other words: If credible innovation principles matter for them, they matter for everyone!
The story
Much of the generative AI wave that has exploded into our lives since 2023 rests on insights from a single scientific paper. A group of eight Google researchers scrambled to submit that paper to a nerdy conference just two minutes before the deadline in early 2017.
By the end of the year, the paper had received enough attention that other scientists were beating down the researchers' doors to learn more, and even a co-inventor of the previous leading AI technology congratulated the researchers on their accomplishments.
Steven Levy interviewed the eight researchers, who have since moved on from Google, and shares the inside story of the paper with us: how it came about, how it was received, what happened after, and, importantly for us, the role of Google corporate in all this.
While many factors contributed to the paper's insights, three stand out:
- The fantastic team spirit and cross-pollination among talented, curious individuals with highly diverse experiences
- The many random chances that contributed to the group ever coming together
- The parent company's ho-hum and opportunistic support, driven both by a lack of understanding and by utterly rational choices that you or I might make too.
BTW, cool sidebar: Levy has contributed to the magazine since its inception and is also the author of other favorite articles of mine, based on insightful discussions he had with Jeff Bezos (2011) and Larry Page (2013).
[Source: WIRED]
The point for doing credible innovation work
Ok, let's get to the gobs of lessons for innovators. I sorted them into four groups:
Take time to set your course calmly. Avoid both shiny objects and doom-and-gloom panic
Not everything should change. Not everything should be "innovated."
After all, the whole point of innovating is to make something better. And once you've achieved that, you probably should stop changing it more. Get some benefits from that "better" state that you just worked so hard to create. It would be foolish to rip it all up right away in search of some elusive "even better" thing.
In that spirit, I've found that teams must have really crisp reasons for choosing to innovate and be able to express them clearly. It's a bedrock principle of the Credible Innovation approach.
Turns out, this also mirrors several aspects of what the Google team did or experienced.
Specifically, I found it interesting how much their "must-do purpose" mattered to them, with what tenacity they pursued it, and what counted as a must-do purpose for the wider org. On other fronts, their innovation governance appears to have been mixed, both helping and hindering them. Either way, it highlights the unique needs of governing innovation and the importance of getting it right.
The specific lessons I saw in the article were:
- Pursue "must-solve" problems to set your work up for success as you can see in the years of effort the Google researchers put into this work and their acknowledgement of how much the "right" problems matter. They kept at it despite all the setbacks. Without that must-do-ness, neither they nor you would last long enough, past all obstacles, eventually to succeed. It would just get too hard.
- Waiting and not being first can be a perfectly legitimate strategy: That is what Google CEO Sundar Pichai argued had been Google's strategy for large language models, when asked why the company hadn't continued to push forward full speed and try to be first to market with transformer-based solutions. The article presents this argument as at least open to questioning. But fundamentally, it IS an ok strategy, even if it runs counter to activators' natural instinct to do something. (BTW, Wharton professor Ethan Mollick has a beautiful illustration of the value of waiting, based on the current state/maturity of generative AI.)
- Once ready, trigger action whatever way you can. Inertia is the great enemy: In the article, the immediate threat of supposedly "disruptive" competitors got Google to invest resources in the topics that ultimately led to the research paper. Of course, it's easy to misjudge who the most potent competitors are. In this case, Siri, bought by Apple, turned out to cause "false panic" at Google. But that panic was still helpful to the team. "Google brass smelled a huge competitive threat" and put more people on work that might counter this likely competitor. In other words, the organization's "must-do" may differ from what the innovation team considers a "must-do." But that need not be all bad.
Great innovation takes massive, ongoing effort, like it or not
"Impressive craft" proved to be another leg of the stool of Google's success, on top of their "must-do purpose."
The sweat equity that the research team put in, requiring top effort even among experts in their field, is the very opposite of pretty "innovation theater." It may not sound happy-shiny. But if you've pursued serious innovation that had a chance at success, you also know how much the reality of this work is about "99% perspiration and only 1% inspiration."
- Ideas are easy. But making meaningful innovation work (1) takes months or years of focused work and (2) takes healthy, effective teams: By contrast, we often massively over-value "ideation" and the role of brilliantly creative individuals. It takes time, and it takes groups! E.g., some of Google's researchers had been working on pieces of the problem that they solved in this paper for over three years, an eternity in the AI space.
- Expect to beat up your work and iterate at massively high volume: It's not merely about being "smart," "having empathy," or making "a pivot or two." We need to evolve massively from our starting point. But if we're willing to do it, then the work can be so good that even one's own team might "later describe [one's contributions] with words like 'magic' and 'alchemy' ...," as the Google researchers found.
- Luck still matters. You can adapt to it but not avoid it: Even these Google researchers acknowledge to this day that being lucky was a factor that helped them to create magic. This reality brings us back to the importance of must-do purposes, big portfolios, and major iteration. If your chances of success are low, you need to create an environment that can overcome those long odds.
Get the team right. Then actually make the team the priority
In an age of prima donnas and self-promoters, truly caring about teams over selves (vs. just saying so) feels downright quaint. But this team apparently did it anyway ... and reaped rewards from it!
A focus on team over individuals marked the team's entire effort. Everyone pulled their weight. Everyone contributed their unique strengths. And while they did so, they didn't make it about themselves. From all I can tell in the article, people poured in their efforts for the sake of and on behalf of the cause and the team.
E.g., at the very end, the team had to decide the order in which their names would appear at the top of the final paper. In academia, that question is a big deal. The first name becomes famous. Later ones can sink into obscurity. But the team wanted to acknowledge everyone's contributions. So they "hacked the system" by randomizing the order and calling out in the published paper that the order was random. Not bad for a make-or-break moment where you can't normally have equality. No matter what truly happened behind the scenes, the team's choice of bucking convention and doing their best to name everyone an equal co-contributor sent a powerful signal.
(Sidebar: Others have found the importance of such a "one for all" spirit in different contexts too. My favorite example comes from the world of sports, where Sam Walker's The Captain Class tells us that the most legendary teams ever, worldwide, lived by similar values.)
Anyway, back to the specific lessons we can learn from the Google researchers:
- Trusting teams that truly live a "one for all" spirit are more likely to produce great innovation: Among other things, the team hacked the paper submission guidelines to make it clear that everyone contributed equally. This matches findings from a longitudinal Stanford study (Baron & Hannan, 2002) that identified two types of team cultures that achieve the best outcomes for startups, and presumably for innovation teams operating similarly. This type of team was one of them. The other was a team type where brilliant individuals are given free rein and all the resources they need to do brilliant things individually.
- Diversity of attitudes and backgrounds really, really matters for breakthrough results: No single function or background is enough. In Google's case, the researchers mixed junior and senior staff, and most were either immigrants or children of immigrants. They brought all of themselves and then, crucially, valued what each of the others brought.
- Fun is good when it contributes to productivity: In this case, the researchers' work was full of nerdy fun, especially when it came to naming the technology and paper on which they were working. Of course, such fun has to be positive.
- A great work environment is non-negotiable: We're not talking ping pong tables. Just whatever it takes to make committed, talented people do their best work, including a mindset that encourages exploration and pushing the envelope, diverse teams, and more. In Google's case, of course, that apparently did also involve excellent coffee machines for late-night work sessions.
- You can't force serendipity. But you can engineer an environment that favors it: Because the researchers worked close to each other, they could randomly meet and realize they might help each other. In one case, a researcher overheard a conversation in a corridor that sounded interesting and took place among smart, positive people. So they said hello. The individual events cannot be predicted. But the environment can be engineered. Conversely, it also means (1) that one cannot fully "plan" for innovation on command and (2) that innovation takes multiple disciplines and backgrounds.
It takes empathy for your own org (not just for users) to understand the factors arrayed against transformational work
Being "set up for success" (part of having a must-do purpose) is a rarity. Often, it's perfectly rational for others beyond the team to reject your work, cool though you think it is.
The default for innovation work is to fail. But you can improve the odds for Agreeability/Acceptability, as this case study shows.
In the case of the 2017 Transformer Paper, the WIRED article tells us of several challenges that the team had to overcome. Unfortunately, the article sheds less light on the ways in which the team managed to overcome those roadblocks. So the lessons called out here must focus more on the issues to anticipate rather than the solutions with which to resolve them.
But that may actually help us in a round-about way, by putting greater emphasis on an intermediary issue that needs mentioning:
To be quite honest, I have met too many innovators who set themselves up to fail by simply rejecting the context in which they do their work. That kind of "stick your head in the sand-ness" may feel better but still produces blind spots. The problem is not always the shortsightedness of operators. It can just as much be the reality distortion field/magical thinking that innovators produce.
People who focus on multiple options for growth, on near-in returns, and on risks of change are not all idiots. Few innovators would say out loud that they consider their colleagues from other departments bozos. But their interactions and actions quite clearly show their dismissal of the rest of their org.
As a first step then, innovators must acknowledge that others with wider or different perspectives don't share innovators' own excitement about the current work. And as a second step, innovators must accept that that is actually ok. Only once we truly apply the same empathy to our own co-workers as we do to users will we ever gain operators' support.
Innovators should then make sure that they can accept these lessons from the Transformer Paper:
- Corporations need innovations that are cheap and instantly high-performing: In Google's case, the company happily adopted initial insights from the research group that readily plugged into existing businesses ... and then lost interest in the rest.
- Expect the broader org to be just lightly interested in your work (even if it's good), as one interesting effort in a broader portfolio: In Google's case, the researchers working on the effort "had a strong hunch [they] were onto something" great. Their leaders and stakeholders, though, saw both their work and that of others. And so the leaders' emotional investment in this one effort among several stayed modest.
- It's perfectly rational for executives to dismiss work that requires self-cannibalization: We may talk a big game about "if we don't disrupt ourselves, others will." Easy for us to say: Our bonus and very job don't depend on the current business' success. That said, yes, others will then be happy to cannibalize us. But our bosses and stakeholders can only care if their own incentives are set up to be impacted by such cannibalization. Don't expect people to work against their own self-interest.
- Even (and especially) smart, competent outsiders may reject good ideas, especially ones that counter current orthodoxy: Even one Google researcher's own father, another AI expert, rejected his son's hypotheses for a long time, as did many outside experts at first. "Transformers did not instantly take over the world, or even Google." This makes sense. Orthodoxies at work often become orthodoxies by working pretty well in lots of situations. So, just because you think you found an exception that upsets an outdated apple cart, that doesn't mean that others should instantly agree. Maybe you are just wrong. Sure, you're convinced that you are right. But leave others space to want you to prove it.
- Even great success does not ensure longevity of innovation teams: All eight researchers eventually left Google. And the Google Brain team on which several of them worked has been merged into another team. Plan on your innovation team having a built-in sunset date. For example, many teams I have encountered lasted 3 to 7 years before being disbanded or radically re-imagined.
Footnotes
Organizations involved or mentioned (External links)
Conference on Neural Information Processing Systems (NIPS)
Google Brain (merged into Google DeepMind since)
Further reading
Baron, J. N., & Hannan, M. T. (2002). Organizational Blueprints for Success in High-Tech Start-Ups: Lessons from the Stanford Project on Emerging Companies. California Management Review, 44(3), 8–36. https://doi.org/10.2307/41166130
Levy, S. (2024). 8 Google Employees Invented Modern AI. Here's the Inside Story. WIRED. https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA. https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
Walker, S. N. E. (2017). The Captain Class: The Hidden Force That Creates the World's Greatest Teams. Random House. https://bysamwalker.com/the-captain-class/
Credits
Photo "Three person about to fire dancing" by Yves Alarie on Unsplash
Disclaimers
External links for your convenience. I do my best to link to reputable sources. But I cannot guarantee or accept liability for 3rd-party links beyond my control.
I have no affiliation with or investments in organizations mentioned in case studiesânor do I ever unless explicitly called out.
The organizations mentioned do not endorse me in any way. Nor do I endorse them. These case studies are offered merely for general information purposes. They do not imply that certain choices or actions are effective or ineffective, either for the organizations mentioned or for you. Consult your own professionals before taking action on anything mentioned here.
I have received no incentive of any kind for mentioning or not mentioning organizations or for the perspectives I take. Opinions voiced above represent my subjective, editorial take, based on public data.
I use all third party content in accordance with "fair use," "open source," and similar permitted uses, to the best of my knowledge. Please contact me in case of legitimate questions.