If the accuracy and frequency of the feedback loop came written on the label of the career you have chosen, would you have made the same choice?
The feedback loop determines your learning curve. You try things out, you hear back, you make sense of what you hear, you make corrections, and you try again. The faster this cycle runs, the faster you learn.
In 2021, I was leading a unit of fifteen-odd people that was building a smart writing assistant for scientists. The time came when we had something to put out into the world, so I got some help. As an intrapreneur, I asked around the company for marketers. We segmented the market, designed a campaign around that, launched it, and then waited for results. And we waited.
It seemed to be a segmentation issue: we just didn't have the ideal customer persona pinned down. Maybe it was a discovery issue: the right people didn't know we existed. It was also a conversion issue: our funnel narrowed too quickly. After two months, I did not have a good explanation for the numbers, or the lack of them. The one thing I learned for sure was to always add more than 100% slack to the duration of a marketing campaign, no matter how tightly it was designed.
Now consider this:
Professional athletes have coaches, support staff, commentators, and the wider public to point out exactly what they’re acing and where they’re tanking.
Airline pilots have a wealth of flight data to capture how good they are at their jobs.
Surgeons have patients’ recovery times and post-op quality of life as a barometer for their competence.
These professionals can review their equivalent of game tape immediately after a performance and pick out the smallest areas of improvement. Armed with this information, they can then go on and design deliberate practice sessions and work on training simulators to get better at those identified areas of their work.
Knowledge work falls short on all fronts. No clear feedback loop, no game tape to watch and learn from right after, no training simulators.
If you’re a marketer or a product manager or an engineer or an investor or a CEO:
1️⃣ Your feedback is delayed.
2️⃣ Your feedback is ambiguous.
3️⃣ Simulation opportunities are hard to come by.
I assume none of this is a surprise to you. Market conditions, the team you're a part of, and timing all contribute to the outcome. No wonder separating luck from skill is hard.
Under such circumstances, how do you get better at what you do?
You can’t measure your progress. There are no clear metrics, except proxies such as salary and rank. So, if you can’t measure it, how will you improve it?
Maybe you can go another way. Not as quantifiable as measuring, but useful nonetheless. You can evaluate your progress. What are the signs that you're doing better, or worse? That won't be straightforward, because there are a hundred variables at play shaping the outcome.
Most don’t sweat over this. Most get frustrated and come up with coping mechanisms. One such mechanism is confidence. Appear to the world like you know. Pick up a belief system and champion it.
You will find such an ideological bunch everywhere. They will pepper their speech with _absolutely_ and _impossible_ and _totally_. When explaining their reasons, they will pile on _moreovers_ and _alsos_.
I was, and still am, one of them, depending on the issue at hand.
I remember the first time I had the distinct feeling of listening to someone who didn't seem married to an ideology. Instead of _absolutely_ there was _probably_; in place of _moreover_ there was _however_. It seemed namby-pamby and almost frustrating.
Yet, talk to Novak Djokovic and even he will admit to doubts. Even he will say he doesn’t know for sure. In vocations with reliable feedback loops, experts are comfortable projecting ambiguity. In noisy professions, like knowledge work, experts strive to project conviction. These experts aren’t paid to say ‘I don’t know.’
Early on in your career, things may be different though. You’re new so you’re building new mental models with each experience. You may have a manager who cares about you and your growth. As you move up into mid-career and assume more consequential responsibilities, that fellowship tends to end before you feel ready. And the (voluntary) work begins.
If you really want to be intentional about your growth as a knowledge worker, know that the bar for curiosity is high.
To extract feedback that lets you fine-tune your skill set and make it more efficient, you have to have your own processes for tracking your progress. You will find little of that ritualized. You will have to be creative.
It’s an ongoing thing. Next time I’ll share some of the ways I have tried to hear back from the world outside.
#Midweek 129 - How to make sure you get the most out of your team's collective wisdom as a manager
At the start of the weekend, I wrote about the value of tapping into the wisdom of the crowds. Multiple independent perspectives can help build a model that is more accurate than any one person's.
But you can hit a snag with this approach when looking to apply it in your team or business unit. Keeping perspectives independent is notoriously hard, especially in a hierarchical setting like the modern corporate organization.
As a manager mulling over the annual sales forecast or the price for an acquisition target or any other meaningful estimation problem, you face two problems even before you start.
Problem #1. Independent judgments are rarely independent. If you hear someone else’s estimation first, your own judgment will be pulled closer to what you heard. It’s called anchoring. Anchoring is pernicious because knowing about it isn’t enough to safeguard you against the effects of it.
Was Taylor Swift more or less than 9 years old when she had her first American Top 40 singles hit?
How old was Taylor Swift when she had her first American Top 40 singles hit?
Did you come up with your estimate by adjusting up from the absurdly low 9? Had the number in the first question been 19 instead of 9, it’s likely that your second answer would’ve been a higher number. Yet, you understand that the two answers should not be connected.
Anchoring is priming. You see it on menus. You see it with discounts. It is so common because it is one of those tilts of the human mind that are hard to spot in oneself.
Problem #2. If you form an estimate first and then hear about the estimate of someone on your team, you're likely to judge any difference in your favor. That is, you're likely to believe you're right and your teammate is wrong.
Repeated studies done with parents, national security experts, and adults across professions point to this finding. The greater the difference from our own opinion, the more harshly we judge others' opinions on a topic. Our self-narrative in such situations is remarkably the same: you're wrong to disagree with me.
There's a third and a fourth problem, if you will.
Problem #3: Professionals are as susceptible to anchoring as amateurs are, but only professionals have a reputation to defend, so they will deny the influence of anchors. Your team members, who are paid for their expertise, may refuse to admit that their estimates have been influenced by hearing about those of others.
Problem #4: You're the boss. As a boss you may privilege your own judgment. You may think your judgment counts for more owing to your richer experience and intuition, and so you use your authority to amplify your opinion and discount others' opinions.
The complication though is not with the judgment of any one person, whether or not you’re the boss. It is what an individual thinks of their own judgment in relation to the judgment of others.
'What should a manager do if she wants to get to better judgments and minimize the costs that arise from people getting enamored with their own opinions?' asks this 2018 Harvard Business Review piece on how to use the wisdom of the crowd.
The answer is straightforward. Pre-commit to aggregating opinions.
The specific strategy will depend on the type of question a team faces. However, committing to an aggregation strategy ahead of time can protect teams from the negative social consequences of evaluating each other’s judgments in light of their own previously-formed opinions.
Make a pact to protect yourself and your team members from their own tendency to anchor to random stimuli and to over-index on information that corroborates their beliefs.
Teams facing quantifiable questions should aim for strategies that, as much as possible, remove human judgment from the aggregation process. A team estimating how much a market will grow faces a quantifiable question; they should pre-determine an algorithm (such as a simple average or median) for combining the opinions of different team members.
When it comes to an estimation problem, don't let your team behave like a team. Don't have them talking to each other as part of the decision-making process. Let each person think and estimate independently. Then combine the group's estimates with the pre-committed rule, rather than picking any one person's estimate, no matter how correct you think it is. It may seem counterintuitive, but research shows it's more effective.
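To make the pre-commitment concrete, here is a minimal sketch in Python. It is illustrative only: the function name, the choice between median and mean, and the forecast numbers are my assumptions, not something prescribed by the HBR piece. The point it shows is that the combining rule is written down before anyone sees anyone else's number.

```python
from statistics import mean, median

def aggregate_estimates(estimates, rule="median"):
    """Combine independently collected estimates with a rule agreed on in advance.

    estimates: one number per team member, gathered privately (e.g., via a form)
    before any group discussion, so no one anchors on anyone else's figure.
    rule: the aggregation algorithm the team pre-committed to.
    """
    if not estimates:
        raise ValueError("need at least one estimate")
    if rule == "median":
        return median(estimates)  # robust to a single wild outlier
    if rule == "mean":
        return mean(estimates)    # weighs every estimate equally
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical example: five independent forecasts of next year's market growth, in %.
forecasts = [4.0, 5.5, 6.0, 3.5, 12.0]
print(aggregate_estimates(forecasts, rule="median"))  # 5.5 - the 12.0 outlier doesn't dominate
print(aggregate_estimates(forecasts, rule="mean"))    # 6.2
```

The median is often the safer default for small teams because one over-confident outlier can't drag the result; whichever rule you pick, the protection comes from picking it before the estimates come in.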
You have my sympathies if you think all this is too much work. The odds are stacked against you. But remember that very few managers and leaders actually set in place such decision-making behaviors and structures. On top of these corrective steps, there is a preventive step. Build your team and organization to be, in the words of Tim Urban, an Idea Lab, not an Echo Chamber.