By Suzanne Vickberg and Kim Christfort
Would you enjoy being stuck in an airport with me? If after chatting with me for half an hour you don’t think so, there’s a good chance you wouldn’t choose me to be on your team. This screening technique is commonly known as the airport test, and the basic assumption behind it may be flawed. I suggest using the life-raft test instead.
If you’re lucky enough to have the budget and headcount to add a new member to your team, there are lots of ways to go about making your selection, and many of them involve some element of testing for fit. Like, do you think my working style is a fit for the role? Or, is my temperament a fit for the work environment? Or, is my personality a fit for the culture? If you think I’m a good fit and that you would, in fact, have a great time with me in the airport, you very well might welcome me to the team. But if not, best of luck to me.
This kind of selection criterion can create teams that work together smoothly and have a great time in the process. But here’s the common problem: people who feel like a good fit are often a lot like you. You might share the same perspectives, prefer the same communication methods, and have the same sense of humor. You might share the same strengths and the same weaknesses, too. And I’ll bet you can see how a whole team of people with the same strengths and the same weaknesses may not be the best idea.
But you may be tempted to select teammates this way because it feels pretty good to work with people who are a lot like you. If you’re a creative type, being around other creatives can inspire you to new heights of innovation. If you’re a detail person, it can be a real relief to be around others who get the importance of the little things. But these feel-good scenarios can lead to some pretty undesirable outcomes. Too many creative types together can waste a lot of time and money chasing one impractical idea after another, abandoning each before it comes to fruition. If you all see yourselves as the idea people, who is focused on execution? Likewise, too many detail people can get trapped in a state of analysis paralysis, make very little progress, and end up choking on the dust of their competitors. If you’re all focused on the minutiae, who’s keeping an eye on the horizon and making sure you move forward in a timely way?
This is typically not a recipe for success.
So perhaps you should be looking to add some diversity to your team, right? Maybe you should even select the person you’d least like to be stuck in an airport with? Well, that might be a good start. A team of creative, big-picture thinkers could probably benefit from a teammate with a penchant for thinking through the specifics of implementation. (Even if they find their minds wandering during conversations with that person.) And that detail-obsessed team could probably use some encouragement to make their way out of the weeds. (Even if the person drawing them out makes them feel like they’re caught up in a tornado.) But if you just add a teammate or two with a different perspective and stop there, you’re not likely to get the effect you’re hoping for. Because a team with a majority type tends to favor that type’s perspective and way of working, overshadowing those of any token minorities.
Take as a case in point a team I once worked on, full of high EQ types; we prided ourselves on being empathic, diplomatic, and inclusive. But we sometimes had a hard time moving forward, or even choosing a direction, because we valued everyone’s perspective so much, and we were reluctant to appear critical of anyone’s ideas.
Then a new member joined our friendly but floundering team—a more competitive, goal-focused type. And we were excited, thinking he could help us get and stay on track. And he certainly tried, but his communication style was direct, to say the least, and it rubbed us all the wrong way. When he pushed us to move ahead, we felt railroaded. When he pointed out the flaws in someone’s line of thinking, we felt offended. Looking back, I regret to say that we neither appreciated nor benefited from his unique perspective. Instead, we froze him out, thinking, how dare he?! Didn’t he understand how we did things on our team?! And honestly, no, he didn’t seem to understand. In fact, he seemed quite baffled by our reactions to his honest and straightforward approach. “Didn’t we want help making decisions and getting stuff done?” he asked. Apparently not. Ultimately, feeling unwelcome, he high-tailed it to the nearest exit. So much for us being inclusive.
Okay, so a team that’s all the same likely isn’t ideal, nor is one with a few token members who are different, if the team continues to work together in a way that’s best suited to the majority. Which brings us back to that airport test, and its alternative, the life-raft test.
Next time you’re selecting a new team member, imagine you’re not stuck in the airport, because your flight is leaving right on time with your whole team on board. But the plane makes a crash landing at sea and you’re now floating in a life-raft with no hope of immediate rescue. Would you want everyone on that raft to have the same strengths and weaknesses? Probably not. Suppose you’re all great at building things out of random items (a really useful strength on a life-raft with limited supplies), but you’re also terrible navigators (a very unfortunate weakness on a life-raft). Would it make sense to select a new teammate who was just like the rest of you? You could build lots of cool stuff together, but you’d be drifting around aimlessly.
Perhaps instead you’d wish for a teammate with great navigational skills, even if they couldn’t build things. And then, instead of expecting them to do things the way you do them, maybe you’d go above and beyond to support that person in doing what they do best. That could be the deciding factor in whether your team survives the life-raft ordeal.
So next time you’re thinking about how to make your team even more successful, take a quick inventory of the perspectives, working styles, strengths, and weaknesses of your current members. And then review how your team’s ways of working may support the preferences and needs of some types more than others. Because your goal should not be just to add diversity, but also to activate and manage it by creating an environment where all types can thrive. Or alternatively, you could just search for teammates who can do it all. But I don’t think unicorn hooves are a great idea on a life-raft, not to mention the horn.
Kim Christfort and Suzanne Vickberg, PhD, are the authors of Business Chemistry: Practical Magic for Crafting Powerful Work Relationships, on which this article is based.
Procter & Gamble Co. has been vigorously rooting out fraud and unverified data from its digital buys while also doing more influencer marketing, but those two things may be at cross purposes. In a new study, two P&G brands last month ranked among the top 10 in using paid influencers with fake followers.
The data, which comes from Points North Group (March 2018), shows that Pampers and Olay ranked No. 4 and 10, respectively, on the list of brands with the most fake followers among their paid influencers last month; Pampers with 32 percent and Olay with 19 percent. Topping the study’s list: Ritz-Carlton, with a whopping 78 percent fake followers among its influencers.
Ritz-Carlton didn’t respond to requests for comment. P&G spokeswoman Tressie Rose declined to comment on report specifics, since the company hasn’t seen the data and isn’t familiar with Points North or its methodology. “There are a lot of companies out there offering services to combat this,” she says. “Bot fraud is an industry-wide issue and one we’re continuing to actively work on.”
Even as use of influencer marketing by big marketers grows, so do questions about how it’s measured. Reach numbers used to measure influencer campaigns often come from raw follower counts, without regard to how many followers actually saw posts—or were real.
Numbers marketers usually get on influencer campaigns come either from their influencers or third-party providers “grading their own homework,” as Points North co-founder Peter Storck sees it. He’s now working with clients, whom he says he doesn’t yet have approval to name, to help “get real measures of their influencer marketing, because they don’t get them from the partners they work with.”
Points North is a firm founded by former executives with analytics experience going back to the old Jupiter Research digital ad measurement business, as well as with the Word of Mouth Marketing Association (now part of the Association of National Advertisers) and such influencer networks as Crowdtap and House Party. Storck joined co-founder Sean Spielberg, who was with Crowdtap, to form what Storck calls “the Nielsen of influencer marketing.”
Storck says he’s unaware of any other third-party audience measurement dedicated to the influencer space, though some firms have sought out third-party validation of their reach numbers in other ways. Cincinnati-based Ahalogy, for example, only charges clients for digital impressions verified by Oracle’s Moat on content created by its influencers.
Points North is releasing its first public data in the form of lists of the biggest spenders on influencer marketing in March, the most efficient spenders based on cost per thousand impressions (CPM), and those with the most fake followers. Storck says work for private clients has included a large cosmetics brand where $600,000 out of a $2 million outlay for influencers was for impressions that weren’t seen or were seen by fake followers.
The spending data is based on analysis of influencers used by brands and industry norms on payments, which Points North founders say average 0.3 cents per follower per post across the industry based on their prior sell-side experience and more recent input from clients. It’s an estimate, but one they say they’re looking to apply consistently.
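To make that estimate concrete, here is a minimal Python sketch of the arithmetic. The 0.3-cents-per-follower-per-post rate comes from the article; the follower and post counts below are invented for illustration only.

```python
# Rough influencer-spend estimate using the 0.3 cents ($0.003)
# per follower per post industry norm cited by Points North.
# The example follower/post figures are hypothetical.

RATE_PER_FOLLOWER_PER_POST = 0.003  # dollars

def estimate_spend(followers: int, sponsored_posts: int) -> float:
    """Estimated spend = followers x posts x per-follower rate."""
    return followers * sponsored_posts * RATE_PER_FOLLOWER_PER_POST

# A hypothetical influencer with 500,000 followers doing 4 paid posts:
print(estimate_spend(500_000, 4))  # 6000.0 (dollars)
```

As the founders note, it is only an estimate, but applying the same rate across every brand keeps the rankings comparable.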
Fake follower counts are based on Points North scanning followers of influencers to sort out such things as accounts making comments in languages that don’t make sense for the content or the influencer, or accounts making the exact same comments across multiple influencers and posts. Storck says the algorithm is similar to what e-mail users lean on to sort out spam.
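One of the signals described above, identical comments repeated across many different influencers' posts, can be sketched as a simple duplicate-detection pass. This is an illustration of that one signal, not Points North's actual algorithm; the account names, comments, and threshold are all made up.

```python
from collections import defaultdict

def flag_repeat_commenters(comments, min_influencers=3):
    """Flag accounts posting the exact same comment text under
    posts from at least `min_influencers` different influencers.

    comments: iterable of (account, influencer, text) tuples.
    Returns the set of flagged account names.
    """
    seen = defaultdict(set)  # (account, text) -> set of influencers
    for account, influencer, text in comments:
        seen[(account, text)].add(influencer)
    return {account for (account, _), influencers in seen.items()
            if len(influencers) >= min_influencers}

# Illustrative data: one bot spamming the same comment everywhere.
comments = [
    ("bot_42", "influencer_a", "Love this! Check my page"),
    ("bot_42", "influencer_b", "Love this! Check my page"),
    ("bot_42", "influencer_c", "Love this! Check my page"),
    ("fan_1",  "influencer_a", "Great post!"),
]
print(flag_repeat_commenters(comments))  # {'bot_42'}
```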
The CPMs are based on the spending estimates and effective reach, which not only subtracts fake followers, but also uses estimates of how many legitimate followers actually see posts, leaning on engagement rates for posts and norms for viewership gleaned from actual influencers, who get such information from their own Instagram business accounts, for example.
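Putting the pieces together, the effective-CPM calculation the article describes can be sketched as follows. The formula mirrors the description above (strip out fake followers, then apply a view rate); every number in the example is hypothetical.

```python
def effective_cpm(spend: float, followers: int,
                  fake_share: float, view_rate: float) -> float:
    """Cost per thousand *effective* impressions: raw followers
    minus the estimated fake share, scaled by the share of real
    followers estimated to actually see a post."""
    real_followers = followers * (1 - fake_share)
    effective_impressions = real_followers * view_rate
    return spend / (effective_impressions / 1000)

# Hypothetical campaign: $6,000 spend, 500k followers,
# 20% fake, 40% of real followers actually seeing the post.
print(round(effective_cpm(6000, 500_000, 0.20, 0.40), 2))  # 37.5
```

Note how quickly a nominal CPM balloons once unseen and fake impressions are removed, which is the dynamic behind the cosmetics-brand example above.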
The top influencer spenders last month, as estimated by Points North, include names you’ve likely heard of, such as Amazon, Walmart and Mercedes Benz, but also the more obscure—at least until that influencer spending kicks in—like Flat Tummy, Waist Gang Society and SugarBearHair vitamins.
Among the most efficient spenders were Heinz Ketchup (owned by Kraft Heinz, backed by the ever-thrifty 3G Capital); Ulta Beauty; and Clorox Co.’s Hidden Valley, all with CPMs around $2 or less.
P&G also had a brand in the top 10 most efficient: Vicks.
Below, top spenders, most efficient, and most fake followers.
Top Spenders on Influencer Marketing, March 2018
|Rank||Advertiser||Est. Spend|
|1||Flat Tummy Co||$1,560,178|
|7||Waist Gang Society||$317,783|

Brands Achieving Lowest Effective CPMs for Instagram Sponsored Posts (>$10k Spend) in March 2018
|Rank||Advertiser||Effective CPM|
|5||Marc Jacobs Beauty||$2.56|
|8||Call of Duty||$3.12|

Brands With the Most Fake Followers, March 2018
|Rank||Advertiser||% Fake Followers|
|9||Magnum Ice Cream||20%|