Data Scientists: How to get instant buy-in from your org?

Let me tell you a unique story.

You’re on the 50th and last floor of a giant building. There are many white shirts and Patek Philippes around the table; no doubt we’re at a board meeting.

The board of directors decides that the company should lead the AI innovation ecosystem in its industry.

To that end, they hire a team with great skills, spend a vast amount of money on resources, and after a few months, they find themselves back at the starting point with less money and no trust at all in AI.

The end.

Wait, what? The story isn’t unique? It happens all the time, you say?

Still, you should avoid this situation at all costs. I’m sure you can build a healthy collaboration based on trust, transparency, and communication no matter what.

Data scientists shouldn’t be isolated, or isolate themselves, within the org by leaving the strategic side to consultants or managers. Data work done in isolation can offer important insights, but it falls short of the discipline’s full potential to transform industries, improve society, or offer a competitive advantage.

Getting data science outputs into production will become increasingly important, requiring leaders and data scientists alike to remove barriers to deployment and data scientists to learn to communicate the value of their work.

"I believe that the competitive advantage of AI exists at the intersection of data, technology, and people. You can only get value from your AI when it’s woven with deep business accountability and expertise. For enterprises to unlock the real power of AI and realize its value, they must first build a strong partnership between data science and the business.”

But how? Here are my 3 principles for getting guaranteed buy-in.

Build trust: white box yourself

In general, we can't talk about collaboration without trust. In AI/ML, we can't build trust with other stakeholders without transparency.

At a time when complex methods are turning into darker and darker boxes, keep your work and your team’s work as transparent and understandable as possible.

Consider "white-box" yourself. Speak their language.

First, identify who your stakeholders are, make personas and build empathy for those personas.

Imagine being surrounded by dollar signs and product pitches, or switching from meetings with UX folks to tech or even legal folks. It's about having enough empathy for your audience, the stakeholders. In most cases, you'll interact with people who don't have your data literacy. The problem? If you want to be transparent about your work to build trust, you may feel obligated to get into the details and explain the precise principles of the data work.

In short, adapt your message to your audience.

Then, the track record. Board members have heard horror stories about billion-dollar unsuccessful AI investments by IBM or Google. The starting point of your relationship with your stakeholders might be biased due to those stories and the fact that every. fucking. article (mine included, sorry) on the topic of AI starts by announcing that 80%+ of AI projects fail.

The ultimate antidote is to build a track record of wins that you can present from a business point of view for maximum impact. And they don't have to be giga-ambitious projects: a culture of shipping to production and of pragmatism, accumulating a lot of small wins that align with the business goals, has far more value than being a "one-hit wonder".

There's a cheat code to multiply trust in your ability to ship and succeed: it's called under-promise, over-deliver. It can be tricky, since in ML even achieving the bare minimum still requires a substantial investment. That being said, it's mandatory to set realistic expectations with your stakeholders at the beginning of every project. It will create common ground for everyone, and give you a baseline to blow past with your finest models and witty methods ☺️

Their problems are yours

Now you're working on building trust in parallel, great. Keep a little bit of that empathy to answer this question: how can you get buy-in if your stakeholders don't know the ins and outs of exactly what they're buying?

If your answer consists of explaining how a transformer works, go back to the first section.

Let's be clear: they pay you to know how a transformer works and how it can bring value to the company's problems. Not to receive an introductory class on whatever new approach you chose every time you have a meeting with them.

If you want to maximize your chances, focus on giving them the full picture: not only the technical solution, but how you'll approach the problem and how it will be solved in terms of product, or even business, outcomes.

First, I would highly recommend explicitly stating what you've understood of their problem and of what they're trying to achieve, in your own terms.

Aligning everyone early on helps surface the data points and constraints you'll use as references: budget restrictions, the performance baseline, product requirements like latency, or even ethical concerns, all while connecting the business problem to a research problem.
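One lightweight way to capture that alignment (a sketch with hypothetical fields and values, not a prescribed format) is to write the shared understanding down as a small, reviewable artifact that both sides sign off on:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectBrief:
    """Plain-language record of what was agreed before any modeling starts."""
    business_problem: str           # the problem restated in your own terms
    success_metric: str             # a business/product metric, not a data metric
    performance_baseline: str       # what the current process achieves today
    budget: str                     # people, time, and money available
    constraints: list[str] = field(default_factory=list)

# Hypothetical example for a churn-reduction project
brief = ProjectBrief(
    business_problem="Reduce monthly churn among annual-plan customers",
    success_metric="Churn rate in the annual-plan segment",
    performance_baseline="Current retention emails recover ~5% of churners",
    budget="Two data scientists for one quarter",
    constraints=[
        "Predictions served in under 200 ms",
        "No sensitive demographic attributes used as features",
    ],
)
print(brief)
```

When priorities shift (they will), you update this shared record instead of re-litigating the whole project from scratch.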

Then, when you have to present metrics, make sure to define the success metrics beforehand, and since they're linked to a business/product problem, they must be business/product metrics.

It can be LTV, NPS, revenue, or churn rate: anything understandable by everyone, anything that shows you're aligned with the initial problem.

You can still present your data metrics, like accuracy or ROC curves, as a way to document your journey, but they won't have as much impact as metrics that everyone can reason about.

This is exactly why we are building guap: an open-source Python package that helps data teams get ML evaluation metrics everyone can agree on by converting your model outputs into business outcomes, a.k.a. profits.

It's Google Translate for in-house ML. It's also the secret sauce for getting buy-in. If you need to remember one thing from this blog post, it should be: link your solution to the initial business problem and translate your metrics into a metric everyone understands. Like guap.
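To make the translation concrete, here's a minimal sketch of the idea in plain Python (not guap's actual API; the dollar values and the churn framing are made-up assumptions): fold a model's confusion matrix into a single expected-profit number the business can react to.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical per-prediction business values (in dollars) for a churn model:
# catching a churner saves the customer, a false alarm costs a retention offer.
VALUE = {
    "tp": 120.0,   # churner correctly flagged -> retention campaign works
    "fp": -15.0,   # loyal customer flagged -> wasted retention offer
    "fn": -120.0,  # churner missed -> customer lost
    "tn": 0.0,     # loyal customer left alone -> no cost
}

def expected_profit(y_true, y_pred, value=VALUE):
    """Translate raw predictions into a dollar figure stakeholders can discuss."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp * value["tp"] + fp * value["fp"] + fn * value["fn"] + tn * value["tn"]

# Example: compare two candidate models on the same holdout set.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
model_a = [1, 0, 0, 1, 0, 1, 1, 0]
model_b = [1, 0, 1, 1, 1, 1, 1, 0]
print(f"Model A: ${expected_profit(y_true, model_a):,.0f}")
print(f"Model B: ${expected_profit(y_true, model_b):,.0f}")
```

The per-outcome values are exactly the kind of numbers you agree on with the business up front, which is where the alignment from the previous section pays off.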

I've got one last tip, and it sounds like a stretch, it might even be a stretch: try to be as agile as possible. I know I'm asking a lot here because the ML lifecycle can be lengthy. A few ideas to make it happen:

  • Challenge ML ideas strongly early on. Better to find red flags now than later.
  • Prototype everything to reach the baseline as soon as possible, no exceptions. People have prototyped self-driving cars; I'm sure you'll find a way for your use case.
  • If your accuracy is still below the baseline, kill it for now (a minimal version of this check is sketched after this list). If it's only slightly above, work with the UX team to compensate for the inevitable errors with a fantastic user experience, but ship it. Trust me, after years of adoption and iteration, that's still how most of today's chatbots that use Natural Language Processing get by.
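Here's a minimal sketch of that kill-or-ship baseline check, assuming a scikit-learn-style workflow (the 2% margin is an arbitrary placeholder you'd agree on per project):

```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

def baseline_gate(model, X_train, y_train, X_test, y_test, margin=0.02):
    """Ship only if the prototype clearly beats a most-frequent-class baseline."""
    baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
    baseline_acc = accuracy_score(y_test, baseline.predict(X_test))
    model_acc = accuracy_score(y_test, model.fit(X_train, y_train).predict(X_test))

    if model_acc <= baseline_acc:
        return f"Kill for now: {model_acc:.2%} vs baseline {baseline_acc:.2%}"
    if model_acc <= baseline_acc + margin:
        return f"Ship with UX safety nets: only {model_acc - baseline_acc:.2%} above baseline"
    return f"Ship: {model_acc:.2%} vs baseline {baseline_acc:.2%}"

# Hypothetical usage on synthetic data, just to show the decision output.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print(baseline_gate(LogisticRegression(max_iter=1000), X_tr, y_tr, X_te, y_te))
```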

Two birds, one stone: you'll build more trust and a better chance of getting your buy-in!

Reactive to Proactive

When I'm procrastinating on Twitter, Quora, and subreddits, I often come across the same feeling shared by data scientists: loneliness and isolation.

I guess it might be explained by misalignment on the vision and success metrics, the fact that not everyone is fluent in the data language, or the paradigm shift from software to ML projects that not everybody knows how to handle and manage daily.

Every company has a different structure, culture, and politics, but I think that no matter the context, you can avoid sitting at the receiving end of the chain by becoming proactive about opportunities you think your company should take.

Seize the opportunity: you have data in your hands and you know how to use it productively. You've been placed in the racing seat; don't limit yourself to the gas station.

Think lean. No waste allowed: you can recycle the data generated by your first product to suggest new use cases to tackle.

That's great news, since your first product might fail; at least you've anticipated it and planned your rebound. Hopping from use case to use case with the data you accumulate along the road seems like a good strategy.

And this is a proactive attitude that is valued in the enterprise: getting clean, labeled data is a major cost for a data team, and you'll make great allies if you stay smart and frugal about strategies to cut it!

But I have bad news: doing the job alone is not enough. Communication is one of the most important aspects of being a proactive and connected data scientist. It will feel like over-communication at the beginning, but trust the process and don't underestimate the power of the ripple effect, of serendipity.

Your findings can have an enormous impact on the business.

A missed attempt, a failed experiment, or a fun pattern you've discovered may resonate with your audience in interesting ways, resulting in an outlandish idea, a creative approach, or even your next innovation. You can't know in advance, and that's exactly why you have to document all your battles and share them whenever you can.

I've experienced it myself, and it's a humbling experience: sometimes the key you've been waiting for all that time will be handed to you by someone who struggles to fully understand your job.


Today, the data science discipline is finding its identity, but the journey to maturity is ongoing. We expect that the next 2-3 years will continue data science’s trajectory towards becoming a strategic business function across a wider range of industries than today, but that institutions and enterprises will face continued growing pains in the process.

Just as I've advised managers to “own the confusion matrix” and translate model outputs into comprehensible metrics, data scientists can do their part by advising the board for the better: it requires communicating a lot, in the right way, and being genuinely interested in the business strategy.

The upside is gigantic. That's why guap's mission is to empower data scientists as deeply connected leaders, with tools that facilitate collaboration and communication.

Convinced? Build this vision with us, guap is open-source for a reason :)

guap-ml/guap: Algorithms outputs to business outcomes. The magical ML evaluation metric everyone can agree on 🎩
