AI is here to stay, and soon virtually all product managers will need the right AI know-how to navigate and understand this changing landscape.

Essentially, if you ain’t got the knowledge, you could get a little lost when it comes to AI product manager jobs. This is why so many product-led pros are embracing the technology, developing solutions, becoming certified, and leading the product charge to overcome the growing challenges ahead.

We’ve recently touched on the challenges facing AI product managers in 2023, and for the follow-up, we wanted to explore just how those challenges are being tackled and how AI is being embraced. So, we thought we’d get some stimulating snippets and insights from experts in the field…

We spoke to:

  • Scott Jones, Commercial AI & Senior Product Manager at Lenovo.
  • Omi Iyamu, Lead Product Manager at Google.

To get their thoughts and perspectives on our questions…

Q. How can AI PMs tackle issues such as explainability, transparency, and fairness?

Scott Jones:

Explainability is the crux of the whole promise of AI and ML, and comes down to UX in my mind. This is something I'm wrestling with right now on a product I just took over. Even if you have the best models and methods running under the hood, it's meaningless if the customer doesn't see the value.

It depends on what exactly the product is and does, but ultimately there should be a layer of distillation that can clearly communicate "this is what's happening and things are going well per these KPIs." If the AI is surfacing something to the user, then the UX should make it tangible and easy to understand and give the requisite input to take the output to a useful conclusion. The only way to achieve this is through research and lots of user testing.

Transparency of AI is interesting. It tends to be a black box as to where an answer came from. Audit trails are one solution I've heard some players are working on. It really depends on the customer; most customers will not be technical enough to even comprehend the story that would be told of how an optimization was reached.

This leads back to the UX. How can you provide a presentation layer on top, with a crisp UX, that tells the story in a way that is tangible and not overloaded with engineering-level technicality, unless that's what the customer and the market really want?

For trust you have to approach it as a step function with increments of automation. If the AI is designed to identify and solve problems automatically, that will sound really cool but customers won't necessarily say "oh yeah sure, just let it run wild and start impacting my business." Rather, the first step would be to surface what the AI finds and essentially attach a button for the user to click, something like "do this," to empower the user to review and approve the output.

Over time the button transitions to something like "anytime you see this again, proactively fix it for me." Then over time the user could say "only show me the things you don't know how to solve." It's about building comfort.
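
To make that step function concrete, here's a minimal sketch of how such automation levels might be gated in code. This is purely illustrative; the level names, finding fields, and routing logic are our assumptions, not Lenovo's implementation.

```python
from enum import Enum

class AutomationLevel(Enum):
    SUGGEST = 1      # surface findings; the user clicks "do this" each time
    AUTO_REPEAT = 2  # "anytime you see this again, proactively fix it for me"
    AUTO_ALL = 3     # "only show me the things you don't know how to solve"

def route_finding(finding: dict, level: AutomationLevel, approved_kinds: set) -> str:
    """Return 'apply' to act automatically, or 'review' to ask the user."""
    if level is AutomationLevel.SUGGEST:
        return "review"  # every action needs explicit approval
    if level is AutomationLevel.AUTO_REPEAT:
        # auto-fix only the kinds of issue the user has already approved
        return "apply" if finding["kind"] in approved_kinds else "review"
    # AUTO_ALL: act on anything the system knows how to solve
    return "apply" if finding.get("solvable", False) else "review"

# A repeat issue the user previously approved gets fixed automatically
print(route_finding({"kind": "disk_full", "solvable": True},
                    AutomationLevel.AUTO_REPEAT, {"disk_full"}))  # -> apply
```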


Omi Iyamu:

All of these can sort of be grouped together, but fairness is kind of its own realm. I don't necessarily think PMs should be the ones tackling explainability and transparency, and here's why: to be honest, there are tools out there, and we've come pretty far with helping to explain deeper models and neural nets, which are your traditional black box. There are a tonne of already inherently explainable machine learning algorithms out there, like decision trees.

And any sort of linear regression model or rule engine is inherently explainable already. With decision trees, you can literally just plot it out and see where and how the end result is reached. But because deep neural nets have a lot of layers, and backpropagation, and all these things, it becomes extremely difficult to know how the black box is working from a higher level. Your concern should really be about the effect of a particular feature on the actual end decision.
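
To make that concrete, here's a minimal sketch of what Omi means by "plotting it out", using scikit-learn (our choice of library, purely for illustration): a small decision tree whose learned rules can be printed as plain if/else text.

```python
# A decision tree is inherently explainable: its learned rules can be
# printed (or plotted) directly. scikit-learn is used here for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction can be traced through these human-readable splits
print(export_text(tree, feature_names=load_iris().feature_names))
```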

So if you have a black-box model, you can try to replicate its outcomes in a decision tree. There are a lot of pros and cons to doing that, but at least if you try and replicate it in a more explainable model, you can explain more. There are a few graphical tools and plugins that folks use these days, for TensorFlow or PyTorch, to help with that.
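
A minimal sketch of that surrogate idea, again using scikit-learn for illustration: train a simple decision tree to mimic a black-box model's predictions, then check how faithfully the tree reproduces them before trusting its explanation.

```python
# Global surrogate: fit an explainable tree to a black box's *predictions*
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in black box
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))  # learn the black box's outputs, not the true labels

# Fidelity: how often the surrogate agrees with the black box. One of the
# cons Omi mentions: low fidelity makes the explanation misleading.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))
```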

It really should be the researchers and the engineers trying to solve this particular problem. I also think that in the future this is going to be less of an issue: as models and platforms advance, it'll be a lot easier to explain them, or the AI could just explain itself.

There's a great book on the importance of explainability, “Interpretable Machine Learning” by Christoph Molnar. It touches on a tonne of different tools and techniques that folks use today.


Q. How can PMs utilize AI’s unique and special value in the product space?

Scott Jones:

Honestly, it can be used in infinite ways across all aspects of a product. AI is a tool that can solve all sorts of problems and drive value, whether in the product itself or in systems adjacent to it: tools for marketing, chatbots that can sell your product on a website, or technology that is actually under the hood of the product itself.


Omi Iyamu:

There are a lot of misconceptions and misguidance around AI, and I think the problem is that it's still somewhat viewed as this nebulous thing. The way PMs can better utilise it is by getting a basic understanding of AI: how it actually works, and how models are built.

To some it might seem scary, but it's stats really, it's numbers, and PMs are supposed to love numbers. So, once a PM gets a greater understanding of the baseline statistical models and systems behind AI, they'll better understand where it comes into play and where it doesn't. I think every PM at this stage should have a very baseline sort of training in AI.

They don't necessarily need to be data science experts. But say you're doing either classification or regression: are you trying to sort things into groups or bins? Are you trying to predict a particular numerical outcome? You know, stuff like that, the extreme basics. Go from there and work backwards into what you're actually trying to do with the AI; you can't just slap AI on everything and think it's going to be magic. There are things that creating a model is extremely good for, and other things where you really just need an expert system.
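
In code, that classification-versus-regression distinction is as basic as it sounds. Here's a minimal scikit-learn sketch (our toy data, not Omi's example):

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: which group does this belong to?
clf = LogisticRegression().fit([[1], [2], [8], [9]], [0, 0, 1, 1])
print(clf.predict([[1.5]]))  # -> class 0

# Regression: what numerical value do we predict?
reg = LinearRegression().fit([[1], [2], [3], [4]], [2.0, 4.0, 6.0, 8.0])
print(reg.predict([[5]]))    # -> ~10.0
```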

You need to first think about the scope of your solution, and work from there. If the product you're building relies on image recognition, and it's only ever going to be looking at, say, chairs or tables, you can literally just build an expert system for that; you don't need an entire neural network.
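
To illustrate what an expert system looks like at that scope, here's a toy sketch with made-up features: a handful of hand-written rules standing in for a trained model.

```python
# Toy "expert system": hand-written rules instead of a trained model.
# The feature names and thresholds here are invented for illustration.
def classify_furniture(height_cm: float, has_backrest: bool) -> str:
    if has_backrest:
        return "chair"
    if height_cm > 60:
        return "table"
    return "stool"

print(classify_furniture(75, False))  # -> table
print(classify_furniture(90, True))   # -> chair
```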

Fundamentally, what really helps folks is to get this understanding of how AI works at the extremely basic level; it goes a long way.

Q. How much data science experience do AI PMs need to build AI products?

Scott Jones:

I actually did a presentation for Product-Led Festival in November, which you can check out in the video below at around the 19 minute mark:

I probably can't say it better here than I did there. But the key takeaway is you don't need to be an expert. You need to be data-driven, have intellectual curiosity, and be able to guide a team in your intended strategic direction, ensuring real, tangible value delivery along the way. For example, you don't have to know which model to use for a given problem, but you do need to be able to tell the technical team specifically what problem they should be focusing on solving, and how you would judge success, to ensure they pick a model that solves the problem.


Omi Iyamu:

I think only basic experience. The focus should be more on the statistical side of data science: understanding things such as the outcomes of probabilistic equations, error bars and limits, and all those things that are mostly around statistics. This is really the base knowledge I believe PMs should have.
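
As a quick example of that statistical baseline, here's a sketch of putting error bars on a familiar product metric: a 95% confidence interval on a conversion rate, using the standard normal approximation (the numbers are made up).

```python
import math

# 95% confidence interval for a conversion rate (normal approximation)
conversions, visitors = 120, 1000
p = conversions / visitors
se = math.sqrt(p * (1 - p) / visitors)    # standard error of a proportion
low, high = p - 1.96 * se, p + 1.96 * se  # 1.96 is the 95% z-score

print(f"conversion rate: {p:.3f} (95% CI: {low:.3f} to {high:.3f})")
```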

PMs in the ML and AI field don't necessarily need to go too deep into data science, because they're not going to be building the algorithms. But you need to be able to understand, at the base level, what you're trying to do and what you want: what your product is trying to do, and where AI fits in your product, so that you can better talk about that with your data scientists, your ML researchers, and your engineers. They can take that information, say “we understand this now,” and decide which algorithm to use.

Q. Why do you think PMs are ‘scared’ of AI?

Scott Jones:

I suspect it would be a classic case of not being comfortable with the unfamiliar/unknown. The best way to build up knowledge is by feeding intellectual curiosity: read about it, network with people, ask lots of questions, and consume information from across industries about use cases. Ultimately, it's another tool and type of technology that can solve problems. PMs are all about finding solutions to problems, so it should amount to another arrow in the quiver, so to speak.


Omi Iyamu:

A lot of PMs have limited interaction with actual data beyond monthly active users, CAC, key KPIs, metrics, etc. In the AI world, all those typical things are very different and require a new level of confidence around data. The worlds are different, and there's still perhaps an inherent fear around AI; it still seems so nebulous. But I think with just a base understanding of stats, it can become far less scary for any PM.

Now that ML is extremely mature, you just have to pick up at least some base knowledge on it, and you'll be far better for it, even if you’re not a research PM, because a lot of things that you're going to be building going forward are going to use ML in one way or another.


Q. What will you be covering in your upcoming talk?

Omi Iyamu:

I'm going to talk about machine learning and privacy for product managers; these are the two core areas I've spent my career in, and I think two extremely important areas for PMs to truly understand.

I think the interesting thing about the dynamic between these two is that ML is all about using more data: the more data you have, the better your ML models will be, for the most part, right? But privacy is a lot about restricting access to data, and more often than not, less data is better in the privacy world.

These two worlds are both racing ahead in the industry. So, how do you find a balance between the two? I'll be touching on this.

My goal, by the end of my talk, is to reassure teams that are scared of AI and privacy not to be, and to feel like it's okay; it's a lot simpler than it seems. It's not this huge dragon of a myth.