Our AI & Impact Lead Minna Mustakallio has spent the last year shaping Un/known’s mindful and responsible AI offering. Concurrently, she has been steering an organization-wide AI strategy & governance model for a major Finnish corporation.
In this conversation, she opens up about her trailblazing journey in responsible AI and some of the key lessons she has gathered on the way. No AI was used in creating or editing this text.
AI hype went into overdrive in November 2022 with the public release of OpenAI's ChatGPT. But you had already been involved in companies such as Saidot.ai and Silo.ai when we started Un/known three years ago. Can you tell us more about your journey in AI?
My journey in AI started around 2017. I was at Futurice, an international tech and design company, where we mainly worked with digital services, AI, and data. I had a role as a design and business leader at the time.
I was always interested in the impact and real-life consequences of the technology we built. So I kept asking questions: If we use this data now, what will happen down the line? If we apply this algorithm now, what will happen down the line?
At the time, AI was basically data scientists talking about deep learning and machine learning. Those were the big topics of the day. They wanted to harness data and use it for automated decision-making: copying what real people were doing and making it more efficient.
A typical example was hiring people – what can we do with AI to make recruiting more efficient? And of course, I had a lot of questions about that.
So I asked if anyone had been thinking about how an AI hiring tool should be designed and what potential consequences should be considered. I was told, "Not really, not in that way. But the client is very enthusiastic about it."
Then what happened?
At that point, few people were asking the questions I was asking. I was kind of a trailblazer here in Finland. Most people working in tech were skeptical about even considering the moral and ethical issues of technology.
And as I was doing just that, Futurice decided to create a new role for me, a data & AI ethics role. I give them a lot of credit for doing that.
This role allowed me to dive into the global discussion around not only responsible but also mindful AI. The discussion is obviously not only about technology but also about humanity. It is about understanding the human consequences of technology. It is about wanting to bring that understanding into all AI development, from strategy to product development and everything in between.
And from Futurice, you went to do just that at Saidot.ai?
Soon after that, I moved to a Head of Product role at Saidot.ai. They were just starting and were all about building a platform to make AI governance and transparency easier.
That was a very valuable experience for me. It has since become evident that the questions we were asking about ethics and the consequences of AI are more relevant than ever.
With all this AI experience, what is something you have learned that most people don't agree with?
That AI is about humanity, about people, as much as it is about technology. I think that is something that people often do not realize.
Can you elaborate?
When we automate work or make decisions based on automation, it is not a clean process that we are automating. It is messy because it is full of human messiness. Very seldom do we get successful outcomes with data and technology alone, as there are connections and touchpoints already in the process that are not captured by data and technology.
That is a great point. And as we know, organizational dynamics often add to the complexity.
When ICT brings technology in, there is often a power imbalance. The people whose work is affected often do not have enough say in these situations. There's this notorious doctor story: "Hey, I thought this system was supposed to work for me and help me serve people better. But now it seems that both the patient and I are actually serving the system."
Just wow.
At the same time, organizations are realizing that the new generative AI services are very easy to get into and that they are spreading like wildfire through their teams. So they start wondering if they need guidelines, governance, and all that. What are some of the challenges that leaders face at that point?
People are indeed starting to incorporate technology into their work without ICT being involved in any way. They wonder if certain services could help in their daily tasks and just start trying things out.
This is obviously a fundamentally positive thing. But it is also a challenge: there is a tension between trying new things and mitigating generative AI's potential risks and consequences.
Also, there's a pretty large legal gray area at the moment. An area where everybody's kind of thinking: okay, I'm using this now, but could it be illegal tomorrow?
What leadership should be asking is: how do we calibrate all these new tools to our values and our strategic goals? And from that perspective, do they deliver the desired impact down the line?
So, what functions should be involved when organizations start tackling these issues?
The initiative should start at the senior leadership level. It should not be buried anywhere else, as you need to make sure that whatever technology you are using is indeed serving your strategic goals.
Kind of surprisingly, with new technology that question often goes unasked; the technology almost seems to have a life of its own. There is a lot of stuff that just happens without anyone really asking, "Why?"
With AI & ethics, it is also easy to get stuck in pretty existential discussions. And even if they feel meaningful, they do not necessarily help. How do you avoid that trap?
Yeah, these existential discussions are real and can cause a lot of anxiety. But what helps in building an AI-ready organization is taking a systematic approach.
I have, in fact, gathered a list of the most crucial questions that help a lot in having the right discussions with the right stakeholders. They help you assess the entire system before you build it, from inputs to consequences.
You can use it to walk through your plan, involve everyone, and decide if it’s good enough.
So basically, you tackle all these complex issues by getting the right people together and then approaching the whole thing in a very systematic manner?
Exactly, by looking at the whole AI chain and asking the right questions, even if you don't always have answers ready. That alone can be very illuminating.
And the first test is always going to be the question, “Why are we doing this?” And if you can’t answer that, that is indeed a bummer.
Want to build mindful AI at your organization? Contact our AI & Impact Lead Minna Mustakallio.
Need inspiring & actionable generative AI training for your team? Contact our Lead Researcher Anna Haverinen.