Knowledge Institute Podcasts
-
AI in Product Design with WongDoody’s Valentina Proietti
September 10, 2024
Insights
- AI must enhance user experience, not just be a trend: Companies often rush to incorporate AI into their products without considering if it's truly necessary. The focus should be on using AI to improve functionality and solve real customer problems, rather than adopting it simply because it's popular.
- Ethical AI implementation requires strong leadership and governance: Successful AI integration demands more than just technical know-how. Companies need clear leadership, strong governance, and ethical guidelines to ensure AI is used responsibly, balancing innovation with the protection of user privacy and trust.
Kate Bevan: Hello and welcome to this episode of the Infosys Knowledge Institute's podcast on being AI first, The AI Interrogator. I'm Kate Bevan of the Infosys Knowledge Institute, and my guest today is Val Proietti, who's head of design at WongDoody. Val, thank you so much for joining us. It's really great to see you.
Valentina Proietti: Thank you for having me.
Kate Bevan: Honestly, it's a pleasure. I think what I want to start with is just if you can explain a little bit about what you do at WongDoody.
Valentina Proietti: So, at WongDoody, I head the product design team. We design products that are digital, physical, or actually both, for a lot of businesses around the world. We are part of Infosys, and we are kind of connected with a lot of clients.
Kate Bevan: Okay. So, when a client comes to you and says, "Oh, we want to build this product and we want to use AI", how do you help them decide if it's actually right to use AI? Or how do you maybe convince them that actually they don't need to use it?
Valentina Proietti: Clients come to us, and we design lots of products. They come to us to really innovate and scale and be ahead of the curve, and AI is the technology that's out there, everyone's talking about it, so we can't simply say, we don't think you should use AI. Our clients want to explore it. But so many times it comes to a point where you look at the business, you try to work with their products, and we have to be quite honest and say AI is potentially not a good solution for you. It is always something we have to explore, though. And a lot of businesses are already doing it; it's just that not all of them can really embrace this new technology to better their customer experience.
Kate Bevan: So, what kind of products are you building that are using AI?
Valentina Proietti: So, we're using the narrow type of AI at the moment. Narrow AI is an AI that's really focused on a specific topic. Obviously, we're not looking at an AI that can answer any question; we're looking at a pool of data that is safe, that is vetted, and really helps users solve a specific problem. So, it's narrow on a specific business or a specific set of tasks, very specific. And for the products that we design for our customers, we do a lot of prototypes, and we do lots of concepts, and we really help them understand what AI can do for them as a business and for the customer. So, for example, we're working right now with a telco company. They want to make sure that we consider AI as part of their solution and how customers can interact with them as a brand, solving their problems, making sure they have Wi-Fi in the right rooms. So, we're really trying to help them use this technology to augment that experience with the brand and the product and make sure that we create value for their customers.
Kate Bevan: How do users react, say a user of that app with that telecom provider, with that ISP? How do they react when they realize they're using an AI product, when they're talking to an AI?
Valentina Proietti: So, we were debating if we had to tell users, if they had to know they were using an AI. When we talked to the business we're working with, we were exploring a little bit of what the ethics are around that. What do we tell users? Do we need to tell them they're using an AI, or do we simply just offer them a solution that works for them? And what we found is that different types of consumers perceive the AI interaction differently. We went with the route of not telling them it was an AI, just the route of testing the technology, and the interaction and the product that we created was clearly not a human interaction. So, it was an interaction with a voice-activated product. And because there was quite a lot of transparency about the fact that it wasn't a human, we decided to not tell them that it was an AI. But throughout the testing, it sort of came out that the interaction and how the AI was solving their problems created a lot of value. And that's when we understood that actually consumers didn't care whether there was an AI or not; they only cared about the fact that it solved their problems and that technology was really helping them. They could see the benefit, and they didn't mind interacting with an AI at that point.
Kate Bevan: That's really interesting because if you look at the narrative in places like, I don't know, like Twitter or X, as we must now call it, there's a lot of concern about that transparency around whether you're talking to an AI or not. But you are saying, certainly on this project, they didn't mind, and they just loved that it solved their problems. That's quite a disconnect actually between the concerns of, I don't know, high status, highly engaged, highly online people versus ordinary consumers. And I'm not saying ordinary consumers to diss them in any way, shape, or form. Have you found that with other products you've built with other vendors?
Valentina Proietti: Yes, absolutely. So, transparency is definitely one of the principles that we try and really make sure businesses adhere to when it comes to creating an AI product. Users need to understand that when they're talking or interacting with an AI, they're not talking to a human. And that's where the misleading part comes in, like the X example you mentioned, all the things that we see on social media that might not even be AI, could just be a bot actually. That is where the concern is. It's not necessarily with the technology, but the use of that technology. The transparency around it is imperative, and so are the ethics around it. But when an AI helps you reactivate your Wi-Fi, potentially that isn't where the problem is. It's when you think you're talking to a human, but you're actually not, and the information that you're consuming doesn't come from a source that you can trust. That's where the problem is.
Kate Bevan: Do you think users feel more or less trust when they're talking to an AI?
Valentina Proietti: We found through our research that it depends. There's a lot of difference between people in, for example, Asia versus Europe, or versus North and South America. Different cultures perceive AI differently. Trust really is a case-by-case thing, based on the brand, based on the experience, based on the framework that businesses create around AIs. The trust is there when the value is there. When users understand that the interaction with an AI creates a lot of value, the trust is immediately there. When we tested the telco product I was telling you about, users were wowed.
Valentina Proietti: The AI could understand user intent better than any human could. I would've had to ask two or three questions to understand what that person was asking the machine to do. And instead, the AI managed to do that really quickly and solve their problem really quickly. So when the value is there, the trust is immediately gained.
Kate Bevan: So, do you think we're going to see a lot of customer service going in that direction?
Valentina Proietti: I think it's inevitable that will happen. The balance is going to be very tricky. It won't be an easy thing to achieve, to make people feel that they can trust what they're using and who they're talking to. And I think we've seen it in the past with the failures, unfortunately, of chatbots. It hasn't gone super well. In fact, some of the customers we work with are like, "Please do not build a chatbot. We don't want that. We know it's a failure". But inevitably, there's going to be a point in time where a lot of the things that AI can do are much faster. If you think about having to be on the phone for 20, 30 minutes to get a bill explained or to reactivate a service, those things can be done really, really quickly. So, I think that for certain tasks, not all of them, when it comes to customer service, it'll be replaced. But the human element of it is incredibly important. We have to really think about what is replaceable and what isn't. And I think we are finding that in a lot of retail settings, for example, or physical spaces, that human element is still very, very important.
Kate Bevan: You're talking about that face-to-face, actual human interaction rather than sort of an online interaction. Yeah. Is there any regulation yet that says, oh, you must disclose when a user, a customer, a consumer is talking to an AI? Or do you expect that to come along?
Valentina Proietti: I don't believe there's a regulation today. Obviously, every government, every country, every state is going at different paces in different ways, more liberal, less liberal. I think the EU is trying to implement quite a lot of strict rules around it versus North America, for example. But as we work with businesses, we do think that it's very, very important that we are very transparent about using AI and who's using it and who you're talking to. The regulations and the ethics around it are very, very important topics. We have to explore it. We have to push for more regulation. We've learned from social media that we can't let this happen again, from putting democracy in danger to children's safety and behavior, just to name a few. That's the reason we can't fail at this again; we have to be talking about it. It's very important that we use this technology in the right way.
Kate Bevan: I am reasonably optimistic, I think, about the fact that we seem to be getting out in front of regulation rather than being reactive to it like we have been with social media, for example. Are you optimistic or are you more pessimistic about that?
Valentina Proietti: Given the job that I do in terms of always trying to envision different futures, trying to push businesses to do better and innovate, I am very optimistic. I do think that it is an amazing technology. It's incredible that we get to use it. It kind of makes our lives so much easier. It's so important that we embrace it, we use it, we're conscious, and we really push it in the right direction. Yes, but optimistic, nonetheless.
Kate Bevan: What's the right way to implement AI in software and products?
Valentina Proietti: There's a lot of research when it comes to implementing an AI in a product. We need to really understand, as we said, if it is the right solution, if it is the right technology. A lot of times we talk about AI, but it's not an AI, it's actually something much simpler. But implementing it really needs a framework. It needs governance, it needs understanding the data, it needs testing, a lot of testing. We run a lot of tests on some of the generative AIs that are commercially available out there that we might be using for our clients. And we find the biases are very real, even on very simple tasks, and we need to be able to implement it in the right way. So, we help businesses create an acceptance level of what the AI needs to be, how it needs to behave, and how it needs to communicate with customers and clients. And we create an experience, we call them experience principles in the industry. And we really try to make sure that those principles go from where the data is, to the technology, to how that product is created and developed and then iterated upon. Creating that consistent experience for an AI is how we go about it. But lots of testing and lots of rigor around the data, for sure.
Kate Bevan: I mean, I sometimes think that a lot of businesses think, oh, AI can be a real shortcut to whatever they're trying to roll out. But I think from what you are saying, making sure that it's done ethically, the data, the testing, it's not a shortcut at all, is it?
Valentina Proietti: No, it's not. It really isn't. I think you'll have a lot of benefits in the long term, but to get it right, and to get a product that isn't going to land you on the front page of some newspaper saying, this happened, we used this AI, and it created this experience, it is really a lot of work, especially as we get more and more people using them. It's not my words, someone else said it, but as we keep going and we keep implementing AI, we just really have to push it in the right direction.
Kate Bevan: And have we got the right people to work on this? I mean, I think one of the things we discovered, we did some work last year looking at how generative AI is being implemented across companies around the world and what the barriers are. And one of the barriers was actually getting the right people to work on it. Where are we with the right people to work on it?
Valentina Proietti: There are some people in the industry that we look up to and we try to get a little bit of guidance around what are the right things to do, how to do it, what are the right processes, what not to do. Also, very important in learning from other people's mistakes. I think we have to really educate ourselves. Being in the industry, we can't let this just pass or run. There's a lot of education that has to come with this. We train all of our designers to really know what this is, how to use it, how to train it, what the impact is. So, I think we are on track to get there, but there's way more education that needs to happen around how to build this and hopefully, we'll... Again, I'm an optimist. I think we'll get there. But yeah.
Kate Bevan: Have we got the right leadership in companies? Again, one of the things we found when we were doing our generative AI radars last year was there was a real sort of spread of where leadership is. Some places it's just the tech teams leading it, some places it's legal teams, some places it's the C-suite. How do you think we are on that readiness for leadership in companies?
Valentina Proietti: Mixed feelings. The leadership around AI is definitely lacking, I can see that. It's almost a novelty, but no one really is in charge in many companies. We try and be that guiding voice and try and make sure that, as I said, we follow standards, we follow ethics, we have principles around it and governance specifically. But we can see that many companies are not equipped to really be responsible for what AI will bring. And that's a big challenge and also an exciting opportunity to create this space in these companies, to create, for example, a chief AI officer. I don't know if there is one out there. I think there are some on the boards of some of the big companies.
Kate Bevan: What would a chief AI officer's skills be? Would they be a lawyer? Would they be an ethicist? Would they be a computer scientist? What would they be?
Valentina Proietti: A unicorn, yeah.
Kate Bevan: Yes.
Valentina Proietti: Yeah, they would have to be a little bit of a unicorn. I think we find people that have quite a lot of experience. I don't think they need to be lawyers. I think they can have a team of lawyers they can inform, and I think it's just about that top-down approach, as in creating that AI knowledge and having that governance, building teams that know what they're doing and pulling people together, and that then cascades down into the business. I think they need to understand the technology, for sure. A big understanding of data is a big plus as well. And an understanding of the business. It's not an easy job.
Kate Bevan: It's really not, is it? I'm going to come to my final question. Do you think AI is going to kill us?
Valentina Proietti: I was thinking about this. It's really refreshing that you're asking the question and we're asking ourselves the question, and we're really exploring the consequences of this technology. As I said, we've learned from social media. I don't recall anyone asking if social media was going to put democracy in danger. So, it's very, very good that we are at the point where we are questioning it and everyone's thinking about it. Even my mom knows about it, which is crazy. But I think we are also the makers of our destiny, and the fate of humanity is in humanity's hands, so we will be responsible for whatever comes.
Kate Bevan: I think that's a great place to leave it. Val Proietti, thank you so much for joining us today.
Valentina Proietti: Thank you, Kate.
Kate Bevan: The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts. And visit us on infosys.com/iki. The podcast was produced by Yulia De Bari and Christine Calhoun. Dode Bigley is our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Valentina Proietti
Valentina is a dynamic creative professional with 16 years of experience in the industry. With a deep passion for emergent tech, they've successfully collaborated with some of the biggest names in the business, including Spotify, Tom Ford, AMC+, and Bloomberg. Valentina is renowned for their ability to lead and grow diverse teams, driving innovation and creativity to deliver outstanding results for their clients. Today, they hold the position of Head of Product Design at WongDoody London Design Studio, where they continue to inspire and push the boundaries of what's possible.
- On LinkedIn
About Kate Bevan
Kate is a senior editor with the Infosys Knowledge Institute and the host of the AI Interrogator podcast. This is a series of interviews with AI practitioners across industry, academia and journalism that seeks to have enlightening, engaging and provocative conversations about how we use AI in enterprise and across our lives. Kate also works with her IKI colleagues to elevate our storytelling and understand and communicate the big themes of business technology.
Kate is an experienced and respected senior technology journalist based in London. Over the course of her career, she has worked for leading UK publications including the Financial Times, the Guardian, the Daily Telegraph, and the Sunday Telegraph, among others. She is also a well-known commentator who appears regularly on UK and international radio and TV programmes to discuss and explain technology news and trends.
- On LinkedIn
Mentioned in the podcast
- “About the Infosys Knowledge Institute”
- “Generative AI Radar”, Infosys Knowledge Institute
- WongDoody