Knowledge Institute Podcasts
Wall Street, AI and Corporate Self-Governance with Tevoro's Dawn Talbot
November 04, 2024
Insights
- The rise of AI on Wall Street demands a shift from traditional compliance-driven governance to proactive self-governance that balances innovation with ethical oversight.
- Effective AI governance requires treating AI systems with the same level of accountability and incentive structures as human employees, ensuring alignment with corporate values and goals.
Kate Bevan: Hello, and welcome to this episode of AI Interrogator, Infosys' podcast on enterprise AI. I'm Kate Bevan of the Infosys Knowledge Institute, and my guest today is Dawn Talbot, who's the founder of Tevoro. She's a distinguished expert on corporate AI ethics, and she describes herself as a presentable geek, which I think it's fair to say you definitely are, Dawn. Thank you very much for joining us.
Dawn Talbot: Thank you for having me.
Kate Bevan: And when we spoke, you talked a lot about your background on Wall Street. Can you just explain a little bit about that, and what you're doing now and how those two things link up?
Dawn Talbot: My master's degree is in information systems engineering. So I initially joined Wall Street as an engineer doing globally distributed systems for large banks, but I was recruited out of there to become an investment banker. Because at that time the internet was just becoming popular and networks were being valued, and they needed someone with a little more technical expertise than the typical MBA coming out of Ivy League schools at that time.
So, I was recruited in to help with some of that early work. And then from there I moved from investment banking to research. So, I did emerging technology, institutional equity research for about a decade. And covered the rise of Silicon Valley, the encryption algorithms, the browsers, and the onset of the internet, and how that started to make huge changes in the way businesses were approaching their customers. From there, I was recruited over to raise capital and run an emerging tech mutual fund at Merrill Lynch, and that was later sold to BlackRock. So, I did a lot of work there putting together portfolios of emerging tech companies. And I spent, I guess, most of my career on Wall Street in senior executive offices or pitching as a banker to boards, that kind of thing.
Kate Bevan: So, with AI, the time we're in now, this explosion of technology and capabilities, does it have the same sort of transformative energy and excitement as the early days of the internet?
Dawn Talbot: I find it so. I really think we're only scratching the surface on where AI can be, and I think it's going to happen very rapidly. So that's much like the internet. It took hold globally very quickly, and the same thing's happening with AI. But I think that it will go further. And if we look at how we integrated the internet into an industry, we're going to see the same kind of leap forward. And I think it's going to take a little bit of a conceptual shift from executives because we've just gone through a couple of decades of process engineering and process re-engineering, and now that we have AI with such different timing variances between what humans are doing at the workplace and what AI is doing, we're going to have to switch to something that's more like state-based, almost a quantum concept.
Kate Bevan: What do you mean by state-based?
Dawn Talbot: State-based means that if you have a step in a process and certain thresholds have to be met before you go to the next step, that's a state. Rather than breaking it into bits and pieces and just automating each human's role away. I don't think that's going to be effective this time. We're going to have to shift the way we think about what we create as corporate executives and then how we manage that, because it's not the old-fashioned way of automating a process and sticking AI in little bits and pieces.
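The state-based idea Dawn describes could be sketched roughly as follows. This is purely a hypothetical illustration: the states, thresholds, and the trade-review scenario are invented for the example, not taken from the conversation.

```python
# A toy state-based workflow: the process only advances when the
# current state's thresholds are met, rather than chaining together
# individually automated steps.

from dataclasses import dataclass, field

@dataclass
class State:
    name: str
    # each threshold is a predicate over the workflow's shared context
    thresholds: list = field(default_factory=list)

class Workflow:
    def __init__(self, states):
        self.states = states
        self.position = 0

    def current(self):
        return self.states[self.position].name

    def try_advance(self, context):
        """Advance only if every threshold of the current state is met."""
        state = self.states[self.position]
        if all(check(context) for check in state.thresholds):
            self.position += 1
            return True
        return False

# Example: a made-up trade-review flow with invented thresholds.
flow = Workflow([
    State("drafted", [lambda ctx: ctx["reviewed"]]),
    State("reviewed", [lambda ctx: ctx["risk_score"] < 0.5]),
    State("approved"),
])

flow.try_advance({"reviewed": False, "risk_score": 0.9})  # thresholds unmet, stays in "drafted"
flow.try_advance({"reviewed": True, "risk_score": 0.9})   # advances to "reviewed"
```

The point of the structure is that an AI and a human can work toward the same state at different speeds; what matters is whether the thresholds are satisfied, not who performed which step.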
Kate Bevan: No, it's really not. I mean, are financial markets even ready for AI?
Dawn Talbot: I think it's interesting. I think Wall Street goes through phases where they embrace new technology and they foster innovation and creativity, but they have to do it with such careful controls in place. Because you really want to keep a firm handle on how people choose to integrate it, whether it's regulatory compliant, and whether you have firm guardrails in place. Enforcing things like impulse control. Just things that you have to do with humans when they're around a large amount of money or an important product, you're going to have to have for AI as well. And that's why I think governance models for AI are the most fascinating place to be in tech right now.
Kate Bevan: This is a huge sort of transformation for leadership, isn't it, as much as for technology or for anything?
Dawn Talbot: Yes, and if you think about it, most leaders have been in their positions for a couple of decades before they start, or they have a formal education in a discipline before they begin their company. And something like this comes into place, and the curricula haven't even been adjusted entirely to reflect this new technology and how it may impact management style and leadership. So, it's going to stretch, I think, a lot of executives to reach new heights or fail. Hopefully reach new heights.
Kate Bevan: Do we need a different type of leader in an AI-driven Wall Street, AI-driven financial markets, do you think?
Dawn Talbot: I think it's going to take someone who understands how AI is similar to and how it's different from a human employee. But they'll have to pay attention to managing the AI in a way that doesn't create loopholes in their own policies. I can give an example of that, which I think is interesting. Back when I was an institutional analyst covering emerging growth, I used to create little software routines as companions to my research reports. And there was no way for the compliance department to approve them. There was no mechanism in the regulatory oversight structure for someone to attach an algorithm to their report. In fact, it had to be submitted as a PDF.
So, one of the things I did eventually was I automated my research. And I got it to go out, read the press release of the financial announcements, come back, fill in my model, create five years of projections. And then I would write a text, tweak the model, and I could get my reports out faster than anyone else. But because I still touched the model, I had to have about $10,000 in licenses across the United States as my overhead costs.
Kate Bevan: That's not very scalable, is it?
Dawn Talbot: If I took myself out of the process, I was categorized as an algorithm, and that software system could operate without licenses as software. That's a loophole.
Kate Bevan: It feels like with banks and financial services, we are constantly trying to catch up and keep up with what they're doing. I'm thinking of things like having to regulate to stop people using WhatsApp and having off-grid conversations. How are big banks and financial services firms going to keep tabs on AI because it's moving at such a pace?
Dawn Talbot: I think that there is a self-interest and a real incentive for these executives to stay on top of it. I think that they will always be faced with these cutting-edge technologies and be forced to react to them more rapidly than a regulator or a legislator who is just by definition going to operate more slowly. So, executives are going to be forced to do something.
Now what I'm hoping is that we will get to an elevated version of self-governance rather than strictly dealing with compliance. I've always talked about the fact that accountability is very different from simple compliance with existing laws. So right now, I think the most recent survey I read found that out of 600 executives interviewed, only 5% had talked internally about self-governance. They're all experimenting with AI.
And what was funny is that in the conclusion of the survey, most of these executives truly believed that by mid-next year they'll have self-governance in place. And I don't think they understand. I think when they talk about self-governance, some people think strictly in terms of risk governance, which usually deals more with compliance than a more proactive stance of self-governance that would include creativity and innovation. How do you handle that when it comes from AI, and how do you differentiate it from something that is outside the lines from a regulatory perspective?
Kate Bevan: This feels like the real challenge to me with all this governance. I mean, we did some research, the Infosys Knowledge Institute, quite recently, which I think I shared with you, on AI readiness. And we had five pillars, data governance, technology. But we found that of all the companies we surveyed, only 2% of them were ready across all five measures. So, this feels like an enormous challenge. And particularly for financial services, where somebody touching a button can blow up the economy, and that's slightly overstating it, but there is so much hanging on governance compliance and keeping ahead of it rather than trying to catch up with it. Are we catching up with AI or are we keeping ahead of it?
Dawn Talbot: Well, I think many banks have already established AI and quantum research labs, innovation labs. I think what may be missing, and it's hard because it's closely held information many times, but what's missing is an equivalent and separate self-governance innovation program. Because I think they can be developed in parallel. And then when a company finally decides the level of opacity they acknowledge is acceptable for their AI systems in various different departments, they have a self-governance program already thought through and ready to implement. Because that self-governance is what's going to be important to the executives and the boards of directors. Risk compliance, risk governance, yeah, that'll help maybe with legal implications, but it's not going to help instill the values of your company the way a true self-governance mechanism will.
Kate Bevan: How can we use AI to drive, develop, improve governance? What can AI bring to governance, I think is my question there?
Dawn Talbot: I have to say that's the first time I've been asked that question. And I think the way I would describe it is if you look at the different models, the different approaches to AI, some limit the amount of creativity and some let it run wild. And I think we're starting to just scratch the surface of when different AI approaches are appropriate in business. So, AI may lend a hand when saying, wait a minute, I detect another AI out there, and it's pushing the boundaries of creativity in a way that may pose a hazard to the company as a whole, and we need to step in at this point and put some guardrails in place. That's I think most probable, not in the financial industry as much as it is in maybe pharmaceuticals or gene research. That's where it can really impact lives in a significant way.
Although in financial institutions, we impact people's futures in a significant way. So, in that context, I would say you don't want AI inventing a new financial instrument without having self-governance in place. And the way that Wall Street traditionally does that is to triangulate the product and have competing agendas, so that no one participant in the financial institution can maximize their profit without input from the others. And that's stealing a little tool from the blockchain world called consensus. So, I have always maintained as an engineer that putting consensus algorithms higher in the software stack than they have traditionally sat in blockchain could be very beneficial when we apply them to AI systems.
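Dawn's "consensus higher in the stack" idea might look something like the toy sketch below: a proposed product only proceeds if reviewers with competing agendas reach a quorum. The reviewer roles and the two-thirds quorum are invented for illustration; nothing here comes from a real institutional process.

```python
# Hypothetical sketch: a new-product proposal is approved only when a
# quorum of independent reviewers with competing agendas vote yes,
# echoing blockchain-style consensus at the application layer.

def consensus_approve(votes, quorum=2 / 3):
    """Approve only if at least `quorum` of reviewers vote yes.

    `votes` maps a reviewer role (e.g. risk, compliance) to a bool.
    """
    yes = sum(1 for v in votes.values() if v)
    return yes / len(votes) >= quorum

# Invented example roles: no single desk can push the product through alone.
votes = {"risk": True, "compliance": True, "trading_desk": False}
consensus_approve(votes)  # 2 of 3 meets the quorum, so approved
```

The design choice mirrors what Dawn describes: triangulating the product so that approval requires agreement across parties whose incentives differ, rather than trusting any single participant (human or AI).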
Kate Bevan: A couple of thoughts off that. The first is, my goodness, Wall Street can come up with quite harmful financial derivatives all by itself without AI necessarily doing it. So that's quite a scary thought, AI just coming up with some wild derivative. Second, you talked about opacity of algorithms. I mean, I understand that obviously there's intellectual property involved in this. Everybody wants to keep their governance close to their chest. But isn't there a benefit in sharing, between the financial services institutions, how they might build AI governance in a more open, open-source way? I'm slightly playing devil's advocate there.
Dawn Talbot: I think that's a fascinating topic because I initially started my research based on the fact that I had noticed a similar pattern, even when I worked at different institutions in the financial industry. So, there is a self-governance model for the industry participants at the institutional level, and then they are implemented within each institution as well. So, I think that's definitely a good way to go. And I think that industries could take the initiative and get, particularly in the US where we have almost no infrastructure addressing this, industries could get ahead of the game by starting to develop these things on their own.
Kate Bevan: I mean I'm sort of thinking of the Silicon Valley tech bro approach of we want to define the governance so that we don't have regulators defining it for us. Is that the same in Wall Street?
Dawn Talbot: It is, except that if you look at this in context of where we are historically in finance, Wall Street might have been powerful enough to dictate these regulations in the past, but right now they're up against the potential disruption of decentralized finance. So, if anyone is displeased with what they do in their current state, they could very easily decentralize another version of it and see if they can create a better regulated environment.
Kate Bevan: Could they though? Because sort of I'm thinking of, I think it was the Australian Stock Exchange's experiments with blockchain, which they were quite keen to do. And then discovered that nobody else was doing it so they couldn't. Is that a paradigm that's appropriate for AI, the way we're talking?
Dawn Talbot: It could be. I haven't studied Australia as much as I have the US. But I know in the US and parts of Europe, they are very actively pursuing blockchain-based decentralized exchanges. And they can easily adopt the institutional self-governance that's already in place on Wall Street so that it's less distinguishable, and I think more of an open threat to the participants in traditional banks to really take it seriously because we have the technology to displace chunks of their business, if not all of it.
Kate Bevan: So, in a way, you could almost have self-governance becoming like smart contracts that just execute automatically almost autonomously.
Dawn Talbot: When I discuss the methodology that I propose, we talk about something called an economic architecture, in which you go back and historically trace what economic incentives make your employees behave the way you wish them to. Now, what is the equivalent that you can do for AI? And in AI, it's often access to additional data. So, if they don't complete the step and achieve the state at which they can progress, they don't get [inaudible 00:16:10] data, go around the system, find another way to get into the data. So that's almost the concept of a cognitive policy engine that keeps AI operating by the same set of rules that your bonus program creates as an incentive for your human employees.
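One way to picture this "cognitive policy engine" is as a gate where access to additional data is the incentive, granted only once the required state has been reached. This is a minimal sketch; the `PolicyEngine` class, the dataset names, and the `compliance_check` state are all invented for illustration, not part of Tevoro's actual methodology.

```python
# Toy cognitive policy engine: access to data is the AI's "bonus",
# and it is only granted once the prerequisite state is completed.

class PolicyEngine:
    def __init__(self, required_states):
        # maps dataset name -> state the agent must reach to access it
        self.required_states = required_states
        self.completed = set()

    def record_completion(self, state):
        """Mark a state as achieved (e.g. a required check has passed)."""
        self.completed.add(state)

    def request_data(self, dataset):
        """Grant access only if the prerequisite state was achieved."""
        needed = self.required_states.get(dataset)
        if needed is None or needed in self.completed:
            return f"access granted: {dataset}"
        return f"access denied: complete '{needed}' first"

# Invented example: sensitive data is gated behind a compliance state.
engine = PolicyEngine({"client_positions": "compliance_check"})
engine.request_data("client_positions")   # denied until the check passes
engine.record_completion("compliance_check")
engine.request_data("client_positions")   # now granted
```

The analogy to a bonus program is direct: the agent's path to the reward runs through the states the organization defines, so following the rules and making progress are the same thing.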
Kate Bevan: The whole tech financial services establishment, which is terribly male, but yet you are a senior woman who's been involved in technology and deeply involved in AI governance now. I'm a senior woman covering it from the content side of things, from the journalism side of things, how can we get more women into this space?
Dawn Talbot: I think there are two ways. I think the most recent statistic I read said that of all of the venture capital that went to AI, 2% went to women-led firms. So that's very frustrating. On the other hand, women who have achieved in these environments for quite some time, like you and I, have developed our own armor when it comes to navigating these things. So, I have gotten very accustomed to self-funding my startups, and I'm also a true believer in the concept of mentorship. So, when I achieve some level of success, I try to reach out to younger women, and not do what is historically sort of a pattern of women pulling the ladder up behind them. They worked so hard to get into some group that they say, okay, now I'm one of them, and I'm pulling up the ladder, and you'll have to work as hard as I did to duplicate my success. I was always so disappointed when I encountered that. So, I think women owe it to each other to reach out once they've achieved a level of success and help others come in as well.
Kate Bevan: Although is it women's problem to solve this one? I sort of feel like it's not a problem we created. Is it our problem to solve?
Dawn Talbot: When I was younger, I always thought that this male dominated tech industry was a problem that age would resolve, and that's what kept me going when I was one of, I think, three women in my master's in engineering program, and one of just, I think, three women in portfolio management. And I kept saying, okay, if I just hang in there, this thing will dissolve eventually. And I haven't seen it happen. So, we can push. I think there's more diversification at the board level, and that's a real factor. I do believe that women who distinguish themselves with innovative technology that can break through the noise really stand a better chance now than they ever have in the past of breaking through this. But we'll see. All we can do is keep trying, right?
Kate Bevan: That's really optimistic. I'm now going to turn into my final question which I ask everybody, which may or may not have an optimistic answer. Do you think AI is going to kill us?
Dawn Talbot: It can do anything from hurting someone's feelings to pushing a red button and unleashing a weapon. If we don't have impulse control and we're the ones training it, how do we expect it to do any better? So that really gets into, again, this area of can we establish self-governance mechanisms and enforce them rigorously enough that we assume a zero-trust environment, just like Wall Street does when it hires someone, and provide incentives and disincentives so that they cooperate at points and are penalized for going rogue, that kind of thing. I think that's the best way to approach it.
Kate Bevan: Can we train AI to have ethics?
Dawn Talbot: If you think of ethics from the standpoint of a zero-trust environment: historically on Wall Street, anytime you have huge sums of money, actually, you will have people with ethical challenges trying to get close to that money. So, Wall Street established a set of practices pretty early on, and it's not always successful, as we know. When there's a breach, it's big news, and it usually causes a lot of damage. But we don't see it on a day-to-day basis. So there has been some success in corralling those ethically challenged, we'll just say, people from gaining too much control over things, and that includes checks and balances.
So, if we're going to do that with AI, we have to show the value of doing things ethically. How does it help the greatest number of people? Are we taking everyone's concerns into consideration before proceeding down this path? That goes into creating a derivative; that goes into deciding whether you want the same people who do the research to also be the ones who can make a market in a security. I mean, that's the FTX example. So, you need to really make sure that those checks-and-balances systems get translated into this new technology, whether it's the blockchain, the exchange, or the bank. We have to get this right. That's why I think this is such a fascinating place to specialize right now.
Kate Bevan: I'm going to leave it there. Dawn Talbot, thank you so much. You've been brilliant. The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts and visit us on infosys.com/IKI. The podcast was produced by Yulia De Bari and Christine Calhoun. Dode Bigley is our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Dawn Talbot
Dawn Talbot is the lead co-author of "Redefining the Future of the Economy: Governance Blocks and Economic Architecture" (Tevoro, 2020) and a contributor to "Critical Intellectual Property Issues Facing America: Issues Looking for Answers" (Center for Advanced Technologies, 2009). She spent decades working at some of the largest financial institutions on Wall Street as a "presentable geek" who could explain highly complex concepts in a concise manner. Dawn has served as an institutional research analyst, corporate finance professional, and portfolio manager.
In her 20s, she founded her first technology start-up. Her early technology experience included systems design, protocol design, and rapid prototyping for Fortune 500 firms, as well as user interface design for senior officers at a U.S. Air Force base. Currently, Ms. Talbot is focused on developing methods to encode accountability into semi-autonomous systems.
- On LinkedIn
About Kate Bevan
Kate is a senior editor with the Infosys Knowledge Institute and the host of the AI Interrogator podcast. This is a series of interviews with AI practitioners across industry, academia and journalism that seeks to have enlightening, engaging and provocative conversations about how we use AI in enterprise and across our lives. Kate also works with her IKI colleagues to elevate our storytelling and understand and communicate the big themes of business technology.
Kate is an experienced and respected senior technology journalist based in London. Over the course of her career, she has worked for leading UK publications including the Financial Times, the Guardian, the Daily Telegraph, and the Sunday Telegraph, among others. She is also a well-known commentator who appears regularly on UK and international radio and TV programmes to discuss and explain technology news and trends.
- On LinkedIn
Mentioned in the podcast
- “About the Infosys Knowledge Institute”
- Infosys’s Enterprise AI Readiness Radar
- “Generative AI Radar” Infosys Knowledge Institute
- Redefining the Future of the Economy: Governance Blocks and Economic Architecture
- Critical Intellectual Property Issues Facing America: Issues Looking for Answers
- Tevoro