Mike Gonski shares practical insights on the risks and opportunities of AI governance for company secretaries, and on how to drive AI conversations while managing the risks.
"It is a hundred percent not a silver bullet to anything. It's just another useful tool we need to know how to use and work out."
This practical perspective from Michael Gonski perfectly captures the reality of AI adoption in modern governance.
In this episode of Minutes by boardcycle, I spoke with Mike about his hands-on experience implementing AI across multiple boards, from the charitable sector to major cultural institutions. What emerged was an honest account of both the opportunities and pitfalls of AI in governance functions.
Mike's first piece of advice to any company secretary considering AI? Understand the risks before you chase the opportunities. His legal background has made him acutely aware of how AI can create unintended consequences in governance contexts.
Take AI minute-taking, for instance. While the technology can transcribe conversations with impressive accuracy, Mike warns that this creates litigation risks many governance professionals haven't considered. Traditional board minutes are carefully crafted to capture decisions without providing verbatim records that could be problematic in legal proceedings. AI transcription can create more problems than it solves.
The confidentiality question is equally important. Mike shared a story about joining a client call where an AI notetaker suddenly appeared as a participant. His immediate response was to question why confidential, privileged discussions were being recorded by AI systems that might not maintain professional protections. Company secretaries need to understand the difference between private AI instances and public platforms, and never compromise on confidentiality requirements.
Despite these warnings, Mike isn't anti-AI. He's actively using AI tools across his board roles, but in carefully controlled ways: drafting standard legal documents for charities through AI-enabled services like LawPath, distilling recorded strategy discussions into a one-page plan, summarising board papers, and testing ideas against AI personas built from curated material.
The common thread in all these applications is control and context. Mike advocates for a "walled garden" approach: using AI within controlled environments with trusted, curated information sources rather than open platforms that might compromise sensitive data.
Perhaps most importantly, Mike believes company secretaries shouldn't wait for direction on AI adoption. The best governance professionals are "influencers in the board" who drive positive change rather than passive administrators waiting for instruction.
His approach involves having "very safe discussions" with chairs about AI opportunities and risks, working with the organisation's technology team to understand available tools, and treating AI implementation as a strategic capability-building exercise. The conversation reinforced that successful AI adoption in governance requires the same qualities that make great company secretaries: careful risk assessment, strategic thinking, and the ability to balance innovation with professional responsibility.
Richard Conway is the founder of boardcycle, the board meeting platform designed for Company Secretaries. Create, manage and automate your board agendas, run sheets, shell minutes and more with boardcycle Agendas.
[00:00:14] Richard Conway: Welcome to Minutes by boardcycle. I'm your host Richard Conway, and today my guest on the podcast is Mike Gonski. Mike's a partner at Herbert Smith Freehills and is also an experienced director in the NFP space in particular. He currently chairs ReachOut Australia and Carriageworks.
[00:00:32] And today, Mike's kindly agreed to let me talk to him about use of AI in the boardroom which is an area he's exploring actively and very interested in.
[00:00:42] So, welcome to the podcast, Mike.
[00:00:45] Michael Gonski: Thanks for having me.
[00:00:47] Richard Conway: So Mike, before the podcast we had a brief discussion about this, and you were talking about, in your practice at Herbert Smith Freehills, a lot of clients coming to you and talking about their company secretaries wanting to use AI in minute-taking, et cetera.
[00:01:02] And so, I wanted to ask you what you're seeing in that space and what you see as the kind of risk areas to start off with.
[00:01:08] Michael Gonski: Yeah, of course. I'll start by just giving you an interesting story. I also have a practice at HSF where, you know, I deal with a lot of startups and early stage businesses and I had a circumstance where a client had asked for quite a serious conversation on a very important matter and we joined their Zoom call. And suddenly their AI notetaker joins as a participant in the call.
[00:01:33] And I had to start by saying: Well, why is this AI note taker here as a participant?
[00:01:38] Presumably there'll be some things that you're hoping are privileged and can be a discussion just between the two of us. I want to make sure you're as open as possible, and with a full-on transcript of what we're talking about, I can only see downside in this, as much as clearly the upside of you having a couple of extra notes could be of use.
[00:01:57] And I think there's obviously a time and a place for being able to have AI notetakers. We in our practice have worked with some clients where I'll say, Great, why don't we turn on the AI notetaker now, and I'm gonna tell you what I think my legal advice is, and hopefully it'll be more efficient for us to record and transcribe this and then use the AI to provide you with a very simple email legal advice that will be much more cost effective.
[00:02:24] But in the boardroom, if you think about it, one of the issues you often have with smaller boards, or boards that don't have as much experience, is that you might bring in someone who's not well versed in what board minutes should look like and who essentially transcribes what is being said. No one's trying to be unethical in saying that we don't want a transcription of what's being said, but clearly in the litigious world that we live in, what is said can often have nuance, and having a full transcription can have serious risks for companies that have been through litigation.
[00:03:03] Richard Conway: Yeah. That's interesting Mike and so that's not a great use of AI in the boardroom. What are some areas where you are exploring or seeing people explore AI that you do think are good uses in the board?
[00:03:16] Michael Gonski: Yeah, so I mean, my favorite thing is that I donated licenses to each of the boards I sit on for LawPath, which is essentially an AI-enabled law firm that clearly doesn't compete with HSF and is looking to democratise access to legal services.
[00:03:33] And I've always worried about this, that you come to a top tier law firm and a partner costs 1,200 bucks an hour for their specialty. And that can price out certain small businesses but also clearly in the charitable space unless you're lucky enough to have a partner of a law firm on your board, it's very difficult to get pro bono legal services.
[00:03:55] And I think sadly, for most of the boards I've been on, before I joined and they were lucky enough to be able to obtain pro bono legal support, their legal spend was around $10,000 a year for quite a low standard of legal advice.
[00:04:13] So the best example I always have is clearly if I'm doing an exit of someone in one of the charities I'm part of, I can't use HSF precedent documents. It's sort of the first thing they tell you when you start as a lawyer. Do not use the HSF letterhead for your own personal purposes or the purposes of things you're involved in.
[00:04:33] And so I would use LawPath as the way to have a look at things like deeds of release or termination letters or employment agreements. And what's great is it's good enough in terms of what you need. It does the job. And I think for the cost price, I really love the fact that AI is gonna help to democratise that access to legal at the smaller end of the scale.
[00:04:55] And to me, I find that horrible when you think about it. A big client comes to me and says, "We've got a dispute and the chairman of the board needs an affidavit done." Well, I'm clearly gonna write the first cut of that affidavit. They are super clever. They're gonna read it. They're not going to just sign it because I told them to sign it.
[00:05:17] They're gonna put their own words into it. There is a first cut of the document that is, hopefully, clear and easy to read and will be useful to the court. That is very valuable to that person.
[00:05:30] But if you look at a very small charity, or a normal punter who has to participate in something as scary as litigation, if they're not allowed access to AI, how are they gonna be able to provide an affidavit that is of use to the court? I think what's being missed, which I hope will change over time, is that AI is not something used to deceive the court about the way someone wants to portray something.
[00:05:58] Well, if you're not able to use AI, then I'm not sure: is that just at all? It's certainly not efficient. And it definitely won't be cheap if these poor people have to try and afford a lawyer that they have to pay for.
[00:06:12] Richard Conway: And just to go back to your example about an NFP or a punter using an AI-enabled legal service to deliver something that's good enough. A question that comes up for me there is, how do they know that it's good enough? Or how do they know when they need to go to the next step and, I guess, pay for the outcomes they need in a particular scenario?
[00:06:35] Michael Gonski: Yeah, I mean, I think it's really tricky, but the way I'd put it another way is, if it didn't exist, I think they would go with nothing. If you're priced out, if you think, "I actually can't even afford the suburban lawyer for what I'm trying to do," people will just go and Google, look for something, and try and hack it together.
[00:06:56] And so it's an interesting one as to where I had my aha moment with AI. I was a very early adopter of it at work. And I'll give you a funny example: we had a matter that related to looking at real estate agencies. And my team did this great research and they came to me and they said, "This is the answer." And they showed me all these cases and I said, "Well, you know, I'm much smarter than you guys."
[00:07:24] So, I'd used ChatGPT to ask it what it thinks the main cases are in this area. And you missed this main case. And I told them the main case, and these poor people were all very upset that they'd spent all this time and missed the key case. Then they did lots more research on it, came back and said that case doesn't exist. And so, that was literally my week one of ChatGPT, realising that AI hallucinates.
[00:07:50] And so, when you think about it, what you need to do if you want to use AI correctly is have these walled gardens of safe information.
[00:08:00] So, I chair ReachOut, which for 20 years has had such a useful website that has let young people get a first point of access to what to do if they don't feel good, in the mental health space.
[00:08:13] And they go on the website and we've just got excellent curated material. What's fascinating now is, if we have a walled garden of that material and you add an AI on top of it, young people are gonna be able to access that material in a safe way.
[00:08:29] And so you'll probably see in the next three to six months an announcement from us at ReachOut where we will be providing that service.
[00:08:36] And so, I love that for the first time in my career, AI has actually pulled together these things that I'm interested in. Collectively, my philanthropic world, my legal world, and actually my investing world have all come together in this interesting thing where I'm suddenly seeing the use cases, and how useful this is going to be for democratising access to legal services, access for charities to legal services, but then also access to services like psychology, which have had this horrible financial bar to access. That is exactly why I joined ReachOut: because I thought the post-COVID gap in access to mental health services was horrible, and it was such a gap that I had to do something to be involved.
[00:09:26] Richard Conway: So Mike, another area where I've heard of directors using AI, I guess to extend their own knowledge, is they will build something in Claude or ChatGPT which is effectively kind of like an avatar of someone who they respect or think has particular expertise that presumably they don't have themselves. So, a virtual Warren Buffett or something like that, as an example.
[00:09:51] What do you think of that as a way for directors to prepare themselves for a board meeting or to think about how they should approach things in their company?
[00:10:00] Michael Gonski: So, I think number one, like I hope there is some training for board directors on understanding the difference between private instances of AI and public instances of AI.
[00:10:12] Because, the thing that I've loved about our law firm is immediately rather than just saying we're gonna ban everything, they gave great training on, hey, if you're gonna look up what restaurant you want to go to with your mates, well clearly you can use a public ChatGPT. But if you want to upload our client's information, well, first of all, we need to get the consent.
[00:10:34] And two, it needs to be private. Then, I think the interesting part is going to be that this is a great use case for AI.
[00:10:41] So, I'll give you an example: my great-grandfather in South Africa. There were some articles written about his life, and I don't think I knew much about him. He was not alive when I was born.
[00:10:52] I'd heard great things from my dad about him and I was quite impressed by reading his history. So I took all these articles and a book written about him and I uploaded it into ChatGPT and I said, I'd like to have a conversation with my great-grandpa. And it was just so interesting because clearly it will only respond from the things in this walled garden again.
[00:11:15] It was very good at working out the personality type from all the different articles it had. So I think if you can get enough Warren Buffett-related material, books, et cetera, and upload them, I think that's an excellent way to be able to test things with them, as long as you do it in a safe and private manner.
[00:11:33] I think the other one is board papers, for summarisation.
[00:11:37] Anyone listening to this who has been the CEO of an organisation whose board I've chaired will laugh when they hear this: I always go on this rampage whenever I join a board and say, "Where is the one page of your next three-year strategy?" And I've had battles, in a kind way, with lots of CEOs who might often say: "It's not possible. It's too complex. You can't have something on a page."
[00:12:03] I always push back to say it's complete rubbish. And I always come back to the CEO of Story Factory, Cath Keenan, who is someone who's trained as a journalist, not as a CEO, no business background at all, has always been able to, on one page, put what the next three years looks like in a very simplistic form. And clearly there are these plans behind the scenes that are very complex and could go for 20 pages.
[00:12:30] But it's not relevant to the people we want to talk to.
[00:12:33] I've loved it with ReachOut. Our current CEO did the most incredible one page that made sense - that our year one, right now for ReachOut is about understanding partnerships. We need to understand who is the tech partner that will help to create this walled garden approach. We need to understand who are the partners that are gonna help us to provide all of the walled garden material. Once you work that out, everyone can say that is what we're gonna do this year. And the simplicity of it, I think is excellent for the stakeholder management.
[00:13:04] But again, back to the AI. Interestingly, that CEO and I were on a phone call to talk about the strategy, and I said, "Hey, do you wanna try something just on our iPhones? Why don't we record and transcribe our discussion about our strategy?" And then I said, "Afterwards, we'll take that into my private instance of ChatGPT, and I'm gonna say: if you had to put a strategy on a page from what we discussed, what does it look like?"
[00:13:33] And you know what? It was 40% there. I think it summarised some things really, really well, but it made my life so much easier to be able to put something on a page.
[00:13:42] Richard Conway: Yeah, absolutely.
[00:13:43] So Mike, a final question on this, which I think I know the answer to from what you've already said, is, if you are in the management of a company or you are the company secretary dealing with a board, I wanted to ask you what you think they need to do to get their directors ready for this.
[00:14:02] And I guess my main question is, do you think that a company secretary, for example, needs to be on the front foot on driving what a company's policy is around use of AI in the boardroom? Is there a risk that if they don't do that, the directors will just start doing it anyway?
[00:14:20] What are your thoughts on that?
[00:14:22] Michael Gonski: Yeah, I mean I always think that, for CoSecs, and you and I have talked about this for many years, the best ones are the ones who are influencers in the board. The worst ones are the ones who are there just to take notes, right? And who just scaremonger and bring up issues.
[00:14:37] And so, I just think this is yet another issue where if someone wants to have some value in terms of influencing, they can have a very safe discussion with a chair.
[00:14:47] I think the really smart ones will see that it is a hundred percent not a silver bullet to anything. It's just another useful tool we need to know how to use and work out. If it has some efficiency in a safe way, how can we do that? But I think to your point, if you were a CoSec trying to influence, I think what I would try and do is actually speak to a CTO in the company or someone in that area to understand what are the tools we have as a business straight out of the box.
[00:15:17] Do we have a private instance? So that then I could say to the board, "Hey, if you are gonna do things like upload the board papers somewhere, it'd be really great if we can upload them to the private instance, because from a confidentiality perspective, it is a breach to be uploading them to a public ChatGPT. It's not safe for anyone; there is confidential information in those papers, and that is a public source where someone could get access to it."
To be the first to know about new episodes of Minutes by Boardcycle, subscribe on Apple Music or Spotify.