The National Security Podcast
05 June 2025

AI, rights and rules: who’s accountable in an automated world?


Transcript

Can different systems of AI implementation and regulation work together, or will they exist in siloes? 

And how can Australia support AI governance in the Pacific as part of its regional aid and security agenda? 

What challenges does Australia face in regulating AI without a national bill of rights or federal human rights charter? 

Should governments mandate the inclusion of human oversight in all AI-powered decisions? 

In this episode, Sarah Vallee and Maria O’Sullivan join David Andrews to talk about how AI is impacting national security, with a focus on AI governance models and mass surveillance.

(This transcript is AI-generated and may contain inaccuracies.)

Maria O’Sullivan

For years and years, we've had a debate about the tension between national security and human rights. So that's an age-old problem. So we need to think about what these existing problems are, and then what new challenges are posed by AI.

Sarah Vallee

The systems don't always work and the technology is not always mature, and so that's also important to keep in mind, because when we implement these systems, if it's not working, then maybe it needs to be complemented by low-tech law enforcement on the ground.

National Security Podcast

You're listening to the National Security Podcast, the show that brings you expert analysis, insights and opinion on the national security challenges facing Australia and the Indo-Pacific, produced by the ANU National Security College.

David Andrews

Welcome to the National Security Podcast. I'm David Andrews, Senior Manager for Policy and Engagement at the ANU National Security College. Today's podcast is being recorded on the lands of the Ngunnawal and Ngambri people, and I pay my respects to their elders past and present. This week I'm joined by Associate Professor Maria O’Sullivan and Sarah Vallee for a discussion on some evolving national and international security considerations regarding artificial intelligence. Maria is an Associate Professor at Deakin Law School. She's a member of the Deakin Cyber Research and Innovation Centre, the theme lead of the Protection from Technology-Based Harm stream of the Centre for Law as Protection, and a member of the Law, Health and Society Research Unit at Deakin Law School.

Sarah Vallee is a specialist in AI policy and governance. She's a fellow at the UTS Human Technology Institute, and her position is sponsored by the French Ministry of Foreign Affairs. Her goal is to foster discussions and collaboration on artificial intelligence between Australia and France. Maria, Sarah, welcome to the National Security Podcast.

Sarah Vallee

Thank you.

Maria O’Sullivan

Thank you.

David Andrews

Well, one of the reasons why I wanted to have this conversation is that I feel like artificial intelligence, as a concept and as a lived reality, is something that's exploded across the public sphere in the last couple of years. It's really shot to the forefront of policymaking and of the news and journalism. It's ubiquitous for everyday users, whether it's large language models and chatbots or things like embedded AI assistants in search engines and on social media platforms. And it's sparked a wide range of concerns, from the rise of cheating in schools and universities to the widespread use or misuse of others' intellectual property to train commercial systems free of charge or to reproduce copyrighted artistic material. And then you've got more extreme concerns, for instance, which relate to the integration of AI into weapons systems and military equipment, uncrewed platforms and defence decision-making processes. So there's this notion of the rise of so-called killer robots, for example, which is something that we have addressed on the podcast before.

We're taking a different direction in the conversation here. Many people have talked about this as being the beginning of a potentially new industrial revolution, in terms of the scope of AI and its impact on society. And so those effects could well be widespread; the supposition is that relatively unconstrained AI development will be societally transformative, but those effects won't truly be felt for several decades yet. So what I think we're going to focus on today is more of this wider AI policy environment: how, where and why governments can address the threats and vulnerabilities that AI poses for democratic societies. So, Sarah, I've tried to set the scene a little bit here, but if I might defer to you to paint a more comprehensive picture: what's the present state of AI development and integration? And we talk about being in this era of tech competition, particularly between the United States and China, but how do their approaches or visions for the role of AI differ?

Sarah Vallee

Thank you, David. I think I'll start with talking more about global governance of AI and then I'll move a bit more on tech competition and this race between the US and China.

At the international level, there have been a lot of different initiatives in the realm of AI governance. Pretty much every international organisation has published recommendations or policies on AI: the OECD, UNESCO. You also have the UN General Assembly and the Global Partnership on AI, and states as well are starting to draft their own legislation on AI.

And more recently, what has gained momentum in this realm of global AI governance is this notion of AI safety, with the creation of an international network of AI safety institutes that were joined by countries like the UK, Japan, Singapore, France and, of course, Australia. And around AI safety there seemed to be the beginning of a global consensus. AI safety mainly deals with the technical safety aspects of what we call frontier AI models, which are the latest technological advances in AI, so mainly generative AI. So they cover risks such as malfunction, malicious use or systemic risk.

And there were two global AI Safety Summits, organised by the UK in 2023 and then by South Korea in 2024. And earlier this year in February, France hosted an AI Action Summit, which was a week-long international gathering of leaders and technologists in Paris. The aim of that summit was, of course, to continue this work on AI safety and address the risks that AI poses to our society, but it was also to highlight the opportunities that AI technology offers for our societies and our economies.

And so out of this week-long event there was a political statement on inclusive and sustainable AI for people and the planet. It was signed by 64 countries and international organisations, including Australia, with the goal of uniting around a common vision of AI in the service of the public interest: AI that is human rights-based, human-centric, ethical. And China was one of the countries invited to this AI Action Summit, and China actually did sign this declaration as well.

But of course, that doesn't mean that there is no international competition. And even if we're heading towards some sort of consensus around AI governance, what was interesting with that summit is that it happened right after Donald Trump took office. There was a big shift from the Trump administration in the way they do AI policy. JD Vance was in Paris attending the summit and he gave a speech, which I think you can find on YouTube. It's around 15 minutes, but it's quite interesting to hear him speak, because his speech was really not in the spirit of international cooperation, which was the goal of the event. And it really showed the world that the US focus now is more on winning the global competition, on retaining their leadership at all costs.

So, of course, they are resorting completely to deregulation. Joe Biden enacted an executive order on AI that Trump revoked within a few days of taking office. Of course, there are all the tariff barriers, and there is an export ban to China on the Nvidia chips, which are important for AI systems. So the US very much has this anti-regulation vision, and is therefore just pro a fast and uncontrolled innovation ecosystem.

And then, competing with China: China is also advancing in AI, notably on frontier AI models and large language models. We were all surprised a few months ago when China came up with DeepSeek, which is its own large language model, a much smaller and more specialised model. But what's interesting is that it was built at a fraction of the cost at which American tech giants build their models. And another interesting fact is that one of the co-founders of DeepSeek actually studied computer science in Australia before going to China and deciding to build this first Chinese LLM for his country. So China as well is investing in AI. It's trying to bypass the chip ban from the US by importing through third countries before the chips arrive in China, but it's also anticipating that it might need to manufacture its own chips. As we know, the biggest chip manufacturer today is Taiwan, and that's where most of the chip industry is. So Taiwan is very strategic for China.

China also recently published a new standard on the manufacturing of chips that would be more environmentally friendly. So it really sets the scene for them maybe trying to develop their very own industry and not be so dependent on the US for it.

So we can see that China is setting norms around AI. They're also leading AI global governance discussions within their own sphere of influence. In September 2024, they came out with the Shanghai Declaration for global AI safety. So they're also very much involved in those global processes.

David Andrews

When you say that there are these norms being developed around the use or the development of AI, does that extend to what it's applied to? Is it just about the development and how the models are constructed, or do those norms extend to questions of the implementation and use of AI? And could we maybe focus a little bit more on what the EU brings to the table? What's their approach been to AI development? How does it differ from, and how is it similar to, the US and China?

Sarah Vallee

So Europe also has excellent AI research. We have excellent AI start-ups. In France, you have a French ChatGPT by a start-up company called Mistral AI. There is also Aleph Alpha, which is a German start-up doing an LLM. And there is an important trend that speaks to the dynamism and excellence of AI research in Europe: all of the big tech companies, Meta, OpenAI, Google, place their AI research offices in Europe as well. And there were also significant investments announced by Europe to develop European AI capability, as, of course, the European Union is a much smaller competitor compared to China and the US.

But as you said, it's still a big customer market, notably for US tech giants, and it's a normative power. So Europe is mostly known now for the AI Act, which is the first comprehensive piece of legislation globally regulating AI.

And so with this, I think the EU hopes to play a leading role in setting some sort of global gold standard around AI systems, AI use and what AI can do, in the same way that they did around data privacy with the GDPR, the General Data Protection Regulation. Global companies implanted everywhere in the world usually comply with the most stringent regulation, so if the EU has the most stringent regulation, then that kind of becomes the standard for how models are developed everywhere. And because there are norms and standards, the EU has embedded its values within them, so that models respect people's fundamental rights and try to do the least possible harm. So on the technology front, maybe Europe is doing as much as it can to compete with China and the US, but that's hard.

And then there's a lot being done on this normative front to promote a European model and a European way of doing AI. Along with the AI Act, the European Union AI Office was also created, and its very role is to engage in international dialogue and cooperation to position Europe as a leader in ethical and sustainable AI. And there is a lot of discussion with partners. There is a digital dialogue with Australia as well around the digital economy, and AI is part of it, along with data, cybersecurity, et cetera.

David Andrews

Do you think that there's a risk of, I guess, a fragmentation of systems? We're talking about different standards and approaches and models being applied across the EU, which then, as you say, has downstream consequences for what, say, the US might do or what China might do, but with all these different individual styles or approaches being applied.

And that's not even to mention the Middle East, where, in the Gulf States for example, they're putting billions of dollars into AI research through their public investment funds or through other investment opportunities. And I guess, given the traditional role that the US has played as this sort of anchor in a lot of international institutions and organisations and systems like that, they're now actually striking out on their own in a different way. Do you think that leads to a risk of fragmentation in systems? Will these all still be able to work together, or will we be seeing more discrete bundles of AI and tech in that way?

Sarah Vallee

I think the goal is for everyone to work together. That's why there are all those global high-level discussions around AI safety, like the AI Action Summit I mentioned. I mean, China was at the table talking with everyone. So I think the idea is that, maybe like in any international norms process or multilateral diplomacy, you have to come to the minimum common denominator so that everyone is kind of happy. What could lead to fragmentation is if the US isn't on board anymore, not so willing to go into this discussion of trustworthy, safe, inclusive AI, and just really focused on the competitive side. But that's where pieces of legislation like the EU AI Act come in: if companies want to be able to sell their products on the EU market, they will have to comply with it, or lose the market entirely if they decide that they don't want to operate in that jurisdiction.

David Andrews

I think this is maybe a much smaller-scale example, but it reminds me of conversations around common charging ports on phones, between Apple and Samsung and all these different companies; there are those downstream tech effects of the power the EU has to drive some of those trends in technological development. But maybe looking closer to the Australian context: obviously this is an issue that our government is very engaged with and that has great consequence for what we're doing. But for countries, say, in the Pacific, who aren't necessarily well equipped with the same volume of IT infrastructure and cyber skills and AI skills, and even just the power, water and electricity that you need to run and support data centres, for instance, are you seeing anything that you think would apply, or maybe even that Australia could take away, or work with the EU or with France to apply in the Pacific, to make sure they're not left behind in this great development of AI that's going on around the world?

Sarah Vallee

Of course. I think, first, Australia has been in discussion for a while now as to how it will regulate AI in its own territory. A few months ago there was a proposal paper by the Department of Industry to introduce mandatory guardrails for AI in high-risk settings. And it was asking the preferences of the ecosystem, academics, big companies, banks, on what would be best. There were three options offered: either to keep the existing legislation and adapt it to AI challenges, or to create a whole-of-economy AI act, kind of an Australian AI act that would be modelled after the EU's, for example, or to find a mix of both. So those discussions are still happening.

There was a Senate committee on adopting AI that recommended a preference for a new whole-of-economy AI act for Australia. And there are all those discussions between Australia and the EU around how best to organise governance. Australia is also very much attached to technical standards. The International Organization for Standardization, ISO, has a number of technical standards, and I think that's where maybe we won't find as much fragmentation at the global level, just because with technical standards, you follow them and everyone follows them. So that's where maybe you would have some sort of harmony when it comes to AI systems, and some sort of interoperability between the different systems. So Australia has very much all of this in mind.

And I feel like Australia maybe has a closer understanding, quite similar to the EU's, of what AI systems and what values of AI systems should be promoted. I mean, we know that under the AUKUS agreement the defence-related uses of AI are covered. But as I mentioned earlier, with the AI Action Summit there was this political declaration on sustainable and inclusive AI that Australia signed.

Interestingly, the US didn't sign it and the UK didn't sign it. So it feels like, on a more global approach to the governance of AI, Australia might be closer to the EU. And then there's also, of course, an interest in the Pacific: at the AI Action Summit, the Department of Foreign Affairs organised a series of workshops with countries from the Asia-Pacific to address this. In the Pacific, AI can also be an excellent tool. It can help with multilingualism and reviving languages, it can help with detecting illegal fishing, or it can do climate monitoring as well. So I think there is an opportunity for those tools to be used in a way that will help the Pacific. But AI is very energy-consuming, so we also need renewable energies to feed those systems, which need a lot of environmental resources.

So I think here there is an opportunity for maybe Australia and the EU to work on this. And I say the EU also because in the Pacific you also have French territories, for example. We've seen with the General Data Protection Regulation, and with the Digital Services Act, which is another piece of EU legislation that deals with recommendation algorithms and social platforms, that those two laws were then translated into French law, and the French law was also applicable in the French overseas territories. So the question now is what will happen with the AI Act. For now, the AI Act isn't applicable to the territories, but this might be a hint that it will become so. I think it could, of course, be interesting for Australia and the EU, which both have a lot at stake in this region, to work on this front, if Pacific Island countries want to and are interested in working together with Australia and Europe, so that they also benefit from AI and it serves the interests of their populations.

National Security Podcast

We'll be right back.

In this disrupted world, Australia needs security professionals more than ever. Join the next generation studying at the ANU National Security College. Our programs uniquely fuse academic knowledge with practitioner experience and fit around your lifestyle with study offered online and on campus. Follow the link in the show notes for more information about programs and scholarships. The ANU National Security College.

Engaging minds for a secure Australia.

David Andrews

Maria, I'd certainly welcome any reflections from you on what Sarah has had to say so far, but I also wanted to turn to you to discuss another dimension of this problem set, which is mass surveillance and its intersection with AI. There's been plenty of discussion over recent years about the increasing omnipresence of surveillance technologies around the world, and we're seeing it here in Australia too. Is this a trend that's being exacerbated by AI? And where do you think it might go in the future?

Maria O’Sullivan

Great questions. If I could just reflect on a few things in relation to the EU AI Act and that fascinating discussion. Thank you, Sarah. So firstly, I am a little bit uncomfortable with the power relation that comes from the so-called Brussels effect. So yes, we could say that the EU AI Act is a gold standard; I might disagree a little bit with that conceptualisation. As a human rights scholar, I agree that the risk categories, high risk, low risk and so forth, are quite good mechanisms and concepts. But I do have a slight concern, and others have a very strong concern, about the carve-out for national security, which, for those listeners, was strongly debated and is quite a strong exception.

So that means, for instance, when we're talking about surveillance of protest, that it is permissible for there to be online surveillance of people organising protests if it's seen to be either serious criminal activity or anything related to terrorism. And that's a live issue, because increasingly the discourse around protest, particularly climate protest, is to label protesters as eco-terrorists. So that's one thing. And then, in that regard, it's interesting, Sarah, that you talked about French territories, because I've been examining AI governance in relation to colonialism. There's a big debate going on at the moment about whether we can just say we agree that the EU AI Act is a great model; there is some discomfort about applying that to the global South. So I'm talking about the global North and the global South, to put things very, very bluntly.

And I know that's a simple sort of demarcation, but it is something that academics talk about. So increasingly academics are trying to work with people in the global South, like in Africa, to ensure that they have a regional AI treaty. So I think policymakers in Australia, if I could sheet it home to us here, maybe need to think about whether AI governance also means using aid money to assist governments in the global South, in the global majority, to also have governance mechanisms, whether that's remedial mechanisms or impact assessments, to actually give them the resources to develop their own AI guidance mechanisms. So what does that mean in relation to the Asia-Pacific, and for our responsibilities to help other countries?

And then secondly, I thought it was very interesting to make this link between the EU and America. And I just wanted to note that there is a Council of Europe Framework Convention on Artificial Intelligence and Human Rights. Now again, those in the human rights community are a little bit, well, let's just say we think some of it is quite general and some scholars say it's quite weak.

Let's just say that it does refer to human rights and international law, and that's always a good thing. But I mention it because the US signed that Council of Europe Framework Convention on Human Rights and AI (it's the Council of Europe, not the EU; slightly different) last year, and that was seen as a wonderful thing. I think any time that we can get international cooperation is great. I would just make the point that it is quite a soft instrument. Even though it's a binding treaty, it doesn't contain a lot of hard and fast mechanisms and rules.

So that's just my response to the very interesting discussion. On the issue of surveillance and human rights, and bringing in national security, I think one problem with any sort of international governance of these sorts of things is that we come up against national systems. For example, Australia doesn't have a bill of rights or any federal charter of rights, which means that we don't have the protections that Europe has with the European Convention on Human Rights. For instance, there is no Article 8 equivalent. And for those who are not familiar, Article 8 of the European Convention on Human Rights protects the right to family life and privacy. And that's been very important: for instance, in the UK there was the Bridges case, where the UK court found that the police's use of facial recognition software was against the European Convention on Human Rights. Now, Australia not having a federal charter or bill of human rights means that we can't use human rights directly in litigation, not at the federal level, where national security and counter-terrorism powers are used.

I would just note obviously that certain states and territories in Australia have a human rights charter, so anything to do with sort of local police is covered, but federal police matters don't have that human rights constraint.

David Andrews

So one thing, to contextualise this a little bit, I suppose, about why I think it's really important that we talk about human rights in this context: AI and surveillance is one dimension of that, but let's take AI more broadly. If we are at this kind of historical inflection point, if this is a new industrial revolution, for example, then I think it's critical that we understand what the rights implications are for us as people and as citizens, as people who are living through the impacts of this. Because, I suppose, you know, we talk about technology as being a tool that is used by people. But in this case, when we're giving, as the name suggests, more intelligence and capacity to our tool, it's a bit more than a classic instrument.

And if we're talking, as you say, Maria, about these quite foundational concepts of human rights and democracy that we live within, well, are we thinking about things in the same transformative way, in the way that we approach them through legislation and governance and setting the right foundations, to have a society in 10 to 20 years' time that is being governed and upheld in the ways that we might hope and expect? I think that was a really useful introductory point. And I think that's also why it's so fundamental that we talk about human rights, that we have them at the foreground of our conversation as we're going through this development process of AI.

But to really hone in on that surveillance piece, I think the omnipresence of facial recognition is something many of us may not put a lot of thought into, but it is there in our everyday lives, whether you're going to Bunnings to buy some hardware supplies or you're going to the football. And there are now systems based on facial recognition and credit cards where you can just walk into stores and pick things off the shelves, and it tracks you and matches you against that credit card number and charges you without you ever having to talk to a person or go to a checkout. It's also being used in security contexts. So I think it's important to give the positive story there as well: there can be some security benefits in ensuring that people who have, say, been banned from sporting events for violent acts or antisocial behaviour and things like that are filtered out in that context. Security is always a big, open, complex conversation, because where do human rights and security interact?

You're the expert here. So I'd be glad to hear some further reflections on how we're seeing this sort of trend developing and what that means for us, I suppose, as citizens and individuals in society.

Maria O’Sullivan

Yes. And this is where scholars will talk about the challenges that are inherent in society and the challenges posed by AI. So for years and years, we've had a debate about the tension between national security and human rights. So that's an age-old problem. So we need to think about what these existing problems are, and then what new challenges are posed by AI. And one of them is consent. When you go out in a public space, do you consent to having facial recognition or other surveillance? Then there's the knowledge part: are you aware that you're being photographed, that CCTV is being used? Now, sometimes when you're out and about in the street you can see the CCTV camera, but many people would not know how it's being used, the purposes for which it can be used, where the data is stored, et cetera. So I think there's a knowledge and transparency issue. And then I think we need to demarcate the use of AI, and particularly surveillance, by private companies as opposed to states.

So if I could speak to the issue of private companies in the first instance: private companies like Bunnings or Meta are not directly parties to the international treaties which protect our rights. So the International Covenant on Civil and Political Rights protects things like the right to private life, the right to freedom of assembly, freedom of expression, very core fundamental rights. Now, a lot of businesses, particularly global businesses, agree to meet certain human rights principles, and that comes from what we call the business and human rights movement. So they might say in their annual report that they adhere to human rights, but that's a matter of choice; it's not an obligation.

So that's our first, I guess, fundamental point: businesses are not parties to international human rights treaties. In Australia, though, if I could give the example of Bunnings, they are subject to certain privacy laws. So I talked before about how we don't have a federal charter of human rights. We do have human rights, sorry, privacy laws.

Now, they are a little bit different to how Europe deals with privacy issues, because Europe has the European Convention on Human Rights. But it is possible to get the Privacy Commissioner to look at issues, and that's what happened with Bunnings. Last year, Carly Kind, the Privacy Commissioner, found that Bunnings was in breach by using facial recognition technologies at the entrance. Bunnings wanted to use this, and this goes, David, to your point about, I guess, innovation and the good uses of AI. I totally accept that this was about dealing with serious crime and public order issues, for instance, people coming in who were disruptive or stealing things. And that can be a reason to use facial recognition. But in this instance, the Privacy Commissioner said Bunnings wanted to use facial recognition to stop crime and disorderly conduct by so-called high-risk individuals, but in this case they chose the most intrusive option. They did have other options open to them. People coming into the store did not consent, the facial recognition was not included in their privacy policy, and, using human rights language, the facial recognition was neither necessary nor proportionate. So I think that's a good example for private companies.

Now, of course, when we come to states: states are directly parties to international human rights treaties, so the standard for them is much higher. And if I could give one example from India, looking globally: I'm writing a book on protest at the moment, and part of that is looking at surveillance of protesters. Last year in India, some protesters were criminalised and arrested because they were planning protests online.

And that was dealt with as a foreign policy issue. Looking across at the USA, listeners will know that there's been a lot of talk about the illegality of deportations. And one thing that's been utilised a great deal at US borders is AI and other surveillance to stop people coming into America, by looking at their social media posts and WhatsApp, and screening their phones and computers. And for those already in the US, the same thing has been happening too: the use of surveillance and other AI mechanisms to look at people's online posts.

David Andrews

When it comes to developments in AI and AI tools, AI assistance in writing and decision making, and the way it's becoming more integrated into day-to-day life in that sense, and no doubt will be integrated into legislation and government processes and official decision making in that way, what are the consequences of that greater uptake of AI for our understanding of the law? And I appreciate this is going to sound a little bit glib, to paraphrase, you know, 'computer says no', but are there going to be algorithms and AI systems that are empowered to make decisions as identified decision makers, or are we still finding ways in our legal structures to keep the human in the loop as much as possible? Are we seeing an evolution in our understanding of what the law and decision making literally look like as a consequence of AI development?

Maria O’Sullivan

Yes, definitely. This is not really AI as such, but I've written a lot on RoboDebt. And again, this was a very simple form of data matching, which was found to be unlawful. But if we could talk about decision makers and the human in the loop: one issue there was that the computerised debt notices, not really AI, not sophisticated, but automated, if I could use the term automated decision making, were sent out using incorrect data-matching parameters and, most importantly, were not empowered by legislation, by the Social Security Act. To the extent that there was a human in the loop, obviously there was one in terms of configuring the system, but the notices were literally just sent out, and no contact information was included.

And if I could speak also to the right to remedy, which is one of the issues with decision making. If we look at Australia, you have to have your automated decision empowered under the legislation, but also, importantly, there has to be some remedy, some recourse. So that's where I think the important human in the loop comes in: there has to be some way in which the affected individual can seek a remedy, even to the extent of having a phone number on the debt notice, which in the case of RoboDebt was not the case.

So there's the lawfulness of the AI or automated decision making, particularly when it's going to be a large-scale automated system; we're not talking about one case or one decision, but thousands upon thousands. And there also has to be a delegated power in the legislation. For instance, some pieces of legislation in Australia say that the power can be delegated to or assisted by a computerised system. So I think in the future, in domestic legislation, and I'm giving the example of Australia, but also internationally in the UK or France, there will have to be a clear person who is responsible; or, if it is going to be AI in the future, there will have to be someone behind the scenes accountable for the actions of the AI.

David Andrews

Thank you, Maria. That's fascinating. I think there's obviously so much to keep track of as these things develop. And, Sarah, I'm not sure if there's anything that you wanted to reflect on from what Maria said, if there's anything that resonates with you from your experience within the French system or beyond, but I just wanted to make sure you had the chance to respond if there was anything at the forefront of your mind.

Sarah Vallee

Thank you, Maria. You talked a lot about mass surveillance and the right to privacy and all of this, and that's very important. And that's why you have regulators in place that need to go to the company that is using the system, or to the government that is using the system, to say, well, you've used it in an unfair way, or there wasn't enough justification for it. But I think what's also important, beyond the privacy part and the law part, is whether the system works, because sometimes it just doesn't; facial recognition doesn't always work.

Recently, for the Paris Olympics in 2024, an experimental law was made for algorithmic video surveillance of the event. There was an evaluation report that came out at the end of 2024, and it showed mixed results as to the effectiveness of the technology, largely depending on the different use cases. For example, the system could detect people trespassing, for example people going onto the train tracks when they're not supposed to. So the system would detect it, and that would help employees of the transport service or law enforcement come and make sure everyone's safe. But in other use cases, the system just did not work. In the French media, what came up a lot is that there was a system to detect abandoned objects in train stations and such, just to make sure there was no risk of bombs or anything left behind.

I think something like 62% of the time it wasn't detecting an abandoned parcel; it was detecting people who were sitting on benches, or homeless people who were just living or sitting in the train station. The systems don't always work and the technology is not always mature. And so that's also important to keep in mind, because when we implement these systems, if it's not working, then maybe it needs to be complemented by low-tech law enforcement on the ground.

David Andrews

Well, Sarah and Maria, I think we've come very close to the end of our conversation. But one thing that I thought I might ask of each of you is a sort of elevator pitch: if you were here with the relevant ministers, in Australia or anywhere, for AI and for these sorts of technologies, what would be your top one or two recommendations to them to see change or reform in the way things are done in this space? So Sarah, perhaps if I can stick with you for a minute, what's your quick elevator pitch to key ministers on what they need to do when it comes to artificial intelligence?

Sarah Vallee

It's a point I've made before, but I think it's really this international cooperation piece: keeping alive global discussions and international cooperation with like-minded states and regions. And I think it's really important that we make sure the AI systems we develop are for the public interest worldwide, that they benefit the populations and the people living there, and that they respect human rights, of course.

And that can only be done with increased collaboration, whether it's research, policy or multilateral diplomacy. I think that would be my main focus. And again, it's really an opportunity for Europe and Australia, in this context where the stance of the US is not so much around cooperation these days, to maybe work together to promote this kind of shared vision of what AI should be and how it should work.

David Andrews

Thank you, and Maria, how about your suggestions?

Maria O’Sullivan

Well, in the spirit of encouraging innovation, I want to emphasise that in order for the Australian government to propel itself into the AI space and be a real world leader in the use of safe AI, the Australian people have to be able to trust what the Australian government is doing. And in order for that to occur, particularly in light of the coverage of RoboDebt, I think that the passing of a federal human rights act would be helpful. I also think we need clear communication about the guardrails that will be used for AI. And in that respect, if the Australian government could commit to the use of human rights impact assessments at the design and implementation stages of AI, that would not only benefit our human rights compliance but also feed our desire for innovation, because it would enable Australians to have more trust in the AI systems that will be utilised in Australia going forward.

David Andrews

Thank you very much Maria O’Sullivan and Sarah Vallee for being with us on the National Security Podcast, and we look forward to speaking with you again in the future.

Maria O’Sullivan

Thank you.

Sarah Vallee

Thank you so much.

National Security Podcast

Thank you for listening to the National Security Podcast. We welcome listener feedback and suggestions at any time, so please get in contact at natsecpod@anu.edu.au. For more important conversations about Australia's national security, please subscribe to the podcast and our YouTube channel, and follow us on LinkedIn and Twitter to receive the latest updates.