Why DeepSeek’s breakthrough is Australia’s new China challenge

[Image: Icons of DeepSeek, Ernie Bot and ChatGPT. Adobe Stock]

China’s generative AI dark horse, DeepSeek, whose name was unknown a week ago, has trampled global tech markets.

But it’s not just investors who should be shocked. DeepSeek is yet another wake-up call about the risks of Chinese technology platforms that should have political and business leaders on edge.

The AI application – the most downloaded app globally this weekend – is the latest entry in a long list of consumer technologies in which Chinese companies have seized a major, if not the dominant, market share.

Think: surveillance cameras made by Hikvision and Dahua. Drones from DJI. BYD connected cars. Social media curated by TikTok.

DeepSeek’s arrival recalls security risks associated with these previous innovations. But more importantly, its rapid and unfettered consumer adoption highlights our inability to manage these risks.

Any technology made in China is controlled by the Chinese Communist Party. The CCP requires tech companies to collaborate with its intelligence agencies, heavily regulates AI companies, and audits their models to ensure they reflect “socialist values”.

The impacts of this control are even more troubling for AI than other technology classes. While concerns about TikTok centre on pro-China censoring of social media posts, DeepSeek has the power to elevate the CCP as a gatekeeper to history and knowledge itself.

Large language models are the most significant change to how we access and process information since the printing press. DeepSeek shows that controlled-by-China LLMs pick sides.

When prompted to explain the 1989 Tiananmen Square incident, the model demurs: “Let’s talk about something else.” But when asked about the January 6, 2021 Capitol riots, a 10-paragraph response concludes that the events raise questions about the “future of American democracy”.

Things get weirder if you mention Taiwan. No longer feigning objectivity, DeepSeek evangelises “complete reunification” as “an unstoppable force”.

If censors poison the LLM tree, how can you trust eating the fruit of any of its responses?

The risks from controlled-by-China apps extend beyond manipulation. China is a notorious and well-documented data glutton, having industrialised information theft to benefit its economy, military and strategic interests.

Expect DeepSeek to hoover up sensitive data from the computers it’s installed on, and from every prompt its users give it. Expect the Chinese government to enjoy access to this data.

These security issues alone should worry Australians. But DeepSeek’s arrival has also revealed weaknesses in how Australia responds to technology controlled by China.

First, DeepSeek has exposed the limits of America’s approach to managing security risks with economic policy. Economic measures, such as export controls on silicon chips, were meant to slow down Chinese breakthroughs in generative AI.

(There’s a lingering question over whether DeepSeek’s models rely on high-powered chips obtained by skirting US export controls or, worse for Nvidia and other chip giants, don’t need them at all.)

The clear implication for Australia is that we need our own, sovereign policies on managing technology risk.

Second, Australia’s fragmented approach to managing high-risk foreign vendors is unsustainable. Every time a popular consumer technology emerges from China – from TikTok to surveillance cameras – our political leaders react like it’s a novel challenge.

Australia needs a high-risk foreign vendor framework that is public and applies to critical infrastructure and democratic institutions, as well as government. In some cases, certain vendors should be outright banned.

Third, we can no longer afford to be country agnostic on technology risks. Technology that is built in China, and controlled by the CCP, is not equivalent to technology built in, and regulated by, democracies.

That’s not to say that American technology hasn’t resulted in harms. With tech bros centre stage in the Trump administration, Australia’s regulators will almost certainly clash with Meta, X, OpenAI and others in 2025. But the challenge of engaging with – and enforcing Australian law against – Chinese tech companies is fundamentally different in nature and scale.

Over the past 30 years, our leaders have failed to keep pace with technology. We failed to secure the internet as it was commercialised.

We failed to understand how social media would damage our democracies and the mental health of our children. We failed to digitalise our critical infrastructure in a way that was secure by design.

AI was supposed to be different. In 2023, Australia signed the UK-led Bletchley Declaration, affirming that AI should be designed with safety baked in from the start. A head-in-the-sand response to DeepSeek fails to uphold that approach.

Of course, AI investors are notoriously jumpy. It’s far from certain that DeepSeek is the game-changer the market has taken it for. But regardless of how its capability stacks up, this is yet another canary in the coal mine.

And, frankly, with time running out to respond to the risks of technology that is controlled by China, we are unlikely to have the luxury of any more canaries.

We predict inaction.

Katherine Mansted is a senior fellow in the practice of national security at the ANU National Security College. She is also the executive director of cyberintelligence at CyberCX.

Alastair MacGibbon is chief strategy officer at CyberCX and was national cybersecurity adviser and head of the Australian Cyber Security Centre.

This article first appeared in the Australian Financial Review on 29 January 2025.