The government must make sure technology serves the public interest. The alternative is a libertarian free-for-all | Peter Lewis


Falling levels of trust in our public institutions have become the backing track for the demise of the progressive political project and the rise of populist strongmen who promise to take back control.

Government becomes a problem to be solved, a “bubble”, a “swamp” of compromised technocrats and bean-counters operating against the interests of hard-working common folk, the “quiet Australians” whose will for a simple life is constantly being frustrated.

One of the drivers of this collapse has been the impact of technological change on our body politic, the anger-driven echo chambers of social media, the fake news and disinformation, the increasingly sophisticated targeting designed to reinforce what we already think.

Numerous benchmark surveys, including Essential’s own, document this decline, which closely tracks the destruction of traditional media business models to the benefit of these platforms.

But as two reports released in the past week show, when it comes to thinking through the impacts of technology on the future, government leadership is more important than ever.

The first, the government’s response to the Australian Competition and Consumer Commission’s digital platforms review, is to the point.

On one level the report, initiated as part of the deal to water down the media ownership laws that paved the way for Nine’s takeover of Fairfax, is an attempt by media giants to restore the natural order.

But somewhere along the way the ACCC inquiry became more than that. Someone inside the agency seriously put their mind to the existential challenges of Facebook and Google, setting out a detailed framework that would have ended the conceit that the social networks carry no responsibility as publishers.

While limited in scope to exploring the market dominance of the platforms as opposed to the broader social consequences of technological change, the inquiry positioned Australia as a world leader in grappling with the market power of big tech.

The government’s response this week may fall short of realising the ACCC’s ambition of enforceable standards, opting instead for the sort of voluntary codes that any industry lobbyist yearns for, but the intent from the prime minister in launching the report is clear: “The rules that exist in the real world need to exist in the digital world.”

You can quibble with the ambition, and we have, but when a conservative government invests in the ACCC to build its capacity to monitor the market operations of the platforms and get to the bottom of their algorithmic marketing model, something interesting is going on.

The second report is, if anything, even more ambitious in its vision of government’s need to lead us through profound technological change. The Human Rights Commission’s discussion paper on AI technology calls for the establishment of rules around the way automated decisions and data-matching develop in Australia.

The report calls for all AI to be subject to scrutiny around its design and impact on users before it is unleashed on to the public, ensuring it complies with existing laws covering both direct and indirect discrimination.

Commissioner Ed Santow argues that human accountability cannot be automated and that facial recognition technology in particular needs to be tested and thought through before it is unleashed on the Australian public, and that this scrutiny should be the role of a new government body, an AI Safety Commissioner.

In doing so, Santow is challenging some basic tenets of the information economy: that it’s OK to disrupt, move fast and break things; that the benefits of tech advancement outweigh its cost; and that the role of government is to adapt to change rather than step up and shape it.

Research that Essential has conducted around this report shows Australians are looking for government leadership on the issue, with a majority of the public concerned about the automation of decisions.

[Essential poll question: “When a government agency such as Centrelink or the Australian Tax Office makes a decision using artificial intelligence rather than a human decision maker, this is called an automated decision. How comfortable are you about government agencies using artificial intelligence technology to make automated decisions which can affect you?”]

Santow argues that placing guardrails around how Australia develops AI will ultimately serve the national interest – not just protecting citizens but also developing a uniquely Australian AI that is “fair by design” and can become a compelling global export.

But to get to that point, government needs to lead: not just by being more assertive in taking on the recommendations of its expert bodies, but also in the way it uses its citizens’ information.

In an era of declining trust in government, it is hardly surprising that the My Health Record program has stalled, with millions of Australians not prepared to share their medical records, especially under a model where entrepreneurs would have been encouraged to access this data to “innovate”.

More profoundly, the failure of robodebt has reinforced every latent instinct that government is not to be trusted with sensitive information. That the first big government data-matching project was used to chase poor people deemed to have been overpaid says it all.

Imagine the difference in trust dividend if the first application had been to find people who had not claimed benefits they were entitled to and send them a cheque to make good; or to chase down unpaid super; or ensure workers were being paid the right amount of money.

The challenges of rapid technological change provide an opportunity for government to win back public trust, by setting rules that ensure technology serves the public interest and by being a best-practice custodian of our personal information.

As a social democrat, that’s what I want my government to be doing, regardless of its partisan colours. The alternative is a libertarian free-for-all that will only ensure the disruption, division, distraction and displacement of the times accelerate unabated.

Peter Lewis is executive director of Essential Media and the director of the Centre for Responsible Technology, a new initiative of the Australia Institute.
