The dilemma

How can we position Australia well in the global AI landscape, encourage AI adoption and leverage its benefits for greater innovation and productivity, while protecting citizens and minimising risks?

I felt honoured to join such an important discussion hosted by the AIIA (Australian Information Industry Association), and to learn about the latest efforts of the Department of Industry, Science and Resources in this space. Hats off to Simon Bush for co-hosting the meeting so skilfully and inclusively. Balancing community concerns and the trust deficit around AI risks with the country's need to leverage AI's benefits for greater innovation and productivity is not an easy job.

These were some of the questions we explored:

How to position Australia in the AI landscape?
How to protect citizens without stifling innovation?
What are other countries around the world doing?
How to encourage AI adoption in Australia?
What AI guardrails do we need for developers, deployers, distributors, manufacturers and importers?
AI Governance and the shared responsibility model. Who is responsible for what?
How to minimise bias in datasets used to train AI models?
How can we use record-keeping to support transparency? (See the sketch after this list for a toy take on these two questions.)
How to reduce the burden of mandatory guardrails and mitigations on Australian businesses?
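
As a thought experiment on two of the questions above, minimising dataset bias and using record-keeping for transparency, here is a minimal Python sketch. Everything in it (the toy dataset, the 0.2 threshold, the record fields) is my own illustrative assumption, not anything proposed at the meeting:

```python
# A minimal, illustrative sketch (not any official guardrail): it checks a toy
# dataset for a gap in positive-label rates between groups before training,
# and writes a simple training record to support later audits.
import json
from collections import Counter
from datetime import datetime, timezone

# Toy dataset: each row is (protected_attribute, label).
rows = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

def positive_rate_by_group(rows):
    """Share of positive labels per group - a crude bias signal."""
    totals, positives = Counter(), Counter()
    for group, label in rows:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rate_by_group(rows)
gap = max(rates.values()) - min(rates.values())

# Record-keeping: capture what went into training so the model's provenance
# can be reviewed later (when it was trained, on what, with what known gaps).
training_record = {
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "dataset_size": len(rows),
    "positive_rate_by_group": rates,
    "max_group_gap": round(gap, 3),
    "notes": "Gaps above 0.2 are flagged for human review (threshold is illustrative).",
}

with open("training_record.json", "w") as f:
    json.dump(training_record, f, indent=2)

if gap > 0.2:
    print(f"Flag: positive-rate gap {gap:.2f} exceeds the illustrative threshold")
```

Nothing here is hard: the point is that a few lines of routine engineering discipline, measure before you train and write down what you did, already move the needle on both questions.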

I was representing The Mindful AI Manifesto, which advocates for human values in AI, so it was great to see human rights and AI literacy mentioned by other participants. While robust governance frameworks, comprehensive legislation and global standards are essential and urgent, I believe we should not overlook the need to engage SMEs, AI professionals, data scientists, machine learning engineers and the like, who often make critical decisions in their daily jobs. Including ethics in the curriculum of Data Science degrees was also suggested.

The topic of responsible and safe AI is complex and urgent. May we find the answers we seek. Quickly. ⏳

Key takeaways:

🔷 Defining AI is not an easy task. Where does it start and stop?

🔷 AI literacy is an important aspect of the accountability framework

🔷 According to surveys, there is an AI trust deficit in the community

🔷 Key community concerns around AI are related to access to education, health and credit

🔷 AI risks: bias/discrimination, deepfakes, misuse of information, loss of control, workforce impact, and fundamental risks to security and the economy

🔷 We are working with counterparts overseas and looking at other Commonwealth countries as role models for guidance

🔷 An Australian AI Act is on the table

🔷 The European AI Act will start shifting behaviour globally

🔷 The implications of AI for national intelligence are on the radar

🔷 The limitations of domestic regulation and the accountability of supply chains

🔷 Voluntary standards set by SMEs are a thing

🔷 Transparency – knowing how an AI model was trained

🔷 The need for certification by regulators

🔷 High risk could be defined by whether AI is applied to critical or non-critical systems, driven by use case, consumer/business impact, reputation, compliance, legal & liability, etc.

🔷 A holistic risk framework addressing transparency and security, cutting across fairness, privacy, transparency and explainability, with distinct checklist attributes for critical and non-critical systems, helps bring pragmatic ways of acting on risks once they are identified: mitigate, avoid, defer or absorb (a toy sketch follows below)
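
To make the last two takeaways a little more concrete, here is a toy Python sketch of how a critical/non-critical classification could feed the mitigate/avoid/defer/absorb decision. The attributes, thresholds and actions are purely illustrative assumptions on my part, not a proposed regulatory scheme:

```python
# An illustrative sketch of the "critical vs non-critical" classification idea
# discussed above. The fields, thresholds and action mapping are assumptions
# chosen for demonstration only.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    use_case: str
    affects_access_to_essentials: bool  # e.g. education, health, credit
    legal_liability_exposure: bool
    reputational_impact: bool

def classify(system: AISystem) -> str:
    """Critical if the system gates essential services or carries legal exposure."""
    if system.affects_access_to_essentials or system.legal_liability_exposure:
        return "critical"
    return "non-critical"

def aftermath_action(tier: str, residual_risk: float) -> str:
    """Map an identified risk to one of: mitigate, avoid, defer, absorb."""
    if tier == "critical":
        return "avoid" if residual_risk > 0.7 else "mitigate"
    return "defer" if residual_risk > 0.5 else "absorb"

loan_scorer = AISystem("loan_scorer", "credit approval", True, True, True)
tier = classify(loan_scorer)
print(tier, aftermath_action(tier, residual_risk=0.8))  # -> critical avoid
```

The design choice worth noticing is that the classification and the response are separate steps: what makes a system critical is a policy question, while what to do about a given residual risk is an operational one, and keeping them decoupled lets each evolve independently.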
