Responsible AI Self-Assessment Tool

This self-assessment tool is for organisations that are developing and/or deploying Artificial Intelligence technology. The tool measures an organisation’s level of maturity with regard to Responsible AI.

Name and email
Q1. Which of the following statements best describes your organisation’s use of AI?
Q2. What is your role in the organisation?
Q3. How many full-time employees does your company employ in Australia?
Q4. Where is your company’s Australian head office located?
Q5. In which industry does your business operate?

Q6. Using a scale of 0 to 10, where 0 is extremely poor performance and 10 is excellent performance, how would you rate your organisation's performance in the following areas regarding the use of AI?

Click and drag on the slider bar to select your answer.

Including both technical and non-technical consultants or professionals (e.g. social scientists, psychologists, ethicists, legal experts), as well as customer representatives, to review AI systems for potential harmful outcomes to customers
Hiring or engaging a diverse workforce (different cultures, genders etc.) to bring broader perspectives and consideration of risks into the development process
Ensuring AI system designers and developers are appropriately skilled and knowledgeable about the ethical implications of their work, including risks of discrimination and bias and techniques to address these
Having appropriate mechanisms in place to allow individuals materially impacted by an AI-driven decision to understand and/or challenge that decision
Scrutinising the systems and processes used by potential AI suppliers to ensure they are designed to not harm, deceive or cause unfair treatment of individuals, communities or groups
Having robust processes to ensure all AI systems are compliant with relevant regulation and laws
Having an ethical (or equivalent risk) framework in place to ensure AI systems are formally and consistently assessed against clear standards that account for their impacts on individuals, communities and groups
Routinely monitoring AI systems using clear metrics designed to trigger suitable corrective or remediation action when AI systems are not working as intended, for example, monitoring bias and the accuracy of decisions
Where decisions have a material impact on individuals, communities or groups, conducting a regular, independent peer review of all aspects of AI systems and their impact
Having a leadership team that is clearly accountable for the impact of AI systems
Having a leadership team that is demonstrably committed to the responsible use of AI
Having a strategy in place for the responsible use of AI that keeps pace with emerging best practice and international frameworks, and is reviewed on an ongoing basis
Having formal organisational routines (for example, rewards and recognition) to incentivise responsible use of AI
Having robust systems and processes in place to ensure personal information used or created by AI systems is appropriately protected
Reviewing underlying databases for potential bias to help ensure AI systems do not result in unfair treatment of or discrimination against individuals, communities or groups
Having documented policies and processes in place to quickly respond to and resolve any adverse customer outcomes caused by the unauthorised use of AI systems
Q7. Has your organisation done any of the following as part of its approach to the deployment of AI?