Artificial Intelligence Transparency Statement
The Australian Digital Health Agency (the Agency) is committed to supporting a healthier future for Australians through connected healthcare. We recognise that this vision will increasingly be supported by the adoption and use of digital solutions such as Artificial Intelligence (AI) and other emerging technologies. We may use these innovative solutions to increase staff efficiency, improve service delivery and support Australians in managing their health and care journey.
This transparency statement is aligned with the Australian Government policy for the responsible use of AI in government. We understand that transparency is critical to building trust that AI is used responsibly and that the broader impacts of AI systems are appropriately considered.
As our use of AI continues to evolve, we are committed to maintaining transparency. We will update this statement to reflect changes and advancements in our AI technologies and practices.
AI safety and governance
The Agency is committed to assessing benefits and mitigating risks associated with AI tools. This includes protecting the public against negative impacts of AI. We do this through our internal governance committees, policies, processes, guidelines and staff training. Any use of AI tools will occur in accordance with applicable legislation, regulations, frameworks and policies.
The Agency's governance approach is focused on ensuring that we use technology responsibly. When we consider any new technology solution, we focus on safety, ethics, transparency, security, privacy and accountability. We reflect on these elements before we introduce new AI technologies, and they guide us as we monitor, evaluate and improve the use of AI.
We do this in the context of our Clinical Governance Framework, which supports clinical safety, quality and continuous improvement of our products and services, through collaboration and evidence-based practice.
Use of AI
Currently, the Agency uses some AI technologies for the 'Workplace Productivity' and 'Analytics for Insights' usage patterns within the Agency's internal IT systems. This occurs predominantly in the 'Corporate and Enabling' domain, with a small amount of activity in the 'Policy and Legal', 'Scientific', and 'Compliance and Fraud Detection' domains, as defined in the classification system for AI use. For example, we use AI to support IT and cyber security monitoring, data analysis and some internal documentation tasks.
In 2024, the Agency participated in the Australian Government's trial of a generative AI service, Microsoft 365 Copilot, in its internal corporate environment. The Agency has extended its trial period, to enable further evaluation to occur. Prior to using Copilot, staff were required to complete training on appropriate use of this tool.
We do not currently have any AI solutions that members of the public can interact with, or that would significantly impact them.
Over time, the Agency may consider introducing additional AI technology, where we determine that its use would be responsible, safe, ethical, appropriate and in line with national policy and relevant regulatory frameworks.
Staff training
As outlined in our Workforce Strategy, we are committed to building staff capability, including digital capability required at all levels. We offer AI fundamentals training to all staff and provide a range of other opportunities to enable our workforce to develop their skills and knowledge.
Additional training needs will be assessed as the Agency's use of AI evolves. This may include product-specific training, such as the training that was provided to staff participating in the Agency's trial of Microsoft 365 Copilot.
Policy for the responsible use of AI in government
The Branch Manager, Architecture has been appointed as the Agency's AI Accountable Official.
The AI Accountable Official is responsible for overseeing implementation of the Australian Government policy for the responsible use of AI in government. They will also engage in whole-of-government AI forums and processes, and monitor changes to requirements over time.
Additional information
This AI transparency statement was last updated in February 2025. Further adjustments will be made if the Agency's use of AI changes. This statement will be reviewed annually at a minimum.
For enquiries, visit our contact us page or email help@digitalhealth.gov.au.