Appendix C. Acceptable Use of AI Tools
Version 2026.1
This appendix governs the use of third-party AI tools by Changineers staff. LLMs owned, hosted, and maintained by Changineers are out of scope; they are governed by our standard Information Security, Access Management, and Data Management policies.
Approved AI tools
Staff may use the following AI tools for company work:
- Anthropic Claude, including Claude Code and other Claude-powered agentic tools
- GitHub Copilot, including Copilot IDE plugins and Copilot-powered agents
Any other AI tool must be reviewed and approved by the Security Officer before it is used for company work.
All use must be through a company-managed team or organisation plan, and staff must sign in via SSO using their @changineers.com.au account. Free-tier, personal, and consumer accounts must not be used to process Changineers data.
Data that must not be shared with AI tools
The following must never be provided to any AI tool:
- Customer data.
- Credentials, secrets, and access tokens.
- Data classified as confidential or restricted, unless the specific use has been reviewed and approved by the Security Officer.
Internal operational data (documentation, internal communications, planning notes, and similar non-customer information) may be used with approved tools.
Source code may be read and modified by AI tools provided a human reviews any changes before they are merged.
Training data and model improvement
Changineers does not contribute its data to improve external AI models. Where a vendor offers a setting to opt out of training or model improvement, that setting must be enabled. The Security Officer reviews these settings quarterly.
Human review
AI-generated or AI-modified code must be reviewed by a human before being merged. AI tools may open pull requests and suggest changes, but a human approver is required for merge.
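One common way to enforce the human-approver requirement (an illustrative sketch, not a mandated configuration) is GitHub branch protection with "Require review from Code Owners" enabled, paired with a CODEOWNERS file. The team handle below is hypothetical:

```
# .github/CODEOWNERS
# Every path requires review from the engineering team before merge.
* @changineers/engineering
```

With this in place, an AI agent can open a pull request, but the merge button stays disabled until a human code owner approves.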
AI agents operating on Changineers infrastructure have read-only access, enforced through dedicated credentials and IAM policies. Agents may propose fixes via pull requests but cannot perform destructive or state-changing actions. Agent credentials are provisioned and audited through the same mechanisms used for human access.
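Read-only agent access of the kind described above can be expressed as a dedicated IAM policy. A minimal sketch assuming AWS; the service actions and policy name are illustrative, not prescribed by this policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AgentReadOnly",
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "logs:Get*",
        "logs:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
```

Because the policy grants only read and list actions, any destructive or state-changing API call made with agent credentials is denied by default.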
Attribution
Pull requests and other externally visible contributions produced with AI assistance are attributed to the human who reviews and approves them. Commits made by AI agents include a Co-Authored-By trailer identifying the agent so that AI authorship is recorded in version history.
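A trailer of this form can be added with a second `-m` flag when committing. The commit message and agent identity below are illustrative examples, not values mandated by this policy:

```shell
# Record AI co-authorship in version history via a Co-Authored-By trailer.
# The agent identity shown is an example; use the identity your tool emits.
git commit -m "Fix null check in billing export" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"
```

Git stores trailers in the commit body, so the AI authorship remains visible in `git log` and on the hosting platform.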
Incidents
Suspected misuse of AI tools, and incidents involving them (such as confidential or restricted data being sent to a model), must be reported immediately via the incident reporting process in the Employee Handbook.
Revision History
| Date | Summary | Approved by |
|---|---|---|
| 2026-04-24 | Initial revision. | James Gregory |