To use AI safely and responsibly, please follow this checklist to protect university data:
Users of AI tools are responsible for the data they generate. When using consumer AI applications, whether purchased, free, or open source (e.g., ChatGPT, DALL-E, Claude), exercise caution, as the data may become public. While major university software contracts (such as those with Microsoft, Canvas, Zoom, and Adobe) prioritize data protection, it is crucial to adhere to established data handling guidelines.
Never input confidential or legally restricted data (e.g., personnel records or data protected by FERPA, HIPAA, or PCI) into any AI tool. This includes any information classified as Orange or Red under the university's data classification policy.
Treat all information shared through AI tools as if it will be made public. Avoid sharing personal or sensitive information, and be aware that data entered into AI tools may be retained by the vendor.
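As a practical illustration of the two items above, here is a minimal sketch of a pre-submission check, written in Python. All names in it are hypothetical (`safe_to_submit`, `SENSITIVE_PATTERNS`, and the use of "orange"/"red" as string labels are illustrative stand-ins, not part of any official university tool), and the regex patterns catch only a few obvious cases:

```python
import re

# Illustrative tier labels standing in for the university's actual
# Orange/Red classification levels, which are barred from AI tools.
BLOCKED_CLASSIFICATIONS = {"orange", "red"}

# A few obvious patterns; real FERPA, HIPAA, or PCI data takes many more
# forms, so matching like this can supplement, never replace, human judgment.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def safe_to_submit(text: str, classification: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons): may this text be entered into an AI tool?"""
    reasons = []
    if classification.lower() in BLOCKED_CLASSIFICATIONS:
        reasons.append(f"classification {classification!r} is prohibited")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            reasons.append(f"possible {label} detected")
    return (not reasons, reasons)

ok, reasons = safe_to_submit("Advisee SSN: 123-45-6789", "Green")
if not ok:
    print("Do not submit:", "; ".join(reasons))  # possible SSN detected
```

A filter like this can catch a careless paste, but the underlying rule still applies: if data is classified Orange or Red, or is otherwise legally restricted, it stays out of AI tools regardless of what an automated check reports.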
All AI purchases, including enhancements to existing licenses, must comply with the university's established procurement process. This ensures that appropriate legal, privacy, and security reviews are conducted.
When using university devices or data, work with AI solutions that have been reviewed and approved by the university. Unlike free consumer versions, these enterprise-wide tools require your campus login and typically do not use your data to train their systems.
AI-generated responses can be biased, inaccurate, or inappropriate, and may include unauthorized copyrighted material. It is each user’s responsibility to review and verify the outputs generated by AI tools to ensure accuracy and appropriateness.
Policy 52 – Responsible Use of Information Technology Resources
Policy 97 – Information Security and Privacy Governance
Policy 117 – Information Security
Policy 122 – Video Capture
Policy 119 – Software Lifecycle
Data Handling Procedures Related to the Information Security & Privacy Governance Policy
Adapted from