Data Confidentiality
Do not input confidential information (Levels 1 & 2) into generative AI tools that are publicly accessible.* Any data shared with these tools under standard settings may not be secured and may risk disclosure of sensitive or proprietary details. Assume that any information provided on these publicly accessible AI tools could become public.
Refer to the Data Classification Levels Definitions.
Content Verification
Content generated by AI can be inaccurate, deceptive, or entirely fabricated (commonly called "hallucinations"), and it may inadvertently include copyrighted material. It is your responsibility to verify any AI-generated content before disseminating it.
Phishing Alertness
Generative AI makes phishing attempts and deepfakes easier to produce. Stay vigilant and report any suspicious activity to the Campus HelpDesk.
AI Tool Procurement
Prior to procuring any generative AI tools, please contact the Campus HelpDesk. Assistance can be provided in evaluating the suitability of an AI tool for your requirements or in identifying whether an alternative, pre-approved AI solution is available.
User Privacy
As with any other technology service managed by the CSU, both users and system administrators are governed by the CSU Responsible Use Policy. As that policy lays out, the CSU has legal and operational obligations that require access to user data in the services we offer, but importantly that section ends with: "These activities are not intended to restrict, monitor, or utilize the content of legitimate academic and organizational communications."
*Publicly accessible generative AI tools include (but are not limited to) technologies available to the public outside of an enterprise contract, such as the free or Plus versions of ChatGPT, Microsoft Copilot (previously Bing Chat) without Commercial Data Protection, and any AI tool used under a free trial period.