Artificial Intelligence and University Data Policy
Purpose
The purpose of this Policy is to address key risks associated with the use of Artificial Intelligence (AI) technologies. Specifically, it provides guidance on entering University data into AI tools, which may pose cybersecurity threats, raise data privacy issues, and lead to other unintended consequences. The Policy promotes the responsible use of rapidly evolving AI tools and strives to protect the reputation, intellectual property, equity, and privacy of Northern Michigan University, its students, faculty, staff, volunteers, and affiliates. This Policy also outlines important considerations for University personnel and students using AI tools in their University-affiliated capacity. The Policy supplements the Acceptable Use Policy, which codifies more general guidelines for use of University resources and data, as well as the consequences for misuse.
This Policy does not govern student use of AI tools for coursework when such use is explicitly authorized by the course instructor, nor does it address academic integrity related to the use of AI tools. The Policy does not put forth guidance on the responsible use of AI technologies as it relates to environmental impact.
Applicability
This Policy applies to all users of University data and Information Technology (IT) resources, including NMU faculty, staff, students, volunteers, and affiliates.
Scope
This Policy governs the use of AI technologies, including but not limited to:
- Generative AI: AI tools capable of creating new content, such as human-like text (large language models), images, audio, code, or other media.
- Traditional AI: AI systems used for tasks such as data analysis, classification, prediction, interpolation, or automation.
- Other AI technologies as they emerge.
This Policy applies to the use of AI while conducting University business, operations, and/or research, regardless of whether the AI tool is provided by the University or accessed through other means, and whether the AI use occurs on or off campus.
Policy
The University supports the ethical and informed use of AI technologies, but recognizes that all AI technologies carry significant limitations, concerns, and risks. Because AI tools are typically built on algorithms that “learn” from the data and information users input, they introduce risks related to information security, data privacy, and intellectual property and copyright, as well as the risk that output may be biased, misleading, and/or inaccurate. Users must understand that data entered into any AI tool outside the University’s domain may be stored and reused by the service provider, and potentially disclosed to others. The University will strive to provide AI resources where the content entered stays within the University domain. However, while AI tools provided within the University domain may offer a safer environment, users should still exercise caution when using any AI tool.
The Chief Information Security Officer (CISO) will provide guidance to the University community by defining both permitted and prohibited uses of University data in AI tools. The CISO will also provide technical guidance as AI technology evolves.
Permitted Uses
Use of AI tools is permitted where it is not prohibited by this Policy or other University Policies, procedures, guidelines, handbooks, or contractual obligations. Data that is already publicly available and not subject to confidentiality or intellectual property protections may be input into AI tools.
Prohibited Uses
The following may not be intentionally input into any AI tool, whether the tool is outside or within the University’s domain, without the express permission of the CISO or designee:
- Data defined by NMU’s Data Classification Policy as confidential or private.
- Data protected by NMU’s Family Educational Rights and Privacy Act (FERPA) Policy.
- Protected Health Information (PHI), as protected under federal and state privacy laws, including the Health Insurance Portability and Accountability Act (HIPAA).
- Works protected under NMU’s Intellectual Property Policy; refer to that Policy for guidance on intellectual property ownership of work created by NMU faculty, staff, and students.
- Any proprietary or other restricted information subject to a non-disclosure agreement, grant or contract agreement, or other agreement involving a third party.
- Any other data protected by intellectual property laws, including copyright, patent, trademark, or trade secret laws, unless there is written permission from the owner; even publicly available data may be protected by copyright or other intellectual property laws and may have limitations on use.
Related Policies:
Data Classification Policy – https://nmu.edu/policies/1299
Family Educational Rights and Privacy Act Policy – https://nmu.edu/policies/898
Intellectual Property Policy – https://nmu.edu/policies/645
Acceptable Use Policy – https://nmu.edu/policies/719
Artificial Intelligence and University Data (Guideline)
Guideline
Research Use:
Scholarly research data may be entered into AI tools only if it does not fall under the prohibited-use categories outlined above. Before using AI to support research, researchers must carefully assess the nature and sensitivity of the data. Do not input any data that is confidential, contains sensitive information, or is subject to legal or ethical requirements (such as data involving human subjects) into AI tools unless:
- the data has been thoroughly anonymized (one illustrative approach is sketched after this list),
- the potential risks have been scrupulously evaluated, and
- approval has been obtained from the appropriate research compliance body (e.g., the Institutional Review Board).
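To make the anonymization step concrete, here is a minimal Python sketch of stripping and pseudonymizing direct identifiers in a tabular dataset before any AI use. The column names, salt handling, and redaction pattern are hypothetical illustrations only; this is not a prescribed or sufficient anonymization method, and the risk evaluation and compliance approval steps above still apply.

```python
import csv
import hashlib
import re

# Hypothetical direct-identifier columns to drop entirely; adjust per dataset.
DROP_COLUMNS = {"name", "email", "student_id"}

# Salt for pseudonymizing a linking key (real salt management not shown here).
SALT = b"replace-with-a-secret-salt"

def pseudonymize(value: str) -> str:
    """Replace a value with a truncated salted SHA-256 digest so rows stay linkable."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def scrub_free_text(text: str) -> str:
    """Redact email-address-like strings from free-text fields."""
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[REDACTED]", text)

def anonymize(in_path: str, out_path: str, key_column: str = "participant_id") -> None:
    """Write a copy of the CSV with identifier columns dropped and keys hashed."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        fields = [f for f in reader.fieldnames if f not in DROP_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=fields)
        writer.writeheader()
        for row in reader:
            kept = {f: scrub_free_text(row[f]) for f in fields}
            if key_column in kept:
                kept[key_column] = pseudonymize(kept[key_column])
            writer.writerow(kept)
```

Even after such processing, re-identification may remain possible (for example, through quasi-identifiers such as birth dates or ZIP codes), which is why the risk evaluation and compliance approval above are still required.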
AI-Generated Code:
AI-generated computer code must not be deployed in University IT systems and services without thorough human review and explicit approval from the CISO or designee. The CISO may grant pre-authorization for certain categories of AI-generated code, such as scripts for data analysis or non-critical automation, provided the code undergoes appropriate security and functionality assessments. These assessments may include, for example, secure code review checklists, vulnerability scanning, and/or verification of compliance with University security standards.
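As one illustration of what such an assessment could look like in practice, the following Python sketch gates deployment on a clean static-analysis scan and a passing test suite. It assumes the open-source Bandit scanner and pytest are installed, and the directory names are hypothetical; the actual review steps and approval criteria are set by the CISO.

```python
import subprocess
import sys

def run(cmd: list[str]) -> bool:
    """Run one check and report whether it exited cleanly."""
    print(f"Running: {' '.join(cmd)}")
    return subprocess.run(cmd).returncode == 0

def assess(code_dir: str = "ai_generated/") -> int:
    """Illustrative pre-deployment gate for AI-generated scripts."""
    checks = [
        ["bandit", "-r", code_dir],  # static security analysis (recursive scan)
        ["pytest", "tests/"],        # functional verification via the test suite
    ]
    if all(run(cmd) for cmd in checks):
        print("Automated checks passed; proceed to human review and CISO sign-off.")
        return 0
    print("Automated checks failed; do not deploy.")
    return 1

if __name__ == "__main__":
    sys.exit(assess())
```

Note that passing automated checks is a prerequisite for, not a substitute for, the human review and explicit approval described above.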
Meetings and Communications:
Users are expected to inform participants if AI tools will be used in meetings or other shared engagements. Consideration must be given to participants who wish to opt out of such use. Exceptions apply to AI tools approved by Human Resources or Disability Services for individual needs.
Legal and Licensing Compliance:
With any AI use, ensure compliance with applicable laws, University policies, and relevant terms of use or license agreements. Legal and licensing considerations may apply to software, AI tools, and/or the datasets or content used as input.
Verification and Responsibility:
The output of any AI tool must be thoroughly reviewed for accuracy and relevance before use or publication. Users are responsible for this verification, for avoiding the spread of misinformation, and for ensuring alignment with ethical and professional standards.
| Date Approved | 2025-04-28 |
|---|---|
| Last Reviewed | 2025-04-28 |
| Last Revision | 2025-04-28 |
| Approved By | President |
| Oversight Unit | Internal Audit/Risk Management |