Purpose

The University of Northern Iowa (UNI) is committed to the creative, ethical, and secure practice of embracing innovation. The following guidelines were developed to provide a foundation for the selection and appropriate use of Artificial Intelligence (AI) technologies. UNI acknowledges its obligation to provide appropriate security and availability for data and IT resources in its domain of ownership and control. Furthermore, the University recognizes its responsibility to reduce risk to University IT resources by allowing only appropriate use of AI technologies on those resources. It is important to note that AI is a fast-evolving technology and, as such, this document cannot cover all potential use cases that may arise.

The University of Northern Iowa develops, publishes, and enforces policies, procedures, and standards in order to achieve and maintain appropriate protection of university data. This document, along with related security policies, procedures, and standards, identifies key security issues for which individuals, colleges, departments, and units are responsible.

Scope

This procedure is authorized through UNI Policy 14.03 Data Security Policy which states, “The Chief Information Officer (CIO) or their designee shall publish security procedures and standards applicable to all university IT resources. The procedures and standards shall be updated regularly as advances in technology occur and will have the full force and effect of this policy.”

These procedures and guidelines apply to all faculty, staff, students, visitors, partners, patrons, and donors, as well as any other individuals or entities supporting University-related work, efforts, or functions. In addition, these procedures and guidelines apply to all IT resources owned or leased by UNI.

Procedure Statement

Use of Artificial Intelligence technologies must not exacerbate division, harm, bias, or inequity, but should instead foster and improve the success of faculty, staff, and students. AI technologies must be used with care to ensure there is no loss of confidentiality, integrity, or availability of University Data. Further, use should ensure that academic integrity and academic freedom are maintained.

AI technology must be used responsibly to ensure there is no harm of any kind to individuals, the University, or other entities. AI technology should not be used in situations where its use would be seen as a violation of due process or due care.

AI technologies must not be used to impersonate any individual, including their writing style, likeness, or voice, unless part of sanctioned research under the oversight of an institutional review board. This restriction shall not prohibit an individual from using AI to create content in their own likeness or voice as part of an approved accessibility need or accommodation.

It is important to acknowledge the use of AI-generated content or analysis (e.g., in publications, presentations, or coursework). Unless directed otherwise by a supervisor or class instructor, AI technologies used as advanced spelling, grammar, and error checkers do not need to be disclosed so long as the creation of original ideas is fully maintained by the author. University policies governing workplace behavior continue to apply when a university employee uses AI in their university work. The employee generating output from AI is responsible for the appropriate use of that output.

Level III data, including identifiable FERPA-protected data regarding a student, must not be submitted to any Internet-based AI technology without written permission from the CIO or designee. This restriction does not apply to solutions purchased and integrated into core University systems, such as Blackboard, SIS, and Workday, so long as appropriate data security contracts are in place with the vendor.

Process

University Provided & Supported Tools:

Many Internet-based AI technologies allow sharing of content with any user worldwide or with the public via unauthenticated access. Additionally, some AI technologies store all submissions for future processing and data mining. Due care should be taken by all users to prevent accidental or intentional oversharing of University Data or other sensitive information. A loss of confidentiality, integrity, or availability of University data via excess sharing must be reported in a manner prescribed by UNI Policy 14.02 Information Security Incident Response Policy.

Microsoft Copilot has been selected as the primary generative AI tool for the University of Northern Iowa. Copilot is the preferred solution of its type for routine work within the University, as the necessary data protection contracts are in place. Alternate Internet-based AI technologies, such as Google Gemini, Amazon Lex, and others, used in support of University work, efforts, or functions may only be utilized with approval of the CIO or their designee.

AI technologies integrated and enabled within software purchased by the University, such as the Adobe Creative Suite, Google Workspace, Blackboard, Zoom, etc., may also be used so long as the precautions in this procedure are followed. AI software that processes information locally without submission to the Internet may be used for more sensitive work with the approval of the CIO or designee. 

Users should recognize the biases that exist in the data sets used by AI technologies and ensure that their use of AI does not amplify these existing biases. Content generated by AI should be manually reviewed for accuracy before use. AI-generated content may also intentionally or unintentionally violate intellectual property rights and copyright laws. Care must be taken to ensure AI content does not infringe on the intellectual property rights of others.

AI Tool Selection Process:

If an AI technology is not yet specifically approved, please submit a Service Hub request to initiate the review (e.g., compliance, security, accessibility, usability, support services, licensing, contractual, etc.) and approval process. IT may also use this document to determine whether installation or use of an AI technology is appropriate.

Prior to requesting the selection of specific AI technologies, please consider the following:

  • Is this tool being used for business or academic purposes?
  • What information is being submitted to the vendor via the tool?  
    • Does the information include personally identifiable information (PII)?
    • Are there intellectual property considerations?
    • Are there legal or compliance requirements associated with the data/information (e.g., FERPA, PCI, ADA, HIPAA, etc.)?
    • Does the University consider the information confidential or proprietary?
  • Who owns the data and content that will be created or submitted via the AI tool? What are the vendor’s data retention policies?
  • What are the privacy, ethical, risk, safety, and accessibility policies of the vendor? Does the vendor claim to own any University data uploaded to its environment?
  • Is the information being submitted intellectual property?
  • Are students required to use the tool?

Examples of Appropriate Activities:

  • Using AI software to assist in the translation of web pages into different languages.
  • Using AI software to search webpages and create custom responses to queries using natural language processing.
  • Allowing AI technology to make grammatical corrections.
  • Using AI technology for voice, handwriting, and image recognition.
  • Using AI technology for topic generation & analysis.
  • Using AI to produce transcripts.
  • Faculty/staff using Blackboard Learn Ultra’s AI Design Assistant to design courses.

Examples of Disallowed Activities:

  • Using AI to create content using information protected by law or policy.
  • Using non-UNI licensed AI technologies, unless approved by the CIO and/or IT-Information Security. The terms of service for most personal licenses of AI solutions prohibit institutional use, as institutions are expected to license the services directly.
  • Using AI to create content that impersonates an individual's writing, likeness, or voice unless approved by an IRB or through a formal accessibility/accommodation request.
  • Using AI to create content and representing that work as a unique creation of the user.
  • Using AI to create content without appropriate notification or disclaimer to the audience of the content.
  • Using AI to assist in any variety of plagiarism or academic misconduct.
  • Using AI technologies licensed by the University to create content for commercial use or for personal gain not related to University business.
  • Using AI technologies licensed by the University to create content for Federal and State lobbying or political activity, consistent with UNI Policy 10.09.
  • Any other disallowed use found in UNI Policy 14.04 Acceptable Use of Information Technology Resources.

Reporting of AI-Related Misuse

Misuse of AI technology should be reported to the University through a variety of channels including, but not limited to, University leadership, the Chief Information Officer, IT employees, or anonymously via the UNI Confidential Reporting Service website. Uses of AI that violate other University policies will be handled in accordance with those policies. It is important to note that violations of this procedure may be referred for disciplinary action as indicated in UNI Policy 14.04 Acceptable Use of Information Technology Resources. In addition, a critical component of data security is to address security breaches promptly and with the appropriate level of action. All individuals must follow the Information Security Incident Response Policy (UNI Policy 14.02).

Usage of Terms

AI TECHNOLOGY – Artificial intelligence technology intended to augment or replace traditional human work on tasks associated with reasoning, meaning, and experience.

AVAILABILITY – Availability is the ability to assure that systems work promptly and service is not denied to authorized users. A loss of availability is the disruption of access to or use of information or an information system.

CONFIDENTIALITY – Confidentiality ensures that confidential information is only disclosed to authorized individuals. A loss of confidentiality, for the purposes of this policy, is the unauthorized disclosure of information.

DUE CARE – Due care is the standard of care a reasonable person would exercise in the circumstances. It is used to determine negligence.

DUE PROCESS – The following of fair and documented procedures to ensure all legal rights of an individual are maintained by giving appropriate notice to an individual, the opportunity to be heard, and decisions made by a neutral party.

INTEGRITY – Integrity is the appropriate maintenance of information and systems. A loss of integrity is the unauthorized modification or destruction of information.

INTERNET-BASED AI – Internet-based AI requires the use of infrastructure not hosted by the University to analyze and process information. Such internet-based or “cloud” AI technologies require University Data to leave control of the University in order to be processed, and thus require appropriate data security contracts to be in place. Some AI technologies can be integrated into systems hosted at the University and do not require information to leave the University’s control; those are not Internet-Based.

IT RESOURCE – IT resources may include computers, software, servers, network utilization, storage utilization, virtual machine capacity, tablets, phones, multimedia devices, storage devices, wireless spectrum, and any other in-demand resource managed by IT staff.

UNIVERSITY DATA – University data is information that supports the mission and operation of the University. It is a vital asset and is owned by the University. Some university data is shared across multiple units of the University as well as with outside entities.

USER – User includes any faculty, staff, student, developer, contractor, vendor, or visitor as well as any other individual or entity using information, university data, and/or IT resources of the University of Northern Iowa.