
Artificial intelligence use guidelines from an IT leadership perspective


Note: This is a reposted article from Donna Roach, University of Utah Health Chief Information Officer, and Steve Hess, University of Utah Chief Information Officer. The original article is available on the UIT website.

To further our collective commitment to high-quality patient care, education, and research, we recognize that emerging, experimental artificial intelligence (AI) tools, such as ChatGPT, offer potentially transformative benefits but must be used with caution to avoid known pitfalls such as bias, privacy violations, copyright infringement, and inaccuracy. These guidelines are intended to support institutional standards for privacy, safety, ethical practices, and data security.

  • Safeguarding privacy and data security: Public AI tools, such as ChatGPT, and the data entered into them fall within the public domain and lack security and regulatory compliance features. Therefore, when using public AI tools, never enter or upload sensitive or restricted data as defined in Rule 4-004C: Data Classification and Encryption, including protected health information (PHI), FERPA-protected data, personally identifiable information (PII), donor information, intellectual property (IP) information, and payment card industry (PCI) information. Note: A business associate agreement (BAA) must be on file with the University of Utah Health Privacy Office for AI-related and other IT products/vendors that store or process PHI.
  • Mitigating misinformation: AI-generated responses can be false. When an AI tool is trained on data containing inaccuracies, its output is prone to be unreliable. This is particularly evident in AI tools that draw from extensive datasets available on the internet and other public sources, which often include inaccurate information. Consequently, if AI tool output is used as a source in research or in authoring documents, it must be verified for appropriateness and accuracy, and the AI tool must be cited; unverified output could compromise the organization’s credibility.
  • Confronting bias in AI output: The data used to train AI tools may contain biases, and those biases may be reflected in AI output. For example, if the training data reflects negative stereotypes or prejudiced views, the AI might produce responses that align with those biases. Whenever AI output is used, it should be reviewed and edited to correct any bias that may be present.
  • Upholding copyright and academic integrity: Many public domain AI tools do not disclose the data sources used to train the technology. As a result, there is a risk that AI tools will reproduce copyrighted material without proper attribution. AI tools should never be used in place of health care recommendations or peer-reviewed research. As with misinformation and bias, AI output must be reviewed for copyrighted material to avoid plagiarism and misattribution.
  • AI as a continuous learning tool: Through strategic integration, AI can enrich and supplement formal training and continuous learning, but it does not replace them. Many resources are available to help you stay informed and safely use AI technology in advancing health care initiatives, teaching, and research.
  • Reporting: Promptly report concerns about the entry of sensitive or restricted data into an AI tool, or a suspected IT security breach, to the University of Utah Information Security Office: iso@utah.edu or 801-587-1925.
  • Further information: Additional guidelines for AI use have been published by the university, including “Guidance on the use of AI in research,” “Fall 2023 instructional guidelines on generative AI,” and “The data privacy ‘GUT check’ for synthetic media like ChatGPT.” If you have questions about AI in general, specific tools, or these general guidelines, please contact the U of U Automation Center of Excellence (ACoE) at ACoE@hsc.utah.edu.