

Draft: Guidelines for Using Generative AI in Research

Generative AI is transforming the research landscape, potentially accelerating scientific discovery and amplifying creativity and innovation in the process. At the same time, tools built with large language models (LLMs), machine learning, and other AI technologies have introduced new challenges and risks, both for individuals and for institutions. All researchers, regardless of discipline, must increasingly navigate this complex and evolving landscape. UVic endorses and will follow the Tri-agency guidance on the use of generative AI in research. We anticipate that this guidance will be updated regularly in the coming years to keep pace with the rapid evolution of the field.

The Office of the Vice-President of Research and Innovation (OVPRI) is committed to monitoring the emergence of generative AI tools and their implications for researchers and society, and to supporting UVic researchers as they begin to incorporate these tools into their work. In consultation with technology experts and researchers from across campus, OVPRI has developed and endorsed the following guidelines for the use of generative AI in research.

Authorship & Accountability

A work’s human author(s) or creators remain fully accountable for its contents, regardless of whether that content was generated in whole or in part through generative AI technologies. Many generative AI tools are prone to ‘hallucinate’, i.e. to fabricate facts or sources, or to misattribute statements to real sources. It is entirely incumbent on researchers using these tools to ensure the validity, accuracy, and truthfulness of their work, regardless of the tools or resources used in its development. This principle applies to all research works and documents, from project proposals and grant applications to finished manuscripts and everything in between.

Transparency

Researchers should embrace full transparency when communicating the role of these technologies in their work. Such transparency accords with the norms of scientific reproducibility, rigour, and openness. It is the responsibility of all authors to review, understand, and adhere to the specific requirements of academic publishers, editorial boards, and journals in this regard, and OVPRI and UVic Libraries will support researchers in meeting this requirement. Some referencing styles, such as MLA and Chicago, have developed initial recommendations for citing generative AI content, but citation standards are evolving, and it is incumbent on researchers to ensure they are following the most recent guidance. Best practice requires full transparency about which tool or tools have been used, and when and how they have been used. Researchers should also disclose known limitations or biases associated with such tools. Finally, researchers should consider the transparency of generative AI tools themselves, including their openness about training data and methods, before adopting them in their work.

Data Security & Confidentiality

Researchers are responsible for understanding the full implications of these tools for data security and confidentiality. Material uploaded into generative AI tools may not have adequate privacy protections and in some cases may be used for other purposes, including the training of new models. With few exceptions, this means that no personal information or private or confidential research data should be uploaded into these tools. Doing so may violate UVic’s Protection of Privacy Policy or the UVic Information Security Policy. Due to such concerns, the Tri-agency guidelines prohibit the use of generative AI tools in the evaluation of grant proposals. Reviewers should confer with Tri-agency staff if they have questions. Within UVic, researchers are encouraged to contact UVic Systems or OVPRI with questions about the security implications of specific tools. Researchers should also be aware that uploading published material, including their own publications, to generative AI tools may be found to infringe copyright protections, though the current legal framework governing such cases in Canada is unclear.

Avoiding Harm

The use of generative AI can introduce significant risks. Before integrating such tools into their work, researchers are therefore encouraged to assess these risks thoroughly and systematically. One such risk is the introduction of bias: many AI tools exhibit systemic biases arising from their underlying data, and such biases must not be allowed to reinforce and entrench themselves further through the uncritical adoption of these tools. Judgements about research quality and impact, or about the suitability of candidates for any position, remain best left to an appropriate selection of peers; the use of generative AI tools is inappropriate in the context of most research assessment. Generative AI may also pose unique risks in the context of Indigenous research, given these tools’ lack of attention to other knowledge systems and world views. The use of AI in research also carries significant resource and environmental impacts. Researchers are encouraged to keep these in mind as they explore the potential advantages and disadvantages of these tools for supporting their research, and to avoid frivolous or excessive use.

Embracing Curiosity & Change

The risks and limitations of generative AI are real, and they warrant care. But the potential of these tools to spark and accelerate discovery is also real. UVic and OVPRI are committed to supporting researchers as they explore these opportunities. Additional education and training opportunities will be developed by UVic Libraries, University Systems, and other campus partners. New information and resources related to using AI tools for research will be provided as they become available. Generative AI tools developed specifically to support research will be assessed and evaluated, and institutional subscriptions to such tools will be investigated. OVPRI will also sponsor a series of open dialogues in the coming months to encourage interdisciplinary exploration of the issues and implications for research arising from these tools. In the spirit of UVic’s Strategic Plan and X̣əčiŋəɫn̓əw̓əl | XEĆIṈEȽNEUEL, we must be bold enough to embrace a culture of change and transformation while building on a foundation of responsibility and trust.

Resources

UVic LibGuides

Workshops & Seminars

Other Resources

Microsoft 365 Copilot

Faculty and researchers should consider the advantages of using Microsoft Copilot relative to other generative AI tools in situations where there may be concerns about data security. Microsoft 365 Enterprise Copilot has been approved for use by the UVic Privacy Office and safeguards user data by adhering to stringent privacy and security standards. While you are logged into your UVic M365 account, Copilot ensures that any information you input, retrieve, or generate remains within UVic’s Microsoft 365 service boundary, in line with existing commitments such as the General Data Protection Regulation (GDPR) and the European Union (EU) Data Boundary. Copilot accesses only the data you have permission to view, using Microsoft Graph to provide relevant responses without using your data to train its foundational large language models. It also employs multiple protections to block harmful content and detect protected material. For more information, see:

Researchers who wish to use other AI services are encouraged to engage the UVic Privacy Office to ensure they meet all regulatory requirements.

Call for feedback

We are seeking feedback on the draft guidelines from the UVic research community. Please share any thoughts you may have by March 20th.