Academic Integrity

What is Academic Integrity in the Context of Generative AI?

Generative Artificial Intelligence tools can produce human-like responses to prompts and can perform many tasks effectively. However, it is essential to be critical and responsible about the use of these tools for teaching and learning. 

George Brown College’s Academic Integrity Policy states that “inappropriate use of digital technology...includes but is not limited to the... use of digital technology to obtain an unauthorized academic advantage on an assignment, test, or examination” (2019). 

Offering intentional opportunities for students to discuss or reflect on their learning process can be beneficial in preventing inappropriate uses of AI tools. Additionally, you can provide students with choices for expressing their learning if the learning outcomes don’t specify a particular medium: reflective pieces can be documented through text, video or audio notes, or a graphic or artistic artifact.

Providing alternative ways for students to demonstrate their learning is an excellent practice for many reasons: it supports Universal Design for Learning by allowing students to choose a medium that highlights their strengths, it can remove barriers for learners, and it can increase students’ motivation and connection to their goals and interests.

    Here are some recommendations to promote academic integrity and prevent the inappropriate use of Generative AI tools for assessments:

    • Incorporate peer and self-feedback at different times in the assignment process and integrate explicit opportunities for students to reflect on their learning.
    • Include a section in your Course Outline about your stance on the use of Generative Artificial Intelligence tools for assessments and coursework. This way, students will have clear parameters to follow. Discuss your stance in the classroom. See below for sample texts for your Course Outline.
    • Consider asking students if they plan to use Artificial Intelligence tools to complete assignments, and if they do, invite them to comment on their experience using these tools for their assessments. Students can use comments on Google Docs or Microsoft Word documents to reflect on their thinking process and keep track of changes.
    • Ask students to keep track of the Generative AI tools they use, the prompts, outputs and dates.

    Sample Course Outline Texts about the Use of Generative AI Tools

    Adapted from Durham College's 'How to Incorporate Generative AI' guide under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


    Using generative AI tools (such as, but not limited to, ChatGPT, Copilot, Bing, ChatPDF, or DALL-E) to aid in or fully complete your coursework will be considered a breach of academic integrity, and George Brown’s Academic Integrity Policy will be applied. Generative AI sources are not infallible and have been known to produce inaccurate, biased, unethical, and offensive information.


    Review the course outline and assignment specifications closely to determine where you are permitted to use generative AI. It is your responsibility, as the student, to be clear on when, where, and how the use of generative AI is permitted. Generative AI sources are not infallible and have been known to produce inaccurate, biased, unethical, and offensive information. In all submissions in which you use generative AI, you must reference its use. Failing to disclose the use of generative AI is academic misconduct. In all other aspects of your work, the use of generative AI will be considered a breach of academic integrity, and George Brown College’s Academic Integrity Policy will be applied. If you are uncertain whether you have used generative AI and/or referenced it appropriately, please speak with your professor to discuss permitted uses.


    In all submissions in which you use generative AI, you must reference its use. Failing to disclose the use of generative AI is considered a breach of academic integrity, and George Brown’s Academic Integrity Policy will be applied. However, it is important to understand that all large language models are known to fabricate facts and citations and produce inaccurate outputs, and image-generation models can occasionally create offensive content. You are responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit, regardless of whether it originally comes from you or a generative AI source. If you are uncertain whether you have used generative AI and/or referenced it appropriately, please speak with your professor.


    Talking with Students about the Use of Generative AI

    The transformative potential of Generative AI in teaching and learning has become a source of great interest and concern among faculty, staff, and students in Higher Education. Questions about Academic Integrity and ethical considerations are on everyone’s minds.

    AI technology is evolving quickly, and it is challenging to keep up. We are all learning, trying to understand, making mistakes, and looking for the most equitable, responsible, and meaningful ways to support faculty and students. Many students are already using Generative AI in one form or another in their courses; many others are curious. Here are some recommendations for talking with students about the use of Artificial Intelligence tools for assessments and coursework:

    • Create an opportunity for an open discussion about the value, limitations and ethical implications of using AI for assessments and coursework with your students. You may ask your students some of these questions:  
      • What do you know about Generative Artificial Intelligence?  
      • What is and will be the role of Generative Artificial Intelligence in your field of study or profession? 
      • What are your concerns about Generative Artificial Intelligence? What questions do you have? 
      • What are the potential risks of using Generative Artificial Intelligence tools in an educational setting?
      • How can Generative Artificial Intelligence support you in your learning journey? 
      • How can Generative Artificial Intelligence impact Academic Integrity or stifle independent learning, creativity and critical thinking?

    Recommendations for Students about the use of Generative AI: 

    Feel free to adapt and share all or some of these recommendations with your students: 

    • Ask each of your instructors about the rules for using Generative AI tools in their course.
    • Fact-check the information provided by these tools: be aware of their limitations and potential for bias, use them responsibly, critically evaluate the information they provide, and do not rely on AI as your sole source of information or decision-making.
    • Keep track of all the activities you engage in with Generative AI tools for your coursework: which tools you used, the prompts you used, the links to the outputs, and dates. This documentation may be requested by your instructors. This documentation can also help you reflect on your learning process.  
    • Disclose any Generative AI tools that you use in your assignments or coursework.
    • If you choose to use these tools for study support, make sure you also use other resources and more importantly, connect with your classmates and professors to deepen your learning. A tool like ChatGPT can help you create an outline for an essay or project, create content for flashcards, practice quizzes, or short-answer questions, or propose a study schedule. But collaborating and having conversations with your peers and teachers is the best strategy to enrich your learning experience. 
    • Read GBC’s Academic Integrity Policy to learn about the college’s principles.


    Task Force Mandate

    Academic honesty is central to the learning environment and is an expectation of all applicants, students, faculty, and staff. A breach of academic honesty is considered an offence against the academic integrity of the learning environment. Given the current educational context and the disruptions caused by Artificial Intelligence and other technologies, there is a need to examine our current practices and processes, with deep consideration of how Artificial Intelligence will fundamentally change teaching, learning, and assessment practices. A pan-institutional task force will be assembled to conduct a review of Academic Integrity and make recommendations on how to proactively and intentionally build and reinforce a culture of academic integrity.

    Project Timeline

    November 2023: Inaugural meeting; Assignment of sub-committees.
    December 2023: Sub-committees meet to scope the plan for the Winter semester.
    January-February 2024: Internal and external reviews completed.
    March 2024: Committees share findings and begin to formulate recommendations.
    April 2024: Recommendations from committees collected, collated, and cross-referenced; creation of a draft Recommendation Report to be reviewed by the full Task Force for commentary and discussion.
    May 2024: Recommendation Report submitted to Vice President, Academic.

    Broad Areas of Focus

    Curriculum/Teaching and Learning/Educational Technology

    • Teaching practices – what we teach (Artificial Intelligence, ethical use and implications for students and faculty; risk of perpetuating bias); what academic integrity looks like; use of Artificial Intelligence for access.
    • Assessment practices – integrating Artificial Intelligence/technology; risks of Artificial Intelligence/technology for plagiarism, misuse, copyright issues, etc.; mapping of assessments across a semester.
    • Academic Integrity-related technology: Artificial Intelligence, virtual proctoring, Turnitin
    • Implications for work-placements/practicums

    Processes and Practices for Students

    • Awareness of what Academic Integrity is
    • Awareness of what constitutes academic misconduct (for students)
    • Orientation for new students on Academic Integrity
    • Awareness of policies and processes (for students)
    • Support for International students

    Processes and Practices for Faculty

    • Strategies to foster Academic Integrity
    • Awareness of policies and processes (for faculty)
    • Practices related to Misconduct
    • Training and resources specifically for contract faculty and new faculty (Toolkit)
    • Policies for the use of Artificial Intelligence, student conduct policy, academic misconduct, assessment policy, academic appeals
    • Resourcing to support academic integrity at the college
    • Investigation into incidents
    • Appeals process; Appeals panel composition and training for members
    • External communications about AI, bringing students into the conversation before they start at the college
    • Website with resources easily available to all, including external

    Institutional Research

    Survey of GBC students & faculty about academic integrity experiences, practices, awareness, etc.

    Task Force Membership


    • Heidi Marsh
    • Mark Hanna


    • Adel Esayed
    • Alex Irwin
    • Ana Mateus
    • Andrea Hall
    • Anna Bartsosik
    • Ash Andrews
    • Beth Stockton
    • Bryan Rogers
    • Chilli Leung
    • Chris Kim
    • Chris Sinclair
    • Christine Houston
    • Colin Fitzsimons
    • David Parker
    • Elena Chudaeva
    • Eva Aboagye
    • Gian Michele Pileri
    • Giselle Basanta
    • Hauwa Dogonyaro
    • Heather Buffett
    • Ingrid Wagemans
    • Jason Inniss
    • Joanna Friend
    • Katye Seip
    • Kristen Boujos
    • Lisbia Romero
    • Margrit Talpalaru
    • Michelle Desgroseilliers
    • Monique Bacher
    • Rosa Fracassa
    • Ryan Morrison
    • Shay Steinberg
    • Somi Abalu
    • Susan Toews
    • Timothy Bingham
    • Valerie Scovill
    • Wren Alden