Currently (fall 2023) there is no university policy regarding Artificial Intelligence (AI) use in courses. Each faculty member has the academic freedom to decide to what extent, if at all, students may use AI in support of their coursework. Instructors should clearly state on their syllabi and in the instructions for each assignment whether or not students can use AI to complete the assignment.
Below are some resources to help with understanding the context of AI in education.
ChatGPT is one of the most well-known generative AI text chatbots today. You can create an account using a Google account, such as your Boise State account. However, be aware that ChatGPT also requires you to provide a cell phone number to use the service.
Google Bard allows you to log in using your Boise State University Google account. This tool is available to all students, staff, and faculty.
Alternative Option: If you are not comfortable trying either of the options above, you can explore one of the other generative AI tools described on this website:
AI is an impressive technology that can produce human-like responses. However, it's essential to understand that AI does not think in the same way humans do. When humans interpret words, they analyze their meanings and then generate a response based on that understanding. In contrast, generative AI skips the step of meaning interpretation entirely. Instead, it converts words into numbers without any reference to the real-world objects those words represent.
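To make the "words into numbers" step concrete, here is a toy sketch of tokenization. The tiny vocabulary and its IDs are invented for illustration; real systems like ChatGPT use subword vocabularies with tens of thousands of entries, but the principle is the same: the model only ever sees integers, not meanings.

```python
# Toy illustration: a generative model never sees words, only integer IDs.
# This vocabulary is invented for the example; real tokenizers use
# subword pieces and far larger vocabularies.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, ".": 5}

def encode(text):
    """Map each word or punctuation mark to its integer ID."""
    return [vocab[token] for token in text.replace(".", " .").split()]

print(encode("the cat sat on the mat ."))  # [0, 1, 2, 3, 0, 4, 5]
```

Nothing in the list of numbers refers to an actual cat or mat; the model works entirely with these IDs.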
AI Cannot Access Its Training Data
The impressive capabilities of large language models, such as ChatGPT, are achieved through extensive training on vast amounts of data. However, it's important to note that these AI models do not store or search through the input data used during their training. The training data is applied in batches to tune the model's underlying tensors, which are multi-dimensional arrays of numbers. Due to the sheer volume of training data, it is not feasible to store it all, and after each training input is used, it is discarded to make room for the next batch.
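The idea that training data tunes the model's numbers and is then thrown away can be sketched in a few lines. The "batches" and the single weight below are stand-ins invented for illustration; a real model has billions of parameters and the updates come from gradient descent, but the key point holds: after training, only the tuned numbers remain, not the data.

```python
# Toy illustration of why training data is not "stored": each batch
# nudges the model's numeric parameters and is then discarded.
weight = 0.0  # a stand-in for the model's billions of tensor entries

def training_batches():
    # Invented numeric "batches" standing in for text data.
    yield [1.0, 2.0]
    yield [3.0, 4.0]

for batch in training_batches():
    weight += 0.01 * sum(batch)  # tune the parameter using this batch
    del batch  # the raw batch is discarded; only `weight` remains

print(round(weight, 2))  # 0.1 -- the data itself is gone
```

This is why a model cannot quote back or search its training set: the inputs were consumed to adjust numbers, not archived.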
AI is Predictive
Generative AI models, like ChatGPT, are trained on datasets that include numerous example conversations across different contexts. When generating a response, the AI aims to predict the most likely next token (a word fragment or punctuation mark) to follow the given prompt. This means that when asked to provide a citation, ChatGPT is statistically trying to produce something that looks like a citation based on its training data. Sometimes it may generate a correct and relevant citation, but this is an emergent property and should not be mistaken for consistent and reliable behavior.
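A minimal sketch can show what "predicting the most likely next token" means. The tiny corpus below is invented for illustration, and the model is just a frequency count rather than a neural network, but the task is the same one a large language model solves: given a token, output the statistically most likely follower.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows each token in a
# tiny "training corpus," then predict the most frequent follower.
corpus = "the cat sat on the mat the cat ran".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token."""
    return followers[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" most often
```

Notice that the prediction is driven entirely by frequency in the training text, not by whether the output is true, which is also why a model can produce a plausible-looking but fabricated citation.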
AI is Indifferent to Truth
Generative AI models, including ChatGPT, can "hallucinate," generating responses that are not factually accurate. AI is not designed to determine the truth of information. Instead, it is more aligned with making users happy and providing responses that are contextually relevant. Determining the truth is a complex philosophical challenge that even humans often struggle with, and AI, as of now, is not proficient in this aspect.
This guide provides helpful context and sample statements.
Can I use AI chatbots to evaluate student work?
No. Because these chatbots merely predict the next word, they do not evaluate or assess work; only humans can do that. Placing student work into a chatbot may violate FERPA and may also infringe on the student's copyright. Students own their work, and entering it into an AI chatbot may forfeit its copyright protections.
Can I copyright the work my chatbot produces?
Likely not at this time, as copyright covers only human-created works.
Can I use the work I created with a chatbot in my courses and publications?
Possibly, though you must cite your use of AI in those instances.
Can I require students to use ChatGPT or Bard?
No, but you can have students use or learn about generative AI for their assignments. There are many options, and we suggest providing students with a list of possible tools. Some require a separate login, and creating an account may require a cell phone number; others raise additional privacy concerns. Students will want to be careful about what they input into an AI chatbot.
Bard is currently turned on for all Google accounts (students, staff, and faculty) at Boise State University.
Are AI detectors reliable?
Not entirely: they have a 4% false positive rate. This is why, if you suspect a student used AI on an assignment where you said AI was not allowed, you should start with a conversation first.
Vanderbilt University turned off the Turnitin AI detector.
You can ask students not to use generative AI; however, be aware that they may still use it, and it is almost impossible to detect student use of generative AI. It's best to learn how to support students' knowledge of these tools and to explain how they may (or may not) be used in the course you teach. If you suspect a student used AI in your course when it was not allowed, please use this guide to have a conversation with that student:
Learn how to use AI yourself first. Determine which tools are helpful and which are not. Decide whether they offer any value for your discipline or for learning, and if not, why not. Consider having students discuss whether it's a good idea to use AI in their coursework or career. Whatever you decide about your own use and students' use, create clear, simple statements in your syllabus and on assignments describing whether and how students may use generative AI in the course.