With how accessible and easy to use AI has become, it's no surprise that people tend to over-use it, especially middle and high school students. Tedious homework, difficult papers, and more: the list goes on of assignments that LLMs can complete for students in minutes, sometimes seconds. There's even a new verb for it: to "chat" something now means using ChatGPT to do an assignment or assessment. With more and more students "chatting" their work, schools have grown more anti-AI and anti-technology than ever before, and AI bans and phone bans have made the resource even harder to access.

The presentation will dive into this dynamic from a high school student's perspective: AI has the potential to be a tool that helps you do your work, if it is used ethically. The challenge? Getting all the teenagers to decide to use it ethically rather than cut corners.

My argument in the presentation is that AI providers should be required to verify a user's age and school enrollment, and, when a user is identified as a student, apply built-in usage metrics that prevent over-use. A rough sketch of that idea follows below. This system would have drawbacks, however, because it is difficult to set an exact bright line. There will always be workarounds, and trying to close every one of them would probably prevent many students from using the tool in ways that aren't necessarily unethical.
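To make the proposal a little more concrete, here is a minimal sketch of what a "check enrollment, then cap usage" rule could look like. Everything here is hypothetical: the `UserProfile` fields, the daily cap of 10, and the idea of counting requests per day are assumptions for illustration, not any real product's policy.

```python
from dataclasses import dataclass

# Purely illustrative sketch (hypothetical names and numbers): if a verified
# student user exceeds a daily cap of requests, the tool throttles further use.

@dataclass
class UserProfile:
    age: int
    enrolled_in_school: bool
    requests_today: int

DAILY_STUDENT_CAP = 10  # assumed cap; a real policy would need a defensible bright line


def allow_request(user: UserProfile) -> bool:
    """Allow the request unless the user is an identified student over the cap."""
    is_student = user.age < 18 and user.enrolled_in_school
    if is_student and user.requests_today >= DAILY_STUDENT_CAP:
        return False  # the built-in metric kicks in to prevent over-usage
    return True


# Example: a 16-year-old enrolled student who has already hit the cap is blocked.
print(allow_request(UserProfile(age=16, enrolled_in_school=True, requests_today=10)))  # False
```

Even this toy version shows where the bright-line problem comes in: any single number for the cap, or any single definition of "student," will block some legitimate uses while leaving workarounds open.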