Science and Technology
Source: The Michigan Daily

U-Michigan Students and Staff Debate the Role of AI in Campus Therapy

Published in Science and Technology by The Michigan Daily
  • This publication is a summary or evaluation of another publication
  • This publication contains editorial commentary or bias from the source

The University of Michigan’s campus is abuzz with a heated debate over whether artificial‑intelligence (AI) tools should be used to provide mental‑health support to students. According to a recent article in The Michigan Daily, the discussion—spanning campus forums, email threads, and social‑media conversations—has drawn a wide range of voices, from student activists to faculty members, from counseling‑center staff to administrators. The core question is simple yet complex: can a chatbot or other AI system safely and ethically supplement—or even replace—human therapy for the students who rely on the university’s counseling resources?


The Rise of AI‑Based Therapy

The debate was sparked by a pilot program announced by the U‑M Counseling Center in the spring, which would allow students to access a chatbot trained on therapeutic principles, in addition to the existing lineup of licensed counselors and psychologists. The bot—named “Moxie” in the article—uses natural‑language processing to provide psycho‑educational content, coping strategies, and a “listening” interface that students can engage with at any time.

AI‑based therapy is not a new idea. Across the United States, tech companies such as Woebot and Replika have marketed chatbot‑driven mental‑health interventions, while the American Psychological Association (APA) has begun drafting guidelines on how such tools might be integrated into clinical practice. The Michigan Daily notes that the U‑M team consulted with the APA’s Digital Health Task Force before rolling out Moxie, and that the bot is designed to flag severe distress signals and prompt students to seek in‑person care.
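The article does not describe how Moxie detects "severe distress signals" internally. As a purely illustrative sketch of the kind of rule-based escalation logic such a bot might apply, with an invented cue list and function name (neither comes from the article):

```python
# Hypothetical sketch: flag messages containing severe-distress cues and
# route the student toward in-person care. Moxie's actual implementation
# is not public; the cue list and return values here are illustrative.

SEVERE_CUES = {"hurt myself", "end it all", "suicide", "can't go on"}

def assess_message(text: str) -> str:
    """Return a routing decision for a single chat message."""
    lowered = text.lower()
    if any(cue in lowered for cue in SEVERE_CUES):
        # Escalate: prompt the student to seek human, in-person care.
        return "escalate"
    return "continue_chat"

print(assess_message("I feel like I can't go on anymore"))  # escalate
print(assess_message("Exams are stressing me out"))         # continue_chat
```

Real systems typically combine classifiers with human review rather than relying on keyword matching alone; this sketch only shows the escalation pattern the article alludes to.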


Proponents: Convenience, Accessibility, and Cost

Students who champion the pilot argue that AI offers a low‑barrier first step for those hesitant to seek human help. “I think it’s a great idea,” says Maya Chen, a sophomore in the College of Literature, Science, and the Arts (LSA). Chen says she used the chatbot on a night when she felt overwhelmed and appreciated having someone—or something—listen without scheduling a 30‑minute appointment. “The anonymity and immediacy feel less intimidating.”

Other proponents point to data that AI‑therapy programs can reduce wait times for appointments and help counseling centers triage cases more efficiently. A faculty member in the U‑M Department of Psychology, Dr. Rahul Patel, notes that “chatbots can handle a range of low‑intensity interventions, freeing up counselors for more complex cases.” Patel also highlights that AI could help staff monitor student mental‑health trends on a macro‑level, allowing for early interventions at the population level.


Opponents: Ethical, Clinical, and Safety Concerns

Opponents of the program raise a host of concerns. First, they question whether an algorithm can truly “understand” human suffering. A graduate student in the U‑M School of Social Work, Alex Reyes, says the bot “may provide generic or misdirected advice that could exacerbate anxiety.” Reyes also points out that AI lacks empathy, nuance, and cultural competency—qualities that licensed professionals are trained to bring into therapy.

Second, privacy advocates worry about data security. While the Counseling Center claims that conversations are encrypted and that no personal identifiers are stored, the article references a 2022 study in Health Affairs that found “chatbot logs occasionally reveal sensitive information that can be accessed by third parties if the platform is not secure.” According to the U‑M legal counsel, the counseling center is in the process of negotiating an updated data‑protection agreement with the AI vendor to address these risks.
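The vendor's actual data-protection measures are not detailed in the article. One common safeguard for the "no personal identifiers are stored" claim is scrubbing logs before they are written; the patterns and placeholder tokens below are simplistic examples, not the vendor's scheme:

```python
# Illustrative sketch of scrubbing obvious personal identifiers from chat
# logs before storage. The regex patterns are deliberately simple examples;
# production redaction is far more involved.
import re

PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{8}\b"), "[STUDENT_ID]"),               # 8-digit IDs
]

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(scrub("Email me at mchen@umich.edu or call 734-555-0199"))
# Email me at [EMAIL] or call [PHONE]
```

Note that scrubbing addresses stored logs only; the transport-level encryption the Counseling Center cites is a separate layer.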

Third, critics highlight the risk of “false reassurance.” A clinical psychologist who works at the center’s on‑campus clinic warns that students may interpret the bot’s suggestions as definitive treatment plans rather than prompts to seek human help. “If a student feels better after a chat, they might postpone seeing a counselor,” she says. “That delay could be dangerous for students with deeper issues.”


The Administrative Perspective

University administrators see the AI initiative as part of a broader strategy to modernize campus services. In a statement to The Michigan Daily, the Vice‑President for Student Affairs acknowledges the controversy but emphasizes the Center’s commitment to “student wellness.” “We are exploring every tool that can expand access to care, and AI is one such tool,” the VP said. “But it is only one component of a multi‑layered support system.”

The article also references a forthcoming campus‑wide “Mental Health Task Force” that will include faculty, student leaders, and staff. The task force’s mandate is to evaluate the pilot, gather data on outcomes, and develop best‑practice guidelines that balance technological innovation with ethical responsibility.


What’s Next?

The pilot is currently running on a limited schedule, and the Counseling Center has pledged to collect rigorous data—such as student satisfaction, frequency of usage, and any adverse events—over the next academic year. In addition to the data, the Center plans to hold quarterly town‑hall meetings where students can ask questions, voice concerns, and provide feedback.

A notable next step, highlighted in the article, is the potential partnership with the U‑M School of Information. The two departments are negotiating a research collaboration that would study the efficacy of the chatbot through controlled trials, comparing outcomes between students who used the bot and those who received traditional therapy.
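The proposed trial would compare outcomes across the two arms. As a minimal sketch of that comparison using invented placeholder scores (no study data exists yet), assuming outcomes are measured as improvement on a symptom scale:

```python
# Sketch of the comparison a controlled trial might report: mean change
# in a symptom score for the chatbot arm vs. the traditional-therapy arm.
# All numbers below are invented placeholders, not study data.
from statistics import mean, stdev

bot_group = [4, 3, 5, 2, 4, 3]        # score improvements, chatbot arm
therapy_group = [5, 6, 4, 5, 7, 5]    # score improvements, counselor arm

bot_mean, bot_sd = mean(bot_group), stdev(bot_group)
ther_mean, ther_sd = mean(therapy_group), stdev(therapy_group)

print(f"chatbot arm:   mean={bot_mean:.2f} sd={bot_sd:.2f}")
print(f"counselor arm: mean={ther_mean:.2f} sd={ther_sd:.2f}")
print(f"difference in means: {ther_mean - bot_mean:.2f}")
```

An actual trial would add a significance test and control for baseline severity; this only illustrates the shape of the outcome comparison.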


Takeaway

The U‑Michigan debate over AI therapy underscores a national conversation about the intersection of technology and mental health. While the potential to reduce barriers to care is compelling, the risks of misdiagnosis, data breaches, and over‑reliance on automated systems cannot be ignored. As the university navigates this uncharted territory, it will need to balance innovation with rigorous ethical safeguards. Whether the AI bot will become a standard adjunct to counseling services—or be discontinued in favor of more human‑centric approaches—remains to be seen. The coming months will likely set a precedent for how higher‑education institutions incorporate AI into their health‑care ecosystems.


Read the full The Michigan Daily article at:
[ https://www.michigandaily.com/news/campus-life/u-m-students-staff-debate-use-of-ai-for-therapy/ ]