Alongside has big plans to interrupt harmful cycles before they turn clinical, said Dr. Elsa Friis, a licensed psychologist for the company, whose background includes identifying autism, ADHD and suicide risk using Large Language Models (LLMs).
The Alongside app currently partners with more than 200 schools across 19 states, and collects student chat data for their annual youth mental health report, which is not a peer-reviewed publication. Their findings this year, said Friis, were surprising. With almost no mention of social media or cyberbullying, the student users reported that their most pressing issues had to do with feeling overwhelmed, poor sleep habits and relationship problems.
Alongside touts positive and insightful data points in its report and a pilot study conducted earlier in 2025, but experts like Ryan McBain, a health researcher at the RAND Corporation, said that the data isn’t robust enough to know the true implications of these types of AI mental health tools.
“If you’re going to market a product to millions of children and adolescents throughout the US through school systems, it should meet some minimum standard in the context of actual rigorous trials,” said McBain.
But beneath all of the report’s data, what does it really mean for students to have 24/7 access to a chatbot that’s designed to address their mental health, social and behavioral concerns?
What’s the difference between AI chatbots and AI companions?
AI companions fall under the larger umbrella of AI chatbots. And while chatbots are becoming more and more sophisticated, AI companions are distinct in the ways that they interact with users. AI companions tend to have fewer built-in guardrails, meaning they’re coded to endlessly adapt to user input; AI chatbots, on the other hand, may have more guardrails in place to keep a conversation on track or on topic. For example, a troubleshooting chatbot for a food delivery company has specific instructions to carry on conversations that only pertain to food delivery and app issues, and isn’t designed to stray from the topic because it doesn’t know how to.
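For readers curious what that kind of scoping looks like under the hood, here is a minimal sketch of a topic-restricted support bot. It assumes the OpenAI Python SDK, and the model name, prompt wording and function names are placeholders, not Alongside’s or any vendor’s actual code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a customer support assistant for a food delivery app. "
    "Only answer questions about orders, deliveries, refunds and app issues. "
    "If the user asks about anything else, politely decline and steer the "
    "conversation back to food delivery support."
)

def support_reply(user_message: str) -> str:
    # The system prompt acts as the guardrail: it pins the bot to one topic
    # instead of letting it adapt endlessly to whatever the user brings up.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(support_reply("My order never arrived. What should I do?"))
```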
But the line between AI chatbot and AI companion becomes blurred as more and more people use chatbots like ChatGPT as an emotional or therapeutic sounding board. The people-pleasing features of AI companions can and have become a growing concern, especially when it comes to teens and other vulnerable people who use these companions to, at times, validate their suicidality, delusions and unhealthy dependency on the companions themselves.
A recent report from Common Sense Media expanded on the harmful effects that AI companion use has on adolescents and teens. According to the report, AI platforms like Character.AI are “designed to simulate humanlike interaction” in the form of “digital friends, confidants, and even therapists.”
Although Common Sense Media found that AI companions “pose ‘unacceptable risks’ for users under 18,” young people are still using these platforms at high rates.

Seventy-two percent of the 1,060 teens surveyed by Common Sense said that they had used an AI companion before, and 52% of teens surveyed are “regular users” of AI companions. Still, for the most part, the report found that the majority of teens value human friendships more than AI companions, don’t share personal information with AI companions and hold some level of skepticism toward AI companions. Thirty-nine percent of teens surveyed also said that they apply skills they practiced with AI companions, like expressing emotions, apologizing and standing up for themselves, in real life.
When comparing Common Sense Media’s recommendations for safer AI use to Alongside’s chatbot features, the app does meet some of those recommendations, like crisis intervention, usage limits and skill-building elements. According to Mehta, there’s a big difference between an AI companion and Alongside’s chatbot. Alongside’s chatbot has built-in safety features that require a human to review certain conversations based on trigger words or concerning phrases. And unlike tools like AI companions, Mehta continued, Alongside discourages student users from chatting too much.
One of the biggest challenges that chatbot developers like Alongside face is mitigating people-pleasing tendencies, said Friis, a defining characteristic of AI companions. Guardrails were put in place by Alongside’s team to avoid people-pleasing, which can turn sinister. “We aren’t going to adapt to foul language, we aren’t going to adapt to harmful behavior,” said Friis. But it’s up to Alongside’s team to anticipate and determine which language falls into harmful categories, including when students try to use the chatbot for cheating.
According to Friis, Alongside errs on the side of caution when it comes to determining what kind of language constitutes a concerning statement. If a chat is flagged, teachers at the partner school are pinged on their phones. In the meantime, the student is prompted by Kiwi to complete a crisis assessment and directed to emergency service numbers if needed.
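As a rough illustration of that escalation flow, the sketch below flags a message against a list of concerning phrases, pings school staff, and returns a crisis prompt. The phrase list, function names and alert mechanism are assumptions made for illustration; the article describes Alongside using a trained in-house model plus human review, not a simple keyword check.

```python
CONCERNING_PHRASES = {"kill myself", "kms", "want to die", "hurt myself"}

CRISIS_PROMPT = (
    "It sounds like you might be going through something really hard. "
    "Let's walk through a few questions together. If you are in immediate "
    "danger, call or text 988, or dial 911."
)

def is_concerning(message: str) -> bool:
    """Crude keyword check standing in for the trained crisis model."""
    text = message.lower()
    return any(phrase in text for phrase in CONCERNING_PHRASES)

def notify_school_staff(student_id: str) -> None:
    """Stand-in for pushing an alert to designated adults' phones."""
    print(f"ALERT: review flagged conversation for student {student_id}")

def handle_message(student_id: str, message: str) -> str:
    if is_concerning(message):
        notify_school_staff(student_id)
        return CRISIS_PROMPT                     # chatbot starts a crisis assessment
    return "Thanks for sharing. Tell me more."   # normal conversation continues
```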
Addressing staffing shortages and resource gaps
In school settings where the ratio of students to school counselors is often impossibly high, Alongside acts as a triaging tool or liaison between students and their trusted adults, said Friis. For example, a conversation between Kiwi and a student might involve back-and-forth troubleshooting about creating healthier sleep habits. The student might be prompted to talk to their parents about making their room darker or adding a nightlight for a better sleep environment. The student might then come back to their chat after a conversation with their parents and tell Kiwi whether or not that solution worked. If it did, the conversation concludes, but if it didn’t, Kiwi can suggest other potential solutions.
According to Dr. Friis, a couple of five-minute back-and-forth conversations with Kiwi would translate to days if not weeks of conversations with a school counselor who has to prioritize students with the most severe issues and needs, like repeated suspensions, suicidality and dropping out.
Using digital technologies to triage health issues isn’t a new idea, said RAND researcher McBain, who pointed to doctors’ waiting rooms that greet patients with a health screener on an iPad.
“If a chatbot is a slightly more dynamic user interface for gathering that kind of information, then I think, in theory, that isn’t an issue,” McBain continued. The unanswered question is whether chatbots like Kiwi perform better than, as well as, or worse than a human would, but the only way to compare the human to the chatbot would be through randomized controlled trials, said McBain.
“One of my biggest fears is that companies are rushing in to try to be the first of their kind,” said McBain, and in the process are lowering the safety and quality standards under which these companies and their academic partners circulate positive and interesting results about their product, he continued.
However there’s mounting strain on faculty counselors to fulfill scholar wants with restricted sources. “It’s actually onerous to create the house that [school counselors] wish to create. Counselors wish to have these interactions. It’s the system that’s making it actually onerous to have them,” mentioned Friis.
Alongside offers its school partners professional development and consultation services, as well as quarterly summary reports. A lot of the time these services revolve around packaging data for grant proposals or presenting compelling information to superintendents, said Friis.
A research-backed approach
On its website, Alongside touts the research-backed methods used to develop its chatbot, and the company has partnered with Dr. Jessica Schleider at Northwestern University, who studies and develops single-session mental health interventions (SSIs), which are designed to address and provide resolution to mental health concerns without the expectation of any follow-up sessions. A typical counseling intervention is, at minimum, 12 weeks long, so single-session interventions were appealing to the Alongside team, but “what we know is that no product has ever been able to really effectively do that,” said Friis.
Still, Schleider’s Lab for Scalable Mental Health has published a number of peer-reviewed trials and clinical studies demonstrating positive outcomes from the implementation of SSIs. The Lab for Scalable Mental Health also offers open source materials for parents and professionals interested in implementing SSIs for kids and young people, and its initiative Project YES offers free and anonymous online SSIs for youth experiencing mental health concerns.
What happens to a kid’s data when using AI for mental health interventions?
Alongside gathers student data from their conversations with the chatbot, like mood, hours of sleep, exercise habits, social habits and online interactions, among other things. While this data can give schools insight into their students’ lives, it does raise questions about student surveillance and data privacy.

Alongside, like many other generative AI tools, uses other LLMs’ APIs, or application programming interfaces, meaning it includes another company’s LLM, like the one behind OpenAI’s ChatGPT, in its chatbot programming to process chat input and produce chat output. The company also has its own in-house LLMs, which Alongside’s AI team has developed over the past few years.
Growing concerns about how user data and personal information are stored are especially pertinent when it comes to sensitive student data. The Alongside team has opted in to OpenAI’s zero data retention policy, which means that none of the student data is stored by OpenAI or the other LLMs that Alongside uses, and none of the data from chats is used for training purposes.
Because Alongside operates in schools across the U.S., it is FERPA and COPPA compliant, but the data has to be stored somewhere. So students’ personally identifiable information (PII) is decoupled from their chat data, and that information is stored with Amazon Web Services (AWS), a cloud-based industry standard for private data storage used by tech companies around the world.
Alongside uses an encryption process that disaggregates the student PII from their chats. Only when a conversation gets flagged, and needs to be seen by humans for safety reasons, does the student PII connect back to the chat in question. In addition, Alongside is required by law to store student chats and data when it has issued a crisis alert, and parents and guardians are free to request that information, said Friis.
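To make that decoupling concrete, here is a minimal sketch, under stated assumptions, of storing chat text under a pseudonymous ID while keeping an encrypted identity mapping separately, re-joining the two only when a conversation is flagged for human review. The storage layout, key handling and helper names are hypothetical and do not reflect Alongside’s implementation.

```python
import secrets
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())  # real key management is out of scope here

chat_store = {}      # pseudonymous_id -> list of chat messages, no PII attached
identity_store = {}  # pseudonymous_id -> encrypted PII blob, kept separately

def register_student(name: str, school: str) -> str:
    """Create a pseudonymous ID and store the identifying details encrypted."""
    pseudonymous_id = secrets.token_hex(8)
    identity_store[pseudonymous_id] = fernet.encrypt(f"{name}|{school}".encode())
    return pseudonymous_id

def log_chat(pseudonymous_id: str, message: str) -> None:
    """Chat text is stored only under the pseudonymous ID."""
    chat_store.setdefault(pseudonymous_id, []).append(message)

def reidentify_for_review(pseudonymous_id: str) -> str:
    """Re-join PII and chat only when a flagged conversation needs human eyes."""
    return fernet.decrypt(identity_store[pseudonymous_id]).decode()
```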
Typically, parental consent and student data policies are handled by the school partners, and as with any school service provided, like counseling, there is a parental opt-out option, which must adhere to state and district guidelines on parental consent, said Friis.
Alongside and its school partners put guardrails in place to make sure that student data is kept safe and anonymous. However, data breaches can still happen.
How the Alongside LLMs are trained
One of Alongside’s in-house LLMs is used to identify potential crises in student chats and alert the necessary adults to that crisis, said Mehta. This LLM is trained on student and synthetic outputs and on keywords that the Alongside team enters manually. And because language changes often and isn’t always straightforward or easily recognizable, the team keeps an ongoing log of different words and phrases, like the popular abbreviation “KMS” (shorthand for “kill myself”), that it retrains this particular LLM to recognize as crisis driven.
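The sketch below illustrates, hypothetically, how such a manually curated phrase log might be used to label chat messages for the next retraining run. Apart from “KMS,” which the article mentions, the entries, labeling scheme and function names are assumptions for illustration, not Alongside’s data or code.

```python
# Manually curated log of crisis language, maintained by the clinical team.
CRISIS_PHRASE_LOG = {
    "kms": "kill myself",              # shorthand mentioned in the article
    "unalive": "kill / die",           # assumed slang entry, for illustration only
    "i can't do this anymore": None,   # phrase logged verbatim
}

def label_training_example(message: str) -> dict:
    """Tag a message as crisis or non-crisis for the next retraining run."""
    text = message.lower()
    is_crisis = any(phrase in text for phrase in CRISIS_PHRASE_LOG)
    return {"text": message, "label": "crisis" if is_crisis else "non_crisis"}

dataset = [
    label_training_example("hey kiwi, kms"),
    label_training_example("rough day at practice today"),
]
```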
Although, according to Mehta, the process of manually inputting data to train the crisis-assessing LLM is one of the biggest efforts that he and his team have to tackle, he doesn’t see a future in which this process could be automated by another AI tool. “I wouldn’t be comfortable automating something that could trigger a crisis [response],” he said; the preference is that the clinical team led by Friis contribute to this process through a clinical lens.
But with the potential for rapid growth in Alongside’s number of school partners, these processes will be very difficult to keep up with manually, said Robbie Torney, senior director of AI programs at Common Sense Media. Although Alongside emphasized its process of including human input in both its crisis response and LLM development, “you can’t necessarily scale a system like [this] easily because you’re going to run into the need for more and more human review,” continued Torney.
Alongside’s 2024-25 report tracks conflicts in students’ lives, but doesn’t distinguish whether those conflicts are happening online or in person. But according to Friis, it doesn’t really matter where peer-to-peer conflict is taking place. Ultimately, it’s most important to be person-centered, said Dr. Friis, and remain focused on what really matters to each individual student. Alongside does offer proactive skill-building lessons on social media safety and digital stewardship.
When it comes to sleep, Kiwi is programmed to ask students about their phone habits “because we know that having your phone at night is one of the main things that’s gonna keep you up,” said Dr. Friis.
Universal mental health screeners available
Alongside also offers an in-app universal mental health screener to school partners. One district in Corsicana, Texas, an old oil town located outside of Dallas, found the data from the universal mental health screener invaluable. According to Margie Boulware, executive director of special programs for Corsicana Independent School District, the community has had issues with gun violence, but the district didn’t have a way of surveying its 6,000 students on the mental health effects of traumatic events like these until Alongside was introduced.
According to Boulware, 24% of students surveyed in Corsicana had a trusted adult in their life, six percentage points fewer than the average in Alongside’s 2024-25 report. “It’s a little shocking how few kids are saying ‘we actually feel connected to an adult,’” said Friis. According to research, having a trusted adult helps young people’s social and emotional health and wellbeing, and can also counter the effects of adverse childhood experiences.
In a county where the school district is the largest employer and where 80% of students are economically disadvantaged, mental health resources are bare. Boulware drew a correlation between the uptick in gun violence and the high percentage of students who said they didn’t have a trusted adult in their home. And although the data given to the district by Alongside didn’t directly correlate with the violence the community had been experiencing, it was the first time the district was able to take a more comprehensive look at student mental health.
So the district formed a task force to tackle these issues of increased gun violence and decreased mental health and belonging. And for the first time, rather than having to guess how many students were struggling with behavioral issues, Boulware and the task force had representative data to build from. Without the universal screening survey that Alongside delivered, the district would have stuck with its end-of-year feedback survey, asking questions like “How was your year?” and “Did you like your teacher?”
Boulware believed that the universal screening survey encouraged students to self-reflect and answer questions more honestly compared with previous feedback surveys the district had conducted.
According to Boulware, student resources, and mental health resources in particular, are scarce in Corsicana. But the district does have a team of counselors, including 16 academic counselors and six social emotional counselors.
With not enough social emotional counselors to go around, Boulware said that a lot of tier one students, or students who don’t require regular one-on-one or group academic or behavioral interventions, fly under their radar. She saw Alongside as an easily accessible tool for students that offers discreet coaching on mental health, social and behavioral issues. And it also gives educators and administrators like herself a glimpse behind the scenes into student mental health.
Boulware praised Alongside’s proactive features, like gamified skill building, through which students who struggle with time management or task organization can earn points and badges for completing certain skills lessons.
And Alongside fills an important gap for staff in Corsicana ISD. “The amount of hours that our kiddos are on Alongside…are hours that they’re not waiting outside of a student support counselor’s office,” which, because of the low ratio of counselors to students, allows the social emotional counselors to focus on students experiencing a crisis, said Boulware. There is “no way I could have allocated the resources” that Alongside brings to Corsicana, Boulware added.
The Alongside app requires 24/7 human monitoring by its school partners. That means designated educators and administrators in each district and school are assigned to receive alerts at all hours of the day, any day of the week, including during holidays. This feature was a concern for Boulware at first. “If a kiddo’s struggling at three o’clock in the morning and I’m asleep, what does that look like?” she said. Boulware and her team had to trust that an adult would see a crisis alert very quickly, she continued.
This 24/7 human monitoring system was tested in Corsicana last Christmas break. An alert came in, and it took Boulware ten minutes to see it on her phone. By that time, the student had already begun working on an assessment survey prompted by Alongside, the principal who had seen the alert before Boulware had called her, and she had received a text message from the student support council. Boulware was able to contact the local chief of police and address the unfolding crisis. The student was able to connect with a counselor that same afternoon.


