More so than in face-to-face counseling, users of online text-based services may drop out of a session without reaching a clear closure or expressing an intention to leave. Such premature departure may indicate heightened risk or dissatisfaction with the service or the counselor. However, there is no systematic way to identify this understudied phenomenon.
This study had two objectives. First, we developed a set of rules and used logic-based pattern-matching techniques to systematically identify premature departures in an online text-based counseling service. Second, we validated the importance of premature departure by examining its association with user satisfaction. We hypothesized that users who rated a session as less helpful were more likely to have departed prematurely.
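The study's actual rule set is not reproduced here, but the general shape of a logic-based pattern-matching classifier for premature departure can be sketched as follows. The closing phrases, the `tail` window, and the function name are illustrative assumptions, not the rules used in the study.

```python
import re

# Hypothetical closing expressions; the study's real rule set is not given here.
CLOSING_PATTERNS = [
    re.compile(r"\b(bye|goodbye|good\s?night)\b", re.IGNORECASE),
    re.compile(r"\bthank(s| you)\b", re.IGNORECASE),
    re.compile(r"\btake care\b", re.IGNORECASE),
]

def is_premature_departure(user_messages, tail=3):
    """Label a session as a premature departure if none of the user's
    last `tail` messages contains a recognized closing expression."""
    if not user_messages:
        return True  # the user never wrote anything: treat as a dropout
    for msg in user_messages[-tail:]:
        if any(p.search(msg) for p in CLOSING_PATTERNS):
            return False  # explicit closure found: completed session
    return True
```

A classifier of this form is transparent by construction: each decision can be traced back to the presence or absence of a specific phrase in the session transcript.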
We developed and tested a classification model using a sample of 575 human-annotated sessions from an online text-based counseling platform. We used 80% of the dataset to train and develop the model and the remaining 20% to evaluate its performance. We then applied the model to the full dataset (34,821 sessions) and compared user satisfaction between premature-departure and completed sessions based on data from a post-session survey.
The resulting model achieved F1 scores of 97% and 92% in detecting premature departures in the training and test sets, respectively, indicating high consistency with the judgments of the human coders. Applied to the full dataset, the model classified 15,150 sessions (43.5%) as premature departures and the remaining 19,671 (56.5%) as completed sessions. Users in completed sessions were more likely to complete the post-chat survey than those in premature-departure sessions (15.2% vs. 4.0%). Premature departure was significantly associated with lower perceived helpfulness and lower effectiveness in distress reduction.
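For reference, the reported F1 scores are computed from confusion-matrix counts for the positive (premature-departure) class. A minimal illustration, not the study's evaluation code:

```python
def f1_score(y_true, y_pred, positive=True):
    """F1 for the positive class: harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because F1 balances precision and recall on the positive class, it is a more informative metric than raw accuracy when, as here, the two classes are of unequal size.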
A model that identifies premature departure in online text-based counseling, derived from heuristic rules and logic-based pattern-matching techniques, was developed and tested.
The model achieved high accuracy vis-à-vis human annotation in making the binary judgment of whether a chat ended prematurely.
Premature departure was significantly associated with lower perceived helpfulness and effectiveness in distress reduction, as evaluated by service users.
The proposed model offers a relatively high level of transparency and reproducibility: it can be easily understood, readily modified, and transferred to other similar contexts.