Can a brief 3-session intervention reduce the risk of suicide attempts?

A randomized clinical trial of the “Attempted Suicide Short Intervention Program”.

YouTube Video
Randomized controlled trial
Suicide prevention

Michel Nivard


November 21, 2022

I try to reproduce Gysin-Maillart et al. (2016). The paper presents a randomized controlled trial that tests the effectiveness of a brief 3-session intervention designed for people who have attempted suicide. The intervention consists of three 60-90 minute sessions and frequent contact (via letters) throughout follow-up, in addition to access to treatment as usual; the control group receives only treatment as usual. On the primary outcome, further suicide attempts, there is a significant reduction (HR = 0.17, 95% CI 0.07 to 0.46, p < 0.0004). I was able to reproduce this effect almost perfectly from the open data and code.

Content warning: this post reproduces an intervention study targeted at people who have attempted suicide and are at high risk of doing so again. The post will therefore discuss suicide. If you came here searching for help, here is a list of international suicide crisis lines.

Because I am (1) still considering how to produce a video on such a sensitive topic and (2) have been down with the flu this week, this will only be a blog post for now; perhaps a YouTube video will follow in the coming week or the one after.


Gysin-Maillart et al. (2016) ran a trial of a brief intervention to prevent further suicide attempts in people who have previously attempted suicide. They specifically evaluate the efficacy of the Attempted Suicide Short Intervention Program (ASSIP). The focus of the program is described by the authors as: “A major focus of ASSIP lies in the development of an early therapeutic alliance, combined with psychoeducation, a cognitive case conceptualization, safety planning, and continued long-term outreach contact.” Effectively it consists of 3 (if needed, 4) in-person sessions of between 60 and 90 minutes. The first session consists of a narrative interview to reach a patient-centered understanding of the individual mechanism that led to suicidal behavior. In the second session, patient and therapist watch a video of the first session, which is meant to reactivate the mental state experienced in the previous crisis, identifying automatic thoughts, emotions, and physiological responses; patients also receive psychoeducation and homework based on a handout titled “Suicide is not a rational act”. In the third session, long-term goals, individual warning signs, and safety strategies are developed in cooperation with the patient. These are bundled on a credit-card-sized leaflet for the patient to carry with them, along with a second card listing crisis hotline information. After these three sessions, patients received personalized letters reminding them of long-term risks and the importance of safety strategies: every 3 months in the first year and every 6 months in the second year. The effect of ASSIP is evaluated in a randomized trial in which 120 patients are randomized to treatment as usual (n = 60) or ASSIP combined with treatment as usual (n = 60).


Data & Background

The trial is registered at (link), but registration happened after the trial began: in 2008, when the project started, there was no requirement to pre-register a trial. When this requirement was enacted in Switzerland, the authors did post a trial registration. The authors also include a letter to the ethics committee which describes the study before it began. It lists “further suicide attempts” first among its outcome measures, and this became the primary outcome in the post-hoc trial registration. Other outcomes, and the scales used to measure them, mentioned in the letter are included as secondary outcomes in the trial registration.

The trial data, and brief code to allow reproduction, are provided HERE.


I downloaded the data and the code on November 25th, 2022. I could directly read the data in and closely approximate the findings in the paper:

library(survival)   # Surv(), survfit(), coxph()
library(ggfortify)  # autoplot() method for survfit objects

load(file = "doi_10/assip.RData")

repeater = c(
  as.character(mydata$repeater_t2), #  6 months
  as.character(mydata$repeater_t3), # 12
  as.character(mydata$repeater_t4), # 18
  as.character(mydata$repeater_t5)) # 24 R cannot concat factors, conversion to char

# 'Suizidversuch (mind. 1)' is German for 'suicide attempt (at least 1)'
repeater = as.factor(repeater) == 'Suizidversuch (mind. 1)'
group = rep(mydata$ITT, 4) # repeat group coding for the 4 follow-ups
time = c(rep(6,120), rep(12,120), rep(18,120), rep(24,120)) # follow-up month
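An aside on the “R cannot concat factors” comment above: calling `c()` on factors historically combined their underlying integer codes rather than their labels (R 4.1.0 changed this so that `c()` on factors combines levels properly), so converting to character first, as done here, is the always-safe route. A minimal illustration:

```r
f1 <- factor("a"); f2 <- factor("b")

# pre-R-4.1, c(f1, f2) returned the bare integer codes;
# converting to character first always preserves the labels
c(as.character(f1), as.character(f2))  # "a" "b"
```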

# survival (Kaplan-Meier) ----
fit = survfit(Surv(time, repeater) ~ group, type = "kaplan-meier")
summary(fit)
Call: survfit(formula = Surv(time, repeater) ~ group, type = "kaplan-meier")

65 observations deleted due to missingness 
                group=ASSIP & ASSIP Drop out 
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    6    229       1    0.996 0.00436        0.987            1
   12    170       1    0.990 0.00727        0.976            1
   18    111       1    0.981 0.01143        0.959            1
   24     56       2    0.946 0.02671        0.895            1

                group=CG & CG Drop out 
 time n.risk n.event survival std.err lower 95% CI upper 95% CI
    6    186       7    0.962  0.0140        0.935        0.990
   12    134       5    0.926  0.0207        0.887        0.968
   18     84       5    0.871  0.0308        0.813        0.934
   24     42       5    0.768  0.0513        0.673        0.875

These results appear to match the paper’s KM curve in Figure 2A and the entries in Table 2 labeled “1-6 months”, “7-12 months”, “13 to 18 months”, and “19 to 24 months”. Let’s visualize the results:


autoplot(fit) +
  labs(x = "\n Follow up Time (Months) ",
       y = "incident free Probabilities \n",
       title = "Risk reduction in ASSIP study \n") +
  theme(plot.title = element_text(hjust = 0.5),
        axis.title.x = element_text(face = "bold", colour = "darkgrey", size = 12),
        axis.title.y = element_text(face = "bold", colour = "darkgrey", size = 12),
        legend.title = element_text(face = "bold", size = 10))

We can proceed to re-compute the Cox proportional hazards estimate.

# Cox hazard for discrete data ----
hazard <- coxph(Surv(time, repeater) ~ group, ties = "exact")
summary(hazard)
coxph(formula = Surv(time, repeater) ~ group, ties = "exact")

  n= 415, number of events= 27 
   (65 observations deleted due to missingness)

                        coef exp(coef) se(coef)     z Pr(>|z|)    
groupCG & CG Drop out 1.7826    5.9451   0.5006 3.561 0.000369 ***
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

                      exp(coef) exp(-coef) lower .95 upper .95
groupCG & CG Drop out     5.945     0.1682     2.229     15.86

Concordance= 0.705  (se = 0.039 )
Likelihood ratio test= 16.75  on 1 df,   p=4e-05
Wald test            = 12.68  on 1 df,   p=4e-04
Score (logrank) test = 16.11  on 1 df,   p=6e-05

The exp(-coef) value corresponds to the 17% relative risk reported in the paper. Though to get the matching confidence interval, I had to switch the default reference group:

flip_ <- relevel(group, ref = "CG & CG Drop out")
hazard <- coxph(Surv(time, repeater) ~ flip_, ties = "exact")
summary(hazard)
coxph(formula = Surv(time, repeater) ~ flip_, ties = "exact")

  n= 415, number of events= 27 
   (65 observations deleted due to missingness)

                               coef exp(coef) se(coef)      z Pr(>|z|)    
flip_ASSIP & ASSIP Drop out -1.7826    0.1682   0.5006 -3.561 0.000369 ***
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

                            exp(coef) exp(-coef) lower .95 upper .95
flip_ASSIP & ASSIP Drop out    0.1682      5.945   0.06306    0.4487

Concordance= 0.705  (se = 0.039 )
Likelihood ratio test= 16.75  on 1 df,   p=4e-05
Wald test            = 12.68  on 1 df,   p=4e-04
Score (logrank) test = 16.11  on 1 df,   p=6e-05

We now estimate a relative risk of 17% (95% CI 6% to 45%), which comes fairly close to the paper, where multiple imputation was additionally used to deal with missingness due to dropout.
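As a quick arithmetic check, using only the numbers printed above: the flipped confidence interval is just exp(coef ± 1.96·se), and it equals the reciprocal of the original interval with its bounds swapped.

```r
coef_ <- -1.7826; se_ <- 0.5006  # coefficient and SE from the flipped fit above

# Wald confidence interval on the hazard-ratio scale
ci <- exp(coef_ + c(-1, 1) * qnorm(0.975) * se_)
ci                           # ~ (0.063, 0.449), matching the printed CI

# same interval, obtained by inverting the CI from the first fit
rev(1 / c(2.229, 15.86))
```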

Robustness checks

The main robustness check I would want to consider is re-fitting with a model that allows for the recurrence of events. The Cox model used isn’t meant for events (suicide attempts) that can, and frequently do, recur in the same person. Various models are available for these situations (see (Amorim and Cai 2014)), but I think they would require the precise intervals and/or counts of the number of attempts in each interval. The authors do try some basic mitigation strategies, but we can’t really do a lot with the currently truncated data (people are coded as “1 or more further attempts” in a period).

One model we can use to accommodate individual differences in recurrence risk is a frailty (random effects) model, in which each individual has a random intercept capturing their personal baseline recurrence risk.

id <- rep(1:120, 4)  # patient identifier, repeated for the 4 follow-ups
hazard_frailty <- coxph(Surv(time, repeater) ~ flip_ + frailty(id, distribution = "gamma"),
                        ties = "exact")
summary(hazard_frailty)
coxph(formula = Surv(time, repeater) ~ flip_ + frailty(id, distribution = "gamma"), 
    ties = "exact")

  n= 415, number of events= 27 
   (65 observations deleted due to missingness)

                          coef   se(coef) se2    Chisq DF    p     
flip_ASSIP & ASSIP Drop o -1.772 0.5228   0.4975 11.49  1.00 0.0007
frailty(id, distribution                         18.90 16.07 0.2800

                          exp(coef) exp(-coef) lower .95 upper .95
flip_ASSIP & ASSIP Drop o      0.17      5.881   0.06104    0.4737

Iterations: 6 outer, 26 Newton-Raphson
     Variance of random effect= 0.7673105   I-likelihood = -137.4 
Degrees of freedom for terms=  0.9 16.1 
Concordance= 0.899  (se = 0.032 )
Likelihood ratio test= 44.97  on 16.98 df,   p=2e-04

This model gives results very similar to those presented in the paper. We can try one more model that treats the 6-month intervals explicitly: the Andersen-Gill model. We must prepare the data by coding a start and end time for each interval and whether or not an event occurred in that interval. I combine the model with a robust variance estimate clustered by patient. I chose the Andersen-Gill model purely as an illustration; if you want more background, or want to evaluate whether my choice is actually appropriate, read up on this topic in (Amorim and Cai 2014).

start_t <- rep(c(1, 7, 13, 19), each = 120)
end_t   <- rep(c(6, 12, 18, 24), each = 120)
hazard_ag <- coxph(Surv(start_t, end_t, repeater) ~ flip_,
                   method = "breslow", id = id, robust = TRUE, ties = "exact")
summary(hazard_ag)
coxph(formula = Surv(start_t, end_t, repeater) ~ flip_, ties = "exact", 
    robust = TRUE, method = "breslow", id = id)

  n= 415, number of events= 27 
   (65 observations deleted due to missingness)

                               coef exp(coef) se(coef) robust se     z Pr(>|z|)
flip_ASSIP & ASSIP Drop out -1.6918    0.1842   0.4956    0.4875 -3.47  0.00052
flip_ASSIP & ASSIP Drop out ***
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

                            exp(coef) exp(-coef) lower .95 upper .95
flip_ASSIP & ASSIP Drop out    0.1842       5.43   0.07084    0.4789

Concordance= 0.697  (se = 0.041 )
Likelihood ratio test= 15.41  on 1 df,   p=9e-05
Wald test            = 12.04  on 1 df,   p=5e-04
Score (logrank) test = 14.7  on 1 df,   p=1e-04,   Robust = 9.82  p=0.002

  (Note: the likelihood ratio and score tests assume independence of
     observations within a cluster, the Wald and robust score tests do not).

This method introduces minimal change in the estimate or the overall conclusions. I am honestly not enough of an expert to dig deeper into the world of recurrent-event models and do this justice. But I am sure more specific data on the number of reported events, or more precise event timing, would be really helpful.


One study is a crazy amount of work: the first ethics document added to the supplements of the paper is dated 2008, which means eight years between ethics approval and published results. You could point to the after-the-fact trial registration as a limitation, but the contemporary documents provided by the authors seem consistent with (if slightly less precisely defined than) the trial registration.

The code and the data are sufficient to reproduce the final step that produces the critical statistics in the paper. They should also enable future individual-patient-data meta-analysis. I’d say 9.5 out of 10 for reproducibility! Working with other people’s data and code has been a huge motivation to make my own work more fully and easily reproducible. My only suggestion for improvement would be not to truncate event counts to “1 or more”; that would enable fuller re-analysis. Having said that, the findings as presented in the paper are robust to the alternate model specifications tried here.

Yet, one study with a large and encouraging effect is still just one study. A second study of a similar intervention in adolescents finds no effect (Yen et al. 2019). A further, far larger (N > 650) randomized controlled study finds no effect either (Hatcher et al. 2015). While we could consider a joint statistical analysis of the studies, this isn’t exactly my expertise, so I will shy away from it for now. Doing new science, and not just reproduction, demands extra levels of vetting and scrutiny, especially when done by someone who isn’t a domain expert (like me): getting it wrong carries consequences. It’s worth noting there would be even more value in having clinicians reconcile the differences and similarities between the treatments in the three studies before treating them as interchangeable or similar in a quantitative study. Similarly, it’s wise to consider the differences in the “treatment as usual” the controls were exposed to: is that standard of care already very high in some of these studies? Lots to consider…

Finally, let’s consider a way forward from here. This study required long-term, dedicated effort, and it’s a very promising start (but only a start). If this were a drug, the next step would be a phase-3 randomized controlled blinded trial, which is VERY expensive and includes roughly 800-2400 participants across multiple centers. The trial would also ideally measure the intervention against other viable or current best-practice interventions, and the outcomes and analysis would be blinded: the people doing the measurement and the statistics would be blind to case and control status and not in contact with the teams implementing ASSIP in the field. There is no industry backing for a trial that does not yield an industry product (drug, device, etc.), and government grants to run trials are sparse. Though evidently there can be bigger trials: the aforementioned ACCESS study, of a different suicide prevention protocol, did manage more than 650 inclusions. Getting those big trial grants is an art form in and of itself and requires a lot of effort. Other avenues for follow-up would be trials, or experimental consideration of ASSIP-like programs and/or competing programs, by national agencies like the NHS.
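To make that participant range concrete, here is a back-of-the-envelope Schoenfeld event calculation. The hazard ratio and control-group event rate below are my own illustrative assumptions (deliberately far more conservative than the HR of 0.17 reported in this trial), not figures from any of the cited studies:

```r
alpha <- 0.05; power <- 0.90
hr    <- 0.60   # assumed effect, much more modest than the 0.17 seen here
p_ctl <- 0.15   # assumed 2-year attempt rate under treatment as usual

# Schoenfeld formula: events needed for a 1:1 log-rank comparison
d <- 4 * (qnorm(1 - alpha / 2) + qnorm(power))^2 / log(hr)^2

# translate events into participants via the average event probability
p_trt <- 1 - (1 - p_ctl)^hr               # event rate implied by the HR
n     <- ceiling(d / mean(c(p_ctl, p_trt)))
c(events = ceiling(d), participants = n)  # roughly 160 events, ~1300 people
```

Under these (assumed) inputs, the answer lands comfortably inside the 800-2400 range; assuming a smaller effect or a lower event rate pushes it toward the top of that range.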


Amorim, Leila DAF, and Jianwen Cai. 2014. “Modelling Recurrent Events: A Tutorial for Analysis in Epidemiology.” International Journal of Epidemiology 44 (1): 324–33.
Gysin-Maillart, Anja, Simon Schwab, Leila Soravia, Millie Megert, and Konrad Michel. 2016. “A Novel Brief Therapy for Patients Who Attempt Suicide: A 24-Months Follow-Up Randomized Controlled Study of the Attempted Suicide Short Intervention Program (ASSIP).” Edited by Alexander C. Tsai. PLOS Medicine 13 (3): e1001968.
Hatcher, Simon, Cynthia Sharon, Allan House, Nicola Collins, Sunny Collings, and Avinesh Pillai. 2015. “The ACCESS Study: Zelen Randomised Controlled Trial of a Package of Care for People Presenting to Hospital After Self-Harm.” British Journal of Psychiatry 206 (3): 229–36.
Yen, Shirley, Anthony Spirito, Lauren M Weinstock, Katherine Tezanos, Antonija Kolobaric, and Ivan Miller. 2019. “Coping Long Term with Active Suicide in Adolescents: Results from a Pilot Randomized Controlled Trial.” Clinical Child Psychology and Psychiatry 24 (4): 847–59.