This post is about the following trial: Furman et al. A Phase II Trial of Hu14.18K322A in Combination with Induction Chemotherapy in Children with Newly Diagnosed High-Risk Neuroblastoma. Clin Cancer Res (2019) 25 (21): 6320–6328. https://doi.org/10.1158/1078-0432.CCR-19-1452

This trial investigated the addition of the catchily-named Hu14.18K322A, an anti-GD2 antibody, to induction chemotherapy for newly diagnosed high-risk neuroblastoma. There is some evidence that anti-GD2 antibodies might improve outcomes. The paper says:

“We sought to evaluate whether combining a humanized antidisialoganglioside mAb (hu14.18K322A) with induction chemotherapy improves early responses and outcomes in children with newly diagnosed high-risk neuroblastoma.”

But… the trial used a single-arm design. Why is this a problem? Because it’s intrinsically a comparative question: is adding a new drug to a standard chemotherapy regimen better than just the standard chemotherapy? The best way to address this question is to randomise between chemotherapy alone and chemotherapy plus anti-GD2 (Hu14.18K322A). This trial did not do that. Instead, all patients received chemotherapy plus Hu14.18K322A, in a single-arm design. This seems (to me at least) a completely inappropriate way to attempt to answer the question.

The design was powered around a minimum acceptable response rate (complete plus partial responses) of 40%, with an assumed true response rate of 60%. It was initially designed as a two-stage group-sequential trial, and this paper reports on the 42 evaluable patients (of 43 enrolled) recruited to that design. It was later extended to a three-stage design to incorporate effects on event-free survival, with a revised sample size of 61. [The updated results were published in 2022: J Clin Oncol 2022;40(4):335–344. doi: 10.1200/JCO.21.01375.]
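
As a sanity check on those design numbers, here is a minimal sketch of the arithmetic. It is my reconstruction, not the trial’s actual group-sequential boundaries: a single-stage rule with n = 42, testing the minimum rate p0 = 0.40 against the assumed true rate p1 = 0.60, at an assumed one-sided α of 0.05.

```python
# Single-arm design arithmetic (a reconstruction, not the trial's
# actual group-sequential rule). n = 42, minimum acceptable response
# rate p0 = 0.40, assumed true rate p1 = 0.60; the one-sided alpha
# of 0.05 is my assumption, not a figure from the paper.
from scipy.stats import binom

n, p0, p1, alpha = 42, 0.40, 0.60, 0.05

# Smallest responder count r with P(X >= r | p = p0) <= alpha, i.e.
# the threshold for rejecting "true response rate is only 40%".
r = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)

type1 = binom.sf(r - 1, n, p0)  # false-positive rate if p really is p0
power = binom.sf(r - 1, n, p1)  # chance of crossing the threshold if p = p1

print(f"Declare 'promising' if responders >= {r} of {n}")
print(f"Type I error: {type1:.3f}; power at p = {p1:.0%}: {power:.3f}")
```

With these inputs the threshold lands at around 23 responders with power of roughly 80%, i.e. conventional design targets. But note what this arithmetic quietly assumes: that 40% is the right benchmark for these particular patients, which is exactly the assumption challenged below.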

The result (after 42 evaluable patients) was that 32/42 (76.2%) had a complete or partial response, with tumour volume changes ranging from +5% to -100%. The interpretation was:

“the addition of a unique anti-GD2 mAb to induction chemotherapy nearly doubled early responses…”
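
As an aside on precision, a quick exact (Clopper–Pearson) interval for 32/42 shows what the point estimate alone does and does not pin down. This is standard arithmetic, not a calculation reported in the paper:

```python
# Exact (Clopper-Pearson) 95% CI for the observed response rate of
# 32/42, via the standard beta-quantile formulation. This is my own
# calculation, not one reported in the paper.
from scipy.stats import beta

k, n, conf = 32, 42, 0.95
lo = beta.ppf((1 - conf) / 2, k, n - k + 1)      # exact lower bound
hi = beta.ppf(1 - (1 - conf) / 2, k + 1, n - k)  # exact upper bound
print(f"{k}/{n} = {k/n:.1%}; exact 95% CI: {lo:.1%} to {hi:.1%}")
```

The interval is wide (roughly 60% to 88%), and more importantly it only describes the response rate in these patients on the combined regimen; it says nothing about what the same patients would have done on chemotherapy alone.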

But the “nearly doubled” claim is completely unjustified, because the trial didn’t compare the figure of 76.2% with anything. If there had been a standard-care group, what would its response rate have been? We just don’t know. The 40% minimum response rate may be roughly what would be expected overall in patients treated with standard care, but that doesn’t mean it would apply to the patients in this trial. Trial participants are always (ALWAYS!) a non-random selection of patients, so how do we know that the response rate in the non-existent control group would have been around 40%? We don’t. A high response rate in the group that received the new therapy doesn’t mean the new therapy caused it: you may simply (by luck or non-random selection) have recruited a set of patients who would have done well on standard therapy anyway. You just can’t tell, and this is the huge weakness of a single-arm design. The toy simulation below makes the point concrete.
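
Here is that simulation. All numbers are hypothetical and chosen purely for illustration: the population-wide response rate on standard care is exactly 40%, individual response probabilities vary from patient to patient, and the single-arm trial happens to recruit from the better-prognosis half of the population. The “new” drug in this simulation does nothing at all.

```python
# Toy simulation of the selection fallacy (all numbers hypothetical).
# Standard care alone gives a 40% response rate on average, but
# individual response probabilities vary widely across patients.
import numpy as np

rng = np.random.default_rng(0)
n_pop = 100_000

# Per-patient response probability on standard care: Beta(2, 3) has
# mean 0.40 with substantial spread.
p_response = rng.beta(2, 3, size=n_pop)

# A single-arm trial that, by referral patterns, eligibility criteria,
# or plain luck, recruits 42 patients from the better-prognosis half.
favourable = np.flatnonzero(p_response > np.median(p_response))
trial = rng.choice(favourable, size=42, replace=False)

# The "new" drug adds nothing: responses are driven entirely by each
# patient's standard-care response probability.
responses = rng.random(42) < p_response[trial]

print(f"Population response rate on standard care: {p_response.mean():.1%}")
print(f"Single-arm trial response rate (inert drug): {responses.mean():.1%}")
```

The single-arm “trial” reports a response rate well above the 40% benchmark (typically somewhere in the 50–60% range under these assumptions) even though the drug is inert. Only a randomised control arm drawn from the same recruited patients would reveal that.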

Defenders of single-arm studies will say that such a trial isn’t intended to be definitive, and indeed the conclusion in the abstract says:

“These results, if validated in a larger study, may change the standard of care…”

But it isn’t just a matter of doing a bigger study; a valid comparison is also needed to find out whether the new treatment is actually doing anything useful. So the best we can say from the results of this single-arm trial is that they are suggestive: the outcomes looked as though they could be better with the new intervention, but we really can’t be sure. That is a pretty weak conclusion. If we can’t say anything substantive from the results of a trial, what is actually the point?

This trial has absorbed substantial time and resources. The paper, published in 2019, states that recruitment started in May 2013 and was still ongoing (to reach the revised target of 61 participants), so more than six years of recruitment and associated trial-management costs have been spent to achieve… what, exactly? There are also opportunity costs: all of the time, effort and patients that went into this trial could have been devoted to something more informative. Surely it would be a better use of time and money to run a study that could reach a firmer conclusion? Isn’t methodological rigour the point of clinical trials?

