Too bad JABBA is no longer maintained (http://www.acgt.me/blog/2016/2/8/the-last-ever-awards-for-just-another-bogus-bioinformatics-acronym-jabba).

Some thoughts on the paper itself.

Testing 18 "tools" does not seem to be the main issue here: some of the included tools are pipelines of pipelines, and the dependencies do not seem to have been given much thought. Next to none of the dependencies have actually been cited or even mentioned; neither BLAST nor ResFinder is cited, for example. The whole paper has just 9 citations, which obviously covers only a fraction of the methods used and the work of others.

When we have workflows that are executed to run ResFinder (and subsequently PointFinder), which in turn runs BLAST/KMA, the results have previously been shown to be dominated by the underlying algorithms. We all remember (or at least we should) the BLAST update that invalidated dozens of workflows:
https://doi.org/10.1093/bioinformatics/bty833

With this in mind, it is closer to examining 100 tools, which of course makes the benchAMRking burden heavier, and then we haven't even looked at the different options yet. But are these hAMRonized in this benchAMRk so that such discrepancies can be highlighted easily? Or could we create a "new" tool that just pushes the button of another tool (one is actually included on the GitHub)?

Best,

Philip Thomas Lanken Conradsen Clausen
Postdoc
National Food Institute
Kemitorvet
Building 204
2800 Kgs. Lyngby

From: Marco van Zwetselaar via Bioinfo List <bioinfo-list@seqshare.org>
Sent: 16 January 2025 02:04
To: bioinfo-list@seqshare.org <bioinfo-list@seqshare.org>
Cc: René S. Hendriksen <rshe@food.dtu.dk>
Subject: [Bioinfo-list] BenchAMRking: [...] illustrating the major issues associated with current antimicrobial resistance (AMR) gene prediction workflows
 
Dear all,

Oh no, not another tool with an AMRwkward name! Wasn't hAMRonization enough already? (Try typing it if you disagree.)

But anyway, this looks like a valuable initiative:

Strepis N, Dollee D, Vrins D, Vanneste K, Bogaerts B, Carrillo C, et al. BenchAMRking: a Galaxy-based platform for illustrating the major issues associated with current antimicrobial resistance (AMR) gene prediction workflows. BMC Genomics. 2025;26: 27. doi:10.1186/s12864-024-11158-5. https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-024-11158-5

I was a bit underwhelmed by the four currently included workflows (see: https://erasmusmc-bioinformatics.github.io/benchAMRking/), of which two are Salmonella-only, one E. coli-only, and one multi-species (abritAMR), but things can grow from here.

How does it relate to hAMRonization_workflow, which incorporates 18 AMR tools? BenchAMRking aims at validating AMR workflows for clinical and public health use against a "gold standard" reference. hAMRonization_workflow was set up primarily as a testing ground for hAMRonization, the tool that converts output from any AMR tool into a common format.
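For those who haven't used it, the idea is roughly this (a sketch from memory; the exact subcommands and flag names may differ between versions, so check `hamronize --help` before copying):

```shell
# Convert one tool's native report into the common hAMRonization format.
# The metadata flags are needed because most tools don't record their own
# version or database version in their output.
hamronize abricate abricate_report.tsv \
    --analysis_software_version 1.0.1 \
    --reference_database_version 2024-01-01 \
    > isolate1_abricate.hamronized.tsv

# Combine the per-tool hAMRonized outputs into one summary.
hamronize summarize -t tsv -o combined_results.tsv *.hamronized.tsv
```

The point is that once every tool's output is in this one schema, comparing 18 tools (or 100) becomes a table join instead of 18 bespoke parsers.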

As its maintainer (Finlay Maguire) said, the workflow is almost too brittle to maintain. But how else will you test hAMRonization, manually run 18 tools over a dozen inputs?!

I recently submitted a bunch of patches to bring hAMRonization up to date with the latest tool versions. I also managed to upgrade hAMRonization_workflow to use all the latest tool versions.

Give it a try if you're up for it: https://github.com/pha4ge/hAMRonization_workflow. It installs with a simple "conda env create". Prepare for a long wait on the first run, while it installs all tools and databases. Subsequent runs are quick: it takes ~10 min on my laptop to run all 18 tools on an isolate.
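Concretely, that amounts to something like the following (a sketch under the assumption that the repo's environment file and Snakefile sit at the repository root, as is conventional; adjust paths to match the actual checkout):

```shell
# Clone the workflow and create its conda environment
git clone https://github.com/pha4ge/hAMRonization_workflow
cd hAMRonization_workflow
conda env create          # reads the environment file in the repo
conda activate hamronization_workflow   # environment name is defined in that file

# First run: slow, as each tool's own conda env and databases get installed.
# Subsequent runs on new isolates reuse all of that.
snakemake --use-conda --cores 4
```

The `--use-conda` flag is what lets Snakemake manage each tool's isolated environment, which is also why the first run takes so long and later ones don't.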

Best wishes,
AMRco ;-)