Dear all,
Oh no, not another tool with an AMRwkward name! Wasn't hAMRonization enough already? (Try typing it if you disagree.)
But anyway, this looks like a valuable initiative:
Strepis N, Dollee D, Vrins D, Vanneste K, Bogaerts B, Carrillo C, et al. BenchAMRking: a Galaxy-based platform for illustrating the major issues associated with current antimicrobial resistance (AMR) gene prediction workflows. BMC Genomics. 2025;26: 27. doi:10.1186/s12864-024-11158-5. https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-024-11158-5
I was a bit underwhelmed by the four currently included workflows (see: https://erasmusmc-bioinformatics.github.io/benchAMRking/), of which two are Salmonella-only, one E. coli-only, and one multi-species (abritAMR), but things can grow from here.
How does it relate to hAMRonization_workflow, which incorporates 18 AMR tools? BenchAMRking aims at validating AMR workflows for clinical and public health use, against a "gold standard" reference. hAMRonization_workflow was set up primarily to have a testing ground for hAMRonization, the tool to convert output from any AMR tool into a common format.
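For the uninitiated, this is roughly what hAMRonization looks like on the command line (file names are hypothetical and flag spellings are from memory, so check `hamronize --help` before copy-pasting):

```bash
# Convert a native AMR report into the common hAMRonization format;
# ABRicate reports lack tool/database versions, so these are passed in
hamronize abricate abricate_report.tsv \
    --analysis_software_version 1.0.1 \
    --reference_database_version 2024-01-31 \
    --format tsv --output abricate.hamronized.tsv

# Collate any number of hAMRonized reports into a single summary
hamronize summarize --summary_type tsv --output summary.tsv \
    abricate.hamronized.tsv amrfinderplus.hamronized.tsv
```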
As its maintainer (Finlay Maguire) said, the workflow is almost too brittle to maintain. But how else would you test hAMRonization: by manually running 18 tools over a dozen inputs?!
I recently submitted a bunch of patches to bring hAMRonization up to date with the latest tool versions. I also managed to upgrade hAMRonization_workflow to use all the latest tool versions.
Give it a try if you're up for it: https://github.com/pha4ge/hAMRonization_workflow. It installs with a simple "conda env create". Prepare for a long wait on the first run, while it installs all tools and databases. Subsequent runs are quick: it takes ~10 min on my laptop to run all 18 tools on an isolate.
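In case it helps, the steps I use are roughly these (the environment name and config path are from memory, so defer to the repository's README):

```bash
git clone https://github.com/pha4ge/hAMRonization_workflow
cd hAMRonization_workflow
conda env create -f environment.yml   # one-off environment setup
conda activate hamronization_workflow

# First run is slow: Snakemake builds each tool's conda env and fetches
# its reference database; subsequent runs reuse all of it.
snakemake --configfile config/config.yaml --use-conda --cores 4
```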
Best wishes, AMRco ;-)
Too bad JABBA is no longer maintained (http://www.acgt.me/blog/2016/2/8/the-last-ever-awards-for-just-another-bogus...).
Some thoughts on the paper itself.
Testing 18 "tools" does not seem to be the main issue here: some of the included tools are pipelines of pipelines, and the dependencies do not seem to have been given much thought. Next to none of the dependencies are actually cited or even mentioned; neither BLAST nor ResFinder is cited, for example. The whole paper has just 9 citations, which obviously covers only a fraction of the methods used and the work of others.
When we have workflows that are executed to run ResFinder (and subsequently PointFinder), which in turn runs BLAST/KMA, the results have been shown to be dominated by the underlying algorithms. We all remember (or at least we should) the BLAST update that invalidated dozens of workflows: https://doi.org/10.1093/bioinformatics/bty833
With this in mind, it is closer to 100 tools that need examining, which of course makes the burden of benchAMRking harder, and then we haven't even looked at the different options yet. But are these hAMRonized in this benchAMRk so that such discrepancies can be highlighted easily, or could we then create a "new" tool which just pushes the button of another tool (one is actually included on the GitHub)?
Best,
Philip Thomas Lanken Conradsen Clausen
Postdoc, National Food Institute
plan@food.dtu.dk
Kemitorvet, Building 204, 2800 Kgs. Lyngby
www.food.dtu.dk
Thanks Philip,
On 16/01/2025 10:31, Philip Thomas Lanken Conradsen Clausen via Bioinfo List wrote:
> Too bad JABBA is no longer maintained (http://www.acgt.me/blog/2016/2/8/the-last-ever-awards-for-just-another-bogus...).
LOL! I didn't know JABBA, but I was about to coin YARW for "Yet Another ResFinder Wrapper" with respect to StarAMR, which made me wonder why it was included among the four workflows in the BenchAMRking workflow of workflows (WoW?).
One change in that workflow, from what I see [here](https://workflowhub.eu/workflows/470), is that they apparently added ABRicate next to StarAMR. This is ironic in light of the [note on ABRicate](https://github.com/ncbi/amr/wiki#a-note-about-abricate) on the AMRFinderPlus site: NCBI are not amused that ABRicate is YAAW (yet another AFP+ wrapper).
> Testing 18 "tools" does not seem to be the main issue here,
Note: the 18 tools are simply the ones for which hAMRonization now provides converters; there is no ambition to perform comparisons or benchmarking. hAMRonization_workflow just exists to conveniently run all 18 tools in one go.
The BenchAMRking "WoW" does claim to be for benchmarking, but it isn't quite clear to me how. The four workflows seem somewhat arbitrary (except perhaps abritAMR, in that it was certified for public health / clinical use). Maybe I need to get inside Galaxy to understand.
> some of the included tools are pipelines of pipelines, and the dependencies do not seem to have been given much thought. Next to none of the dependencies are actually cited or even mentioned; neither BLAST nor ResFinder is cited, for example. The whole paper has just 9 citations, which obviously covers only a fraction of the methods used and the work of others.
You're right, it's actually a "WoWoW", and indeed with a severe lack of attribution. But then again, maybe we're missing its point?
> When we have workflows that are executed to run ResFinder (and subsequently PointFinder), which in turn runs BLAST/KMA, the results have been shown to be dominated by the underlying algorithms. We all remember (or at least we should) the BLAST update that invalidated dozens of workflows: https://doi.org/10.1093/bioinformatics/bty833
Wow, I missed that! And sure enough, I too thought that "-max_target_seqs 1" would return the top hit, not the first hit!
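For anyone who wants to see the pitfall spelled out: with `-max_target_seqs 1`, BLAST can report the first hit that survives its internal cutoff rather than the best-scoring one, so the safe pattern is to keep many hits and select the top one yourself. A rough sketch (file and database names are hypothetical):

```bash
# Tempting but unreliable: may return the first hit past the cutoff,
# not the best-scoring one (Shah et al. 2019, bty833)
blastn -query isolate.fna -db amr_genes -outfmt 6 -max_target_seqs 1

# Safer: keep many hits, then take the top bitscore (column 12) per query
blastn -query isolate.fna -db amr_genes -outfmt 6 -max_target_seqs 500 \
    | sort -k1,1 -k12,12gr | awk '!seen[$1]++'
```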
> With this in mind, it is closer to 100 tools that need examining, which of course makes the burden of benchAMRking harder, and then we haven't even looked at the different options yet. But are these hAMRonized in this benchAMRk so that such discrepancies can be highlighted easily, or could we then create a "new" tool which just pushes the button of another tool (one is actually included on the GitHub)?
I'm hoping someone else on the list can lift the fog for us? :-) Thanks for your thoughts!
Cheers, Marco