Thanks Philip,

On 16/01/2025 10:31, Philip Thomas Lanken Conradsen Clausen via Bioinfo List wrote:
Too bad JABBA is no longer maintained (http://www.acgt.me/blog/2016/2/8/the-last-ever-awards-for-just-another-bogus-bioinformatics-acronym-jabba).

LOL! I didn't know about JABBA, but I was about to coin YARW ("Yet Another ResFinder Wrapper") for StarAMR, which made me wonder why it was included among the four workflows in the BenchAMRking workflow of workflows (WoW?).

One change in that workflow, from what I see here, is that they apparently added ABRicate alongside StarAMR. This is ironic in light of the note about ABRicate on the AMRFinderPlus site: NCBI are not amused that ABRicate is YAAW (yet another AFP+ wrapper).

Testing 18 "tools" does not seem to be the main issue here,

Note: the 18 tools are the ones for which hAMRonization currently provides converters. It has no ambition to perform comparisons or benchmarking; it just ships hAMRonization_workflow to conveniently run all 18 tools in one go.
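
For what it's worth, this is roughly how I picture the converters being driven outside of hAMRonization_workflow. Just a rough sketch: the subcommands and flag names are from my memory of the hamronize CLI and may well be off, and the versions and file names are placeholders.

    # Rough sketch: convert per-tool reports into the hAMRonization format and
    # merge them into one table. Subcommand/flag names are from memory of the
    # hamronize CLI and may differ -- check `hamronize --help` before relying on this.
    import subprocess
    from pathlib import Path

    # tool -> (native report, tool version, database version); values are placeholders
    reports = {
        "abricate":      ("abricate.tsv",      "1.0.1",  "2024-01-01"),
        "resfinder":     ("resfinder.json",    "4.4.2",  "2024-01-01"),
        "amrfinderplus": ("amrfinderplus.tsv", "3.12.8", "2024-01-01"),
    }

    converted = []
    for tool, (report, tool_ver, db_ver) in reports.items():
        out = Path(f"{tool}.hamronized.tsv")
        subprocess.run(
            ["hamronize", tool, report,
             "--format", "tsv",
             "--output", str(out),
             "--analysis_software_version", tool_ver,
             "--reference_database_version", db_ver],
            check=True,
        )
        converted.append(str(out))

    # Merge all harmonised reports into a single table.
    subprocess.run(["hamronize", "summarize",
                    "--summary_type", "tsv",
                    "--output", "all_tools.tsv", *converted], check=True)

The point being that the converters only normalise the reports into one schema; nothing here compares or judges anything.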

The BenchAMRking "WoW" does claim to be for benchmarking, but it isn't quite clear to me how. The choice of the four workflows seems somewhat arbitrary (except perhaps abritAMR, in that it was certified for public health / clinical use). Maybe I need to get inside Galaxy to understand.

some of the included tools are pipelines of pipelines, and the dependencies do not seem to have been given much thought here. Next to none of the dependencies have actually been cited or even mentioned; e.g., neither BLAST nor ResFinder is cited. The whole paper has just 9 citations, which obviously covers only a fraction of the methods used and of the work of others.

You're right, it's actually a "WoWoW", and indeed with a severe lack of attribution. But then again, maybe we're missing the point of it?


When we have workflows that are executed to run ResFinder (and subsequently PointFinder), which in turn runs BLAST/KMA, the results have previously been shown to be dominated by the underlying algorithms. We all remember (or at least we should) the BLAST update that invalidated dozens of workflows:
https://doi.org/10.1093/bioinformatics/bty833

Wow, I missed that! And I certainly also thought that "-max_target_seqs 1" would return the top hit, not just the first hit that BLAST happens to find!
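
For anyone else who missed it: the safer pattern, as I understand it, is not to rely on -max_target_seqs 1 to give you "the best hit", but to keep a generous number of targets and pick the top-scoring one yourself. A rough sketch; blastn and its flags are real, but "amr_db" and the file names are placeholders:

    # Rough sketch: don't trust -max_target_seqs 1 to return the best hit;
    # keep more targets and pick the highest bitscore per query ourselves.
    # "amr_db" and the file names are placeholders.
    import subprocess

    subprocess.run(
        ["blastn", "-query", "contigs.fasta", "-db", "amr_db",
         "-outfmt", "6",            # tabular output: bitscore is column 12
         "-max_target_seqs", "50",  # NOT 1: 1 keeps the first hit found, not the best
         "-out", "hits.tsv"],
        check=True,
    )

    best = {}  # qseqid -> (bitscore, full row)
    with open("hits.tsv") as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            qseqid, bitscore = fields[0], float(fields[11])
            if qseqid not in best or bitscore > best[qseqid][0]:
                best[qseqid] = (bitscore, fields)

    for qseqid, (score, row) in best.items():
        print(qseqid, row[1], score)  # query, best-scoring subject, bitscore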

With this in mind, it is closer to 100 tools that need examining, which of course makes the burden of benchAMRking harder, and then we haven't even looked at the different options yet. But are these hAMRonized in this benchAMRk so that the discrepancies can be highlighted easily, or could we then create a "new" tool which just pushes the button of another tool (actually included on the GitHub)?
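
To the hAMRonization part of that question: once the per-tool reports are merged into one hAMRonized table (e.g. the all_tools.tsv from my sketch above), a few lines of Python are enough to see where the tools disagree. Another rough sketch; I'm quoting the column names gene_symbol and analysis_software_name from memory of the spec, so they may need adjusting:

    # Rough sketch: gene-by-tool presence matrix from a combined hAMRonized
    # table, printing only genes that not every tool reported.
    # Column names are assumed from memory of the spec; adjust as needed.
    import csv
    from collections import defaultdict

    tools, calls = set(), defaultdict(set)  # gene -> tools that reported it
    with open("all_tools.tsv") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            gene, tool = row["gene_symbol"], row["analysis_software_name"]
            tools.add(tool)
            calls[gene].add(tool)

    print("gene\t" + "\t".join(sorted(tools)))
    for gene, reported_by in sorted(calls.items()):
        if reported_by != tools:  # at least one tool missed this gene
            print(gene + "\t" + "\t".join(
                "yes" if t in reported_by else "-" for t in sorted(tools)))

That would at least make the discrepancies visible, though it says nothing about which tool is right.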

I'm hoping someone else on the list can lift the fog for us. :-) Thanks for your thoughts!

Cheers
Marco