leaderboard ranking

#1
by sk16er - opened

I trained an ensemble D-MPNN augmented with RDKit descriptors and evaluated it on the Tox21 benchmark using the official splits and protocol. The model achieved an average ROC-AUC of 0.8722 across the 12 tasks, with per-task numbers listed in the repo. I submitted it, and the status shows "will update soon," but it hasn't appeared on the leaderboard yet. Is there a known delay, or a recommended format/steps to make sure the submission is processed correctly?
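For context, a minimal sketch of how the average ROC-AUC over the 12 Tox21 tasks is typically computed (Tox21 labels are sparse, so missing entries are masked per task); the array names `y_true` and `y_score` are placeholders, not part of the actual submission format:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def tox21_average_auc(y_true, y_score):
    """Per-task and mean ROC-AUC over the 12 Tox21 tasks.

    y_true:  (n_samples, 12) array of 0/1 labels, NaN where a label is missing
    y_score: (n_samples, 12) array of predicted probabilities
    """
    per_task = []
    for t in range(y_true.shape[1]):
        mask = ~np.isnan(y_true[:, t])  # score only the molecules labelled for this task
        per_task.append(roc_auc_score(y_true[mask, t], y_score[mask, t]))
    return per_task, float(np.mean(per_task))
```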

Institute for Machine Learning, Johannes Kepler University Linz org

Thank you for your submission. We’ve reviewed it and everything looks in order from a technical standpoint.

Now we just need a bit more information before we can publish the results to the leaderboard. In particular, could you please add details about the organization associated with the submission, as well as a reference paper that describes the model? This helps us present the entries in a consistent way and makes comparisons across submissions clearer.

Once this information is provided, we should be able to process the submission and update the leaderboard accordingly. Please let us know if you have any questions or run into any issues.

sonja-a-topf changed discussion status to closed
sonja-a-topf changed discussion status to open

Actually, my name is Shushank and there is no organization associated with this; I just made it out of curiosity. If you need an organization, you can add https://huggingface.co/stark16er. Here is the report I made, and here is the GitHub repository with the benchmarks showing an average AUC of 0.87.
Note that you have to test the Space for the benchmarks, not the model in the repository.
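If it helps with the evaluation, a hedged sketch of calling a Gradio Space programmatically with `gradio_client`; the Space id, endpoint name, and input format below are placeholders, not the actual ones for this submission (see the Space's "Use via API" panel for the real values):

```python
from gradio_client import Client

# Space id and api_name are hypothetical placeholders.
client = Client("stark16er/tox21-benchmark-space")
result = client.predict("CCO", api_name="/predict")  # e.g. a SMILES string as input
print(result)
```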

Institute for Machine Learning, Johannes Kepler University Linz org

Your results are now visible.

sonja-a-topf changed discussion status to closed
