CoNEXT acknowledges the importance of artifacts such as software, hardware, data, and documentation that accompany a scientific paper. These artifacts are critical for validating the results presented in the paper and enhancing the reproducibility of the research. Therefore, the CoNEXT conference will run an optional artifact evaluation process to evaluate the availability and functionality of the artifacts associated with accepted papers. The process will also evaluate the reproducibility of the paper's key results and claims with the help of these artifacts.
All authors are encouraged to opt-in to artifact evaluation during paper submission.
Questions can be directed to Mallesham Dasari (m.dasari_at_northeastern.edu) and Hong Xu (hongxu_at_cuhk.edu.hk).
We're soliciting self-nominations if you have hands-on experience in networking and networked systems, and would like to help review (and improve!) the artifacts of the accepted CoNEXT'25 papers. Use the following link to nominate yourself! We look forward to working with a strong AEC to help broaden the impact of research through artifacts, and appreciate everyone who volunteers for this process!
https://forms.gle/aPQgfu4NpaHSnqjC6
Nomination deadline: Sunday, April 20, 2025 (AoE)
Round 1:
Submission deadline / start of evaluation period: May 12, 2025 (11:59pm AoE)
Artifact decisions announced: June 2, 2025
Round 2:
Submission deadline / start of evaluation period: July 28, 2025 (11:59pm AoE)
Artifact decisions announced: August 20, 2025
Round 3:
Submission deadline / start of evaluation period: October 24, 2025 (11:59pm AoE)
Artifact decisions announced: November 14, 2025
TBA
Note: For an artifact to be considered, at least one contact author for the submission must be reachable via email and must respond to questions in a timely manner during the evaluation period.
Artifact evaluation promotes the reproducibility of experimental results and encourages code and data sharing to help the community quickly validate and compare alternative approaches. Authors of accepted papers are invited to formally describe supporting materials (code, data, models, workflows, results) using the standard Artifact Appendix template and submit it together with the materials for evaluation.
Note that this submission is voluntary and will not influence the final decision regarding the papers. We want to help the authors validate experimental results from their accepted papers by an independent AE Committee in a collaborative way while helping readers find articles with available, functional, and validated artifacts.
Papers that successfully complete AE will receive a set of ACM badges of approval, printed on the papers themselves and recorded as metadata in the ACM Digital Library (the ACM DL now supports searching for papers with specific badges).
Authors can apply for the following badges:
Artifacts Available: Author-created artifacts relevant to this paper have been placed in a publicly accessible archival repository. A DOI or link to this repository, along with a unique identifier for the object, is provided.
Artifacts Evaluated – Reusable: The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated – Functional level, but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
Results Reproduced: The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.
NOTE: We take the view that it is sufficient to reproduce the main findings of a paper using the dataset provided as part of the artifact, although the data collection process itself may not be available in the artifact (especially for measurement papers).
You need to prepare an Artifact Appendix describing all software, hardware, and data set dependencies, the key results to be reproduced, and how to prepare, run, and validate the experiments. Although the process is relatively intuitive, based on our past AE experience we strongly encourage you to review the recommendations, such as the Artifact Appendix guide, the artifact reviewing guide, the SIGPLAN Empirical Evaluation Guidelines, the NeurIPS reproducibility checklist, and the AE FAQs, before submitting artifacts for evaluation. You can find examples of Artifact Appendices in last year's reproduced papers.
We strongly recommend that you provide at least some scripts to build your workflow, all inputs to run it, and some expected outputs to validate the results from your paper. You can then describe the steps to evaluate your artifact using Jupyter notebooks or plain README files. You can skip this step if you want to share your artifacts without validation of experimental results; in that case, your paper is still eligible for the "Artifacts Available" badge.
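As an illustration of shipping expected outputs alongside a validation step, the sketch below compares a rerun's results against the reference values distributed with the artifact. All file names, keys, and the tolerance are hypothetical; adapt them to your own workflow.

```python
# validate.py -- minimal sketch of an artifact validation step.
# File names ("results.json", "expected.json"), result keys, and the
# tolerance are illustrative only, not a required format.
import json

TOLERANCE = 0.05  # accept results within 5% of the values reported in the paper


def within_tolerance(measured, expected, tol=TOLERANCE):
    """Check that a measured value is within a relative tolerance of the expected one."""
    return abs(measured - expected) <= tol * abs(expected)


def validate(results_path="results.json", expected_path="expected.json"):
    """Compare each key result of a rerun against the expected outputs
    shipped with the artifact; return the names of results that differ."""
    with open(results_path) as f:
        results = json.load(f)
    with open(expected_path) as f:
        expected = json.load(f)
    return [key for key, value in expected.items()
            if not within_tolerance(results.get(key, float("inf")), value)]
    # an empty list means all key results were reproduced
```

A README can then instruct reviewers to run the workflow and invoke `validate()`, so a single command tells them whether the paper's key numbers were reproduced within tolerance.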
After you receive the notification that your paper has been accepted to CoNEXT 2025, please submit the artifact abstract, the original submission PDF of your paper, and the Artifact Appendix at the AE submission website: https://conext25artifacts.hotcrp.com/
The (brief) abstract should describe your artifact, the minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected results are. Do not forget to specify whether you use any proprietary software or hardware. Evaluators will use this abstract during artifact bidding to make sure that they have access to the appropriate hardware and software and have the required skills.
Most of the time, the authors make their artifacts available to the evaluators via an online repository service (such as GitHub, GitLab, or Bitbucket). Other acceptable methods include:
- Zip or tar files with all related code and data, particularly when your artifact should be rebuilt on reviewers' machines (for example, to have non-virtualized access to specific hardware).
- Docker, VirtualBox, or other containers and VM images.
- Remote access to the authors' machine with the pre-installed software. This is an exceptional case reserved for rare or proprietary software and hardware; you will need to communicate access information (e.g., an SSH key) to the AEC members via HotCRP.
Note that your artifacts will receive the ACM "Artifacts Available" badge only if they have been placed in a publicly accessible archival repository such as Zenodo, Figshare, or Dryad. You must provide the DOI automatically assigned to your artifact by these repositories in your final Artifact Appendix.
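When distributing a zip or tar file, it can help reviewers to include a checksum manifest so they can verify that what they downloaded matches what you archived. The sketch below, with purely illustrative file and directory names, writes a SHA-256 manifest into the artifact directory and then bundles everything into a tarball:

```python
# package.py -- sketch of bundling an artifact into a tarball with a
# SHA-256 manifest. Names ("artifact.tar.gz", "MANIFEST.sha256") are
# illustrative, not a required convention.
import hashlib
import tarfile
from pathlib import Path


def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def package(artifact_dir, out_tar="artifact.tar.gz"):
    """Write a checksum manifest into the artifact directory, then archive it."""
    root = Path(artifact_dir)
    lines = [f"{sha256_of(p)}  {p.relative_to(root)}"
             for p in sorted(root.rglob("*"))
             if p.is_file() and p.name != "MANIFEST.sha256"]
    (root / "MANIFEST.sha256").write_text("\n".join(lines) + "\n")
    with tarfile.open(out_tar, "w:gz") as tar:
        tar.add(root, arcname=root.name)  # keep a single top-level directory
    return out_tar
```

Reviewers can then check individual files against `MANIFEST.sha256` after extraction; the same manifest is also useful when uploading the artifact to an archival repository.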
Reviewers will need to read a paper and then thoroughly go through the Artifact Appendix step-by-step to evaluate a given artifact based on a set of reviewing guidelines.
The review will be organized in two phases:
1. The first week of the artifact evaluation is mainly dedicated to sorting out technical and setup issues. During this phase, reviewers are strongly encouraged to report any technical issues they encounter to the authors immediately (and anonymously) via the HotCRP submission website, to give the authors time to resolve all problems.
2. The remaining time of the artifact evaluation is dedicated to the actual reproduction of results, during which reviewers will investigate and evaluate the artifacts in more detail.
Note that our philosophy of artifact evaluation is not to fail problematic artifacts but to help the authors improve their artifacts (at least the publicly available ones) so that they pass the evaluation. In the end, the AE chairs will decide on the set of standard ACM reproducibility badges to award to a given artifact, based on all reviews as well as the authors' responses.
There are several sources of good advice about preparing artifacts for evaluation. These two are particularly noteworthy: