Call for Resources Track Papers

Much of the research in the Semantic Web and Linked Data communities focuses on publishing scientific papers that prove a hypothesis. However, scientific advancement often relies on high-quality resources that provide the necessary scaffolding for those publications; yet the resources themselves rarely receive the same recognition as the advances they facilitate. Sharing these resources, and the best practices that have led to their development, with the research community is crucial to consolidating research material, ensuring reproducibility of results, and, more generally, gaining new scientific insights.

The ISWC 2017 Resources Track aims to promote the sharing of resources including, but not restricted to: datasets, ontologies, vocabularies, workflows, evaluation benchmarks or methods, replication studies, services, APIs, and software frameworks that have contributed to the generation of novel scientific work. In particular, we encourage the sharing of such resources following well-established best practices within the Semantic Web community. This track calls for contributions that provide a concise and clear description of a resource and its usage. We also encourage submissions that illustrate new types of resources, such as ontology design patterns, crowdsourcing task designs, workflows, methodologies, protocols, and measures.

A typical Resources Track paper focuses on one of the following categories:

  • Ontologies developed for an application, with a focus on describing the modelling process underlying their creation;
  • Datasets produced by a novel algorithm or to support specific evaluation tasks;
  • Outcomes and results obtained from verifying an existing method by applying it to a new specific task;
  • Descriptions of a reusable research prototype or service supporting a given research hypothesis;
  • Descriptions of community-shared software frameworks that can be extended or adapted to support scientific study and experimentation;
  • Benchmarks, focusing on datasets and algorithms for the comprehensible and systematic evaluation of existing and future systems;
  • New evaluation methodologies, and their demonstration in an experimental study.

Delineation from Other Tracks

We strongly recommend that prospective authors carefully check the calls of the other main tracks of the conference in order to identify the optimal track for their submission. Papers that propose new algorithms and architectures should be submitted to the regular Research Track, whilst papers that describe the use of Semantic Web technologies in practical settings should be submitted to the In-Use Track.


Review Criteria

The program committee will consider the quality of both the resource and the paper in its review process. Therefore, authors must ensure unfettered access to the resource during the review process, ideally by citing the resource at a permanent, resource-specific location. For example, data should be available in a repository such as FigShare, Zenodo, or a domain-specific repository, and software code in a public code repository such as GitHub or Bitbucket. In exceptional cases, when it is not possible to make the resource public, authors must provide anonymous access to the resource for the reviewers.

We welcome descriptions of both well-established and emerging resources. Resources will be evaluated along the following generic review criteria. Additionally, detailed, resource-specific criteria are accessible at https://doi.org/10.6084/m9.figshare.4679152. Both authors and reviewers should consider these criteria carefully.

  • Potential impact:
    • Does the resource break new ground?
    • Does the resource plug an important gap?
    • How does the resource advance the state of the art?
    • Has the resource been compared to other existing resources (if any) of similar scope?
    • Is the resource of interest to the Semantic Web community?
    • Is the resource of interest to society in general?
    • Will the resource have an impact, especially in supporting the adoption of Semantic Web technologies?
    • Is the resource relevant and sufficiently general? Does it measure some significant aspect?
  • Reusability:
    • Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively, what is the resource's potential for being (re)used, based, for example, on the activity volume on discussion forums, mailing lists, the issue tracker, or a support portal?
    • Is the resource easy to (re)use? For example, does it have good-quality documentation? Are tutorials available?
    • Is the resource general enough to be applied in a wider set of scenarios, not just for the originally designed use?
    • Is there potential for extensibility to meet future requirements (e.g., upper-level ontologies, plug-ins in Protégé)?
    • Does the resource clearly explain how others can use its data and software?
    • Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?
  • Design & Technical quality:
    • Does the design of the resource follow resource specific best practices?
    • Did the authors perform an appropriate reuse or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.
    • Is the resource suitable to solve the task at hand?
    • Does the resource provide an appropriate description (both human- and machine-readable), thus encouraging the adoption of the FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/Dublin Core?
    • If the resource proposes performance metrics, are such metrics sufficiently broad and relevant?
    • If the resource is a comparative analysis or replication study, was the coverage of systems reasonable, or were any obvious choices missing?
  • Availability:
    • Mandatory: Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?
    • Mandatory: Is there a canonical citation associated with the resource?
    • Mandatory: Does the resource provide a licence specification? (See creativecommons.org, opensource.org for more information)
    • Is the resource publicly available? For example, as an API, as Linked Open Data, as a download, or in an open code repository.
    • Is the resource publicly findable? Is it registered in (community) registries (e.g. Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo, or GitHub?
    • Is there a sustainability plan specified for the resource? Is there a plan for the maintenance of the resource?
    • Does it use open standards, when applicable, or have good reason not to?

Submission Details

  • Pre-submission of abstracts is a strict requirement. All papers and abstracts must be submitted electronically via the EasyChair conference submission system.
  • Papers describing a resource must not exceed 8 pages (including references). An evaluation is not expected; the focus of the paper should be on the sustainability of the resource and the community surrounding it.
  • Replication and benchmark papers must not exceed 16 pages (including references). These papers are expected to include evaluations and may therefore be longer. They are not expected to describe a resource such as an ontology, but rather a replication framework or experiment.
  • All research submissions must be in English. Papers that exceed the page limit will be rejected without review.
  • Submissions must be either in PDF or in HTML, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer’s Author Instructions. For HTML submission guidance, see the ISWC 2017 website. ISWC 2017 submissions are not anonymous. We encourage embedding metadata in the PDF/HTML to provide a machine readable link from the paper to the resource.
  • Authors will have the opportunity to submit a rebuttal to the reviews to clarify questions posed by program committee members.
  • Authors of accepted papers will be required to provide semantic annotations for the abstract of their submission, which will be made available on the conference web site. Details will be provided at the time of acceptance.
  • Accepted papers will be distributed to conference attendees and also published by Springer in the printed conference proceedings, as part of the Lecture Notes in Computer Science series.
  • At least one author of each accepted paper must register for the conference and present the paper there. As in previous years, students will be able to apply for travel support to attend the conference. Preference will be given to students that are first authors on papers accepted to the main conference or the doctoral consortium, followed by those who are first authors on papers accepted to ISWC workshops and the Poster & Demo session.

Prior Publication And Multiple Submissions

ISWC 2017 will not accept research papers that, at the time of submission, are under review for or have already been published in or accepted for publication in a journal, another conference, or another ISWC track. The conference organizers may share information on submissions with other venues to ensure that this rule is not violated.

Important Dates
Abstracts: May 8th, 2017
Full paper submission: May 15th, 2017
Author rebuttals: June 23 – 27, 2017
Notifications: July 14, 2017
Camera-ready versions: July 28, 2017

All deadlines are midnight Hawaii time.


Committee Chairs


Program Committee

The list of senior program committee and program committee members can be found here.