- Hybrid Statistical Semantic Understanding and Emerging Semantics (HSSUES)
- Visualization and Interaction for Ontologies and Linked Data (VOILA)
- Ontology Design and Patterns (WOP)
- Semantics for Biodiversity (S4BioDiv)
- Ontology Matching (OM)
- Humanities in the Semantic Web (WHiSe)
- Enabling Open Semantic Science (SemSci)
- Semantic Statistics (SemStats)
- Dataset PROFILing and fEderated Search for Web Data (PROFILES)
- Decentralizing the Semantic Web (DeSemWeb)
- Managing Changes in the Semantic Web (MaCSeW)
- Re-coding Black Mirror (BlkMirror)
- Society, Privacy and the Semantic Web – Policy and Technology (PrivOn)
- Semantic Web Technologies for the IoT (SWIT)
- Linked Data for Information Extraction (LD4IE)
- Ontology Modularity, Contextuality, and Evolution (WOMoCoE)
- Benchmarking Linked Data (BLINK)
- Natural Language Interfaces for the Web of Data (NLIWOD)
- Web Stream Processing (WSP)
Xin Dong, Ramanathan Guha, Pascal Hitzler, Mayank Kejriwal, Freddy Lecue, Dandapani Sivakumar, Pedro Szekely and Michael Witbrock
Understanding the semantics of Web content is at the core of many applications, ranging from Web search, news aggregation and machine translation to personal assistant services such as Amazon Echo, Cortana, Siri, and Google Home. Presently, two different approaches are applied to this task. The first utilizes a rich suite of information retrieval and machine learning techniques that capture meaning through powerful statistical tools such as neural networks; recently, such emerging semantic models have achieved state-of-the-art results in several predictive applications. The second conveys meaning in a structured form through embedded data markup (using Schema.org, OGP, etc.) and ontologies, and can be further enhanced through available knowledge bases such as Freebase and DBpedia. The HSSUES workshop will explore the synergy between both approaches, from the perspectives of theory, application, experiments (including negative results) and vision, and how such synergies can be exploited to create powerful applications. We are interested in mechanisms that span the spectrum of possible strategies and provide novel functionality through hybrid approaches. The broader goal is to foster a discussion that will lead to cross-cutting ideas and collaborations at a timely moment, when Semantic Web research has started to intersect significantly with the natural language processing and knowledge discovery communities.
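The structured-markup approach the abstract refers to can be illustrated with a minimal sketch: the snippet below builds a hypothetical Schema.org item as JSON-LD using only the Python standard library. The event name, place and date are invented for illustration and do not describe any real page.

```python
import json

# A minimal, hypothetical example of embedded Schema.org markup (JSON-LD),
# the kind of structured data the second approach draws meaning from.
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Example Workshop Day",
    "location": {"@type": "Place", "name": "Vienna"},
    "startDate": "2017-10-21",
}

# Serialized, this dictionary is exactly what a page would embed in a
# <script type="application/ld+json"> block for crawlers to pick up.
markup = json.dumps(event, indent=2)
print(markup)
```

A statistical system would learn the same facts from raw text; markup like this states them explicitly, which is what makes the hybrid combination attractive.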
Valentina Ivanova, Patrick Lambrix, Steffen Lohmann and Catia Pesquita
‘A picture is worth a thousand words’, we often say, and many areas demand sophisticated visualization techniques; the Semantic Web is no exception. The size and complexity of ontologies and Linked Data in the Semantic Web grow constantly, while the backgrounds of users and the range of application areas multiply at the same time. Providing users with visual representations and sophisticated interaction techniques can significantly aid the exploration and understanding of the domains and knowledge represented by ontologies and Linked Data. There is no one-size-fits-all solution: different use cases demand different visualization and interaction techniques. Ultimately, better user interfaces, visual representations and interaction techniques will foster user engagement, likely lead to higher-quality results in the different applications employing ontologies, and increase the consumption of Linked Data.
Eva Blomqvist, Oscar Corcho, Matthew Horridge, Rinke Hoekstra and David Carral
The Workshop on Ontology Design and Patterns targets topics related to high-quality ontology design. The workshop series addresses quality in ontology design as well as ontology design patterns (ODPs) in Semantic Web data and ontology engineering. ODPs have seen a sharp rise in attention in the past few years, both within this workshop series and at other related events. Patterns can provide knowledge engineers and Semantic Web developers with a direct link to requirements, reuse, guidance, and better communication. They need to be shared by a community in order to provide a common language; hence the aim of this workshop is twofold: 1) providing an arena for discussing patterns, pattern-based ontologies, systems, datasets, etc., and 2) broadening the pattern community by developing its own “discourse” for discussing and describing relevant problems and their solutions. Related to the latter aim, we see that it is an opportune time to open up the workshop to other approaches focusing on high-quality ontology design, e.g. other methods and tools, with the intention of cross-fertilising these with the ODP idea.
Alsayed Algergawy, Naouel Karam, Friederike Klan and Clement Jonquet
Biodiversity research aims at comprehending the totality and variability of organisms, their morphology, genetics, life history, habitats and geographical ranges; the term usually refers to biological diversity at three levels: genetics, species, and ecology. Biodiversity is an outstanding domain that deals with heterogeneous datasets and concepts generated by a large number of disciplines in order to build a coherent picture of the extent of life on Earth. The presence of such a myriad of data resources makes integrative biodiversity research increasingly important in the life sciences. However, it is severely hampered by the way data and information are made available. The Semantic Web approach enhances data exchange, discovery, and integration by providing common formats to achieve a formalized conceptual environment. This workshop aims to bring together computer scientists and biologists working on Semantic Web approaches for biodiversity and related areas such as agriculture or agro-ecology. The goal is to exchange experiences, build a state of the art of realizations and challenges, and reuse and adapt solutions that have been proposed in other domains. The workshop focuses on presenting challenging issues and solutions for the design of high-quality biodiversity information systems based on Semantic Web techniques.
Pavel Shvaiko, Jérôme Euzenat, Ernesto Jiménez-Ruiz, Michelle Cheatham and Oktie Hassanzadeh
Ontology matching is a key interoperability enabler for the Semantic Web, as well as a useful technique in some classical data integration tasks dealing with the semantic heterogeneity problem. It takes ontologies as input and determines as output an alignment, that is, a set of correspondences between the semantically related entities of those ontologies. These correspondences can be used for various tasks, such as ontology merging, data interlinking, query answering or process mapping. Thus, matching ontologies enables the knowledge and data expressed in the matched ontologies to interoperate. The goals of the workshop are to (1) bring together leaders from academia, industry and user institutions to assess how academic advances are addressing real-world requirements in this area; (2) conduct an extensive and rigorous evaluation of ontology matching and instance matching (link discovery) approaches through the OAEI (Ontology Alignment Evaluation Initiative) 2017 campaign; (3) examine new uses, similarities and differences from database schema matching, which has received decades of attention but is just beginning to transition to mainstream tools.
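The notion of an alignment as a set of correspondences can be sketched in a few lines: below, a deliberately naive label-similarity matcher produces correspondences of the form (entity1, entity2, relation, confidence). The two label lists and the 0.8 threshold are illustrative assumptions, not OAEI settings; real matchers combine lexical, structural and semantic evidence.

```python
from difflib import SequenceMatcher

# Toy class labels from two hypothetical ontologies.
onto1 = ["Author", "Paper", "Conference"]
onto2 = ["Writer", "Article", "Conference"]

def match(labels1, labels2, threshold=0.8):
    """Return an alignment: (entity1, entity2, relation, confidence) tuples
    for label pairs whose string similarity exceeds the threshold."""
    alignment = []
    for a in labels1:
        for b in labels2:
            conf = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if conf >= threshold:
                alignment.append((a, b, "=", round(conf, 2)))
    return alignment

print(match(onto1, onto2))
```

Only the identical labels survive the threshold here; the pairs a human would also align (Author/Writer, Paper/Article) are exactly the cases where purely lexical matching fails and richer techniques are needed.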
Alessandro Adamou, Enrico Daga and Leif Isaksen
WHiSe is an emerging symposium aimed at strengthening communication between scholars in the Digital Humanities and Linked Data communities, to discuss unthought-of opportunities arising from the research problems of the former. Inspired by pioneering work in cultural heritage and digital libraries, WHiSe reflects the rise of research interests in applying data science to fields such as musicology and digital archaeology, in an effort to stimulate the formation of a harmonic ecosystem where critical issues in Semantics and the Humanities can be investigated. Its best-of-both-worlds format accommodates the practices of scholarly dialogue in both fields, by welcoming rigorously peer-reviewed research papers, as well as mature running systems and debate on future research directions.
Jun Zhao, Daniel Garijo, Tobias Kuhn, Tomi Kauppinen and Willem van Hage
In the past few years, a push for open reproducible research has led to a proliferation of community efforts for publishing raw research objects like datasets, software, methodologies, etc. These efforts make research outcomes much more explicitly accessible. However, the actual time and effort required to achieve this new form of scientific communication remains a key barrier to reproducibility. Furthermore, scientific experiments are becoming increasingly complex, and ensuring that research outcomes become understandable, interpretable, reusable and reproducible is still a challenge. The goal of this workshop is to incentivize practical solutions and fundamental thinking to bridge the gap between existing scientific communication methods and the vision of a reproducible and accountable open science. Semantic Web technologies provide a promising means for achieving this goal, enabling more transparent and well-defined descriptions for all scientific objects required for this new form of science and communication.
Sarven Capadisli, Franck Cotton, Raphaël Troncy, Armin Haller and Evangelos Kalampokis
The goal of the SemStats workshop is to explore and strengthen the relationship between the Semantic Web and statistical communities, and to provide better access to the data held by statistical offices. It will focus on ways in which statisticians can use Semantic Web technologies and standards in order to formalize, publish, document and link their data and metadata, and also on how statistical methods can be applied on linked data. The statistical community shows more and more interest in the Semantic Web. In particular, initiatives have been launched to develop semantic vocabularies representing statistical classifications and discovery metadata. Tools are also being created by statistical organizations to support the publication of dimensional data conforming to the Data Cube W3C Recommendation. But statisticians see challenges in the Semantic Web: how can data and concepts be linked in a statistically rigorous fashion? How can we avoid fuzzy semantics leading to wrong analysis? How can we preserve data confidentiality? The SemStats workshop will also cover the question of how to apply statistical methods or treatments to linked data, and how to develop new methods and tools for this purpose. Except for visualization techniques and tools, this question is relatively unexplored, but the subject will obviously grow in importance in the near future.
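The Data Cube publishing practice mentioned above can be sketched as plain subject–predicate–object triples: each statistical figure becomes a `qb:Observation` attached to its dataset, dimensions and measure. The dataset URI and the dimension/measure property names below are invented for illustration; only the `cube#` vocabulary URI follows the W3C Recommendation.

```python
# Namespaces: QB is the real W3C RDF Data Cube vocabulary; EX is a
# hypothetical publisher namespace used only for this sketch.
QB = "http://purl.org/linked-data/cube#"
EX = "http://example.org/"

# One observation: an (illustrative) unemployment rate for one area and period.
observation = [
    (EX + "obs1", "rdf:type",               QB + "Observation"),
    (EX + "obs1", QB + "dataSet",           EX + "unemployment"),
    (EX + "obs1", EX + "refArea",           EX + "geo/AT"),
    (EX + "obs1", EX + "refPeriod",         "2017"),
    (EX + "obs1", EX + "unemploymentRate",  "5.5"),
]

for s, p, o in observation:
    print(s, p, o)
```

Publishing each cell of a statistical table this way is what allows linked-data tooling to slice, link and aggregate official statistics across sources.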
Elena Demidova, Stefan Dietze, Julian Szymanski and John Breslin
The Web of Data, including Linked Data and knowledge graphs, has seen tremendous growth recently. In addition, new forms of structured data have emerged in the form of Web markup, such as schema.org, and entity-centric data in Web tables. Considering these rich, heterogeneous and evolving data sources, which cover a wide variety of domains, the exploitation of Web Data becomes increasingly important in the context of various applications, including federated search, entity linking, question answering and fact verification. These applications require reliable information on dataset characteristics, including general metadata, quality features, statistical information, dynamics, licensing and provenance. Lack of a thorough understanding of the nature, scope and characteristics of data from particular sources limits their take-up and reuse, such that applications are often limited and focused on well-known reference datasets. The PROFILES workshop series aims at gathering approaches to analyse, describe and discover data sources – including but not limited to SPARQL endpoints – as a facilitator for applications and tasks such as query distribution, semantic search, entity retrieval and recommendation. PROFILES offers a highly interactive forum for researchers and practitioners, bringing together experts in the fields of Semantic Web, Linked Data, Semantic Search, Databases, NLP, IR and application domains.
Ruben Verborgh, Andrei Sambra and Tobias Kuhn
The Semantic Web is increasingly becoming a centralized story: we rely on large-scale server-side infrastructures to perform intense reasoning, data mining, and query execution. Therefore, we urgently need research and engineering to put the “Web” back in the “Semantic Web”, aiming for intelligent clients—instead of intelligent servers—as sketched in the original Semantic Web vision. The DeSemWeb2017 workshop purposely takes a radical perspective by focusing solely on decentralized and client-side applications, to counterbalance the centralized discourse of other tracks. While we recognize the value in all subfields of the Semantic Web, we see an urgent need to revalue the role of clients. This workshop will help put different topics on the Semantic Web community’s research agenda, which should lead to new inspiration and initiatives to build future Semantic Web and Linked Data applications.
Claudia Schon, Renata Dividino, Nadeschda Nikitina and Jürgen Umbrich
The Web is primarily a communication platform where knowledge is produced, shared, and consumed by a diversity of stakeholders. As communication is a dynamic process, data on the Web is subject to change. These dynamics evoke the need for versatile methods and algorithms to represent data changes and provide Web agents with a suitable world view at any time. Managing changing datasets is important for many purposes and applications involving Web data, such as data caching, indexing of distributed data sources, resolving conflicts and inconsistencies introduced by updates, optimizing query execution, and querying for changes. The workshop aims to bring together researchers addressing the problem of managing changes in Semantic Web data from different areas, such as Linked Data, the Semantic Web, knowledge and belief change, and description logics, in order to provide an overview of existing approaches and to advance cooperation between these areas.
Pinelopi Troullinou, Mathieu d’Aquin and Ilaria Tiddi
Re-coding Black Mirror aims at exploring potential solutions that Semantic Web technologies could bring to the social and ethical concerns emanating from the wide use of digital advancements. The potential risks of a dystopian future, as depicted in scenarios such as those of the British sci-fi series Black Mirror, will be explored through a multi-disciplinary and dialectic approach. The workshop will enable participants to address emerging social phenomena from different perspectives, building bridges in practice between two arguably distinct ‘worlds’ (those of computer and social sciences) and introducing a rather innovative methodological approach, namely animated case scenarios. Indeed, Re-coding Black Mirror aims at promoting dialogue between Semantic Web researchers and social scientists, drawing upon case scenarios on specific technologies, with the objective of surfacing potential semantic solutions to societal and ethical challenges as discussed intensively within social science fields such as surveillance and mobile media studies. It will also be a forum for networking with scholars from different fields to explore novel research problems relevant to both communities. To that end, the workshop will have a mixed program committee and target audience, combining both traditions.
Sabrina Kirrane, Christopher Brewster, Michelle Cheatham, Mathieu D’Aquin and Stefan Decker
Schneier’s article “The Internet is a surveillance state” summarised the state of Internet privacy as “Welcome to an Internet without privacy, and we’ve ended up here with hardly a fight”. Later, Snowden revealed that the NSA was tracking online communication, followed by revelations that other countries were running similar covert operations. Autumn 2015 saw the collapse of the EU-US Safe Harbor Agreement, which resulted in legal uncertainty regarding transatlantic data exchange, while April 2016 saw the ratification of the new EU Data Protection Regulation, which will come into force in May 2018, after years of discussion involving parliamentarians, lobbyists and activists. On the 28th anniversary of the Web, Tim Berners-Lee sent a widely spread open letter warning of the devastating effect of losing control over personal data and the spread of misinformation, especially on the political scene. This workshop aims to raise awareness that the technologies our community is working on have global societal consequences and, vice versa, that our research can be guided by social, economic and legal privacy requirements. This year’s workshop will build on previous workshops by investigating the privacy implications of semantic technology and also exploring how the technology can be used to support privacy preservation.
Maria Maleshkova, Ruben Verborgh and Amelie Gyrard
Current developments on the Internet are characterised by the wider use of network-enabled devices, such as sensors, mobile phones, and wearables, that serve as data providers or actuators in the context of client applications. Even though real-life objects can finally participate in integrated scenarios, the use of individual and specific interaction mechanisms and data models leads to isolated islands of connected devices or to custom solutions that are not reusable. To this end, the vision of the Internet of Things (IoT) is to leverage Internet standards in order to interconnect all types of embedded devices (e.g., patient monitors, medical sensors, congestion monitoring devices, traffic-light controls, temperature sensors, smart meters, etc.) and real-world objects, and thus to make them a part of the Internet and provide overall interoperability. Therefore, IoT aims to build a future of connected devices that is truly open, flexible, and scalable. The SWIT (Semantic Web technologies for the IoT) workshop aims to contribute towards achieving this goal by exploring how existing, well-established Semantic Web technologies can be used to solve some of the challenges that the IoT currently faces. The focus of the workshop is on solving IoT challenges with Semantic Web technologies.
Anna Lisa Gentile, Ziqi Zhang and Andrea Giovanni Nuzzolese
The LD4IE workshop focuses on the exploitation of Linked Data for Web-scale Information Extraction (IE), which concerns extracting structured knowledge from unstructured or semi-structured documents on the Web. One of the major bottlenecks for the current state of the art in IE is the availability of learning materials (e.g., seed data, training corpora), which are typically created manually and are expensive to build and maintain. Linked Data (LD) defines best practices for exposing, sharing, and connecting data, information, and knowledge on the Semantic Web using uniform means such as URIs and RDF. This has so far produced a gigantic knowledge source, the Linked Open Data (LOD) cloud, which constitutes a mine of learning materials for IE. However, its massive quantity requires efficient learning algorithms, and its unguaranteed data quality requires robust methods to handle redundancy and noise. LD4IE intends to gather researchers and practitioners to address the multiple challenges arising from the use of LD as learning material for IE tasks, focusing on (i) modeling user-defined extraction tasks using LD; (ii) gathering learning materials from LD while assuring quality (training data selection, cleaning, feature selection, etc.); (iii) robust algorithms for various IE tasks using LD; and (iv) publishing IE results to the LOD cloud.
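The core idea of using Linked Data as learning material can be sketched as distant supervision: entity labels drawn from a knowledge base are used to annotate raw text automatically, producing training data without manual labelling. The two-entry dictionary below is a toy stand-in for real DBpedia labels, and the sentence is invented for illustration.

```python
import re

# Hypothetical stand-in for labels harvested from the LOD cloud.
kb_labels = {
    "Vienna": "dbpedia:Vienna",
    "Danube": "dbpedia:Danube",
}

def annotate(sentence, labels):
    """Mark every occurrence of a known entity label in the sentence,
    returning (start, end, uri) spans usable as IE training annotations."""
    annotations = []
    for label, uri in labels.items():
        for m in re.finditer(re.escape(label), sentence):
            annotations.append((m.start(), m.end(), uri))
    return sorted(annotations)

text = "Vienna lies on the Danube."
print(annotate(text, kb_labels))
```

The noise the abstract mentions is visible even at this scale: exact string matching over-annotates ambiguous labels and misses paraphrases, which is why robust selection and cleaning of such material is a workshop theme.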
Loris Bozzato, Thomas Eiter, Martin Homola and Daniele Porello
In the Semantic Web and Linked Data, knowledge is rarely considered a monolithic and static unit. Instead, partitioning knowledge into distinct modular structures is central to organizing knowledge bases, from their design to their management, from their maintenance to their use in knowledge sharing. From a different perspective, representing and reasoning about the context of the knowledge in distinct modules is essential for their correct exploitation and for reliable and effective reasoning in changing situations. Finally, the evolution of knowledge resources, in terms of updates by newly acquired knowledge, is an important factor influencing the meaningfulness of stored knowledge over time. Considering these emerging needs in the Semantic Web / Linked Data community, the 2nd International Workshop on Ontology Modularity, Contextuality, and Evolution (WOMoCoE 2017) offers practitioners and researchers a forum to discuss current work on practical and theoretical aspects of modularity, contextuality and evolution of knowledge resources. The workshop aims to bring together an interdisciplinary audience interested in its topics both from a theoretical and formal point of view (i.e. researchers from philosophy, logic, cognitive science, and linguistics) and from an application perspective (i.e. Semantic Web / Linked Data knowledge engineers, adopters from various application domains).
Axel-Cyrille Ngonga Ngomo, Michael Röder and Irini Fundulaki
The provision of benchmarks has been shown to push the development of innovative solutions throughout the history of computer science. The increasing uptake of Linked Data as a technology for easy integration across different industries has led to Linked-Data-driven solutions being faced with higher performance requirements. The objective of the BLINK workshop series is to provide a discussion forum where research, industry and other users can meet to discuss the performance of current solutions, the methodologies, performance indicators and benchmarks used to quantify this performance, and the strengths and weaknesses of current approaches for benchmarking Linked-Data-driven solutions. The workshop aims to be a forum for discussing and cross-fertilizing benchmarking practices across all steps of the Linked Data lifecycle.
Key-Sun Choi, Jin-Dong Kim, Axel-Cyrille Ngonga Ngomo and Ricardo Usbeck
This workshop is a joint event of two active communities in the area of interaction paradigms for Linked Data: NLIWOD and QALD. NLIWOD, a workshop for discussions on the advancement of natural language interfaces to the Web of Data, has been organized twice within ISWC, with a focus on soliciting discussions on the development of question answering systems. QALD, a benchmarking campaign powered by the H2020 project HOBBIT (project-hobbit.eu) that includes question answering over (Big) Linked Data, has been organized as a challenge within CLEF and ESWC. This time, we will hold a joint workshop to attract people from the two communities in order to promote active collaboration, extend the scope of currently addressed topics, and foster the reuse of resources developed so far. Furthermore, we offer an open challenge – QALD-8 – where users are free to demonstrate the capabilities of their systems using the provided online benchmark platform. The scope of this workshop will also extend to dialogue systems and chatbots as increasingly important business intelligence factors.
Daniele Dell’Aglio, Darko Anicic, Payam Barnaghi, Emanuele Della Valle and Deborah McGuinness
More and more applications require real-time processing of massive, dynamically generated, ordered data, where order is often an essential factor reflecting recency. Data stream management techniques provide reactive and reliable processing mechanisms over such data. Key to their success is the use of streaming algorithms that harness the natural or enforceable orders in the data. This trend has started to become visible on the Web as well, where an increasing number of streaming sources and datasets are becoming available. They originate from social networks, sensor networks, the Internet of Things (IoT) and many other technologies that find in the Web a platform for sharing data. This is resulting in new Web-centric efforts such as the Web of Things, which studies how to expose and describe IoT using the Web, or the Social Web, which investigates protocols, vocabularies, and APIs to facilitate access to social functionality as part of the Web. In the Semantic Web context, efforts such as Stream Reasoning and RDF Stream Processing have emerged. Stream Reasoning aims at combining data stream management and semantic technologies to perform reasoning over massive, heterogeneous and dynamic data, while RDF Stream Processing studies continuous query answering over data streams modelled according to the RDF model. The WSP workshop aims at bringing together these sub-communities to discuss and investigate holistic processing models for streams over the Web, which consider the issues of publishing data streams on the Web as well as processing them with queries and inference. The event will contribute to the creation of an active community interested in integrating stream processing and reasoning using methods inspired by data and knowledge management.
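Continuous query answering over streams can be sketched with a sliding time window: only recent triples are retained, and an aggregate is recomputed as elements arrive. The window width, the sensor triples and the `hasTemp` predicate below are illustrative assumptions, not part of any RDF Stream Processing standard.

```python
from collections import deque

class SlidingWindow:
    """A time-based sliding window over a stream of timestamped triples."""

    def __init__(self, width):
        self.width = width
        self.window = deque()

    def push(self, timestamp, triple):
        self.window.append((timestamp, triple))
        # Evict triples that have fallen out of the time window.
        while self.window and self.window[0][0] <= timestamp - self.width:
            self.window.popleft()

    def count(self, predicate):
        """A trivial continuous query: count triples with a given predicate."""
        return sum(1 for _, (s, p, o) in self.window if p == predicate)

w = SlidingWindow(width=10)
w.push(1, ("sensor1", "hasTemp", 21))
w.push(5, ("sensor2", "hasTemp", 19))
w.push(14, ("sensor1", "hasTemp", 22))  # the triple from t=1 is evicted
print(w.count("hasTemp"))
```

RSP engines generalize this pattern: windows turn an unbounded stream into a sequence of finite RDF graphs, over which ordinary SPARQL evaluation and reasoning can then run.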