=Paper=
{{Paper
|id=Vol-1700/preface
|storemode=property
|title=BLINK 2016 Preface
|pdfUrl=https://ceur-ws.org/Vol-1700/preface.pdf
|volume=Vol-1700
}}
==BLINK 2016 Preface==
Preface

The first BLINK workshop on Benchmarking Linked Data took place in Kobe, Japan on October 18th, 2016 and was hosted by the Kobe International Conference Center. The workshop was supported by the HOBBIT project (https://project-hobbit.eu). BLINK provided a forum where topics related to the evaluation (including, but not limited to, the expressive power, usability and performance) of Linked Data technologies for the different steps of the Big Linked Data chain could be discussed and elaborated upon.

Big Linked Data is starting to enter the new data economy. Systems are constantly being developed to support the booming exchange of data (existing in numerous formats) on the Web and in the enterprise. Big Linked Data standards and benchmarks can serve as valuable tools to objectively depict and illustrate the level of adequacy, and thus the performance, of existing Linked Data systems. This workshop aimed to bring together a broad range of attendees interested in benchmarking Big Linked Data and to collectively identify the specific needs and challenges of the domain, in order to foster interdisciplinary collaborations towards addressing these challenges. More specifically, the objectives of this workshop were to:

• create a discussion forum where researchers and industry practitioners can meet and discuss topics related to the performance of Linked Data systems, and
• expose and initiate discussions on best practices, different application needs and scenarios related to Linked Data management.

Six papers were presented during the workshop. All were selected by peer review for presentation. The first paper presents the schema, data generator, workload and experimental results of the Semantic Publishing Benchmark (SPB), developed in the context of the Linked Data Benchmark Council (LDBC) EU project. The second paper presents a modification of an existing RDF data generator in order to create the basis for more realistic RDF benchmarking. The third paper presents the results of evaluating instance matching systems using Lance, a domain-independent, schema-agnostic instance matching benchmark generator for Linked Data. The development of a new benchmark for federated query processing systems is presented in the fourth paper. The fifth paper describes the basic strategies that archiving tools follow for storing RDF data, as well as some basic requirements an RDF archiving system should meet. The last paper proposes a data scaler for OBDA benchmarks; it integrates some of the measures used by database query optimizers and existing data scalers with OBDA-specific measures in order to deliver better data generation in the context of OBDA benchmarks.

We wish to thank all who contributed to the success of this workshop, especially the authors, reviewers, speakers and participants.

October, 2016

Axel-Cyrille Ngonga Ngomo
Anastasia Krithara
Irini Fundulaki