PapersQ5

We found the following results:
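
These results are produced by a Semantic MediaWiki inline query. The page's creation summary preserves it only in truncated form; a plausible reconstruction is shown below. The property names for the last two columns (Has Experiment Setup, Has Evaluation Description) are assumptions inferred from the column labels, not confirmed by the source:

 {{#ask: [[Category:Paper]] [[Has problem::SPARQL Query Federation]]
  |mainlabel=Paper
  |?Has Implementation=Tool
  |?Has Downloadpage=Downloadpage
  |?Has Experiment Setup=Experiment Setup
  |?Has Evaluation Description=Evaluation Description
 }}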

A Semantic Web Middleware for Virtual Data Integration on the Web
Tool: SemWIQ
Downloadpage: https://sourceforge.net/projects/semwiq/
Experiment Setup: The tests were performed with the following setup: the mediator (and also the test client) were running on a 2.16 GHz Intel Core 2 Duo with 2 GB memory and a 2 MBit link to the remote endpoints. All endpoints were simulated on the same physical host running two AMD Opteron CPUs at 1.6 GHz and 2 GB memory.
Evaluation Description: For the sample queries, real-world data of sunspot observations recorded at Kanzelhöhe Solar Observatory (KSO) have been used. The observatory is also a partner in the Austrian Grid project. The queries are shown in Fig. 2. Query 1 retrieves the first name, the last name, and optionally the e-mail address of scientists who have done observations. Query 2 retrieves all observations ever recorded by Mr. Otruba.

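Fig. 2 itself is not reproduced on this page. A minimal SPARQL sketch of what Query 1 describes might look as follows; FOAF is assumed for person data, and the ex: observation vocabulary is hypothetical, not taken from the paper:

 # Sketch of Query 1: names of observing scientists, e-mail optional
 PREFIX foaf: <http://xmlns.com/foaf/0.1/>
 PREFIX ex:   <http://example.org/kso#>
 SELECT ?firstName ?lastName ?email WHERE {
   ?scientist foaf:firstName ?firstName ;
              foaf:lastName  ?lastName .
   ?observation ex:observedBy ?scientist .    # only scientists who have done observations
   OPTIONAL { ?scientist foaf:mbox ?email }   # the e-mail address is optional
 }
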
ANAPSID: An Adaptive Query Processing Engine for SPARQL Endpoints
Tool: ANAPSID
Downloadpage: https://github.com/anapsid/anapsid
Experiment Setup: Experiments were executed on a Linux CentOS machine with an Intel Pentium Core2 Duo 3.0 GHz and 8 GB RAM. We report on runtime performance, which corresponds to the user time produced by the time command of the Unix operating system. Experiments in RDF-3X were run with both cold and warm caches; to run with a cold cache, we executed the same query five times, dropping the cache just before the first iteration of the query. Each query executed by ANAPSID and the SPARQL endpoints was run ten times, and we report the average time.
Evaluation Description: We empirically analyze the performance of the proposed query processing techniques and report on the execution time of plans comprised of ANAPSID operators versus queries posed against SPARQL endpoints and state-of-the-art RDF engines. Three sets of queries were considered (table of Figure 5(b)); each sub-query was executed as a query against its corresponding endpoint. Benchmark 1 is a set of 10 queries against LinkedSensorData-blizzards; each query can be grouped into 4 or 5 sub-queries. Benchmark 2 is a set of 10 queries over linkedCT with 3 or 4 sub-queries. Benchmark 3 is a set of 10 queries with 4 or 5 sub-queries executed against the linkedCT and DBpedia endpoints.

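For illustration, a query split into sub-queries against two endpoints, as in Benchmark 3, can be written with SPARQL 1.1 SERVICE clauses. Note that ANAPSID decomposes queries with its own adaptive operators rather than SERVICE clauses; this sketch only shows the shape of such a federated query, and the linkedCT endpoint URL and property are placeholders:

 # Sketch: one sub-query per endpoint, joined on ?condition
 PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
 SELECT ?trial ?label WHERE {
   SERVICE <http://linkedct.example.org/sparql> {     # hypothetical linkedCT endpoint
     ?trial <http://example.org/linkedct/condition> ?condition .
   }
   SERVICE <http://dbpedia.org/sparql> {              # public DBpedia endpoint
     ?condition rdfs:label ?label .
     FILTER (lang(?label) = "en")
   }
 }
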
Adaptive Integration of Distributed Semantic Web Data
Tool: ADERIS
Downloadpage: No data available now.
Experiment Setup: Endpoint machines are connected to the machine on which the mediator is deployed (2 GHz AMD Athlon X2, 2 GB RAM) via a 100 Mbit/s Ethernet LAN.
Evaluation Description: No data available now.

Avalanche: Putting the Spirit of the Web back into Semantic Web Querying
Tool: Avalanche
Downloadpage: No data available now.
Experiment Setup: Avalanche was tested using a five-node cluster. Each machine had 2 GB RAM and an Intel Core 2 Duo E8500 @ 3.16 GHz.
Evaluation Description: No data available now.

FedX: Optimization Techniques for Federated Query Processing on Linked Data
Tool: FedX
Downloadpage: http://www.fluidops.com/FedX
Experiment Setup: All experiments are carried out on an HP ProLiant DL360 G6 with a 2 GHz 4-core CPU (128 KB L1 cache, 1024 KB L2 cache, 4096 KB L3 cache), 32 GB 1333 MHz RAM, and a 160 GB SCSI hard drive. In all scenarios we assigned 20 GB RAM to the process executing the query. In the SPARQL federation setting we additionally assign 1 GB RAM to each individual SPARQL endpoint process.
Evaluation Description: No data available now.

Optimizing SPARQL Queries over Disparate RDF Data Sources through Distributed Semi-joins
Tool: Distributed SPARQL
Downloadpage: No data available now.
Experiment Setup: No data available now.
Evaluation Description: No data available now.

Querying Distributed RDF Data Sources with SPARQL
Tool: DARQ
Downloadpage: http://darq.sf.net/
Experiment Setup: We split all data over two Sun-Fire-880 machines (8x sparcv9 CPU, 1050 MHz, 16 GB RAM) running SunOS 5.10. The SPARQL endpoints were provided using Virtuoso Server 5.0.3 with an allowed memory usage of 8 GB. Note that, although we used only two physical servers, there were five logical SPARQL endpoints. DARQ was running on Sun Java 1.6.0 on a Linux system with Intel Core Duo CPUs at 2.13 GHz and 4 GB RAM. The machines were connected over a standard 100 Mbit network connection.
Evaluation Description: In this section we evaluate the performance of the DARQ query engine. The prototype was implemented in Java as an extension to ARQ. We used a subset of DBpedia, which contains RDF information extracted from Wikipedia; the dataset is offered in different parts.

Querying the Web of Data with Graph Theory-based Techniques
Tool: GDS
Downloadpage: http://code.google.com/p/gdsparal/
Experiment Setup: The three engines are run independently on a machine with an Intel Xeon W3520 processor, 12 GB memory, and a 1 Gbps LAN.
Evaluation Description: We deploy 6 SPARQL endpoints (Sesame 2.4.0) on 5 remote virtual machines. About 400,000 triples (generated by BSBM) are distributed to these endpoints following a Gaussian distribution. We follow the metrics presented in (23). For each query, we calculate the number of queries executed per second (QPS) and the average result count. For the whole test, we record the overall runtime, CPU usage, memory usage, and network overhead. We perform 10 warm-up runs and 50 testing runs for each engine. The timeout is set to 30 seconds. In each run, only one instance of each engine is used for all queries, but the cache is cleared after each query finishes. Warm-up runs are not counted in the query-time metrics, but they are included in the system and network overhead.

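As a worked example of the QPS metric (the numbers are illustrative, not from the paper): a query whose 50 testing runs take 20 seconds of cumulative query time yields 50 / 20 = 2.5 QPS.
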
Querying the Web of Interlinked Datasets using VOID Descriptions
Tool: WoDQA
Downloadpage: https://sourceforge.net/projects/wodqa/
Experiment Setup: No data available now.
Evaluation Description: No data available now.

SPLENDID: SPARQL Endpoint Federation Exploiting VOID Descriptions
Tool: SPLENDID
Downloadpage: https://github.com/semagrow/fork-splendid-server
Experiment Setup: Due to the unpredictable availability and latency of the original SPARQL endpoints of the benchmark dataset, we used local copies of them, hosted on five 64-bit Intel Xeon 3.60 GHz server instances running Sesame 2.4.2, with each instance providing the SPARQL endpoint for one life-science and one cross-domain dataset. The evaluation was performed on a separate server instance with a 64-bit Intel Xeon 3.60 GHz CPU and a 100 Mbit network connection.
Evaluation Description: We investigated how the information from the VOID descriptions affects the accuracy of the source selection. For each query, we look at the number of sources selected and the resulting number of requests to the SPARQL endpoints. We tested three different source selection approaches, based on 1) the predicate index only (no type information), 2) the predicate and type index, and 3) the predicate and type index plus grouping of sameAs patterns, as described in Section 4.2.

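To make VOID-based source selection concrete, here is a minimal sketch of the idea: a federator can ask a VOID catalog which datasets expose a given predicate and through which endpoint. The void: terms are from the W3C VOID vocabulary; the catalog itself and foaf:name as the example predicate are assumptions, not details from the paper:

 # Sketch: find datasets (and their endpoints) known to use foaf:name
 PREFIX void: <http://rdfs.org/ns/void#>
 PREFIX foaf: <http://xmlns.com/foaf/0.1/>
 SELECT ?dataset ?endpoint WHERE {
   ?dataset a void:Dataset ;
            void:sparqlEndpoint ?endpoint ;
            void:propertyPartition [ void:property foaf:name ] .
 }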