Search by property
This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.
List of results
- KnoFuss: A Comprehensive Architecture for Knowledge Fusion + (-)
- Querying the Web of Interlinked Datasets using VOID Descriptions + (-)
- SLINT: A Schema-Independent Linked Data Interlinking System + (2.66GHz quad-core CPU and 4GB of memory)
- FedX: Optimization Techniques for Federated Query Processing on Linked Data + (All experiments are carried out on an HP Proliant DL360 G6 with a 2GHz 4-core CPU with 128KB L1 cache, 1024KB L2 cache, 4096KB L3 cache, 32GB 1333MHz RAM, and a 160GB SCSI hard drive. In all scenarios we assigned 20GB RAM to the process executing the query. In the SPARQL federation we additionally assign 1GB RAM to each individual SPARQL endpoint process.)
- A Probabilistic-Logical Framework for Ontology Matching + (All experiments were conducted on a desktop PC with AMD Athlon Dual Core Processor 5400B with 2.6GHz and 1GB RAM.)
- SPLENDID: SPARQL Endpoint Federation Exploiting VOID Descriptions + (Due to the unpredictable availability and latency of the original SPARQL endpoints of the benchmark dataset, we used local copies of them, hosted on five 64bit Intel(R) Xeon(TM) CPU 3.60GHz server instances running Sesame 2.4.2, with each instance providing the SPARQL endpoint for one life science and one cross domain dataset. The evaluation was performed on a separate server instance with a 64bit Intel(R) Xeon(TM) CPU 3.60GHz and a 100Mbit network connection.)
- Adaptive Integration of Distributed Semantic Web Data + (Endpoint machines are connected to the machine on which the mediator is deployed (2GHz AMD Athlon X2, 2GB RAM) via a 100Mbit Ethernet LAN.)
- RDB2ONT: A Tool for Generating OWL Ontologies From Relational Database Systems + (No data available now.)
- Updating Relational Data via SPARQL/Update + (No data available now.)
- Bringing Relational Databases into the Semantic Web: A Survey + (No data available now.)
- D2RQ – Treating Non-RDF Databases as Virtual RDF Graphs + (No data available now.)
- LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data + (No data available now.)
- Discovering and Maintaining Links on the Web of Data + (No data available now.)
- Use of OWL and SWRL for Semantic Relational Database Translation + (No data available now.)
- Accessing and Documenting Relational Databases through OWL Ontologies + (No data available now.)
- Optimizing SPARQL Queries over Disparate RDF Data Sources through Distributed Semi-joins + (No data available now.)
- From Relational Data to RDFS Models + (No data available now.)
- DataMaster – a Plug-in for Importing Schemas and Data from Relational Databases into Protégé + (No data available now.)
- AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies + (No data available now.)
- Unveiling the hidden bride: deep annotation for mapping and migrating legacy data to the Semantic Web + (No data available now.)
- Analysing Scholarly Communication Metadata of Computer Science Events + (No data available now.)
- Relational.OWL - A Data and Schema Representation Format Based on OWL + (No data available now.)
- LogMap: Logic-based and Scalable Ontology Matching + (No data available now.)
- A Survey of Current Link Discovery Frameworks + (No data available now.)
- Integration of Scholarly Communication Metadata using Knowledge Graphs + (No data available now.)
- Towards a Knowledge Graph for Science + (No data available now.)
- Cross: an OWL wrapper for reasoning on relational databases + (On an Intel Core 2, 2.33GHz, with 2GB of memory)
- Avalanche: Putting the Spirit of the Web back into Semantic Web Querying + (Avalanche was tested using a five-node cluster. Each machine had 2GB RAM and an Intel Core 2 Duo E8500 @ 3.16GHz.)
- Zhishi.links Results for OAEI 2011 + (Tests were carried out on a Hadoop computer cluster. Each node has a quad-core Intel Core 2 processor (4M Cache, 2.66 GHz), 8GB memory. The number of reduce tasks was set to 50.)
- Towards a Knowledge Graph Representing Research Findings by Semantifying Survey Articles + (The evaluation started by letting researchers first read the given overview questions and then try, in their own way, to find the respective answers.)
- A Semantic Web Middleware for Virtual Data Integration on the Web + (The tests were performed with the following setup: the mediator (and also the test client) were running on a 2.16 GHz Intel Core 2 Duo with 2 GB memory and a 2 MBit link to the remote endpoints. All endpoints were simulated on the same physical host running two AMD Opteron CPUs at 1.6 GHz and 2 GB memory.)
- Querying the Web of Data with Graph Theory-based Techniques + (The three engines are run independently on a machine having an Intel Xeon W3520 processor, 12 GB memory and 1Gbps LAN.)
- ANAPSID: An Adaptive Query Processing Engine for SPARQL Endpoints + (We empirically analyze the performance of the proposed query processing techniques and report on the execution time of plans comprised of ANAPSID operators versus queries posed against SPARQL endpoints and state-of-the-art RDF engines. Three sets of queries were considered (Table of Figure 5(b)); each sub-query was executed as a query against its corresponding endpoint. Benchmark 1 is a set of 10 queries against LinkedSensorData-blizzards; each query can be grouped into 4 or 5 sub-queries. Benchmark 2 is a set of 10 queries over linkedCT with 3 or 4 sub-queries. Benchmark 3 is a set of 10 queries with 4 or 5 sub-queries executed against linkedCT and DBPedia endpoints. Experiments were executed on a Linux CentOS machine with an Intel Pentium Core2 Duo 3.0 GHz and 8GB RAM.)
- SERIMI – Resource Description Similarity, RDF Instance Matching and Interlinking + (We loaded all these datasets into an open-source instance of Virtuoso Universal Server 10, where around 2GB of data were loaded. An exception was the DBPedia dataset, which we accessed online via its SPARQL endpoint. The Virtuoso server was installed on Mac OS X version 10.5.8 with a 2.4 GHz Intel Core 2 Duo processor and 4 GB 1067 MHz DDR3 memory. We ran the script that implements the SERIMI approach directly over the local SPARQL endpoints and the DBPedia online endpoint.)
- Querying Distributed RDF Data Sources with SPARQL + (We split all data over two Sun-Fire-880 machines (8x sparcv9 CPU, 1050MHz, 16GB RAM) running SunOS 5.10. The SPARQL endpoints were provided using Virtuoso Server 5.0.37 with an allowed memory usage of 8GB. Note that, although we use only two physical servers, there were five logical SPARQL endpoints. DARQ was running on Sun Java 1.6.0 on a Linux system with Intel Core Duo CPUs, 2.13 GHz and 4GB RAM. The machines were connected over a standard 100Mbit network connection.)
- Querying over Federated SPARQL Endpoints : A State of the Art Survey + (No data available now.)