Difference between revisions of "PapersQ2"
The page source (latest revision):

{{#ask: [[Category:Paper]] [[Has problem::Link Discovery]]
|mainlabel=Paper
|?Has Dimensions= Evaluation Dimensions
|?Has EvaluationMethod= Evaluation Method
|?Has Benchmark= Benchmarks
|?Event in series= Venue
|?Has year= Year
|format= broadtable
|sort=Has problem
}}
__NOCACHE__
{{update}}
Latest revision as of 07:27, 13 July 2018
Query 2: Which evaluation dimensions, evaluation methods, and benchmarks are used to evaluate LOD Link Discovery / SPARQL Query Federation tools, along with the tools being evaluated, the title of the published article, and the year of publication?
Paper | Evaluation Dimensions | Evaluation Method | Benchmarks | Venue | Year |
---|---|---|---|---|---|
LogMap: Logic-based and Scalable Ontology Matching | Accuracy | Use four ontologies from OAEI 2010 benchmark, calculating the classification time for these ontologies. | OAEI 2010 | ISWC | 2011 |
A Probabilistic-Logical Framework for Ontology Matching | Accuracy | Average F1-values over the 21 OAEI reference alignments for manual weights vs. learned weights vs. formulation without stability constraints; thresholds range from 0.6 to 0.95. | Ontofarm dataset | AAAI | 2010 |
SLINT: A Schema-Independent Linked Data Interlinking System | Accuracy | Compare the system with AgreementMaker, SERIMI, and Zhishi.Links | LinkedMDB GeoNames | OM | 2012 |
AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies | Accuracy | Compare the mappings found by the system between the two ontologies with a reference matching or “gold standard,” which is a set of correct and complete mappings as built by domain experts. | OAEI 2012 | VLDB | 2009 |
Zhishi.links Results for OAEI 2011 | Accuracy | Utilize distributed MapReduce framework to adopt index-based pre-matching | DBpedia Freebase GeoNames | OM | 2011 |
Discovering and Maintaining Links on the Web of Data | - | A methodology that proved useful for optimizing link specifications is to manually create a small reference linkset and then optimize the Silk linking specification to produce these reference links, before Silk is run against the complete target data source. | DBpedia DrugBank | ISWC | 2009 |
A Survey of Current Link Discovery Frameworks | - | - | - | Semantic Web Journal | 2017 |
LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data | Performance | Compare LIMES with different numbers of exemplars on knowledge bases of different sizes. | DBpedia DrugBank LinkedCT MESH | - | 2011 |
KnoFuss: A Comprehensive Architecture for Knowledge Fusion | - | - | - | K-CAP | 2007 |
SERIMI – Resource Description Similarity, RDF Instance Matching and Interlinking | Accuracy | In order to evaluate the effectiveness of the proposed interlinking method, we used the precision, recall and F1 metrics. | DBpedia Sider DrugBank LinkedCT Dailymed Diseasome TCM | ArXiv | 2011 |
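Semantic MediaWiki also exposes #ask queries through the MediaWiki API's `ask` module, so a result set like the table above can be fetched as JSON rather than read off the rendered page. A minimal sketch of building such a request follows; the API endpoint URL is a placeholder assumption, while the property names and printout labels are taken from the query on this page:

```python
# Sketch: build an api.php request that runs this page's #ask query through
# Semantic MediaWiki's "ask" API module (action=ask) and returns JSON.
# The endpoint "https://example.org/w/api.php" is a placeholder, not the
# address of the actual wiki hosting this page.
from urllib.parse import urlencode

# The #ask query from this page, written as a single query string.
ASK_QUERY = (
    "[[Category:Paper]][[Has problem::Link Discovery]]"
    "|mainlabel=Paper"
    "|?Has Dimensions=Evaluation Dimensions"
    "|?Has EvaluationMethod=Evaluation Method"
    "|?Has Benchmark=Benchmarks"
    "|?Event in series=Venue"
    "|?Has year=Year"
    "|sort=Has problem"
)

def build_ask_url(api_base: str, query: str) -> str:
    """Return an api.php URL whose response is the #ask result set as JSON."""
    params = {"action": "ask", "format": "json", "query": query}
    return api_base + "?" + urlencode(params)

url = build_ask_url("https://example.org/w/api.php", ASK_QUERY)
print(url)
```

Fetching `url` with any HTTP client returns a JSON object whose `query.results` member maps each matching paper's page name to its printout values, i.e. the columns of the table above.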