Difference between revisions of "PapersQ2"

From Openresearch
Query 2: Which evaluation dimensions, evaluation methods, and benchmarks are used to evaluate LOD Link Discovery / SPARQL Query Federation tools, along with the tools being evaluated, the title of the published article, and the year of publication?
  
 
{{#ask: [[Category:Paper]] [[Has problem::Link Discovery]]
|mainlabel=Paper
|?Has Dimensions= Evaluation Dimensions
|?Has EvaluationMethod= Evaluation Method
|?Has Benchmark= benchmarks
|?Event in series= Venue
|?Has year= year
|format=broadtable
|sort=Has problem
}}

__NOCACHE__

{{update}}

Latest revision as of 07:27, 13 July 2018


{| class="wikitable"
! Paper !! Evaluation Dimensions !! Evaluation Method !! benchmarks !! Venue !! year
|-
| LogMap: Logic-based and Scalable Ontology Matching || Accuracy || Use four ontologies from the OAEI 2010 benchmark, calculating the classification time for these ontologies. || OAEI 2010 || ISWC || 2011
|-
| A Probabilistic-Logical Framework for Ontology Matching || Accuracy || Average F1-values over the 21 OAEI reference alignments for manual weights vs. learned weights vs. formulation without stability constraints; thresholds range from 0.6 to 0.95. || Ontofarm dataset || AAAI || 2010
|-
| SLINT: A Schema-Independent Linked Data Interlinking System || Accuracy || Compare the system with AgreementMaker, SERIMI, and Zhishi.Links. || LinkedMDB, GeoNames || OM || 2012
|-
| AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies || Accuracy || Compare the mappings found by the system between the two ontologies with a reference matching or "gold standard", a set of correct and complete mappings built by domain experts. || OAEI 2012 || VLDB || 2009
|-
| Zhishi.links Results for OAEI 2011 || Accuracy || Utilize a distributed MapReduce framework to adopt index-based pre-matching. || DBpedia, Freebase, GeoNames || OM || 2011
|-
| Discovering and Maintaining Links on the Web of Data || || A methodology that proved useful for optimizing link specifications is to manually create a small reference linkset and then optimize the Silk linking specification to produce these reference links, before Silk is run against the complete target data source. || DBpedia, DrugBank || ISWC || 2009
|-
| A Survey of Current Link Discovery Frameworks || || -- || || Semantic Web Journal || 2017
|-
| LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data || Performance || Compare LIMES with different numbers of exemplars on knowledge bases of different sizes. || DBpedia, DrugBank, LinkedCT, MESH || || 2011
|-
| KnoFuss: A Comprehensive Architecture for Knowledge Fusion || || -- || || K-CAP || 2007
|-
| SERIMI – Resource Description Similarity, RDF Instance Matching and Interlinking || Accuracy || To evaluate the effectiveness of the proposed interlinking method, the precision, recall, and F1 metrics were used. || DBpedia, Sider, DrugBank, LinkedCT, Dailymed, Diseasome, TCM || ArXiv || 2011
|}
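Several evaluation methods in the table (e.g. the SERIMI row) rely on precision, recall, and F1 computed against a manually built gold-standard linkset. As an illustrative sketch only (not taken from any of the listed papers, and with hypothetical example links), this is how those metrics are typically computed for a link-discovery result:

```python
def evaluate_links(found, reference):
    """Compare a set of discovered links against a gold-standard linkset."""
    found, reference = set(found), set(reference)
    true_positives = len(found & reference)
    precision = true_positives / len(found) if found else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical links (pairs of resource identifiers) for illustration only
gold = {("db:Aspirin", "drugbank:DB1"), ("db:Ibuprofen", "drugbank:DB2")}
found = {("db:Aspirin", "drugbank:DB1"), ("db:Paracetamol", "drugbank:DB3")}

precision, recall, f1 = evaluate_links(found, gold)
print(precision, recall, f1)  # 0.5 0.5 0.5
```

One correct link out of two found gives precision 0.5; one of two reference links recovered gives recall 0.5, and F1 is their harmonic mean.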