Shiva: A Framework for Graph based Ontology Matching

Iti Mathur, Nisheeth Joshi

Apaji Institute
Banasthali University
Rajasthan, India

Hemant Darbari, Ajai Kumar

Applied Artificial Intelligence Group
Center for Development of Advanced Computing
Pune, Maharashtra, India

ABSTRACT

Corporations have long sought knowledge sources that can provide a structured description of data and focus on meaning and shared understanding; structures that can accommodate open-world assumptions and are flexible enough to incorporate and recognize more than one name for an entity; a source whose major purpose is to facilitate human communication and interoperability. Databases clearly fail to provide these features, and ontologies have emerged as an alternative, but corporations working in the same domain tend to build different ontologies. The problem occurs when they want to share their data and knowledge. Thus we need tools to merge ontologies into one; this task is termed ontology matching. It is an emerging area, and we still have a long way to go before we have an ideal matcher that produces good results. In this paper we present a framework for matching ontologies using graphs.

General Terms

Ontology Matching, Ontology Alignment

Keywords

Ontology Matching, Ontology Alignment, Graph Matching, Kuhn-Munkres Algorithm.

1. INTRODUCTION

Since the dawn of the Semantic Web, Ontology Matching (OM) has been gaining popularity, as corporations have started using ontologies for storing their knowledge. This knowledge is the most valuable asset of any organization, and timely access to it is a major concern. Unfortunately, this is not as simple as it sounds, because at times a knowledge engineer comes across a situation where more than one ontology is being used for the same knowledge. This is a nightmare that every knowledge engineer fears.

To address this issue, one has to either employ a human annotator to merge all the ontologies holding the same knowledge, or devise a mechanism to merge the ontologies automatically. The latter is termed ontology matching. Since the beginning of the 21st century this concept has been widely explored, with researchers trying to develop new ways to merge ontologies that can produce results as good as humans do. The problem of merging or matching ontologies is not simple, as several issues have to be considered. Among them the most prominent is heterogeneity, where ontologies are available in different frameworks and the knowledge incorporated in them has to be merged. Most of the matchers developed today are unable to handle this problem. Our approach addresses this issue: since ontologies are mostly available in OWL, RDF or XML formats, our matcher can read any of these formats, align their information and produce an aligned ontology.

The rest of the paper is organized as follows: Section 2 gives a brief description of the work done in the area of ontology matching. Section 3 describes our approach, explaining the experimental setup and our methodology. Section 4 describes the evaluation procedure used to test the performance of the matcher, and Section 5 concludes the work.

2. LITERATURE SURVEY

In the past decade, as this area gained popularity, a lot of work was done to develop good matching systems. In this section we describe some of the best matchers developed to date.

AgreementMaker is a matcher developed at the University of Illinois at Chicago by Cruz et al. [1]. This system has the best user interface developed so far; moreover, its flexible architecture and integrated user interface set it apart from other matchers. The core philosophy of its developers is to involve users in the matching process. They believe that users can help produce better alignments than fully automatic matching allows, and thus they advocate semi-automatic matching systems. LogMap is another ontology matcher, developed at the University of Oxford by Ruiz and Grau [2]. They use a logic-based reasoning approach, arguing that exploiting the logic-based semantics incorporated in the ontologies may produce better alignments. This matcher is still under development and has already started a debate in the ontology-matching community.

AROMA [3] is a hybrid ontology matcher that can effectively match the concepts and properties of two ontologies. To do so, it uses the association rule paradigm [4] and a statistical interestingness measure. CIDER [5] tries to align ontologies using schema matching. It follows a two-pronged approach: first it extracts similar concepts up to a certain depth, then applies different matching techniques to those concepts, and finally produces the aligned ontology. Lily [6] is another matching system, which has re-emerged as one of the active ontology matchers. It can match generic and large-scale ontologies. It produces good results for normal-size ontologies but takes a lot of time to do so, mainly because it tries to extract semantic subgraphs and then map them onto the other ontology.

RiMOM [7] is one of the top-performing matchers tested in various evaluation campaigns across the globe. It can match not only schemas but also the instances available in the ontologies. It uses multiple techniques to implement this feature, and uses external resources like WordNet for semantic matching. TaxoMap [8] is another matcher that can produce matched ontologies at large scale. It does so by finding correspondences between the concepts of two ontologies; it also matches subsumption relations, their inverses and proximity relations. YAM++ [9] is another matcher that can produce good results. It combines multiple matching algorithms to produce the matched ontology, and it provides flexibility by allowing the user to state preferences. The system is self-configurable and extensible: if the user is not satisfied with the results, he can provide his own customized matching approach.

3. OUR APPROACH

3.1 Experimental Setup

To test the performance of ontology matchers, we required ontologies, so we used some of the ontologies from the OAEI (Ontology Alignment Evaluation Initiative) 2013 evaluation task [10]. This task had some lightweight ontologies and one heavyweight ontology. We used fifteen lightweight ontologies from the benchmark test set, and we also used an ontology from the anatomy track.

Since we could not find any more heavyweight ontologies, we developed some on our own: an ontology on human anatomy [11], which has concepts relating to human physiological structure, and two ontologies on health care services [12] and communicable diseases [13].

We also used some of the best matchers from the OAEI 2013 task and compared our system with them. We used a graph-based methodology for matching the ontologies; the objective was to check the feasibility of graph matching algorithms for ontology matching. Although some work has been done on using graphs in ontology matching, none of the previous work has checked the feasibility of graph-based matchers with both heavyweight and lightweight ontologies.

3.2 Methodology

As ontologies have a hierarchical structure in which concepts, attributes and instances can be arranged in a tree- or graph-like structure, using a graph matching algorithm here is a far more intuitive mechanism. Thus, in our approach, we have done exactly that: we use a bipartite graph matching algorithm.

We have christened our system Shiva. In our approach, we first take two ontologies, which can be in different formats; for example, the source ontology can be in OWL format while the target ontology is in RDF format. Our system can recognize ontologies in OWL, RDF and XML formats. The source ontology $O_s$ and target ontology $O_t$ are read and sent for preprocessing. In the preprocessing task, we first parse the two ontologies separately, collecting their concepts, sub-concepts, properties and instances. This information is stored in a file for manual debugging. The extracted information is then preprocessed and arranged into a linked graph in memory, so that each concept has a direct relationship with its properties, sub-concepts and instances. If we want, we can generate an adjacency matrix of this information, or view it visually by creating vertices and arcs labeled Isa, instanceof and hasproperty.
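The linked graph described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the triple input format and the example element names are assumptions.

```python
# Illustrative sketch (assumed data format, not the authors' code) of the
# in-memory linked graph: each node links directly to its related elements
# via Isa / instanceof / hasproperty arcs.
from collections import defaultdict

def build_linked_graph(triples):
    """triples: (subject, arc, object) tuples with arc in
    {'Isa', 'instanceof', 'hasproperty'}."""
    graph = defaultdict(lambda: {"Isa": [], "instanceof": [], "hasproperty": []})
    for subj, arc, obj in triples:
        graph[subj][arc].append(obj)
    return dict(graph)

onto = build_linked_graph([
    ("Car", "Isa", "Vehicle"),            # Car is a sub-concept of Vehicle
    ("Car", "hasproperty", "numWheels"),  # hypothetical property
    ("myCar", "instanceof", "Car"),       # hypothetical instance
])
```

An adjacency matrix over the same vertices can be derived from this structure when a matrix view is preferred.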

Once preprocessing is completed, the extracted information is sent to the matching system, where the user can select from four different structural matching algorithms: Levenshtein Edit Distance [14], Qgrams [15], Smith Waterman [16] and Jaccard's Coefficient [17]. All the algorithms search for similarities between concepts, sub-concepts, properties and instances, which are checked for three types of correspondences. These are:

    1. Equivalence correspondence: where a concept, sub-concept, property or instance in $O_s$ matches with its counterpart (at the same level) in $O_t$.
    2. Isa correspondence: where a sub-concept of $O_s$ matches with a concept of $O_t$ and vice versa.
    3. General correspondence: where a property of $O_s$ matches with a concept or sub-concept of $O_t$ and vice versa.

Thus all the mappings mapping($x, y$) are generated as 4-tuples ($x, y, r, t$), where:

$x \in O_s$ : $x$ is one of the concepts, sub-concepts, properties and instances in the source ontology.

$y \in O_t$ : $y$ is one of the concepts, sub-concepts, properties and instances in the target ontology.

$r \in R$ : $r$ is a correspondence relation from the set of correspondence relations $R$; in our case these are Equivalence, Isa and General.

$t \in T$ : $t$ is the similarity metric used in the alignment, from the set of available metrics $T$; in our case these are Levenshtein Distance, Jaccard Coefficient, Smith Waterman and Qgrams.
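Generating these 4-tuples can be sketched as below. The element-kind encoding, the similarity threshold and the example inputs are assumptions for illustration; the correspondence typing follows the three cases listed above.

```python
# Hypothetical sketch of generating the 4-tuples (x, y, r, t).
# source_elems / target_elems map element names to their kind, one of
# 'concept', 'sub-concept', 'property', 'instance'.
def generate_mappings(source_elems, target_elems, similarity, metric_name,
                      threshold=0.5):  # threshold is an assumed parameter
    mappings = []
    for x, kind_x in source_elems.items():
        for y, kind_y in target_elems.items():
            if similarity(x, y) < threshold:
                continue
            if kind_x == kind_y:
                r = "Equivalence"          # same level in both ontologies
            elif {kind_x, kind_y} == {"concept", "sub-concept"}:
                r = "Isa"                  # sub-concept vs. concept
            else:
                r = "General"              # e.g. property vs. concept
            mappings.append((x, y, r, metric_name))
    return mappings

exact = lambda a, b: 1.0 if a.lower() == b.lower() else 0.0
generate_mappings({"Car": "concept"}, {"car": "sub-concept"}, exact, "exact")
# -> [('Car', 'car', 'Isa', 'exact')]
```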

Using these mappings, we generated a score matrix in the following format:

$$S = \begin{bmatrix} M[o_{11}o_{21}] & M[o_{12}o_{21}] & M[o_{13}o_{21}] & \dots & M[o_{1m}o_{21}] \\ M[o_{11}o_{22}] & M[o_{12}o_{22}] & M[o_{13}o_{22}] & \dots & M[o_{1m}o_{22}] \\ M[o_{11}o_{23}] & M[o_{12}o_{23}] & M[o_{13}o_{23}] & \dots & M[o_{1m}o_{23}] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ M[o_{11}o_{2n}] & M[o_{12}o_{2n}] & M[o_{13}o_{2n}] & \dots & M[o_{1m}o_{2n}] \end{bmatrix}_{m \times n}$$

Here, $M[o_{11}o_{21}]$ is the mapping between one of the elements (concepts, sub-concepts, properties, instances) of the source ontology $O_s$ and one of the elements (concepts, sub-concepts, properties, instances) of the target ontology $O_t$, and holds the value produced by the similarity metric. For example, if we are using the Levenshtein distance algorithm and we have the two concepts car and cars, then the edit count would be 1, and the similarity is calculated using the formula in equation 1.

$$sim(x, y) = \frac{\#edits(x, y)}{\max\{len(x), len(y)\}} \quad (1)$$
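Equation 1 can be read off directly in code. The following is an illustrative sketch, not the authors' implementation, computing the edit count with the classic dynamic-programming Levenshtein recurrence and normalizing by the longer string.

```python
# Equation 1 implemented literally (a sketch, not the authors' code).
def edit_distance(x, y):
    # dynamic-programming Levenshtein distance, computed row by row
    prev = list(range(len(y) + 1))
    for i, cx in enumerate(x, 1):
        curr = [i]
        for j, cy in enumerate(y, 1):
            cost = 0 if cx == cy else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def sim(x, y):
    # normalize the edit count by the longer string's length
    return edit_distance(x, y) / max(len(x), len(y))

sim("car", "cars")  # 1 edit over max length 4 -> 0.25
```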

Here $x$ and $y$ are the two strings; in our case $x$ is "car" and $y$ is "cars". $\#edits(x, y)$ is the number of edits required to make the two strings equal, and $len(x)$ and $len(y)$ are the lengths of strings $x$ and $y$; the maximum of the two is used to compute the final score. This is done for all the mappings, which generates the score matrix of all the matched elements of both ontologies. This matrix can be seen as a bipartite graph with two disjoint sets of vertices (in our case the mapping elements of $O_s$ and $O_t$) whose edge weights are the similarity values.
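Selecting the best one-to-one pairs from such a bipartite score matrix is a linear assignment problem, which Shiva solves with the Hungarian (Kuhn-Munkres) method [18]. For a small square matrix the optimum can be sketched by brute force over permutations; a real matcher would use the polynomial-time Hungarian algorithm instead (e.g. SciPy's `linear_sum_assignment`). The score values below are made up for illustration.

```python
# Brute-force sketch of the assignment step (illustrative only; the
# Hungarian algorithm finds the same optimum in polynomial time).
from itertools import permutations

def best_assignment(score):
    """score[i][j] = similarity of source element i to target element j
    (square matrix). Returns (pairs, total) maximizing the summed score."""
    n = len(score)
    best_pairs, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best_total:
            best_pairs, best_total = list(enumerate(perm)), total
    return best_pairs, best_total

pairs, total = best_assignment([
    [0.9, 0.1, 0.2],
    [0.3, 0.8, 0.4],
    [0.2, 0.5, 0.7],
])
# pairs -> [(0, 0), (1, 1), (2, 2)]
```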

Once the score matrix is generated, it is passed to our graph matching algorithm. We used the Hungarian method [18] for matching our score matrix (bipartite graph). This gave us the best matching pairs in the matrix, which are then used to generate the aligned ontology. Figure 1 shows the architecture of our system. A snapshot of the aligned ontology is shown in figure 2.

4. EVALUATION

To evaluate the performance of our system we used 19 ontologies: 15 lightweight and 4 heavyweight. We used 2 popular ontology matchers (RiMOM and YAM++) alongside our four variants and compared their performance. We performed our evaluation in three categories: in the first category we matched all the ontologies, in the second only the lightweight ontologies, and in the third only the heavyweight ontologies. We calculated precision, recall and f-measure using equations 2, 3 and 4 respectively.

$$\text{Precision } (P) = \frac{\#\text{correct\_mappings}}{\#\text{total\_mappings\_system}} \quad (2)$$

$$\text{Recall } (R) = \frac{\#\text{correct\_mappings}}{\#\text{total\_mappings\_human}} \quad (3)$$

$$F\text{-Measure } (F) = \frac{2 \times P \times R}{P + R} \quad (4)$$

Here, the system-generated matched ontology is compared with a manually matched ontology produced by a human. The basic idea is to make the system produce an ontology that emulates the human-matched one; thus the matcher producing the better mappings is considered the best. Precision is the number of mappings on which the human and the system agree, divided by the total mappings produced by the system. Recall is that same count divided by the total mappings produced by the human. F-measure is the combination of the two.
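Equations 2-4 applied to mapping sets can be sketched as below; the system and human mappings shown are made up for illustration.

```python
# Precision, recall and f-measure over mapping sets (equations 2-4).
def prf(system_mappings, human_mappings):
    correct = len(set(system_mappings) & set(human_mappings))
    p = correct / len(system_mappings)          # equation 2
    r = correct / len(human_mappings)           # equation 3
    f = 2 * p * r / (p + r) if p + r else 0.0   # equation 4
    return p, r, f

# hypothetical (source, target) mapping pairs
system = {("Car", "Automobile"), ("Wheel", "Tyre"), ("Door", "Window")}
human = {("Car", "Automobile"), ("Wheel", "Tyre"),
         ("Engine", "Motor"), ("Seat", "Chair")}
p, r, f = prf(system, human)  # p = 2/3, r = 2/4 = 0.5
```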

Table 1 shows the values of precision, recall and f-measure. Taking the average over all the ontologies, we found that RiMOM performed better than all other matchers, with Shiva with the Levenshtein Distance algorithm second. In category 2, where only lightweight ontologies were considered, we computed the averages over these ontologies and found that again RiMOM performed best, with Shiva with the Levenshtein Distance algorithm in second position. For category 3, we took the averages of only the heavyweight ontologies and found that RiMOM was again the top matcher; this time YAM++ performed better than Shiva with the Levenshtein Distance algorithm.

5. CONCLUSION

In this paper, we have shown the implementation of a graph-based matcher with four variants, each using a different similarity algorithm, together with a bipartite graph matching algorithm for creating the aligned ontology. This approach produced good results: it performed at par with YAM++, one of the better ontology matchers, though it could not match RiMOM. One reason for this is that RiMOM matches ontologies at the semantic level while Shiva only matches them at the structural level.

As an enhancement to this work, WordNet and similar semantic resources can be added to improve the performance of the matcher by combining structural as well as semantic matching techniques.

6. REFERENCES

[1] Cruz, I. F., Stroe, C., Caci, M., Caimi, F., Palmonari, M., Antonelli, F. P., Keles, U. C. 2010. Using AgreementMaker to Align Ontologies for OAEI 2010. Fifth International Workshop on Ontology Matching, co-located with the International Semantic Web Conference, Shanghai, China.

[2] Ruiz, E. J., Grau, B. C. 2011. LogMap: Logic-based and Scalable Ontology Matching. In the 10th International Semantic Web Conference.

[3] Jérôme, D. 2011. AROMA results for OAEI 2011. In Proceedings of the Sixth International Workshop on Ontology Matching.

[4] Agrawal, R., Imielinski, T., Swami, A. 1993. Mining association rules between sets of items in large databases. Vol. 22(2), ACM.

[5] Jorge, G., Bernad, J., Mena, E. 2011. Ontology matching with CIDER: evaluation report for OAEI 2011. In Proceedings of the Sixth International Workshop on Ontology Matching.

[6] Peng, W., Xu, B. 2008. Lily: Ontology alignment results for OAEI 2008. In Proceedings of the Third International Workshop on Ontology Matching.

[7] Juanzi, L., Tang, J., Li, Y., Luo, Q. 2009. RiMOM: A dynamic multistrategy ontology alignment framework. IEEE Transactions on Knowledge and Data Engineering, Vol. 21(8), pp 1218-1232.

[8] Fayçal, H., Safar, B., Niraula, N.B., Reynaud, C. 2010. TaxoMap alignment and refinement modules: Results for OAEI 2010. In Proceedings of the Fifth International Workshop on Ontology Matching.

[9] DuyHoa, N., Bellahsene, Z. 2012. YAM++: a multi-strategy based approach for ontology matching task. Knowledge Engineering and Knowledge Management. Springer Berlin Heidelberg, pp 421-425.

[10] Shvaiko, P., Euzenat, J., Srinivas, K., Mao, M., Ruiz, E.J. (Eds.) 2013. Proceedings of the 8th International Workshop on Ontology Matching.

[11] Vashisth, A., Mathur, I., Joshi, N. 2012. OntoAna: Domain Ontology for Human Anatomy. arXiv preprint arXiv:1208.3802.

[12] Mathur, I., Mathur, S., Joshi, N. 2011. Ontology development for health care in India. Proceedings of the International Conference & Workshop on Emerging Trends in Technology. ACM.

[13] Mathur, I., Darbari, H., Joshi, N. 2013. Domain Ontology Development for Communicable Diseases. Proceedings of International Conference on Artificial Intelligence, Soft Computing.

[14] Levenshtein, V.I. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady, Vol. 10.

[15] Ukkonen, E. 1992. Approximate string-matching with q-grams and maximal matches. Theoretical Computer Science, Vol. 92(1), pp 191-211.

[16] Smith, T.F., Waterman, M.S. 1981. Identification of common molecular subsequences. Journal of Molecular Biology, Vol. 147(1), pp 195-197.

[17] Jaccard, P. 1912. The distribution of the flora in the alpine zone. New Phytologist, Vol. 11(2), pp 37-50.

[18] Munkres, J. 1957. Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics, Vol. 5(1), pp 32-38.

```mermaid
graph LR
  subgraph Data
    SO[Source Ontology]
    TO[Target Ontology]
  end
  subgraph Preprocess
    PO[Parse Ontology]
    PreO[Preprocess Ontology]
    OGC[Ontology Graph Conversion]
  end
  subgraph Matching_Process [Matching Process]
    subgraph SM [Similarity Metrics]
      LD[Levenshtein Distance]
      Q[Qgrams]
      SW[Smith Waterman]
      J[Jaccard]
    end
    SM --> SMatrix[Score Matrix]
    SMatrix --> KMA[Kuhn-Munkres Algorithm]
  end
  subgraph Final_Mappings
    FM[Final Mappings]
  end
  subgraph Fundamental_Tools [Fundamental Tools]
    FMgt[File Management]
    TP[Text Processing]
    GV[Graph Visualizer]
    OV[Ontology Visualizer]
    R[Reasoner]
  end
  Data --> Preprocess
  Preprocess --> Matching_Process
  Matching_Process --> Final_Mappings
  Fundamental_Tools --> Preprocess
  Fundamental_Tools --> Matching_Process
  Fundamental_Tools --> Final_Mappings
```


Figure 1: Architecture of Shiva Ontology Matching System and Framework

Figure 2: Snapshot of aligned ontology

**Table 1: Comparison of Evaluation Results**

<table border="1">
<thead>
<tr>
<th rowspan="2">Ontology</th>
<th colspan="3">RiMOM</th>
<th colspan="3">YAM++</th>
<th colspan="3">Shiva<sub>Jaccard</sub></th>
<th colspan="3">Shiva<sub>LD</sub></th>
<th colspan="3">Shiva<sub>Qgrams</sub></th>
<th colspan="3">Shiva<sub>sw</sub></th>
</tr>
<tr>
<th>P</th>
<th>R</th>
<th>F</th>
<th>P</th>
<th>R</th>
<th>F</th>
<th>P</th>
<th>R</th>
<th>F</th>
<th>P</th>
<th>R</th>
<th>F</th>
<th>P</th>
<th>R</th>
<th>F</th>
<th>P</th>
<th>R</th>
<th>F</th>
</tr>
</thead>
<tbody>
<tr>
<td>101</td>
<td>1</td>
<td>0.98969</td>
<td>0.99481</td>
<td>0.75257</td>
<td>0.42941</td>
<td>0.54681</td>
<td>0.59036</td>
<td>0.50515</td>
<td>0.54444</td>
<td>0.975</td>
<td>0.80412</td>
<td>0.88135</td>
<td>0.13725</td>
<td>0.64948</td>
<td>0.22661</td>
<td>0.92957</td>
<td>0.68041</td>
<td>0.7857</td>
</tr>
<tr>
<td>103</td>
<td>0.96875</td>
<td>0.95876</td>
<td>0.96373</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.55056</td>
<td>0.50515</td>
<td>0.52688</td>
<td>0.94666</td>
<td>0.73195</td>
<td>0.82558</td>
<td>0.11623</td>
<td>0.64948</td>
<td>0.19718</td>
<td>0.92957</td>
<td>0.68041</td>
<td>0.7857</td>
</tr>
<tr>
<td>104</td>
<td>0.96875</td>
<td>0.95876</td>
<td>0.96373</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.55056</td>
<td>0.50515</td>
<td>0.52688</td>
<td>0.95876</td>
<td>0.95876</td>
<td>0.95876</td>
<td>0.11623</td>
<td>0.64948</td>
<td>0.19718</td>
<td>0.92957</td>
<td>0.68041</td>
<td>0.7857</td>
</tr>
<tr>
<td>201</td>
<td>0.90909</td>
<td>0.72164</td>
<td>0.80459</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>1</td>
<td>0.05825</td>
<td>0.11009</td>
<td>1</td>
<td>0.98969</td>
<td>0.99481</td>
<td>0.06862</td>
<td>0.06730</td>
<td>0.06796</td>
<td>0.0845</td>
<td>0.06185</td>
<td>0.0714</td>
</tr>
<tr>
<td>201-2</td>
<td>0.86111</td>
<td>0.63917</td>
<td>0.73372</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.61702</td>
<td>0.29896</td>
<td>0.40277</td>
<td>0.97894</td>
<td>0.95876</td>
<td>0.96875</td>
<td>0.11421</td>
<td>0.50515</td>
<td>0.18631</td>
<td>0.76056</td>
<td>0.5567</td>
<td>0.6425</td>
</tr>
<tr>
<td>201-4</td>
<td>0.92</td>
<td>0.71134</td>
<td>0.80232</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.57142</td>
<td>0.16494</td>
<td>0.256</td>
<td>0.97894</td>
<td>0.95876</td>
<td>0.96875</td>
<td>0.11111</td>
<td>0.38144</td>
<td>0.17209</td>
<td>0.54929</td>
<td>0.40206</td>
<td>0.4642</td>
</tr>
<tr>
<td>201-6</td>
<td>1</td>
<td>0.82474</td>
<td>0.90395</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.66666</td>
<td>0.08247</td>
<td>0.14678</td>
<td>0.97894</td>
<td>0.95876</td>
<td>0.96875</td>
<td>0.10038</td>
<td>0.26804</td>
<td>0.14606</td>
<td>0.39436</td>
<td>0.28866</td>
<td>0.3333</td>
</tr>
<tr>
<td>201-8</td>
<td>1</td>
<td>0.86597</td>
<td>0.92817</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>1</td>
<td>0.04902</td>
<td>0.09345</td>
<td>0.93333</td>
<td>0.57732</td>
<td>0.71337</td>
<td>0.07471</td>
<td>0.13402</td>
<td>0.09594</td>
<td>0.23943</td>
<td>0.17525</td>
<td>0.2023</td>
</tr>
<tr>
<td>202</td>
<td>1</td>
<td>0.86597</td>
<td>0.92817</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>1</td>
<td>0.06730</td>
<td>0.12612</td>
<td>1</td>
<td>0.91752</td>
<td>0.95698</td>
<td>0.07767</td>
<td>0.07619</td>
<td>0.07692</td>
<td>0.0845</td>
<td>0.06185</td>
<td>0.0714</td>
</tr>
<tr>
<td>202-2</td>
<td>1</td>
<td>0.84536</td>
<td>0.91620</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.61702</td>
<td>0.29896</td>
<td>0.40277</td>
<td>0.98795</td>
<td>0.84536</td>
<td>0.91111</td>
<td>0.11421</td>
<td>0.50515</td>
<td>0.18631</td>
<td>0.76056</td>
<td>0.5567</td>
<td>0.6428</td>
</tr>
<tr>
<td>202-4</td>
<td>1</td>
<td>0.84536</td>
<td>0.91620</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.57142</td>
<td>0.16494</td>
<td>0.256</td>
<td>0.98717</td>
<td>0.79381</td>
<td>0.88</td>
<td>0.11111</td>
<td>0.38144</td>
<td>0.17209</td>
<td>0.54929</td>
<td>0.40206</td>
<td>0.4642</td>
</tr>
<tr>
<td>202-6</td>
<td>1</td>
<td>0.87628</td>
<td>0.93406</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.66666</td>
<td>0.08247</td>
<td>0.14678</td>
<td>0.97260</td>
<td>0.73195</td>
<td>0.83529</td>
<td>0.10038</td>
<td>0.26804</td>
<td>0.14606</td>
<td>0.39436</td>
<td>0.28866</td>
<td>0.3333</td>
</tr>
<tr>
<td>202-8</td>
<td>0.91304</td>
<td>0.64948</td>
<td>0.75903</td>
<td>1</td>
<td>0.5</td>
<td>0.66666</td>
<td>1</td>
<td>0.07619</td>
<td>0.14159</td>
<td>0.72340</td>
<td>0.70103</td>
<td>0.71204</td>
<td>0.07471</td>
<td>0.13402</td>
<td>0.09594</td>
<td>0.23943</td>
<td>0.17525</td>
<td>0.2023</td>
</tr>
<tr>
<td>203</td>
<td>1</td>
<td>0.77319</td>
<td>0.87209</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.55056</td>
<td>0.50515</td>
<td>0.52688</td>
<td>0.72340</td>
<td>0.70103</td>
<td>0.71204</td>
<td>0.11623</td>
<td>0.64948</td>
<td>0.19718</td>
<td>0.92957</td>
<td>0.68041</td>
<td>0.785</td>
</tr>
<tr>
<td>204</td>
<td>1</td>
<td>0.77319</td>
<td>0.87209</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.56097</td>
<td>0.47422</td>
<td>0.51396</td>
<td>0.72340</td>
<td>0.70103</td>
<td>0.71204</td>
<td>0.12403</td>
<td>0.65979</td>
<td>0.20880</td>
<td>0.88732</td>
<td>0.64948</td>
<td>0.75</td>
</tr>
<tr>
<td>Anatomy</td>
<td>0.97222</td>
<td>0.72164</td>
<td>0.82840</td>
<td>0.65591</td>
<td>0.39610</td>
<td>0.49392</td>
<td>0.58181</td>
<td>0.34408</td>
<td>0.43243</td>
<td>0.89743</td>
<td>0.37634</td>
<td>0.53030</td>
<td>0.11428</td>
<td>0.64516</td>
<td>0.19417</td>
<td>0.93846</td>
<td>0.65591</td>
<td>0.77215</td>
</tr>
<tr>
<td>OntoAna</td>
<td>0.97058</td>
<td>1</td>
<td>0.98507</td>
<td>0.92783</td>
<td>0.48128</td>
<td>0.63380</td>
<td>0.50549</td>
<td>0.47422</td>
<td>0.48936</td>
<td>0.952381</td>
<td>0.412371</td>
<td>0.57554</td>
<td>0.07588</td>
<td>0.63917</td>
<td>0.13566</td>
<td>0.91549</td>
<td>0.6701</td>
<td>0.77381</td>
</tr>
<tr>
<td>HithCare</td>
<td>0.98795</td>
<td>0.84536</td>
<td>0.91111</td>
<td>0.2414</td>
<td>1.05197</td>
<td>2.21928</td>
<td>0.60869</td>
<td>0.48275</td>
<td>0.53846</td>
<td>1</td>
<td>0.27272</td>
<td>0.42857</td>
<td>0.14110</td>
<td>0.79310</td>
<td>0.23958</td>
<td>0.85185</td>
<td>0.7931</td>
<td>0.82142</td>
</tr>
<tr>
<td>HCD</td>
<td>1</td>
<td>0.7628</td>
<td>0.865497</td>
<td>0.78787</td>
<td>0.44067</td>
<td>0.56521</td>
<td>0.5333</td>
<td>0.72727</td>
<td>0.61538</td>
<td>1</td>
<td>0.2727</td>
<td>0.4285</td>
<td>0.05921</td>
<td>0.81818</td>
<td>0.11042</td>
<td>0.81818</td>
<td>0.81818</td>
<td>0.81818</td>
</tr>
<tr>
<td><b>Average Category1</b></td>
<td><b>0.9721</b></td>
<td><b>0.8225</b></td>
<td><b>0.8885</b></td>
<td>0.8645</td>
<td>0.5029</td>
<td>0.7034</td>
<td>0.6706</td>
<td>0.3087</td>
<td>0.3577</td>
<td><b>0.9325</b></td>
<td><b>0.7191</b></td>
<td><b>0.7875</b></td>
<td>0.1025</td>
<td>0.4670</td>
<td>0.1606</td>
<td>0.6413</td>
<td>0.4882</td>
<td>0.5529</td>
</tr>
<tr>
<td><b>Average Category2</b></td>
<td><b>0.9693</b></td>
<td><b>0.8199</b></td>
<td><b>0.8861</b></td>
<td>0.9209</td>
<td>0.47907</td>
<td>0.6301</td>
<td>0.7008</td>
<td>0.2558</td>
<td>0.3147</td>
<td><b>0.9245</b></td>
<td><b>0.8219</b></td>
<td><b>0.8666</b></td>
<td>0.1038</td>
<td>0.3985</td>
<td>0.1581</td>
<td>0.5774</td>
<td>0.4226</td>
<td>0.4879</td>
</tr>
</tbody>
</table>
