Knowledge extraction

Knowledge Extraction is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to Information Extraction (NLP) and ETL (Data Warehouse), the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data.

The RDB2RDF W3C group is currently standardizing a language for the extraction of RDF from relational databases. Another popular example of Knowledge Extraction is the transformation of Wikipedia into structured data and the mapping to existing knowledge (see DBpedia and Freebase).

Overview
After the standardization of knowledge representation languages such as RDF and OWL, much research has been conducted in the area, especially regarding the transformation of relational databases into RDF, entity resolution, knowledge discovery and ontology learning. The general process uses traditional methods from Information Extraction and ETL, which transform the data from the sources into structured formats.

The following criteria can be used to categorize approaches in this topic (some of them only apply to extraction from relational databases):

Entity Linking

 * DBpedia Spotlight, OpenCalais, the Zemanta API, Extractiv and PoolParty Extractor analyze free text via Named Entity Recognition, disambiguate candidates via name resolution, and link the found entities to the DBpedia knowledge repository (DBpedia Spotlight web demo or PoolParty Extractor Demo). For example:

"President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package, arguing that the policy provides more generous assistance."


 * As President Obama is linked to a DBpedia Linked Data resource, further information can be retrieved automatically, and a Semantic Reasoner can, for example, infer that the mentioned entity is of type Person (using FOAF) and of type Presidents of the United States (using YAGO). Counterexamples: methods that only recognize entities or link to Wikipedia articles and other targets that do not provide further retrieval of structured data and formal knowledge.
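
As an illustration, text can be sent to DBpedia Spotlight's public REST endpoint. The following is a minimal sketch, assuming the endpoint URL below is reachable and using an illustrative confidence value; it is not the only way to call the service.

    # Sketch of entity linking via DBpedia Spotlight's public REST API;
    # the endpoint URL and the confidence value are assumptions and the
    # service may change or be unavailable.
    import requests

    SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

    text = ("President Obama called Wednesday on Congress to extend a tax "
            "break for students included in last year's economic stimulus "
            "package.")

    response = requests.get(
        SPOTLIGHT_URL,
        params={"text": text, "confidence": 0.5},
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()

    # Each result pairs a surface form from the text with a DBpedia URI,
    # from which further structured knowledge can be retrieved.
    for resource in response.json().get("Resources", []):
        print(resource["@surfaceForm"], "->", resource["@URI"])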

Relational Databases to RDF

 * Triplify, D2R Server, Ultrawrap, and Virtuoso RDF Views are tools that transform relational databases to RDF. During this conversion they allow existing vocabularies and ontologies to be reused. When transforming a typical relational table named users, one column (e.g. name) or an aggregation of columns (e.g. first_name and last_name) has to provide the URI of the created entity. Normally the primary key is used. Every other column can then be extracted as a relation of this entity, and properties with formally defined semantics are used (and reused) to interpret the information. For example, a column in a user table called marriedTo can be defined as a symmetric relation, and a column homepage can be converted to a property from the FOAF vocabulary called foaf:homepage, thus qualifying it as an inverse functional property. Each entry of the user table can then be made an instance of the class foaf:Person (Ontology Population). Additionally, domain knowledge (in the form of an ontology) could be created from the status_id, either by manually created rules (if status_id is 2, the entry belongs to class Teacher) or by (semi-)automated methods (Ontology Learning). Here is an example transformation:

    :Peter :marriedTo :Mary .
    :marriedTo a owl:SymmetricProperty .
    :Peter foaf:homepage <http://example.org/Peter> .
    :Peter a foaf:Person .
    :Peter a :Student .
    :Claus a :Teacher .

1:1 Mapping from RDB Tables/Views to RDF Entities/Attributes/Values
When building a RDB representation of a problem domain, the starting point is frequently an entity-relationship diagram (ERD). Typically, each entity is represented as a database table, each attribute of the entity becomes a column in that table, and relationships between entities are indicated by foreign keys. Each table typically defines a particular class of entity, each column one of its attributes. Each row in the table describes an entity instance, uniquely identified by a primary key. The table rows collectively describe an entity set. In an equivalent RDF representation of the same entity set:
 * Each column in the table is an attribute (i.e., predicate)
 * Each column value is an attribute value (i.e., object)
 * Each row key represents an entity ID (i.e., subject)
 * Each row represents an entity instance
 * Each row (entity instance) is represented in RDF by a collection of triples with a common subject (entity ID).

So, to render an equivalent view based on RDF semantics, the basic mapping algorithm would be as follows:
 * 1) create an RDFS class for each table
 * 2) convert all primary keys and foreign keys into IRIs
 * 3) assign a predicate IRI to each column
 * 4) assign an rdf:type predicate for each row, linking it to an RDFS class IRI corresponding to the table
 * 5) for each column that is neither part of a primary or foreign key, construct a triple containing the primary key IRI as the subject, the column IRI as the predicate and the column's value as the object.

An early mention of this basic or direct mapping can be found in Tim Berners-Lee's comparison of the ER model to the RDF model.
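
The following is a minimal sketch of the five-step direct mapping applied to a hypothetical SQLite database; the table name, column names and base IRI are illustrative assumptions, not part of any standard.

    # Sketch of the basic (direct) mapping applied to one table of a
    # hypothetical SQLite database; table name, column names and base IRI
    # are illustrative assumptions.
    import sqlite3

    BASE = "http://example.org/"   # base IRI for minted identifiers
    TABLE = "users"                # hypothetical table with primary key "id"

    conn = sqlite3.connect("example.db")
    cursor = conn.execute(f"SELECT * FROM {TABLE}")
    columns = [d[0] for d in cursor.description]

    triples = ["@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> ."]
    class_iri = f"<{BASE}{TABLE}>"
    triples.append(f"{class_iri} a rdfs:Class .")          # 1) class per table

    for row in cursor:
        record = dict(zip(columns, row))
        subject = f"<{BASE}{TABLE}/{record['id']}>"        # 2) key -> IRI
        triples.append(f"{subject} a {class_iri} .")       # 4) rdf:type per row
        for col, value in record.items():
            if col == "id":                                # 5) skip key columns
                continue
            predicate = f"<{BASE}{TABLE}#{col}>"           # 3) predicate per column
            triples.append(f'{subject} {predicate} "{value}" .')

    print("\n".join(triples))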

Complex mappings of relational databases to RDF
The 1:1 mapping mentioned above exposes the legacy data as RDF in a straightforward way; additional refinements can be employed to improve the usefulness of the RDF output with respect to the given use cases. Normally, information is lost during the transformation of an entity-relationship diagram (ERD) to relational tables (details can be found in Object-relational impedance mismatch) and has to be reverse engineered. From a conceptual view, approaches for extraction can come from two directions. The first direction tries to extract or learn an OWL schema from the given database schema. Early approaches used a fixed amount of manually created mapping rules to refine the 1:1 mapping. More elaborate methods employ heuristics or learning algorithms to induce schematic information (methods overlap with ontology learning). While some approaches try to extract the information from the structure inherent in the SQL schema (analysing, e.g., foreign keys), others analyse the content and the values in the tables to create conceptual hierarchies (e.g., a column with few distinct values is a candidate for becoming a category). The second direction tries to map the schema and its contents to a pre-existing domain ontology (see also: Ontology alignment). Often, however, a suitable domain ontology does not exist and has to be created first.
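
One such content-based heuristic can be sketched as follows; the database file, table, column names and threshold are hypothetical.

    # Sketch of one content-based heuristic: columns with few distinct
    # values relative to the row count are candidates for becoming
    # categories. Database, table and threshold are hypothetical.
    import sqlite3

    def category_candidates(db_path, table, columns, max_ratio=0.01):
        """Return columns whose distinct-value ratio is below max_ratio."""
        conn = sqlite3.connect(db_path)
        total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        candidates = []
        for col in columns:
            distinct = conn.execute(
                f"SELECT COUNT(DISTINCT {col}) FROM {table}"
            ).fetchone()[0]
            if total and distinct / total <= max_ratio:
                candidates.append((col, distinct))
        return candidates

    # e.g. a status_id column with two distinct values over many rows
    # would be reported as a candidate for classes such as :Teacher.
    print(category_candidates("example.db", "users", ["status_id", "name"]))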

XML
As XML is structured as a tree, any data can be easily represented in RDF, which is structured as a graph. XML2RDF is one example of an approach that uses RDF blank nodes and transforms XML elements and attributes to RDF properties. The topic, however, is more complex than in the case of relational databases. In a relational table the primary key is an ideal candidate for becoming the subject of the extracted triples. An XML element, however, can be transformed, depending on the context, into the subject, the predicate or the object of a triple. XSLT can be used as a standard transformation language to manually convert XML to RDF.
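
A minimal sketch of such a transformation in the spirit of this approach (not the actual XML2RDF tool): each element becomes a blank node, and attributes and child elements become properties; the urn:* property names are made up for illustration.

    # Sketch: represent each XML element as an RDF blank node and turn
    # attributes and child elements into properties, emitting Turtle-like
    # triples. Namespaces and text handling are simplified.
    import xml.etree.ElementTree as ET
    from itertools import count

    node_ids = count()

    def xml_to_triples(element, triples):
        subject = f"_:b{next(node_ids)}"   # one blank node per element
        for name, value in element.attrib.items():
            triples.append(f'{subject} <urn:attr:{name}> "{value}" .')
        if element.text and element.text.strip():
            triples.append(f'{subject} <urn:value> "{element.text.strip()}" .')
        for child in element:
            child_node = xml_to_triples(child, triples)
            triples.append(f"{subject} <urn:elem:{child.tag}> {child_node} .")
        return subject

    triples = []
    xml_to_triples(ET.fromstring('<user id="1"><name>Peter</name></user>'), triples)
    print("\n".join(triples))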

Extraction from natural language sources
The largest portion of the information contained in business documents (about 80%) is encoded in natural language and is therefore unstructured. Because unstructured data is rather poorly suited for knowledge extraction, more complex methods are required, which nevertheless generally supply worse results than is possible for structured data. The massive acquisition of extracted knowledge should, however, compensate for the increased complexity and decreased quality of extraction. In the following, natural language sources are understood as sources of information in which the data is given unstructured as plain text. Such text can additionally be embedded in a markup document (e.g. an HTML document); most systems remove the markup elements automatically.

Traditional Information Extraction (IE)
Traditional Information Extraction is a technology of natural language processing which extracts information from typically natural language texts and structures it in a suitable manner. The kinds of information to be identified must be specified in a model before beginning the process, which is why the whole process of Traditional Information Extraction is domain dependent. IE is split into the following five subtasks.


 * Named Entity Recognition (NER)
 * Coreference Resolution (CO)
 * Template Element Construction (TE)
 * Template Relation Construction (TR)
 * Template Scenario Production (ST)

The task of Named Entity Recognition is to recognize and categorize all named entities contained in a text (assignment of a named entity to a predefined category). This works by applying grammar-based methods or statistical models.
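
A minimal sketch of statistical NER using the spaCy library, assuming its en_core_web_sm model has been installed beforehand:

    # Minimal sketch of statistical NER with the spaCy library; assumes
    # the en_core_web_sm model has been installed beforehand.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("President Obama called Wednesday on Congress to extend "
              "a tax break for students.")

    # Each recognized entity carries a predefined category (PERSON, ORG, ...).
    for ent in doc.ents:
        print(ent.text, ent.label_)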

Coreference Resolution identifies equivalent entities, recognized by NER, within a text. There are two relevant kinds of equivalence relationship. The first relates to the relationship between two differently represented entities (e.g. IBM Europe and IBM), the second to the relationship between an entity and its anaphoric references (e.g. it and IBM). Both kinds should be recognized by Coreference Resolution.

During Template Element Construction the IE system identifies descriptive properties of entities recognized by NER and CO. These properties correspond to ordinary qualities like red or big.

Template Relation Construction identifies relations that exist between the template elements. These relations can be of several kinds, such as works-for or located-in, with the restriction that both domain and range correspond to entities.

During Template Scenario Production, events described in the text are identified and structured with respect to the entities recognized by NER and CO and the relations identified by TR.

Ontology-Based Information Extraction (OBIE)
Ontology-Based Information Extraction is a subfield of Information Extraction in which at least one ontology is used to guide the process of information extraction from natural language text. An OBIE system uses methods of Traditional Information Extraction to identify the concepts, instances and relations of the used ontologies in the text; the results are structured into an ontology after the process. Thus, the input ontologies constitute the model of the information to be extracted.

Ontology Learning (OL)
Ontology Learning semi-automatically extracts whole ontologies from natural language text and can therefore be applied to support ontology engineering. It is usually split into the following eight subtasks, which are not necessarily supported by all Ontology Learning (OL) systems.


 * Domain Terminology Extraction
 * Concept Discovery
 * Concept Hierarchy Derivation
 * Learning of non-taxonomic relations
 * Rule Discovery
 * Ontology Population
 * Concept Hierarchy Extension
 * Frame and event detection

During Domain Terminology Extraction, domain-specific terms are extracted, which are then used in the subsequent Concept Discovery to derive concepts. Relevant terms can be determined, e.g., by calculating TF/IDF values or by applying the C-value / NC-value method. The resulting list of terms has to be filtered by a domain expert. Subsequently, similarly to Coreference Resolution in IE, the OL system determines synonyms, because they share the same meaning and therefore correspond to the same concept. The most common methods for this are clustering and the application of statistical similarity measures.
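
A minimal sketch of TF/IDF term scoring over a toy corpus (the documents and whitespace tokenization are illustrative only):

    # Sketch of TF/IDF term scoring over a toy corpus; terms that are
    # frequent in one document but rare in the corpus score highest.
    import math

    docs = [
        "the ontology defines classes and properties",
        "the database stores rows and columns",
        "classes and properties form the ontology schema",
    ]
    tokenized = [doc.split() for doc in docs]

    def tf_idf(term, doc_tokens):
        tf = doc_tokens.count(term) / len(doc_tokens)
        df = sum(1 for tokens in tokenized if term in tokens)
        idf = math.log(len(tokenized) / df)
        return tf * idf

    # Rank the terms of the first document; words occurring in every
    # document (e.g. "the") get an IDF of zero and drop out.
    scores = {t: tf_idf(t, tokenized[0]) for t in set(tokenized[0])}
    for term, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{term}: {score:.3f}")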

During Concept Discovery, terms are grouped into meaning-bearing units, which correspond to an abstraction of the world and therefore to concepts. The grouped terms are the domain-specific terms and their synonyms identified during Domain Terminology Extraction.

During Concept Hierarchy Derivation, the OL system tries to arrange the extracted concepts in a taxonomic structure. This is mostly achieved with unsupervised hierarchical clustering methods. Because the results of such methods are often noisy, supervision, e.g. evaluation by the user, is integrated. A further method for deriving a concept hierarchy is the use of lexical patterns that indicate a subclass or superclass relationship. Patterns like "X, which is a Y" or "X is a Y" indicate that X is a subclass of Y. Such patterns can be analyzed efficiently, but they occur too infrequently to extract enough subclass relationships. Instead, bootstrapping methods have been developed which learn these patterns automatically and therefore ensure higher coverage.
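
A minimal sketch of such pattern-based subclass extraction with a single regular expression; real OL systems bootstrap many more patterns than assumed here:

    # Sketch of pattern-based subclass extraction; the one regex below
    # covers "X is a Y" and "X, which is a Y". Real systems learn many
    # such patterns automatically via bootstrapping.
    import re

    PATTERN = re.compile(r"(\w+)(?:, which)? is a (\w+)")

    text = "A dog is a mammal. A car, which is a vehicle, has wheels."

    for sub, sup in PATTERN.findall(text):
        print(f"{sub} rdfs:subClassOf {sup}")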

During the Learning of non-taxonomic relations, relationships that do not express any subclass or superclass relation are extracted. Such relationships are, e.g., works-for or located-in. There are two common approaches to this subtask. The first is based on the extraction of anonymous associations, which are given appropriate names in a second step. The second approach extracts verbs that indicate a relationship between the entities represented by the surrounding words. The results of both approaches have to be evaluated by an ontologist.
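
A minimal sketch of the verb-based approach using spaCy's dependency parse (again assuming the en_core_web_sm model is installed); the extracted tuples would still need evaluation by an ontologist:

    # Sketch of the verb-based approach: a verb with a nominal subject and
    # an object (direct or prepositional) yields a candidate relation.
    # Assumes the spaCy en_core_web_sm model is installed.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Peter works for IBM. The company owns several offices.")

    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ == "nsubj"]
            objects = [c for c in token.children if c.dep_ == "dobj"]
            # Prepositional objects hang off a preposition child of the verb.
            for prep in (c for c in token.children if c.dep_ == "prep"):
                objects.extend(c for c in prep.children if c.dep_ == "pobj")
            for s in subjects:
                for o in objects:
                    # e.g. "Peter --work--> IBM" (candidate works-for relation)
                    print(f"{s.text} --{token.lemma_}--> {o.text}")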

During Rule Discovery, axioms (formal descriptions of concepts) are generated for the extracted concepts. This can be achieved, e.g., by analyzing the syntactic structure of a natural language definition and applying transformation rules to the resulting dependency tree. The result of this process is a list of axioms, which is afterwards combined into a concept description that has to be evaluated by an ontologist.

During Ontology Population, the ontology is augmented with instances of concepts and properties. For the augmentation with instances of concepts, methods based on the matching of lexico-syntactic patterns are used. Instances of properties are added by bootstrapping methods that collect relation tuples.

During Concept Hierarchy Extension, the OL system tries to extend the taxonomic structure of an existing ontology with further concepts. This can be realized in a supervised fashion with a trained classifier or in an unsupervised fashion through the application of similarity measures.

In Frame/Event Detection, the OL system tries to extract complex relationships from text, e.g. who departed from where to what place and when. Approaches range from applying SVMs with kernel methods and Semantic Role Labelling (SRL) to deep semantic parsing techniques.

Semantic Annotation (SA)
During Semantic Annotation, natural language text is augmented with metadata (often represented in RDFa), which should make the semantics of the contained terms machine-understandable. In this process, which is generally semi-automatic, knowledge is extracted in the sense that a link between lexical terms and, e.g., concepts from ontologies is established. Thus, knowledge is also gained about which meaning of a term was intended in the processed context. Semi-automatic Semantic Annotation can be split into the following two subtasks.


 * Terminology Extraction
 * Entity Linking

During Terminology Extraction, lexical terms are extracted from the text. For this purpose, a tokenizer first determines the word boundaries and resolves abbreviations. Afterwards, terms that correspond to a concept are extracted from the text with the help of a domain-specific lexicon, so that they can be linked during Entity Linking.

During Entity Linking, a link is established between the lexical terms extracted from the source text and concepts from an ontology. For this, candidate concepts for the several meanings of a term are detected with the help of a lexicon. Finally, the context of the terms is analyzed to determine the most appropriate disambiguation and assign each term to the correct concept.
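
A minimal sketch of lexicon-based linking with a simple context-overlap disambiguation; the lexicon, concept identifiers and context words below are entirely hypothetical:

    # Sketch of lexicon-based entity linking: look candidate concepts up
    # in a (hypothetical) domain lexicon, then pick the candidate whose
    # description shares the most words with the term's context.
    LEXICON = {
        "bank": [
            ("ex:FinancialBank", {"money", "account", "credit", "loan"}),
            ("ex:RiverBank", {"river", "water", "shore", "flood"}),
        ],
    }

    def link(term, context_words):
        candidates = LEXICON.get(term, [])
        if not candidates:
            return None
        # Choose the concept with the largest context overlap.
        return max(candidates, key=lambda c: len(c[1] & set(context_words)))[0]

    sentence = "the bank raised the interest on my account and loan".split()
    print(link("bank", sentence))   # -> ex:FinancialBank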

Tools
Tools that extract knowledge from natural language text include the systems mentioned above, such as DBpedia Spotlight, OpenCalais, the Zemanta API, Extractiv and PoolParty Extractor. They can be categorized by the subtasks they support (e.g. Named Entity Recognition, Entity Linking) and by the knowledge repositories to which they link the extracted entities.

Knowledge discovery
Knowledge discovery describes the process of automatically searching large volumes of data for patterns that can be considered knowledge about the data. It is often described as deriving knowledge from the input data. Knowledge discovery developed out of the Data mining domain, and is closely related to it both in terms of methodology and terminology.

The most well-known branch of data mining is knowledge discovery, also known as Knowledge Discovery in Databases (KDD). Like many other forms of knowledge discovery, it creates abstractions of the input data. The knowledge obtained through the process may become additional data that can be used for further usage and discovery.

Another promising application of knowledge discovery is in the area of software modernization, weakness discovery and compliance, which involves understanding existing software artifacts. This process is related to the concept of reverse engineering. Usually the knowledge obtained from existing software is presented in the form of models against which specific queries can be made when necessary. An entity-relationship model is a frequent format for representing knowledge obtained from existing software. The Object Management Group (OMG) developed the Knowledge Discovery Metamodel (KDM) specification, which defines an ontology for software assets and their relationships for the purpose of performing knowledge discovery of existing code. Knowledge discovery from existing software systems, also known as software mining, is closely related to data mining, since existing software artifacts contain enormous value for risk management and business value, key for the evaluation and evolution of software systems. Instead of mining individual data sets, software mining focuses on metadata, such as process flows (e.g. data flows, control flows and call maps), architecture, database schemas, and business rules/terms/processes.

Input data

 * Databases
   * Relational data
   * Database
   * Document warehouse
   * Data warehouse
 * Software
   * Source code
   * Configuration files
   * Build scripts
 * Text
   * Concept mining
 * Graphs
   * Molecule mining
 * Sequences
   * Data stream mining
   * Learning from time-varying data streams under concept drift
 * Web

Output formats

 * Data model
 * Metadata
 * Metamodels
 * Ontology
 * Knowledge representation
 * Knowledge tags
 * Business rule
 * Knowledge Discovery Metamodel (KDM)
 * Business Process Modeling Notation (BPMN)
 * Intermediate representation
 * Resource Description Framework (RDF)
 * Software metrics