This tool exploits an intermediate semantic representation (UNL-RDF graphs) to
construct ontology representations of natural-language (NL) sentences. [TODO: complete]
TENET is a Python library for automatically constructing logical representations (OWL ontologies) from textual documents. Its development is based on the W3C Semantic Web standards (RDF, OWL, SPARQL). It requires, as input, a set of pivot structures representing the document to be analysed, and produces, as output, a set of RDF-OWL triples forming an ontology, composed of classes, properties, instances and logical relations between these elements.
The processing is carried out in five stages (a schematic sketch is given after the list):
1. Initialization: [TODO].
2. UNL Sentence Loading: the UNL sentences are converted into UNL-RDF graphs, using the UNL-RDF schemas.
3. Transduction Process: the UNL-RDF graphs are extended to obtain semantic nets.
4. Classification / Instantiation
5. Reasoning
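As a rough illustration of these stages, the following sketch orchestrates an RDF graph with rdflib; the file names, the `net:` namespace and the SPARQL query are hypothetical placeholders, not TENET's actual transduction schemes:

```python
from rdflib import Graph

# Stages 1-2: initialization and loading -- parse the UNL-RDF graph
# produced for the corpus (the file name is a placeholder).
graph = Graph()
graph.parse("corpus.unl.ttl", format="turtle")

# Stage 3: transduction -- SPARQL updates extend the UNL-RDF graph into
# semantic nets (this query is illustrative only).
graph.update("""
    PREFIX net: <https://example.org/semantic-net#>
    INSERT { ?node a net:SemanticNode . }
    WHERE  { ?node ?p ?o . }
""")

# Stages 4-5: classification/instantiation and reasoning would then add
# the OWL classes, properties and instances to the graph, which is
# finally serialized as the resulting ontology.
graph.serialize(destination="result.owl.ttl", format="turtle")
```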
## 1 - Environment Setup
[TODO: complete the description]
The Python code has been tested under Python 3.7 and Linux Manjaro, but should run on most common systems (Linux, Windows, macOS).
All external dependencies are listed in **requirements.txt**.
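Assuming a standard Python environment, these dependencies can be installed with `pip install -r requirements.txt`.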
The **test** directory contains evaluation files with some test corpora.
## 2 - Implementation
The implementation is written in Python, with UNL as the pivot structure.
## 3 - Library Usage
The library operates on UNL-RDF graphs, obtained by applying the UNL-RDF schemas to the input UNL sentences.
The script **test_tenet_main.py** (in the **test** directory) gives an example of how to use the library.
The following process is included as the main entry point:
1. Semantic Transduction Process (stp) for semantic analysis with transduction schemes

Two main methods are provided to create an ontology, either from a single file in AMR-Lib format or from a directory containing several such files. Both methods take as parameter the path of the file or directory to be processed, along with some optional parameters.

The following code can be used to create an ontology from an AMR-Lib file or directory:
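(A minimal sketch: the method names `create_ontology_from_amrld_file` and `create_ontology_from_amrld_dir` and the paths are assumptions derived from the description above, not a confirmed API.)

```python
import tenet

# Create an ontology from a single AMR-Lib file (the method name and the
# path are assumptions, to be checked against the installed version).
ontology = tenet.create_ontology_from_amrld_file("corpus/document.amr.ttl")

# Alternatively, process a directory containing several files at once
# (same caveat: the method name is an assumption).
ontology = tenet.create_ontology_from_amrld_dir("corpus/")
```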
The Python script _tenet.py_ is used to manage the tool's commands, using components of the _scripts_ directory.
The data to be processed must be placed in the _corpus_ directory. All working data, including the results, are stored in the tool's working directories.