
TENET - Tool for Extraction using Net Extension by (semantic) Transduction
This tool exploits an intermediate semantic representation (UNL-RDF graphs) to construct an ontology representation of NL sentences. [TODO: complete]
The treatment is carried out in several stages:
- Initialization: TODO.
- UNL Sentence Loading: TODO.
- Transduction Process: the UNL-RDF graphs are extended to obtain semantic nets.
- Classification / Instantiation
- Reasoning
[TODO: complete the description]
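The stages above can be sketched as a simple processing pipeline. The following Python sketch is purely illustrative: the function names and return values are hypothetical and do not reflect the actual tenet API.

```python
# Hypothetical sketch of the TENET processing pipeline; function
# names and data shapes are illustrative, not the real tenet API.

def initialize(config_dir):
    """Load the configuration (RDF schemas, transduction schemes)."""
    return {"config": config_dir}

def load_unl_sentences(corpus_dir):
    """Parse UNL sentences from the corpus into UNL-RDF graphs."""
    return [f"graph({corpus_dir}/sentence{i})" for i in range(2)]

def transduce(graph):
    """Extend a UNL-RDF graph into a semantic net."""
    return f"net[{graph}]"

def classify_and_instantiate(net):
    """Derive ontology classes and instances from the semantic net."""
    return f"onto[{net}]"

def reason(onto):
    """Apply reasoning over the resulting ontology."""
    return f"inferred[{onto}]"

context = initialize("config")
results = [reason(classify_and_instantiate(transduce(g)))
           for g in load_unl_sentences("corpus")]
```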
1 - Implementation
This implementation was made in Python, with UNL as the pivot structure.
[TODO: talk about UNL-RDF graph (obtained using UNL-RDF schemas)]
The following module is included as the main process:
- Semantic Transduction Process (stp) for semantic analysis with transduction schemes
The Python script tenet.py is used to manage the tool's commands, using components from the scripts directory. The data to be processed must be placed in the corpus directory. All working data, including the results, is processed in the workdata directory.
The transduction process configuration includes an ontology definition for the semantic nets and several transduction schemes expressed as SPARQL queries.
2 - Environment Setup
[TODO: external module sources?]
The code has been tested with Python 3.7. All external module dependencies are listed in requirements.txt.
The input directories contain evaluation files with some test corpora.
3 - Content
The config directory contains various configuration files for the process:
- unl-rdf-schema.ttl: RDF schema for the interpretation of UNL graphs
- smenet.ttl: RDF schema of the semantic rules