    nlreqdataset-unl-enco

    This repo contains the full nlreqdataset corpus of system requirements documents (http://fmt.isti.cnr.it/nlreqdataset/), enconverted into UNL with http://unl.ru/deco.html.

    The dataset is presented in the paper whose abstract follows:

    PURE: a Dataset of Public Requirements Documents

    Ferrari, Alessio; Spagnolo, Giorgio Oronzo; Gnesi, Stefania

    This paper presents PURE (PUblic REquirements dataset), a dataset of 79 publicly available natural language requirements documents collected from the Web. The dataset includes 34,268 sentences and can be used for natural language processing tasks that are typical in requirements engineering, such as model synthesis, abstraction identification and document structure assessment. It can be further annotated to work as a benchmark for other tasks, such as ambiguity detection, requirements categorisation and identification of equivalent requirements. In the associated paper, we present the dataset and we compare its language with generic English texts, showing the peculiarities of the requirements jargon, made of a restricted vocabulary of domain-specific acronyms and words, and long sentences. We also present the common XML format to which we have manually ported a subset of the documents, with the goal of facilitating replication of NLP experiments. The XML documents are also available for download.

    The paper associated with the dataset can be found here:

    https://ieeexplore.ieee.org/document/8049173/

    More info about the dataset is available here:

    http://nlreqdataset.isti.cnr.it

    A preprint of the paper is available on ResearchGate:

    https://goo.gl/HxJD7X

    Usage of the script to encode a document

    The encoding script works on XML files conforming to ./data/orig/req_document.xsd

    Examples of input and output files are provided in ./data/examples/

    Zipped folders of "unlized" XML files of the corpus are available in the ./data folder.

    ‼️ unlizeXml.py ignores namespaces in the XML document.
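Ignoring namespaces matters because the namespace had been removed from the example file while it is present in the other XML files of the corpus. The actual logic lives in unlizeXml.py; a minimal sketch of what such namespace stripping can look like with Python's standard library (not the script's real code) is:

```python
import xml.etree.ElementTree as ET

def strip_namespaces(root):
    """Remove the '{uri}' namespace prefix ElementTree puts on every tag."""
    for elem in root.iter():
        if "}" in elem.tag:
            elem.tag = elem.tag.split("}", 1)[1]
    return root

# A document with a default namespace, as in most corpus files
doc = '<doc xmlns="http://example.org/req"><req>text</req></doc>'
root = strip_namespaces(ET.fromstring(doc))
print([e.tag for e in root.iter()])  # ['doc', 'req']
```

With this, documents with and without a default namespace are processed identically.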

    First clone the repo (or at least download the scripts folder):

    git clone https://gitlab.tetras-libre.fr/unl/nlreqdataset-unl-enco.git

    Then enter the scripts folder:

    cd nlreqdataset-unl-enco/scripts

    The main Python 3 script to encode is unlizeXml.py.

    It relies on the unlTools Java executable (jar), which is included. You may want to update it with a newer version, possibly available at https://gitlab.tetras-libre.fr/unl/unlTools/-/releases

    Basic usage is:

    python unlizeXml.py <input-file-path> <output-file-path>

    Further options are described by the --help flag:

    $ python unlizeXml.py --help
    Usage: unlizeXml.py [OPTIONS] INPUT OUTPUT
    
    Options:
      --lang [en|ru]
      --dry-run / --no-dry-run  if true do not send request to unl.ru
      --svg / --no-svg          Add svg node representing unl graph
      --unltools-path FILE      Path of the unltools jar
      --help                    Show this message and exit.
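The help output above looks like a click-style interface. As an illustration of the option surface only (a hypothetical reconstruction, not the script's actual code, and simplifying the paired --x / --no-x toggles to plain flags), an equivalent argparse definition could be:

```python
import argparse

def build_parser():
    # Hypothetical sketch of unlizeXml.py's command-line interface;
    # the real script may use a different CLI library (e.g. click).
    p = argparse.ArgumentParser(prog="unlizeXml.py")
    p.add_argument("input", help="input XML file path")
    p.add_argument("output", help="output XML file path")
    p.add_argument("--lang", choices=["en", "ru"], default="en")
    p.add_argument("--dry-run", action="store_true",
                   help="if true do not send requests to unl.ru")
    p.add_argument("--svg", action="store_true",
                   help="add an SVG node representing the UNL graph")
    p.add_argument("--unltools-path", help="path of the unltools jar")
    return p

args = build_parser().parse_args(["in.xml", "out.xml", "--lang", "ru", "--dry-run"])
print(args.lang, args.dry_run)  # ru True
```

For example, --dry-run lets you exercise the XML handling without hitting the unl.ru service.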