
Commit: Add files via upload

added code to github
tuur authored Jun 25, 2020
1 parent 670ad9f commit c8fa6bb
Showing 9 changed files with 2,053 additions and 6 deletions.
80 changes: 74 additions & 6 deletions README.md
# SPTempRels
SPTempRels trains and evaluates a structured perceptron model for extracting temporal relations from clinical texts, in which events and temporal expressions are given. It can also be used to replicate the experiments of [Leeuwenberg and Moens (EACL, 2017)](http://www.aclweb.org/anthology/E/E17/E17-1108.pdf). The paper contains a detailed description of the model. The conference slides can be found [here](https://github.com/tuur/SPTempRels/raw/master/SPTempRels-EACL2017-Slides.pdf). When using this code, please refer to the paper.

> Any questions? Feel free to send me an email at aleeuw15@umcutrecht.nl

## Reference
> In case of usage, please cite the corresponding publications.
```
@InProceedings{leeuwenberg2017structured:EACL,
  ...
}
```


### Requirements
* [Gurobi](https://www.gurobi.com)
  - create an account, download Gurobi, and run its `setup.py`
* [Python2.7](https://www.python.org/downloads/release/python-2711/)
* [Argparse](https://pypi.python.org/pypi/argparse)
* [Numpy](http://www.numpy.org/)
* [SciPy](https://www.scipy.org/)
* [Networkx](https://networkx.github.io)
* [Scikit-Learn](http://scikit-learn.org/stable/)
* [Pandas](http://pandas.pydata.org/)


When cTAKES output is not provided, the program falls back to the [Stanford POS tagger](http://nlp.stanford.edu/software/tagger.shtml) for POS features. In that case the Stanford POS Tagger folder (e.g. `stanford-postagger-2015-12-09`), the `stanford-postagger.jar`, and the `english-bidirectional-distsim.tagger` file must be located at the same level as `main.py`. A quick environment check is sketched below.
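
A minimal sanity check for the requirements above, assuming the default file names (this script is a sketch and not part of the repository):

```py
# Minimal environment check (a sketch; not part of the repository).
from __future__ import print_function
import os

# Required Python packages; gurobipy needs a Gurobi account and its setup.py install.
import argparse, numpy, scipy, networkx, sklearn, pandas, gurobipy  # noqa: F401

# Stanford POS tagger files, only needed when no cTAKES output is provided.
for name in ['stanford-postagger-2015-12-09', 'stanford-postagger.jar',
             'english-bidirectional-distsim.tagger']:
    print(name, 'found' if os.path.exists(name) else 'MISSING')
```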

### Data

In the paper we used the [THYME](https://clear.colorado.edu/TemporalWiki/index.php/Main_Page) corpus sections as used in the [Clinical TempEval 2016](http://alt.qcri.org/semeval2016/task12/index.php?id=data) shared task. Training, development, and test data should therefore be provided in the anafora xml format, in the folder structure indicated below, where the deepest level contains the text file `ID001_clinic_001` and the corresponding xml annotations `ID001_clinic_001.Temporal-Relation.gold.completed.xml`. Notice that the python calls below also refer to the top level of the THYME data as `$THYME`. A small layout check is sketched after the tree below.

`$THYME`
* `Train`
  * `ID001_clinic_001`
    * `ID001_clinic_001`
    * `ID001_clinic_001.Temporal-Relation.gold.completed.xml`
  * ...
* `Dev`
  * ...
* `Test`
  * ...
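
A minimal check of this layout, assuming the structure above (`/path/to/THYME` is a placeholder):

```py
# Sketch: verify the expected THYME folder layout (path is a placeholder).
from __future__ import print_function
import os

thyme = '/path/to/THYME'  # the directory referred to as $THYME
for section in ['Train', 'Dev', 'Test']:
    for doc_id in sorted(os.listdir(os.path.join(thyme, section))):
        doc_dir = os.path.join(thyme, section, doc_id)
        text = os.path.join(doc_dir, doc_id)
        xml = text + '.Temporal-Relation.gold.completed.xml'
        if not (os.path.isfile(text) and os.path.isfile(xml)):
            print('Incomplete document folder:', doc_dir)
```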

In our experiments we use POS and dependency parse features from the [cTAKES Clinical Pipeline](http://ctakes.apache.org/), so you need to provide the cTAKES output xml files as well. Here we assume these are in a directory called `$CTAKES_XML_FEATURES`. You can also call the program without the `-ctakes_out_dir` argument; it will then use the Stanford POS Tagger for POS tag features instead (and no dependency parse features). The folder structure of this directory is as follows (a matching check is sketched below the listing):

`$CTAKES_XML_FEATURES`
* `ID001_clinic_001.xml`
* ...
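
To make sure every document has a matching cTAKES file, a small sketch (paths are placeholders):

```py
# Sketch: check that each THYME document has a matching cTAKES output xml.
from __future__ import print_function
import os

thyme = '/path/to/THYME'          # $THYME
ctakes = '/path/to/ctakes_xml'    # $CTAKES_XML_FEATURES
for section in ['Train', 'Dev', 'Test']:
    for doc_id in sorted(os.listdir(os.path.join(thyme, section))):
        if not os.path.isfile(os.path.join(ctakes, doc_id + '.xml')):
            print('Missing cTAKES output for', doc_id)
```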

### Experiments: Leeuwenberg and Moens (2017)
To obtain the predictions from the experiments in Section 4 of the paper, you can use the example calls below. Each call writes the anafora xmls to the directory `$SP_PREDICTIONS`. For more information about the usage of the tool, run:
```
python main.py -h
```

#### SP
```sh
python main.py $THYME 1 0 32 MUL 1000 Test -averaging 1 -local_initialization 1 -negative_subsampling 'loss_augmented' -lowercase 1 -lr 1 -output_xml_dir $SP_PREDICTIONS -constraint_setting CC -ctakes_out_dir $CTAKES_XML_FEATURES -decreasing_lr 0
```

#### SP random
```sh
python main.py $THYME 1 0 32 MUL 1000 Test -averaging 1 -local_initialization 1 -negative_subsampling 'random' -lowercase 1 -lr 1 -output_xml_dir $SP_PREDICTIONS -constraint_setting CC -ctakes_out_dir $CTAKES_XML_FEATURES -decreasing_lr 0
```

#### SP + 𝒞*

```sh
python main.py $THYME 1 0 32 MUL,Ctrans,Btrans,C_CBB,C_CAA,C_BBB,C_BAA 1000 Test -averaging 1 -local_initialization 1 -negative_subsampling 'loss_augmented' -lowercase 1 -lr 1 -output_xml_dir $SP_PREDICTIONS -constraint_setting CC -ctakes_out_dir $CTAKES_XML_FEATURES -decreasing_lr 0
```


#### SP + 𝚽sdr
```sh
python main.py $THYME 1 0 32 MUL 1000 Test -averaging 1 -local_initialization 1 -negative_subsampling 'loss_augmented' -lowercase 1 -lr 1 -output_xml_dir $SP_PREDICTIONS -constraint_setting CC -ctakes_out_dir $CTAKES_XML_FEATURES -decreasing_lr 0 -structured_features DCTR_bigrams,DCTR_trigrams
```
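
The four calls differ only in the label-structure argument, the subsampling strategy, and the structured features. A minimal sketch that runs all of them in sequence (paths and output directory names are placeholders):

```py
# Sketch: run the four experiment variants above (paths are placeholders).
import subprocess

THYME = '/path/to/THYME'          # $THYME
CTAKES = '/path/to/ctakes_xml'    # $CTAKES_XML_FEATURES

variants = {
    'SP':         ('MUL', 'loss_augmented', None),
    'SP_random':  ('MUL', 'random', None),
    'SP_C':       ('MUL,Ctrans,Btrans,C_CBB,C_CAA,C_BBB,C_BAA', 'loss_augmented', None),
    'SP_Phi_sdr': ('MUL', 'loss_augmented', 'DCTR_bigrams,DCTR_trigrams'),
}

for name, (labels, subsampling, features) in sorted(variants.items()):
    cmd = ['python', 'main.py', THYME, '1', '0', '32', labels, '1000', 'Test',
           '-averaging', '1', '-local_initialization', '1',
           '-negative_subsampling', subsampling, '-lowercase', '1', '-lr', '1',
           '-output_xml_dir', 'predictions_' + name,
           '-constraint_setting', 'CC', '-ctakes_out_dir', CTAKES,
           '-decreasing_lr', '0']
    if features:
        cmd += ['-structured_features', features]
    subprocess.call(cmd)
```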




104 changes: 104 additions & 0 deletions entities.py
from __future__ import print_function, division


class Entity(object):
    """An annotated entity (EVENT or TIMEX3) in a clinical text."""

    def __init__(self, type, id, string, spans, text_id, doctimerel=None, etree=None):
        self.type = type              # entity type, e.g. 'EVENT' or 'TIMEX3'
        self.id = id
        self.string = string          # surface string of the entity
        self.spans = spans            # list of (begin, end) character offsets
        self.text_id = text_id
        self.doctimerel = doctimerel  # DocTimeRel label (for events)
        self.phi = {}                 # feature dictionary
        self.phi_v = None             # vectorized features
        self.tokens = None
        self.paragraph = None
        self.xmltree = etree
        self.embedding = None
        self.next_event = None
        self.next_entity = None
        self.attributes = {}

    def __str__(self):
        return str(self.string)

    def __hash__(self):
        return hash(self.id)

    def __eq__(self, other):
        return self.id == other.id

    def __ne__(self, other):
        return not (self == other)

    # The attributes self.type and self.text_id shadow same-named methods,
    # so the accessors are named get_type / get_text_id.
    def get_type(self):
        return self.type

    def ID(self):
        return self.id

    def get_tokens(self):
        return self.tokens

    def get_text_id(self):
        return self.text_id

    def get_doctimerel(self):
        return self.doctimerel

    def get_span(self):
        return self.spans[0]

    def get_etree(self):
        return self.xmltree

    def get_doc_id(self):
        return self.id.split('@')[2]


class TLink(object):
    """A temporal link (candidate relation) between two entities."""

    def __init__(self, e1, e2, tlink=None):
        self.e1 = e1
        self.e2 = e2
        self.tlink = tlink     # TLINK label, e.g. 'CONTAINS' or 'none'
        self.phi = {}
        self.phi_v = None
        self.tokens_ib = None  # tokens in between the two entities
        self.id = None

    def __str__(self):
        return str(self.e1) + '-' + str(self.e2)

    def ID(self):
        if not self.id:
            self.id = self.e1.ID() + '-' + self.e2.ID()
        return self.id

    def __hash__(self):
        return hash(self.ID())  # was hash(self.id()): self.id is an attribute, not callable

    def __eq__(self, other):
        return self.ID() == other.ID()

    def __ne__(self, other):
        return not (self == other)

    def set_tokens_ib(self, tokens):
        self.tokens_ib = list(tokens)

    def get_tokens_ib(self):
        return self.tokens_ib

    def type(self):
        return self.e1.type + '-' + self.e2.type

    def get_tlink(self):
        return self.tlink

    def get_e1(self):
        return self.e1

    def get_e2(self):
        return self.e2

60 changes: 60 additions & 0 deletions evaluation.py
from __future__ import print_function, division
from collections import Counter
import pandas as pd


class Evaluation:

    def __init__(self, Y_p, Y, name='', tasks='DCTR,TLINK'):
        self.name = name
        self.Y_p, self.Y = Y_p, Y
        print('\n---> EVALUATION:', self.name, '<---')
        if 'DCTR' in tasks.split(','):
            self.evaluate_e()
        if 'TLINK' in tasks.split(','):
            self.evaluate_ee()

    def pprint(self):
        return 'todo'  # pretty-printed summary not implemented yet

    def evaluate_e(self):
        print('\n*** Evaluating DOCTIMEREL ***')
        self.evaluate([yp[0] for yp in self.Y_p], [y[0] for y in self.Y])

    def evaluate_ee(self):
        print('\n*** Evaluating TLINKS ***')
        self.evaluate([yp[1] for yp in self.Y_p], [y[1] for y in self.Y])

    def evaluate(self, Yp, Y):
        # Internal evaluation; may differ from Clinical TempEval scores
        # (due to temporal closure and candidate generation)!
        Yp = [l for i in Yp for l in i]  # flatten per-document label lists
        Y = [l for i in Y for l in i]
        labels = set(Y + Yp)
        print('Y:', set(Y), 'Yp', set(Yp))
        y_actu = pd.Series(Y, name='Actual')
        y_pred = pd.Series(Yp, name='Predicted')
        confusion = Counter(zip(Y, Yp))  # keys are (actual, predicted) pairs
        df_confusion = pd.crosstab(y_actu, y_pred, rownames=['Actual'], colnames=['Predicted'], margins=True)
        print('==CONFUSION MATRIX==')
        print(df_confusion)
        print('==PER LABEL EVALUATION==')
        print(' P\t R\t F\t label')
        s_TP, s_FP, s_FN = 0, 0, 0
        for l in labels:
            TP = confusion[(l, l)] if (l, l) in confusion else 0
            FP = sum([confusion[(i, l)] for i in labels if (i, l) in confusion and l != i])
            FN = sum([confusion[(l, i)] for i in labels if (l, i) in confusion and l != i])
            print('TP', TP, 'FP', FP, 'FN', FN)
            # The small epsilon avoids division by zero for unseen labels.
            precision = float(TP) / (TP + FP + 0.000001)
            recall = float(TP) / (TP + FN + 0.000001)
            fmeasure = (2 * precision * recall) / (precision + recall + 0.000001)
            print(round(precision, 4), '\t', round(recall, 4), '\t', round(fmeasure, 4), '\t', l)
            s_TP += TP
            s_FP += FP
            s_FN += FN
        # Micro-averaged scores over all labels.
        s_prec = float(s_TP) / (s_TP + s_FP + 0.000001)
        s_recall = float(s_TP) / (s_TP + s_FN + 0.000001)
        s_fmeasure = (2 * s_prec * s_recall) / (s_prec + s_recall + 0.000001)
        print(round(s_prec, 4), '\t', round(s_recall, 4), '\t', round(s_fmeasure, 4), '\t', '**ALL**')
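
A hypothetical toy run (labels invented; each element of `Y`/`Y_p` is a pair of per-document DCTR and TLINK label lists):

```py
# Hypothetical toy usage; labels are invented for illustration.
from evaluation import Evaluation

Y = [(['BEFORE', 'OVERLAP'], ['CONTAINS', 'none'])]        # gold: (DCTR, TLINK) per document
Y_p = [(['BEFORE', 'BEFORE'], ['CONTAINS', 'CONTAINS'])]   # predictions in the same format
Evaluation(Y_p, Y, name='toy example')
```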




