BioCreAtIvE - Critical Assessment of Information Extraction in Biology

Protein-Protein Interaction ISS Task: Questions and Answers

 

1. For the ISS task, can we say that the test set annotations will look 
like the "SAMPLE_EVIDENCE" files, except that they won't include the 
"source-text" tags?

Yes, they look like the sample cases, but for the test set you will not receive the actual annotation files 
containing the interaction pairs; otherwise the task would not be relevant in practice for database 
annotators or biologists. You will only receive the full-text articles, for which you have to provide 
the predictions. We have these annotations and will hold them back until the test set submission deadline. 
For the ISS task, you have to predict the actual interaction pairs (i.e. the corresponding 
UniProt IDs) together with the evidence sentences. 
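To make the expected output more concrete, here is a minimal sketch of how a single prediction could be 
represented in Python. The field names, the article identifier and the example UniProt accessions are 
purely illustrative assumptions, not the official submission format.

    # Hypothetical representation of one ISS prediction; field names and
    # example values are illustrative only, not the official submission format.
    prediction = {
        "article_id": "1234567",     # identifier of the full-text article (illustrative)
        "interactor_a": "P12345",    # UniProt accession of the first protein (example value)
        "interactor_b": "Q67890",    # UniProt accession of the second protein (example value)
        "evidence_sentence": "Protein A was shown to interact with protein B in a yeast two-hybrid assay.",
        "rank": 1,                   # position in the ranked list of evidence sentences
    }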

For the ISS task, the main evaluation criteria will be: 

(a) whether the evidence sentence you submit actually corresponds to an interaction description, 
i.e. how good your system is at detecting interaction-relevant sentences. This will be evaluated as 
the percentage of interaction-relevant sentences among the total number of predicted (submitted) 
sentences (a small computational sketch of both measures follows item (b) below). 


(b) whether the interacting protein pair predicted for this sentence is correct (by comparing it to 
an annotation set previously extracted by the database curators), i.e. how good your 
system is at extracting the actual annotation together with the corresponding evidence sentence. 
Here the main evaluation metric will be the mean reciprocal rank (MRR) of the ranked list of predicted 
interaction evidence sentences with respect to the manually chosen best interaction sentence (annotated). 
Note that MRR has also been used to evaluate question answering tasks. 
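As a rough illustration of these two measures, the sketch below computes (a) the fraction of submitted 
sentences judged interaction-relevant and (b) the mean reciprocal rank of the annotated best sentence 
within each ranked submission. The function names and the treatment of missed sentences are our own 
assumptions, not part of the official evaluation scripts.

    def sentence_precision(num_relevant_predicted, num_predicted):
        # (a) Fraction of submitted sentences that actually describe an interaction.
        return num_relevant_predicted / num_predicted

    def mean_reciprocal_rank(ranks):
        # (b) ranks[i] is the 1-based position of the curator-chosen best evidence
        # sentence within the i-th submitted ranked list; None means the annotated
        # sentence was not retrieved at all and contributes 0 to the mean.
        reciprocals = [1.0 / r if r is not None else 0.0 for r in ranks]
        return sum(reciprocals) / len(reciprocals)

    # Example: the annotated sentence appears at rank 1 in one article, at rank 3
    # in another, and is missed entirely in a third.
    print(mean_reciprocal_rank([1, 3, None]))   # (1 + 1/3 + 0) / 3 = 0.444...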

ADDITIONAL COMMENTS:
We expect this sub-task to be the most challenging/difficult one of the PPI task, so systems will certainly not 
be perfect, but they will address a highly relevant biological data annotation problem. Most databases 
currently do not record the actual evidence sentence or passage used by the human curator to derive the annotations. 
This is problematic because (a) it makes revision of existing annotations difficult, (b) it limits the use and 
interpretation of annotations by the biologists who consume the data, and (c) it also makes the use of biological 
annotation database information by text mining/information extraction tools more difficult. 

2. How have the evidence sentences for the ISS task been extracted?
They have been extracted by IntAct and MINT database curators from the HTML and PDF documents,
mainly using cut and paste. These initial evidence passages were then normalized, in the sense that phrases 
were mapped to the corresponding whole sentences, so that they match the HTML full-text articles 
provided to the participants.
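A simple way to reproduce this normalization step, assuming the article text has already been extracted 
from HTML and split into sentences, is to locate the sentence that contains the curator's pasted phrase. 
The sketch below uses naive sentence splitting and exact substring matching; the actual curation pipeline 
may well have handled segmentation and matching differently.

    import re

    def expand_phrase_to_sentence(article_text, phrase):
        # Naive sentence splitting on ., ! or ? followed by whitespace; the
        # actual normalization may have used a proper sentence segmenter.
        sentences = re.split(r'(?<=[.!?])\s+', article_text)
        for sentence in sentences:
            if phrase in sentence:      # exact substring match of the pasted phrase
                return sentence.strip()
        return None                     # phrase not found verbatim in this text

    text = "Protein A binds protein B. This interaction requires domain X."
    print(expand_phrase_to_sentence(text, "binds protein B"))
    # -> "Protein A binds protein B."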



Last update of this page: 21 September 2006





© by Martin Krallinger 2006