11 This use of pseudo-attachment is identical to its original use in Church's parser (Church 1980).
12 Contact the Linguistic Data Consortium, 441 Williams Hall, University of Pennsylvania, Philadelphia,
PA 19104-6305; e-mail to [email protected] for more information.
Table 4
Penn Treebank (as of 11/92).

Description                       Tagged for            Skeletally
                                  Part-of-Speech        Parsed
                                  (Tokens)              (Tokens)
Dept. of Energy abstracts             231,404              231,404
Dow Jones Newswire stories          3,065,776            1,061,166
Dept. of Agriculture bulletins         78,555               78,555
Library of America texts              105,652              105,652
MUC-3 messages                        111,828              111,828
IBM Manual sentences                   89,121               89,121
WBUR radio transcripts                 11,589               11,589
ATIS sentences                         19,832               19,832
Brown Corpus, retagged              1,172,041            1,172,041

Total:                              4,885,798            2,881,188
Some comments on the materials included:
Department of Energy abstracts are scientific abstracts from a variety of
disciplines.
All of the skeletally parsed Dow Jones Newswire materials are also
available as digitally recorded read speech as part of the DARPA
WSJ-CSR1 corpus, available through the Linguistic Data Consortium.
The Department of Agriculture materials include short bulletins on such
topics as when to plant various flowers and how to can various
vegetables and fruits.
The Library of America texts are 5,000-10,000 word passages, mainly
book chapters, from a variety of American authors including Mark
Twain, Henry Adams, Willa Cather, Herman Melville, W. E. B. Du Bois,
and Ralph Waldo Emerson.
The MUC-3 texts are all news stories from the Federal News Service
about terrorist activities in South America. Some of these texts are
translations of Spanish news stories or transcripts of radio broadcasts.
They are taken from training materials for the Third Message
Understanding Conference.
The Brown Corpus materials were completely retagged by the Penn
Treebank project starting from the untagged version of the Brown
Corpus (Francis 1964).
The IBM sentences are taken from IBM computer manuals; they are
chosen to contain a vocabulary of 3,000 words, and are limited in length.
The ATIS sentences are transcribed versions of spontaneous sentences
collected as training materials for the DARPA Air Travel Information
System project.
The entire corpus has been tagged for POS information, at an estimated error rate
of approximately 3%. The POS-tagged version of the Library of America texts and the
Department of Agriculture bulletins have been corrected twice (each by a different
annotator), and the corrected files were then carefully adjudicated; we estimate the
error rate of the adjudicated version at well under 1%. Using a version of PARTS
retrained on the entire preliminary corpus and adjudicating between the output of the
retrained version and the preliminary version of the corpus, we plan to reduce the
error rate of the final version of the corpus to approximately 1%. All the skeletally
parsed materials have been corrected once, except for the Brown materials, which have
been quickly proofread an additional time for gross parsing errors.
5.2 Future Directions
A large number of research efforts, both at the University of Pennsylvania and else-
where, have relied on the output of the Penn Treebank Project to date. A few examples
already in print: a number of projects investigating stochastic parsing have used either
the POS-tagged materials (Magerman and Marcus 1990; Brill et al. 1990; Brill 1991) or
the skeletally parsed corpus (Weischedel et al. 1991; Pereira and Schabes 1992). The
POS-tagged corpus has also been used to train a number of different POS taggers in-
cluding Meteer, Schwartz, and Weischedel (1991), and the skeletally parsed corpus has
been used in connection with the development of new methods to exploit intonational
cues in disambiguating the parsing of spoken sentences (Veilleux and Ostendorf 1992).
The Penn Treebank has been used to bootstrap the development of lexicons for particu-
lar applications (Robert Ingria, personal communication) and is being used as a source
of examples for linguistic theory and psychological modelling (e.g. Niv 1991). To aid
in the search for specific examples of grammatical phenomena using the Treebank,
Richard Pito has developed tgrep, a tool for very fast context-free pattern matching
against the skeletally parsed corpus, which is available through the Linguistic Data
Consortium.
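To give a concrete sense of this kind of search, here is a toy sketch of ours
(tgrep itself differs in both implementation and pattern syntax; the function
names below are our own inventions) that reads a bracketed parse into a simple
tree and finds every node of one category immediately dominating a node of
another, roughly in the spirit of a tgrep immediate-dominance query:

    # Toy illustration of context-free pattern matching over bracketed
    # parses; this is NOT tgrep or its pattern language.

    def parse(s):
        # Read a Treebank-style bracketing into (label, children) tuples;
        # leaves are plain word strings.
        tokens = s.replace("(", " ( ").replace(")", " ) ").split()
        def rd(i):
            label, i = tokens[i + 1], i + 2
            children = []
            while tokens[i] != ")":
                if tokens[i] == "(":
                    child, i = rd(i)
                else:
                    child, i = tokens[i], i + 1
                children.append(child)
            return (label, children), i + 1
        return rd(0)[0]

    def matches(tree, parent, child):
        # Yield every node labeled `parent` that has an immediate child
        # labeled `child`.
        if isinstance(tree, str):
            return
        label, children = tree
        if label == parent and any(not isinstance(c, str) and c[0] == child
                                   for c in children):
            yield tree
        for c in children:
            yield from matches(c, parent, child)

    example = ("(S (NP (DT The) (NN gap)) "
               "(VP (VBD widened) (PP (IN in) (NP (NNP October)))))")
    print(len(list(matches(parse(example), "VP", "PP"))))  # -> 1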
While the Treebank is being widely used, the annotation scheme employed has a
variety of limitations. Many otherwise clear argument/adjunct relations in the corpus
are not indicated because of the current Treebank's essentially context-free represen-
tation. For example, there is at present no satisfactory representation for sentences in
which complement noun phrases or clauses occur after a sentential level adverb. Either
the adverb is trapped within the VP, so that the complement can occur within the VP
where it belongs, or else the adverb is attached to the S, closing off the VP and forcing
the complement to attach to the S.
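To illustrate with a constructed example of ours (bracketing simplified),
consider "He said yesterday that prices rose": either

    (S (NP He)
       (VP said
           (ADVP yesterday)
           (SBAR that (S (NP prices) (VP rose)))))

with the sentential adverb trapped inside the VP, or

    (S (NP He)
       (VP said)
       (ADVP yesterday)
       (SBAR that (S (NP prices) (VP rose))))

with the complement clause forced out of the VP to which it belongs. This
"trapping" problem serves as a limitation for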
groups that currently use Treebank material semiautomatically to derive lexicons for
particular applications. For most of these problems, however, solutions are possible
on the basis of mechanisms already used by the Treebank Project. For example, the
pseudo-attachment notation can be extended to indicate a variety of crossing depen-
dencies. We have recently begun to use this mechanism to represent various kinds
of dislocations, and the Treebank annotators themselves have developed a detailed
proposal to extend pseudo-attachment to a wide range of similar phenomena.
A variety of inconsistencies in the annotation scheme used within the Treebank
have also become apparent with time. The annotation schemes for some syntactic
categories should be unified to allow a consistent approach to determining predicate-
argument structure. To take a very simple example, sentential adverbs attach under
VP when they occur between auxiliaries and predicative ADJPs, but attach under S
when they occur between auxiliaries and VPs.
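Schematically (constructed examples of ours, with the bracketing simplified):

    (S (NP He) (VP is (ADVP certainly) (ADJP reliable)))    adverb under VP
    (S (NP He) has (ADVP certainly) (VP left))              adverb under S

These structures need to be regularized.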
As the current Treebank has been exploited by a variety of users, a significant
number have expressed a need for forms of annotation richer than those provided by the
project's first phase. Some users would like a less skeletal form of annotation of surface
grammatical structure, expanding the essentially context-free analysis of the current
Penn Treebank to indicate a wide variety of noncontiguous structures and dependen-
cies. A wide range of Treebank users now strongly desire a level of annotation that
makes explicit some form of predicate-argument structure. The desired level of rep-
resentation would make explicit the logical subject and logical object of the verb, and
would indicate, at least in clear cases, which subconstituents serve as arguments of
the underlying predicates and which serve as modifiers.
During the next phase of the Treebank project, we expect to provide both a richer
analysis of the existing corpus and a parallel corpus of predicate-argument structures.
This will be done by first enriching the annotation of the current corpus, and then
automatically extracting predicate-argument structure, at the level of distinguishing
logical subjects and objects, and distinguishing arguments from adjuncts for clear
cases. Enrichment will be achieved by automatically transforming the current Penn
Treebank into a level of structure close to the intended target, and then completing
the conversion by hand.
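As a rough sketch of what the automatic extraction step might look like (this
is our illustration, not the project's actual procedure; it assumes clauses
whose subject NPs bear a -SBJ function tag, uses the same minimal tree reader
as the earlier sketch, and ignores traces, coordination, and passives):

    # Rough illustration only; not the Treebank project's extraction tool.

    def parse(s):
        # Minimal reader for bracketed trees: (label, children) tuples,
        # with plain strings as leaves.
        tokens = s.replace("(", " ( ").replace(")", " ) ").split()
        def rd(i):
            label, i = tokens[i + 1], i + 2
            children = []
            while tokens[i] != ")":
                if tokens[i] == "(":
                    child, i = rd(i)
                else:
                    child, i = tokens[i], i + 1
                children.append(child)
            return (label, children), i + 1
        return rd(0)[0]

    def words(t):
        return t if isinstance(t, str) else " ".join(words(c) for c in t[1])

    def extract(clause):
        # Return (verb, logical subject, logical object) for a simple clause.
        subj = verb = obj = None
        for c in clause[1]:
            if isinstance(c, str):
                continue
            if c[0].startswith("NP-SBJ"):
                subj = words(c)
            elif c[0] == "VP":
                for vc in c[1]:
                    if isinstance(vc, str):
                        continue
                    if vc[0].startswith("VB") and verb is None:
                        verb = words(vc)
                    elif vc[0] == "NP" and obj is None:
                        obj = words(vc)
        return verb, subj, obj

    s = "(S (NP-SBJ (NNP Kim)) (VP (VBD opened) (NP (DT the) (NN account))))"
    print(extract(parse(s)))  # -> ('opened', 'Kim', 'the account')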
Acknowledgments
The work reported here was partially supported by DARPA grant
No. N0014-85-K0018, by DARPA and AFOSR jointly under grant
No. AFOSR-90-0066, and by ARO grant No. DAAL 03-89-C0031 PRI. Seed money
was provided by the General Electric Corporation under grant
No. J01746000. We gratefully acknowledge this support. We would also like
to acknowledge the contribution of the annotators who have worked on the
Penn Treebank Project: Florence Dong, Leslie Dossey, Mark Ferguson, Lisa
Frank, Elizabeth Hamilton, Alissa Hinckley, Chris Hudson, Karen Katz,
Grace Kim, Robert MacIntyre, Mark Parisi, Britta Schasberger, Victoria
Tredinnick, and Matt Waters; in addition, Rob Foye, David Magerman,
Richard Pito, and Steven Shapiro deserve our special thanks for their
administrative and programming support. We are grateful to AT&T Bell Labs
for permission to use Kenneth Church's PARTS part-of-speech labeler and
Donald Hindle's Fidditch parser. Finally, we would like to thank Sue
Marcus for sharing with us her statistical expertise and providing the
analysis of the time data of the experiment reported in Section 3. The
design of that experiment is due to the first two authors; they alone are
responsible for its shortcomings.