Using UAM CorpusTool to Explore the Language of Evaluation in Interview Program
- Chunyu Hu
- Jinlin Tan
Abstract
As an interactional encounter between a journalist and one or more newsworthy public figures, the interview program is a special type of discourse that is rich in evaluative language. This paper sets out to explore evaluation in interview programs from the perspective of the appraisal system. The corpus software used in this study is UAM CorpusTool 3.3, annotation software that automatically annotates the grammatical structure and parts of speech of a text while also allowing manual annotation of linguistic features according to user-defined schemes. The results show that the use of attitudinal resources is closely related to the speaker's communication strategies: invoked evaluation and positive appraisal resources are the most frequently used in the interviews, serving to establish alignment with the audience. The design of the program and other contextual information are also shown to be sources of evaluation.
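The manual layer of the workflow described above assigns appraisal-framework features (attitude type, polarity, explicitness) to text spans and then tallies their frequencies. The following is a minimal Python sketch of that tallying step with invented example data; it is not UAM CorpusTool's API (the tool is GUI software), and the span texts and annotations below are hypothetical.

```python
# Illustrative sketch only: counting manually annotated appraisal features.
# Feature names follow the appraisal framework mentioned in the abstract;
# the annotated spans are invented for demonstration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Annotation:
    span: str          # the evaluated stretch of text
    attitude: str      # "affect" | "judgement" | "appreciation"
    polarity: str      # "positive" | "negative"
    explicitness: str  # "inscribed" | "invoked"

# Hypothetical annotations for a few interview utterances
annotations = [
    Annotation("a wonderful guest", "appreciation", "positive", "inscribed"),
    Annotation("she never gave up", "judgement", "positive", "invoked"),
    Annotation("that really surprised me", "affect", "positive", "inscribed"),
]

def feature_counts(anns, feature):
    """Tally how often each value of a given feature occurs."""
    return Counter(getattr(a, feature) for a in anns)

print(feature_counts(annotations, "polarity"))
# prints: Counter({'positive': 3})
```

A real study would export such counts per feature (attitude, polarity, explicitness) and compare them across speakers, which is essentially what the frequency results in this paper report.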
- Full Text: PDF
- DOI:10.5539/elt.v10n7p8
Contact
- Gavin Yu, Editorial Assistant
- elt@ccsenet.org