This thesis proposes a version of the Critical Stylistics model that accounts for meaning-making in multimodal online news articles: non-literary texts, each composed of a linguistic text and still images. A framework integrating the Critical Stylistics and Visual Grammar models suggests three multimodal textual conceptual functions developed from Jeffries (2010a): Naming and Describing; Representing Events/Actions/States; and Prioritising. These are tested by analysing the images of the news articles as texts. Applying Jeffries’ (2014) concept of textual meaning, the analysis shows that the linguistic text and the images are two independent texts contributing differently but collaboratively to the meanings made and projected in the multimodal texts.
The search for patterns in the data yields three findings:
1. Images reinforce meanings made by the linguistic text.
2. Images extend meanings made by the linguistic text.
3. Images add to, or suppress, meanings made by the linguistic text.
I argue that a critical stylistic approach is applicable to images, but it needs an equivalent visual model in order to propose a toolkit that can analyse meaning-making in non-literary multimodal texts. I adopt Jeffries’ (2010a) critical stylistic approach and adapt it for images, making use of Kress and van Leeuwen’s (2006) model of visual grammar and drawing on their notion that images are texts, to create a model for the analysis of multimodal news texts. The model can show how the linguistic text and the accompanying images, while using resources specific to their underlying structures, construct textual meanings that result in a coherent portrayal of the world of events reported. The multimodal textual conceptual functions use the notion of co-text to reduce the number of possible interpretations an image might suggest, producing a more systematic and replicable analysis.
Available under License Creative Commons Attribution Non-commercial No Derivatives.