Talking About Visual Texts With Students

Jon Callow



Abstract

The study reported in this article explored students’ understanding of the visual aspects of their own multimedia presentations. The study was part of a larger research project designed to investigate new types of literacies and learning environments in classroom contexts.

Investigating the nature of students’ understanding of visual texts is an important area of research in the “new literacies.” The research suggests that students need access to a metalanguage in order both to explain their own visual designs and to develop more sophisticated and critical understandings about how visual texts in general are constructed. For this to occur, teachers require an understanding of visual features, as well as the ability to incorporate appropriate pedagogical practices into the classroom environment.







The word CHOCOLATE zooms in from the center of the screen. Then two chocolate bars fly in from each corner, accompanied by the sound of a car roaring past.

“Chocolate! Mmmmmm!” chorus the voices of 11-year-old girls from the computer, as the hypermedia presentation on the production of chocolate begins.

In an Australian Year 6 classroom, 11-year-old students began their school year with a science and technology unit. As part of a project to investigate new learning environments (Downes & Zammit, 2001), their teachers were introducing them to the use of software for creating multimedia presentations on the unit’s topic of food production. One of the aims of the project was to explore how students worked toward English language arts outcomes, with a particular focus on viewing and visual literacy. This article explores the types of multimodal texts (texts that combine elements such as images, written or spoken words, video, etc.) students produced within the unit, focusing on the language students used in describing and explaining the visual elements of their presentations. It concludes with a discussion of the implications for teaching about visual texts. There is a strong need for additional research into visual literacy and into what happens with visual texts in elementary classrooms; such research would allow the development of appropriate pedagogy and teaching resources.



Theoretical Framework

Visual literacy is part of new literacies and multiliteracies (Luke & Elkins, 1998; New London Group, 2000), conceptualizations that seek to understand the evolving nature of literacy in the new millennium. As used by the New London Group, the term multiliteracies acknowledges the multiplicity of meaning-making modes (visual, textual, audio, etc.) as well as the wider social contexts of these modes, from diverse local settings to global communities (see also Unsworth, 2001). The New London Group advocates a pedagogy that acknowledges the dynamic nature of communication, the importance of understanding and experiencing culturally relevant texts and of designing new texts, and the need to question and critique what is seen and experienced. Linked to this is work in related areas such as critical literacy, intermediality, and media, which also seeks to interrogate the meanings and ideologies of a variety of texts (Alvermann & Hagood, 2000; Kist, 2000; Pailliotet, 2000; Schwartz, 2001). Visual images play a key role in all these approaches to literacy.

While visual and multimodal texts are acknowledged in curriculum documents developed in countries around the world, there is limited research on what skills and understandings our students need when they are involved in reading or viewing such texts. In Australian curriculum documents, for example, viewing has been acknowledged as part of reading in general, but much more emphasis is given to written text, which reflects the large body of Australian research in writing (Anstey & Bull, 1996; Bull & Anstey, 1996; Christie & Misson, 1998). Healy (2000) argues that attaching viewing as an appendage, rather than including it as an integral component of reading, means that relatively little is done to theorize about it and then put theory into practice.

The lack of theory relating to visual texts in the wider context of multiliteracies has been noted not only in Australian contexts (e.g., Unsworth, 2002) but in the United Kingdom (e.g., Kress, 2000a) and the United States (Lemke, 1998). One of the key elements in emerging theories of literacy in general in this new millennium, and of visual and multimodal texts in particular, is the inclusion of some type of metalanguage -- that is, “a language for talking about language, images, texts and meaning-making interactions” (New London Group, 2000, p. 24). There is a small but growing body of research that investigates the visual aspect of multiliteracies (Callow, 1999; Callow & Unsworth, 1997; Callow & Zammit, 2002; Goodman, 1996; Pailliotet, 2000; Stenglin & Iedema, 2001; Unsworth, 2001; Van Kraayenoord, 1996; Zammit, 2000), but much remains to be done.

Pedagogical issues need to be considered alongside any new or revised literacy theories. The role of explicit and implicit learning of language and literacy is well documented when it comes to listening, speaking, reading, and writing (e.g., Campbell, 2000; Galda, Cullinan, & Strickland, 1997; Halliday, 1975; Painter, 1985). When students are learning to create paper-based written texts, for example, the practice of having an expert guide scaffold their processes is considered pedagogically sound (Derewianka, 1990; Hammond, 2001; New South Wales Department of Education and Training, 2000; Van Kraayenoord, Moni, & Jobling, 2001, online document). In a balanced literacy program, teachers include a variety of writing opportunities, modeling the various purposes for writing. In particular lessons, a teacher will point out some key features of a text, such as its audience, purpose, structure, or grammar. Using a metalanguage to describe parts of a written work (such as the role of orientation in a narrative structure or the use of appropriate tenor in writing a persuasive letter) provides teachers with a way of explicitly teaching about that text. This explicitness in turn provides students with a language to develop their literacy skills. A metalanguage can be used as part of the scaffolding process when teaching students. As students become more expert at understanding these newer aspects of learning about literacy, the scaffold is gradually removed. Students can then practice and experiment with their reading and writing, using the new skills and understandings.

This theoretical orientation to literacy may hold not only for written texts but also for visual or multimodal texts; however, both the metalanguage of visual texts and the pedagogy for teaching about them need to be explored and developed (Kress, 1997, 2000b; New London Group, 2000). The ability to talk about the features of any text not only allows teachers to see how students’ literacy understandings might be developing, but also provides students with the tools and frameworks to create their own work. In addition, if these tools are adopted critically, students will be able not only to critique their own visual products, but also to use the tools to interrogate other texts, exploring intended audience, purpose, emotional effect, and ideological positions (New London Group; Zammit & Callow, 1998).

The work of Kress and van Leeuwen (1996) and their functional “grammar of visual design” offers a very useful framework for metalinguistic understandings. They suggest a visual grammar, drawing on Halliday’s (1994) work in systemic functional linguistics. Their model acknowledges that all texts have social, cultural, and contextual aspects that must be considered, along with consideration of the intended audience and purpose. Like written texts, visual images draw on meaning-making systems. This model suggests that images simultaneously represent aspects of the world, enact relationships between the image and its viewers, and organize these meanings into coherent compositions.



The Study

Using the concepts of Kress and van Leeuwen (1996) as a framework, the study described here investigated what metalanguage, if any, students used when talking about visual aspects of their work. The study adopted a qualitative approach (Merriam, 1998). Data sources included researcher field notes made while observing classes, discussions with the teachers, collection of work samples from students, and group interviews with students about their work. Features of image color, selection, salience, and layout were explored via group interviews with students. Interviews involved questions designed to focus on student understanding of the visual elements of their presentations and the aspects they considered effective. Questions such as “How does the size of the text or the size of the image on the screen make a difference to the effect of the presentation or information?” were designed to probe whether students had considered how an element might have been made salient.

It should be noted that the classroom teachers’ role in this unit was more explicit in teaching the technical skills of PowerPoint software than in discussing the visual aspects of screen design. While teachers talked with students about the audience for their presentations and listed the main elements of a PowerPoint show, there was no planned sequence of learning about layout, design, or the effect of image selection. One class had a single lesson on how advertisements used images, whether those images reflected reality, and how the size of the text or photos could attract a viewer’s attention. Apart from this, teachers encouraged students to decide as a group what visual features and design they thought appropriate.



The Unit: Food for the Tucker Box

“Food for the Tucker Box” (tucker being a colloquial Australian term for food) is a science and technology unit in which students explore food production, preservation, and packaging. Two teachers, each working with 25 Year 6 (11-year-old) students, began the school year with this unit of work. Here, the science curriculum was combined with the English curriculum and the school’s computer technology program. Desired outcomes were drawn from the relevant curriculum documents, with a focus on the use of technology and information skills (Board of Studies, 1996, 1998; New South Wales, 1997). The six weeks of the unit included a range of key activities.

Bread, Chocolate, and Fish Fingers

As part of the unit, bread, chocolate, and fish fingers were studied to investigate food production in detail. The unit took place at the beginning of the school year and marked the students’ first use of technology for the year. Given the introductory nature of the technology components, students received considerable direction and were strongly scaffolded. Each group was to work together to create a storyboard for planning (Figure 1) and then a PowerPoint slide presentation.

Figure 1
Food for the Tucker Box Storyboards (full view and close up)

full view of storyboard pages stapled to bulletin board close up of storyboard pages

The integration of text and image in the students’ PowerPoint presentations provided a context to explore their understanding of multimodal texts. Working in groups of four or five, students were supplied with sources for the basic facts about the production process of bread, chocolate, or frozen fish fingers. For their slideshow presentations, they were to put the facts into their own words, sequence the information appropriately, and include text, images, and their choice of sound and animation. Students worked on their presentations over a three-week period, during which access to the classroom and library computers was available (Figure 2). Although many had some previous experience in using the software, for most this was the first group construction and presentation they had worked on.

As was the case with classes in previous years, the initial “gee-whiz” factor of using software that allowed colors to be changed and sounds and animations to be added had a big effect on the students’ initial work. Most chose to use a diverse array of backgrounds, sound effects, and slide transitions, obviously enjoying the fun of seeing words streak across the screen, accompanied by their own voices or the sound of a laser beam. This seems to be common among students engaging with new software for the first time (Downes, 1998).


Figure 2
Students Working Collaboratively at a Computer

four students gathered in front of a computer monitor


Talking About Texts With Students

When students talk about their work, their teacher gains deeper understanding of what knowledge and skills have been developed, along with reflections on the overall experience of the learning process (Van Kraayenoord, 1996; Woodward, 1993). Interviews with 52 students across 14 groups showed that their knowledge about multimodal and visual texts was certainly developing but required further scaffolding and support.

The main criteria used for evaluating student presentations included students’ own perceptions of what qualities made an effective slideshow, the features they may have used to attract their viewers’ attention (salience), the layout of their screens, and their choice of images. The final products showed many commonalities. As directed, each group used only the information supplied by the teacher, rewording it for their own presentations. Clip art was the only source of prepared images, though two or three groups chose to draw one or two of their own pictures when they could not find an appropriate image in the clip art files. Each information page included both text and image, and many used sound effects as well. In nearly every presentation, the images were either proportionately the same size or significantly larger than the written text. It was also clear from observing the group work sessions that the students were quite familiar and confident in their technical use of the program.

While a cursory viewing suggests that the presentations were quite derivative of the teachers’ set criteria and showed little originality, what was significant were the comments students made when asked about specific visual features of the slideshow. When asked what makes an effective PowerPoint presentation, for example, students most frequently nominated color.

Although students strongly commended color, this appeared to be an intuitive rather than a reasoned response: When they were asked why they had chosen particular background colors, only five students were able to articulate specific reasons for their choice (e.g., the background color matched the topic presented -- brown for chocolate -- or matched the clip art selected). There seemed to be no explicit link to their classroom lessons as to why color was seen as the most effective element in a presentation, other than their enthusiasm to experiment with as many color and shading options as possible before deciding on their final choice.

Figure 3
Slide Showing Choice of Coordinating Background Colors

PowerPoint slide with clip art, text, and coordinating background color

Most students said that using different colors in each of their slide backgrounds made them “interesting” or “not boring.”

Since students strongly identified with the importance of color, there was an opportunity to help them situate color within a wider visual framework. The functional semiotic framework (Kress & van Leeuwen, 1996), in which color is a key element in the interpersonal aspect of images, together with understandings about color from the visual arts literature (Trifonas, 1998), provides a useful guide here. In the classrooms, teachers can point out how color is used if they are doing a shared reading or viewing of any text that utilizes color in a dominant way. The strong, luminous colors on the Yucky! website, for example, would engage younger viewers, who could discuss what emotional effect the color choice has on them and might have on others who view the page (Figure 4).

Figure 4
Home Page of Yucky!, Showing Bold and Energetic Use of Color

screen shot of yucky home page

As well, the use of harmonious or contrasting color in an image can create a salient element or space in the image, which draws the eye in -- as with the green title “Yuckiest” against the purple at the Yucky home page (Anstey & Bull, 2000, p. 180).

Students again showed an intuitive, though generally unarticulated, understanding of other elements in their layouts. The most salient part of an image is the one that attracts the viewer’s attention by size, color, or placement (Kress & van Leeuwen, 1996, p. 183). When asked whether the size of the text or image mattered, there was a general response that the larger element would draw the viewer’s attention. There was a preference for bigger pictures, with comments such as “people like to look at pictures.” Figure 5 shows a typical slide that reveals the salience of images in the students’ design.

Figure 5
Image as a Salient Element

slide with an image as the most salient element

Some students articulated a concept similar to salience when they said that viewers would look at a picture more if it was bigger, or that they “need something to take [their] eye” when looking at a screen. While it is encouraging to see some students noticing more complex features of visual design, this initial instinctive knowledge needs to be made available to all students. When working with a class, for example, a teacher could show a particular PowerPoint screen and change the size or features used in it, in order to demonstrate how particular aspects might be made salient. In Figure 6, the slide on the left (one group’s title page) makes the written text the most salient through its large font size; the slide on the right has been modified to demonstrate how elements can be changed to attract the viewer’s attention.

Figure 6
Modification of Elements to Demonstrate Salience

screen shot of slide with large type   screen shot of slide with large image

The students’ concept of layout design was also explored when they were asked whether it mattered where the text and image were placed on the screen. The functional semiotic model suggests that placement in the top part of a page or screen privileges that element, while placement in the bottom denotes a more everyday value (Kress & van Leeuwen, 1996, p. 193). The students’ opinions about where writing should be placed on the screen were fairly evenly divided among top, bottom, and “doesn’t matter.” Some cited books to justify their response -- “It’s like picture books. The text is at the bottom.” Some suggested that the picture needs to be at the top “because it’s more important.” This comment again shows some intuitive knowledge of the patterns or designs of visual texts, opening the way for deeper discussion of how visual texts work.

Even without access to a specific metalanguage to describe most aspects of their work, students were still able to justify why they had made particular choices for their presentations. When a particular image or text box on a slide was pointed out and students were asked why it had been placed in a particular position on the screen, most were able to give a logical, reasoned response. Some of these reasons related to design choices such as the repetition of images (Figure 7).

Figure 7
Repetition of Images

slide showing repeated image of chocolate bars

The types of images the students used revealed some interesting understandings about the perception of images in our culture. Students could use clip art or draw their own images, but when asked what images they would have preferred to use, most said they wanted photos because photos are “real” and “show what things are really like.” This preference is reflected in Kress and van Leeuwen’s (1996) comments on modality in images, where the truth value or credibility of an image depends on the codes and colors used. While black-and-white diagrams might convey more credible scientific concepts (e.g., an electrical circuit diagram), Kress and van Leeuwen argue that in our current culture, “photo-realism” is accepted as representative of the natural reality of the scene or person shown. This credibility of photos was reflected by one child, who noted that “people believe you more if you use photos.” Some students, however, preferred clip art because it was bright. One student brought to bear issues of audience, saying that little kids like cartoon (clip art) images, while adults like real pictures.

The issue of the credibility of images is an important one, not only for effective presentation and audience relevance, but also in terms of critical literacy. If students are to be critical viewers and creators of visual and multimedia texts, then questions of validity, credibility, and ideology need to be part of the learning environment (Alvermann & Hagood, 2000). The question of photos being “real” will grow more problematic as digital and computer alteration of photos becomes more sophisticated and accessible.

Influence of the Electronic Medium

The medium used to present an image is an important factor when developing both visual literacy and critical literacy skills. The medium not only informs how images can be presented or manipulated (e.g., a static photo in a book, the continually changing visual content of websites, the use of interactivity and video on CD-ROMs, etc.), but it also shapes our expectations of text (Megarry, 1991). If we pick up a book in a library and open to a page, we might do a quick visual scan and find headings and subheadings, usually indicating that the text is factual, rather than narrative. With the fast pace of technology change, the “cues” we might use to orient ourselves to information on a screen are still evolving and are sometimes uncharted (Burbules, 1997).

In response to the interview question “Does it matter where the image or text goes on the screen?” one student in the study stated,

In books, the writing is first, then the picture, while on computers, the picture comes first at the top while the writing is at the bottom.

While this comment did not flow from explicit teaching or deconstruction of screens and books in the Food for the Tucker Box unit, it does reveal an awareness that there are similarities and differences between screens and books. This awareness is significant, in that while many teachers would model the differences between storybooks and information books in terms of structure, layout, and language choices, the same issues are important to explore when working across media and communication modes. This shows the importance of critically viewing a variety of texts with students, helping them to make generalizations about the patterns that might be evident.

The hypermedia aspect of the work in this unit was limited to sounds and animations. The students generally thought of these features as attention-getting devices that made the presentations interesting. There was no specific teaching in the unit about the nature or specific effects of using these features, although some students suggested that being able to record and play back the written text was useful for nonreaders or blind people.

Because of the nature of PowerPoint, most presentations created with it are linear in nature, as opposed to Web-like (Burbules, 1997). Even within the linear environment of the software, however, the connectedness of screens and their construction deserves consideration. Creating a coherent, understandable, and aesthetically pleasing experience for a viewer is not restricted to individual screens but also relates to the work as a whole. Students had some notion of this when they argued either to make every screen background different (so that it wasn’t boring) or the same (for consistency and to minimize distraction). In working with students about this aspect of visual literacy, parallels can be drawn to the typesetting and layout of a book. Consistency of design, whether in a book, PowerPoint presentation, or webpage, is an important aspect of literacy knowledge, especially as students have more and more access to multimedia publishing tools (Kafai, Ching, & Marshall, 1997; Lynch & Horton, 1999).



Conclusions

The key findings from this study show that when working with visual and multimodal texts, particularly in the electronic medium, students need to understand not only the technical skills of manipulating text, image, and color, but also how these elements work to create meaning. In particular, they must be aware of the role of color, salience, and layout design, and have an understanding of how different types of images might function within presentations. The students in this study showed that they certainly had a strong, if mainly intuitive, visual literacy knowledge in these areas. The question is how to scaffold this initial knowledge.

The question of metalanguage is a key element here. If each of these students is to develop “multiliteracies,” then explicit knowledge about reading and writing traditional paper-based texts needs to be extended to the visual. Teachers need to be able to read and view visual and multimodal texts with students and point out how color, salience, or layout might have been used to create the possible meanings in the text. The findings of this study suggest that many students have some understanding of visual features, but that this is not developed into a richer systematic understanding, where similar concepts might be transferred to other literacy tasks. Without this, students are not able to offer a more complex self-assessment of their work, articulating the specific visual features they use in presentations such as those described here. This explicit articulation also opens the possibility of implementing critical literacy more fully, such as considering how food products are presented to the public in advertising, whether photos or diagrams were most effective in the PowerPoint explanations, and how layout, color, and salience could be used to attract a chosen audience.

Furthermore, as with the teaching of reading and writing, the use of a metalanguage to talk about and critique texts needs to be introduced not only at point of need, when students are constructing their own projects, but also as part of the scaffolding around any literacy experiences where students encounter multimodal texts. Appropriate resources and strategies need to be developed in order for teachers to provide such learning experiences.

The comments and understandings exhibited by students in this study show that exposure to visual texts affords students an implicit knowledge of the patterns and purposes of these texts. As when learning to read or write, however, students need to understand the choices that are available to them to become skilled, critical viewers and text designers. Pedagogically, teachers need to have a sound framework for their own understanding of visual texts if they are to model and discuss images explicitly with their students. This study shows the beginning of this process in one school.

Educators need to understand not only the design features of visual texts, but also their wider cultural and social aspects. In turn, appropriate pedagogy needs to be tested, and activities and resources developed, in order for teachers to integrate visual and multimodal literacy activities into their current curriculum. While this work is only in its beginning phase, there is much to suggest that working with a multiliteracies framework in new learning environments will be an exciting and fruitful endeavor.



References

Alvermann, D.E., & Hagood, M. (2000). Critical media literacy: Research, theory, and practice in “new times.” Journal of Educational Research, 93(3), 193-205.
Back

Anstey, M., & Bull, G. (1996). The literacy labyrinth. New York and Sydney: Prentice Hall.
Back

Anstey, M., & Bull, G. (2000). Reading the visual: Written and illustrated children’s literature. Sydney: Harcourt.
Back

Board of Studies, New South Wales. (1996). Science and technology K-6 syllabus and support document. Sydney: Author.
Back

Board of Studies, New South Wales. (1998). English K-6. Sydney: Author.
Back

Bull, G., & Anstey, M. (1996). The literacy lexicon. Sydney: Prentice Hall.
Back

Burbules, N.C. (1997). Rhetorics of the Web: Hyperreading and critical literacy. In I. Snyder (Ed.), Page to screen: Taking literacy into the electronic era. St Leonards, Sydney: Allen & Unwin.
Back

Callow, J. (1999). Image matters: Visual texts in the classroom. Marrickville, NSW: Primary English Teaching Association.
Back

Callow, J., & Unsworth, L. (1997). Equity in the videosphere. Southern Review, 30(3), 268-286.
Back

Callow, J., & Zammit, K. (2002). Visual literacy: From picture books to electronic texts. In M. Monteith (Ed.), Teaching primary literacy with ICT. Buckingham, UK: Open University Press.
Back

Campbell, R. (2000). Language acquisition, development and learning. In R. Campbell & D. Green (Eds.), Literacies and learners: Current perspectives. Frenchs Forest, NSW: Prentice Hall Australia.
Back

Christie, F., & Misson, R. (1998). Literacy and schooling. London: Routledge.
Back

Derewianka, B. (1990). Exploring how texts work. Rozelle, NSW: Primary English Teaching Association.
Back

Downes, T. (1998). Children’s use of computers in their homes. Unpublished doctoral thesis, University of Western Sydney, NSW, Australia.
Back

Downes, T., & Zammit, K. (2001). New literacies for connected learning in global classrooms. A framework for the future. In P. Hogenbirk & H. Taylor (Eds.), The bookmark of the school of the future. Boston: Kluwer.
Back

Galda, L., Cullinan, B.E., & Strickland, D.S. (1997). Language, literacy, and the child (2nd ed.). Fort Worth, TX: Harcourt Brace.
Back

Goodman, S. (1996). Visual English, redesigning English: New texts, new identities. London: Routledge.
Back

Halliday, M.A.K. (1975). Learning how to mean: Explorations in the development of language. London: Edward Arnold.
Back

Halliday, M.A.K. (1994). An introduction to functional grammar (2nd ed.). London: Edward Arnold.
Back

Hammond, J. (2001). Scaffolding: Teaching and learning in language and literacy education. Newtown, NSW: Primary English Teaching Association.
Back

Healy, A. (2000). Visual literacy: Reading and the contemporary text environment. In R. Campbell & D. Green (Eds.), Literacies and learners: Current perspectives (pp. 155-172). Frenchs Forest, NSW: Prentice Hall Australia.
Back

Kafai, Y., Ching, C.C., & Marshall, S. (1997). Children as designers of educational multimedia software. Computers in Education, 29(23), 117-126.
Back

Kist, W. (2000). Beginning to create the new literacy classroom: What does the new literacy look like? Journal of Adolescent & Adult Literacy, 43(8), 710-718.
Back

Kress, G. (1997). Visual and verbal modes of representation in electronically mediated communication: The potentials of new forms of text. In I. Snyder (Ed.), Page to screen: Taking literacy into the electronic era. St. Leonards, NSW: Allen & Unwin.
Back

Kress, G. (2000a). A curriculum for the future. Cambridge Journal of Education, 30(1), 133-145.
Back

Kress, G. (2000b). Design and transformation: New theories of meaning. In B. Cope & M. Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of social futures (pp. 153-161). South Melbourne: Macmillan.
Back

Kress, G., & van Leeuwen, T. (1996). Reading images : The grammar of visual design. London: Routledge.
Back

Lemke, J. (1998). Metamedia literacy: Transforming meanings and media. In D. Reinking, M.C. McKenna, L.D. Labbo, & R.D. Kieffer (Eds.), Handbook of literacy and technology: Transformations in a post-typographic world. Mahwah, NJ: Erlbaum.

Luke, A., & Elkins, J. (1998). Reinventing literacy in new times. Journal of Adolescent & Adult Literacy, 42(1), 4.

Lynch, P.J., & Horton, S. (1999). Web style guide: Basic design principles for creating web sites. New Haven, CT: Yale University Press.

Megarry, J. (1991). Europe in the round: Principles and practices of screen design. Educational and Training Technology International, 28(4), 306-315.

Merriam, S.B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass.

New London Group. (2000). A pedagogy of multiliteracies: Designing social futures. In B. Cope & M. Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of social futures. South Melbourne: Macmillan.

New South Wales Curriculum Support Directorate. (1997). Computer-based technologies in the primary KLAs: Enhancing student learning. Ryde, NSW: New South Wales Department of Education and Training.

New South Wales Department of Education and Training. (2000). Focus on literacy: Writing. Sydney: Author.

Pailliotet, A.W. (2000). Intermediality: Bridge to critical media literacy. The Reading Teacher, 54(2), 208-220.

Painter, C. (1985). Learning the mother tongue. Waurn Ponds, Vic.: Deakin University Press.

Schwartz, G. (2001). Literacy expanded: The role of media literacy in teacher education. Teacher Education Quarterly, Spring, 111-119.

Stenglin, M., & Iedema, R. (2001). How to analyse visual images: A guide for TESOL teachers. In A. Burns & C. Coffin (Eds.), Analysing English in a global context: A reader. London: Routledge.

Trifonas, P. (1998). Cross-mediality and narrative textual form: A semiotic analysis of the lexical and visual signs and codes in the picture book. Semiotica, 118(1/2), 1-70.

Unsworth, L. (2001). Teaching multiliteracies across the curriculum: Changing contexts of text and image in classroom practice. Buckingham, UK: Open University Press.

Unsworth, L. (2002). Changing dimensions of school literacies. Australian Journal of Language and Literacy, 25(1), 62-77.

Van Kraayenoord, C.E. (1996). Literacy assessment. In G. Bull & M. Anstey (Eds.), The literacy lexicon. New York and Sydney: Prentice Hall.

Van Kraayenoord, C., Moni, K., & Jobling, A. (2001). Putting it all together: Building a community of practice for learners with special needs. Reading Online, 5(4). Available: www.readingonline.org/articles/art_index.asp?HREF=vankraayenoord/index.html

Woodward, H.L. (1993). Negotiated evaluation. Newtown, NSW: Primary English Teaching Association.

Zammit, K. (2000). Computer icons: A picture says a thousand words, or does it? Journal of Educational Computing Research, 23(2), 217-231.

Zammit, K., & Callow, J. (1998). Ideology and technology: A visual and textual analysis of two popular CD-ROM programs. Linguistics and Education, 10(1), 89-105.


About the Author

Jon Callow is a lecturer in literacy education at the University of Western Sydney, Australia. His experience includes classroom teaching from elementary to secondary level, work as a literacy consultant in Sydney schools serving socioeconomically disadvantaged students, and adult education in Europe. His interests include multiliteracies and critical literacy in both written and multimedia texts, and visual literacy and technology in the classroom. Address correspondence to University of Western Sydney, Bankstown Campus Building 4, Locked Bag 1797, Penrith South DC, NSW 1797, Australia, or contact Jon by e-mail at j.callow@uws.edu.au.





The author thanks Jenny Burgess and Alison Fazio for their work on this project.


Citation: Callow, J. (2003, April). Talking about visual texts with students. Reading Online, 6(8). Available: http://www.readingonline.org/articles/art_index.asp?HREF=callow/index.html




Reading Online, www.readingonline.org
Posted April 2003
© 2003 International Reading Association, Inc.   ISSN 1096-1232