11.2.07

2 Create a Story, Multimodality and Onscreen Text Development

The first-glance simplicity of many of 2Simple's products often hides the power of the tools that lie behind their interfaces. 2Create a Story, for example, has a hidden yet powerful feature which enables the youngest of students and their teachers to create web-publishable content at the drop of a menu. As well as saving files in its own 2Create a Story format, these can be exported in Flash, the same format used to publish, for example, the National Numeracy Strategy's ITPs. These self-contained, playable packages can then be included in your website, or shared via the network or on the interactive whiteboard as standalone texts.

Last term, as a 2Create newbie, I was excited by how quickly our Year 1 and 2 students could compile a text using 2Create a Story. The desktop view is very similar to the newsbooks my Year 1 class used to have when I began teaching, with a space to draw a picture at the top and a box to write in at the bottom. To the left of the screen is a set of thick and thin felt tips to draw with, or, using teacher controls, a wider range of drawing tools can be offered to the students. The ability to add new pages was quickly exploited by the students, who were happy to draw their picture and then add a sentence, or input their emergent writing and names below for editing, revision or teacher modelling. The students also seemed to like the idea that the software's page structure could be used like a storyboard. They could create their story in pictures first and then go back and add their writing afterwards, using the pictures as a scaffold or writing frame, allowing them to effectively sketch out and plan their story through talk first. Another exciting aspect for them was the ability to animate their pictures and add sound effects. Without knowing it, these children were compiling multimodal texts all on their own.

The concept of multimodality within texts is likely to be one of our biggest challenges. At its root it challenges our basic understanding of what a text is, and what the terms text and literacy may mean for the 21st century. During my master's course I engaged with a unit entitled "Communication and Representation with ICT." I have to say that it blew my mind: unfamiliar terminology and a lack of vocabulary, due in part to my recent immersion in traditional text types through the National Literacy Strategy, left me frequently floundering to express my thoughts and understanding. I vowed never to go near the idea again. However, in engaging with my dissertation and video analysis, I have found myself unable to avoid its theoretical basis. Within methodological papers, I discovered that in choosing to collect and analyse video data I had committed myself to reading the data as a multimodal text. Oh, woe was me.

Anyway, the key concepts explored in this study unit related to semiotics. Putting this simply is quite difficult, but essentially semiotics as an area of study is about how signs and symbols are used to create and achieve meaning. What becomes apparent when you begin to explore texts in this way is that communications are rarely dependent on one mode of representation in order to be understood. We tend to see reading and writing, speaking and listening as individual activities, but when someone speaks they gesture and intone, and these change the meaning of what is being said. Even a reading book from the Oxford Reading Tree, for example, can be considered to have multimodal properties. We have the picture above, the written text below, and the children will be encouraged to read this aloud, drawing on the visible resources on the page and the past experiences they bring to the story. When these elements are put together in action we have at least three modes of communication: two symbolic forms and the sound of the child's voice.
When we create a text in, for example, 2Create a Story, we have the three modes above, but we also have the possibility of including movement through animation of the pictures, and the possibility of embedding sound effects. All of these add further texture and depth to what has been created, and to how the overall text can be interpreted. Each new element added to the text creates new levels of interpretation, not only for the reader but also for the author to consider in their writing. The author, as designer, has to make decisions about which sound to add and which type of movement to input, and whether they are doing this because they can, or whether they are structuring it to enhance the text they are creating. Do we want a spooky sound when we are taking a walk in the country? Do we really want the car we took a drive in to shoot into the sky and explode, if later we will drive up to our door in it? Or do we want it to drive off the screen to the left, going forward, or to the right, in reverse? Multimodality, in this context, would see the author as a designer for meaning: the text needs an audience and a purpose if it is to be meaningful. The student may be writing for themselves, for their teacher, or for others through publication to the web. Choosing the elements to compile their text from a toolbox requires them to begin thinking about their readers and how they would like their text to be interpreted. Inferring from their own writing, they might consider what meanings their reader will take from their text, and reorganise and restructure its elements accordingly.

On a word level, as a school we have introduced a programme for phonics which sees letters as sound pictures: symbols or images which represent the sounds we make when we speak. Referring back to the ideas of multimodality and semiotics above, and applying this to the computer keyboard, when we input a letter character we are also inputting representations of the sounds we hear when we speak and want to write. I have begun to include 2Create a Story and Tizzy's Presenter as scaffolds, and to apply these ideas as we seek to develop keyboard familiarity with our younger students. This is enabling the younger students to use the computer to extend their emergent writing, applying their growing phonological awareness to develop texts which use spellings that are increasingly phonetically plausible. As children are introduced to two-letter sound pictures and high-frequency words, they are encouraged to use these in their texts. In some of the sessions children draw their pictures, and in some they select graphics from a clipart gallery and add simple texts. Sometimes we use a sentence starter, e.g. "I am..." or "Once there was...", and the children extend and complete this through keyboard input, using their emerging understanding of how the phonetic system works. All sessions begin together, using the interactive whiteboard's keyboard, with children working together to sound out some of the words or structures we are going to use that day. In Year 2 we also use small dry-wipe whiteboards: children rehearse their sentences and texts aloud, use familiar words directly, and first write down unfamiliar words they want to include, which can then be checked and corrected by the teacher, with celebration of the attempts they have made, or discussion about the structures they have used.
The benefit of using ICT and writing directly to screen, however, is that children can have a go first, and the temporary nature of the text means that work for publication can, if necessary, be corrected on screen without them having to go back and rewrite the whole thing. I say if necessary, because some spellings, even though incorrect, are fabulous and worthy of celebration. They give us clues about what to do and where to go next, and provide encouragement to write on. In school I would be happy to display them in the writing area, though if publishing to the web or for a wider audience, I would want to discuss the need for accuracy in the public domain. Initial drafts can be saved to the network as evidence of independent work, and saving the file as Flash after editing or revision means we have a web-publishable, playable document. Paper-based writing for publication can be demoralising for many of our students, especially if they have spent an age creating it and then may have to go back and do it all over again. Perhaps fine motor skills let them down, and no matter how hard they try, their handwriting never looks good enough to them. As a tool for supporting the writing process and reluctant recorders, software such as 2Create a Story offers enormous potential. The writing space is small, and the students feel great when they have written five pages. Structuring input as annotation gives students the possibility of organising their own work chronologically and developing their own writing scaffolds. The ability to alter font size enables differentiation of expectation for student input. Engaging with onscreen text development encourages and promotes keyboard familiarity. Once in the frame, students may be happier to go back and extend their work, revising and adding wow words, as they do not have to rewrite the whole thing. Coming back to texts in this way encourages reading and rereading of drafts.
I know I am certainly better at this since I began using a word processor. Seeing keys as sound pictures potentially enables links to be made between the multimodal aspects of traditional text construction and the idea that symbolic text represents what we want to say. All children are expected to label their texts with their name. We are also extending the idea of starting all of our sentences with a capital letter (using the shift key) and ending them with a full stop. We see these as magic keys, as they help us change the way text looks and also the way text is read. In Year 2, children frequently choose to use 2Create a Story independently. Next steps are to extend this type of activity to become an integral part of the literacy hour or topic-based activities in class.

Thanks to a post by Anthony Evans earlier this term, I have played further with 2Create and learned that I can import photographs and images from outside the software. I didn't realise before his article that you could; it is all 2simple 2make assumptions based on face value. I have also worked with some students to apply their own recorded sounds, including voice, to the texts they develop. Last week, with a group of Year 2s, we created a simple multimedia text as an experiment. They imported their chosen photographs from a visit to Slimbridge Wildfowl Trust, inputted simple sentences to describe their favourite parts of the day, and recorded themselves reading their text as a background for each page. It is very simple, but very powerful. I wanted to use it to introduce the idea that speech marks surround spoken phrases, and intended the students to read only the direct-speech elements of the text. They were too excited for this, unfortunately, and if you read our multimodal text you will see they read the whole thing. However, they enjoyed sharing the outcome with their classmates and teacher, who were wowed by what they had made. Their partner class are now desperate to make a story like it, and so I will need to find the time in my busy schedule to do this. I am looking forward to reading about what Anthony and his lead teacher group have been doing around literacy, ICT and the new Primary Framework.
