A free downloadable lesson, based on a video from StoryCorps, telling the true story of Storm Reyes, who grew up poor in a migrant camp in Washington State. Students start by watching the video without sound, which encourages them to make predictions that they then check on a second viewing. There is then some further comprehension and discussion work, before an activity that helps students develop their listening skills by focusing on the weak forms that are so difficult to hear.
Finally there is a focus on opinion or comment adverbs, before a speaking activity on the topic of books and reading rounds off the lesson.
Love it, hate it, feel guilty about it…or just mystified?
The first #ELTChat on Twitter on 7th November 2012 discussed the phonemic chart (and script), asking ‘Do you use it? Why/why not? And how?’
There were quite a few participants who fell into the ‘love it’ category:
@worldteacher ‘I’m a huge fan of the phonemic chart and use it in every lesson.’
@BobK99 ‘Can’t see how anyone can teach English effectively WITHOUT IPA.’
@Julian_L’enfant ‘I use the phonemic chart with learners of all ages. An excellent resource & important for learner autonomy.’
Others, such as @teflerinha, @KerrCarolyn, @TEFLGeek, @jo_cummins and @sueAnnan, liked using it, but perhaps in a more ad hoc way.
And there were those who didn’t use it, such as @bnleez, @michaelegriffin, @theteacherjames and @garyJones01 (who preferred to use rhymes and tongue twisters).
So the stage was set for a good discussion from different angles, though there were probably more people in favour of using the chart than against. The arguments on both sides are set out below.
It’s complicated, takes time and students don’t know it (@michaelegriffin)
@bnleez: Pronunciation varies to such a degree, I find the Phon. chart would cause more confusion for learners than it needs to be
I found it intimidating as a new teacher, so what about students? (@jo_cummins)
Agree it can be overwhelming for ss and worse for ss without Roman alphabet perhaps? (@teflerinha)
But using symbols empowers students (@worldteacher)
If trained in the chart, students can meet a word outside the classroom and note down its pronunciation themselves (@TEFLGeek)
I agree with learner autonomy (@jo_Cummins)
Teaching in France, ‘improved pronunciation’ is one of the most common objectives (@KerrCarolyn)
I’ve found students who have followed courses w/pron more resourceful than new students entering the school (@Julian_l’Enfant)
ELLs can use online dictionaries with sound files, so IPA is redundant (@bnleez)
I’m still not convinced the sts need to ‘see’ the pronunciation, just hear & reproduce it. (@theteacherjames)
Or a useful visual aid?
Students need to have constant access to internet and to already be able to differentiate between sounds (@TEFLGeek)
Great for use at home or if you have tech in the classroom – otherwise I’ll stick to paper charts & flashcards! (@worldteacher)
seeing it helps with highlighting the difference in form (@sueannan)
doesn’t seeing it help lock it into the memory? (@shaunwilden)
Some find it reassuring – identify visually what ears can’t deal with (@KerrCarolyn)
Or not so much…?
it just looks intimidating but once you start to study it it is logical. (@jo_Cummins)
Many symbols (e.g. consonants and some diphthongs) are self-evident (@teflerinha)
A restrictive model?
Does the idea of a ‘model’ (as suggested by using a uniform chart) undermine local accents & non-native teacher …(@laurahaha)
@teflerinha thought that different allophones (versions of the same phoneme) could be included for each phoneme.
There was also mention of the American IPA- see links below.
Unsuitable for learners without a Roman script, those with literacy problems or YLs?
Previous #ELTChat on similar subject made the point that study showed learning symbols made students with different alphabet more anxious (@michaelegriffin)
Interestingly my CELTA tutor told me its no problem for Korean/Japanese Ss to pick up easily because new scripts r easy (@michaelegriffin)
A new script shouldn’t be too hard. Japanese and Chinese know thousands of characters. And most consonants are the same.(@EBEFL)
@Marisa_C and @teflerinha both raised the point that it might not be a good idea for students who were already struggling with literacy (though @teflerinha would still probably use a few key symbols such as schwa)
There was also a brief discussion about YLs, and here opinion was divided.
I use it with adults but hardly use the symbols with YLs (@prese1)
When teaching YLs we’d use it as a secret code; Lee the Lion and Sid the Snake liked certain types of words.(@Julian_l’Enfant)
What does the chart or phonemic script offer in addition to the teacher’s other resources?
The teacher can model pronunciation (@bnleez) but…
The chart can help with this, as at least Underhill’s version (see link below) shows the mouth positions as well (@teflerinha)
It can really help with showing connected speech, e.g. ‘two apples’ as /tuːˈwæplz/ (@harrisonmike).
This shows the intrusive ‘w’.
@theteacherjames suggested it would be as easy to write ‘tu wapples’
But others disagreed:
No- could be ‘too wapplz’ (@Marisa_C)
No, because people write things phonetically in very different ways (@teflerinha)
‘Tu wapples’ could cause spelling mistakes (@harrisonmike)
@harrisonmike also described giving Farsi and Urdu speakers text in phonemic script to work out, and @teflerinha agreed that this was useful. For example:
‘Would you like to…?’ could be represented as /wʊdʒəˈlaɪktə/, which shows how this phrase sounds in connected speech much more effectively than the words alone.
Other ways to use the chart or IPA
@KerrCarolyn said ‘Colour vowel sounds are great. I use the colors with kids and they love coloring words and poems’
@harrisonmike ‘a good one is with minimal pairs and a maze on the board, e.g. forked paths, one is 15, the other 50, or bin/bean (Pronunciation Games)’ – this is a link to Mark Hancock’s blog, with lots of pron games and activities.
On YouTube you can find Adrian Underhill going through the chart. Part 1 is here.
The British Council interactive chart also shows sounds at the beginning, middle and end of words. You can click to hear these, and it deals with allophones – the way phonemes can sound slightly different in these different positions.
@Julian_l’Enfant suggested Celce-Murcia, Brinton & Goodwin’s for an American English chart (there is also one on Adrian Underhill’s blog, link above).
This statement jumped out at me recently, from an excellent post by Robin Walker on Pronunciation for YLs. He was reporting from a talk by Catherine Walter at IATEFL 2008, and reading this (perhaps surprising) statement sent me off to find her original research.
In her article Phonology in second language reading: not an optional extra, Walter questions the idea that the skill of reading is something which needs to be taught to second language learners. She argues that successful L1 readers already possess the cognitive skills needed to build a mental structure or representation, which is, in fact, what we mean by ‘comprehension.’ According to Walter, comprehension isn’t actually a linguistic skill, so it’s fruitless to talk about transferring it from L1 to L2.
So why are some learners competent at reading in L1, but struggle with reading in L2? Obviously, lack of linguistic knowledge plays a part. Fascinatingly, though, Walter cites a study (Robertson et al., 2000)* which used MRI scanning to show that sentence comprehension and comprehension of a text as a whole take place on different sides of the brain.
Walter doesn’t mention this, but of course there has been a lot of speculation about left-brain and right-brain thinking. Interestingly, sentence-level comprehension, or decoding, uses the left frontal lobe (associated with analytical, logical processing), while text comprehension uses the right frontal lobe (associated with intuition).
So a learner may have effective (non-linguistic) comprehension skills, and even be able to decode L2 sentences, but still struggle. Why?
According to Walter, it may be to do with how we use our working memories. Part of the working memory is something called the phonological loop:
‘a short-term memory mechanism that stores information in phonological form and automatically rehearses that information by unconscious sub- vocalisation.’ (Walter 2008)
In other words, as we listen, we automatically ‘record’ the last two seconds of what we hear, like a little Dictaphone. It’s why we can repeat back what we have just heard, even if we weren’t really listening properly to the speaker.
There is also evidence that we do exactly the same thing as we read – that we also sub-vocalise and record the sound of what we are reading. We don’t see it, we hear it. At least, this is true for those of us whose L1s are alphabetic (there is some evidence that learners with non-alphabetic L1s may use more visual representations).
However, if our phonological representations of what we have just read are unreliable, we may find it difficult to associate these sounds with meanings, and thus find it difficult to keep meaning in our short term memory. This, in turn, will make it harder for us to carry out meaning building processes on the text as a whole.
Walter’s research in this paper concludes that unless learners are also poor at comprehension in L1, we would be better off teaching them to improve how they ‘mentally represent spoken language’ than teaching comprehension skills.
So, in practical terms: as much exposure as possible to the spoken language – lots of listening and watching videos in English.
Listening while reading – hearing the spoken version while reading a text. This could mean using sub-titles, or following a transcript while listening, or listening to an audio version of a written text while reading. I would suggest that these don’t have to be instead of our usual listening or reading activities (I’m not quite ready to throw out more traditional reading and listening procedures), but as a follow up.
And finally, explicit focus on features of pronunciation, such as minimal pairs work, word stress, and how words change in the stream of speech. This kind of work will help learners to develop a more reliable repertoire of L2 sounds which, Walter suggests, could also help them to hold what they are reading more efficiently in their short-term memory, which in turn will greatly help with building up the meaning of the text.
And even if this isn’t the case, it will certainly help with developing fluency in speaking and confidence in listening, so what have we got to lose?
* Robertson, D. A., Gernsbacher, M. A., Guidotti, S. J., Robertson, R. R. W., Irwin, W., Mock, B. J., et al. (2000). Functional neuroanatomy of the cognitive process of mapping during discourse comprehension.
Patient: Doctor, Doctor, I’ve got two theik, a near rake, sore rise, bruise darms, a stummer cake and I far tall the time.
Doctor: I see, perhaps you’d like to way tin the corridor?
(Try reading it aloud)
The joke [apologies for the vulgarity 😉] showcases a good number of examples of features of connected speech. Teachers can tend to shy away from highlighting these in the classroom, but research shows that teaching learners about connected speech can really make a difference in terms of how well they understand native speakers. See, for example, Authentic Communication: whyzit important ta teach reduced forms (Brown 2006). Equally, some ability to use these features in their own speech is likely to make students more confident and fluent speakers.
Features of connected speech
As a brief overview, there is a strong tendency in English to simplify and link words together in the stream of speech, in order to help the language flow rhythmically. Some of the most common features:
Assimilation
This is when the sound at the end of one word changes to make it easier to say the next word. For example:
‘ten boys’ sounds like ‘tem boys’ (the /n/ sound changes to the bilabial /m/ to make it easier to transition to the also bilabial /b/)
Incidentally bilabial just means two lips together, which is a good example of the kind of jargon that puts people off!
Catenation (linking)
This is when the last consonant of the first word is joined to the first vowel of the next word. This is very common in English, and can be very confusing for students. For example:
‘an apple’ sounds like ‘a napple’ (Teacher, what is a napple?)
Elision
Elision means that you lose a sound, often from the middle of a consonant cluster, and sometimes from the middle of a word. E.g. ‘sandwich’ becomes ‘sanwich’.
Or from the end of a word. For example:
‘fish and chips’ sounds like ‘fishnchips’
Intrusion
This is when an extra sound ‘intrudes’. There are three sounds that often do this: /r/, /j/ and /w/.
E.g. ‘go on’ sounds like ‘gowon’
‘I agree’ sounds like ‘aiyagree’
‘law and order’ sounds like ‘lawrunorder’
[I probably should have used a phonemic keyboard!]
If you want to discover more about features of connected speech- and I think it’s fascinating stuff, there’s a list of useful books at the end of the post, but now let’s look at some activities to help raise awareness and encourage more natural sounding speech.
Connected speech activities
I remember reading somewhere that there are three ways to deal with pronunciation in the classroom: integrating it into other activities, dealing with it discretely, and completely ignoring it. 😉 Let’s assume we aren’t going to do the latter, and look at the other two approaches.
I strongly believe that students should be made aware of the basics of connected speech right from the start. I don’t mean that you should be teaching your beginners exactly what catenation is, but you can certainly show them how words link together and what happens to sounds in the stream of speech. You don’t have to be an expert, and you don’t even need to know very much about the technical aspects; you just need to listen to yourself very carefully and notice what is happening in your mouth as you speak.
Drilling and using the board
At lower levels, we tend to teach quite a lot of functional chunks, such as ‘What’s your name?’ Phonemically that could be transcribed as /wɒtsjəneɪm/. However, this is likely to confuse (terrify) the students. Instead, using the board, you can just show the students how the words link by using arrows, and write the schwa /ə/ over the top of ‘your’. Alternatively, you can use your fingers to show how the three words (separate fingers) meld into one long sound (push fingers together). And model and drill the phrase as it is said naturally.
If students struggle with longer phrases, try the technique of back-chaining, starting from the last sound and working up to the whole phrase bit by bit. For example, with ‘Where do you come from?’ you drill ‘frum’, ‘kumfrum’, ‘dz-kumfrum’, ‘where-dz-kumfrum’. I have no idea why this works – but it does.
Using recording scripts
Where new language has been recorded (or you can record it yourself), ask students to first look at the chunk of language written down and try saying it a few times. Then play the recording several times and ask them to write down what they hear, however they want to spell it. Use the two written forms to elicit the differences (such as the use of the schwa) and then drill the more natural pronunciation. You could of course just say the phrase for them, but it can be hard to keep repeating something in exactly the same way.
Make it part of presenting new language
Whenever you are dealing with new language, you need to be thinking about the meaning, the form AND the pronunciation. So if you’re teaching ‘Have you ever + past participle’, make sure you’re teaching it as something like /əvjuːˈwevə/, not ‘Have… you… ever…’ You don’t need to explain that the first /h/ is elided or that there’s an intrusive /w/ – just provide a good model.
Incidentally, I say ‘something like’ because individual ways of connecting and simplifying speech do vary a bit.
Be aware of the difficulties connected speech may cause with listening
If students struggle to understand something in a recording, or that you say, be aware that they may actually know all the words, just not recognise them in the stream of speech. A great example of this is the student who asked me what ‘festival’ meant. I went into an explanation, giving examples of different festivals…but teacher, he said, why do you always say it at the beginning of the lesson? (I was saying First of all…).
If students don’t understand a phrase, see if they do understand it written down and then take the opportunity to highlight the differences between the written and spoken forms.
As well as teaching connected speech as you go along, it is also worth doing some discrete activities for the purpose of awareness-raising.
A good activity to start learners thinking about connected speech and weak forms is to dictate just part of some phrases. For example: ‘uvbin’. After students have written these down as best they can (this should be a light-hearted activity), you dictate the full phrase, in this case ‘I’ve been to Paris’.
After doing a listening activity, try a dictation where you hand out the recording script with chunks of 2–3 words missing. These should include some aspects of connected speech. Students have to complete the gaps, which will help to develop their decoding skills.
Mark Hancock has some great activities in Pronunciation Games and on the HancockMacDonald website. I particularly like The Word Blender, a game for A2/B1 students which starts to help students identify some of the features of connected speech.
This is necessarily a very brief and somewhat simplistic overview. For more information and ideas, you could try:
Drilling has certainly fallen out of favour in recent years. Strongly associated with the behaviourist approach, it is often seen as non-communicative, boring, patronising… A recent #ELTChat on the subject brought up all the negatives, but also provided a long list of positive reasons for drilling. For example:
Helping learners get their tongues round new words
Picking up pace and getting students’ attention
Developing ability to produce (and understand) connected speech
And perhaps the key reason, for me at any rate: drilling or repetition is an important step towards fluency. Especially at lower levels, it is quite natural to rehearse (at least mentally) before tackling a speaking situation. Repeating something helps us to ‘notice’ what we are repeating and assimilate it into our store of language. The ELTChat I referred to concentrated mostly on drilling words or single chunks. There are plenty of benefits to this, but in this post I want to concentrate on some techniques which are probably even more out of favour: drilling and repeating dialogues and narratives.
I did my CELTA so long ago that it wasn’t even called that then (!). It was also at IH in Cairo, which I rather suspect was a little behind the times in terms of materials. The result was that my initial training centred around the coursebook series Streamlines and Strategies. Lots of drilling and repetition.
One of the first techniques I learnt was a dialogue build. For the uninitiated, it goes like this: Set up the situation, using a photo (or in pre-IWB days two stick figures on the board). Elicit where the characters are, who they are, what’s happening and so on. This is often a service encounter (e.g. in a café), but can be anything you like.
Then you elicit the dialogue from the students, line by line. As you accept each line, you help learners correct it if necessary and then model the final version, with appropriate connected speech and intonation, getting the students to repeat it. You DON’T write the dialogue on the board, but do indicate where each line starts and who is speaking. You might also add question marks or little visual clues. As you go through the dialogue you keep returning to the start, getting students to keep repeating the dialogue, and thus memorising it. You can do this whole class, or divide the class into the number of characters or ask individuals to do each line. It’s good to have a bit of variety here.
Once the class knows the dialogue by heart, they can practise a little more in pairs, changing roles. You can also have a bit of fun with it by getting one of the characters to change their answers, so that the first person has to react spontaneously. (Good idea to model what to do first with lower level learners) Finally, you can elicit the dialogue once more, writing it onto the board so that the students have a clear written version. Alternatively, you could get students to come up and write it on the board, giving scope for some work on correcting spellings, missing articles and so on.
A dialogue build is a great technique to have up your sleeve for a last minute cover lesson. It’s obviously most appropriate for lower level learners, but could be done with more advanced students if the chunks of language elicited were more demanding. At the end of it, the students have memorised a whole set of (hopefully useful) chunks of language and can produce them fluently at will. Fluent speakers are essentially those who have enough chunks of language that they can stick together to keep going, so teaching chunks in this way is a real help.
It’s also an excellent activity to do with learners with low levels of literacy. I can’t understand why it isn’t an ESOL staple. You don’t need any equipment except a board and pens (great for those community halls), it doesn’t rely at all on reading and writing, it can be adapted for specific situations that learners might have to deal with (ringing for a doctor’s appointment, for example) and once it’s elicited onto the board it provides a copying activity where the meaning is already very clear (and was indeed student generated).
Instead of eliciting the whole dialogue, you could give them one half of the dialogue (i.e. all that one person says) and elicit the missing responses. Then proceed as above. A good alternative for learners who can read quite well is to start by writing the elicited dialogue onto the board. Drill as a class, making sure you are giving a good model of natural pronunciation, then ask learners to practise it in pairs. As they are practising, gradually wipe words and lines off the dialogue. As it disappears, they have to remember more and more. Finally, you can re-elicit it onto the board, or get them to write it on the board or in pairs on paper. This could work equally well with a narrative.
Another old, but great, idea comes from Mario Rinvolucri’s book, Dictation. In this activity, you do the repeating (at least to start with) and the students listen and mime the actions. It works really well with younger learners, but if you have a lively class, adults could enjoy it too. Here’s the text (slightly adapted):
You’re standing in front of the Coke machine. Put your hand in your back pocket. Take out three 50p coins. Put them in one by one. You hear the machine click. Choose your drink and press the button. You hear a terrible groan from the machine. Clunk! A can drops down. Pick it up. Open the can. It squirts Coke in your face. Take a tissue out of your pocket. Rub your eye. Lick your lips. Take a sip. Burp!
First read the text right through, just to orientate students. Then read it again and elicit a movement for each line. Get all the students doing it. Then read a third time with all the students doing all the movements. You can make this stage as fast as possible if you want a bit of fun. Then give the students a version of the text with most of it missing. They have to work together to recreate the text.
This is a form of dictogloss, but the difference is that doing the actions should help them to remember what’s missing. If they get stuck, get them to do the actions and try and remember that way. This is the stage at which they should be drilling the language themselves, as they try to recall it. You can obviously differentiate this activity by giving less of the text to more able students and vice versa. Finally you get the whole class to carry out the actions while saying the text (from memory).
All of these ideas are extremely preparation-light and student generated. They provide a way to help learners appropriate new chunks of language to their store, and the challenge of memorisation also provides interest and stimulation. Maybe it’s time for a revival?
At the weekend I was lucky enough to catch Sam Shephard’s lively session on pronunciation at the NATECLA conference in Liverpool. His session focused mostly on productive pronunciation, but as I was presenting on the same day on decoding skills for listening, I found myself thinking more about the role of pronunciation work in decoding- and specifically about minimal pairs.
When I first saw this advert for Berlitz language schools on YouTube, I was struck by how clever it is.
But, apart from in this rather specific context, how important is it really that learners can understand or pronounce the difference between /θ/ and /s/?
Minimal pairs, minimal importance?
It seems that misunderstandings in natural speech are rarely caused by the mispronunciation of one sound. Usually context gives us enough of a clue to understand what the speaker is trying to say. Adam Brown gives a good example in his 1995 article, Minimal pairs, minimal importance?:
‘Singapore is one of the busiest ports in the world. However, it is a tiny island (the size of the Isle of Man) with a population of three million. Consequently, land is at a premium, and there are no animal farms. The nearest most Singaporeans come to sheep is mutton curry. In short, if Singaporeans don’t pronounce the distinction between ship and sheep clearly, the chances of misunderstanding are minimal: they are almost certain to mean ship.’
Similarly, Jenkins (2000) found that /θ/ rarely caused misunderstandings between NNSs, and she also points out that many native speaker varieties don’t use it anyway, often using /t/ or /f/.
So should we chuck out the minimal pairs work?
Can minimal pairs help L2 listeners decode more effectively?
Well, according to John Field (2008), there is evidence that L2 listeners process in words, but that ‘many of the matches they make are rough approximations that do not correspond exactly to the sounds that the listener heard.’ In other words, an inability to recognise certain phonemes leads learners to make inaccurate guesses about words, which in turn could take them quite seriously off track as they apply top-down skills to their guesses. For example, the listener who hears ‘screams’ instead of ‘screens’ is likely to go quite a way off track.
It is certainly true that context could help here- but that is making the assumption that learners are able to use their top down skills effectively when, Field and others argue, learners who are unable to decode effectively, usually can’t hold onto enough meaning to start stringing ideas together.
So there is certainly an argument for using some minimal pair work, especially at lower levels – though we probably do need to be quite selective about which phonemes we choose to focus on.
Sounds that carry a high functional load are used to distinguish between a significant number of words; sounds that carry a low functional load distinguish very few. For example, Brown (1995, above) says that the only minimal pairs in English for /ʃ/ and /ʒ/ are:
With a monolingual group, it should be fairly straightforward to find out which pairs are causing the most problems. A book like Learner English can be helpful, or simple observation. Obviously, with a mixed-nationality group, tricky minimal pairs are likely to vary, but there are some which seem to be difficult for speakers of many different languages and have a high functional load, such as /e/ and /æ/, and /æ/ and /ʌ/.
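As an aside for the technically minded, the idea of functional load can be made concrete with a little code. The sketch below is only an illustration (it is not from any published tool, and the simplified phonemic transcriptions in it are my own assumptions): it finds the word pairs in a toy lexicon that are distinguished solely by a given pair of phonemes. The more pairs a contrast separates, the higher its functional load.

```python
# Toy functional-load sketch: find minimal pairs for a given phoneme contrast.
# The lexicon and its transcriptions are illustrative assumptions only.
from itertools import combinations

# word -> list of phonemes (deliberately simplified)
lexicon = {
    "ship":  ["ʃ", "ɪ", "p"],
    "sheep": ["ʃ", "iː", "p"],
    "bin":   ["b", "ɪ", "n"],
    "bean":  ["b", "iː", "n"],
    "pin":   ["p", "ɪ", "n"],
    "pen":   ["p", "e", "n"],
}

def minimal_pairs(lexicon, phoneme_a, phoneme_b):
    """Return word pairs differing in exactly one position,
    where that difference is phoneme_a vs phoneme_b."""
    pairs = []
    for (w1, p1), (w2, p2) in combinations(lexicon.items(), 2):
        if len(p1) != len(p2):
            continue  # different lengths can't be a one-sound difference here
        diffs = [(a, b) for a, b in zip(p1, p2) if a != b]
        if len(diffs) == 1 and set(diffs[0]) == {phoneme_a, phoneme_b}:
            pairs.append((w1, w2))
    return pairs

print(minimal_pairs(lexicon, "ɪ", "iː"))
# → [('ship', 'sheep'), ('bin', 'bean')]
```

With a realistically sized pronunciation word list, comparing the number of pairs returned for, say, /ɪ/–/iː/ against /ʃ/–/ʒ/ would show in numbers why the first contrast is worth classroom time and the second much less so.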
The first point to make is that ideally, learners should be able to see the link between the minimal pair work and what they are listening to. For example, if a number of learners have heard ‘scream’ instead of ‘screen’, that would be a perfect opportunity to do some minimal pair work on /m/ and /n/.
The second point is not to overload the learners. I wouldn’t suggest working on more than one pair of sounds at a time.
The third point (made by Field) is that ideally words used should be relatively frequent and of roughly equal frequency. So bin and pin would be OK, but perhaps not blade and played.
There are lots of ideas for working on minimal pairs (some of which came up in Sam’s session, mentioned above).
Some different ways for learners to show they can differentiate the two sounds:
Put the two words in each pair on different sides of the board and learners put up their left hand/right hand according to which they think they hear.
Alternatively, learners can physically move to the right or left side of the classroom.
Put the words on cards and learners grab the right card, either in small groups with little cards or, with big (sturdy) cards, you can have learners line up so one from each team is in front of the board and they race to grab the right word from there.
For a more sedate activity, learners write down what they think they hear.
Learners say if the words you say are the same or different.
Obviously, all the activities above can be done with a learner providing the model, but then it becomes oral work rather than listening, and they will need help to know how to make the sounds.
If learners have literacy issues, the above activities could potentially be done with pictures rather than words.
And if one of the words in the pair you want to use is not very frequent (e.g. played/blade), you could still do the activity but just write the frequent word on the board and ask ‘Same or Different?’
A more contextualised task, which would make the relationship to listening clearer, might be to select a phrase or short section from something they have listened to which contains a lot of the two sounds (not necessarily in minimal pairs) and ask them to mark the two phonemes.
E.g. ‘Looking after rabbits is really easy’ might work well for /r/ and /l/.
Clearly working on minimal pairs is much trickier with a multi-lingual class. As mentioned earlier, there are some vowel sounds which a lot of people find tricky. Alternatively, learners could be given different sounds to work on, according to needs. There are now quite a few websites (for example www.shiporsheep.com) where learners can listen to minimal pairs, so this kind of differentiated activity could be set as homework.