
Friday 10th December (11:00-20:55 SGT) & Saturday 11th December (10:30-20:15 SGT)

For a full session list including breakout sessions and timings visit our conference platform page here.

Plenary Speakers

With a special welcome keynote speech from British Council Chairman Stevie Spring CBE and an interview with CEO Scott McDonald.

Panel Discussion: Artificial Intelligence: opportunities and challenges for the future of language assessment

CHAIRED BY Professor Barry O'Sullivan OBE (Assessment Research Group, British Council)

Artificial intelligence (AI) refers to the concept of computer systems being able to mimic actions that normally require human intelligence, and though it may bring up images of maniacal robots taking over the world, AI has many useful, real-world applications, not least in language assessment and testing. In this session, our panellists will look at how AI can be used in areas such as automated scoring and remote proctoring, as well as considering the benefits and challenges (both practical and ethical) that come with operationalising AI in testing systems.

Panellists

Panel Discussion: English-Medium Education (EME) - Assessment Issues in an East Asian Context

CHAIRED BY Ann Veitch (British Council) 

English Medium Education (EME), often referred to as English Medium Instruction (EMI), “refers to the use of the English language to teach academic subjects (other than English itself) in countries where the first language of the majority of the population is not English” (Macaro et al., 2018). Around the world, EME is experiencing rapid growth in popularity, with numerous Outer Circle countries offering university courses in English. The effects of this trend have been under-researched, not least when it comes to assessment. In this panel, our speakers will discuss the implementation and effects of EME on a range of assessment-related issues in the East Asia region. Particular focus will be given to projects conducted in Japan, Thailand, South Korea, Vietnam and China. 

Panellists

Panel Discussion: Equality, Diversity & Inclusion in Language Assessment 

CHAIRED BY Matt Burney (British Council)

Many organisations in the world today actively promote EDI (Equality, Diversity and Inclusion), aiming to challenge prejudice and discrimination, and encourage fair treatment and opportunity for all. In this panel, our speakers will discuss EDI and accessibility within English language learning and assessment in East Asia. Special attention will be given to high-stakes testing, and the challenges faced by test developers and administrators to ensure that all candidates, regardless of background, identity, learning differences or abilities, have an equal opportunity to participate fully and achieve their best.

Panellists

Panel Discussion: Climate Action in Language Assessment

CHAIRED BY Chris Graham (ELT Footprint)

In common with all stakeholders in ELT, the testing and assessment community has an environmental footprint. The organisations operating in this space are developing ways of at least partly mitigating this negative impact, while retaining robust processes that ensure absolute integrity and, given the digital divide in many locations, equity of access. In this session the panellists will discuss their perception of the testing and assessment community's negative impact on the environment; outline the steps their organisations are taking to reduce that impact, and how effective those steps have been; detail the challenges these measures present (including their understanding of the climate impact of digital delivery); describe what legacy they see from the Covid-19 pandemic; and suggest how technological developments in the next few years may allow the sector to become substantially greener.

Panellists

Breakout Session Themes

  • INCLUSIVITY AND ACCESS IN LANGUAGE TESTING – Making language tests more inclusive and accessible to all.
  • FUTURE-READINESS: LEARNING FROM THE LEGACY OF COVID-19 – How to make language testing systems more resilient and innovative.
  • ASSESSMENT IN COMPREHENSIVE LEARNING SYSTEMS: AT THE POLICY LEVEL – Driving positive change in educational policy and empowering learners.
  • ASSESSMENT IN COMPREHENSIVE LEARNING SYSTEMS: IN THE CLASSROOM – How teachers respond to changes in testing and assessment.
  • CREATING OPPORTUNITIES: ASSESSMENT FOR WORK – How tests can be used to increase access to the workplace.
  • AUTOMATED LANGUAGE TESTING – Implementing automated testing in a responsible way.
  • THE IMPACT OF LANGUAGE ASSESSMENT ON THE ENVIRONMENT, NOW AND IN THE FUTURE – Exploring the environmental impact of testing and making testing more environmentally friendly.

Thursday 9th December

Pre-conference Workshops

Thursday 9th December 16:00-17:30 SGT

Workshop: Peering into the black box: a new approach to measure how well an AI auto-scoring system works

FACILITATED BY William Bayliss (British Council) and Trevor Breakspear (British Council)  

Many language test developers and teachers have come to embrace the use of auto-scoring systems despite the lack of a common approach to measuring how fair they are to learners. The unexplainable nature of “black box” scoring systems, combined with the reluctance of many developers to share details of proprietary technology, is likely to have contributed to this gap. To facilitate improved developer communication, transparency and accountability, Mitchell et al. (2019) responded by devising an accessible reporting format, the model card, for evaluating the use of AI scoring models.

In this workshop, we start with an overview of the model card framework, including its aims, content and outputs. We then apply the framework to an auto-scored, low-stakes placement test designed by the British Council and developed with a technology vendor. We will provide participants with data used in the validation of the placement test and together explore what the model card framework reveals about the appropriate use of the auto-scoring system in our context. We conclude by reflecting on how these insights can encourage a more inclusive conversation about the appropriate use of AI in language assessment.

We hope participants will gain the following from this workshop:

1. An introduction to key questions that could help teachers decide which auto-scored assessment solution to adopt.

2. An awareness of how to identify sources of bias in auto-scoring systems.

3. An understanding of how bias could cause unfair scoring.
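To make the second and third points concrete: one common way a model card reports potential bias is to compare an auto-scorer's accuracy across candidate subgroups. The sketch below is purely illustrative and is not the workshop's actual method; the data, group labels and disparity threshold are all invented for the example.

```python
# Illustrative subgroup fairness check of the kind a model card might report:
# compare an auto-scorer's error against human ratings across candidate groups.

from statistics import mean

def mean_abs_error(pairs):
    """Mean absolute difference between auto-scores and human ratings."""
    return mean(abs(auto - human) for auto, human in pairs)

def subgroup_report(scores_by_group, disparity_threshold=0.5):
    """Compute per-group error and flag groups whose error deviates from
    the overall mean error by more than the threshold."""
    errors = {g: mean_abs_error(p) for g, p in scores_by_group.items()}
    overall = mean(errors.values())
    flagged = [g for g, e in errors.items()
               if abs(e - overall) > disparity_threshold]
    return {"per_group_error": errors, "overall_error": overall,
            "flagged": flagged}

# Invented (auto-score, human-score) pairs for two first-language groups
data = {
    "L1_A": [(5.0, 5.0), (4.0, 4.5), (6.0, 5.5)],
    "L1_B": [(3.0, 5.0), (4.0, 5.5), (2.5, 4.0)],
}
report = subgroup_report(data)
```

In this toy data the auto-scorer under-rates the second group much more than the first, so both groups deviate from the overall error and are flagged for investigation; a real model card would report such disparities alongside details of training data and intended use.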

Reference

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19), January 29–31, 2019, Atlanta, GA, USA. Retrieved 3 August 2021 from https://arxiv.org/pdf/1810.03993.pdf

Thursday 9th December 18:00-19:30 SGT

Workshop: Separate or integrate? Assessing speaking as a discrete and integrated skill

FACILITATED BY Richard Spiby (British Council) and Carolyn Westbrook (British Council)

In many classrooms around the world, speaking as a language skill is often neglected due to the practical constraints of class size and the time available for practice, as well as the perceived difficulty of assessing speaking skills. In this workshop, we will consider practical ways in which we can assess speaking skills in the classroom.

We will consider what speaking is and some of the key issues involved in assessing it. Then we will look at different types of tasks that students can be given to develop their speaking skills and discuss the advantages and disadvantages of different ways of assessing speaking in the classroom.

In addition, we will examine the benefits and drawbacks of assessing speaking discretely or in an integrated way, in recognition that a great deal of spoken interaction in learning environments and in real life involves the integration of speaking with other skills. Drawing on the interaction and mediation scales in the Common European Framework of Reference Companion Volume, we will present some examples of how integrated skills can be operationalised in task design and scoring. During the workshop, participants will be invited to share their experiences of teaching and assessing speaking in their own contexts, and to complete hands-on activities related to important aspects of assessing speaking in both a discrete and an integrated way.
