Education for a Changing World: the implications of AI for Education

I am really delighted to be speaking at a fascinating event in Sydney, New South Wales where they are embracing the need to plan for the impact of AI in Education. The symposium is part of the Education for a Changing World project of the NSW Department of Education that is examining the implications that advances in technology will have for education. The project aims to stimulate informed discussions about how we should be preparing children to thrive in an increasingly complex and interconnected world.


The project’s first discussion paper explores some of the department’s initial thinking about the challenges of the big technological, economic, demographic and social shifts occurring around the globe. In addition, the department has established the Education: Future Frontiers Occasional Paper Series to widen the evidence base and strengthen the case for change. The paper I contributed is published below:

Occasional Paper: The implications of Artificial Intelligence for teachers and schooling

Rose Luckin, Professor of Learning with Digital Technologies, University College London Institute of Education’s Knowledge Lab


Most people in countries where modern technology is widely used will be interacting with Artificial Intelligence (AI) through its many practical applications in computers that have visual capabilities, that can learn, solve problems, make plans, and understand and produce natural language, both spoken and written. These AI applications are used in areas such as medical diagnosis, language translation, face recognition, autonomous vehicle design and robotics.


AI is also already being applied in educational settings. For example, Alelo has been developing culture and language learning products since 2005 and specialises in experiential digital learning driven by virtual role-play simulations powered by AI. Carnegie Learning produces software that supports students with their mathematics and Spanish studies. In order to provide individually tailored support for each learner, the software must continually assess each student’s progress. The assessment process is underpinned by an AI-enabled computer model of the mental processes that produce successful and near-successful student performance.


UK-based Century Tech has developed a learning platform, with input from neuroscientists, that tracks students’ interactions down to every mouse movement and keystroke. Century’s AI looks for patterns and correlations in the data from the student, their year group, and their school to offer a personalised learning journey for the student. It also provides teachers with a dashboard, giving them a real-time snapshot of the learning status of every child in their class.


These examples merely scratch the surface of what is possible with AI. The purpose of this paper is to explore how AI is relevant to education and what AI can contribute to teaching and learning to help students and educators progress their understanding and knowledge more effectively.


The relevance of AI to education


In order to benefit from the potential advantages of AI – from personalised cancer treatment specified according to individual genetic profiles generated by AI to workplace automation that increases productivity – we must attend to the needs of education as a matter of urgency.


To be blunt, none of the potential AI benefits will be achieved at scale unless we address education and AI now. The nature of what needs to be done is summarised in Figure 1, which illustrates the elements involved in the AI and education knowledge tree. There are two key dimensions that need to be addressed:


  1. How can AI improve education and help us to address some of the big challenges we face?
  2. How do we educate people about AI, so that they can benefit from AI?


This paper will examine both of these dimensions in turn.

Figure 1: The AI and Education Knowledge Tree



Dimension 1. Addressing educational challenges with AI


The thoughtful design of AI approaches to educational challenges has the potential to provide significant benefits to educators, learners, parents and managers. But it must not start with the technology; it must start with a thorough exploration of the educational problem to be tackled.


A clear specification of the problem provides the basis on which a well-designed solution can be developed. Only when a solution design exists can we start to consider what role AI can best play in that solution and what type of AI method, technique or technology should be used within it. There is an obvious and important role for teachers in the pursuit of a problem specification and solution design. Without this enterprise, technologists cannot design effective AI solutions to the key educational challenges recognised across the globe.


Identifying the problem


For the purposes of this paper, I’ll take the definition of AI used in the Oxford dictionary, which defines AI as: computer systems that have been designed to interact with the world through capabilities (for example, visual perception and speech recognition) and intelligent behaviours (for example, assessing the available information and then taking the most sensible action to achieve a stated goal) that we would think of as essentially human.


AI is an interdisciplinary area of study that includes psychology, philosophy, linguistics, computer science and neuroscience. The study of AI is complex and these disciplines are interlinked as we strive for a greater understanding of human intelligence while attempting to build computer technology that behaves intelligently.


A key aspect of this definition that is often overlooked is the initial statement about an AI being a computer system that has been designed to interact with the world in ways we think of as human and intelligent. In current discussions of AI in the media, for example, we tend to focus on the AI technology rather than the problem and the design process that has preceded and informed the implementation of the AI technology. This is ironic, because the most important aspect of AI is the identification of the problem to which intelligence is to be applied and the design of a clear understanding and representation of that problem.


Without this problem specification process, there is no chance of developing a good solution to which AI technology can be applied. The AI designer must have a good understanding of the problem AI is supposed to solve, as well as of the type of AI technique that might be appropriate. The features of the problem must be specified along with the features of the environment in which the AI must operate. Once we recognise the importance of the AI design stage, we can start to unpack the relevance of AI to teaching and learning and the vital role that educators need to play if AI is to realise its potential benefits for education.


I remember when I was an undergraduate studying AI, one of the hardest final year examinations was a paper that we could complete outside normal exam conditions over a three-day period. The paper presented candidates with a selection of problems. For example: a complex of road junctions where fluctuating traffic flow rates and poor visibility had resulted in a series of accidents, or a teacher who needed to provide support for a class of language learning students who were all at very different levels of proficiency.


As students we were required to select one of the problems, describe the problem as we understood it including any assumptions we were making, develop a potential solution and design the AI techniques and technologies that could be used to implement our proposed solution. The first example problem requires predominantly a planning or possibly a computer vision approach, whereas the second is more likely to be concerned with knowledge collation and representation, possibly also knowledge elicitation. Students were not required to implement any technology or write any code; the paper was designed to test our design skills.


My point here then is that when we ask how AI can contribute to teaching and learning, we need to start from the problems that we believe need to be tackled.


Designing solutions


Thinking about the problem specification and solution design stage of AI should prompt us to start considering how AI could help us to transform problematic educational activities and bring about changes to the working lives of teachers: changes that would make best use of teachers’ uniquely human skills and abilities, and that would remove much of their routine administration, record keeping and assessment work.


Before looking at examples of the changes that could be made to the job of being a teacher, it is important to consider briefly the changes to the workforce that are likely to occur, partly due to the automation brought about by AI. Schools will need to ensure that they equip students to be effective in the future workforce and educators will therefore need to know which skills, abilities and knowledge are most valuable for their students to learn.


The impact of technology, particularly automation, on employment is a key topic of debate at the moment in much of the western world. Predictions about the future pace of AI-driven technological change have historically been over-optimistic. In fact, the jobs and skills composition of a workforce has tended to change only gradually over time.[1] The most dramatic historical shift was from agriculture to industry rather than an ICT-driven transformation. Current estimates of the impact of future automation on the number of jobs and the types of jobs most at risk vary. See Marc Tucker’s essay Educating for a digital future – the challenge as part of this series for detailed consideration of these issues.


Some jobs are more likely to be augmented by AI than replaced through the automation of specific tasks. For example, lawyers routinely conduct document reviews, a task that can be automated in some contexts. However, lawyers also provide advice to their clients and complete negotiations for their clients, and these tasks are much harder to automate. Not only does this suggest that there is not a clear one-to-one relationship between a job lost and a task automated, but also that the coordination of the different tasks between machines and humans may be a new job in its own right. The situation is made more complex by the many factors at play beyond automation: globalisation, environmental sustainability, urbanisation, increasing inequality and political uncertainty, for example.


The only thing we can be sure about is that the future workplace will be uncertain and unpredictable, and that our students will therefore need to be able to cope with this uncertainty: to be resilient, flexible, lifelong learners. The way to achieve this is to focus on individuals as learners and enable them to be effective for themselves, and with and for others and society too.


The key skill people will need for their future work lives will be self-efficacy – by this I mean that every individual needs to have an evidence-based and accurate belief in their ability to succeed in specific situations and to accomplish tasks both alone and with others. A person’s sense of self-efficacy plays a key role in how they tackle tasks and challenges, and how they set their goals, both as individuals and as collaborators. It is something that can be taught and mentored, and it requires an extremely good knowledge of what one does and does not know, what one is and is not so good at, where one needs help and how to get this help. This self-knowledge is not just about subject-specific knowledge and understanding, but also about one’s wellbeing, emotional strength and intelligence.


This self-knowledge and efficacy is particularly important because these are skills that AI cannot replicate. No AI developed to date understands itself; no AI has the human capability for metacognitive awareness and self-knowledge. We must therefore ensure that we develop our knowledge and skills to take advantage of what is uniquely human, and use AI wisely to do what it does best: the routine cognitive and mechanical skills that we have spent decades instilling in learners and testing in order to award qualifications.


The implications of this for school systems, the curriculum and teaching are profound and educators must engage in discussing what needs to change as a matter of urgency. This is not a job for the technologists, but if we do not motivate educators to engage in discussions about what AI could and should be used for in education the large technology companies may usurp the educators and occupy the AI vacuum that a lack of engagement will produce.


Leveraging AI to enhance teaching and learning


What I hope is clear from the discussion about the future of the workforce is that we need to review what and how we teach and ensure that AI is designed and used as a tool to make our students (and ourselves) smarter, not as a technology that takes over human roles and dumbs us down. To achieve this we need to concentrate on developing teaching and schooling that develops the uniquely human abilities of our students as well as instilling within them the requisite subject knowledge in a flexible, interdisciplinary and accessible manner.


The parallel in teaching is that we need AI assistants to relieve teachers of the routine, automatable parts of their job and enable them to focus on human communication, sensitive scaffolding and supporting their students’ wellbeing, so that students can build the self-knowledge and self-efficacy that will enable them to advance in their chosen workplace.


Three examples of the ways in which teaching and schooling could be re-imagined are presented below; each is driven by a significant educational challenge.


Example 1: Assessing what can’t be automated, not what can easily be automated

The current outdated assessment systems that prevail across the world revolve around testing and examining the routine cognitive subject knowledge that can easily be automated. These assessment systems are also ineffective, time consuming and the cause of great anxiety for learners, parents and teachers. However, there is now an alternative due to the potential information we can gain from combining big data and AI and applying it to the problem of assessing learning. There is a rather beautiful irony in the fact that while unable to understand itself or develop any self-knowledge, AI can help us to understand ourselves as learners, teachers and workers.


Let me explain what I mean by this:

  • The careful collection, collation and analysis of the data that can be harvested through people’s use of technology gives us a rich source of evidence about how learners are progressing, cognitively, metacognitively and emotionally;
  • Continuing work in psychology, neuroscience and education has increased our understanding of how humans learn. This increased knowledge can be used to specify signifiers or behaviours that evidence learner progress;
  • Our increased knowledge about human learning can also be used to design AI algorithms and models that can analyse data about learners, recognise signifiers of learning and build a dynamic, holistic model of each individual student’s progress, so that we can chart their development of self-knowledge and self-efficacy as well as their increasing knowledge and understanding of key subjects;
  • The final step in the process is to design ways in which we can visualise the data that has been analysed to define each learner’s progress cognitively, metacognitively and emotionally. These visualisations can be used by learners, educators, parents and managers to understand the detailed needs of each learner and to develop within each learner the skills and abilities that will enable them to be effective learners throughout their lives.
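The steps above amount to a simple data pipeline: collect interaction data, recognise signifiers of learning, update a model of each learner, and surface the result for visualisation. As a purely illustrative sketch (the class, method names and update rule below are my own invention for this paper, not drawn from any product or system described here), a toy per-topic learner model might be updated from observed attempts like this:

```python
# A minimal, hypothetical sketch of the assessment pipeline described above:
# interaction events come in, a per-topic mastery estimate is updated, and
# low-mastery topics are flagged as signifiers that a learner needs support.
from dataclasses import dataclass, field


@dataclass
class LearnerModel:
    """Keeps a running mastery estimate per curriculum topic."""
    mastery: dict = field(default_factory=dict)  # topic -> estimate in [0, 1]
    learning_rate: float = 0.3                   # weight given to new evidence

    def record(self, topic: str, correct: bool) -> float:
        """Update the mastery estimate from one observed attempt."""
        current = self.mastery.get(topic, 0.5)   # start from a neutral prior
        target = 1.0 if correct else 0.0
        updated = current + self.learning_rate * (target - current)
        self.mastery[topic] = updated
        return updated

    def needs_support(self, threshold: float = 0.4) -> list:
        """Signifier for the dashboard: topics with low estimated mastery."""
        return [t for t, m in self.mastery.items() if m < threshold]


model = LearnerModel()
for outcome in [True, False, False]:       # one success, two struggles
    model.record("fractions", outcome)
print(model.needs_support())               # "fractions" flagged for support
```

A real system would of course use far richer data and models than this exponential update, but the shape is the same: evidence in, a continuously maintained learner model, and actionable signifiers out for learners, teachers and parents.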


An AI assessment system composed of these tools, presenting every learner with an accessible analysis of their progress, would support learning and teaching by continually assessing both subject-specific knowledge and the skills and capabilities that the AI-augmented workforce will require, such as negotiation, communication and collaborative problem solving.


This AI assessment system would be more accurate and cheaper than the human-intensive examination systems currently in place, and it would free up time for teaching and learning that is currently lost when we stop teaching so that people can sit tests and exams. Assessment would happen continuously while people learn. This change to assessment requires political will as well as investment in technology development and engagement with teachers, students and parents so that they fully understand the AI assessment proposition.[2]

Example 2:  Addressing the achievement gap between advantaged and disadvantaged learners

AI could also help to make the education system more equitable. Education is the key to changing people’s lives but the less able and poorer students in society are generally least well served by education systems. Wealthier families can afford to pay for the coaching and tutoring that can help students access the best schools and pass those currently cherished exams.


AI would provide a fairer assessment system that would evaluate students across a longer period of time and from an evidence-based, value-added perspective. It would not be possible for students to be coached specifically for an AI assessment, because the assessment would be happening in the background, over time, without necessarily being obvious to the student. AI assessment systems would, for example, be able to demonstrate how a student deals with challenging subject matter, how they persevere and how quickly they learn when given appropriate support.


One of the key benefits that AI can bring to all learners is the capability to understand more about themselves: what they know and where they need help to understand, their strengths and weaknesses and their well-being. Metacognitive awareness is a complex concept, but broadly it refers to any knowledge or cognitive process that references, monitors or controls any aspect of cognition. Scholars distinguish between a person’s knowledge of their cognitive processes and the processes they use to monitor and regulate their cognition. This latter regulatory process incorporates a variety of executive functions and strategies, such as planning, resource allocation, monitoring, checking and error detection and correction.


Good metacognitive awareness and regulation enhances cognitive performance, including attention, problem-solving and intelligence, and it has been shown to increase learning outcomes.[3] Successful students continually evaluate, plan and regulate their progress, which makes them aware of their own learning and promotes deep-level processing. Metacognitive awareness and regulation can be taught and supported, and can benefit learners of all abilities.


A series of studies we conducted using an AI software simulation called the Ecolab demonstrated that AI could be employed to scaffold learners in developing metacognitive skills, in particular help-seeking and task-difficulty-selection skills.[4] The results demonstrated that students whose subject knowledge and ability had been assessed as below average gained particular benefit, performing significantly better than more-able students, who also performed well.


In addition to employing AI to scaffold the development of these important learning skills, we can also use AI to visualise to students the trajectory of their progress and increase their self-awareness. For example, in Figure 2, the map in the dialogue box entitled ‘Activities’ depicts the area of the curriculum that the child is studying, with each node representing a curriculum topic. When the user clicks on a node in this map, the bar chart below and to the left of the map indicates the level of difficulty of the work that the child has completed while working on this topic, and the dots on the ‘dice’ below and to the right of the map indicate how much help the child has received.


Figure 2 An example of a visualisation of student performance, courtesy of Ecolab


Example 3: Making teaching more effective

Imagine a classroom setting ten years hence where data about each learner’s movements, speech and facial expressions are automatically logged by passive capture devices within the fabric of the classroom. This information is combined with data about each learner’s performance recorded by the school’s assessment system, and with data input from teachers, parents and the learners themselves. All this data is used to update the class teacher’s pupil records and to provide data for an AI-based teaching assistant that keeps track of every learner’s progress: cognitive, emotional and metacognitive.


The AI teaching assistant relieves the teacher of all record keeping and recording activities and is able to provide up-to-the-minute information about any pupil through a teacher-activated, speech-based interface or through a software application. Teachers can also ask their AI assistant to identify an appropriate tutoring application, like those described at the start of this paper, for a group of students who need particular support with an area of the curriculum. The AI assistant can search for resources or media to meet the teacher’s requirements for the day, or it can identify and contact local entrepreneurs who are willing to come and talk to pupils about future work opportunities or how to be an entrepreneur.


The possibilities for the AI assistant are vast and encompass all the routine, data-intensive and time-consuming activities that are essential to the smooth running of the classroom, but that don’t need the expertise of a teacher. This allows the teacher to focus on the process of teaching and learning, ensuring that all pupils benefit from the unique human skills involved in effective intersubjective teaching and learning interactions.


More than 30 years of research on AI for education demonstrate that we can use AI to make teaching more effective and more economical by augmenting teachers with AI systems, so that teachers can concentrate on the teaching activities that require the general and specialist intelligence that AI does not (yet?) have. The outputs from this research are now needed to build the AI teaching assistants that schools and universities require, such as the one described here. We have the technological know-how; we now need the initiative to make such assistants a reality. This initiative would need to engage educators across the sectors to help ensure that the capabilities of AI assistants address the requirements of their teaching realities.

Dimension 2. Education about AI


There are three key elements that need to be introduced into the curriculum at different stages of education, from early years through to adult education and beyond, if we are to prepare people to gain the greatest benefit from what AI has to offer.


The first is that everyone needs to understand enough about AI to be able to work with AI systems effectively, so that AI and human intelligence (HI) augment each other and we benefit from a symbiotic relationship between the two. For example, people need to understand that AI is as much about the clear specification of a particular problem and the careful design of a solution as it is about the selection of particular AI methods and technologies to use as part of that problem’s solution.

The second is that everyone needs to be involved in a discussion about what AI should and should not be designed to do. Some people need to be trained to tackle the ethics of AI in depth and help decision makers to make appropriate decisions about how AI is going to impact on the world.

Thirdly, some people also need to know enough about AI to build the next generation of AI systems.


In addition to the AI specific skills, knowledge and understanding that need to be integrated into education in schools, colleges, universities and the workplace, there are several other important skills that will be of value in the AI augmented workplace. These skills are a subset of those skills that are often referred to as 21st century skills and they will enable an individual to be an effective lifelong learner and to collaborate to solve problems with both Artificial and Human intelligences.


I have already discussed the importance of both metacognition and self-efficacy. Here I therefore simply note that these two concepts are interlinked and essential for lifelong learning. Collaborative problem solving brings together thinking about the separate topics of collaboration and problem solving, each with its own research history. Collaborative problem solving is a key skill for the workplace, and its importance is only likely to grow as further automation takes effect.


There is a mismatch between the substantial evidence in favour of collaborative problem solving and learning reported in the literature and the approaches widely used within schools. This is neither preparing students for university nor the workplace. For example, in an interview for a Davos 2016 debate on the Future of Education, a student from Hong Kong stated that the current school system produced: “industrialised mass-produced exam geniuses who excel in examinations” but who are “easily shattered when they face challenges”. We need employees to be able to tackle challenges and this often involves working effectively with others to solve the problem at the heart of any challenge; we don’t need exam geniuses who crumble under the pressure of the real world.


Collaborative problem solving does not happen spontaneously. Both teachers and students require a high level of training to employ collaborative problem solving effectively, and yet there is little evidence of concerted training effort. This means that when teachers do attempt to employ collaborative problem solving, the quality of the group interactions and dialogue can be poor.


It is extremely difficult to isolate the precise nature of the key factors that affect the effectiveness, or not, of collaborative problem solving. We can, however, identify factors that are frequently mentioned as being influential upon success. These factors include: the environment in which collaborative problem solving takes place; the composition, stability and size of the group; the group’s problem-solving and social skills; and teacher training.


To be effective at collaborative problem solving, people need to be able to:


  1. articulate, clarify and explain their thinking;
  2. re-structure, clarify and in the process strengthen their own understanding and ideas to develop their awareness of what they know and what they do not know;
  3. adjust their explanations when presenting their thinking, which requires that they can also estimate others’ understandings;
  4. listen to ideas and explanations from others – this may lead listeners to develop understanding in areas that are missing from their own knowledge;
  5. elaborate and internalise their new understanding as they process the ideas they hear about from others;
  6. actively engage in the construction of ideas and thinking as part of the co-construction of understandings and solutions;
  7. resolve conflicts and respond to challenges by providing complex explanations, counter evidence and counter arguments; and
  8. search for new information to resolve the internal cognitive conflict that arises from discrepancies in the conceptual understanding of others.


Implications for teacher training and professional development


The significant educational implications that AI brings to society, both when AI is viewed as a tool to enhance teaching and learning and when AI is viewed as a subject that must be addressed in the curriculum, make clear that teacher training and teacher professional development must be reviewed and updated.


If teachers are to prepare young people for the new world of work, and if teachers are to prime and excite young people to engage with careers designing and building our future AI ecosystems, then someone must train the teachers and trainers and prepare them for their future workplace and its students’ needs. This is a role for policy makers, in collaboration with the organisations that govern and manage the different teacher development systems and training protocols across countries. If the need for young people to be equipped with knowledge about AI is urgent, then the need for educators to be similarly equipped is critical.


On a more positive note, the development of AI teaching assistants will provide an opportunity for developing deeper teaching skills and enriching the teaching profession. This deepening of teacher expertise might be at the subject knowledge level, or it could be concerned with developing the requisite skills to support and nurture collaborative problem solving in our students. It could also result in teachers developing the data science and learning science skills that enable them to gain greater insights from the increasingly available array of data about students’ learning.


Any failure to recognise and address the urgent and critical teaching and training requirements precipitated by the advancement and growth of AI is likely to result in a failure to galvanise the prosperity that should accompany the AI revolution. In particular, I see three major areas of concern:

  • failure to recognise the importance of self-efficacy, because we are only measuring subject knowledge;
  • failure to exploit the power of AI through fear of the security, privacy and protection of our personal data and that of our children; and
  • failure to teach people enough about AI to empower them to make key decisions about what it should and should not, could and could not, will and will not be able to do for society.


Conclusions: the implications of AI for education


In this paper I have highlighted the need to see AI as more than particular technologies, such as machine learning, neural networks, or deep learning algorithms. For education to benefit from the potential of AI, we must focus on the problem specification and solution design elements of AI.


A key action for all of us must be to develop a culture of problem specification that encourages people to unpack educational problems, so that solutions that benefit from the symbiosis of AI and HI can be developed.


We need to start developing a curriculum and a pedagogy to ensure that our students develop the self-efficacy that will set them apart from their AI peers and that will help them to deal effectively with the changing and perhaps turbulent workplace of the future.

There is also great scope to reimagine teaching and schooling through the development of AI-augmented teaching practices. This means that educators must ensure that their voices are heard by the technology companies that are developing the technology classrooms of the future. Early progress might easily address the administrative and routine tasks that currently take up too much teacher time.

In addition there are social, technical and political challenges that also require our attention. Socially, we need to engage teachers, learners, parents and other education stakeholders to work with scientists and policy makers to develop the ethical framework within which AI assessment can thrive and bring benefit. Technically, we need to build international collaborations between academic and commercial enterprise to develop the scaled up AI assessment systems that can deliver a new generation of exam-free assessment. Politically, we need leaders to recognise the possibilities that AI can bring to drive forward much needed educational transformation within tightening budgetary constraints. Initiatives on these three fronts will require financial support from governments and private enterprise working together.


AI has the potential to bring about enormous beneficial change in education, but only if we use our human intelligence to design the best solutions to the most pressing educational problems.





Case study: AI and teacher shortages


One of the big problems that we need to address in education is the global shortage of teachers. The UNESCO Institute for Statistics (UIS) has estimated that in order to ensure inclusive and equitable quality primary and secondary education and promote lifelong learning opportunities for all, countries must recruit 68.8 million teachers by 2030.[5] To put this in perspective, the total number of newly qualified teachers in England for the year 2016-2017 was 73,636,[6] and for 2015 the number of students completing undergraduate and postgraduate ITE programs in NSW was 5,547, while across Australia the figure was 18,194.[7]


The temptation when faced with a problem such as global teacher shortages is to see AI as a potential solution: providing AI rather than human teachers. There are, however, at least two significant reasons why this suggestion reflects a poor understanding of the problem. First, the full spectrum of skills and abilities required of teachers is broad and complex, so while AI tutors may be able to provide tutoring in particular subject areas, AI is not (yet) able to fulfil the entire role of a human teacher.


A much more feasible approach would be to augment human teachers with AI assistants in the classroom, helping them cope more effectively with their classes of students.[8] This could be an excellent way for AI to contribute to the teaching workforce. However, if we look at the problem again and unpick it a little, we see that a key part of it is not just the number of people we need to become teachers, but also the lack of training and qualification routes for these people.


The recognition that we need both more teachers and more training and qualification routes was the catalyst for a company called Third Space Learning, working in collaboration with the UCL Institute of Education’s Knowledge Lab, to design a system that will enable anyone who wants to teach to become a qualified online tutor. To be eligible, people need a degree-level qualification, some time to spare, and access to the internet. Each tutor will be trained online and will work on a one-to-one basis with learners from anywhere in the world, both within and outside schools and colleges.


So where is the AI in this design? The AI will be used to automatically evaluate the online human-to-human tutoring sessions to ensure that quality standards are maintained. Evaluation is currently done by human evaluators, and the costs of scaling up this human-resource approach are prohibitive. Initially, therefore, our task has been to find identifiable signifiers, or proxies, for student learning within tutorial interactions. The list of candidate factors is extensive; however, three main themes emerge and are taken into account in defining best practice:

  • the cognitive domain involving knowledge, understanding and skills about the studied content;
  • the metacognitive domain involving the acquisition of knowledge and skills related to one’s own learning, in other words, the learners’ knowledge and understanding of their own learning;
  • and the affective, often referred to as the emotional domain, involving learners’ capacities to deal with their emotions, which affect their attitudes, locus of control, self-efficacy, interest and so on.


We consider these three themes to be the core of a learner’s learning state: their susceptibility to learn. The theoretical background to these three themes provided an initial framework for the development of an annotation tool that could be used to score tutorial interactions. Informed by the framework, we analysed past tutorial interactions and developed a mark-up language of successful tutorial interaction signifiers that can act as proxies for best practice. These proxies are integrated into an annotation tool that enables tutors to score or tag their sessions in real time. The evidence from our evaluation of this annotation tool is being used to automate the tagging process using AI techniques. One particularly interesting finding from our early work is that real-time tagging is potentially more accurate than post-session evaluation of tuition quality, because the latter is more vulnerable to a human evaluator’s bias.


The AI tool currently under development models the relationship between the inferences drawn from the tutorial interaction annotations, the actions that a tutor takes during those interactions and the performance of the student being tutored. In this way, we will automatically evaluate tutorial sessions according to the three core elements of learning: the cognitive, metacognitive and the emotional.
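To make the annotation idea concrete, here is a minimal sketch in Python of how tags from a tutoring session might be counted against the three domains described above. The tag names, the domain groupings and the scoring rule are entirely hypothetical illustrations, not the actual mark-up language or model under development.

```python
# Illustrative sketch only: a toy scorer for annotated tutoring sessions.
# The tag names and domain groupings below are hypothetical examples of the
# kind of proxy signifiers the case study describes.

from collections import Counter

# Hypothetical proxy tags, grouped by the three domains: cognitive,
# metacognitive and affective (emotional).
DOMAIN_TAGS = {
    "cognitive": {"correct_step", "worked_example", "misconception_addressed"},
    "metacognitive": {"self_check_prompt", "strategy_discussion"},
    "affective": {"encouragement", "frustration_detected"},
}

def score_session(tags):
    """Return a per-domain count of proxy tags for one tutoring session."""
    counts = Counter()
    for tag in tags:
        for domain, members in DOMAIN_TAGS.items():
            if tag in members:
                counts[domain] += 1
    return dict(counts)

session = ["correct_step", "encouragement", "self_check_prompt", "correct_step"]
print(score_session(session))
```

A real system would of course go further, relating these counts to tutor actions and student performance over many sessions, but the sketch shows the basic shape of turning real-time tags into per-domain evidence.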


This AI evaluation process will also provide feedback to help tutors and learners improve their tutoring sessions. It will also provide the basis for personalised continuing professional development qualifications, individualised according to each tutor’s performance. Unpacking the problem of global teacher shortages in this way keeps the human in the role of teacher, assisted and continually developed by AI.


[1] Handel, M. J. (2012) ‘Trends in Job Skill Demands in OECD Countries’, working paper 143, OECD.

[2] For a more detailed account of this argument for AI assessment see

[3] See for example: Marzano, R. J. (1988) ‘Dimensions of thinking: A framework for curriculum and instruction.’ Alexandria VA: The Association for Supervision and Curriculum Development.

[4] Luckin, R. & du Boulay, B. Int. J. Artif. Intell. Educ. 26, 416–430 (2016).


[5] UNESCO. (2016).




Who Moved My Intelligence?

The title of this article is inspired by a self-help book from the 1990s called ‘Who Moved My Cheese? An Amazing Way to Deal with Change in Your Work and in Your Life’. I have blogged about this book before, and this motivated me to write this piece for WISE. Despite significant criticism, the book became a best seller and a popular tool in any change manager’s back pocket. The implications of Artificial Intelligence (AI) and automation for the future workplace are the subject of much current debate. But how should educators respond? How can they ensure that they benefit from AI?


AI refers to the capabilities of computers to perform intelligent behaviours that we would think of as essentially human.[1] Most readers will be familiar with a practical application of AI, the sort of technology we use to navigate information on the internet, find our way around our environment or enter a country with our e-Passport. But what does the increased popularity and the increasing sophistication of AI technology mean for education?


To answer this question, I focus on two interpretations of the question ‘Who moved my intelligence?’. Interpretation 1 considers how we need to ‘move’ our students’ intelligence beyond the routine cognitive processing of academic subject matter. Interpretation 2 considers what ‘moving’ certain intelligent workplace behaviours from human performance to AI performance means for educators, including for the job of teaching.

Developing the uniquely human abilities of students

Education and training organizations need to review what and how they teach to ensure that AI is designed and used as a tool to make our students and trainees smarter. We do not want AI to be used as a technology that takes over human roles in a way that ‘dumbs us down’. We therefore need to concentrate on designing and implementing teaching and schooling that develops the uniquely human abilities of our students and instils within them the requisite subject knowledge in a flexible, interdisciplinary and accessible manner.


The human capability for metacognition will be at a premium in the future workplace: both self-understanding, so that each of us has an accurate knowledge of what we do and do not understand, and self-regulation, so that we can all plan and monitor our learning effectively. Metacognition is not something that AI can achieve, and because we will all need to be lifelong learners, flexibly developing our knowledge and skills to meet future demands, we will all need to develop better metacognitive skills.

The use of teaching approaches such as Collaborative Problem Solving (CPS) will become increasingly essential. CPS has been shown to have the potential to provide learners with an understanding of key subject knowledge, synthesized across disciplines, that they can apply in a flexible manner to real-world problems.[2] Collaboration and problem solving are also among the key 21st-century skills demanded in the modern workplace, because routine cognitive skills and knowledge are easy to automate with AI.

The curriculum will also need to include AI as a subject, not merely to teach a small subset of the population to design and build AI systems, but to teach the whole population what AI is and what it can and cannot do. Everyone needs to understand enough about AI to be able to use it effectively in their lives at work and at home, to contribute to important decisions about what is and is not ethical and permissible for an AI to do, and to make decisions about the division of labour between artificial and human intelligences.

Re-imagining teaching and schooling

There is no doubt that there will be a shift in the distribution of intelligence within the workplace, including classrooms and schools. In order to extract the most benefit from this redistribution, we need to ensure that the most automation-appropriate activities are done by the AI, and likewise that the most human-appropriate activities are done by people.


Re-imagine teaching and schooling with AI assistants that provide intelligent analysis of multiple data sources about learners, from sleep sensors, library usage and e-learning resource interactions to social media activity. This analysis will illustrate how learning is progressing and support ongoing, detailed formative assessment. AI assistants could also relieve teachers of the routine, automatable parts of their job, enabling them to focus on the sensitive human support and communication that only people can provide.


[1] ODE: The Oxford Dictionary of English (Oxford dictionaries online) (2005). Oxford: Oxford University Press. AND Russell, S. J., Norvig, P. & Davis, E. Artificial Intelligence: A Modern Approach. Upper Saddle River: Prentice Hall.

[2] Solved (2016)

Malala Yousafzai’s A level results are brilliant, we need more successes like this

Who could be anything but delighted to see this headline: ‘A-level results: Malala Yousafzai gets a place at Oxford’? This is excellent news and a great boost for those campaigning for equal education. In fact, yesterday’s publication of A level results in the UK has spurred me to take a slight diversion from worrying about who is moving my brain, or my cheese. I certainly would not want to detract from the hard work that students have put into their A level studies, or to take the shine off their success. It is wonderful to see the smiling faces of successful students across the newspapers.

However, success does not come to all, and even on a celebration day, or perhaps I should write especially on a celebration day, I think we need to consider alternatives to the stressful stop-and-test regime that pervades most education systems. I wrote about this in Nature Human Behaviour earlier this year under the heading ‘Towards artificial intelligence-based assessment systems’, and it looks like it has been read a few times: it is ranked 5,746th of the 237,966 tracked articles of a similar age across all Nature journals, which puts it in the 97th percentile. This does not seem bad given that it was only a ‘comment’ piece and not a full paper. On a less positive note, in an internal REF assessment exercise it was ranked only 2*, which is not great and probably reflects the difficulty academics face in publishing more popular-style articles.

However, the modest success of the article in terms of the altmetrics that Nature runs encourages me to believe that there is some interest in exploring the possibilities that the intelligent design and application of AI could afford for national assessment systems. I therefore draw attention to this possibility here and hope to encourage further debate. The key point I wanted to convey in the Nature Human Behaviour article was that there are alternatives to exams that are less stressful, less expensive, and that allow teachers and learners to spend more time on teaching and learning (shouldn’t this be the point of education?).

This message may not be what others have chosen to focus on, but for me the most important thing is that we have an assessment system that is holistic, fair, and that lets all students evidence their knowledge, skills and capabilities.


AI is our future, but can we convince Frank?

As a child I was always frustrated by the phrase “curiosity killed the cat”, a frequent retort when I was trying to understand how things worked. Well, I am not reporting any cat-killing incidents here, but my curiosity about myself, driven by my new ‘misfit’, may have been a primary factor in my newly sprained ankle!


Over-enthusiasm to meet that target of 1,000 activity points motivated me to get walking and launched me down some steps in a most ungainly and unfortunate manner. No broken bones, but some swelling and plummy bruising mean I need to rest up for a few days. Resting up in a Sydney winter is hardly a chore: the sun is out and the sky is blue, and I indulged in exploring the ABC TV channel, in particular a great program called The AI Race.

The AI Race

The program presented data from a study into the risks to Australian jobs from AI-powered automation. I was relieved to see that professors are only likely to have 13% of their job automated, whilst carpenters are predicted to have 55% of what they do done by smart technology. Might this be the same in the UK, or different, I wondered? The ABC reporter explored various jobs and met up with employees. For example Frank, a truck driver, was not persuaded that autonomous trucks would be able to replace his experience and intuition about the behaviour of other humans, whether pedestrian or driver. Nor would autonomous vehicles be able to help out other drivers stranded on the roadside, or provide human customer service on delivery of a load. He was definitely not convinced that AI was going to replace him any time soon.


Further jobs were explored: the legal profession, for example, where law students were stunned by an AI paralegal that could search through thousands of documents to find a specific clause in no time at all. The law students berated their education for not preparing them for a world of automation.


On the one hand we have Frank, who does not believe that AI can replace him; on the other, a group of law students who are persuaded that AI can already do a lot of what they are studying to be able to do. Nobody seems very curious about how they might better prepare themselves for AI’s onslaught on their workplace. So, how might I persuade them that understanding more about their own intellect could help them work more effectively with AI? The key to future success has to be that people focus on developing the expertise that AI cannot achieve: the still uniquely human qualities that will be at a premium. Self-knowledge and self-efficacy are important elements of this expertise, but how do we motivate people to develop themselves? To start answering this, I looked to the best-selling self-help books for guidance. People buy these, so maybe I can learn something from their titles about how to appeal. Which of these might work best?

Who moved my brAIn?

What colour is your AI?

How to win with AI

7 AI habits of effective people

I’m AI, you’re ok

Rich augmented me, poor augmented me

AI is from Silicon, we are from the Gene Pool

I’m not convinced by any of these…

AI and personal analytics provide a ‘fitbit’ for the Intellect

It is far too long since I last posted to this blog: too many jobs and too little time would be my first attempt at an excuse. But perhaps it is just that I am not effective enough, that I need better self-regulatory skills, more intelligence and a better understanding of my own strengths and weaknesses. I talk quite a lot about intelligence and about how AI developers have not yet designed artificially intelligent systems that understand themselves and have metacognitive awareness, but maybe I too lack these abilities? So, how might I become more self-effective?

This thought is one that I intend to worry at while I am completing a research trip to the University of Sydney to work with my colleague Judy Kay. We are working on Personal Analytics for Learners (PALs), or more precisely interface designs for PALs (or iPALs).

In order to help me think this through, I wanted to learn more about some of the work that Judy and her colleague Kalina Yacef have been doing in collaboration with medics and health professionals to develop better data analytics and interfaces for personal health information for education. For example, the iEngage project provides children with a digital platform offering the information, education and skills to help them achieve their physical activity and nutritional goals. It connects with ‘misfit‘ activity trackers to provide continuous feedback and summarise the daily activity on a dashboard.

To this end, I bought myself a ‘misfit’: a somewhat cheaper version of a ‘fitbit’ with a great name :-). I am now tracking my sleep and my pulse, as well as my physical activity and diet, in order to try to understand more about my personal wellbeing. This is nothing new, and millions of other people do it too. I notice that popular technology stores stock a good range of fitness-tracking devices at increasingly reasonable prices.


So, in order to also help me better understand my mind, my cognitive progression, and my metacognitive skills and regulation, I now need a ‘mindset’ to help me track how well I am thinking, learning and regulating my working and learning. The interface to such a ‘mindset’ is the idea behind the iPAL that Judy and I are currently designing. I find it interesting to speculate about the kinds of data that we could collect about our intellectual and social interactions that would help us track and better understand our mental wellbeing, as well as our physical wellbeing and fitness. This kind of ‘fitbit’ for the mind might help me to be less distracted by non-priority activities and spend more time on priorities, such as writing.

A search for ‘fitbit for the mind’ yields some hits, though not terrifically interesting ones. There is an article in New Scientist about eye-tracking to tell you more about your reading habits, and a mindfulness app that can be linked to fitbit data. The problem here is that we are being offered automatic tracking of just one type of mental activity, reading or mindfulness, when actually we need something far more sophisticated to tell us how our intelligence and self-awareness are progressing: perhaps something that looks at multiple data sources and provides an overview of our activity in a way that motivates us to want to know more about our intellectual fitness, in the same way that activity trackers help us understand more about our physical fitness.

Earlier this month, there was a more interesting article in Newsweek that talks about ‘iBrain’ and the possibility of tracking our brain’s electrical output to see markers for the likely occurrence of a range of mental health disorders, from anxiety, depression and schizophrenia to dementia and Alzheimer’s, before symptoms appear. Such information might help early intervention and monitoring. This reminds me of the rise of personal DNA services, such as 23andMe. If people are interested in their DNA and what it might tell them about how they should adjust their lifestyles to avoid conditions they look to be susceptible to, then maybe people are also curious about their intelligence and how they can understand it better.


Over the next few blog posts I plan to explore what such a device might be like, what data it might collect and how I might best benefit from the sorts of information it could provide.

Truth, Lies and Enlightenment: how AI can help us to build knowledge and understanding in the echo chambers of life

AI is both a cause of and a solution to the problem of a world where there is far more information than any one person can effectively process to work out what they believe and what they don’t. AI can amplify the echo chamber by promoting the most believed over the best evidenced. BUT it can also help us to distinguish valid information from noise, IF we know the right questions to ask, and IF we know how to work with our AI we can develop deep understanding and escape from the maze of invention…

Early in my career I was advised that if I wanted to get a point across when teaching, during an interview, as part of a presentation or when debating, I must repeat the point I wanted to make three times. There is an empirical basis for this advice: something eloquently explained by Malcolm Gladwell, and the motivation for my blog identity, The Knowledge Illusion. Put simply, when people are provided with more information about X, they believe that they know more about X, when in fact they often know less. I wrote about this many blogs ago (transcribed below for ease of reference) to draw attention to the essential need to help people decipher the huge volume of information that comes their way, so that they can discern what is genuine from what is fake.

I still follow the “say things three times” advice in my endeavour to communicate what I consider to be valid, some might say truthful, information. My objective is to persuade people that my perspective, opinion or presentation of information is the stuff to be believed. However, I accept that it is entirely up to my audience to decide whether or not they are won over. This subjective experience, and the belief that an audience is actively analysing the information that comes its way, matter more than ever. In a world of echo chambers and a deluge of social media, we need people to be able to look at a stream of data and information and make intelligent decisions about what they believe to be the stuff of knowledge.

The problem is not new. It was JFK who once observed that “No matter how big the lie; repeat it often enough and the masses will regard it as the truth.” This is an enormous insult to the intelligence of the “masses”. But unless we pay attention to helping these “masses” navigate the morass of mediocrity that social media precipitates, proliferates and perpetuates, we will return to the pre-enlightenment era, when the world was flat and knowledge was the privilege of those who knew how to decipher the written word and who acted as the mouthpiece for, and the collective intellect of, their communities: the “masses”.

The word “masses” is no longer widely used, so let’s just refer to the “masses” as the people: the global human race whom education is intended to equip with the skills and abilities to think and make sense of the world, and of the information others produce about it. To consider what we need to do to help people make sense of the world, it is worth travelling even further back in time, to the view of the Roman Emperor Marcus Aurelius that “Everything we hear is an opinion, not a fact. Everything we see is a perspective, not the truth.” We need to encourage a nuanced belief system in which people are provided with the skills, confidence and resources to construct their own understanding from the tidal wave of data and information that threatens to engulf them.

Again, history can help to inform us. The scientific revolution set the stage for the age of enlightenment that transformed the human race and promoted the importance of reason. Influential thinkers like Bacon, Locke and Descartes paved the way for the likes of Voltaire, Kant and Smith. Life was so much simpler then of course, but the huge increase in what it is possible for an individual to try to understand and know does not discount the important role that influential thinkers can play.

The birth of the www and social media represent a new generation of publications that play the role of the encyclopedias and dictionaries in the age of enlightenment. BUT who are the key philosophers and scientists who can catalyze the popular debates in the way that the philosophers of the enlightenment did? Stephen Hawking would probably be high on the list of influential thinkers who many people (the “masses”) might be able to name. Who else?

Whilst the volume of information and data about the world has ballooned, the number of influential thinkers who can help people find their way to knowledge and understanding may not have kept pace. Technologies that harvest the ‘wisdom’ of the crowd often promote the loudest shouters and the most-followed, rather than the considered and grounded reasoning of real intellectuals. The demise of expertise has exacerbated the problem as professional predictions have failed to materialize… Let’s just stop there for a moment.

Could the real problem be that we, the people, don’t know how to interpret expertise? We want simple answers when there are none to be had. In schools we still encourage the belief that rote learning of subject-specific information, of the type that can be reproduced by a single person when challenged with a standardized test, is sufficient. This outdated approach gives the impression that knowledge and understanding are far simpler than they really are. It encourages people to believe that there is a body of stuff they need to learn and reproduce, and that if they can do this they will be knowledgeable. However, what we should ALSO be doing is encouraging people to constantly probe, prod, compare and conclude for themselves their understanding of the world, so that they can apply this knowledge to solve the problems they encounter every day.

The surge of tweets that give the impression that meaningful things can be said in 140 characters is not always helpful either. There is certainly something to be said for trying to distil understanding into a short text: it is difficult, and it tests how much we really understand. However, the belief that a tweet can be the whole story in and of itself is misguided. Knowledge and wisdom need to be worked at, by questioning, analyzing, aggregating and synthesizing to reach our own evidence-based beliefs about what we know and understand. Someone else’s tweet might start this process, but we have to finish it for ourselves.

AI can help us to do the work here. AI can analyze and visualize complex data and information to help us see the wood for the trees. AI can be built to model human understanding and to justify the decisions and predictions that it makes. AI can explain to us how to complete complex activities, such as solving mathematical equations or managing a complex power plant. BUT artificial and human intelligence must work together to help people extract the truth from the lies. We as humans must ensure that we know enough about what AI is capable of doing to ask the right questions. We must learn to be discerning enough to challenge the AI when we are not convinced by what it is telling us.

This means that now, more than ever, we must educate the educators, because educators must instil in us, the people, the investigative skills we need to ask the right questions, so that we can differentiate evidence from falsehood. Educators must encourage the confidence and self-efficacy that will help us believe our own minds. Educators must engender the perspective-taking and integrative thinking that will enable us to work together to solve problems, and to develop the influential thinkers we need now more than ever to enlighten us.

More relevant than ever…Information plenty, but knowledge famine: are we succumbing to an illusion?

I am curious about knowledge, not in a philosophical sense, but in a practical one. I worry about what it means to know something in a world that is increasingly complex, ill-defined and interconnected: a world that demands that we develop, and that we ensure that our children develop, the knowledge capacity to solve the problems it manifests and those that we create.

The first recollections that I have of my own curiosity about knowledge date back to 1966 when I was eight years old and growing up in Manfred Mann’s semi-detached suburbia: dad, mum, older brother and me. My father was an aircraft engineer and my mother taught typing and shorthand to women whose working lives were about to be dramatically changed by the word processing power of the digital computer. My brother was 3 years older than me, and his lack of interest in formal education was causing my parents some concern. Their reaction was to invest in ‘knowledge books’, or at least that’s how they saw the children’s book of knowledge and the encyclopedia that now filled up the bureau bookshelf. To keep us up to date, there was also the weekly general knowledge magazine that plopped on the doormat with a reassuring thud: the weight of its knowledge there for all to hear.

I suspect that my parents’ reaction to their son’s educational malaise was not an unusual one amongst the aspiring middle-class families of our neighbourhood. My brother’s reaction to the new literary arrivals was cool; he was far more concerned with exploring the world of the woodland around our housing estate than with sitting at home and reading about it. My father, however, became quite addicted to the weekly general knowledge magazine. He did not have a great deal of time to read, but each evening when he went to bed he would sit in his paisley pyjamas and thumb through the pages. The stock of copies soon grew on the nightstand as his pace of reading failed to match the frequency of their arrival. The corners became slightly curled as the months and years passed and the dust gathered in and around the pile that now extended from the nightstand to the floor. His interest never waned, and I do believe there was a pile of old issues by his bedside when he died many years later.

Forty years on, it’s a sunny day and I’m walking along the Euston Road in London. I pass the entrance to the British Library and a sign catches my eye: “Step inside – Knowledge freely available”. I dislike the suggestion that one can walk into the British Library and just pick up some knowledge, like going into Tesco and buying some bananas. I can relatively quickly formulate an explanation for why the sign irritates me, because I have a clear idea about what I believe knowledge to be. I have moved on from the conception of knowledge loved by my father and represented by the pages of his books and articles. I know that I have to construct knowledge from the evidence available to me, that it is not handed to me by others, though they can certainly help me along the way, and that I can aspire to continually increase my knowledge by weaving together the information resources distributed throughout my world.

This is not the case for many of the youngsters who attend our schools and colleges. For them knowledge is still to be found in the dusty concepts in the out of date magazines on my father’s nightstand or on the shelves of a library they never visit.

“But what of the internet and world wide web?” I hear you wonder. These technological masterpieces offer information resources wherever we are and whenever we need them. These must surely pave the way for us to become more knowledgeable, both personally and as a human community?

The sheer abundance of this information has thrown into sharp relief our understanding of the relationship between information and knowledge. It makes my modest collection of childhood encyclopedias and my father's overflowing magazine collection look like a speck of dust on the library shelf. I fear, however, that our understanding of what knowledge is and what it means to know something has not progressed in tandem with this technological progress. This puts us at risk of succumbing to the illusion that we know more than we actually do, because the more information we have, the more certain we become that we know something.

Without helping young people to develop an understanding of what knowledge is in a digital age, they cannot progress beyond the well-meaning but limited conception of knowledge promoted by the books and magazines that appealed to my parents. Those of us who understand what we mean by knowledge can indulge ourselves, as my father did with his magazines. But, without actively engaging people in the excitement of connecting the knowledge construction process to their own particular context, we merely encourage them to pass the opportunity by in the same way as my brother did all those years ago.

In a time of information plenty we are at risk of a knowledge famine.

I originally wrote this piece for Learning to Live – Creativity, Money and Love

‘Theresa Maybe’, but Colin is definitely an aide to greater educational equality

I was struck yesterday by the juxtaposition of the Economist’s dubbing of Theresa May as “Theresa Maybe” and the Telegraph article authored by the PM about her desire for a shared society that will tackle “everyday injustices”. How exactly will the shared society work, and in particular in what ways will education be changed in order to achieve the worthy goal of a fairer society for all? The absence of detail about what kinds of policy will deliver a solution to the everyday injustices faced by many learners, across all ages and sectors of education, already suggests a lack of decisiveness. (From an interesting article in the Huffington Post)


Education can change lives for the better, but sadly it often does not and those who are privileged are able to benefit from better opportunities for learning. So here is a suggestion for unlocking some of this inequality.



My colleague Wayne Holmes and I were asked to write an article for ‘How we get to next‘, and we used it to pitch the benefits of an AI classroom assistant that helps and motivates teachers to ensure that all learners are involved in activities that meet their needs. We tell the story of Jude, a teacher in the year 2027.


“And at Jude’s side, there’s her AI Teaching Assistant, Colin, whom she’s named after a childhood friend. In fact, so many aspects of how Jude understands her students’ learning are different now, thanks to her machine aide.

Through working with Colin, she has become somewhat of a metaphorical judo master, harnessing the data and analytical power of AI to tailor a new kind of education to each of her students. Her role at the helm of the classroom, however, is fundamentally unchanged.


Since Colin makes ongoing assessments based on daily student performance and engagement in the classroom, there is simply no longer any need for what were often inaccurate and stressful evaluations. The AI aide’s primary task is to build and maintain learner models for each child based on a combination of data gathered over time with things like voice recognition (which identifies who is doing and saying what in a team activity) and eye tracking (to note engagement and focus). The profiles are updated continuously, monitoring students’ progress against analysis of their emotional and motivational state.”


I know that well designed AI can help us build a much fairer education system in which all learners benefit and prosper, and that we have the technical and human capacity to create the right type of AI. A better educated population would then surely help us to tackle some of the other major challenges that a shared society agenda might face, such as inequalities in the health system and problems related to immigration.

We can radically redesign the 11 plus exam to make it fairer, so what is stopping us?

AI assessment systems could provide a fairer eleven-plus selection; they could also start to address the vexed question of assessing potential rather than just current ability. We know that well-designed AI systems that assess learning are accurate in their assessment. AI assessment can tackle more than subject-specific knowledge and reasoning: it can also evaluate skills such as planning and knowing what we know. It would also provide a fairer assessment system, evaluating students across a longer period of time and from an evidence-based, value-added perspective. We also know how to prevent people from gaming AI assessments, and AI assessment systems would offer tutoring for everyone, with support and formative feedback to help students learn and improve. If there is to be a revamp of the grammar school system then we must explore these possibilities.

Theresa May’s plans for new or expanded grammar schools in England have brought a torrent of comment, debate, criticism and rhetoric since these plans were inadvertently revealed last week. Most of the discussions seem to have focused on whether or not grammar schools are the right mechanism to aid social mobility. This is an extremely important issue, but let’s put the rights and wrongs of selection and grammar schools to one side for a moment and look at the eleven-plus examination itself.


The eleven-plus examination is the key to the door of one of the 164 grammar schools in England, or one of the 69 grammar schools in Northern Ireland. The examination is sat by children in their last year of primary school and it varies depending upon where in the country it is taken. In fact, the situation is very complicated, with a wide range of approaches even within the same county. For example, in Yorkshire there are three Local Authorities with grammar schools: Calderdale has 2, Kirklees has 1 and North Yorkshire has 3. The 2 grammar schools in Calderdale use Verbal Reasoning tests, and Maths and English examinations, with GL Assessment, the University of Edinburgh and the schools themselves as their examiners. However, the 1 school in Kirklees uses tests in Verbal Reasoning and Non-Verbal Reasoning, plus an English examination and a Numerical Reasoning test, all examined by the University of Durham. The situation in North Yorkshire is different yet again, with 2 schools using Verbal Reasoning and Non-Verbal Reasoning tests examined by NFER and the 1 remaining school administering and examining its own selection tests.


The complexity in the selection process is not helpful to poorer parents, who do not have the time, and possibly not the capability, to navigate the process. The examination approach is also traditional and outdated. The need to look deeper than the selection process to the eleven-plus examination itself was highlighted in an interesting discussion on the Radio 4 Today programme last week. The discussion was between Laura McInerney, the editor of Schools Week, and Sean Worth, from Policy Exchange. Sean pointed out that the current mechanism for selecting children for grammar schools can be gamed and that we therefore need to change the examination if we are to ensure that the poorest children are not disadvantaged. Laura McInerney also pointed out that the major problem for poorer children accessing grammar schools is that we “put a test in the way”, especially divisive when the parents of poorer children can’t pay for tutoring to get their offspring through the eleven-plus examination.


The Guardian published a depressing article on the problems inherent in the eleven-plus test: ‘“Tutor-proof” 11-plus professor admits grammar school test doesn’t work’. The article reports the failure of a ‘coaching resistant’ test developed by CEM at the University of Durham for use in Buckinghamshire. CEM has now withdrawn the claim that the test could assess “natural” ability. Prof Coe, director of CEM, is reported as saying: “Whatever system you use it is imprecise, there are false positives and negatives and probably more of those than people realise.” He goes on to reflect that, whilst he does not agree with creating more grammar schools, if we are to have more then we need to try to make the system fairer. I couldn’t agree more – and the need for a radical rethink is echoed in what the IOE’s Tina Isaacs says about the problems of coming up with any test that can assess future potential.


So, let’s take the test away and develop a radically different, socially equitable eleven plus. We are lucky enough to be in a very different situation today from the one that existed when the original eleven plus was introduced in 1944. There is now a realistic and economically attractive alternative at our fingertips. We have the Artificial Intelligence (AI) technology to build a superior assessment system should the proposed reforms become a reality. AI provides a powerful tool to open up the ‘black box of learning’, to provide a deep, fine-grained understanding of when and how learning actually happens. Intelligent algorithms can process information about each learner and reach a view about their progress, knowledge and understanding of a subject or skill over a period of time. Unlike the eleven-plus examination, this period could be a whole school semester, a year, several years and beyond.

Of course there are serious ethical questions around AI being used in education and these must be explored. But the over-riding and uncontested fact in this debate is that education is the key to changing people’s lives. We trust AI with our personal, medical and financial data without a thought, so let’s trust it with the assessment of our children’s knowledge and understanding. Let’s open our minds and explore the challenges to build a new generation of eleven plus assessment that genuinely irons out the inequalities and gives all children a chance to shine.


To appear on the IOE blog

Calling education: wake up and smell the coffAI, don’t miss a great opportunity to drive prosperity for all

A recent article in the THES got me thinking. David Matthews reported under the title ‘The robots are coming for the professionals’, and asked whether universities need to rethink what they do and how they do it now that artificial intelligence is beginning to take over graduate-level roles. This motivated me to write a blog post for THES that was published on 9 August, ‘Four ways that artificial intelligence can benefit universities’, in which I suggested that HE needs to embrace the positives of AI, not just look at the negatives.



These issues are not limited to HE; in fact, this is a wake-up call for all of Education. We must engage with these technologies and those who are developing them NOW in order to ensure that the AI that we end up with in classrooms, homes and the workplace is informed by what we know about learning and NOT what we know about what the technology can do.

There is a huge and growing interest among those who invest in new technology ventures, specifically in Artificial Intelligence (AI) techniques and methods. For example, between 2011 and 16 May 2016, Sentient Technologies received over 143 million USD in funding (data from CB Insights). Much of the excitement about AI has focused on general-purpose AI, i.e. intelligence that is applicable across a variety of industries and activities. This is being promoted by technology businesses as a force for good. For example, Antoine Blondeau, the CEO of Sentient, has stated that: “From healthcare to finance to e-commerce, we’re focused on changing people’s lives.” Sentient is reported to be working on financial platforms and on an AI nurse to diagnose patients with sepsis. It is a business that, like many adopting AI methods, has no problem in attracting funding.

However, the same is not yet true of organisations who are adopting AI for education. Yes, there are things like Udacity, which claims it will change HE, and Knewton, whose CEO Jose Ferreira really does believe that his technology will replace human teachers. Such an outcome would make ‘driverless classrooms’ a reality. These commercial AI in Education ventures are well funded. BUT it is hard to find mass investment in the application of AI to education, despite the fact that the Educational Technology sector is predicted to grow from £45bn to £129bn by 2020. And, to my mind much more significantly, despite the fact that education is the real key to changing people’s lives.

We need to take a fresh look at education if we are to ensure that the global population is able to reap the potential of the AI revolution that is sweeping across the workplace. AI is both a cause of the radical changes to the workplace that prompted David Matthews to write his piece in the THES and a provider of an answer to the problem of how we make the most of the workplace automation that AI is enabling. The purpose, methods and outcomes of education need re-thinking, and AI can help us to tackle this challenge if we invest in its development and build on the thirty-plus years of research in AI for Education.

The social and economic significance of the developments in autonomous systems and AI was reflected at the annual meeting of the World Economic Forum 2016 in Davos, where the focus was on ‘The Fourth Industrial Revolution’. This revolution “is characterized by a range of new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human.” These radical changes do not, however, seem to manifest themselves in a concerted effort to use AI to revolutionize education. This oversight is shortsighted to say the least. The few exceptions one can find where AI is being applied to education at some scale have a very narrow perspective and are a long way from changing people’s lives in the positive way that we want and need. For example, Knewton is just one of a host of companies who believe that subject knowledge is the key to unlocking education for all, through, for example, making artificially knowledgeable adaptive tutors that can personalize their content to meet an individual learner’s needs. This is all very well, but there is so much more to education than subject knowledge and so much more to AI than adaptive educational content.

So what are the key attributes of AI for Education that will enable it to start attracting the sort of investment that Horizons Ventures and Tata Communications have made in Sentient Technologies? What are the attributes of AI that will persuade research funders that AI for education is a subject they must prioritize, and that it must be a truly interdisciplinary enterprise that is not driven purely by technologists’ dreams? For a change, let’s focus on disadvantaged learners’ dreams and see if we can work with technology to turn these dreams into reality.

One key attribute of AI for Education is the ability that educationally driven AI techniques and algorithms bring to the analysis of the vast amounts of data about learners that are routinely harvested by the increasing number of technologies in the world around us, from CCTV to smartphones, wearable technologies and online courses such as MOOCs. For example, we can:

  • Conduct fine-grained analysis of learners’ skills and capabilities so that their development can be tracked at the student/employee, workplace, school, area, and country level;
  • Enable the collation of a dynamic catalogue of the best training and teaching practices across a range of environments and as a result enable us to educate and train the future workforce in an economically productive manner.

A second key attribute of educationally driven AI is that it can help us to tackle the toughest educational challenges, including learner achievement gaps, teacher skill shortages and continuous professional development for educators. If we think about the business of education for a moment, imagine the AI teaching assistant that can be used to stretch the brightest pupils, while the human teacher devotes their expertise to giving the less able learners the sensitive human support that they need in order to progress. The teacher would train their personal assistant to work in the way that the teacher and their students need, and would demand that the AI assistant explain the decisions it has made about students and the educational opportunities it has provided.

But perhaps what we need to focus on first is using AI systems that go beyond the machine learning and neural network techniques that dominate the work of the main AI protagonists within and beyond education, from Knewton to Google DeepMind. The type of AI we need within education is AI that enables the technology it powers to explain its reasoning, to justify its decisions and to negotiate with its users. This is the sort of AI technology that could help us address one of the toughest challenges within the current workplace: the lack of understanding about how humans can best work with AI systems so that the result is AI-augmented human intelligence that is greater than the sum of its parts. We need workers who understand how to make the best use of the power that AI automation can bring to industry and commerce. Workers who understand enough about AI to know where and how human intelligence can work with AI to achieve a blended intelligence that can increase productivity. And what is beautiful about all this is that the appropriate type of AI can help us educate and train people to understand enough about their AI colleagues to work alongside them effectively.



Here is what ‘smart’ looks like in an AI tutor


So, what would a piece of education technology driven by AIEd look like? Here is a simplified picture of a typical model-based adaptive tutor.



It is based on the three core models as described above: the learner model (knowledge of the individual learner), the pedagogy model (knowledge of teaching), and the domain model (knowledge of the subject being learned and the relationships between the different parts of that subject matter). AIEd algorithms (implemented in the system’s computer code) process that knowledge to select the most appropriate content to be delivered to the learner, according to their individual capabilities and needs.

While this content (which might take the form of text, sound, activity, video, or animation) is being delivered to the learner, continuous analysis of the learner’s interactions (for example, their current actions and answers, their past achievements, and their current affective state) informs the delivery of feedback (for example, hints and guidance), to help them progress through the content they are learning. Deep analysis of the student’s interactions is also used to update the learner model; more accurate estimates of the student’s current state (their understanding and motivation, for example) ensures that each student’s learning experience is tailored to their capabilities and needs, and effectively supports their learning.
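The cycle described above (the pedagogy model selects content from the domain model, the content is delivered, the learner's response is assessed, and the learner model is updated) can be sketched in a few lines of Python. This is a hypothetical, heavily simplified illustration: all class names, the mastery threshold, and the update rule are invented for this sketch, not taken from any real AIEd system.

```python
# A minimal sketch of a model-based adaptive tutor loop.
# All names, thresholds and update rules here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    """Knowledge of the individual learner: estimated mastery per topic."""
    mastery: dict = field(default_factory=dict)   # topic -> 0.0..1.0
    motivation: float = 0.5

@dataclass
class DomainModel:
    """Knowledge of the subject: topics ordered by prerequisite."""
    topics: list

class PedagogyModel:
    """Knowledge of teaching: picks the next content and the feedback."""
    def select_content(self, learner, domain):
        # Serve the first topic the learner has not yet mastered.
        for topic in domain.topics:
            if learner.mastery.get(topic, 0.0) < 0.8:
                return topic
        return None  # everything mastered

    def feedback(self, correct):
        return "Well done!" if correct else "Hint: revisit the worked example."

def tutor_step(learner, domain, pedagogy, answer_checker):
    """One cycle: select content, assess the response, update the model."""
    topic = pedagogy.select_content(learner, domain)
    if topic is None:
        return None, "Course complete."
    correct = answer_checker(topic)
    # Simple additive update; real systems use far richer inference.
    old = learner.mastery.get(topic, 0.0)
    learner.mastery[topic] = min(1.0, old + (0.3 if correct else 0.05))
    return topic, pedagogy.feedback(correct)
```

Running `tutor_step` repeatedly plays out the virtuous circle in miniature: each interaction both produces feedback for the learner and refines the learner model that drives the next content selection.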

Some systems include so-called Open Learner Models, which present the outcomes of the analysis back to the learners and teachers. These outcomes might include valuable information about the learner’s achievements, their affective state, or any misconceptions that they held. This can help teachers understand their students’ approach to learning, and allows them to shape future learning experiences appropriately. For the learners, Open Learner Models can help motivate them by enabling them to track their own progress, and can also encourage them to reflect on their learning.
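An Open Learner Model is essentially a view over the learner model's contents, rendered for the learner or teacher rather than for the algorithm. A minimal, hypothetical sketch (the report layout and parameter names are invented here for illustration):

```python
# Hypothetical Open Learner Model view: presenting the system's analysis
# back to the learner and teacher. Names and format are illustrative only.
def open_learner_report(mastery, misconceptions):
    """Render a learner's current state as a simple progress report."""
    lines = ["Progress report"]
    for topic, score in sorted(mastery.items()):
        bar = "#" * int(score * 10)
        lines.append(f"  {topic:<12} [{bar:<10}] {score:.0%}")
    if misconceptions:
        lines.append("Things to revisit: " + ", ".join(misconceptions))
    return "\n".join(lines)

print(open_learner_report(
    {"fractions": 0.7, "decimals": 0.3},
    ["confuses numerator and denominator"],
))
```

Even a view this simple supports the uses described above: the learner can track their own progress at a glance, and the teacher can spot the misconceptions worth addressing next.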

One of the advantages of adaptive AIEd systems is that they typically gather large amounts of data, which, in a virtuous circle, can then be computed to dynamically improve the pedagogy and domain models. This process helps inform new ways to provide more efficient, personalised, and contextualised support, while also testing and refining our understanding of the processes of teaching and learning.

In addition to the learner, pedagogical, and domain models, AIEd researchers have also developed models that represent the social, emotional, and meta-cognitive aspects of learning. This allows AIEd systems to accommodate the full range of factors that influence learning. Taken together, this set of increasingly rich AIEd models might become the field’s greatest contribution to learning.



This post is an adapted extract from Intelligence Unleashed published by Pearson.