AI and Education: the Reality and the Potential

This is the script from a talk I gave at the Museum of London

[Slide 1]

You can watch a video of the talk here: https://www.gresham.ac.uk/lectures-and-events/ai-education-reality-potential

While I was enjoying my morning latte on the tube earlier this month, I spotted this headline in the New Scientist: 'AI achieves its best ever mark on a set of English exam questions', that is, questions drawn from the knowledge-based curriculum and its exams. This is significant in three important ways, and these are also the three ways that I want to discuss AI and education with you this evening.

Firstly, it demonstrates the power of the AI that we can build to learn and to teach what we currently value in our education systems. This speaks to my first point, which is about the way AI can support teaching and learning.

 

Secondly, if this is headline news, then it demonstrates that we do not know enough about AI, because passing an exam is exactly the kind of problem that AI is suited to solving, and we should not be surprised that AI can do this. It should be something we take for granted, because we should all understand enough about AI to know the basics of what it can and cannot achieve.

 

Thirdly, this headline draws our attention to the fact that we can build AI that can achieve what we set our students to achieve. The AI will get better and faster at this, and it is therefore not intelligent to continue to educate humans to do what we can automate. We need to change our education systems to value our rich human intelligence. This need to change what and how we teach is also connected with the way that AI powers the automation that is changing our lives at some pace. We need very different skills, abilities and intelligences to thrive in the modern world. One only has to look at our current political failure in the UK to see that the much-heralded education that we have provided for the last century has not given our politicians the emotional and social intelligence, and the ability to solve problems collaboratively, that the modern world requires. The need to change the what and how of teaching will be my third area for discussion tonight.

 

AI is empowering automation and the Fourth Industrial Revolution, and its impact on education will be transformative. But what is this thing called AI?

 

A basic definition of AI is one that describes it as ‘technology that is capable of actions and behaviours that require intelligence when done by humans’. We may think of it as being the stuff of science-fiction, but actually it’s here and with us now from the voice-activated digital assistants that we use on our phones and in our homes, to the automatic passport gates that speed our transit through airports and the navigation apps that help us find our way around new cities and cities that we know quite well. We use AI every day, probably without giving it a thought.

 

The desire to create machines in our own image is not new; we have, for example, been keen on creating mechanical 'human' automata for centuries. However, the concept of AI was really born 63 years ago, in the summer of 1956, when 10 scientists at Dartmouth College in New Hampshire spent two months working to create AI. If we look at the premise for this two-month study, we see that it is a premise that believes that 'every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.' And, although it seems incredibly arrogant now, the belief was that over a two-month period the team of scientists would be able to make 'a significant advance … in one or more of these problems.'

 

Following on from this there were some early successes. For example, expert systems were used for tasks such as diagnosis in medicine. These systems were built from a series of rules through which the symptoms a patient presented with could be matched to potential diseases or causes, so enabling the doctor to make a decision about treatment. These systems were relatively successful, but they were limited because they could not learn. All of the knowledge that an expert system could use to make decisions had to be written in at the time the computer program was created. If new information was discovered about a particular disease or its symptoms, then the expert system's rule base had to be changed to encompass it. By the 1980s and 90s we had built useful systems, but we were certainly nowhere near the dreams of the 1956 Dartmouth College workshop. We plunged into what has been described as an AI winter, in which little significant progress was made and disappointment was felt by those who had such high expectations of what could be achieved.

 

Then in March 2016 came a game-changing breakthrough. A breakthrough that was based on many years of research. A breakthrough that was made when Google DeepMind produced the AI system called AlphaGo, which beat Lee Sedol, the world 'Go' champion. This was an amazing feat. A feat that could seem like magic, and whilst many of the techniques behind these machine learning algorithms are very sophisticated, these systems are not magic and they have their limitations. Smart as AlphaGo is, the real breakthrough was due to a combination that one might describe as a perfect storm: our ability to capture huge amounts of data, combined with the development of very sophisticated AI machine learning algorithms, plus affordable computing power and memory. These three factors, when combined, gave us the ability to produce a system that could beat the world Go champion. Each element of that perfect storm is important: the data, the sophisticated AI algorithms, and the computing power and memory. But it is the data that has captured the imagination, and that has led to claims that 'data is the new oil', because it is the power behind AI, and AI is a very profitable business, just like oil.

 

However, it's important to remember that, just like oil, data is crude and must be refined in order to derive its value. It must be refined by these AI algorithms. But even before the data can be processed by these algorithms, it must be cleaned. So, just like oil, there is a lot of work that needs to be done on the data before its value can be reaped. And even when we do reap this value, it's important to remember that machine learning is still basically just a form of pattern matching. Machine learning is certainly smart, very smart indeed, but it cannot learn everything.
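To make the 'pattern matching' point concrete, here is a minimal sketch of one of the simplest machine learning methods, a nearest-neighbour classifier. The data and labels are invented purely for illustration; the point is that the system labels a new example by finding the most similar pattern it has already seen, with no understanding of what the numbers mean.

```python
# Machine learning as pattern matching: a 1-nearest-neighbour classifier
# labels a new point by finding the most similar training example.
# All data below is invented for illustration.

def distance(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbour_label(training_data, new_point):
    # Return the label of the closest previously seen example
    closest = min(training_data, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Hypothetical 'exam answer' features: (answer length, keyword overlap score)
training_data = [
    ((120, 0.9), "pass"),
    ((40, 0.2), "fail"),
    ((100, 0.7), "pass"),
    ((30, 0.1), "fail"),
]

print(nearest_neighbour_label(training_data, (110, 0.8)))  # prints "pass"
```

The classifier 'learns' only in the sense that it stores examples and matches against them; anything outside the patterns in its data is invisible to it.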

 

AI has its limitations. For example, AI does not understand itself and struggles to explain the decisions that it makes. It has no common sense. If I ask you, the audience, this evening: are you an empathetic friend? How well do you understand quantum physics? How are you feeling right now? Can you meditate? You will not struggle to answer, but AI would. So, the first important point to remember is that humans are intelligent in many ways. AI and Human Intelligence (HI) are not the same, and the differences are extremely important, even though we have built our AI systems to be intelligent in the ways that we perceive value in our human intelligence.

 

I remember, in the early days of studying AI, the first grandmaster-level chess-playing computer had been built and had beaten world champion Garry Kasparov. This seemed an amazing feat, and there were people who thought that, having cracked chess, which could be described as the pinnacle of human intelligence (intelligent people play chess, after all), we had cracked intelligence itself. And then people realised that the abilities we take for granted, such as the ability to see, are far harder to achieve than playing chess. Decades later, we have managed to build AI systems that can see, to an extent, but they still have their limitations.

 

What we need, if we are to progress and grow our human intelligence, is to make sure that we recognise the need for humans to complement AI, not to mimic and repeat what the AI can do faster and more consistently than we can.

 

And so, what are the implications: the potential and the reality of AI within education? I believe that it is useful to think about this question from three perspectives:

 

1: using AI in education to tackle some of the big educational challenges;

2: educating people about AI so that they can use it safely and effectively;

3: changing education so that we focus on human intelligence and prepare people for an AI augmented world.

 

It is important to recognise that these three elements are far from being mutually exclusive; they are interrelated in important ways.

 

Let us start with using AI in education to tackle some of the big educational challenges. Challenges such as the achievement gaps we see between those who achieve well educationally and those who do not. And challenges such as those posed by learners with special and particular learning needs. If we start by looking at the reality of the systems that are available here and now to help us tackle some of these challenges, then we will see the beginnings of the potential for the future. To start the ball rolling, I am going to hand over to my friend Lewis Johnson, who runs an AI company in the US called Alelo. He can explain to you far better than I can exactly what is happening when it comes to data, AI and computing power in education.

 

Play video clip

 

Well, you heard it there from Lewis: data has been a game changer when it comes to educational AI. And that's true for companies working here in the UK too. Take the London-based Century Tech: they have developed a machine learning platform that can personalise learning to the needs of individual students across curriculum areas to help them achieve their best. Their machine learning is informed by what we understand from neuroscience about the way the human brain learns. A further reality is that, in addition to being able to build intelligent platforms such as Century, we can build intelligent tutors that can provide individual instruction to students in a specific subject area. These systems are extremely successful: not as successful as a human teacher teaching another human on a one-to-one basis, but well-designed AI can be as effective as a teacher teaching a whole class of students.

 

In addition to intelligent platforms and intelligent tutoring systems, there are many intelligent recommendation systems that can help teachers to identify the best resources for their students to use, and that help learners identify exactly what materials are most suitable for them at any particular moment in time. And it is not just in learning particular areas of the curriculum that AI can make a big difference. AI can also help us to build our cognitive fitness, so that we have good executive functioning capabilities: so that we can pay attention when needed, remember what we learn and focus on what needs to be done. One system, MyCognition, for example, enables each person who uses it to complete a personal assessment of their cognitive fitness and then train themselves using a game called Aquasnap. AI helps Aquasnap to individualise training according to the needs of the particular person who is playing.

 

Finally, just in case you thought the reality of AI was only for adults, think again. This example from Oyalabs is a room-based monitor that can track the progress of a baby and provide that baby’s parents with individual supportive feedback to help them support their child’s development as effectively as possible.

 

That’s the reality of what’s available here and now when it comes to AI for education.

 

But what about the potential for the future? You'll remember that I mentioned before that data can be described as the new oil and that it is the power behind AI. You heard Lewis talk about the way that data has been a 'game changer' for AI in education. And data can also be the power behind human intelligence. We can collect data in many, many ways, from our interactions with our smartphones to wearable technologies that track our heart rate, temperature, pulse, the speed of our movement and the length of our stillness. We can collect data about our interactions with technologies in traditional ways, we can collect data passively through cameras that observe what is happening, and we can collect data from technologies that are embedded in the clothes that we wear.

 

There are, of course, many important ethical implications associated with collecting data on this scale, and these need to be addressed. But data collection at this scale is already happening, and it is important to think about how this data could power education systems, not just systems designed to influence our spending or voting habits. If we accept the premise that data is the new oil, and we are willing to invest the time in cleaning the data, then the final ingredient we need to add, if we are to meet the potential that AI can bring to teaching and learning, is to design the AI algorithms that process the data in a way that is informed by what we understand from research in the learning sciences, such as psychology, neuroscience and education. If we get this right, then we can turn the sea of data that is generated as people interact in the world into an intelligence infrastructure that can power all of the educational interactions of an individual.

 

This intelligence infrastructure can empower what we do with our smartphones, laptops, desktops, robotic interfaces, and virtual and augmented reality interfaces, and of course what we do when we sit alone reading and working through books, or when we interact with another person as part of our learning process. This intelligence infrastructure can tell us about how we are learning, about the process of learning, about where we are struggling and where we are excelling, based on extremely detailed data and smart algorithmic processing informed by what we understand about how people learn.

 

This intelligence infrastructure can also be used to power technologies that support people with disabilities, and in so doing help improve equality and social justice. We will be able to build intelligent exoskeletons, we can build intelligent glasses that help the blind to see, and we will be able to tap into the processing that is happening in the brain, allowing people to think what they want to happen on the computer screen and see it happen. But we need to remember, as I highlighted earlier, that there are ethical implications here. The potential for good is great, but unfortunately so is the potential for bad. Technologies that can be embedded in the body and tap into the brain bring a danger of what Yuval Noah Harari calls 'hacking humans'.

 

So, what about the second implication of AI for education? This implication is about educating people about artificial intelligence, so that they can use it safely and effectively. This tree diagram summarises the three key areas in which I believe we need to educate people about AI. We need everyone to have a basic understanding of AI, so that they have the skills and the abilities to work and live in an AI-enhanced world. This is not coding; this is understanding why data is important to AI and what AI can and cannot achieve. We also need everyone to understand the basics of ethics, but we need a small percentage of the population to understand a great deal more about this, so that they can take responsibility for the regulatory frameworks that will be necessary to try to ensure that ethical AI is what we build and use. And then there is the real technical understanding of AI that we need in order to build the next generation of AI systems. Again, a small percentage of the population will need this kind of expert subject knowledge.

 

I would like to dwell for a moment on the ethical aspect of that tree diagram. There are many organisations exploring ethics and AI, or ethics and data. I find it useful, when thinking about ethics, to break the problem down into different elements. Firstly, there is the data that powers AI. Here, we need to ask questions such as: who decided that this data should be collected? Has that decision been driven by sound ethical judgement? Who knows that this data is being collected, and who has given informed consent for this data to be collected and used? What is the purpose of this data collection, and is it ethical? What is the justification for collecting this data, and is it sound? We must always remember that we can say no.

 

Next, we need to consider the processing that happens when the machine learning algorithms get to work. Have these algorithms been designed in a way that is informed by a sound understanding of how humans learn? Have they been trained on datasets that are biased, or are the datasets representative of the population for whom the processing is being done?
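One simple check of the kind implied by that last question can be sketched in code: compare the make-up of a training dataset against the population it is meant to serve. The group names and counts below are invented for illustration; a real audit would use the actual demographic categories relevant to the system.

```python
# A sketch of a basic dataset-bias check: compare each group's share of the
# training data with its share of the population being served.
# A large negative gap flags under-representation. All figures are invented.

def representation_gaps(dataset_counts, population_shares):
    total = sum(dataset_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        gaps[group] = data_share - pop_share  # negative => under-represented
    return gaps

# Hypothetical: student records by age band in a training set, versus the
# shares of those bands in the school population the system will serve.
dataset_counts = {"11-13": 700, "14-16": 250, "17-18": 50}
population_shares = {"11-13": 0.45, "14-16": 0.35, "17-18": 0.20}

for group, gap in representation_gaps(dataset_counts, population_shares).items():
    print(f"{group}: {gap:+.2f}")
```

Here the 17-18 band is clearly under-represented, so any model trained on this data would be likely to serve those students less well.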

 

And finally, there is the output: the results of the processing we have done through our AI algorithms. Is the output suitable for the audience? Is it genuine or is it fake? What is happening when that output is received by the human interlocutor? Are we collecting more data about their reactions to this output?

 

There are many questions to be asked about the ethics involved in AI and education, and here I have just scratched the surface, but it's important to highlight that the ethical issues are extremely important. This is the reason I co-founded the Institute for Ethical AI in Education: because we believe that it's an area that needs far more attention. We will be working towards the design of regulatory frameworks, BUT it's important to remember that education will always be crucial, because regulation will never be enough on its own. We simply cannot keep up with those who want to do harm through the use of AI. We must therefore ensure that everyone is educated enough to keep themselves safe.

 

Finally, we come to the third category of implications from AI and education: changing education so that we can focus on human intelligence and prepare people for an AI world.

 

Many people, including the World Economic Forum, are telling us that we are now entering the Fourth Industrial Revolution: the time when many factors across the globe, including the way that AI is powering workplace automation, are changing the workplace and our lives for ever. There is much media attention on this Fourth Industrial Revolution, with some coverage making positive predictions, such as these from Australia suggesting that we will have two hours more free time each week, because some of the more tedious aspects of our jobs will be automated. Our workplaces will be safer, and jobs will be more satisfying as we learn more.

 

Not everyone is as optimistic, and there are an increasing number of reports that consider the consequences for jobs of the increased automation taking place in the workplace. This is an example from a report called 'Will robots really steal our jobs?' published in 2018 by PwC. We can see from this graph from the report that transportation and storage appear to be the areas of the economy where most job losses will occur, while education will be the least prone to automation. We could interpret that as meaning that education will not change. However, I believe that education will change dramatically. It will change as we use more AI, and it will change as what we need to teach changes in order to ensure that our students can prosper in an AI-augmented world. And if we look at the second chart, it is perfectly clear that the impact will not be felt by everyone equally. Of course, those with higher education levels will be least vulnerable when it comes to automation and job loss. We therefore need to provide particular support for those with lower levels of education.

 

Personally, I do not think all these reports are that useful, interesting as they are. We humans are rather poor at prediction, and the differences of opinion across the different reports indicate the complexity of predicting anything in such fast-changing circumstances. Trying to work out what to do for the best in a changing world is a little like driving a car in dense fog along a road that you don't know. In these circumstances, a map of the road ahead has limited use. What we really need is to know that we have a car that is well equipped, with brakes that work and lights that work. We want to be warm, and we want to know that, as a driver, we understand how to operate the car, that we understand the rules of the road, that we have eyesight good enough to help us see in the limited visibility ahead, and that we can hear what is going on so that we can spot any impending dangers that are indicating their presence by being noisy. A huge truck thundering towards us, for example.

 

So, what is the equivalent of this good car and good driver when it comes to what we need in order to find our way through the fog of uncertainty around the Fourth Industrial Revolution? This is a subject that I have studied and written about quite a lot, and a subject that is covered in my book Machine Learning and Human Intelligence: The Future of Education for the 21st Century. Here I can only skim over the way that I unpack the intelligence that we need humans to develop if we are to find our way through this foggy landscape. This is the intelligence that can help us to cope with the uncertainty and to differentiate ourselves from AI systems. It is an interwoven model of intelligence that has seven interacting elements:

 

The first element of this interwoven intelligence is interdisciplinary academic intelligence. This is the stuff that is part of many education systems at the moment. However, rather than considering it through individual subject areas as we do now, we need to consider it in an interdisciplinary manner. Complex problems are rarely solved through single-disciplinary expertise; they require multiple experts to work together. The world is now full of complex problems, and we need to educate people to be able to tackle them effectively. We therefore need to help our students see the relationships between different disciplines. We need them to be able to work with individuals who have different subject expertise and to synthesise across these disciplines to solve complex problems.

 

Secondly, we need to help our students understand what knowledge is, where it comes from, and how we identify evidence that is sound enough to justify believing that something is true. I refer to this as meta knowing, but of course we can use the terminology of epistemology and personal epistemology to describe it.

 

The third element of intelligence that we really need to develop in very sophisticated ways is social intelligence. It is very hard for any artificially intelligent system to achieve social intelligence, and it is fundamental to our success, because we will increasingly need to collaborate in order to solve the kinds of complex problems with which we will be faced on a daily basis.

 

Fourthly, we need to develop our meta-cognitive intelligence. This is the intelligence that helps us to understand how we learn, how we can control our mental processes, how we can maintain our focus and spot when our attention is skidding away from what it is we are trying to learn. These metacognitive processes are fundamental to sophisticated intelligence, and they too are hard for AI to achieve.

 

The fifth element of intelligence we must consider is our meta emotional intelligence. This is what makes us human. We need to understand the subjective emotional experiences we sense, and we need to understand the emotional perspectives of the others with whom we interact in the world. This emotional intelligence is also hard for AI. AI can simulate some of it, but it cannot actually feel and experience these emotions.

 

We also need to recognise the importance of our physical presence in the world and the different environments with which we interact. We humans are very good at working out how to interact intelligently in multiple different environments. This meta contextual intelligence is something at which we can excel, and something with which AI has great trouble. Context here means more than simply physical location: it also means the people with whom we interact, the resources that are available to us, and the subject areas that we need to acquire and apply in order to achieve our goal.

 

 

If we can build these interwoven elements of our human intelligence, then we can really achieve what is important for the future of learning, and that is accurate perceived self-efficacy. By this I mean that we can see how we can be effective at achieving a particular goal: identifying what that goal consists of, what aspects of it we believe we can achieve now, and what aspects we need to learn about and train ourselves to achieve. In order to have this self-efficacy, we must understand and be able to apply all the elements of intelligence, so that we can work across and between multiple disciplines, with other people, with effective control and understanding of our mental and emotional processes.

 

Let me take a moment to stress something important here. This is about intelligence. It is not about 21st-century skills or so-called soft skills. It is about something much more foundational than any skill or knowledge: our human intelligence. I also want to emphasise that we can measure the development of our intelligence across all seven elements. They can all be measured, and importantly they can all be measured in increasingly nuanced ways through the use of AI. This enhanced and continual formative assessment of our developing intelligence will shed light on aspects of intelligence and humanity that we have not been able to evidence before. We can use our AI to help us to be more intelligent, and this is very important.

 

The truth of the matter is that being human is extremely important. The very aspects of our humanity, the aspects that we do not measure, but that are fundamental to what it means to be human, are the ones that we are likely to need more of in the future. For example, empathy, love and compassion. If I ask you to look at these pictures – what do you feel?

 

In the words of Yuval Noah Harari, from 21 Lessons for the 21st Century: "…if you want to know the truth about the universe, about the meaning of life, and about your own identity, the best place to start is by observing suffering and exploring what it is."

 

Can AI do this, or will it ever be able to? It is important that we ensure that we still can.

 

And now, if we look at these pictures – again, I ask you what do you feel? How do those feelings impact upon the way you might behave? We undervalue these aspects of humanity when it comes to our evaluations of intelligence, and yet I would suggest that it is our human emotional and meta intelligence that enables us to feel horror at human suffering and pleasure at human love.

 

This holistic set of interwoven intelligence elements enables us to be human, and AI can help us both to develop the sophistication of our intelligence, across all its elements, and to assess, and yes, to measure much of this. If that is what we want to do. What we need is good data, and smart AI algorithms that have been designed in a way that is informed by what we know about how humans learn.

 

We can collect data from a vast array of sources these days. We can collect it as people interact in the world, even when they do not realise they are interacting with technology. We are no longer restricted to collecting data explicitly through the interfaces of our desktop, laptop, tablet and smartphone technologies. We can collect it through observation, wearable technology and facial recognition.

 

For example, in research conducted at the Knowledge Lab with my colleague Dr Mutlu Cukurova, we collected a range of data in our attempts to identify signifiers of collaborative problem-solving efficacy that would be susceptible to AI collection and analysis. As you can see from this visualisation of all the data sources we were able to capture, the raw picture is complex and does not, on its own, tell us anything of great interest.

 

However, the social sciences provide evidence that factors such as the synchrony of individual group members' behaviours can signify positive collaboration, and we can use this learning to inform the design of the AI algorithms that process the data. We therefore analysed the eye-tracking and hand-movement data that we collected through this research test rig. We found that there was indeed a greater instance of synchrony of eye gaze and hand movements between different members of a group when that group was behaving in a positively collaborative manner, as assessed by an independent expert.
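The kind of synchrony measure described above can be sketched very simply: correlate two group members' gaze traces over time, with high positive correlation taken as one possible signifier of joint attention. This is not the actual analysis from the study; the gaze signals below are invented, and a real rig would supply eye-tracker samples.

```python
# A simplified, hypothetical synchrony measure: the Pearson correlation
# between two learners' horizontal gaze positions over time. Values near
# 1.0 suggest the learners are looking at the same things at the same time.

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length number sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Invented horizontal gaze positions for two learners, sampled over time.
learner_a = [0.1, 0.3, 0.5, 0.7, 0.6, 0.4, 0.2]
learner_b = [0.2, 0.35, 0.55, 0.65, 0.55, 0.35, 0.25]  # tracks learner_a closely

synchrony = pearson(learner_a, learner_b)
print(f"gaze synchrony: {synchrony:.2f}")  # close to 1.0 => highly synchronous
```

In practice such a signal would be computed over sliding time windows and combined with other signifiers, such as hand-movement synchrony, before being interpreted.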

 

This is just one example of a small signifier, but when combined with a battery of other detailed signifiers, we can start to generate accurate and nuanced accounts of what is happening as people learn. Accounts that can be extremely useful to teachers and learners. AI can help us to track and support the development of our human intelligence in very sophisticated ways.

 

But, what does this mean for teaching?

Numeracy and literacy, including data literacy, will of course remain fundamental to all education, as will the basics of AI;

Emphasis for the remaining subject areas needs to be on what these subjects are, how they have arisen, why they exist, how to learn them and how to apply them to solve complex interdisciplinary problems;

Debate and Collaborative Problem Solving provide powerful ways to help students understand their relationships to knowledge and to hone their ability to challenge and question;

 

To ensure that teachers and trainers have the time to work with their students and trainees to develop these complex skills, we can use AI to:

Provide tutoring for numeracy, literacy (including data literacy) and basic subject knowledge;

 

We then blend this with our human-intelligent teachers, who can refine this understanding through activities such as debate and collaborative problem solving, and who can develop learners' social and meta intelligence (meta-cognitive, meta subjective, meta contextual and accurate perceived self-efficacy);

 

And then the finesse to this piece – we use the AI to analyse learner and learning data, so that teachers know when to provide optimal support and learners get to know themselves more effectively.

 

I find that decision makers in education are very risk averse and often do not want to make big changes, because they are concerned that such changes might disadvantage the people who are part-way through their education when the change hits. I can understand this. However, if we do not make big changes, the consequences are likely to be worse, and the risks much greater. As the FT expressed it in 2017:

 

“The risk is that the education system will be churning out humans who are no more than second-rate computers, so if the focus of education continues to be on transferring explicit knowledge across the generations, we will be in trouble.” (Financial Times 2017).

 

This would be a retrograde step indeed, and would take us back to the first instances of robots, as seen here in this image from a play by the Czech writer Karel Čapek, who introduced the word 'robot' in 1920 to describe a race of artificial humans in a futurist dystopia.

 

To sum things up as we draw to a close: we need to make three things happen. We need to use AI to tackle educational challenges, to educate people about AI, and to prioritise the development of our uniquely human intelligence. To do this we need partnerships between educational stakeholders to build capacity.

 

Partnerships of the type that we build through the EDUCATE programme. These partnerships generate the golden triangle that is the foundation of impactful, high-quality educational technology (including AI) design and application. The idea of the Golden Triangle formed at a meeting in January 2012, when I was talking with my research colleagues Mike Sharples and Richard Noss, along with Clare Riley from Microsoft and Dominic Savage from BESA. We had met up with some educators and were puzzling over why the UK educational technology business was not better connected to researchers and educators. This was the birth of the triangle, the points of which are the EdTech developers, the researchers and the educators: all of whom need to be brought together to develop and apply the best that technology can provide for education.

 

The triangle is golden because it is grounded in data. It is a triangle because it connects the three key communities: the people who use the technology, the people who build the technology and the people who know how to evidence the efficacy of the technology for learning and/or teaching.

 

This golden triangle is at the heart of what needs to be done for AI to be designed and used for education in ways that will support our educational needs. It is the triangle at the heart of the partnerships that engage the AI developers, most of whom do not understand learning or teaching, with the educators, most of whom do not understand AI, and the researchers, who understand both AI and learning and teaching. It is this co-design partnership that will drive better AI for use in education; more educated educators, who can drive the changes to the way we teach and learn that are required for the fourth industrial revolution and also help their students understand AI; and more educationally savvy AI developers.

 

The Reality and the Potential of AI is that:

 

AI is smart, but humans are and can be way smarter.

 

There are three ways AI can enhance learning and teaching:

  • Tackle Educational Challenges using AI;
  • Prioritize Human Intelligence;
  • Educate people about AI: Attention to Ethical AI for Education is essential;

 

Partnerships are the only way we can achieve this.

 

Many thanks for coming to hear me speak this evening. I hope that I have piqued your curiosity and that you will have many questions to ask me.

 

Education for a Changing World: the implications of AI for Education

I am really delighted to be speaking at a fascinating event in Sydney, New South Wales where they are embracing the need to plan for the impact of AI in Education. The symposium is part of the Education for a Changing World project of the NSW Department of Education that is examining the implications that advances in technology will have for education. The project aims to stimulate informed discussions about how we should be preparing children to thrive in an increasingly complex and interconnected world.

EducationForAChangingWorld

The project’s first discussion paper explores some of the department’s initial thinking about the challenges of the big technological, economic, demographic and social shifts occurring around the globe. In addition, the department has established the Education: Future Frontiers Occasional Paper Series to widen the evidence base and the case for change. The paper I contributed is reproduced below:

Occasional Paper: The implications of Artificial Intelligence for teachers and schooling

Rose Luckin, Professor of Learning with Digital Technologies, University College London Institute of Education’s Knowledge Lab

 

Most people in countries where modern technology is widely used will be interacting with Artificial Intelligence (AI) through its many practical applications in computers that have visual capabilities, that can learn, solve problems, make plans, and understand and produce natural language, both spoken and written. These AI applications are used in areas such as medical diagnosis, language translation, face recognition, autonomous vehicle design and robotics.

 

AI is also already being applied in educational settings. For example, Alelo has been developing culture and language learning products since 2005 and specialises in experiential digital learning driven by virtual role-play simulations powered by AI. Carnegie Learning produces software that can support students with their mathematics and Spanish studies. In order to provide individually tailored support for each learner, the software must continually assess each student’s progress. The assessment process is underpinned by an AI-enabled computer model of the mental processes that produce successful and near-successful student performance.

 

UK-based Century Tech has developed a learning platform, with input from neuroscientists, that tracks students’ interactions down to every mouse movement and keystroke. Century’s AI looks for patterns and correlations in the data from the student, their year group and their school to offer a personalised learning journey for each student. It also provides teachers with a dashboard, giving them a real-time snapshot of the learning status of every child in their class.

 

These examples merely scratch the surface of what is possible with AI. The purpose of this paper is to explore how AI is relevant to education and what AI can contribute to teaching and learning to help students and educators progress their understanding and knowledge more effectively.

 

The relevance of AI to education

 

In order to benefit from the potential advantages of AI – from personalised cancer treatment specified according to individual genetic profiles generated by AI to workplace automation that increases productivity – we must attend to the needs of education as a matter of urgency.

 

To be blunt, none of the potential AI benefits will be achieved at scale unless we address education and AI now. The nature of what needs to be done is summarised in Figure 1, which illustrates the elements involved in the AI and education knowledge tree. There are two key dimensions that need to be addressed:

 

  1. How can AI improve education and help us to address some of the big challenges we face?
  2. How do we educate people about AI, so that they can benefit from AI?

 

This paper will examine both of these dimensions in turn.

Figure 1: The AI and Education Knowledge Tree

The Tree of AI

 

Dimension 1. Addressing educational challenges with AI

 

The thoughtful design of AI approaches to educational challenges has the potential to provide significant benefits to educators, learners, parents and managers. But it must not start with the technology; it must start with a thorough exploration of the educational problem to be tackled.

 

A clear specification of the problem provides the basis on which a well-designed solution can be developed. Only when a solution design exists, can we start to consider what role AI can best play in that solution and what type of AI method, technique or technology should be used within that solution. There is an obvious and important role for teachers in the pursuit of a problem specification and solution design. Without this enterprise, the technologists cannot design effective AI solutions to the key educational challenges recognised across the globe.

 

Identifying the problem

 

For the purposes of this paper, I’ll take the Oxford dictionary definition of AI: computer systems that have been designed to interact with the world through capabilities (for example, visual perception and speech recognition) and intelligent behaviours (for example, assessing the available information and then taking the most sensible action to achieve a stated goal) that we would think of as essentially human.

 

AI is an interdisciplinary area of study that includes psychology, philosophy, linguistics, computer science and neuroscience. The study of AI is complex and the disciplines are interlinked as we strive for a greater understanding of human intelligence as well as attempting to build smart computer technology that behaves intelligently.

 

A key aspect of this definition that is often overlooked is the initial statement about an AI being a computer system that has been designed to interact with the world in ways we think of as human and intelligent. In current discussions of AI in the media, for example, we tend to focus on the AI technology rather than the problem and the design process that has preceded and informed the implementation of the AI technology. This is ironic, because the most important aspect of AI is the identification of the problem to which intelligence is to be applied and the design of a clear understanding and representation of that problem.

 

Without this problem specification process, there is no chance of developing a good solution to which AI technology can be applied. The AI designer must have a good understanding of the problem AI is supposed to solve, as well as the type of AI technique that might be appropriate. The features of the problem must be specified along with the features of the environment in which the AI must operate. Once we recognise the importance of the AI design stage we can start to unpack the relevance of AI to teaching and learning and the vital role that educators need to play if AI is to meet its potential in the benefits it can provide to education.

 

I remember that when I was an undergraduate studying AI, one of the hardest final-year examinations was a paper that we could complete outside normal exam conditions, over a three-day period. The paper presented candidates with a selection of problems: for example, a complex of road junctions where fluctuating traffic flow rates and poor visibility had resulted in a series of accidents, or a teacher who needed to provide support for a class of language learning students who were all at very different levels of proficiency.

 

As students we were required to select one of the problems, describe the problem as we understood it, including any assumptions we were making, develop a potential solution and design the AI techniques and technologies that could be used to implement our proposed solution. The first example problem requires predominantly a planning or possibly a computer vision approach, whereas the second is more likely to be concerned with knowledge collation and representation, and possibly also knowledge elicitation. We were not required to implement any technology or write any code; the paper was designed to test our design skills.

 

My point here then is that when we ask how AI can contribute to teaching and learning, we need to start from the problems that we believe need to be tackled.

 

Designing solutions

 

Thinking about the problem specification and solution design stage of AI should prompt us to consider how AI could help us transform problematic educational activities and bring about changes to the working lives of teachers: changes that would make best use of teachers’ uniquely human skills and abilities, and that would remove much of teachers’ routine administration, record keeping and assessment work.

 

Before looking at examples of the changes that could be made to the job of being a teacher, it is important to consider briefly the changes to the workforce that are likely to occur, partly due to the automation brought about by AI. Schools will need to ensure that they equip students to be effective in the future workforce and educators will therefore need to know which skills, abilities and knowledge are most valuable for their students to learn.

 

The impact of technology, particularly automation, on employment is currently a key topic of debate in much of the western world. Predictions about the future pace of technological change due to AI have historically been over-optimistic. In fact, the jobs and skills composition of a workforce has tended to change only gradually over time.[1] The most dramatic historical shift was from agriculture to industry, rather than any ICT-driven transformation. Current estimates of the impact of future automation on the number of jobs and the types of jobs most at risk vary. See Marc Tucker’s essay Educating for a digital future – the challenge, part of this series, for detailed consideration of these issues.

 

Some jobs are more likely to be augmented by AI rather than replaced through the automation of specific tasks. For example, lawyers routinely conduct document reviews, which is a task that can be automated in some contexts. However, lawyers also provide advice to their clients and complete negotiations for their clients and these tasks are much harder to automate. Not only does this suggest that there is not a clear one-to-one relationship between a job lost and a task automated, but also that the coordination of the different tasks between machines and humans may be a new job in its own right. The situation is made more complex by the many factors at play beyond automation: globalisation, environmental sustainability, urbanisation, increasing inequality, political uncertainty for example.

 

The only thing we can be sure of is that the future workplace will be uncertain and unpredictable, and that our students will therefore need to be able to cope with this uncertainty: to be resilient, flexible, lifelong learners. The way to achieve this is to focus on individuals as learners and enable them to be effective for themselves, and with and for others and society too.

 

The key skill people will need for their future working lives will be self-efficacy. By this I mean that every individual needs an evidence-based and accurate belief in their ability to succeed in specific situations and to accomplish tasks, both alone and with others. A person’s sense of self-efficacy plays a key role in how they tackle tasks and challenges, and how they set their goals, both as individuals and as collaborators. It is something that can be taught and mentored, and it requires an extremely good knowledge of what one does and does not know, what one is and is not so good at, where one needs help and how to get that help. This self-knowledge is not just about subject-specific knowledge and understanding, but also about one’s wellbeing, emotional strength and intelligence.

 

This self-knowledge and efficacy is particularly important because these are skills that AI cannot replicate. No AI developed to date understands itself, no AI has the human capability for metacognitive awareness and self-knowledge. We must therefore ensure that we develop our knowledge and skills to take advantage of what is uniquely human and use AI wisely to do what it does best: the routine cognitive and mechanical skills that we have spent decades instilling in learners and testing in order to award qualifications.

 

The implications of this for school systems, the curriculum and teaching are profound, and educators must engage in discussing what needs to change as a matter of urgency. This is not a job for the technologists alone; but if we do not motivate educators to engage in discussions about what AI could and should be used for in education, the large technology companies may usurp the educators and occupy the AI vacuum that a lack of engagement will produce.

 

Leveraging AI to enhance teaching and learning

 

What I hope is clear from the discussion about the future of the workforce is that we need to review what and how we teach and ensure that AI is designed and used as a tool to make our students (and ourselves) smarter, not as a technology that takes over human roles and dumbs us down. To achieve this we need to concentrate on developing teaching and schooling that develops the uniquely human abilities of our students as well as instilling within them the requisite subject knowledge in a flexible, interdisciplinary and accessible manner.

 

The parallel in teaching is that we need AI assistants to relieve teachers of the routine, automatable parts of their job and enable them to focus on human communication, sensitive scaffolding and supporting the wellbeing of their students, so that students can build the self-knowledge and self-efficacy that will ensure they are able to advance in their chosen workplace.

 

Three examples of the ways in which teaching and schooling could be re-imagined are presented below; each is driven by a significant educational challenge.

 

Example 1: Assessing what can’t be automated, not what can

The current outdated assessment systems that prevail across the world revolve around testing and examining the routine cognitive subject knowledge that can easily be automated. These assessment systems are also ineffective, time consuming and the cause of great anxiety for learners, parents and teachers. However, there is now an alternative due to the potential information we can gain from combining big data and AI and applying it to the problem of assessing learning. There is a rather beautiful irony in the fact that while unable to understand itself or develop any self-knowledge, AI can help us to understand ourselves as learners, teachers and workers.

 

Let me explain what I mean by this:

  • The careful collection, collation and analysis of the data that can be harvested through people’s use of technology gives us a rich source of evidence about how learners are progressing: cognitively, metacognitively and emotionally;
  • Continuing work in psychology, neuroscience and education has increased our understanding of how humans learn. This increased knowledge can be used to specify signifiers, or behaviours, that evidence learner progress;
  • Our increased knowledge about human learning can also be used to design AI algorithms and models that can analyse data about learners, recognise signifiers of learning and build dynamic models of each individual student’s progress holistically, so that we can chart their development of self-knowledge and self-efficacy as well as their increasing knowledge and understanding of key subject matter;
  • The final step in the process is to design ways in which we can visualise the data that has been analysed, to describe each learner’s progress cognitively, metacognitively and emotionally. These visualisations can be used by learners, educators, parents and managers to understand the detailed needs of each learner and to develop within each learner the skills and abilities that will enable them to be effective learners throughout their lives.
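The shape of this pipeline can be sketched in code. The following is a purely illustrative toy, not any real assessment system: the progress dimensions, the exponential-moving-average update and all names are my own assumptions. It simply blends each new evidence signal (derived, say, from task outcomes or help-seeking behaviour) into a running per-dimension estimate of the kind a dashboard could then visualise.

```python
from dataclasses import dataclass, field

# Hypothetical progress dimensions; a real system would use
# research-derived signifiers of learning, not these labels.
DIMENSIONS = ("cognitive", "metacognitive", "emotional")

@dataclass
class LearnerModel:
    # Each estimate lies in [0, 1]; alpha controls how quickly new
    # evidence outweighs the existing estimate (an exponential moving average).
    alpha: float = 0.2
    estimates: dict = field(default_factory=lambda: {d: 0.5 for d in DIMENSIONS})

    def observe(self, dimension: str, signal: float) -> None:
        """Blend one evidence signal (0 = negative, 1 = positive)
        into the running estimate for a dimension."""
        if dimension not in self.estimates:
            raise ValueError(f"unknown dimension: {dimension}")
        old = self.estimates[dimension]
        self.estimates[dimension] = (1 - self.alpha) * old + self.alpha * signal

    def report(self) -> dict:
        """Snapshot suitable for a learner- or teacher-facing visualisation."""
        return {d: round(v, 2) for d, v in self.estimates.items()}

model = LearnerModel()
# Toy event stream: each event is (dimension, evidence signal).
for dim, sig in [("cognitive", 1.0), ("cognitive", 1.0),
                 ("metacognitive", 0.0), ("emotional", 0.8)]:
    model.observe(dim, sig)

print(model.report())
# → {'cognitive': 0.68, 'metacognitive': 0.4, 'emotional': 0.56}
```

The design point is that assessment here is a continuous background process: each interaction nudges the model, rather than a one-off exam producing a single score.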

 

An AI assessment system composed of these AI tools, and presenting to every learner an analysis of their progress in an accessible format, would support learning and teaching by continually assessing both subject-specific knowledge and the skills and capabilities that the AI-augmented workforce will require, such as negotiation, communication and collaborative problem solving.

 

This AI assessment system would be more accurate and cheaper than the human-intensive examination systems currently in place, and it would free up teaching and learning time that is currently lost when we stop teaching so that people can sit tests and exams. Assessment would happen continuously while people learn. This change requires political will, as well as investment in technology development and engagement with teachers, students and parents, so that they fully understand the AI assessment proposition.[2]

Example 2:  Addressing the achievement gap between advantaged and disadvantaged learners

AI could also help to make the education system more equitable. Education is the key to changing people’s lives but the less able and poorer students in society are generally least well served by education systems. Wealthier families can afford to pay for the coaching and tutoring that can help students access the best schools and pass those currently cherished exams.

 

AI would provide a fairer assessment system that would evaluate students across a longer period of time and from an evidence-based, value-added perspective. It would not be possible for students to be coached specifically for an AI assessment, because the assessment would be happening in the background, over time, without necessarily being obvious to the student. AI assessment systems would, for example, be able to demonstrate how a student deals with challenging subject matter, how they persevere and how quickly they learn when given appropriate support.

 

One of the key benefits that AI can bring to all learners is the capability to understand more about themselves: what they know and where they need help to understand, their strengths and weaknesses and their well-being. Metacognitive awareness is a complex concept, but broadly it refers to any knowledge or cognitive process that references, monitors or controls any aspect of cognition. Scholars distinguish between a person’s knowledge of their cognitive processes and the processes they use to monitor and regulate their cognition. This latter regulatory process incorporates a variety of executive functions and strategies, such as planning, resource allocation, monitoring, checking and error detection and correction.

 

Good metacognitive awareness and regulation enhances cognitive performance, including attention, problem-solving and intelligence and it has been shown to increase learning outcomes[3]. Successful students continually evaluate, plan and regulate their progress, which makes them aware of their own learning and promotes deep-level processing. Metacognitive awareness and regulation can be taught and supported, and can benefit learners of all abilities.

 

A series of studies we conducted using an AI software simulation called the Ecolab demonstrated that AI could be employed to scaffold learners in developing metacognitive skills, in particular help-seeking and task-difficulty selection skills.[4] The results demonstrated that students whose subject knowledge and ability had been assessed as below average gained particular benefit, performing significantly better than more-able students, who also performed well.

 

In addition to employing AI to scaffold the development of these important learning skills, we can also use AI to visualise to students the trajectory of their progress and increase their self-awareness. For example, in Figure 2, the map in the dialogue box entitled ‘Activities’ depicts the area of the curriculum that the child is studying, with each node representing a curriculum topic. When the user clicks on a node in this map, the bar chart below and to the left of the map indicates the level of difficulty of the work that the child has completed while working on this topic, and the dots on the ‘dice’ below and to the right of the map indicate how much help the child has received.

EcolabInterface

Figure 2: An example of a visualisation of student performance, courtesy of Ecolab

 

Example 3: Making teaching more effective

Imagine a classroom setting ten years hence, where data about each learner’s movements, speech and facial expressions are automatically logged by passive capture devices within the fabric of the classroom. This information is combined with data about each learner’s performance recorded by the school’s assessment system, and with data input by teachers, parents and the learners themselves. All this data is used to update the class teacher’s pupil records and to provide data for an AI-based teaching assistant that keeps track of every learner’s progress: cognitive, emotional and metacognitive.

 

The AI teaching assistant relieves the teacher of all record-keeping activities and is able to provide up-to-the-minute information about any pupil through a teacher-activated, speech-based interface or through a software application. Teachers can also ask their AI assistant to identify an appropriate tutoring application, like those described at the start of this paper, for a group of students who need particular support with an area of the curriculum. The AI assistant can search for resources or media to meet the teacher’s requirements for the day, or it can identify and contact local entrepreneurs who are willing to come and talk to pupils about future work opportunities or how to be an entrepreneur.

 

The possibilities for the AI assistant are vast and encompass all the routine, data-intensive and time-consuming activities that are essential to the smooth running of the classroom but that do not need the expertise of a teacher. This allows the teacher to focus on the process of teaching and learning, ensuring that all pupils benefit from the unique human skills involved in effective intersubjective teaching and learning interactions.

 

There are more than 30 years of research on AI for education demonstrating that we can use AI to make teaching more effective and more economical, by augmenting teachers with AI systems so that they can concentrate on the teaching activities that require the general and specialist intelligence that AI does not (yet?) have. The outputs from this research are what is now required to build the AI teaching assistants, such as the one described here, that schools and universities need. We have the technological know-how; we now need the initiative to make such assistants a reality. This initiative would need to engage educators across the sectors to help ensure that the capabilities of AI assistants address the requirements of their teaching realities.

Dimension 2. Education about AI

 

There are three key elements that need to be introduced into the curriculum at different stages of education, from early years through to adult education and beyond, if we are to prepare people to gain the greatest benefit from what AI has to offer.

 

The first is that everyone needs to understand enough about AI to be able to work with AI systems effectively, so that AI and human intelligence (HI) augment each other and we benefit from a symbiotic relationship between the two. For example, people need to understand that AI is as much about the clear specification of a particular problem and the careful design of a solution as it is about the selection of particular AI methods and technologies to use as part of that problem’s solution.

The second is that everyone needs to be involved in a discussion about what AI should and should not be designed to do. Some people need to be trained to tackle the ethics of AI in depth and help decision makers to make appropriate decisions about how AI is going to impact on the world.

Thirdly, some people also need to know enough about AI to build the next generation of AI systems.

 

In addition to the AI specific skills, knowledge and understanding that need to be integrated into education in schools, colleges, universities and the workplace, there are several other important skills that will be of value in the AI augmented workplace. These skills are a subset of those skills that are often referred to as 21st century skills and they will enable an individual to be an effective lifelong learner and to collaborate to solve problems with both Artificial and Human intelligences.

 

I have already discussed the importance of both metacognition and self-efficacy. Here I therefore simply note that these two concepts are inter-linked and essential for lifelong learning. Collaborative problem solving brings together thinking about the separate topics of collaboration and problem solving, each with its own research history. Collaborative problem solving is a key skill for the workplace, and its importance is only likely to grow as further automation takes effect.

 

There is a mismatch between the substantial evidence in favour of collaborative problem solving and learning reported in the literature and the approaches widely used within schools. This prepares students for neither university nor the workplace. For example, in an interview for a Davos 2016 debate on the Future of Education, a student from Hong Kong stated that the current school system produced “industrialised mass-produced exam geniuses who excel in examinations” but who are “easily shattered when they face challenges”. We need employees to be able to tackle challenges, and this often involves working effectively with others to solve the problem at the heart of any challenge; we don’t need exam geniuses who crumble under the pressure of the real world.

 

Collaborative problem solving does not happen spontaneously. Both teachers and students require a high level of training to employ collaborative problem solving effectively, and yet there is little evidence of concerted training effort. This means that when teachers do attempt to employ collaborative problem solving, the quality of the group interactions and dialogue can be poor.

 

It is extremely difficult to isolate the precise nature of the key factors that impact on the effectiveness, or not, of collaborative problem solving. We can, however, identify factors that are frequently mentioned as being influential upon success. These factors include: the environment in which collaborative problem solving takes place; the composition, stability and size of the group; the group’s problem solving and social skills; and teacher training.

 

To be effective at collaborative problem solving, people need to be able to:

 

  1. articulate, clarify and explain their thinking;
  2. re-structure, clarify and, in the process, strengthen their own understanding and ideas to develop their awareness of what they know and what they do not know;
  3. adjust their explanations when presenting their thinking, which requires that they can also estimate others’ understandings;
  4. listen to ideas and explanations from others – this may lead listeners to develop understanding in areas that are missing from their own knowledge;
  5. elaborate and internalise their new understanding as they process the ideas they hear about from others;
  6. actively engage in the construction of ideas and thinking as part of the co-construction of understandings and solutions;
  7. resolve conflicts and respond to challenges by providing complex explanations, counter evidence and counter arguments; and
  8. search for new information to resolve the internal cognitive conflict that arises from discrepancies in the conceptual understanding of others.

 

Implications for teacher training and professional development

 

The significant educational implications that AI brings to society, both when AI is viewed as a tool to enhance teaching and learning and when AI is viewed as a subject that must be addressed in the curriculum, make clear that teacher training and teacher professional development must be reviewed and updated.

 

If teachers are to prepare young people for the new world of work, and if teachers are to prime and excite young people to engage with careers designing and building our future AI ecosystems, then someone must train the teachers and trainers and prepare them for their future workplace and its students’ needs. This is a role for policy makers, in collaboration with the organisations who govern and manage the different teacher development systems and training protocols across countries. If the need for young people to be equipped with a knowledge about AI is urgent, then the need for educators to be similarly equipped is critical and imperative.

 

On a more positive note, the development of AI teaching assistants will provide an opportunity for developing deeper teaching skills and enriching the teaching profession. This deepening of teacher expertise might be at the subject knowledge level, or it could be concerned with developing the requisite skills to support and nurture collaborative problem solving in our students. It could also result in teachers developing the data science and learning science skills that enable them to gain greater insights from the increasingly available array of data about students’ learning.

 

Any failure to recognise and address the urgent and critical teaching and training requirements precipitated by the advancement and growth of AI is likely to result in a failure to galvanise the prosperity that should accompany the AI revolution. In particular, I see three major areas of concern:

  • failure to recognise the importance of self-efficacy, because we are only measuring subject knowledge;
  • failure to exploit the power of AI through fear of the security, privacy and protection of our personal data and that of our children; and
  • failure to teach people enough about AI to empower them to make key decisions about what it should and should not, could and could not, will and will not be able to do for society.

 

Conclusions: the implications of AI for education

 

In this paper I have highlighted the need to see AI as more than particular technologies, such as machine learning, neural networks, or deep learning algorithms. For education to benefit from the potential of AI, we must focus on the problem specification and solution design elements of AI.

 

A key action for all of us must be to develop a culture of problem specification that encourages people to unpack educational problems, so that solutions that benefit from the symbiosis of AI and HI can be developed.

 

We need to start developing a curriculum and a pedagogy to ensure that our students develop the self-efficacy that will set them apart from their AI peers and that will help them to deal effectively with the changing, perhaps turbulent, workplace of the future.

There is also great scope to reimagine teaching and schooling through the development of AI-augmented teaching practices. This means that educators must ensure that their voices are heard by the technology companies that are developing the classroom technologies of the future. Early progress might easily address the administrative and routine tasks that currently take up too much teacher time.

In addition there are social, technical and political challenges that also require our attention. Socially, we need to engage teachers, learners, parents and other education stakeholders to work with scientists and policy makers to develop the ethical framework within which AI assessment can thrive and bring benefit. Technically, we need to build international collaborations between academic and commercial enterprise to develop the scaled up AI assessment systems that can deliver a new generation of exam-free assessment. Politically, we need leaders to recognise the possibilities that AI can bring to drive forward much needed educational transformation within tightening budgetary constraints. Initiatives on these three fronts will require financial support from governments and private enterprise working together.

 

AI has the potential to bring about enormous beneficial change in education, but only if we use our human intelligence to design the best solutions to the most pressing educational problems.

 

 

 

 

Case study: AI and teacher shortages

 

One of the big problems that we need to address in education is the global shortage of teachers. The UNESCO Institute for Statistics (UIS) has estimated that, in order to ensure inclusive and equitable quality primary and secondary education and promote lifelong learning opportunities for all, countries must recruit 68.8 million teachers by 2030.[5] To put this in perspective, the total number of newly qualified teachers in England for the year 2016-2017 was 73,636,[6] and for 2015 the number of students completing undergraduate and postgraduate ITE programs in NSW was 5,547, with the figure across Australia being 18,194.[7]

 

The temptation when faced with a problem such as global teacher shortages is to consider AI as a potential solution through the provision of AI rather than human teachers. There are, however, at least two significant reasons why this suggestion reflects a poor understanding of the problem. First, the full spectrum of teaching skills and abilities required of teachers is broad and complex: while AI tutors may be able to provide tutoring in particular subject areas, AI is not (yet) able to fulfil the entire role of a human teacher.

 

A much more feasible approach would be to augment human teachers with AI assistants in the classroom, helping them cope more effectively with their classes of students.[8] This could be an excellent way for AI to contribute to the teaching workforce. However, if we look at the problem again and unpick it a little, we see that a key part of the problem is not just the number of people who we need to become teachers, but also the lack of training and qualifications for these people.

 

The recognition that we need both more teachers and more training and qualification routes was the catalyst for a company called Third Space Learning, working in collaboration with the UCL Institute of Education’s Knowledge Lab, to develop the design for a system that will enable anyone who wants to teach to become a qualified online tutor. To be eligible, people need a degree-level qualification, some time to spare, and access to the internet. Each tutor will be trained online and will work on a one-to-one basis with learners from anywhere in the world, both within and outside schools and colleges.

 

So where is the AI in this design? The AI will be used to automatically evaluate the online human-to-human tutoring sessions to ensure quality standards are maintained. Evaluation is currently done by human evaluators, and the costs of scaling up this human-resource approach are prohibitive. Initially, therefore, our task has been to find identifiable signifiers, or proxies, for student learning within tutorial interactions. The list of candidate proxies of student learning is extensive; however, three main themes emerge and are taken into account in defining best practice:

  • the cognitive domain involving knowledge, understanding and skills about the studied content;
  • the metacognitive domain involving the acquisition of knowledge and skills related to one’s own learning, in other words, the learners’ knowledge and understanding of their own learning;
  • and the affective, often referred to as the emotional domain, involving learners’ capacities to deal with their emotions, which affect their attitudes, locus of control, self-efficacy, interest and so on.

 

We consider these three themes to be the core of a learner’s learning state: their susceptibility to learn. The theoretical background from these three themes provided an initial framework for the development of an annotation tool that could be used to score tutorial interactions. Informed by the framework, we analysed past tutorial interactions and developed a mark-up language of successful tutorial interaction signifiers that can act as proxies for best practice. These proxies are integrated into an annotation tool that enables tutors to score or tag their sessions in real time. The evidence from our evaluation of this annotation tool is being used to automate the tagging process using AI techniques. One particularly interesting finding from our early work is that real-time tagging is potentially more accurate than post-session evaluation of tuition quality, because the latter is more vulnerable to a human evaluator’s bias.
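To make the tagging step concrete, here is a minimal sketch of what a real-time annotation tool of this kind might look like. The tag vocabulary, domain groupings and class names below are hypothetical illustrations, not the actual Third Space Learning mark-up language:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical tag vocabulary, grouped by the three learning domains.
TAG_DOMAINS = {
    "explains_concept": "cognitive",
    "applies_skill": "cognitive",
    "reflects_on_strategy": "metacognitive",
    "monitors_own_understanding": "metacognitive",
    "expresses_confidence": "affective",
    "shows_interest": "affective",
}

@dataclass
class SessionAnnotator:
    """Collects real-time tags for one tutorial session."""
    tags: Counter = field(default_factory=Counter)

    def tag(self, name: str) -> None:
        """Record one occurrence of a signifier as the session unfolds."""
        if name not in TAG_DOMAINS:
            raise ValueError(f"unknown tag: {name}")
        self.tags[name] += 1

    def domain_profile(self) -> dict:
        """Aggregate raw tag counts into per-domain scores."""
        profile = Counter()
        for name, count in self.tags.items():
            profile[TAG_DOMAINS[name]] += count
        return dict(profile)

# Example: a tutor tags events in real time during a session.
session = SessionAnnotator()
for event in ["explains_concept", "monitors_own_understanding",
              "explains_concept", "shows_interest"]:
    session.tag(event)
print(session.domain_profile())
# {'cognitive': 2, 'metacognitive': 1, 'affective': 1}
```

A per-domain profile like this is the kind of structured record that an AI technique could later learn to produce automatically from session transcripts.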

 

The AI tool currently under development models the relationship between the inferences drawn from the tutorial interaction annotations, the actions that a tutor takes during those interactions and the performance of the student being tutored. In this way, we will automatically evaluate tutorial sessions according to the three core elements of learning: the cognitive, metacognitive and the emotional.
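As a rough illustration of this modelling idea (the data values, the single composite feature and the learning method below are all invented for the sketch; the real system need not use anything like this), one could learn weights linking per-session annotation counts to a measure of student performance:

```python
# Toy model: relate the number of positive-signifier tags recorded in a
# session (feature) to the student's post-session performance (target).
# All numbers below are fabricated for illustration only.

def fit_linear(xs, ys, lr=0.02, epochs=5000):
    """Fit y ~ a + b*x by gradient descent on mean squared error."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_a = sum(2 * (a + b * x - y) for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a + b * x - y) * x for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Sessions: total positive tags recorded vs. post-session score (0..1).
tags_per_session = [2, 4, 6, 8]
scores = [0.40, 0.55, 0.70, 0.85]

a, b = fit_linear(tags_per_session, scores)
predicted = [a + b * x for x in tags_per_session]
# With this (exactly linear) toy data the fit recovers a ~ 0.25, b ~ 0.075.
```

A real evaluation model would of course use many features across the cognitive, metacognitive and affective domains, plus the tutor's actions, rather than one contrived count.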

 

This AI evaluation process will also provide feedback to tutors and learners to improve their tutoring sessions. And it will provide the basis for personalised continuing professional development qualifications, individualised according to each tutor’s performance. Unpacking the problem of global teacher shortages in this way maintains the role of the human as the teacher, assisted and continually developed by AI.

 

[1] Handel, M. J. (2012) ‘Trends in Job Skill Demands in OECD Countries’, working paper 143, OECD. http://www.oecd-ilibrary.org/social-issues-migration-health/trends-in-job-skill-demands-in-oecd-countries_5k8zk8pcq6td-en

[2] For a more detailed account of this argument for AI assessment see http://www.nature.com/articles/s41562-016-0028

[3] See for example: Marzano, R. J. (1988) ‘Dimensions of thinking: A framework for curriculum and instruction.’ Alexandria VA: The Association for Supervision and Curriculum Development.

[4] Luckin, R. & du Boulay, B. Int. J. Artif. Intell. Educ. 26, 416–430 (2016).

 

[5] UNESCO. (2016). http://www.uis.unesco.org/Education/Documents/FS39-teachers-2016-en.pdf

[6]https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/572290/ITT_Census_1617_SFR_Final.pdf

[7] https://www.cese.nsw.gov.au/images/stories/PDF/Workforce_Profile_NSW_2015_FA7_AA.pdf

[8] https://howwegettonext.com/a-i-is-the-new-t-a-in-the-classroom-dedbe5b99e9e

Who Moved My Intelligence?

The title of this article is inspired by a self-help book from the 1990s called ‘Who Moved My Cheese? An Amazing Way to Deal with Change in Your Work and in Your Life’. I have blogged about this book before, and that motivated me to write this piece for WISE. Despite significant criticism, the book became a best seller and a popular tool in any change manager’s back pocket. The implications of Artificial Intelligence (AI) and automation for the future workplace are the subject of much current debate. But how should educators respond? How can they ensure that they benefit from AI?


AI refers to the capabilities of computers to perform intelligent behaviours that we would think of as essentially human.[1] Most readers will be familiar with a practical application of AI, the sort of technology we use to navigate information on the internet, find our way around our environment or enter a country with our e-Passport. But what does the increased popularity and the increasing sophistication of AI technology mean for education?


To answer this question, I focus on two interpretations of the question ‘Who moved my Intelligence?’. Interpretation 1 considers how we need to ‘move’ our students’ intelligence beyond the routine cognitive processing of academic subject matter. Interpretation 2 considers what ‘moving’ certain intelligent workplace behaviours from human performance to AI performance means for educators, including for the job of teaching.

Developing the uniquely human abilities of students

Education and training organizations need to review what and how they teach to ensure that AI is designed and used as a tool to make our students and trainees smarter. We do not want AI to be used as a technology that takes over human roles in a way that ‘dumbs us down’. We therefore need to concentrate on designing and implementing teaching and schooling that develops the uniquely human abilities of our students and instills within them the requisite subject knowledge in a flexible, interdisciplinary and accessible manner.


The human capability for metacognition will be at a premium in the future workplace: self-understanding, so that each of us has an accurate knowledge of what we do and do not understand; and self-regulation, so that we can all plan and monitor our learning effectively. Metacognition is not something that AI can achieve, and because we will all need to be lifelong learners, flexibly developing our knowledge and skills to meet the demands of the future, we will all need to develop better metacognitive skills.

The use of teaching approaches such as Collaborative Problem Solving (CPS) will become more essential. CPS has been shown to have the potential to provide learners with an understanding of key subject knowledge synthesized across disciplines that they can apply in a flexible manner to real world problems[2]. Collaboration and problem solving are also among the key 21st century skills demanded in the modern workplace, because routine cognitive skills and knowledge are easy to automate with AI.

The curriculum will also need to include AI as a subject, not merely to teach a small sub set of the population to design and build AI systems, but to teach the whole population what AI is and what it can and cannot do. Everyone needs to understand enough about AI to be able to use it effectively in their lives at work and at home, to be able to contribute to important decisions about what is and is not ethical and permissible for an AI to do, and to be able to make decisions about the division of labour between artificial and human intelligences.

Re-imagining teaching and schooling

There is no doubt that there will be a shift in the distribution of intelligence within the workplace, including classrooms and schools. In order to extract the most benefit from this redistribution, we need to ensure that the most automation-appropriate activities are done by the AI, and likewise that the most human-appropriate activities are done by people.


Re-imagine teaching and schooling with AI assistants that provide intelligent analysis of multiple data sources about learners, from sleep sensors, library usage and e-learning resource interactions to social media activity. This analysis will illustrate how learning is progressing and support ongoing, detailed formative assessment. AI assistants could also relieve teachers of the routine, automatable parts of their job and enable them to focus on human-sensitive support and communication.
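A hypothetical sketch of what such an AI assistant’s data fusion might look like (the data sources, values and output format are all invented; a real system would need far richer analytics, plus careful consent and privacy safeguards):

```python
# Sketch: fuse several learner data streams into one formative summary.
# Each stream is a list of daily values normalised to 0..1 (hypothetical).

def summarise_learner(streams: dict) -> dict:
    """Return, per data source, an average level and a simple trend."""
    summary = {}
    for source, values in streams.items():
        avg = sum(values) / len(values)
        trend = values[-1] - values[0]  # crude first-vs-last comparison
        summary[source] = {
            "average": round(avg, 2),
            "trend": "up" if trend > 0 else "down or flat",
        }
    return summary

# Invented example data for one learner over three days.
data = {
    "sleep_quality": [0.6, 0.7, 0.8],
    "library_usage": [0.2, 0.2, 0.1],
    "e_learning_interactions": [0.5, 0.6, 0.9],
}
report = summarise_learner(data)
```

Even a crude summary like this illustrates the idea: the assistant surfaces patterns across sources (here, rising engagement but falling library use) so the teacher can focus on interpretation and intervention.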

 

[1] ODE: The Oxford Dictionary of English (Oxford Dictionaries Online). Oxford: Oxford University Press, 2005; and Russell, S. J., Norvig, P. & Davis, E. Artificial Intelligence: A Modern Approach. Upper Saddle River: Prentice Hall.

[2] Solved (2016) http://www.nesta.org.uk/sites/default/files/solved-making-case-collaborative-problem-solving.pdf

Malala Yousafzai’s A level results are brilliant, we need more successes like this

Who could be anything but delighted to see this headline? ‘A-level results: Malala Yousafzai gets a place at Oxford’. This is excellent news and a great boost for those campaigning for equal education. In fact, yesterday’s publication of A level results in the UK has spurred me to take a slight diversion from worrying about who is moving my brain, or my cheese. I certainly would not want to detract from the hard work that students have put into their A level studies, or to take the shine off their success. It is wonderful to see the smiling faces of successful students across the newspapers.

However, success does not come to all, and even on a celebration day (or perhaps I should write especially on a celebration day) I think we need to consider alternatives to the stressful stop-and-test regime that pervades most education systems. I wrote about this in Nature Human Behaviour earlier this year under the heading ‘Towards artificial intelligence-based assessment systems’, and it looks like it has been read a few times: it is ranked 5,746th of the 237,966 tracked articles of a similar age in all Nature journals, which puts it in the 97th percentile. This does not seem bad given that it was only a ‘comment’ piece and not a full paper. On a less positive note, in an internal REF assessment exercise it was only ranked as 2*, which is not great and probably reflects the difficulty for academics in publishing more popular-style articles. However, the modest success of the article in terms of the altmetrics that Nature runs encourages me to believe that there is some interest in exploring the possibilities that the intelligent design and application of AI could afford for national assessment systems. I therefore draw attention to this possibility here and hope to encourage further debate. The key point I wanted to convey in the Nature Human Behaviour article was that there are alternatives to exams that are less stressful, less expensive, and that allow teachers and learners to spend more time on teaching and learning (shouldn’t this be the point of education?).

This message may not be what others have selected to focus on, but for me the most important thing is that we have an assessment system that is holistic, fair, and that lets all students evidence their knowledge, skills and capabilities.

 

Who moved my brAIn? Is this the next self help best seller to get us ready for our AI future?

I’m pleased to report that my ankle is progressing well and I am now once again able to achieve my ‘misfit’ challenge of 1000 activity points per day: clearly it is a good job I was only mildly curious. However, I want to be more than mildly curious when it comes to my intellect, and I want to do this without injury. I had therefore better take care, both of my own intellect and of the intellect of those I am trying to encourage to be appropriately curious. I therefore return to my thoughts about what a useful self-help book to prepare people for their AI-augmented futures might be like. To this end, I also return to ‘The AI Race’, and specifically to the man behind the survey that was used to calculate how much of different people’s jobs is likely to be automated.


Andrew Charlton is his name; he is an economist and director of AlphaBeta, an economics and strategy consulting firm. He did not beat about the bush: AI will impact on ALL jobs, and he encouraged the TV audience to embrace AI. His top tip was that we must carefully manage the transition from now to the time when widespread AI augmentation is commonplace.

He was clear that we must take advantage of what AI has to offer by increasing the diversity of our own skill sets. He saw AI as an “Iron Man Suit” for humans: a suit that would transform us mere humans into super humans. This is a great analogy: who would not want to be super human? BUT embracing AI-augmented working is not as simple as putting on a new outfit – especially an iron outfit. And increasing the diversity of our skill sets requires educators and trainers who are themselves skilled and trained in developing these new, diverse skill sets. BUT where are these educators and trainers to be found? Who is helping the educators and trainers to gain the skills and expertise they will need to train their students?


Andrew has little comfort to offer here. His next comment about education is that 60% of the curriculum that students are studying at school is preparing them for jobs that will no longer exist in 30 years’ time. We need to re-design the curriculum, he advises. So educators need to re-skill themselves as well as their students, and they need to revise the curriculum. Clearly educators will be busy! And clearly there will also be a significant job to be done in (re-)motivating all those students who discover that they have been learning stuff that nobody will want them to know by the time they are looking for a job.

Now we hit the nub of the matter: education and educators must prepare students for the new AI order of things. Educators’ lives are going to change in significant ways NOT because their roles are likely to be automated away BUT because they will need to teach a different curriculum, and probably teach in a different way. To make matters worse, there is no clear consensus from the experts about exactly which jobs educators will need to educate people for. I think educators may be the most in need of a good self-help book to help them cope with the inevitable changes to their lives.


The self-help book I need to write might therefore be the updated version of the motivational best seller ‘Who Moved My Cheese? An Amazing Way to Deal with Change in Your Work and in Your Life’. I suggested this might become ‘Who moved my brAIn? An Amazing Way to Deal with ChAnge in Your Work and in Your Life’.


The original book, ‘Who Moved My Cheese’, was a story featuring four characters: two mice, called “Sniff” and “Scurry”, and two little people, called “Hem” and “Haw”. These four characters all live together in a maze through which they all search for cheese (for cheese, think happiness and success). Their search bears fruit when all of them find cheese in “Cheese Station C”. “Hem” and “Haw” are content with this state of affairs and work out a schedule for how much cheese they can eat each day; they enjoy their cheese and relax.


“Sniff” and “Scurry”, meanwhile, remain vigilant: they do not relax, but keep their wits about them. When, horror of horrors, there is no cheese at Cheese Station C one day, “Sniff” and “Scurry” are not surprised: they had seen this coming as the cheese supply diminished, and they had prepared themselves for the inevitable arduous cheese hunt through the maze; they start the search together straightaway. In contrast, “Hem” and “Haw” are angry and annoyed when they find the cheese gone, and “Hem” asks: “Who moved my cheese?” “Hem” and “Haw” get angrier and feel that the situation they find themselves in is unfair. “Hem” is unwilling to search for more cheese and would rather wallow in feeling victimized; “Haw” would be willing to search, but is persuaded not to by “Hem”.


While “Hem” and “Haw” get annoyed, “Sniff” and “Scurry” find a new cheese supply at “Cheese Station N” and enjoy a good feast. “Hem” and “Haw” start to blame each other for their lack of cheese. Once again “Haw” suggests they go and look for more cheese, but grumpy “Hem” is frightened of the unknown and wants to stick with what he knows; he refuses to search.

However, one day “Haw” confronts his fears and decides it is time to move on. Before he leaves “Cheese Station C” he scribbles on the wall: “If You Do Not Change, You Can Become Extinct” and “What Would You Do If You Weren’t Afraid?” He starts his trek and, whilst he is still worried, he finds some bits of cheese that keep him going as he searches. He finds some more empty cheese stations, but also some more crumbs, and is able to keep hunting. “Haw” has realized that the cheese did not simply vanish: it was eaten. He is able to move beyond his fears and he feels ok. He decides that he should go back to find “Hem”, equipped with the morsels of cheese he has found. Sadly, “Hem” is still grumpy and refuses the cheese morsels. Undeterred, though somewhat disappointed, “Haw” heads back into the maze and a life of cheese hunting. He continues to write messages on the wall as a way of externalizing his thinking, and in the hope that “Hem” might one day move on and be guided by these messages.

One day “Haw” finds Cheese Station N with all its lovely cheese. He reflects on his experience, but decides not to go back for “Hem”, rather to let “Hem” find his own way. He uses the largest wall in the maze to write the following (the original on the left, my re-interpretation on the right):

Who moved my cheese? → Who moved my brAIn?

  • Change Happens: They Keep Moving The Cheese → Computers keep getting smarter and intelligent tasks are moving from human to machine
  • Anticipate Change: Get Ready For The Cheese To Move → Prepare for some of your intellectual activity to be taken on by AI
  • Monitor Change: Smell The Cheese Often So You Know When It Is Getting Old → Keep checking in on your own intelligence and make sure you are really using it and keeping it fresh
  • Adapt To Change Quickly: The Quicker You Let Go Of Old Cheese, The Sooner You Can Enjoy New Cheese → Adapt to change thoughtfully (quickly is not necessarily right here); make sure you offload intellectual activity carefully so that you maintain your human intellectual integrity
  • Change: Move With The Cheese → Move with the intelligence (both human/natural and machine/artificial)
  • Enjoy Change!: Savor The Adventure And Enjoy The Taste Of New Cheese! → Enjoy intelligence and the experience of your developing greater intelligence – being smart ‘tastes good’!
  • Be Ready To Change Quickly And Enjoy It Again: They Keep Moving The Cheese → Never feel you are intelligent enough and keep striving for intellectual growth

“Haw” is never complacent: he continually monitors his cheese store and searches through the maze, hoping that one day his old friend “Hem” will find his way through and that they will meet again.


Whilst the book “Who Moved My Cheese” was extremely popular, it was also the subject of considerable criticism: for example, that it was too positive about change, that it was patronizing, and that it compared people inappropriately to ‘rats in a maze’. BUT can I learn anything from this as I try to encourage people to want to understand themselves and their changing intellectual capabilities?

I think there is still value in “Haw’s” writing on the wall, and I have tried to clarify this new value for AI in the right-hand column of the table above. I also think that my original revised title, “Who moved my brAIn?”, is not quite correct. The more important question is “Who moved my intelligence?”.

More to come on this in the next blog post…

 

AI is our future, but can we convince Frank?

As a child I was always frustrated by the phrase “curiosity killed the cat”. This was a frequent retort when I was trying to understand how things worked. Well, I am not reporting any cat-killing incidents here, but my curiosity about myself, driven by my new ‘misfit’, may have been a primary factor in my newly sprained ankle!


Over-enthusiasm to meet that target of 1000 activity points got me walking and launched me down some steps in a most ungainly and unfortunate manner. No broken bones, but some swelling and plummy bruising have meant I need to rest up for a few days. Resting up in a Sydney winter is hardly a chore: the sun is out and the sky is blue, and I indulged in exploring the ABC TV channel, in particular a great program called The AI Race.

The AI Race

The program presented data from a study into the risks to Australian jobs from AI-powered automation. I was relieved to see that professors are only likely to have 13% of their job automated, whilst carpenters are predicted to have 55% of what they do taken over by smart technology. Might this be the same in the UK, or different, I wondered? The ABC reporter explored various jobs and met up with employees. For example, Frank, a truck driver, was not persuaded that autonomous trucks would be able to replace his experience and intuition about the behaviour of other humans, whether pedestrian or driver. Nor would autonomous vehicles be able to help out other drivers stranded on the roadside, or provide human customer service on delivery of a load. He was definitely not convinced that AI was going to replace him any time soon.


Further jobs were explored: the legal profession, for example, where law students were stunned by an AI paralegal that could search through thousands of documents to find a specific clause in no time at all. The law students berated their education for not preparing them for a world of automation.


On the one hand we have Frank, who does not believe that AI can replace him, and on the other we have a group of law students who are persuaded that AI can already do a lot of what they are studying to be able to do. Nobody seems very curious about how they might better prepare themselves for AI’s onslaught on their workplace. So, how might I persuade them that understanding more about their own intellect could help them work more effectively with AI? The key to future success has to be that people focus on developing the expertise that AI cannot achieve: the still uniquely human qualities that will be at a premium. Self-knowledge and self-efficacy are important elements of this expertise, but how do we motivate people to develop themselves? To start answering this, I looked at the best-selling self-help books for guidance. People buy these, so maybe I can learn something from their titles about how to appeal. Which of these might work best?

Who moved my brAIn?

What colour is your AI?

How to win with AI

7 AI habits of effective people

I’m AI, you’re ok

Rich augmented me, poor augmented me

AI is from Silicon, we are from the Gene Pool

I’m not convinced about any of these…

AI and personal analytics provide a ‘fitbit’ for the Intellect

It is far too long since I last posted to this blog: too many jobs and too little time would be my first attempt at an excuse. But perhaps it is just that I am not effective enough, that I need better self-regulatory skills, more intelligence and a better understanding of my own strengths and weaknesses. I talk quite a lot about intelligence, and about how AI developers have not yet designed artificially intelligent systems that understand themselves and have metacognitive awareness, but maybe I too lack these abilities? So, how might I become more self-effective?

This thought is one that I intend to worry at while I am completing a research trip to the University of Sydney to work with my colleague Judy Kay. We are working on Personal Analytics for Learners (PALs), or more precisely interface designs for PALs (or iPALs).

In order to help me think this through, I wanted to learn more about some of the work that Judy and her colleague Kalina Yacef have been doing in collaboration with medics and health professionals to develop better data analytics and interfaces for personal health information for education. For example, the iEngage project provides a digital platform that gives children information, education and skills to help them to achieve their physical activity and nutritional goals. It connects with ‘misfit’ activity trackers to provide continuous feedback and summarise the daily activity on a dashboard.

To this end, I bought myself a ‘misfit’: a somewhat cheaper version of a ‘fitbit’ with a great name :-). I am now tracking my sleep and my pulse, as well as my physical activity and diet, in order to try and understand more about my personal wellbeing. This is nothing new: millions of other people do this too. I notice that popular technology stores stock a good range of fitness tracking devices at increasingly reasonable prices.


So, to help me better understand my mind, my cognitive progression, and my metacognitive skills and regulation, I now need a ‘mindset’ to help me track how well I am thinking, learning and regulating my working and learning. The interface to such a ‘mindset’ is the idea behind the iPAL that Judy and I are currently designing. I find it interesting to speculate about the kinds of data that we could collect about our intellectual and social interactions that would help us track and better understand our intellectual and mental wellbeing, as well as our physical wellbeing and fitness. This kind of ‘fitbit’ for the mind might help me to be less distracted by non-priority activities and spend more time on priorities, such as writing.

A search for ‘fitbit for the mind’ yields some hits, though not terrifically interesting ones: an article in New Scientist about eye-tracking to tell you more about your reading habits, and a mindfulness app that can be linked to fitbit data. The problem is that we are being offered automatic tracking of just one type of mental activity, reading or mindfulness, when actually we need something far more sophisticated to tell us how our intelligence and self-awareness are progressing: something that looks at multiple data sources and provides an overview of our activity in a way that motivates us to want to know more about our intellectual fitness, in the same way that activity trackers help us understand more about our physical fitness.

Earlier this month, there was a more interesting article in Newsweek about ‘iBrain’ and the possibility of tracking our brain’s electrical output to see markers for a range of mental health disorders, from anxiety, depression and schizophrenia to dementia and Alzheimer’s, before symptoms appear. Such information might help early intervention and monitoring. This reminds me of the rise of personal DNA services, such as 23andMe. If people are interested in their DNA and what it might tell them about how to adjust their lifestyles to avoid conditions they look to be susceptible to, then maybe people are also curious about their intelligence and how they can understand it better.


Over the next few blog posts I plan to explore what such a device might be like, what data it might collect and how I might best benefit from the sorts of information it could provide.