AI technology has the potential to transform the education landscape. But alongside the benefits are significant risks and challenges that schools and universities must overcome to ensure a responsible and ethical adoption of AI.
In this article we consider the key challenges and opportunities of AI in the education sector, together with top tips on how to avoid the worst of the bear traps.
The AI which has generated such recent interest is OpenAI’s ChatGPT (there are other models out there but, for ease, in this article we use ‘GPT’ as shorthand for all generative AI).
GPT combines a vast data set (acquired from the internet and elsewhere) with a tool capable of mimicking human conversation. GPT allows users to engage in conversational exchanges, and can provide answers to even the most complex of questions. It can generate reports, produce a lesson plan, and even write essays.
GPT is a statistical model trained to predict what output a person might produce, based on the data on which GPT has been trained. It can produce in seconds work that would take a human many hours to generate. For the most part, the output is highly accurate and well reasoned. However, GPT is fallible, and will sometimes confidently state falsehoods as fact.
Given GPT’s limitations in terms of accuracy, educators must be cautious when reproducing its output. However, the potential of the tool is huge. As the technology develops and output accuracy improves, GPT’s current limitations seem likely to be eroded by further iterations.
The opportunities

1. Inclusive education: AI has the potential to help support students with disabilities or special needs by providing tailored support and accessible learning materials, ensuring equal opportunities for all students.
2. Personalised learning: AI-powered tools have been used for some time to create customised learning experiences tailored to individual students’ needs, abilities, and learning styles. With the launch of GPT such tools will become more powerful and useful.
3. Teaching aids: AI can support educators by offering real-time feedback on student performance, facilitating lesson planning, and generating teaching materials, producing and refining lesson plans and handouts from simple ‘plain English’ prompts.
4. Administrative tasks: AI is already used to streamline administrative processes across a range of businesses. In education, areas such as admissions, enrolment, and timetabling are likely to increasingly be supported by AI, saving time and resources for both staff and students.
5. Employee engagement: a key benefit of AI in the workplace is its ability to streamline routine tasks, saving staff time and supporting them in the delivery of their roles.
The challenges

1. Data privacy and security: the use of AI tools in connection with, for example, assessment, recruitment and admissions (among many other use cases) will often require the sharing and processing of personal data. An understanding of how these products work is crucial in ensuring compliance with data protection obligations.
2. The digital divide: the launch of ChatGPT and similar products, freely available to the public, will go some way to democratising access to AI. However, beyond those publicly available tools, it seems likely that unequal access to AI technologies will exacerbate existing inequalities in education and risk disadvantaging students from lower socio-economic backgrounds.
3. Ethical considerations: the adoption of AI is likely to raise ethical questions around transparency, fairness, and accountability. We anticipate that institutions will look to establish robust governance frameworks to ensure responsible AI use.
4. Outsourcing and contracting: there is a growing proliferation of providers and tools offering AI-based solutions to educators. Schools and universities will need to think carefully about which tools to use, carry out appropriate diligence on the organisations they engage with, ensure that contracts provide appropriate levels of protection and accountability, and follow rapidly developing best practice in this area.
Handle with care! Don’t rely solely on the output of GPT.
Top tips

1. Policies and guidance: educators will need to develop AI governance frameworks, including updating policies, guidance and processes for the responsible, ethical and transparent use of AI. In particular, review your safeguarding, e-safety, SEND, curriculum, assessment, and staff development policies to accommodate the integration of AI technologies.
2. Invest in staff training: schools and universities will need to provide ongoing professional development for staff on using AI tools, their potential applications, and the associated challenges, to ensure effective integration into the classroom.
3. Foster digital literacy and AI ethics: incorporate digital literacy and AI ethics into the curriculum, equipping students with the skills to navigate the evolving digital landscape responsibly.
4. Engage in cross-sector collaboration: collaborate with AI developers, researchers, policymakers, and other education institutions to share best practices, insights, and experiences in implementing AI technologies.
At Greenwoods, we understand the regulatory environment in which educators operate, and the additional layer of complexity that this regulatory environment brings when considering how best to embrace new technologies.
Our data privacy and technology specialists work with both users and providers of a broad range of technology products.
We are familiar with the plethora of AI products available to educators and can apply our understanding of education regulation to questions around the implementation of those technologies.
We support schools, universities, and other businesses in contracting with a wide range of technology providers, including traditional and cloud/SaaS software models, hosting, support and maintenance arrangements, and data management providers.
We also support cutting-edge software businesses in ensuring regulatory compliance, and in developing contractual arrangements that support their particular business models – including in the development and deployment of AI-based products.
AI clearly offers potential for real efficiencies: there are both cost and quality benefits in automating certain roles and processes which traditionally could only be delivered by individuals. However, in the education and development of young people, we must never forget the irreplaceable value of the ‘human interface’ which AI simply cannot offer.
Greenwoods Legal LLP is a Limited Liability Partnership, registered in England, registered number OC306912. Our registered office is Queens House, 55-56 Lincoln’s Inn Fields, London, WC2A 3LJ. A list of the members’ names is available for inspection at our offices in Peterborough, Cambridge and London. Authorised and regulated by the Solicitors Regulation Authority, SRA number 401162. Details of the Solicitors’ Codes of Conduct can be found at www.sra.org.uk. All instructions accepted by Greenwoods Legal LLP are subject to our current Terms of Business. VAT Reg No: 161 9287 89.