Extraordinary General Purposes Committee - Tuesday, 9th September 2025, 5.00 pm
Summary
The General Purposes Committee of Thanet District Council met on 9 September 2025 to consider an Artificial Intelligence (AI) policy for the organisation. The AI policy was subject to a 30-day consultation with staff and the council's recognised trade unions. Following consideration by the General Purposes Committee, the policy is scheduled to proceed to Cabinet for final approval and implementation.
Artificial Intelligence (AI) Policy
The General Purposes Committee was scheduled to consider an Artificial Intelligence (AI) Policy for Thanet District Council. The report pack included a recommendation for the committee to note the proposed AI policy, and also to note that the proposed policy would be presented to the cabinet for final approval.
The council recognises AI as a rapidly evolving technology that, when harnessed effectively and responsibly, has the potential to add significant value for the council's service users. The council believes that AI offers the opportunity to provide more efficient, cost-effective, joined-up and evidence-based decision-making and operations.
The council's Transformation Vision states:
We will transform and improve the way we deliver services online, creating a streamlined and consistent customer experience. We will ensure the needs of the customer are central to our decisions and that our digital aspirations are inclusive and accessible to all.
Our ambitious Transformation Programme will ensure that staff are working efficiently and effectively through the safe use of technology. We will use all available data to make informed business decisions so we deliver excellent services and improve the customer experience.
The AI policy has been designed to set out how the council intends to balance the opportunities of AI with risk mitigation, and to outline the council's proposed AI governance, responsibilities, risk management, and operational processes.
The policy stipulates specific requirements of use, including:
- Only using the permitted AI platforms, which for Thanet District Council will be Google Gemini and Microsoft Copilot.
- Not installing other generative AI platforms without permission from the Transformation Programme Managers.
- Always reviewing the information that AI populates to ensure accuracy.
- Not entering personal data into AI as this would be classed as a data breach.
- Not copying and pasting text or code from the internet into AI.
- Reporting anything unusual to the appropriate Technology colleagues via the internal reporting mechanisms.
A working group was formed to develop the new AI policy, including Hannah Thorpe (Head of Strategy and Transformation), Jessica Seaward (Transformation Programme Manager - Digital), the Information Governance and Equality Manager, the Digital Innovation Lead and the Policy Manager.
The AI policy is designed to align with the wider ICT Policy Suite already in place. The other policies are:
- Acceptable Use Policy
- Cyber Security and Cyber Attacks Policy
- Digital Security Policy
- Payment Card Industry Data Security Standard (PCI DSS) compliance policy
Consultation Feedback
A summary of the feedback received during the consultation period, together with the subsequent amendments made to the policy, is appended in Annex 1.
There were 14 responses to the consultation.
A recurring concern was the negative environmental impact of AI, specifically its high energy consumption and water use for cooling data centres, with suggestions to align the policy with climate pledges. As a result, section 8 (Risk Management and Security) was updated with a subsection specifically focused on the environmental impact of using AI tools.
A recurring question was why specific AI tools such as Google Gemini and Microsoft Copilot are approved while ChatGPT is not, given that they are all generative AI models. Section 9 (Rules for Use) was updated to provide more clarity as to why only specific tools are permitted.
A strong theme was the need for mandatory training on how to review AI outputs, and on understanding what constitutes personal data and the risks associated with using it in AI. A new section 11 (Training, skills and communication) has been added to set out what training staff will be given to support the use of AI.
Several comments suggested the policy is overly risk-averse and doesn't adequately emphasise the potential for AI to enhance productivity, quality, and efficiency in routine tasks. Communications and training will be used to emphasise the positive potential of AI within roles for day-to-day tasks.
Concerns were raised about how the policy will be managed and self-policed, particularly how AI use would be regulated from an IT perspective. This will be covered in the roll-out and staff training, where mechanisms will be put in place to build a culture of reporting anything suspicious.
The principle of human oversight and accountability for AI outputs was strongly supported.
Equality Impact Assessment
The report pack also included an Equality Impact Assessment (EIA) of the AI policy. The EIA focused on the direct audience of the policy, which is internal. The indirect audience of potential AI service-level changes, which would be more likely to affect external customers, will be subject to individual DPIAs¹.
The EIA found that AI systems can be biased and can get things wrong, presenting incorrect information as fact. There is a risk that AI can perpetuate bias and discrimination against people with protected characteristics². For example, facial recognition technology may be less accurate for certain racial groups, and algorithms used in recruitment may disadvantage older employees.
The policy requires that a Data Protection Impact Assessment (DPIA) be carried out for any new AI platform or software, which helps identify and mitigate risks that could lead to discrimination. The policy is written in conjunction with the council's Data Protection Policy, which is designed to protect individuals' rights and freedoms, including the right to prevent profiling.
The policy states that the council is firmly committed to eliminating discrimination and promoting equality of opportunity.
It acknowledges that some AI tools, like Grammarly, can support staff in their work and that the council will consider reasonable adjustments regarding AI for employees in conversation with their line manager/HR.
The policy states the council is committed to "fostering good relations within our organisation and our community". AI outputs can be inaccurate, biased, or discriminatory. The policy's rules on non-reliance and reporting suspicious activity are meant to mitigate these risks and prevent the spread of misinformation or biased content that could negatively impact relations.
The EIA records that a full Equality Impact Assessment has been completed.
Declarations of Interest
The committee was also scheduled to receive any declarations of interest from members. Councillors were advised to have regard to the Declaration of Interest guidance attached to the agenda.
¹ A Data Protection Impact Assessment (DPIA) is a process to help identify and minimise the data protection risks of a project.

² Protected characteristics are specific aspects of a person's identity defined in the Equality Act 2010 that are protected from discrimination. These include age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.