Analytical Modelling and UK Government Policy
In the last decade, the UK Government has attempted to implement improved processes and procedures in modelling and analysis in response to the Laidlaw report of 2012 and the Macpherson review of 2013. The Laidlaw report was commissioned after failings during the Intercity West Coast (ICWC) rail franchise procurement exercise by the Department for Transport (DfT) led to a legal challenge of the analytical models used within the exercise. The Macpherson review examined the quality assurance of Government analytical models in the context of the experience with the Intercity West Coast franchise competition. This paper examines what progress has been made in model building and best practice in government in the eight years since the Laidlaw report, and proposes several recommendations for ways forward. It also discusses the Lords Science and Technology Committee inquiry of June 2020, which analysed the failings in the modelling of COVID-19: despite that modelling going on to influence policy, many of the same issues raised in the Laidlaw and Macpherson reports were present again in the Committee's inquiry. We examine the technical and organisational challenges to progress in this area.
Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice
AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and AI audit frameworks. We highlight the responsibilities of funding bodies to ensure investment is channelled towards trustworthy and safe AI systems, and provide case studies of how other ethical funding principles are managed. We offer a first sight of two proposals for funding bodies to consider regarding procedures they can employ. The first proposal is the inclusion of a ‘Trustworthy AI Statement’ section in the grant application form, with an example of the associated guidance. The second proposal outlines the wider management requirements of a funding body for the ethical review and monitoring of funded projects, to ensure adherence to the ethical strategies proposed in the applicant’s Trustworthy AI Statement. The anticipated outcome of employing such proposals would be to create a ‘stop and think’ stage during project planning and application, requiring applicants to implement methods for the ethically aligned design of AI. In essence, it asks funders to send the message: “if you want the money, then build trustworthy AI!”
Dehumanisation and AI: Published in CADE 2023 by IET
Click here to read.
AI is becoming more widespread than ever as we offload decision making to algorithms. Recently we have seen many legal challenges against algorithm-powered decisions, such as discrimination relating to gender and race and incorrect benefits allocation. A recently observed issue around the implementation of AI is dehumanisation: the human reaction to overused anthropomorphism and the lack of social contact caused by excessive interaction with technology. This can lead humans to devalue technology, but also then to begin to devalue other humans. The resulting discrimination towards perceived outgroups causes division within society in both the online and offline worlds. The potential for exploitation through the manipulation of human belief systems leading to dehumanisation is substantial. This stands in contradiction to the growing popularity of AI, as the negative effects of unchecked and poorly understood technology could certainly outweigh any perceived positive effects of its use. It is clear that, due to a lack of testing and modelling forethought, we are entering uncharted territory that holds a vast array of consequences, some of which we are yet to observe.
The Future of Condition Based Monitoring: Risks of Operator Removal on Complex Platforms
Click this link to read.
Complex platforms are very difficult to manage and maintain. This is why we see teams of engineers, many highly specialised, carrying out this role in industries such as aerospace, nuclear and subsurface. Maintaining such systems, which often have components at varying degrees of degradation, is a critical undertaking. To maintain complex systems, Condition Based Monitoring (CBM), a type of predictive maintenance that uses sensors to measure the status of an asset over time while it is in operation, is most frequently used. Artificial Intelligence (AI) models developed in the area of CBM are currently not well explained, nor well understood by users or operators. When AI is brought into a complex system we observe varying degrees of success. The level of success rests on the complexity of the system, the training and understanding of the end operator, and the maintenance processes around the system. Implementing AI or complex algorithms into a platform can mean that the operator's control over the system is diminished or removed altogether. For example, in the Boeing 737 MAX disaster, AI had been added to the platform and removed the operators' control of the system. This meant that the operator could not then move outside the extremely reserved, algorithm-defined ‘envelope’ of operation, leading to loss of life. Therefore, any implementation of AI that removes operator system management in complex systems, especially in the aerospace and subsurface industries, has to be considered carefully. In this paper we analyse the risks of removing operator system control and implementing algorithms, or AI, in complex systems.
Technical challenges & Perception: Does AI have a PR Issue?
Click here to read.
From collecting robust data, to modelling the real world and interpreting output, modelling is a complex undertaking. Increasingly, models have been highlighted that disadvantage not only society but those whom the model was originally designed to benefit. An increasing number of legal challenges around the world illustrate this. A surge of recent work has focussed on the technical, but not necessarily the real-world, challenges for practitioners. Through two studies we conduct an investigation into perception and real-world needs within industry. In study one we re-run the 2019 survey by Holstein et al. to determine differences between practitioner challenges in the UK and USA, and we analyse any advancements apparent since the 2019 study. In study two we examine the perception of users and practitioners towards AI. This study helps to unlock interdisciplinary reasons behind existing challenges. Based on these findings we highlight directions for future research in this area.
Towards Pedagogy supporting Ethics in Analysis
Click this link to read.
Over the past few years there have been an increasing number of legal proceedings related to inappropriately implemented technology. At the same time, career paths have diverged from the foundation of statistics out to data science, machine learning and AI, all of which are fundamentally branches of statistics and mathematics. This has meant that formal educational training has struggled to keep up with what is required in the plethora of new roles. Mathematics as a taught subject is still based on decades-old teaching specifications, and the UK curriculum has not been centrally updated to include new technologies, coding or ethics. The disciplines involved in technology, mathematics and related subjects are firmly split between ICT (Information and Communications Technology) and mathematics in secondary school, continuing on to a split between computer science and mathematics at university. As we continue to develop technology, we see these academic fields becoming increasingly intertwined.
This paper proposes that education in concepts such as ethics and societal responsibility, which are critical to building robust and applicable models, does currently exist, but in isolation: it has not been incorporated into the mainstream curriculum of school or university. This is partially due to the split between fields in an educational setting, but also to the speed with which education is able to keep up with industry and its requirements. Introducing principles and frameworks of socially responsible modelling at school level would mean that ethics and real-life modelling are encountered much earlier than they currently are. Integrating these concepts with philosophical principles of society and ethics would ensure suitable foundations for future modellers and users of technology to build upon.
Anthropomorphism and its impact on the Perception and Implementation of AI
Click this link to buy the book.
Anthropomorphism is a technique humans use to make sense of their surroundings. It is also widely used to influence consumers to purchase goods or services: such techniques can entice consumers into buying something to fulfil a gap or desire in their life, ranging from loneliness to the desire to be exclusive. By manipulating belief systems, consumer behaviour can be exploited. This paper examines a series of studies to show how anthropomorphism can be used as a basis for exploitation. The first set of studies examines how anthropomorphism is used in marketing and its effects on humans engaging with this technique. The second set of studies examines how humans can potentially be exploited by artificial agents. We then discuss the consequences of this type of activity within the context of dehumanisation. This research has found potentially serious consequences for society and humanity, which indicate an urgent need for further research in this area.
The economic case for getting asylum decisions right the first time
Click here to read the article
Click here to read the media coverage in the Independent
Research with Pro Bono Economics and the Refugee Survival Trust
Over half the total applications for asylum the UK receives each year are initially rejected, yet nearly a third of these initial rejections are subsequently overturned on appeal. A system that fails to get decisions right the first time imposes significant costs, not just on the applicants themselves, but also more widely on UK taxpayers.
The taxpayer and Treasury bear the costs of this system failure in a number of different ways. Directly, resource is wasted within the courts and the legal aid system. The more protracted the process, the longer the Home Office must fulfil its obligations to provide accommodation and subsistence to asylum seekers at risk of destitution. There are also additional administrative costs to the Home Office: we estimate the cost of incorrect initial decisions adds up to £4 million per year.
The NHS must also manage the knock-on impacts of incorrect initial asylum decisions. More than 61% of asylum seekers and refugees experience serious mental distress including higher rates of depression, post-traumatic stress disorder and other anxiety disorders, and being refused asylum is the strongest predictor of depression and anxiety within asylum seekers.
In addition, the longer the appeals process drags on, the greater the opportunity costs for the UK economy. With the majority of asylum seekers banned from working, the Exchequer misses out on significant tax receipts. While refugees are stuck in unemployment, their skills can erode: only 15% of refugees find employment in the UK of a status similar to that which they held in their country of origin. That has long-term impacts for the economy, with asylum seekers earning and working less than UK nationals and economic migrants.
At a time of real pressure both on Public Sector departmental budgets and NHS services, and when businesses are struggling to fill skills gaps, these costs cannot be dismissed. Nor can the potential benefits of refugees’ skills and experience be underestimated.
Reducing the number of incorrect initial decisions on asylum applications would require tackling a number of challenges that exist within the system, from the training of Home Office staff to the consistent provision of competent translators. Our research indicates that the support provided to asylum seekers during their application process may play a key role in affecting the outcomes of their applications.
The environment in which many people apply for asylum in the UK is an incredibly unstable one. Often arriving in the UK with very few resources, facing great uncertainty about their future and forbidden from working, many asylum seekers are reliant on the state and charities to survive and meet their essential needs, from bus passes to food. Only a very limited support system is provided by the government, and many individuals and families find themselves in precarious financial positions in addition to coping with the substantial trauma of the circumstances which forced them to flee home. This backdrop can impact the ability of asylum seekers to represent and advocate for themselves during the asylum process.
This is backed by evidence suggesting that the most vulnerable groups of asylum seekers are consistently more likely to have their appeals upheld by the courts. That includes women, who have been more likely to succeed in their appeals every year for the last decade aside from 2015. There is also a marked difference in success rates between nationalities, with asylum seekers from nations experiencing extreme violence – such as Afghanistan, Sudan, Yemen and Libya – twice as likely to be successful at appeal as those from less overtly violent nations. Coming to the UK having experienced significant trauma and with few resources, these groups are precisely those who need the most support from the asylum system.
Given this, investment in forms of support for asylum seekers which help create a more stable environment in which to go through the asylum process could help not only cut down on the costs of incorrect initial decisions but also on other potentially greater costs for the taxpayer. Charities which provide services such as help to access childcare, education, integration, transportation, essential goods, and accommodation to asylum seekers play an essential role in helping to ensure asylum applications are right first time by contributing to a more stable environment in which to apply.
Published Research with Scoliosis SOS at SOSORT 2022
Click this link to read.
Our extensive clinical and statistical work with Scoliosis SOS has resulted in 4 abstracts being presented at this year’s SOSORT Conference in San Sebastian. We are very proud to work with Scoliosis SOS and to be able to help those with Scoliosis have an improved quality of life.