Blog

Oldfield Consultancy: Your Trusted AI Supplier for the DIPS Framework

Introduction

In an increasingly data-driven public sector, having the right technology partners is crucial for success. Oldfield Consultancy is proud to announce its role as an official supplier to the DIPS Framework, a significant step in providing cutting-edge AI and data science expertise to government and public sector organizations. This blog post explores what the DIPS Framework means and how our consultancy is uniquely positioned to deliver innovative, responsible, and effective AI solutions.

Understanding the DIPS Framework

The DIPS (Data, Implementation, Procurement and Support) Framework is a vital procurement vehicle designed to help government bodies access a wide range of data-related services. It simplifies the process of hiring expert suppliers for projects involving data analytics, AI implementation, and technological support. By being a part of this framework, Oldfield Consultancy can seamlessly offer its specialized skills, removing bureaucratic barriers and accelerating the delivery of critical public projects.

The Role of a Trusted AI Supplier

Being an AI supplier under the DIPS Framework is more than just providing technology; it’s about delivering expertise and ensuring trust. Our consultancy brings a wealth of knowledge in AI ethics, explainability, and risk management. We are committed to building AI systems that are not only powerful

Read More »

Dr Oldfield working with Turing on Business Competency Framework

Dr Oldfield has been part of the advisory group to the Turing on the new Business Competency Framework. With version 2 launching in 2026, it has been important to involve the IST, a body of AI professionals and experts, in the creation of this critical piece of work. From providing a voice for the community to offering professional accreditation and quality marks, the IST is the leading global body for AI. By responding to government calls for evidence, working with the EU and international governments, delivering funded projects on education and the safe development of autonomous systems, and addressing gender and diversity within AI, the IST performs a critical role in formalising the discipline of AI.

Read More »

Dr Oldfield speaks at British Science Week

British Science Week is an excellent time for scientists from all walks of life to come together to discuss their profession. In this session Dr Oldfield discussed how we can develop technology safely and robustly whilst not stifling innovation. A key element of this talk was to alert the community to the fact that every one of them is a subject matter expert who will either use, develop or be affected by AI within their lifetime. Their expertise is therefore extremely valuable, and their challenging of these systems is critical if the systems are to be robust and safe. Click to watch

Read More »

Dr. Oldfield’s Keynote on Data & AI Ethics at the CIVICA Forum

Introduction

The CIVICA Forum brought together leading minds to discuss the ethical challenges of our digital age. At the heart of the conversation was Dr. Oldfield’s keynote speech on data and AI ethics, a topic that is becoming increasingly relevant as data-driven decisions impact society on a daily basis. This article provides a comprehensive overview of the key themes from that address, offering a roadmap for building a more responsible technological future. To watch the presentation click here

What is Data and AI Ethics?

Dr. Oldfield began by defining the core concepts of her talk. Data ethics involves the responsible collection, use, and management of information, while AI ethics focuses on the moral principles governing artificial intelligence systems. She explained that these two fields are deeply interconnected. Without ethical data practices, it is impossible to create ethical AI, as the models will simply reflect the biases and flaws present in their training data.

Key Highlights from the CIVICA Forum

During her keynote at the CIVICA Forum, Dr. Oldfield highlighted several critical issues. She discussed the importance of algorithmic transparency, or the ability to understand how an AI system makes decisions. She also spoke about the need for accountability, emphasizing that

Read More »

Dr. Oldfield on Dehumanisation & Tech at the CADE Conference

Introduction

In an era defined by rapid technological advancement, the human element is at risk of being left behind. Dr. Oldfield recently delivered a thought-provoking keynote at the International Conference on AI and the Digital Economy (CADE 2023), where she addressed the critical issue of dehumanisation in our technology-driven world. This blog post explores the key insights from her speech, offering a powerful look at how we can ensure that technology serves humanity, rather than the other way around.

The Rise of Dehumanisation in the Digital Age

Dr. Oldfield began by defining dehumanisation not as a lack of human-like robots, but as the removal of a human’s autonomy, dignity, and purpose. She argued that in the digital economy, this can manifest in several ways: from automated systems that treat individuals as mere data points to technologies that diminish our social connections and critical thinking skills. Her talk highlighted that while technology promises efficiency, it can sometimes come at the cost of our humanity.

Technology’s Impact on the Future of Work

A key part of the discussion focused on the future of work. Dr. Oldfield acknowledged that AI will automate many repetitive tasks, but she warned that if not managed correctly,

Read More »

Dr. Oldfield on Technical AI Challenges: Insights from the Machine Ethics Podcast

Introduction

In an era of rapid technological advancement, AI practitioners are confronted with a complex set of challenges that go beyond simple coding. In a recent appearance on the Machine Ethics Podcast, Dr. Oldfield provided a deep dive into the technical hurdles facing the AI community. This blog post explores the key insights from her discussion, offering a clear perspective on how to navigate the complexities of data, ethics, and implementation to build a more responsible AI future. To view the discussion click here.

The Interconnectedness of Technical and Ethical Challenges

Dr. Oldfield’s talk emphasized that the technical challenges in AI are not separate from the ethical ones. Issues like data bias, lack of algorithmic transparency, and the need for robust testing are fundamentally both technical and moral problems. She explained that building an AI system that is technically sound also means ensuring it is fair, safe, and transparent. The podcast discussion highlighted that a truly effective practitioner must be equally adept at coding and ethical reasoning.

Data Quality: A Foundational Hurdle

At the core of many AI failures is poor data quality. Dr. Oldfield stressed that without clean, relevant, and unbiased data, even the most sophisticated algorithms will produce

Read More »
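As an aside to the data-quality point in the podcast entry above, the following is a minimal, hypothetical sketch of one simple form such a check might take in practice: profiling missing values and comparing historical outcomes across a protected attribute. The tiny dataset and the column names are illustrative placeholders, not material from the discussion.

```python
# Hypothetical sketch of a basic data-quality and bias audit before training.
# The synthetic dataset and column names are illustrative placeholders.
import pandas as pd

df = pd.DataFrame({
    "sex":     ["F", "F", "M", "M", "M", "M", "M", "M"],
    "income":  [28, None, 35, 40, 52, 61, None, 45],
    "outcome": [0, 0, 1, 1, 1, 0, 1, 1],  # e.g. historical approvals
})

# 1. Completeness: fraction of missing values per column.
print(df.isna().mean())

# 2. Representation: is one group heavily under-represented?
print(df["sex"].value_counts(normalize=True))

# 3. Outcome skew: do historical outcomes differ sharply by group?
#    A model trained on this data will tend to reproduce the gap.
print(df.groupby("sex")["outcome"].mean())
```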

Dr Oldfield Speaks at SPRITE+ on AI, Risk Awareness and Psychological Nudging

To read the full paper click here. Psychological nudging is a technique used to get people to behave in a certain way. This can be used for good or evil: for example, are you being nudged for the ‘safety of resources’ or for coercive purposes? Do you know you are being nudged? If you log on to the internet, see advertisements or read the paper, you are constantly being nudged and influenced. Every ‘topic of the day’ nudges your opinion based on your underlying beliefs and how you see the world. This can reduce or elevate the risk you perceive in situations, but not necessarily for your own benefit. Psychologists and philosophers state that updating your beliefs comes at a great energy cost, and once they are set, you are unlikely to change them, no matter how right or wrong they are. So do you really have free will, or are you a puppet on a string? As a society, humans have put themselves at risk.

Read More »

Oldfield Consultancy working with the New Regulator for Housing Quality NHQB

Oldfield Consultancy has been working on the new policy and compliance requirements for the new housing quality regulator, the NHQB. Our Director, Dr Oldfield, stated: “This work is welcomed and is crucial to ensuring the quality of new homes moving forward. It is no secret that, in the past, developers have fallen far short of quality standards in building new housing, and we hope, moving forward, that the establishment of a new policy and regulator will improve the situation.”

The New Homes Quality Board (NHQB) is an independent not-for-profit body established to develop a new framework to oversee reforms in the build quality of new homes and the customer service provided by developers. The framework was introduced in 2022 and has delivered a step change in developer behaviour, a consistently high standard of new home quality and service, and strengthened redress for the purchasers of new-build homes where these high standards are not achieved. The NHQB was formally constituted as a legal entity in January 2021, and its board members were appointed with representatives from consumer bodies, the lending industry, Homes England, independent members, developers and providers of new home warranties, to deliver its

Read More »

Dr Oldfield speaks at Fujitsu on ethical development of AI

It is crucial to understand how to build ethical AI, and this does not just include the tech build. When we use language or try to encourage a certain behaviour from people, this can also be unethical; manipulation and exploitation can easily result. So we need to know how we are affecting the people at the other end of our implemented technology. This helps us build better and more positive AI that does not disadvantage or exploit people and society.

Read More »

Dr Oldfield speaks at Legal and General on AI and Ethics

Dr Oldfield said: “AI and Ethics are a critical area to address so that we can progress in a way that is fair to society and minimise any disadvantage or unfairness.” Some developers use additional tools such as SHAP and LIME to ‘explain’ their AI models. These tools examine the mechanics of the model and how a given data point was calculated. However, without understanding the model itself, how can we understand or know why that data point was calculated? It is critical to have an understanding of context when building a model, and without that understanding the end product can cause negative and catastrophic consequences for individuals and society.
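To make the point about post-hoc tools concrete, here is a minimal, hypothetical sketch of how SHAP is commonly used to attribute a single prediction to its input features. The dataset and model are illustrative placeholders and are not taken from the talk.

```python
# Minimal sketch (not from the talk): attributing one prediction with SHAP.
# The dataset and model below are illustrative placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes a single prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
print(contributions)
```

The output shows how the score was assembled from the feature values, which is exactly the ‘mechanics’ level of explanation described above; whether those features are appropriate in context remains a separate, human judgement.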

Read More »