Ethics
Research Topic
Language: English
This is a research topic created to provide authors with a place to attach new problem publications.
Research problems linked to this topic
- From what other sources can police enrich their own analytical data where it is legal and proportionate to do so?
- What are the key factors regarding public trust in autonomous systems?
- What emerging biological or behavioural measurements and calculations can be used to ascertain or impersonate a person’s identity?
- How will AI impact societal cohesion, including through trust in institutions and the government and through factionalism?
- What are the public’s perceptions, beliefs, and concerns about policing’s existing and emerging science and technology capabilities?
- What are the economics underlying the spread of mis/disinformation (i.e., what is the scale and nature of for-profit mis/disinformation)?
- How can AI be used to identify harmful content?
- How will the use of generative AI to create ‘deepfakes’ that manipulate people’s likeness (face, body, voice) evolve? What is the psychological impact of being deepfaked, and what harmful uses (e.g. intimate image abuse, fraud, reputational damage) will develop and increase?
- Statistics representing society: How well or poorly do statistics represent society, and what are the impacts of this on how they are used and valued?
- How can policing measure and demonstrate the level of accuracy in predictive algorithms?
- How can diversity in the cyber security workforce be improved? What can be done to have immediate impact and what should be done to affect long-term change?
- How can policing improve its workforce vetting processes to mitigate organisational risk?
- How can policing demonstrate that algorithms are fair and unbiased in using policing data?
- What are the best practices for conducting pre-interview assessments to identify vulnerability or intimidation, and for determining appropriate ‘special measures’ and communication needs?
- How can the UK design a flexible, anticipatory approach to regulation while ensuring protection for consumers and the environment? What are the ethical aspects of regulating future technology and how can these be incorporated into an anticipatory regulatory framework? Are there successful case studies we can draw lessons from?
- How can we determine whether a particular harm was caused by AI, rather than by something else?
- What, if any, are the emerging risks to personal privacy and victim intrusion from new digital forensic technologies?
- How can we reduce bias when using AI?
- How can we ensure use of AI is ethical?
- What computational and analytical techniques can deliver accurate, large scale, automated image capture, processing, and amalgamation, while maintaining privacy and proportionality?
- How might the growth in innovative uses of location data, for example population movement data, impact public attitudes towards its responsible use?
- What methods and policies could policing use to identify deepfakes rapidly and automatically, while retaining victim privacy?
- How can we ensure public attitudes to AI are positive, and maximise trust in safe AI?
- How do online bystanders respond to viewing perceived online mis/disinformation (e.g. report, share, ignore), and how could their behaviours be influenced?
- How can policing use advances in robotics to reduce or remove the need for police officers to enter hazardous environments e.g., water, fire, electrical, natural disaster, CBRN (chemical, biological, radiological and nuclear)? Further, how can a seamless and secure operation be enabled in such environments?
- What legal and ethical challenges does the use of autonomous systems in policing face as robots become more able to operate independently?
- What are the public’s needs for explainability in AI?
- What human systems are resilient to impacts from AI and which are less so?
- How do we design public-serving autonomous systems to be fair and inclusive?
- How can AI and other emerging technologies be implemented in education settings so that they do not widen existing inequalities or create new inequalities?
- How can policing best communicate its ambitions, decisions and use of science and technology to improve public awareness and understanding?
- How can we best communicate and understand the public’s perception of autonomous vehicles and drone usage in policing?
- What are the potential criminal opportunities posed by emerging biometrics?
- What behavioural and attitudinal considerations can be mapped in this area and how do we encourage good behaviours across organisations?
- How could machine learning and predictive analytics be used robustly and ethically to create claimant personas/segments for disability benefits?
- How can policing co-develop useful predictive models with public debate and consent?
- Which harmful online uses of AI are likely to increase? What could be the impact of AI-generated content on attitudes, beliefs, behaviours or psychological wellbeing?
- What impact do abuse, threats and violence have on journalists in the UK? What is the most appropriate way to define ‘abuse’, particularly online abuse, of journalists? What are the perceived boundaries between abuse and valid criticism by different stakeholders? What are the potential triggers for journalist abuse in the UK and internationally, including through analysis of online abuse on social media platforms and publisher websites, and the online accounts posting this abuse and wider evidence gathering?
- How does mis/disinformation spread between social media platforms, particularly primary and secondary platforms? How can it be identified and contained?
- What are the potential harms and wellbeing risks that can impact consumers? How do we best measure and quantify these harms taking into account different online markets? How do different markets and tools engage in terms of reach and impact on individuals, for example to what extent does advertising follow and adversely impact consumers?
- What are the potential investigative opportunities for policing through the use of emerging biometrics, including the current limits of biometrics to identify and trace individuals?
- What are the potential risks or unintended consequences of counter-disinformation interventions?
- How will AI affect existing kinds of harmful online content (e.g. online abuse, scams) and what new kinds of online harmful content might it give rise to?
- What are the best ways to ensure that AI is used safely, ethically, and in ways that protect the data and interests of children, young people, teachers, and schools and colleges? What forms of regulation and enforcement may be appropriate?
- What risks to personal privacy are emerging from the increasing ability to identify data characteristics unique to a person?