This post first appeared on IBM Business of Government.
New forces and trends are reshaping law enforcement and public safety
Blog Co-Authors: Cristina Caballé (IBM Global Public Sector) and Markus Straub (IBM Public Sector Germany)
Law enforcement is changing more rapidly than ever before. New forms of crime, advanced technologies, and evolving relationships with citizens and communities are reshaping the very foundations and scope of law enforcement and public safety, and with them how organizations and their officials perform their critical mission.
As a result, new processes, tools, and strategies are urgently needed to equip law enforcement and public safety personnel with the resources and capabilities required to focus on core, value-added mission functions. Success will require deep understanding and adoption of new technologies, coupled with cultural and organizational change. As emerging technologies drive new business and service models, governments must be prepared to rapidly create, modify, and enhance policy and regulatory frameworks to help ensure these technologies are used effectively and responsibly and deliver maximum value to the communities they serve.
New technological developments are resulting in an explosion of data that is reshaping the law enforcement mission
There is an ongoing convergence of key digital technologies with the potential to transform the way law enforcement officers work and perform their critical function of protecting and serving communities. Key disruptive technologies include 5G mobile communications, artificial intelligence (AI), data analytics, and the Internet of Things (IoT). Independently, each of these technologies has the potential for significant impact; in combination, they are poised to create unprecedented opportunities to augment the capabilities of law enforcement and public safety personnel and to significantly improve mission outcomes. The explosion of real-time data available to organizations can accelerate the adoption of AI systems that augment human capabilities. AI refers to machine-based intelligence, typically manifested in “cognitive” functions that humans associate with other human minds. It encompasses a range of technologies, including machine learning, natural language processing, deep learning, and more. Cognitive computing involves self-learning systems that use data mining, pattern recognition, and natural language processing to mimic the way the human brain works.
AI has many potentially beneficial applications in law enforcement, including (pre-)processing large amounts of data (e.g., from confiscated digital devices, police reports, or digitized cold cases), finding case-relevant information to aid investigation and prosecution, providing easier-to-use services for civilians (e.g., interactive forms or chatbots), and generally enhancing productivity and paperless end-to-end workflows. It can be used to promote human rights, dignity, freedom, equality, solidarity, democracy, and the rule of law. AI also helps socialize tacit knowledge across an organization and improves collaboration beyond organizational boundaries.
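To make the data-processing use cases above more concrete, here is a minimal sketch of how natural language processing might pull case-relevant entities (people, organizations, places, dates) out of unstructured report text. It assumes the open-source spaCy library and its small English model, which are our illustrative choices rather than tooling named in this post; it is a sketch, not a production investigative capability.

```python
# Minimal sketch: extracting case-relevant entities from unstructured report text.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_entities(report_text: str) -> dict:
    """Group named entities (people, organizations, places, dates) found in a report."""
    doc = nlp(report_text)
    entities: dict[str, set[str]] = {}
    for ent in doc.ents:
        entities.setdefault(ent.label_, set()).add(ent.text)
    return {label: sorted(values) for label, values in entities.items()}

if __name__ == "__main__":
    # Hypothetical example text, not drawn from any real case.
    sample = ("On 12 March the suspect was seen near Marienplatz in Munich "
              "and later contacted Example Logistics GmbH by phone.")
    print(extract_entities(sample))
```

In practice, such extraction would serve only as a pre-processing step that surfaces leads for a human investigator, consistent with the human-machine integration discussed in the next section.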
The Challenges of AI and Human-Machine Interaction
The heart of new investigative analytics and similar tools is human-machine integration. The idea is not for digital technology to replace the humans who perform critical law enforcement and public safety functions, but to augment their capabilities, allowing each to focus on and leverage its strengths. Digital technology can crunch massive volumes of data to surface clues a human could never find, while the human supplies context, understands circumstances, and interacts with other humans.
While AI is indeed a very promising technology that could provide many benefits to law enforcement and public safety missions, it is important to be aware of its limitations and to understand and ensure its ethical and responsible use. This is also essential to remedying common misconceptions about the technology. Established and accepted guidelines and legal frameworks for the ethical use of AI (e.g., the EU Ethics Guidelines for Trustworthy AI, the EU Artificial Intelligence Act, the OECD AI Principles, and Australia’s AI Ethics Principles) continue to emerge and will constantly evolve. These frameworks will help set guidelines for the development and use of this technology, and AI-enabled systems and capabilities will need to adapt to these new standards and regulations.
Building on Momentum to Strengthen Readiness
The Munich Innovation Labs have been working on a series of innovations to leverage AI to help enhance the role of the law enforcement and public safety personnel while also ensuring its responsible use. These innovations include the following:
- Leveraging AI to detect hate speech: Create standards and harmonize practices to capture threat intelligence from internal and external data sources through an ecosystem of security tool integrations and open-source intelligence (OSINT) feeds, helping public safety agencies detect and share threat data faster and prepare preventive actions against crime (a minimal illustrative sketch follows this list).
- Gaining efficiency and automation in a very fragmented system: AI is a complex technology, but it is critical that it be human-friendly, usable, fair, trustworthy, and explainable. A set of use cases demonstrates how to design and build AI applications that are transparent and keep humans in control of decision making throughout all processes.
- Collaboration across national boundaries: Data analytics and AI contribute to knowledge sharing across countries and help improve law enforcement. Lessons learned from different countries are being analyzed to accelerate responsible adoption of the technology and its adherence to the EU Ethics Guidelines for Trustworthy AI and the EU Artificial Intelligence Act.
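As a hedged illustration of the hate-speech detection item above, the sketch below shows one way an off-the-shelf text classifier might flag candidate posts from an OSINT feed for human review. The Hugging Face transformers library, the label names, and the confidence threshold are all our assumptions rather than tooling described in this post, and any real deployment would need to satisfy the ethics, transparency, and legal requirements discussed earlier.

```python
# Illustrative sketch only: flagging candidate posts from an OSINT feed for human review.
# Assumes the Hugging Face "transformers" library; the model choice, label names,
# and threshold below are placeholders, not recommendations.
from transformers import pipeline

# A generic text-classification pipeline; a real deployment would substitute a model
# trained and validated for hate-speech detection in the relevant languages.
classifier = pipeline("text-classification")

# Label names depend on the chosen model; these are hypothetical examples.
HATE_LIKE_LABELS = {"HATE", "TOXIC", "OFFENSIVE"}

def flag_for_review(posts: list[str], threshold: float = 0.8) -> list[dict]:
    """Return posts whose predicted label looks hate-like with high confidence,
    so that a human analyst always makes the final decision."""
    flagged = []
    for post, result in zip(posts, classifier(posts)):
        if result["label"].upper() in HATE_LIKE_LABELS and result["score"] >= threshold:
            flagged.append({"text": post, "label": result["label"], "score": result["score"]})
    return flagged
```

Keeping the final call with a human analyst reflects the human-machine integration principle described above; the classifier only narrows the stream of material an analyst must review.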
New and emerging technologies can help law enforcement and public safety organizations keep pace with advances in criminal capabilities and evolving mission requirements. Organizations like the Munich Innovation Labs (MIL) and other major nonprofit organizations are envisioning the ways these new technologies can enhance these missions and developing recommendations based on real-world use cases to help organizations around the world better prepare for the future.
Over the coming months, we plan to convene a series of discussions with global experts and share lessons learned through a series of blogs outlining strategies and solutions for governments to address the challenges that lie ahead. Stay tuned for the next blog on “Leveraging AI to detect hate speech.”