
Responsible and innovative: AI for a fair Dutch judicial system



The strategy adopted by the Dutch judicial system for artificial intelligence

Artificial Intelligence (AI) is radically changing the world. It will affect how the Dutch judicial system operates, the way judgments are rendered and the supervision of both. Parties to proceedings will use AI, AI (or content and decisions generated by AI) will itself be the subject of disputes, and AI will affect the furnishing of proof. AI can also offer solutions to major issues the Dutch judicial system is facing, such as the structural shortage of judges, access to the legal system and the long time it takes to deal with cases. At the same time, we should not ignore the challenges involved, such as the protection of fundamental rights, including privacy, non-discrimination and (judicial) autonomy. This strategy explains how the Dutch judicial system views AI and how it wishes to handle this development.

Artificial Intelligence is everywhere. AI is a system technology and will change our society drastically, just as the internet has done. The Dutch judicial system must nevertheless take a stance. On the one hand, AI manifests itself in individual cases. How shall we deal with procedural documents and evidence generated by AI? What effect does AI have on the furnishing of proof and on the equality of arms? On the other hand, AI also offers opportunities to the Dutch judicial system. How shall we put them to the best use? How can we use AI when addressing the major issues faced by the Dutch judicial system?

We see opportunities in the short term for improving labour-intensive administrative and logistics processes, such as scheduling and planning, pseudonymization of judgments and better process management through forecasting models for incoming cases and routing. AI could also be used as a tool for drafting news items, letters and presentations. We also see opportunities for improving contact with society and citizens, for example by automatically summarizing judgments at B1 language level and by having chatbots provide information in natural language. We also believe that we can improve labour-intensive processes in the legal realm with the use of AI, for example for checking time limits, finding the right case law and detecting deviations in the account and accountability reports handled by the Supervisory department. And we see that AI could play a major supporting role in analysing, structuring and summarizing large, complex case files and, for example, in drawing up draft judgments.
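By way of illustration, the sketch below shows one very simplified way pseudonymization of a judgment could be approached: replacing known party names with neutral placeholders. It is a minimal, hypothetical Python example under stated assumptions (the names to replace are supplied in advance, and "[betrokkene n]" is used as placeholder); it is not a description of the tooling actually used by the Dutch judicial system, which would also need named-entity recognition and handling of addresses, dates of birth, case numbers and other identifying details.

    import re

    def pseudonymize(text, names):
        # Replace each known party name with a neutral placeholder such as
        # "[betrokkene 1]". Matching is case-insensitive and on whole words.
        for i, name in enumerate(names, start=1):
            pattern = re.compile(r"\b" + re.escape(name) + r"\b", flags=re.IGNORECASE)
            text = pattern.sub(f"[betrokkene {i}]", text)
        return text

    judgment = "De rechtbank veroordeelt Jan Jansen tot betaling aan Piet Pietersen."
    print(pseudonymize(judgment, ["Jan Jansen", "Piet Pietersen"]))
    # -> De rechtbank veroordeelt [betrokkene 1] tot betaling aan [betrokkene 2].
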

The use of AI technology (both by parties to proceedings and by the Dutch judicial system) must be in line with the judicial requirements of a fair trial, access to the court and judicial independence and impartiality. This may seem obvious, but it is not. Technology is not value-free and is also a power factor owing to the concentration of knowledge, capital and data. We must ensure that the judicial domain is safeguarded against unwanted technological influences. A tool that can help achieve this is the European AI Act. The Act distinguishes between low-risk and high-risk applications and imposes various obligations on the use of AI, depending on the risk profile. Naturally, the Dutch judicial system complies with this Act.

 

The judiciary is the third power of the state and plays a crucial role in the democratic rule of law. This power must remain independent, both from the two political powers of the state, the legislature and the executive, and from others. Artificial Intelligence (AI) has the potential to influence judicial autonomy, both directly and indirectly. Both the European Union in its AI Act and the Council of Europe in CCJE Opinion 26 have explicitly called for measures that guarantee that AI does not inappropriately influence judicial decision-making.

It is the ambition of the Dutch judicial system to recognize the risks of AI and to deal with them appropriately. At the same time, AI can be a powerful tool that can help tackle the major challenges faced by the Dutch judicial system. However, AI will only be used within the framework of the rule of law, with a focus on human verification, transparency and ethical guarantees.

The Dutch judicial system considers that the use of AI could be very beneficial in low-risk proceedings and therefore focuses on this use. This is done responsibly and in line with the core values of the Dutch judicial system. A learning-by-doing approach is adopted in order to gain technological, legal and organisational knowledge and experience. This will provide a solid basis for future compliance with the strict requirements that will apply to high-risk AI. Until such time, AI will not be used in high-risk proceedings, so as to fully guarantee the independence and reliability of the judiciary.

The Dutch judicial system plays a crucial role in the application, and therefore also in the further interpretation, of the law in the AI age, by way of the specific cases that come before it. The Dutch judicial system is an essential actor in the shaping of legal protection, legal certainty and related safeguards in a situation where AI reaches further and further into the capillaries of the democratic rule of law (including by providing a 'counterweight'). We adopt a 10-point plan in order to realise this strategy:

  1. We develop a weighting framework for the purposes of protecting the judiciary as an independent third power of the state, the autonomy of the judge, fundamental rights (including Article 6 ECHR), our core values (including values relevant to AI, such as sovereignty and sustainability), ethics (for example by using the IAMA, which stands for Impact Assessment Human Rights and Algorithms), etc.;
  2. We set up independent supervision of the proper use of that weighting framework, ensuring that not only the question 'may we use it?' is asked, but also 'do we want to use it?';
  3. Together with the legal content departments, we develop and draw up visions, regulations, arrangements and whatever else is needed to deal adequately with AI in individual cases. We discuss this with relevant stakeholders, such as the NOvA and chain partners;
  4. We invest in an information, development and training programme on how to use AI in day-to-day judicial practice. We also do this in order to recognise, develop, use and ensure the correct implementation of new AI applications;
  5. We invest in and actively participate in public initiatives, such as collaborations with chain partners like the J&V data lab, the NFI and GPT-NL. We also take part in European collaborations (ENCJ, CCJE) and work with social and scientific partners;
  6. We continue to conduct experiments, following initiatives taken by the courts as well as ideas put forward by the courts. Where possible, we quickly scale up to national introduction and share our experiences;
  7. We invest in a data and AI platform that allows for experimentation and can be scaled up, and in doing so can be a real asset for the courts and those seeking justice;
  8. We invest in continuously improving the quality and availability of judicial data, as we realise that these are essential for the use of AI. We develop a framework for the responsible sharing of data, taking into account the privacy, availability, integrity and reliability of data;
  9. We focus, in particular, on low-risk applications. By adopting the learning-by-doing approach, we are better able to identify the essential requirements for higher-risk applications. However, we rule out the use of AI for judicial decision-making (for example, a robot judge);
  10. We ensure adequate control and monitoring of AI projects and AI applications. We comply with the AI Act. We are transparent about the use of AI. We include the use of AI in a publicly accessible algorithm/AI register (see the illustrative sketch after this list). We regularly evaluate and update our AI strategy, in line with technological and social developments.
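
To make the register mentioned in point 10 more concrete, the sketch below shows what a single entry in a public algorithm/AI register might contain. The field names and values are illustrative assumptions only, not the schema of any existing register or of a specific application of the Dutch judicial system.

    # A hypothetical entry in a public algorithm/AI register; the fields and
    # values are illustrative assumptions, not an existing schema.
    register_entry = {
        "name": "Judgment summarization assistant",
        "purpose": "Draft B1-level summaries of published judgments",
        "risk_profile": "low",  # risk profile in the sense of the European AI Act
        "human_oversight": "every summary is reviewed and approved by a person",
        "data_used": "published, pseudonymized judgments",
        "status": "pilot",
    }

    # Print the entry as it might appear on a public register page.
    for field, value in register_entry.items():
        print(f"{field}: {value}")
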