
When talking about predictive justice, we all think of the film Minority Report (Steven Spielberg, 2002), in which precogs reveal future criminals and their crimes, including the time and location, allowing the police to apprehend them before any crime is committed. The central issue of the film is that errors can occur and the technology can be manipulated.

Artificial Intelligence is already present in the legal sector, even though that sector lags far behind in terms of technology (due to a lack of resources, but also because it has long been resistant to change). Nevertheless, AI is moving cautiously into all fields, and justice is no exception. It is therefore necessary to discuss how this new technology can be integrated within judicial settings.

  • AI could streamline judicial decision-making by reducing judicial uncertainty. It would standardize decisions by providing a single solution to a given problem. However, the subjectivity of the judge is an essential characteristic, as they consider the situation as a whole: the motive, mitigating circumstances, and so on.
  • AI can also serve as a decision-making aid, as it already does in many fields. It would thus allow the judge to provide a response perfectly tailored to the situation at hand (unlike the standardization of judicial decisions). The outcome of a judgment would always be unique, with objective and justified reasons. 
  • AI could accelerate the quantitative processing of cases. It can process a greater number of files in less time, and could thus take on “mass cases”, meaning cases that do not present particular or unique circumstances. 

In reality, predictive justice is nothing more than a fantasy: the expertise, analytical mind and humanity of judges can never be fully modelled mathematically (although researchers are attempting to do so). By seeking to rationalize decisions and thereby erase possible inequalities, AI would only reinforce them. It cannot adapt to unforeseen situations and, because it is fed with existing judicial data, it reproduces the biases that data contains (notably in the United States, where the COMPAS risk-assessment tool was shown to overestimate the risk of recidivism among African-American defendants).

For the moment, AI in the legal field is used to carry out narrowly defined tasks that do not involve judgement: reviewing contracts and clauses, checking compliance policies (to prevent corruption and counterfeiting), searching case law, assisting legal research, legal monitoring, and so on.

We are still a long way from the world of Minority Report, and judges will probably never be replaced by robots. Nevertheless, researchers continue to reflect on the role of AI in the legal field, to ensure that it is not used to undermine the independence of the justice system and that no biases interfere with its use. Justice is blind, and so must be the AI that accompanies it.