Artificial Intelligence and Dispute Management

A guide for companies that want to innovate without overexposing themselves

AI can streamline corporate processes, but in strategic decisions and disputes human intelligence remains central:
context, accountability and reputation are not automatable.

I. Where AI adds value (and where it does not)

Companies already use AI for:

  • contract pre-screening (penalty clauses, jurisdiction/venue, warranties);
  • assisted drafting (contracts, minutes, letters, negotiation playbooks);
  • decision support (dossier summaries, mediation-offer hypotheses, BATNA/WATNA analysis);
  • legal/reputational risk monitoring (early warning, issue mapping).

Real benefits: speed, lower operational load, ability to process large document volumes.

Structural limitation: models do not capture negotiating intent, power asymmetries or deal dynamics, nor do they assume responsibility for outcomes.
Treating AI as a decision-maker is a mistake: it is a work tool for pre‑analysis and support, not a substitute for professional judgment.

II. Risks you cannot afford to ignore

1) The autonomy illusion

An AI system can suggest text or strategies, but it has no awareness of risk or of your market. If the output misleads, liability remains with the company (and, where applicable, with directors).

2) The context gap

A dispute (litigation, arbitration, mediation) is a process to be managed: objectives, constraints, people, timing. AI does not perceive the weak signals (relationships between parties, timing, postures) that often determine the outcome.

3) Applicable law already in force (GDPR)

Whenever AI processes personal data, the GDPR applies—most notably:

  • Article 22 GDPR: the right not to be subject to a decision based solely on automated processing (including profiling) that produces legal or similarly significant effects, except for the cases listed in para. 2 and with safeguards (meaningful human involvement, the possibility to express one’s view and to contest the decision).
  • Articles 13(2)(f), 14(2)(g), 15(1)(h): information duties on the existence of automated decision‑making/profiling, the underlying logic and the envisaged effects for the data subject.

In practice: candidate scoring or disciplinary measures that are entirely automated are, as a rule, not permitted unless an Article 22(2) exemption applies and effective, verifiable safeguards are in place (genuine human review, traceability, challenge channels).
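To make "genuine human review" operational rather than declaratory, the gate can be enforced in the workflow itself. Below is a minimal sketch, assuming a hypothetical candidate-screening pipeline; all names (ScreeningResult, final_decision) are illustrative and do not refer to any real HR system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningResult:
    candidate_id: str
    model_score: float                       # AI-generated score (pre-analysis only)
    reviewer_id: Optional[str] = None        # set once a human has reviewed
    reviewer_decision: Optional[str] = None  # e.g. "advance" / "reject"

def final_decision(result: ScreeningResult) -> str:
    """Return a decision only if a human reviewer has signed off.

    Hypothetical sketch: the model score never produces legal effects on
    its own. Without a recorded human review the pipeline blocks, which
    is one way to keep the process outside the Art. 22 GDPR prohibition
    on decisions based solely on automated processing.
    """
    if result.reviewer_id is None or result.reviewer_decision is None:
        raise PermissionError(
            f"Candidate {result.candidate_id}: no human review recorded; "
            "an automated score alone cannot produce a decision."
        )
    return result.reviewer_decision  # human decision, informed by the score
```

The point is architectural: the automated output is an input to the decision, and the system cannot emit a decision without a recorded, attributable human sign-off.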

III. EU AI Act: what actually changes (scope and timeline)

The EU Artificial Intelligence Act – Regulation (EU) 2024/1689 adopts a risk‑based approach with phased application:

  • Entry into force: 1 August 2024; general applicability: 2 August 2026.
  • Prohibited practices (unacceptable risk): applicable from 2 February 2025 (e.g., generalised social scoring, subliminal manipulation, exploitation of vulnerabilities, certain emotion recognition uses in sensitive contexts).
  • GPAI / general‑purpose models (Chapter V): provider obligations from 2 August 2025 (technical transparency, documentation, copyright/IP compliance); enhanced duties for models with systemic risk.
  • High‑risk systems: core obligations from 2 August 2026 (risk management, data governance, logging, transparency towards deployers, structured human oversight).

Note: transitional regimes apply to GPAI already on the market before 2 August 2025; no substantial postponement of the timeline is currently expected.

IV. Governance, contracts and supply chain: audit‑ready AI

1) Privacy & data protection (GDPR)

  • DPIA (Art. 35) for high‑risk processing; records (Art. 30); Art. 28 processor agreements; extra‑EU transfers (Chapter V) with SCCs and transfer impact assessment.
  • Complete notices (Arts. 13–14) and channels to exercise rights (Art. 15, including the explanation of the logic where ADM is involved).

2) Security (NIS2)

For essential/important entities and for organisations relying on critical cloud/AI vendors: appropriate technical/organisational measures, supply‑chain risk management, incident reporting and business continuity.

3) Data and portability (Data Act)

Contract clauses on fair access/use of data, data sharing with partners, cloud switching and anti‑lock‑in measures; define SLAs and service credits for data extraction/portability upon termination.

4) Contracting with AI vendors

  • Use of your inputs/outputs for model training only if expressly agreed (or expressly prohibited).
  • EU data location (or equivalent guarantees); audit & logging; indemnities; SLAs aligned with NIS2 expectations.
  • For GPAI consumed via API: obtain technical documentation and cooperation commitments (AI Act Chapter V; Arts. 53–55 for systemic‑risk models).

5) Evidence, privilege and confidentiality

Avoid entering privileged/confidential information into consumer tools. Put in place litigation holds, chain of custody and retention rules for prompts/outputs/logs to preserve evidential defensibility.
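One way to make prompt/output retention evidentially defensible is an append-only log in which each record embeds the hash of the previous one, so later tampering breaks the chain. The sketch below is illustrative only (field names and JSONL format are assumptions, not a standard); a production system would add access controls and qualified time-stamping.

```python
import hashlib, json, time

def append_record(log_path: str, prompt: str, output: str, user: str) -> str:
    """Append a prompt/output pair to an append-only JSONL log.

    Illustrative sketch: each record carries the hash of the previous
    record, so any later alteration breaks the chain -- a simple basis
    for chain of custody over AI-assisted work product.
    """
    prev_hash = "0" * 64
    try:
        with open(log_path, "rb") as f:
            lines = f.read().splitlines()
            if lines:
                prev_hash = json.loads(lines[-1])["record_hash"]
    except FileNotFoundError:
        pass  # first record in a new log

    record = {
        "ts": time.time(), "user": user,
        "prompt": prompt, "output": output,
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_hash"]
```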

V. Disputes, ADR and arbitration: responsible use of AI

Recent guidance (2024–2025) requires transparency and proportionality:

  • SVAMC (2024) and CIArb (2025): targeted disclosure of AI use where material, no delegation of decision‑making to AI, tool due diligence, model procedural orders/AI protocols.
  • ICCA–NYC Bar–CPR Cybersecurity Protocol (2022): a framework of “reasonable” security measures in arbitration (breach notification, access controls, proportionality criteria).

Sample clause language

  • AI‑Use Disclosure: each party discloses whether and to what extent it uses AI for activities that affect submissions or evidence, and preserves relevant logs for production at the tribunal’s request.
  • No‑Delegation: no delegation to AI of adjudicative functions or probative assessments by the tribunal or the parties.
  • Deepfake ban: prohibition of undisclosed synthetic manipulation; obligation to provide originals and metadata (a fingerprinting sketch follows this list).
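For the originals-and-metadata obligation, a party can record a cryptographic fingerprint of each exhibit at the moment of collection, giving the tribunal a baseline against which later copies or suspected manipulations can be compared. A minimal sketch, assuming SHA‑256 and ordinary filesystem metadata; all field names are illustrative.

```python
import hashlib, json, os
from datetime import datetime, timezone

def fingerprint_original(path: str) -> dict:
    """Compute a SHA-256 fingerprint and basic file metadata for an exhibit.

    Illustrative sketch: recorded at collection time, this supports the
    obligation to provide originals and metadata. Field names are
    assumptions, not a recognised evidentiary schema.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    stat = os.stat(path)
    return {
        "file": os.path.basename(path),
        "sha256": h.hexdigest(),
        "size_bytes": stat.st_size,
        "mtime_utc": datetime.fromtimestamp(
            stat.st_mtime, tz=timezone.utc).isoformat(),
        "collected_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example (hypothetical file name):
# print(json.dumps(fingerprint_original("exhibit_A.mp4"), indent=2))
```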

VI. Typical use cases and red flags

Legal Department

Use case: pre‑screening of clauses (change of control, most‑favoured‑customer, liability caps).
Red flags: hallucinations on applicable law; reuse of text conflicting with IP policy; poor source traceability.

HR

Use case: support for job descriptions and interview grids.
Red flags: fully automated candidate scoring/disciplinary measures without an Art. 22(2) exception and without adequate safeguards.

Sales/Commercial

Use case: drafting offers, LOIs, term sheets.
Red flags: unintended promises; inconsistencies between GTCs, POs and SLAs.

Compliance

Use case: whistleblowing triage, sanctions screening.
Red flags: missing DPIA; unassessed extra‑EU transfers; insufficient security/logging controls.

VII. A “minimum” operating model (that actually works)

  • AI Register: systems used, data processed, legal bases, roles (provider/deployer), risks and mitigations (a minimal register sketch follows this list).
  • Policies & SOPs: safe prompting, handling of confidential data, four‑eyes principle on critical outputs, confidentiality classes.
  • Human oversight: who may approve what, with which competencies and controls.
  • Contracts: technical schedules with security controls, SLAs, audit rights, tech‑change/sanctions‑change clauses.
  • Auditability: logs, versioning, minimum explainability where people or critical decisions are affected; defensible preservation of evidence.
  • Periodic training: risks, limits and do’s & don’ts for Legal, HR, IT, Sales.
  • Incident plan: rapid channels, breach triage, containment and structured lessons learned.
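As anticipated in the first bullet, the AI Register need not be sophisticated: even one typed record per system forces the right questions to be asked. A minimal sketch in Python; every field name and the example entry (including "ExampleVendor") are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AIRegisterEntry:
    """One row of the AI Register; all fields are illustrative."""
    system_name: str                  # e.g. "contract pre-screening assistant"
    vendor: str
    role: str                         # "provider" or "deployer" (AI Act terms)
    personal_data: list[str]          # categories processed, if any
    legal_basis: str                  # GDPR Art. 6 basis for that processing
    risk_class: str                   # e.g. "high-risk", "GPAI", "limited"
    mitigations: list[str] = field(default_factory=list)
    human_oversight: str = ""         # who approves critical outputs

register = [
    AIRegisterEntry(
        system_name="clause pre-screening",
        vendor="ExampleVendor",       # hypothetical vendor
        role="deployer",
        personal_data=["counterparty contact data"],
        legal_basis="Art. 6(1)(f) legitimate interest",
        risk_class="limited",
        mitigations=["four-eyes review", "EU data location"],
        human_oversight="Head of Legal",
    ),
]
```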

✅ Legal Checklist: using AI without exposing your business

  1. Map processing (GDPR): data categories, legal bases, retention, extra‑EU transfers (Chapter V), Art. 30 records, Art. 35 DPIA where needed.
  2. Notices & rights: update privacy notices (Arts. 13–14) and Art. 15(1)(h) channels for logic/effects in case of ADM.
  3. No “solely automated” decisions where prohibited: ensure meaningful human review and challenge channels (Art. 22).
  4. AI Act: classify systems (high‑risk/GPAI), set up risk management, data governance, logging, oversight; comply with 2025–2027 milestones.
  5. Vendor due diligence: Art. 28 DPAs, EU data location, no training without consent, audits; Chapter V AI Act commitments.
  6. Cybersecurity (NIS2): organisational/technical measures, vendor management, incident reporting.
  7. Data Act: clauses on data access/use, sharing and cloud switching (anti‑lock‑in).
  8. ADR/Arbitration: policy on AI in procedure, disclosure, no‑delegation, logging, deepfake ban; cybersecurity protocols.

Key sources

  • EU AI Act – Regulation (EU) 2024/1689: entry into force 01.08.2024; prohibited practices from 02.02.2025; GPAI duties from 02.08.2025; high‑risk duties from 02.08.2026.
  • GDPR – Regulation (EU) 2016/679: Arts. 13(2)(f), 14(2)(g), 15(1)(h), 22.
  • NIS2 – Directive (EU) 2022/2555.
  • Data Act – Regulation (EU) 2023/2854.
  • Soft‑law ADR – SVAMC AI Guidelines (2024); CIArb AI Guideline (2025); ICCA–NYC Bar–CPR Cybersecurity Protocol (2022).

Speak to a lawyer
Studio Legale Rosano · Counsel on AI, privacy, contracts, ADR