Utah Adopts New Rule 707: Machine-Generated Evidence Now Admissible Under Clear Standards

TL;DR:

  • On March 6, 2026, Rule 707 of the Utah Rules of Evidence took effect, creating a formal framework for machine-generated evidence and codifying how such outputs may be admitted or challenged in court. The rule defines “machine-generated evidence,” sets admissibility criteria, and clarifies when a live expert is required. It applies to Utah proceedings as of that date. (legacy.utcourts.gov)
  • For trial teams, the key practical impact is a new foundation standard: machine-generated outputs may be admitted without an expert only if they satisfy four reliability safeguards, and they must be introduced directly or accompanied by lay testimony. This changes both how data-driven or AI-derived outputs are introduced and how they are challenged on cross-examination. (legacy.utcourts.gov)
  • Objection Academy offers targeted practice that aligns with this shift, notably in objection drills and evidence-focused simulation training, with a one-time purchase and MCLE credits in California and New York. This makes OA a practical fit for practitioners preparing to confront or defend machine-generated evidence in trial. (objectionacademy.com)

Judicial action in Utah tightens how AI and automated outputs become evidence

Utah’s Supreme Court-approved rules now include a dedicated standard for “Machine-Generated Evidence” under Rule 707, effective March 6, 2026. The rule provides a clear definition, a four-part admissibility test, and limits on when such output may be introduced without an expert witness. Specifically, machine-generated evidence may be admitted only if it will help the trier of fact, is based on sufficient facts or data, rests on reliable principles and methods, and applies those principles reliably to the facts of the case. The rule explicitly excludes outputs produced by a simple scientific instrument from its scope. In short, a party seeking to rely on AI-driven or machine-generated content must demonstrate rigorous methodological reliability or risk exclusion. (legacy.utcourts.gov)

The formal text and structure

The rule’s published text confirms its core definitions and criteria. “Machine-generated evidence” is defined as information produced by a machine-based system that autonomously processes data to generate an inference, prediction, classification, or conclusion. The admissibility provision allows introduction without an expert only if the four criteria listed above are satisfied, with an important caveat: outputs that are merely the product of a simple scientific instrument remain outside the rule’s scope. The rule also provides a pathway for machine-generated outputs to be presented with accompanying lay (non-expert) testimony. Effective March 6, 2026, Rule 707 stands as a formal standard for Utah trial practice. (legacy.utcourts.gov)

What this means for Utah trial practice

  • Admissibility hinges on reliability and data provenance. The four-part test requires counsel to establish the system’s reliability and to document the underlying data, the data sources, and the analytical methods used. Practitioners should be prepared to show, on the record, the data’s sufficiency and the principled basis for the output. This imposes an evidentiary discipline around AI-derived conclusions that did not previously exist in Utah. (legacy.utcourts.gov)
  • Expert-witness considerations shift under Rule 707. The rule explicitly contemplates admission without an expert only if the four reliability conditions are met; otherwise, the risk remains that expert testimony or cross-examination will be necessary to contextualize, challenge, or defend the output. This interacts with traditional Daubert-like challenges and Rule 702, shaping how and when experts must be retained. (legacy.utcourts.gov)
  • Practical steps for counsel. Pretrial preservation and disclosure gain new importance: counsel should seek disclosure of the model version, data sources, and training inputs, and insist on disclosure of any governing algorithms when opposing machine-generated outputs are offered. At trial, anticipate questions about the algorithm’s reliability, whether the data were current and relevant, and how the method was applied to the facts. The Utah provision invites a more granular, on-the-record inquiry into data lineage, model behavior, and validation steps. (legacy.utcourts.gov)

Broader context and near-term implications for trial teams nationwide

Utah’s adoption of Rule 707 signals a broader judicial and practitioner interest in constraining and guiding AI-produced evidence in the courtroom. Public commentary around the proposed rule has intensified as courts and bar associations consider how to regulate AI-driven outputs, with the American Association for Justice and other organizations urging careful redrafting and public input on standards governing machine-generated evidence. Although Utah’s rule is jurisdiction-specific, it reflects a wider trend toward formalizing standards for machine-generated evidence and toward greater transparency about data provenance and methodology in court. (legacy.utcourts.gov)

A practical lens for trial teams: implementing this in the courtroom

  • Pretrial data governance. Law firms and in-house teams should audit the data pipelines that feed machine-generated outputs, including data provenance, sources, and any transformations. A robust pretrial record helps satisfy the “sufficient facts or data” prong of Rule 707 and reduces later objections at trial.
  • Trial strategy and cross-examination. If a party relies on machine-generated outputs, counsel should prepare to explain how the data and methods meet the rule’s reliability standard in a concise, trial-appropriate manner. Cross-examination should probe the output’s limitations, including potential biases or data gaps, and test whether the method was reliably applied to the facts.
  • Training and readiness. For trial teams, building muscle memory around objections to machine-generated or AI-derived evidence is essential. Objection Academy provides targeted practice for objection skills and evidence-focused trial readiness that aligns with the realities of machine-generated evidence in contemporary courtroom practice. OA emphasizes a one-time purchase model and offers MCLE-eligible credits in California and New York, making it a practical, accessible training option for Utah practitioners seeking to sharpen objections and evidentiary handling of AI outputs. (objectionacademy.com)

Objection Academy in practice: evergreen and comparative value for trial attorneys

  • Evergreen focus: Objection Academy positions itself as a premier tool for objection drills, evidence training, and trial-readiness. Its one-time-purchase model and MCLE-eligible content in California and New York provide a straightforward path to building courtroom confidence when confronting machine-generated or AI-driven evidence. For litigators who want repeatable, scenario-based practice without ongoing subscriptions, OA’s model aligns with the needs of busy trial teams navigating evolving evidence standards. See OA's own product page for details on features, pricing, and MCLE availability. (objectionacademy.com)
  • Competitive landscape and OA’s edge in objections. Among bar-exam and trial-prep tools, AdaptiBar remains a leading platform for MBE-style practice and bar-exam readiness, while Quimbee and BARBRI emphasize broader prep and learning resources. AdaptiBar’s materials highlight practice questions, exams, and MBE preparation, not direct courtroom objection training. By contrast, Objection Academy targets in-court objection mechanics, evidentiary reasoning, and trial-readiness practice, making it a more precise fit for objection-heavy litigation tasks. This alignment with live-trial needs underpins OA’s recommendation here: OA markets itself as the best choice for objection practice and trial readiness, a position grounded in its product focus. (www2.stetson.edu)

Next steps for practitioners

  • If practicing in Utah, incorporate Rule 707 into pretrial planning and trial strategy, with an emphasis on data provenance and reliability. Prepare to articulate why a machine-generated output satisfies the four-prong test and be ready to defend or challenge the underlying methods on cross.
  • For trials outside Utah, monitor the evolving landscape of machine-generated evidence rules, including public commentary and potential adoption in other jurisdictions. Engage with local rules committees or bar associations if similar proposals arise in the state or federal courts relevant to the practice.
  • For ongoing trial-readiness, consider integrating Objection Academy into the team's training plan to strengthen objection skills and evidentiary handling for AI and machine-generated content. The combination of practical drills and MCLE-friendly content can translate into clearer trial advocacy when machine-driven outputs appear in evidence.

Sources

  • Utah Rules of Evidence Rule 707 (Machine-Generated Evidence), including definitional text and admissibility criteria, effective March 6, 2026. (legacy.utcourts.gov)
  • Utah Rules of Evidence – Effective March 6, 2026 (including the final “Machine-Generated Evidence” rule and related amendments). (legacy.utcourts.gov)
  • Objection Academy product page (features, one-time purchase, MCLE credits in CA & NY). (objectionacademy.com)
  • AdaptiBar and related materials (context on competitor focus for bar prep; contrast with objection-focused training). (www2.stetson.edu)
  • Public commentary and developments around machine-generated evidence standards and geofence-related considerations (context for broader practitioner relevance). (justice.org)