The Florida Bar Debate: Authenticity, Evidence, and Professional Responsibility

As deepfakes and AI-generated content proliferate, Florida lawyers face growing uncertainty over authentication, evidentiary reliability, and professional duties. A synthetic video depicting a high-profile technology executive in a fabricated criminal scenario demonstrated how easily AI can distort information. The Florida Bar considered, but ultimately tabled, two proposals: requiring attorneys to disclose AI-generated content in filings, and imposing penalties for knowingly submitting synthetic evidence. Proponents cited obligations under Rule 4-3.3 (Candor Toward the Tribunal), while opponents cautioned that overly broad rules could constrain routine AI-assisted research and drafting.

Bar committees in New York and California have debated similar AI disclosure standards, reflecting nationwide concern over evidentiary integrity in both civil and criminal cases. Legal scholars note that existing authentication standards under Florida law assume the inherent trustworthiness of visual media, a presumption that sophisticated deepfakes now challenge.

For practitioners, the debate underscores the need for internal protocols: reviewing AI-generated content before filing, documenting human oversight, and advising clients on the risks of synthetic submissions. Courts are likely to scrutinize algorithmically generated evidence more closely, and Florida attorneys are well positioned to help shape the emerging professional norms.