- Advised on AI development, deployment, data use, and third‑party risk.
- Designed enterprise AI governance programs and internal controls.
- Supported AI‑related strategy, contracting, and cloud and data infrastructure decisions.
- Conducted DPIAs and risk assessments for automated decision‑making and high‑risk processing.
- Drafted practical AI policies and playbooks for product and engineering teams.
- Counseled boards and executives on emerging AI laws, risk exposure, and compliance strategy.
- Advised on U.S. state privacy laws governing automated decision‑making and opt‑out rights.
- Supported industry disclosures, reporting obligations, and AI governance frameworks.
- Authored and advised on artificial intelligence and machine learning policy for mission and administrative systems within the U.S. Department of Homeland Security.
- Designed privacy‑preserving governance frameworks that enabled the use of advanced analytics and emerging technologies in national security and interagency vetting programs.
- Advised on the lawful use, oversight, and risk mitigation of AI‑enabled data processing involving sensitive and regulated information.
- Applied federal government experience in intelligence, privacy, and incident response to enterprise AI governance, deployment risk, and regulatory readiness.

The Beckage Firm is uniquely positioned because we have not only lawyers with MIT AI certifications, but also technical leaders in AI implementation from some of the world's largest organizations. Our team members have worked with government and private organizations on AI initiatives and program development, taught master's-level data science students at the State University of New York on AI, data security, privacy, and ethics, and speak at leading industry conferences on AI around the globe.
Members of our team have spoken on AI at notable events such as the InsureTRUST CRC Cyber Summit, Cybersecurity Docket’s Incident Response Forum, ExecuSummit, NetDiligence® Cyber Risk Summit, and The Lexing Network Global Conference to name a few.
AI, Cybersecurity, and Emerging Systemic Risk
Artificial intelligence is no longer just a productivity tool or experimental technology. Recent developments demonstrate that advanced AI systems can identify, analyze, and exploit software vulnerabilities at a scale and speed not previously possible. As AI capabilities grow, organizations must prepare for a new category of risk—one where existing weaknesses in infrastructure, applications, and governance can be surfaced faster than traditional defensive measures were designed to handle.
These developments affect organizations across sectors, particularly those responsible for sensitive data, critical systems, or financial infrastructure. The question is no longer if AI will change the cyber threat landscape, but how quickly organizations adapt.
WHO THIS IMPACTS
Organizations most likely to be affected include:
Financial Institutions
Banks, insurers, fintechs, and payment processors operating complex or legacy systems.
Critical Infrastructure and Essential Services
Energy, utilities, telecommunications, transportation, and internet service providers.
Healthcare, Education, and Public Sector Entities
Organizations managing highly sensitive data or operating at national or community scale.
Technology and Cloud-Dependent Businesses
Companies heavily reliant on third-party software, open-source components, or interconnected vendors.
Digital Asset and Crypto-Related Organizations
Exchanges, wallet providers, custody platforms, and blockchain-based service providers.
Preparing for AI-Accelerated Cyber Risk
Advanced AI does not create new vulnerabilities—it dramatically increases the ability to discover and exploit existing ones. Organizations should reevaluate their risk posture with that reality in mind.
Effective preparation includes:
Incorporating AI-driven threats into board, executive, and enterprise risk discussions.
Identifying and remediating system vulnerabilities before they are exploited.
Assuming faster attack cycles, reduced detection time, and cross-system impacts.
Understanding how dependencies may expand exposure beyond internal systems.
Anticipating how “reasonable security” standards may evolve as AI capabilities advance.
The Beckage Firm works with organizations to align technical security measures with legal, regulatory, and governance obligations—before an incident occurs, not after.
As governments and regulators become aware of AI-driven cyber capabilities, expectations around cybersecurity practices are likely to rise. What was once considered reasonable may no longer suffice.
Proactive legal and technical alignment is becoming essential to risk management in the AI era.
We help identify potential pitfalls and liabilities of AI use, including compliance gaps, discrimination, bias, and ethical concerns, and we assess and implement risk mitigation strategies.
We assist with board and management education and reporting on AI impact and strategy. We help establish clear roles and responsibilities for accountability and transparency in AI projects.
We help draft and negotiate agreements, addressing limitations on use, specialized contractual terms, specific uses of data in training, and rights in AI output. This includes responding to third-party inquiries and security questionnaires regarding AI uses and practices. Companies using third-party generative AI tools may also face legal, practical, and reputational risks arising from the provider's noncompliance, which we help mitigate through contracting and operational risk mitigation efforts.
As leaders in insurance markets, we provide clients with no-cost evaluations of their insurance policies to address AI uses and coverage with insurance carriers.
We respond to data security incidents involving AI, such as the use of AI by threat actors, including instances in which a threat actor's impact on AI systems results in data loss or disclosure, or in the modification of AI behavior with unintended consequences.
Security cannot be ignored in developing AI models. We help organizations establish internal controls and white papers for AI use while considering unique ways to keep personal information protected from unauthorized use.
We create detailed plans for organizations that identify applicable AI laws and regulations and steps to achieve compliance.
It is difficult for organizations in this rapidly emerging technological landscape to understand and assess the risks of AI use in mergers and acquisitions. Our tech lawyers and AI technologists can assist with due diligence measures.
We assist clients in assessing the legal implications of obtaining data and content from public sources (e.g., web scraping or third-party data acquisitions) to train machine learning (ML) models and various AI applications. Training data may include personal information that was collected in violation of privacy laws, which may negatively impact the AI model and its derivative uses.
We have drafted many AI policies and worked with AI development teams to create sound playbooks for internal operational practices.
We advise clients on global privacy requirements related to the use of personal information for automated decision-making, which may have legal, financial, or social consequences for individual consumers or employees. Data subjects may also make requests (data subject access requests, or DSARs) regarding the use and removal of their information. Numerous privacy laws grant these individuals the right to access, delete, and correct their personal information, which may be difficult to honor where the personal data has been absorbed into AI models and systems.
The Beckage Firm assists in responding to inquiries from regulators on AI practices.
We work with clients to determine how they can use AI and related processes within their organizations, including the potential impact on existing contractual requirements, business operations, data collection and use, and use of AI by third parties. Our clients seek guidance on AI projects while balancing data privacy, data protection and security, litigation, and information governance considerations.
With any new technology, employees and management must be trained on AI use, ethics, and risk mitigation efforts. We support clients with foundational knowledge regarding AI processes, including data assembly, AI and algorithm operation, audit practices, potential for disparate impacts, and related contractual and regulatory obligations.