April 2026

When AI Legal Searches Become Exhibit A: Discovery Risk for Architects and Engineers

As artificial intelligence becomes a routine tool in practice, its use is extending beyond design into areas with real legal implications. What may feel like a quick, private search for clarity could, in certain circumstances, become part of the legal record. This article explores how AI use intersects with discovery risk—and what design professionals should consider before turning to these tools.

Why AI legal searches may not be as private—or protected—as they seem

A project manager receives a notice of claim. Before calling counsel, she types into an AI platform:

“What exposure does an architect have for delay damages if shop drawings were late?”

An owner of an architecture firm, after meeting with outside counsel, later asks:

“Realistically, how likely is liability if we knew the schedule was slipping but didn’t document it?”

Those questions feel private. They feel exploratory. They feel like preparation. But if a claim is later pursued against the design professionals who asked them, these searches may become discoverable and could be used against them.

The Emerging Legal Fault Line

Artificial intelligence is rapidly entering professional practice. For architects and engineers, the legal risk does not arise merely from generative design tools or BIM automation. A more immediate exposure lies in how design firms use AI to analyze legal issues.

Recent U.S. discovery decisions have addressed whether AI-generated searches and chatbot interactions are protected from discovery by an adverse party in litigation or arbitration. Unfortunately, courts have not spoken with one voice. Some rulings suggest that communications with third-party AI platforms are analogous to other digital communications and may be discoverable absent privilege. Others have entertained arguments about whether such queries could, in limited circumstances, be protected against discovery.

The emerging case law has moved in two directions. Some courts have treated communications with AI platforms as unprivileged third-party interactions, particularly where no attorney was involved and no anticipation of litigation was documented. Other courts have been more cautious, treating such communications as protected where they were developed in anticipation of litigation. But those rulings are fact-specific and do not establish a broad rule that design professionals can apply with certainty. For architects and engineers, the practical takeaway is sobering: the law has not settled.

What is clear is this: there is no established body of law creating a safe harbor for AI-based legal research conducted by non-lawyers. And that uncertainty is dangerous.

Privilege Is Not Automatic

Under existing law, attorney-client communications and attorney-directed work product are protected from being discovered during dispute resolution proceedings. An AI bot, however, is not a lawyer.

When a project manager or executive types a legal question into a generative AI platform, that communication is not, by default, privileged in the way it would be if directed to a lawyer. The query is (arguably) a communication with a third-party technology provider, and AI queries may be stored, logged, or retrievable through metadata. In litigation, the rules governing the scope of discoverable materials are broad: parties are entitled to obtain nonprivileged material relevant to claims or defenses, and search histories and internal digital communications routinely fall within that scope. As such, AI search queries may be discoverable.

For design professionals, the litigation risk extends beyond simple disclosure. Consider how an opposing counsel might use an AI search query at trial:

  • “Before notifying the client, your project manager asked whether your firm was liable for delay damages, correct?”
  • “The owner of your firm ran a search to understand what damages were recoverable for code noncompliance, didn’t she?”
  • “So, you knew there was a problem?”

Discovery is not just about access; it is about narrative. AI queries could be framed as evidence of knowledge, foreseeability, or even culpability.

Some potentially risky scenarios involving the use of AI may include:

  • Project managers asking AI to evaluate project records before escalating to counsel, and potentially generating recommendations that may (or may not) be accurate. This newly created evidence, if subject to discovery, may create exposure for a design professional.
  • Firm leaders conducting legal research before or after meeting with attorneys. While getting an AI-generated response to an inquiry about a legal issue may ease immediate apprehension on an issue, it may also create new (and unpredictable) risk for the design professional in the future.
  • Junior team members who first discover an issue using AI searches to find a potential “solution” for avoiding damages. Instead of escalating issues to supervisors so they can be addressed proactively, information provided by an AI bot may be used to sweep the issue under the rug.

In each instance, the design firm may unintentionally create a discoverable record of its internal legal risk assessment without the protection of privilege. As a means of managing this risk, the prudent design professional will assume that the communication is not protected.

Practical Safeguards for A/E Firms

Design firms should treat AI legal queries as they would internal emails about liability-related issues. Some practical steps for architects and engineers to consider include the following:

  1. Adopt a written AI governance policy. Define permissible uses and prohibit non-lawyers from seeking legal opinions through AI platforms.
  2. Route legal questions to counsel. Make clear that exposure analysis must occur through attorneys to preserve privilege.
  3. Educate project managers and executives. Many users mistakenly assume AI interactions are private and/or protected.
  4. Evaluate enterprise AI tools carefully. Terms of service, data retention policies, and confidentiality provisions matter.
  5. Document escalation procedures. Establish clear protocols for claims, potential claims, and dispute analysis.

These measures do not eliminate uncertainty, but they reduce the likelihood that the (mis)use of AI will create greater exposure for a design professional.

Conclusions

The most dangerous misconception architects and engineers may hold is that AI is merely a neutral research assistant. It is not. It is a third-party platform operating within an unsettled legal framework. If courts continue applying traditional discovery principles, AI search histories could become standard requests in litigation. Until legislatures or higher courts provide clarity, design professionals should avoid using AI platforms to analyze circumstances that may give (or have already given) rise to a claim against the firm.

By Jonathan C. Shoemaker, Lee/Shoemaker PLLC

Jonathan C. Shoemaker is a lawyer at Lee/Shoemaker PLLC, a law firm devoted to the representation of design professionals in DC, Maryland, and Virginia. This article was prepared to educate readers about potential risks and is not intended to be a substitute for professional legal advice.
