Ethical challenges of generative AI in architectural practice
As you navigate the evolving risk management challenges created by the emergence of generative AI (GenAI) tools, Victor Insurance Managers and the AIA Trust are here to assist. This article examines the crucial yet frequently overlooked impact of GenAI tools on your ethical obligations. Our aim is to provide you with the knowledge and strategies required to promote the ethical and responsible use of GenAI in your practice.
Ethical considerations
The lack of transparency around how GenAI tools and platforms work (what inputs are used and incorporated into the finished product, and how your inputs and outputs are used by these platforms thereafter) can make it difficult to comply with a number of your ethical obligations. For instance, the duties required by the AIA Code of Ethics and Professional Conduct (2024 edition) involving competence, candor and truthfulness, confidentiality, proper attribution, and your supervisory responsibility over others should be carefully managed as they relate to the use of GenAI. More specifically:
Competence
Under Rule 3.102: Members shall undertake to perform professional services only when they, together with those whom they may engage as consultants, are qualified by education, training, or experience in the specific technical areas involved.
This requires that you “perform services only where you are qualified by education, training, or experience in the specific technical areas involved.” The commentary to this rule notes that “Members venturing into areas that require expertise that they do not possess may obtain that expertise by additional education, training, or through the retention of consultants with the necessary expertise.” The considerations raised by Rule 3.102 are reinforced by Rule 4.102.
Under Rule 4.102: Members shall not sign or seal drawings, specifications, reports, or other professional work for which they do not have responsible control.
This requires you to have “responsible control” over all drawings, specifications, and reports you sign or seal. The commentary to this rule defines “responsible control” as the degree of knowledge and supervision ordinarily required by the professional standard of care. With respect to the work of licensed consultants, members may sign or seal such work if they have “reviewed it, coordinated its preparation, or intend to be responsible for its accuracy.”
The implication of these rules for the use of GenAI is that you should limit your use of these tools to areas where you have the appropriate education, training, and expertise to meaningfully review and gauge the accuracy of all outputs. Where you lack the required competence, consider retaining qualified consultants or subconsultants who can assist in providing that meaningful review.
Candor and truthfulness
Under Rule 3.301: Members shall not intentionally or recklessly mislead existing or prospective clients about the results that can be achieved through the use of the Members’ services, nor shall the Members state that they can achieve results by means that violate applicable law or this Code.
It’s no secret that GenAI systems can produce results with the following issues:
Inaccurate results
If a GenAI system is trained using data available up to 2021, its responses may not account for or be in line with the latest laws and regulations. Additionally, if the underlying training data contains inaccurate or outdated information regarding applicable laws, those inaccuracies will be mimicked and reinforced by the GenAI system as the outputs incorporating these inaccuracies are used to further train the system.
Misleading results
Hallucinations, fabrications, and confabulations are all terms that have been used to describe a phenomenon in which GenAI invents false information, images, or text and confidently presents it in the output. According to the National Institute of Standards and Technology (NIST), this is “a natural result of the way generative models are designed: they generate outputs that approximate the statistical distribution of their training data; for example, LLMs [large language models] predict the next token or word in a sentence or phrase.” This phenomenon garnered much attention in 2023 when a lawyer cited fake cases generated by ChatGPT in his legal brief. For architects, it might involve the removal of existing, immovable structures when generating conceptual images or models, or suggestions to use materials that don’t exist or lack the required performance capabilities, to name a few examples.
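To make the NIST description concrete, the following minimal Python sketch samples a “next word” from an invented probability table. Everything here, including the vocabulary and the probabilities, is hypothetical; real models work over vast vocabularies and contexts, but the core move is the same: pick what is statistically plausible, not what is verified.

```python
# Purely illustrative sketch of next-token sampling, the mechanism NIST
# describes. The words and probabilities below are invented for demonstration.
import random

# Hypothetical probabilities a model might assign to the word following
# "The cladding material is" -- learned from text patterns, not from facts.
next_word_probs = {
    "aluminum": 0.40,
    "terracotta": 0.30,
    "graphenix": 0.20,  # a plausible-sounding material that does not exist
    "timber": 0.10,
}

def sample_next_word(probs: dict) -> str:
    """Pick the next word in proportion to its learned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Roughly one run in five confidently "specifies" the nonexistent material.
print("The cladding material is", sample_next_word(next_word_probs))
```

Nothing in this loop consults a source of truth; plausibility is the only criterion, which is why confident fabrication is a property of the design rather than a malfunction.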
Biased results
If the training data contains biases, the GenAI’s outputs may contain those same biases. For instance, if most of the inputs used to train that system involve designs that don’t adequately account for ADA requirements, the output that’s generated might contain similar biases. Or if the GenAI system was trained using residential plans from a specific country, its output could reflect that country’s specific cultural norms and regulatory requirements and may not be applicable or appropriate for projects located in other countries. Therefore, it’s important to review all outputs for accuracy, feasibility, and compliance with project criteria. Otherwise, it could result in a reckless misrepresentation to the client about what can actually be achieved on their project.
Confidentiality
Under Rule 3.401: Members shall not knowingly disclose information that would adversely affect their client or that they have been asked to maintain in confidence, except as otherwise allowed or required by this Code or applicable law.
This prohibits you from knowingly disclosing information that would adversely affect your client or that you’ve been asked to maintain in confidence unless, of course, the disclosure is required by law or some other provision of the code (e.g., disclosure of information related to illegal activity). This might come into play, for example, if you’re working on a high-security facility, such as a prison or offices for the Department of Defense (DOD) or Central Intelligence Agency (CIA): publicly posting the designs or interior images of those spaces may go against your client’s security interests. Even on residential projects, particularly when working with a high-profile or security-conscious client, sharing photos of the home and exposing its entry and exit points may have an adverse impact on your client’s sense of security.
Proper attribution
Under Rule 4.201: Members shall not make misleading, deceptive, or false statements or claims about their professional qualifications, experience, or performance and shall accurately state the scope and nature of their responsibilities in connection with work for which they are claiming credit.
Further, under Rule 5.301: Members shall recognize and respect the professional contributions of their employees, employers, professional colleagues, and business associates.
These rules mean you cannot claim or imply credit for work you did not do, mislead others about your role, or deny other project participants their proper share of credit.
The lack of transparency around how GenAI platforms work and what inputs are used and incorporated into the outputs can make it difficult to comply with these rules requiring proper disclosure about the nature and scope of work you are claiming credit for, as well as proper attribution for the contributions of others.
Responsibility for others
Under Rule 4.202: Members shall make reasonable efforts to ensure that those over whom they have supervisory authority conform their conduct to this Code.
This means that you have an obligation to ensure that those under your supervisory authority are:
- provided guidance on what systems are appropriate for business use;
- properly trained on how to effectively use GenAI tools; and
- instructed on how to craft effective prompts that align with the firm’s contractual, legal, and ethical obligations.
Risk mitigation strategies
In thinking about how to successfully and responsibly incorporate GenAI into your practice in a way that aligns with your firm’s ethical responsibilities, here are some things to keep in mind.
Firm policy and training
Provide guidance on which AI platforms and tools employees are permitted to use and the parameters of that use. This might include limiting use to specifically prescribed and approved GenAI platforms and clarifying what information is and is not allowed to be used on these platforms. Further, don’t just regulate; educate. Employees should be properly trained on:
- how to responsibly and effectively use approved GenAI tools, e.g., how to ascertain whether the firm has the right to use and license the data before uploading it to any GenAI platform(s);
- how to properly scrub sensitive or confidential client, project, and firm information (a simple scrubbing sketch follows this list);
- how to craft effective prompts; and
- the dangers of using publicly available, unapproved platforms.
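As one example of the scrubbing step above, here is a simplified, hypothetical Python sketch. The sensitive terms, patterns, and names are invented for illustration, and a real workflow would pair a firm-maintained term list with human review; no automated filter catches everything.

```python
# Illustrative pre-upload scrubbing sketch. The terms and patterns below are
# hypothetical; maintain and expand them under your firm's own policy.
import re

# Hypothetical firm-maintained list of confidential names to redact.
SENSITIVE_TERMS = ["Acme Development LLC", "Project Falcon", "123 Harbor View Lane"]

# Common PII patterns (email addresses, US-style phone numbers).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace known sensitive terms and common PII patterns with placeholders."""
    for term in SENSITIVE_TERMS:
        text = text.replace(term, "[REDACTED]")
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Draft a project narrative for Acme Development LLC about Project "
          "Falcon at 123 Harbor View Lane; send questions to jsmith@example.com.")
print(scrub(prompt))
# Draft a project narrative for [REDACTED] about [REDACTED] at [REDACTED];
# send questions to [EMAIL].
```

A deliberate design choice here is to redact before anything leaves the firm’s systems, so even an approved platform never receives the underlying identifiers.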
Helpful resources in crafting your firm’s GenAI policy may include NIST’s guide to identifying and managing the risks associated with GenAI, NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, and the U.S. Department of State’s guide to “design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights,” Risk Management Profile for Artificial Intelligence and Human Rights.
Full disclosure of GenAI use
Be upfront about the use of GenAI tools in your practice. Ensure that all parties are comfortable with your use of these tools by taking the time to explain to relevant stakeholders how GenAI tools are responsibly incorporated into your processes, whether to assist you on a project for a client or on a publication for a trade journal. Be prepared to address any concerns or reservations clients or others may have. If the client is uncomfortable with your use of GenAI tools, it is much better to have that clarity ahead of time.
Contract
Know your contractual obligations. Identify what restrictions or risks the client contract imposes on you with respect to your use of GenAI on any given project. Identify the extent to which other project parties are using GenAI and whether they’re using these tools responsibly. Work with an attorney to incorporate appropriate contract terms to protect your firm against the improper use of GenAI tools by other project parties. For example, consider including a right-to-rely provision so that, if another project party’s improper use of GenAI tools introduces errors into the information provided to you, you are not liable for the resulting damages.
Proper QA/QC procedures
Have a thorough quality assurance/quality control (QA/QC) process to validate the assumptions, inputs, calculations, and outputs of the firm’s designs and documents, and revise them as needed. This is a key consideration for all designs and documents created by your firm, including those created using GenAI tools. Proper QA/QC in the context of GenAI tools necessarily requires that these tools be used only to assist in tasks within the firm’s field of expertise or for which proper oversight and review can be arranged. The firm’s QA/QC procedures might also include steps to identify any substantial similarities between GenAI-produced outputs and publicly available content, to proactively manage risks related to copyright infringement.
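As a sketch of what that last step could look like, the following hypothetical Python example runs a coarse first-pass similarity screen using the standard library’s difflib. The file names and the threshold are assumptions, not a standard; a character-level ratio is only a rough signal, and any flagged match should go to human review, and to counsel where infringement is a genuine concern.

```python
# Coarse first-pass similarity screen between a GenAI output and a local
# library of reference texts. Flags candidates for human review only; this
# does not establish or rule out copyright infringement.
from difflib import SequenceMatcher
from pathlib import Path

SIMILARITY_THRESHOLD = 0.85  # assumed review trigger; tune under firm policy

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two texts (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical file locations.
genai_output = Path("genai_spec_draft.txt").read_text()
for ref in Path("reference_library").glob("*.txt"):
    score = similarity(genai_output, ref.read_text())
    if score >= SIMILARITY_THRESHOLD:
        print(f"Flag for review: {ref.name} (similarity {score:.2f})")
```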
Insurance coverage
Make sure your firm is properly insured for the type of services you provide, the type of projects you take on, and your firm size. The kinds of claims likely to come out of the use of GenAI tools are claims that firms are already at risk of, but with a GenAI component. Accordingly, these claims may be covered under existing policies. For instance, a claim arising from the allegedly negligent use of GenAI tools may, at its core, simply be a negligence claim, which may be covered under the firm’s professional liability insurance policy.
The exposures and the risks are already there; the open question is whether the use of AI will change the frequency and severity of those risks in ways we do not yet understand (e.g., increased use of and reliance on GenAI could magnify cyber liability and privacy risks). So review your coverage periodically to make sure your firm remains properly insured.
Conclusion
Despite the novelty of GenAI tools, the risks and mitigation strategies needed to minimize the potential exposures are not new. A good rule of thumb to prevent avoidable perils is to treat GenAI outputs as you would work produced by a new associate: take the time to identify the underlying assumptions, verify the work, and put it through the appropriate quality control processes. By taking these steps, you can help ensure the accuracy and conformity of all outputs with project requirements and any ethical and legal obligations that may apply, such as those involving confidentiality or proper attribution. Remember, at the end of the day, whether your work product is generated with assistance from a GenAI tool or a human associate, you are ultimately responsible for that work.
For more information on GenAI risks, sign up for Victor’s upcoming webinar on the topic, Inspiration or infringement: Guide to copyright issues and Generative AI for design professionals. Additional information on risk management strategies is also available on the Victor Risk Advisory website.