Artificial intelligence (AI) has become more mainstream, especially with the launch of ChatGPT in November 2022. Today’s AI creates, augments, modifies, replaces and encroaches, but it also intrigues and startles us for many reasons, including its propensity to trespass on territory once considered uniquely human. The implications of this incursion (or opportunity, depending on one’s perspective) are immense.

Because AI is powerful, it draws the attention of governing bodies and policymakers, such as governments or, in our case, ACA. Earlier this year, ACA President Edil Torres Rivera created an AI task force charged with developing statements and recommendations for practicing counselors and clients about the use of AI. Those documents are forthcoming. Kent Butler, ACA past president, also appointed a task force to explore AI in the counseling profession during his presidential term. Both presidents should be commended for recognizing AI as a force worthy of attention.

AI regulation is under active discussion internationally, including in the United States, the European Union and China, as well as in professional associations across the globe. The effort to ensure both safety and a climate of scientific progress is a complicated one involving many moving parts. A central question is: How are innovation and ethics balanced? In this article, I discuss some advantages of, and objections to, AI regulatory oversight at the governmental and associational levels.

Questionable motives

In May, OpenAI CEO Sam Altman testified before Congress recommending regulation of AI. Shortly thereafter, Altman warned that his company would cease operations in Europe if the European Union overregulated, a warning he later rescinded. Then, during the AI Summit in September, tech executives convened in a closed-door meeting led by Sen. Chuck Schumer (D-NY) to discuss AI regulation. Reports suggest that during the summit, which was attended predominantly by CEOs and politicians with a collective net worth estimated at $550 billion, there was a loose endorsement of the concept of AI regulation. I invite you to ponder the motivations of the people in that room in light of the information I discuss in this article.

Counseling and regulation

Our field is no stranger to regulation. The ACA Code of Ethics qualifies as a form of oversight. Counselors actively lobby the government, often on matters related to money. For example, counselors were instrumental in passing the Mental Health Access Improvement Act (S. 828/H.R. 432), which allows counselors to receive reimbursement from Medicare. The passage of licensure laws is another example of where counseling and government overlap; our accrediting body, the Council for Accreditation of Counseling and Related Educational Programs, is also involved in this process.

We know that regulation happens, but is it always a good idea? Are there drawbacks? These are hard questions to answer, and a case can be made, using ethics and efficacy studies, that there are pros and cons to regulation. Let’s delve into why AI regulation is important, consider some reasons for exercising caution and highlight a few areas that may fall under either category. (Please note that this is not an exhaustive list.)

Reasons to regulate

There are three areas where regulation may help to ensure the protection of the communities we serve: confidentiality, environmental impact and bias.

Given the pivotal role of confidentiality in counseling ethics and AI’s substantial use of personal data, potential breaches of confidentiality are certainly a valid concern. AI regulation may help ensure data security and the protection of sensitive information. Two essential regulatory measures are instituting a cybersecurity regulatory structure (for comprehensive protection) and securing client privacy and confidentiality (for client protection).

An underrecognized reason to regulate is the environmental impact of AI and of technology in general. AI has a large carbon and water footprint, yet it also has the capacity to help address climate change, such as through the analysis of climate datasets, smart agriculture and energy-efficient urban planning. According to the 2023 preprint paper “Making AI less ‘thirsty’: Uncovering and addressing the secret water footprint of AI models” (posted on arXiv), training GPT-3 consumed 700,000 liters of freshwater. The researchers also estimated that ChatGPT consumes the equivalent of a 16-ounce bottle of water for every five to 50 prompts or questions it answers. The 2019 paper “Energy and policy considerations for deep learning in NLP,” published in the Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, reported that emissions from training a single AI model can surpass 626,000 pounds of carbon dioxide. Regulations may help curtail some of the harmful environmental effects of AI.
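To make those figures concrete, here is a rough back-of-the-envelope calculation in Python based only on the estimates cited above. It is a minimal sketch, not a measurement: the bottle size, unit conversion and five-to-50-prompt range are taken from the sources as reported here, and actual consumption varies widely by model and data center.

```python
# Back-of-the-envelope footprint estimates using the figures cited above.
# These are rough illustrations, not authoritative measurements.

GPT3_TRAINING_WATER_L = 700_000   # liters of freshwater (2023 arXiv preprint)
TRAINING_CO2_LBS = 626_000        # pounds of CO2 (2019 ACL paper)

BOTTLE_OZ = 16                    # one 16-ounce water bottle
OZ_TO_L = 0.0295735               # fluid ounces to liters
bottle_l = BOTTLE_OZ * OZ_TO_L    # ~0.47 liters

# The preprint's estimate of one bottle per five to 50 prompts implies
# per-prompt water use somewhere in this range:
low = bottle_l / 50               # ~0.009 L per prompt
high = bottle_l / 5               # ~0.095 L per prompt

print(f"Training water use: {GPT3_TRAINING_WATER_L:,} L "
      f"(~{GPT3_TRAINING_WATER_L / bottle_l:,.0f} bottles)")
print(f"Per-prompt water use: {low:.3f} to {high:.3f} L")
print(f"Training emissions: {TRAINING_CO2_LBS:,} lbs CO2 "
      f"(~{TRAINING_CO2_LBS / 2204.6:,.0f} metric tons)")
```

Even under these crude assumptions, small per-prompt amounts add up quickly across millions of daily users, which is part of what motivates the regulatory interest described here.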

AI algorithms are also known to harbor bias. Bias usually stems from the design of the AI and the data used to train it. For example, if the computer is trained using datasets that underrepresent an ethnic group, then the computer will follow programmed rules, or instructions, that may discriminate against that group. The result may be that a person is misidentified, misdiagnosed or subjected to an assortment of other unfair or harmful practices. Thus, regulation could help protect vulnerable groups. Many government agencies, such as the Federal Trade Commission, Consumer Financial Protection Bureau, Department of Justice and Equal Employment Opportunity Commission, already enforce laws prohibiting unfair or deceptive practices, including the sale or use of racially biased algorithms.
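The mechanism can be illustrated with a toy simulation. The sketch below is entirely hypothetical: it invents two groups whose data behave differently, fits a single-threshold “model” to a training set in which one group is underrepresented and shows that the minority group ends up with a higher error rate. None of the names, numbers or rules come from a real system.

```python
# Toy illustration of how underrepresentation in training data can yield
# group-level bias. All data here are simulated and hypothetical.
import random

random.seed(0)

def make_person(group):
    # The "true" relationship between a score and the outcome differs by
    # group: group A's outcome flips at 0.5, group B's at 0.7.
    score = random.random()
    cutoff = 0.5 if group == "A" else 0.7
    return group, score, score >= cutoff

# Training set: group B is heavily underrepresented (5% of examples).
train = [make_person("A") for _ in range(950)] + \
        [make_person("B") for _ in range(50)]

def error_rate(threshold, data):
    # Fraction of people the rule "predict True when score >= threshold"
    # gets wrong.
    return sum((score >= threshold) != outcome
               for _, score, outcome in data) / len(data)

# "Learn" one threshold by minimizing overall training error. The model
# has little incentive to fit the minority group well.
learned = min((t / 100 for t in range(101)),
              key=lambda t: error_rate(t, train))

# Evaluate on balanced test sets: the underrepresented group suffers
# markedly more errors.
test_a = [make_person("A") for _ in range(1000)]
test_b = [make_person("B") for _ in range(1000)]
print(f"learned threshold: {learned:.2f}")
print(f"error rate, group A: {error_rate(learned, test_a):.1%}")
print(f"error rate, group B: {error_rate(learned, test_b):.1%}")
```

Running the sketch shows the learned threshold landing near the majority group’s cutoff, so group B is misclassified far more often, which is the pattern regulators worry about in hiring, lending and diagnostic tools.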

Caution against regulation

Theoretically, regulations should improve quality and safety, but in many cases, there is a surprising lack of evidence that they actually work. Take licensure, for example, which is a form of regulation. We hope that licensure ensures high standards and competence, but if we ask the question, “Is there evidence of safety and quality improvement because of licensure enactment?” then the answer for many fields is no. For counseling, the research is incomplete. Few, if any, studies examine whether licensure improves the safety and quality of counseling services. However, many articles attest to the diligence of counselors in pursuing licensure and how this process enables counselors to be paid and establishes counseling as a legitimate profession.

Regulation, in fact, can serve as a barrier to entry and come with a high opportunity cost. Large companies often handle regulatory burdens better than startups, and these companies are known to lobby Congress to enact regulations for the purpose of keeping competitors out of the market. Meeting regulatory requirements also takes time and resources. The trade-off, or what never came to fruition because of the time spent meeting regulatory requirements, represents an opportunity cost. There is a hidden cost to every decision. As I write this, I am listening to “Fly Me to the Moon” by Frank Sinatra. The opportunity cost is every other song I could be listening to.

Regulation, especially overregulation, means that advancements will happen someplace else. Peter Diamandis, an aerospace entrepreneur and founder of several high-tech companies, captured the sentiment well: “If the government regulates against use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.” Applied to the counseling field, this implies that excessive regulation of AI use by counselors (or a lack of participation in the AI arena) could result in psychology, social work and other mental health disciplines assuming those responsibilities and research initiatives. I’m aware that some may counter this argument by saying, “Well, good. If psychology jumps off a cliff, should we, too?” Nonetheless, if counseling shuns AI, then someone else could reap the value that AI offers.

Free speech, jobs and AI

A contentious point in the recent writers’ strike by the Writers Guild of America highlights an ongoing issue in the AI regulatory question: the impact of AI on jobs. The strike ended once the guild and the studios reached an agreement that, among other things, prohibits a company from using AI to write or rewrite scripts. Speculation has long run rampant that AI will automate jobs out of existence (for humans) to the point that mass unemployment ensues. Proposed solutions for this possibility include regulation and a universal basic income. Presently, it makes sense for career counselors to include automation and AI as considerations in their models and to be prepared for how the anticipated increase in automation will affect the populations they serve.

Regulation also brings up another issue: free speech. What happens when an imperative for free speech, a tendency toward misinformation and the speedy acceleration of disinformation via AI get thrown in a bag? A debate, that’s what. Recall that misinformation means a person got it wrong accidentally, whereas disinformation means they meant to get it wrong. AI is good at both, depending on who and what created the AI, their intentions, their attentiveness to ensuring fairness and diversity, or even their naivete. Of course, when permitted (i.e., when people are free to pursue solutions through machine learning), AI is capable of producing breakthroughs in science, medicine and, potentially, our field as well. However, regulation of AI in the domain of speech remains tricky and controversial.

Finding a balance

Categorically, the major options are to regulate AI or not to regulate it. The actuality, or what will likely happen, lies somewhere in between. Sometimes intervention is precipitated by a publicized event, perhaps one in which people unfortunately get hurt. Counselors are bound by principles that might conflict in relation to AI regulation. For instance, the World Health Organization estimates that there is a global shortage of 4.3 million mental health workers, a number expected to reach 10 million by 2030. The shortage is more pronounced in low- and lower-middle-income countries. If AI can assist in providing mental health support yet is curtailed by excessive regulation, then perhaps the principles of justice and beneficence are violated. Alternatively, unleashing AI capable of bias and unfairness also violates justice, not to mention nonmaleficence.

The question of how to balance innovation and ethics is a hot one that may intensify in the coming years. With each milestone, AI seems to encroach further into human domains, and this pattern may increase the call for regulation. Staying abreast of developments in a fast-paced field is demanding but may prove valuable in a future where AI’s prominence grows.


Russell Fulmer is a professor and director of graduate counseling programs at Husson University. He is the chair of the ACA Task Force on AI. Contact him at fulmerr@husson.edu.

Opinions expressed and statements made in articles appearing on CT Online should not be assumed to represent the opinions of the editors or policies of the American Counseling Association.