Forbes
9 Aug 2023


[Image: Future artificial intelligence robot and cyborg. Caption: The bonanza of AI adoption is giving rise to new AI laws, which need to be suitably and sensibly devised by observing vital guidelines like the ones discussed herein. Credit: Getty.]

In today’s column, I’ll be focusing on an important position statement that the American Bar Association (ABA) has established regarding AI and the law, notably referred to as Resolution 604. I bring up this topic partially as a follow-on to my recent comprehensive analysis of the state-of-the-art on AI and the law (see the link here), and also due to having participated as a panel member on AI and the law at the recent ABA Annual Conference wherein Resolution 604 was brought up.

The panel took place in Denver at the American Bar Association’s Annual Meeting and was entitled “The AI Trap: The Missing Guardrails for Lawyers.” The panelists included Dr. Lance Eliot (that’s me), venerated Dazza Greenwood, and esteemed Lucy L. Thomson, Esq. CISSP, and was moderated by famed Dina Temple-Raston of NPR. Thanks go to the ABA Cybersecurity Legal Task Force and the ABA Section of Science and Technology Law for having put together the panel and gotten the event onto the program schedule. Special added thanks go to Ruth Hill Bro, Harvey Rishikof, Claudia Rast, Maureen Kelly, Holly McMahon, and many others for their tireless and herculean efforts in doing so.

During the panel discussion, the significance of Resolution 604 was brought up. Fittingly so since Lucy L. Thomson had championed the measure and been a founding member of the ABA Cybersecurity Legal Task Force and a past chair of the ABA Science & Technology Law Section. A brief overview of Resolution 604 was provided, and highlights were notably mentioned. After the panel concluded, some attendees came up to chat with me and wanted to get my further thoughts about the nature of Resolution 604 and its nuances.

This spurred me to put together today’s column proffering my insights on the topic.

Without further ado, let’s get underway.

Vital ABA Positions Regarding AI And The Law

Before we leap into Resolution 604, a bit of background context will be helpful and instructive.

First, as I’ve covered previously (see the link here), you can conceive of AI and the law as an intertwining of two perspectives. One perspective entails the application of our laws to the governance of AI including those entities and individuals that craft and field AI systems. The other perspective is the application of AI to the practice of law such as using AI apps to aid in performing legal tasks. Both of these perspectives are equally vital. They also at times dovetail synergistically.

My discussion herein will be focusing on the application of the law to AI. For the other side of the coin, namely, the application of AI to the law, see my coverage at the link here and the link here, just to name a few.

Second, in case you aren’t already aware, the ABA is a highly regarded professional organization of lawyers and law students that is especially known for its model codes of legal practice and its policy recommendations pertaining to the law and the rule of law. As indicated on the ABA website: “The ABA was founded in 1878 on a commitment to set the legal and ethical foundation for the American nation. Today, it exists as a membership organization and stands committed to its mission of defending liberty and pursuing justice.”

Third, when it comes to the AI topic and the law, the ABA has a lot to say, rightfully so. You would have to be living in a cave to not be cognizant that there is a tremendous amount of attention these days toward concerns about AI. AI can contain undue biases. AI can portend harm to society in a variety of ways. And so on. The aim is to ensure that our laws, including new laws that are being envisioned by lawmakers and regulators, will be suitably crafted to aid in the human-values alignment of AI.

I tend to recommend that anyone interested in the ABA’s considerations about AI and the law at least look at these four milestones (there are others, so please go wider and deeper if the topic interests you):

- ABA Model Rule 1.1 Comment 8
- ABA Resolution 112 (August 2019)
- ABA Resolution 700 (2022)
- ABA Resolution 604 (February 2023)

Let’s take a quick look at those statements.

ABA Model Rule 1.1 Comment 8

Here’s what the ABA Model Rule 1.1 Comment 8 has to say:

“To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”

The statement indicates that lawyers are strongly encouraged to keep up with relevant technologies, for which you can easily and compellingly make the case that AI abundantly applies. Lawyers are needed to aid in the crafting of laws and regulations that seek to govern AI. If said lawyers are unfamiliar with AI, they are unlikely to be able to contribute meaningfully to devising sound new laws about AI.

Furthermore, lawyers are going to increasingly be sought by businesses and consumers to provide legal representation when AI seemingly goes too far and in some fashion causes harm. You see, to a certain extent, the AI realm is somewhat akin to the Wild West right now. You can readily anticipate that gradually there will be all manner of legal cases and legal redress sought for AI-promulgated harms.

Firms that are trying to get ahead of the coming tsunami of legal actions about AI are already leveraging their legal advisers. Rather than getting mired in a costly legal battle after the fact, companies that are thinking ahead of the game make use of AI-savvy lawyers from the get-go. This includes not just legal counseling for AI makers but also for any firm that makes use of AI in its services or products. I find that many top executives that I consult with are blithely unaware that their firm is using AI, often because the AI is embedded within systems and somewhat hidden from view. Nonetheless, legal issues await them like a ticking timebomb.

For those reasons and a slew of other bona fide considerations, lawyers should be getting themselves up-to-speed about AI. Likewise, the same prudent recommendation applies to law firms all told.

ABA Resolution 112

Moving on, I’d next like to introduce to you ABA Resolution 112.

Here it is:

This is a bit of an oldie but goodie, having been established in August 2019, though it remains fully applicable today and into the future.

Resolution 112 urges both lawyers and the courts to be aware of and seek to address the ethical and legal issues underlying AI.

For example, I’ve covered the recently activated New York City (NYC) law governing the use of AI embedded in Automated Employment Decision Tools (AEDT), see the link here. Local Law 144 seeks to surface any AI that might have been data-trained or set up with undue biases that violate employment laws, such as discriminating based on race or gender when hiring employees. Now that the new law has been activated, and assuming NYC enforces it, I’ve predicted that businesses with employees in NYC will eventually get dinged if their AEDT doesn’t meet the stipulations of Local Law 144. This includes the requirement to have a third-party auditor examine the AEDT being utilized and expose any AI-powered biases found. Lawyers are inevitably going to be involved in the evolution of this new law, and the odds are that many other locales across the country will attempt to enact similar legislation.
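To make the bias-audit notion concrete, here is a minimal sketch of the kind of impact-ratio calculation such an audit revolves around: comparing each group’s selection rate against the most-favored group’s rate. This is an illustrative simplification, not the procedure prescribed by Local Law 144; the sample data, group labels, and the four-fifths flagging threshold are assumptions for demonstration only.

```python
# Minimal sketch of an adverse-impact (impact ratio) check, the kind of
# calculation a bias audit of an AEDT revolves around. Illustrative only;
# the data and the 0.8 flagging threshold are assumptions, not the
# statutorily prescribed procedure.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Synthetic hiring outcomes: group A selected 60%, group B selected 30%.
sample = [("A", True)] * 60 + [("A", False)] * 40 \
       + [("B", True)] * 30 + [("B", False)] * 70
for group, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```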

ABA Resolution 700

Next, let’s examine ABA Resolution 700.

Here’s what it says:

You have to read somewhat between the lines to garner how AI comes into play in Resolution 700. This statement was promulgated in 2022 and continues to be vitally important.

The deal is that pretrial risk assessments are increasingly being undertaken via the use of automated tools. Those tools typically include AI. The AI might have been data-trained on data that already contains various inherent biases. If the AI then spits out an assessment and doesn’t provide a viable and wholly transparent indication of how the result was computationally rendered, you might have no idea whatsoever that unsurfaced biases are contained therein.

Plus, and an eyebrow-raising surprise to many non-AI-versed lawyers, the AI might provide a seemingly sensible and completely fair explanation, yet the explanation has little to do with how the AI actually calculated the assessment. You are possibly being boondoggled with an explanation that is contrived or concocted and bears no resemblance to the computational underpinnings.
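To see how that can happen, consider a minimal sketch using scikit-learn. A complex model makes the actual assessment, while a tidy surrogate model, fitted only to the “palatable” features, supplies the explanation; measuring the surrogate’s fidelity reveals how little the explanation may reflect the real computation. Everything here is synthetic and illustrative, not drawn from any actual risk-assessment product.

```python
# Minimal sketch: a post-hoc "explanation" (an interpretable surrogate)
# can diverge from what the underlying model actually computes.
# All data and features are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(18, 70, n)
prior_counts = rng.poisson(1.5, n)
zip_risk = rng.random(n)  # a proxy feature that may encode hidden bias

# The "true" outcome leans heavily on the proxy feature.
y = (0.1 * prior_counts + 2.0 * zip_risk + rng.normal(0, 0.3, n) > 1.2).astype(int)
X = np.column_stack([age, prior_counts, zip_risk])

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A shallow surrogate tree mimics the black box using only the two
# innocuous features -- the kind of tidy explanation a vendor might offer.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X[:, :2], black_box.predict(X))

# Fidelity: how often the tidy explanation agrees with the real model.
fidelity = (surrogate.predict(X[:, :2]) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to the actual model: {fidelity:.2%}")
# Low fidelity means the plausible-sounding explanation bears little
# resemblance to how the assessment was actually computed.
```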

Hopefully, Resolution 700 keeps lawyers on their toes about being prudently cautious when it comes to the latest and coolest automation underlying such risk assessments. Do not fall for the classic “the AI said so” refrain, since that merely invokes the illusion that AI is flawless and perfect (it is not!).

ABA Resolution 604

We are now at the headliner of today’s column, ABA Resolution 604, which was established this year, specifically in February 2023. It is relatively new. By and large, lawyers are gradually becoming familiar with what it consists of and the legal ramifications that arise accordingly.

I will show Resolution 604 in its entirety and then do an unpacking to help showcase the foundations and significance thereof:

I’ll step you through the statement in a moment.

You might at an initial glance readily discern that the statement squarely addresses AI. There’s no need to read between the lines on this one. The statement is relatively wide-ranging and covers numerous AI-related societal, ethical, and legal concerns. This is somewhat remarkable given that the statement is pretty succinct and doesn’t drag you through tons of detail.

Speaking of detail, I urge you to take a look at the full report associated with ABA Resolution 604 (my advice is the same for the other resolutions that I’ve listed too). These resolutions typically are accompanied by a detailed report that provides cited works pertaining to the resolution. The ins and outs of the resolution are also conveyed in the corresponding report. This can be handy if you want to learn how the resolution came to be and the logic or basis for its coming to fruition.

I’d like to share an excerpt from the report on Resolution 604 expressing why the statement was devised, as noted in the report from Claudia Rast and Maureen Kelly, Co-Chairs of the ABA Cybersecurity Legal Task Force:

I trust that gives you a semblance of how and why Resolution 604 is overarchingly important.

Doing A “Show Me” About Resolution 604

When I do conference speaking or guest lectures about AI and the law, I usually include Resolution 604 since it gives me ample opportunity to range through many AI-related legal and ethical issues. There is a lot fruitfully jam-packed into the statement.

Consider this.

If you take a closer look, you should notice that there are three major components, numbered one to three (these tend to draw your main attention). Make sure to also observe that Resolution 604 has a preamble and a post-commentary. We don’t want to inadvertently overlook those. They are part and parcel of the coherence and cohesiveness involved.

I like to do a customary “show me” about Resolution 604, serving as a plain-language translator, if you will, offering perspectives and scenarios that illustrate what the statement means and portends.

Ponder these questions about the statement:

The answers to those substantive questions are what I’ll be covering next, doing so for each of the five delineated components: the preamble, the three numbered components, and the post-commentary.

Let’s dig in.

The Preamble Of Resolution 604

Here again is the preamble:

It is worthwhile to mindfully parse the words of the preamble. By implication, the depiction refers to the full systems development life cycle (SDLC) of AI systems.

Here are the AI issues at play.

At the first stage of development, the AI system is imagined or envisioned by an AI developer as to what they want it to do (conceptual stage). An astute developer will then sketch out the design of the AI (design stage) before leaping into building it. Once the design is sufficiently figured out, the AI is developed or built (construction stage) and, hopefully, tested prior to fielding (testing stage). The next step entails fielding or deploying the AI (deployment stage). At that juncture, the AI is presumably put into use (in-use stage). One other facet, though not explicitly named in the preamble, consists of doing maintenance and adjustments to the AI (upkeep stage).

An AI system doesn’t necessarily have to rigidly proceed stage-by-stage and can be devised iteratively rather than in a so-called classic waterfall pattern. In any case, the odds are that each of those stages will take place. Sometimes a particular stage of the life cycle of the AI system is shorter or longer, more complex or simpler, and otherwise varies depending upon various key underlying factors.
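For concreteness, the stages named above can be laid out as a simple checklist that a governance or legal review might walk through. This is a minimal sketch; the checkpoint questions are my own illustrative assumptions, not anything prescribed by the preamble itself.

```python
# Minimal sketch: the SDLC stages discussed above rendered as a governance
# checklist. Stage names mirror the article; the checkpoint questions are
# illustrative assumptions, not prescribed by Resolution 604.
from enum import Enum

class SDLCStage(Enum):
    CONCEPTUAL = "What is the AI intended to do, and for whom?"
    DESIGN = "Are oversight and control points designed in?"
    CONSTRUCTION = "Are data sources and key decisions documented?"
    TESTING = "Has the AI been tested for undue biases and failure modes?"
    DEPLOYMENT = "Who is accountable once the AI is fielded?"
    IN_USE = "Is real-world behavior being monitored?"
    UPKEEP = "Do maintenance and self-learning updates get re-reviewed?"

def governance_checklist() -> None:
    # Iterate in definition order; real projects may loop back iteratively
    # rather than proceeding strictly waterfall-style.
    for stage in SDLCStage:
        print(f"{stage.name:>12}: {stage.value}")

governance_checklist()
```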

Why does all of that matter?

Because sometimes those that are unfamiliar with AI and unfamiliar with the principles of the systems development life cycle are apt to concentrate on only one stage and forsake the others. For example, lawmakers or regulators might decide to enact laws that limit AI at the deployment stage and yet fail to consider what happens after deployment. The AI might be working properly at deployment and then, later on, during maintenance or upkeep, begin to falter and go astray (this is especially likely when the AI consists of machine learning, deep learning, large language models, generative AI, and the like, such that the AI might contain “self-learning” capabilities by which it adjusts itself from time to time).
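That post-deployment faltering is detectable if you look for it. Here is a minimal sketch of one common drift check, a population stability index (PSI) comparing the model’s score distribution at deployment against its current distribution. The synthetic data and the 0.25 rule of thumb are illustrative assumptions, not a mandated test.

```python
# Minimal sketch: detecting post-deployment drift in a model's outputs via
# a population stability index (PSI). The synthetic score distributions
# and the 0.25 rule of thumb are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score samples."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so every actual score falls within some bin.
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
scores_at_deployment = rng.beta(2, 5, 10_000)
scores_after_updates = rng.beta(3, 4, 10_000)  # the model has drifted

value = psi(scores_at_deployment, scores_after_updates)
print(f"PSI = {value:.3f}")  # rule of thumb: above 0.25 suggests major drift
```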

Simply stated, make sure to take into account all stages of development associated with AI. Be on the lookout for concerns that can enter the picture at any stage. Don’t just assume that focusing on the latter stages will solve all problems. If the badness or wrongfulness is baked into the AI at an earlier stage, it will often be quite difficult to ferret out, and the problematic aspects can sit silently and undiscovered. The latent aspects might not surface until the least expected moment.

I present to you some weighty AI and law issues to contemplate as spurred via the preamble:

That covers the preamble, and we are ready to next explore the major components of Resolution 604.

First Major Component Of Resolution 604

Here as a reminder is the first major component:

We shall once again closely parse things.

From an AI issues perspective, the opening portion rightfully seeks to clarify what is meant by AI “developers”. Some might assume that an AI developer is only the individual or firm that first constructed the AI. The problem there is that a system integrator might have acquired or licensed the AI and then added their own bells and whistles to the AI. In that sense, they too are suitably considered AI developers (this can be legally ambiguous and debatable). The same goes for suppliers, operators, and a range of others that might have had a heavy hand or even a light hand in shaping or reshaping the AI.

The second key element in the statement consists of singling out the notion of human authority, oversight, and control pertaining to AI. This is to a great extent a reference to the vital human-in-the-loop (HITL), human-on-the-loop (HOTL), and human-out-of-the-loop (HOOTL) precepts.

Allow me to briefly explain.

An AI system that incorporates a human-in-the-loop capacity will presumably ensure that, at sufficient checkpoints, a human is involved in ascertaining what direction the AI proceeds and whether it should even proceed further at all. In contrast, a human-on-the-loop capacity tends to entail that a human is brought in by the AI only at the end of the AI’s processing, which can sometimes be too late; unfortunately, some damages or harms might already have occurred. Human-out-of-the-loop is generally a condition of having no provision for human involvement in monitoring or control, and the AI is said to be autonomous in its activities.
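Here is a minimal sketch of those three oversight postures expressed as a decision pipeline. The function names, the escalation behavior, and the example decision are all illustrative assumptions, not anything mandated by Resolution 604.

```python
# Minimal sketch: the HITL, HOTL, and HOOTL oversight postures expressed
# as a decision pipeline. Names and behaviors are illustrative assumptions.
from enum import Enum, auto

class Oversight(Enum):
    HITL = auto()   # human-in-the-loop: a human gates the action
    HOTL = auto()   # human-on-the-loop: a human reviews after the fact
    HOOTL = auto()  # human-out-of-the-loop: fully autonomous

def notify_reviewer(decision: str) -> None:
    print(f"[review queue] {decision}")

def act(decision: str, mode: Oversight, human_approves) -> str:
    if mode is Oversight.HITL:
        # The AI pauses at a checkpoint; a human approves or escalates.
        return decision if human_approves(decision) else "escalated to a human"
    if mode is Oversight.HOTL:
        # The action proceeds; a human is notified for later review,
        # which can be too late if harm has already occurred.
        notify_reviewer(decision)
        return decision
    # HOOTL: no provision for human involvement at all.
    return decision

# A human-in-the-loop reviewer who declines this particular decision:
print(act("deny loan application", Oversight.HITL, lambda d: False))
```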

I present to you some weighty AI and law issues to contemplate as spurred via the first major component of Resolution 604:

The qualms over fully autonomous AI, the kind that has humans entirely out of the loop (i.e., HOOTL), might be best exemplified by the emergence of autonomous weapons systems. As I’ve discussed at the link here, we are faced with perilous times, such as the advent of AI-controlled autonomous weapons that might be weapons of mass destruction. Do we want such weapons to be launched solely by AI computational decision-making? A worrisome prospect.

Not wanting to overly suggest that autonomous AI is inherently a bad idea, let’s also acknowledge that there are exceedingly positive and beneficial efforts toward devising and fielding autonomous vehicles such as AI-guided self-driving cars, see my coverage at the link here. The hope is that self-driving cars will profoundly reduce automobile-related injuries and deaths, plus potentially provide mobility to those who today are unable to readily attain it, along with other keen advantages.

Autonomous AI has the prospect of boosting society while regrettably also potentially undercutting humankind. We will all need to decide which uses are which.

Second Major Component Of Resolution 604

As a reminder, here is the second major component:

Let’s parse this second major component.

There are roughly two crucial elements contained therein.

First is the pivotal importance of noting that individuals and organizations ought to be held accountable for the AI that they devise or make use of. You might be tempted to think that this is an obvious declaration. It turns out this is not as universally understood as you might assume. For example, there is a position taken that perhaps the AI itself is the responsible party. Ongoing debates about the topic are heated and contentious. For my coverage on the coming societal and legal war over the meaning of legal personhood, see the link here.

Second, the statement indicates the prevailing concerns over AI that causes harm and notes that reasonable measures to avoid or mitigate harm should be taken into consideration when assessing the accountability and responsibility of the individuals and organizations involved in devising and fielding such AI. This is worth mentioning because some have sought to argue that no amount of safety precautions can overcome fault when an AI commits harm. The problem, though (some would argue), is that this stance would cut off nearly all AI that might currently be devised and deployed. You would essentially shut down advances in AI and close off AI innovations. The counterargument is that we need to strike a balance: permit AI to advance by providing breathing room when AI causes (some modicum of) harm, while still clamping down on those that fail to take measured and reasonable efforts to mitigate those harms.

I’ve covered the back-and-forth on the balancing act of harm versus mitigations at the link here.

I present to you some weighty AI and law issues to contemplate as spurred via the second major component of Resolution 604:

Third Major Component Of Resolution 604

You might recall that the third major component says this:

I’ve extensively covered the explosive growth in AI Ethics and the dozens upon dozens of AI Ethics frameworks (known as soft law), such as the notable AI Ethics precepts adopted by the United Nations via UNESCO (nearly 200 countries agreed to it), see the link here. I bring this up because the need for transparency and traceability concerning AI is a cornerstone in just about all bona fide AI Ethics proclamations.

Why are those characteristics so important?

The answer is that we have seen and will continue to witness AI apps that are completely closed and inscrutable. There is no indication of what is going on inside the AI. You might merely be told by the AI maker or the AI adopters that it is fine and dandy, completely safe and secure. You might be gleefully told that AI is super-duper and based on the latest in AI techniques and technologies.

What you won’t be told is the reality of what is happening within the AI. There is essentially no transparency. Furthermore, if you want to find out what the AI did in a particular situation, there isn’t even any traceability provided. It did what it did, so move along. Stop asking pesky questions.

Those questions are going to be asked when AI causes some form of harm. The typical reply is a shrug of the shoulders and an attempt to dodge the lack of transparency and the lack of traceability. We must not let this techie kind of skullduggery prevail. As stated in this major component of Resolution 604, transparency and traceability of AI have to be keenly undertaken. No “oops, we forgot to do that” should be permitted.
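What would even minimal traceability look like in practice? Here is a sketch of an append-only audit log that records the model version, inputs, and output of each AI-rendered decision, the sort of record an auditor or litigator would later need. The field names and logging scheme are my own illustrative assumptions; real systems would also have to address integrity, privacy, and retention.

```python
# Minimal sketch: traceability via an append-only audit log recording the
# model version, inputs, and output of each decision. Field names and the
# scheme are illustrative assumptions; real systems would also address
# integrity, privacy, and retention requirements.
import datetime
import hashlib
import json

def log_decision(model_version: str, inputs: dict, output: str,
                 path: str = "audit.log") -> str:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    payload = json.dumps(record, sort_keys=True)
    # A content hash lets an auditor detect after-the-fact tampering.
    record_id = hashlib.sha256(payload.encode()).hexdigest()[:16]
    with open(path, "a") as f:
        f.write(json.dumps({"id": record_id, **record}) + "\n")
    return record_id

rid = log_decision("screening-model-1.4.2",
                   {"applicant_id": "A-1001"},
                   "advance to interview")
print(f"logged decision {rid}")
```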

I present to you some weighty AI and law issues to contemplate as spurred via the third major component of Resolution 604:

Post-Commentary Of Resolution 604

Finally, here’s again the post-commentary:

This last piece of Resolution 604 is straightforward.

At any level of our government, these guidelines about AI are fruitful and should be mindfully considered. Period, end of story.

I suppose you might be tempted to believe that certainly this must be the case, namely that at all levels and across the board, our government is well-versed in and fully cognizant of these aspects of AI. Such a belief is nice and heartwarming, but regrettably not the case. The reality is that few know about these crucial AI precepts, or if they do know of them, the precepts seem to go out the window when they should instead be closely counted upon.

Society is at times demanding that our legislative bodies and regulators do something about AI. The push can be frenetic. This can readily lead to sloppy new AI regulations and laws. A new AI law can appear to be the best thing since sliced bread, but when the law gets contested in court, all kinds of blemishes and legally invalid conditions can render it ineffective.

AI laws and AI regulations are popping up at the federal, state, and local levels. I’ve covered many of them, such as Congressional efforts and federal agency efforts like those of the FTC and the EEOC, at the link here, the link here, and the link here. Guidelines such as Resolution 604 are vital since they provide food for thought to those composing and enacting AI-related laws. One hopes this will garner sensible and suitable AI laws and regulations and avert ones that are replete with gaffes.

I present to you some weighty AI and law issues to contemplate as spurred via the post-commentary component of Resolution 604:

Conclusion

Dwight D. Eisenhower, our 34th President of the United States, stated this about the law: “The clearest way to show what the rule of law means to us in everyday life is to recall what has happened when there is no rule of law.”

Some are worried that our existing laws are inadequate to contend with the emergence of AI in today’s world. They would argue that there is little to no rule of law when it comes to particular facets of modern AI and that this will worsen as AI is further advanced. This then logically and naturally leads to the belief that new laws regarding AI are needed. Not everyone agrees with this sentiment, and some are concerned that we will overshoot, crafting all manner of new AI laws that conflict with each other and even conflict with existing laws.

Let’s not rush pell-mell into composing new AI laws, nor should we avoid mindfully composing new AI laws where there is a societal and legal justification to do so. Those laws should be given not just a once-over before they are enacted; they need to be given a twice-over, a thrice-over, and so on. Laws are big things. They should not be trivially or willy-nilly put in place.

Montesquieu, legendary judge and philosopher, said this about the law: “There is no greater tyranny than that which is perpetrated under the shield of law and in the name of justice.”

One supposes we should be aiming for the Goldilocks of new AI laws; they should be not too cold and not too hot. Leveraging the ABA precepts such as Resolution 112, Resolution 700, and more recently Resolution 604, plus a plethora of other notable guidelines about AI and law, might be constructively leaned upon to get new AI laws into the proper and appropriate Goldilocks zone.

Let’s strive mightily for that aspirational goal.