(To clarify, this article is not about legislation written by AI, but rather legislation written about AI)
AI. We all know about AI at this point, but not all of us know how AI is regulated, or even how governments are approaching the regulation of such a new technology. Precisely because it is so new, AI-specific legislation isn’t as widespread as one might think, but there is still more than enough in the US Congress to get a good idea of how the government is tackling it. To that end, I’ll be looking at some of the more recent proposed pieces of legislation (at time of writing: April 8th, 2024) and what they would imply for AI were they to pass. (And if you want to take a look yourself, simply look up “AI Legislation Tracker”.)
H.R. 6936 (Federal Artificial Intelligence Risk Management Act of 2024)
Proposed on the 10th of January, 2024, H.R. 6936 (the Federal Artificial Intelligence Risk Management Act of 2024) simply requires that any federal agency utilizing AI also use the Artificial Intelligence Risk Management Framework (AI RMF). It is a very short bill, definitions aside. The AI RMF, released by NIST on the 26th of January, 2023, is a framework governing the use of AI systems that can “generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” But what exactly does the framework do? As the name suggests, it is a way of approaching AI that ensures society gets access to the benefits of AI while being shielded from its possible harms. The framework itself frames this as an organization’s “social responsibility”: its responsibility “for the impacts of its decisions and activities on society and the environment through transparent and ethical behavior”. Interestingly, NIST also defines AI systems as inherently socio-technical, meaning they are always influenced by the complex interplay of social and human behavior. By extension, this means AI has many vectors through which bias (and thus harm) can be introduced. This is a common thread running through AI legislation: safeguarding against the possible misuse and harms of AI. An example might be a company using AI to filter job applications, where the AI (due to improper risk management) replicates the biases within the hiring practices it learned from, thus disproportionately filtering out applications from marginalized groups.
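To make that failure mode concrete, here is a minimal, entirely hypothetical sketch in Python. The data, feature names, and “model” are all invented for illustration; the point is only that a screening tool trained on biased historical decisions will happily reproduce them.

```python
# Hypothetical sketch: a naive screening "model" trained on biased
# historical hiring decisions reproduces that bias. All data is made up.
from collections import defaultdict

# Historical decisions: (school attended, was hired). Suppose school_b
# correlates with a marginalized group that was historically rejected.
history = [("school_a", True)] * 80 + [("school_a", False)] * 20 \
        + [("school_b", True)] * 20 + [("school_b", False)] * 80

# "Training": estimate the historical hire rate per feature value.
counts = defaultdict(lambda: [0, 0])  # school -> [times hired, total seen]
for school, hired in history:
    counts[school][0] += hired
    counts[school][1] += 1

def screen(school: str, threshold: float = 0.5) -> bool:
    """Pass the applicant on to a human reviewer?"""
    hired, total = counts[school]
    return (hired / total) >= threshold

print(screen("school_a"))  # True  -> advances (80% historical hire rate)
print(screen("school_b"))  # False -> auto-rejected, bias replicated
```

Nothing in the code is malicious; the bias arrives through the training data, which is exactly the kind of risk the AI RMF asks agencies to identify and manage.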
This is complemented by bills like S. 3478 (the Eliminating Bias in Algorithmic Systems Act of 2023), which “require[s] agencies that use, fund, or oversee algorithms to have an office of civil rights focused on bias, discrimination, and other harms of algorithms, and for other purposes.” Since “agencies” here refers to government agencies, the bill also authorizes funding for the creation of said offices.
S. 3686 (The Preventing Algorithmic Collusion Act of 2024)
AI can be used in a much bigger context, however. Take, for instance, market collusion. Collusion is when companies coordinate with each other to gain a competitive advantage by engaging in things like price fixing. This is illegal, so companies colluding through traditional means would be subject to criminal prosecution. But what if AI were used instead of human actors? By feeding an AI “nonpublic competitor data” and asking it what price to set the company’s products at, it might be possible to engage in anticompetitive practices through the AI’s outputs. S. 3686 (the Preventing Algorithmic Collusion Act of 2024), introduced on the 30th of January, 2024, safeguards against this by explicitly outlawing the practice (which, to be clear, doesn’t outlaw pricing algorithms as such, but rather pricing algorithms that use nonpublic competitor data), and also by creating a mechanism that lets the government demand a written report on any pricing algorithm in use. This gives the government the ability to audit pricing AIs and make sure companies are in compliance with antitrust laws.
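The distinction the bill draws can be sketched in a few lines of Python. This is a hypothetical illustration, not the bill’s actual legal test: the input names, numbers, and pricing rule are all invented, and the real bill’s definitions are far more detailed.

```python
# Hypothetical sketch of the line S. 3686 draws: a pricing algorithm
# itself is fine; one fed nonpublic competitor data is not.
# All names and numbers here are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarketInputs:
    our_cost: float
    public_market_price: float              # e.g., publicly listed prices: fine
    competitor_private_price: Optional[float] = None  # shared privately: not fine

def set_price(inputs: MarketInputs) -> float:
    if inputs.competitor_private_price is not None:
        # Pricing off nonpublic competitor data is what the bill targets,
        # whether a human or an AI does the arithmetic.
        raise ValueError("pricing on nonpublic competitor data is prohibited")
    # Lawful: price off our own costs and public market signals.
    return max(inputs.our_cost * 1.2, inputs.public_market_price * 0.99)

print(set_price(MarketInputs(our_cost=10.0, public_market_price=15.0)))  # 14.85
```

The takeaway is that the algorithm itself isn’t the problem; the prohibited ingredient is the nonpublic competitor data flowing into it.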
H.R. 5077 (Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023, or the CREATE AI Act)
The final bill I’ll be looking at in detail is H.R. 5077, proposed on the 28th of July, 2023 (also called the Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023, or the CREATE AI Act). Despite what the name might suggest, this is not actually for everyone to experiment with. It’s geared instead towards US researchers who might need tools like AI, with the aim being, according to the NSF, the agency leading the effort, to create a “shared national research infrastructure for responsible discovery and innovation in AI.” An executive order has already authorized a pilot of this program (NAIRR, the National Artificial Intelligence Research Resource), which will run for two years starting January 24, 2024. The pilot focuses on a variety of things, with “initial priority topics includ[ing] safe, secure and trustworthy AI; human health; and environment and infrastructure. A broader array of priority areas will be supported as the pilot progresses”. Additionally, despite being only a pilot program, it is already a huge endeavor, involving 10 other government agencies (DARPA, DoD, DoE, NIH, etc.) and 15 non-federal partners (AMD, Google, IBM, etc.) helping make the resource a reality.
Fun fact: bills can have fun names, such as H.R. 7120, which is called the “R U REAL Act”. (It basically requires telemarketers to disclose whether AI was used or not.)