
Attorney General Raúl Torrez and Rep. Linda Serrato during Thursday’s press conference. LAR screenshot
BY MAIRE O’NEILL
maire@losalamosreporter.com
Attorney General Raúl Torrez and Rep. Linda Serrato announced in a Thursday press conference the aggressive legislation they are proposing, the Artificial Intelligence Accountability Act (AI2A), aimed at protecting New Mexicans from deceptive synthetic media generated using artificial intelligence. Torrez began by discussing the recent charges against a man for manufacturing child sexual abuse material using artificial intelligence. See news release at https://losalamosreporter.com/2026/01/14/new-mexico-department-of-justice-arrests-man-manufacturing-child-sexual-abuse-material-using-artificial-intelligence/
Torrez said what made that case unique is that it is the first instance, as far as his office is aware, in which someone actually used artificial intelligence to “generate images of sexual exploitation by using publicly available images of children and then modifying those images in a way that is truly horrific and will create lasting horror.”
“In a broader sense I believe it is a significant turning point and should hopefully serve as a wake-up call for all of us – for policy makers, for community leaders, for parents and educators about the way in which artificial intelligence is rapidly becoming a part of everyday life. It is being, as we know, used by businesses, by citizens, we utilize it to a limited extent in this building, but like all technology it is something that can be misused and abused,” he said.
One of the most damaging aspects of artificial intelligence today, Torrez said, is the ease with which people can “create malicious, deep fake content – audio files, video images that purport to show some conduct, some speech – something that is then produced and disseminated on social media platforms and can cause profound economic harm, shame, disruption, invasion of privacy, reputational harm in the community.”
“For most people, this is still theoretical, but what I am concerned about is the ease with which this technology is being used and adapted and utilized first by predators, but increasingly by other individuals who are motivated by different interests, that will use this to harm other individuals and to take advantage of people,” Torrez said. “Because of that rising threat to members of our community and to our society, I think it’s important for New Mexico to take a leading role in trying to develop a framework that sets clear guidelines and boundaries for the ethical development of this technology and the ethical use of this technology and to create a framework that is fundamentally grounded in accountability.”
He added that he thinks people have a basic sense of what it means to be accountable for their actions and their choices, but that one of the difficult things about AI technology is the way it “allows anonymous people to create real harm to others.”
“Because of that we are proud to announce our proposal, which will be the first AI Accountability Act in the state of New Mexico, and it provides certain core elements that we think are essential to protecting our people. First, it creates clear technical standards for AI developers, for large social media platforms and for device manufacturers to ensure they actually include digital markers in the content that is produced and the content that is published, so that anyone that is potentially harmed by that content can work with those providers to understand where the content was generated and who disseminated it,” Torrez said. “From that information, we will be able to determine who is ultimately responsible for the reputational harm that comes from the replication and production of those materials.”
He noted that there are two specific features on the civil enforcement side that he thinks are absolutely necessary for enforcement of AI2A and its provisions.
“The first is to give this office authority to investigate any allegations that operators in tech space (those are the larger social media applications and artificial intelligence applications) to ensure that they do in fact comply with the technical requirements of making their products available in the state. That will be my responsibility and this office’s responsibility and we will be available to consumers who upon learning that they have been the victim of this type of event, if they reach out to someone in the technology space and we learn that those companies have failed to include these technical requirements, we will be empowered to pursue civil remedies,” Torrez said.
He said the legislation at the same time creates a private right of action for individuals who may have been harmed by the unlawful production and dissemination of those materials, allowing recovery of the greater of actual damages or $1,000 per view or per event in which the content is interacted with.
“That’s a steep and heavy price to pay but I think it is in line with the type of harm and the necessary deterrent, making sure that this activity is not in any way allowed or that there is some signal sent by the justice system and by our legal system that this is conduct that won’t be tolerated,” Torrez said.
He said the proposed legislation also includes a potential additional one-year penalty whenever generative AI is used in furtherance of an underlying felony crime.
“For example, the case that we just filed, we could enhance the penalty for that crime because of the use of generative AI. We think this is appropriate because of the speed and potential harm of sharing information and images related to victims who have no connection to this activity. We also, however, are concerned about limiting criminal liability just to the exploitation context. We know that generative AI will be used to commit fraud, to commit theft, to commit extortion, will be a part of aggravated stalking – a whole range of felony crimes that are already on the books,” Torrez said. “Under our criminal code, we don’t need to transform all of those. I know that’s a very difficult thing to do in the legislature, and so our approach would enhance the sentence for committing those felonies if generative AI is used in connection with that felony. And if there’s a question about whether or not this is something that the criminal justice system is unaccustomed to, I can assure you it isn’t. It’s exactly how we handle firearm enhancements, for example, in this state. Anytime you use a firearm in the commission or furtherance of an underlying felony crime, you can enhance the felony. So we are treating this technology and its misuse in much the same way that we treat firearms – it would be the same type of framework that we would use to address it.”
Torrez said at the root and core of the proposed legislation is an attempt to create a standard legal framework that will allow for the responsible use of AI technology.
“I think Rep. Serrato and I have an understanding and appreciation that there are profound benefits that can be had from artificial intelligence, including generative intelligence. But it is the malicious misuse and the criminal misuse of that technology that New Mexico cannot stand by and watch,” he said.
Torrez expressed concern about recent developments at the national level, both at the White House and in Congress, “among people who are doing everything they can to remove obstacles in the AI race, which is currently underway in some of America’s biggest tech companies and in the AI race that is underway between the United States and China and global competitors”.
“What I will say is that I think most Americans – Democrats, Republicans, Independents – no matter where you are, where you live, I think there is an understandable anxiety about giving that much unchecked power to big tech to police itself. As many of you are aware, this office has litigation that we are preparing to take to trial in the early part of February against one of the largest social media platforms on the planet, and that is a direct result of what a hands-off approach to regulating technology can lead to,” Torrez said. “It can lead to abuse, it can lead to corporate behavior and business practices that place profit and engagement over privacy and over the safety interests and concerns of everyday citizens. That’s a mistake that we can ill afford to make again in this country and one that Rep. Serrato and I are committed to learning from.”
In response to a question concerning Rep. Christine Chandler’s bill on deep fakes and whether that bill is in tandem with AI2A, Torrez said he and Rep. Serrato had spoken with Rep. Chandler earlier in the week.
“At this point we don’t see any conflict in the proposals… I understand she has a separate bill that targets algorithm discrimination. Ours doesn’t touch on that – it’s mostly about malicious deep fakes and generative AI. The one maybe slight misalignment is that we are focused on the use of generative AI in furtherance of a felony, so we would be able to enhance that felony. I understand that the criminal felony proposed under her deep fake criminal statute is related to the revenge porn statute, which currently sets a misdemeanor penalty, so we are not able to enhance misdemeanor penalties that way,” Torrez said.
He said the one suggestion he would offer for members of the body to consider is whether or not it makes sense to elevate the penalty for revenge porn from a misdemeanor to a felony, which would put both of the bills in alignment and allow the underlying penalty to be enhanced by a year for the use of generative AI.
“At this point there is no direct conflict as far as we can see. One of the primary differences, however, is that this bill has clear standards for tech companies themselves. One of my concerns with just changing or including deep fakes in a misdemeanor penalty is that if you don’t require tech companies to embed digital signatures, you’re not going to be able to follow the breadcrumbs back through the platform to the application and then ultimately figure out who’s responsible. In other words we’re going to have a hard time enforcing that from a criminal standpoint. You’re going to limit your ability to actually enforce it and I don’t want to create an expectation where there is a crime that exists on the books but our investigators and frontline special agents are just at a loss of trying to prove it,” Torrez said.
Asked about HB 22, Rep. Chandler told the Los Alamos Reporter on Saturday that her deep fake bill goes beyond what Torrez described in his comments.
“In addition to criminal provisions attaching to the distribution and threat to distribute deep fakes, the bill creates a private cause of action that would permit individuals who are harmed by this egregious conduct to stop the activity and recover damages including punitive damages. This private right of action is a key component to making the victim safe and whole,” Chandler said.
She said the second bill (HB 28) that she is carrying is a consumer protection, transparency bill that ensures that New Mexicans know when AI tools are being used to make consequential decisions such as those impacting employment opportunities, healthcare, and access to housing.
“Much like credit reporting, the bill requires that the deployer of the tool explain the basis for a decision when asked and correct any inaccurate information that was relied on to make the decision. The person who is negatively impacted may compel review by a human. The bill also adds safeguards for New Mexicans who interact with chat bots, an activity that can present great risks to vulnerable users and young people,” Chandler said. “Algorithmic discrimination is an issue that needs to be addressed as we know it underlies many AI tools. However that will be the subject for future legislation.”
Courtesy of the Department of Justice
Key Problems Addressed
- Rapid growth of malicious “deepfakes” and related AI-generated content – images, audio, and video – that can lead to substantial economic and reputational harm.
- Lack of clear labeling or verification tools for AI-generated content.
- Insufficient remedies for victims of deepfake harassment, defamation, extortion, or exploitation.
- Absence of state-level accountability standards for large AI providers and platforms.
How This Changes New Mexico Law
- Introduces the state’s first AI- and deepfake-specific statute.
- Fills gaps not covered by existing consumer protection, defamation, or harassment laws.
- Modernizes enforcement without regulating speech or banning AI technologies.
- Aligns New Mexico with emerging national/international AI accountability standards.
Core Provisions
- Sets technical standards for tracking authenticity
- Establishes guidelines for consumers to request content be taken down
- Requires verification tools
- Preserves authenticity at the source
- Sets reasonable platform responsibilities
- Civil enforcement and penalties
- Victim remedies
- Criminal sentencing enhancement
What the AI2A Does NOT Do
- Does not ban AI
- Does not regulate small developers or hobbyists
- Does not restrict satire, parody, journalism, education, or research
- Does not require disclosure of personal data
- Does not conflict with federal platform immunity law
