Thinking Machines, Thinking Laws [HTC #70]
Welcome to Hold the Code #70!
This issue explores recent American legislation on AI as well as the use of AI in governmental agencies like DOGE.
As always, please don’t hesitate to reach out with your thoughts and ideas about Hold the Code. Thank you for your support, and happy reading!
Move Fast, Break Democracy? The Stakes of Musk’s AI-Powered Government Overhaul
Written By: Melinda Chang

Elon Musk stepped down on May 30 from his role heading the Department of Government Efficiency (DOGE), one of several new Trump administration initiatives aiming to reshape American bureaucracy. DOGE's stated purpose is to modernize federal technology and redirect taxpayer money to more efficient ends. Trump has so far given Musk and DOGE free rein, but whether they have accomplished any of those objectives is contested. Most notably, Musk's dismantling of USAID has dealt a debilitating blow to medical infrastructure across Africa and sparked considerable backlash from American and international voices alike.
Musk's activities at the White House, along with those of other Silicon Valley technocrats like Peter Thiel, have foregrounded several increasingly urgent questions about the use of AI in assisting, regulating, and surveilling government agencies. In January, federal workers received a mass email from the Office of Personnel Management asking them to justify their work for the week or risk losing their jobs; various sources have attested that DOGE used a large language model (LLM) to analyze and evaluate their responses. According to WIRED, insiders allege that the model in question was Meta's Llama 2. Even the most ubiquitous LLMs, such as ChatGPT, are notoriously unreliable technologies, so it may be dismaying to learn that DOGE left the fate of millions of federal jobs in the hands of one. It is unknown what safeguards, if any, DOGE put in place to mitigate these substantial risks. Musk's professed commitment to running the department with the "move fast and break things" bravado of the tech startups he is accustomed to has done little to quell criticism.
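To make the concern concrete, here is a minimal sketch of how free-text employee responses might be triaged with an open-weights chat model, using Hugging Face's transformers library. The model name, prompt, and labels are our own assumptions for illustration; nothing here reflects DOGE's actual pipeline, which remains undisclosed.

```python
# Illustrative sketch only: triaging free-text replies with an open-weights
# chat model. Model, prompt, and labels are hypothetical; this does NOT
# reflect DOGE's actual (undisclosed) setup.
from transformers import pipeline

# Llama 2 chat weights require a license agreement on Hugging Face; any
# locally available instruction-tuned model would work the same way.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

PROMPT = (
    "Classify the following employee status update as JUSTIFIED or UNCLEAR.\n"
    "Update: {update}\n"
    "Answer with a single word:"
)

def triage(update: str) -> str:
    prompt = PROMPT.format(update=update)
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    # The pipeline echoes the prompt, so keep only the generated tail.
    answer = out[0]["generated_text"][len(prompt):].strip().upper()
    return "JUSTIFIED" if answer.startswith("JUSTIFIED") else "UNCLEAR"

print(triage("Closed 14 support tickets and shipped the payroll fix."))
```

Even this toy version makes the fragility obvious: the verdict hinges on prompt wording, decoding settings, and the model's training data, none of which an affected employee can inspect or appeal.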
Musk has also shown a willingness, both inside and outside of DOGE, to encourage the seemingly unauthorized use of Grok, his company xAI's flagship LLM, available in chatbot form on X (formerly Twitter). This is cause for alarm: these chatbots record and parse user input to feed back into the underlying model. We cannot be sure what classified information federal workers may relay to them, nor can we know what happens to that information as it moves through the LLM pipeline: who has access to it, where and how it is stored, or how it will be used. Companies have a vested interest in keeping their one truly profitable asset, data, private, so transparency in this area will be hard to come by.
Thanks in part to the rapid and ambitious escalation of DOGE's reach, tech policy is evolving quickly as well. GOP legislators' controversial "Big, Beautiful Bill," which passed the House on May 22 by a single vote, includes a provision allocating $500 million through 2034 to migrate the government away from legacy tech systems and toward "artificial intelligence systems and automated decision systems." During that same ten-year period, states would be prohibited from regulating AI, whether by passing new legislation or enforcing existing laws. California governor Gavin Newsom signed several major AI regulations that went into effect this January, and other states aren't far behind; this mega-bill would undermine such protections. While we probably won't know the repercussions of this bill for some time, it's safe to expect a major and prolonged impact on governmental AI use as well as on the structure of responsible AI advocacy nationwide.
For now, the trajectory of DOGE and federal tech policy is difficult to foresee. For many of us, AI's incorporation into every aspect of public life has felt imminent ever since ChatGPT launched in 2022. We hope it will continue to receive proportionately heavy scrutiny in the domain of U.S. politics, where any and all of its effects can have far-reaching consequences for people at home and abroad. The real test of DOGE's legacy may not be its contested efficiency metrics, but whether our institutions can maintain rigorous oversight of the algorithms that influence our lives.
Sources
https://www.govinfo.gov/content/pkg/FR-2025-01-29/pdf/2025-02005.pdf
https://www.nytimes.com/2025/03/28/us/politics/usaid-trump-doge-cuts.html
https://apnews.com/article/usaid-federal-judge-trump-administration-bdc919a5d98eda5ab72a32fdfe2f147d
https://www.wired.com/story/doge-used-meta-ai-model-review-fork-emails-from-federal-workers/
https://www.youtube.com/watch?v=ey1rpNtRADg&t=203s
https://www.congress.gov/bill/119th-congress/house-bill/1/text
https://www.whitecase.com/insight-alert/california-kentucky-tracking-rise-state-ai-laws-2025
Exploring Recent Federal and State Legislation on AI
Written By: Ashley Wei
Is artificial intelligence safe? How do we want AI to impact our lives? Policy is a crucial way to answer these questions. In the United States, the federal government has taken a relatively hands-off approach to AI regulation, leaving states the flexibility to chart their own regulatory paths.
On May 22, 2025, the U.S. House of Representatives narrowly passed the "One, Big, Beautiful Bill" by a 215 to 214 vote. One provision of the bill would bar state-level enforcement of artificial intelligence regulations for 10 years. If the bill passes the Senate, AI oversight in the United States would be severely weakened for the next decade.
To date, the federal government has voiced concern about AI safety but has passed little concrete legislation. An executive order under President Biden outlined a general commitment to ensuring that future AI regulations consider safety, privacy, and equity. Other proposed laws target contested ethical issues such as political advertising, employee surveillance, and consumer privacy. The AI Research, Innovation, and Accountability Act, for instance, would establish testing and evaluation procedures for high-risk AI systems and require transparent reporting, increasing accountability and security.
As of 2025, 48 states have introduced AI legislation and 26 have enacted it. The Colorado AI Act, signed into law on May 17, 2024, is the first comprehensive state-level AI legislation; it goes into effect in 2026. The law focuses on high-risk AI systems that make "consequential decisions" and targets numerous key issues, including algorithmic discrimination, disclosure to consumers, and AI deployer responsibility.
State legislation tends to cluster around a few key issues. The Colorado AI Act addresses safety and security. Similar examples include California's Health Care Services: AI Act, which requires disclaimers on AI-generated patient communications, and the state's Defending Democracy from Deepfake Deception Act, which requires the labeling and removal of deepfakes to protect election security and the democratic process.
Additionally, some laws enforce equity in artificial intelligence. Because AI relies on data generated by people, any biases in that data can be transferred to AI outputs, potentially leading to discrimination against certain groups. The Colorado AI Act, mentioned previously, requires developers and deployers of artificial intelligence systems to exercise "reasonable care" to avoid discrimination, including through impact assessments and proactive disclosures.
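The statute does not prescribe a specific metric, but a common heuristic in discrimination audits is the "four-fifths rule," which flags potential adverse impact when one group's favorable-outcome rate falls below 80% of another's. The sketch below, using entirely made-up data, shows what one slice of an impact assessment might compute; it illustrates the concept and is not a compliance procedure under the Colorado AI Act.

```python
# Hypothetical impact-assessment check using the four-fifths (80%) rule.
# The decisions data is fabricated purely for illustration.
from collections import defaultdict

decisions = [  # (group, approved) pairs from an imagined AI screening tool
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(f"impact ratio = {ratio:.2f}")  # 0.33, well below the 0.8 threshold
```

Real impact assessments go far beyond a single ratio, covering data provenance, intended use, and mitigation plans, but even this tiny check shows how quantifiable the question of disparate outcomes can be.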
Privacy protections are also being enforced through legislation such as the Utah Artificial Intelligence Policy Act, which requires disclosure when consumers are communicating with generative AI.
As the United States navigates its regulatory path, maintaining a balance between innovation and oversight is critical.
Sources
https://builtin.com/articles/trump-big-beautiful-bill-ai-regulation
https://keymakr.com/blog/regional-and-international-ai-regulations-and-laws-in-2024/
https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation
https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states
https://leg.colorado.gov/bills/sb24-205