Opinion

How Can Congress Regulate AI? Erect Guardrails, Ensure Accountability And Address Monopolistic Power

Instead of licensing companies to release advanced AI technologies, the government could license auditors and push for companies to set up institutional review boards.
By The Conversation · 2023-05-31 · 6 min read
ChatGPT. Photo by Shutterstock

by Anjana Susarla, Michigan State University

Takeaways:

  • A new federal agency to regulate AI sounds helpful but could become unduly influenced by the tech industry. Instead, Congress can legislate accountability.
  • Instead of licensing companies to release advanced AI technologies, the government could license auditors and push for companies to set up institutional review boards.
  • The government hasn’t had great success in curbing technology monopolies, but disclosure requirements and data privacy laws could help check corporate power.

OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.

Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly.

As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.

An agency to regulate AI?

Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than those from the use of AI in spam filters, for example.

The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.

Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.

Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.

Cognitive scientist and AI developer Gary Marcus explains the need to regulate AI.

Licensing auditors, not companies

Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.

Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices.

Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.

Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it is authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.

Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It’s also important to recognize that greater data accountability and transparency may impose new restrictions on organizations.

Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.

Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.

AI monopolies?

What was also missing in Altman’s testimony is the extent of investment required to train large-scale AI models, whether it is GPT-4, which is one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.

Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.

It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.

Proving technology firms’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI firms and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.
Learn what you need to know about artificial intelligence by signing up for our newsletter series of four emails delivered over the course of a week. You can read all our stories on generative AI at TheConversation.com.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
