Lankford Presses Industry Leaders about National Security and Privacy Implications of Artificial Intelligence

CLICK HERE to watch Lankford’s Q&A on YouTube.

CLICK HERE to watch Lankford’s Q&A on Rumble.

WASHINGTON, DC – Senator James Lankford (R-OK) participated in a Senate Homeland Security and Governmental Affairs Committee, Subcommittee on Emerging Threats and Spending Oversight hearing entitled, “Advanced Technology: Examining Threats to National Security,” in which Lankford asked about the ongoing privacy and security complications of using emerging technology like artificial intelligence (AI). Lankford’s questions focused on how to ensure the federal government is promoting methods to protect people and their privacy as we pursue new technology that may be vulnerable to data theft or corporate privacy risks.

Witnesses at the hearing included Gregory Allen, Director of the Wadhwani Center for AI and Advanced Technologies; Jeff Alstott, Ph.D., Senior Information Scientist at the RAND Corporation; and Dewey Murdick, Ph.D., Executive Director of the Center for Security and Emerging Technology.

Lankford recently introduced a bill to require transparency in the federal government’s use of AI. The bill would require federal agencies to notify individuals when they are interacting with, or subject to decisions made using, certain AI or other automated systems. This bill comes on the heels of Lankford and Senate Finance Committee members sounding the alarm on the potential use of AI, such as ChatGPT, to generate persuasive, tailored scams intended to rob Americans by gaining access to their personal financial information.


Lankford: …Once we hand data over for machine-learning, we’ve handed a lot of data over, who owns that data, where does that data go, how could that data actually be used? Could that then be resold for all that data to be able to go somewhere else, and if there’s a problem with it at the end that ends up being a national security issue, liability issues then start to be able to fall into it. Do you see what I’m talking about? This gets into the weeds of actual practical applications of how AI is used and how the interchange is happening right now on national security.