ALGORITHMIC WARFARE: Industry, Government Must Share Responsible AI Burden
A question commonly put to Defense Department officials is how to ensure artificial intelligence is “responsible” enough to deploy on the battlefield. Companies developing AI systems for the military need to ask themselves this question as well.
The private sector is developing important, innovative technologies at such a rate that “industry and government are now — if not co-equal — near-peer partners in national security,” said Klon Kitchen, a senior fellow at the American Enterprise Institute. The government is realizing and reconciling itself to the fact that it is no longer the only national security stakeholder.
“Industry is … now a center of gravity for foreign policy and national security,” he said.
While companies may not want to have to worry about the national security implications of their products, “I’m sorry … you’ve worked very hard to build and gain this level of influence, now you have to deal and reconcile yourself with the responsibilities that come with that influence,” Kitchen said during a recent panel discussion hosted by the National Security Law Journal and National Security Institute at George Mason University.
“And so what’s happening right now is the renegotiation of a social contract between government and industry and what is now a shared burden in a way that has previously never been shared before,” with AI the “quintessential example” of that shift, he said.
On Oct. 30, President Joe Biden signed an executive order on safe, secure and trustworthy AI. Kiersten Todt, former chief of staff at the Cybersecurity and Infrastructure Security Agency, said a key element of the executive order “is looking at bias and how do we mitigate the bias for artificial intelligence? How do we ensure that the technology that we’re building, when it exponentially takes something that we’re looking to be able to grow and differentiate, is not doing that in a negative way? How are we ensuring that the data makes sense?”
Data provenance is critical to AI, “and it’s very hard to reverse engineer data provenance,” Todt said during the panel. “So, what can we do now with technology to ensure that there are data standards … so that you understand what’s going into it, but importantly, it’s building that security into it.”
Earlier in October, the FBI hosted a summit with leaders of the Five Eyes intelligence alliance, along with business leaders, entrepreneurs, government officials and academics “to discuss threats to innovation, coming trends in the use and potential exploitation of emerging tech and means to work together to advance both economic security and public safety,” an FBI release stated. The Five Eyes group comprises the United States, Canada, the United Kingdom, Australia and New Zealand.
The event was “unprecedented,” Todt said. “You had the Five Eyes coming together to talk to CEOs of technology companies in Silicon Valley … to say, ‘Your technology, what you’re building, you may think this is just a revenue generator … but you are now part of our national security infrastructure. So, you have a responsibility and an obligation to look at what the impact of your technology is. How are your choices being made, so that when we’re building out our infrastructure from a United States perspective, we’re doing it with security, critical infrastructure, resiliency all in mind?’”
With private industry, rather than the Defense Department, now leading research and development of critical technologies like AI, “we should just accept the fact that the department is always going to be behind … so it’s really always going to be how quickly we can adopt what we’re behind in,” said Whitney McNamara, vice president of Beacon Global Strategies.
“We simply don’t have the national budget to spend trillions of dollars to be the leader in R&D in every area, so we should be thinking about what makes sense for us to spend money” on for internal government research, “and what makes sense to be procuring from the commercial sector,” McNamara said during the panel.
Building the ecosystem to properly adopt technology like AI is something the government struggles with, she said. Whereas the Defense Department might just say, “I want to put AI on my drone and go do something,” it must also ask questions like, “Where’s the data coming from? What’s the process? Who secures the data? Do we need to manage data? Is it open-source data? How does my algorithm interact with my data? How do I test and evaluate it, especially if I’m expecting my AI to act in slightly unpredictable ways … it’s like this really kind of nebulous process,” she said.
“I think there’s a lack of an appreciation sometimes” for what it will take to adopt AI, she said. “When we say AI, we’re really just talking about processing power and data and test and evaluation architecture, which isn’t always the sexiest topic.”
Getting the right AI systems and architectures in place will take a team effort between government and industry, Todt said.
“This is a great opportunity where the U.S. government has really the power of the purse when it comes to procurement, to incentivize behaviors that it may not be able to regulate,” she said. “We talk a lot about the incentives for industry to do this security and safety, and where regulation will come we’ll have to see, but if the U.S. government starts to say, ‘We’re going to see that safety and security in our research as a differentiator, and that will allow us to work with you more easily,’ then we start to see a market and a culture evolve where that’s prioritized.” ND