AI Still in Experimentation Phase for Training, Simulations
Photo-illustration, Defense Dept., iStock images
Were the movie The Graduate to be remade today, the career advice would not be “plastics.” Instead, it would be: “There’s a great future in artificial intelligence.”
Like just about every other organization today, the Defense Department is — cautiously — evaluating where AI can support the mission. Department officials and members of industry and academia agree that one area where AI needs to play a bigger role is training and simulations.
Andy Van Schaack, associate professor of the practice of engineering at Vanderbilt University, said he has revised all his courses to require students to use AI because when they graduate, they will be required to use those tools.
“And that’s the same recommendation that I make to the U.S. Navy and the projects that I’m working with them — that we should use AI in every facet of the development of training solutions,” he said during an August web discussion on generative AI and training hosted by the National Training and Simulation Association, an affiliate of the National Defense Industrial Association.
That includes instructional design, instructional delivery, project management, all the way up through management of the company, he said.
“Bottom line, my recommendations to you are to focus less on computer science as you evaluate AI, and more on cognitive psychology,” he said.
Panelist Svitlana Volkova, chief computational scientist in Aptima Inc.’s office of science and technology, said generative AI could completely transform the way people learn, acquire, interact with, transfer and share knowledge.
“So, first would be personalized, intelligent tutors,” she said. “I definitely can predict the transformation in adaptive training, personalized learning and team training experiences — generative AI will allow us to go through multimodal data and generate new content very rapidly — as well as adversarial simulations and human performance augmentation.”
There could also be AI assistance for instructors, she said. “I imagine intelligent assistants that augment the instructors to do this multimodal content generation at a scale that was previously unimagined.”
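The adaptive-training idea the panelists describe can be sketched, at its simplest, as a loop in which difficulty tracks trainee performance. The function name and thresholds below are illustrative assumptions, not any Aptima or Navy implementation:

```python
# Minimal sketch of an adaptive-training loop: difficulty rises after
# sustained success and falls after struggle. Thresholds are invented.

def next_difficulty(current: int, recent_scores: list[float]) -> int:
    """Return the next difficulty level (1-10) from recent scores (0-1)."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.8:                    # trainee is mastering this level
        return min(current + 1, 10)
    if avg < 0.5:                     # trainee is struggling
        return max(current - 1, 1)
    return current                    # performance is in the productive range

# A trainee averaging 90% on recent exercises moves up one level.
print(next_difficulty(4, [0.9, 0.85, 0.95]))  # 5
```

A real intelligent tutor would replace the single average with a richer learner model, but the control loop — measure, compare to a mastery band, adjust — is the same.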
It goes beyond text and large language models, she said. “We’re talking about other pre-trained models, simulations — models that generate videos, images and other types of modalities.”
Generative AI can also help with performance analysis and predictive analytics and even course planning and curriculum updates, she added.
AI can also help leadership. “AI reasoning agents could help with training exercises and large-scale analysis, progress report generation, real-time insight generation about the trainer and trainee performance,” she added. AI could provide continuous situational awareness about training progress, gaps and readiness levels.
“If this type of intelligence can be proactive, that is going to transform the way that we learn and train,” she said.
However, for all the promise of how AI can transform training and simulations, the technology is still more academic than practical, she said, adding that it is not perfect. “I really want to make sure that we all track that these models hallucinate, that they are biased and there is a lot that we have to do in terms of research to make them better.”
AI and machine learning models are specialized and brittle, she said.
“We moved from this era of narrow AI to more generalizable models that can be rapidly adapted to many tasks and domains, but they’re still very specialized and brittle,” she said.
“And they make stupid mistakes,” she added. “If you go to ChatGPT, and you ask it to generate your bio, it gets many things right — because it learns from vast amounts of data on the internet … but it also makes mistakes.”
Lastly, she noted that AI models lack common sense. They cannot reason and plan and predict consequences of actions the way humans can.
Volkova said there are significant research needs for training and learning. AI needs to move from text to the simulated world of images and videos. “We don’t have many models that are truly multimodal yet. The data fusion, the ability to incorporate signals from multiple sensors during training from the environment, is also very important, and it hasn’t been done yet. And it’s a very complex task.”
And researchers need to develop generative agents that can reason and plan. Plus, there needs to be more learning about human and AI interactions, she added.
Keith Brawner, senior researcher and program manager in Army Futures Command’s Simulation and Training Technology Center, noted that learning is ongoing, with generative AI in use across all research-and-development efforts.
“The fresh crop of research coming up has items [on] using generative AI to do things in simulations like pre-render items so as to reduce network bandwidth, or generating content in scenarios,” he said. “We have this library of scenarios, and we can generate the appropriate thing for people to interact with, or generation of simulated students to test out system interactions.”
Army Futures Command is running simulated students through interactions with systems and has automatic generation of after-action review from synthetic training data, he said.
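The “simulated students” Brawner mentions can be sketched as simple agents with a skill parameter that are run through course items to exercise a training system before real trainees do. The class and names below are a hypothetical illustration, not Army Futures Command code:

```python
import random

# Sketch of a simulated student used to stress-test a training system.
# The skill model is deliberately crude: one probability of answering
# any item correctly. Names and parameters are invented for illustration.

class SimulatedStudent:
    def __init__(self, skill: float, seed: int = 0):
        self.skill = skill              # P(correct) on any item
        self.rng = random.Random(seed)  # seeded for repeatable runs

    def attempt(self, item: str) -> bool:
        """Return True if the student answers this item correctly."""
        return self.rng.random() < self.skill

def run_course(student: SimulatedStudent, items: list[str]) -> float:
    """Return the fraction of course items the student got right."""
    correct = sum(student.attempt(item) for item in items)
    return correct / len(items)

weak = SimulatedStudent(skill=0.3, seed=42)
strong = SimulatedStudent(skill=0.9, seed=42)
items = [f"item-{i}" for i in range(100)]
print(run_course(weak, items), run_course(strong, items))
```

Running populations of such agents at different skill levels lets developers check that a course discriminates between weak and strong performers before any live trainee touches it.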
“Some of our synthetic training data was Wikipedia battle narratives, and then some of that synthetic training data was simulation-generated items with AI [opposition forces] playing against each other,” he said.
“That was used somewhat recently in order to do a proof-of-concept demo out to the National Training Center for, given this live exercise, how would you summarize it, and how would you summarize the key points?” he said.
“So, somewhere in there, the AI-based individual and group models and behaviors have significant amounts of generative AI content within them, and then those are used over on the simulation side,” and they are also used in “conjunction with Army University in order to develop content and teach it out and scale content and predict which things people will be doing [well] at and [poorly] at in the future.”
Jonathan Elliott, director of AI assessment and assurance in the Algorithmic Warfare Directorate of the Defense Department’s Chief Digital and Artificial Intelligence Office, said in an interview the deployment of AI in training and simulations is still in an exploration and research experimentation stage.
“There are research activities looking at using it as the blue, red or gray force in battle simulations to reduce manpower required to conduct simulated training events,” he said. AI is “also being used in wargaming to simulate multiple scenarios and generate novel solutions that are not intuitive to us as the human user or acting as advisors to humans.”
The department and services are “doing a lot of ongoing experimentation and research about how to integrate AI into training and what capabilities are required for a particular AI [to integrate] into a particular training event.”
The objectives of a training exercise will drive the type of AI that could or should be added, and then it’s a matter of sorting out the data needed.
“Data is always the critical underpinning of using AI and understanding, not as much is there the data, but what subset of that data is important to bring to the AI and feed the AI, and label or curate so that it can then perform as we intend it to?” Elliott said.
Typically, training activities are already designed or set up, and then AI is added to the scenarios.
“So, we already have an understanding of what data a person being trained interacts with,” Elliott said. “And so that gives us at least a loose understanding of the types of data that a human user in a war game or something needs to then move forward with the war game.
“So that helps bound the problem some, but it is still an incredibly difficult area to measure and model and understand the important data for an AI — especially in DoD scenarios,” he continued.
The complexity of Defense Department training events makes it difficult to determine the most important data to be measured and captured, he said. “And how do you measure such things as topography, unit readiness, weather’s effects on terrain, gear and movement speed — those are all things that are both measurable in some sense, like we have topographic maps, but also very tough to model or measure and then feed into an AI system so it can take them into account in fine levels of detail.”
Currently, those kinds of variables are built into training scenarios through abstractions and assumptions, he added.
“And so, it’s not something fundamentally new when you’re trying to incorporate AI into a training event; we already deal with these complexities of measuring and modeling the highly complex environment we operate in,” he said.
“It’s then trying to bring in the data scientists and narrow down, given our learning objective — just like how we derive our simulation and the resolution required of our simulation — what resolution is required of our AI system, and identify the key features in the data that we do need to feed [it],” he continued.
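The key-feature step Elliott describes — narrowing a large pool of measurable variables down to the ones that actually predict the training outcome — can be sketched with a simple correlation ranking. The feature names and numbers below are invented for illustration; real DoD data curation would involve far richer methods:

```python
# Sketch of narrowing candidate training variables to the "key features":
# rank each feature by the strength of its correlation with the exercise
# outcome and keep the top k. All data here is fabricated for the example.

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient; 0.0 if either series is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def top_features(features: dict[str, list[float]],
                 outcome: list[float], k: int) -> list[str]:
    """Return the k feature names most correlated (in magnitude) with outcome."""
    ranked = sorted(features,
                    key=lambda f: abs(pearson(features[f], outcome)),
                    reverse=True)
    return ranked[:k]

features = {
    "unit_readiness": [0.9, 0.7, 0.8, 0.4, 0.6],
    "terrain_slope":  [0.1, 0.2, 0.1, 0.3, 0.2],
    "random_noise":   [0.5, 0.1, 0.9, 0.2, 0.7],
}
exercise_score = [0.95, 0.70, 0.85, 0.40, 0.60]
print(top_features(features, exercise_score, k=2))
# ['unit_readiness', 'terrain_slope']
```

The point of the sketch is the workflow, not the statistic: the learning objective fixes the outcome measure, and the data scientists then decide which measured variables earn a place in the AI system’s inputs.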
For now, when troops conduct training that involves AI, it is with an awareness that the AI system is being tested and evaluated, he said. “We are still trying to accomplish training and a learning objective of the personnel, but also you experiment with and work with those personnel being trained to understand, did the AI system here help? Did it provide novel scenarios or something new that challenged the person being trained? Or did it hinder training?” ND
Topics: Cyber, Defense Department