What models will work best for military human-AI teams? That’s the question Nunn School of International Affairs Associate Professor Margaret E. Kosal will help answer as part of a Georgia Tech Research Institute-led project examining the use of human-AI teams.
“We’re testing the use of AI and machine learning algorithms to assist the military in decision-making in situations where they have information overload and time constraints,” said Kosal. “Our emphasis is on building human-centered and trustworthy AI for national security and defense applications that are in alignment with international law.”
Why it’s important: The U.S. Department of Defense wants to ramp up its adoption and use of AI technologies, but these technologies pose numerous ethical and legal issues. Kosal, who previously worked as a science and technology advisor in the office of the U.S. Secretary of Defense, will provide the GTRI team with deep knowledge of how emerging technologies are used in national security contexts and help find solutions that address legal and ethical concerns.
More about the project:
- It will explore how the military might develop and use a human-AI team that works well together in difficult situations, such as a combat zone.
- The researchers will model such a system and measure how well skilled operators can work with it.
- One component involves creating a human digital twin, a digital version of an operator, that can help human teammates perform better.
- The team hopes the results will be useful not only in military contexts, but also in humanitarian, disaster response, public health, and other settings.
Michael Pearson
Ivan Allen College of Liberal Arts