Dissertation Defense
Distributed Task Negotiation Under Deception
Donghae Kim
Ph.D. Candidate
Aerospace Engineering and Engineering Mechanics
The University of Texas at Austin
Wednesday, March 25, 2026
1:00 pm - 2:00 pm
ASE 2.202
Distributed multi‑agent systems have attracted extensive attention due to their ability to overcome limitations of conventional centralized approaches, such as communication and computational bottlenecks. One prominent application of these systems is task allocation, in which agents collaboratively and distributively search for solutions using local perception and communication with neighboring agents. In such settings, communication information is not always verifiable; agents may not know whether the information they receive is accurate or truthful.
This uncertainty raises a critical research question: how does the overall multi-agent system perform in the possible presence of malicious agents that deliberately disseminate false information to steer the algorithm toward self‑beneficial outcomes? To address this problem, this dissertation investigates trust‑based iterative mechanisms for distributed task allocation. A mathematical notion of trust is defined to quantify an agent’s perceived truthfulness, and agents adapt their behavior—cooperative or selfish—based on these trust assessments.
Within a task negotiation framework, agents disclose more task information to reliable counterparts with higher trust while concealing information from malicious agents with low trust to mitigate the impact of deceptive behaviors. This dissertation examines such mechanisms under both Nash and utilitarian social welfare objectives and analyzes their computational complexity, bilateral decomposability, protocols, and corresponding equilibrium strategies.
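The trust-based disclosure idea above can be illustrated with a minimal sketch. All function names, update rules, and parameters below are illustrative assumptions, not the dissertation's actual mechanism: each agent maintains a trust score per neighbor, nudges it up or down depending on whether the neighbor's reports match local observations, and then reveals a trust-proportional share of its task information while concealing everything from low-trust agents.

```python
# Hypothetical trust-update and disclosure rules (illustrative only;
# the dissertation's actual mechanism may differ substantially).

def update_trust(trust, reported_value, observed_value, rate=0.2, tolerance=0.1):
    """Move trust toward 1 if the neighbor's report matched local
    observation (within tolerance), toward 0 otherwise."""
    consistent = abs(reported_value - observed_value) <= tolerance
    target = 1.0 if consistent else 0.0
    return (1 - rate) * trust + rate * target

def disclose(tasks, trust, threshold=0.5):
    """Reveal a trust-proportional fraction of tasks; conceal all
    task information from agents below the trust threshold."""
    if trust < threshold:
        return []
    n = round(trust * len(tasks))
    return tasks[:n]

# Start from neutral trust; two consistent reports raise it,
# one deceptive report lowers it.
trust = 0.5
for reported, observed in [(1.0, 1.02), (2.0, 2.05), (5.0, 1.0)]:
    trust = update_trust(trust, reported, observed)

print(disclose(["t1", "t2", "t3", "t4"], trust))  # partial disclosure
print(disclose(["t1", "t2", "t3", "t4"], 0.2))    # concealment: []
```

In this toy version, deception is penalized gradually rather than instantly, so a single inconsistent report reduces but does not immediately zero out disclosure; a sustained pattern of deception drives trust below the threshold and cuts the agent off.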
Contact Maruthi Akella (makella@mail.utexas.edu)
Sign Up for Seminar Announcements
To sign up for our weekly seminar announcements, send an email to sympa@utlists.utexas.edu with the subject line: Subscribe ase-em-seminars.