DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models

ABSTRACT

Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance – where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives – including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
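The dataset linked above is hosted as a standard Hugging Face dataset repository, so it can be fetched for local inspection with the huggingface_hub client. The snippet below is a minimal sketch rather than part of the paper's tooling; the repository id comes from the URL above, while the internal file layout and any per-perspective configurations are assumptions that should be checked against the dataset card.

    # Minimal sketch: download the DecodingTrust dataset repository locally.
    # The repo id is taken from the abstract; the file layout inside the repo
    # is not specified here, so inspect the dataset card before relying on it.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="AI-Secure/DecodingTrust",
        repo_type="dataset",  # dataset repo, not a model repo
    )
    print("DecodingTrust files downloaded to:", local_dir)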

About Dr. Bo Li
I am on the advisory board of the Center for Artificial Intelligence Innovation (CAII) at Illinois, and I am a member of the Information Trust Institute (ITI). I am also affiliated with several research centers that aim to broaden research collaboration and bridge different communities, such as the Advanced Digital Science Center (ADSC), the Center for Cognitive Computing Systems Research (C3SR), and the Illinois Quantum Information Science and Technology Center (IQUIST). I also serve in the Accelerated Learning and Engineering Research Training (ALERT) program.

My research focuses on trustworthy machine learning, with an emphasis on robustness, privacy, generalization, and their interconnections. We believe that closing today's trustworthiness gap in ML requires tackling these intertwined problems within a holistic framework, driven by fundamental research that addresses not only each problem individually but also their underlying interactions.

The long-term goal of our group, the Secure Learning Lab (SL2), is to make machine learning systems robust, private, and generalizable with guarantees for different real-world applications. We have explored different types of adversarial attacks, including evasion and poisoning attacks in both digital and physical worlds, under various constraints. We have developed, and will continue to explore, robust learning systems based on game-theoretic analysis, knowledge-enabled logical reasoning, and properties of learning tasks. Our work directly benefits applications such as computer vision, natural language processing, safe autonomous driving, and trustworthy federated learning systems.


Mintong Kang is a third-year Ph.D. student in Computer Science at UIUC, advised by Professor Bo Li. Their research interests lie in trustworthy machine learning and AI safety. Mintong is interested in uncovering vulnerabilities of advanced ML models and developing certifiable defense mechanisms to safeguard their deployment. Most recently, they have been working on the trustworthiness of multi-modal models (VLMs, audio/video LLMs) and LLM agent systems.

Mintong received a Bachelor of Engineering degree from the Computer Science department of Zhejiang University, where they worked with Professor Xi Li in the DCD Lab. Mintong was also fortunate to work with Professor Alan L. Yuille in the CCVL Lab at Johns Hopkins University.

License: CC-3.0