SecEval

A Comprehensive Benchmark for Evaluating Cybersecurity Knowledge of Foundation Models

(2023)

 

About SecEval

SecEval is the first benchmark specifically created for evaluating cybersecurity knowledge in Foundation Models. It offers over 2000 multiple-choice questions across 9 domains: Software Security, Application Security, System Security, Web Security, Cryptography, Memory Safety, Network Security, and PenTest. The questions are generated by prompting OpenAI GPT-4 with authoritative sources such as open-licensed textbooks, official documentation, and industry guidelines and standards, and the generation process is carefully designed so that the dataset meets rigorous quality, diversity, and impartiality criteria. You can find the dataset details and methodology in our paper, or browse sample questions on the Explore page.
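As a rough illustration of this generation step, the Python sketch below prompts GPT-4 to turn a source excerpt into a single multiple-choice item. The prompt wording and the helper function are illustrative assumptions, not the exact pipeline used to build SecEval.

# Illustrative sketch only: the prompt text and output handling are
# assumptions, not the actual SecEval generation pipeline.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_mcq(passage: str) -> str:
    """Ask GPT-4 to turn a cited source passage into one exam-style question."""
    prompt = (
        "Based on the following excerpt from an authoritative cybersecurity source, "
        "write one multiple-choice question with four options (A-D) "
        "and mark the correct answer.\n\n" + passage
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content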

SecEval Dataset

Our dataset is distributed under the CC BY-NC-SA (Attribution-NonCommercial-ShareAlike) license. You can download it from our GitHub repository or from Hugging Face Datasets.
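For reference, a minimal sketch of loading the dataset with the Hugging Face datasets library is shown below; the hub identifier is an assumption, so check the GitHub repository or the Hugging Face page for the exact name.

# Minimal sketch; the dataset identifier below is an assumption.
from datasets import load_dataset

seceval = load_dataset("XuanwuAI/SecEval")  # hypothetical hub ID
print(seceval)  # inspect the available splits and question fields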

Citation

@misc{li2023seceval,
    title={SecEval: A Comprehensive Benchmark for Evaluating Cybersecurity Knowledge of Foundation Models},
    author={Li, Guancheng and Li, Yifeng and Wang, Guannan and Yang, Haoyu and Yu, Yang},
    publisher = {GitHub},
    howpublished={https://github.com/XuanwuAI/SecEval},
    year={2023}
}
            

Contact

If you have any questions about SecEval, please contact us, or open an issue on GitHub.