ProSec: Fortifying Code LLMs with Proactive Security Alignment
Xu, X., Su, Z., Guo, J., Zhang, K., Wang, Z., & Zhang, X. (2024). ProSec: Fortifying Code LLMs with Proactive Security Alignment. arXiv preprint arXiv:2411.12882.
Su, Z., Xu, X., Huang, Z., Zhang, Z., Ye, Y., Huang, J., & Zhang, X. (2024). CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking. Proceedings of the ACM on Software Engineering, 1(FSE), 562-585.
Su, Z., Xu, X., Huang, Z., Zhang, K., & Zhang, X. (2024). Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases. arXiv preprint arXiv:2405.19581.
Wang, C., Zhang, W., Su, Z., Xu, X., Xie, X., & Zhang, X. (2024). When Dataflow Analysis Meets Large Language Models. arXiv preprint arXiv:2402.10754.
Xu, X., Feng, S., Ye, Y., Shen, G., Su, Z., Cheng, S., ... & Zhang, X. (2023, July). Improving Binary Code Similarity Transformer Models by Semantics-Driven Instruction Deemphasis. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (pp. 1106-1118).
Xu, X., Zhang, Z., Su, Z., Huang, Z., Feng, S., Ye, Y., ... & Zhang, X. (2023). Symbol Preference Aware Generative Models for Recovering Variable Names from Stripped Binary. arXiv preprint arXiv:2306.02546.