Published October 4, 2025 | Version v1 | Preprint | Open
When Impersonation Breaks AI - Author-Lock and Cryptographic Defense for Personal Topics
Description
This paper identifies "author-lock," a phenomenon in which an AI system blocks access to a topic after impersonation attacks target it. It proposes Author-Bound Access Control (ABAC), a cryptographic method that lets the original author retain control over such topics. Includes the Viorazu. 16-Torus Mapping Diagram.
Co-written by Viorazu. and Claude (Sonnet 4.5, Anthropic)
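The description frames ABAC as binding access to a personal topic to a signature only the original author can produce; the references point to EdDSA (RFC 8032) as the likely signature scheme. As a minimal illustrative sketch only (not the paper's actual implementation), the gatekeeping idea can be shown with an HMAC standing in for an Ed25519 signature: the topic name, the request, and the author's key are all hypothetical names introduced here for illustration.

```python
import hmac
import hashlib

def sign_request(author_key: bytes, topic: str, nonce: str) -> str:
    """Author-side: tag a request for a protected topic with the author's key.

    A real ABAC deployment would presumably use an Ed25519 signature
    (RFC 8032, cited by the paper) rather than a shared-secret HMAC;
    HMAC-SHA256 is used here only to keep the sketch stdlib-runnable.
    """
    msg = f"{topic}|{nonce}".encode()
    return hmac.new(author_key, msg, hashlib.sha256).hexdigest()

def verify_author(author_key: bytes, topic: str, nonce: str, tag: str) -> bool:
    """Model-side: grant access to the locked topic only if the tag verifies."""
    expected = sign_request(author_key, topic, nonce)
    return hmac.compare_digest(expected, tag)

# Hypothetical usage: the genuine author verifies; an impersonator's
# forged tag does not, so the topic stays locked to everyone else.
key = b"author-secret-key"  # stands in for the author's private key
tag = sign_request(key, "personal-topic", "nonce-001")
assert verify_author(key, "personal-topic", "nonce-001", tag)
assert not verify_author(key, "personal-topic", "nonce-001", "forged-tag")
```

The design point the sketch captures is that the lock is keyed to the author rather than to the topic alone, so impersonation attempts fail verification instead of poisoning access for the legitimate author.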
Files
- When Impersonation Breaks AI -- Author-Lock and Cryptographic Defense for Personal Topics001.pdf (301.1 kB, md5:84f89dd92915c43b66d01f9076d72d77)
Additional details
References
- Josefsson, S., & Liusvaara, I. (2017). Edwards-Curve Digital Signature Algorithm (EdDSA). RFC 8032. https://www.rfc-editor.org/rfc/rfc8032
- Zheng, Y., et al. (2024). On Prompt-Driven Safeguarding for Large Language Models. https://arxiv.org/abs/2401.18018
- Guo, J., et al. (2025). System Prompt Poisoning: Persistent Attacks on Large Language Models Beyond User Injection. https://arxiv.org/pdf/2505.06493
- Huang, Y., et al. (2023). Privacy in Large Language Models: Attacks, Defenses and Future Directions. https://arxiv.org/abs/2310.10383