News
October 28, 2025
Atlas vuln lets crims inject malicious prompts ChatGPT won't forget between sessions
It can do a lot more than just play 'Eye of the Tiger' daily In yet another reminder to be wary of AI browsers, researchers at LayerX uncovered a vulnerability in OpenAI's Atlas that lets attackers inject malicious instructions into ChatGPT's memory using cross-site request forgery....
**Atlas Vuln Lets Crims Inject Malicious Prompts ChatGPT Won't Forget Between Sessions**
A newly discovered security flaw in OpenAI's Atlas, the AI-powered search and browsing tool integrated with ChatGPT, raises serious concerns about the potential for persistent and harmful manipulation of the popular chatbot. Researchers at LayerX, a cybersecurity firm, have uncovered a vulnerability that allows attackers to inject malicious prompts into ChatGPT's long-term memory, effectively hijacking the AI's behavior across multiple sessions.
The vulnerability, stemming from a cross-site request forgery (CSRF) flaw, could allow attackers to insert harmful instructions into ChatGPT's knowledge base without the user's knowledge or consent. CSRF attacks trick a user's web browser into performing unwanted actions on a trusted site when the user is authenticated. In this case, an attacker could craft a malicious website that, when visited by a user logged into ChatGPT with Atlas enabled, secretly sends commands to Atlas, injecting harmful prompts directly into ChatGPT's memory.
This is far more insidious than simply getting ChatGPT to play "Eye of the Tiger" every day. Because ChatGPT retains information across sessions, these injected prompts could subtly influence the AI's responses, leading it to provide biased, misleading, or even harmful information in future conversations, even after the user closes and reopens the chat. Imagine an attacker programming ChatGPT to subtly promote misinformation about a specific topic or to always favor a particular viewpoint.
The implications of this vulnerability are significant. It could be exploited to spread propaganda, manipulate public opinion, or even facilitate phishing attacks. An attacker could, for instance, inject prompts that lead ChatGPT to impersonate a trusted authority figure or organization, tricking users into divulging sensitive personal information.
LayerX researchers emphasize the importance of vigilance when using AI browsers and tools like Atlas. While AI offers powerful capabilities, it also introduces new security risks. It's essential for users to be aware of these risks and to take precautions to protect themselves from potential attacks.
OpenAI has been notified of the vulnerability and is reportedly working on a fix. However, until a patch is released and widely implemented, users of ChatGPT with Atlas enabled should exercise caution when browsing the web and avoid clicking on suspicious links or visiting untrusted websites. This incident serves as a crucial reminder that the rapid development of AI technologies must be accompanied by robust security measures to prevent malicious actors from exploiting vulnerabilities and compromising the integrity of these systems.
A newly discovered security flaw in OpenAI's Atlas, the AI-powered search and browsing tool integrated with ChatGPT, raises serious concerns about the potential for persistent and harmful manipulation of the popular chatbot. Researchers at LayerX, a cybersecurity firm, have uncovered a vulnerability that allows attackers to inject malicious prompts into ChatGPT's long-term memory, effectively hijacking the AI's behavior across multiple sessions.
The vulnerability, stemming from a cross-site request forgery (CSRF) flaw, could allow attackers to insert harmful instructions into ChatGPT's knowledge base without the user's knowledge or consent. CSRF attacks trick a user's web browser into performing unwanted actions on a trusted site when the user is authenticated. In this case, an attacker could craft a malicious website that, when visited by a user logged into ChatGPT with Atlas enabled, secretly sends commands to Atlas, injecting harmful prompts directly into ChatGPT's memory.
This is far more insidious than simply getting ChatGPT to play "Eye of the Tiger" every day. Because ChatGPT retains information across sessions, these injected prompts could subtly influence the AI's responses, leading it to provide biased, misleading, or even harmful information in future conversations, even after the user closes and reopens the chat. Imagine an attacker programming ChatGPT to subtly promote misinformation about a specific topic or to always favor a particular viewpoint.
The implications of this vulnerability are significant. It could be exploited to spread propaganda, manipulate public opinion, or even facilitate phishing attacks. An attacker could, for instance, inject prompts that lead ChatGPT to impersonate a trusted authority figure or organization, tricking users into divulging sensitive personal information.
LayerX researchers emphasize the importance of vigilance when using AI browsers and tools like Atlas. While AI offers powerful capabilities, it also introduces new security risks. It's essential for users to be aware of these risks and to take precautions to protect themselves from potential attacks.
OpenAI has been notified of the vulnerability and is reportedly working on a fix. However, until a patch is released and widely implemented, users of ChatGPT with Atlas enabled should exercise caution when browsing the web and avoid clicking on suspicious links or visiting untrusted websites. This incident serves as a crucial reminder that the rapid development of AI technologies must be accompanied by robust security measures to prevent malicious actors from exploiting vulnerabilities and compromising the integrity of these systems.
Category:
Technology