Hacking LLMs with prompt injections

And ways hackers can attack GPT-based applications

Vickie Li
8 min readJun 1, 2023
Photo by Brett Jordan on Unsplash

I recently had the opportunity to attend Google IO, during which many new products were announced. One aspect that stood out about many of these products was the focus on AI, particularly generative AI.

Generative AI is fascinating and I am excited to see what we can do by integrating its capabilities with different functionalities. I already use these tools for scripting, copywriting, and generating ideas about blogs. While these applications are super cool, it’s also important to study how we can use these technologies securely.

Large language models are already being woven into products that handle private information, going way beyond just summarizing news articles and copyediting emails. It’s like a whole new world out there! I’ve already seen them being planned for use in customer service chatbots, content moderation, and generating ideas and advice based on user needs. They’re also being used for code generation, unit test generation, rule generation for security tools, and much more.

Let me preface by saying that I’m a total newbie in the AI and machine learning field. Just like everyone else, I’m still trying to wrap my head around the capabilities of these advancements. When it comes to large language models, I’m…

--

--

Vickie Li
Vickie Li

Written by Vickie Li

Professional investigator of nerdy stuff. Hacks and secures. Creates god awful infographics. https://twitter.com/vickieli7

Responses (5)