• Mikina@programming.dev
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    Is it even possible to solve the prompt injection attack (“ignore all previous instructions”) using the prompt alone?

    • HaruAjsuru@lemmy.world
      link
      fedilink
      arrow-up
      0
      ·
      edit-2
      2 years ago

      You can surely reduce the attack surface with multiple ways, but by doing so your AI will become more and more restricted. In the end it will be nothing more than a simple if/else answering machine

      Here is a useful resource for you to try: https://gandalf.lakera.ai/

      When you reach lv8 aka GANDALF THE WHITE v2 you will know what I mean