• danielbln@lemmy.world
    link
    fedilink
    arrow-up
    0
    ·
    2 years ago

    I’ve implemented a few of these and that’s about the most lazy implementation possible. That system prompt must be 4 words and a crayon drawing. No jailbreak protection, no conversation alignment, no blocking of conversation atypical requests? Amateur hour, but I bet someone got paid.

    • Mikina@programming.dev
      link
      fedilink
      arrow-up
      0
      ·
      2 years ago

      Is it even possible to solve the prompt injection attack (“ignore all previous instructions”) using the prompt alone?

      • HaruAjsuru@lemmy.world
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        2 years ago

        You can surely reduce the attack surface with multiple ways, but by doing so your AI will become more and more restricted. In the end it will be nothing more than a simple if/else answering machine

        Here is a useful resource for you to try: https://gandalf.lakera.ai/

        When you reach lv8 aka GANDALF THE WHITE v2 you will know what I mean