What's Supported with AI Access Security?
Focus
Focus
AI Access Security

What's Supported with AI Access Security?

Table of Contents

What's Supported with
AI Access Security
?

Learn about what
AI Access Security
supports.
Where Can I Use This?
What Do I Need?
  • NGFW (Managed by Panorama or Strata Cloud Manager)
  • Prisma Access (Managed by Panorama or Strata Cloud Manager)
One of the following:
  • AI Access Security
    license
  • CASB-PA license
  • CASB-X
    license
Learn more about what
AI Access Security
supports.

Supported
AI Access Security
Use Cases

Use
AI Access Security
to safely adopt GenAI apps based on the app use case.
You can safely adopt GenAI apps using
AI Access Security
based on the following use cases. One GenAI app can be associated with more than one use case.
  • Audio Generators
    Audio Generators use AI models to create sound effects, music, and other audio clips from text prompts or audio reference inputs provided by the user.
    Associated Risks
    • Copyright Infringement
      —Audio Generators can be trained on data sets of existing music and sound effects that can include copyrighted materials. This can lead to generated outputs that reproduce copyrighted material.
    • Data Privacy
      —Audio Generators can recreate an individual's voice, speech patterns, or other identifiable audio characteristics without consent, which poses privacy risks.
    • Exposure of Sensitive Information
      —Audio Generators can receive audio as an input that can contain proprietary or sensitive information. By providing this proprietary or sensitive data as an input, you risk that data being used to train the GenAI app, which may then be exposed to users outside of your organization.
  • Conversational Agent
    Conversational Agents provide assistance with a wide variety of tasks in a natural, user-friendly manner similar to how to how humans interact with each other. They are typically in the form of chatbots that accept text and files as input.
    Associated Risks
    • Malicious Prompting
      —Bad actors can exploit Conversational Agents by crafting and injecting indirect prompts that could exfiltrate sensitive information.
    • Exposure of Sensitive Information
      —Conversational Agents are trained on user shared data. If you share data with sensitive information, it might inadvertently be exposed to other users through the Conversational Agent's responses.
    • Insecure Plugin Configurations
      —Some Conversational Agents allow launching plugins that enable interaction with external applications and services but might have an insecure configuration. This significantly increases your attack surface and weakens your security posture.
    • Hallucinations, Bias, and Ethical Concerns
      —Conversational Agents can generate biased, inaccurate, or misleading information.
  • Code Assistants & Generators
    Code Assistant & Generators can significantly boost developer productivity by generating code snippets and suggestions.
    Associated Risks
    • Security Vulnerabilities
      —Code generated by a Code Assistant & Generator could contain flaws, vulnerabilities, or malware that could be exploited thereby compromising application security.
    • Intellectual Property Violations
      —Code generated by models trained on public repositories could violate licenses or reproduce copyrighted code.
    • Exposure of Sensitive Information
      —Code Assistant & Generators might leak your priority algorithms because you're training a third-party model on your code.
  • Developer Platforms
    Developer Platforms streamline and orchestrate the process of building a GenAI application.
    Associated Risks
    • Exposure of Sensitive Information
      —Fine-tuning large language models (LLM) often requires training them with proprietary data. A security breach or unauthorized access could lead to severe data leaks.
  • Enterprise Searches
    Enterprise Searches aim to provide a centralized search experience across an organization's data sources.
    Associated Risks
    • Data Privacy and Security
      —Enterprise Searches have access to a wide range of sensitive data of the organization by design. A security breach or unauthorized access could lead to severe data leaks.
    • Malicious Prompting
      —Bad actors can exploit these Enterprise Searches by crafting and injecting indirect prompts that could exfiltrate sensitive information.
  • Image Editor & Generators
    Image Editor & Generators leverage AI models to generate, manipulate, and edit images based on text prompts or input images.
    Associated Risks
    • Copyright Infringement
      —Image Editor & Generators can be trained on copyrighted image data, potentially leading to generated content that violates intellectual property rights.
    • Deepfake Misuse
      — Image Editor & Generators can be used to generate highly realistic deepfakes, which could be weaponized for misinformation campaigns.
    • Bias and Ethical Concerns
      —Image Editor & Generators can generate content that can perpetuate societal biases, generating harmful and offensive content.
  • Meeting Assistants
    Meeting Assistants offer capabilities such as meeting summarization, including action items and follow-up task list generation.
    Associated Risks
    • Privacy and Confidentiality Breaches
      —You risk training Meeting Assistants on sensitive or proprietary information discussed during meetings. Additionally, they could store this data in their cloud environments that might not follow the best security standards.
  • Productivity Assistants
    Productivity Assistants provide general task assistance synthesizing information directly within familiar productivity tools. They are privy to all data, proprietary documents, confidential information, and trade secrets that the productivity tools have access to.
    Associated Risks
    • Data Privacy and Security
      —Productivity Assistants by design have access to a wide range of sensitive data of the organization. A security breach or unauthorized access could lead to severe data leaks.
    • Excessive Agency
      —Productivity Assistants have integrations and might support multiple types of plugins and extensions that enable interaction with external applications and services but might have an insecure configuration. This significantly increases your attack surface and weakens your security posture.
    • Malicious Prompting
      —Bad actors could exploit these assistants by crafting and injecting indirect prompts that could exfiltrate sensitive information.
  • Video Editors & Generators
    Video Editor & Generators use AI models to generate, manipulate, and edit videos based on text prompts or input images.
    Associated Risks
    • Copyright Infringement
      —Video Editor & Generators can be trained data sets of existing image or video data that can include copyrighted images and videos. This can lead to generated outputs that reproduce copyrighted material.
    • Deepfake Misuse
      —Video Editors & Generators can be used to generate highly realistic deepfakes, which could be weaponized for misinformation campaigns.
    • Bias and Ethical Concerns
      —Video Editor & Generators can generate content that can perpetuate societal biases, generating harmful and offensive content.
  • Writing Assistants
    Writing Assistants enhance productivity by offering writing suggestions, grammar corrections, and content generation capabilities.
    Associated Risks
    • Plagiarism and Copyright Infringement
      —Writing Assistants can be trained on data sets of existing text that can include copyrighted writing. This can lead to generated outputs that directly reproduce material without proper attribution.
    • Exposure of Sensitive Information
      —Writing Assistants often have access to sensitive information such as proprietary documents, personal data, or confidential communications. A security breach or unauthorized access could lead to a severe data leak.
    • Bias and Ethical Concerns
      —Writing Assistants can generate content that can perpetuate societal biases, generating harmful and offensive content.

GenAI Apps Supported by
AI Access Security

Learn more about the generative AI (GenAI) apps supported by
AI Access Security
Enterprise Data Loss Prevention (E-DLP)
is the detection engine that powers
AI Access Security
and prevents exfiltration of sensitive data to generative AI (GenAI) apps.
AI Access Security
supports the same GenAI apps supported by
Enterprise DLP
.
  • All GenAI app support requires PAN-OS 10.2.3 or later release.
    This applies to
    NGFW
    ,
    Panorama™ management server
    , and the
    Prisma Access
    dataplane version.
  • All GenAI apps support only nonfile inspection unless otherwise specified in the list of supported GenAI apps.

Recommended For You