About Enterprise DLP

Learn about Enterprise Data Loss Prevention (DLP) on the Panorama™ management server.
Data loss prevention (DLP) is a set of tools and processes that allow you to protect sensitive information against unauthorized access, misuse, extraction, or sharing.
Enterprise DLP is a cloud-based service that uses supervised machine learning algorithms to classify sensitive documents into Financial, Legal, Healthcare, and other categories, guarding against exposure, data loss, and data exfiltration. These patterns identify sensitive information in traffic flowing through your network and protect it from exposure.
Enterprise DLP allows you to protect sensitive data in the following ways:
  • Prevent file uploads and non-file-based traffic from leaking to unsanctioned web applications
    —Discover and conditionally stop sensitive data from being leaked to untrusted web applications.
  • Monitor uploads to sanctioned web applications
    —Discover and monitor sensitive data when it is uploaded to sanctioned corporate applications.
To help you inspect content and analyze the data in the correct context so that you can accurately identify sensitive data and secure it to prevent incidents, Enterprise DLP is enabled through a cloud service. Enterprise DLP supports over 380 data patterns and many predefined data filtering profiles. Enterprise DLP automatically makes new patterns and profiles available for use in Security policy rules as soon as they are added to the cloud service.
Use the following tools to configure Enterprise DLP:
  • Data Patterns—Help you detect sensitive content and how that content is being shared or accessed on your network.
    Predefined data patterns and built-in settings make it easy for you to protect data that contains certain properties (such as document title or author), credit card numbers, regulated information from different countries (such as driver’s license numbers), and third-party DLP labels. To improve detection rates for sensitive data in your organization, you can supplement predefined data patterns by creating custom data patterns that are specific to your content inspection and data protection requirements. In a custom data pattern, you can also define regular expressions and data properties that look for metadata or attributes in the file’s custom or extended properties, and use them in a data filtering profile.
  • Data Filtering Profiles—Power the data classification and monitor capabilities available on your managed firewalls to prevent data loss and mitigate business risk.
    Data filtering profiles are collections of data patterns that are grouped together to scan for a specific object or type of content. To perform content analysis, the predefined data filtering profiles have data patterns that include industry-standard data identifiers, keywords, and built-in logic in the form of machine learning, regular expressions, and checksums for legal and financial data patterns. When you use a data filtering profile in a Security policy rule, the firewall can inspect the traffic for a match and take action.
    After you configure the data patterns (either predefined or custom), you manage the data filtering profiles from Panorama. You can use a predefined data filtering profile, or create a new profile and add data patterns to it. You then create Security policy rules and apply the profiles to them. For example, if a user uploads a file and data in the file matches the criteria in the policy, the managed firewall either creates an alert notification or blocks the file upload.
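The relationship between data patterns, data filtering profiles, and policy actions can be sketched conceptually. The following Python sketch is illustrative only; the class names, fields, and the simplified SSN regex are assumptions for demonstration, not the Enterprise DLP API or its actual detection logic.

```python
import re
from dataclasses import dataclass

@dataclass
class DataPattern:
    """Illustrative stand-in for a predefined or custom data pattern."""
    name: str
    regex: str

@dataclass
class DataFilteringProfile:
    """A named collection of data patterns scanned together."""
    name: str
    patterns: list
    action: str = "alert"  # or "block"

    def inspect(self, content: str):
        """Return the names of all data patterns that match the content."""
        return [p.name for p in self.patterns if re.search(p.regex, content)]

# A custom pattern for US Social Security numbers (deliberately simplified).
ssn = DataPattern("us-ssn", r"\b\d{3}-\d{2}-\d{4}\b")
profile = DataFilteringProfile("pii-profile", [ssn], action="block")

matches = profile.inspect("Employee SSN: 123-45-6789")
print(matches, profile.action)  # ['us-ssn'] block
```

In this sketch, a non-empty match list would drive the profile's configured action, mirroring how a managed firewall alerts on or blocks traffic that matches a profile referenced in a Security policy rule.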
When traffic matches a data filtering profile that a Security policy rule is using, a data filtering log is generated. The log entry contains detailed information about the traffic that matched one or more data patterns in the data filtering profile. The log details enable forensics by allowing you to verify when matched data generated an alert notification or was blocked.
You view the snippets in the Data Filtering logs. By default, data masking partially masks the snippets to prevent the sensitive data from being exposed. You can completely mask the sensitive information, unmask snippets, or disable snippet extraction and viewing.
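Partial masking of a snippet can be illustrated with a short sketch. The masking rule shown here (reveal only the last four characters) is an assumption for demonstration purposes, not the exact masking scheme Enterprise DLP applies.

```python
def partial_mask(snippet: str, visible: int = 4, mask_char: str = "X") -> str:
    """Mask all but the last `visible` characters of a matched snippet.

    Snippets shorter than `visible` are fully masked so that no
    sensitive data leaks through short matches.
    """
    if len(snippet) <= visible:
        return mask_char * len(snippet)
    return mask_char * (len(snippet) - visible) + snippet[-visible:]

print(partial_mask("4111111111111111"))  # XXXXXXXXXXXX1111
print(partial_mask("123", visible=4))   # XXX
```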
To improve detection accuracy and reduce false positives, you can also specify:
  • Proximity keywords
    —An asset is assigned a higher accuracy probability when a keyword is within a 200-character distance of the expression. If a document has a 16-digit number immediately followed by the keyword Visa, that's more likely to be a credit card number. But if Visa is the title of the text and the 16-digit number is on the last page of the 22-page document, that's less likely to be a credit card number.
    You can also use more than one keyword in a keyword group and include or exclude keywords to find when occurrences of specific words appear or do not appear within 200 characters of the expression.
  • Confidence levels
    —Along with proximity keywords, confidence levels allow you to specify the probability of the occurrence of proximity keywords in a pattern match. With a low confidence, the managed firewall does not use proximity keywords to identify a match; with a high confidence, the managed firewall looks for the proximity keywords within 200 characters of the regular expressions in the pattern before it considers the data pattern in a file or non-file-based traffic to be a match.
  • Basic and weighted regular expressions
    —A regular expression (regex for short) describes how to search for a specific text pattern and then display the match occurrences when a pattern match is found. There are two types of regular expressions:
    • A basic regular expression searches for a specific text pattern. When a pattern match is found, the service displays the match occurrences.
    • A weighted regular expression assigns a score to a text entry. When the score threshold is exceeded, the service returns a match for the pattern.
      To reduce false positives and maximize the search performance of your regular expressions, you can assign scores using the weighted regular expression builder when you create data patterns. Scoring applies a match threshold: when the cumulative score exceeds the threshold (for example, when enough expressions from a pattern match an asset), the asset is reported as a match for the pattern.
      For more information, including a use case and best practices, see Configure Regular Expressions.
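The proximity-keyword, confidence-level, and weighted-scoring behaviors described above can be approximated in a short sketch. The 200-character window comes from the text; the function names, scoring scheme, and example weights are assumptions for illustration, not the service's actual implementation.

```python
import re

def high_confidence_match(text: str, expr: str, keywords: list,
                          window: int = 200) -> bool:
    """High confidence: require a proximity keyword within `window`
    characters of the regular-expression match. (Low confidence would
    accept the regex match alone, without checking keywords.)"""
    for m in re.finditer(expr, text):
        vicinity = text[max(0, m.start() - window):m.end() + window]
        if any(kw.lower() in vicinity.lower() for kw in keywords):
            return True
    return False

def weighted_match(text: str, weighted_exprs: dict, threshold: int) -> bool:
    """Weighted regular expressions: sum the score of every expression
    that matches the text; the asset matches the data pattern only when
    the total score exceeds the threshold."""
    score = sum(w for expr, w in weighted_exprs.items() if re.search(expr, text))
    return score > threshold

doc = "Visa statement: card number 4111111111111111 enclosed."
cc = r"\b\d{16}\b"

# The keyword "Visa" appears within 200 characters of the 16-digit number.
print(high_confidence_match(doc, cc, ["Visa"]))  # True

# Hypothetical weights: only the card-number expression matches (score 5),
# which does not exceed the threshold of 7, so the pattern does not match.
exprs = {r"\b\d{16}\b": 5, r"(?i)\bexpiry\b": 3, r"(?i)\bCVV\b": 3}
print(weighted_match(doc, exprs, threshold=7))  # False
```

The contrast between the two functions mirrors the documentation: proximity keywords and confidence levels gate a single expression on nearby context, while weighted expressions aggregate evidence from several expressions before declaring a match.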
