Pattern: Safely Importing Data

Overview

Computer systems rarely operate in isolation: they must exchange data with external environments. You need to be able to import data while ensuring that malicious code is not transferred into your organization’s systems.

However, the same channels that allow the legitimate transfer of data can be exploited by attackers to introduce malware. This risk applies to every data import method, whether over network connections or removable media. Since any external system could be compromised, every such source is a potential threat to your systems.

This guidance outlines technical controls designed to mitigate the risks of importing data over networks. It is especially relevant to systems that handle sensitive data or must maintain high integrity and confidentiality, such as those processing personal information, classified data, or industrial control operations.


Organization of Content

This guidance is structured into several sections for clarity:

  1. Preventing network-based attacks
  2. Mitigating the import of malicious content
  3. Guidelines for safe data import practices
  4. Monitoring for security breaches


A. Preventing Network-Based Attacks

Data is commonly imported through network connections using various transport protocols, whether standard protocols such as SFTP and SMTP or bespoke APIs.

Yet, both these connections and protocols carry vulnerabilities. If compromised, they can allow attackers to seize control over network devices and servers.

Types of Attacks

Network-based attacks may target different layers of the OSI model.

Media Layers (physical, data link, network)

Potential attacks include:

  • Creating malformed Ethernet frames that exploit vulnerabilities in the Ethernet driver of the receiving device.
  • Crafting defective IPv4 or IPv6 headers to exploit weaknesses in the destination device’s IP stack.

Host Layers (transport, session, presentation, application)

Potential attacks include:

  • Sending malformed protocol headers that exploit vulnerabilities in protocol libraries (such as TCP or UDP).
  • Submitting malformed messages to exploit weaknesses in transport layer compression libraries.
  • Creating flawed messages to exploit vulnerabilities in transport layer encryption.
  • Targeting applications or services that process application layer protocols.

Defensive Measures

To minimize the risk of successful network attacks, consider implementing the following controls:

  • Timely patching of software and firmware in network infrastructure, including the operating systems and applications. Regular updates substantially reduce risks from known vulnerabilities.
  • One-way flow control. This can be achieved with devices like data diodes, ensuring that data can only flow in one direction. While not stopping internal vulnerabilities, it inhibits an attacker’s ability to export data or gain control after an intrusion.
  • Utilizing simple transfer protocols with procedural breaks. A procedural break terminates existing connections before handling the data via a reduced protocol. This severely complicates potential protocol-based attacks. These measures usually work in tandem with flow control, providing a more effective security barrier.

If correctly put in place, a combination of a procedural break and flow control greatly diminishes the chance of network-based attacks. Ensure appropriate testing is conducted to validate that both mechanisms work as intended.
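The procedural break described above can be illustrated with a minimal sketch in Python. The idea, under stated assumptions, is that the inbound transport connection is terminated entirely, and only the bare payload is re-emitted in a deliberately reduced protocol (here, a simple length-prefixed frame with no commands, options, or negotiation). The function names, frame layout, and size limit below are invented for illustration, not part of any real product:

```python
import struct

# A "reduced protocol": a 4-byte big-endian length followed by raw payload
# bytes. There is no negotiation or command channel, so the protocol surface
# available to an attacker is minimal.

MAX_PAYLOAD = 10 * 1024 * 1024  # hard cap to reject oversized transfers

def repackage(payload: bytes) -> bytes:
    """Low side of the procedural break: the original transport has been
    terminated, and only the payload is forwarded in the reduced framing."""
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("payload exceeds transfer limit")
    return struct.pack(">I", len(payload)) + payload

def unpack(frame: bytes) -> bytes:
    """High side: parse the length prefix, check consistency, and hand back
    only the payload bytes."""
    if len(frame) < 4:
        raise ValueError("truncated frame")
    (length,) = struct.unpack(">I", frame[:4])
    body = frame[4:]
    if length != len(body):
        raise ValueError("length mismatch")
    return body

frame = repackage(b"report.csv contents")
assert unpack(frame) == b"report.csv contents"
```

In a real deployment the two functions would run on opposite sides of the flow control device, so that a compromise of the receiving side cannot speak the original, richer protocol back out.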


B. Mitigating the Import of Malicious Content

Attackers often embed malicious code into legitimate-looking files or data objects that target systems process.

Such malicious content is crafted to execute the attacker’s code, which may either be part of the content or downloaded during the exploit.

Types of Attacks

Attackers have various methods for introducing malicious content into systems:

Examples include:

  • Submitting malformed compressed data designed to exploit vulnerabilities in decompression algorithms.
  • Providing malformed encrypted information aimed at exploiting decryption weaknesses.
  • Delivering syntactically faulty content that can compromise parsers used by the system.
  • Delivering semantically incorrect content that takes advantage of logical errors in processing.
  • Embedding active code within formats that support it, such as scripts and macros in documents.

More intricate data formats, like office productivity documents, often provide rich surfaces for potential vulnerabilities, making it easier for attackers to exploit weaknesses.

Defensive Strategies

The following practices can help safeguard systems against the risks posed by malicious content:

  • Promptly patching all systems that handle content to close known vulnerabilities.
  • Thorough engineering and testing of components that will deal with external content, identifying potential vulnerabilities during the development phase.
  • Validation of content for both syntax and semantics before it is processed to ensure safety. Any potentially harmful active components should be stripped from content.
  • Transforming complex formats into safer, simpler ones to mitigate risks from parsing complex data.
  • Applying non-persistence and sandboxing techniques to programs that render content, thereby limiting the impact of any compromised session.
  • Disallowing the execution of active code unless absolutely necessary.
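The validation and stripping controls above can be sketched with a small allow-list validator. This example assumes a JSON payload with an invented schema; the field names (`title`, `pages`, `macro`, and so on) are purely illustrative. It performs a syntax check (well-formed JSON), a semantic check (field types and value ranges), and strips active-content fields outright:

```python
import json

# Illustrative allow-list validator: parse (syntax check), enforce a simple
# schema (semantic check), and strip active-content fields. The schema and
# field names are invented for this sketch.

ALLOWED_FIELDS = {"title": str, "body": str, "pages": int}
ACTIVE_FIELDS = {"macro", "script", "onload"}

def validate_and_sanitize(raw: bytes) -> dict:
    doc = json.loads(raw)              # syntax: must be well-formed JSON
    if not isinstance(doc, dict):
        raise ValueError("top level must be an object")
    clean = {}
    for key, value in doc.items():
        if key in ACTIVE_FIELDS:
            continue                   # strip active content outright
        expected = ALLOWED_FIELDS.get(key)
        if expected is None:
            raise ValueError(f"unexpected field: {key}")
        if not isinstance(value, expected):
            raise ValueError(f"wrong type for {key}")
        clean[key] = value
    if "pages" in clean and clean["pages"] < 0:
        raise ValueError("pages must be non-negative")  # semantic check
    return clean

doc = validate_and_sanitize(b'{"title": "Q3", "pages": 4, "macro": "evil()"}')
assert "macro" not in doc
```

Note the allow-list posture: anything not explicitly expected is rejected, rather than trying to enumerate everything that might be dangerous.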

Handling Nested Content

Particular caution should be exercised with formats containing embedded content. Nested components require extraction, verification, and potentially transformation.

Implement limits to prevent excessive recursion and potential vulnerabilities.
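A recursion limit for nested content can be sketched as follows. This example assumes zip archives as the container format and an arbitrary depth cap of three; both choices are illustrative:

```python
import io
import zipfile

MAX_DEPTH = 3  # illustrative cap on archive-within-archive nesting

def extract_nested(data: bytes, depth: int = 0):
    """Recursively unpack zip members, enforcing a depth limit so that
    deeply nested archives cannot exhaust the gateway."""
    if depth > MAX_DEPTH:
        raise ValueError("nesting limit exceeded")
    results = []
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for name in zf.namelist():
            member = zf.read(name)
            if name.endswith(".zip"):
                results.extend(extract_nested(member, depth + 1))
            else:
                results.append((name, member))
    return results
```

In practice you would also cap the total decompressed size and member count, since nesting depth is only one of the resource-exhaustion vectors in archive formats.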


C. Guidelines for Safe Data Import Practices

The controls outlined above can be combined to form a reliable data import framework. Our recommended model is described below.

Data Import Framework

The arrangement of these components is intentional for optimal security. Notably, transformation occurs before any protocols are broken, with verification implemented as the final measure.

The flow control acts as the boundary between the less trusted and more trusted sides of the interface. Transformation should be performed on the less trusted side, because processing unknown content is inherently risky.

A well-implemented gateway enhances security by ensuring that the verification process is straightforward compared to the more complex transformation tasks.

Upon successful verification, the data can be redirected to its destination, potentially reverting to its original format for the intended recipient.
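The ordering described above can be summarized in a minimal sketch: transform on the low side, pass through the one-way boundary, then verify on the high side. Every function body here is a stand-in for a real component, and the canonical form (lowercase ASCII) is an invented example:

```python
# Illustrative end-to-end ordering of the import gateway: transformation
# happens before the protocol break, verification happens last, on the
# trusted side. All function bodies are placeholders.

def transform_low_side(data: bytes) -> bytes:
    # stand-in: convert a complex format into a simple, canonical one
    return data.strip().lower()

def one_way_boundary(data: bytes) -> bytes:
    # stand-in for the flow control / protocol break; data only moves forward
    return bytes(data)

def verify_high_side(data: bytes) -> bytes:
    # verification should be simple: check the canonical form, nothing more
    if not data.isascii() or data != data.lower():
        raise ValueError("data is not in the expected canonical form")
    return data

def import_pipeline(data: bytes) -> bytes:
    return verify_high_side(one_way_boundary(transform_low_side(data)))

assert import_pipeline(b"  Quarterly REPORT  ") == b"quarterly report"
```

The design point the sketch captures is asymmetry: the risky, complex work (transformation) runs where a compromise matters least, while the trusted side only performs a simple, easily assured check.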

Simplifying Components

Not all components are mandatory in every deployment. For example, content that already arrives in a simple format may not require transformation. The level of verification required can also vary with the robustness of the destination system.

Architectural Considerations

Key security perspectives regarding architecture include:

  • Management procedures must not weaken gateway security. Low-side components should be administered separately to prevent breaches from affecting high-side operations.
  • Utilizing non-persistence and sandboxing can limit the impact of any vulnerabilities that occur within processing components.
  • Steps in the framework should be sequential. It is crucial to ensure network designs prevent bypassing these procedures, thereby safeguarding the integrity of the entire system.


D. Monitoring for Security Breaches

Effective monitoring is essential at each stage of the gateway. However, the most critical aspects for vigilance include the verification engine and the main system receiving the data.

Supervising the Verification Engine and Destination System

The verification engine is the core security mechanism within the gateway and requires continuous oversight. If the transformation component is working correctly, verification should never fail; any failures therefore indicate a problem with the transformation component.

In addition, the destination system is the target. Regular monitoring for signs of compromise is crucial, focusing on components that handle external content. Depending on the system’s sensitivity, isolation of components handling external data may be necessary for enhanced monitoring.

Overseeing Other Components

For high-risk environments, it’s beneficial to apply monitoring to all components within the gateway:

Monitoring External Network Connection

For systems limited to specific source connections, strict rules can manage access. For those with diverse inputs, a ‘known bad’ approach may be more suitable.
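The two monitoring models can be contrasted in a short sketch. The addresses below are drawn from documentation ranges (RFC 5737) and are purely illustrative, as is the function itself:

```python
# 'Known good' vs 'known bad' connection policies, as described above.
# Addresses are RFC 5737 documentation ranges, not real sources.

ALLOWED_SOURCES = {"203.0.113.10", "203.0.113.11"}  # 'known good' allow-list
KNOWN_BAD = {"198.51.100.7"}                        # 'known bad' block-list

def connection_allowed(src_ip: str, restricted_sources: bool) -> bool:
    """Strictly allow-list when the set of sources is fixed; otherwise fall
    back to rejecting only sources known to be bad."""
    if restricted_sources:
        return src_ip in ALLOWED_SOURCES
    return src_ip not in KNOWN_BAD
```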

Monitoring the Transformation Engine

Given its exposure to raw external data, the transformation engine should be viewed as vulnerable. Monitoring for any unusual activity such as unexpected network requests or process crashes can indicate a need for action.
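One way to make transformer crashes observable, sketched below under the assumption that the transformation step can be run as a separate process: any abnormal exit is surfaced as an alert rather than crashing the gateway itself. The inline scripts are stand-ins for a real transformer:

```python
import subprocess
import sys

# Illustrative: run a transformation step in a separate process so that a
# crash caused by malformed external data is contained and reported.

UPPERCASE_TRANSFORM = "import sys; sys.stdout.write(sys.stdin.read().upper())"

def run_transformer(script: str, data: bytes, timeout: float = 5.0):
    """Return (output, alert). A non-zero exit becomes an alert string
    instead of taking down the gateway process."""
    proc = subprocess.run(
        [sys.executable, "-c", script],
        input=data, capture_output=True, timeout=timeout,
    )
    if proc.returncode != 0:
        return None, f"transformer exited with code {proc.returncode}"
    return proc.stdout, None

out, alert = run_transformer(UPPERCASE_TRANSFORM, b"hello")
assert out == b"HELLO" and alert is None
```

In production the alert path would feed the centralized log analysis described below, and the subprocess would additionally be sandboxed (restricted filesystem and network access).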

Oversight of Protocol Break and Flow Control

If optical devices are used, monitor for disruptions and ensure integrity in flow control. Any errors may suggest a breach.

Monitoring the Internal Network

In isolated networks, a ‘known good’ monitoring style focusing on unusual communications can provide alerts for irregular activities.

Integrating Logs

To optimize oversight of the gateway and enhance operational security insights, consolidate logs from all components into a centralized analysis platform.

Ensure the validity of logs using proven techniques before analysis, as attackers could manipulate log entries to evade detection.
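One proven technique for tamper-evident logs is a MAC chain, sketched here: each entry's MAC covers the previous entry's MAC, so deleting or altering any entry breaks verification from that point on. Key management is out of scope for this sketch, and the key below is a placeholder:

```python
import hashlib
import hmac

KEY = b"replace-with-a-managed-secret"  # placeholder; use a managed key

def append_entry(chain, message: str):
    """Append (message, mac) where the MAC covers the previous MAC, linking
    entries into a tamper-evident chain."""
    prev_mac = chain[-1][1] if chain else b"\x00" * 32
    mac = hmac.new(KEY, prev_mac + message.encode(), hashlib.sha256).digest()
    chain.append((message, mac))

def verify_chain(chain) -> bool:
    """Recompute every MAC from the start; any edit or deletion breaks the
    chain from that entry onward."""
    prev_mac = b"\x00" * 32
    for message, mac in chain:
        expected = hmac.new(KEY, prev_mac + message.encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False
        prev_mac = mac
    return True

log = []
append_entry(log, "transfer accepted: report.csv")
append_entry(log, "verification passed")
assert verify_chain(log)
log[0] = ("transfer accepted: other.csv", log[0][1])  # simulated tampering
assert not verify_chain(log)
```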


Final Thoughts

This framework has evolved through practical application. While not foolproof, it offers robust defenses against attack and is applicable across a wide range of systems.

As always, introducing security measures necessitates a comprehensive understanding of the broader system, encompassing technology, personnel, and processes. Changes to transformation and verification may alter user experiences, so extensive testing is warranted before full deployment.

Based on an article from ncsc.gov.uk: https://www.ncsc.gov.uk/guidance/pattern-safely-importing-data
