This document proposes a scheme of security policy translation (i.e., Security Policy Translator) in Interface to Network Security Functions (I2NSF) Framework. When I2NSF User delivers a high-level security policy for a security service, Security Policy Translator in Security Controller translates it into a low-level security policy for Network Security Functions (NSFs). For this security policy translation, this document specifies the mapping between a high-level security policy based on the Consumer-Facing Interface YANG data model and a low-level security policy based on the NSF-Facing Interface YANG data model. Also, it describes an architecture of a security policy translator along with an NSF database, and the process of security policy translation with the NSF database.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 25 August 2022.¶
Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
This document defines a scheme for security policy translation in Interface to Network Security Functions (I2NSF) Framework [RFC8329]. First of all, this document explains the necessity of a security policy translator (called a policy translator for short) in the I2NSF framework.¶
The policy translator resides in Security Controller in the I2NSF framework and translates a high-level security policy into a low-level security policy for Network Security Functions (NSFs). A high-level policy is specified by I2NSF User in the I2NSF framework and is delivered to Security Controller via Consumer-Facing Interface [I-D.ietf-i2nsf-consumer-facing-interface-dm]. It is translated into a low-level policy by Policy Translator in Security Controller and is delivered to NSFs via NSF-Facing Interface [I-D.ietf-i2nsf-nsf-facing-interface-dm] so that they can execute the rules corresponding to the low-level policy.¶
Security Controller acts as a coordinator between I2NSF User and NSFs. Also, Security Controller has capability information of NSFs that are registered via Registration Interface [I-D.ietf-i2nsf-registration-interface-dm] by Developer's Management System [RFC8329]. As a coordinator, Security Controller needs to generate a low-level policy in the form of security rules intended by the high-level policy, which can be understood by the corresponding NSFs.¶
A high-level security policy is specified with RESTCONF/YANG [RFC8040][RFC6020], and a low-level security policy is specified with NETCONF/YANG [RFC6241][RFC6020]. The translation from a high-level security policy to the corresponding low-level security policy can accelerate the real-world deployment of I2NSF. A rule in a high-level policy can refer to a broad target object, such as the employees in a company, for a security service (e.g., firewall and web filter). Such employees may belong to the human resources (HR), software engineering, or advertisement departments. A keyword such as 'employees' needs to be mapped to the individual employees of these departments. This mapping needs to be handled by a security policy translator in a flexible way while understanding the intention of the policy specification. Let us consider the following two policies:¶
The above two sentences are examples of policies for blocking malicious websites. Both policies request the same operation. However, an NSF cannot understand the first policy, because it does not contain the specific information that an NSF needs. To set up the policy at an NSF, the NSF MUST receive at least the source IP address and the website address for the operation. This means that the first sentence is NOT directly usable as an NSF policy. Conversely, when I2NSF Users request a security policy from the system, they never write a security policy like the second example. To create a security policy like the second sentence, the user MUST know that the NSF needs to receive the specific information, namely the source IP address and the website address. In other words, the user needs expert knowledge of the NSF, but there are not many such expert users in a small company or in a residential area. In conclusion, I2NSF User prefers to issue a security policy like the first sentence, but an NSF requires the same policy in the form of the second sentence with specific information. Therefore, an advanced security policy translation scheme is REQUIRED in I2NSF.¶
This document proposes an approach based on Automata theory [Automata] for the policy translation, namely Deterministic Finite Automaton (DFA) and Context-Free Grammar (CFG). Note that Automata theory is the foundation of programming languages and compilers. With this approach, I2NSF User can easily specify a high-level security policy, which Security Policy Translator turns into a compatible low-level security policy that is enforced at the corresponding NSFs. Also, for easy management, a modularized translator structure is proposed.¶
Commonly used security policies are created as XML (Extensible Markup Language) [XML] files. A popular way to change the format of an XML file is to use an XSLT (Extensible Stylesheet Language Transformation) [XSLT] document. XSLT is an XML-based language for transforming an input XML file into another output XML file. However, the use of XSLT makes it difficult to manage the security policy translator and to handle the registration of new capabilities of NSFs. Given this necessity, this document describes a security policy translator based on Automata theory.¶
Figure 1 shows the overall design for Security Policy Translator in Security Controller. There are four main components for Security Policy Translator: Data Extractor, Data Converter, Policy Generator, and Data Model Mapper.¶
Data Extractor is a DFA-based module for extracting data from a high-level policy that I2NSF User delivers via Consumer-Facing Interface. Data Model Mapper creates a mapping model between the elements of Consumer-Facing Interface and NSF-Facing Interface. Data Converter converts the extracted data into the capabilities of the target NSFs for a low-level policy. It refers to an NSF Database (DB) in order to convert an abstract subject or object into the corresponding concrete subject or object (e.g., IP address and website URL). Policy Generator generates a low-level policy that enforces the NSF capabilities produced by Data Converter.¶
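The following Python sketch is a non-normative illustration of how the four components could be composed inside Security Policy Translator; the class and method names are hypothetical and are not defined by this document.¶

   # A minimal, hypothetical sketch of the modular translator pipeline.
   # The class and method names are illustrative only.

   class SecurityPolicyTranslator:
       def __init__(self, extractor, mapper, converter, generator):
           self.extractor = extractor    # DFA-based Data Extractor
           self.mapper = mapper          # Data Model Mapper
           self.converter = converter    # Data Converter (uses the NSF DB)
           self.generator = generator    # CFG-based Policy Generator

       def translate(self, high_level_policy_xml):
           # 1. Extract the abstract data from the high-level policy.
           data = self.extractor.extract(high_level_policy_xml)
           # 2. Map Consumer-Facing fields to NSF-Facing fields.
           field_map = self.mapper.map_fields(data)
           # 3. Convert abstract values (e.g., "Son's_PC") into concrete
           #    values (e.g., IP addresses) and select the target NSFs.
           nsfs, concrete = self.converter.convert(data, field_map)
           # 4. Generate one low-level policy per selected NSF.
           return {nsf: self.generator.generate(nsf, concrete)
                   for nsf in nsfs}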
Figure 2 shows a design for Data Extractor in the security policy translator. If a high-level policy contains data that follows the hierarchical structure of the standard Consumer-Facing Interface YANG data model [I-D.ietf-i2nsf-consumer-facing-interface-dm], the data can be easily extracted by a state transition machine such as a DFA. The extracted data can then be processed into a form that an NSF can understand. Data Extractor can be constructed by designing a DFA with the same hierarchical structure as the YANG data model.¶
After constructing the DFA, Data Extractor can extract all of the data in the entered high-level policy by using state transitions. Also, the DFA can easily detect grammar errors in the high-level policy. The extracting algorithm of Data Extractor is as follows:¶
To explain the Data Extractor process with an example scenario, assume that Security Controller received a high-level policy for web filtering as shown in Figure 3. Then a DFA-based Data Extractor can be constructed by using the design shown in Figure 2. Figure 4 shows the architecture of Data Extractor that is based on the architecture in Figure 2 along with the input high-level policy in Figure 3. Data Extractor can automatically extract all of the data in the high-level policy according to the following process:¶
The above process follows the extracting algorithm described earlier. After finishing all the steps of the above process, Data Extractor has extracted all of the data in Figure 3: 'block_web_security_policy', 'block_malicious', 'Son's_PC', 'malicious_websites', and 'drop'.¶
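The following Python sketch is a non-normative illustration of DFA-style extraction over a simplified high-level policy; the element names and the transition table are illustrative assumptions and do not reproduce the exact Consumer-Facing Interface structure of Figure 3.¶

   import xml.etree.ElementTree as ET

   # DFA transition table: (current state, element name) -> next state.
   # The special next state "extract" marks a leaf whose text is collected.
   TRANSITIONS = {
       ("policy", "name"): "extract",
       ("policy", "rules"): "rule",
       ("rule", "name"): "extract",
       ("rule", "condition"): "condition",
       ("rule", "action"): "extract",
       ("condition", "source"): "extract",
       ("condition", "destination"): "extract",
   }

   def extract(xml_text):
       """Extract all leaf data by walking the DFA over the policy tree."""
       extracted = []

       def walk(element, state):
           for child in element:
               next_state = TRANSITIONS.get((state, child.tag))
               if next_state is None:            # grammar error detection
                   raise ValueError("unexpected element <%s>" % child.tag)
               if next_state == "extract":
                   extracted.append((child.tag, (child.text or "").strip()))
               else:
                   walk(child, next_state)

       walk(ET.fromstring(xml_text), "policy")
       return extracted

   policy = """
   <policy>
     <name>block_web_security_policy</name>
     <rules>
       <name>block_malicious</name>
       <condition>
         <source>Son's_PC</source>
         <destination>malicious_websites</destination>
       </condition>
       <action>drop</action>
     </rules>
   </policy>
   """
   print(extract(policy))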
Since the translator is modularized into a DFA structure, its operation can be understood visually. Also, the performance of Data Extractor is better than searching one-to-one for the data of each particular field. In addition, management is efficient because the DFA exactly follows the hierarchy of Consumer-Facing Interface. If I2NSF User wants to modify the data model of a high-level policy, only the connections of the relevant DFA nodes need to be changed.¶
Every NSF has its own unique capabilities. The capabilities of an NSF are registered with Security Controller by Developer's Management System, which manages the NSF, via Registration Interface. Therefore, Security Controller already has all the information about the capabilities of the NSFs. This means that Security Controller can find the target NSFs with only the data of the high-level policy (e.g., the subject and object of a security policy) by comparing the extracted data with the capabilities of each NSF. This search process for appropriate NSFs is called policy provisioning, and it eliminates the need for I2NSF User to specify the target NSFs explicitly in a high-level security policy.¶
Data Converter selects the target NSFs and converts the extracted data into the capabilities of the selected NSFs. With this data converter, Security Controller can provide the policy provisioning function to I2NSF User automatically. Thus, the translator design provides significant benefits to the I2NSF Framework.¶
The NSF Database contains all the information needed to convert high-level policy data into low-level policy data. The contents of the NSF Database are classified into the following two categories: "endpoint information" and "NSF capability information".¶
The first is "endpoint information". Endpoint information is necessary to convert abstract high-level policy data such as 'Son's_PC' or 'malicious' into specific low-level policy data such as '10.0.0.1' or 'illegal.com'. In a high-level policy, the range of endpoints for applying a security policy MUST be provided abstractly. Thus, endpoint information is needed to make the abstract high-level policy data specific. Endpoint information is provided by I2NSF User as part of the high-level policy through Consumer-Facing Interface, and Security Controller builds the NSF Database based on the received information.¶
The second is "NSF capability information". Since a capability describes which features an NSF can support, NSF capability information is used in the policy provisioning process to search for the NSFs appropriate for a security policy. NSF capability information is provided by Developer's Management System (DMS) through Registration Interface, and Security Controller builds the NSF Database based on the received information. In addition, if an NSF sends monitoring information, such as initiating information, to Security Controller through NSF-Facing Interface, Security Controller can update the NSF Database accordingly.¶
Figure 5 shows an Entity-Relationship Diagram (ERD) of NSF Database designed to include both endpoint information received from I2NSF User and NSF capability information received from DMS. By designing the NSF database based on the ERD, all the information necessary for security policy translation can be stored, and the network system administrator can manage the NSF database efficiently.¶
The ERD is expressed using Crow's Foot notation. Crow's Foot notation represents a relationship between entities as a line and represents the cardinality of the relationship as symbols at both ends of the line. Attributes prefixed with * are the key values of each entity. A link with two vertical lines represents a one-to-one mapping, and a bird-shaped link represents a one-to-many mapping. An NSF entity stores the NSF name (nsf_name), the NSF specification (inbound, outbound, bandwidth), and the NSF activation (activated). A Capability entity stores the capability name (capa_name) and the index of the capability field in the Registration Interface Data Model (capa_index). An Endpoint entity stores the keyword for abstract data conversion from I2NSF User (keyword). A Field entity stores the field name (field_name), the index of the field in the NSF-Facing Interface Data Model, and the converted data obtained by referring to the Endpoint entity through the 'convert' relationship.¶
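The following Python/SQLite sketch is a non-normative illustration of an NSF Database derived from the ERD description above; the column types, the foreign keys, the column name field_index, and the sample values are assumptions made for illustration.¶

   import sqlite3

   conn = sqlite3.connect(":memory:")
   conn.executescript("""
   CREATE TABLE nsf (
       nsf_name  TEXT PRIMARY KEY,
       inbound   INTEGER,          -- NSF specification
       outbound  INTEGER,
       bandwidth INTEGER,
       activated INTEGER           -- NSF activation
   );
   CREATE TABLE capability (
       capa_name  TEXT PRIMARY KEY,
       capa_index INTEGER,         -- index in the Registration Interface DM
       nsf_name   TEXT REFERENCES nsf(nsf_name)
   );
   CREATE TABLE endpoint (
       keyword TEXT PRIMARY KEY    -- abstract keyword from I2NSF User
   );
   CREATE TABLE field (
       field_name  TEXT,
       field_index INTEGER,        -- index in the NSF-Facing Interface DM
       keyword     TEXT REFERENCES endpoint(keyword),  -- 'convert' relation
       converted   TEXT            -- concrete data (e.g., an IP address)
   );
   """)

   # Register an abstract endpoint keyword and its concrete conversion.
   conn.execute("INSERT INTO endpoint VALUES ('Son''s_PC')")
   conn.execute("INSERT INTO field VALUES "
                "('source-ipv4-address', 1, 'Son''s_PC', '10.0.0.1')")

   # Data conversion: look up the concrete data for an abstract keyword.
   print(conn.execute("SELECT field_name, converted FROM field "
                      "WHERE keyword = ?", ("Son's_PC",)).fetchone())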
Figure 6 shows an example describing a data conversion in Data Converter. High-level policy data MUST be converted into low-level policy data that is compatible with NSFs. If a system administrator attaches a database to Data Converter, Data Converter can convert the contents by referring to the database with SQL queries. The data conversion in Figure 6 is based on the following list:¶
When translating a policy, a mapping between the elements of the two data models is necessary to properly convert the data. Data Model Mapper creates a mapping model between the elements of the Consumer-Facing Interface Data Model and the NSF-Facing Interface Data Model. Each element in the Consumer-Facing Interface Data Model has at least one corresponding element in the NSF-Facing Interface Data Model.¶
Figure 7 shows a mapping list of data fields between the Consumer-Facing Interface Data Model and the NSF-Facing Interface Data Model, and describes in detail how, after the data conversion, each data value is passed to the appropriate data field of the data model.¶
The mapping list in Figure 7 shows all mapped components. This list should be saved into the NSF Database to provide the mapping information for converting the data. It is important to produce the list automatically, as the Consumer-Facing Interface and NSF-Facing Interface can be extended at any time by vendors according to the provided NSFs. Data Model Mapper in Security Policy Translator should be used to produce the mapping model information automatically.¶
Figure 8 shows the mapping process for I2NSF Security Policy Translator. The mapper uses the Consumer-Facing Interface and NSF-Facing Interface YANG Data Models as inputs. It processes each data model and converts it into a tree graph, which allows the data model to be processed as a tree instead of as individual elements. Then Data Model Mapper calculates the tree edit distance between each element in Consumer-Facing Interface and each element in NSF-Facing Interface. The tree edit distance can be calculated with an algorithm such as the Zhang-Shasha algorithm [Zhang-Shasha], where the calculation starts from the root of the tree.¶
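The following Python sketch is a non-normative illustration of this calculation between small fragments of the two data models; it assumes the third-party 'zss' package, which implements the Zhang-Shasha algorithm, and the node labels are simplified examples rather than the full data models.¶

   # Requires the third-party "zss" package (Zhang-Shasha implementation).
   from zss import Node, simple_distance

   # Fragment of the Consumer-Facing Interface model as a tree graph.
   cfi = (Node("condition")
          .addkid(Node("source"))
          .addkid(Node("destination")))

   # Fragment of the NSF-Facing Interface model as a tree graph.
   nfi = (Node("condition")
          .addkid(Node("source-ipv4-address"))
          .addkid(Node("destination-ipv4-address")))

   # The smaller the distance, the better the mapping candidate.
   print(simple_distance(cfi, nfi))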
The Zhang-Shasha algorithm calculates the distance using three operations:¶
The insert and delete operations simply add or delete a node (element), with a cost given by the length of the node's label. The cost of the change operation must be calculated between the labels of the two elements to produce the distance. There are several methods to calculate this, such as Levenshtein distance, cosine similarity, or sequence matching. For this data model mapper, cosine similarity should be the best choice, as it measures the similarity between words. The data models use similar words, which helps in keeping the calculated distance as small as possible.¶
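The following Python sketch is a non-normative illustration of a change-operation cost based on cosine similarity between the word tokens of two element labels; splitting labels on '-' is an assumption made for illustration.¶

   import math
   from collections import Counter

   def cosine_similarity(label_a, label_b):
       """Cosine similarity between the word-token vectors of two labels."""
       a = Counter(label_a.replace("-", " ").split())
       b = Counter(label_b.replace("-", " ").split())
       dot = sum(a[t] * b[t] for t in a)
       norm = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
       return dot / norm if norm else 0.0

   def change_cost(label_a, label_b):
       """Change-operation cost: identical labels cost 0, unrelated cost 1."""
       return 1.0 - cosine_similarity(label_a, label_b)

   print(change_cost("source", "source-ipv4-address"))       # low cost
   print(change_cost("source", "destination-ipv4-address"))  # cost of 1.0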
When the minimum distance is obtained, the corresponding NSF-Facing Interface element is saved as a candidate for mapping the Consumer-Facing Interface element. This information should be saved to the NSF Database for Data Converter.¶
Note that the proper mapping can be achieved because of the similarity between the Consumer-Facing Interface and NSF-Facing Interface. An extension created for the Consumer-Facing Interface or NSF-Facing Interface should keep a close similarity relationship between the data models so that the mapping model information can be produced automatically.¶
Generator searches for proper NSFs that can cover all of the capabilities in the high-level policy. Generator searches for the target NSFs by comparing only the NSF capabilities registered by Developer's Management System. This process is called "policy provisioning" because Generator finds the proper NSFs by using only the policy. If the target NSFs were found by using other data not included in the user's policy, it would mean that the user already has specific knowledge of an NSF in the I2NSF Framework. Figure 9 shows an example of policy provisioning. In this example, a log-keeper NSF and a web-filter NSF are selected to cover the capabilities in the security policy. All of the capabilities can be covered by the two selected NSFs.¶
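The following Python sketch is a non-normative illustration of policy provisioning as a capability covering problem, in the spirit of the example in Figure 9; the registered NSFs, the capability names, and the greedy selection strategy are illustrative assumptions.¶

   # Capabilities registered for each NSF (illustrative names and values).
   REGISTERED_NSFS = {
       "firewall":   {"ipv4-source", "ipv4-destination"},
       "web-filter": {"url-filtering", "drop"},
       "log-keeper": {"traffic-logging"},
   }

   def provision(required):
       """Greedily pick NSFs until all required capabilities are covered."""
       selected, remaining = [], set(required)
       while remaining:
           # Pick the NSF that covers the most still-uncovered capabilities.
           name, caps = max(REGISTERED_NSFS.items(),
                            key=lambda item: len(item[1] & remaining))
           if not caps & remaining:
               raise LookupError("no NSF covers: %s" % sorted(remaining))
           selected.append(name)
           remaining -= caps
       return selected

   # As in Figure 9, a policy that needs URL filtering, a drop action, and
   # logging is covered by the web-filter and log-keeper NSFs.
   print(provision({"url-filtering", "drop", "traffic-logging"}))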
Generator creates a low-level security policy for each target NSF with the extracted data. Generator is constructed by using a Context-Free Grammar (CFG). A CFG is a set of production rules that can describe all possible strings in a given formal language (e.g., a programming language). The low-level policy also has its own language based on the YANG data model of NSF-Facing Interface. Thus, the productions can be constructed based on that YANG data model. The productions that make up the low-level security policy are categorized into two types, 'Content Production' and 'Structure Production'.¶
Content Production is for injecting data into the low-level policies to be generated. A security manager (i.e., a person or software that creates productions for security policies) can construct Content Productions in the form of the following productions:¶
Square brackets denote a non-terminal state. If there are no non-terminal states left, the string has been completely generated. When duplication of the content tag is allowed, the security manager adds the first production for a rule. If there is no need to allow duplication, the first production can be skipped because it is optional.¶
The second production is the main production of Content Production because it generates the tag that contains the data for the low-level policy. Finally, the third production injects data into the tag generated by the second production. If the data changes for an NSF, the security manager needs to change 'only the third production' for the data mapping of that NSF.¶
For example, if the security manager wants to express a low-level policy for a URL, Content Production can be constructed with the following productions:¶
Structure Production is for grouping other tags into a hierarchy. The security manager can construct Structure Production in the form of the following production:¶
Structure Production can be expressed as a single production. The above production groups other tags under a tag whose name is given by 'struct_tag'. [prod_x] is a state for generating a tag to be grouped by Structure Production, and it can be either a Content Production or a Structure Production. For example, if the security manager wants to express the low-level policy for the I2NSF tag, which groups 'name' and 'rules', Structure Production can be constructed as the following production, where [cont_name] is the state for Content Production and [struct_rule] is the state for Structure Production.¶
The security manager can build a generator by combining the two types of productions described in Section 4.4.1 and Section 4.4.2. Figure 10 shows the CFG-based Generator construction for the web-filter NSF. It is constructed based on the NSF-Facing Interface Data Model in [I-D.ietf-i2nsf-nsf-facing-interface-dm]. According to Figure 10, the security manager can express productions for each clause as in the following CFG:¶
Then, Generator generates a low-level policy by using the above CFG. The low-level policy is generated by the following process:¶
The last production has no non-terminal state, so the low-level policy is completely generated. Figure 11 shows the generated low-level policy, with tab and newline characters added for readability.¶
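The following Python sketch is a non-normative illustration of how Content Productions and Structure Productions can be expanded to generate a low-level policy; the grammar, tag names, and data values are simplified stand-ins for the web-filter CFG of Figure 10, not its exact productions.¶

   # Extracted and converted data for the target web-filter NSF.
   DATA = {
       "name": "block_web_security_policy",
       "rule-name": "block_malicious",
       "url": "illegal.com",
       "action": "drop",
   }

   GRAMMAR = {
       # Structure productions: a tag name and the child non-terminals.
       "[policy]": ("i2nsf-security-policy",
                    ["[cont_name]", "[struct_rule]"]),
       "[struct_rule]": ("rules",
                         ["[cont_rule_name]", "[cont_url]", "[cont_action]"]),
       # Content productions: a tag name and the data key to inject.
       "[cont_name]":      ("name", "name"),
       "[cont_rule_name]": ("rule-name", "rule-name"),
       "[cont_url]":       ("url", "url"),
       "[cont_action]":    ("action", "action"),
   }

   def expand(symbol):
       """Expand a non-terminal until no non-terminal states remain."""
       tag, body = GRAMMAR[symbol]
       if isinstance(body, list):     # Structure Production: group children
           inner = "".join(expand(child) for child in body)
       else:                          # Content Production: inject the data
           inner = DATA[body]
       return "<%s>%s</%s>" % (tag, inner, tag)

   print(expand("[policy]"))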
The implementation considerations in this document include the following three: "data model auto-adaptation", "data conversion", and "policy provisioning".¶
Security Controller, which acts as the intermediary, MUST process the data according to the data models of the connected interfaces. However, the data models can change flexibly depending on the situation, and Security Controller needs to adapt to such changes. Therefore, Security Controller can be implemented so that the security policy translator easily adapts to changes of the data models.¶
The translator constructs and uses the DFA to adapt to the Consumer-Facing Interface Data Model. In addition, the CFG is constructed and used to adapt to the NSF-Facing Interface Data Model. Both the DFA and the CFG follow the same tree structure as the YANG Data Model.¶
The DFA starts at the root node and proceeds by changing its state according to the input. Based on the YANG Data Model, a container node is defined as a middle state and a leaf node is defined as an extractor node. Then, if the nodes are connected in the same way as the hierarchical structure of the data model, Security Controller can automatically construct the DFA. The DFA can be conveniently built by investigating the link structure with a stack, starting from the root node.¶
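As a non-normative illustration, the following Python sketch builds such a transition table from a simplified, assumed model tree using a stack-based traversal; the node names and state representation are illustrative only.¶

   # A YANG-like model tree: containers map to dicts, leaves map to None.
   MODEL = {
       "policy": {
           "name": None,
           "rules": {
               "name": None,
               "condition": {"source": None, "destination": None},
               "action": None,
           },
       },
   }

   def build_dfa(model):
       """Build (state, element) -> next-state transitions with a stack;
       containers become middle states and leaves become extractor states."""
       transitions = {}
       stack = [("root", model)]
       while stack:
           state, children = stack.pop()
           for name, sub in children.items():
               if sub is None:                    # leaf -> extractor state
                   transitions[(state, name)] = "extract"
               else:                              # container -> middle state
                   transitions[(state, name)] = name
                   stack.append((name, sub))
       return transitions

   for key, value in sorted(build_dfa(MODEL).items()):
       print(key, "->", value)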
The CFG starts at the leaf nodes, which are grouped into clauses until all the nodes are merged into one node. A leaf node is defined as a content production, and a container node is defined as a structure production. Then, if the nodes are connected in the same way as the hierarchy of the data model, Security Controller can automatically construct the CFG. The CFG can be conveniently constructed by investigating the link structure with a priority queue, starting from the leaf nodes.¶
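Similarly, the following non-normative Python sketch derives Content and Structure Productions from a simplified, assumed model tree by processing nodes leaf-first with a priority queue; the model layout and production format are illustrative only.¶

   import heapq

   MODEL = {
       "policy": {
           "name": None,
           "rules": {"condition": {"source": None}, "action": None},
       },
   }

   def build_cfg(model):
       """Build productions leaf-first so that every child non-terminal is
       defined before the structure production that groups it."""
       heap, productions = [], {}

       def collect(name, sub, depth):
           heapq.heappush(heap, (-depth, name, sub))   # deepest nodes first
           if sub is not None:
               for child, grandchildren in sub.items():
                   collect(child, grandchildren, depth + 1)

       for name, sub in model.items():
           collect(name, sub, 0)
       while heap:
           _, name, sub = heapq.heappop(heap)
           if sub is None:        # leaf -> Content Production
               productions["[%s]" % name] = "<%s>[data_%s]</%s>" % (
                   name, name, name)
           else:                  # container -> Structure Production
               children = "".join("[%s]" % child for child in sub)
               productions["[%s]" % name] = "<%s>%s</%s>" % (
                   name, children, name)
       return productions

   for lhs, rhs in build_cfg(MODEL).items():
       print(lhs, "->", rhs)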
Security Controller requires the ability to materialize the abstract data in a high-level security policy and forward it to NSFs. Security Controller can receive endpoint information as keywords through the high-level security policy. If the endpoint information corresponding to a keyword is mapped and the query is sent to the NSF Database, the information necessary for data conversion can be conveniently registered in the NSF Database. When I2NSF User tries to establish a policy through a keyword, Security Controller searches for the details corresponding to the keyword registered in the NSF Database and converts the keyword into the appropriate specific data.¶
This document has stated that the policy provisioning function is necessary to enable users without expert security knowledge to create policies. Policy provisioning is determined by the capabilities of the NSFs. If an NSF has capabilities that match those required by the policy, its probability of being selected increases.¶
Most importantly, the selected NSFs need to be able to perform all of the capabilities in the security policy. This document recommends a study of policy provisioning algorithms that are highly efficient and can satisfy all capabilities in the security policy.¶
First, by visualizing the translator structure, the security manager can handle various policy changes. The translator can be presented by visualizing the DFA and the Context-Free Grammar so that the manager can easily understand the structure of Security Policy Translator.¶
Second, as long as I2NSF User keeps the hierarchy of the data model, I2NSF User can freely create high-level policies. In the case of the DFA, data extraction is performed in the same way even if the order of the input is changed. The design of the security policy translator is therefore more flexible than existing methods that require the exact position and order of tags.¶
Third, the structure of Security Policy Translator can be updated even while Security Policy Translator is operating. Because Security Policy Translator is modularized, the translator can adapt to changes in NSF capabilities while the I2NSF framework is running. The function of changing the translator's structure can be provided through Registration Interface.¶
There is no security concern in the proposed security policy translator as long as the I2NSF interfaces (i.e., Consumer-Facing Interface, NSF-Facing Interface, and Registration Interface) are protected by secure communication channels.¶
This document does not require any IANA actions.¶
This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Ministry of Science and ICT (MSIT), Korea, (R-20160222-002755, Cloud based Security Intelligence Technology Development for the Customized Security Service Provisioning). This work was supported in part by the IITP (2020-0-00395, Standard Development of Blockchain based Network Management Automation Technology). This work was supported in part by the MSIT under the Information Technology Research Center (ITRC) support program (IITP-2021-2017-0-01633) supervised by the IITP.¶
The following changes are made from draft-yang-i2nsf-security-policy-translation-09:¶