This document specifies a paradigm named Computerate Specifying, designed to simultaneously document and formally specify communication protocols. This paradigm can be applied to any document produced by any Standards Developing Organization (SDO), but this document specifically targets documents produced by the IETF.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 10 March 2022.¶
Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
If, as the unofficial IETF motto states, we believe that "running code" is an important part of the feedback provided to the standardization process, then as per the Curry-Howard equivalence [Curry-Howard] (that states that code and mathematical proofs are the same), we ought to also believe that "verified proof" is an equally important part of that feedback. A verified proof is a mathematical proof of a logical proposition that was mechanically verified by a computer, as opposed to just peer-reviewed.¶
The "Experiences with Protocol Description" paper from Pamela Zave [Zave11] gives three conclusions about the usage of formal specifications for a protocol standard. The first conclusion states that informal methods (i.e. the absence of verified proofs) are inadequate for widely used protocols. This document is based on the assumption that this conclusion is correct, so its validity will not be discussed further.¶
The second conclusion states that formal specifications are useful even if they fall short of the "gold standard" of a complete formal specification. We will show that a formal specification can be incrementally added to a document.¶
The third conclusion from Zave's paper states that the normative English language should be paraphrasing the formal specification. The difficulty here is that to be able to keep the formal specification and the normative language synchronized at all times, these two should be kept as physically close as possible to each other.¶
To do that we introduce the concept of "Computerate Specifying" (note that Computerate is a British English word). "Computerate Specifying" is a play on "Literate Computing", itself a play on "Structured Computing" (see [Knuth92] page 99). In the same way that Literate Programming enriches code by interspersing it with its own documentation, Computerate Specifying enriches a standard specification by interspersing it with code (or with proofs, as they are the same thing), making it a computerate specification.¶
Note that computerate specifying is not specific to the IETF, just like literate computing is not restricted to the combination of TeX and Pascal described in Knuth's paper. What this document describes is a specific instance of computerate specifying that combines [AsciiDoc] as the formatting language and [Idris2] as the programming language, with the goal of formally specifying IETF protocols.¶
The remainder of this document is divided into three parts:¶
After the Terminology (Section 3) section starts a tutorial on how to write a specification. This tutorial is meant to be read in sequence, as concepts defined in earlier parts will not be repeated later. On the other hand, the tutorial is designed to present information progressively and mostly in order of complexity, so it is possible to start writing effective specifications without reading or understanding the whole tutorial.¶
The tutorial begins by explaining how to write private specifications (Section 4), which are specifications that are not meant to be shared. Then the tutorial continues by explaining how to write a self-contained specification (Section 5), which is a specification that contains Idris code relying only on the Idris Standard Library. Writing self-contained specifications is difficult and time-consuming, so the tutorial continues by explaining how to import specifications (Section 6) that contain reusable types and code. The tutorial ends with explanations on how to design an exportable specification (Section 7).¶
After the tutorial comes the description of all the packages and modules in the Computerate Specifying Standard Library (Section 8).¶
Appendix A explains how to install and use the associated tooling, Appendix B contains the reference manual for the standard library, Appendix D explains how to convert Colored Petri Nets in a form that can be used in specifications, Appendix E is a tutorial on using Programs and Types to prove propositions, Appendix F explains the distributed architecture of the standard library, and Appendix G describes the format of the files distributed in the standard library.¶
Literate Idris2 code embedded in an AsciiDoc document, containing both formal descriptions and human language texts, and which can be processed to produce documents in human language.¶
Any text that contains the documentation of a protocol in the English language. A document is the result of processing a specification.¶
A specification created after a document was published, such that the generated document coincides with the published document.¶
In this document, the same word can be used either as an English word or as an Idris identifier used inside the text. To explicitly differentiate them, the latter is always displayed in a monospaced font. E.g., a monospaced IdrisDoc is meant to convey the fact that IdrisDoc in that case is an Idris module or type, whereas the word IdrisDoc refers to the IdrisDoc specification.¶
Similarly blocks of code, including literate code, are always sandwiched between "<CODE BEGINS>" and "<CODE ENDS>". Code blocks will be presented in their literate form only when necessary, i.e. when mixed AsciiDoc and Idris are required. However, in a computerate specification, Idris code must in fact be used in its literate form.¶
By convention, Idris functions that return a type, as well as types themselves, always start with an uppercase letter. Functions not returning a type start with a lowercase letter.¶
For the standard library, type names are also formed by taking the English word or expression, making the first letter of each word uppercase, and removing any symbols like underscores, dashes and spaces. Thus bitvector would become "Bitvector" after conversion as a type name, but bit diagram would become "BitDiagram".¶
"Metanorma" is a trademark of Ribose Inc.¶
Nowadays documents at the IETF are written in a format named xml2rfc v3 [RFC7991] but unfortunately making that format Computerable is not trivial, mostly because there is no simple solution to mix code and XML together in the same file. Instead, the [AsciiDoc] format forms the basis for specifications as it permits the generation of documents in the xml2rfc v3 format (among other formats) and also because it can be enriched with code in the same file.¶
AsciiRFC [I-D.ribose-asciirfc] and [Metanorma-IETF] describe a backend for the [Asciidoctor] tool that converts an AsciiDoc document into an xml2rfc v3 document. The AsciiRFC document states various reasons why AsciiDoc is a superior format for the purpose of writing standards, so that will not be discussed further. Note that the same team developed Asciidoctor backends [Metanorma] for other Standards Developing Organizations (SDO), making it easy to develop computerate specifications targeting the documents developed by these SDOs.¶
The code in a computerate specification uses the programming language [Idris2] in literate programming [Knuth92] mode using the Bird style, with each line of code starting with a ">" mark in the first column.¶
That same symbol is also used by AsciiDoc as an alternate way of defining a blockquote [Blockquotes], a usage which is no longer available in a computerate specification. Bird-style code will simply not appear in the rendered document.¶
The result of Idris code execution can be inserted inside the AsciiDoc part of a specification by enclosing that code fragment between the "{`" and "`}" strings. That code fragment must return a value of a type that implements the Show interface.¶
A computerate specification is processed by an Asciidoctor preprocessor that does the following:¶
For instance the following document fragment taken from the computerate specification of [RFC8489]:¶
<CODE BEGINS>
> rto : Int
> rto = 500
>
> rc : Nat
> rc = 7
>
> rm : Int
> rm = 16
>
> -- A stream of transmission times
> transmissions : Int -> Int -> Stream Int
> transmissions value rto = value :: transmissions (value + rto)
>                                                  (rto * 2)
>
> -- A specific transmission time
> transmission : Int -> Nat -> Int
> transmission timeout i = index i $ transmissions 0 timeout
>
> a1 : Int
> a1 = rto
>
> a2 : String
> a2 = concat (take (rc - 1) (map (\t => show t ++ " ms, ")
>      (transmissions 0 rto))) ++ "and " ++ show (transmission rto
>      (rc - 1)) ++ " ms"
>
> a3 : Int
> a3 = transmission rto (rc - 1) + rto * rm

For example, assuming an RTO of {`a1`}ms, requests would be sent at
times {`a2`}. If the client has not received a response after {`a3`}
ms, the client will consider the transaction to have timed out.
<CODE ENDS>¶
is rendered as¶
To render a line starting with the > character as text (e.g. in an example block) instead of as a line of code, prepend it with a \ character. Similarly, to render the {` string, prepend it with a \ character, but only in a literate file.¶
The Idris2 programming language has been chosen because its type system supports dependent and linear types [Brady17], and that type system is the language in which propositions are written. The Idris2 programming language also has reflection capabilities and support for meta-programming, also known as elaboration.¶
Following Zave's second conclusion, computerate specifying is not restricted to the specification of protocols, or to property proving. There is a whole spectrum of formalism that can be introduced in a specification, and we will present it in the remaining sections in increasing order of complexity. Note that because the specification language is a programming language, these usages are not exhaustive, and plenty of other usages can and will be found after the publication of this document.¶
Unlike an RFC, which is immutable after publication, the types and code in a specification will be improved over time, especially as new properties are proved or disproved. The latter will happen when a bug is discovered in a specification and a proof of negation is added to the specification, paving the way to a revision of the standard.¶
A private specification is a specification that is not meant to be shared. There are mainly two reasons for a specification to be kept private, as explained in the next sections.¶
In the simplest case, writing a specification with the goal of publishing the resulting document does not require sharing that specification. This is quite similar to what was done with xml2rfc before the IETF adopted RFC 7991 as the canonical format for Internet-Drafts and RFCs; most people used xml2rfc to prepare their document, but did not share the xml2rfc file beyond the co-authors of the document.¶
In that case writing a specification is straightforward: write the specification from scratch using AsciiDoc for the text and Idris for the formal parts.¶
One effective rule to quickly discover that the Idris code and the AsciiDoc document are diverging is to keep both of them as close as possible to each other. This is most effectively done by having the matching Idris code right after each AsciiDoc paragraph, so that it is easy to compare one to the other.¶
Idris itself imposes an order in which types and code must be declared and defined, because it does not by default look for forward references. Because, by the rule above, the text will follow the order in which the Idris code is organized, the document generated by such a specification tends to be naturally easier to implement, because it induces the same workflow that a software implementer will follow when implementing the document.¶
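As a minimal illustration of that ordering constraint (the functions below are purely illustrative and not part of any specification), a helper function has to be defined before the function that uses it:¶
<CODE BEGINS>
-- sumAll must appear before average, as Idris does not by default
-- look for forward references.
sumAll : List Int -> Int
sumAll = foldl (+) 0

average : List Int -> Int
average xs = sumAll xs `div` cast (length xs)
<CODE ENDS>¶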
[RFC8489] and [RFC8656] are examples of standards that were made easier to implement because they follow the order in which a software developer will develop each component.¶
Idris package description files can also be used in literate mode by using the .lipkg extension and putting the > character at the beginning of each line that contains package syntax. This permits merging the top AsciiDoc document (which contains mostly includes of various sections written either as AsciiDoc or literate documents) with the package description.¶
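As an illustration, a minimal literate package description could look like the following sketch (the package name, the dependency and the file names are purely illustrative):¶
<CODE BEGINS>
> package example-spec
> depends = contrib

include::introduction.adoc[]

include::protocol.lidr[]
<CODE ENDS>¶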
A second reason to write a private specification is for the purpose of doing a review of an existing document, most likely of an Internet-Draft during the standardization process.¶
This is done by first turning the existing document into a specification by converting it into an AsciiDoc document, which can be done either manually or by using the rfc2mn program distributed with the tooling (Appendix A.4). After this step, the specification can be enriched by adding some Idris code and replacing some of the text with the execution of Idris code fragments. Comparing the original document with a document generated by processing the specification permits validating that the original document is correct with regard to the formalism introduced.¶
Documents that are not generated from a specification do not always have a structure that follows the way a software developer will implement them. When that is the case it will be difficult to add the Idris code right after a paragraph describing its functionality, as the code may not type-check because of the lack of support for forward references. It could be a signal that the text needs to be reorganized to be more software-development friendly.¶
One alternative is to use a technique named self-inclusion, which makes it possible to change the order of paragraphs in an AsciiDoc document while keeping the Idris code in an order that type-checks.¶
This is done by using tags to delimit the text that needs to be copied. The ifdef/endif directives prevent the text from being displayed there:¶
<CODE BEGINS>
ifdef::never[]
// tag::para1[]
Text that describes a functionality
// end::para1[]
endif::never[]

> -- Code that implements that functionality.
<CODE ENDS>¶
Then a self-include can insert the text inside the tags at a different place, without changing the order of the Idris code:¶
<CODE BEGINS>
include::Main.adoc[tag=para1]
<CODE ENDS>¶
Another use for self-inclusion is to insert Idris code in an example or code block. To do that, add the code directly as code in the source, and then self-include it in the block. This guarantees that the example is always type-checked. In that case the > character at the beginning of each line is automatically escaped.¶
<CODE BEGINS>
> -- tag::proof[]
> total currying : ((a, b) -> c) -> (a -> b -> c)
> currying f = \x => \y => f (x, y)
> -- end::proof[]

----
include::MyFile.lidr[tag=proof]
----
<CODE ENDS>¶
Note that the IETF Trust licenses [TLP5] do not grant permission to distribute an annotated Internet-Draft as a whole, so redistributing such a specification would be a copyright infringement. But as in this case the specification is not meant to be distributed, there is no infringement possible.¶
A self-contained specification is a specification where the Idris code does not use anything but the types and functions defined in the Idris standard library, thus requiring nothing to be installed other than Idris2 itself.¶
A specification uses Idris types to specify both how streams of bits are arranged to form valid Protocol Data Units (PDUs) and how the exchange of PDUs between network elements is structured to form a valid protocol. In addition, a specification can be used to prove or disprove a variety of properties for these types.¶
The PDUs in a communication protocol determine how data is laid out before it is sent over a communication link. Generally a PDU is described only in the context of the layer that this particular protocol is operating at, e.g. an application protocol PDU only describes the data as sent over UDP or TCP, not over Ethernet or Wi-Fi.¶
PDUs can generally be split into two broad categories, binary and text, and a protocol PDU mostly falls into one of these two categories.¶
PDU descriptions can be defined as specifications for at least three reasons: the generation of examples that are correct by construction, correctness in displaying the result of calculations, and correctness in representing the structure of a PDU. Independently of these reasons, a PDU description is a basic component of a specification that will probably be needed regardless.¶
Examples in protocol documents are frequently incorrect, which proves to have a significant negative impact as they are too often misused as normative text. See Appendix C for statistics about the frequency of incorrect examples in RFC errata.¶
Ensuring example correctness is achieved by adding the result of a computation (the example) directly inside the document. If that computation is done from a type that is (physically and conceptually) close to the normative text, then we gain some level of assurance that both the normative text and the derived examples will match.¶
Generating an example that is correct by construction always starts by defining a type that describes the format of the data to display. The Internet Header Format in section 3.1 of [RFC0791] will be used in the following sections as example.¶
In this section we start by defining an Idris type, using a Generalized Algebraic Data Type (GADT). In this case we have only one constructor (MkInternetHeader), which is defined as a Product Type that "concatenates" all the fields of the Internet Header. One specific aspect of Idris types is that we can enrich the definition of each field with constraints that then have to be fulfilled when a value of that type is built.¶
<CODE BEGINS>
> data InternetHeader : Type where
>   MkInternetHeader :
>     (version : Int) -> version = 4 =>
>     (ihl : Int) -> ihl >= 5 && ihl < 16 = True =>
>     (tos : Int) -> tos >= 0 && tos <= 256 = True =>
>     (length : Int) -> length >= (5 * 4) && length < 65536 = True =>
>     (id : Int) -> id >= 0 && id < 65536 = True =>
>     (flags : Int) -> flags >= 0 && flags < 16 = True =>
>     (offset : Int) -> offset >= 0 && offset < 8192 = True =>
>     (ttl : Int) -> ttl >= 0 && ttl < 256 = True =>
>     (protocol : Int) -> protocol >= 0 && protocol < 256 = True =>
>     InternetHeader
<CODE ENDS>¶
where¶
An Idris type where the fields in a constructor are organized like in InternetHeader, by ordering them in a sequence, is called a Pi type - or, when there are no dependencies between fields such as the one in version = 4, a Product type. Although there is no equivalent to a Pi type in most programming languages, Product types are known as classes in Java and structs in C.¶
Another way to organize a type is called the Sum type, which is a type with multiple constructors that act as alternatives. Sum types can be used in C with a combination of struct and union, and since Java 14 by using sealed records.¶
Sum types have a dependent counterpart named the Sigma type, which is a tuple in which the type of the second element depends on the value of the first element. Sigma types are mostly returned by functions, with the returned Sigma type carrying both a value and a proof of the validity of that value.¶
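The following sketch (purely illustrative and not part of any specification) shows a function returning a Sigma type, here a dependent pair carrying a list together with a proof that this list is not empty:¶
<CODE BEGINS>
import Data.List

-- The returned dependent pair carries both the value and a proof of
-- the validity of that value (here, its non-emptiness).
prepend : a -> List a -> (xs : List a ** NonEmpty xs)
prepend x ys = (x :: ys ** IsNonEmpty)
<CODE ENDS>¶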
From that point it is possible to define a value that fulfills all the constraints. The following values are taken from example 1 in [RFC0791] Appendix A.¶
<CODE BEGINS>
> example1 : InternetHeader
> example1 = MkInternetHeader 4 5 0 21 111 0 0 123 1
<CODE ENDS>¶
The => symbol after a constraint indicates that Idris should try to automatically find a proof that this constraint is met by the values in the example, which it successfully does in the example above.¶
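The same mechanism can be seen in isolation in the following sketch (the checkedIhl function is illustrative and not part of any specification): the constraint placed before the => symbol is a proof that Idris searches for at the call site.¶
<CODE BEGINS>
checkedIhl : (ihl : Int) -> ihl >= 5 && ihl < 16 = True => Int
checkedIhl ihl = ihl

ok : Int
ok = checkedIhl 5  -- the proof is found automatically

-- checkedIhl 20 would be rejected, as no proof can be found
<CODE ENDS>¶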
The following example, where the constraints defined in the InternetHeader type are not met, will not type-check in Idris (an error message will be generated) and thus can not be used to generate an example.¶
<CODE BEGINS>
example1' : InternetHeader
example1' = MkInternetHeader 6 5 0 21 111 0 0 123 1
<CODE ENDS>¶
The next step is to define an Idris function that converts a value of the type InternetHeader into the kind of bit diagram that is shown in Appendix A of [RFC0791].¶
<CODE BEGINS>
> Show InternetHeader where
>   show (MkInternetHeader version ihl tos length id flags offset
>         ttl protocol) = ?showPrec_rhs_1
<CODE ENDS>¶
Here we implement the Show interface, which permits defining the ad hoc polymorphic function show for InternetHeader, the function that will convert the value into the right character string. Idris names starting with a question mark, like ?showPrec_rhs_1, are so-called holes, which are placeholders for code to be written, while still permitting type-checking.¶
After replacing the hole with the actual code, the following embedded code can be used in the document to generate an example that is correct by construction, at least up to mistakes in the specification (i.e. the constraints in InternetHeader) and bugs in the show function.¶
<CODE BEGINS>
....
{`example1`}
....
<CODE ENDS>¶
will generate the equivalent AsciiDoc text:¶
<CODE BEGINS>
....
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |Ver= 4 |IHL= 5 |Type of Service|        Total Length = 21      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |      Identification = 111     |Flg=0|   Fragment Offset = 0   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |   Time = 123  |  Protocol = 1 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
....
<CODE ENDS>¶
This generated example is similar to the first of the examples in appendix A of RFC 791.¶
The previous section showed how to define a type that precisely describes a PDU, how to generate examples that are values of that type, and how to insert them in a document.¶
Our specification, which has the form of an Idris type, can be seen as a generalization of all the possible examples for that type. Now that we went through the effort of precisely defining that type, it would be useful to use it to also calculate statements about that syntax.¶
In RFC 791 the description of the field IHL states "[...]that the minimum value for a correct header is 5." The origin of this number may be a little mysterious, so it is better to use a formula to calculate it and insert the result instead.¶
Inserting a calculation is easy:¶
<CODE BEGINS>
Note that the minimum value for a correct header is
{`sizeHeader `div` ihlUnit`}.

> sizeHeader : Int
> sizeHeader = 20

> ihlUnit : Int
> ihlUnit = 4
<CODE ENDS>¶
Here we can insert a code fragment that is using a function that is defined later in the document because the Idris code is evaluated before the document is processed.¶
Note the difference with examples: the number 5 is not an example of a value of the type InternetHeader, but a property of that type.¶
Systematically using the results of calculations on types in a specification makes it more resistant to mistakes that are introduced as a result of modifications.¶
The layout of a PDU, i.e. the size and order of the fields that compose it, can be represented in a document in various forms. One of them is just an enumeration of these fields in order, each field identified by a name and accompanied by some description of that field in the form of the number of bits it occupies in the PDU and how to interpret these bits.¶
That layout can also be presented as text, as a list, as a table, as a bit diagram, at the convenience of the document author. In all cases, some parts of the description of each field can be extracted from our Idris type just like we did in Section 5.1.2.¶
RFC 791 section 3.1 represents the PDUs defined in it both as bit diagrams and as lists of fields.¶
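As a sketch of how such a representation can be derived (the functions below are illustrative and do not use the library described later in this document), the names and bit widths of the first fields of the Internet Header, following RFC 791 section 3.1, can be kept in a single list and rendered in whichever form is convenient:¶
<CODE BEGINS>
import Data.String

fields : List (String, Nat)
fields = [("Version", 4), ("IHL", 4), ("Type of Service", 8),
          ("Total Length", 16), ("Identification", 16), ("Flags", 3),
          ("Fragment Offset", 13), ("Time to Live", 8), ("Protocol", 8)]

-- Rendering the same information as an AsciiDoc bullet list; a table
-- or a bit diagram could be generated from the same list.
fieldList : String
fieldList = unlines (map (\f => "* " ++ fst f ++ ": " ++
                                show (snd f) ++ " bits") fields)
<CODE ENDS>¶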
A network protocol, which is how the various PDUs defined in a document are exchanged between network elements, can always be understood as a set of state machines. Unlike PDUs, which are generally described in a way that is close to their Idris counterpart, state machines in a document are generally described only as text.¶
Note that, just like an Idris representation of a PDU should also contain all the possible constraints on that PDU but not more, a state machine should contain all the possible constraints in the exchange of PDUs, but not less.¶
This issue is most visible in one of the two state machines defined in RFC 791, the one for fragmenting IP packets (the other is for unfragmenting packets). The text describes two different algorithms to fragment a packet, but in that case each algorithm should be understood as one instance of a more general state machine. That state machine describes all the possible sequences of fragments that can be generated by an algorithm that is compliant with RFC 791, and it would be an Idris type that is equivalent to the following algorithm:¶
For a specific packet size, generate a list of all the binary values {b0,.., bN} with N being the packet size divided by 8 and rounded-up, and 0..N representing positional indexes for each of the 8 byte chunks of the packet.¶
For each binary value in that list, generate a list of values that represents the number of consecutive bits of the same value (e.g. 0b110001011 generates a [2, 3, 1, 1, 2] list), each such sequence representing a given fragment (a sketch of this step appears after this list).¶
Remove from that list of lists any list that contains a number that, after multiplication by 8, is higher than the maximum size of a fragment.¶
For each remaining list in that list, generate the list of fragments, i.e. with the correct offset, length and More bit.¶
Generate all the possible permutations for each list of fragments.¶
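The following sketch (not part of the original algorithm description) illustrates the second step above, with the packet modeled as a list of booleans:¶
<CODE BEGINS>
-- Turn each run of consecutive equal bits into its length, so that
-- [True,True,False,False,False,True,False,True,True] becomes
-- [2, 3, 1, 1, 2].
runs : List Bool -> List Nat
runs [] = []
runs (b :: bs) = go 1 b bs
  where
    go : Nat -> Bool -> List Bool -> List Nat
    go n cur [] = [n]
    go n cur (x :: xs) = if x == cur
                            then go (S n) cur xs
                            else n :: go 1 x xs
<CODE ENDS>¶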
We can see that this state machine takes into account the fact that an IP packet can not only be fragmented into fragments of various sizes - as long as the constraints are respected - but also that these fragments can be sent in any order.¶
Then the algorithms described in the document can be seen as generating a subset of all the possible lists of fragments that can be generated by our state machine. It is then easy to check that these algorithms cannot generate fragment lists that cannot be generated by our state machine.¶
As a consequence, the unfragment state machine must be able to regenerate a valid unfragmented packet for any of the fragment lists generated by our fragment state machine. Furthermore, the unfragment state machine must also take into account fragment lists that are modified by the network (itself defined as a state machine) in the following ways:¶
fragments can be dropped;¶
the fragments order can change (this is already covered by the fact that our fragment state machine generates all possible orders);¶
fragments can be duplicated multiple times;¶
fragments can be delayed;¶
fragments can be received that were never sent by the fragment state machine.¶
Then the algorithm described in the document can be compared with the unfragment state machine to verify that all states and transitions are covered.¶
Defining a state machine in Idris can be done in an ad-hoc way [Linear-Resources], particularly by using linear types that express the consumption of resources.¶
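The following sketch (purely illustrative; the linear-types refinement described in [Linear-Resources] is not shown) gives the flavor of that ad-hoc approach, with the type indices tracking the state of the machine before and after each transition:¶
<CODE BEGINS>
data TxState = Idle | Waiting

data Transition : TxState -> TxState -> Type where
  SendRequest     : Transition Idle Waiting
  Retransmit      : Transition Waiting Waiting
  ReceiveResponse : Transition Waiting Idle

-- A trace is a sequence of transitions whose states line up; invalid
-- sequences are rejected by the type checker.
data Trace : TxState -> TxState -> Type where
  Done : Trace s s
  Step : Transition s1 s2 -> Trace s2 s3 -> Trace s1 s3

goodTrace : Trace Idle Idle
goodTrace = Step SendRequest (Step Retransmit (Step ReceiveResponse Done))
<CODE ENDS>¶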
Under the Curry-Howard equivalence, the Idris types that we created to describe PDUs and state machines are formal logic propositions, and being able to construct values of these types (like we did for the examples) is a proof that these propositions are true. These are also called internal verifications [Stump16].¶
External verifications are made of additional propositions (as Idris types) and proofs (as code for these types) with the goal of verifying additional properties.¶
One kind of proof that one would want in a specification is related to isomorphism, i.e. a guarantee that two or more descriptions of a PDU or a state machine contain exactly the same information, but there are others.¶
The Idris types that are used for generating examples, calculations or representations are generally very close to the bit structure of the PDU. But some properties may be better expressed by defining types that are more abstract. We call the former Wire Types, and the latter Semantic Types.¶
As an example, the type in Section 5.1.1 is a wire type, because it follows exactly the PDU layout. But fragmentation can be more easily described using the following semantic type:¶
<CODE BEGINS>
> data InternetHeader' : Type where
>   Full : (ihl : Int) -> ihl >= 5 && ihl < 16 = True =>
>     (tos : Int) -> tos >= 0 && tos <= 256 = True =>
>     (length : Int) -> length >= (5 * 4) &&
>       length < 65536 = True =>
>     (ttl : Int) -> ttl >= 0 && ttl < 256 = True =>
>     (protocol : Int) -> protocol >= 0 &&
>       protocol < 256 = True =>
>     InternetHeader'
>   First : (ihl : Int) -> ihl >= 5 && ihl < 16 = True =>
>     (tos : Int) -> tos >= 0 && tos <= 256 = True =>
>     (length : Int) -> length >= (5 * 4) &&
>       length < 65536 = True =>
>     (id : Int) -> id >= 0 && id < 65536 = True =>
>     (ttl : Int) -> ttl >= 0 && ttl < 256 = True =>
>     (protocol : Int) -> protocol >= 0 &&
>       protocol < 256 = True =>
>     InternetHeader'
>   Next : (ihl : Int) -> ihl >= 5 && ihl < 16 = True =>
>     (tos : Int) -> tos >= 0 && tos <= 256 = True =>
>     (length : Int) -> length >= (5 * 4) &&
>       length < 65536 = True =>
>     (offset : Int) -> offset > 0 &&
>       offset < 8192 = True =>
>     (id : Int) -> id >= 0 && id < 65536 = True =>
>     (ttl : Int) -> ttl >= 0 && ttl < 256 = True =>
>     (protocol : Int) -> protocol >= 0 &&
>       protocol < 256 = True =>
>     InternetHeader'
>   Last : (ihl : Int) -> ihl >= 5 && ihl < 16 = True =>
>     (tos : Int) -> tos >= 0 && tos <= 256 = True =>
>     (length : Int) -> length >= (5 * 4) &&
>       length < 65536 = True =>
>     (offset : Int) -> offset > 0 &&
>       offset < 8192 = True =>
>     (id : Int) -> id >= 0 && id < 65536 = True =>
>     (ttl : Int) -> ttl >= 0 && ttl < 256 = True =>
>     (protocol : Int) -> protocol >= 0 &&
>       protocol < 256 = True =>
>     InternetHeader'
<CODE ENDS>¶
First the version field is eliminated, because it always contains the same constant.¶
Then the flags and offset fields are reorganized so as to provide four different alternate packets:¶
The Full constructor represents an unfragmented packet. It is isomorphic to a MkInternetHeader with flags and offset values of 0.¶
The First constructor represents the first fragment of a packet. It is isomorphic to a MkInternetHeader with a flags value of 1 and an offset value of 0.¶
The Next constructor represents an intermediate fragment of a packet. It is isomorphic to a MkInternetHeader with a flags value of 1 and an offset value different than 0.¶
Finally, the Last constructor represents the last fragment of a packet. It is isomorphic to a MkInternetHeader with a flags value of 0 and an offset value different than 0.¶
One of the main issues of having two types for the same data is ensuring that they both contain the same information, i.e. that they are isomorphic. To ensure that these two types carry the same information we need to define and implement four functions that, all together, prove that the types are isomorphic. This is done by defining the four types below, as propositions to be proven:¶
<CODE BEGINS>
> total
> to : InternetHeader -> InternetHeader'

> total
> from : InternetHeader' -> InternetHeader

> total
> toFrom : (x : InternetHeader') -> to (from x) = x

> total
> fromTo : (x : InternetHeader) -> from (to x) = x
<CODE ENDS>¶
Successfully implementing these functions will prove that the two types are isomorphic. Note the usage of the total keyword to ensure that these are proofs and not mere programs.¶
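The following toy sketch (independent of the types above, with purely illustrative names) shows what a completed isomorphism proof looks like, on two trivially equivalent types:¶
<CODE BEGINS>
data Bit = BZero | BOne

total
bitToBool : Bit -> Bool
bitToBool BZero = False
bitToBool BOne  = True

total
boolToBit : Bool -> Bit
boolToBit False = BZero
boolToBit True  = BOne

-- Both round-trip proofs hold by simple case analysis.
total
toFromBit : (x : Bool) -> bitToBool (boolToBit x) = x
toFromBit False = Refl
toFromBit True  = Refl

total
fromToBit : (x : Bit) -> boolToBit (bitToBool x) = x
fromToBit BZero = Refl
fromToBit BOne  = Refl
<CODE ENDS>¶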
For documents that describe a conversion between different data layouts, having a proof that guarantees that no information is lost in the process can be beneficial. For instance, we observe that syntax encodings tend to be replaced every ten years or so by something "better". Here again isomorphism can tell us exactly what kind of information we lost and gained during that replacement.¶
Here is, for example, the definition of a function that would verify an isomorphism between an XML format and a JSON format:¶
<CODE BEGINS>
isXmlAndJsonSame : Iso (XML, DeltaXML) (JSON, DeltaJson)
...
<CODE ENDS>¶
DeltaXML expresses what is gained by switching from XML to JSON, and DeltaJson expresses what is lost.¶
Be conservative in what you do, be liberal in what you accept from others.
— Jon Postel, RFC 761¶
One of the downsides of having specifications is that there is no wiggle room possible when implementing them. An implementation either conforms to the specification or does not.¶
One analogy would be specifying a pair of gears. If one decides to have both of them made with tolerances that are too small, then it is very likely that they will not be able to move when put together. A bit of slack is needed to get the gears working together smoothly, but more importantly the cost of making these gears increases as the tolerance tightens. There is an inflection point where the cost of a high-precision gear outweighs its purpose.¶
We have a similar issue when implementing a specification, where having an absolutely conformant implementation may cost more money than it is worth spending. On the other hand a specification exists for the purpose of interoperability, so we need some guidelines on what to ignore in a specification to make it cost effective.¶
Postel's law proposes an informal way of defining that wiggle room by actually having two different specifications, one that defines a data layout for the purpose of sending it, and another one that defines a data layout for the purpose of receiving that data layout.¶
Existing documents express that dichotomy in the form of the usage of the SHOULD/SHOULD NOT/RECOMMENDED/NOT RECOMMENDED [RFC2119] keywords. For example the SDP spec says that "[t]he sequence CRLF (0x0d0a) is used to end a line, although parsers SHOULD be tolerant and also accept lines terminated with a single newline character." This directly implies two specifications: one used to define an SDP when sending it, which enforces using only CRLF, and a second specification, used to define an SDP when receiving it (or parsing it), which accepts both CRLF and LF.¶
Note that the converse is not necessarily true, i.e. not all usages of these keywords are related to Postel's Law.¶
To ensure that the differences between the sending specification and the receiving specification do not create interoperability problems, we can use a variant of isomorphism, as shown in the following example (data constructors and code elided):¶
<CODE BEGINS>
data Sending : Type where

data Receiving : Type where

to : Sending -> List Receiving

from : Receiving -> Sending

toFrom : (y : Receiving) -> Elem y (to (from y))

fromTo : (y : Sending) -> True = all (== y) [from x | x <- to y]
<CODE ENDS>¶
Here we define two data types, one that describes the data layout that is permitted to be sent (Sending) and one that describes the data layout that is permitted to be received (Receiving). For each data layout that it is possible to send, there are one or more matching receiving data layouts. This is expressed by the function to, which takes as input one Sending value and returns a list of Receiving values.¶
Conversely, the from function maps a Receiving data layout onto a Sending data layout. Note the asymmetry there, which prevents using a standard proof of isomorphism.¶
Then the toFrom and fromTo proofs verify that there is no interoperability issue by guaranteeing that each Receiving value maps to one and only one Sending instance and that this mapping is isomorphic.¶
All of this will provide clear guidance on when and where to use a SHOULD keyword or its variants, without loss of interoperability.¶
As a trivial example, the following proves that accepting LF characters in addition to CRLF characters as end-of-line markers does not break interoperability:¶
<CODE BEGINS>
data Sending : Type where
  S_CRLF : Sending

Eq Sending where
  S_CRLF == S_CRLF = True

data Receiving : Type where
  R_CRLF : Receiving
  R_LF : Receiving

to : Sending -> List Receiving
to S_CRLF = [R_CRLF, R_LF]

from : Receiving -> Sending
from R_CRLF = S_CRLF
from R_LF = S_CRLF

toFrom : (y : Receiving) -> Elem y (to (from y))
toFrom R_CRLF = Here
toFrom R_LF = There Here

fromTo : (y : Sending) -> True = all (== y) [from x | x <- to y]
fromTo S_CRLF = Refl
<CODE ENDS>¶
Postel's Law is not limited to the interpretation of PDUs, as a state machine on the receiving side can also be designed to accept more than what a sending state machine can produce. A similar isomorphism proof can be used to ensure that this is done without loss of interoperability.¶
When applied, the techniques described in Section 5.1 and Section 5.2 result in a set of types that represents the whole protocol. These types can be assembled together, using another set of types, to represent a simulation of that protocol that covers all sending and receiving processes.¶
The types can then be implemented, and that implementation acts as a proof that this protocol is actually implementable.¶
To make these pieces of code composable, a specification is split in multiple modules, each one represented as a unique function. The type of each of these functions is derived from the state machines described in Section 5.2, by bundling together all the inputs of the state machine as the input for that function, and bundling all the outputs of the state machine as the output of this function.¶
For instance the IP layer is really four different functions:¶
A function that converts between a byte array and a tree representation (parsing).¶
A function that takes a tree representation and a maximum MTU and returns a list of tree representations, each one fitting inside the MTU.¶
A function that accumulates tree representations of an IP fragment until a tree representation of a full IP packet can be returned.¶
A function that converts a tree representation into a byte array.¶
The description of each function is incomplete, as in addition to the input and the output listed, these functions need some ancillary data (a sketch of the resulting function shape follows the list below), in the form of:¶
state, which is basically values stored between evaluations of a function,¶
an optional signal, which can be used as an API request or response. As timers are a fundamental building block for communication protocols, common uses for that signal are to request the arming of a timer and to receive the indication of the expiration of that timer.¶
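The following sketch (all names are illustrative and not part of any specification, with byte arrays standing in for the tree representations) shows the general shape such a function takes, here for the fragmenting function: the bundled input, the stored state and an optional signal come in, and the bundled output, the new state and an optional signal come out:¶
<CODE BEGINS>
data Signal = ArmTimer Nat | TimerExpired

record FragmenterState where
  constructor MkFragmenterState
  pending : List (List Bits8)

fragmenter : (packet : List Bits8) -> (mtu : Nat) ->
             (state : FragmenterState) ->
             (signal : Maybe Signal) ->
             (List (List Bits8), FragmenterState, Maybe Signal)
fragmenter packet mtu state signal = ?fragmenter_rhs
<CODE ENDS>¶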
Proving that a protocol does not loop is equivalent to proving that an implementation of the types for that protocol does not loop either, i.e. terminates. This is done by using the type described in Section 5.3.4 and making sure that it type-checks when the total keyword is used.¶
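As a minimal illustration of the role of the total keyword (the function below is purely illustrative), the following definition is accepted as total because its recursive argument gets smaller on every call; a definition that could loop forever would be rejected and therefore could not serve as a proof:¶
<CODE BEGINS>
total
countdown : Nat -> List Nat
countdown Z     = [Z]
countdown (S k) = S k :: countdown k
<CODE ENDS>¶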
One of the ultimate goals of this document is to convince authors to use the techniques described here to write their documents. Because doing so requires a lot of effort, an important intermediate goal is to show authors that the benefits of Computerate Specifying are worth the effort of learning and becoming proficient in these techniques.¶
The best way to reach that intermediate goal is to apply these techniques to documents that are in the process of being published by the IETF and, if issues are found, report them to the authors. Doing that on published RFCs, especially just after their publication, would be unnecessarily mean. On the other hand doing that on all Internet-Drafts as they are published would not be scalable.¶
The best place to do a Computerate Specifying oriented review is when a document enters IETF Last Call. These reviews would then be indistinguishable from the reviews done by a hypothetical Formal Specification Directorate. An argument can be made that, ultimately, writing a specification for a document could be an activity too specialized, just like Security reviews are, and that an actual Directorate should be assembled.¶
Alas, it is clear that writing a specification from scratch (as in Section 5) for an existing document takes far more time than the Last Call duration would allow. On the other hand the work needed could be greatly reduced if, instead of writing that specification from scratch, libraries of code were available for the parts that are reusable between successive specifications. These libraries fall into three categories:¶
General types and common presentations. E.g., bit diagrams are a very common way of presenting data, and so reusable types and functions to generate and compare them would accelerate a formalization. The libraries in that category are explained in Section 6.1, in Section 8.1, and its associated reference in Appendix B.¶
Types and common representations for meta-languages. A few meta-languages are used in documents to formalize some parts of them, so having libraries to formalize these meta-languages also helps accelerate their verification. The libraries in that category are explained in Section 6.2, in Section 8.2, and its associated reference in Appendix B.¶
Types and common representations for common protocols. Most documents are about modifying or defining new usages for existing protocols, which is why it makes sense to establish libraries of these existing protocols for reuse. The libraries in that category are explained in Section 6.3, in Section 8.3, and its associated reference in Appendix B.¶
Together these libraries form the Computerate Specifying Standard Library (Section 8).¶
These libraries are in fact computerate specifications that, instead of being private, are designed to export types and code and be imported in other computerate specifications. Section 7 describes how to build a specification that can be exported.¶
The types and code in a computerate specification form an Idris package, which is a collection of Idris modules. An Idris module forms a namespace hierarchy for the types and functions defined in it and is physically stored as a file.¶
Different types of specifications can be combined; for instance an exporting library may import from another specification, and so on recursively, down to specifications that are both self-contained and exporting.¶
Each public computerate specification, including the one behind this document, is available as an individual git repository. There is exactly one Idris package per git repository. Appendix A.5 explains how to gain access to these computerate specifications.¶
This document is itself generated from a computerate specification that contains data types and functions that can be reused in future specifications, and as a whole is part of the standard library for computerate specifying. The following sections describe the Idris modules defined in that specification.¶
The code described in Section 5 directly generates text that is to be embedded inside an AsciiDoc document. This is fine for small examples but AsciiDoc has quite a lot of escaping rules that are difficult to use in a consistent manner.¶
For this reason the specification behind this document provides a module named AsciiDoc that contains a set of types that can be used to guarantee that the generated AsciiDoc text is compliant with its specification. All these types implement the Show interface so they can be directly returned by the embedded code.¶
So instead of implementing a show function, a function returning an instance of one of these types can be executed directly as embedded code:¶
<CODE BEGINS>
> example : InternetHeader -> Block
> example ih = ?example_rhs

{`example example1`}
<CODE ENDS>¶
In the example above, the example function converts an InternetHeader value into an AsciiDoc block, which is automatically serialized as AsciiDoc text.¶
The AsciiDoc module is not limited to generating examples, but can be used to generate any AsciiDoc structure from Idris code. E.g., the tables in Appendix C are generated using that technique.¶
Section 8.1.1 provides a description of the AsciiDoc module.¶
Using an intermediary type will also permit correctly generating AsciiDoc that can in turn generate an xml2rfc v3 file supporting both text and graphical versions of a figure. This will be done by having AsciiDoc blocks converted into <artwork> elements that contain both the 72-column formatted text and an equivalent SVG file, even for source code (instead of using the <sourcecode> element).¶
The type in Section 5.1.1 seems a good representation of the structure of the Internet Header, but the origin of a lot of the values in the constraints does not seem very obvious, and as such they are still prone to errors. E.g., the calculation in Section 5.1.2 would be better if it used the type itself as the source for the calculated data.¶
It also may be more convenient to use types that already have some of the properties we need, instead of having to add a bunch of constraints to the Int type.¶
The truth of the matter is that the Idris standard library contains very few predefined types that are useful for specifying the syntax of communication protocols. E.g., none of the builtin types (Int, Integer, Double, Char, String, etc.) is really suitable to describe a PDU syntax, and so they should be avoided. For this reason, it is preferable to use the types provided by the Computerate Specifying Standard Library.¶
We are going to redefine the InternetHeader type, but using three modules from the standard library:¶
A sequence of bits, or bit-vector, is the most primitive type with which a packet can be described. This module provides a type BitVector n that represents a sequence of bits of fixed size n. The module also provides a set of functions that permit manipulating bit-vectors. See Section 8.1.2 for a description of the BitVector module.¶
The Unsigned module provides a type Unsigned n that is built on top of the BitVector module. In addition to the properties of a bit-vector, an Unsigned n is considered a number, and so all the integer operations apply to it. See Section 8.1.3 for a description of the Unsigned module.¶
Some numbers (also called denominate numbers) are used in conjunction with a so-called unit of measure. The Dimension module provides a way to associate a dimension, in the form of a unit of measure, with an Idris number, including the numbers defined in the Unsigned module. The Dimension module provides two dimensions, Data (with bit, octet, etc., as units of information) and Time (with second, millisecond, etc., as units of time). See Section 8.1.4 for a description of the Dimension module.¶
A redefinition of the type in Section 5.1.1 using the types in these modules would look like this:¶
<CODE BEGINS>
> data InternetHeader : Type where
>   MkInternetHeader :
>     (version : BitVector 4) -> version = [O, I, O, O] =>
>     (ihl : (Unsigned 4, Data)) -> snd ihl = tetra =>
>     (tos : Tos) ->
>     (length : (Unsigned 16, Data)) -> snd length = wyde =>
>     (id : Unsigned 16) ->
>     (flags : Flags) ->
>     (offset : (Unsigned 13, Data)) -> snd offset = octa =>
>     (ttl : (Unsigned 8, Time)) -> snd ttl = second =>
>     (protocol : BitVector 16) ->
>     (checksum : BitVector 16) ->
>     (source : IP) ->
>     (dest : IP) ->
>     (options : List Option) ->
>     (padding : BitVector n) ->
>       n = pad 32 options => padding = bitVector {m=n} =>
>     InternetHeader
<CODE ENDS>¶
This is a bit-vector, but it always contains the same value, so a constraint states that. Because bit-vectors are not integers, the value must be expressed as a list of O (for 0) and I (for 1) constructors.¶
This is an unsigned integer with a size of 4 bits. It is associated with a dimension, here the Data dimension, which is constrained to use the tetra unit (32-bit words). Basically a denominate number can only be added to or subtracted from numbers with the same dimension (but not necessarily with the same unit). E.g. adding the ihl value to the ttl value will be rejected by Idris, because that operation does not make sense. A denominate number can also be divided or multiplied by a dimensionless number.¶
These are defined as bit-vectors, because they are not really numbers - they do not need to be compared, or be part of a calculation. The number in this type (and all the others) is the number of bits allocated.¶
This is an unsigned number with a size of 16 bits, a Data dimension and a byte unit (8 bits). After casting to denominate numbers, subtracting ihl from length gives directly the size of the payload, without risk of scaling error.¶
This is an unsigned integer. Comparisons and calculations are possible on this field.¶
This is an unsigned number with a length of 13 bits, a Data dimension and an octa unit (64 bits). Again, adding or subtracting this value after casting to another value of the same dimension is guaranteed to return the correct value.¶
This is a denominate number with Time as dimension and second as unit.¶
This is a variable length field that contains a list of options, which are defined in a separate type named Option.¶
This is a bit-vector whose length is variable.¶
We can constrain the size of a field, as is done for the padding field above. In that case the length is calculated in the first constraint by calling the pad function, which calculates the number of bits needed to pad a value of a type that implements the Size interface to a word boundary, here 32 bits. The second constraint checks that, whatever the length of the padding field is, it is always equal to a zero-filled bit-vector, as returned by the function bitVector.¶
As we can see the noise in the definition of our type is greatly reduced by using these specialized types, which in turn permits to add even more constraints.¶
Dimensions can also be combined to seamlessly build more complex dimensions. For example, all "length" values of sent packets can be added up during a period of time, while keeping the beginning and ending times as denominate numbers: dividing the length sum by the difference between the end time and the begin time gives us directly the data speed in bits per second (or whatever unit we prefer), with the guarantee that Idris will not let us mix oranges and apples.¶
Here's an example of a Sum type that implements some of the variants for an Option in an InternetHeader:¶
<CODE BEGINS>
data Option : Type where
  Eoo : (flag : BitVector 1) -> flag = [O] =>
    (class : BitVector 2) -> class = [O, O] =>
    (number : BitVector 5) -> number = [O, O, O, O, O] =>
    Option
  Noop : (flag : BitVector 1) -> flag = [O] =>
    (class : BitVector 2) -> class = [O, O] =>
    (number : BitVector 5) -> number = [O, O, O, O, I] =>
    Option
  Security : (flag : BitVector 1) -> flag = [I] =>
    (class : BitVector 2) -> class = [O, O] =>
    (number : BitVector 5) -> number = [O, O, O, I, O] =>
    (length : Unsigned 8) -> length = 11 =>
    (s : BitVector 16) ->
    (c : BitVector 16) ->
    (h : BitVector 16) ->
    (tcc : BitVector 24) ->
    Option
  Lssr : (flag : BitVector 1) -> flag = [I] =>
    (class : BitVector 2) -> class = [O, O] =>
    (number : BitVector 5) -> number = [O, O, O, I, I] =>
    (length : Unsigned 8) ->
    (pointer : Unsigned 8) -> pointer >= 4 = True =>
    Option
<CODE ENDS>¶
The imported types that we are using in the definition of our types all implement the Size interface, which provides a definition for the ad hoc polymorphic function size, a function that returns the size of a field as a denominate number of dimension Data. This interface can be implemented for the type InternetHeader by making its size the sum of the sizes of all its fields:¶
<CODE BEGINS>
Size InternetHeader where
  size (MkInternetHeader version ihl tos length id flags offset
        ttl protocol checksum source dest options padding) =
    size version + size ihl + ... size padding
<CODE ENDS>¶
We can then define a minimal header, and insert its size, using the right unit, in the document:¶
<CODE BEGINS>
> minHeader : Data
> minHeader = size $ MkInternetHeader [O, I, O, O]
>   (5, tetra)
>   (MkTos 0 [O] [O] [O] [O, O])
>   (1000, wyde)
>   0
>   (MkFlags bitVector bitVector bitVector)
>   (0, octa)
>   (64, second)
>   bitVector
>   bitVector
>   (A [O] bitVector bitVector)
>   (A [O] bitVector bitVector)
>   []
>   []

Note that the minimum value for a correct header is
{`fromDenominate minHeader tetra`}
<CODE ENDS>¶
Never send a human to do a machine's job.
— Agent Smith in The Matrix (1999)¶
A better solution than defining an ad-hoc type for our state machines, as explained in Section 5.2, is to use Petri Nets.¶
Concurrent systems can be represented using two different families of techniques, algebraic and graphical. Algebraic techniques (e.g., process calculi) are mathematically well-defined, but lack an intuitive representation that would be useful to developers not completely familiar with these techniques.¶
On the other hand, graphical representations of concurrent systems (e.g., state machines) can be understood by a larger segment of developers, but generally lack a standardized and/or mathematical definition.¶
Petri Nets are at the intersection of these two techniques. They are typically graphical representations of concurrent processes, but are based on a well-defined mathematical theory. One way to look at Petri Nets is as a way to group multiple state machines together. A Petri Net also has the advantage that the same graph can be reused to derive other Petri Nets, e.g., Timed Petri Nets (that can be used to collect performance metrics) or Stochastic Petri Nets (which can be seen as a way to group multiple Markov chains together).¶
A Typed Petri Net (TPN) is an algebraic specification of a Petri Net, such that it can be expressed as an Idris value and be easily reused for various purposes. TPNs are based on Colored Petri Nets, as defined in [Jensen09] and [Aalst11]. [Jensen07] is a shorter introduction to Colored Petri Nets that should be read first. In particular, its section 2 contains the various definitions of the terminology that is used in this document, augmented as follows:¶
The word Color is used instead of the word Colour.¶
unification is defined in the middle of the left column of page 6.¶
free variable is defined in the middle of the right column of page 6.¶
A TPN that covers a whole protocol (i.e. client, network, and server) is useful to prove the properties listed in Section 5.3.4, Section 5.3.5, and Section 5.3.6. But a TPN can also be designed so each part of the protocol is defined separately from the others, making it a Hierarchical TPN.¶
<CODE BEGINS>
> No : Type
> No = Int

> Data : Type
> Data = String

> NoxData : Type
> NoxData = (No, Data)

> namespace Sender
>   export
>   sender : Module ()
>   sender = do
>     packetsToSend <- port "Packets To Send" NoxData Both
>     nextSend <- place "NextSend" No {init=pure 1}
>     a <- port "A" NoxData Out
>     d <- port "D" No In
>     transition "Send Packet"
>       [input packetsToSend (No, Data) pure,
>        input nextSend No pure]
>       [output (No, Data) packetsToSend pure,
>        output No nextSend pure,
>        output (No, Data) a pure]
>       (\((n, d), n') => do guard (n == n')
>                            pure ((n, d), n, (n, d)))
>     transition "Receive Ack"
>       [input nextSend No pure,
>        input d No pure]
>       [output No nextSend pure]
>       (pure . snd)
<CODE ENDS>¶
<CODE BEGINS>
> namespace Top
>   export
>   top : Module ()
>   top = do
>     packetsToSend <- place "Packets To Send" NoxData
>       {init=[(1, "COL"), (2, "OUR"), (3, "ED "), (4, "PET"),
>              (5, "RI "), (6, "NET")]}
>     dataReceived <- place "Data Received" Data {init=pure ""}
>     a <- place "A" NoxData
>     b <- place "B" NoxData
>     c <- place "C" No
>     d <- place "D" No
>     instance "" sender [packetsToSend, a, d]
>     instance "" network [a, b, c, d]
>     instance "" receiver [b, dataReceived, c]
<CODE ENDS>¶
In these examples, the place, port, input, output, transition, and instance functions are the combinators used to define Typed Petri Net modules in computerate specifications; they are described in Section 8.1.7.¶
TPN modules are written as constants of type Module (), which is a Monad used to implement a deeply embedded DSL. The Monad ensures that places, ports and instances all have a unique name inside a module. It also ensures that all places and ports used by transitions and instances are declared in the same module. Finally it ensures that all ports exported by a module are correctly mapped to a local place or port when imported as an instance.¶
Ultimately these combinators are not meant to be used as a way to directly design a TPN, as doing this is very tedious and error-prone. Instead the general advice is to use the graphical tool cpntools [Cpntools] to design a CPN and then to follow the step-by-step tutorial in Appendix D to convert it into a Typed Petri Net. Experience shows that, even for the simplest of protocols, systematically starting formalization by 1) designing a complete semantic type and 2) designing a top-level Petri Net, both in cpntools, is the most efficient way to proceed.¶
Timed TPNs are built by using Timed tokens (which are types wrapped into the Timed type) and by adding delays in transitions and arcs.¶
The following is an example of an implementation in CPN of a timer, here one that will time out after 100 units of time:¶
This CPN can be translated as the following Timed TPN:¶
Here the "Timer" place contains Timed ()
values, which are added by the "start timer" transition.
The added token will enable the "timeout" transition only after 100 units of time have passed.
Note that the clock used to calculate the enablement of transitions is discrete, so a simulation does not have to actually wait for that time.
The "timeout" transition also removes the token from the "Timer" place, effectively making sure that it does not trigger a second time.¶
The "stop timer" transition is used when an event made that timer useless. In that case we want to remove the token from the "Timer" place ahead of its expiration time, so it is not fired later. This is the meaning of the delay inscription in the input arc, which is understood as a preemptive delay.¶
<CODE BEGINS>> timer : Module () > timer = do > state <- place "State" () > timer <- place "Timer" (Timed ()) > transition "start timer" > empty > [output () state pure, > output () timer {delay=100} (\x => [timed x])] > (pure . dup) > transition "stop timer" > [input state () pure, > input timer () {delay=100} (pure . untimed)] > empty > (pure . fst) > transition "timeout" > [input state () pure, > input timer () (pure . untimed)] > empty > (pure . fst) <CODE ENDS>¶
The TPN values created in Section 6.1.4.1 can be used to test, debug and validate a protocol.¶
This is done by running a simulation of the protocol. The plan is that this simulation will be driven from the future graphical interface but meanwhile it is possible to directly call a set of functions.¶
A simulation is executed when a succession of transitions occurs. It starts by building an initial marking:¶
<CODE BEGINS>:let marking = initialMarking top <CODE ENDS>¶
After this each transition occurrence is composed of 3 steps:¶
Find the list of transitions that are enabled from the current marking:¶
<CODE BEGINS>:let transitions = enabledTransitions marking top <CODE ENDS>¶
Find the possible bindings for the transition selected. There is only one transition possible for that TPN instance, so we can use it to list the bindings:¶
<CODE BEGINS>:let bindings = bindings marking top (head transitions) <CODE ENDS>¶
Create a new current marking from the transition and binding selected. Again we have only one possible binding in this example, so we can use it to update the marking after applying the binding and transition:¶
<CODE BEGINS>:let marking = transition marking top (head transitions) (head bindings) <CODE ENDS>¶
We can then calculate the next occurrence by looping back to step (1).¶
The simulation stops when there is no enabled transition for the current marking.¶
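For convenience, the three steps above can be chained into a small recursive helper. The following is only a sketch: it always selects the first enabled transition and the first binding (so it explores a single execution path), and the exact signatures of enabledTransitions, bindings, and transition are assumptions based on their descriptions in Section 8.1.7.¶
<CODE BEGINS>
-- Sketch of a simulation loop; the signatures used below are assumed
-- from the descriptions in Section 8.1.7, and the first enabled
-- transition and first binding are always chosen.
covering
simulate : Marking xs -> Module () -> Marking xs
simulate marking net =
  case enabledTransitions marking net of
    []       => marking                      -- no enabled transition: stop
    (t :: _) =>
      case bindings marking net t of
        []       => marking
        (b :: _) => simulate (transition marking net t b) net
<CODE ENDS>¶
Starting it with simulate (initialMarking top) top reproduces the stop condition described above.¶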
Designing and validating a Petri Net are essential tasks, but by themselves they cannot be used to guarantee that a process follows part or all of that Petri Net.¶
To do so we need to generate a Sum type that encodes all the transitions as constructors. This type can then be used to build a proof that a list of binding elements is valid according to that Petri Net.¶
In CPN, a binding is a list of (name, value) tuples, making it easy to read.
In a TPN we instead use a tuple of the values taken as input by the transition inscription.
That means that the variables are identified by their position in this tuple, instead of by name.
For instance, the CPN binding <n=1, d="COL", n'=1> of the "Send Packet" transition becomes the tuple ((1, "COL"), 1) in the TPN.¶
The example below shows two constructors for the example Petri Net used in this document.¶
<CODE BEGINS>sendPacket' : (NoxData, No) -> List (NoxData, No, NoxData) updateSendPacket : Marking xs -> (NoxData, No) -> Marking xs data T210 : Marking xs -> Type where Init : T210 (initialMarking top) SendPacket : (binding : (NoxData, No)) -> (elem (fst binding) (packetToSend m)) => (elem (snd binding) (Ns m)) => (sendPacket' binding = pure (fst binding, snd binding, fst binding)) => T210 m -> T210 (updateSendPacket binding m) <CODE ENDS>¶
This type can then be used for various purposes, e.g. to draw a Message Sequence Chart as described in Section 6.1.5.2.¶
Another usage of our Idris type would be to generate a textual representation of that type.¶
Figure 4 in RFC 791 is a good example of a representation of a data layout, here as a bit diagram. Because we already have an Idris type which is describing exactly the same thing, the idea of syntax representation is to convert that type into text, and insert it in place of the bit diagram.¶
For each textual representation of a type, it is possible to write a function that takes this type as a parameter and generates an AsciiDoc value that can then be inserted in the document.¶
Some documents use representations that are unique to them, but often multiple documents share the same representation, in which case that function can also be shared between them. A set of such functions is available as part of the Computerate Specification Standard Library.¶
The bit diagram is one of the most frequently used representations of a PDU specification in documents, so a function to convert an Idris type into a bit diagram is provided as part of the standard library.¶
That function takes as parameters an Idris type and a structure containing additional information, and returns an AsciiDoc value that can be inserted in the document.¶
The additional structure is a list of the properties associated with each field that are needed to generate the bit diagram. For a bit diagram the only property is a character string containing the name of the field.¶
For our InternetHeader
type, that additional structure would look like this:¶
<CODE BEGINS>names : Pdu names = MkPdu `{{MkInternetHeader}} [ MkField "Version", MkField "IHL", MkField "Type of Service", MkField "Total Length", MkField "Identification", MkField "Flags", MkField "Fragment Offset", MkField "Time to Live", MkField "Protocol"] <CODE ENDS>¶
The Pdu type takes care of verifying that each name is unique in the structure, and that each name's length does not exceed 2 * (size field) - 1 characters (e.g., at most 7 characters for the 4-bit Version field), so each name is guaranteed to fit in its bit diagram cell.¶
After that it is just a matter of inserting the function call in the document (the %runElab keyword indicates that the Idris code uses elaborator reflection, which is used here to inspect a type).¶
<CODE BEGINS>{`%runElab toBitDiagram names`} <CODE ENDS>¶
Message Sequence Charts (MSC) are a common way to represent an example of execution of a protocol, i.e. of the interactions between the underlying state machines. Although sequence charts are often implicitly used to describe a protocol, that description can only be partial and thus cannot replace completely a description of the protocol by other means.¶
There are four steps to automatically generate an MSC that is guaranteed to conform to the specification:¶
Design a Colored Petri Net of the behavior of the protocol. A Petri Net models all sides of a communications protocol, including the network itself, which is why it is the best way to generate an MSC.¶
Build a sequence of binding elements as described in Section 6.1.4.3.¶
Convert the TPN into a specialized type that guarantees that the list of (transition, binding) that represent the MSC to draw is valid according to the TPN. This is explained in Section 6.1.4.4.¶
E.g. the following instance typechecks with the generated type for the example CPN used in this document:¶
<CODE BEGINS>test : T210 ? test = Init |> SendPacket ((1, "COL"), 1) |> TransmitPacket ((1, "COL"), True) |> ReceivePacket ((1, "COL"), "", 1) |> TransmitAck 1 |> ReceiveAck (1, 1) <CODE ENDS>¶
The last step is to pass that instance to a function that will generate the MSC:¶
<CODE BEGINS>.... {`generateMsc test [(A, D), (B, C)]`} .... <CODE ENDS>¶
The parameters of the generateMsc function are the list of bindings and the names of the Petri Net places between which lines will be drawn.
If for some reason the network manipulates the token between A and B, or B and C, the function will accordingly show that the packet is either lost, duplicated, delayed or even that a packet arrived from an unknown sender.¶
It is also possible to pass a user-defined function that takes as parameter a token, as sent by places A or C, and converts it into a packet that is then shown after the MSC itself.¶
This function can be provided by the type of proofs described in Section 5.3.1. That means that, as long as we have a proof of isomorphism between them, we can use Semantic Types directly in our TPN instead of Wire Types, making the model simpler.¶
When different representations of a specification share some common characteristics, it is usual to generalize them into a formal language.¶
One shared limitation of these languages is that they cannot always formalize all the constraints of a specific data layout, so they have to be enriched with comments. One consequence of this is that they cannot be used as a replacement for the Idris types described in Section 5.1.1 or Section 6.1.2, types that are purposely designed to be as complete as possible.¶
Another consequence is the proliferation of these languages, with each new formal language trying to integrate more constraints than the previous ones. For that reason Computerate Specifying does not favor one formal language over the others, and will try to provide code to help use all of them.¶
Similarly to what was explained in Section 5.1, a set of types can be designed and then used to type-check instances of that formal language, and to convert them into a textual representation. Most of the formal languages used at the IETF already come with tools that can verify that the text representation in an RFC is syntactically correct, so on that front such a type does not add much.¶
On the other hand that type can be the target of a converter from an ad-hoc type. This will ensure that the generated instance of the formal language matches the specification, which is something that external tools cannot do.¶
When a PDU is described with a formal language, we end up with two descriptions, one using the Idris dependent type (and used to generate examples) and the other using the formal language.¶
Proving isomorphism requires generating an Idris type from the formal language instance, which is done using an Idris elaborator script.¶
In Idris, Elaborator Reflection [Christiansen16] is a metaprogramming facility that permits writing code that generates type declarations and code (including proofs) automatically.¶
For instance the ABNF language is itself defined using ABNF, so after converting that ABNF into an instance of the Syntax type (which is a holder for a list of instances of the Rule type), it is possible to generate a suite of types that represents the same language:¶
<CODE BEGINS>> abnf : Syntax > abnf = MkSyntax [ > "rulelist" `Eq` (Repeat (Just 1) Nothing (Group (Altern > (TermName "rule") (Group (Concat (Repeat Nothing Nothing > (TermName "c-wsp")) (TermName "c-nl") [])) []))), > ... > ] > > %runElab (generateType "Abnf" abnf) <CODE ENDS>¶
The result of the elaboration can then be used to construct a value of type Iso, which requires four total functions, two for the conversion between types, and another two to prove that sequencing the conversions results in the same original value.¶
The following example generates an Idris type "SessionDescription" from the SDP ABNF. It then proves that this type and the Sdp type contain exactly the same information (the proofs themselves have been removed, leaving only the propositions):¶
<CODE BEGINS>import Data.Control.Isomorphism sdp : Syntax sdp = MkSyntax [ "session-description" `Eq` (Concat (TermName "version-field") (TermName "origin-field") [ TermName "session-name-field", Optional (TermName "information-field"), Optional (TermName "uri-field"), Repeat Nothing Nothing (TermName "email-field"), Repeat Nothing Nothing (TermName "phone-field"), Optional (TermName "connection-field"), Repeat Nothing Nothing (TermName "bandwidth-field"), Repeat (Just 1) Nothing (TermName "time-description"), Optional (TermName "key-field"), Repeat Nothing Nothing (TermName "attribute-field"), Repeat Nothing Nothing (TermName "media-description") ]), ... ] %runElab (generateType "Sdp" sdp) same : Iso Sdp SessionDescription same = MkIso to from toFrom fromTo where to : Sdp -> SessionDescription from : SessionDescription -> Sdp toFrom : (x : SessionDescription) -> to (from x) = x fromTo : (x : Sdp) -> from (to x) = x <CODE ENDS>¶
As stated in Section 5.3.1, the Idris type and the type generated from the formal language are not always isomorphic, because some constraints cannot be expressed in that formal language. In that case isomorphism can be used to precisely define what information is missing from the formal language type. To do so, the generated type is augmented with a delta type, like so:¶
<CODE BEGINS>data DeltaSessionDescription : Type where ... same : Iso Sdp (SessionDescription, DeltaSessionDescription) ... <CODE ENDS>¶
Then the DeltaSessionDescription type can be modified to include the missing information until the same function type-checks. After this we have a guarantee that we know all of the constraints that cannot be encoded in that formal language, and can check manually that each of them is described as a comment.¶
An interesting comment in [Momot16] states that if the input of an application is too complex to be expressed in ABNF without adding comments, it is too complex to be safe. The technique described in this section can be used to evaluate the safety of such ABNF by clearly specifying the impact of these additional comments.¶
The following sections describe how these formal languages have been, or will be, converted into types with the goal of importing them in computerate specifications.¶
The following sections describe the support for meta-languages that are defined in an RFC.¶
Routing Policy Specification Language (RPSL) [RFC4012] is a formal language used to describe a view of the global routing policy in a single cooperatively maintained distributed database.¶
External Data Representation (XDR) [RFC4506] is a formal language for the description and encoding of data.¶
Abstract Syntax Notation X (ASN.X) [RFC4912] is a semantically equivalent Extensible Markup Language (XML) representation for Abstract Syntax Notation One (ASN.1) specifications.¶
Augmented Backus-Naur Form (ABNF) [RFC5234] is a formal language used to describe a text based data layout.¶
An ABNF can be described by defining a value for the types from the RFC5234.Main
module:¶
<CODE BEGINS>rulename : Rule rulename = "rulename" `Eq` (Concat (TermDec 97 []) (TermDec 98 []) [TermDec 99 []]) <CODE ENDS>¶
That value can then be inserted in a document, which will convert it into proper ABNF, so¶
<CODE BEGINS> [source,abnf] ---- {`rulename`} ---- <CODE ENDS>¶
is rendered as¶
See Section 8.2.2 for details on that package.¶
Routing Backus-Naur Form (RBNF) [RFC5511] is a formal language used to specify routing protocols.¶
The following sections describe the support for meta-languages that are used in RFCs but that were defined outside the IETF.¶
TBD.¶
The following sections describe the support for meta-languages that are used in RFCs but that are defined in an ad-hoc fashion, generally in the same RFC.¶
TBD.¶
The following sections describe the support for meta-languages that were defined in Internet-Drafts that are still in development.¶
Augmented Packet Header Diagram (APHD) [I-D.mcquistin-augmented-ascii-diagrams] is a formal language used to describe an augmented bit diagram in a machine-readable format.¶
It can be seen as an extension to the self-contained bit diagram in Section 5.1.3, where more information is extracted from the Idris type, and more properties are carried in the list of properties:¶
From the Idris type:¶
The size of a field in the Idris type is converted into the field's width.¶
The size constraints in Idris are converted into a variable size field (Section 4.1).¶
A constraint that reduces the possible values (like for the version field) is converted into a constraint on field value (Section 4.4).¶
Alternative constructors (i.e., a Sum type) generate a presence predicate (Section 4.2).¶
From the additional structure:¶
The description for each field is a value of AsciiDoc type, which permits formatting it correctly.
In addition, it is possible to insert calculations or even other type representations in the description by using an AsciiDoc type that works similarly to code embedding.¶
Reusing the type in Section 6.1.2, the conversion process would partially look like this:¶
<CODE BEGINS>> ipv4 : AphdPdu > ipv4 = MkAphd `{{MkInternetHeader}} "IPv4 Header" [ > MkField "Version" (Just "V") Bit [(MkSentence "This is a" ++ > "fixed-width field, whose full label is shown in the " ++ > "diagram. The field's width -- "), MkCode(`(size version)), > MkSentence(" bits -- is given in the label of the " ++ > "description list, separated from the field's label " ++ > "by a colon.")], > ... > ] {`%runElab toAphd names`} <CODE ENDS>¶
and is rendered as:¶
Cosmogol [I-D.bortzmeyer-language-state-machines] is a formal language designed to define state machines. The Internet-Draft will be retrofitted as a computerate specification to provide an internal Domain Specific Language (DSL) that permits specifying an instance of that language.¶
As a Petri Net can be seen as a set of state machines, it will be possible to extract part of a Petri Net and generate the equivalent state machine in Cosmogol format.¶
Protocols evolve over time, and the documents that standardize them also need to evolve with them. Each SDO has a specific set of methods to do so, from having the possibility of modifying a document, to systematically releasing a complete new document when a modification is needed. The IETF uses a combination of methods to update the documents that define a protocol.¶
One such method is to release a new document that completely replaces ("obsoletes") an existing protocol. E.g., TLS 1.2 [RFC5246] was completely replaced by TLS 1.3 [RFC8446], such that there is no need to read RFC 5246 to be able to implement RFC 8446.¶
Alternatively, when only part of a protocol needs modification, the method used is to issue a new document that updates only that specific part. E.g., RFC 2474 updates only the definition of the ToS field in the Internet Header defined in RFC 791, so reading both documents is required to implement the Internet Protocol. These two methods can be combined, as was done for RFC 2474, which obsoleted RFC 1349, itself the original update to RFC 791.¶
Systematically updating a protocol in new documents instead of replacing it means that sometimes many different documents have to be read to produce a modern implementation of a specific protocol. E.g., the DNS was originally defined in RFC 1034 and RFC 1035, but has since been updated by more than 30 documents, all of which have to be read to implement that protocol.¶
In the DNS example we are not even counting definitions of codepoints as protocol updates. This is the third method used at the IETF to evolve a standard: defining new codepoints and their associated data. That last method will be explored in more detail in Section 6.3.2, so the remainder of this section can focus on the two other methods.¶
Writing a computerate specification for a new document or a document that obsoletes another one is straightforward, as the specification will contain all the types that are needed to formalize it. On the other hand it is less clear what should go into a specification that updates another one.¶
A simplistic solution is to copy the whole Idris content from the original specification into the new one and modify that new content, but this creates a few problems:¶
Firstly the content from the original specification will have to be copied again each time it is modified, as computerate specifications are meant to evolve even if the underlying document does not.¶
Secondly the size of the code should stay roughly proportional to the size of the document itself, so that the actual update remains obvious from the content.¶
So instead of manually copying the content, an Idris elaboration can be used to copy it automatically and apply the minimal modifications needed at the same time.¶
But first the specification that will be updated needs to be prepared, by encapsulating the types in a function that will be used to generate the types themselves:¶
<CODE BEGINS>export internetHeader : List Decl internetHeader = `[ ||| InternetHeader export data InternetHeader : Type where MkInternetHeader : (version : BitVector 4) -> version = [O, I, O, O] => (ihl : (Unsigned 4, Data)) -> snd ihl = tetra => (tos : Tos) -> (length : (Unsigned 16, Data)) -> snd length = octet => (id : Unsigned 16) -> (flags : Flags) -> (offset : (Unsigned 13, Data)) -> snd offset = octa => (ttl : (Unsigned 8, Time)) -> snd ttl = second => (protocol : BitVector 16) -> (checksum : BitVector 16) -> (source : BitVector 32) -> (dest : BitVector 32) -> (options : List Option) -> (padding : BitVector n) -> n = pad 32 options => padding = bitVector => InternetHeader ] %runElab declare internetHeader <CODE ENDS>¶
This code behaves exactly like the previous definition, with the major difference that the documentation is not generated for that type.
Idris2 has been enhanced with the possibility to cache the result of an elaboration directly in the source code, and to automatically emit a warning when the cache needs to be refreshed.
The interactive command :gc <line> automatically generates the code, followed by a %cacheElab line that indicates where the generated code ends, something like this:¶
<CODE BEGINS>%runElab declare internetHeader export ||| InternetHeader data InternetHeader : Type where MkInternetHeader : (version : BitVector 4) -> version = [O, I, O, O] => (ihl : (Unsigned 4, Data)) -> snd ihl = tetra => (tos : Tos) -> (length : (Unsigned 16, Data)) -> snd length = octet => (id : Unsigned 16) -> (flags : Flags) -> (offset : (Unsigned 13, Data)) -> snd offset = octa => (ttl : (Unsigned 8, Time)) -> snd ttl = second => (protocol : BitVector 16) -> (checksum : BitVector 16) -> (source : BitVector 32) -> (dest : BitVector 32) -> (options : List Option) -> (padding : BitVector n) -> n = pad 32 options => padding = bitVector => InternetHeader %cacheElab 1506359842985480550 1506359842985480550 <CODE ENDS>¶
The numbers on the %cacheElab line are hashes of, respectively, the elaboration code and the generated text, and permit detecting whether either was modified since the last time the code was cached.¶
With that we can import the definition of the InternetHeader type and clone it in our new specification:¶
<CODE BEGINS>import RFC791.IP %runElab declare internetHeader <CODE ENDS>¶
The modification needed by the new document can be done by replacing the ToS
field by the newly defined DSField
, using the replace
function:¶
<CODE BEGINS>import RFC791.IP import ComputerateSpecifying.Transform dscp : List Decl dscp = `[ public export data Dscp : Type where MkDscp : (dscp : BitVector 6) -> (reserved : BitVector 2) -> reserved = bitVector => Dscp ] %runElab declare dscp %runElab declare (add mypath internetHeader dscp) <CODE ENDS>¶
At this point, using elaboration caching permits checking that the new type indeed uses the Dscp type instead of the old Tos type.¶
Unlike the previous section, which describes how to formalize the unplanned evolution of a protocol, most protocols are designed with the possibility of evolution in mind, also known as extensibility. These extension points are generally expressed as values for some fields that will later be assigned a new meaning.¶
The meaning for a new value will be defined in a new document, with all the documents giving new meanings to a field easily locatable in a registry.¶
Following up on our previous example, RFC 791 defines IP Options only for values 0, 1, 7, 68, 131, 136, and 137. These values, together with new values defined by other documents, are listed in the IP Option Numbers IANA registry. E.g., that IANA registry also defines, among others, value 25 in RFC 4782.¶
The values that are part of a registry are designed to be used with the protocol that defined that registry, so it makes sense to synthesize a Sum type of all these values in the computerate specification for the document that defined that registry.¶
When new codepoints are defined after the RFC holding the registry definition is published, the specification for that RFC must be modified to reference the new type defined in the specification of the RFC (or other document) that contains the codepoint definition.¶
As example, the type for the option field in the Internet Protocol header could be defined like this:¶
<CODE BEGINS>data Option : Type where EoolOption : (codepoint : BitVector 8) -> codepoint = 0 => Option NopOption : (codepoint : BitVector 8) -> codepoint = 1 => Option ... UnknownOption : (codepoint : BitVector 8) -> (length : Unsigned 8) -> (data : Vect (BitVector 8) length) -> Option <CODE ENDS>¶
Then a new codepoint is defined like this in the specification for RFC 1108:¶
<CODE BEGINS>public export data DoDBasicSecurity : Type where MkDoDBasicSecurity : (codepoint : BitVector 8) -> (codepoint = 130) => (length : Unsigned 8) -> (level : BitVector 8) -> (flags : Vect (BitVector 8) (length - 1)) -> DoDBasicSecurity <CODE ENDS>¶
Finally a dependency on the specification of RFC 1108 is added to the specification for RFC 791, and the Option type is modified as follows:¶
<CODE BEGINS>data Option : Type where EoolOption : (codepoint : BitVector 8) -> codepoint = 0 => Option NopOption : (codepoint : BitVector 8) -> codepoint = 1 => Option ... DoDBasicSecurityOption : DoDBasicSecurity -> Option UnknownOption : (codepoint : BitVector 8) -> (length : Unsigned 8) -> (data : Vect (BitVector 8) length) -> Option <CODE ENDS>¶
This type actually acts as a codepoint registry that mirrors the equivalent IANA registry.¶
Computerate specifications can formalize their content to make it reusable as a building block for other specifications. A specification that organizes its content along the guidelines presented in this section can become a part of the Computerate Specification Standard Library.¶
To be part of the Standard Library, specifications must be organized in 4 components:¶
This is the formalization of the content of the standard as an Idris package i.e., a set of Idris modules (i.e. files) that exports some or all of the types and functions defined in it. The code of these Idris modules is generally interspersed with the content of the standard to form literate code.¶
This is a document section that guides the reader step by step in the use of the Idris package in a Computerate Specification. A tutorial may import the package itself to validate the examples provided as part of the tutorial. This section is considered informative.¶
This is a document section that explains the Idris package as a whole, i.e., grouping explanations by feature.¶
This is a document section that is auto-generated from the structured comments in the types and functions of the Idris package. It lists all the types and functions in alphabetical order, including the comments on parameters.¶
This document is itself an Idris package that is part of the Standard Library, Section 7 contains the tutorial part of that package, Section 8.1 forms its description part, and Appendix B contains its reference.¶
For a retrofitted document, the code will be mixed with the existing standard to produce a Computerate Specification but the tutorial, description and reference parts cannot be added to that standard, so they have to be part of a separate document. It can be a new specification written for the express purpose of documenting that package. This is the case for this specification, which documents a selection of retrofitted Computerate Specifications that are part of the Standard Library. E.g., Section 6.2.1.4, and Section 8.2.2 are respectively the tutorial and the description for [RFC5234].¶
For a new document, the four components should be part of it.
E.g., in this document Section 6.1.5.1, Section 8.1.5, and Appendix B.1.2 are respectively the tutorial, description, and reference for the BitDiagram
module.¶
RFCs, Internet-Drafts, and standard documents published by other SDOs did not start their life as computerate specifications, so to be able to use them as Idris packages they need to be progressively retrofitted. This is done by converting the documents into AsciiDoc documents and then enriching them with code, in the same way that would have been done if the standard had been developed directly as a computerate specification.¶
Converting the whole document into AsciiDoc and enriching it with code, instead of just maintaining a library of code, may seem a waste of resources. The reason for doing so is to be able to verify that the rendered text is equivalent to the original standard, which validates the examples and formal languages.¶
Because the IETF Trust does not permit modifying an RFC or Internet-Draft as a whole (except for translation purposes), a special git mechanism stores the RFC or I-D in its canonical form along with a set of files that contain transclusions from the canonical file. The transclusions are automatically added or removed when a computerate specification file is respectively staged or checked out. This is explained in more detail in Appendix G.¶
Types and functions are exported by using the export
keyword.
Type constructors, interface functions, and type function implementations can additionally be exported by prepending the keyword public to the export keyword.¶
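As a reminder of the standard Idris semantics (this snippet is generic Idris and not tied to any module of this library), the difference between the two visibility levels looks like this:¶
<CODE BEGINS>
module Example

-- `double` can be called from importing modules, but its definition
-- stays opaque outside this module.
export
double : Nat -> Nat
double n = n + n

-- Both the type and its constructors are visible to importing
-- modules, which can therefore pattern-match on them.
public export
data Flag = DontFragment | MoreFragments
<CODE ENDS>¶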
Additionally, types that may be transformed should be declared as explained in Section 6.3.2, i.e. by first wrapping them in an exported function that uses a quote declaration, then generating them locally using a declare elaboration.¶
Only the following modules are available in the Docker image:¶
ComputerateSpecifying.Metanorma.Ietf¶
The Metanorma.Ietf module provides a way to programmatically build an AsciiDoc document without having to worry about the particular formatting details. The types in this library are not meant to be used directly, but as the return type of functions in modules that generate text for insertion in a document.¶
Unlike the AsciiDoc rendering process, which tries very hard to render a document under any circumstances, the types in this module are meant to enforce the generation of AsciiDoc that results in a valid xml2rfc v3 document.¶
The Text, Must, MustNot, Required, Shall, ShallNot, Should, ShouldNot, Recommended, May, Optional, HardBreak, Contact, Comment, Italic, Link, Index, Citation, Bold, Subscript, Superscript, Monospace, Unicode, Cross, and Attribute constructors are used to build individual inline elements.¶
The Paragraph
constructor is used to build a paragraph, and is composed from a list of inline elements.¶
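As a purely illustrative sketch, a paragraph carrying a BCP 14 keyword could be assembled from these constructors as shown below; the constructor names come from the lists above, but the result type name (here assumed to be Block, after the "block of text" entry in Appendix B) and the exact argument shapes are assumptions.¶
<CODE BEGINS>
-- Sketch only: the type name `Block` and the argument shapes are
-- assumptions; the constructor names are listed above.
versionRule : Block
versionRule = Paragraph [ Text "The ", Monospace [Text "version"],
                          Text " field ", Must, Text " be set to 4." ]
<CODE ENDS>¶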
The Computerate Specifying Library provides a number of types and functions aimed at defining and manipulating the data types that are commonly found in Protocol Data Units (PDU). The most elementary type of data is the bit-vector, which is a list of individual bits. Bit-vectors are not always sufficient to describe the subtleties of the data types carried in a PDU, and several more precise types are built on top of them. See Section 8.1.3 for unsigned integers.¶
BitVector
is a dependent type representing a list of bits, indexed by the number of bits contained in that list.
The type is inspired by Chapter 6 of [Kroening16] and by [Brinkmann02].¶
A value of type BitVector n
can be built as a series of zeros (bitVector
) or can be built by using a list of O
(for 0) and I
(for 1) constructors.
E.g., [O, I, O, O]
builds a bit-vector of type BitVector 4
with a value equivalent to 0b0100.¶
Bit-vectors can be compared for equality, but they are not ordered. They also are not numbers so arithmetics operations cannot be applied to them.¶
Bit-vectors can be concatenated (concat
), a smaller bit-vector can be extracted from an existing bit-vector (extract
), or a bit-vector can be extended by adding a number of zeros in front of it (extend
).¶
The usual unary bitwise (shiftL
, shiftR
, not
) operations are defined for bit-vectors, as well as binary bitwise operations between two bit-vectors of the same size (and
, or
, xor
)¶
Finally it is possible to convert the bit at a specific position in a bit-vector into a Bool
value (test
).¶
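The following short fragment illustrates these operations; the argument order of concat and and is an assumption based on the descriptions above.¶
<CODE BEGINS>
nibble : BitVector 4
nibble = [O, I, O, O]          -- 0b0100

mask : BitVector 4
mask = [O, I, I, O]            -- 0b0110

-- Concatenation produces a wider bit-vector (argument order assumed).
byte : BitVector 8
byte = concat nibble mask      -- 0b01000110

-- Binary bitwise operations require operands of the same size.
masked : BitVector 4
masked = and nibble mask       -- 0b0100
<CODE ENDS>¶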
This module permits manipulating denominate numbers, which are numbers associated with a unit.
Examples of denominate numbers are cast (5, meter / second)
(which uses a unit of speed), or cast (10, meter * meter * meter)
(which uses a unit of volume).¶
In this module a denominate number is a value of type Denominate xs.
It carries one number as a fraction.
Its type is indexed over a list of dimensions, each associated with an exponent.
Altogether this type can represent any unit that is derived, directly or indirectly, from the base dimensions defined in the Dimension type.¶
Denominate numbers are constructed by passing a tuple made of a number (either an Integer
or a Double
) and a unit to the cast
function.
E.g., cast (5, megabit)
will build the denominate number 5 with the megabit
unit.¶
Dimensionless denominate numbers can be constructed by using the none unit, as in cast (10, none).¶
Denominate numbers can be converted back into a tuple with the fromDenominate
function.¶
Denominate numbers can be added, subtracted or negated (respectively +
, -
, and neg
).
All these operations can only be done on denominate numbers with the same exact dimension, and the result will also carry the same dimension.
This prevents what is colloquially known as mixing apples and oranges.¶
For the same reason, adding a number to a non-dimensionless denominate number is equally impossible.¶
The *
, /
, and recip
operations respectively multiply, divide and calculate the reciprocal of denominate numbers.
These operations can be done on denominate number that have different types, and the result dimension will be derived from the dimension of the arguments.
E.g. multiplying cast (5, meter)
by cast (6, meter)
will return the equivalent of cast (30, meter * meter)
.¶
Also multiplying a denominate number by a (dimensionless) number is possible e.g., as in multiplying cast (5, meter)
by cast (10, none)
, which will return the equivalent of cast (50, meter)
.¶
Ultimately we want to insert in a computerate specification the value of a denominate number, together with its unit, as text, which is done by implementing the Show
interface on a denominate number in its tuple form.
E.g. fromDenominate (cast (5, meter / second)) (kilometer / hour)
can be directly inserted in a document and will be substituted with the string 18 km/h
.¶
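As an illustration, the following REPL sketch combines the operations above; the rendered string shown in the comment is only indicative.¶
<CODE BEGINS>
:let speed = cast (5, meter / second)
:let faster = speed * cast (2, none)      -- scaling by a dimensionless number
fromDenominate faster (kilometer / hour)  -- rendered as "36 km/h"
<CODE ENDS>¶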
For each dimension we define a list of constants that represent units of that dimension. Units that use a prefix are automatically generated, which is the case for SI units for the Time dimension (i.e., from yoctosecond to yottasecond), SI units (only positive powers of 10) for the Data dimension (i.e., from kilobit to yottabit), and IEC units (positive powers of 2) for the Data dimension (i.e., from kibibit to yobibit).¶
A bit diagram displays a graphical representation of a data layout at the bit level.¶
The BitDiagram
type is used to build BitDiagrams values.¶
The toAsciiDoc
function converts a BitDiagram
value into an AsciiDoc Literal Block which can be inserted directly in the document.¶
Ad-hoc types can also be used to generate a bit diagram, by passing the type to the toDiagram function and the returned value to the toAsciiDoc function.
The toDiagram function will build a field only for types that have an implementation of the Size interface.
The toDiagram function also takes an auxiliary value of type Names that associates names with these types.¶
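Assuming a value of type Names (here hypothetically called names) similar to the structure used in Section 6.1.5.1, the two functions could be chained inside an embedded code fragment; the exact argument order is an assumption.¶
<CODE BEGINS>
{`toAsciiDoc (toDiagram InternetHeader names)`}
<CODE ENDS>¶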
This module permits manipulating values that are in the very generic form of trees. These manipulations consist of removing or replacing a selected value or values in that tree.¶
The values to manipulate are selected using a path, which is a series of instructions used to move the focus of the manipulation up, down, and sideways in the tree and to apply a predicate until a set of values is chosen.¶
The values selected are then either removed or replaced by a new value. The rest of the tree stays unmodified.¶
This mechanism is very generic and can be applied to any tree, but it is meant to modify the types defined in the Language.Reflection.TTImp
and Language.Reflection.TT
standard modules, with the goal of generating types that are derived from existing types.¶
The Tpn module permits building Typed Petri Nets.
It is designed to mimic Hierarchical Colored Petri Nets so that conversions can be done mechanically.¶
The Timed type is used as a wrapper around another type when its values need to be associated with time, with the timed and untimed functions used to respectively wrap and unwrap a value.¶
An Ellipse is a type whose values hold tokens of a specific type.
The word Ellipse references its shape when drawn.
An Ellipse has two constructors:¶
The Place constructor builds a Petri Net place, which is a structure that holds state in the form of tokens.
A place has a name, a type (or color), and an initial content which defaults to emptiness.¶
Alternatively the Port constructor can be used to build a Petri Net port, which is used to define the interface of a Petri Net module and, like a place, has a name and a type.
A port does not have an initial content but specifies the direction in which tokens are allowed to flow between an outer module instance and a module definition.¶
The input function builds an input arc for a transition.
The inscription is a function that takes a token as input and generates an optional value of the input arc type.
The optionality of the output type permits deciding whether a token can be used in the transition.¶
The type of the codomain of that function may be different from the type of the place, to permit some manipulation of the token itself.¶
These two features can be combined together by using pattern matching inside the inscription.¶
It is not possible for an input arc to take multiple tokens from a place at once, but it is possible to simulate that behavior by adding multiple input arcs that take tokens from the same place.¶
The delay value (which defaults to 0) in an input arc indicates a preemptive time from which a Timed wrapped token will enable the transition.¶
The output function builds an output arc for a transition.
An output arc takes a value of the type created by a transition, converts it using the function (acting as inscription) into a list of tokens, and inserts that list in the destination place.
The list of tokens can be empty if no token has to be added, or can contain one or more tokens.¶
The order of the parameters of that function is the inverse of the input
function to show that the function codomain is the type of the place.¶
An output arc can contain a delay value (which defaults to 0) that will be added to any token that is wrapped in the Timed type.¶
The Module
Monad is used to group together places, ports, transitions and instances of other modules into a module by using the do
notation.¶
The place
function binds an instance of the Place
constructor.
That binding can then be used in transitions and instances.¶
The port
function binds an instance of the Port
constructor.
That binding can then be used in transitions and instances.¶
The transition
function brings together input arcs and output arcs and the function that will convert input tokens into output tokens.¶
The instance function imports another module in this module and assigns a local Place or Port to each Port imported from that module.¶
The Marking type describes a marking, which is a set of places and their content.
It represents the global state of the system described by a TPN.
A marking is indexed over the list of types of the places in it, such that we can guarantee that the marking structure never changes when transitions are fired.¶
The initial state of the system is generated by the initialMarking
function.¶
The enabledTransitions
function generates a list of all the transitions that are enabled by a specific marking.
The algorithm to decide that a transition is enabled is:¶
Filter the content of each place by using the input arc inscription connected to that place.¶
Build the Cartesian product of the filtered place content.¶
Filter out the results where the same token from the same place is used multiple times.¶
Filter out the results by using the transition inscription.¶
If there is one or more result after filtering, then the transition is enabled.¶
If there is no result after filtering, then the transition is disabled.¶
A Binding is a tuple that has the same size as the number of input arcs and that contains a token from each of the respective places, such that substituting the input places in a transition by places that contain only that value will still enable the transition.¶
The bindings
function lists all the possible bindings for a specific marking and an enabled transition.¶
The transition
function transforms a marking into a new marking by applying a specific binding to a transition.
The algorithm is:¶
Remove the tokens in the binding from their respective place in the marking.¶
Apply the binding to the transition, as for the enabledTransitions function.
This will generate a unique tuple, with one value per output arc.¶
Split the tuple per output arc, and apply the corresponding output arc inscription to each value.¶
Concatenate each resulting list to the respective place content in the marking.¶
The deriveType function takes a top-level TPN (as an instance of the type Instance []) and generates the declaration of a new Sum type, with one constructor per transition, plus one constructor for the initial marking.
This type can then serve to define a proof that a list of (transition, binding) tuples is valid according to that Petri Net.¶
On the generated type, the Init constructor builds an initial marking.
The other constructors are then used to validate a sequence of binding elements.
Each non-initial constructor carries a set of proofs: one per input arc proving that the binding originates from the places in the marking, and one proving that this transition is enabled, by showing that the transition using that binding is deterministic.
Finally each transition updates the marking according to the output arcs, i.e., removing and adding tokens.¶
The augmented-ascii-diagram Idris package provides a set of modules that permit generating parts of AsciiDoc documents that conform to the [I-D.mcquistin-augmented-ascii-diagrams] specification.¶
The AAD.Pdu
type is used to define a PDU.¶
TBD.¶
The computerate command line tools are run inside a Docker image, so the first step is to install the Docker software or verify that it is up to date (https://docs.docker.com/install/).¶
Note that for the usage described in this document there is no need for Docker EE or for having a Docker account.¶
The following instructions assume a Unix based OS, i.e. Linux or MacOS. Lines separated by a "\" character are meant to be executed as one single line, with the "\" character removed.¶
To install the computerate tools, the fastest way is to download and install the Docker image using BitTorrent. The BitTorrent magnet URI for the version distributed with this version of the document is:¶
magnet:?xt=urn:btih:4bcf9ae446c26b31d592b2c952770be947dd3ffa&dn=tools-12.tar.xz¶
After this, the image can be loaded in Docker as follows:¶
docker load -i tools-12.tar.xz¶
Note that a new version of the tooling is released at the same time a new version of this document is released, each time with a new BitTorrent magnet URI.¶
computerate Command
The Docker image main command is computerate
, which takes the same parameters as the metanorma
command from the Metanorma tooling:¶
docker run --rm -u $(id -u):$(id -g) -v $(pwd):/computerate \ computerate/tools computerate -t ietf -x txt <file>¶
The differences with the metanorma
command are explained in the following sections.¶
The computerate
command can process Literate Idris files (files with a "lidr" extension, aka lidr files), in addition to AsciiDoc files (files with an "adoc" extension, aka adoc files).
When a lidr file is processed, all embedded code fragments (text between prefix "{`" and suffix "`}") are evaluated in the context of the Idris code contained in this file.
Each code fragment (including the prefix and suffix) is then substituted by the result of that evaluation.¶
The computerate command can process included lidr files in the same way.
The embedded code fragments in an included file are processed in the context of that included lidr file, not in the context of the including file.
Idris modules (from either an idr or a lidr file) can be imported the usual way.¶
The literate code (which is all the text that starts with a ">" symbol in column 1) in a lidr file will not be part of the rendered document.¶
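For example, a lidr file could contain the fragment below; when processed, the literate lines are dropped from the rendered document and the embedded fragment is replaced by the evaluated value, so the paragraph would render as "This document defines 3 messages." (the constant name is of course only illustrative).¶
<CODE BEGINS>
> messageCount : Nat
> messageCount = 3

This document defines {`messageCount`} messages.
<CODE ENDS>¶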
The computerate command can include in a document the result of the generation of the IdrisDoc for a package.
This is done by including a line like this:¶
<CODE BEGINS> include::computerate-specifying.ipkg[leveloffset=+2] <CODE ENDS>¶
The leveloffset
attribute is used to adjust the level of the section generated, as the sections generated always have the level 2.¶
Instead of generating a file based on the name of the input file, the computerate
command generates a file based on the :name:
attribute in the header of the document.¶
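For instance, with the hypothetical header below, the output file is named after draft-author-example-00 regardless of the name of the input file.¶
<CODE BEGINS>
= A Hypothetical Example Protocol
:name: draft-author-example-00
<CODE ENDS>¶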
In addition to the "txt", "html", "xml", and "rfc" output formats supported by metanorma
, the computerate
command can also be used to generate for the "pdf" and "json" formats by using these names with the -x
command line parameter.¶
If the type of document passed to the computerate command (options -t or --type) is one of the following, then the document will be processed directly using asciidoctor, and not metanorma: "html", "html5", "xhtml", "xhtml5", "docbook", "docbook5", "manpage", "pdf", and "revealjs".
The asciidoctor-diagram extension is available in this mode with the following supported diagram types:
"actdiag",
"blockdiag",
"ditaa",
"graphviz",
"meme",
"mscgen",
"nwdiag",
"plantuml",
and "seqdiag".¶
Because most references are stable, there is not much point in retrieving them each time the document is processed, even with the help of a cache, so lookup of external references is disabled.¶
The following command can be used to fetch an RFC reference:¶
tools relaton fetch "IETF(RFC.2119)" --type IETF >ietf.xml¶
The ietf.xml file then needs to be edited by removing the first two lines. After this the xml file can be converted into an AsciiDoc document:¶
tools relaton convert ietf.xml -f asciibib¶
This will generate an ietf.adoc file that can be copied into the bibliography section. Note that the section level of the bibliographic item needs to be one level deeper than the section level of the bibliography section.¶
One exception is a reference to a standard document that is under development, like an Internet-Draft.¶
In that case the best way is to have a separate script that fetches, edits, and converts Internet-Drafts as separate files. Then these files can be inserted dynamically in the bibliography section using includes.¶
The command to retrieve an Internet-Draft reference is as follows:¶
tools relaton fetch "IETF(I-D.bortzmeyer-language-state-machines)" \ --type IETF >bortzmeyer-language-state-machines.adoc¶
Additionally the following sections show how to manually format some common types of bibliographic items, most of them adapted from [RFC-Guide]. The format is described in [AsciiBib].¶
<CODE BEGINS>[%bibitem] === {blank} id:: RFC-STYLE title.content:: RFC Style Guide contributor:: contributor.person.name.completename.content:: Heather Flanagan contributor.role.type:: author contributor:: contributor.person.name.completename.content:: Sandy Ginoza contributor.role.type:: author date.type:: published date.on:: 2014-07-20 link:: link.type:: TXT link.content:: https://www.ietf.org/.../draft-flanagan-style-03.txt docid:: docid.type:: Work docid.id:: in Progress docid:: docid.type:: Internet-Draft docid.id:: draft-flanagan-style-03 <CODE ENDS>¶
<CODE BEGINS>[%bibitem] === {blank} id:: RFC-STYLE2 title.content:: RFC Style Guide contributor:: contributor.person.name.completename.content:: Heather Flanagan contributor.role.type:: author contributor:: contributor.person.name.completename.content:: Sandy Ginoza contributor.role.type:: author date.type:: published date.on:: 2014-09 link:: link.type:: src link.content:: http://www.rfc-editor.org/info/rfc7322 docid:: docid.type:: RFC docid.id:: 7322 docid:: docid.type:: DOI docid.id:: 10.17487/RFC7322 <CODE ENDS>¶
<CODE BEGINS>[%bibitem] === {blank} id:: reftag title.content:: Subject: Subject line contributor:: contributor.person.name.completename.content:: A. Sender contributor.role.type:: author date.type:: published date.on:: 2014-09-05 link:: link.type:: src link.content:: https://mailarchive.ietf.org/.../Ed4OHwozljyjklpAE/ docid:: docid.type:: message to the docid.id:: listname mailing list <CODE ENDS>¶
<CODE BEGINS>[%bibitem] === {blank} id:: IANA-IKE title.content:: Internet Key Exchange (IKE) Attributes contributor.person.name.completename.content:: IANA contributor.role.type:: author link:: link.type:: src link.content:: http://www.iana.org/assignments/ipsec-registry <CODE ENDS>¶
<CODE BEGINS>[%bibitem] === {blank} id:: pysaml2 title.content:: Python implementation of SAML2 date.type:: published date.on:: 2018-03-01 docid:: docid.type:: commit docid.id:: 7135d53 link:: link.type:: src link.content:: https://github.com/IdentityPython/pysaml2 <CODE ENDS>¶
idr and lidr files can be loaded directly in the Idris REPL for debugging:¶
docker run --rm -it -u $(id -u):$(id -g) -v $(pwd):/computerate \ computerate/tools idris2 <lidr-file>¶
It is possible to directly modify the source code in the REPL by entering the :e
command, which will load the file in an instance of VIM preconfigured to interact with the REPL.¶
The idris2-vim add-on (which provides interactive commands and syntax coloring) is augmented with a feature that permits using both Idris and AsciiDoc syntax coloring.
To enable it, add the following line at the end of each lidr file:¶
> -- vim:filetype=lidris2.asciidoc¶
For convenience, the docker image provides the latest version of the xml2rfc, aspell, idnits, and rfc2mn tools.¶
docker run --rm -u $(id -u):$(id -g) -v $(pwd):/computerate \ computerate/tools xml2rfc docker run --rm -u $(id -u):$(id -g) -v $(pwd):/computerate \ computerate/tools idnits --help docker run --rm -u $(id -u):$(id -g) -v $(pwd):/computerate \ computerate/tools aspell docker run --rm -u $(id -u):$(id -g) -v $(pwd):/computerate \ computerate/tools java -jar /opt/rfc2mn/rfc2mn-1.0.jar¶
The following sections list the tools distributed in the Docker image that have been modified for integration with the computerate
tool.¶
0.4.0 commit 2a5739c¶
Files with the lipkg extension are processed as literate .ipkg files.¶
An Idris file can be used in scripting mode by adding a shebang line.¶
The interactive command :gc permits displaying the result of an elaboration.¶
The types in TTImp can carry the documentation for the types that will be generated from them.¶
The %cacheElab directive permits caching the result of an elaboration in the source code instead of regenerating it at each type-checking.¶
The --dg asciidoc
option can be used to generate on stdout the package documentation in AsciiDoc instead of HTML.¶
Elaborations can be exported and documented.¶
The package and depends fields in an ipkg file can use quoted strings.¶
--paths
now displays the paths after modification.¶
Replace the literate processor by a faster one. Remove support for reversed Bird style.¶
2.0.16¶
2.3.6¶
commit 964cebe¶
The current version of Docker in Ubuntu fails, but this can be fixed with the following commands:¶
sudo apt-get install containerd.io=1.2.6-3 sudo systemctl restart docker.service¶
:gc is currently broken.¶
Docstrings are not generated correctly.¶
Interactive commands are missing or not working well with literate code.¶
Changing the installation prefix requires two installations.¶
Documentation not generated for namespaces and record fields.¶
RFC and I-D references are not correctly generated by relaton. The workaround is to remove the IETF docid and to add the following:¶
Add documentation support for all types in TTImp.¶
:gc!
should update the file.¶
%cacheElab
should check hashes.¶
Add a way to generate a hole name.¶
Extract bibliography from computerate specification.¶
Generate xml2rfc <br>, <contact>, <cref> and <u> elements.¶
Starting vim in docker often results in an invalid terminal size when a file is loaded. Using the following command line solves the problem:¶
docker run --rm -it -u $(id -u):$(id -g) -e COLUMNS=$(tput cols) \ -e LINES=$(tput lines) -v $(pwd):/computerate computerate/tools \ vim <lidr-file>¶
This future tool will be able to convert an xml2rfc v3 file into an AsciiDoc file. It will also be able to update an already converted file without losing the Idris annotations.¶
The Builtin Computerate Specification Standard Library.¶
A module with generic types that can be used to track identifiers and references.¶
Concatenate the second bit-vector after the first one.¶
Bitwise and between bit-vectors of identical size.¶
Extract a bit-vector.¶
Bitwise not of a bit-vector.¶
Bitwise or between bit-vectors of identical size.¶
Shift the bit-vector to the left by n bits, inserting zeros.¶
Shift the bit-vector to the right by n bits, inserting zeros.¶
Return a boolean that is True if the bit at position m is set.¶
Bitwise xor between bit-vectors of identical size.¶
A module that defines types, constants and operations on denominate numbers.¶
The multiplication operation between denominate numbers.¶
The addition operation between denominate numbers.¶
The subtraction operation between denominate numbers.¶
The division operation between denominate numbers.¶
The type of a denominate number for the data dimension.¶
The type of a dimensionless denominate number¶
The type of a denominate number for the time dimension.¶
Bit, the base unit of data.¶
The byte unit, as 8 bits.¶
The day, as unit of time.¶
Convert a denominate number into a tuple made of the dimensionless value (as a Double
) calculated after applying a unit, and that unit.¶
Generate all the IEC units based on the bit, from kibibit to yobibit.¶
Generate all the SI units based on the bit, from kilobit to yottabit.¶
Generates all the SI units based on the second, from yoctosecond to yottasecond.¶
The hour, as unit of time.¶
The minute, as unit of time.¶
The negation operation of a denominate number.¶
The unit for a dimensionless denominate number.¶
The octa unit, as 64 bits.¶
The reciprocal operation of a denominate number.¶
Second, the base unit of time.¶
The tetra unit, as 32 bits.¶
The wyde unit, as 16 bits.¶
A module used to generate an AsciiDoc fragment that can be inserted in a metanorma-ietf document with the goal of generating a valid xml2rfc v3 document.¶
A block of text¶
Implements Show.¶
Implements Show.¶
Implements Eq, Show.¶
Type to build inline elements.¶
Implements Show.¶
Plain text.¶
A hard break. NOTE: Not currently supported by metanorma-ietf.¶
Contact information. NOTE: Not currently supported by metanorma-ietf.¶
A comment. NOTE: Not currently supported by metanorma-ietf.¶
A list of inline elements to be rendered in italics.¶
An embedded URI.¶
An indexed term.¶
A citation, i.e. a crossreference to a bibliographic reference. NOTE: Not currently supported by metanorma-ietf.¶
The anchor for the bibliographic reference.¶
A list of inline elements to be rendered in bold.¶
A list of inline elements to be rendered in subscript.¶
A list of inline elements to be rendered in superscript.¶
A list of inline elements to be rendered in monospace.¶
One or more unicode characters. NOTE: Not currently supported by metanorma-ietf.¶
A crossreference to an anchor in this document.¶
The URI.¶
An AsciiDoc attribute¶
A module that defines types for Petri Net.¶
A module.¶
Implements Functor, Applicative, Monad.¶
Declare a new instance of a module, checking that the places used are declared in this module.¶
List all the bindings from a marking and a transition.¶
Transition to a new marking¶
List all the enabled transitions from a marking.¶
An input arc.¶
Declare an instance of a module.¶
An output arc.¶
Declare a place local to this module.¶
Declare a port local to this module.¶
An inhibitor arc.¶
The ellipse that will be emptied when the transition is triggered.¶
Wrap a value into a timed value.¶
The time of this token, defaults to 0.¶
Declare a transition.¶
The name of the transition. Must be unique across all place, port, transition, and instance names in a module.¶
The list of input arcs.¶
The list of output arcs.¶
A function that converts the input tokens into output tokens.¶
The delay added by the transition.¶
A module to transform values structured as trees, with specialization to transform types via elaboration.¶
A selection path¶
Add a value as a sibling to values in a tree that are selected by a path.¶
Add a binding between a codepoint and a type in an extended registry¶
An unsigned number with a length.¶
An unsigned integer is just a wrapper around a bit-vector of the same size.¶
For sanity's sake, this type always assumes that the value of the first bit is 2 ^ (m - 1), with m the size of the unsigned int. In other words the first bit is the MSB, and the last bit (the one closest to Nil) is the LSB.¶
Implements Num, Integral, Eq, Ord, Size.¶
The ABNF Core rules.¶
An ASCII alphabetic character.¶
A "0" or "1" ASCII character.¶
Any ASCII character, starting at SOH and ending at DEL.¶
A Carriage Return ASCII character.¶
A Carriage Return ASCII character, followed by the Line Feed ASCII character.¶
Any ASCII control character.¶
Any ASCII digit.¶
A double-quote ASCII character.¶
Any hexadecimal ASCII character, in lower and upper case.¶
A Horizontal Tab ASCII character.¶
A Line Feed ASCII character.¶
A potentially empty string of spaces, horizontal tabs, or line terminators, the latter each followed by a space or horizontal tab.¶
An 8-bit value.¶
An ASCII space.¶
A printable ASCII character.¶
A potentially empty string of spaces or horizontal tabs.¶
A module to generate a valid ABNF.¶
Implements Show.¶
A module to generate augmented packet header diagrams.¶
A boolean expression¶
Implements ShowPrec, Show.¶
An expression¶
Implements ShowPrec, Show.¶
This module provides types for Internet Protocol Address.¶
Types for the Internet Protocol.¶
Internet Protocol Header.¶
Internet Protocol Header Options.¶
Implements Size.¶
End of Options.¶
No operation.¶
Security Option.¶
Loose Source and Record Route Option.¶
In an effort to quantify the potential benefits of using formal methods at the IETF, an effort [rfc-prolog] to relabel the Errata database is under way.¶
The relabeling uses the following labels:¶
Label | Description |
---|---|
AAD | Error in an ASCII bit diagram |
ABNF | Error in an ABNF |
Absent | The erratum was probably removed |
ASN.1 | Error in ASN.1 |
C | Error in C code |
Diagram | Error in a generic diagram |
Example | An example does not match the normative text |
Formula | Error preventable by using Idris code |
FSM | Error in a State machine |
Ladder | Error in a ladder diagram |
Rejected | The erratum was rejected |
Text | Error in the text itself, no remedy |
TLS | Error in the TLS language |
XML | Error in an XML Schema |
At the time of publication the first 1600 errata, which represent 25.93% of the total, have been relabeled. Of these, 135 were rejected and 51 were deleted, leaving 1414 valid errata.¶
Label | Count | Percentage |
---|---|---|
Text | 977 | 69.09% |
Formula | 118 | 8.34% |
Example | 112 | 7.92% |
ABNF | 71 | 5.02% |
AAD | 49 | 3.46% |
ASN.1 | 40 | 2.82% |
C | 13 | 0.91% |
FSM | 13 | 0.91% |
XML | 12 | 0.84% |
Diagram | 6 | 0.42% |
TLS | 2 | 0.14% |
Ladder | 1 | 0.07% |
Note that as the relabeling is done in order of erratum number, at this point it covers mostly older RFCs. A change in tooling (e.g. ABNF verifiers) means that these numbers may drastically change as more errata are relabeled. But at this point it seems that 31.89% of errata could have been prevented with a more pervasive use of formal methods.¶
As explained in this document, for now the workflow is to prepare a Colored Petri Net with the cpntools software, and then manually translate that Petri Net into an Idris Type using the library_tpn (Section 8.1.7) module, as explained in the following sections.¶
Colored Petri Nets are explained in [Jensen09] and in [Cpntools]. [Aalst11] is also a good introduction to Colored Petri Nets.¶
CPN adds some restrictions on the types that can be used in a Petri Net because of limitations in the underlying programming language, SML. As the underlying programming language used in TPN, Idris, does not have these limitations, any well-formed Idris type (including polymorphic, linear and dependent types) can be directly used in a TPN.¶
The following sections explain how to convert a CPN Color Set into an Idris type. It refers to webpages at [Cpntools], and the Idris examples shown below are translations of the CPN ML examples in these pages. CPN's with clauses can be translated as added constraints to simple dependent types.¶
See http://cpntools.org/2018/01/09/unit-color-set/ for the definition of the unit color set.¶
The unit color set can be replaced by the () type:¶
<CODE BEGINS>
U : Type
U = ()
<CODE ENDS>¶
For unit color sets using a with clause, a new type can be created:¶
<CODE BEGINS>
data E = MkE
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/boolean-color-set/ for the definition of the bool color set.¶
The bool color set can be replaced by the Bool type.¶
<CODE BEGINS>
B : Type
B = Bool
<CODE ENDS>¶
For bool color sets using a with clause, a new type can be created:¶
<CODE BEGINS>
data Answer = No | Yes
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/integer-color-sets/ for the definition of the int color set.¶
The int color set can be replaced by the Int type.¶
<CODE BEGINS>
INT : Type
INT = Int
<CODE ENDS>¶
For int color sets using a with clause, a dependent type can be created:¶
<CODE BEGINS>
data SmallInt : Type where
  MkSmallInt : (i : Int) -> i >= 1 && i <= 10 = True => SmallInt
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/large-integer-color-sets/ for the definition of the intinf color set.¶
The intinf color set can be replaced by the Integer type.¶
<CODE BEGINS>
INTINF : Type
INTINF = Integer
<CODE ENDS>¶
For intinf color sets using a with clause, a dependent type can be created:¶
<CODE BEGINS>
data SmallLargeInt : Type where
  MkSmallLargeInt : (i : Integer) -> i >= 1 && i <= 10 = True => SmallLargeInt
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/real-color-sets/ for the definition of the real color set.¶
The real color set can be replaced by the Double type.¶
<CODE BEGINS>
R : Type
R = Double
<CODE ENDS>¶
For real color sets using a with clause, a dependent type can be created:¶
<CODE BEGINS>
data SomeReal : Type where
  MkSomeReal : (d : Double) -> d >= 1.0 && d <= 10.0 = True => SomeReal
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/string-color-sets/ for the definition of the string color set.¶
The string color set can be replaced by the String type.¶
<CODE BEGINS>
S : Type
S = String
<CODE ENDS>¶
For string color sets using a with clause, a dependent type can be created:¶
<CODE BEGINS>
data LowerString : Type where
  MkLowerString : (s : String) ->
    all (\c => c >= 'a' && c <= 'z') (unpack s) = True => LowerString
<CODE ENDS>¶
Similarly for string color sets using an and clause:¶
<CODE BEGINS>
data SmallString : Type where
  MkSmallString : (s : String) ->
    all (\c => c >= 'a' && c <= 'd') (unpack s) = True =>
    length s >= 3 && length s <= 9 = True => SmallString
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/enumeration-color-set/ for the definition of the with color set.¶
A with color set can be implemented as a Sum type:¶
<CODE BEGINS>
data Day = Mon | Tues | Wed | Thurs | Fri | Sat | Sun
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/index-color-sets/ for the definition of the index color set.¶
An index color set can be implemented as a dependent type:¶
<CODE BEGINS>
data PH : Type where
  MkPH : (i : Nat) -> i >= 1 && i <= 5 = True => PH
<CODE ENDS>¶
Compound color sets are color sets that combine simple color sets and other compound color sets together.¶
See http://cpntools.org/2018/01/09/product-color-sets/ for the definition of the product color set.¶
The product color set can be replaced by the Pair type, which can also be represented as a tuple.¶
<CODE BEGINS>
P : Type
P = (U, I)
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/record-color-sets/ for the definition of the record color set.¶
The record color set can be replaced by a record type.¶
<CODE BEGINS>
record PACK where
  se : SITES
  re : SITES
  no : INT
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/list-color-sets/ for the definition of the list color set.¶
The list color set can be replaced by a List a type.¶
<CODE BEGINS>
INTlist : Type
INTlist = List INT
<CODE ENDS>¶
For list color sets using a with clause, a dependent type can be created:¶
<CODE BEGINS>
data ShortBoolList : Type where
  MkShortBoolList : (l : List Bool) ->
    length l >= 2 && length l <= 4 = True => ShortBoolList
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/union-color-sets/ for the definition of the union color set.¶
The union color set can be replaced by a Sum type.¶
<CODE BEGINS>
data Packet : Type where
  Data : Data -> Packet
  Ack : Packet
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/subset-color-sets/ for the definition of the subset color set.¶
A subset color set can be replaced by a dependent type:¶
<CODE BEGINS>
data EvenInt : Type where
  MkEvenInt : (i : Int) -> i `mod` 2 = 0 => EvenInt
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/alias-color-sets/ for the definition of the alias color set.¶
The alias color set can be replaced by a type function:¶
<CODE BEGINS>
WholeNumber : Type
WholeNumber = INT

DayOff : Type
DayOff = Weekend
<CODE ENDS>¶
See http://cpntools.org/2018/01/09/timed-color-sets/ for the definition of timed color sets.¶
To convert a Timed color set, just wrap it in the Timed type.¶
<CODE BEGINS>
IT : Type
IT = Timed Int

P : Type
P = Timed (Bool, IT)
<CODE ENDS>¶
Converting a CPN Place is straightforward. It is done by using the place function.¶
The name of the place goes into the first parameter of the function. Note that all places in a module must have different names.¶
The color, after conversion as explained in Appendix D.1, goes into the second parameter.¶
The marking initialization, after conversion into a List of the place type, goes into the implicit parameter init, which by default takes the empty value.¶
<CODE BEGINS>
packetsToSend <- place "Packets To Send" NoxData
  {init=[(1, "COL"), (2, "OUR"), (3, "ED "), (4, "PET"), (5, "RI "), (6, "NET")]}
<CODE ENDS>¶
Note that some of the tokens in use in Petri Net places are meant to represent network PDUs. It is recommended to use abstract types for these instead of wire types, and to provide a proof of isomorphism as explained in Section 5.3.1.¶
Converting a CPN transition into a TPN transition is done by using the transition function. This function takes 4 parameters: a name, a list of input arcs, a list of output arcs, and a function that combines both the unification of bindings and the execution of the transition's guard.¶
In a Timed TPN, the implicit delay parameter can be set to the time that will be added to the tokens generated by this transition.¶
Before starting the conversion, CPN double-headed arcs need to be duplicated, once as an input arc and once as an output arc, but with the same inscription.¶
Before the CPN input arc conversion, arcs that can take multiple tokens at once from a place must be duplicated, one for each token.¶
Each TPN input arc is built by the input function. This function takes 3 parameters: a place, a type, and a function that is converted from the inscription.¶
In a Timed TPN, the implicit delay parameter can be set to the preemptive time by which a timed token can be removed from the place.¶
The type of the TPN input arc must be a tuple of the type of each variable used by the CPN input arc inscription. The domain of the function is the type of the place, its codomain is the type of the input arc.¶
The function itself takes as input one token from the place and returns an optional tuple of values, one for each variable. The returned value is optional so the function can indicate that no tokens are valid for that transition.¶
<CODE BEGINS>
input packetsToSend (No, Data) (\x => case x of (n, d) => pure (n, d))
<CODE ENDS>¶
In that case the function, defined as a lambda, can be simplified as just pure.¶
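For instance, the input arc of the previous example then reduces to the form that is also used in the "send packet" transition later in this appendix:¶
<CODE BEGINS>
input packetsToSend (No, Data) pure
<CODE ENDS>¶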
Although they are not described in the cpntools web site, inhibitor arcs are implemented in cpntools.¶
Each TPN inhibitor arc is built by the inhibitor function. This function does not carry an inscription and must be considered as generating a () token when the place is empty.¶
Although they are not described in the cpntools web site, reset arcs are implemented in cpntools.¶
Each TPN reset arc is built by the reset function. This function does not carry an inscription and must be considered as always generating a () token that will empty the place when the transition is triggered.¶
Each TPN output arc is built by the output function. This function takes 3 parameters: a type, a place, and a function that is converted from the inscription.¶
In a Timed TPN, the implicit delay parameter can be set to the time that will be added to the tokens generated by this arc.¶
The type of the TPN output arc must be a tuple of the type of each variable used by the CPN arc inscription.¶
The function itself takes as input a tuple of values, one for each variable used by the CPN arc function, and returns a list of values to be inserted in the place. The returned value is a list so that zero, one, or more tokens can be inserted at once.¶
E.g., the output arc at the bottom right of Figure 1 of [Jensen07] can be translated as follows:¶
<CODE BEGINS>
output (No, No) c (\(n, k) => if n == k then pure (k + 1) else pure k)
<CODE ENDS>¶
The function in a TPN transition is assembled from two parts in the CPN transition: from the unification of variables defined in CPN input arcs with the same name and from the guard boolean expression.¶
The variables to unify are all the variables that hold the same name but in different input arc inscriptions. Unification is then done by verifying that these variables are equal, and by returning empty if they are not. Note that in TPN, variables are indexed by position in a tuple, not by name.¶
E.g., the unification part for the transition in the top left of Figure 1 of [Jensen07] can be translated as follows:¶
<CODE BEGINS>transition "send packet" [input packetsToSend (No, Data) pure, input nextSend No pure] [output (No, Data) packetsToSend pure, output No nextSend pure, output (No, Data) a pure] (\((n, d), n') => if n /= n' then empty else pure (n, d)) <CODE ENDS>¶
Here n is the n coming from the PacketsToSend place, whereas n' is coming from the NextSend place.¶
Converting guards is straightforward: the boolean expression is simply tested after unification, and the function also returns empty if the result is false.¶
<CODE BEGINS>transition "remove packet" [input packetsToSend (No, Data, Time) pure input nextSend No pure] [output No nextSend pure] (\((n, d, t), k) => if n < k then empty else pure n) <CODE ENDS>¶
Because unification is already translated as a boolean expression, it is easy to combine it with the guard boolean expression.¶
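As an illustration, a minimal sketch (assuming the No color supports numeric literals and ordering) that combines the unification of the "send packet" example with a guard requiring n to be smaller than 10 could be written as:¶
<CODE BEGINS>
(\((n, d), n') => if n == n' && n < 10 then pure (n, d) else empty)
<CODE ENDS>¶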
Each free variable in a CPN transition must be converted into a new place, a new input arc, and a new output arc.¶
The new place is of the type of the variable, and its initial content is all the possible values for that type.¶
The new input arc uses the new place, the same type as the place, and its function is pure.¶
The new output arc uses the same type as the place, the new place, and its function is also pure.¶
Note that places with the same type must not be shared between input arcs that are connected to the same transition.¶
<CODE BEGINS>
do allbools <- place "AllBools" Bool [True, False]
   transition ""
     [input allbools Bool pure, ...]
     [output Bool allbools pure, ...]
     (...)
<CODE ENDS>¶
This document uses a special interpretation of Programs and Types that permits building evidence-based answers to the kind of questions that a network protocol designer would ask of their designs.¶
Although that interpretation is not new, few textbooks are available to concretely learn it, and even when available, these textbooks generally take the long road by choosing to teach Constructive Logic first and then apply these teachings to Programs and Types. As there is in fact an even longer road that would lead from Fibred Category Theory to Constructive Logic and then to Programs and Types, it is reasonable to think that there should be a shortcut that permits starting directly with Programs and Types, especially when the target audience is programmers, a segment of the technical population that is known to dislike mathematics.¶
Still, the mathematically inclined or the non-programmer can look at [Nederpelt14], [Bornat05], or [Mimram20] for an approach based on mathematics.¶
Basically the goals of that interpretation of Programs and Types are:¶
To answer any question with either "Yes", "No", or "Don't know".¶
To ensure that any "Yes" or "No" answer is provided with evidence.¶
To use programming to achieve these goals.¶
The kinds of questions for which a network protocol designer may want such evidence-based answers are many:¶
Is my new version of the protocol safe to interoperate with the previous versions?¶
Is the protocol free of deadlock?¶
Are there corner cases that I neglected to take into account?¶
Is that faster but more complex protocol equivalent to the slower but simpler original protocol?¶
Notice that when we talk about evidence-based answers, we exclude by definition any answer that has a probability different from 0.0 or 1.0, and furthermore exclude evidence-free answers like the ones given by AI/ML.¶
As a consequence, we have to admit that there are questions that do not have an evidence-based answer. That could be for a short list of reasons:¶
We have not yet looked in the literature to see whether there is an existing answer to a particular question.¶
Nobody yet tried to find an answer to that question.¶
Nobody found an answer to the question yet.¶
There is no answer to that question.¶
There is clearly a question of locality of our knowledge at play here, and we are not pretending to get to some absolute truth with this technique.¶
In the 90s came this new idea that it was possible to use the C++ type system to encode calculations. A famous example was generating all the prime numbers during the compilation of a C++ program. The result was provided as a result of compiling the program, and the compiled program itself was irrelevant to get that result. This was done by reinterpreting the type system into a computational system.¶
Here we are going to do the same thing, and reinterpret the type system of a programming language, Idris, as a way to encode our questions.¶
As we will see, to be able to do this reinterpretation, the type system needs to be stronger than in a traditional programming language so as to be able to encode a large variety of questions. We will also see that, paradoxically, the computational power of our programming language needs to be reduced to be sure that the evidence for our answers is valid.¶
One defining feature of that programming language is that the compilation step, which in traditional programming languages is monolithic, is here split into two separate steps:¶
The typechecking step takes a set of source files and verifies that all values in these sources (including the code as a value of the function type) can be assigned to the correct type. Because of the complexity of the type system, an Idris interpreter is used to evaluate expressions during the typechecking step.¶
The code generation step generates executable code.¶
As our interpretation relies only on what happens in the typechecking step, we have no use for the second step of the compilation process.¶
The cornerstone of our new interpretation is that the evidence that the answer to a question is Yes is a value of the type that encodes that question. We will see later that the evidence that the answer is No is the inability to produce a value of a type.¶
Although there is no real usage for these, if we interpret the basic types in our programming languages as questions, then the answers to these are always Yes, because we can always find a value for these types:¶
<CODE BEGINS>
1 : Int
"s" : String
<CODE ENDS>¶
In Idris a value of a type is written first, then followed by a colon and by the type of that value.¶
Note that it does not matter if you can find one or two million different pieces of evidence - the answer is still Yes. The exact value we pick as evidence is absolutely irrelevant, which is something that may seem strange to a programmer.¶
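For instance, both of the following values (the names below are only illustrative) are equally good evidence that the answer to the question Int is Yes:¶
<CODE BEGINS>
someEvidence : Int
someEvidence = 1

otherEvidence : Int
otherEvidence = 2
<CODE ENDS>¶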
This is why basic types are not really interesting in our interpretation, as their answer is always Yes.¶
Idris is a pure functional programming language, so functions are first class citizens of the language, and their type is called a Function Type.¶
The interpretation of a Function Type is that of an implication. Implications are a form of "if P then Q" statement, that says something about the relationship between two other Types, here P and Q.¶
In Idris the Function Type is represented as an arrow that separates the first type (sometimes called the domain of the function) from the second type (sometimes called the codomain of the function).¶
<CODE BEGINS>
P -> Q
<CODE ENDS>¶
To answer the question P -> Q we need to find a value of that type. A value of a Function Type is a program, so a program that takes values of P as parameter and returns a value of Q is evidence that the answer to P -> Q is Yes. Another equivalent reading would be "Assuming that we can provide values for P and values for Q, then can we provide a function that typechecks?"¶
Notice again that there may be many programs that fulfill that condition, but again that is irrelevant, as we need only one to serve as evidence.¶
We can easily produce evidence of that, let's say using Int and String as our types:¶
<CODE BEGINS>\x => "a" : Int -> String <CODE ENDS>¶
The expression on the left of the colon is a lambda expression. x will be bound to whatever value of Int is passed as parameter, and the function will return the String "a".¶
Note that this works only because Idris is a pure functional language, meaning that a function can only use the values passed as parameters in its evaluation of the returned value. Side effects or global variables are not available in a pure functional language.¶
A function in Idris can only take one parameter, but it is possible to return a function, which permits simulating a multi-parameter function (this is known as currying):¶
<CODE BEGINS>
\x => \y => True : Int -> (String -> Bool)
<CODE ENDS>¶
Function types associate to the right, so the parentheses in the example above are not really necessary.¶
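That is, the evidence above can equivalently be written without the parentheses:¶
<CODE BEGINS>
\x => \y => True : Int -> String -> Bool
<CODE ENDS>¶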
Functions in Idris can also take a function as parameter, which permits encoding the classical question:¶
"Socrates is a human, all humans are mortals, is Socrates a mortal?"¶
We can encode this in the Idris type system:¶
<CODE BEGINS>
data Human : Type
data Mortal : Type

isSocratesMortal : Human -> (Human -> Mortal) -> Mortal
<CODE ENDS>¶
Notice that here the parentheses are mandatory. The question can be read like this:¶
"Assuming Socrates is a Human, and assuming that all Humans are Mortals, then is Socrates a Mortal?"¶
The evidence is easy to find:¶
<CODE BEGINS>
isSocratesMortal : Human -> (Human -> Mortal) -> Mortal
isSocratesMortal = \h => \f => f h
<CODE ENDS>¶
One important point is that we are not trying to say that Socrates is a Human (maybe he was an alien). Similarly we are not trying to say that there is an absolute rule that all humans are mortals (in fact there is evidence that, at the time of writing, the human author of this document was immortal).¶
What we are saying is that assuming that we have evidence of a human (Socrates in that case) and assuming that we have evidence that all humans are mortals, then the only conclusion is that, Yes, Socrates is mortal, and the evidence for this is the program \h => \f => f h.¶
Note that in a function definition, the parameters can be moved to the left hand side (LHS) of the equal sign, like this:¶
<CODE BEGINS>
isSocratesMortal : Human -> (Human -> Mortal) -> Mortal
isSocratesMortal h f = f h
<CODE ENDS>¶
In some cases, questions can be made more general and still have a unique answer. This is the case for the question explored in the previous section, where the question can be generalized to something called syllogism (also known as Modus Ponens).¶
Polymorphism permits substituting a type with a variable that stands for any type. Thus finding evidence shows that the answer is Yes for a whole family of related questions.¶
Here we express that new generic question (the answer is the same) as this:¶
<CODE BEGINS>
syllogism : p -> (p -> q) -> q
syllogism x f = f x
<CODE ENDS>¶
An identifier that starts with a lowercase character in an Idris type stands for all possible types.¶
Here we have evidence that a question with this particular shape can always be answered with Yes.¶
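As an illustration, the Socrates question of the previous section can now be answered by simply reusing syllogism (the primed name below is only illustrative):¶
<CODE BEGINS>
isSocratesMortal' : Human -> (Human -> Mortal) -> Mortal
isSocratesMortal' = syllogism
<CODE ENDS>¶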
We saw previously that any value of a type is evidence of the Yes answer to the question encoded in that type. So the absence of a value for a type is evidence that the answer is No.¶
We have a problem here, as the evidence of No is that we cannot provide evidence. But, from our local point of view, there is no difference between the fact that there is no evidence, and the fact that we did not search hard enough for the evidence.¶
We can work around this by using a property of implication, which is that only a type with a No answer can imply a type with a No answer. So if we can implement a function (the Yes answer to the implication) between a type and an empty type (i.e., a type with a No answer), then we know that the former type is empty and that the answer it represents is also No.¶
Idris provides an empty type for that: Void (not to be confused with the Java type Void, which is not an empty type).¶
<CODE BEGINS>
noEvidence : Int -> Void
noEvidence x = ?aa
<CODE ENDS>¶
Here we cannot complete the program because we cannot produce a value of type Void, and that's because Int has Yes as answer.¶
In Idris, names that start with a question mark are called holes and stand for a part of the program that we cannot or did not yet complete.¶
<CODE BEGINS>
data Empty : Type where

emptyIsNo : Empty -> Void
emptyIsNo x impossible
<CODE ENDS>¶
Here we can write a program that shows that Empty is equivalent to Void, this program acting as evidence that there is no evidence for Empty, and so that the answer is No.¶
Programmers will again be intrigued that a program that typechecks cannot be executed or tested.¶
The possibility of defining a type like Void that does not have any values by definition is one of the reasons we need a different type system than the one used in most programming languages (most programming languages permit the use of null as a value for any type).¶
We also touched on the fact that our programming language must be less powerful than usual, and it is also related to the answer No.¶
An implication to a type that contains at least one value is a function that returns such a value. But there are two cases where that function could fail to return that value, and thus acts as if the returned type is empty, and thus represents No instead of Yes.¶
The first case is if the function crashes because it does not know how to handle the value passed as parameter. A simple example would be a function that divides 1 by the parameter - if the parameter is zero then the function will crash, which for the purpose of our interpretation is equivalent to evidence of No because no value will be returned. To prevent that problem our programming language should cover all input values, i.e. not typecheck if there are cases not covered.¶
The second case is when, for some reason, the code gets stuck inside the function, e.g., because of an infinite loop. That would again be equivalent to evidence of No.¶
Idris prevents these two cases by using the total keyword, which basically turns Idris into a non-Turing Complete language.¶
Note that there is no possibility to write code that will detect, for any possible code, whether it will loop or not. That's why Idris may reject some code that will not loop, but it will never accept code that will loop.¶
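As a minimal sketch (the function name is only illustrative), a looping definition like the following is rejected by the totality checker, which prevents it from being used as bogus evidence of No:¶
<CODE BEGINS>
total
loops : Int -> Void
loops x = loops x
<CODE ENDS>¶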
Because there is not much difference between a No answer without evidence and not finding an answer, it's often useful to check and recheck that the question really expresses what we intended.¶
In the previous section we showed that syllogisms always have an answer of Yes. There is a series of fallacies [Bennett15] that are closely related to syllogisms, and here's one of them:¶
<CODE BEGINS>
syllogism : p -> (q -> p) -> q
<CODE ENDS>¶
That can be read as "Assuming Socrates is a Human, and assuming that all Mortals are Humans, then is Socrates a Mortal?" It may seem obvious that we cannot answer that question, so we may be able to get a No answer by rewriting the question that way:¶
<CODE BEGINS>
syllogistic_fallacy : (p -> (q -> p) -> q) -> Void
<CODE ENDS>¶
But in spite of our efforts, we cannot provide evidence of that, which means that it is time to look closer at our question.¶
The issue is that for this to be a fallacy, we need to assume that there is no evidence that all Humans are Mortals, which the previous question does not say. With this modified question, we can now produce evidence that it is indeed a fallacy:¶
<CODE BEGINS>
syllogistic_fallacy : ((p -> q) -> Void) -> (p -> (q -> p) -> q) -> Void
syllogistic_fallacy f g = f (\x => g x (\y => x))
<CODE ENDS>¶
The Product type permits combining two or more questions such that the question represented by this type will have an answer of Yes only if all the combined questions also have an answer of Yes.¶
The simplest Product type in Idris is the tuple, which is represented as a list of types separated by commas and enclosed in parentheses:¶
<CODE BEGINS>
product : (String, Int, Char) -> Bool
product (x, y, z) = True
<CODE ENDS>¶
The evidence has the same form as the type.¶
We can also provide evidence that the form above is equivalent to its curried form in general, and vice-versa:¶
<CODE BEGINS>
curry : ((a, b) -> c) -> (a -> b -> c)
curry f x y = f (x, y)

uncurry : (a -> b -> c) -> ((a, b) -> c)
uncurry f x = f (fst x) (snd x)
<CODE ENDS>¶
The Sum type is a way to combine two or more questions such that the question represented by the Sum type will have an answer of Yes if at least one of the questions has an answer of Yes.¶
The simplest Sum type for two questions in Idris is Either a b.¶
<CODE BEGINS>
sum : Either String Void -> Bool
sum x = True
<CODE ENDS>¶
We can combine Sum and Product types to reorganize a question and show evidence that the answer is general.¶
<CODE BEGINS>
dist : (a, Either b c) -> (Either a b, Either a c)
dist x = (Left (fst x), Left (fst x))
<CODE ENDS>¶
Sum and Product combined with negation gives us more general answers:¶
<CODE BEGINS>
dm1 : (Either (a -> Void) (b -> Void)) -> ((a, b) -> Void)
dm1 (Left x) y = x (fst y)
dm1 (Right x) y = x (snd y)

dm2 : (a -> Void, b -> Void) -> ((Either a b) -> Void)
dm2 x (Left y) = fst x y
dm2 x (Right y) = snd x y

dm3 : ((Either a b) -> Void) -> (a -> Void, b -> Void)
dm3 f = (\x => f (Left x), \x => f (Right x))
<CODE ENDS>¶
TBD.¶
TBD.¶
TBD.¶
TBD.¶
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.¶
— Melvin E. Conway¶
One long-term goal of this document is to establish a library of exported specifications for network protocols, much like the Lean Mathematical Library [Community20]. But unlike mathlib, which is built as a single git repository hosted in GitHub, the computerate library is built to mirror the intended distributed design of the Internet.¶
Computerate specifications are composed of code, so using a git repository to store that code and its evolution seems the right choice. But instead of using a single repository, each computerate specification is stored in a separate git repository. This permits each contributor to choose how they will provide public access to each git repository, anywhere in the spectrum from hosting providers (GitHub, GitLab, BitBucket...) to distributed services (IPFS, Radicle...), and including self-hosted servers (Gitolite, GitLab...).¶
Being able to export specifications would be useless without the ability to import other specifications. This requires the use of dependencies, such that the graph of dependencies between specifications will grow until it mirrors the graph of normative references between standards documents.¶
Here also we eschew the usual solution to that issue, which is using a centralized artifact repository (Maven, Gem, Apt, ...), in favor of a distributed solution, both for the storage of the binary artifacts and for the resources needed to build and verify them.¶
The solutions developed to fulfill these requirements ensure better availability, scalability and freedom than if it was designed as a single GitHub repository.¶
gits Protocol
It's not that I have something to hide. I have nothing I want you to see.¶
— Amanda Seyfried's character in Anon (2018)¶
Among all transport protocols that can be used to fetch commits, the Git protocol is the fastest as it runs directly over a TCP connection. It is directly implemented by the git daemon command. For this reason it is the best choice to make a git repository public in read-only mode.¶
But, although everyone can fetch commits from such a repository, the transfer of the commits themselves is not encrypted. The gits transport protocol solves that issue by replacing the TCP connection with a TLS connection. On the client side, the remote helper git-remote-gits is provided as part of the tooling. On the server side, the simplest solution is to use stunnel together with git daemon.¶
mgit and mgits Protocols
The mgit protocol provides an indirection to a list of git URLs, all pointing to identical mirrors of the same repository. An mgit URL provides scalability, reliability, and the ability to add and remove git mirrors without having to change the URL itself. Note that mirrors can use any kind of git URL (http, ssh, git, etc.), including gits URLs.¶
An mgit URL has the format "mgit://<random>", where <random> is a string of 40 hexadecimal characters. Such a URL is very stable, and can be published in a document for the purpose of retrieving the git repository holding the computerate specification that was used to generate that document.¶
Internally that string is used as the index to store the list of URLs pointing to the git mirrors in a RELOAD [RFC6940] P2P Overlay. Note that storing such list of URLs in the overlay requires credentials.¶
The remote helper git-remote-mgit is provided as part of the tooling. When used to fetch commits, it first contacts the overlay to retrieve the list of mirrors, then chooses one randomly and fetches the commits from there. If the mirror does not respond, another mirror is randomly selected from the list, until all mirrors have been tried.¶
The git-remote-mgits remote helper behaves similarly, but always excludes URLs that do not encrypt the transport.¶
A specification is a set of files that, together, are used as input to a process that generates a document. The subset of Idris files in that set forms an Idris Package. One and only one Idris package is stored per Git repository.¶
For specifications defined by this document, dependencies on other Idris Packages are defined by adding the Git repositories storing these packages as Git submodules. This permits distributing the graph of dependencies between each repository, without requiring a separate way of storing that information. The downside of doing that is that a new commit needs to be created to change the URL of a submodule.¶
In the traditional use of a submodule, two pieces of information are stored: 1) the URL of the Git repository, and 2) the commit that needs to be checked out in that repository.¶
Instead of using a traditional https URL for the submodule, we use an mgits URL, which brings better reliability and availability, together with the ability to add or remove mirrors without having to create a new commit.¶
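As an illustrative sketch (the 40-character identifier and the path below are made up), adding such a dependency is then a regular submodule addition using an mgits URL:¶
git submodule add mgits://0123456789abcdef0123456789abcdef01234567 deps/example-spec¶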
We are storing binary artifacts in a distributed cache that is colocated with each git repository.¶
Because the size of a cache cannot be unbounded, we need to be able to rebuild any artifact at any time from any commit in a git repository, and ensure that the artifact built for a commit stays the same regardless of when and where it is built.¶
We solve this issue by making all specification builds reproducible. In that context, reproducible means that building binary artifacts now or in 10 years will result in two sets of artifacts that are bit-exact identical.¶
That property permits identifying each build by the commit-id of the commit of the source that was used to build it.¶
A clear consequence of this is that the Idris compiler and its runtime (chezscheme) should be available as source on the same commit, and should be part of the build. The simplest way to guarantee that is that these tools are also available as git submodules.¶
We use the git-lfs extension storage, when available, to store the binary artifacts.¶
When a build ends successfully, all the artifacts created in the source tree, which are easily recognizable because they are the files that are not already managed by git (ignoring the content of the .gitignore file), are packed in a file (zip, tar...) and uploaded into the git-lfs server, using the SHA256 hash of the commit-id of the source tree as OID.¶
The first step of a build is to try to retrieve the artifacts from the git-lfs storage, using the current commit-id. If that succeeds, the retrieved file is expanded, such that the artifacts are installed in the source tree exactly as they would be after a successful build. If that fails then the build proceeds as usual.¶
Building a specification generally starts with building dependencies, as defined by submodules. Because submodules may themselves have submodules, the build becomes recursive.¶
Building a specification recursively from source would take a long time, so we take advantage of the distributed caches to download the binary artifacts instead when they are available.¶
To do so we need first to index the git repositories specifications by their commit-id.¶
We are using one of the properties of a commit-id, which is a hash of the content of a commit, including the date and time, and the commit-id of the previous commit. That property is that a commit-id is a statistically globally unique identifier for the content of that commit, meaning that if the same commit-id lives in different repositories, they still point to the exact same content. That means that by building an index for all the commit-ids of interest, we can associate each of them with the list of git repositories that contain it.¶
We are doing this indexation again in a distributed way, using the same RELOAD P2P overlay used in Appendix F.1.2.¶
Each time commits for a computerate specification are pushed to a git repository that has public access, our mgits git remote helper is used as a wrapper for the actual URI:¶
mgits::gitolite3@example.org:computerate-specifying.git¶
The 'computerate' wrapper stores each pushed commit in the P2P overlay as index, with the gitolite3@example.org:computerate-specifying.git URL in the associated RELOAD Array. If the Array for a specific commit already exists, the URL is added to that Array if it is not already there.¶
Before building a submodule, the build process queries the P2P Overlay using the commit-id of the submodule as index. That returns a list of git URLs, each pointing to a git repository that holds that particular commit. The build process chooses one of these, and tries to download the binary artifacts, as explained in Appendix F.2.2. If the artifacts are not available, the source files are retrieved from one of the git repositories, and the build is recursively applied to each of its own submodules before building that submodule.¶
The whole process would require starting the typechecking of any specification that is not yet cached by rebuilding the Idris compiler and the chezscheme runtime. This is why it is permitted to pin down some builds in the distributed cache, i.e. they are never candidates for removal, which makes them always available.¶
In most cases computerate specifications cannot be distributed because the IETF Trust declined to grant a license for that purpose (see item 5 in [Minutes]). Thus any distribution of a computerate specification would be a copyright infringement.¶
To work around that limitation, computerate specifications are not distributed as an annotated RFC or Internet-Draft, but as a set of files colocated with the RFC/I-D, using transclusions to merge the files only when a specification is checked out by Git.¶
A new git command git computerate is distributed with the tooling, and permits managing a distributable repository containing a specification.¶
The content of such a Git repository can be seen as conversions between 3 states:¶
The reference file contains an RFC or Internet-Draft, exactly as stored in the RFC Editor or IETF Secretariat databases. For the time being, only the text version of these documents is used, as the xml2rfc v3 version for I-Ds is not canonical, or does not even exist in the first place.¶
The computerate specification, which is composed of at least one file with the .lipkg extension, and a set of optional .adoc and .lidr files, all of them included from the .lipkg file. These files only exist on a checked out repository and are never pushed or pulled from or to a remote.¶
The transcluded specification, which is composed of the exact same set of files as above, but with the copyrighted text replaced by transclusions. These files are the ones that are pulled and pushed to and from a remote.¶
There are 4 different conversions taking place between these states:¶
The 'convert' subcommand takes an RFC or I-D in text format, stores it in the current git repository under the name ".reference.txt" and generates an initial computerate specification named "Main.lipkg". That same program can optionally take as parameter the set of existing files for the computerate specification and calculate a patchset to be applied when a new version of an Internet-Draft is submitted, when the RFC is published, or when an erratum for that RFC is verified.¶
The 'computerate' program takes the computerate specification files and converts them into a reference file. See Appendix A.2 for the usage.¶
The 'clean' program takes one of the computerate specification files and the reference file, and replaces this computerate specification file by substituting all the text that also exists in the reference file with transclusions.¶
The 'smudge' program takes a transcluded specification file and the reference file, and replaces this transcluded specification file by substituting the transclusions with the text from the reference file.¶
In addition, a diff program can compare two text versions of the same RFC or Internet-Draft, excluding differences in formatting. When an RFC or I-D is converted to a computerate specification, which itself is converted back to a text document, a diff of the original text and of the generated text should result in no difference. Similarly, a computerate specification converted to text and then converted back into a computerate specification should be equal to the original computerate specification. This goes beyond the capability of the rfcdiff program, e.g., by ignoring how sentences are wrapped in a paragraph.¶
Similarly the clean and smudge programs should be able to convert back and forth between the computerate specification and the transcluded specification without loss of information. These two programs are executed automatically by git when a specification file is either checked-out or staged.¶
Note that it is crucial to understand that the act of merging the reference file and the transcluded file must only be done on a local computer, so as not to infringe on the IETF Trust copyright. This makes Git web interfaces like GitHub less useful than for other types of files, as such web interfaces can only display the transcluded file. The package management system for computerate specifications described in Appendix F does not rely on the availability of a web interface for the git repositories.¶
It is recommended to populate a "copyright" file [Copyright], colocated with each computerate specification, that contains the exact license used for each file in the repository.¶
Thanks to Jim Kleck, Eric Petit-Huguenin, Nicolas Gironi, Stephen McQuistin, Greg Skinner, and Raluca Toth for the comments, suggestions, questions, and testing that helped improve this document and its associated tooling.¶
Thanks to Ronald Tse and the Ribose team for the metanorma and relaton tools and their diligence in fixing bugs and implementing improvements.¶
Stephane Bryant¶
Email: stephane.ml.bryant@gmail.com¶
Tooling:¶
Document:¶
Tooling:¶
Remove temporary files before processing.¶
Update idnits to 2.17.00.¶
Backticks inside code fragment are correctly processed.¶
Update metanorma to 1.3.9.1.¶
Files with .lipkg extension are also processed as literate files. This permits to have a top adoc file also containing an Idris package definition.¶
Document:¶
Tooling:¶
A subset of the computerate specifying standard library is now distributed in the Docker image. Currently the Metanorma.Ietf module is the only module distributed.¶
Update asciidoctor to 2.0.16.¶
Base image is now Debian bullseye-slim.¶
Remove transclusion processor.¶
Insert SeriesInfo in correct order (BCP|STD|FYI, RFC|Internet-Draft, DOI).¶
Correctly process literate errors.¶
Processing of Idris files is done only once.¶
Remove useless metanorma patches.¶
Update Idris2 to 0.4.0.¶
Update xml2rfc to 3.9.1.¶
Update metanorma to 1.3.9.¶
Update metanorma-ietf to 2.4.0.¶
Document:¶
Tooling:¶
The include directive for lidr files now supports range, tag and tags attributes. This permits copying the actual code into a block.¶
The --dg asciidoc option for idris2 generates the documentation in AsciiDoc instead of HTML.¶
Update asciidoctor to 2.0.15.¶
Update metanorma to 1.3.3.¶
Update metanorma-ietf to 2.3.2.¶
Document:¶
Most of the bibliography is now generated from a Zotero collection, resulting in a better and easier to maintain bibliography.¶
Nits.¶
Explanations on how to convert a CPN transition into a TPN transition.¶
New appendix F describing the distributed system for the computerate specifications library.¶
Improvements in the TPN Tutorial, Reference and IdrisDoc.¶
Tooling:¶
Add rfc2mn tool to convert an xml2rfc file into an AsciiDoc document.¶
Update asciidoctor to 2.0.14.¶
Update metanorma to 1.3.0.¶
Update metanorma-ietf to 2.3.0.¶
Update xml2rfc to 3.7.0.¶
Multiline problem in postal address is fixed in metanorma.¶
Default figure wrapping problem is fixed in metanorma.¶
Document:¶
Tooling:¶
Document:¶
Rename abstract type as semantic type, and coloured petri nets as colored petri net.¶
Remove figure wrapper from all source code, and added markers when missing.¶
Rewrite and extension of sections 6.1.4 and 6.1.5.2 to show how to generate a Message Sequence Chart from a Petri Net.¶
New step by step explanation on how to manually convert a CPN into a TPN as appendix D.¶
New tutorial on Evidence-Based Answers as appendix E.¶
Document:¶
Tooling:¶
Document:¶
Sections 2, 3, 4 and 5 have been completely reorganized, edited, and extended as a tutorial.¶
New Terminology section.¶
Add a new Standard Library section, that contains the description of all the Idris modules and external packages that will be available for developing specifications.¶
Improve bibliography.¶
Extend the CLI section to cover:¶
Generate IdrisDoc of standard library packages and modules as a new appendix.¶
Update errata stats.¶
More compact changelog.¶
Many modifications following Stephane's reviews.¶
Tooling:¶
Additional metanorma features:¶
Generate json file.¶
Various bug fixes in metanorma and relaton.¶
Additional Idris2 features:¶
Idris2 wrapper to load local packages.¶
New include processor to generate IdrisDoc.¶
Process multiple fragments on each line.¶
Add support for asciidoctor outputs, including revealjs and diagrams.¶
Embedding code must now return a value that implements Show. String values are then stripped of their first and last double-quotes.¶
Fix bug where transcluded text is converted into ASCII art.¶
Embedded code in examples in lidr files can now be escaped with \.¶
Replace Idris with Idris2 version 0.2.1.¶
Update metanorma to 1.1.4.¶
Update metanorma-ietf to 2.2.2.¶
Update xml2rfc to 3.0.0.¶
Downgrade idnits to 2.16.04.¶
Decommission the Docker image in dat://78f80c850af509e0cd3fd7bd6f5d0dd527a861d783e05574bbd040f0502da3c6.¶
Document:¶
Notes are now correctly displayed.¶
Add "Implementations Oriented Standards" section.¶
Add "Extended Registries" section and appendix.¶
Add paragraph about hierarchical petri nets.¶
Convert "Verified Code" section into a top level section, and expand it.¶
Add "Implementation-Oriented Standards" section.¶
Tooling:¶
Many bug fixes in metanorma-ietf.¶
Update xml2rfc to 2.40.1.¶
Rebuilding text for an RFC with xml2rfc now uses pagination.¶
Update metanorma-ietf to version 2.0.5.¶
The "computerate" command can directly generate PDF files.¶
Add support in xml2rfc for generating PDF files.¶
Add asciidoctor-revealjs.¶
Update metanorma to version 1.0.0.¶
Update metanorma-cli to version 1.2.10.1.¶
Document¶
Switch to rfcxml3.¶
Status is now experimental.¶
Many nits.¶
Fix incorrect errata stats.¶
Move acknowledgment section at the end.¶
Rewrite the APHD section (formerly known as AAD) to match draft-mcquistin-augmented-diagrams-01.¶
Fix non-ascii characters in the references.¶
Intermediate AsciiDoc representation for serializers.¶
Tooling¶
Document¶
New changelog appendix.¶
Fix incorrect reference, formatting in Idris code.¶
Add option to remove container in all docker run commands.¶
Add explanations to use the Idris REPL and VIM inside the Docker image.¶
Add placeholders for ASN.1 and RELAX NG languages.¶
New Errata appendix.¶
Nits.¶
Improve Syntax Examples section.¶
Tooling¶