Internet-Draft | Computerate Specifying | November 2019 |
Petit-Huguenin | Expires 20 May 2020 | [Page] |
This document specifies a paradigm named Computerate Specifying, designed to simultaneously document and formally specify communication protocols. This paradigm can be applied to any document produced by any Standards Developing Organization (SDO), but this document specifically targets documents produced by the IETF.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 20 May 2020.¶
Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
If, as the unofficial IETF motto states, we believe that "running code" is an important part of the feedback provided to the standardization process, then as per the Curry-Howard equivalence [Curry-Howard] (that states that code and mathematical proofs are the same), we ought to also believe that "verified proof" is an equally important part of that feedback. A verified proof is a mathematical proof of a logical proposition that was mechanically verified by a computer, as opposed to just peer-reviewed.¶
The "Experiences with Protocol Description" paper from Pamela Zave [Zave] gives three conclusions about the usage of formal specifications for a protocol standard. The first conclusion states that informal methods (i.e. the absence of verified proofs) are inadequate for widely used protocols. This document is based on the assumption that this conclusion is correct, so its validity will not be discussed further.¶
The second conclusion states that formal specifications are useful even if they fall short of the "gold standard" of a complete formal specification. We will show that a formal specification can be incrementally added to a standard.¶
The third conclusion from Zave's paper states that the normative English text should paraphrase the formal specification. The difficulty here is that, to keep the formal specification and the normative language synchronized at all times, the two should be kept as physically close as possible to each other.¶
To do that we introduce the concept of "Computerate Specifying" (note that Computerate is a British English word). "Computerate Specifying" is a play on "Literate Programming", itself a play on "Structured Programming" (see [Knuth92] page 99). In the same way that Literate Programming enriches code by interspersing it with its own documentation, Computerate Specifying enriches a standard specification by interspersing it with code (or with proofs, as they are the same thing), making it a computerate specification.¶
Note that computerate specifying is not specific to the IETF, just like literate computing is not restricted to the combination of Tex and Pascal described in Knuth's paper. What this document describes is a specific instance of computerate specifying that combines [AsciiDoc] as formatting language and [Idris] as programming language with the goal of formally specifying IETF protocols.¶
Nowadays specifications at the IETF are written in a format named xml2rfc v3 [RFC7991], but unfortunately making that format "Computerable" is not trivial, mostly because there is no simple solution for mixing code and XML together in the same file. Instead, we chose the AsciiDoc format as the basis for computerate specifications, as it permits the generation of specifications in the xml2rfc v3 format (among other formats) and because it can be enriched with code in the same file.¶
[I-D.ribose-asciirfc] describes a backend for the [Asciidoctor] tool that converts an AsciiDoc document into an xml2rfc v3 document. The AsciiRFC document states various reasons why AsciiDoc is a superior format for the purpose of writing standards, so we will not discuss these further. Note that the same team developed Asciidoctor backends for other Standards Developing Organizations (SDOs) [Metanorma], making it easy to develop computerate specifications targeting the standards developed by these SDOs.¶
The code in a computerate specification uses the Idris programming language in literate programming [Literate] mode using Bird-style, i.e. each line of code starts with a ">" mark in the first column.¶
That same symbol was also used by AsciiDoc as an alternate way of defining a blockquote [Blockquotes], a usage that is no longer available in a computerate specification. Bird-style code simply does not appear in the rendered document.¶
The result of executing Idris code can be inserted into the document by putting that code fragment between the "{`" string and the "`}" string.¶
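As a rough sketch of that mechanism (in Python rather than Idris, with Python's eval standing in for the Idris evaluator — an illustration, not the actual tool):

```python
import re

# Illustrative sketch: substitute each "{`...`}" fragment in a document
# with the result of evaluating its content. Here eval() on Python
# expressions stands in for the Idris evaluator used by the real tool.
FRAGMENT = re.compile(r"\{`(.*?)`\}", re.DOTALL)

def expand_fragments(text, evaluate=lambda expr: str(eval(expr))):
    """Replace every {`...`} fragment by its evaluated result."""
    return FRAGMENT.sub(lambda m: evaluate(m.group(1)), text)

doc = "The header is {`4 * 8`} bits long."
print(expand_fragments(doc))  # The header is 32 bits long.
```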
A computerate specification is processed by an Asciidoctor preprocessor that does the following:¶
For instance the following computerate specification fragment taken from the computerate specification of STUNbis¶
is rendered as¶
Appendix A explains how to install the command line tools to process a computerate specification.¶
The Idris programming language has been chosen because its type system supports dependent and linear types, and that type system is the language in which formal specifications are written.¶
Following Zave's second conclusion, a computerate specification does not have to be about just formally specifying a protocol and proving properties about it. There is a whole spectrum of formalism that can be introduced in a specification, and we will present it in the remaining sections in increasing order of complexity. Note that because the formal language is a programming language, these usages are not exhaustive, and plenty of other usages can and will be found after the publication of this document.¶
A computerate specification does not disappear as soon as the standard it describes is published. Quite the opposite: each specification is designed to be used as an Idris module that can be imported in subsequent specifications, reducing over time the amount of code that needs to be written. Unlike an RFC, which is immutable after publication, the code in a specification will be improved over time, especially as new properties are proved or disproved. The latter will happen when a bug is discovered in a specification and a proof of negation is added to it, paving the way to a revision of the standard.¶
This document is itself a computerate specification that contains data types and functions that can be reused in future specifications, and as a whole can be considered as the standard library for computerate specifying.¶
For convenience, each public computerate specification, including the one behind this document, will be made available as an individual git repository. Appendix B explains how to gain access to these computerate specifications.¶
RFCs, Internet-Drafts, and standard documents published by other SDOs did not start their life as computerate specifications, so to be able to use them as Idris modules they need to be progressively retrofitted. This is done by converting the documents into AsciiDoc documents and then enriching them with code, in the same way that would have been done if the standard had been developed directly as a computerate specification.¶
Converting the whole document into AsciiDoc and enriching it with code, instead of just maintaining a library of code, may seem a waste of resources. The reason for doing so is to be able to verify that the rendered text is equivalent to the original standard, which in turn validates the examples and formal languages.¶
Retrofitted specifications will also be made available as individual git repositories as they are converted.¶
Because the IETF Trust does not permit modifying an RFC as a whole (except for translation purposes), a retrofitted RFC uses transclusion, a mechanism that includes parts of a separate document at runtime. This way, a retrofitted RFC is distributed as two separate files, the original RFC in text form, and a computerate specification that contains only code and transclusions.¶
Transclusion is a special form of AsciiDoc include that takes a range of lines as parameters:¶
Here the include macro will be replaced by the content of lines 26 to 35 (inclusive) of RFC 5234.¶
The "sub" parameter permits modifying the copied content according to a regular expression. For instance the following converts references into the AsciiDoc format:¶
In the following example, the text is converted into a note:¶
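The semantics of transclusion can be sketched as follows (a hypothetical Python model; the real preprocessor's parameter syntax differs):

```python
import re

# Illustrative sketch of transclusion: include a line range from another
# document, optionally rewriting the copied text with a regular expression
# (the analogue of the "sub" parameter).
def transclude(source_lines, first, last, sub=None):
    """Return lines first..last (1-based, inclusive), optionally rewritten
    by a (pattern, replacement) pair."""
    chunk = source_lines[first - 1:last]
    if sub is not None:
        pattern, replacement = sub
        chunk = [re.sub(pattern, replacement, line) for line in chunk]
    return chunk

rfc = ["line one", "see RFC 5234 for details", "line three"]
# Convert bare "RFC NNNN" references into AsciiDoc-style anchors.
print(transclude(rfc, 2, 3, sub=(r"RFC (\d+)", r"<<RFC\1>>")))
```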
Standards evolve, but because RFCs are immutable, revisions for a standard are done by publishing new RFCs.¶
The matching computerate specifications need to reflect that relationship by extending the data types for syntax and semantics in the new version, instead of recreating new data types from scratch. There are two diametrically opposed directions when extending a type:¶
This is correct in theory, but in practice creating new specifications from old ones as described above is not very convenient. An alternative solution may be to define the new specification from scratch and use an isomorphism proof to precisely define the differences between the two. An Idris elaboration script may permit duplicating a type and modifying it without having to copy it manually.¶
Communication protocol specifications are generally split in two distinct parts, syntax (the data layout of the messages exchanged) and semantics (the rules that drive the exchange of messages).¶
Section 3 will discuss in detail the application of computerate specifying to syntax descriptions, and Section 4 will be about specifying semantics.¶
The syntax of a communication protocol determines how data is laid out before it is sent over a communication link. Generally the syntax is described only in the context of the layer that this particular protocol is operating at, e.g. an application protocol syntax only describes the data as sent over UDP or TCP, not over Ethernet or Wi-Fi.¶
Syntaxes can generally be split into two broad categories, binary and text, and a protocol syntax usually falls completely into one of these two categories.¶
Syntax descriptions can be formalized for at least three reasons that will be presented in the following sections.¶
Examples in protocol documentation are frequently incorrect, which should not be that much of an issue but for the fact that most developers do not read the normative text when an example is available. See Appendix C for statistics about the frequency of incorrect examples in RFC errata.¶
Moving the examples into appendices or adding caution notices shows only limited success in preventing that problem.¶
So ensuring that examples match the normative text seems like a good starting point for a computerate specification. This is done by having the possibility of adding the result of a computation directly inside the document. If that computation is done from a type that is (physically and conceptually) close to the normative text, then we gain some level of assurance that both the normative text and the derived examples will match. Note that examples can be inserted in the document as whole blocks of text, or as inline text.¶
Appendix B.2.1 showcases the conversion of the examples in [RFC5234].¶
The first step is to define an Idris type that completely defines the layout of the messages exchanged. By "completely define" we mean that the type checker will prevent creating any invalid value of this type. That ensures that all values are correct by construction.¶
The second step is writing a serializer from that type into the wire representation. For a text format, it is done by implementing the Show interface:¶
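The two steps can be sketched in Python (the Version layout and its field names are hypothetical): a type whose constructor rejects invalid values, plus a serializer playing the role of the Show interface:

```python
from dataclasses import dataclass

# Hypothetical sketch: a "correct by construction" type plus a text
# serializer. In Idris the type checker rejects invalid values; in this
# Python analogue the constructor does.
@dataclass(frozen=True)
class Version:
    major: int
    minor: int

    def __post_init__(self):
        # Reject values the wire format forbids.
        if not (0 <= self.major <= 9 and 0 <= self.minor <= 9):
            raise ValueError("single-digit version components only")

    def __str__(self):
        # The serializer, playing the role of the Show interface.
        return f"v={self.major}.{self.minor}"

print(str(Version(2, 0)))  # v=2.0
```

Because every example string is derived from a value of the type, an example that contradicts the model simply cannot be produced.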
Define binary serializer.¶
Instead of directly generating character strings, the serializer will use as its target a dependent type that formalizes the AsciiDoc language. This makes it possible to know the context in which a character string will subsequently be generated and to correctly escape special characters in that string.¶
Using an intermediary type will also permit correctly generating AsciiDoc that can produce an xml2rfc v3 file supporting both text and graphical versions of a figure. This will be done by having AsciiDoc blocks converted into <artwork> elements that contain both the 72-column formatted text and an equivalent SVG file, even for source code (instead of using the <sourcecode> element).¶
Some specifications use a formal language to describe the data layout. One shared property of these languages is that they cannot always formalize all the constraints of a specific data layout, so they have to be enriched with comments. One consequence of this is that they cannot be used as a replacement for the Idris data type described in Section 3.1.1, a data type that is purposely complete.¶
The following sections describe how these formal languages have been or will be themselves formalized with the goal of using them in computerate specifications.¶
Augmented Backus-Naur Form [RFC5234] (ABNF) is a formal language used to describe a text based data layout.¶
The [RFC5234] document has been retrofitted as a computerate specification to provide an internal Domain Specific Language (DSL) that permits specifying an ABNF for a specification. The encoding of an example from Section 2.3 of [RFC5234] looks like this:¶
A serializer, also defined in the same specification, permits converting that description into proper ABNF text that can be inserted into the document such as in the following fragment:¶
is rendered as¶
See Appendix B.2.1 for access to the source of the retrofitted specification for [RFC5234].¶
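To give an idea of the shape of such a DSL, here is a rough Python analogue (the constructor names are hypothetical, not those of the actual retrofitted RFC 5234 module): rules are built as values and then serialized to ABNF text:

```python
from dataclasses import dataclass

# Hypothetical sketch of an ABNF DSL: rule bodies are values, and a
# serializer renders them as ABNF text for insertion in the document.
@dataclass
class Ref:        # reference to another rule
    name: str
    def abnf(self): return self.name

@dataclass
class Lit:        # literal string
    text: str
    def abnf(self): return f'"{self.text}"'

@dataclass
class Concat:     # concatenation of elements
    parts: list
    def abnf(self): return " ".join(p.abnf() for p in self.parts)

@dataclass
class Rule:
    name: str
    body: object
    def abnf(self): return f"{self.name} = {self.body.abnf()}"

# A rule in the style of RFC 5234: command = "msg" SP text
rule = Rule("command", Concat([Lit("msg"), Ref("SP"), Ref("text")]))
print(rule.abnf())  # command = "msg" SP text
```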
Augmented Packet Header Diagram [I-D.mcquistin-augmented-ascii-diagrams] (APHD) is a formal language used to describe a binary data layout in a machine-readable format.¶
The [I-D.mcquistin-augmented-ascii-diagrams] document will be retrofitted as a computerate specification to provide an internal Domain Specific Language (DSL) that permits specifying an APHD for a specification. The partial encoding of an example from section 4.1 looks like this:¶
A serializer, also defined in the same specification, permits converting that description into proper APHD text that can be inserted into the document such as in the following fragment:¶
is rendered as¶
Here the serializer generates an instance of the intermediary AsciiDoc type that describes the header line (which can be concatenated to previous lines), the block containing the diagram itself, and then a list of all the field definitions.¶
AsciiDoc supports writing equations using either asciimath or latexmath. The rendering for RFCs will generate an artwork element that contains both the text version of the equation and a graphical version in an SVG file. Not sure what to do with inline formulas, as we cannot generate an artwork element in that case.¶
An Idris type will be used to describe equations at the type level. An interpreter will be used to calculate and insert examples in the document.¶
A serializer will be used to generate the asciimath code that is inserted inside a stem block.¶
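The pipeline can be sketched in Python (hypothetical types, not the planned Idris ones): the same expression value yields both a computed example and the asciimath text:

```python
from dataclasses import dataclass

# Hypothetical sketch: an equation as data, with an interpreter (value)
# for inserting computed examples and a serializer (asciimath) for the
# stem block.
@dataclass
class Num:
    n: float
    def value(self): return self.n
    def asciimath(self): return str(self.n)

@dataclass
class Div:
    top: object
    bottom: object
    def value(self): return self.top.value() / self.bottom.value()
    def asciimath(self):
        return f"({self.top.asciimath()})/({self.bottom.asciimath()})"

ratio = Div(Num(500.0), Num(1000.0))       # a made-up example equation
print(ratio.value(), ratio.asciimath())    # 0.5 (500.0)/(1000.0)
```

Because the example value and the rendered formula come from the same data, they cannot drift apart.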
The kind of proofs that one would want in a specification are related to isomorphism, i.e. a guarantee that two or more descriptions of a data layout contain exactly the same information.¶
We saw above that when a data layout is described with a formal language, we end up with two descriptions of that data layout, one using the Idris dependent type (and used to generate examples) and one using the formal language.¶
Proving isomorphism requires generating an Idris type from the formal language instance, which is done using an Idris elaborator script.¶
In Idris, Elaborator Reflection [Elab] is a metaprogramming facility that permits writing code generating type declarations and code (including proofs) automatically.¶
For instance the ABNF language is itself defined using ABNF, so after converting that ABNF into an instance of the Syntax type (which is a holder for a list of instances of the Rule type), it is possible to generate a suite of types that represent the same language:¶
The result of the elaboration can then be used to construct a value of type Iso, which requires four total functions, two for the conversion between types, and another two to prove that sequencing the conversions results in the same original value.¶
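In Python terms (with checks on sample values standing in for total proofs, and a toy pair of types), an Iso value looks like this:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Sketch of the Iso value: two conversion functions plus checks that each
# round trip is the identity. In Idris these are total proofs; here we can
# only test on sample values.
@dataclass
class Iso:
    to: Callable[[Any], Any]
    from_: Callable[[Any], Any]

    def check(self, a_samples, b_samples):
        return (all(self.from_(self.to(a)) == a for a in a_samples) and
                all(self.to(self.from_(b)) == b for b in b_samples))

# Toy example: seconds <-> (minutes, seconds) carry the same information.
iso = Iso(to=lambda s: divmod(s, 60),
          from_=lambda ms: ms[0] * 60 + ms[1])
print(iso.check(range(300),
                [(m, s) for m in range(3) for s in range(60)]))  # True
```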
The following example generates an Idris type "SessionDescription" from the SDP ABNF. It then proves that this type and the Sdp type contain exactly the same information (the proofs themselves have been removed, leaving only the propositions):¶
As stated in Section 3.2, the Idris type and the type generated from the formal language are not always isomorphic, because some constraints cannot be expressed in that formal language. In that case isomorphism can be used to precisely define what information is missing in the formal language type. To do so, the generated type is augmented with a delta type, like so:¶
Then the DeltaSessionDescription type can be modified to include the missing information until the same function type-checks. After this we have a guarantee that we know all the constraints that cannot be encoded in that formal language, and can manually check that each of them is described in a comment.¶
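The delta technique can be sketched in Python (the Complete/Generated/Delta names are hypothetical): the information the formal language cannot express is carried alongside the generated type, and the pair round-trips with the complete type:

```python
from dataclasses import dataclass

# Hypothetical sketch of the delta technique.
@dataclass(frozen=True)
class Complete:       # the purposely complete Idris-style type
    body: str
    priority: int     # a field the formal language cannot encode

@dataclass(frozen=True)
class Generated:      # what the formal language can describe
    body: str

@dataclass(frozen=True)
class Delta:          # the missing information, made explicit
    priority: int

def to(c):
    return (Generated(c.body), Delta(c.priority))

def from_(pair):
    g, d = pair
    return Complete(g.body, d.priority)

c = Complete("INVITE", 3)
print(from_(to(c)) == c)  # True: (Generated, Delta) loses nothing
```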
Idris elaborator scripts will be developed for each formal language.¶
For specifications that describe a conversion between different data layouts, having a proof that guarantees that no information is lost in the process can be beneficial. For instance, we observe that syntax encodings tend to be replaced every ten years or so by something "better". Here again isomorphism can tell us exactly what kind of information we lost and gained during that replacement.¶
Here is, for example, the definition of a function that would verify an isomorphism between an XML format and a JSON format:¶
Here DeltaXML expresses what is gained by switching from XML to JSON, and DeltaJson expresses what is lost.¶
The syntax of the data layout may be modified as part of the evolution of a standard. In most cases a version number prevents the old format from being used with the new format, but in cases where that is not possible, the new specification can ensure that both formats can coexist by using the same techniques as above.¶
Conversely these techniques can be used during the design phase of a new version of a format, to check if a new version number is warranted.¶
Be conservative in what you do, be liberal in what you accept from others.¶
— Jon Postel
One of the downsides of formal specifications is that there is no wiggle room possible when implementing it. An implementation either conforms to the specification or does not.¶
One analogy would be specifying a pair of gears. If one decides to have both of them made with tolerances that are too small, then it is very likely that they will not be able to move when put together. A bit of slack is needed to get the gears working smoothly together, but more importantly the cost of making these gears rises as their tolerance tightens. There is an inflection point where the cost of a high-precision gear outweighs its purpose.¶
We have a similar issue when implementing a formal specification, where having an absolutely conformant implementation may cost more money than it is worth spending. On the other hand a specification exists for the purpose of interoperability, so we need some guidelines on what to ignore in a formal specification to make it cost effective.¶
Postel's law proposes an informal way of defining that wiggle room by actually having two different specifications, one that defines a data layout for the purpose of sending it, and another one that defines a data layout for the purpose of receiving that data layout.¶
Existing specifications express that dichotomy in the form of the usage of SHOULD/SHOULD NOT/RECOMMENDED/NOT RECOMMENDED [RFC2119] keywords. For example the SDP spec says that "[t]he sequence CRLF (0x0d0a) is used to end a line, although parsers SHOULD be tolerant and also accept lines terminated with a single newline character." This directly implies two specifications: one used to define an SDP when sending it, which enforces using only CRLF, and a second, used to define an SDP when receiving (or parsing) it, which accepts both CRLF and LF.¶
Note that the converse is not necessarily true, i.e. not all usages of these keywords are related to Postel's Law.¶
To ensure that the differences between the sending specification and the receiving specification do not create interoperability problems, we can use a variant of isomorphism, as shown in the following example (data constructors and code elided):¶
Here we define two data types, one that describes the data layout that is permitted to be sent (Sending) and one that describes the data layout that is permitted to be received (Receiving). For each data layout that is possible to send, there is one or more matching receiving data layouts. This is expressed by the to function, which takes as input one Sending value and returns a list of Receiving values.¶
Conversely, the from function maps a Receiving data layout onto a Sending data layout. Note the asymmetry there, which prevents using a standard proof of isomorphism.¶
Then the toFrom and fromTo proofs verify that there is no interoperability issue by guaranteeing that each Receiving value maps to one and only one Sending instance and that this mapping is isomorphic.¶
All of this will provide a clear guidance of when and where to use a SHOULD keyword or its variants, without loss of interoperability.¶
As a trivial example, the following proves that accepting LF characters in addition to CRLF characters as end-of-line markers does not break interoperability:¶
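A runnable sketch of that CRLF/LF example, in Python rather than Idris (so the "proofs" are checks on sample values):

```python
# Sketch: a line may only be sent with CRLF, but may be received with
# either CRLF or LF.
def to(sending):
    """Map one sendable line to all acceptable received forms."""
    assert sending.endswith("\r\n")
    return [sending, sending[:-2] + "\n"]

def from_(receiving):
    """Normalize any acceptable received line back to the sendable form."""
    assert receiving.endswith("\n")
    body = receiving[:-2] if receiving.endswith("\r\n") else receiving[:-1]
    return body + "\r\n"

line = "v=0\r\n"
# fromTo: normalizing anything we could send yields the original value.
assert all(from_(r) == line for r in to(line))
# toFrom: every receivable form maps back into the send set it came from.
assert all(r in to(from_(r)) for r in ["v=0\r\n", "v=0\n"])
print("interoperability preserved")
```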
The semantics of a communication protocol determine what messages are exchanged over a communication link and the relationship between them. The semantics are generally described only in the context of the layer that this particular protocol is operating at.¶
Specifying the semantics requires defining an Idris type that strictly enforces them. This can be done in an ad hoc way [Type-Driven], particularly by using linear types that express the consumption of resources.¶
But a better solution is to design these graphically, particularly by using Petri Nets. This specification defines a DSL that permits describing a Typed Petri Net (TPN), which is heavily influenced by Coloured Petri Nets [CPN] (CPN). A CPN adds some restrictions on the types that can be used in a Petri Net because of limitations in the underlying programming language, SML. The underlying programming language used in TPN, Idris, does not have these limitations, so any well-formed Idris type (including polymorphic, linear and dependent types) can be directly used in a TPN. A graphical editor for TPN is planned as part of the integration tooling; the graphical tool will use the document directly as storage.¶
Here's an example of TPN (from figure 2.10 in [CPN]):¶
The DSL is currently being designed, so the example shows the generated value.¶
From there it is easy to generate (using the non-deterministic monad in Idris) an interpreter for debugging and simulation purposes:¶
Replace by the generic variant of the interpreter.¶
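A minimal interpreter of this kind can be sketched in Python (the toy net and its names are hypothetical):

```python
from collections import Counter

# Minimal non-deterministic Petri net interpreter (a Python stand-in for
# the Idris one): places hold token counts, and a transition may fire
# whenever its input tokens are all available.
def enabled(marking, transitions):
    return [t for t, (inputs, _) in transitions.items()
            if all(marking[p] >= n for p, n in inputs.items())]

def fire(marking, transitions, t):
    inputs, outputs = transitions[t]
    m = Counter(marking)
    m.subtract(inputs)
    m.update(outputs)
    return +m  # drop places that reached zero

# Toy protocol: a request token travels through the network to delivery.
net = {
    "send":    ({"ready": 1},     {"in_flight": 1}),
    "deliver": ({"in_flight": 1}, {"done": 1}),
}
m = Counter({"ready": 1})
trace = []
while enabled(m, net):
    t = enabled(m, net)[0]  # a simulator would pick non-deterministically
    trace.append(t)
    m = fire(m, net, t)
print(trace, dict(m))  # ['send', 'deliver'] {'done': 1}
```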
A Petri Net has the advantage that the same graph can be reused to derive other Petri Nets, e.g., Timed Petri Nets (which can be used to collect performance metrics) or Stochastic Petri Nets. The traditional way of verifying a Petri Net is by using model checking. There is nothing in the design that prevents doing that, but because model checking takes quite some time to run and so cannot be part of the document processing, how do we store in the document a proof that the model checking was successful?¶
Semantics examples can be wrong, so it is useful to be sure that they match the specification.¶
As explained above, semantics can be described in an ad hoc manner, or using the TPN DSL.¶
Unlike syntax, where there are more or less as many ways to display it as there are syntaxes, semantics examples generally use sequence diagrams, possibly augmented with the content of the packets exchanged (and so using the techniques described in Section 3.1).¶
Similarly to what is done in Section 3.2.2, an Asciidoctor block processor similar to the "msc" type of diagram used by the asciidoctor-diagram extension will be designed. We unfortunately cannot reuse the asciidoctor-diagram extension because it cannot generate both text and SVG versions of a sequence diagram.¶
The serializer for an example derived from a TPN generates the content of the msc AsciiDoc block, by selecting one particular path and its associated bindings through the Petri Net. We probably want to use AsciiDoc callouts for these, although that would require a modification in AsciiRfc. In fact callouts would be a far better technique for other diagrams, like AAD, as they would let the renderer take care of the best way to place elements depending on the output format.¶
Some specifications use a formal language to describe the state machines. One shared property of these languages is that they cannot always formalize all the constraints of specific semantics, so they have to be enriched with comments. One consequence of this is that they cannot be used as a replacement for the Idris data type described in Section 4.1, a data type that is purposely complete.¶
Cosmogol [I-D.bortzmeyer-language-state-machines] is a formal language designed to define state machines. The Internet-Draft will be retrofitted as a computerate specification to provide an internal Domain Specific Language (DSL) that permits specifying an instance of that language. A serializer and elaborator script will also be defined.¶
Finally, an Asciidoctor block processor would be used to convert the language into both a text and a graphical view of the state machine. Add examples there.¶
Like for syntax formal languages, an elaborator script permits generating a type from a TPN instance. That type can then be used to write proofs of the properties that we expect from the semantics.¶
An isomorphism proof can be used between two types derived from the semantics of a specification, for example to prove that no information is lost in the conversion between the underlying processes, or when upgrading a process.¶
An example of that would be to prove (or more likely disprove) that the SIP state machines are isomorphic to the WebRTC state machines.¶
Like for the syntax, semantics can introduce wiggle room between the state machines on the sending side and the state machines on the receiving side. A similar isomorphism proof can be used to ensure that this is done without loss of interoperability.¶
The TPN type can be used to verify that the protocol actually terminates, or that it always returns to its initial state. This is equivalent to proving that a program terminates.¶
The TPN type can be used to verify that the protocol is productive, i.e. that it does not loop without making progress.¶
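Both properties can be checked on small nets by exhaustively exploring the reachable markings, as in this Python sketch (toy net, hypothetical names):

```python
from collections import Counter

# Explore the reachable markings of a small net: a finite state space in
# which every reachable marking can still fire a transition means the
# protocol neither deadlocks nor loops without progress, and it returns
# to its send/receive cycle.
def frozen(m):
    return frozenset(Counter(m).items())

def step(m, transitions):
    successors = []
    for inputs, outputs in transitions.values():
        if all(m[p] >= n for p, n in inputs.items()):
            n = Counter(m)
            n.subtract(inputs)
            n.update(outputs)
            successors.append(+n)
    return successors

def reachable(m0, transitions, limit=10000):
    seen, todo = {frozen(m0)}, [Counter(m0)]
    while todo and len(seen) < limit:
        for n in step(todo.pop(), transitions):
            if frozen(n) not in seen:
                seen.add(frozen(n))
                todo.append(n)
    return seen

# A net that cycles back to its initial state after request/response.
net = {
    "send":    ({"idle": 1},    {"waiting": 1}),
    "receive": ({"waiting": 1}, {"idle": 1}),
}
m0 = Counter({"idle": 1})
states = reachable(m0, net)
print(len(states), all(step(Counter(dict(s)), net) for s in states))  # 2 True
```

Real model checking scales far beyond this brute-force enumeration, but the property being checked is the same.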
A TPN that covers a whole protocol (i.e. client, network, and server) is useful to prove the properties listed in the previous sections. But the TPN is also designed in a way that each of these parts can be defined separately from the others (making it a Hierarchical TPN). This permits using the type generated from these (through an elaborator script) as a type for real code, and thus verifying that this code conforms to both the syntax and the semantics specifications.¶
The computerate command line tools are run inside a Docker image, so the first step is to install the Docker software or verify that it is up to date (https://docs.docker.com/install/).¶
Note that for the usage described in this document there is no need for Docker EE or for having a Docker account.¶
The following instructions assume a Unix based OS, i.e. Linux or MacOS. Lines separated by a "\" character are meant to be executed as one single line, with the "\" character removed.¶
To install the computerate tools, the fastest way is to download and install the Docker image using a temporary image containing the dat tool:¶
After this, the image can be loaded in Docker. The newly installed Docker image also contains the dat command, so there is no need to keep the veggiemonk/dat-docker image after this:¶
A new version of the tools is released at the same time a new version of this document is released. After this, running the following command in the computerate directory will pull any new version of the tool tar file:¶
The docker image can then be loaded as above.¶
The computerate Command
The Docker image's main command is computerate, which takes the same parameters as the metanorma command from the Metanorma tooling:¶
The differences with the metanorma command are:¶
The computerate command can process Literate Idris files (files with a "lidr" extension, aka lidr files), in addition to AsciiDoc files (files with an "adoc" extension, aka adoc files). When a lidr file is processed, all embedded code fragments (text between the prefix "{`" and the suffix "`}") are evaluated in the context of the Idris code contained in this file. Each code fragment (including the prefix and suffix) is then substituted by the result of that evaluation.¶
The computerate command can process included lidr files in the same way. The embedded code fragments in an included file are processed in the context of that lidr file, not in the context of the including file. Idris modules (either from an idr or lidr file) can be imported the usual way.¶
The computerate command generates a file based on the :name: attribute in the header of the document.¶
The computerate command can also be used to generate an xml2rfc v3 file, ready for submission to the IETF:¶
idr and lidr files can be loaded directly in the Idris REPL for debugging:¶
It is possible to directly modify the source code in the REPL by entering the :e command, which will load the file in an instance of VIM preconfigured to interact with the REPL.¶
Alternatively the Idris REPL can be started as a server:¶
Then if a source file is loaded in a separate console with the VIM instance inside the Docker image, it can interact with the REPL server:¶
Note that both commands must be run from the same directory.¶
For convenience, the Docker image provides the latest versions of the xml2rfc, idnits, aspell, and languagetool tools.¶
The Docker image also contains an extended version of git that will be used to retrieve the computerate specifications in Appendix B.¶
The git repositories that compose the Computerate Specification Library are distributed over a peer-to-peer protocol based on dat.¶
This requires an extension to git, an extension that is already installed in the Docker image described in Appendix A. The following command can be used to retrieve a computerate specification:¶
Here <public-key> is the dat public key for a specific computerate specification and <name> is the recommended name. Do not use the dat URIs given in Appendix A, as only the dat public keys listed in Appendix B.2 can be used with a git clone.¶
Updating the repository also requires using the Docker image:¶
All the git commands that do not require access to the remote can be run natively or from the Docker image.¶
Note that for the computerate specification library the computerate command must be run from the directory one level above the git repository. The name of the root document is always Main.lidr or Main.adoc:¶
To quantify the potential benefits of using formal methods at the IETF, an effort to relabel the Errata database is under way.¶
The relabeling uses the following labels:¶
Label | Description |
---|---|
AAD | Error in an ASCII bit diagram |
ABNF | Error in an ABNF |
Absent | The erratum was probably removed |
ASN.1 | Error in ASN.1 |
C | Error in C code |
Diagram | Error in a generic diagram |
Example | An example does not match the normative text |
Formula | Error preventable by using Idris code |
Ladder | Error in a ladder diagram |
Rejected | The erratum was rejected |
Text | Error in the text itself, no remedy |
TLS | Error in the TLS language |
At the time of publication the first 700 errata, which represent 11.88% of the total, have been relabeled. Of these, 34 were rejected and 27 were deleted, leaving 639 valid errata.¶
Label | Count | Percentage |
---|---|---|
Text | 396 | 61.97% |
Formula | 66 | 10.32% |
Example | 64 | 10.0% |
ABNF | 38 | 5.94% |
AAD | 32 | 5.00% |
ASN.1 | 27 | 4.22% |
C | 9 | 1.40% |
Diagram | 4 | 0.62% |
TLS | 2 | 0.31% |
Ladder | 1 | 0.15% |
Note that as the relabeling is done in order of erratum number, at this point it covers mostly older RFCs. A change in tooling (e.g. ABNF verifiers) means that these numbers may drastically change as more errata are relabeled. But at this point it seems that 38.02% of errata could have been prevented by a more pervasive use of formal methods.¶
Thanks to Jim Kleck, Stephane Bryant, Eric Petit-Huguenin, Nicolas Gironi, Stephen McQuistin, and Greg Skinner for the comments, suggestions, questions, and testing that helped improve this document.¶
draft-petithuguenin-computerate-specifying-02¶
Document¶
Tooling¶
Library¶
draft-petithuguenin-computerate-specifying-01¶
Document¶
docker run command.¶
Tooling¶
Library¶