Saturday, April 29, 2017

Minimal "openssl.cnf"

Summary: You need an openssl.cnf to issue certificates with OpenSSL. This post gives minimal working examples for typical test/demo cases.

Why is it needed?

OpenSSL commands (that is, the ones invoked from the command line) are mostly controlled by their options. But in order to issue X.509 certificates or CSRs, OpenSSL needs a configuration file. This file may contain no useful information, but it is still required. This post gives and explains minimal configuration files for the OpenSSL commands used for certificate generation: x509, req and ca.

An OpenSSL installation should supply a default configuration file, normally called openssl.cnf. Below, we'll use the expressions openssl.cnf and "OpenSSL configuration file" interchangeably.

The topic of this post is the minimal configuration file. Sometimes a command will need more parameters than such a minimum contains; in these cases they will be provided via command-line options. These options are kept to a minimum too, so in real applications you'd likely need to pass more parameters - either via the config file or through the options.

Generate the private key

Generation of the private key can be included in some of the commands below, but for clarity let's create a private key with a separate command. To sound modern, pick an elliptic curve from openssl ecparam -list_curves whose name sounds cute, such as secp521r1 (on a serious note: selection of the elliptic curve is out of scope of this post):

$ openssl ecparam -genkey -name secp521r1 -out privkey.pem
 
We'll use this same key for all certificates which appear later; of course such practice is acceptable only for demonstration purposes.

Create a self-signed CA certificate

This command needs a config file. Unless one is provided, OpenSSL will use the default, which may not contain what you want. There are two ways to provide a custom config:
  • -config option
  • OPENSSL_CONF environment variable
Below, we'll use the first way, which shows the config file more explicitly.
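For completeness, the environment-variable way of pointing at the same file looks like this (openssl-min-req.cnf is the file introduced just below):

$ OPENSSL_CONF=openssl-min-req.cnf openssl req -new -x509 -nodes -key privkey.pem -subj "/CN=My self-signed CA certificate" -out ca.pem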

Using "req -x509" command

The minimum config file for this command is:

[ req ]
distinguished_name = req_dn

[ req_dn ] 
 
Call this file openssl-min-req.cnf. With it, you can issue self-signed certificates:

$ openssl req -new -x509 -nodes -key privkey.pem -config openssl-min-req.cnf -subj "/CN=My self-signed CA certificate" -out ca.pem

If we do not specify the version explicitly or request any X.509v3 extensions, OpenSSL sets the version of the certificate to 1.
Check with openssl x509 -purpose -in ca.pem and observe that various "CA" purposes come with Yes (WARNING code=3).
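To see the version field explicitly (the Version line is part of the standard -text output):

$ openssl x509 -noout -text -in ca.pem | grep 'Version:'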

Surprisingly, for a V3 certificate, there seems to be no single clear indication whether it is a CA certificate or not. Two extensions claim properties related to CA functionality. OpenSSL's C API function X509_check_ca returns non-zero when the certificate has either CA:TRUE in the basicConstraints extension or keyCertSign in the keyUsage extension. Other software may follow other rules (for example, require CA:TRUE). To satisfy OpenSSL's interpretation, the minimal openssl.cnf can look like this:

[ req ]
distinguished_name = req_dn
x509_extensions = v3_ext

[ req_dn ]

[ v3_ext ]
basicConstraints = CA:true
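Assuming this extended config is saved as openssl-min-req-v3.cnf (a name chosen for this post), reissue the certificate and check the extension:

$ openssl req -new -x509 -nodes -key privkey.pem -config openssl-min-req-v3.cnf -subj "/CN=My self-signed CA certificate" -out ca.pem
$ openssl x509 -noout -text -in ca.pem | grep -A1 'Basic Constraints'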

Using "ca -selfsign" command

Below is the minimal openssl-ca.cnf which would do the job.

[ ca ]
default_ca      = CA_default

[ CA_default ]
database        = index.txt
serial          = serial.txt
policy          = policy_default

[ policy_default ] 
 
For the command to succeed, the database and serial files referenced above must exist (index.txt.attr is a companion file of the database):

$ touch index.txt index.txt.attr
$ echo '01' > serial.txt
 
When done, and having a CSR at hand (creation of csr.pem is shown in the next section), you can get your own self-signed CA certificate:

$ openssl ca -config openssl-ca.cnf -selfsign -in csr.pem -keyfile privkey.pem -md default -out ca.pem -outdir . -days 365 -batch

Create a certificate signed by your new CA

Create a CSR

A CSR (Certificate Signing Request) is made with the req command. For it, the "minimum request" config openssl-min-req.cnf is sufficient:

$ openssl req -new -config openssl-min-req.cnf -key privkey.pem -nodes -subj "/CN=Non-CA example certificate" -out csr.pem

Inspect the CSR with openssl req -text -noout -in csr.pem.
Given a CSR, the corresponding certificate can be issued using the x509 or ca commands.

Sign the CSR using "x509 -req" command

$ openssl x509 -req -in csr.pem -CA ca.pem -CAkey privkey.pem -CAcreateserial -out cert.pem

The x509 command seems to have no -config option, but it honors the environment variable OPENSSL_CONF. If the file is not readable, OpenSSL prints
WARNING: can't open config file: /usr/lib/ssl/openssl.cnf
and continues. For the example above, the config file is not needed; and if it is not needed, you should not use it, because it may bring in items which you do not want. It looks like the way to "disable" config file processing is to set OPENSSL_CONF to a non-existing file.
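For example (this prints the warning from above and proceeds):

$ OPENSSL_CONF=nonexistent openssl x509 -req -in csr.pem -CA ca.pem -CAkey privkey.pem -CAcreateserial -out cert.pem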

Sign the CSR using "ca" command

The previously shown openssl-ca.cnf also works for this case, and cannot be reduced further.

$ openssl ca -config openssl-ca.cnf -cert ca.pem -keyfile privkey.pem -in csr.pem -out cert.pem -outdir . -md default -days 365 -batch

By default the ca command does not copy X.509v3 extensions from the CSR to the signed certificate. (Note that to get extensions into a CSR in the first place, the [ req ] section needs a req_extensions line; the x509_extensions line from our example openssl-min-req.cnf applies only to req -x509.) To copy them, add a copy_extensions = copy line to the CA section ([ CA_default ] in our example openssl-ca.cnf). Alternatively, the CA may add its own extensions when signing a CSR - for example, to set CA:FALSE. These should be listed in the section whose name is given by the x509_extensions variable in the [ CA_default ] section.
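A sketch of the relevant additions to openssl-ca.cnf (signing_ext is a name chosen here; CA:FALSE is just an example extension):

[ CA_default ]
# ... the lines from the minimal example, plus:
copy_extensions = copy
x509_extensions = signing_ext

[ signing_ext ]
basicConstraints = CA:FALSE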

Conclusion

  • To issue certificates with OpenSSL, a configuration file is needed.
  • OpenSSL will use the default config file unless you provide another one via command-line option or an environment variable.
    • Except that x509 -req is missing the option.
  • While the default may work for some cases, if you need any control over your certificate, you'll need to create the config file.
  • You get more control over the content of your certificates when starting with the bare minimum than with the default. The minimal examples are provided in this post.

Saturday, December 31, 2016

OpenSSL PKCS#7 verification "unsupported certificate purpose"

Problem

You verify a signature of PKCS#7 structure with OpenSSL and get error:

  unsupported certificate purpose

This post explains the reason for this error and ways to proceed.

Background

By "verify a signature", one probably means that:
  1. The signature itself (e.g. an RSA block) taken over the corresponding data (or its digest) validates against the signing certificate.
  2. Two sets of certificates are available, which we'd call "trusted certificates" and "chaining certificates". A chain from the signing certificate up to at least one of the trusted certificates can be built with the chaining certificates.
  3. All certificates in this chain have "acceptable" X.509v3 extensions.
The first requirement is clear.
The second one is clear when the sets are defined. OpenSSL API requires them to be passed as parameters for the verification.
The last requirement relies on X.509v3 extensions, which are a terrible mess.
It's hard to provide a non-messy solution for a messy specification. The CERTIFICATE EXTENSIONS section in the OpenSSL manual for the x509 subcommand has this passage:
The actual checks done are rather complex and include various hacks and
workarounds to handle broken certificates and software.
It looks like PKCS#7 verification fell victim to these "hacks and workarounds".

OpenSSL certificate verification and X.509v3 extensions

Before getting to the topic (verifying PKCS#7 structures), let's look at how OpenSSL verifies certificates. Both the command-line openssl verify and the C API X509_verify_cert() have a notion of purpose, explained in the CERTIFICATE EXTENSIONS section of man x509. This notion seems to be particular to OpenSSL.
  • If the purpose is not specified, then OpenSSL does not check the certificate extensions at all.
  • Otherwise, for each purpose, OpenSSL allows certain combinations of the extensions.
The correspondence between OpenSSL's purpose and X.509v3 extensions is nothing like one-to-one. For example, the purpose S/MIME Signing (smimesign in the short variant) requires that:
  1. The "Common S/MIME Client Tests" pass (the description of how they translate to X.509v3 extensions takes a long paragraph in man x509).
  2. Either the keyUsage extension is not present, or it is present and contains at least one of the digitalSignature and nonRepudiation flags.
For another example, there seems to be no command-line option for openssl verify to require the presence of Extended Key Usage bits like codeSigning. For that, one must use the C API to separately check every extension bit.
So far, this sounds about as logical as it could be, given the task of somehow handling The Terrible Mess of X.509v3 extensions. The OpenSSL CLI seems to have made an attempt to compose some "frequently used combinations" of the extensions and name them with its own term, "purpose".
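For example, to require the S/MIME signing purpose when verifying a certificate (file names are illustrative):

  $ openssl verify -purpose smimesign -CAfile ca.pem cert.pem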

OpenSSL PKCS#7 verification and X.509v3 extensions

For reasons yet unknown to the author, OpenSSL uses a different strategy when verifying PKCS#7.

Command-line

There are two command-line utilities which can do that: openssl smime -verify and openssl cms -verify (S/MIME and CMS are both built on PKCS#7). Both accept the -purpose option, which according to the manual pages has the same meaning as for certificate verification. But it does not. These are the differences:
  1. If no -purpose option is passed, both commands behave as though they received -purpose smimesign.
  2. It is possible to disable this smimesign purpose checking by passing -purpose any.

C API

On the C API side, one is supposed to use PKCS7_verify() for PKCS#7 verification. This function also behaves as though it verifies with the smimesign purpose (see the setting of X509_PURPOSE_SMIME_SIGN in pk7_doit.c:919).
As with the command-line, it is possible to disable checking the extensions, although with more typing.
In the C API, the verification "purpose" is a property of the X509_STORE passed to PKCS7_verify(), which plays the role of the trusted certificate set.
Side note: manipulation of the parameters directly on the store was added only in OpenSSL 1.1.0 with X509_STORE_get0_param(X509_STORE *store). In earlier versions, an X509_STORE_CTX had to be created from the store and the parameters manipulated with X509_STORE_CTX_get0_param(). By the way, support for OpenSSL 1.0.1 ended on the very day of this writing.

Possible reasoning

One might imagine reasoning like this: for openssl smime, smimesign is a kind of "default purpose" and thus is implicitly required; and openssl cms is in fact an attempt to rewrite openssl smime, thus it behaves the same way.
Such behavior is fine for S/MIME, but it is not what you would expect for anything else packed into PKCS#7.
Translating from OpenSSL's "purpose" to X.509v3 extensions, verification fails unless your signing certificate satisfies the two conditions:
  1. If the Key Usage extension is present, then it must include the digitalSignature bit or the nonRepudiation bit.
  2. If the Extended Key Usage extension is present, then it must include email protection OID.
In fact, the first condition is "reasonable": RFC 5280 states in the "Key Usage" section that
For example, when an RSA key should be used only to verify signatures on objects other than public key certificates and CRLs, the digitalSignature and/or nonRepudiation bits would be asserted.
A PKCS#7 structure qualifies as an "object other than public key certificates and CRLs". But the second condition is not relevant for anything other than S/MIME. (Of course, in the end it is your certificate practice policy which determines what is accepted and what is not; the above is just "common sense".)

Demo

Prepare the files

Create a chain of certificates: a self-signed "root", then an "intermediate" signed by the root, then a "signing" certificate signed by the intermediate. Add some extendedKeyUsage extension to the signing certificate, but do not add emailProtection. For example, add codeSigning. (The demo below also assumes the signing certificate carries the cRLSign key usage, so that the crlsign purpose check further down succeeds.)
Create appropriate OpenSSL config files as explained in the "minimal openssl.cnf" post; sketches follow below.
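For reference, possible sketches of these files under this post's assumptions (section names, key sizes and extension choices are this post's picks; adapt to your setup):

# openssl-CA.cnf
[ ca ]
default_ca = CA_default

[ CA_default ]
database        = demoCA/index.txt
serial          = demoCA/serial
new_certs_dir   = demoCA/newcerts
default_md      = sha256
default_days    = 365
policy          = policy_default
x509_extensions = ca_ext

[ policy_default ]

[ req ]
default_bits       = 2048
distinguished_name = req_dn
x509_extensions    = ca_ext

[ req_dn ]
commonName = Common Name

[ ca_ext ]
basicConstraints = CA:true

# openssl-signing.cnf: identical, except that issued certificates get the
# signing extensions instead of the CA ones ([ CA_default ] then has
# x509_extensions = signing_ext):
[ signing_ext ]
keyUsage         = cRLSign
extendedKeyUsage = codeSigning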
Create requests for all the three:
  $ openssl req -config openssl-CA.cnf -new -x509 -nodes -outform pem -out root.pem -keyout root-key.pem
  $ openssl req -config openssl-CA.cnf -new -nodes -out intermediate.csr -keyout intermediate-key.pem
  $ openssl req -config openssl-signing.cnf -new -nodes -outform pem -out signing.csr -keyout signing-key.pem
Sign the intermediate and the signing certificates:
  $ mkdir -p demoCA/newcerts
  $ touch demoCA/index.txt
  $ echo '01' > demoCA/serial
  $ openssl ca -config openssl-CA.cnf -in intermediate.csr -out intermediate.pem -keyfile root-key.pem -cert root.pem
  $ openssl ca -config openssl-signing.cnf -in signing.csr -out signing.pem -keyfile intermediate-key.pem -cert intermediate.pem
Create some PKCS#7 structure, signed with the signing certificate. The chain certificates must be provided during the verification or embedded into the signature. Let's embed the intermediate certificate. (If there were more certificates in the chain, they would simply be placed in one .pem file):
  $ echo 'Hello, world!' > data.txt
  $ openssl smime -sign -in data.txt -inkey signing-key.pem -signer signing.pem -certfile intermediate.pem -nodetach > signed.pkcs7
We have everything ready for verifying.

Verification with command-line OpenSSL tools

Attempt to verify it:
  $ openssl smime -verify -CAfile root.pem -in signed.pkcs7 -out /dev/null -signer signing.pem
  Verification failure
  139944505955992:error:21075075:PKCS7 routines:PKCS7_verify:certificate verify error:pk7_smime.c:336:Verify error:unsupported certificate purpose
Attempt to verify, skipping extension checks:
  $ openssl smime -verify -CAfile root.pem -in signed.pkcs7 -out /dev/null -signer signing.pem -purpose any
  Verification successful
Attempt to verify it, specifying the OpenSSL "purpose" which the signing certificate satisfies:
  $ openssl smime -verify -CAfile root.pem -in signed.pkcs7 -out /dev/null -signer signing.pem -purpose crlsign
  Verification successful

Verification with the C OpenSSL API

The code below is a "demo"; any real application would at least have to check the return codes of all calls and free the allocated resources. But it shows how the verification of a PKCS#7 structure (unexpectedly) fails, and then succeeds after setting the "purpose" which the signing certificate satisfies:

    #include <stdlib.h>
    #include <stdio.h>
    #include <fcntl.h>              /* open() */

    #include <openssl/bio.h>
    #include <openssl/err.h>
    #include <openssl/ssl.h>
    #include <openssl/pkcs7.h>
    #include <openssl/safestack.h>
    #include <openssl/x509.h>
    #include <openssl/x509v3.h>     /* X509_PURPOSE_ANY */
    #include <openssl/x509_vfy.h>
    #include <openssl/pem.h>        /* PEM_read_X509() */

    int main(int argc, char* argv[]) {
      X509_STORE *trusted_store;
      X509_STORE_CTX *ctx;
      STACK_OF(X509) *cert_chain;
      X509 *root, *intermediate, *signing;
      BIO *in;
      int purpose, ret;
      X509_VERIFY_PARAM *verify_params;
      PKCS7 *p7;
      FILE *fp;
      int fd;

      SSL_library_init();
      SSL_load_error_strings();

      fd = open("signed.pkcs7", O_RDONLY);
      in = BIO_new_fd(fd, BIO_NOCLOSE);
      p7 = SMIME_read_PKCS7(in, NULL);

      cert_chain = sk_X509_new_null();

      fp = fopen("root.pem", "r");
      root = PEM_read_X509(fp, NULL, NULL, NULL);
      sk_X509_push(cert_chain, root);

      fp = fopen("intermediate.pem", "r");
      intermediate = PEM_read_X509(fp, NULL, NULL, NULL);
      sk_X509_push(cert_chain, intermediate);

      trusted_store = X509_STORE_new();
      X509_STORE_add_cert(trusted_store, root);

      fp = fopen("signing-ext-no-smimesign.pem", "r");
      signing = PEM_read_X509(fp, NULL, NULL, NULL);

      ret = PKCS7_verify(p7, cert_chain, trusted_store, NULL, NULL, 0);
      printf("Verification without specifying params: %s\n", ret ? "OK" : "failure");

      /* Now set a suitable OpenSSL's "purpose", or disable its checking.
       * Note: since OpenSSL 1.1.0, we'd not need `ctx`, but could just use:
       * verify_params = X509_STORE_get0_param(trusted_store); */

      ctx = X509_STORE_CTX_new();
      X509_STORE_CTX_init(ctx, trusted_store, signing, cert_chain);
      verify_params = X509_STORE_CTX_get0_param(ctx);
      /* X509_PURPOSE_get_by_sname() returns a table index; convert it to the
       * purpose id which X509_VERIFY_PARAM_set_purpose() expects.
       * Or simply: purpose = X509_PURPOSE_ANY (already an id). */
      purpose = X509_PURPOSE_get_id(
                  X509_PURPOSE_get0(X509_PURPOSE_get_by_sname("crlsign")));
      X509_VERIFY_PARAM_set_purpose(verify_params, purpose);
      X509_STORE_set1_param(trusted_store, verify_params);

      ret = PKCS7_verify(p7, cert_chain, trusted_store, NULL, NULL, 0);
      printf("Verification with 'crlsign' purpose: %s\n", ret ? "OK" : "failure");
      return 0;
    }
If our policy requires the cRLSign Key Usage, then we can use this example code. What if the policy needs some extension combination for which there is no suitable OpenSSL "purpose" - for example, the codeSigning Extended Key Usage? In that case it is not possible to do the job with just one call to PKCS7_verify(); the extensions need to be checked separately.

Conclusion

If you use OpenSSL for verifying PKCS#7 signatures, you should check whether either of the following holds:
  1. Your signing certificate has the Extended Key Usage extension, but no emailProtection bit.
  2. Your signing certificate has the keyUsage extension, but neither the digitalSignature nor the nonRepudiation bit.
If so, verification with OpenSSL fails even though your signature "should" verify correctly.
For checking signatures with the command-line openssl smime -verify, a partial workaround is adding the option -purpose any. In this case OpenSSL will not check the purpose-related extensions at all, which may or may not be acceptable under your verification policy.
The -purpose option allows checking only certain (although probably common) X.509v3 extension combinations: OpenSSL defines a number of what it calls "purposes". If you need to check a combination which does not correspond to any of these "purposes", it must be done as a separate operation.
For checking signatures with the C API PKCS7_verify(), the approach can be the following:
  1. If your policy does not care about X.509v3 extensions, set your verification parameters to X509_PURPOSE_ANY.
  2. Otherwise, check the X.509v3 extensions of the signing certificate as required by your policy. Set a custom verification callback with X509_STORE_CTX_set_verify_cb(), which might either ignore the "unsupported certificate purpose" error (X509_V_ERR_INVALID_PURPOSE) or implement some more complicated logic.

Tuesday, January 26, 2016

ARP messages: Request, Reply, Probe, Announcement

ARP, as originally specified in RFC 826, had two message types: request and reply. The protocol can be "abused" in the sense that a host can send a reply message not in response to any preceding request; this has been formalized in RFC 5227. The two "new" messages are probe and announcement, the latter also called "gratuitous ARP". They can be used for detecting duplicate IP addresses on the same LAN.

The following questions may not be immediately clear:
  • The "new" messages do not extend the protocol - they do not introduce new ARP message types. How can one speak about "new messages"?
  • Source and destination MAC addresses are naturally present in the MAC headers of the ARP message. But there are also "Source Hardware Address" (SHA) and "Target Hardware Address" (THA) fields in the ARP message body, which are also MAC addresses. How are the former and the latter related, and why seemingly the same information is duplicated?
The table below shows the MAC header and the (relevant) ARP body fields for each of the four messages; SPA and TPA are the Sender and Target Protocol Addresses, i.e. IP addresses. The differences help to understand how each field is used.

request - sent when the host wants to send an IP packet, but does not know the MAC address of the destination:
    MAC header: src = own, dst = broadcast
    ARP body:   type = REQUEST, SHA = own, SPA = own, THA = 0, TPA = destination's

reply - sent when the host receives an ARP request to an IP address this host owns:
    MAC header: src = own, dst = destination's (i.e. the requestor's), or broadcast (1)
    ARP body:   type = REPLY, SHA = own, SPA = own, THA = requestor's, TPA = requestor's

probe (RFC 5227, 2.1) - sent when the host configures a new IP address for an interface:
    MAC header: src = own, dst = broadcast
    ARP body:   type = REQUEST (2), SHA = own, SPA = 0 (3), THA = 0, TPA = probed

announcement (RFC 5227, 2.3) - sent when the host, after a probe, concludes that it will use the probed address:
    MAC header: src = own, dst = broadcast
    ARP body:   type = REQUEST (2), SHA = own, SPA = new own, THA = 0, TPA = new own

Notes to the table:

  1. RFC 5227, section 5.6 explains why "Broadcast replies are not universally recommended, but may be appropriate in some cases".
  2. RFC 5227, section 3 notes that the type here could be REPLY as well, then continues to give reasons why REQUEST is recommended.
  3. Zero is specified here in order not to pollute the ARP caches of other hosts, in case the probed address is already used on this LAN and the probing host will not take it.
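These messages can be observed in practice with the arping utility from iputils (a sketch; the interface name and address are illustrative):

  $ arping -D -I eth0 -c 2 192.0.2.10    # probe: duplicate address detection before taking the address
  $ arping -U -I eth0 -c 1 192.0.2.10    # announcement: gratuitous ARP after taking the address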

Saturday, October 3, 2015

Possible implications of netmask mismatch

Summary: an IPv4 host with a netmask not matching that of the subnet to which the interface is connected likely builds incorrect routing tables, misses some broadcasts, may incorrectly identify broadcasts as unicasts, and may unintentionally broadcast to its own subnet.

What does the netmask determine?

The netmask (in IPv4 terminology) and the network prefix (in IPv6 terminology) can be associated with an IP subnet, and correspondingly with a network interface. This post handles IPv4 only, so the term "netmask" will be used. Together with the interface's own IP address, the netmask determines whether another IP address belongs to the same IP subnet as the NIC. Good, so how is this knowledge used?

Processing of multicast packets is not affected by the netmask, so multicast will not be mentioned further. For unicast and broadcast, the netmask is consulted in three different situations, listed in the following sections.

Case 1. Netmask can be used as input for constructing the routing table

The routing system normally creates routes automatically for the subnet to which each network interface belongs. That is, for each network interface I with address A(I) and netmask M, the host calculates the subnet of this interface, S(I) = A(I) & M. Outgoing packets to any address A(P) such that A(P) & M = S(I) will be emitted from the interface I.
While this behavior is typical, nothing mandates hosts to create such a routing table entry. For example, if a host has two interfaces on the same subnet, then obviously some more information is needed to decide which of the interfaces shall send the packets destined to their common subnet. Another example is a firewall with a forwarding policy more restrictive than just "put every packet for subnet S(I) to interface I".
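The automatically created route can be inspected with iproute2 (a sketch; addresses are illustrative, the interface is assumed to be up, and the output line is typical of Linux kernels):

  $ ip addr add 1.1.1.1/16 dev eth0     # misconfigured: the subnet is actually 1.1.1.0/24
  $ ip route show dev eth0
  1.1.0.0/16 proto kernel scope link src 1.1.1.1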

Case 2. Netmask is used to determine whether an arrived packet is a (directed) broadcast to a subnet of some local interface

With routing covered, we can limit further investigation to only:
  1. Unicast packets destined to "this host" (i.e. one of its interfaces).
  2. Directed broadcast packets to "this network". There can be more than one "this" network if the host has more than one network interface (whether or not the host is a router).
Indeed:
  • A directed broadcast to a network not in the "our networks" set is handled like any other packet, subject to possible routing.
  • Local broadcast packets (destination 255.255.255.255) are obviously not affected by the netmask setting.

For hosts which are not routers, RFC 922 defines handling of broadcast packets in a simple way:
In the absence of broadcasting, a host determines if it is the
recipient of a datagram by matching the destination address against
all of its IP addresses.  With broadcasting, a host must compare the
destination address not only against the host's addresses, but also
against the possible broadcast addresses for that host.

Now imagine that an interface of some host has a netmask which does not match that of the subnet the interface is connected to.

An interface misconfigured with a shorter netmask fails to process broadcasts: such a host understands them as unicasts.
  • Example: in a 1.1.1.0/24 network, a packet to the broadcast address 1.1.1.255 will not be recognized as broadcast by a misconfigured interface 1.1.1.1/16.
  • That is, unless the network has all bits in the netmask difference equal to 1.
    • Example: in a 1.1.255.0/24 network, a packet to the broadcast address 1.1.255.255 will, by coincidence, be correctly accepted as broadcast by a misconfigured interface 1.1.1.1/16.
  • A broadcast packet which is incorrectly understood as unicast by a misconfigured interface can also happen to bear the destination address of this interface itself.
    • Example: in a 1.1.0.0/16 network, a broadcast packet to 1.1.255.255 will be received as unicast by a misconfigured interface 1.1.255.255/8.
  • Additionally, the host may attempt to send a unicast packet which would appear as a valid broadcast on the network.
    • Example: in a 1.1.0.0/16 network, a host misconfigured as 1.1.1.1/8 sends a unicast to the destination address 1.1.255.255, which would appear as a broadcast on this network. In fact, there can be no host with address 1.1.255.255 on this network (it is a broadcast address), so nobody answers the ARP query and the host will not be able to send such a packet.
An interface misconfigured with a longer netmask fails to process broadcasts as well: it will consider them as not belonging to its own subnet.
  • Example: in a 1.0.0.0/8 network, a packet to the broadcast address 1.255.255.255 will not be received by a misconfigured interface 1.1.1.1/16.
  • Again, unless the address of the misconfigured interface happens to have all bits in the netmask difference equal to 1.
    • Example: in that same network, that same broadcast packet will be accepted just fine by a misconfigured interface 1.255.1.1/16.
For hosts which are routers, RFC 922 adds a clause concerning broadcast packets destined to a network other than the one on which the packet was received:
...if the datagram is addressed to a hardware network
to which the gateway is connected, it should be sent as a
(data link layer) broadcast on that network.  Again, the
gateway should consider itself a destination of the datagram.

In this case, the netmask of the router's interface where the packet has been received is not relevant; the packet should be processed anyway. Instead, the configuration of the packet's destination interface is the basis for the decision. Correspondingly, a mismatch between the netmask of the destination interface and the sender's expectation of the netmask leads to the same consequences as listed above for non-forwarding hosts.

Have we covered all cases? Three independent factors affect the outcome:
  1. Is the receiver's netmask shorter or longer than that of the subnet it is connected to?
  2. Are the bits from the difference in netmask lengths all equal to one?
  3. Is the packet a unicast or a (directed) broadcast?
All 8 possibilities have been considered above.

Case 3. Netmask is used for setting destination address of outgoing broadcast packets

When a host wishes to send a broadcast packet from a certain interface, it sets the destination address to that of the interface, with all bits which are zero in the netmask set to 1. Correspondingly:

A host with a shorter netmask will set too many bits to 1. On the local subnet, other hosts will take these packets as belonging to another subnet and consequently not process them.
  • Example: in the /24 network 1.1.1.0/24, a host misconfigured as 1.1.1.1/16 sends what it thinks is a "broadcast" with destination 1.1.255.255. (It will be sent as a link-layer broadcast.) No other host on this network accepts it.
  • Unless the network has all bits in the netmask difference equal to one.
    • Example: in the /24 network 1.1.255.0/24, a misconfigured host 1.1.255.1/16 sends a "broadcast" packet to 1.1.255.255, which happens to be a valid broadcast on this network.
A host with a longer netmask will not set enough bits to 1. The packets sent as broadcast will be taken as unicast by other hosts on this subnet.
  • Example: in the /8 network 1.0.0.0/8, a host misconfigured as 1.1.1.1/16 sends what it thinks to be a broadcast packet, to 1.1.255.255. It appears as a valid unicast on this subnet. If there is a host with address 1.1.255.255, that host will accept the packet. (Besides the probably unexpected IP content, the receiving host may also notice that the layer 2 destination of this packet was a layer 2 broadcast.)
Naturally, these cases are "reversed" repetition of the cases for the receiving hosts.

 

Conclusion

1. The netmask is normally (but not necessarily) used as input for routing table construction. If used, a wrong interface netmask makes the following routing failures possible:
  • Too long a netmask: the host will have no route for some packets which actually belong to the subnet of this interface. An attempt to send a packet to a host outside the (too long) misconfigured netmask but inside the correct netmask of the net results in an ICMP "Destination net unreachable" error. If there is a default route, the host will not generate the error, but will send the packets via the default route instead of the interface of this subnet.
  • Too short a netmask: the host may attempt to send, via this interface, packets which would not be received by any host of the connected subnet. This attempt probably fails because no host answers the ARP request, which results in an ICMP "Destination host unreachable" error.
2. IPv4 only: directed broadcast packets are sent and received using the netmask information. Directed broadcast is a marginal case; such packets are rarely used and are dropped by most routers as per RFC 2644. But if directed broadcasts are used, then a mismatched netmask results in any of:
  • failure to receive broadcast packets;
  • failure of routers to forward broadcast packets;
  • forwarding of broadcast packets destined to the router's own network;
  • accepting unicast packets, destined to some host, as broadcasts;
  • accepting broadcast packets as unicasts.

Sunday, December 21, 2014

Support for elliptic curves by jarsigner

Summary: Support for cryptography features by jarsigner depends on available Java crypto providers.

Suppose you are defining a PKI profile. You naturally want to use stronger algorithms with better performance, which (as of year 2014) most likely means elliptic curves. Besides bit strength and performance, you want to be absolutely sure that the curve is supported by your software. If the latter includes jarsigner, you'll be surprised to find that the Oracle documentation seems not to mention at all which elliptic curves jarsigner supports.

Signing a JAR means adding digests of the JAR data to the manifest file, adding a digest of the latter to the manifest signature file, and then creating the JAR signature block file. The last step involves two operations:
  1. calculating a digest over the manifest signature file;
  2. signing - meaning, encrypting with the private key - that digest.
jarsigner has an option '-sigalg', which is supposed to specify the two algorithms used in these two steps. (There is also a '-digestalg' option, but it is not used for the signature block file; it defines the algorithm used in the two initial steps.) Well, this option is irrelevant for the signing step: the curve is in fact defined by the provided private key. So jarsigner will either do the job or choke on a key which comes from an unsupported curve.

A curve may "not work" because it is unknown to jarsigner itself or to an underlying crypto provider. (The latter case was the reason for bug 1006776; only three curves actually worked, while many others produced a totally unclear error: "certificate exception: java.io.IOException: subject key, Could not create EC public key".)

In a particular setup, the support can be tested. For curves supported by OpenSSL, the test can be done by creating a keypair on each curve and attempting the signing. Create the list of curves with 'openssl ecparam -list_curves', manually remove the extra words openssl puts there, and feed the list to the script's stdin:

#!/bin/bash
# Test, which OpenSSL-supported elliptic curves from the list are supported also by jarsigner.

result="supported-curves.txt"
source_data="data.txt"
jar="data.jar"
key="key.pem"
cert="cert.pem"
pfx="keystore.pfx"
key_alias="foo"         # Identifier of the key in the keystore
storepass="123456"      # jarsigner requires some

touch $source_data
: > $result             # start with an empty result file

while read curve; do
        # Generate an ECDSA private key for the selected curve:
        openssl ecparam -name $curve -genkey -out $key

        # Generate the certificate for the key; give some dummy subject:
        openssl req -new -x509 -nodes -key $key -out $cert -subj /CN=foo

        # Wrap key+cert in a PKCS12, so that jarsigner can use it:
        openssl pkcs12 -export -in $cert -inkey $key -passout pass:$storepass -out $pfx -name $key_alias

        # Create a fresh jar and attempt to sign it
        jar cf $jar $source_data
        jarsigner -keystore $pfx -storetype PKCS12 -storepass $storepass $jar $key_alias
        [ $? -eq 0 ] && echo $curve >> $result
done

rm $source_data $key $cert $pfx $jar
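The curve list can be fed to the script like this (assuming the script is saved as test-curves.sh; the cut/tr part strips the descriptions which openssl ecparam -list_curves appends, and may need adjusting for your OpenSSL version):

openssl ecparam -list_curves | cut -d: -f1 | tr -d ' ' | ./test-curves.sh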

And enjoy the list in supported-curves.txt.


Conclusion: support of elliptic curves by jarsigner depends on jarsigner itself and on the JRE configuration. There is no command-line option to list all supported curves. For a particular system, support for the curves known to OpenSSL can easily be tested.

Monday, December 8, 2014

JAR signature block file format


Summary: this post explains the content of the "JAR signature block file" - that is, the file "META-INF/*.RSA", "META-INF/*.DSA" or "META-INF/*.EC" inside the JAR.

Oracle does not document it

A signed JAR file contains the following additions over a non-signed JAR:
  1. checksums over the JAR content, stored in the text files "META-INF/MANIFEST.MF" and "META-INF/*.SF";
  2. the actual cryptographic signature (created with the private key of a signer) over the checksums, in a binary signature block file.
Surprisingly, the format of the latter does not seem to be documented by Oracle. The JAR file specification provides only the useful knowledge that "These are binary files not intended to be interpreted by humans".

Here, the content of this "signature block file" is explained. We show how it can be created and verified with a non-Java tool: OpenSSL.

Create a sample signature block file

For our investigation, generate such file by signing some data with jarsigner:
  • Make an RSA private key (and store it unencrypted), corresponding self-signed certificate, pack them in a format jarsigner understands:
openssl genrsa -out key.pem
openssl req -x509 -new -key key.pem -out cert.pem -subj '/CN=foo'
openssl pkcs12 -export -in cert.pem -inkey key.pem -out keystore.pfx -passout pass:123456 -name SEC_PAD
  • Create the data, jar it, sign the JAR, and unpack the resulting "META-INF" directory:
echo 'Hello, world!' > data
jar cf data.jar data
jarsigner -keystore keystore.pfx -storetype PKCS12 -storepass 123456 data.jar SEC_PAD
unzip data.jar META-INF/*
 
The "signature block file" is META-INF/SEC_PAD.RSA.

What does this block contain

The file appears to be a DER-encoded ASN.1 PKCS#7 data structure. A DER-encoded ASN.1 file can be examined with the asn1parse subcommand of OpenSSL:

openssl asn1parse -in META-INF/SEC_PAD.RSA -inform der -i > jarsigner.txt


For more verbosity, you may use some ASN.1 decoder, such as the one at lapo.it.

You'll see that the two top-level components are:
  • The certificate.
  • 256-byte RSA signature.
You can extract the signature bytes from the binary data and verify (= decrypt) them with openssl rsautl. That involves some "low-level" operations and brings you one step closer to understanding the file's content. A simple "high-level" verification command would be:

openssl cms -verify -noverify -content META-INF/SEC_PAD.SF -in META-INF/SEC_PAD.RSA -inform der

This command tells: "Check that the CMS structure in META-INF/SEC_PAD.RSA is really a signature of META-INF/SEC_PAD.SF; do not attempt to validate the certificate".
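For the "low-level" route mentioned above, a sketch (OFFSET is a placeholder for the position of the first of the 256 signature bytes; locate the signature OCTET STRING with asn1parse first, then let rsautl decrypt it and parse the recovered DigestInfo):

openssl asn1parse -in META-INF/SEC_PAD.RSA -inform der -i | grep 'OCTET STRING'
dd if=META-INF/SEC_PAD.RSA of=sig.bin bs=1 skip=OFFSET count=256
openssl rsautl -verify -certin -inkey cert.pem -in sig.bin -asn1parse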

Creating the signature block file with OpenSSL

For this example, we created the signature block file with jarsigner. Knowing the file's content, we can look for other ways to produce or verify such a structure. It may not be that hard to construct it "manually", although authorities all recommend against implementing your own crypto.

There are at least two OpenSSL commands which can produce similar structures: cms and smime. The options below make the signature closer to the one produced by jarsigner:

openssl cms -sign -binary -noattr -in META-INF/SEC_PAD.SF -outform der -out openssl-cms.der -signer cert.pem -inkey key.pem -md sha256
openssl smime -sign -noattr -in META-INF/SEC_PAD.SF -outform der -out openssl-smime.der -signer cert.pem -inkey key.pem -md sha256

To satisfy your curiosity, peek into these files and compare them to jarsigner.txt with your favorite diff tool:
 
openssl asn1parse -inform der -in openssl-cms.der -i >  openssl-cms.txt
openssl asn1parse -inform der -in openssl-smime.der -i >  openssl-smime.txt 

 

Testing the "DIY signature"

The underlying ASN.1 structures are, in both the cms and smime cases, very close but not identical to those made by jarsigner. As the format of the signature block file is not documented, we can run tests to have some grounds for saying that "it works". Just replace the original signature block file with our signature created by OpenSSL:

cp openssl-cms.der META-INF/SEC_PAD.RSA
zip -u data.jar META-INF/SEC_PAD.RSA
jarsigner -verify -keystore keystore.pfx -storetype PKCS12 -storepass 123456 data.jar SEC_PAD

Lucky strike: a signature produced by 'openssl cms' is recognized by jarsigner (that is, at least by some particular version).

Note that the data which is signed is SEC_PAD.SF, and it was created by jarsigner. If not using the latter, you'll need to produce that file in some other way, for example with python-javatools.

What's the use for this knowledge?

Besides better understanding your data, one can think of at least two reasons to sign JARs with non-native tools. Both are somewhat atypical, but not completely irrelevant:

1. The signature must be produced on a system where native Java tools are not available.
Such a system must have access to the private key (in one form or another), and security administrators may not like the idea of having such bloated software as a JRE in a tightly controlled environment.
2. The signature must be produced or verified on a system where the available tools do not support the required signature algorithm.
There can be reasons that restrict tools, algorithms, or both; examples include compliance with regulations or compatibility with legacy systems. On a certain system, testing which elliptic curves are supported by jarsigner revealed just three curves (which is not much).

 

Conclusion

  • The JAR signature block file is a DER-encoded PKCS#7 structure representing a detached signature over the .SF file.
  • Its exact content can be viewed with "openssl asn1parse" or with any ASN.1 decoder.
  • OpenSSL can verify signatures in signature block files and create almost identical structures.
  • Java tools have been shown to accept these "almost identical" structures.