Saturday, December 31, 2016

OpenSSL PKCS#7 verification "unsupported certificate purpose"

Problem

You verify a signature of a PKCS#7 structure with OpenSSL and get the error
unsupported certificate purpose

 

Background

The meaning of the phrase "verify a signature" differs between contexts. The interpretation you may be expecting is:
  1. The signature itself (e.g. RSA block) taken over the corresponding data (or its digest) validates against the signing certificate.
  2. Two sets of certificates are available, which we'd call "trusted certificates" and "chaining certificates" (how they are obtained is out of scope here). A chain from the signing certificate up to at least one of the trusted certificates can be built with the chaining certificates.
  3. Every certificate in this chain has "suitable" X.509v3 extensions.
The first requirement is clear.
The second one is clear when the named sets are defined. OpenSSL API requires them to be passed as parameters for the verification.
The last requirement relies on X.509v3 extensions, which are a terrible mess.
It's hard to provide a non-messy solution for a messy requirement. The section CERTIFICATE EXTENSIONS in the OpenSSL manual for the x509 subcommand has this passage:
The actual checks done are rather complex and include various hacks and workarounds to handle broken certificates and software.
It looks like PKCS#7 verification fell victim to these "hacks and workarounds".

 

OpenSSL certificate verification and X.509v3 extensions

Before getting to the main topic (verifying PKCS#7 structures), let's look at how certificates are verified. Both the command-line openssl verify and the C API X509_verify_cert() have a notion of purpose, explained in the section CERTIFICATE EXTENSIONS of man x509. This notion seems to be particular to OpenSSL. If no purpose is specified, OpenSSL does not check the certificate extensions at all. Otherwise, OpenSSL requires certain combinations of certain extensions for the validation to succeed.
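For illustration, a minimal sketch of such a check through the C API; trusted_store, target_cert and chain_certs are assumed to be prepared by the caller, and the function name is ours:

#include <openssl/x509v3.h>

/* Verify target_cert against trusted_store, requiring the S/MIME
 * signing purpose. Returns 1 on success, as X509_verify_cert() does. */
static int verify_with_purpose(X509_STORE *trusted_store, X509 *target_cert,
                               STACK_OF(X509) *chain_certs)
{
  X509_STORE_CTX *ctx = X509_STORE_CTX_new();
  int ok;

  X509_STORE_CTX_init(ctx, trusted_store, target_cert, chain_certs);
  /* Without this call, no extension checks are done at all: */
  X509_STORE_CTX_set_purpose(ctx, X509_PURPOSE_SMIME_SIGN);
  ok = X509_verify_cert(ctx);
  X509_STORE_CTX_free(ctx);
  return ok;
}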

The correspondence between OpenSSL's "purposes" and X.509v3 extensions is nothing like one-to-one. For example, the purpose S/MIME Signing (smimesign in the short variant) requires that:
  1. The "Common S/MIME Client Tests" pass (the description of how they translate to X.509 extensions takes a long paragraph in man x509).
  2. Either the keyUsage extension is not present, or it is present and the digitalSignature bit is set.
For another example, there seems to be no OpenSSL option for verify to require the presence of particular Extended Key Usage values such as codeSigning. For that, the C API allows checking every extension bit separately, as in the sketch below.
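As a minimal sketch of such a separate check, here is a test for the codeSigning Extended Key Usage, using accessors available since OpenSSL 1.1.0 (the function name is ours):

#include <openssl/x509v3.h>

/* Return 1 if cert has an EKU extension which includes codeSigning. */
static int has_code_signing_eku(X509 *cert)
{
  if (!(X509_get_extension_flags(cert) & EXFLAG_XKUSAGE))
    return 0; /* no Extended Key Usage extension at all */
  return (X509_get_extended_key_usage(cert) & XKU_CODE_SIGN) != 0;
}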

So far, this sounds logical - at least about as logical as it could be to somehow handle The Terrible Mess of X509v3 extensions.

 

OpenSSL PKCS#7 verification and X.509v3 extensions

For reasons yet unknown to the author, OpenSSL uses a different strategy when verifying PKCS#7 structures.
There are two command-line utilities which can do that: openssl smime -verify and openssl cms -verify (both S/MIME and CMS are built on PKCS#7). Both accept the -purpose option, which according to the manual pages has the same meaning as for certificate verification. But it does not. These are the differences:
  1. If no -purpose option is passed, both commands behave as though they received -purpose smimesign.
  2. It is possible to disable purpose checking by passing -purpose any.
On the C API side, one is supposed to use PKCS7_verify() for PKCS#7 verification. This function also behaves as though it verifies with the smimesign purpose (see the setting of X509_PURPOSE_SMIME_SIGN in pk7_doit.c:919). This again means that verification fails unless your signing certificate satisfies the two conditions referred to in the example above:
  1. If the "Extended Key Usage" extension is present, then it must include "email protection" OID.
  2. If the "Key Usage" extension is present, then it must include the "digitalSignature" bit.
As with the command line, it is possible to disable checking the extensions, although with more typing. In the C API, the verification "purpose" is a property of the X509_STORE passed to PKCS7_verify(), which plays the role of the trusted certificate set.
(Manipulating the parameters directly on the store became possible only in OpenSSL 1.1.0 with X509_STORE_get0_param(X509_STORE *store). In earlier versions, an X509_STORE_CTX must be created from the store and the parameters manipulated with X509_STORE_CTX_get0_param(). By the way, support for OpenSSL 1.0.1 ended on the very day of this writing.)

/* OpenSSL 1.1.0 and later: */
verify_params = X509_STORE_get0_param(trusted_store);
X509_VERIFY_PARAM_set_purpose(verify_params, X509_PURPOSE_ANY);

 

Demo

 

Prepare the files

Create a chain of certificates: a self-signed "root", then an "intermediate" signed by the root, then a "signing" certificate signed by the intermediate.
Write appropriate OpenSSL config files openssl-CA.cnf and openssl-signing.cnf. (Their content is not shown here; the essential point for this demo is that the signing certificate receives extensions which satisfy OpenSSL's crlsign purpose but not smimesign, e.g. keyUsage = cRLSign without digitalSignature.)

Create the self-signed root certificate and the requests for the other two certificates:
$ openssl req -config openssl-CA.cnf -new -x509 -nodes -outform pem -out root.pem -keyout root-key.pem
$ openssl req -config openssl-CA.cnf -new -nodes -out intermediate.csr -keyout intermediate-key.pem
$ openssl req -config openssl-signing.cnf -new -nodes -outform pem -out signing.csr -keyout signing-key.pem

Sign the intermediate and the signing certificates:
$ mkdir -p demoCA/newcerts
$ touch demoCA/index.txt
$ echo '01' > demoCA/serial
$ openssl ca -config openssl-CA.cnf -in intermediate.csr -out intermediate.pem -keyfile root-key.pem -cert root.pem
$ openssl ca -config openssl-signing.cnf -in signing.csr -out signing.pem -keyfile intermediate-key.pem -cert intermediate.pem

Create a PKCS#7 structure, signed with the signing certificate. The chain certificates must either be provided during verification or embedded into the signature. Let's embed the intermediate certificate. (Had there been more than one certificate in the chain, they would simply be placed in one .pem file.):
$ echo 'Hello, world!' > data.txt
$ openssl smime -sign -in data.txt -inkey signing-key.pem -signer signing.pem -certfile intermediate.pem -nodetach > signed.pkcs7
We have everything ready for the verification.

 

Verification with command-line OpenSSL tools

Attempt to verify it:
$ openssl smime -verify -CAfile root.pem -in signed.pkcs7 -out /dev/null -signer signing.pem
Verification failure
139944505955992:error:21075075:PKCS7 routines:PKCS7_verify:certificate verify error:pk7_smime.c:336:Verify error:unsupported certificate purpose

Attempt to verify, skipping extension checks:
$ openssl smime -verify -CAfile root.pem -in signed.pkcs7 -out /dev/null -signer signing.pem -purpose any
Verification successful

Attempt to verify it, specifying the OpenSSL "purpose" which the signing certificate satisfies:
$ openssl smime -verify -CAfile root.pem -in signed.pkcs7 -out /dev/null -signer signing.pem -purpose crlsign
Verification successful

 

Verification with the C API

The code below is demo-quality: any real application would at least check the return codes of all calls and free the allocated resources. But it shows how verification of a PKCS#7 structure (unexpectedly) fails, and then succeeds after setting the "purpose" which the signing certificate satisfies:
#include <fcntl.h>
#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/pkcs7.h>
#include <openssl/ssl.h>
#include <openssl/x509v3.h>

int main(int argc, char* argv[]) {
  X509_STORE *trusted_store;
  X509_STORE_CTX *ctx;
  STACK_OF(X509) *cert_chain;
  X509 *root, *intermediate, *signing;
  BIO *in;
  int purpose, ret;
  X509_VERIFY_PARAM *verify_params;
  PKCS7 *p7;
  FILE *fp;
  int fd;

  SSL_library_init();
  SSL_load_error_strings();

  fd = open("signed-ext-no-smimesign.pkcs7", O_RDONLY);
  in = BIO_new_fd(fd, BIO_NOCLOSE);
  p7 = SMIME_read_PKCS7(in, NULL);

  cert_chain = sk_X509_new_null();

  fp = fopen("root.pem", "r");
  root = PEM_read_X509(fp, NULL, NULL, NULL);
  sk_X509_push(cert_chain, root);

  fp = fopen("intermediate.pem", "r");
  intermediate = PEM_read_X509(fp, NULL, NULL, NULL);
  sk_X509_push(cert_chain, intermediate);

  trusted_store = X509_STORE_new();
  X509_STORE_add_cert(trusted_store, root);

  fp = fopen("signing-ext-no-smimesign.pem", "r");
  signing = PEM_read_X509(fp, NULL, NULL, NULL);

  ret = PKCS7_verify(p7, cert_chain, trusted_store, NULL, NULL, 0);
  printf("Verification without specifying params: %s\n", ret ? "OK" : "failure");

  /* Now set a suitable OpenSSL "purpose", or disable the checking.
   * Note: since OpenSSL 1.1.0, we'd not need `ctx`, but could just use:
   * verify_params = X509_STORE_get0_param(trusted_store); */

  ctx = X509_STORE_CTX_new();
  X509_STORE_CTX_init(ctx, trusted_store, signing, cert_chain);
  verify_params = X509_STORE_CTX_get0_param(ctx);
  purpose = X509_PURPOSE_get_by_sname("crlsign");            /* index into the purpose table */
  purpose = X509_PURPOSE_get_id(X509_PURPOSE_get0(purpose)); /* convert the index to a purpose id */
  /* Or simply: purpose = X509_PURPOSE_ANY; */
  X509_VERIFY_PARAM_set_purpose(verify_params, purpose);
  X509_STORE_set1_param(trusted_store, verify_params);

  ret = PKCS7_verify(p7, cert_chain, trusted_store, NULL, NULL, 0);
  printf("Verification with 'crlsign' purpose: %s\n", ret ? "OK" : "failure");
  return 0;
}

If our policy requires the cRLSign Key Usage, we can use this example code. But what if the policy needs some extension combination for which there is no suitable OpenSSL "purpose", for example the codeSigning Extended Key Usage? In that case its presence cannot be verified with just one call to PKCS7_verify(); the extensions need to be checked separately (as in the sketch earlier in this post).

 

Conclusion

If you use OpenSSL for verifying PKCS#7 signatures, you should check whether either of the following holds:
  1. Your signing certificate has the Extended Key Usage extension, but without the emailProtection OID.
  2. Your signing certificate has the keyUsage extension, but without the digitalSignature bit.
If either is the case, then verification with OpenSSL fails even though your signature "should" verify correctly.
For checking signatures with the command-line openssl smime -verify, a partial workaround is adding the option -purpose any. In this case OpenSSL does not check the certificate extensions at all, which may or may not be acceptable under your verification policy.
The -purpose option can only check for certain (though probably common) X.509v3 extension combinations: OpenSSL defines a number of what it calls "purposes". If you need to check a combination which does not correspond to any of these ready-made "purposes", then the check must be done as a separate operation.
For checking signatures with C API PKCS7_verify(), the algorithm can be the following:
  1. Check the X509v3 extensions of the signing certificate as required by your policy (as in the sketch earlier in this post).
  2. Either set your verification parameters to X509_PURPOSE_ANY, or set a custom verification callback which ignores the "unsupported certificate purpose" error, i.e. X509_V_ERR_INVALID_PURPOSE; see the sketch below.
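A minimal sketch of the second approach; the callback and variable names are ours, the OpenSSL calls are standard:

#include <openssl/x509_vfy.h>

static int ignore_purpose_cb(int ok, X509_STORE_CTX *ctx)
{
  /* Tolerate exactly one failure reason; everything else stays fatal. */
  if (!ok && X509_STORE_CTX_get_error(ctx) == X509_V_ERR_INVALID_PURPOSE)
    return 1;
  return ok;
}

/* Installed on the trusted store before calling PKCS7_verify(): */
X509_STORE_set_verify_cb(trusted_store, ignore_purpose_cb);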

Tuesday, January 26, 2016

ARP messages: Request, Reply, Probe, Announcement

ARP, as originally specified in RFC 826, has two message types: request and reply. The protocol can be "abused" in the sense that a host can send a reply message that is not a response to any preceding request; this has been formalized in RFC 5227. The two "new" messages are the probe and the announcement, the latter also called "gratuitous ARP". They can be used for detecting duplicate IP addresses on the same LAN.

The following questions may not be immediately clear:
  • The "new" messages do not extend the protocol: they do not introduce new ARP message types. How can one then speak of "new messages"?
  • Source and destination MAC addresses are naturally present in the MAC headers of an ARP message. But there are also "Sender Hardware Address" (SHA) and "Target Hardware Address" (THA) fields in the ARP message body, which are also MAC addresses. How are the former and the latter related, and why is seemingly the same information duplicated?
The overview below shows the MAC headers and the (relevant) ARP body fields for each of the four messages. The differences help to understand how each field is used.
request
  Sent when the host wants to send an IP packet, but does not know the MAC address of the destination.
  MAC headers: src = own, dst = broadcast
  ARP body: type = REQUEST; SHA = own, SPA = own, THA = 0, TPA = destination's

reply
  Sent when the host receives an ARP request for an IP address this host owns.
  MAC headers: src = own, dst = requestor's, or broadcast [1]
  ARP body: type = REPLY; SHA = own, SPA = own, THA = requestor's, TPA = requestor's

probe (RFC 5227, 2.1)
  Sent when the host configures a new IP address for an interface.
  MAC headers: src = own, dst = broadcast
  ARP body: type = REQUEST [2]; SHA = own, SPA = 0 [3], THA = 0, TPA = probed address

announcement (RFC 5227, 2.3)
  Sent when the host, after a probe, concludes that it will use the probed address.
  MAC headers: src = own, dst = broadcast
  ARP body: type = REQUEST [2]; SHA = own, SPA = new own, THA = 0, TPA = new own

Notes:

  1. RFC 5227, section 5.6 explains why "Broadcast replies are not universally recommended, but may be appropriate in some cases".
  2. RFC 5227, section 3 notes that the type here could be REPLY as well, then goes on to give the reasons why REQUEST is recommended.
  3. Zero is specified here so as not to pollute the ARP caches of other hosts in the case when the probed address is already in use on this LAN and the probing host will not take it.
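To make the field layout concrete, here is a sketch of the ARP body for Ethernet/IPv4 and of filling it for a probe; the struct and function names are ours, the field abbreviations match the overview above:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* ARP body for Ethernet/IPv4: 28 bytes on the wire. */
struct arp_ipv4 {
  uint16_t htype;  /* hardware type: 1 = Ethernet */
  uint16_t ptype;  /* protocol type: 0x0800 = IPv4 */
  uint8_t  hlen;   /* hardware address length: 6 */
  uint8_t  plen;   /* protocol address length: 4 */
  uint16_t oper;   /* operation: 1 = REQUEST, 2 = REPLY */
  uint8_t  sha[6]; /* Sender Hardware Address */
  uint8_t  spa[4]; /* Sender Protocol Address */
  uint8_t  tha[6]; /* Target Hardware Address */
  uint8_t  tpa[4]; /* Target Protocol Address */
} __attribute__((packed));

/* Fill an ARP probe (RFC 5227, 2.1): REQUEST with SPA = 0 and THA = 0. */
static void arp_make_probe(struct arp_ipv4 *arp, const uint8_t own_mac[6],
                           const uint8_t probed_ip[4])
{
  memset(arp, 0, sizeof(*arp)); /* zeroes SPA and THA, see note 3 */
  arp->htype = htons(1);
  arp->ptype = htons(0x0800);
  arp->hlen  = 6;
  arp->plen  = 4;
  arp->oper  = htons(1);        /* REQUEST, see note 2 */
  memcpy(arp->sha, own_mac, 6);
  memcpy(arp->tpa, probed_ip, 4);
}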

Saturday, October 3, 2015

Possible implications of netmask mismatch

Summary: an IPv4 host whose netmask does not match that of the subnet its interface is connected to likely builds an incorrect routing table, misses some broadcasts, may misidentify broadcasts as unicasts, and may unintentionally broadcast to its own subnet.

What does the netmask determine?

The netmask (in IPv4 terminology) or network prefix (in IPv6 terminology) is associated with an IP subnet, and correspondingly with a network interface. This post deals with IPv4 only, so the term "netmask" is used. Together with the interface's own IP address, the netmask determines whether another IP address belongs to the same IP subnet as the NIC. Good, so how is this knowledge used?

Processing of multicast packets is not affected by the netmask, so multicast is not mentioned further. For unicast and broadcast, the netmask is consulted in three different situations, listed in the following sections.

Case 1. Netmask can be used as input for constructing the routing table

The routing system normally creates routes to the subnet of each network interface automatically. That is, for each network interface I with address A(I) and netmask M, the host computes the subnet of this interface S(I) = A(I) & M. Outgoing packets to any address A(P) such that A(P) & M = S(I) are emitted from interface I.
While this behavior is typical, nothing obliges a host to create such a routing table entry. For example, if a host has two interfaces on the same subnet, then obviously more information is needed to decide which of the interfaces sends the packets destined for their common subnet. Another example is a firewall with a forwarding policy more restrictive than just "put every packet for subnet S(I) on interface I".
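As a sketch of the computation (addresses kept in network byte order; the bitwise equality test is unaffected by byte order; the function name is ours):

#include <stdio.h>
#include <arpa/inet.h>

/* The test behind the automatic interface route: does addr fall into
 * the subnet of an interface configured as if_addr with netmask mask? */
static int same_subnet(uint32_t addr, uint32_t if_addr, uint32_t mask)
{
  return (addr & mask) == (if_addr & mask); /* A(P) & M == S(I) */
}

int main(void)
{
  uint32_t if_addr, mask, dst;

  inet_pton(AF_INET, "1.1.1.1", &if_addr);
  inet_pton(AF_INET, "255.255.0.0", &mask); /* a /16 netmask */
  inet_pton(AF_INET, "1.1.200.200", &dst);
  printf("on-link: %s\n", same_subnet(dst, if_addr, mask) ? "yes" : "no");
  return 0;
}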

Case 2. Netmask is used to determine whether an arrived packet is a (directed) broadcast to a subnet of some local interface

After the routing is covered, we can limit our further investigation to only:
  1. Unicast packets, destined to "this host" (i.e. one of its interfaces).
  2. Directed broadcast packets to "this network". There can be more than one "this" network if the host has more than one network interface (the host can be or not be a router).
Indeed:
  • A directed broadcast to a network not in the "our networks" set is handled like any other packet, subject to normal routing.
  • Local broadcast packets are obviously not affected by the netmask setting.

For hosts which are not routers, RFC 922 defines the handling of broadcast packets in a simple way:
In the absence of broadcasting, a host determines if it is the
recipient of a datagram by matching the destination address against
all of its IP addresses.  With broadcasting, a host must compare the
destination address not only against the host's addresses, but also
against the possible broadcast addresses for that host.

Now imagine that an interface of some host has a netmask which does not match that of the subnet this interface is connected to.

An interface misconfigured with a shorter netmask fails to process broadcasts: such a host understands them as unicasts.
    • Example: in the 1.1.1.0/24 network, a packet to the broadcast address 1.1.1.255 will not be recognized as a broadcast by a misconfigured interface 1.1.1.1/16.
  • That is, unless the network has all bits of the netmask difference equal to 1.
    • Example: in the 1.1.255.0/24 network, a packet to the broadcast address 1.1.255.255 will, by coincidence, be correctly accepted as a broadcast by a misconfigured interface 1.1.1.1/16.
  • A broadcast packet which is incorrectly understood as a unicast by a misconfigured interface can also happen to bear the destination address of that interface itself.
    • Example: in the 1.1.0.0/16 network, a broadcast packet to 1.1.255.255 will be received as a unicast by a misconfigured interface 1.1.255.255/8.
  • Additionally, the host may attempt to send a unicast packet which would appear as a valid broadcast on the network.
    • Example: in the 1.1.0.0/16 network, a host misconfigured as 1.1.1.1/8 sends a unicast to the destination address 1.1.255.255, which would appear as a broadcast on this network. In fact, no host can have the address 1.1.255.255 on this network (it is a broadcast address), so nobody answers the ARP query and the host cannot send such a packet at all.
An interface misconfigured with a longer netmask fails to process broadcasts as well: it considers them as not belonging to its own subnet.
    • Example: in the 1.0.0.0/8 network, a packet to the broadcast address 1.255.255.255 will not be received by a misconfigured interface 1.1.1.1/16.
  • Again, unless the address of the misconfigured interface happens to have all bits of the netmask difference equal to 1.
    • Example: in that same network, that same broadcast packet will be accepted just fine by a misconfigured interface 1.255.1.1/16.
For hosts which are routers, RFC 922 adds a clause concerning broadcast packets destined for an interface other than the one on which the packet is received:
...if the datagram is addressed to a hardware network
to which the gateway is connected, it should be sent as a
(data link layer) broadcast on that network.  Again, the
gateway should consider itself a destination of the datagram.

In this case, the netmask of the router's interface on which the packet was received is not relevant: the packet should be processed anyway. Instead, the configuration of the packet's destination interface is the basis for the decision. Correspondingly, a mismatch between the netmask of the destination interface and the sender's expectation of the netmask leads to the same consequences as listed above for non-forwarding hosts.

Have we covered all cases? Three independent factors affect the outcome:
  1. Is the receiver's netmask shorter or longer than that of the subnet it is connected to?
  2. Are the bits from the difference in netmask lengths all equal to one?
  3. Is the packet a unicast or a (directed) broadcast?
All 8 possibilities have been considered above.

Case 3. Netmask is used for setting destination address of outgoing broadcast packets

When a host wishes to send a broadcast packet from a certain interface, it sets the destination address to that of the interface, with all bits which are zero in the netmask set to 1 (a bitwise sketch follows the examples below). Correspondingly:

A host with a shorter netmask sets too many bits to 1. On the local subnet, other hosts will see these packets as belonging to another subnet and consequently will not process them.
    • Example: in the /24 network 1.1.1.0/24, a host misconfigured as 1.1.1.1/16 sends what it thinks is a "broadcast" with destination 1.1.255.255. (It will be sent as a link-layer broadcast.) No other host on this network accepts it.
  • Unless the network has all bits of the netmask difference equal to one.
    • Example: in the /24 network 1.1.255.0/24, a misconfigured host 1.1.255.1/16 sends a "broadcast" packet to 1.1.255.255, which happens to be a valid broadcast on this network.
A host with a longer netmask does not set enough bits to 1. The packets sent as broadcasts will be taken for unicasts by other hosts on this subnet.
    • Example: in the /8 network 1.0.0.0/8, a host misconfigured as 1.1.1.1/16 sends what it thinks is a broadcast packet to 1.1.255.255. It appears as a valid unicast on this subnet. If there is a host with the address 1.1.255.255, that host will accept the packet. (Besides the probably unexpected IP content, the host may also notice that the packet's layer 2 destination address was a layer 2 broadcast.)
Naturally, these cases are a "reversed" repetition of the cases for the receiving hosts.
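The bit manipulation referred to above, sketched in C; addresses are kept in network byte order, and the example reproduces the misconfigured /16 host on a /24 network:

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
  uint32_t if_addr, mask, bcast;
  char buf[INET_ADDRSTRLEN];

  /* Host misconfigured as 1.1.1.1/16 on the 1.1.1.0/24 network: */
  inet_pton(AF_INET, "1.1.1.1", &if_addr);
  inet_pton(AF_INET, "255.255.0.0", &mask);

  bcast = if_addr | ~mask; /* set to 1 all bits which are zero in the netmask */
  inet_ntop(AF_INET, &bcast, buf, sizeof(buf));
  printf("computed broadcast: %s\n", buf); /* 1.1.255.255, not 1.1.1.255 */
  return 0;
}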

 

Conclusion

1. The netmask is normally (but not necessarily) used as input for routing table construction. If it is used, then a wrong interface netmask makes the following routing failures possible:
  • Netmask too long: the host has no route for some packets which actually belong to the subnet of this interface. An attempt to send a packet to a host outside the misconfigured (too long) netmask, but inside the correct netmask of the net, results in the ICMP error "Destination net unreachable". If there is a default route, the host will not generate the error, but will send the packets out the default interface instead of the interface of this subnet.
  • Netmask too short: the host may attempt to send out of the interface packets which no host on the connected subnet will receive. The attempt probably fails because nobody answers the ARP request, resulting in the ICMP error "Destination host unreachable".
2. IPv4 only: directed broadcast packets are sent and received using the netmask information. Directed broadcast is a marginal case; such packets are rarely used and are dropped by most routers, as per RFC 2644. But if directed broadcasts are used, then a mismatched netmask results in any of:
  • failure to receive broadcast packets
  • failure of routers to forward broadcast packets
  • forwarding of broadcast packets destined for the router's own network
  • accepting unicast packets, destined for some host, as broadcasts
  • accepting broadcast packets as unicasts.

Sunday, December 21, 2014

Support for elliptic curves by jarsigner

Summary: Support for cryptography features by jarsigner depends on available Java crypto providers.

Suppose you are defining a PKI profile. You naturally want to use stronger algorithms with better performance, which (as of 2014) most likely means elliptic curves. Besides bit strength and performance, you want to be absolutely sure that the curve is supported by your software. If the latter includes jarsigner, you'll be surprised to find that the Oracle documentation seems not to mention at all which elliptic curves jarsigner supports.

Signing a JAR means adding digests of the JAR data to the manifest file, adding a digest of the latter to the manifest signature file, and then creating the JAR signature block file. The last step involves two operations:
  1. calculating a digest over the manifest signature file;
  2. signing - meaning, encrypting with the private key - that digest. 
Jarsigner has a '-sigalg' option, which is supposed to specify the two algorithms used in these two steps. (There is also a '-digestalg' option, but it is not used for the signature block file; it defines the algorithm used in the two initial steps.) This option turns out to be irrelevant for the signing step: the curve is in fact defined by the provided private key. So jarsigner will either do the job or choke on a key from an unsupported curve.

A curve may "not work" because it is unknown to jarsigner itself, or to an underlying crypto provider. (The latter case was a reason to a bug 1006776; only three curves actually worked, while many returned a totally unclear error "certificate exception: java.io.IOException: subject key, Could not create EC public key".)

In a particular setup, support can be tested. For the curves supported by OpenSSL, the test can be done by creating a keypair on each curve and attempting to sign with it. Create the list of curves with 'openssl ecparam -list_curves', manually remove the extra words openssl puts there, and feed the list to the script's stdin:

#!/bin/bash
# Test, which OpenSSL-supported elliptic curves from the list are supported also by jarsigner.

result="supported-curves.txt"
source_data="data.txt"
jar="data.jar"
key="key.pem"
cert="cert.pem"
pfx="keystore.pfx"
key_alias="foo"         # Identificator of the key in the keystore
storepass="123456"      # jarsigner requires some

touch $source_data

while read curve; do
        # Generate an ECDSA private key for the selected curve:
        openssl ecparam -name $curve -genkey -out $key

        # Generate the certificate for the key; give some dummy subject:
        openssl req -new -x509 -nodes -key $key -out $cert -subj /CN=foo

        # Wrap key+cert in a PKCS12, so that jarsigner can use it:
        openssl pkcs12 -export -in $cert -inkey $key -passout pass:$storepass -out $pfx -name $key_alias

        # Create a fresh jar and attempt to sign it
        jar cf $jar $source_data
        jarsigner -keystore $pfx -storetype PKCS12 -storepass $storepass $jar $key_alias
        [ $? -eq 0 ] && echo $curve >> $result
done

rm $source_data $key $cert $pfx $jar

And enjoy the list in supported-curves.txt.


Conclusion: support of elliptic curves by jarsigner depends on jarsigner itself and on the JRE configuration. There is no command-line option to list all supported curves. On a particular system, support for the curves known to OpenSSL can easily be tested.

Monday, December 8, 2014

JAR signature block file format


Summary: this post explains the content of the "JAR signature block file" - that is, the file "META-INF/*.RSA", "META-INF/*.DSA" or "META-INF/*.EC" inside the JAR.

Oracle does not document it

A signed JAR file contains the following additions over a non-signed JAR:
  1. checksums over the JAR content, stored in text files "META-INF/MANIFEST.MF" and "META-INF/*.SF"
  2. the actual cryptographic signature (created with the private key of a signer) over the checksums in a binary signature block file.
Surprisingly, the format of the latter does not seem to be documented by Oracle. The JAR file specification provides only the useful knowledge that "These are binary files not intended to be interpreted by humans".

Here, the content of this "signature block file" is explained. We show how it can be created and verified with a non-Java tool: OpenSSL.

Create a sample signature block file

For our investigation, generate such file by signing some data with jarsigner:
  • Make an RSA private key (stored unencrypted) and a corresponding self-signed certificate, and pack them in a format jarsigner understands:
openssl genrsa -out key.pem
openssl req -x509 -new -key key.pem -out cert.pem -subj '/CN=foo'
openssl pkcs12 -export -in cert.pem -inkey key.pem -out keystore.pfx -passout pass:123456 -name SEC_PAD
  • Create the data, jar it, sign the JAR, and unpack the resulting "META-INF" directory:
echo 'Hello, world!' > data
jar cf data.jar data
jarsigner -keystore keystore.pfx -storetype PKCS12 -storepass 123456 data.jar SEC_PAD
unzip data.jar META-INF/*
 
The "signature block file" is META-INF/SEC_PAD.RSA.

What does this block contain

The file appears to be a DER-encoded ASN.1 PKCS#7 data structure. A DER-encoded ASN.1 file can be examined with the asn1parse subcommand of OpenSSL:

openssl asn1parse -in META-INF/SEC_PAD.RSA -inform der -i > jarsigner.txt


For more verbosity, you may use an ASN.1 decoder such as the one at lapo.it.

You'll see that the two top-level components are:
  • The certificate.
  • 256-byte RSA signature.
You can extract the signature bytes from the binary data and verify (i.e. decrypt) them with openssl rsautl. That involves some "low-level" operations and brings you one step closer to understanding the file's content. A simpler "high-level" verification command is:

openssl cms -verify -noverify -content META-INF/SEC_PAD.SF -in META-INF/SEC_PAD.RSA -inform der

This command says: "Check that the CMS structure in META-INF/SEC_PAD.RSA is really a signature of META-INF/SEC_PAD.SF; do not attempt to validate the certificate".

Creating the signature block file with OpenSSL

For this example, we created the signature block file with jarsigner. Knowing the file's content, we can look for other ways to produce or verify such a structure. It may not be that hard to construct it "manually", although authorities and illustrations all recommend against implementing your own crypto.

There are at least two OpenSSL commands which can produce similar structures: cms and smime. The following options make the signature closer to the one produced by jarsigner:

openssl cms -sign -binary -noattr -in META-INF/SEC_PAD.SF -outform der -out openssl-cms.der -signer cert.pem -inkey key.pem -md sha256
openssl smime -sign -noattr -in META-INF/SEC_PAD.SF -outform der -out openssl-smime.der -signer cert.pem -inkey key.pem -md sha256

To satisfy your curiosity, peek into these files and compare them to jarsigner.txt with your favorite diff tool:
 
openssl asn1parse -inform der -in openssl-cms.der -i >  openssl-cms.txt
openssl asn1parse -inform der -in openssl-smime.der -i >  openssl-smime.txt 

 

Testing the "DIY signature"

The underlying ASN.1 structures are, in both the cms and smime cases, very close but not identical to those made by jarsigner. As the format of the signature block file is not documented, we can only run tests to have some ground for saying that "it works". Just replace the original signature block file with our signature created by OpenSSL:

cp openssl-cms.der META-INF/SEC_PAD.RSA
zip -u data.jar META-INF/SEC_PAD.RSA
jarsigner -verify -keystore keystore.pfx -storetype PKCS12 -storepass 123456 data.jar SEC_PAD

Lucky strike: a signature produced by 'openssl cms' is recognized by jarsigner (that is, at least by some particular version of it).

Note that the data being signed, SEC_PAD.SF, was itself created by jarsigner. If you are not using the latter, you'll need to produce that file in some other way, for example with python-javatools.

What's the use for this knowledge?

Besides a better understanding of your data, one can think of at least two reasons to sign JARs with non-native tools. Both are somewhat atypical, but not completely irrelevant:

1. The signature must be produced on a system where native Java tools are not available.
Such a system must have access to the private key (in one form or another), and security administrators may not like the idea of having software as bloated as a JRE in a tightly controlled environment.
2. The signature must be produced or verified on a system where the available tools do not support the required signature algorithm.
There can be reasons that restrict the tools, the algorithms, or both; examples include compliance with regulations or compatibility with legacy systems. On a certain system, testing which elliptic curves jarsigner supports reveals just three curves (which is not much).

 

Conclusion

  • The JAR signature block file is a DER-encoded PKCS#7 structure, representing a detached signature over the .SF file.
  • Its exact content can be viewed with "openssl asn1parse" or with any ASN.1 decoder.
  • OpenSSL can verify signatures in signature block files and create almost identical structures.
  • Java tools have been shown to accept these "almost identical" structures.