Hello,
this is a heads-up for an update to the ca-certificates package that I've just submitted for updates-testing for Fedora 19 and 20.
The upstream Mozilla CA list maintainers have decided to start removing CA certificates that use a weak 1024-bit key. Although those certificates are still valid, Mozilla has worked with the CAs, and they did agree that it's OK to remove them.
However, there are still-valid end-entity and intermediate-CA certificates that were issued by the removed CAs, and they might still be in use - despite the CAs having attempted to reach out to all their customers to get them to reconfigure their systems.
This means that, after installing the updated ca-certificates package version 2014.2.1, some SSL/TLS connections might suddenly fail because the related CA certificate is no longer trusted.
If you experience such situations, the right approach is to contact the owner of the certificate (or the server), and ask them to get a replacement certificate, or to install a replacement certificate on their SSL/TLS server.
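For example, a rough first check whether a particular server is affected (the hostname here is just a placeholder) is to look at the verification result that openssl reports:
$ echo | openssl s_client -connect www.example.com:443 2>&1 \
    | grep "Verify return code"
Verify return code: 0 (ok)
A non-zero code, such as 20 (unable to get local issuer certificate), suggests that the server's chain no longer reaches a trusted root on the updated system.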
Additional details can be found in the update description, which I'll paste at the end of this message.
(I have disabled karma-automation for this update, in case there's a need for a longer testing period. Note that this updated set of CA certificates is currently planned to be part of Firefox 32, which will get released around SEP 02.)
Regards Kai
Update description:
===================
This is an update to the latest released set of CA certificates according to the Mozilla CA Policy. It's the same set that has been released in NSS versions 3.16.4 and 3.17.
It's noteworthy that several CA certificates with a weak key size of 1024 bits have been removed prior to their expiration. (It is expected that additional CA certificates with weak 1024-bit keys will be removed in future releases.)
The removed CA certificates have been used to issue end-entity and intermediate-CA certificates which are still valid. Those certificates are likely to be rejected when using this updated ca-certificates package. The owners of affected certificates should contact their CA and ask for replacement certificates. In some scenarios it might be sufficient to install an alternative intermediate CA certificate (e.g. on a TLS server), allowing an alternative trust chain to another root CA certificate to be found.
More information about the affected CA certificates and other recent modifications can be found in the NSS release notes for version 3.16.3 at https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.16.3_rel... with amendments to the changes as explained in the NSS release notes for version 3.16.4 https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.16.4_rel...
----- Original Message -----
If you experience such situations, the right approach is to contact the owner of the certificate (or the server), and ask them to get a replacement certificate, or to install a replacement certificate on their SSL/TLS server.
That’s the right thing to do of course, but it leaves the users with an unusable system in the meantime. Could the update description at least point in general terms to how to work around this if the certificate owner is not (sufficiently quickly) responsive?
Mirek
On Tue, 2014-08-19 at 10:07 -0400, Miloslav Trmač wrote:
----- Original Message -----
If you experience such situations, the right approach is to contact the owner of the certificate (or the server), and ask them to get a replacement certificate, or to install a replacement certificate on their SSL/TLS server.
That’s the right thing to do of course, but it leaves the users with an unusable system in the meantime. Could the update description at least point in general terms to how to work around this if the certificate owner is not (sufficiently quickly) responsive?
Mirek
Most software has options to override certificate errors.
I don't want to encourage how to do that, and covering all potential applications would result in a big list.
I'd assume that people who are desperate will find the options on how to override certificate errors and connect anyway.
Kai
On Tue, 2014-08-19 at 10:07 -0400, Miloslav Trmač wrote:
That’s the right thing to do of course, but it leaves the users with an unusable system in the meantime. Could the update description at least point in general terms to how to work around this if the certificate owner is not (sufficiently quickly) responsive?
I'd expect that users would be blocked from using just one application, or from connecting to just a few servers - but should be able to connect to the majority of the Internet just fine.
Can you think of scenarios, where a system is mostly unusable?
A general workaround is to downgrade to the previous package version. Do you think we need to state that explicitly in the update description?
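If we do, a minimal sketch - assuming yum on Fedora 19/20 and the previously shipped build - would be:
$ sudo yum downgrade ca-certificates
# or pin the previous build explicitly, e.g.:
$ sudo yum downgrade ca-certificates-2013.1.97-1.fc20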
Kai
----- Original Message -----
On Tue, 2014-08-19 at 10:07 -0400, Miloslav Trmač wrote:
That’s the right thing to do of course, but it leaves the users with an unusable system in the meantime. Could the update description at least point in general terms to how to work around this if the certificate owner is not (sufficiently quickly) responsive?
I'd expect that users would be blocked from using just one application,
Isn’t that enough? Imagine Fedora contributors being blocked from using “just” koji :)
A general workaround is to downgrade to the previous package version. Do you think we need to state that explicitly in the update description?
Well, it’s not a _long-term_ workaround, but it does resolve the issue as I have worded it, where everyone involved wants to upgrade the certificate and only the timing isn’t right. Fair enough.
Mirek
Hi Kai,
This update has the potential to break RubyGems with this error:
$ gem fetch power_assert
ERROR:  Could not find a valid gem 'power_assert' (>= 0), here is why:
          Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://s3.amazonaws.com/production.s3.rubygems.org/latest_specs.4.8.gz)
Upstream RubyGems ships the certificates, but on your request, I removed the bundled certificates [1]. Now, 3 months later, RubyGems is broken in F21+ due to this update. Luckily, I never backported this commit to F20, so this particular update is not harmful for the stable Fedora release, but what am I supposed to do with F21+?
I don't feel like contacting Amazon. You claim that nothing should break and Mozilla contacted everybody, so why not Amazon? Are they so negligible?
Should I follow your advice or follow upstream? Sorry, but this puzzles me ...
Vít
[1] http://pkgs.fedoraproject.org/cgit/ruby.git/commit/?id=efdf386e3192775d84b69...
On 18.8.2014 23:48, Kai Engert wrote:
Hello,
this is a heads-up for an update to the ca-certificates package that I've just submitted for updates-testing for Fedora 19 and 20.
The upstream Mozilla CA list maintainers have decided to start removing CA certificates that use a weak 1024-bit key. Although those certificates are still valid, Mozilla has worked with the CAs, and they did agree that it's OK to remove them.
However, there are still-valid end-entity and intermediate-CA certificates that were issued by the removed CAs, and they might still be in use - despite the CAs having attempted to reach out to all their customers to get them to reconfigure their systems.
This means that, after installing the updated ca-certificates package version 2014.2.1, some SSL/TLS connections might suddenly fail because the related CA certificate is no longer trusted.
If you experience such situations, the right approach is to contact the owner of the certificate (or the server), and ask them to get a replacement certificate, or to install a replacement certificate on their SSL/TLS server.
Additional details can be found in the update description, which I'll paste at the end of this message.
(I have disabled karma-automation for this update, in case there's a need for a longer testing period. Note that this updated set of CA certificates is currently planned to be part of Firefox 32, which will get released around SEP 02.)
Regards Kai
Update description:
This is an update to the latest released set of CA certificates according to the Mozilla CA Policy. It's the same set that has been released in NSS versions 3.16.4 and 3.17.
It's noteworthy that several CA certificates with a weak key size of 1024 bits have been removed prior to their expiration. (It is expected that additional CA certificates with weak 1024-bit keys will be removed in future releases.)
The removed CA certificates have been used to issue end-entity and intermediate-CA certificates which are still valid. Those certificates are likely to be rejected when using this updated ca-certificates package. The owners of affected certificates should contact their CA and ask for replacement certificates. In some scenarios it might be sufficient to install an alternative intermediate CA certificate (e.g. on a TLS server), allowing an alternative trust chain to another root CA certificate to be found.
More information about the affected CA certificates and other recent modifications can be found in the NSS release notes for version 3.16.3 at https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.16.3_rel... with amendments to the changes as explained in the NSS release notes for version 3.16.4 https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.16.4_rel...
On Tue, Aug 26, 2014 at 12:36:47PM +0200, Vít Ondruch wrote:
$ gem fetch power_assert
ERROR:  Could not find a valid gem 'power_assert' (>= 0), here is why:
          Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://s3.amazonaws.com/production.s3.rubygems.org/latest_specs.4.8.gz)
Upstream RubyGems ships the certificates, but on your request, I removed the bundled certificates [1]. Now, 3 months later, RubyGems is broken in F21+ due to this update. Luckily, I never backported this commit to F20, so this particular update is not harmful for the stable Fedora release, but what am I supposed to do with F21+?
I don't feel like contacting Amazon. You claim that nothing should break and Mozilla contacted everybody, so why not Amazon? Are they so negligible?
Should I follow your advice or follow upstream? Sorry, but this puzzles me ...
Hmmm, according to SSLLabs[0] rubygems.org is using a 2048-bit certificate and chains all the way up to the CA with a 2048-bit certificate. The s3.amazonaws.com URL also uses a 2048-bit cert and chains up to the CA with 2048-bit certs as well. If the "fix" to the CA trust file only removed CAs with weak (<2048-bit) certificates, it would appear that the breakage you see wouldn't be caused by this.
Out of curiosity, did certificate verification get turned on in the F21 version?
--
Eric "Sparks" Christensen
Fedora Project
sparks@fedoraproject.org - sparks@redhat.com
097C 82C3 52DF C64A 50C2 E3A3 8076 ABDE 024B B3D1
On 26.8.2014 17:00, Eric H. Christensen wrote:
On Tue, Aug 26, 2014 at 12:36:47PM +0200, Vít Ondruch wrote:
$ gem fetch power_assert
ERROR:  Could not find a valid gem 'power_assert' (>= 0), here is why:
          Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://s3.amazonaws.com/production.s3.rubygems.org/latest_specs.4.8.gz)
Upstream RubyGems ships the certificates, but on your request, I removed the bundled certificates [1]. Now, 3 months later, RubyGems is broken in F21+ due to this update. Luckily, I never backported this commit to F20, so this particular update is not harmful for the stable Fedora release, but what am I supposed to do with F21+?
I don't feel like contacting Amazon. You claim that nothing should break and Mozilla contacted everybody, so why not Amazon? Are they so negligible?
Should I follow your advice or follow upstream? Sorry, but this puzzles me ...
Hmmm, according to SSLLabs[0] rubygems.org is using a 2048-bit certificate and chains all the way up to the CA with a 2048-bit certificate. The s3.amazonaws.com URL also uses a 2048-bit cert and chains up to the CA with 2048-bit certs as well. If the "fix" to the CA trust file only removed CAs with weak (<2048-bit) certificates, it would appear that the breakage you see wouldn't be caused by this.
These are the certificates which RubyGems upstream bundles:
https://github.com/rubygems/rubygems/tree/master/lib/rubygems/ssl_certs
Actually, I discussed this a bit with Tomáš Mráz and he said that the cert chain is: 2048-bit server cert -> 2048-bit intermediate -> 1024-bit root CA, and OpenSSL can't handle this situation by default.
Out of curiosity, did certificate verification get turned on in the F21 version?
No. It has been turned on for some time already. The difference is that in F20, these certificates are still bundled in the rubygems package and are explicitly loaded by RubyGems. If you remove them manually from /usr/share/rubygems/rubygems/ssl_certs/ (and this is basically what we do in F21+), you can reproduce the error on F20 as well. I.e. without those certificates, RubyGems works with ca-certificates-2013.1.97-1.fc20 but doesn't work with ca-certificates-2014.2.1-1.0.fc20.
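A rough way to reproduce it (paths as on my machine; adjust as needed):
$ sudo mv /usr/share/rubygems/rubygems/ssl_certs \
          /usr/share/rubygems/rubygems/ssl_certs.disabled
$ gem fetch power_assert    # now fails with the verification error above
$ sudo mv /usr/share/rubygems/rubygems/ssl_certs.disabled \
          /usr/share/rubygems/rubygems/ssl_certs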
Vít
On Tue, 2014-08-26 at 12:36 +0200, Vít Ondruch wrote:
$ gem fetch power_assert
ERROR:  Could not find a valid gem 'power_assert' (>= 0), here is why:
          Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://s3.amazonaws.com/production.s3.rubygems.org/latest_specs.4.8.gz)
The gem tool appears to use openssl.
$ openssl s_client -showcerts -connect rubygems.org:443 2>&1 \
    | grep "Verify return code"
Verify return code: 0 (ok)

$ openssl s_client -showcerts -connect s3.amazonaws.com:443 2>&1 \
    | grep "Verify return code"
Verify return code: 20 (unable to get local issuer certificate)
The failure is with the s3.amazonaws.com host. Looking at the certificates the server sends:
$ openssl s_client -showcerts -connect s3.amazonaws.com:443 2>&1 \
    | egrep " s:| i:"
 0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=s3.amazonaws.com
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
This means the server sends three certificates during the handshake: one server cert and two intermediates.
The intermediate at level 2 was issued by this root CA:
  C=US
  O=VeriSign, Inc.
  OU=Class 3 Public Primary Certification Authority
This root CA is very old; it was issued in 1996.
With the recent upstream update 2.1, this certificate was disabled for SSL/TLS use, see: https://bugzilla.mozilla.org/show_bug.cgi?id=986005
(Symantec/Verisign was aware, was cc'ed on the bug, and didn't object.)
When connecting to this server using an NSS client, such as Firefox, it works. I believe this is because an alternative trust chain can be found.
The intermediate certificate sent by the server at level 1 was issued by:
  C=US
  O=VeriSign, Inc.
  OU=VeriSign Trust Network
  OU=(c) 2006 VeriSign, Inc. - For authorized use only
  CN=VeriSign Class 3 Public Primary Certification Authority - G5
A root CA with this subject is included in our trust list. So NSS can find this root CA cert, succeeds in the verification, and ignores the unnecessary additional intermediate CA cert sent by the server.
I guess that openssl strictly wants to make use of all intermediates sent by the server, and doesn't search for alternative chains. And the only certificate satisfying this chain has been marked as untrusted for SSL/TLS in our update.
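One way to see this (a sketch; the .pem file names are placeholders for certificates copied out of the s_client output above): if openssl is given only the server cert plus the G3 intermediate, and the legacy intermediate is left out, verification against our bundle should succeed, because the chain then ends at the trusted G5 root:
$ openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt \
    -untrusted g3-intermediate.pem s3-server.pem
s3-server.pem: OK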
I believe that we must contact Amazon and Symantec about this issue. Amazon should remove the second intermediate, ending the path with the G5 intermediate. This will allow openssl to find the trusted root CA.
Also, Symantec should reach out to all of their customers and tell them to update their configuration.
I will contact them.
If we want things to just work, without requiring server administration, then openssl should be enhanced to try additional chains (or the Ruby software could be changed to use NSS).
Kai
On Sat, 2014-09-06 at 01:58 +0200, Kai Engert wrote:
The failure is with the s3.amazonaws.com host. Looking at the certificates the server sends:
$ openssl s_client -showcerts -connect s3.amazonaws.com:443 2>&1 \
    | egrep " s:| i:"
 0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=s3.amazonaws.com
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
This means the server sends three certificates during the handshake: one server cert and two intermediates.
The intermediate at level 2 was issued by the root CA with subject C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority. This root CA is very old; it was issued in 1996: [...] When connecting to this server using an NSS client, such as Firefox, it works. I believe this is because an alternative trust chain can be found.
Unfortunately only NSS works. Both openssl and gnutls fail to connect to popular sites because of that change. It should not be assumed that the only users of ca-certificates are programs using NSS.
A root CA with this subject is included in our trust list. So NSS can find this root CA cert, succeeds in the verification, and ignores the unnecessary additional intermediate CA cert sent by the server. I guess that openssl strictly wants to make use of all intermediates sent by the server, and doesn't search for alternative chains. And the only certificate satisfying this chain has been marked as untrusted for SSL/TLS in our update.
I guess this is verification based on RFC 5280 path validation. NSS, in contrast, ignores the provided trust chain and tries to construct a new one internally. That's interesting, and it happens to work around the issue here, but it is not and must not be required for all software to reconstruct trust chains. TLS is very specific on that issue: the chain is provided by the server.
If we want things to just work, without requiring server administration, then openssl should be enhanced to try additional chains (or the Ruby software could be changed to use NSS).
I do not agree. Such changes are dangerous to perform on a stable release, and may introduce more issues than they solve. Ca-certificates should not assume that NSS is its only user. That is, either (1) it should include the trusted certificates that are still in wild use, or (2) it should include the intermediates of the trusted certificates that are in use.
regards, Nikos
On Mon, 2014-09-08 at 10:06 +0200, Nikos Mavrogiannopoulos wrote:
Unfortunately only NSS works. Both openssl and gnutls fail to connect to popular sites because of that change. It should not be assumed that the only users of ca-certificates are programs using NSS.
[1] is an interesting read. I get the impression that certificates are being removed as long as there is a compatible replacement that NSS can validate, based on NSS's custom strategies for certificate validation. Is this claim accurate?
This is a very big problem for the GNOME stack, which uses gnutls. We're getting complaints about sites that Epiphany can't display because the CSS fails certificate validation, or sites that don't display at all, which all work fine in Firefox.
I guess this is verification based on RFC 5280 path validation. NSS, in contrast, ignores the provided trust chain and tries to construct a new one internally. That's interesting, and it happens to work around the issue here, but it is not and must not be required for all software to reconstruct trust chains. TLS is very specific on that issue: the chain is provided by the server.
From my perspective as an application developer who wants the Internet to "just work," and where proper functionality is defined as "whatever Firefox and Chrome do"... any deviation from NSS's behavior is problematic. :/ I know this is unfortunate but that's the reality of the Internet. We have a partially-finished port of glib-networking from gnutls to NSS, I guess for this reason.
Intermediate cert caching is another big pain point. My university ran an important site for years without a chain of trust, and kept closing my issue reports until I realized that they were using Firefox to validate their chain of trust, and the cert that had signed the only one they were sending was cached for them. This behavior is harmful not just to other browsers, but also to Firefox users who happen to not have that certificate cached yet.
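One stateless way to check what a server actually sends, with no browser cache involved (the hostname is hypothetical), is the s_client listing used earlier in this thread:
$ openssl s_client -showcerts -connect www.example.edu:443 2>&1 \
    | egrep " s:| i:"
If only the level-0 (end-entity) certificate shows up, the server is relying on clients having cached the missing intermediate.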
I do not agree. Such changes are dangerous to perform on a stable release, and may introduce more issues than they solve. Ca-certificates should not assume that NSS is its only user. That is, either (1) it should include the trusted certificates that are still in wild use, or (2) it should include the intermediates of the trusted certificates that are in use.
I think (2) is what they're trying to do in [1], but it looks like this relies on NSS-specific behavior. (And I'm aware that [1] is just one case out of many.)
On Mon, 2014-09-08 at 09:00 -0500, Michael Catanzaro wrote:
I guess this is verification based on RFC 5280 path validation. NSS, in contrast, ignores the provided trust chain and tries to construct a new one internally. That's interesting, and it happens to work around the issue here, but it is not and must not be required for all software to reconstruct trust chains. TLS is very specific on that issue: the chain is provided by the server.
From my perspective as an application developer who wants the Internet to "just work," and where proper functionality is defined as "whatever Firefox and Chrome do"... any deviation from NSS's behavior is problematic. :/ I know this is unfortunate but that's the reality of the Internet.
I understand, but this is not the case here. The Internet isn't broken because gnutls and openssl have some limitation, but because the current NSS-derived ca-certificates assume the NSS validation strategy. This should not be allowed in the Fedora package.
regards, Nikos
On Mon, 2014-09-08 at 17:07 +0200, Nikos Mavrogiannopoulos wrote:
I understand, but this is not the case here. The Internet isn't broken because gnutls and openssl have some limitation, but because the current NSS-derived ca-certificates assume the NSS validation strategy. This should not be allowed in the Fedora package.
I would say, "The Internet is broken because NSS is more permissive than gnutls and openssl, and also because the current NSS-derived ca-certificates assume the NSS validation strategy." Even once this fallout gets straightened out, we will still have cases of sites that work in Firefox and Chrome but not in Epiphany, which is unfortunate.
Thanks a bunch for your help with debugging the issue!
Michael
On Mon, 2014-09-08 at 09:00 -0500, Michael Catanzaro wrote:
On Mon, 2014-09-08 at 10:06 +0200, Nikos Mavrogiannopoulos wrote:
Unfortunately only NSS works. Both openssl and gnutls fail to connect to popular sites because of that change. It should not be assumed that the only users of ca-certificates are programs using NSS.
[1] is an interesting read. I get the impression that certificates are being removed as long as there is a compatible replacement that NSS can validate, based on NSS's custom strategies for certificate validation. Is this claim accurate?
"Custom strategies" is an interesting concept. AFAICS, the TLS standard:
http://tools.ietf.org/html/rfc5246
does not exactly define 'standard' certificate verification strategies, so in a sense, they're *all* "custom". In other words, we're in good old Standard Ambiguity Land here. What that doc has to say about chains, AFAICS, is:
7.4.2. Server Certificate ... certificate_list This is a sequence (chain) of certificates. The sender's certificate MUST come first in the list. Each following certificate MUST directly certify the one preceding it. Because certificate validation requires that root keys be distributed independently, the self-signed certificate that specifies the root certificate authority MAY be omitted from the chain, under the assumption that the remote end must already possess it in order to validate it in any case.
Note: this doesn't say anything about how the client should *validate* the server's certificate list. It defines properties of the list, but not its interpretation by the client.
7.4.5. Server Hello Done ... Upon receipt of the ServerHelloDone message, the client SHOULD verify that the server provided a valid certificate, if required, and check that the server hello parameters are acceptable.
Again, this doesn't specify precisely how the client should interpret the requirement for "a valid certificate".
F.1.1. Authentication and Key Exchange ... If the server is authenticated, its certificate message must provide a valid certificate chain leading to an acceptable certificate authority. Similarly, authenticated clients must supply an acceptable certificate to the server. Each party is responsible for verifying that the other's certificate is valid and has not expired or been revoked.
Note: this doesn't define exactly *how* the client should verify that the server provides "a valid certificate chain leading to an acceptable certificate authority". It doesn't seem to me that the NSS implementation falls outside of this requirement, for instance.
On Mon, 2014-09-08 at 23:26 -0700, Adam Williamson wrote:
On Mon, 2014-09-08 at 09:00 -0500, Michael Catanzaro wrote:
On Mon, 2014-09-08 at 10:06 +0200, Nikos Mavrogiannopoulos wrote:
Unfortunately only NSS works. Both openssl and gnutls fail to connect to popular sites because of that change. It should not be assumed that the only users of ca-certificates are programs using NSS.
[1] is an interesting read. I get the impression that certificates are being removed as long as there is a compatible replacement that NSS can validate, based on NSS's custom strategies for certificate validation. Is this claim accurate?
"Custom strategies" is an interesting concept. AFAICS, the TLS standard:
http://tools.ietf.org/html/rfc5246
does not exactly define 'standard' certificate verification strategies, so in a sense, they're *all* "custom". In other words, we're in good old Standard Ambiguity Land here. What that doc has to say about chains, AFAICS, is:
You are referring to the wrong document. Certificate validation is outside the scope of TLS, and as you already noticed, it only mentions the format of the chain and nothing more. A certificate path validation algorithm is defined in RFC 5280 by the PKIX working group, which is (or was) the relevant group for X.509 certificates in the IETF.
That is the only path validation algorithm described in a standard, and although no one is required to support it, it pretty much defines the baseline. Our ca-certificates (in testing) would fail to connect to amazon.com if RFC 5280 validation is used, as it removed a root which is still active and used by popular domains.
So it may be that everyone uses a slightly different verification algorithm, but we should expect at least the baseline to work. We should not require software to be NSS.
regards, Nikos
On Tue, 2014-09-09 at 10:34 +0200, Nikos Mavrogiannopoulos wrote:
On Mon, 2014-09-08 at 23:26 -0700, Adam Williamson wrote:
On Mon, 2014-09-08 at 09:00 -0500, Michael Catanzaro wrote:
On Mon, 2014-09-08 at 10:06 +0200, Nikos Mavrogiannopoulos wrote:
Unfortunately only NSS works. Both openssl and gnutls fail to connect to popular sites because of that change. It should not be assumed that the only users of ca-certificates are programs using NSS.
[1] is an interesting read. I get the impression that certificates are being removed as long as there is a compatible replacement that NSS can validate, based on NSS's custom strategies for certificate validation. Is this claim accurate?
"Custom strategies" is an interesting concept. AFAICS, the TLS standard:
http://tools.ietf.org/html/rfc5246
does not exactly define 'standard' certificate verification strategies, so in a sense, they're *all* "custom". In other words, we're in good old Standard Ambiguity Land here. What that doc has to say about chains, AFAICS, is:
You are referring to the wrong document. Certificate validation is outside the scope of TLS, and as you already noticed, it only mentions the format of the chain and nothing more. A certificate path validation algorithm is defined in RFC 5280 by the PKIX working group, which is (or was) the relevant group for X.509 certificates in the IETF.
Ah, indeed, missed that one. Thanks.
So it may be that everyone uses a slightly different verification algorithm, but we should expect at least the baseline to work. We should not require software to be NSS.
I think you're making a good point, but possibly too strongly... the ca-certificates folks are just trying to keep the database strong; it's not as if they set out to 'require software to be NSS'. As I mentioned, the folks maintaining the ca-certificates package are the same folks behind the Shared System Certificates feature - https://fedoraproject.org/wiki/Features/SharedSystemCertificates - which required a whole chunk of work to get the major TLS engines using the same certificate store; they're certainly not unfamiliar with openssl and gnutls. The database uses NSS's certificate list as its starting point because it's the strongest contender for such a role, I think.
Your report has already been taken up for action, it appears:
https://bugzilla.mozilla.org/show_bug.cgi?id=986005
specifically:
"I think Symantec should reach out to Amazon, and potentially to other customers, too, and suggest to remove intermediates from their server configurations that point to these old roots."
"Brian, thanks for the pointer. I will work with our team to see about getting our cert chains updated for S3. Leaving in needinfo until I have more data." (from an Amazon employee)
so...it seems like wheels are in motion. Note that the updates for both F19 and F20 are still in u-t and have not been pushed stable yet...as Kai explicitly sent the update to u-t with a high auto-push threshold and sent this email out to ask people to report cases where it caused problems, I'd say things are working out more or less as intended, you've raised an issue and it's being dealt with.
On Mon, 2014-09-08 at 23:26 -0700, Adam Williamson wrote:
certificate_list This is a sequence (chain) of certificates. The sender's certificate MUST come first in the list. Each following certificate MUST directly certify the one preceding it.
We recently learned the hard way in GNOME that if you rely on this behavior, some sites won't work because webmasters test their sites with NSS, and NSS doesn't care which order certificates are sent in. (gnutls can reorder certificates too, though.)
On 09.09.2014 at 08:26, Adam Williamson wrote:
certificate_list This is a sequence (chain) of certificates. The sender's certificate MUST come first in the list. Each following certificate MUST directly certify the one preceding it. Because certificate validation requires that root keys be distributed independently, the self-signed certificate that specifies the root certificate authority MAY be omitted from the chain, under the assumption that the remote end must already possess it in order to validate it in any case
sure?
IMHO - normally, for years, i have built a PEM file for httpd with:
cat intermediate.pem ca.pem cert.pem key.pem > your.pem
https://www.ssllabs.com/ssltest/ also says that's fine: https://www.ssllabs.com/ssltest/analyze.html?d=secure.thelounge.net
well, i happily admit that i did it wrong and will rebuild the PEM files, though the order had some logic for me (see the corrected order sketched below):
* "ca.pem" is signed by "intermediate.pem"
* first load "intermediate.pem" to verify "ca.pem" against it
* at the end the server cert, signed by the chain before
On Tue, 2014-09-09 at 15:28 +0200, Reindl Harald wrote:
On 09.09.2014 at 08:26, Adam Williamson wrote:
certificate_list This is a sequence (chain) of certificates. The sender's certificate MUST come first in the list. Each following certificate MUST directly certify the one preceding it. Because certificate validation requires that root keys be distributed independently, the self-signed certificate that specifies the root certificate authority MAY be omitted from the chain, under the assumption that the remote end must already possess it in order to validate it in any case
sure?
Well, I mean, that's what's written down in the RFC, you can go read it for yourself. I'm not setting myself up as the world's leading authority on TLS, I need at least another fifteen minutes of googling before I do that. ;)
On Mon, 2014-09-08 at 09:00 -0500, Michael Catanzaro wrote:
On Mon, 2014-09-08 at 10:06 +0200, Nikos Mavrogiannopoulos wrote:
Unfortunately only NSS works. Both openssl and gnutls fail to connect to popular sites because of that change. It should not be assumed that the only users of ca-certificates are programs using NSS.
[1] is an interesting read. I get the impression that certificates are being removed as long as there is a compatible replacement that NSS can validate, based on NSS's custom strategies for certificate validation. Is this claim accurate?
Yes. Phasing out old, weak 1024-bit root CA certificates is difficult work, because there are so many issued certificates that still chain up to them.
If we wanted to wait for all of them to expire, it would take many additional years, during which users would remain exposed to attackers trying to generate certificates that appear to have valid signatures from CA certificates that use a weak signing key.
Bridge CA certificates are a common way to enable transitioning from old to newer CA certificates while keeping compatibility.
Shipping intermediate CA certificates to help software find alternative trust chains is a good solution, in my opinion, and indeed is used by upstream to clean up the Mozilla CA list while keeping compatibility.
In my opinion, if other software cannot find the alternative trust chains, that's a bug.
I think it's good that we have started experimenting with these removals in the testing areas of Fedora, because it raises awareness of these issues, and hopefully can bring higher priority to getting OpenSSL and GnuTLS enhanced.
But given the heavy complaints, maybe it's necessary that we delay shipping the upstream removals into stable Fedora a little longer, until we have a better solution (either by having OpenSSL/GnuTLS enhanced, or maybe by implementing a way that enables users/admins to re-enable legacy CA certificates).
Kai
On Wed, 2014-09-17 at 14:16 +0200, Kai Engert wrote:
I think it's good that we have started experimenting with these removals in the testing areas of Fedora, because it raises awareness of these issues, and hopefully can bring higher priority to getting OpenSSL and GnuTLS enhanced.
But given the heavy complaints, maybe it's necessary that we delay shipping the upstream removals into stable Fedora a little longer, until we have a better solution (either by having OpenSSL/GnuTLS enhanced,
Sounds good. Thanks for taking this issue seriously!
or maybe by implementing a way that enables users/admins to re-enable legacy CA certificates).
For the purposes of Fedora Workstation, no user intervention should be required.
Michael
On 09/08/2014 04:00 PM, Michael Catanzaro wrote:
This is a very big problem for the GNOME stack, which uses gnutls. We're getting complaints about sites that Epiphany can't display because the CSS fails certificate validation, or sites that don't display at all, which all work fine in Firefox.
Firefox also builds a repository of intermediate certificates over time and uses them automatically to fill gaps in certificate chains for completely unrelated sites. This leads to somewhat non-predictable behavior regarding the set of sites to which Firefox can connect reliably. This is difficult to emulate in one-shot command line tools such as wget which do not keep any local state by default.
On Tue, 2014-11-18 at 12:11 +0100, Florian Weimer wrote:
Firefox also builds a repository of intermediate certificates over time and uses them automatically to fill gaps in certificate chains for completely unrelated sites. This leads to somewhat non-predictable behavior regarding the set of sites to which Firefox can connect reliably. This is difficult to emulate in one-shot command line tools such as wget which do not keep any local state by default.
And that's arguably the biggest problem of all. The goal is to reduce certificate validation failures for users who have seen a particular intermediate cert before, but the effect is that web developers get false positives when testing whether their sites are set up properly or not. This just makes things worse in the long run.
Chrome does this as well (when using NSS -- not sure if Chrome on Linux uses NSS, but Chrome on Windows does).
On 18.11.2014 at 16:12, Michael Catanzaro wrote:
On Tue, 2014-11-18 at 12:11 +0100, Florian Weimer wrote:
Firefox also builds a repository of intermediate certificates over time and uses them automatically to fill gaps in certificate chains for completely unrelated sites. This leads to somewhat non-predictable behavior regarding the set of sites to which Firefox can connect reliably. This is difficult to emulate in one-shot command line tools such as wget which do not keep any local state by default.
And that's arguably the biggest problem of all. The goal is to reduce certificate validation failures for users who have seen a particular intermediate cert before, but the effect is that web developers get false positives when testing whether their sites are set up properly or not. This just makes things worse in the long run.
true - *but* anybody responsible for an https site should at least once per month run https://www.ssllabs.com/ssltest/ against it
as far as i can tell it's the best tool available, not only for checking the certificate chain, but also browser support, optimal cipher configuration and, last but not least, recently reported security issues
On 11/18/2014 05:44 PM, Reindl Harald wrote:
On 18.11.2014 at 16:12, Michael Catanzaro wrote:
On Tue, 2014-11-18 at 12:11 +0100, Florian Weimer wrote:
Firefox also builds a repository of intermediate certificates over time and uses them automatically to fill gaps in certificate chains for completely unrelated sites. This leads to somewhat non-predictable behavior regarding the set of sites to which Firefox can connect reliably. This is difficult to emulate in one-shot command line tools such as wget which do not keep any local state by default.
And that's arguably the biggest problem of all. The goal is to reduce certificate validation failures for users who have seen a particular intermediate cert before, but the effect is that web developers get false positives when testing whether their sites are set up properly or not. This just makes things worse in the long run.
true - *but* anybody responsible for an https site should at least once per month run https://www.ssllabs.com/ssltest/ against it
https://victi.ms/ receives an “A+” rating, even though it lacks an intermediate certificate and connections from non-browser clients fail. You have to read the results carefully to discover that the site is misconfigured in a significant way.
On Mon, 2014-09-08 at 10:06 +0200, Nikos Mavrogiannopoulos wrote:
Unfortunately only NSS works. Both openssl and gnutls fail to connect to popular sites because of that change. It should not be assumed that the users of ca-certificates are only programs using nss.
No-one working on ca-certificates assumes that, believe me. They're intimately involved in the work it takes to make all the major SSL implementations use the same database, because it is not at all straightforward.
On 6.9.2014 01:58, Kai Engert wrote:
On Tue, 2014-08-26 at 12:36 +0200, Vít Ondruch wrote:
$ gem fetch power_assert
ERROR:  Could not find a valid gem 'power_assert' (>= 0), here is why:
          Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://s3.amazonaws.com/production.s3.rubygems.org/latest_specs.4.8.gz)
The gem tool appears to use openssl.
That is correct.
$ openssl s_client -showcerts -connect rubygems.org:443 2>&1 \
    | grep "Verify return code"
Verify return code: 0 (ok)

$ openssl s_client -showcerts -connect s3.amazonaws.com:443 2>&1 \
    | grep "Verify return code"
Verify return code: 20 (unable to get local issuer certificate)
The failure is with the s3.amazonaws.com host. Looking at the certificates the server sends:
$ openssl s_client -showcerts -connect s3.amazonaws.com:443 2>&1 \
    | egrep " s:| i:"
 0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=s3.amazonaws.com
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
This means the server sends three certificates during the handshake: one server cert and two intermediates.
The intermediate at level 2 was issued by this root CA:
  C=US
  O=VeriSign, Inc.
  OU=Class 3 Public Primary Certification Authority
This root CA is very old; it was issued in 1996.
With the recent upstream update 2.1, this certificate was disabled for SSL/TLS use, see: https://bugzilla.mozilla.org/show_bug.cgi?id=986005
(Symantec/Verisign was aware, was cc'ed on the bug, and didn't object.)
When connecting to this server using an NSS client, such as Firefox, it works. I believe this is because an alternative trust chain can be found.
The intermediate certificate sent by the server at level 1 was issued by:
  C=US
  O=VeriSign, Inc.
  OU=VeriSign Trust Network
  OU=(c) 2006 VeriSign, Inc. - For authorized use only
  CN=VeriSign Class 3 Public Primary Certification Authority - G5
A root CA with this subject is included in our trust list. So NSS can find this root CA cert, succeeds in the verification, and ignores the unnecessary additional intermediate CA cert sent by the server.
I guess that openssl strictly wants to make use of all intermediates sent by the server, and doesn't search for alternative chains. And the only certificate satisfying this chain has been marked as untrusted for SSL/TLS in our update.
Thanks for the detailed analysis. I could learn something new from it :)
I believe that we must contact Amazon and Symantec about this issue. Amazon should remove the second intermediate, ending the path with the G5 intermediate. This will allow openssl to find the trusted root CA.
Also, Symantec should reach out to all of their customers and tell them to update their configuration.
I will contact them.
Great! Thanks. Should I open a ticket against ca-certificates to keep track of this issue?
If we want things to just work, without requiring server administration, then openssl should be enhanced to try additional chains (or the Ruby software could be changed to use NSS).
I was told by Tomáš Mráz that recent OpenSSL can do something like this, but it is not enabled by default, so it is hardly useful for this case.
Vít
On Mon, 2014-09-08 at 12:53 +0200, Vít Ondruch wrote:
I believe that we must contact Amazon and Symantec about this issue. Amazon should remove the second intermediate, ending the path with the G5 intermediate. This will allow openssl to find the trusted root CA.
Also, Symantec should reach out to all of their customers and tell them to update their configuration.
I will contact them.
Great! Thanks. Should I open a ticket against ca-certificates to keep track of this issue?
There was a short discussion here: https://bugzilla.mozilla.org/show_bug.cgi?id=986005#c4
In this particular case, because it works with NSS/Firefox, the admins don't think it's necessary to reconfigure?
I think it doesn't help to track the issue with this particular web site. I've been told this is a default configuration, which the CA had recommended to its customers for a long time in order to achieve maximum compatibility with clients. So it's unlikely we'll get all sites changed, for two reasons: site admins worry about breaking compatibility, and it's unrealistic to reach and convince all site admins.
This means we'll either have to find a software solution (such as getting gnutls/openssl enhanced to construct alternative chains), or delay the removal of weak 1024-bit roots by default until all involved server certificates have expired, which would be very unfortunate (and which might take several years, because of the transitioning trick that causes recently issued certificates to appear to have been issued by both the weak legacy root and the stronger replacement root CA cert).
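As an aside, a quick way to see which issuer a given certificate claims, e.g. for certs extracted from a handshake as shown earlier in this thread (the file name is a placeholder):
$ openssl x509 -in intermediate.pem -noout -subject -issuer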
Kai
On 17.9.2014 at 14:05, Kai Engert wrote:
On Mon, 2014-09-08 at 12:53 +0200, Vít Ondruch wrote:
I believe that we must contact Amazon and Symantec about this issue. Amazon should remove the second intermediate, ending the path with the G5 intermediate. This will allow openssl to find the trusted root CA.
Also, Symantec should reach out to all of their customers and tell them to update their configuration.
I will contact them.
Great! Thanks. Should I open a ticket against ca-certificates to keep track of this issue?
There was a short discussion here: https://bugzilla.mozilla.org/show_bug.cgi?id=986005#c4
In this particular case, because it works with NSS/Firefox, the admins don't think it's necessary to reconfigure?
I think it doesn't help to track the issue with this particular web site. I've been told this is a default configuration, which the CA had recommended to its customers for a long time in order to achieve maximum compatibility with clients. So it's unlikely we'll get all sites changed, for two reasons: site admins worry about breaking compatibility, and it's unrealistic to reach and convince all site admins.
This means we'll either have to find a software solution (such as getting gnutls/openssl enhanced to construct alternative chains), or delay the removal of weak 1024-bit roots by default until all involved server certificates have expired, which would be very unfortunate (and which might take several years, because of the transitioning trick that causes recently issued certificates to appear to have been issued by both the weak legacy root and the stronger replacement root CA cert).
I am in favor of the former solution, but the latter is good as well.
Nevertheless, I am still unsure how to proceed with RubyGems. Should I ship the bundled certificates again? Or should I wait until somebody notices?
Vít
On Wed, 2014-10-15 at 12:28 +0200, Vít Ondruch wrote:
Nevertheless, I am still unsure how to proceed with RubyGems. Should I ship the bundled certificates again? Or should I wait until somebody notices?
Sorry for my late reply; I didn't have a good suggestion earlier.
We should work with the upstream OpenSSL and the GnuTLS projects, and motivate them to implement more advanced path building. This would be a long term project.
For the short term, I'd like to suggest the following strategy:
All legacy root CA certificates that seem to be required for full compatibility with either OpenSSL or GnuTLS will continue to be included and enabled in the ca-certificates package.
For users who are willing to accept the breakage and prefer using only the latest trust set, we provide a mechanism to disable the legacy trust.
I've described the proposed approach in more detail at https://bugzilla.redhat.com/show_bug.cgi?id=1158197
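For testing, the mechanism sketched there looks roughly like this; the exact invocation may still change:
$ sudo ca-legacy disable    # opt out of the legacy compatibility certs
$ sudo update-ca-trust      # regenerate the consolidated trust stores
$ sudo ca-legacy default    # return to the default (legacy enabled)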
I've pushed experimental packages with this implementation to Rawhide and updates-testing for Fedora 21. I have disabled the karma automation, because I'll be offline for the next 2 weeks and don't want things to go live while I'm away. I think it will be helpful to collect test feedback during that time, see if the approach is suitable, and make a ship/no-ship decision on it later.
So, to answer Vít's original question:
I'd prefer if RubyGems didn't ship its own copy. I think our recent achievement that all software packages on a system use the same (default) set of trusted CA certificates is a good improvement, and I think we should keep it.
Thanks Kai
On Fri, 2014-10-31 at 14:05 +0100, Kai Engert wrote:
On Wed, 2014-10-15 at 12:28 +0200, Vít Ondruch wrote:
Nevertheless, I am still unsure how to proceed with RubyGems. Should I ship the bundled certificates again? Or should I wait until somebody notices?
Sorry for my late reply; I didn't have a good suggestion earlier.
We should work with the upstream OpenSSL and the GnuTLS projects, and motivate them to implement more advanced path building. This would be a long term project.
Is there some issue with gnutls in F21? As far as I understand it should work as expected with the certificates removed.
So, to answer Vít's original question: I'd prefer if RubyGems didn't ship its own copy. I think our recent achievement that all software packages on a system use the same (default) set of trusted CA certificates is a good improvement, and I think we should keep it.
More than agree. No package should try to provide "better" defaults than the shipped ca-certificates, not only because it won't be better, but because this is system configuration which administrators can and _do_ change.
regards, Nikos
On Fri, 2014-10-31 at 15:00 +0100, Nikos Mavrogiannopoulos wrote:
We should work with the upstream OpenSSL and the GnuTLS projects, and motivate them to implement more advanced path building. This would be a long term project.
Is there some issue with gnutls in F21? As far as I understand it should work as expected with the certificates removed.
It works as expected in the sense that GnuTLS can no longer handle major web sites like Amazon and Kickstarter, this being the natural consequence of removing a root before the certificates issued by it have expired....
On Fri, 2014-10-31 at 09:49 -0500, Michael Catanzaro wrote:
We should work with the upstream OpenSSL and the GnuTLS projects, and motivate them to implement more advanced path building. This would be a long term project.
Is there some issue with gnutls in F21? As far as I understand it should work as expected with the certificates removed.
It works as expected in the sense that GnuTLS can no longer handle major web sites like Amazon and Kickstarter, this being the natural consequence of removing a root before the certificates issued by it have expired....
Are you sure that this is the case with the current package? My F21 machine can no longer connect to the network to test, but gnutls in it should reconstruct the chain similarly to what NSS does (not very similarly, to be precise, but the end result should be the same). If it is not the case, please report it as a bug and I'll check it out.
regards, Nikos
On 31.10.2014 at 15:53, Nikos Mavrogiannopoulos wrote:
On Fri, 2014-10-31 at 09:49 -0500, Michael Catanzaro wrote:
We should work with the upstream OpenSSL and the GnuTLS projects, and motivate them to implement more advanced path building. This would be a long term project.
Is there some issue with gnutls in F21? As far as I understand it should work as expected with the certificates removed.
It works as expected in the sense that GnuTLS can no longer handle major web sites like Amazon and Kickstarter, this being the natural consequence of removing a root before the certificates issued by it have expired....
Are you sure that this is the case with the current package? My F21 machine can no longer connect to the network to test, but gnutls in it should reconstruct the chain similarly to what NSS does (not very similarly, to be precise, but the end result should be the same). If it is not the case, please report it as a bug and I'll check it out.
the point is that if somebody buys a certificate for 6 years he may have a checklist for when to change it, and if some 3rd party decides to remove the CA certificate -> game over for the users of that 3rd party
from where will you "reconstruct the chain"?
* webserver a) has a certificate for 6 years
* the issuer is CA b) which you remove
* you make that certificate invalid by intention
* frankly, that certificate still shows "i am valid until"
* that certificate would have to be replaced
* that won't happen in many cases
you can hope and expect that large internet companies are doing that in a timely manner, but you *really really* can not expect that from anybody out there, and you won't notice small websites and other services breaking because of it
the worst case is that somebody with no technical clue installed the certificate, receives a few complaints, verifies that it works everywhere and claims Fedora is broken - and frankly he is just right with that claim, because nobody but the CA is in the position to revoke CA certs which are valid
there is a difference between a CA recalling certificates and forcing its users to renew them, and a random OS supplier just removing them from the chain - the CA normally knows which certificates were issued to which customer under a specific CA certificate - the blind butcher making CA certificates invalid doesn't
the whole CA trust idea is broken by design, but you won't fix it by removing valid CA certificates *without coordinating that with the affected CA and making sure all affected customer certificates are replaced*
On Fri, 2014-10-31 at 16:11 +0100, Reindl Harald wrote:
Are you sure that this is the case with the current package? My F21 machine can no longer connect to the network to test, but gnutls in it should reconstruct the chain similarly to what NSS does (not very similarly, to be precise, but the end result should be the same). If it is not the case, please report it as a bug and I'll check it out.
the point is that if somebody buys a certificate for 6 years he may have a checklist for when to change it, and if some 3rd party decides to remove the CA certificate -> game over for the users of that 3rd party
from where will you "reconstruct the chain"?
- webserver a) has a certificate for 6 years
- the issuer is CA b) which you remove
I'm also not particularly fond of this approach, as it adds complexity to an already very complex protocol. However, in gnutls an alternative certificate path is calculated if there is a trusted certificate which has the same name as the issuer of a CA certificate in the path, and also has the same key.
This is the particular case that Kai refers to. For example, in that case a VeriSign intermediate certificate was removed and replaced with a root CA certificate that has the same DN and the same key.
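You can check the end result of that reconstruction with gnutls-cli, e.g.:
$ gnutls-cli -p 443 s3.amazonaws.com < /dev/null
With a new enough gnutls, the handshake output should report the certificate as trusted despite the legacy intermediate in the server's chain.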
regards, Nikos
On Fri, 2014-10-31 at 15:53 +0100, Nikos Mavrogiannopoulos wrote:
Are you sure that this is the case with the current package? My F21 machine can no longer connect to the network to test, but gnutls in it should reconstruct the chain similarly to what NSS does (not very similarly, to be precise, but the end result should be the same). If it is not the case, please report it as a bug and I'll check it out.
No, I haven't tested this in a month or two. If there's been recent work on NSS compatibility, that's awesome.
Complicating the matter is that these pages sometimes work and sometimes don't (CDN magic, I suppose), so we really have to rely on bug reports to know if there's breakage, and we won't get those unless the compat certificates are removed (which I certainly don't suggest).
Thanks,
Michael
On Fri, 2014-10-31 at 15:00 +0100, Nikos Mavrogiannopoulos wrote:
Sorry for my late reply; I didn't have a good suggestion earlier.
We should work with the upstream OpenSSL and the GnuTLS projects, and motivate them to implement more advanced path building. This would be a long term project.
Is there some issue with gnutls in F21? As far as I understand, it should work as expected with the certificates removed.
I confirm that using GnuTLS 3.3.9-2.fc21 on Fedora 21 testing, with ca-certificates-2014.2.1-1.3.fc21, and ca-legacy set to disabled, the command gnutls-cli -p443 www.amazon.com reports a trusted certificate.
That's great, thanks Nikos for fixing it in the newer GnuTLS on Fedora 21!
(Just for the record, using gnutls 3.1.27 on Fedora 20, a scratch build of the new ca-certificates package, and ca-legacy set to disabled, the certificate is still rejected, which I understand is because of the older GnuTLS version.)
If anyone can still see problems with GnuTLS and the above configuration (disable) on Fedora 21, please let us know which site has the issue.
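(A minimal way to reproduce this check from the command line; the exact status text printed by gnutls-cli can differ between versions:)

  # As root: switch off the legacy roots, then test the handshake:
  ca-legacy disable
  gnutls-cli -p443 www.amazon.com < /dev/null
  # Look for a line reporting that the certificate is trusted.

  # Restore the default configuration afterwards:
  ca-legacy enable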
This means the remaining package that still needs fixing is OpenSSL.
Thanks Kai
On Fri, 2014-10-31 at 16:28 +0100, Kai Engert wrote:
I confirm that using GnuTLS 3.3.9-2.fc21 on Fedora 21 testing, with ca-certificates-2014.2.1-1.3.fc21, and ca-legacy set to disabled, the command gnutls-cli -p443 www.amazon.com reports a trusted certificate.
This isn't a recent change, see [1]. I presume Amazon is most likely still broken in Epiphany (when these roots are removed), as there's been no action on [1], where we determined that gnutls-cli accepts www.amazon.com because it uses certs that are valid for either email or TLS, whereas GLib only uses certs that are valid for TLS.
Note that due to CDN magic, sites like Amazon load lots of subresources like images and CSS over connections using unrelated certs, so a more reliable test is to actually open the web page in a browser.
P.S. To both Kai and Nikos: thanks for all your effort on this matter. A couple of months ago I was quite worried, but now I expect things will turn out fine.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1134602
----- Original Message -----
This isn't a recent change, see [1]. I presume Amazon is most likely still broken in Epiphany (when these roots are removed), as there's been no action on [1], where we determined that gnutls-cli accepts www.amazon.com because it uses certs that are valid for either email or TLS, whereas GLib only uses certs that are valid for TLS. Note that due to CDN magic, sites like Amazon load lots of subresources like images and CSS over connections using unrelated certs, so a more reliable test is to actually open the web page in a browser. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1134602
I've reassigned the original bug to gnutls and closed it as fixed in the next release (F21). A fix for F20 is unlikely to happen and would most probably introduce unnecessary issues. If anything remains, feel free to reopen with more information.
regards, Nikos
On Fri, 2014-10-31 at 14:05 +0100, Kai Engert wrote:
All legacy root CA certificates that seem to be required for full compatibility with either OpenSSL or GnuTLS will continue to be included and enabled in the ca-certificates package.
For users who are willing to accept the breakage and prefer to use only the latest trust settings, we provide a mechanism to disable the legacy trust.
I've described the proposed approach in more detail at https://bugzilla.redhat.com/show_bug.cgi?id=1158197
I've pushed experimental packages with this implementation to Rawhide and updates-testing for Fedora 21. I have disabled the karma automation, because I'll be offline for the next 2 weeks and don't want things to go live while I'm away. I think it will be helpful to collect test feedback during that time, see whether the approach is suitable, and make a ship/no-ship decision later.
In the meantime, while I was on vacation, the above has been (accidentally) pushed as a stable update for Fedora 21 already: ca-certificates-2014.2.1-1.5.fc21.noarch
It seems it will be included in the final release of Fedora 21. Given that we keep legacy trust enabled, and given that I haven't seen any problem reports, it's probably OK.
Using the new ca-legacy utility, users/administrators who are willing to accept the compatibility issues and prefer to closely follow the Mozilla CA trust decisions can disable trust for the legacy root CA certificates as a system-wide configuration, by executing this command as root:

  ca-legacy disable
The configuration will be remembered in /etc/pki/ca-trust/ca-legacy.conf and will be used on future package upgrades, when additional certificates are moved to the legacy state.
If required, it's possible to undo that configuration and restore the current default, using:

  ca-legacy enable
The current configuration can be shown using:

  ca-legacy check
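(Putting these together, a typical opt-out session might look like the following; the file contents shown by cat are whatever the tool writes, and this sketch makes no assumption about their exact format:)

  # As root: opt out of the legacy root CA certificates
  ca-legacy disable

  # The choice is persisted for future package upgrades
  cat /etc/pki/ca-trust/ca-legacy.conf

  # Inspect, or revert to the default, at any time
  ca-legacy check
  ca-legacy enable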
Regarding Fedora 19 and Fedora 20:
On F19/F20, GnuTLS is also affected by the breakage when trust for the legacy CAs is disabled, because the GnuTLS enhancement is present only in Fedora 21 and later.
Updated packages for F19 and F20, which provide the update to version 2.1 of the ca-certificates list and also include the new ca-legacy utility and configuration mechanism, have been pushed to updates-testing: https://admin.fedoraproject.org/updates/ca-certificates-2014.2.1-1.5.fc19 https://admin.fedoraproject.org/updates/ca-certificates-2014.2.1-1.5.fc20
Kai
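(For testing those builds before they reach stable, the usual updates-testing workflow applies; a one-line sketch:)

  # On F19/F20, pull the package from updates-testing:
  yum --enablerepo=updates-testing update ca-certificates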
FYI, I'm documenting the changes that we make on top of the Mozilla CA list at: https://fedoraproject.org/wiki/CA-Certificates
Kai
On Fri, 2014-11-21 at 14:03 +0100, Kai Engert wrote:
On Fri, 2014-10-31 at 14:05 +0100, Kai Engert wrote:
All legacy root CA certificates that seem to be required for full compatibility with either OpenSSL or GnuTLS will continue to be included and enabled in the ca-certificates package.
[...]
Kai, this is very important information buried at the bottom of a long email thread; would you mind re-sending this summary in a new thread (also to devel-announce) so that people are sure to see it?
On Fri, 2014-11-21 at 10:45 -0500, Stephen Gallagher wrote:
Kai, this is very important information buried at the bottom of a long email thread; would you mind re-sending this summary in a new thread (also to devel-announce) so that people are sure to see it?
done
On Mon, 2014-08-18 at 23:48 +0200, Kai Engert wrote:
Hello,
this is a heads-up for an update to the ca-certificates package that I've just submitted for updates-testing for Fedora 19 and 20.
The upstream Mozilla CA list maintainers have decided to start removing CA certificates that use a weak 1024-bit key. Although those certificates are still valid, Mozilla has worked with the CAs, and they did agree that it's OK to remove them.
Hey Kai,
This update has caused a lot of pain for Epiphany. Could you take a look at [1] when you get a chance and help us figure out what's gone wrong?
Thanks!

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1134602
On Mon, 2014-09-01 at 18:03 -0500, Michael Catanzaro wrote:
This update has caused a lot of pain for Epiphany. Could you take a look at [1] when you get a chance and help us figure out what's gone wrong?
Sorry for the delay. I've commented in the bug; let's continue there.
Thanks Kai