Quick Tip – Docker ENV variables

It took me a little while to notice what was happening here, so I’m writing it down in case someone else needs it.


Consider this example Dockerfile:

FROM centos:centos7
MAINTAINER Chris Collins

ENV VAR1="foo"
ENV VAR2="bar"

It’s common practice to collapse the ENV lines into a single line, to save a layer:

FROM centos:centos7
MAINTAINER Chris Collins

ENV VAR1="foo" \
    VAR2="bar"

And after building an image from either of these Dockerfiles, the variables are available inside the container:

[user@host envtest]$ docker run -it envtest bash
[root@container /]# echo $VAR1
foo
[root@container /]# echo $VAR2
bar

I’ve also tried to use ENV vars to create other variables, like you can do with bash:

FROM centos:centos7
MAINTAINER Chris Collins

ENV VAR1="foo" \
 VAR2="Var 1 was set to: ${VAR1}"

This doesn’t work, though.  I assume $VAR1 is not set yet when Docker builds the layer, so it cannot be used in $VAR2.

[user@host envtest]$ docker run -it envtest bash
[root@container /]# echo $VAR1
foo
[root@container /]# echo $VAR2
Var 1 was set to:

Using a single line for each ENV does work, though, as the previous layer has been parsed and added to the environment.

FROM centos:centos7
MAINTAINER Chris Collins
ENV VAR1="foo" 
ENV VAR2="Var 1 was set to: ${VAR1}"

[user@host envtest]$ docker run -it envtest bash
[root@container /]# echo $VAR1
foo
[root@container /]# echo $VAR2
Var 1 was set to: foo
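
If you just want to confirm what ended up baked into an image without starting a shell at all, docker inspect can show the environment too (assuming the image is tagged envtest, as above):

docker inspect -f '{{.Config.Env}}' envtest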

So, while it makes sense to try to collapse ENV lines, to save layers**, there are definitely cases where you’d want to separate them.  I am using this in a Ruby-on-Rails image:

[...]
ENV RUBYPKGS='ruby2.1 mod_passenger rubygem-passenger ruby-devel mysql-devel libxml2-devel libxslt-devel gcc gcc-c++' \
    PATH="/opt/ruby-2.1/bin:$PATH" \
    NOKOGIRI_USE_SYSTEM_LIBRARIES='1' \
    HTTPDMPM='prefork'

ENV APPENV='test' \
    APPDIR='/var/www/current' \
    LOGDIR='/var/log/rails'

ENV RAILS_ENV="${APPENV}" \
    RACK_ENV="${APPENV}"
[...]

A logical separation of sections is helpful here – the first ENV is for system stuff, the second for generic application setup on the host, and the third to set the application environments themselves.

**I have heard rumblings that in future versions of Docker, the ENV instructions will not create a layer – more like metadata, I think.  If that is the case, the need to collapse the lines will go away.

Apache HTTPS configuration – June 2015

HTTPS is HTTP over TLS.  It allows you to encrypt traffic to and from your web server, providing privacy and security for your clients.  As of this writing, the world is moving ever closer to HTTPS everywhere: thanks to the Snowden documents, there’s been a big push for more privacy and security.  Major companies like Google and Mozilla are securing traffic by default for all their applications.  Cloudflare is offering free HTTPS encryption between clients and their servers.  Let’s Encrypt, a new Certificate Authority offering free, secure certificates, is scheduled to open its doors in September.
If you run a webserver, you should be offering HTTPS, and perhaps even forcing HTTPS-only traffic.  This article is about how to configure Apache for HTTPS, supporting modern cipher suites and TLS protocols.  The goal is an “A” rating on the SSLLabs test (https://www.ssllabs.com/ssltest/index.html).

Note:  These are recommended HTTPS configurations as of June 2015.  If you’re reading this more than six months later, it’s almost certainly out of date.


There are four categories to the SSLLabs test: Certificate, Protocol Support, Key Exchange, Cipher Strength.  We’ll cover best practices for each in order.

Certificate

Most of the certificate configuration information is relatively well known.  I’ve included it, for completeness, in Appendix 3: General Certificate Information, if you want or need to read it.

One of the more common stumbling blocks for the Certificate section is the certificate chain, so I’ll keep this part up here:

Have a complete certificate chain

This can be a tricky part for people new to HTTPS.  Due to the nature of certificates, each Certificate Authority (CA) is itself verified as trusted by its own CA.  This forms a trust chain from the Root CA certificate, down through each intermediate CA certificate, to your own certificate.  If this chain is broken, your browser cannot verify whether or not your certificate is trusted.  Fortunately, most of the Root and many of the intermediate Certificate Authorities’ certificates are included in the CA bundle on your server by default.  Sometimes, however, you may need to add your CA’s intermediate certificate to the chain to complete it.  You can do this by copying the intermediate certificate to your server (your CA can provide it to you), and using the Apache “SSLCACertificateFile” directive:

SSLCACertificateFile /path/to/the/intermediate/cert
This can be added to your SSL configuration file (/etc/httpd/conf.d/ssl.conf on Red Hat-based systems), or individual Virtual Hosts if they have their own separate SSL configurations.
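
If you’re not sure whether your chain is complete, you can check it from any machine with OpenSSL installed.  A quick sanity check (substitute your own hostname):

openssl s_client -connect www.example.org:443 -showcerts

Look for “Verify return code: 0 (ok)” near the end of the output; an “unable to get local issuer certificate” error usually means an intermediate certificate is missing from the chain.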

Protocol Support

Protocol Support is relatively straightforward.  Each TLS Protocol describes how the cryptographic algorithms are used between the client and the server.  As time has gone by, some of these protocols have been found to be insecure, so in order to protect your data in transit, and also receive a good score on the SSLLabs test, you must enable the “good” protocols and disable the insecure ones.
To do this with Apache, use the “SSLProtocol” directive, and add it to your SSL configuration file:
SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2 -SSLv2 -SSLv3
This enables TLS versions 1.0, 1.1 and 1.2, and disables the known-insecure SSLv2 and SSLv3 protocols.

Note: It’s possible to get a higher score on the SSLLabs test, and remove the slightly less secure TLS 1.0 protocol by changing +TLSv1 to -TLSv1.  However, as of June 2015, about 30% of browsers out there still support only TLS 1.0, namely Android < 4.4, and IE < 11.  This means users with those browsers will be unable to connect to your server if you disable TLS 1.0.  Hopefully the use of those older browsers will be reduced quickly.


Key Exchange

The best way to get a good score for the Key Exchange category and add security to your HTTPS connection is to use a key with a length of at least 4096 bits, not allow anonymous key exchange and not use a weak (Debian OpenSSL flaw) key.

4096 Bit Key

This is easy.  Generate your key with 4096 bits.  If you’re doing it manually, with the OpenSSL command, you’d simply specify 4096 as the key length.
openssl genrsa -out  <name for your key file> 4096
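
For completeness, the rest of the manual process is just generating a certificate signing request (CSR) from that key to hand to your CA.  The file names here are only placeholders:

openssl genrsa -out example.org.key 4096
openssl req -new -key example.org.key -out example.org.csr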

Disable Anonymous Key Exchange

Covered in the Cipher Strength section below.

No Weak Key (Debian OpenSSL flaw)

This is an older bug in Debian’s OpenSSL package. If you’re using a Debian-based system, update to the latest OpenSSL package before generating your key, and you’re good to go.

Cipher Strength

There are dozens of Ciphers supported by the OpenSSL packages.  In order to secure your traffic, you should enable only the most secure ciphers available in your OpenSSL package.  The easiest way to get a list of trusted ciphers is to follow Mozilla.org’s recommendations for the Modern Compatibility Cipher Suites (https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility).
In order to configure Apache to use the recommended ciphers as of June 2015, modify the “SSLCipherSuite” directive in your SSL configuration as follows:
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
This supports the majority of modern browsers.  As with the SSLProtocol above, you can take it a step further and remove some of the less secure ciphers from this list to get a better score and better protect your traffic, but a larger portion of browsers will be unable to connect to your server.   If you’ve already disabled TLS 1.0, then that may not be an issue for you.
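
Pulling the directives from this article together, a single SSL virtual host ends up looking something like the sketch below.  The paths and hostname are placeholders, and SSLHonorCipherOrder is covered in Appendix 1:

<VirtualHost *:443>
    ServerName            www.example.org
    SSLEngine             on
    SSLCertificateFile    /path/to/your/cert
    SSLCertificateKeyFile /path/to/your/key
    SSLCACertificateFile  /path/to/the/intermediate/cert
    SSLProtocol           +TLSv1 +TLSv1.1 +TLSv1.2 -SSLv2 -SSLv3
    SSLCipherSuite        <cipher list from the Cipher Strength section above>
    SSLHonorCipherOrder   on
</VirtualHost>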

Conclusion

This information covers the basic configurations for setting up an Apache server with HTTPS support, and making sure it’s acceptably secure.  Using insecure HTTPS settings is effectively just as bad as using no HTTPS – maybe more so if you lull your clients into a false sense of security – so making sure you stay up-to-date with vulnerabilities is extremely important.
As mentioned previously, this is valid as of June 2015.  The older this article gets, the more out of date these recommendations are.  By 2016, you should probably verify the information here to make sure it’s accurate.

Appendix 1: Perfect Forward Secrecy

Another beneficial security feature, and one required for an “A” grade from SSLLabs, is Perfect Forward Secrecy (PFS).  PFS ensures that past sessions cannot be decrypted even if one of the keys used is compromised in the future.  (Check the “Further Reading” section for more specific details).
Until relatively recently, the version of OpenSSL shipped with some of the modern distributions of Linux did not support the ciphers required for PFS.  The list of cipher suites in the Cipher Strength section includes ciphers that support PFS, but in order to make sure it’s used, you have to require that the cipher order is honored (ie: use the best first; lesser only if the client cannot interact with the best).  To do that with Apache, set the SSLHonorCipherOrder directive in your SSL configuration file:
SSLHonorCipherOrder on
If your version of OpenSSL does not support the more secure ciphers, this will not break anything – they just will not be used.  However, your server will not support Perfect Forward Secrecy either.
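
If you want to check whether the OpenSSL build on your server supports the ECDHE/DHE ciphers needed for PFS at all, you can list them.  An empty result means no PFS-capable ciphers:

openssl ciphers -v 'EECDH:EDH'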

Appendix 2: Server Name Indication

Server Name Indication (SNI) is an extension to the TLS protocol that allows a client to tell the server which hostname it is trying to reach as part of the initial TLS handshake, before the server has to choose a certificate and key for the connection.
Before SNI, there was no way to tell which host the client was attempting to connect to until after the TLS handshake, so Apache could not tell which virtual host to direct traffic to.  This meant each HTTPS-enabled site had to have its own IP address, so traffic was routed by IP instead.
Functionally, this allows Apache to host more than a single HTTPS enabled site per IP address.
If you are using SNI, it’s worth noting that SSLLabs does a check for “Incorrect SNI Alerts”.  These alerts are sent if an SNI-enabled server serves a certificate containing Subject or Subject Alternative Names for which the server or its virtual hosts are not configured.
For example:  If your certificate included “www.example.org” and “example.org”, and was used with a Virtual Host with no ServerName or ServerAlias directives set up for “www.example.org” or “example.org”, this would trigger the “Incorrect SNI Alert”.
The same thing would happen if your host was configured with a ServerName for just one of the two Subject names included in the certificate.

Note: This is not the same thing as the certificate not matching the domain.  That is a separate issue, and discussed in Appendix 3: General Certificate Information.


To fix Incorrect SNI Alerts, the Virtual Host or server responding to the SSL request MUST have the ServerName directive set for the primary Subject name, and ServerAlias directives for ALL of the other Subject Alternative Names in the certificate.
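
As a sketch, for a certificate covering both “example.org” and “www.example.org”, the matching virtual host would need something like this (the hostnames are just examples):

<VirtualHost *:443>
    ServerName  www.example.org
    ServerAlias example.org
    SSLEngine   on
    # ... plus the SSL directives from your main SSL configuration
</VirtualHost>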

Appendix 3:  General Certificate Information

The certificate section is probably the easiest to get set up correctly.  To score well, you need to meet a couple of criteria.  The certificate must:

Match the domain name of the site it’s used on

This simply means you must use a certificate that matches your domain name.  A certificate for “example.org” does NOT match the “www.example.org” domain, and vice versa.
A certificate CAN have multiple subjects, through the use of Subject Alternative Names, so your cert can include both “example.org” and “www.example.org”, or more.
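
You can check exactly which names a certificate covers with openssl (substitute your own certificate file):

openssl x509 -in /path/to/your/cert -noout -text | grep -A1 'Subject Alternative Name'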

Not be expired, revoked, or not yet valid

This is easy.  When you get a cert, it will be valid for a specific period of time.  Chances are it won’t be valid starting in the future, so you’re OK there.  As long as you replace it with a new one before it expires, and don’t use a certificate that’s been revoked, that should cover the rest.

Be signed by a trusted Certificate Authority

A trusted Certificate Authority is one that’s included in trust stores by general community consent.  Your Certificate Authority derives its trust from its own Certificate Authority, and on up the line.  If you are unsure how to find a trusted Certificate Authority, use Let’s Encrypt – their certificates are also signed by IdenTrust (https://letsencrypt.org/faq/)

Use a secure certificate signature

Your Certificate Authority should sign your certificate with a secure signature algorithm (not MD2 or MD5, etc.).  If they do not, find another CA.
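
Both the validity dates and the signature algorithm are easy to check yourself with openssl (again, substitute your own certificate file):

openssl x509 -in /path/to/your/cert -noout -dates
openssl x509 -in /path/to/your/cert -noout -text | grep 'Signature Algorithm'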

Using Docker and AWS to Survive an Outage

Last week at $WORK, we suffered from an outage that slowed down a large part of our network and took down our main website for both internal and external customers.  We were under a distributed denial of service attack focused on the website itself.  The site is load-balanced, and this resulted in slowdowns or outages for all the services behind the load balancers, as well.

While folks were bouncing ideas around on how to bring the site up again while still struggling with the outage, I mentioned that I could pretty quickly migrate the site over to Amazon Web Services and run it in Docker containers there. The higher-ups gave me the go-ahead and a credit card (very important, heh) and told me to get it set up.  The idea was to have it there so we could fail over to the cloud if we were unable to resolve the outage in a reasonable time.

TL;DR – I did, it was easy, and we failed over all external traffic to the cloud. Details below.

Amazon Web Services

Despite having a credit card and a pretty high blanket “OK”, I wanted to make sure we didn’t spend any money unless it was absolutely necessary. To that end, I created three of the “free tier” EC2 instances (1GB RAM, 1 CPU, 10GB Storage) rather than one or more larger instances. After all, these servers were going to be doing one thing and one thing only – running Docker. I took all the defaults, except two. First, I opted to use RHEL7 as the OS. We use Red Hat at work, so I’m familiar with it (and let’s be honest, it works really well), especially where setting up Docker comes in. Second, I set up a security group that allowed only HTTP/HTTPS traffic to the EC2 instances, and SSH access only from $WORK. Security groups are like a logical firewall, I guess – run by Amazon in front of the servers themselves.

The EC2 instances started almost immediately, and I logged in via SSH using the key pair I created for this project. The first thing I did was augment the security group by setting the IPTables firewall on the hosts themselves to match: SSH from $WORK only, drop everything else, even pings.  You know, just in case.

Note: Since I was planning to use Docker to run the website, I didn’t need to add IPTables rules for HTTP/HTTPS. Docker uses the FORWARD chain, since it NATs from the host IP to the containers, and Docker has the ability to add and remove rules from the chain itself as needed.

Next, I ran a quick *yum update* to get the latest patches on the EC2 instance. It wasn’t terribly out of date, so this was quick.

Now to the meat of things. I didn’t really want to muck about with repos or try to find which one was required to install the Docker RPM, so I just copied the RPM for Docker from our local repository. The RPM is packaged upstream by Red Hat, and includes Docker 1.2.1. Even though I wanted to use Docker 1.4.1, the older RPM version is no big deal – I just installed it to get the basic config files – systemd service files, sysconfig, etc. Once the RPM was installed, I downloaded the Docker 1.4.1 binary from Docker.io, and replaced the 1.2.1 binary from the RPM. Presto! Latest Docker with the handy *docker exec* command! At this point, the server itself was basically done, and I moved on to setting up the Docker image.
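
The whole host setup boiled down to a handful of commands, roughly like the sketch below.  The RPM filename and download URL are placeholders, not the exact ones I used:

yum -y update
yum -y localinstall docker-1.2.1.el7.x86_64.rpm   # RPM copied from our local repository (placeholder filename)
curl -o /usr/bin/docker https://example.com/docker-1.4.1   # replace the packaged binary with 1.4.1 (placeholder URL)
chmod +x /usr/bin/docker
systemctl enable docker && systemctl start docker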

Time spent so far: About 5 minutes

Docker

Now, I didn’t have an image for our website ready to go or anything – I was going to have to build it from scratch.  However, I’ve been lucky enough to be allowed to play around with Docker at $WORK, and had already done some generic Images for web stacks for our public DockerDemos project (https://dockerdemos.github.io/appstack/), so I was familiar with what I’d need to build the image for our site. I wrote a Dockerfile and built the image on my local laptop to test it. I went through a few revisions to get it perfect, but it only took about 15 minutes to write it from scratch. Once that was ready, I copied the Dockerfile and supporting files up to the EC2 servers, and built the images there. With the magic that is Docker and Linux containers, everything functioned exactly as it did on my laptop, and in a few seconds all three EC2 instances had the website image ready to go.
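
I can’t share the real Dockerfile for our site, but structurally it was nothing fancy.  Something along the lines of this sketch, where the package list and the content-sync script are placeholders rather than the real thing:

FROM centos:centos7
MAINTAINER Chris Collins

# Web stack for the site (placeholder package list)
RUN yum -y install httpd mod_ssl && yum clean all

# Placeholder for the bits that pull in and sync the site content
COPY sync-and-run.sh /usr/local/bin/sync-and-run.sh

EXPOSE 80 443

# Sync the content, then start the web server in the foreground
CMD ["/usr/local/bin/sync-and-run.sh"]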

The final step was to run the container from the image. On all three of the EC2 instances, I ran:

docker run --name website -p 80:80 -p 443:443 -d website && \
docker logs -f website

The first command immediately started up the web servers inside the containers and started to sync their content, and the second opened up STDOUT inside the container so I could watch the progress. In a minute or two the sync was done, and the servers were online!

Note: The “sync” I’m talking about is part of how our website works, not something related to Docker itself.

Total time spent: About 25 minutes

So, in one fell swoop – about a half hour – I was able to create three servers running a Docker image to serve our main website, from scratch. It’s a good thing, too. It wasn’t long before we made the call to fail over, and currently all of our external traffic to the site is being served by these three containers.

That seems cool, no?  But check this out:

Sunday night, I needed to add more servers to the rotation. It was late. I was cranky to have been called after hours. I logged into AWS and used the EC2 “Create Image” feature to commit one of the running instances to a custom image (took about a minute). Then, I spun up three more EC2 instances from that image. They started up as quickly as a normal EC2 instance, and contained all the work I’d already done to set up the first servers, including the Docker package, binary, and image. Once they were up, all I had to do was run the *docker run* command again, and they were ready to go. Elapsed time?

2 minutes

It took longer for the 5 minute time-to-live on our DNS entry to expire.

Docker is Awesome. With AWS, it’s Awesome-er. I’m trying to convince folks that we should leave all of our external traffic to be served by Docker in AWS, and to migrate more sites out there. At the very least, it’s extremely flexible and allows us to respond to issues on a whole different timescale than we could before.

Oh, and an added bonus? All of our external monitoring (in multiple sites across the country) report that our page load speeds have improved 3x compared to what they were on the servers hosted in-house with regular non-Docker setups. I’m investigating what is giving us that increase this week.

Oh, and a second added bonus? For the last five days, our bill from Amazon for hosting our main website is a whopping *$4.69*. That’s a cup of crappy venti mochachino soy caramel crumble arabian dark coffee (or whatever) at the local coffee chain. And I can do without the calories.

Update:

Well, it’s been six months since this little adventure took place.  Since then, this solution has worked so well that we left all of our external traffic pointing to these instances at AWS.  Arguably, things have gotten even easier the more I work with both AWS and Docker.  To that point:

  1. The Docker approach worked so well that we replaced all of the servers hosting our website internally with basic RHEL7 servers running Docker containers.  The servers are considerably more lightweight than they used to be, and as such we can get better performance out of them.
  2. I’ve since added the new(-ish) Docker flag --restart=always to the deploy command.  This saves me the step of even having to start the containers on reboot.
  3. I set up all the hosts to use the Docker API, and TLS authentication, so I can upload new images and start and stop containers on each host without even needing to log in to them.  (This required opening port 2376 to $WORK in the security group and host firewall, fyi – there’s a sketch of what this looks like after this list.)
  4. I wrote a couple of simple bash scripts to re-build the image as needed, and deploy locally for testing.  With the portable nature of Docker images, it’s extremely easy for me to test all changes before I push them out.
  5. Rotating the instances in and out of production at AWS is extremely simple with Amazon’s Elastic IP Addresses.  We are able to rotate a host out of service and instantly replace it with another, allowing us to patch them all with zero downtime.
  6. Amazon’s API is a wonderful thing.  I can manage the entire thing with some python scripts on my laptop, or the convenient Amazon CLI package.
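
To give a feel for points 3, 5 and 6, the day-to-day interaction looks roughly like this.  The hostname, certificate paths, instance ID and IP are all placeholders:

# Point 3: talk to a remote Docker daemon over TLS on port 2376
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://ec2-host.example.com:2376 ps

# Points 5 and 6: re-point an Elastic IP at a replacement instance with the AWS CLI
aws ec2 associate-address --instance-id i-0abc123 --public-ip 203.0.113.10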

Docker and AWS have proven themselves to me, and through this process, to the higher-ups at $WORK.  We’re embracing Docker whole-heartedly in our datacenters here, and we’ve moved a number of services to AWS now, as well.  The ease and flexibility of both is a boon to us, and to our clients, and it’s starting to transform the way we do things in IT – the way we do everything in IT.

Some Real-World Info on POODLE (CVE-2014-3566)


TL;DR: Remove SSLv3 – the impact is likely very small

We’ve now removed SSLv3 from about 1000 servers in our environment. So far, we’ve only had one issue – a script used to call an API started to fail. The issue was the ruby rest client > 1.7.0. (Yes, that’s greater-than.)

Removing from Apache

SSLv3 is easy to remove in Apache. You probably want this in your ssl.conf (or whatever the equivalent is for your distro):

SSLProtocol all -SSLv2 -SSLv3

This removes both SSLv2 and SSLv3 (both are known to have vulnerabilities), and relies on TLS. This is good.

Removing from Nginx

It’s similar for Nginx – find any instances of “ssl_protocols” in your conf files in /etc/nginx (or your distro’s equivalent), and change that line to read:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Testing

You can test whether or not SSLv3 is being used by your server by running:

openssl s_client -connect localhost:443 -ssl3

If you receive a handshake error, you’re good. No SSLv3 for you. If you receive certificates and other info back, then SSLv3 is enabled, and you should change that.

Change the port if you want to check a different service (mail, etc).
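
If you have several services to check, a quick shell loop over the ports works, too.  Adjust the host and port list for your environment:

for port in 443 465 993 995; do
    echo "== port $port =="
    # A handshake failure means SSLv3 is disabled; certificate output means it is still enabled
    echo | openssl s_client -connect localhost:$port -ssl3 2>&1 | grep -E 'handshake failure|BEGIN CERTIFICATE'
done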

Client Impact

Removing SSLv3 does remove a protocol that older browsers may use to connect to your server. It’s most likely to impact unpatched versions of IE6, or browsers on old mobile devices. In practice, it has been an extremely small segment of our base (it appears to be < 0.1% or so). Of course, analyze your client base to see for sure.


I’m beginning to copy over my technology-related posts from Google+ to this blog, mostly so I have an easy-to-read record of them. This one was originally published on 15 October 2014: Some Real-World Info on POODLE (CVE-2014-3566)

Docker and Security

At $WORK, we have discussed at length the security issues as they relate to Docker and Linux container (LXC) technologies. Our general takeaway is that you cannot ever really trust an image that you didn’t build yourself. As Daniel Walsh of Red Hat explains in his article Docker Security with SELinux, it appears that those concerns are valid.


The problem here revolves around namespaces, and the fact that not everything in Linux is namespaced. Consider /sys, /proc, /dev/sd*, /dev/mem, and so on. The upshot is that root inside a container effectively has root access to any of these file systems or devices, and if you can somehow communicate with them, you can own the host with little effort.

Then, What Do?

You should only run Docker images that fit one of the following criteria:

  • You have built the image yourself, from scratch
  • You have received the image from a trusted source
  • You have built the image from a third-party’s Dockerfile, which you have fully read, and understood

Also be careful with your “trusted” source. The base images are probably OK. Images released by, say, Red Hat or Canonical are probably OK. Images in Docker Hub, Docker’s official image registry, might not be. That’s not a hit on the Docker guys – it’s because there are over 15,000 images they’d have to have verified manually to be sure.

Don’t Forget Traditional Security Practices

Finally, you need to be concerned with the security of your Docker containers as well – just as concerned as if the stuff in the container were running on a regular server. For example, if someone were to compromise the webserver you have running in a container, and get escalated root privileges, then you effectively have a compromised host.
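
One habit that helps is not giving the container any more privilege than it needs in the first place – for example, dropping kernel capabilities the application never uses.  The capability list below is just an illustration, not a one-size-fits-all recipe:

docker run -d --name website \
    --cap-drop=MKNOD --cap-drop=NET_RAW --cap-drop=AUDIT_WRITE \
    -p 80:80 -p 443:443 website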

Be careful out there. Maintain the security of your containers, and only run images you can fully trust.

Puppet: "Error: Could not request certificate: stack level too deep"

This is going to be a stub, because I have no idea what the cause is. The “Error: Could not request certificate: stack level too deep” message when running Puppet has been such a pain in the rear end that I need to document it. I’m a firm believer that just the act of documenting a fix ensures that the problem will never arise again. Here’s hoping.

Symptom:

$ puppet agent -tv
Info: Not using expired certificate for ca from cache; expired at Tue May 20 00:16:15 UTC 2014
Error: Could not request certificate: stack level too deep
Exiting; failed to retrieve certificate and waitforcert is disabled

Sounds like an expired CA cert, but replacing it didn’t fix it. All the posts online with this error talk about a three-year-old activerecord bug, so that’s not valid either. Having no Google-fu solutions, I did the following:

Resolution (perhaps – again, I am unsure. Cargo Cult fix incoming):

On the puppet node:

$ rm -rf /var/lib/puppet/ssl

…and because we’re doing something weird at $WORK:

$ rm -rf /etc/puppet/ssl
$ puppet agent -t # Regenerates the SSL certificates for the agent

On the puppet master:

$ puppet cert sign <node certname> # Signs the new node certificate

…and that fixed the node, somehow. It’s got to be SSL related, but who knows how it got into that state, or why updating the CA cert didn’t fix it.

Public Service Announcement: Server Name Indication (SNI)

Server Name Indication, or SNI, is an extension to the TLS protocol. Its function, in plain English, is to allow a browser to tell a web server which website it’s coming to see before starting the SSL connection. The server then knows which SSL credentials to send back to the browser, and an SSL connection can be established.

SNI is supported by all modern browsers. In fact, it’s even supported by a ton of positively ancient browsers. It’s not, however, supported by any version of Internet Explorer on Windows XP.

BUT WINDOWS XP IS NOT EVEN SUPPORTED BY MICROSOFT

If you see an error like the one below, and you are using Internet Explorer on Windows XP, please, take it seriously. Don’t click through. But DO get a REAL browser, like Firefox or Chrome, and use that instead, for everything you do.
Internet Explorer 8 SSL Certificate Warning

SNI ERROR

You should ideally upgrade to a newer version of Windows if you’re still on XP. You’re not getting any patches. You will get hacked. It’s not a question of if. It’s not even a question of when. You probably already have been. But I know that’s not feasible for everyone.

A couple of stats for you. Only 5.29% of the entire internet is still using IE8. IE7 holds an impressive 0.17%, and IE6 is actually at 0.3%. All together, those three make up 5.76% of the internet.

The take home from all of this?

The rest of the world has moved on, and you should too.

Use Firefox or Chrome. Just do it.

(Stats: http://www.sitepoint.com/browser-trends-june-2014-chromes-ascent-continues/)


This was a technology-related rant post I made on Google+. I’m copying it to this blog, mostly so I have an easy-to-read record. This one was originally published on 25 June 2014: Public Service Announcement: Server Name Indication (SNI)