But that was six months ago. With the Mozilla Observatory having been publicly released almost two months ago, I was curious whether significant improvement had been made around the internet. After all, in those two months, the Observatory has scanned approximately 1.3M sites, totaling over 2.5M scans.
With that in mind, I ran a new scan of the Alexa Top 1M at the end of October, and here is what I found:
I'll admit, I was a bit taken aback by the overall improvement across the top million sites, especially as some of these security technologies are almost a decade old.
When we did our initial scan of the top million six months ago, a stunning 97.6% of websites were given a failing grade from the Observatory. Have those results changed since then, given the improvements above?
| Grade | April 2016 | October 2016 | % Change |
|-------|------------|--------------|----------|
| A+    | .003%      | .008%        | +167%    |
| A     | .006%      | .012%        | +100%    |
| B     | .202%      | .347%        | +72%     |
| C     | .321%      | .727%        | +126%    |
| D     | 1.87%      | 2.82%        | +51%     |
| F     | 97.60%     | 96.09%       | -1.5%    |
While a drop of 1.5 percentage points in failing grades might seem like only a small improvement, the latest Observatory survey contained 962,011 successful scans. With each percentage point representing nearly ten thousand sites, the fall from 97.6% to 96.09% represents approximately fifteen thousand top websites making significant improvements in their security.
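As a sanity check, that estimate can be reproduced with a quick back-of-the-envelope calculation using the figures from the survey above:

```python
# Back-of-the-envelope check: how many sites does the drop in failing
# grades represent, out of 962,011 successful scans?
scans = 962_011
drop_in_points = 97.60 - 96.09       # percentage points, from the table above
improved_sites = scans * drop_in_points / 100
print(round(improved_sites))         # roughly fifteen thousand sites
```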
I'm excited for the possibility of seeing further improvements as additional surveys are completed. Please share the Mozilla Observatory and help to make the internet a safer and more secure place for everyone!
Footnotes:
Allows 'unsafe-inline' in neither script-src nor style-src
Allows 'unsafe-inline' in style-src only
Amongst sites that set cookies
Disallows foreign origins from reading the domain's contents within user's context
Redirects from HTTP to HTTPS on the same domain, which allows HSTS to be set
Redirects from HTTP to HTTPS, regardless of the final domain
Today was a huge leap forward for humankind, for it marks the day that Let's Encrypt began supporting internationalized domain names. That means that you can now get certs with non-ASCII characters in them, which will be huge in helping Let's Encrypt improve HTTPS uptake in countries that use languages outside of the traditional ASCII character set.
How did I do this? First, you must transform Unicode (in this case, the 👉👁 emoji) into what is called punycode. Punycode is simply a method of representing Unicode characters in ASCII, the only characters supported by the domain name system (DNS). There are many ways to do it, including a simple tool at punycoder.com. For 👉👁, the punycode encoding is xn--mp8hpa.
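If you'd rather do the conversion programmatically, here's a minimal sketch using Python's built-in punycode codec. The `bücher` example below is the classic one from RFC 3492, not from this post, and the helper name is my own:

```python
# Minimal sketch: turn a single Unicode DNS label into its "xn--" punycode
# form using Python's built-in "punycode" codec.
def to_idna_label(label: str) -> str:
    """Encode one domain label (not a full dotted hostname) for DNS."""
    try:
        label.encode("ascii")
        return label  # already plain ASCII; no conversion needed
    except UnicodeEncodeError:
        return "xn--" + label.encode("punycode").decode("ascii")

# Classic example from RFC 3492:
print(to_idna_label("bücher"))  # -> xn--bcher-kva
```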
I simply set up DNS for xn--mp8hpa.pokeinthe.io, updated my nginx configuration to include xn--mp8hpa.pokeinthe.io in its server_name parameter, and requested a cert using my favorite Let's Encrypt client (lego):
root@pokeinthe:~# /opt/go/bin/lego -d pokeinthe.io -d www.pokeinthe.io -d 'xn--mp8hpa.pokeinthe.io' --email 'april@pokeinthe.io' --accept-tos -k ec384 --webroot /var/www/pokeinthe.io --path '/etc/lego' run
2016/10/21 17:30:02 [INFO][pokeinthe.io, www.pokeinthe.io, xn--ls8h.pokeinthe.io] acme: Obtaining bundled SAN certificate
2016/10/21 17:30:03 [INFO][pokeinthe.io] acme: Authorization already valid; skipping challenge
2016/10/21 17:30:03 [INFO][www.pokeinthe.io] acme: Authorization already valid; skipping challenge
2016/10/21 17:30:03 [INFO][xn--ls8h.pokeinthe.io] acme: Could not find solver for: tls-sni-01
2016/10/21 17:30:03 [INFO][xn--ls8h.pokeinthe.io] acme: Trying to solve HTTP-01
2016/10/21 17:30:04 [INFO][xn--ls8h.pokeinthe.io] The server validated our request
2016/10/21 17:30:04 [INFO][pokeinthe.io, www.pokeinthe.io, xn--ls8h.pokeinthe.io] acme: Validations succeeded; requesting certificates
2016/10/21 17:30:04 [INFO] acme: Requesting issuer cert from https://acme-v01.api.letsencrypt.org/acme/issuer-cert
2016/10/21 17:30:04 [INFO][pokeinthe.io] Server responded with a certificate.
It’s been over 25 years since Tim Berners-Lee created the first web browser, giving humanity the ability to easily access and share information with people both strange and familiar. And in the following 25 years of evolution, browser makers such as Mozilla, Microsoft, and Google have created numerous security technologies to protect both users and websites from bad actors: those whose goals are to steal user secrets, install malware, or otherwise ruin Berners-Lee’s vision of what the world wide web could be.
Unfortunately, due to their complexity, many of these technologies have struggled with adoption. Critical security technologies such as HTTPS are in use by only 40% of the world wide web, and adoption rates for other technologies only drop from there. Today, Mozilla and I are proud to release Observatory by Mozilla as a way to raise awareness of these security measures.
Observatory is a simple tool that allows site operators to quickly assess not just whether they are using these technologies, but also how well they’re being used. It uses a simple grading system to provide near-instant feedback on site improvements as they are made. To assist developers and administrators, Observatory also provides links to quality documentation that demonstrates how these technologies work.
We’re All Failing
Just how bad is adoption? Well, the Observatory has been used to scan over 1.3 million websites so far, and 91% of them don’t take advantage of modern security advances. These aren’t tiny sites either; among these 1.3 million websites are some of the most popular websites in the world.
Overall Results
| Result      | Count     |
|-------------|-----------|
| Passing     | 121,984   |
| Failing     | 1,212,826 |
| Total Scans | 1,334,810 |
When nine out of ten websites receive a failing grade, it’s clear that this is a problem for everyone. And by “everyone”, I’m including Mozilla — among our thousands of sites, a great many of them fail to pass. We’re working very hard to fix them all! In fact, we’ve already used the Observatory to help improve many of our web sites, including addons.mozilla.org, bugzilla.mozilla.org, and mozillians.org.
We’re using the Observatory as a way to democratize website security best practices, and increase transparency around the application (or lack) of existing security features. We hope to help everyone make things better.
How and Why We Built Observatory
A little over a year ago, I was fortunate to be offered a job at Mozilla, helping to improve the security of their many websites. Finally, I would have an easy job where I could put my feet up and relax all day. After all, Mozilla makes Firefox — one of the world’s most popular web browsers — so it was a certainty in my mind that their websites would be locked down, secure, and fully taking advantage of all the security technologies that Mozilla had helped create.
With a future of easy work secured, I wrote a small scanning tool to examine Mozilla’s websites and report just how well we were doing. As it examined each new site, I realized with growing dismay that my future would indeed not be filled with relaxation but instead with many tiring hours of actual work. It turned out that Mozilla — Mozilla! — didn’t do a better job of keeping up with modern website security practices than any other company or group I had worked with before.
Closing the Knowledge Gap
For most security engineers, the next several months would be exclusively devoted to getting their own sites set up properly. Luckily, because I work for Mozilla, I was in a unique position. After all, Mozilla's mission isn’t simply to make a great web browser, but to improve the internet as a whole. I was encouraged to work on my scanning tool and make it available for the world to use.
It turns out that knowledge of all these technologies was considerably more difficult to acquire than I had assumed, even for security professionals. In retrospect, it’s not surprising: these technologies are spread over dozens of standards documents, and while individual articles may talk about them, there wasn’t one place for site operators to go to learn what each technology does, how to implement it, and how important it is.
Guidelines and documentation are one thing: you can write documentation until you’re blue in the face, but if people aren’t interested in implementing it, adoption rates will still suffer. And so it was one day, while working on a tool to test these same Mozilla sites, that I struck upon an idea. A site called SSL Labs, which tests websites’ SSL/TLS configurations, had done immeasurable good for the internet by gamifying the process of improving your server’s configuration. Faced with a public letter grade, users, organizations, and companies quickly moved towards improving their configurations.
Drawing upon their experience, I went to work wrapping the Observatory in an easy-to-use website to make this knowledge available to more than just security professionals. Now anybody with a web browser, a URL, and a bit of curiosity can investigate the problems that their sites may have. By providing accessible and transparent results, every member of a development team, regardless of skill level and specialization, will be able to check the URLs that they own or depend on and help push for better security practices that benefit all of us.
How does it work?
Just visit the site, enter a domain and click “scan me”. That’s it! You’ll get a report back. Below you can see the report for addons.mozilla.org, the website that Firefox users use to download new addons for their browser. It’s one of Mozilla’s most important websites, and served as an early test case for the Observatory.
When we first scanned it, addons.mozilla.org got an F, just like 91% of all websites. Assisted by the constant feedback of a slowly increasing grade and clear guidance on what needed fixing, the engineers on the addons team quickly improved their grade to an A+.
Testing (and Fixing) Made Easy
The Observatory performs a multitude of checks across roughly a dozen tests. You may not have heard of many of them, and that’s because their documentation is spread across thousands of articles, hundreds of websites, and dozens of specifications. In fact, despite some of these standards being old enough to have children (see Appendix below), their usage rate amongst the internet’s million most popular websites ranges from 30% for HTTPS all the way down to a depressingly low .005% for Content Security Policy.
Each test you run with the Observatory not only tells you how well you’ve implemented a given standard, but it links back to Mozilla’s single-page web security guidelines, which have descriptions, reasonings, and implementation examples for every test. You can use these guidelines in concert with Observatory scans to continuously improve and monitor the state of your website. For administrators who have lots of sites to test or developers who want to integrate it into their development process, we offer both an API and command-line tools.
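As a sketch of what programmatic access might look like, here's a minimal helper that builds request URLs for the Observatory's v1 `analyze` endpoint. The endpoint shape and parameter names here are my assumptions; consult the official Observatory API documentation before relying on them:

```python
# Sketch: build request URLs for the Observatory's v1 "analyze" endpoint.
# Assumption: the API base URL and parameter names shown below; check the
# official documentation for the current API.
from urllib.parse import urlencode

API_BASE = "https://http-observatory.security.mozilla.org/api/v1"

def analyze_url(host: str, rescan: bool = False) -> str:
    """URL used both to start a scan (POST) and to poll its state (GET)."""
    params = {"host": host}
    if rescan:
        params["rescan"] = "true"
    return f"{API_BASE}/analyze?{urlencode(params)}"

print(analyze_url("mozilla.org"))
# -> https://http-observatory.security.mozilla.org/api/v1/analyze?host=mozilla.org
```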
We Can’t All Be Perfect
Of course, the results from the Observatory may not be perfectly accurate for your site — after all, the security needs of a site like GitHub are a good deal more complicated than those of a personal blog. By encouraging the adoption of these standards even for low-risk sites, we hope to make developers, system administrators, and security professionals around the world comfortable and familiar with them. With their newfound knowledge and experience, we hope to move from a 91% failure rate to a world with mostly passing grades, with more and more sites proudly displaying their A+ rating on the Observatory by Mozilla.
Want to help make the web a safer place? Let’s work together by testing your site today!
Appendix: A Brief History of Web Security Technologies
| Year        | Technology                     | Attack Vector                             | Adoption† |
|-------------|--------------------------------|-------------------------------------------|-----------|
| 1995        | Secure HTTP (HTTPS)            | Man-in-the-middle, network eavesdropping  | 29.6%     |
| 1997        | Secure Cookies                 | Network eavesdropping                     | 1.88%     |
| 2008        | X-Content-Type-Options         | MIME type confusion                       | 6.19%     |
| 2009 - 2011 | HttpOnly Cookies               | Cross-site scripting (XSS), session theft | 1.88%     |
| 2009 - 2011 | X-Frame-Options                | Clickjacking                              | 6.83%     |
| 2010        | X-XSS-Protection               | Cross-site scripting                      | 5.03%     |
| 2010 - 2015 | Content Security Policy        | Cross-site scripting                      | .012%     |
| 2012        | HTTP Strict Transport Security | Man-in-the-middle, network eavesdropping  | 1.75%     |
| 2013 - 2015 | HTTP Public Key Pinning        | Certificate misissuance                   | .414%     |
| 2014        | HSTS Preloading                | Man-in-the-middle                         | .158%     |
| 2014 - 2016 | Subresource Integrity          | Content Delivery Network (CDN) compromise | .015%     |
| 2015 - 2016 | SameSite Cookies               | Cross-site request forgery (CSRF)         | N/A       |
| 2015 - 2016 | Cookie Prefixes                | Cookie overrides by untrusted sources     | N/A       |
† Adoption rate amongst the Alexa top million websites as of April 2016.
The question of how X-Content-Type-Options: nosniff interacts with passive content came up today on Twitter. I had always assumed that browsers would block passive content where the MIME type was incorrect and nosniff was set, but I decided to test.
Below is a delightful image of a Snorlax. It's a PNG, but the extension is .jpg and nginx delivers its MIME type as image/jpeg:
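For anyone who wants to reproduce the setup, a minimal sketch of the nginx side might look like the following (the location is hypothetical; nginx's standard mime.types file already maps the .jpg extension to image/jpeg, so only the nosniff header needs to be added):

```nginx
# Hypothetical reproduction: any PNG saved with a .jpg extension under
# /images/ will be served as image/jpeg, with content sniffing forbidden.
location /images/ {
    add_header X-Content-Type-Options "nosniff";
}
```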
Here, Firefox (50+), Edge, and Internet Explorer block it, but Chrome and Safari display it just fine.
How about audio? With HTML5 audio, you get to tell it exactly what the MIME type is! Here is an mp3 (audio/mpeg), but it has a .ogg (audio/ogg) extension and I've set the HTML5 audio type attribute to audio/mp4:
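In markup, that triple mismatch looks something like this sketch (the filename is hypothetical; the file's actual bytes are MP3, its extension claims Ogg, and the type attribute claims MP4 audio):

```html
<!-- An MP3 file, misnamed with a .ogg extension, declared as audio/mp4 -->
<audio controls>
  <source src="music.ogg" type="audio/mp4">
</audio>
```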
IE11, Edge, and Safari fail due to MIME type confusion, but not because of X-Content-Type-Options. Firefox and Chrome? They play that sweet, sweet 1987 video game music just fine.
In conclusion, I should stop making assumptions about how browsers behave, particularly when it comes to quasi-standards like X-Content-Type-Options.
Last Saturday (May 14th), I participated in a panel at a local judge conference on the topic of women in Magic:
It was a ton of fun, and I'm always excited for an opportunity to collaborate alongside Morgan and the Magic the Amateuring cast. Unfortunately, it led to a huge uproar on Reddit, YouTube, and Twitter.
One topic in particular led to much outrage, and that was the subject of “offensive” playmats. I completely understand this: players invest a lot of time in choosing the perfect playmat that represents their hobbies and personalities. I know I've spent countless hours searching for the ideal image, and I ended up having to write a heartfelt email in Japanese to get the high-resolution version of the image that I use for my current playmat:
To that end, I wanted to help dispel some of the misconceptions around these playmats, at least from my experiences as a judge.
Who are you or anyone else to decide what is or is not offensive?
This is a completely fair question, and was certainly the most common and pointed of the questions I received. And it's totally justified: nobody wants to participate in a community where they are made to feel like they are being censored. And the line on what is or is not offensive is extremely hard to draw. For example, is this offensive?
This is a popular playmat that is currently being sold, and one that I have seen at an event that I've judged. While I don't find it personally offensive, it's exactly the sort of playmat that I feel doesn't belong at an event that is open to the public at large.
Of course, who am I to judge whether a playmat, card sleeve, or altered card belongs at an event or not? Well, it's part of the job. Judges have a document that outlines what behaviors do and don't constitute infractions. And while many rulings are very clear cut, e.g., drawing four cards off of Ancestral Recall, many infractions and situations require the judge to use their best judgment. For example, here is one of the criteria for Unsporting Conduct — Minor:
A player uses excessively vulgar and profane language.
As with playmats, what can be considered “vulgar” and “profane” varies extremely widely depending upon the audience. There's no strict definition of what these terms mean, and so it is left up to judges to determine whether or not such language falls under that classification in that circumstance. Playmats are no different. Judges use their best judgment — based upon the event and audience — to determine whether a playmat should or should not be usable at an event. This happens regardless of whether the judge takes action under their own initiative, or whether they are approached by a player. And when it does occur, it almost always involves the agreement of multiple members of the judge staff.
Why are you and other judges constantly imposing your morality on Magic players?
We're not! I've been judging since Innistrad block, and have judged everything from prereleases to Grand Prix. In all those events over the last five years, I've only seen or heard of a player being asked to put away their playmat about a half dozen times — roughly once a year. Although I regularly see playmats that make me, personally, a bit uncomfortable, taking action requires something truly extraordinary.
And despite concerns that players will approach me complaining about playmats featuring art like this:
…it's just something that doesn't happen. And if it did, I would inform the player that the playmat is perfectly acceptable and while I empathize with their concerns, I'm not going to ask the player to put it away.
Why should players be punished for liking what they like?
First of all, it should be clear that requests that a player put away their playmat are not an infraction and are not accompanied by any sort of penalty. Instead, players are politely asked to put their playmat into their backpack, and are provided with an alternative playmat if they don't have one available. And in all those half-dozen experiences, I've never seen a player express any serious outrage; these requests have always been a complete non-event.
L3 judge Rob McKenzie recounted a conversation that he had with a player about his playmat:
Rob: Uh, could you please turn it [a playmat featuring a guy giving the opposing player the middle finger] over?
Player: Oh! Yeah! [turns playmat over] Why did I think this was okay, and why did you not catch this in the last six rounds?
Everyone: much laughter
Is this not a violation of a player's first amendment right to free speech?
Restrictions on free speech are about the government restricting the free speech of the public. It has absolutely nothing to do with what art a player is allowed to display at a private event on private property that is nevertheless open to the public. I and other judges take a player's right to self-expression extremely seriously, and attempt to tread very lightly when it comes to these requests.
Overall, I want to reiterate that these events are extremely rare and that judges and the community are extremely lenient and forgiving when it comes to a player's choice of playmats and sleeves. Asking a player to put away their favorite playmat is only done with extreme circumspection, and typically involves the agreement of multiple members of the judge staff.