Adding support for Dark Mode to web applications

macOS Mojave, Apple's newest operating system, now features a Dark Mode interface. In Dark Mode, the entire system adopts a darker color palette. Many third-party desktop applications have already been updated to support Dark Mode.

Today, more and more organizations rely on cloud-based web applications to support their workforce: from Gmail and Google Docs to Salesforce, Drupal, WordPress, GitHub, Trello and Jira. Unlike native desktop applications, web applications can't adopt the Dark Mode interface. I personally spend more time in web applications than in desktop applications, so Dark Mode loses much of its value when web applications can't support it.

This could change as the next version of Safari adds a new CSS media query called prefers-color-scheme. Websites can use it to detect if Dark Mode is enabled.

I learned about the prefers-color-scheme media query on Jeff Geerling's blog, so I decided to give it a try on my own website. Because I use CSS variables to set the colors of my site, it took less than 30 minutes to add Dark Mode support on dri.es. Here is all the code it took:

@media (prefers-color-scheme: dark) {
  :root {
    --primary-font-color: #aaa;
    --secondary-font-color: #777;
    --background-color: #222;
    --table-zebra-color: #333;
    --table-hover-color: #444;
    --hover-color: #333;
  }
}

If you use macOS Mojave with Safari 12.1 or later and have Dark Mode enabled, my site will be shown with the darker color palette.
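Nothing in that CSS requires JavaScript, but if you also need to react to the preference from a script (say, to swap a syntax-highlighting theme), the same media query can be evaluated with matchMedia. A minimal sketch; the applyDarkTheme and applyLightTheme helpers are hypothetical:

// Watch the same media query from JavaScript.
// applyDarkTheme() and applyLightTheme() are hypothetical helpers.
var darkModeQuery = window.matchMedia('(prefers-color-scheme: dark)');

function handleColorScheme(query) {
  if (query.matches) {
    applyDarkTheme();
  } else {
    applyLightTheme();
  }
}

handleColorScheme(darkModeQuery);
// React if the user toggles Dark Mode while the page is open.
darkModeQuery.addListener(handleColorScheme);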

It will be interesting to see if any of the large web applications, like Gmail or Google Docs, will adopt Dark Mode. I bet they will, because it adds a level of polish that will be expected in the future.
Source: Dries Buytaert www.buytaert.net


Make Your Site Faster with Preconnect Hints


Requesting an external resource on a website or application incurs several round-trips before the browser can actually start to download the resource. These round-trips include the DNS lookup, TCP handshake, and TLS negotiation (if SSL is being used).

Depending on the page and the network conditions, these round-trips can add hundreds of milliseconds of latency, or more. If you are requesting resources from several different hosts, this can add up fast, and you could be looking at a page that feels more sluggish than it needs to be, especially on slower cellular connections, flaky wifi, or congested networks.
One of the easiest ways to speed up your website or application is to simply add preconnect hints for any hosts that you will be requesting assets from. These hints essentially tell the browser which origins will be used for resources, so that it can prep things by establishing all the necessary connections for those resources.
Below are a few scenarios where adding preconnect hints can make things faster!
Faster Display of Google Fonts
Google Fonts are great. The service is reliable and generally fast due to Google's global CDN. However, because @font-face rules must first be discovered in CSS files before making web font requests, there often can be a noticeable visual delay during page render. We can greatly reduce this delay by adding the preconnect hint below!

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

Once we do that, it’s easy to spot the difference in the waterfall charts below. Adding preconnect removes three round-trips from the critical rendering path and cuts more than a half second of latency.

This particular use case for preconnect has the most visible benefit, since it helps to reduce render blocking and improves time to paint.
Note that the font-face specification requires that fonts be loaded in "anonymous mode", which is why the crossorigin attribute is necessary on the preconnect hint.
Faster Video Display
If you have a video within the viewport on page load, or if you are lazy-loading videos further down on a page, then we can use preconnect to make the player assets load and thumbnail images display a little more quickly. For YouTube videos, use the following preconnect hints:

<link rel="preconnect" href="https://www.youtube.com">
<link rel="preconnect" href="https://i.ytimg.com">
<link rel="preconnect" href="https://i9.ytimg.com">
<link rel="preconnect" href="https://s.ytimg.com">

Roboto is currently used as the font in the YouTube player, so you’ll also want to preconnect to the Google fonts host if you aren’t already.

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

The same idea can also be applied to other video services, like Vimeo, where only two hosts are used: vimeo.com and vimeocdn.com.
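If you only learn a host at runtime, for example right before lazy-loading one of those embeds, the hint can also be injected from JavaScript. A rough sketch; the helper name and example hosts are just illustrations:

// Inject a preconnect hint at runtime; the hosts below are only examples.
function preconnectTo(url, isCrossOrigin) {
  var hint = document.createElement('link');
  hint.rel = 'preconnect';
  hint.href = url;
  if (isCrossOrigin) {
    // Needed for CORS-fetched assets such as web fonts.
    hint.crossOrigin = 'anonymous';
  }
  document.head.appendChild(hint);
}

// Warm up the YouTube thumbnail host before lazy-loading a player.
preconnectTo('https://i.ytimg.com');
// Fonts are fetched in anonymous mode, so pass the CORS flag.
preconnectTo('https://fonts.gstatic.com', true);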
Preconnect for Performance
These are just a few examples of how preconnect can be used. As you can see, it's a very simple improvement you can make that eliminates costly round-trips from request paths. You can also implement the hints via HTTP Link headers or invoke them via JavaScript. Browser support is good and getting better (supported in Chrome and Firefox, coming soon to Safari and Edge). Be sure to use it wisely, though: only preconnect to hosts you are certain assets will be requested from. Also, keep in mind that these are merely optimization "hints" for the browser and, as such, might not be acted on each and every time. If you've used preconnect for other use cases and have seen performance gains, let me know in the comments below!


Source: VigetInspire


Animating Border

Transitioning border for a hover state. Simple, right? You might be unpleasantly surprised.
The Challenge
The challenge is simple: building a button with an expanding border on hover.
This article will focus on genuine CSS tricks that would be easy to drop into any project without having to touch the DOM or use JavaScript. The methods covered here will follow these rules:

Single element (no helper divs, but pseudo-elements are allowed)
CSS only (no JavaScript)
Works for any size (not restricted to a specific width, height, or aspect ratio)
Supports transparent backgrounds
Smooth and performant transition

I proposed this challenge in the Animation at Work Slack and again on Twitter. Though there was no consensus on the best approach, I did receive some really clever ideas by some phenomenal developers.
Method 1: Animating border
The most straightforward way to animate a border is… well, by animating border.
.border-button {
  border: solid 5px #FC5185;
  transition: border-width 0.6s linear;
}

.border-button:hover { border-width: 10px; }
See the Pen CSS writing-mode experiment by Shaw (@shshaw) on CodePen.
Nice and simple, but there are some big performance issues.
Since border takes up space in the document’s layout, changing the border-width will trigger layout. Nearby elements will shift around because of the new border size, making the browser reposition those elements on every frame of the animation unless you set an explicit size on the button.
As if triggering layout wasn’t bad enough, the transition itself feels “stepped”. I’ll show why in the next example.
Method 2: Better border with outline
How can we change the border without triggering layout? By using outline instead! You’re probably most familiar with outline from removing it on :focus styles (though you shouldn’t), but outline is an outer line that doesn’t change an element’s size or position in the layout.
.border-button {
  outline: solid 5px #FC5185;
  transition: outline 0.6s linear;
  margin: 0.5em; /* Increased margin since the outline expands outside the element */
}

.border-button:hover { outline-width: 10px; }

A quick check in Dev Tools’ Performance tab shows the outline transition does not trigger layout. Regardless, the movement still seems stepped because browsers are rounding the border-width and outline-width values so you don’t get sub-pixel rendering between 5 and 6 or smooth transitions from 5.4 to 5.5.

Strangely, Safari often doesn’t render the outline transition and occasionally leaves crazy artifacts.

Method 3: Cut it with clip-path
First implemented by Steve Gardner, this method uses clip-path with calc to trim the border down so on hover we can transition to reveal the full border.
.border-button {
  /* Full width border and a clip-path visually cutting it down to the starting size */
  border: solid 10px #FC5185;
  clip-path: polygon(
    calc(0% + 5px) calc(0% + 5px), /* top left */
    calc(100% - 5px) calc(0% + 5px), /* top right */
    calc(100% - 5px) calc(100% - 5px), /* bottom right */
    calc(0% + 5px) calc(100% - 5px) /* bottom left */
  );
  transition: clip-path 0.6s linear;
}

.border-button:hover {
  /* Clip-path spanning the entire box so it's no longer hiding the full-width border. */
  clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
}

The clip-path technique is the smoothest and most performant method so far, but it does come with a few caveats. Rounding errors may cause a little unevenness, depending on the exact size. The border also has to be full size from the start, which may make exact positioning tricky.
Unfortunately there’s no IE/Edge support yet, though it seems to be in development. You can and should encourage Microsoft’s team to implement those features by voting for masks/clip-path to be added.
Method 4: linear-gradient background
We can simulate a border using a clever combination of multiple linear-gradient backgrounds properly sized. In total we have four separate gradients, one for each side. The background-position and background-size properties get each gradient in the right spot and the right size, which can then be transitioned to make the border expand.
.border-button {
  background-repeat: no-repeat;

  /* background-size values will repeat so we only need to declare them once */
  background-size:
    calc(100% - 10px) 5px, /* top & bottom */
    5px calc(100% - 10px); /* right & left */

  background-position:
    5px 5px, /* top */
    calc(100% - 5px) 5px, /* right */
    5px calc(100% - 5px), /* bottom */
    5px 5px; /* left */

  /* Since we're sizing and positioning with the above properties, we only need to set up a simple solid-color gradient for each side */
  background-image:
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185),
    linear-gradient(0deg, #FC5185, #FC5185);

  transition: all 0.6s linear;
  transition-property: background-size, background-position;
}

.border-button:hover {
  background-position: 0 0, 100% 0, 0 100%, 0 0;
  background-size: 100% 10px, 10px 100%, 100% 10px, 10px 100%;
}

This method is quite difficult to set up and has quite a few cross-browser differences. Firefox and Safari animate the faux-border smoothly, exactly the effect we’re looking for. Chrome’s animation is jerky and even more stepped than the outline and border transitions. IE and Edge refuse to animate the background at all, but they do give the proper border expansion effect.
Method 5: Fake it with box-shadow
Hidden within box-shadow's spec is a fourth value for spread-radius. Set all the other length values to 0px and use the spread-radius to build your border alternative that, like outline, won’t affect layout.
.border-button {
  box-shadow: 0px 0px 0px 5px #FC5185;
  transition: box-shadow 0.6s linear;
  margin: 0.5em; /* Increased margin since the box-shadow expands outside the element, like outline */
}

.border-button:hover { box-shadow: 0px 0px 0px 10px #FC5185; }

The transition with box-shadow is adequately performant and feels much smoother, except in Safari where it’s snapping to whole-values during the transition like border and outline.
Pseudo-Elements
Several of these techniques can be modified to use a pseudo-element instead, but pseudo-elements ended up causing some additional performance issues in my tests.
For the box-shadow method, the transition occasionally triggered paint in a much larger area than necessary. Reinier Kaper pointed out that a pseudo-element can help isolate the paint to a more specific area. As I ran further tests, box-shadow was no longer causing paint in large areas of the document and the complication of the pseudo-element ended up being less performant. The change in paint and performance may have been due to a Chrome update, so feel free to test for yourself.
I also could not find a way to utilize pseudo-elements in a way that would allow for transform based animation.
Why not transform: scale?
You may be firing up Twitter to helpfully suggest using transform: scale for this. Since transform and opacity are the best style properties to animate for performance, why not use a pseudo-element and have the border scale up & down?
.border-button {
  position: relative;
  margin: 0.5em;
  border: solid 5px transparent;
  background: #3E4377;
}

.border-button::after {
  content: '';
  display: block;
  position: absolute;
  top: 0; right: 0; bottom: 0; left: 0;
  border: solid 10px #FC5185;
  margin: -15px;
  z-index: -1;
  transition: transform 0.6s linear;
  transform: scale(0.97, 0.93);
}

.border-button:hover::after { transform: scale(1,1); }

There are a few issues:

The border will show through a transparent button. I forced a background on the button to show how the border is hiding behind the button. If your design calls for buttons with a full background, then this could work.
You can’t scale the border to specific sizes. Since the button’s dimensions vary with the text, there’s no way to animate the border from exactly 5px to 10px using only CSS. In this example I’ve done some magic-numbers on the scale to get it to appear right, but that won’t be universal.
The border animates unevenly because the button’s aspect ratio isn’t 1:1. This usually means the left/right will appear larger than the top/bottom until the animation completes. This may not be an issue depending on how fast your transition is, the button’s aspect ratio, and how big your border is.

If your button has set dimensions, Cher pointed out a clever way to calculate the exact scales needed, though it may be subject to some rounding errors.
Beyond CSS
If we loosen our rules a bit, there are many interesting ways you can animate borders. Codrops consistently does outstanding work in this area, usually utilizing SVGs and JavaScript. The end results are very satisfying, though they can be a bit complex to implement. Here are a few worth checking out:

Creative Buttons
Button Styles Inspiration
Animated Checkboxes
Distorted Button Effects
Progress Button Styles

Conclusion
There’s more to borders than simply border, but if you want to animate a border you may have some trouble. The methods covered here will help, though none of them are a perfect solution. Which you choose will depend on your project’s requirements, so I’ve laid out a comparison table to help you decide.

My recommendation would be to use box-shadow, which has the best overall balance of ease-of-implementation, animation effect, performance and browser support.
Do you have another way of creating an animated border? Perhaps a clever way to utilize transforms for moving a border? Comment below or reach me on Twitter to share your solution to the challenge.
Special thanks to Martin Pitt, Steve Gardner, Cher, Reinier Kaper, Joseph Rex, David Khourshid, and the Animation at Work community.

Animating Border is a post from CSS-Tricks
Source: CssTricks


Apple’s Proposal for HTML Template Instantiation

I'm sure I don't have the expertise to understand the finer nuances of this, but I like the spirit:

The HTML5 specification defines the template element but doesn't provide a native mechanism to instantiate it with some parts of it substituted, conditionally included, or repeated based on JavaScript values — as popular JavaScript frameworks such as Ember.js and Angular allow. As a consequence, there are many incompatible template syntaxes and semantics to do substitution and conditionals within templates — making it hard for web developers to combine otherwise reusable components when they use different templating libraries.
Whilst previously we all decided to focus on shadow DOM and the custom-elements API first, we think the time is right — now that shadow DOM and custom-elements API have been shipping in Safari and Chrome and are in development in Firefox — to propose and standardize an API to instantiate HTML templates.

Let the frameworks compete on speed and developer convenience in other ways.
Direct Link to Article — Permalink
Apple’s Proposal for HTML Template Instantiation is a post from CSS-Tricks
Source: CssTricks


Getting Around a Revoked Certificate in OSX

Let me start this off by saying this is not an ideal trick and one I hope no one else needs to use because it's a bad idea to work around a browser feature that's aimed to protect your security.
That said, I am in the process of testing a product and ran into a weird situation where our team had to revoke the SSL certificate we had assigned to our server. We're going to replace it but I have testing to do in the meantime and need access to our staging server, so waiting is kind of a blocker because, well, this message gets me nowhere.

Safari's warning for a site with a revoked certificate.
This message is different from the warnings browsers provide for sites without SSL. Those give you a built-in workaround by simply dismissing the warning. The difference is that a revoked certificate implies that the certificate's private key has been lost or compromised, making the site's security vulnerable to malware, phishing, etc. No bueno!
I reached out to Zach Tirrell and he helped me get around this issue with some tinkering that, given the right situation, might be helpful for others.
One last note before we dive in is that I'm working on a Mac. I'm not sure what the equivalent steps would be for other systems, so your mileage may vary.
Step 1: View the Certificate
Click the "Show Details" link in Safari to reveal an additional option to view the certificate.
Safari displays an option in the error message to view the certificate that is revoked. Click on that to open a dialogue that provides information on that certificate.
Step 2: Save the Certificate to the Desktop
Drag the certificate to the desktop to save it locally.
There's a bit of hidden UI in the dialogue that allows you to save the certificate. Click the certificate icon and literally drag it to the desktop.
Step 3: Add the Certificate to Keychain Access
Drag the certificate into the Keychain Access Certificates screen.
OSX's Keychain Access is typically known for storing a user's passwords, but it also manages secure notes and SSL certificates, among other protected system assets. You can open Keychain Access in your Applications, or search for it in Spotlight (CMD + Space).
Navigate to the Certificates panel and drag the certificate into it. The certificate is now installed and recognizable to Keychain Access.
Step 4: Trust the Certificate
Tell Keychain Access to "Always Trust" the certificate.
Double-click on the certificate to manage the system preferences for handling it. Expand the Trust panel and set the preference to Always Trust the certificate.
Keychain Access will likely ask you to confirm this change by entering your system password.
Revisit the Site
Now that the system has been instructed to trust the certificate, go ahead and re-visit the site. It should now load as if the certificate had not been revoked in the first place, though you may need to restart the browser to see the effect.
Both Safari and Chrome read permissions from Keychain Access, so those browsers should be well covered by these steps. Firefox has its own way of managing these permissions, which can be accessed on the browser's Preferences > Privacy Settings screen.
Wrapping Up
I'll state it again, but I really hope no one ever needs to use this trick. Browsers bake this security in for good reason and working around it is not only frowned upon, but downright risky. The type of scenario for needing to do this has got to be pretty darn rare and, for those, I sure hope this helps.

Getting Around a Revoked Certificate in OSX is a post from CSS-Tricks
Source: CssTricks


The Art of Comments

I believe commenting code is important. Most of all, I believe commenting is misunderstood. I'm tentative to write this article at all. I am not a commenting expert (if there is such a thing) and have definitely written code that was poorly commented, not commented at all, and have written comments that are superfluous.

I tweeted out the other day that "I hear conflicting opinions on whether or not you should write comments. But I get thank you's from junior devs for writing them so I'll continue." The responses I received were varied, but what caught my eye was that for every person agreeing that commenting was necessary, they all had different reasons for believing this.
Commenting is a more nuanced thing than we give it credit for. There is no nomenclature for commenting (not that there should be) but lumping all comments together is an oversimplification. The example in this comic that was tweeted in response is true:
From Abstrusegoose
This is where I think a lot of the misconceptions of comments lie. The book Clean Code by Robert C. Martin talks about this: that comments shouldn't be necessary because code should be self-documenting. That if you feel a comment is necessary, you should rewrite it to be more legible. I both agree and disagree with this. In the process of writing a comment, you can often find things that could be written better, but it's not an either/or. I might still be able to rewrite that code to be more self-documenting and also write a comment as well, for the following reason:
Code can describe how, but it cannot explain why.
This isn't a new concept, but it's a common theme I notice in helpful comments that I have come across. The ability to communicate something that the code cannot, or cannot concisely.
All of that said, there is just not one right way or one reason to write a comment. In order to better learn, let's dig into some of the many beneficial types of comments that might all serve a different purpose, followed by patterns we might want to avoid.
Good comments
What is the Why
Many examples of good comments can be housed under this category. Code explains what you'd like the computer to take action on. You'll hear people talk about declarative code because it describes the logic precisely but without describing all of the steps like a recipe. It lets the computer do the heavy lifting. We could also write our comments to be a bit more declarative:
/*
We had to write this function because the browser
interprets that everything is a box
*/
This doesn't describe what the code below it will do. It doesn't describe the actions it will take. But if you found a more elegant way of rewriting this function, you could feel confident in doing so because your code is likely the solution to the same problem in a different way.
Because of this, less maintenance is required (we'll dig more into this further on). If you found a better way to write this, you probably wouldn't need to rewrite the comment. You could also quickly understand whether you could rewrite another section of code to make this function unnecessary without spending a long time parsing all the steps to make the whole.
Clarifying something that is not legible by regular human beings
When you look at a long line of regex, can you immediately grok what's going on? If you can, you're in the minority, and even if you can at this moment, you might not be able to next year. What about a browser hack? Have you ever seen this in your code?
.selector { [;property: value;]; }
what about
var isFF = /a/[-1]=='a';
The first one targets Chrome ≤ 28, Safari ≤ 7, Opera ≥ 14, the second one is Firefox versions 2-3. I have written code that needs something like this. In order to avoid another maintainer or a future me assuming I took some Salvia before heading to work that day, it's great to tell people what the heck that's for. Especially in preparation for a time when we don't have to support that browser anymore, or the browser bug is fixed and we can remove it.
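To make the point concrete, even just restating the snippet above with a short note attached saves the next maintainer a trip to a search engine (a small sketch reusing that example):

// Browser hack: evaluates to true only in Firefox versions 2-3.
// Kept for a workaround that only those versions need; delete once they're unsupported.
var isFF = /a/[-1] == 'a';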
Something that is clear and legible to you is not necessarily clear to others
Who's smart? We are! Who writes clean code? We do! We don't have to comment, look how clear it is. The problem with this way of thinking is that we all have deeper knowledge in different areas. On small teams where people's skillsets and expertise are more of a circle than a Venn diagram, this is less of an issue than in big groups that change teams or get junior devs or interns frequently. But I'd probably still make room for those newcomers or for future you. On bigger teams where there are junior engineers or even just engineers from all types of backgrounds, people might not outright tell you they need you to comment, but many of these people will also express gratitude when you do.
Comments like chapters of a book
If this very article was written as one big hunk rather than broken up into sections with whitespace and smaller headings, it would be harder to skim through. Maybe not all of what I'm saying applies to you. Commenting sections or pieces allows people to skip to a part most relevant to them. But alas! You say. We have functional programming, imports, and modules for this now.
It's true! We break things down into smaller bits so that they are more manageable, and thank goodness for that. But even in smaller sections of code, you'll necessarily come to a piece that has to be a bit longer. Being able to quickly grasp what is relevant, or having a label for an area that's a bit different, can speed up productivity.
A guide to keep the logic straight while writing the code
This one is an interesting one! These are not the kind of comments you keep, and thus could also be found in the "bad patterns" section. Many times when I'm working on a bigger project with a lot of moving parts, breaking things up into the actions I'm going to take is extremely helpful. This could look like:
// get the request from the server and give an error if it failed
// do x thing with that request
// format the data like so
Then I can easily focus on one thing at a time. But when left in your code as is, these comments can be screwy to read later. They're so useful while you're writing it but once you're finished can merely be a duplication of what the code does, forcing the reader to read the same thing twice in two different ways. It doesn't make them any less valuable to write, though.
My perfect-world suggestion would be to use these comments at the time of writing and then revisit them after. As you delete them, you could ask "does this do this in the most elegant and legible way possible?" "Is there another comment I might replace this with that will explain why this is necessary?" "What would I think is the most useful thing to express to future me or other from another mother?"
This is OK to refactor
Have you ever had a really aggressive product deadline? Perhaps you implemented a feature that you yourself disagreed with, or they told you it was "temporary" and "just an AB test so it doesn't matter". *Cue horror music* … and then it lived on… forever…
As embarrassing as it might be, writing comments like
// this isn't my best work, we had to get it in by the deadline
is rather helpful. As a maintainer, when I run across comments like this, I'll save buckets of time trying to figure out what the heck is wrong with this person and envisioning ways I could sabotage their morning commute. I'll immediately stop trying to figure out what parts of this code I should preserve and instead focus on what can be refactored. The only warning I'll give is to try not to make this type of coding your fallback (we'll discuss this in detail further on).
Commenting as a teaching tool
Are you a PHP shop that just was given a client that's all Ruby? Maybe it's totally standard Ruby but your team is in slightly over their heads. Are you writing a tutorial for someone? These are the limited examples for when writing out the how can be helpful. The person is literally learning on the spot and might not be able to just infer what it's doing because they've never seen it before in their lives. Comment that sh*t. Learning is humbling enough without them having to ask you aloud what they could more easily learn on their own.
I StackOverflow'd the bejeezus outta this
Did you just copy paste a whole block of code from Stack Overflow and modify it to fit your needs? This isn't a great practice but we've all been there. Something I've done that's saved me in the past is to put the link to the post where I found it. But! Then we won't get credit for that code! You might say. You're optimizing for the wrong thing would be my answer.
Inevitably people have different coding styles and the author of the solution solved a problem in a different way than you would if you knew the area deeper. Why does this matter? Because later, you might be smarter. You might level up in this area and then you'll spend less time scratching your head at why you wrote it that way, or learn from the other person's approach. Plus, you can always look back at the post, and see if any new replies came in that shed more light on the subject. There might even be another, better answer later.
Bad Comments
Writing comments gets a bad rap sometimes, and that's because bad comments do indeed exist. Let's talk about some things to avoid while writing them.
They just say what it's already doing
John Papa made the accurate joke that this:
// if foo equals bar ...
if (foo === bar) {

} // end if
is a big pain. Why? Because you're actually reading everything twice in two different ways. It gives no more information, in fact, it makes you have to process things in two different formats, which is mental overhead rather than helpful. We've all written comments like this. Perhaps because we didn't understand it well enough ourselves or we were overly worried about reading it later. For whatever the reason, it's always good to take a step back and try to look at the code and comment from the perspective of someone reading it rather than you as the author, if you can.
It wasn't maintained
Bad documentation can be worse than no documentation. There's nothing more frustrating than coming across a block of code where the comment says something completely different than what's expressed below. Worse than time-wasting, it's misleading.
One solution to this is making sure that whatever code you are updating, you're maintaining the comments as well. And certainly having less and only more meaningful comments makes this upkeep less arduous. But commenting and maintaining comments are all part of an engineer's job. The comment is in your code, it is your job to work on it, even if it means deleting it.
If your comments are of good quality to begin with, and express why and not the how, you may find that this problem takes care of itself. For instance, if I write
// we need to FLIP this animation to be more performant in every browser
and refactor this code later to go from using getBoundingClientRect() to getBBox(), the comment still applies. The function exists for the same reason, but the details of how are what has changed.
You could have used a better name
I've definitely seen people write code (or done this myself) where the variable or functions names are one letter, and then comment what the thing is. This is a waste. We all hate typing, but if you are using a variable or function name repeatedly, I don't want to scan up the whole document where you explained what the name itself could do. I get it, naming is hard. But some comments take the place of something that could easily be written more precisely.
The comments are an excuse for not writing the code better to begin with
This is the crux of the issue for a lot of people. If you are writing code that is haphazard, and leaning back on your comments to clarify, this means the comments are holding back your programming. This is a horse-behind-the-cart kind of scenario. Unfortunately, even as the author it's not so easy to determine which is which.
We lie to ourselves in myriad ways. We might spend the time writing a comment that could be better spent making the code cleaner to begin with. We might also tell ourselves we don't need to comment our code because our code is well-written, even if other people might not agree.
There are lazy crutches in both directions. Just do your best. Try not to rely on just one correct way and instead write your code, and then read it. Try to envision you are both the author and maintainer, or how that code might look to a younger you. What information would you need to be as productive as possible?

People tend to, lately, get on one side or the other of "whether you should write comments", but I would argue that that conversation is not nuanced enough. Hopefully opening the floor to a deeper conversation about how to write meaningful comments bridges the gap.
Even so, it can be a lot to parse. Haha get it? Anyways, I'll leave you with some (better) humor. A while back there was a Stack Overflow post about the best comments people have written or seen. You can definitely waste some time in here. Pretty funny stuff.

The Art of Comments is a post from CSS-Tricks
Source: CssTricks


Help Your Users `Save-Data`

The breadth and depth of knowledge to absorb in the web performance space is ridiculous. At a minimum, I'm discovering something new nearly every week. Case in point: The Save-Data header, which I discovered via a Google Developers article by Ilya Grigorik.
If you're looking for the tl;dr version of how Save-Data works, let me oblige you: If you use Chrome's Data Saver extension on your desktop device or opt into data savings on the Android version of Chrome, every request that Chrome sends to a server will contain a Save-Data header with a value of On. While this doesn't do anything for your site out of the gate, you can consider it an opportunity. Given an opportunity to operate on a header like this, what would you do?
Let me give you a few ideas!

Change your image delivery strategy
I don't know if you've noticed, but images are often the largest chunk of the total payload of any given page. So perhaps the most impactful step you can take with Save-Data is to change how images are delivered. What I settled on for my blog was to rewrite requests for high DPI images to low DPI images. When I serve image sets like this on my site, I do so using the <picture> element to serve WebP images with a JPEG or PNG fallback like so:
<picture>
  <source srcset="/img/george-and-susan-1x.webp 1x, /img/george-and-susan-2x.webp 2x">
  <source srcset="/img/george-and-susan-1x.jpg 1x, /img/george-and-susan-2x.jpg 2x">
  <img src="/img/george-and-susan-1x.jpg" alt="LET'S NOT GET CRAZY HERE" width="320" height="240">
</picture>
This solution is backed by tech that's baked into modern browsers. The <picture> element delivers the optimal image format according to a browser's capabilities, while srcset will help the browser decide what looks best for any given device's screen. Unfortunately, both <picture> and srcset lack control over which image source to serve for users who want to save data. Nor should they! That's not the job of srcset or <picture>.
This is where Save-Data comes in handy. When users visit my site and send a Save-Data request header, I use mod_rewrite in Apache to serve low DPI images in lieu of high DPI ones:
RewriteCond %{HTTP:Save-Data} =on [NC]
RewriteRule ^(.*)-2x.(png|jpe?g|webp)$ $1-1x.$2 [R=302,L]
If you're unfamiliar with mod_rewrite, the first line is a condition that checks if the Save-Data header is present and contains a value of on. The [NC] flag merely tells mod_rewrite to perform a case-insensitive match. If the condition is met, the RewriteRule looks for any requests for a PNG, JPEG or WebP asset ending in -2x (the high DPI version), and redirects such requests to an asset ending in -1x (the low DPI version). The R=302 flag signals to the browser that this is a temporary (302) redirect. Since the user can toggle Data Saver on or off at will, we don't want to send a permanent (301) redirect, as the browser will cache it. Caching the redirect ensures that it will be in effect even if Data Saver is turned off later on.
Image delivery with Save-Data turned on
Aren't redirects bad for performance? Most of the time, yes. Redirects add to latency because it takes longer for a request to be resolved. In this instance, however, it's a fitting solution. We don't want to rewrite requests without redirecting them because then low-quality images get cached under the same URLs as high-quality ones. What if the user connects to a wifi network, turns off Data Saver, and returns to the site at a later time? The low-quality image will be served from the cache, even when Data Saver is turned off. That's not at all what we want. If the user isn't using Data Saver, we should assume the default image delivery strategy.
Of course, there are other ways you could change your image delivery in the presence of Save-Data. For example, Cory Dowdy wrote a post detailing how he uses Save-Data to serve lower quality images. Save-Data gives you lots of room to devise an image delivery strategy that makes the best sense for your site or application.
Opt out of server pushes
Server push is good stuff if you're on HTTP/2 and can use it. It allows you to send assets to the client before they ever know they need them. The problem with it, though, is it can be weirdly unpredictable in select scenarios. On my site, I use it to push CSS only, and this generally works well. Although, I do tailor my push strategy in Apache to avoid browsers that have issues with it (i.e., Safari) like so:
<If "%{HTTP_USER_AGENT} =~ /^(?=.*safari)(?!.*chrome).*/i">
Header add Link "</css/global.5aa545cb.css>; rel=preload; as=style; nopush"
</If>
<Else>
Header add Link "</css/global.5aa545cb.css>; rel=preload; as=style"
</Else>
In this instance, I'm saying "Hey, I want you to preemptively push my site's CSS to people, but only if they're not using Safari." Even if Safari users come by, they'll still get a preload resource hint (albeit with a nopush attribute to discourage my server from pushing anything).
Even so, it would behoove us to be extra cautious when it comes to pushing assets for users with Data Saver turned on. In the case of my blog, I decided I wouldn't push anything to anyone who had Data Saver enabled. To accomplish this, I made the following change to the initial <If> header:
<If "%{HTTP:Save-Data} == 'on' || %{HTTP_USER_AGENT} =~ /^(?=.*safari)(?!.*chrome).*/i">
This is the same as my initial configuration, but with an additional condition that says "Hey, if Save-Data is present and set to a value of on, don't be pushing that stylesheet. Just preload it instead." It might not be a big change, but it's one of those little things that could help visitors to avoid wasted data if a push was in vain for any reason.
Change how you deliver markup
With Save-Data, you could elect to change what parts of the document you deliver. This presents all sorts of opportunities, but before you can embark on this errand, you'll need to check for the Save-Data header in a back end language. In PHP, such a check might look something like this:
$saveData = (isset($_SERVER["HTTP_SAVE_DATA"]) && stristr($_SERVER["HTTP_SAVE_DATA"], "on") !== false) ? true : false;
On my blog, I use this $saveData boolean in various places to remove markup for images that aren't crucial to the content of a given page. For example, one of my articles has some animated GIFs and other humorous images that are funny little flourishes for users who don't mind them. But they are heavy, and not really necessary to communicate the central point of the article. I also remove the header illustration and a teeny tiny thumbnail of my book from the nav.
Image markup delivery with Data Saver on and off
From a payload perspective, this can certainly have a profound effect:
The potential effects of Data Saver on page payload
Another opportunity would be to use the aforementioned $saveData boolean to put a save-data class on the <html> element:
<html class="if($saveData === true) : echo("save-data"); endif; ?>">
With this class, I could then write styles that could change what assets are used in background-image properties, or any CSS property that references external assets. For example:
/* Just a regular ol' background image */
body {
  background-image: url("/images/bg.png");
}

/* A lower quality background image for users with Data Saver turned on */
.save-data body {
  background-image: url("/images/bg-lowsrc.png");
}
Markup isn't the only thing you could modify the delivery of. You could do the same for videos, or even do something as simple as delivering fewer search results per page. Or whatever you can think of!
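You can also make some of these calls on the client: the same preference is exposed to JavaScript as navigator.connection.saveData in browsers that implement the Network Information API (Chrome, at the time of writing). A small sketch:

// Feature-detect the Network Information API before reading saveData.
var prefersSaveData = 'connection' in navigator && navigator.connection.saveData === true;

if (prefersSaveData) {
  // Mirror the server-side hook by adding the same class to <html>.
  document.documentElement.classList.add('save-data');
}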
Conclusion
The Save-Data header gives you a great opportunity to lend a hand to users who are asking you to help them, well, save data. You could use URL rewriting to change media delivery, change how you deliver markup, or really just about anything that you could think of.
How will you help your users Save-Data? Leave a comment and let your ideas be heard!

Jeremy Wagner is the author of Web Performance in Action, available now from Manning Publications. Use promo code sswagner to save 42%.
Check him out on Twitter: @malchata

Help Your Users `Save-Data` is a post from CSS-Tricks
Source: CssTricks


iOS 11 Safari Feature Flags

I was rooting around in the settings for iOS Safari the other day and stumbled upon its "Experimental Features" which act just like feature flags in any other desktop browser. This is a new feature in iOS 11 and you can find it at:
Settings > Safari > Advanced > Experimental Features

Here's what it looks like today:

Right now you can toggle on really useful things like Link Preload, CSS Spring Animations and display: contents (which Rachel Andrew wrote about a while ago). All of which could very well come in handy if you want to test your work in iOS.

iOS 11 Safari Feature Flags is a post from CSS-Tricks
Source: CssTricks


Designing Websites for iPhone X

We've already covered "The Notch" and the options for dealing with it from an HTML and CSS perspective. There is a bit more detail available now, straight from the horse's mouth:

Safe area insets are not a replacement for margins.
... we want to specify that our padding should be the default padding or the safe area inset, whichever is greater. This can be achieved with the brand-new CSS functions min() and max() which will be available in a future Safari Technology Preview release.
@supports (padding: max(0px)) {
  .post {
    padding-left: max(12px, constant(safe-area-inset-left));
    padding-right: max(12px, constant(safe-area-inset-right));
  }
}
It is important to use @supports to feature-detect min and max, because they are not supported everywhere, and due to CSS’s treatment of invalid variables, to not specify a variable inside your @supports query.

Jeremy Keith's hot takes have been especially tasty, like:

You could add a bunch of proprietary CSS that Apple just pulled out of their ass.
Or you could make sure to set a background colour on your body element.
I recommend the latter.

And:

This could be a one-word article: don’t.
More specifically, don’t design websites for any specific device.

Although if this pushes support forward for min() and max() as generic functions, that's cool.
Direct Link to Article — Permalink
Designing Websites for iPhone X is a post from CSS-Tricks
Source: CssTricks


“The Notch” and CSS

Apple's iPhone X has a screen that covers the entire face of the phone, save for a "notch" to make space for a camera and various other components. The result is some awkward situations for screen design, like constraining websites to a "safe area" and showing white bars on the edges. It's not much of a trick to remove them, though; a background-color on the body will do. Or, to expand the website into the whole area (notch be damned), you can add viewport-fit=cover to your meta viewport tag.
<meta name="viewport" content="width=device-width, initial-scale=1.0, viewport-fit=cover">

Then it's on you to account for any overlapping that normally would have been handled by the safe area. There is some new CSS that helps you account for that. Stephen Radford documents:

In order to handle any adjustment that may be required iOS 11's version of Safari includes some constants that can be used when viewport-fit=cover is being used.

safe-area-inset-top
safe-area-inset-right
safe-area-inset-left
safe-area-inset-bottom

This can be added to margin, padding, or absolute position values such as top or left.
I added the following to the main container on the website.
padding: constant(safe-area-inset-top) constant(safe-area-inset-right) constant(safe-area-inset-bottom) constant(safe-area-inset-left);

There is another awkward situation with the notch, the safe area, and fixed positioning. Darryl Pogue reports:

Where iOS 11 differs from earlier versions is that the webview content now respects the safe areas. This means that if you have a header bar that is a fixed position element with top: 0, it will initially render 20px below the top of the screen: aligned to the bottom of the status bar. As you scroll down, it will move up behind the status bar. As you scroll up, it will again fall down below the status bar (leaving an awkward gap where content shows through in the 20px gap).
You can see just how bad it is in this video clip:

Fortunately this is also an easy fix, as adding viewport-fit=cover to the meta viewport tag takes care of it.
If you're going to cover that viewport, it's likely you'll have to get a little clever to avoid hidden content!

I think I’ve fixed the notch issue in landscape 🍾 #iphoneX pic.twitter.com/hGytyO3DRV
— Vojta Stavik (@vojtastavik) September 13, 2017

“The Notch” and CSS is a post from CSS-Tricks
Source: CssTricks


Google AMP Links Will Be Shared as Regular Links in iOS 11 Safari by @MattGSouthern

Apple will automatically convert AMP links to regular links when shared from the Safari browser in iOS 11. The post Google AMP Links Will Be Shared as Regular Links in iOS 11 Safari by @MattGSouthern appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/


Cross Browser Testing with CrossBrowserTesting

(This is a sponsored post.) Say you do your development work on a Mac, but you'd like to test out some designs in Microsoft Edge, which doesn't have a macOS version. Or vice versa! You work on a PC and you need to test on Safari, which no longer has a Windows version.
It's a classic problem, and one I've been dealing with for a decade. I remember buying a copy of Windows Vista, buying software to manage virtual machines, and spending days just getting a testing environment set up. You can still go down that road, if you, ya know, love pain. Or you can use CrossBrowserTesting and have a super robust testing environment for a huge variety of browsers/platforms/versions without ever leaving the comfort of your favorite browser.
It's ridiculously wonderful.

Getting started, the most basic thing you can do is pick a browser/platform, specify a URL, and fire it up!

Once the test is running, you can interact with it just as you might expect. Click, scroll, enter forms... it's a real browser! You have access to all the developer tools you might expect. So for example, you can pop open the DevTools in Edge and poke around to figure out a bug.
When you need to do testing like this, it's likely you're in development, not in production. So how do you test that? Certainly, CrossBrowserTesting's servers can't see your localhost! Well, they can if you let them. They have a browser extension that allows you to essentially one-click-allow testing of local sites.

One of the things I find myself reaching to CrossBrowserTesting for is for getting layouts working across different browsers. If you haven't heard, CSS grid is here! It's supported in a lot of browsers, but not all, and not in the exact same way.

CrossBrowserTesting is the perfect tool to help me with this. I can pop open what I'm working on there, make changes, and get it working just how I need to. Perhaps getting the layout replicated in a variety of browsers, or just as likely, crafting a fallback that is different but looks fine.
Notice in that screenshot above the demo is on CodePen. That's relevant as CrossBrowserTesting allows you to test on CodePen for free! It's a great use case for something like Live View, where you can be working on a Pen, save it, and have the changes immediately reflected in the Live View preview, which works great even through CrossBrowserTesting.
The live testing is great, but there is also screenshot-based visual testing, in case you want to, say, test a layout in dozens of browsers at once. Much more practical to view a thumbnail grid all at once!

And there is even more advanced stuff. CrossBrowserTesting has automated testing features that make functional testing and visual testing on real browsers simple. Using Selenium, an open source testing framework, I can write scripts in the language of my choice that mimic a real user's actions: logging into the app, purchasing a plan, and creating a new project. I can then run the tests on CrossBrowserTesting, making sure that these actions work across browsers and devices. Because CrossBrowserTesting is in the cloud, I can run my tests against production websites and applications that bring in revenue.
Functional testing can be a life saver, assuring that everything is working and your customers can properly interact with your product. Once these tests have run, I can even see videos or screenshots of failures, and start debugging from there.
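For a sense of what such a script can look like, here is a minimal selenium-webdriver sketch in JavaScript. It is not taken from CrossBrowserTesting's docs; the hub URL, credentials, and capability values are placeholders you would swap out per the service's documentation:

// Minimal functional test; the hub URL and capabilities below are placeholders.
const { Builder, By, until } = require('selenium-webdriver');

async function runSmokeTest() {
  const driver = await new Builder()
    .usingServer('https://USERNAME:AUTHKEY@hub.example.com/wd/hub')
    .withCapabilities({ browserName: 'MicrosoftEdge', platform: 'Windows 10' })
    .build();

  try {
    await driver.get('https://example.com/login');
    await driver.findElement(By.name('email')).sendKeys('test@example.com');
    await driver.findElement(By.css('button[type="submit"]')).click();
    // Fail the test if the dashboard never appears.
    await driver.wait(until.titleContains('Dashboard'), 10000);
  } finally {
    await driver.quit();
  }
}

runSmokeTest();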
Direct Link to Article — Permalink
Cross Browser Testing with CrossBrowserTesting is a post from CSS-Tricks
Source: CssTricks


More CSS Charts, with Grid & Custom Properties

I loved Robin's recent post, experimenting with CSS Grid for bar-charts. I've actually been using a similar approach on a client project, building a day-planner with CSS Grid. It's a different use-case, but the same basic technique: using grid layouts to visualize data.
(I recommend reading Robin's article first, since I'm building on top of his chart.)
Robin's approach relies on a large Sass loop to generate 100 potential class-names, even though less than 12 are used in the final chart. In production we'll want something more direct and performant, with better semantics, so I turned to definition lists and CSS Variables (aka Custom Properties) to build my charts.

Here's the final result:
See the Pen Bar chart in CSS grid + variables by Miriam Suzanne (@mirisuzanne) on CodePen.
Let's dig into it!
Markup First
Robin was proposing a conceptual experiment, so he left out many real-life data and accessibility concerns. Since I'm aiming for (fictional) production code, I want to make sure it will be semantic and accessible. I borrowed the year-axis idea from a comment on Robin's charts, and moved everything into a definition list. Each year is associated with a corresponding percentage in the list:
<dl class="chart">
<dt class="date">2000</dt>
<dd class="bar">45%</dd>

<dt class="date">2001</dt>
<dd class="bar">100%</dd>

<!-- etc… -->
</dl>
There are likely other ways to mark this up accessibly, but a dl seemed clean and clear to me – with all the data and associated pairs available as structured text. By default, this displays year/percentage pairs in a readable format. Now we have to make it beautiful.
Grid Setup
I started from Robin's grid, but my markup requires an extra row for the .date elements. I add that to the end of my grid-template-rows, and place my date/bar elements appropriately:
.chart {
  display: grid;
  grid-auto-columns: 1fr;
  grid-template-rows: repeat(100, 1fr) 1.4rem;
  grid-column-gap: 5px;
}

.date {
  /* fill the bottom row */
  grid-row-start: -2;
}

.bar {
  /* end before the bottom row */
  grid-row-end: -2;
}
Normally, I would use auto for that final row, but I needed an explicit height to make the background-grid work properly. Not worth the trade-off, probably, but I was having fun.
Passing Data to CSS
At this point, CSS has no access to the relevant numbers for styling a chart. We have no hooks for setting individual bars to different heights. Robin's solution involves individual class-names for every bar-value, with a Sass to loop to create custom classes for each value. That works, but we end up with a long list of classes we may never use. Is there a way to pass data into CSS more directly?
The most direct approach might be an inline style:
<dd class="bar" style="grid-row-start: 56">45%</dd>
The start position is the full number of grid lines (one more than the number of rows, or 101 in this case), minus the total value of the given bar: 101 - 45 = 56. That works fine, but now our markup and CSS are tightly coupled. With CSS Variables, we can pass in raw data, and let the CSS decide how it is used:
<dd class="bar" style="--start: 56">45%</dd>
In the CSS we can wire that up to grid-row-start:
.bar {
  grid-row-start: var(--start);
}
We've replaced the class-name loop, and bloated 100-class output, with a single line of dynamic CSS. Variables also remove the danger normally associated with CSS-in-HTML. While an inline property like grid-row-start will be nearly impossible to override from a CSS file, the inline variable can simply be ignored by CSS. There are no specificity/cascade issues to worry about.
Data-Driven Backgrounds
As a bonus, we can do more with the data than simply provide a grid-position – reusing it to style a fallback option, or even adjust the bar colors based on that same data:
.bar {
  background-image: linear-gradient(to right, green, yellow, orange, red);
  background-size: 1600% 100%;

  /* turn the start value into a percentage for position on the gradient */
  background-position: calc(var(--start) * 1%) 0;
}
I started with a horizontal background gradient from green to yellow, orange, and then red. Then I used background-size to make the gradient much wider than the bar – at least 200% per color (800%). Larger gradient-widths will make the fade less visible, so I went with 1600% to keep it subtle. Finally, using calc() to convert our start position (1-100) into a percentage, I can adjust the background position left-or-right based on the value – showing a different color depending on the percentage.
The background grid is also generated using variables and background-gradients. Sadly, subpixel rounding makes it a bit unreliable, but you can play with the --line-every value to change the level of detail. Take a look around, and see what other improvements you can make!
Adding Scale [without Firefox]
Right now, we're passing in a start position rather than a pure value ("56" for "45%"). That start position is based on an assumption that the overall scale is 100%. In order to make this a more flexible tool, I thought it would be fun to contain all the math, including the scale, inside CSS. Here's what it would look like:
<dl class="chart" style="--scale: 100">
<dt class="date">2000</dt>
<dd class="bar" style="--value: 45">45%</dd>

<dt class="date">2001</dt>
<dd class="bar" style="--value: 100">100%</dd>

<!-- etc… -->
</dl>
Then we can calculate the --start value in CSS, before applying it.
.bar {
  --start: calc(var(--scale) + 1 - var(--value));
  grid-row-start: var(--start);
}
With both the overall scale and individual values in CSS, we can manipulate either one individually. Change the scale to 200%, and watch the chart update accordingly:
See the Pen Bar Chart with Scale - no firefox by Miriam Suzanne (@mirisuzanne) on CodePen.
Both Chrome and Safari handle it beautifully, but Firefox seems unhappy about calc values in grid-positioning. I imagine they'll get it fixed eventually. For now, we'll just have to leave some calculations out of our CSS.
Sad, but we'll get used to it. 😉
There is much more we could do, providing fallbacks for older browsers – but I do think this is a viable option with potential to be accessible, semantic, performant, and beautiful. Thanks for starting that conversation, Robin!

More CSS Charts, with Grid & Custom Properties is a post from CSS-Tricks
Source: CssTricks


Full Page Screenshots in Browsers

It can be quite useful to get a "full page" screenshot in a browser. That is, not just the visible area. The visible area is pretty easy to get just by screenshotting the screen. A full page screenshot captures the entire page, even the parts you'd need to scroll around to see. You could take individual screenshots of the visible area and use a photo editing program to stitch them together, but that's a pain in the butt. Never mind the fact that it's extra tricky with things like fixed position elements.
Fortunately browsers can help us out a bit here.

Chrome
As of Chrome 59, it's built into DevTools. Here's a video. You use "Responsive Design Mode", then the menu option to get the full page screenshot is in the menu in the upper right.

If you need a "mobile" full length screenshot, just adjust the responsive view to the size you want and save again. Handy!
I've also had good luck with the Nimbus extension in Chrome.
Firefox
There is a setting in the Firefox DevTools that you need to turn on called Take a screenshot of the entire page under Available Toolbox Buttons. Flip that on, and you get a button.

Safari
Safari has File > Export as PDF, but it's pretty awkward. I have no idea how it decides what to export and what not to, the layout is weird, and it's broken into multiple pages for some reason.
The Awesome Screenshot extension seems to do the trick.

There are also some native apps like BrowseShot and Paparazzi!

Full Page Screenshots in Browsers is a post from CSS-Tricks
Source: CssTricks


Form Validation Part 3: A Validity State API Polyfill

In the last article in this series, we built a lightweight script (6kb, 2.7kb minified) using the Validity State API to enhance the native form validation experience. It works in all modern browsers and provides IE support back to IE10. But, there are some browser gotchas.
Not every browser supports every Validity State property. Internet Explorer is the main violator, though Edge lacks support for tooLong even though IE10 and above support it. And Chrome, Firefox, and Safari only recently got full support.
Today, we'll write a lightweight polyfill that extends our browser support all the way back to IE9, and adds missing properties to partially supporting browsers, without modifying any of the core code in our script.

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript
A Validity State API Polyfill (You are here!)
Validating the MailChimp Subscribe Form (Coming Soon!)

Let's get started.
Testing Support
The first thing we need to do is test the browser for Validity State support.
To do that, we'll use document.createElement('input') to create a form input, and then check to see if the validity property exists on that element.
// Make sure that ValidityState is supported
var supported = function () {
var input = document.createElement('input');
return ('validity' in input);
};
The supported() function will return true in supporting browsers, and false in unsupported ones.
It's not enough to just test for the validity property, though. We need to make sure the full range of Validity State properties exist as well.
Let's extend our supported() function to test for all of them.
// Make sure that ValidityState is supported in full (all features)
var supported = function () {
var input = document.createElement('input');
return ('validity' in input && 'badInput' in input.validity && 'patternMismatch' in input.validity && 'rangeOverflow' in input.validity && 'rangeUnderflow' in input.validity && 'stepMismatch' in input.validity && 'tooLong' in input.validity && 'tooShort' in input.validity && 'typeMismatch' in input.validity && 'valid' in input.validity && 'valueMissing' in input.validity);
};
Browsers like IE11 and Edge will fail this test, even though they support many Validity State properties.
Check input validity
Next, we'll write our own function to check the validity of a form field and return an object using the same structure as the Validity State API.
Setting up our checks
First, we'll set up our function and pass in the field as an argument.
// Generate the field validity object
var getValidityState = function (field) {
// Run our validity checks...
};
Next, let's setup some variables for a few things we're going to need to use repeatedly in our validity tests.
// Generate the field validity object
var getValidityState = function (field) {

// Variables
var type = field.getAttribute('type') || field.nodeName.toLowerCase(); // The field type
var isNum = type === 'number' || type === 'range'; // Is the field numeric
var length = field.value.length; // The field value length

};
Testing Validity
Now, we'll create the object that will contain all of our validity tests.
// Generate the field validity object
var getValidityState = function (field) {

// Variables
var type = field.getAttribute('type') || field.nodeName.toLowerCase();
var isNum = type === 'number' || type === 'range';
var length = field.value.length;

// Run validity checks
var checkValidity = {
badInput: false, // value does not conform to the pattern
rangeOverflow: false, // value of a number field is higher than the max attribute
rangeUnderflow: false, // value of a number field is lower than the min attribute
stepMismatch: false, // value of a number field does not conform to the step attribute
tooLong: false, // the user has edited a too-long value in a field with maxlength
tooShort: false, // the user has edited a too-short value in a field with minlength
typeMismatch: false, // value of an email or URL field is not an email address or URL
valueMissing: false // required field without a value
};

};
You'll notice that the valid property is missing from the checkValidity object. We can only know what it is after we've run our other tests.
We'll loop through each one. If any of them are true, we'll set our valid state to false. Otherwise, we'll set it to true. Then, we'll return the entire checkValidity object.
// Generate the field validity object
var getValidityState = function (field) {

// Variables
var type = field.getAttribute('type') || field.nodeName.toLowerCase();
var isNum = type === 'number' || type === 'range';
var length = field.value.length;

// Run validity checks
var checkValidity = {
badInput: false, // value does not conform to the pattern
rangeOverflow: false, // value of a number field is higher than the max attribute
rangeUnderflow: false, // value of a number field is lower than the min attribute
stepMismatch: false, // value of a number field does not conform to the step attribute
tooLong: false, // the user has edited a too-long value in a field with maxlength
tooShort: false, // the user has edited a too-short value in a field with minlength
typeMismatch: false, // value of an email or URL field is not an email address or URL
valueMissing: false // required field without a value
};

// Check if any errors
var valid = true;
for (var key in checkValidity) {
if (checkValidity.hasOwnProperty(key)) {
// If there's an error, change valid value
if (checkValidity[key]) {
valid = false;
break;
}
}
}

// Add valid property to validity object
checkValidity.valid = valid;

// Return object
return checkValidity;

};
Writing the Tests
Now we need to write each of our tests. Most of these will involve using a regex pattern with the test() method against the field value.
badInput
For badInput, if the field is numeric, has at least one character, and doesn't contain any numbers, we'll return true.
badInput: (isNum && length > 0 && !/[-+]?[0-9]/.test(field.value))
patternMismatch
The patternMismatch property is one of the easier ones to test. This property is true if the field has a pattern attribute, has at least one character, and the field value doesn't match the included pattern regex.
patternMismatch: (field.hasAttribute('pattern') && length > 0 && new RegExp(field.getAttribute('pattern')).test(field.value) === false)
rangeOverflow
The rangeOverflow property should return true if the field has a max attribute, is numeric, and has a value that's over the max value. We need to convert the string value of max to an integer using the parseInt() method.
rangeOverflow: (field.hasAttribute('max') && isNum && field.value > 1 && parseInt(field.value, 10) > parseInt(field.getAttribute('max'), 10))
rangeUnderflow
The rangeUnderflow property should return true if the field has a min attribute, is numeric, and has a value that's under the min value. Like with rangeOverflow, we need to convert the string value of min to an integer using the parseInt() method.
rangeUnderflow: (field.hasAttribute('min') && isNum && field.value > 1 && parseInt(field.value, 10) < parseInt(field.getAttribute('min'), 10))
stepMismatch
For the stepMismatch property, if the field is numeric, has the step attribute, and the attribute's value isn't any, we'll use the remainder operator (%) to make sure that the field value divided by the step has no remainder. If there's a remainder, we'll return true.
stepMismatch: (field.hasAttribute('step') && field.getAttribute('step') !== 'any' && isNum && Number(field.value) % parseFloat(field.getAttribute('step')) !== 0)
tooLong
With tooLong, we'll return true if the field has a maxLength attribute greater than 0, and the field value length is greater than the attribute value.
tooLong: (field.hasAttribute('maxLength') && field.getAttribute('maxLength') > 0 && length > parseInt(field.getAttribute('maxLength'), 10))
tooShort
Conversely, with tooShort, we'll return true if the field has a minLength attribute greater than 0, and the field value length is less than the attribute value.
tooShort: (field.hasAttribute('minLength') && field.getAttribute('minLength') > 0 && length > 0 && length < parseInt(field.getAttribute('minLength'), 10))
typeMismatch
The typeMismatch property is the most complicated to validate. We need to first make sure the field isn't empty. Then we need to run one regex test if the field type is email, and another if it's url. If it's one of those types and the field value does not match our regex pattern, we'll return true.
typeMismatch: (length > 0 && ((type === 'email' && !/^([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22))*x40([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d))*$/.test(field.value)) || (type === 'url' && !/^(?:(?:https?|HTTPS?|ftp|FTP)://)(?:S+(?::S*)?@)?(?:(?!(?:10|127)(?:.d{1,3}){3})(?!(?:169.254|192.168)(?:.d{1,3}){2})(?!172.(?:1[6-9]|2d|3[0-1])(?:.d{1,3}){2})(?:[1-9]d?|1dd|2[01]d|22[0-3])(?:.(?:1?d{1,2}|2[0-4]d|25[0-5])){2}(?:.(?:[1-9]d?|1dd|2[0-4]d|25[0-4]))|(?:(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)(?:.(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)*)(?::d{2,5})?(?:[/?#]S*)?$/.test(field.value))))
valueMissing
The valueMissing property is also a little complicated. First, we want to check if the field has the required attribute. If it does, we need to run one of three different tests, depending on the field type.
If it's a checkbox or radio button, we want to make sure that it's checked. If it's a select menu, we need to make sure a value is selected. If it's another type of input, we need to make sure it has a value.
valueMissing: (field.hasAttribute('required') && (((type === 'checkbox' || type === 'radio') && !field.checked) || (type === 'select' && field.options[field.selectedIndex].value < 1) || (type !=='checkbox' && type !== 'radio' && type !=='select' && length < 1)))
The complete set of tests
Here's what the completed checkValidity object looks like with all of its tests.
// Run validity checks
var checkValidity = {
badInput: (isNum && length > 0 && !/[-+]?[0-9]/.test(field.value)), // value of a number field is not a number
patternMismatch: (field.hasAttribute('pattern') && length > 0 && new RegExp(field.getAttribute('pattern')).test(field.value) === false), // value does not conform to the pattern
rangeOverflow: (field.hasAttribute('max') && isNum && field.value > 1 && parseInt(field.value, 10) > parseInt(field.getAttribute('max'), 10)), // value of a number field is higher than the max attribute
rangeUnderflow: (field.hasAttribute('min') && isNum && field.value > 1 && parseInt(field.value, 10) < parseInt(field.getAttribute('min'), 10)), // value of a number field is lower than the min attribute
stepMismatch: (field.hasAttribute('step') && field.getAttribute('step') !== 'any' && isNum && Number(field.value) % parseFloat(field.getAttribute('step')) !== 0), // value of a number field does not conform to the step attribute
tooLong: (field.hasAttribute('maxLength') && field.getAttribute('maxLength') > 0 && length > parseInt(field.getAttribute('maxLength'), 10)), // the user has edited a too-long value in a field with maxlength
tooShort: (field.hasAttribute('minLength') && field.getAttribute('minLength') > 0 && length > 0 && length < parseInt(field.getAttribute('minLength'), 10)), // the user has edited a too-short value in a field with minlength
typeMismatch: (length > 0 && ((type === 'email' && !/^([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22))*x40([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d))*$/.test(field.value)) || (type === 'url' && !/^(?:(?:https?|HTTPS?|ftp|FTP)://)(?:S+(?::S*)?@)?(?:(?!(?:10|127)(?:.d{1,3}){3})(?!(?:169.254|192.168)(?:.d{1,3}){2})(?!172.(?:1[6-9]|2d|3[0-1])(?:.d{1,3}){2})(?:[1-9]d?|1dd|2[01]d|22[0-3])(?:.(?:1?d{1,2}|2[0-4]d|25[0-5])){2}(?:.(?:[1-9]d?|1dd|2[0-4]d|25[0-4]))|(?:(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)(?:.(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)*)(?::d{2,5})?(?:[/?#]S*)?$/.test(field.value)))), // value of a email or URL field is not an email address or URL
valueMissing: (field.hasAttribute('required') && (((type === 'checkbox' || type === 'radio') && !field.checked) || (type === 'select' && field.options[field.selectedIndex].value < 1) || (type !=='checkbox' && type !== 'radio' && type !=='select' && length < 1))) // required field without a value
};
Special considerations for radio buttons
In supporting browsers, required will only fail on a radio button if no elements in the group have been checked. Our polyfill, as it's currently written, will return valueMissing as true on an unchecked radio button even if another button in the group is checked.
To fix this, we need to get every button in the group. If one of them is checked, we'll validate that radio button instead of the one that lost focus.
// Generate the field validity object
var getValidityState = function (field) {

// Variables
var type = field.getAttribute('type') || field.nodeName.toLowerCase(); // The field type
var isNum = type === 'number' || type === 'range'; // Is the field numeric
var length = field.value.length; // The field value length

// If radio group, get selected field
if (field.type === 'radio' && field.name) {
var group = document.getElementsByName(field.name);
if (group.length > 0) {
for (var i = 0; i < group.length; i++) {
if (group[i].form === field.form && group[i].checked) {
field = group[i];
break;
}
}
}
}

...

};
Adding the validity property to form fields
Finally, if the Validity State API isn't fully supported, we want to add or override the validity property. We'll do this using the Object.defineProperty() method.
// If the full set of ValidityState features aren't supported, polyfill
if (!supported()) {
Object.defineProperty(HTMLInputElement.prototype, 'validity', {
get: function ValidityState() {
return getValidityState(this);
},
configurable: true,
});
}
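Note that this snippet only patches input elements. If your forms also include select menus and textareas, you may want to apply the same getter to those prototypes too. Here's a rough sketch of that idea – it isn't part of the original script, so treat it as an assumption about what your forms need:
// Polyfill inputs, selects, and textareas (hypothetical extension of the snippet above)
if (!supported()) {
[HTMLInputElement, HTMLSelectElement, HTMLTextAreaElement].forEach(function (constructor) {
Object.defineProperty(constructor.prototype, 'validity', {
get: function ValidityState() {
return getValidityState(this);
},
configurable: true
});
});
}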
Putting it all together
Here's the polyfill in its entirety. To keep our functions out of the global scope, I've wrapped it in an IIFE (immediately invoked function expression).
;(function (window, document, undefined) {

'use strict';

// Make sure that ValidityState is supported in full (all features)
var supported = function () {
var input = document.createElement('input');
return ('validity' in input && 'badInput' in input.validity && 'patternMismatch' in input.validity && 'rangeOverflow' in input.validity && 'rangeUnderflow' in input.validity && 'stepMismatch' in input.validity && 'tooLong' in input.validity && 'tooShort' in input.validity && 'typeMismatch' in input.validity && 'valid' in input.validity && 'valueMissing' in input.validity);
};

/**
* Generate the field validity object
* @param {Node} field The field to validate
* @return {Object} The validity object
*/
var getValidityState = function (field) {

// Variables
var type = field.getAttribute('type') || field.nodeName.toLowerCase();
var isNum = type === 'number' || type === 'range';
var length = field.value.length;
var valid = true;
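
// If radio group, get selected field (the radio-button handling described above)
if (field.type === 'radio' && field.name) {
var group = document.getElementsByName(field.name);
if (group.length > 0) {
for (var i = 0; i < group.length; i++) {
if (group[i].form === field.form && group[i].checked) {
field = group[i];
break;
}
}
}
}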

// Run validity checks
var checkValidity = {
badInput: (isNum && length > 0 && !/[-+]?[0-9]/.test(field.value)), // value of a number field is not a number
patternMismatch: (field.hasAttribute('pattern') && length > 0 && new RegExp(field.getAttribute('pattern')).test(field.value) === false), // value does not conform to the pattern
rangeOverflow: (field.hasAttribute('max') && isNum && field.value > 1 && parseInt(field.value, 10) > parseInt(field.getAttribute('max'), 10)), // value of a number field is higher than the max attribute
rangeUnderflow: (field.hasAttribute('min') && isNum && field.value > 1 && parseInt(field.value, 10) < parseInt(field.getAttribute('min'), 10)), // value of a number field is lower than the min attribute
stepMismatch: (field.hasAttribute('step') && field.getAttribute('step') !== 'any' && isNum && Number(field.value) % parseFloat(field.getAttribute('step')) !== 0), // value of a number field does not conform to the step attribute
tooLong: (field.hasAttribute('maxLength') && field.getAttribute('maxLength') > 0 && length > parseInt(field.getAttribute('maxLength'), 10)), // the user has edited a too-long value in a field with maxlength
tooShort: (field.hasAttribute('minLength') && field.getAttribute('minLength') > 0 && length > 0 && length < parseInt(field.getAttribute('minLength'), 10)), // the user has edited a too-short value in a field with minlength
typeMismatch: (length > 0 && ((type === 'email' && !/^([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22))*x40([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d))*$/.test(field.value)) || (type === 'url' && !/^(?:(?:https?|HTTPS?|ftp|FTP)://)(?:S+(?::S*)?@)?(?:(?!(?:10|127)(?:.d{1,3}){3})(?!(?:169.254|192.168)(?:.d{1,3}){2})(?!172.(?:1[6-9]|2d|3[0-1])(?:.d{1,3}){2})(?:[1-9]d?|1dd|2[01]d|22[0-3])(?:.(?:1?d{1,2}|2[0-4]d|25[0-5])){2}(?:.(?:[1-9]d?|1dd|2[0-4]d|25[0-4]))|(?:(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)(?:.(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)*)(?::d{2,5})?(?:[/?#]S*)?$/.test(field.value)))), // value of a email or URL field is not an email address or URL
valueMissing: (field.hasAttribute('required') && (((type === 'checkbox' || type === 'radio') && !field.checked) || (type === 'select' && field.options[field.selectedIndex].value < 1) || (type !=='checkbox' && type !== 'radio' && type !=='select' && length < 1))) // required field without a value
};

// Check if any errors
for (var key in checkValidity) {
if (checkValidity.hasOwnProperty(key)) {
// If there's an error, change valid value
if (checkValidity[key]) {
valid = false;
break;
}
}
}

// Add valid property to validity object
checkValidity.valid = valid;

// Return object
return checkValidity;

};

// If the full set of ValidityState features aren't supported, polyfill
if (!supported()) {
Object.defineProperty(HTMLInputElement.prototype, 'validity', {
get: function ValidityState() {
return getValidityState(this);
},
configurable: true,
});
}

})(window, document);
Adding this to your site will extend the Validity State API back to IE9, and add missing properties to partially supporting browsers. (You can download the polyfill on GitHub, too.)
The form validation script we wrote in the last article also made use of the classList API, which is supported in all modern browsers and IE10 and above. To truly get IE9+ support, we should also include the classList.js polyfill from Eli Grey.
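If you'd rather not ship that polyfill to browsers that don't need it, you can feature-detect classList and load the file conditionally. Here's a rough sketch – the file path is illustrative, so adjust it to wherever you host the polyfill:
// Load Eli Grey's classList.js polyfill only when classList is missing
if (!('classList' in document.createElement('_'))) {
var polyfill = document.createElement('script');
polyfill.src = '/js/classList.js'; // illustrative path
document.head.appendChild(polyfill);
}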

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript
A Validity State API Polyfill (You are here!)
Validating the MailChimp Subscribe Form (Coming Soon!)

Form Validation Part 3: A Validity State API Polyfill is a post from CSS-Tricks
Source: CssTricks


Form Validation Part 1: Constraint Validation in HTML

Most JavaScript form validation libraries are large, and often require other libraries like jQuery. For example, MailChimp's embeddable form includes a 140kb validation file (minified). It includes the entire jQuery library, a third-party form validation plugin, and some custom MailChimp code. In fact, that setup is what inspired this new series about modern form validation. What new tools do we have these days for form validation? What is possible? What is still needed?
In this series, I'm going to show you two lightweight ways to validate forms on the front end. Both take advantage of newer web APIs. I'm also going to teach you how to push browser support for these APIs back to IE9 (which provides you with coverage for 99.6% of all web traffic worldwide).
Finally, we'll take a look at MailChimp's sign-up form, and provide the same experience with 28× (2,800%) less code.

It's worth mentioning that front-end form validation can be bypassed. You should always validate your code on the server, too.
Alright, let's get started!

Article Series:

Constraint Validation in HTML (You are here!)
The Constraint Validation API in JavaScript (Coming Soon!)
A Validity State API Polyfill (Coming Soon!)
Validating the MailChimp Subscribe Form (Coming Soon!)

The Incredibly Easy Way: Constraint Validation
Through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern), browsers can natively validate form inputs and alert users when they're doing it wrong.
Support for the various input types and attributes varies wildly from browser to browser, but I'll provide some tricks and workarounds to maximize browser compatibility.
Basic Text Validation
Let's say you have a text field that is required for a user to fill out before the form can be submitted. Add the required attribute, and supporting browsers will both alert users who don't fill it out and refuse to let them submit the form.
<input type="text" required>
A required text input in Chrome.
Do you need the response to be a minimum or maximum number of characters? Use minlength and maxlength to enforce those rules. This example requires a value to be between 3 and 12 characters in length.
<input type="text" minlength="3" maxlength="12">
Error message for the wrong number of characters in Firefox.
The pattern attribute lets you run regex validations against input values. If you, for example, required passwords to contain at least 1 uppercase character, 1 lowercase character, and 1 number, the browser can validate that for you.
<input type="password" pattern="^(?=.*d)(?=.*[a-z])(?=.*[A-Z])(?!.*s).*$" required>
Wrong format error message in Safari.
If you provide a title attribute with the pattern, the title value will be included with any error message if the pattern doesn't match.
<input type="password" pattern="^(?=.*d)(?=.*[a-z])(?=.*[A-Z])(?!.*s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>
Wrong format message in Opera, with title text explaining RegEx.
You can even combine it with minlength and (as seems to be the case with banks) maxlength to enforce a minimum or maximum length.
<input type="password" minlength="8" pattern="^(?=.*d)(?=.*[a-z])(?=.*[A-Z])(?!.*s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>
See the Pen Form Validation: Basic Text by Chris Ferdinandi (@cferdinandi) on CodePen.
Validating Numbers
The number input type only accepts numbers. Browsers will either refuse to accept letters and other characters, or alert users if they use them. Browser support for input[type="number"] varies, but you can supply a pattern as a fallback.
<input type="number" pattern="[-+]?[0-9]">
By default, the number input type allows only whole numbers.
You can allow floats (numbers with decimals) with the step attribute. This tells the browser what numeric interval to accept. It can be any numeric value (for example, 0.1), or any if you want to allow any number.
You should also modify your pattern to allow decimals.
<input type="number" step="any" pattern="[-+]?[0-9]*[.,]?[0-9]+">
If the number should fall within a range of values, the browser can validate that with the min and max attributes. You should also modify your pattern to match. For example, if a number has to be between 3 and 42, you would do this:
<input type="number" min="3" max="42" pattern="[3-42]">
See the Pen Form Validation: Numbers by Chris Ferdinandi (@cferdinandi) on CodePen.
Validating Email Addresses and URLs
The email input type will alert users if the supplied email address is invalid. Like with the number input type, you should supply a pattern for browsers that don't support this input type.
Email validation regex patterns are a hotly debated issue. I tested a ton of them specifically looking for ones that met RFC822 specs. The one used below, by Richard Willis, was the best one I found.
<input type="email" pattern="^([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22))*x40([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d))*$">
One "gotcha" with the the email input type is that it allows email addresses without a TLD (the "example.com" part of "email@example.com"). This is because RFC822, the standard for email addresses, allows for localhost emails which don't need one.
If you want to require a TLD (and you likely do), you can modify the pattern to force a domain extension like so:
<input type="email" title="The domain portion of the email address is invalid (the portion after the @)." pattern="^([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22))*x40([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d))*(.w{2,})+$">
Similarly, the url input type will alert users if the supplied value is not a valid URL. Once again, you should supply a pattern for browsers that don't support this input type. The one included below was adapted from a project by Diego Perini, and is the most robust I've encountered.
<input type="url" pattern="^(?:(?:https?|HTTPS?|ftp|FTP)://)(?:S+(?::S*)?@)?(?:(?!(?:10|127)(?:.d{1,3}){3})(?!(?:169.254|192.168)(?:.d{1,3}){2})(?!172.(?:1[6-9]|2d|3[0-1])(?:.d{1,3}){2})(?:[1-9]d?|1dd|2[01]d|22[0-3])(?:.(?:1?d{1,2}|2[0-4]d|25[0-5])){2}(?:.(?:[1-9]d?|1dd|2[0-4]d|25[0-4]))|(?:(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)(?:.(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)*)(?::d{2,5})?(?:[/?#]S*)?$">
Like the email attribute, url does not require a TLD. If you don't want to allow for localhost URLs, you can update the pattern to check for a TLD, like this.
<input type="url" title="The URL is a missing a TLD (for example, .com)." pattern="^(?:(?:https?|HTTPS?|ftp|FTP)://)(?:S+(?::S*)?@)?(?:(?!(?:10|127)(?:.d{1,3}){3})(?!(?:169.254|192.168)(?:.d{1,3}){2})(?!172.(?:1[6-9]|2d|3[0-1])(?:.d{1,3}){2})(?:[1-9]d?|1dd|2[01]d|22[0-3])(?:.(?:1?d{1,2}|2[0-4]d|25[0-5])){2}(?:.(?:[1-9]d?|1dd|2[0-4]d|25[0-4]))|(?:(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)(?:.(?:[a-zA-Zu00a1-uffff0-9]-*)*[a-zA-Zu00a1-uffff0-9]+)*(?:.(?:[a-zA-Zu00a1-uffff]{2,})).?)(?::d{2,5})?(?:[/?#]S*)?$">
See the Pen Form Validation: Email & URLs by Chris Ferdinandi (@cferdinandi) on CodePen.
Validating Dates
There are a few really awesome input types that not only validate dates but also provide native date pickers. Unfortunately, Chrome and Mobile Safari are the only two browsers that implement it. (I've been waiting years for Firefox to adopt this feature!) Other browsers just display it as a text field.
As always, we can provide a pattern to catch browsers that don't support it.
The date input type is for standard day/month/year dates.
<input type="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">
In supporting browsers, the selected date is displayed like this: MM/DD/YYYY. But the value is actually in this format: YYYY-MM-DD.
You should provide guidance to users of unsupported browsers about this format—something like, "Please use the YYYY-MM-DD format." However, you don't want people visiting with Chrome or Mobile Safari to see this since that's not the format they'll see, which is confusing.
See the Pen Form Validation: Dates by Chris Ferdinandi (@cferdinandi) on CodePen.
A Simple Feature Test
We can write a simple feature test to check for support, though. We'll create an input[type="date"] element, add a value that's not a valid date, and then see if the browser sanitizes it or not. You can then hide the descriptive text for browsers that support the date input type.
<label for="date">Date <span class="description-date">YYYY-MM-DDD</span></label>
<input type="date" id="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">

<script>
var isDateSupported = function () {
var input = document.createElement('input');
var value = 'a';
input.setAttribute('type', 'date');
input.setAttribute('value', value);
return (input.value !== value);
};

if (isDateSupported()) {
document.documentElement.className += ' supports-date';
}
</script>

<style>
.supports-date .description-date {
display: none;
}
</style>
See the Pen Form Validation: Dates with a Feature Test by Chris Ferdinandi (@cferdinandi) on CodePen.
Other Date Types
The time input type lets visitors select a time, while the month input type lets them choose from a month/year picker. Once again, we'll include a pattern for non-supporting browsers.
<input type="time" pattern="(0[0-9]|1[0-9]|2[0-3])(:[0-5][0-9])">
<input type="month" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2]))">
The time input displays time in 12-hour am/pm format, but the value is 24-hour military time. The month input is displayed as May 2017 in supporting browsers, but the value is in YYYY-MM format.
Just like with input[type="date"], you should provide a pattern description that's hidden in supporting browsers.
See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.
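The same feature test we used for dates can be generalized to any input type. Here's a small sketch – the helper name and the class names are my own, not from the demos:
// Returns true if the browser supports a given input type
// (unsupported types fall back to text and keep the bogus value)
var isInputTypeSupported = function (type) {
var input = document.createElement('input');
input.setAttribute('type', type);
input.setAttribute('value', 'a');
return (input.value !== 'a');
};

if (isInputTypeSupported('time')) {
document.documentElement.className += ' supports-time';
}
if (isInputTypeSupported('month')) {
document.documentElement.className += ' supports-month';
}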
This seems super easy. What's the catch?
While the Constraint Validation API is easy and lightweight, it does have some drawbacks.
You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.
Behavior is also inconsistent across browsers. Chrome doesn't display any errors until you try to submit the form. Firefox displays a red border when the field loses focus, but only displays error messages on hover (whereas WebKit browsers keep the errors persistent).
User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience. Bonus CSS tip: style invalid selectors only when they aren't currently being edited with :not(:focus):invalid { }.
Unfortunately, none of the browsers behave exactly this way by default.
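If you want to try that bonus tip, the CSS is only a few lines – a minimal sketch, with the red border purely illustrative:
/* only flag a field once the user has left it
(note: empty required fields will match as soon as the page loads) */
input:not(:focus):invalid {
border-color: red;
}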
In the next article in this series, I'll show you how to use the native Constraint Validation API to bolt-in our desired UX with some lightweight JavaScript. No third-party library required!

Article Series:

Constraint Validation in HTML (You are here!)
The Constraint Validation API in JavaScript (Coming Soon!)
A Validity State API Polyfill (Coming Soon!)
Validating the MailChimp Subscribe Form (Coming Soon!)

Form Validation Part 1: Constraint Validation in HTML is a post from CSS-Tricks
Source: CssTricks


Celebrate the web by using another browser than Google’s Chrome

I like Chrome. It's a great browser. But it's not so good that it deserves to be the only browser. And that's the unfortunate opportunity we, people browsing the web, are opening for Google by so overwhelmingly choosing to use it in the face of the alternatives.

And this is what we get by doing so: DirecTV just announced that they'll be turning their website into a Chrome desktop app on June 1st. They don't actually say that, but that's what they mean. You can't call directvnow.com a website if it only works in a single browser.

You don't have to be that old to remember the dark days when Internet Explorer strangled the web by its utter domination. When large swaths of the web were only accessible through Redmond. Those were not happy days.

Ironically, it was Google's Chrome that helped fight back the scourge of Internet Explorer's monopoly. Well, that plus the utter neglect and contempt Microsoft showed the web in those years after they had cut off the air supply to Netscape. Would you believe they even disbanded their browser team after they had conquered the competition? Yup.

Why on earth would we want to go back to such arid times? Nobody wins when the beancounters at companies like DirecTV can eye the browser market shares and justify turning their back on the open, standards-backed web to embrace a few cents on the dollar supporting only the victor.

But you can stop it. By balancing the browsers, choosing to use not just what's convenient, but what's lesser used, you can make the business case for monopoly plays a bad deal. Consider it your civic duty as a fan of the open web.

It's never been easier on web developers to support evergreen browsers. You no longer have to cover every variation of every flavor. Good browsers update automatically. And good browsers support open standards to a degree a developer in 2005 would have cried to have.

So please, if you're using Chrome, take a moment to download another browser and incorporate it into your routine. I personally love Safari and use it for the bulk of my browsing (with Chrome as a pair for development). But the good folks at Firefox deserve your usage just as much.

Oh, and if you're a customer of DirecTV, please tell them what you think about their short-sighted move on Twitter. I hear they just love to get feedback!

Celebrate the web by using another browser than Google's Chrome was originally published in Signal v. Noise on Medium.


Source: 37signals


Fun with Viewport Units

Viewport units have been around for several years now, with near-perfect support in the major browsers, but I keep finding new and exciting ways to use them. I thought it would be fun to review the basics, and then round-up some of my favorite use-cases.

What are viewport units?
Four new "viewport-relative" units appeared in the CSS specifications between 2011 and 2015, as part of the W3C's CSS Values and Units Module Level 3. The new units – vw, vh, vmin, and vmax - work similarly to existing length units like px or em, but represent a percentage of the current browser viewport.

Viewport Width (vw) – A percentage of the full viewport width. 10vw will resolve to 10% of the current viewport width, or 48px on a phone that is 480px wide. The difference between % and vw is most similar to the difference between em and rem. A % length is relative to local context (containing element) width, while a vw length is relative to the full width of the browser window.
Viewport Height (vh) – A percentage of the full viewport height. 10vh will resolve to 10% of the current viewport height.
Viewport Minimum (vmin) – A percentage of the viewport width or height, whichever is smaller. 10vmin will resolve to 10% of the current viewport width in portrait orientations, and 10% of the viewport height on landscape orientations.
Viewport Maximum (vmax) – A percentage of the viewport width or height, whichever is larger. 10vmax will resolve to 10% of the current viewport height in portrait orientations, and 10% of the viewport width on landscape orientations. Sadly, and strangely, vmax units are not yet available on Internet Explorer or Edge.

While these units are derived from viewport height or width, they can all be used everywhere lengths are accepted – from font-size to positioning, margins, padding, shadows, borders, and so on. Let's see what we can do!
Responsive Typography
It's become very popular to use viewport units for responsive typography – establishing font-sizes that grow and shrink depending on the current viewport size. Using simple viewport units for font-size has an interesting (dangerous) effect. As you can see, fonts scale very quickly – adjusting from unreadably small to extra large in a very small range.

This direct scaling is clearly too dramatic for daily use. We need something more subtle, with minimums and maximums, and more control of the growth rate. That's where calc() becomes useful. We can combine a base size in more steady units (say 16px) with a smaller viewport-relative adjustment (0.5vw), and let the browser do the math: calc(16px + 0.5vw)
See the Pen partially-Responsive Type by Miriam Suzanne (@mirisuzanne) on CodePen.
By changing the relationship between your base-size and viewport-relative adjustment, you can change how dramatic the growth-rate is. Use higher viewport values on headings, and watch them grow more quickly than the surrounding text. This allows for a more dynamic typographic scale on larger screens, while keeping fonts constrained on a mobile device - no media-queries required. You can also apply this technique to your line-height, allowing you to adjust leading at a different rate than the font-size.
body {
/* font grows 1px for every 100px of viewport width */
font-size: calc(16px + 1vw);
/* leading grows along with font,
with an additional 0.1em + 0.5px per 100px of the viewport */
line-height: calc(1.1em + 0.5vw);
}
For me, this is enough complexity. If I need to constrain the top-end for rapid-growth headings, I can do that with one single media-query wherever the text becomes too large:
h1 {
font-size: calc(1.2em + 3vw);
}

@media (min-width: 50em) {
h1 {
font-size: 50px;
}
}
Suddenly I wish there was a max-font-size property.
Others have developed more complex calculations and Sass mixins to specify the exact text-size ranges at specific media-queries. There are several existing CSS-Tricks articles that explain the technique and provide snippets to help you get started:

Viewport Sized Typography with Minimum and Maximum Sizes
Fluid Typography
The Math of CSS locks

I think that's overkill in most cases, but your mileage will absolutely vary.
Full-Height Layouts, Hero Images, and Sticky Footers
There are many variations on full-height (or height-constrained) layouts – from desktop-style interfaces to hero images, spacious designs, and sticky footers. Viewport-units can help with all of these.
In a desktop-style full-height interface, the page is often broken into sections that scroll individually – with elements like headers, footers, and sidebars that remain in place at any size. This is common practice for many web-apps these days, and vh units make it much simpler. Here's an example using the new CSS Grid syntax:
See the Pen Full-height CSS Grid by Miriam Suzanne (@mirisuzanne) on CodePen.
A single declaration on the body element, height: 100vh, constrains your application to the height of the viewport. Make sure you apply overflow values on internal elements, so your content isn't cut off. You can also achieve this layout using flexbox or floats. Note that full-height layouts can cause problems on some mobile browsers. There's a clever fix for iOS Safari that we use to handle one of the most noticeable edge-cases.
Sticky-footers can be created with a similar technique. Change your body height: 100vh to min-height: 100vh and the footer will stay in place at the bottom of your screen until it's pushed down by content.
See the Pen Sticky-Footer with CSS Grid by Miriam Suzanne (@mirisuzanne) on CodePen.
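Stripped down to the essentials, the sticky-footer version looks something like this – a minimal sketch assuming a simple header/main/footer structure inside the body:
body {
display: grid;
grid-template-rows: auto 1fr auto; /* header, content, footer */
min-height: 100vh; /* at least full-screen, but free to grow */
}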
Apply vh units to the height, min-height, or max-height of various elements to create full-screen sections, hero images, and more. In the new OddBird redesign, we constrained our hero images with max-height: 55vh so they never push headlines off the page. On my personal website, I went with max-height: 85vh for a more image-dominated look. On other sites, I've applied min-height: 90vh to sections.
Here's an example showing both a max-height heroic kitten, and a min-height section. Combining all these tricks can give you some powerful control around how your content fills a browser window, and responds to different viewports.
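In code, those constraints are just one line each – the selectors here are illustrative, and the percentages are the values mentioned above:
.hero {
max-height: 55vh; /* heroes never push the headline off the page */
overflow: hidden;
}

.spacious-section {
min-height: 90vh; /* sections always fill most of the screen */
}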
Fluid Aspect Ratios
It can also be useful to constrain the height-to-width ratio of an element. This is especially useful for embedded content, like videos. Chris has written about this before. In the good-old-days, we would do that with %-based padding on a container element, and absolute positioning on the inner element. Now we can sometimes use viewport units to achieve that effect without the extra markup.
If we can count on the video being full-screen, we can set our height relative to the full viewport width:
/* full-width * aspect-ratio */
.full-width {
width: 100vw;
height: calc(100vw * (9/16));
}
That math doesn't have to happen in the browser with calc. If you are using a pre-processor like Sass, it will work just as well to do the math there: height: 100vw * (9/16). If you need to constrain the max-width, you can constrain the max-height as well:
/* max-width * aspect-ratio */
.full-width {
width: 100vw;
max-width: 30em;
height: calc(100vw * (9/16));
max-height: calc(30em * (9/16));
}
Here's a demonstration showing both options, with CSS custom properties (variables) to make the math more semantic. Play with the numbers to see how things move, keeping the proper ratio at all times:
See the Pen Fluid Ratios with Viewport Units by Miriam Suzanne (@mirisuzanne) on CodePen.
Chris takes this one step farther in his pre-viewport-units article, so we will too. What if we need actual HTML content to scale inside a set ratio - like presentation slides often do?
We can set all our internal fonts and sizes using the same viewport units as the container. In this case I used vmin for everything, so the content would scale with changes in both container height and width:
See the Pen Fluid Slide Ratios with Viewport Units by Miriam Suzanne (@mirisuzanne) on CodePen.
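Here's the core idea, reduced to a sketch – the numbers are illustrative, not the exact values from the demo:
.slide {
/* a 16:9 stage that scales with the smaller viewport dimension */
width: 96vmin;
height: 54vmin;
padding: 4vmin;
font-size: 3vmin;
}

.slide h1 {
font-size: 6vmin;
}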
Breaking the Container
For years now, it's been popular to mix constrained text with full-width backgrounds. Depending on your markup or CMS, that can become difficult. How do you break content outside of a restricted container, so that it fills the viewport exactly?
Again, viewport units can come in handy. This is another trick we've used on the new OddBird site, where a static-site generator sometimes limits our control of the markup. It only takes a few lines of code to make this work.
.full-width {
margin-left: calc(50% - 50vw);
margin-right: calc(50% - 50vw);
}
There are more in-depth articles about the technique, both at Cloud Four and here on CSS Tricks.
Getting Weird
Of course, there's much more you can do with viewport units, if you start experimenting. Check out this pure CSS scroll-indicator (made by someone named Mike) using viewport units on a background image:
See the Pen CSS only scroll indicator by Mike (@MadeByMike) on CodePen.
What else have you seen, or done with viewport units? Get creative, and show us the results!

Fun with Viewport Units is a post from CSS-Tricks
Source: CssTricks