Drupal's commitment to accessibility

Last week, WordPress Tavern picked up my blog post about Drupal 8's upcoming Layout Builder.

While I'm grateful that WordPress Tavern covered Drupal's Layout Builder, it is not surprising that the majority of WordPress Tavern's blog post alludes to the potential challenges with accessibility. After all, Gutenberg's lack of accessibility has been a big topic of debate, and a point of frustration in the WordPress community.

I understand why organizations might be tempted to de-prioritize accessibility. Making a complex web application accessible can be a lot of work, and the pressure to ship early can be high.

In the past, I've been tempted to skip accessibility features myself. I believed that because accessibility features benefited only a small group of people, they could come in a follow-up release.

Today, I've come to believe that accessibility is not something you do for a small group of people. Accessibility is about promoting inclusion. When the product you use daily is accessible, it means that we all get to work with a greater number and a greater variety of colleagues. Accessibility benefits everyone.

As you can see in Drupal's Values and Principles, we are committed to building software that everyone can use. Accessibility should always be a priority. Making capabilities like the Layout Builder accessible is core to Drupal's DNA.

Drupal's Values and Principles translate into our development process through what we call an accessibility gate, where we set a clearly defined "must-have" bar. Prioritizing accessibility also means that we commit to trying to iteratively improve accessibility beyond that minimum over time.

Together with the accessibility maintainers, we agreed that:
Our first priority is WCAG 2.0 AA conformance. This means that in order to be released as a stable system, the Layout Builder must reach Level AA conformance with WCAG. Without WCAG 2.0 AA conformance, we won't release a stable version of Layout Builder.
Our next priority is WCAG 2.1 AA conformance. We're thrilled at the greater inclusion provided by these new guidelines, and will strive to achieve as much of it as we can before release. Because these guidelines are still new (formally approved in June 2018), we won't hold up releasing the stable version of Layout Builder on them, but we are committed to implementing them as quickly as we're able to, even if some of that work lands after the initial release.
While WCAG AAA conformance is not something currently being pursued, there are aspects of AAA that we are discussing adopting in the future. For example, the new 2.1 AAA "Animations from Interactions", which can be framed as an achievable design constraint: anywhere an animation is used, we must ensure designs are understandable/operable for those who cannot or choose not to use animations.
Drupal's commitment to accessibility is one of the things that makes Drupal's upcoming Layout Builder special: it will not only bring tremendous and new capabilities to Drupal, it will also do so without excluding a large portion of current and potential users. We all benefit from that!
Source: Dries Buytaert www.buytaert.net


Broken Records Taps Pixeldust to Develop New Identity

Broken Records, a Spicewood, TX, record label and recording studio, has selected Pixeldust as its lead digital agency for all DrupalCoin Blockchain web integration needs. Pixeldust will design and develop the brand identity and website for both the record label and recording studio. The website will feature Broken Records artists and showcase the state-of-the-art recording studio currently under production. Pixeldust will also develop a highly interactive 3D animation to help introduce the brand. Read more


Direction Aware Hover Effects

This is a particular design trick that never fails to catch people's eye! I don't know the exact history of who-thought-of-what first and all that, but I know I have seen a number of implementations of it over the years. I figured I'd round a few of them up here.

Noel Delgado
See the Pen Direction-aware 3D hover effect (Concept) by Noel Delgado (@noeldelgado) on CodePen.
The detection here is done by tracking the mouse position on mouseover and mouseout and calculating which side was crossed. It's a small amount of clever JavaScript, the meat of which is figuring out that direction:
var getDirection = function (ev, obj) {
  var w = obj.offsetWidth,
      h = obj.offsetHeight,
      x = (ev.pageX - obj.offsetLeft - (w / 2) * (w > h ? (h / w) : 1)),
      y = (ev.pageY - obj.offsetTop - (h / 2) * (h > w ? (w / h) : 1)),
      d = Math.round( Math.atan2(y, x) / 1.57079633 + 5 ) % 4;

  return d;
};
Then class names are applied depending on that direction to trigger the directional CSS animations.
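As a rough sketch of that glue code (the class names here are made up, not Noel's actual ones), the returned 0–3 value can be mapped straight onto a class:
var item = document.querySelector('.item'); // hypothetical hover target
var classNames = ['in-top', 'in-right', 'in-bottom', 'in-left'];

item.addEventListener('mouseover', function (ev) {
  // getDirection() returns 0/1/2/3 for top/right/bottom/left
  item.classList.add(classNames[getDirection(ev, item)]);
});

item.addEventListener('mouseout', function () {
  // clear the direction classes so the next hover starts clean
  item.classList.remove.apply(item.classList, classNames);
});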
Fabrice Weinberg
See the Pen Direction aware hover pure CSS by Fabrice Weinberg (@FWeinb) on CodePen.
Fabrice uses just pure CSS here. They don't detect the outgoing direction, but they do detect the incoming direction by way of four hidden hoverable boxes, each rotated to cover a triangle. Like this:
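A rough sketch of that geometry (simplified, with made-up class names, not Fabrice's actual code) could look like this:
/* Four same-sized children, each rotated 45deg, scaled by roughly 1/sqrt(2)
   and pushed toward one edge; overflow: hidden clips them (for painting and
   hit-testing) into four triangles meeting in the center of the box */
.box {
  position: relative;
  overflow: hidden;
}
.box .zone {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
.box .zone--top    { transform: translateY(-50%) rotate(45deg) scale(0.708); }
.box .zone--right  { transform: translateX(50%)  rotate(45deg) scale(0.708); }
.box .zone--bottom { transform: translateY(50%)  rotate(45deg) scale(0.708); }
.box .zone--left   { transform: translateX(-50%) rotate(45deg) scale(0.708); }
/* Each zone's :hover can then trigger its own slide-in animation, e.g.
   .box .zone--top:hover ~ .overlay { animation: slide-from-top 0.3s forwards; } */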

Codrops
Demo
In an article by Mary Lou on Codrops from 2012, Direction-Aware Hover Effect with CSS3 and jQuery, the detection is also done in JavaScript. Here's that part of the plugin:
_getDir: function (coordinates) {
  // the width and height of the current div
  var w = this.$el.width(),
      h = this.$el.height(),

      // calculate the x and y to get an angle to the center of the div from that x and y.
      // gets the x value relative to the center of the DIV and "normalize" it
      x = (coordinates.x - this.$el.offset().left - (w / 2)) * (w > h ? (h / w) : 1),
      y = (coordinates.y - this.$el.offset().top - (h / 2)) * (h > w ? (w / h) : 1),

      // the angle and the direction from where the mouse came in/went out clockwise (TRBL=0123);
      // first calculate the angle of the point,
      // add 180 deg to get rid of the negative values
      // divide by 90 to get the quadrant
      // add 3 and do a modulo by 4 to shift the quadrants to a proper clockwise TRBL (top/right/bottom/left)
      direction = Math.round((((Math.atan2(y, x) * (180 / Math.PI)) + 180) / 90) + 3) % 4;

  return direction;
},
It's technically CSS doing the animation though, as inline styles are applied as needed to the elements.
John Stewart
See the Pen Direction Aware Hover Goodness by John Stewart (@johnstew) on CodePen.
John leaned on Greensock to do all the detection and animation work here. Like all the examples, it has its own homegrown geometric math to calculate the direction in which the elements were hovered.
// Detect Closest Edge
function closestEdge(x, y, w, h) {
  var topEdgeDist = distMetric(x, y, w / 2, 0);
  var bottomEdgeDist = distMetric(x, y, w / 2, h);
  var leftEdgeDist = distMetric(x, y, 0, h / 2);
  var rightEdgeDist = distMetric(x, y, w, h / 2);
  var min = Math.min(topEdgeDist, bottomEdgeDist, leftEdgeDist, rightEdgeDist);
  switch (min) {
    case leftEdgeDist:
      return "left";
    case rightEdgeDist:
      return "right";
    case topEdgeDist:
      return "top";
    case bottomEdgeDist:
      return "bottom";
  }
}

// Distance Formula
function distMetric(x, y, x2, y2) {
  var xDiff = x - x2;
  var yDiff = y - y2;
  return (xDiff * xDiff) + (yDiff * yDiff);
}
Gabrielle Wee
See the Pen CSS-Only Direction-Aware Cube Links by Gabrielle Wee ✨ (@gabriellewee) on CodePen.
Gabrielle gets it done entirely in CSS by positioning four hoverable child elements which trigger the animation on a sibling element (the cube) depending on which one was hovered. There is some tricky stuff here involving clip-path and transforms that I admit I don't fully understand. The hoverable areas don't appear to be triangular like you might expect, but rectangles covering half the area. It seems like they would overlap and interfere with each other, but they don't appear to. I think it might be that they hang off the edges slightly, giving each edge full hover coverage.
Elmer Balbin
See the Pen Direction Aware Tiles using clip-path Pure CSS by Elmer Balbin (@elmzarnsi) on CodePen.
Elmer is also using clip-path here, but the four hoverable elements are clipped into triangles. You can see how each of them has a point at 50% 50%, the center of the square, and has two other corner points.
clip-path: polygon(0 0, 100% 0, 50% 50%);
clip-path: polygon(100% 0, 100% 100%, 50% 50%);
clip-path: polygon(0 100%, 50% 50%, 100% 100%);
clip-path: polygon(0 0, 50% 50%, 0 100%);
Nigel O Toole
Demo
Raw JavaScript powers Nigel's demo here, which is all modernized to work with npm and modules and all that. It's familiar calculations though:
const _getDirection = function (e, item) {
  // Width and height of current item
  let w = item.offsetWidth;
  let h = item.offsetHeight;
  let position = _getPosition(item);

  // Calculate the x/y value of the pointer entering/exiting, relative to the center of the item.
  let x = (e.pageX - position.x - (w / 2) * (w > h ? (h / w) : 1));
  let y = (e.pageY - position.y - (h / 2) * (h > w ? (w / h) : 1));

  // Calculate the angle the pointer entered/exited and convert to clockwise format (top/right/bottom/left = 0/1/2/3). See https://stackoverflow.com/a/3647634 for a full explanation.
  let d = Math.round(Math.atan2(y, x) / 1.57079633 + 5) % 4;

  // console.table([x, y, w, h, e.pageX, e.pageY, item.offsetLeft, item.offsetTop, position.x, position.y]);

  return d;
};
The JavaScript ultimately applies classes, which are animated in CSS based on some fancy Sass-generated animations.
Giana
A CSS-only take that handles the outgoing direction nicely!
See the Pen CSS-only directionally aware hover by Giana (@giana) on CodePen.

Seen any others out there? Ever used this on something you've built?

Direction Aware Hover Effects is a post from CSS-Tricks
Source: CssTricks


Animating Border

Transitioning border for a hover state. Simple, right? You might be unpleasantly surprised.
The Challenge
The challenge is simple: building a button with an expanding border on hover.
This article will focus on genuine CSS tricks that would be easy to drop into any project without having to touch the DOM or use JavaScript. The methods covered here will follow these rules:

Single element (no helper divs, but pseudo-elements are allowed)
CSS only (no JavaScript)
Works for any size (not restricted to a specific width, height, or aspect ratio)
Supports transparent backgrounds
Smooth and performant transition

I proposed this challenge in the Animation at Work Slack and again on Twitter. Though there was no consensus on the best approach, I did receive some really clever ideas by some phenomenal developers.
Method 1: Animating border
The most straightforward way to animate a border is… well, by animating border.
.border-button {
border: solid 5px #FC5185;
transition: border-width 0.6s linear;
}

.border-button:hover { border-width: 10px; }
See the Pen CSS writing-mode experiment by Shaw (@shshaw) on CodePen.
Nice and simple, but there are some big performance issues.
Since border takes up space in the document’s layout, changing the border-width will trigger layout. Nearby elements will shift around because of the new border size, making the browser reposition those elements on every frame of the animation unless you set an explicit size on the button.
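One mitigation, sketched here with made-up dimensions, is to reserve the button's final size up front so the growing border has nothing to push around:
.border-button {
  box-sizing: border-box; /* the border grows inward; the outer size stays fixed */
  width: 200px;           /* hypothetical fixed dimensions */
  height: 60px;
}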
As if triggering layout wasn’t bad enough, the transition itself feels “stepped”. I’ll show why in the next example.
Method 2: Better border with outline
How can we change the border without triggering layout? By using outline instead! You’re probably most familiar with outline from removing it on :focus styles (though you shouldn’t), but outline is an outer line that doesn’t change an element’s size or position in the layout.
.border-button {
outline: solid 5px #FC5185;
transition: outline 0.6s linear;
margin: 0.5em; /* Increased margin since the outline expands outside the element */
}

.border-button:hover { outline-width: 10px; }

A quick check in Dev Tools’ Performance tab shows the outline transition does not trigger layout. Regardless, the movement still seems stepped because browsers are rounding the border-width and outline-width values so you don’t get sub-pixel rendering between 5 and 6 or smooth transitions from 5.4 to 5.5.

Strangely, Safari often doesn’t render the outline transition and occasionally leaves crazy artifacts.

Method 3: Cut it with clip-path
First implemented by Steve Gardner, this method uses clip-path with calc to trim the border down so on hover we can transition to reveal the full border.
.border-button {
/* Full width border and a clip-path visually cutting it down to the starting size */
border: solid 10px #FC5185;
clip-path: polygon(
calc(0% + 5px) calc(0% + 5px), /* top left */
calc(100% - 5px) calc(0% + 5px), /* top right */
calc(100% - 5px) calc(100% - 5px), /* bottom right */
calc(0% + 5px) calc(100% - 5px) /* bottom left */
);
transition: clip-path 0.6s linear;
}

.border-button:hover {
/* Clip-path spanning the entire box so it's no longer hiding the full-width border. */
clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
}

The clip-path technique is the smoothest and most performant method so far, but it does come with a few caveats. Rounding errors may cause a little unevenness, depending on the exact size. The border also has to be full size from the start, which may make exact positioning tricky.
Unfortunately there’s no IE/Edge support yet, though it seems to be in development. You can and should encourage Microsoft’s team to implement those features by voting for masks/clip-path to be added.
Method 4: linear-gradient background
We can simulate a border using a clever combination of multiple linear-gradient backgrounds properly sized. In total we have four separate gradients, one for each side. The background-position and background-size properties get each gradient in the right spot and the right size, which can then be transitioned to make the border expand.
.border-button {
background-repeat: no-repeat;

/* background-size values will repeat so we only need to declare them once */
background-size:
calc(100% - 10px) 5px, /* top & bottom */
5px calc(100% - 10px); /* right & left */

background-position:
5px 5px, /* top */
calc(100% - 5px) 5px, /* right */
5px calc(100% - 5px), /* bottom */
5px 5px; /* left */

/* Since we're sizing and positioning with the above properties, we only need to set up a simple solid-color gradient for each side */
background-image:
linear-gradient(0deg, #FC5185, #FC5185),
linear-gradient(0deg, #FC5185, #FC5185),
linear-gradient(0deg, #FC5185, #FC5185),
linear-gradient(0deg, #FC5185, #FC5185);

transition: all 0.6s linear;
transition-property: background-size, background-position;
}

.border-button:hover {
background-position: 0 0, 100% 0, 0 100%, 0 0;
background-size: 100% 10px, 10px 100%, 100% 10px, 10px 100%;
}

This method is quite difficult to set up and has quite a few cross-browser differences. Firefox and Safari animate the faux-border smoothly, exactly the effect we’re looking for. Chrome’s animation is jerky and even more stepped than the outline and border transitions. IE and Edge refuse to animate the background at all, but they do give the proper border expansion effect.
Method 5: Fake it with box-shadow
Hidden within box-shadow's spec is a fourth value for spread-radius. Set all the other length values to 0px and use the spread-radius to build your border alternative that, like outline, won’t affect layout.
.border-button {
box-shadow: 0px 0px 0px 5px #FC5185;
transition: box-shadow 0.6s linear;
margin: 0.5em; /* Increased margin since the box-shadow expands outside the element, like outline */
}

.border-button:hover { box-shadow: 0px 0px 0px 10px #FC5185; }

The transition with box-shadow is adequately performant and feels much smoother, except in Safari where it’s snapping to whole-values during the transition like border and outline.
Pseudo-Elements
Several of these techniques can be modified to use a pseudo-element instead, but pseudo-elements ended up causing some additional performance issues in my tests.
For the box-shadow method, the transition occasionally triggered paint in a much larger area than necessary. Reinier Kaper pointed out that a pseudo-element can help isolate the paint to a more specific area. As I ran further tests, box-shadow was no longer causing paint in large areas of the document and the complication of the pseudo-element ended up being less performant. The change in paint and performance may have been due to a Chrome update, so feel free to test for yourself.
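For reference, a sketch of that pseudo-element variation (not Reinier's exact code) simply moves the shadow onto an ::after covering the button:
.border-button { position: relative; }

.border-button::after {
  content: '';
  position: absolute;
  top: 0; right: 0; bottom: 0; left: 0;
  box-shadow: 0px 0px 0px 5px #FC5185;
  transition: box-shadow 0.6s linear;
}

.border-button:hover::after { box-shadow: 0px 0px 0px 10px #FC5185; }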
I also could not find a way to utilize pseudo-elements in a way that would allow for transform based animation.
Why not transform: scale?
You may be firing up Twitter to helpfully suggest using transform: scale for this. Since transform and opacity are the best style properties to animate for performance, why not use a pseudo-element and have the border scale up & down?
.border-button {
position: relative;
margin: 0.5em;
border: solid 5px transparent;
background: #3E4377;
}

.border-button:after {
content: '';
display: block;
position: absolute;
top: 0; right: 0; bottom: 0; left: 0;
border: solid 10px #FC5185;
margin: -15px;
z-index: -1;
transition: transform 0.6s linear;
transform: scale(0.97, 0.93);
}

.border-button:hover::after { transform: scale(1,1); }

There are a few issues:

The border will show through a transparent button. I forced a background on the button to show how the border is hiding behind the button. If your design calls for buttons with a full background, then this could work.
You can’t scale the border to specific sizes. Since the button’s dimensions vary with the text, there’s no way to animate the border from exactly 5px to 10px using only CSS. In this example I’ve done some magic-numbers on the scale to get it to appear right, but that won’t be universal.
The border animates unevenly because the button’s aspect ratio isn’t 1:1. This usually means the left/right will appear larger than the top/bottom until the animation completes. This may not be an issue depending on how fast your transition is, the button’s aspect ratio, and how big your border is.

If your button has set dimensions, Cher pointed out a clever way to calculate the exact scales needed, though it may be subject to some rounding errors.
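As a rough illustration with made-up numbers (not Cher's actual math): for a 200×60px button whose ::after is inset by -10px and carries the full 10px border, the resting scale falls out of the two border-box sizes, with the inner overlap hidden behind the button's background as before:
.border-button::after {
  /* pseudo-element border box: 220px x 80px; to read as a 5px border its
     outer edge should sit at 210px x 70px, hence scale(210/220, 70/80) */
  transform: scale(0.9545, 0.875);
}

.border-button:hover::after { transform: scale(1, 1); }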
Beyond CSS
If we loosen our rules a bit, there are many interesting ways you can animate borders. Codrops consistently does outstanding work in this area, usually utilizing SVGs and JavaScript. The end results are very satisfying, though they can be a bit complex to implement. Here are a few worth checking out:

Creative Buttons
Button Styles Inspiration
Animated Checkboxes
Distorted Button Effects
Progress Button Styles

Conclusion
There’s more to borders than simply border, but if you want to animate a border you may have some trouble. The methods covered here will help, though none of them are a perfect solution. Which you choose will depend on your project’s requirements, so I’ve laid out a comparison table to help you decide.

My recommendation would be to use box-shadow, which has the best overall balance of ease-of-implementation, animation effect, performance and browser support.
Do you have another way of creating an animated border? Perhaps a clever way to utilize transforms for moving a border? Comment below or reach me on Twitter to share your solution to the challenge.
Special thanks to Martin Pitt, Steve Gardner, Cher, Reinier Kaper, Joseph Rex, David Khourshid, and the Animation at Work community.

Animating Border is a post from CSS-Tricks
Source: CssTricks


How the Roman Empire Made Pure CSS Connect 4 Possible

Experiments are a fun excuse to learn the latest tricks, think of new ideas, and push your limits. "Pure CSS" demos have been a thing for a while, but new opportunities open up as browsers and CSS itself evolves. CSS and HTML preprocessors also helped the scene move forward. Sometimes preprocessors are used for hardcoding every possible scenario, for example, long strings of :checked and adjacent sibling selectors.
In this article, I will walk through the key ideas of a Pure CSS Connect 4 game I built. I tried to avoid hardcoding as much as I could in my experiment and worked without preprocessors to focus on keeping the resulting code short. You can see all the code and the game right here:
See the Pen Pure CSS Connect 4 by Bence Szabó (@finnhvman) on CodePen.
Essential concepts
I think there are some concepts that are considered essential in the "pure CSS" genre. Typically form elements are used for managing state and capturing user actions. I was excited when I found people use <button type="reset"> to reset or start a new game. All you have to do is wrap your elements in a <form> tag and add the button. In my opinion this is a much cleaner solution than having to refresh the page.
My first step was to create a form element then throw a bunch of inputs into it for the slots and add the reset button. Here is a very basic demonstration of <button type="reset"> in action:
See the Pen Pure HTML Form Reset by Bence Szabó (@finnhvman) on CodePen.
I wanted to have a nice visual for this demo to provide a full experience. Instead of pulling in an external image for the board or the discs, I used a radial-gradient(). A nice resource I often use is Lea Verou's CSS3 Patterns Gallery. It is a collection of patterns made by gradients, and they're editable too! I used currentcolor, which came in pretty handy for the disc pattern. I added a header and reused my Pure CSS Ripple Button.
At this point the layout and disc design were already final; only the game didn't work at all
Dropping discs onto the board
Next I enabled users to take their turns dropping discs onto the Connect 4 board. In Connect 4, players (one red and one yellow) drop discs into columns in alternating turns. There are 7 columns and 6 rows (42 slots). Each slot can be empty or occupied by a red or yellow disc. So, a slot can have three states (empty, red, or yellow). Discs dropped in the same column are stacked onto each other.
I started out by placing two checkboxes for each slot. When they're both unchecked the slot is considered empty, and when one of them is checked the corresponding player has its disc in it.
The possible state of having them both checked should be avoided by hiding them once either of them is checked. These checkboxes are immediate siblings, so when the first of a pair is checked you can hide both by using the :checked pseudo-class and the adjacent sibling combinator (+). What if the second is checked? You can hide the second one, but how do you affect the first one? Well, there is no previous sibling selector; that's just not how CSS selectors work. I had to reject this idea.
Actually, a checkbox can have three states by itself: it can be in the indeterminate state. The problem is that you can't put it into the indeterminate state with HTML alone. Even if you could, the next click on the checkbox would always make it transform into the checked state. Forcing the second player to double-click when they make their move is unreliable and unacceptable.
While stuck on the MDN doc of :indeterminate, I noticed that radio inputs also have an indeterminate state. Radio buttons with the same name are in this state when they're all unchecked. Wow, that's an actual initial state! What's really beneficial is that checking the latter sibling also has an effect on the former one! Thus I filled the board with 42 pairs of radio inputs.
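A minimal sketch of that per-slot state handling (colors and selectors simplified, not the demo's exact rules):
/* once either radio of a pair is checked, neither is :indeterminate anymore;
   they're absolutely positioned, so hiding them doesn't shift the layout */
input:not(:indeterminate) { display: none; }

/* color the slot's disc: the yellow input comes first in each pair,
   the red input sits directly before the .disc element */
input:checked + input + .disc { background: gold; }    /* yellow's move */
input:checked + .disc         { background: crimson; } /* red's move */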
In retrospect, clever ordering and usage of labels with either checkboxes or radio buttons would have done the trick, but I didn't consider labels to be an option, to keep the code simpler and shorter.
I wanted to have large areas for interaction for nice UX, so I thought it was reasonable to let players make a move by clicking on a column. I stacked controls of the same column on each other by adding absolute and relative positioning to the appropriate elements. This way only the lowest empty slot could be selected within a column. I meticulously set the transition time of the disc fall per row, and their timing function approximates a quadratic curve to resemble realistic free fall. So far the pieces of the puzzle came together well, though the animation below clearly shows that only the red player could make their moves.
Even though all the controls are there, only red discs can be dropped on the board
The clickable areas of the radio inputs are visualized with colored but semi-transparent rectangles. The yellow and red inputs are stacked over each other six times (= six rows) per column, leaving the red input of the lowest row on top of the stack. The mixture of red and yellow creates the orangish color which can be seen on the board at the start. The fewer empty slots available in a column, the less intense this orangish color gets, since the radio inputs are not displayed once they are no longer :indeterminate. Due to the red input always being precisely over the yellow input in every slot, only the red player is able to make moves.
Tracking turns
I only had a faint idea and a lot of hope that I could somehow solve switching turns between the two players with the general sibling selector. The concept was to let the red player take a turn when the number of checked inputs was even (0, 2, 4, etc.) and let the yellow player take a turn when that number was odd. Soon I realized that the general sibling selector does not (and should not!) work the way I wanted.
Then a very obvious choice was to experiment with the nth selectors. However attractive it was to use the even and odd keywords, I ran into a dead end. The :nth-child selector "counts" the children within a parent, regardless of type, class, pseudo-class, whatever. The :nth-of-type selector "counts" children of a type within a parent, regardless of class or pseudo-class. So the problem is that they cannot count based on the :checked state.
Well CSS counters count too, so why not give them a try? A common usage of counters is to number headings (even in multiple levels) in a document. They are controlled by CSS rules, can be arbitrarily reset at any point and their increment (or decrement!) values can be any integer. The counters are displayed by the counter() function in the content property.
The easiest step was to set up a counter and count the :checked inputs in the Connect 4 grid. There are only two difficulties with this approach. The first is that you cannot perform arithmetic on a counter to detect if it is even or odd. The second is that you cannot apply CSS rules to elements based on the counter value.
I managed to overcome the first issue by making the counter binary. The value of the counter is initially zero. When the red player checks their radio button the counter is incremented by one. When the yellow player checks their radio button the counter is decremented by one, and so on. Therefore the counter value will be either zero or one, even or odd.
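A sketch of that binary counter (with a hypothetical counter name and hypothetical .red/.yellow classes on the inputs; the real demo distinguishes the players differently):
form { counter-reset: turn 0; }

input.red:checked    { counter-increment: turn 1; }  /* red's move:    +1 */
input.yellow:checked { counter-increment: turn -1; } /* yellow's move: -1 */

/* the value (0 or 1) can only surface through generated content, and the
   pseudo-element must come after the inputs in document order to see it */
.turn-indicator::before { content: counter(turn); }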
Solving the second problem required much more creativity (read: hack). As mentioned, counters can be displayed, but only in the ::before and ::after pseudo-elements. That is a no-brainer, but how can they affect other elements? At the very least, the counter value can change the width of the pseudo-element. Different numbers have different widths. Character 1 is typically thinner than 0, but that is something very hard to control. If the number of characters changes rather than the character itself, the resulting width change is more controllable. It is not uncommon to use Roman numerals with CSS counters. One and two represented in Roman numerals are the same character repeated once and twice, and so are their widths in pixels.
My idea was to attach the radio buttons of one player (yellow) to the left, and attach the radio buttons of the other player (red) to the right of their shared parent container. Initially, the red buttons are overlaid on the yellow buttons, then the width change of the container would cause the red buttons to "go away" and reveal the yellow buttons. A similar real-world concept is the sliding window with two panes, one pane is fixed (yellow buttons), the other is slidable (red buttons) over the other. The difference is that in the game only half of the window is visible.
So far, so good, but I still wasn't satisfied with font-size (and the other font properties) indirectly controlling the width. I thought letter-spacing would fit nicely here, since it only increases the size in one dimension. Unexpectedly, even one letter has letter spacing (which is rendered after the letter), and two letters render the letter spacing twice. Predictable widths are crucial to make this reliable. Zero-width characters along with single and double letter spacing would work, but it is dangerous to set the font-size to zero. Defining a large letter-spacing (in pixels) and a tiny (1px) font-size made it almost consistent across all browsers (yes, I'm talking about sub-pixels).
I needed the container width to alternate between initial size (=w) and at least double the initial size (>=2w) to be able to fully hide and show the yellow buttons. Let's say v is the rendered width of the 'i' character (lower roman representation, varies across browsers), and c is the rendered width (constant) of the letter-spacing. I needed v + c = w to be true but it couldn't be, because c and w are integers but v is non-integer. I ended up using min-width and max-width properties to constrain the possible width values, so I also changed the possible counter values to 'i' and 'iii' to make sure the text widths underflow and overflow the constraints. In equations this looked like v + c < w, 3v + 3c > 2w, and v << c, which gives 2/3w < c < w. The conclusion is that the letter-spacing has to be somewhat smaller than the initial width.
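Plugging in hypothetical numbers (not the demo's real values), with the counter now alternating between 1 and 3 ('i' and 'iii') as described above, an initial width w = 160px and letter-spacing c = 120px satisfy 2/3·w < c < w, and the clamping on a hypothetical .turn-width container works out like this:
.turn-width {
  display: inline-block; /* shrink-to-fit, so the text width drives the box */
  min-width: 160px;      /* w: the 'i' text (v + c, about 121px) underflows this */
  max-width: 320px;      /* 2w: the 'iii' text (3v + 3c, about 363px) overflows this */
}

.turn-width::after {
  content: counter(turn, lower-roman); /* 'i' or 'iii' */
  font-size: 1px;                      /* keeps the glyph width v tiny */
  letter-spacing: 120px;               /* c */
}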
I have been reasoning so far as if the pseudo-element displaying the counter value were the parent of the radio buttons; it is not. However, I noticed that the width of the pseudo-element changes the width of its parent element, and in this case the parent is the container of the radio buttons.
If you are wondering whether this could be solved with Arabic numerals, you are right: alternating the counter value between something like '1' and '111' would also work. Nevertheless, Roman numerals gave me the idea in the first place, and they were also a good excuse for the clickbaity title, so I kept them.
The players take alternating turns starting with the red player
Applying the technique discussed makes the parent container of the radio inputs double in width when a red input is checked and return to its original width when a yellow input is checked. In the original-width container the red inputs are over the yellow ones, but in the double-width container, the red inputs are moved away.
Recognizing patterns
In real life, the Connect 4 board does not tell you if you have won or lost, but providing proper feedback is part of good user experience in any software. The next objective is to detect whether a player has won the game. To win the game a player has to have four of their discs in a column, row or diagonal line. This is a very simple task to solve in many programming languages, but in pure CSS world, this is a huge challenge. Breaking it down to subtasks is the way to approach this systematically.
I used a flex container as the parent of the radio buttons and discs. A yellow radio button, a red radio button and a div for the disc belong to a slot. Such a slot is repeated 42 times and arranged in columns that wrap. Consequently, the slots in a column are adjacent, which makes recognizing four in a column the easiest part using the adjacent selector:
<div class="grid">
<input type="radio" name="slot11">
<input type="radio" name="slot11">
<div class="disc"></div>
<input type="radio" name="slot12">
<input type="radio" name="slot12">
<div class="disc"></div>
...
<input type="radio" name="slot16">
<input type="radio" name="slot16">
<div class="disc"></div>

<input type="radio" name="slot21">
<input type="radio" name="slot21">
<div class="disc"></div>
...
</div>
/* Red four in a column selector */
input:checked + .disc + input + input:checked + .disc + input + input:checked + .disc + input + input:checked ~ .outcome

/* Yellow four in a column selector */
input:checked + input + .disc + input:checked + input + .disc + input:checked + input + .disc + input:checked ~ .outcome
This is a simple but ugly solution. There are 11 type and class selectors chained together per player to cover the case of four in a column. Adding a div with class of .outcome after the elements of the slots makes it possible to conditionally display the outcome message. There is also a problem with falsely detecting four in a column where the column is wrapped, but let's just put this issue aside.
A similar approach for detecting four in a row would be a truly terrible idea. There would be 56 selectors chained together per player (if I did the math right), not to mention that they would have a similar flaw of false detection. This is a situation where :nth-child(An+B [of S]) or the column combinators will come in handy in the future.
For better semantics one could add a new div for each column and arrange the slot elements in them. This modification would also eliminate the possibility of false detection mentioned above. Then detecting four in a row could go like: select a column where the first red radio input is checked, and select the adjacent sibling column where the first red radio input is checked, and so on two more times. This sounds very cumbersome and would require the "parent" selector.
Selecting the parent is not feasible, but selecting the child is. How would detecting four in a row go with available combinators and selectors? Select a column, then select its first red radio input if checked, and select the adjacent column, then select its first red radio input if checked, and so on two more times. It still sounds cumbersome, yet possible. The trick is not only in the CSS but also in the HTML, the next column has to be the sibling of the radio buttons in the previous column creating a nested structure.
<div class="grid column">
  <input type="radio" name="slot11">
  <input type="radio" name="slot11">
  <div class="disc"></div>
  ...
  <input type="radio" name="slot16">
  <input type="radio" name="slot16">
  <div class="disc"></div>

  <div class="column">
    <input type="radio" name="slot21">
    <input type="radio" name="slot21">
    <div class="disc"></div>
    ...
    <input type="radio" name="slot26">
    <input type="radio" name="slot26">
    <div class="disc"></div>

    <div class="column">
      ...
    </div>
  </div>
</div>
/* Red four in a row selectors */
input:nth-of-type(2):checked ~ .column > input:nth-of-type(2):checked ~ .column > input:nth-of-type(2):checked ~ .column > input:nth-of-type(2):checked ~ .column::after,
input:nth-of-type(4):checked ~ .column > input:nth-of-type(4):checked ~ .column > input:nth-of-type(4):checked ~ .column > input:nth-of-type(4):checked ~ .column::after,
...
input:nth-of-type(12):checked ~ .column > input:nth-of-type(12):checked ~ .column > input:nth-of-type(12):checked ~ .column > input:nth-of-type(12):checked ~ .column::after
Well, the semantics are messed up and these selectors are only for the red player (another round goes for the yellow player); on the other hand, it does work. A little benefit is that there will be no falsely detected columns or rows. The display mechanism of the outcome also had to be modified; using the ::after pseudo-element of any matching column is a consistent solution when proper styling is applied. As a result of this, a fake eighth column has to be added after the last slot.
As seen in the code snippet above, specific positions within a column are matched to detect four in a row. The very same technique can be used for detecting four in a diagonal by adjusting these positions. Note that the diagonals run in two directions.
input:nth-of-type(2):checked ~ .column > input:nth-of-type(4):checked ~ .column > input:nth-of-type(6):checked ~ .column > input:nth-of-type(8):checked ~ .column::after,
input:nth-of-type(4):checked ~ .column > input:nth-of-type(6):checked ~ .column > input:nth-of-type(8):checked ~ .column > input:nth-of-type(10):checked ~ .column::after,
...
input:nth-of-type(12):checked ~ .column > input:nth-of-type(10):checked ~ .column > input:nth-of-type(8):checked ~ .column > input:nth-of-type(6):checked ~ .column::after
The number of selectors has increased vastly in the final run, and this is definitely a place where CSS preprocessors could reduce the length of the declaration. Still, I think the demo is moderately short. It should be somewhere around the middle on the scale from hardcoding a selector for every possible winning pattern to using 4 magical selectors (column, row, two diagonals).
A message is shown when a player wins
Closing loopholes
Any software has edge cases and they need to be handled. The possible outcomes of a Connect 4 game are not only the red or yellow player winning, but also neither player winning and the board filling up, known as a draw. Technically this case doesn't break the game or produce any errors; what's missing is the feedback to the players.
The goal is to detect when there are 42 :checked radio buttons on the board. This also means that none of them are in the :indeterminate state. That requires a selection to be made in each radio group. Radio buttons with the required attribute are invalid while they are :indeterminate; otherwise they are valid. So I added the required attribute to each input, then used the :valid pseudo-class on the form to detect a draw.
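In CSS terms that boils down to something like this (with a hypothetical .outcome-draw element holding the message):
/* every input carries `required`, so the form only matches :valid once
   all 42 radio groups have a selection, i.e. the board is completely full */
.outcome-draw { visibility: hidden; }

form:valid .outcome-draw { visibility: visible; }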
The draw outcome message is shown when the board is filled
Covering the draw outcome introduced a bug. In the very rare case of the yellow player winning on last turn, both the win and draw messages are displayed. This is because the detection and display method of these outcomes are orthogonal. I worked around the issue by making sure that the win message has a white background and is over the draw message. I also had to delay the fade in transition of the draw message, so it would not get blended with the win message transition.
The yellow wins message is over the draw outcome, preventing it from being displayed
While a lot of radio buttons are hidden behind each other by absolute positioning, all of those in the indeterminate state can still be accessed by tabbing through the controls. This enables players to drop their discs into arbitrary slots. A way to handle this is to simply forbid keyboard interactions via the tabindex attribute: setting it to -1 means that it should not be reachable via sequential keyboard navigation. I had to augment every radio input with this attribute to eliminate this loophole.
<input type="radio" name="slot11" tabindex="-1" required>
<input type="radio" name="slot11" tabindex="-1" required>
<div class="disc"></div>
...
Limitations
The most substantial drawback is that the board isn't responsive, and it might malfunction on small viewports due to the unreliable solution for tracking turns. I didn't dare take the risk of refactoring to a responsive solution; due to the nature of the implementation, it feels much safer with hardcoded dimensions.
Another issue is the sticky hover on touch devices. Adding some interaction media queries to the right places is the easiest way to cure this, though it would eliminate the free fall animation.
One might think that the :indeterminate pseudo-class is already widely supported, and it is. The problem is that it is only partially supported in some browsers. Observe Note 1 in the compatibility table: MS IE and Edge do not support it on radio buttons. If you view the demo in those browsers, your cursor will turn into the not-allowed cursor on the board; this is an unintentional but somewhat graceful degradation.
Not all browsers support :indeterminate on radio buttons
Conclusion
Thanks for making it to the last section! Let's see some numbers:

140 HTML elements
350 (reasonable) lines of CSS
0 JavaScript
0 external resources

Overall, I'm satisfied with the result and the feedback was great. I sure learned a lot making this demo and I hope I could share a lot writing this article!

How the Roman Empire Made Pure CSS Connect 4 Possible is a post from CSS-Tricks
Source: CssTricks


Triggering Individual Animations on a Timeline with Bodymovin.js

In our recent collaboration with the Ad Council and AARP, we created a chatbot experience to walk users through a set of questions and serve them personalized action items to prepare for retirement. The tricky thing about retirement is that few people are truly prepared for it. To address this issue, we created an animated character that felt alive, showed empathy, and helped users stay engaged with the conversation. Its name? Avo!
Below is a set of emotions we needed to animate and bring into our web experience. Enter Bodymovin.js. Bodymovin is an After Effects plugin that exports animation data and translates it into JavaScript. Bodymovin is exceptional when animating complex vector-based animations, especially with all the parts of Avo’s face.

Because we had to convey many emotions, we needed a way to link them all together without distracting the user. Our approach was to have every animation return to what we called a “default state” — that allowed us to seamlessly transition from one animation to the next.

Highlighted in blue is the “default state” that Avo would return to after each animated emotion in the timeline.

After animating all the emotions on one timeline in After Effects, we exported the JavaScript through Bodymovin. We divided all the frames into segments by emotion and named them.

Highlighted in green are the “animations” that needed to be identified and named.

class Bot extends React.PureComponent {
  static animations = {
    roll: [[0, 65]],
    blink: [[65, 85]],
    eyebrows: [[95, 125]],
    lookRight: [[125, 165]],
    lookLeft: [[165, 204]],
    joy: [[204, 244]],
    spin: [[272, 310]],
    wink: [[310, 351]],
    hmm: [[351, 400]],
    nice: [[400, 438]],
    celebrate: [[440, 530]],
    glasses: [[530, 595]],
    sparkle: [[595, 662]],
    money: [[665, 725]],
    love: [[725, 780]],
    nod: [[785, 870]]
  }

We identified ['roll', 'blink', 'eyebrows', 'lookRight'] as “neutral animations” and had those loop whenever Avo was waiting for an answer. Then we tied the rest of the animations to questions as a response.
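As a rough sketch (not Viget's actual implementation), wiring a named segment up with lottie-web, the player that consumes Bodymovin's export, might look like this:
// assumes the lottie-web player is loaded and exposes the `lottie` global
const anim = lottie.loadAnimation({
  container: document.querySelector('.avo'), // hypothetical mount node
  renderer: 'svg',
  loop: false,
  autoplay: false,
  path: 'avo.json' // hypothetical path to the Bodymovin JSON export
});

// e.g. playEmotion('joy') runs frames 204-244 immediately
function playEmotion(name) {
  anim.playSegments(Bot.animations[name], true);
}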

See the Pen Viget Case Study: AceYourRetirement.org by Greg Kohn (@gregkohn) on CodePen.

Overall, Bodymovin.js was great. 5/5 I recommend.


Source: VigetInspire


Getting Ready for Web Video

Video is one of those really contentious points about web design. There are some people who feel like web pages should not have embedded video at all. These people are wrong.
Like any technology, however, we should respect it and not abuse it. The two worst things you can do are:

AutoPlay videos, without express consent from the user
Embed too many videos in one page

Both of these things are likely to cause annoyance to users and should be avoided unless you have a very good reason.
Knowing what not to do will only get you so far. The rest of your online video success story will depend on knowing the things you ought to do, which is what we’ll cover in the rest of this article.
Video categories
There are six different types of videos that are commonly used on sites. These are:

Regular video – you point a camera at something and record it
Live stream – you point a camera at something and don’t record it
Slide show – composed from a series of still images, often with voice over plus added descriptive text
Animation – various methods, but more commonly 3D rendered animations made with Maya3D or Blender.
Screencast – software records images from your computer, normally used for tutorials, usually with text overlays and voice narration.
Hybrid screencast – a screencast with regular video segments, and possibly also slideshow segments.

Knowing which type of video you want to produce is a good start. Actually that brings us neatly to the next topic.
Plan your video
Good video doesn’t normally happen by accident. Meticulous planning pays off, and that means you know what kind of video you’re going to produce, how you’re going to produce it, and (very importantly) why.
Don’t fail to plan. For a start, your video should be scripted. This is true even if there is no dialog or narration. The script gives you a clear impression of how the video is supposed to unfold. You can also optionally story board the video, but a crew that can’t work straight from a script is not a very visionary crew.
If you’re making a bigger production, you’ll also benefit from budget planning, scene breakdown, shooting sequence (shot list), location scouting, etc. The more time you invest into planning, the better your video is likely to be. Professional preparation leads to professional results.
Software that can help you with script writing and planning includes Trelby and CeltX.

Invest in quality equipment
The equipment you use will have a big impact on the result. It may be difficult to believe, but the camera is not the most important part of your equipment investment.
That’s because for web video (in 2018, at least) it’s rarely sensible to shoot video above normal HD (1920px wide), and in fact it’s better to shoot in SD (1280px wide) or lower, and the aspect ratio should always be 16:9.
One source of confusion with these resolutions, by the way, is the slightly misleading standard names used, which references the vertical height (720p / 1080p) rather than the width, which is the most natural thing people think about.
In thinking about this, bear in mind that a video with a frame height of 720px will not fit on the screen real estate of most users, so it is easy to see why shooting above 720p will not give superior results for web video.
The larger your video frame is, the more resources it will hog on the user’s device, including in some cases failing to play at all, or playing very poorly. Your goal really should be to get the highest image quality and the lowest file size (in bytes).
The reason all this is mentioned is because cameras up to HD will be quite inexpensive compared to cameras that can shoot at higher resolutions, and you’ll just be wasting your money if you invest in them, because most users in 2018:

Do not have screens large enough to support the enormous frame size
Do not have connections fast enough to stream anything above HD smoothly
Do not have connections able to stream anything above SD smoothly either
Are not overly concerned about quality as long as it is reasonable

Quality of your content is the more important thing. So cameras for web video are cheap. What matters a lot more is the audio, and that is where you should invest sensibly.
Cheap audio solutions are likely to result in poor results, so avoid cheap audio and invest in quality. What you save on your camera can be reinvested into sound. Literally what you’d regard as a sound investment.

The main microphone types are shotgun, boom, and wireless. The top brands include Rode, Sennheiser, Shure, and Audio-Technica.
Shotgun microphones will do the job if the camera is reasonably near and there is no wind. A boom mic can be made from a shotgun mic mounted on a pole with an extension cable. Wireless is the most expensive and the most likely to give you trouble.
You should invest in a good quality tripod as well, with the generally accepted best brand on the market being Manfrotto. What you should invest in lighting depends on the location. Other items you’ll need could include reflectors and shaders.
Completely optional items that can be useful include sliders, dollies, jibs, and lens filters. Don’t invest in these items unless your production warrants their purchase.
Set the scene
The best idea with online video is to keep it short whenever possible, and when it’s not possible, break it down into segments. This is far better than one long continuous narrative, and makes your video look more professional.
For each segment, think about what will be in the frame. If the camera will pan, track, or otherwise follow your movement between two or more points, think about what will be in the frame at each point. Rehearse it and mark the spots where you will stand if you’re in an on-camera role.

You can mark ground spots with chalk, tape, small bean bags, or stones. The camera operator should use a tripod or Steadicam for best results. Shaky video is truly horrible.
For screencasts and slideshows, think about how well the user can see what you’re showing. Zoom in on key elements if necessary, and be willing to switch between different zoomed and unzoomed views, as the situation requires.
Make your own green screen
If you are presenting from behind a desk, a green screen can be a big improvement to your presentation. Simply get yourself a large, flat, solid surface, which should be smooth and unblemished, and paint it a bright shade of green.

For ultimate compatibility, also create magenta and cyan screens that can be swapped in if you need to show anything green colored in your frame.
With a green screen (or magenta, or cyan) you can use a technology called chroma key to replace the solid color with any image, including another video.
Obviously there’s not much point in making a video if nobody wants to watch it, so try to keep things interesting. Beware, however, not to be insincere or act out of character, because poor acting is worse than no acting at all.
Humor can be powerful if it is done well, and used only where it is appropriate. Likewise solemn, somber, and scandalous tones can also create interest when used appropriately.
Product videos and testimonials should be delivered enthusiastically and highlight the best features, however product reviews should be brutally honest in order to boost your credibility and win the trust of your viewers. Nothing is more valuable than trust.
Editing
Editing your video is the biggest task of all. For this, you’ll need software, and that software must be a nonlinear video editor (NLE). With this you can mix and match the various clips you’ve shot to make a coherent narrative.

Not all editing software is equal. The best video editors are Cinelerra, Adobe Premiere Pro, Blender, and Sony Vegas Pro.
Rendering
Rendering is usually done, at least on the first pass, by the video editing software. When rendering for DVD, your goal is to get maximum video quality, regardless of the file size. Rendering for the web is a whole different thing.
The only formats worth considering are MP4 and WEBM, and while the latter will give you a better file size, it is not currently universally supported by all browsers. It is worth keeping in mind for the future.
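In practice that usually means offering both and letting the browser pick the first source it supports; a minimal embed (file names are placeholders) looks like this:
<video controls width="1280" height="720" preload="metadata">
  <source src="clip.webm" type="video/webm">
  <source src="clip.mp4" type="video/mp4">
  Sorry, your browser doesn't support embedded video.
</video>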
Although your sound capture needs to be first rate, your rendered audio definitely should not be. In fact this is where most people go wrong, leaving their sound at ridiculously high fidelity when it’s not necessary. Reducing the audio quality will go a long way towards reducing file size while not noticeably affecting the outcome.

Codecs are a hotly debated topic, but the general consensus of professionals is to use the H.264 codec (or equivalent), because this will ensure maximum compatibility and a good balance between quality and file size.
Finally, consider shrinking the physical dimensions of the video if it is going to be viewed within a pre-defined space, and the user would not be expected to view it in full screen mode (doing so will work, but results in pixelation… their problem, not yours).
You can also use video transcoders such as Handbrake for your final render to fine tune the resulting file and ensure maximum compatibility. In some regions ISPs have restricted access to Handbrake downloads, but that’s just a testament to how good it is.
Captioning
Don’t under-estimate the power of captioning. Investing the time to create proper closed captions (subtitles) for your video production will be a very good investment. At the very least, allow auto-captions, but creating your own, especially if you allow a choice of languages, is always a good idea except when your video contains no speech.
Hosting
Considering how many mobile users there are and the prevalence of 3G connections, with 4G still being a (slowly growing) minority, HD video is not the best of ideas. And since Vimeo’s support for captioning is not on a par with Google’s, Google is the better choice for online video hosting at present.

Notice, however, that it was Google, not YouTube, that got the mention there. For numerous reasons, YouTube is not the best way to host your video, however there is nothing to prevent you uploading multiple versions of your video, one you host on a private Google account and one you host on YouTube.
The version embedded on your site should be the version hosted on your Google account.
The one exception to the rule is if you’re producing feature content, where you are showing off your film making prowess. In this case, Vimeo may have the edge.
For low bandwidth sites (those that attract less traffic than the bandwidth they have available), you could consider hosting the video on your own server. This can provide some advantages, especially in terms of loading time.
This post Getting Ready for Web Video was written by Inspired Mag Team and first appeared on Inspired Magazine.
Source: inspiredm.com


Animating Layouts with the FLIP Technique

User interfaces are most effective when they are intuitive and easily understandable to the user. Animation plays a major role in this - as Nick Babich said, animation brings user interfaces to life. However, adding meaningful transitions and micro-interactions is often an afterthought, or something that is “nice to have” if time permits. All too often, we experience web apps that simply “jump” from view to view without giving the user time to process what just happened in the current context.

This leads to unintuitive user experiences, but we can do better, by avoiding “jump cuts” and “teleportation” in creating UIs. After all, what’s more natural than real life, where nothing teleports (except maybe car keys), and everything you interact with moves with natural motion?
In this article, we’ll explore a technique called “FLIP” that can be used to animate the positions and dimensions of any DOM element in a performant manner, regardless of how their layout is calculated or rendered (e.g., height, width, floats, absolute positioning, transform, flexbox, grid, etc.)
Why the FLIP technique?
Have you ever tried to animate height, width, top, left, or any other properties besides transform and opacity? You might have noticed that the animations look a bit janky, and there's a reason for that. When any property that triggers layout is changed (such as height), the browser has to recursively check if any other element's layout has changed as a result, and that can be expensive. If that calculation takes longer than one animation frame (around 16.7 milliseconds), then the animation frame will be skipped, resulting in "jank" since that frame wasn't rendered in time. In Paul Lewis' article "Pixels are Expensive", he goes further in depth on how pixels are rendered and the various performance expenses.
In short, our goal is to be short -- we want to calculate the least amount of style changes necessary, as quickly as possible. The key to this is only animating transform and opacity, and FLIP explains how we can simulate layout changes using only transform.
What is FLIP?
FLIP is a mnemonic device and technique first coined by Paul Lewis, which stands for First, Last, Invert, Play. His article contains an excellent explanation of the technique, but I’ll outline it here:

First: before anything happens, record the current (i.e., first) position and dimensions of the element that will transition. You can use getBoundingClientRect() for this, as will be shown below.
Last: execute the code that causes the transition to instantaneously happen, and record the final (i.e., last) position and dimensions of the element.*
Invert: since the element is in the last position, we want to create the illusion that it’s in the first position, by using transform to modify its position and dimensions. This takes a little math, but it’s not too difficult.
Play: with the element inverted (and pretending to be in the first position), we can move it back to its last position by setting its transform to none.

Below is how these steps can be implemented:
const elm = document.querySelector('.some-element');

// First: get the current bounds
const first = elm.getBoundingClientRect();

// execute the script that causes layout change
doSomething();

// Last: get the final bounds
const last = elm.getBoundingClientRect();

// Invert: determine the delta between the
// first and last bounds to invert the element
const deltaX = first.left - last.left;
const deltaY = first.top - last.top;
const deltaW = first.width / last.width;
const deltaH = first.height / last.height;

// Play: animate the final element from its first bounds
// to its last bounds (which is no transform)
elm.animate([{
  transformOrigin: 'top left',
  transform: `
    translate(${deltaX}px, ${deltaY}px)
    scale(${deltaW}, ${deltaH})
  `
}, {
  transformOrigin: 'top left',
  transform: 'none'
}], {
  duration: 300,
  easing: 'ease-in-out',
  fill: 'both'
});
See the Pen How the FLIP technique works by David Khourshid (@davidkpiano) on CodePen.

There are two important things to note:

If the element’s size changed, you can animate scale in order to “resize” it with no performance penalty; however, make sure to set transformOrigin to 'top left' since that’s where we based our delta calculations.
We’re using the Web Animations API to animate the element here, but you’re free to use any other animation engine, such as GSAP, Anime, Velocity, Just-Animate, Mo.js and more (a rough GSAP equivalent is sketched right after this list).
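As an aside to that second note, here's roughly what the Play step could look like with GSAP instead of the Web Animations API. This is only a sketch under the assumption that the modern GSAP 3 gsap.fromTo() API is available on the page; it reuses the deltas computed earlier and mirrors the illustrative 300ms ease-in-out settings from the WAAPI example:
// a sketch of the Play step using GSAP 3 instead of the WAAPI (assumes GSAP is loaded)
gsap.fromTo(elm, {
  x: deltaX,
  y: deltaY,
  scaleX: deltaW,
  scaleY: deltaH,
  transformOrigin: 'top left'
}, {
  x: 0,
  y: 0,
  scaleX: 1,
  scaleY: 1,
  duration: 0.3,
  ease: 'power1.inOut'
});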

Shared Element Transitions
A common use case when transitioning between app views and states is that the final element might not be the same DOM element as the initial element. In Android, this is similar to a shared element transition, except that the element isn’t “recycled” from view to view in the DOM as it is on Android.
Nevertheless, we can still achieve the FLIP transition with a little bit of illusion:
const firstElm = document.querySelector('.first-element');

// First: get the bounds and then hide the element (if necessary)
const first = firstElm.getBoundingClientRect();
firstElm.style.setProperty('visibility', 'hidden');

// execute the script that causes view change
doSomething();

// Last: get the bounds of the element that just appeared
const lastElm = document.querySelector('.last-element');
const last = lastElm.getBoundingClientRect();

// continue with the other steps, just as before.
// remember: you're animating the lastElm, not the firstElm.
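For completeness, a minimal sketch of those remaining steps might look like this - it's the same Invert and Play logic as the first example, only applied to lastElm, with the same illustrative 300ms ease-in-out settings:
// Invert: delta between the first element's bounds and the last element's bounds
const deltaX = first.left - last.left;
const deltaY = first.top - last.top;
const deltaW = first.width / last.width;
const deltaH = first.height / last.height;

// Play: animate the newly appeared element from the old element's bounds to its own
lastElm.animate([{
  transformOrigin: 'top left',
  transform: `translate(${deltaX}px, ${deltaY}px) scale(${deltaW}, ${deltaH})`
}, {
  transformOrigin: 'top left',
  transform: 'none'
}], {
  duration: 300,
  easing: 'ease-in-out',
  fill: 'both'
});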
Below is an example of how two completely disparate elements can appear to be the same element using shared element transitions. Click one of the pictures to see the effect.
See the Pen FLIP example with WAAPI by David Khourshid (@davidkpiano) on CodePen.

Parent-Child Transitions
With the previous implementations, the element bounds are based on the window. For most use cases, this is fine, but consider this scenario:

An element changes position and needs to transition.
That element contains a child element, which itself needs to transition to a different position inside the parent.

Since the previously calculated bounds are relative to the window, our calculations for the child element are going to be off. To solve this, we need to ensure that the bounds are calculated relative to the parent element instead:
const parentElm = document.querySelector('.parent');
const childElm = document.querySelector('.parent > .child');

// First: parent and child
const parentFirst = parentElm.getBoundingClientRect();
const childFirst = childElm.getBoundingClientRect();

doSomething();

// Last: parent and child
const parentLast = parentElm.getBoundingClientRect();
const childLast = childElm.getBoundingClientRect();

// Invert: parent
const parentDeltaX = parentFirst.left - parentLast.left;
const parentDeltaY = parentFirst.top - parentLast.top;

// Invert: child relative to parent
const childDeltaX = (childFirst.left - parentFirst.left)
- (childLast.left - parentLast.left);
const childDeltaY = (childFirst.top - parentFirst.top)
- (childLast.top - parentLast.top);

// Play: using the WAAPI
parentElm.animate([
{ transform: `translate(${parentDeltaX}px, ${parentDeltaY}px)` },
{ transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });

childElm.animate([
{ transform: `translate(${childDeltaX}px, ${childDeltaY}px)` },
{ transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });
A few things to note here, as well:

The timing options for the parent and child (duration, easing, etc.) do not necessarily need to match with this technique. Feel free to be creative!
Changing dimensions in parent and/or child (width, height) was purposefully omitted in this example, since it is an advanced and complex topic. Let’s save that for another tutorial.
You can combine the shared element and parent-child techniques for greater flexibility.

Using Flipping.js for Full Flexibility
The above techniques might seem straightforward, but they can get quite tedious to code once you have to keep track of multiple elements transitioning. Android eases this burden by:

baking shared element transitions into the core SDK
allowing developers to identify which elements are shared by using a common android:transitionName XML attribute

I’ve created a small library called Flipping.js with the same idea in mind. By adding a data-flip-key="..." attribute to HTML elements, it’s possible to predictably and efficiently keep track of elements that might change position and dimensions from state to state.
For example, consider this initial view:
<section class="gallery">
<div class="photo-1" data-flip-key="photo-1">
<img src="/photo-1"/>
</div>
<div class="photo-2" data-flip-key="photo-2">
<img src="/photo-2"/>
</div>
<div class="photo-3" data-flip-key="photo-3">
<img src="/photo-3"/>
</div>
</section>
And this separate detail view:
<section class="details">
<div class="photo" data-flip-key="photo-1">
<img src="/photo-1"/>
</div>
<p class="description">
Lorem ipsum dolor sit amet...
</p>
</section>
Notice in the above example that there are 2 elements with the same data-flip-key="photo-1". Flipping.js tracks the “active” element by choosing the first element that meets these criteria:

The element exists in the DOM (i.e., it hasn’t been removed or detached)
The element is not hidden (hint: elm.getBoundingClientRect() will have { width: 0, height: 0 } for hidden elements - a quick sketch of such a check follows this list)
Any custom logic specified in the selectActive option.
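As an illustration of the second criterion, the hidden check hinted at above might look something like this (a sketch based on that hint, not Flipping.js's internal code):
// a sketch of the "is the element hidden?" check hinted at above
function isHidden(elm) {
  const { width, height } = elm.getBoundingClientRect();
  return width === 0 && height === 0;
}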

Getting Started with Flipping.js
There are a few different packages for Flipping, depending on your needs:

flipping.js: tiny and low-level; only emits events when element bounds change
flipping.web.js: uses WAAPI to animate transitions
flipping.gsap.js: uses GSAP to animate transitions
More adapters coming soon!

You can grab the minified code directly from unpkg:

https://unpkg.com/flipping@latest/dist/flipping.js
https://unpkg.com/flipping@latest/dist/flipping.web.js
https://unpkg.com/flipping@latest/dist/flipping.gsap.js

Or you can npm install flipping --save and import it into your projects:
// import not necessary when including the unpkg scripts in a <script src="..."> tag
import Flipping from 'flipping/adapters/web';

const flipping = new Flipping();

// First: let Flipping read all initial bounds
flipping.read();

// execute the change that causes any elements to change bounds
doSomething();

// Last, Invert, Play: the flip() method does it all
flipping.flip();
Handling FLIP transitions as a result of a function call is such a common pattern that the .wrap(fn) method transparently wraps (or “decorates”) the given function by first calling .read(), then getting the return value of the function, then calling .flip(), then returning the return value. This leads to much less code:
const flipping = new Flipping();

const flippingDoSomething = flipping.wrap(doSomething);

// anytime this is called, FLIP will animate changed elements
flippingDoSomething();
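Conceptually, based on the description above, .wrap() behaves roughly like this simplified sketch (not the library's actual source):
// roughly what flipping.wrap(fn) does, per the description above
function wrap(fn) {
  return (...args) => {
    flipping.read();            // First: record the current bounds
    const result = fn(...args); // run the function that changes the layout
    flipping.flip();            // Last, Invert, Play
    return result;              // pass the return value through
  };
}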
Here is an example of using flipping.wrap() to easily achieve the shifting letters effect. Click anywhere to see the effect.
See the Pen Flipping Birthstones #Codevember by David Khourshid (@davidkpiano) on CodePen.

Adding Flipping.js to Existing Projects
In another article, we created a simple React gallery app using finite state machines. It works just as expected, but the UI could use some smooth transitions between states to prevent “jumping” and improve the user experience. Let’s add Flipping.js into our React app to accomplish this. (Keep in mind, Flipping.js is framework-agnostic.)
Step 1: Initialize Flipping.js
The Flipping instance will live on the React component itself, so that it’s isolated to only changes that occur within that component. Initialize Flipping.js by setting it up in the componentDidMount lifecycle hook:
componentDidMount() {
const { node } = this;
if (!node) return;

this.flipping = new Flipping({
parentElement: node
});

// initialize flipping with the initial bounds
this.flipping.read();
}
By specifying parentElement: node, we’re telling Flipping to only look for elements with a data-flip-key in the rendered App, instead of the entire document.
Then, modify the HTML elements with the data-flip-key attribute (similar to React’s key prop) to identify unique and “shared” elements:
renderGallery(state) {
return (
<section className="ui-items" data-state={state}>
{this.state.items.map((item, i) =>
<img
src={item.media.m}
className="ui-item"
style={{'--i': i}}
key={item.link}
onClick={() => this.transition({
type: 'SELECT_PHOTO', item
})}
data-flip-key={item.link}
/>
)}
</section>
);
}
renderPhoto(state) {
if (state !== 'photo') return;

return (
<section
className="ui-photo-detail"
onClick={() => this.transition({ type: 'EXIT_PHOTO' })}>
<img
src={this.state.photo.media.m}
className="ui-photo"
data-flip-key={this.state.photo.link}
/>
</section>
)
}
Notice how the img.ui-item and img.ui-photo are represented by data-flip-key={item.link} and data-flip-key={this.state.photo.link} respectively: when the user clicks on an img.ui-item, that item is set to this.state.photo, so the .link values will be equal.
And since they are equal, Flipping will smoothly transition from the img.ui-item thumbnail to the larger img.ui-photo.
Now we need to do two more things:

call this.flipping.read() whenever the component will update
call this.flipping.flip() whenever the component did update

Some of you might have already guessed where these method calls are going to occur: componentWillUpdate and componentDidUpdate, respectively:
componentWillUpdate() {
this.flipping.read();
}

componentDidUpdate() {
this.flipping.flip();
}
And, just like that, if you’re using a Flipping adapter (such as flipping.web.js or flipping.gsap.js), Flipping will keep track of all elements with a [data-flip-key] and smoothly transition them to their new bounds whenever they change. Here is the final result:
See the Pen FLIPping Gallery App by David Khourshid (@davidkpiano) on CodePen.

If you would rather implement custom animations yourself, you can use flipping.js as a simple event emitter. Read the documentation for more advanced use-cases.
Flipping.js and its adapters handle the shared element and parent-child transitions by default, as well as:

interrupted transitions (in adapters)
enter/move/leave states
plugin support for plugins such as mirror, which allows newly entered elements to “mirror” another element’s movement
and more planned in the future!

Resources
Similar libraries include:

FlipJS by Paul Lewis himself, which handles simple single-element FLIP transitions
React-Flip-Move, a useful React library by Josh Comeau
BarbaJS, not necessarily a FLIP library, but one that allows you to add smooth transitions between different URLs, without page jumps.

Further resources:

Animating the Unanimatable - Joshua Comeau
FLIP your Animations - Paul Lewis
Pixels are Expensive - Paul Lewis
Improving User Flow Through Page Transitions - Luigi de Rosa
Smart Transitions in User Experience Design - Adrian Zumbrunnen
What Makes a Good Transition? - Nick Babich
Motion Guidelines in Google’s Material Design
Shared Element Transition with React Native

Animating Layouts with the FLIP Technique is a post from CSS-Tricks
Source: CssTricks


Declining Complexity in CSS

The fourth edition of Eric Meyer and Estelle Weyl's CSS: The Definitive Guide was recently released. The new book weighs in at 1,016 pages, which is up drastically from 447 in the third edition, which was up slightly from 436 in the second edition.

Despite the appearance of CSS needing more pages to capture more complicated concepts, Eric suggests that CSS is easier to grasp than ever before and that its complexity has actually declined between editions:
But the core principles and mechanisms are no more complicated than they were a decade or even two decades ago. If anything, they’re easier to grasp now, because we don’t have to clutter our minds with float behaviors or inline layout just to try to lay out a page. Flexbox and Grid (chapters 12 and 13, by the way) make layout so much simpler than ever before, while simultaneously providing far more capability than ever before.
In short, yes, lots of new concepts have been introduced since 2007 when the third edition was released, but they're solving the need to use layout, err, tricks to make properties bend in ways they were never intended:
It’s still an apparent upward trend, but think about all the new features that have come out since the 3rd Edition, or are coming out right now: gradients, multiple backgrounds, sticky positioning, flexbox, Grid, blending, filters, transforms, animation, and media queries, among others. A lot of really substantial capabilities. They don’t make CSS more convoluted, but instead extend it into new territories.
Hear, hear! Onward and upward, no matter how many pages it takes.
Direct Link to Article — Permalink
Declining Complexity in CSS is a post from CSS-Tricks
Source: CssTricks


Recreating the Apple Watch Breathe App Animation

The Apple Watch comes with a stock app called Breathe that reminds you to, um, breathe. There's actually more to it than that, but the thought of needing a reminder to breathe makes me giggle. The point is, the app has this kinda awesome interface with a nice animation.

Photo courtesy of Apple Support
I thought it would be fun to recreate the design in vanilla CSS. Here's how far I got, which feels pretty close.
See the Pen Apple Watch Breathe App Animation by Geoff Graham (@geoffgraham) on CodePen.
Making the circles
First things first, we need a set of circles that make up that flower looking design. The app itself adds a circle to the layout for each minute that is added to the timer, but we're going to stick with a static set of six for this demo. It feels like we could get tricky by using ::before and ::after to reduce the HTML markup, but we can keep it simple.
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
We're going to make the full size of each circle 125px, which is an arbitrary number. The important thing is that the default state of the circles should be all of them stacked on top of one another. We can use absolute positioning to do that.
.circle {
border-radius: 50%;
height: 125px;
position: absolute;
transform: translate(0, 0);
width: 125px;
}
Note that we're using the translate function of the transform property to center everything. I had originally tried using basic top, right, bottom, left properties but found later that animating translate is much smoother. I also originally thought that positioning the circles in the full expanded state would be the best place to start, but also found that the animations were cumbersome to create that way because it required resetting each one to center. Lessons learned!
If we were to stop here, there would be nothing on the screen and that's because we have not set a background color. We'll get to the nice fancy colors used in the app in a bit, but it might be helpful to add a white background for now with a hint of opacity to help see what's happening as we work.
See the Pen Apple Watch Breathe App - Step 1 by Geoff Graham (@geoffgraham) on CodePen.
We need a container!
You may have noticed that our circles are nicely stacked, but nowhere near the actual center of the viewport. We're going to need to wrap these bad boys in a parent element that we can use to position the entire bunch. Plus, that container will serve as the element that pulses and rotates the entire set later. That was another lesson I had to learn the hard way because I stubbornly did not want the extra markup of a container and thought I could work around it.
We're calling the container .watch-face here and setting it to the same width and height as a single circle.
<div class="watch-face">
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
</div>
Now, we can add a little flex to the body element to center everything up.
body {
background: #000;
display: flex;
align-items: center;
justify-content: center;
height: 100vh;
}
See the Pen Apple Watch Breathe App - Step 2 by Geoff Graham (@geoffgraham) on CodePen.
Next up, animate the circles
At this point, I was eager to see the circles positioned in that neat floral, overlapping arrangement. I knew that it would be difficult to animate the exact position of each circle without seeing them positioned first, so I overrode the transform property in each circle to see where they'd land.
We could set up a class for each circle, but using :nth-child seems easier.
.circle:nth-child(1) {
transform: translate(-35px, -50px);
}

/* Skipping 2-5 for brevity... */

.circle:nth-child(6) {
transform: translate(35px, 50px);
}
It took me a few swings and misses to find coordinates that worked. It ultimately depends on the size of the circles and it may take some finessing.
See the Pen Apple Watch Breathe App - Step 3 by Geoff Graham (@geoffgraham) on CodePen.
Armed with the coordinates, we can register the animations. I removed the transform coordinates that were applied to each :nth-child and moved them into keyframes:
@keyframes circle-1 {
0% {
transform: translate(0, 0);
}
100% {
transform: translate(-35px, -50px);
}
}

/* And so on... */
I have to admit that the way I went about it feels super clunky because each circle has its own animation. It would be slicker to have one animation that can rule them all to push and re-center the circles, but maybe someone else reading has an idea and can share it in the comments.
Now we can apply those animations to each :nth-child in place of transform:
.circle:nth-child(1) {
animation: circle-1 4s ease alternate infinite;
}

/* And so on... */
Note that we set the animation-timing-function to ease because that feels smooth...at least to me! We also set the animation-direction to alternate so it plays back and forth and set the animation-iteration-count to infinite so it stays running.
See the Pen Apple Watch Breathe App - Step 4 by Geoff Graham (@geoffgraham) on CodePen.
Color, color, color!
Oh yeah, let's paint this in! From what I can tell, there are really only two colors in the design and the opacity is what makes it feel like more of a spectrum.
The circles on the left are a greenish color and the ones on the right are sorta blue. We can select the odd-numbered circles to apply the green and the even-numbered ones to apply the blue.
.circle:nth-child(odd) {
background: #61bea2;
}

.circle:nth-child(even) {
background: #529ca0;
}
Oh, and don't forget to remove the white background from the .circle element. It won't hurt anything, but it's nice to clean up after ourselves. I admittedly forgot to do this on the first go.
See the Pen Apple Watch Breathe App - Step 5 by Geoff Graham (@geoffgraham) on CodePen.
Pulse and rotate
Remember that pesky .watch-face container we created? Well, we can animate it to pulse the circles in and out while rotating the entire bunch.
I had totally forgotten that transform functions can be chained together. That makes things a little cleaner because it allows us to apply scale() and rotate() on the same line.
@keyframes pulse {
0% {
transform: scale(.15) rotate(180deg);
}
100% {
transform: scale(1);
}
}
...and apply that to the .watch-face element.
.watch-face {
height: 125px;
width: 125px;
animation: pulse 4s cubic-bezier(0.5, 0, 0.5, 1) alternate infinite;
}
Like the circles, we want the animation to run both ways and repeat infinitely. In this case, the scale drops to a super small size as the circles stack on top of each other and the whole thing rotates halfway on the way out before returning back on the way in.
I'll admit that I am not a buff when it comes to finding the right animation-timing-function for the smoothest or exact animations. I played with cubic-bezier and found something I think feels pretty good, but it's possible that a stock value like ease-in would work just as well.
All together now!
Here's everything smushed into the same demo.
See the Pen Apple Watch Breathe App Animation by Geoff Graham (@geoffgraham) on CodePen.
Now, just remember to breathe and let's all have a peaceful day. ☮️

Recreating the Apple Watch Breathe App Animation is a post from CSS-Tricks
Source: CssTricks


Creating a Star to Heart Animation with SVG and Vanilla JavaScript

In my previous article, I've shown how to smoothly transition from one state to another using vanilla JavaScript. Make sure you check that one out first because I'll be referencing some things I explained there in a lot of detail, like demos given as examples, formulas for various timing functions or how not to reverse the timing function when going back from the final state of a transition to the initial one.

The last example showcased making the shape of a mouth go from sad to glad by changing the d attribute of the path we used to draw this mouth.
Manipulating the path data can be taken to the next level to give us more interesting results, like a star morphing into a heart.
The star to heart animation we'll be coding.
The idea
Both are made out of five cubic Bézier curves. The interactive demo below shows the individual curves and the points where these curves are connected. Clicking any curve or point highlights it, as well as its corresponding curve/point from the other shape.
See the Pen by thebabydino (@thebabydino) on CodePen.
Note that all of these curves are created as cubic ones, even if, for some of them, the two control points coincide.
The shapes for both the star and the heart are pretty simplistic and unrealistic ones, but they'll do.
The starting code
As seen in the face animation example, I often choose to generate such shapes with Pug. Here, though, since the path data we generate will also need to be manipulated with JavaScript for the transition, going all in on JavaScript - including computing the coordinates and putting them into the d attribute - seems like the best option.
This means we don't need to write much in terms of markup:
<svg>
<path id='shape'/>
</svg>
In terms of JavaScript, we start by getting the SVG element and the path element - this is the shape that morphs from a star into a heart and back. We also set a viewBox attribute on the SVG element such that its dimensions along the two axes are equal and the (0,0) point is dead in the middle. This means the coordinates of the top left corner are (-.5*D,-.5*D), where D is the value for the viewBox dimensions. And last, but not least, we create an object to store info about the initial and final states of the transition and about how to go from the interpolated values to the actual attribute values we need to set on our SVG shape.
const _SVG = document.querySelector('svg'),
_SHAPE = document.getElementById('shape'),
D = 1000,
O = { ini: {}, fin: {}, afn: {} };

(function init() {
_SVG.setAttribute('viewBox', [-.5*D, -.5*D, D, D].join(' '));
})();
Now that we got this out of the way, we can move on to the more interesting part!
The geometry of the shapes
The initial coordinates of the end points and control points are those for which we get the star and the final ones are the ones for which we get the heart. The range for each coordinate is the difference between its final value and its initial one. Here, we also rotate the shape as we morph it because we want the star to point up and we change the fill to go from the golden star to the crimson heart.
Alright, but how do we get the coordinates of the end and control points in the two cases?
Star
In the case of the star, we start with a regular pentagram. The end points of our curves are at the intersection between the pentagram edges and we use the pentagram vertices as control points.
Regular pentagram with vertex and edge crossing points highlighted as control points and end points of five cubic Bézier curves (live).
Getting the vertices of our regular pentagram is pretty straightforward given the radius (or diameter) of its circumcircle, which we take to be a fraction of the viewBox size of our SVG (considered here square for simplicity, we're not going for tight packing in this case). But how do we get their intersections?
First of all, let's consider the small pentagon highlighted inside the pentagram in the illustration below. Since the pentagram is regular, the small pentagon whose vertices coincide with the edge intersections of the pentagram is also regular. It also has the same incircle as the pentagram and, therefore, the same inradius.
Regular pentagram and inner regular pentagon share the same incircle (live).
So if we compute the pentagram inradius, then we also have the inradius of the inner pentagon, which, together with the central angle corresponding to an edge of a regular pentagon, allows us to get the circumradius of this pentagon, which in turn allows us to compute its vertex coordinates and these are exactly the edge intersections of the pentagram and the endpoints of our cubic Bézier curves.
Our regular pentagram is represented by the Schläfli symbol {5/2}, meaning that it has 5 vertices, and, given these 5 vertex points equally distributed on its circumcircle, 360°/5 = 72° apart, we start from the first, skip the next point on the circle and connect to the second one (this is the meaning of the 2 in the symbol; 1 would describe a pentagon as we don't skip any points, we connect to the first). And so on - we keep skipping the point right after.
In the interactive demo below, select either pentagon or pentagram to see how they get constructed.
See the Pen by thebabydino (@thebabydino) on CodePen.
This way, we get that the central angle corresponding to an edge of the regular pentagram is twice that corresponding to the regular pentagon with the same vertices. We have 1·(360°/5) = 1·72° = 72° (or 1·(2·π/5) in radians) for the pentagon versus 2·(360°/5) = 2·72° = 144° (2·(2·π/5) in radians) for the pentagram. In general, given a regular polygon (whether it's a convex or a star polygon doesn't matter) with the Schläfli symbol {p/q}, the central angle corresponding to one of its edges is q·(360°/p) (q·(2·π/p) in radians).
Central angle corresponding to an edge of a regular polygon: pentagram (left, 144°) vs. pentagon (right, 72°) (live).
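That formula is small enough to express directly in code. In the sketch below, p and q are the two numbers of the Schläfli symbol (with q defaulting to 1 for convex polygons):
// central angle (in radians) corresponding to one edge of a regular {p/q} polygon
const edgeCentralAngle = (p, q = 1) => q*(2*Math.PI/p);

edgeCentralAngle(5);    // pentagon:  2·π/5 (72°)
edgeCentralAngle(5, 2); // pentagram: 4·π/5 (144°)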
We also know the pentagram circumradius, which we said we take as a fraction of the square viewBox size. This means we can get the pentagram inradius (which is equal to that of the small pentagon) from a right triangle where we know the hypotenuse (it's the pentagram circumradius) and an acute angle (half the central angle corresponding to the pentagram edge).
Computing the inradius of a regular pentagram from a right triangle where the hypotenuse is the pentagram circumradius and the acute angle between the two is half the central angle corresponding to a pentagram edge (live).
The cosine of half the central angle is the inradius over the circumradius, which gives us that the inradius is the circumradius multiplied with this cosine value.
Now that we have the inradius of the small regular pentagon inside our pentagram, we can compute its circumradius from a similar right triangle having the circumradius as hypotenuse, half the central angle as one of the acute angles and the inradius as the cathetus adjacent to this acute angle.
The illustration below highlights a right triangle formed from a circumradius of a regular pentagon, its inradius and half an edge. From this triangle, we can compute the circumradius if we know the inradius and the central angle corresponding to a pentagon edge as the acute angle between these two radii is half this central angle.
Computing the circumradius of a regular pentagon from a right triangle where it's the hypotenuse, while the catheti are the inradius and half the pentagon edge and the acute angle between the two radii is half the central angle corresponding to a pentagon edge (live).
Remember that, in this case, the central angle is not the same as for the pentagram, it's half of it (360°/5 = 72°).
Good, now that we have this radius, we can get all the coordinates we want. They're the coordinates of points distributed at equal angles on two circles. We have 5 points on the outer circle (the circumcircle of our pentagram) and 5 on the inner one (the circumcircle of the small pentagon). That's 10 points in total, with angles of 360°/10 = 36° in between the radial lines they're on.
The end and control points are distributed on the circumradius of the inner pentagon and on that of the pentagram respectively (live).
We know the radii of both these circles. The radius of the outer one is the regular pentagram circumradius, which we take to be some arbitrary fraction of the viewBox dimension (.5 or .25 or .32 or whatever value we feel would work best). The radius of the inner one is the circumradius of the small regular pentagon formed inside the pentagram, which we can compute as a function of the central angle corresponding to one of its edges and its inradius, which is equal to that of the pentagram and therefore we can compute from the pentagram circumradius and the central angle corresponding to a pentagram edge.
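As a quick numeric sanity check of this chain of computations (taking, say, a factor of .5 of the D = 1000 viewBox size, i.e. a pentagram circumradius of 500):
// illustrative values only - the actual code derives these from the viewBox size
const RCO = 500;                              // pentagram circumradius (.5*D)
const RI  = RCO*Math.cos(.5*(2*2*Math.PI/5)); // = 500·cos(72°) ≈ 154.5 (shared inradius)
const RCI = RI/Math.cos(.5*(2*Math.PI/5));    // = RI/cos(36°)  ≈ 191 (inner pentagon circumradius)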
So, at this point, we can generate the path data that draws our star, it doesn't depend on anything that's still unknown.
So let's do that and put all of the above into code!
We start by creating a getStarPoints(f) function which depends on an arbitrary factor (f) that's going to help us get the pentagram circumradius from the viewBox size. This function returns an array of coordinates we later use for interpolation.
Within this function, we first compute the constant stuff that won't change as we progress through it - the pentagram circumradius (radius of the outer circle), the central (base) angles corresponding to one edge of a regular pentagram and polygon, the inradius shared by the pentagram and the inner pentagon whose vertices are the points where the pentagram edges cross each other, the circumradius of this inner pentagon and, finally, the total number of distinct points whose coordinates we need to compute and the base angle for this distribution.
After that, within a loop, we compute the coordinates of the points we want and we push them into the array of coordinates.
const P = 5; /* number of cubic curves/ polygon vertices */

function getStarPoints(f = .5) {
const RCO = f*D /* outer (pentagram) circumradius */,
BAS = 2*(2*Math.PI/P) /* base angle for star poly */,
BAC = 2*Math.PI/P /* base angle for convex poly */,
RI = RCO*Math.cos(.5*BAS) /*pentagram/ inner pentagon inradius */,
RCI = RI/Math.cos(.5*BAC) /* inner pentagon circumradius */,
ND = 2*P /* total number of distinct points we need to get */,
BAD = 2*Math.PI/ND /* base angle for point distribution */,
PTS = [] /* array we fill with point coordinates */;

for(let i = 0; i < ND; i++) {}

return PTS;
}
To compute the coordinates of our points, we use the radius of the circle they're on and the angle of the radial line connecting them to the origin with respect to the horizontal axis, as illustrated by the interactive demo below (drag the point to see how its Cartesian coordinates change):
See the Pen by thebabydino (@thebabydino) on CodePen.
In our case, the current radius is the radius of the outer circle (pentagram circumradius RCO) for even index points (0, 2, ...) and the radius of the inner circle (inner pentagon circumradius RCI) for odd index points (1, 3, ...), while the angle of the radial line connecting the current point to the origin is the point index (i) multiplied with the base angle for point distribution (BAD, which happens to be 36° or π/10 in our particular case).
So within the loop we have:
for(let i = 0; i < ND; i++) {
let cr = i%2 ? RCI : RCO,
ca = i*BAD,
x = Math.round(cr*Math.cos(ca)),
y = Math.round(cr*Math.sin(ca));
}
Since we've chosen a pretty big value for the viewBox size, we can safely round the coordinate values so that our code looks cleaner, without decimals.
As for pushing these coordinates into the points array, we do this twice when we're on the outer circle (the even indices case) because that's where we actually have two control points overlapping, but only for the star, so we'll need to move each of these overlapping points into different positions to get the heart.
for(let i = 0; i < ND; i++) {
/* same as before */

PTS.push([x, y]);
if(!(i%2)) PTS.push([x, y]);
}
Next, we put data into our object O. For the path data (d) attribute, we store the array of points we get when calling the above function as the initial value. We also create a function for generating the actual attribute value (the path data string in this case - inserting commands in between the pairs of coordinates, so that the browser knows what to do with those coordinates). Finally, we take every attribute we have stored data for and we set its value to the value returned by the previously mentioned function:
(function init() {
/* same as before */

O.d = {
ini: getStarPoints(),
afn: function(pts) {
return pts.reduce((a, c, i) => {
return a + (i%3 ? ' ' : 'C') + c
}, `M${pts[pts.length - 1]}`)
}
};

for(let p in O) _SHAPE.setAttribute(p, O[p].afn(O[p].ini))
})();
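To make it clearer what this afn() function produces, here's a quick check with a few hypothetical coordinate pairs - it outputs a MoveTo command to the last point, followed by a cubic Bézier (C) command for every group of three points:
// hypothetical coordinates, just to show the shape of the generated path data
O.d.afn([[10, 0], [0, 10], [-10, 0]]);
// -> "M-10,0C10,0 0,10 -10,0"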
The result can be seen in the Pen below:
See the Pen by thebabydino (@thebabydino) on CodePen.
This is a promising start. However, we want the first tip of the generating pentagram to point down and the first tip of the resulting star to point up. Currently, they're both pointing right. This is because we start from 0° (3 o'clock). So in order to start from 6 o'clock, we add 90° (π/2 in radians) to every current angle in the getStarPoints() function.
ca = i*BAD + .5*Math.PI
This makes the first tip of the generating pentagram and resulting star to point down. To rotate the star, we need to set its transform attribute to a half circle rotation. In order to do so, we first set an initial rotation angle to -180. Afterwards, we set the function that generates the actual attribute value to a function that generates a string from a function name and an argument:
function fnStr(fname, farg) { return `${fname}(${farg})` };

(function init() {
/* same as before */

O.transform = { ini: -180, afn: (ang) => fnStr('rotate', ang) };

/* same as before */
})();
We also give our star a golden fill in a similar fashion. We set an RGB array to the initial value in the fill case and we use a similar function to generate the actual attribute value:
(function init() {
/* same as before */

O.fill = { ini: [255, 215, 0], afn: (rgb) => fnStr('rgb', rgb) };

/* same as before */
})();
We now have a nice golden SVG star, made up of five cubic Bézier curves:
See the Pen by thebabydino (@thebabydino) on CodePen.
Heart
Since we have the star, let's next see how we can get the heart!
We start with two intersecting circles of equal radii, both a fraction (let's say .25 for the time being) of the viewBox size. These circles intersect in such a way that the segment connecting their central points is on the x axis and the segment connecting their intersection points is on the y axis. We also take these two segments to be equal.
We start with two circles of equal radius whose central points are on the horizontal axis and which intersect on the vertical axis (live).
Next, we draw diameters through the upper intersection point and then tangents through the opposite points of these diameters. These tangents intersect on the y axis.
Constructing diameters through the upper intersection point and tangents to the circle at the opposite ends of these diameters, tangents which intersect on the vertical axis (live).
The upper intersection point and the diametrically opposite points make up three of the five end points we need. The other two end points split the outer half circle arcs into two equal parts, thus giving us four quarter circle arcs.
Highlighting the end points of the cubic Bézier curves that make up the heart and the coinciding control points of the bottom one of these curves (live).
Both control points for the curve at the bottom coincide with the intersection of the two tangents drawn previously. But what about the other four curves? How can we go from circular arcs to cubic Bézier curves?
We don't have a cubic Bézier curve equivalent for a quarter circle arc, but we can find a very good approximation, as explained in this article.
The gist of it is that we start from a quarter circle arc of radius R and draw tangents to the end points of this arc (N and Q). These tangents intersect at P. The quadrilateral ONPQ has all angles equal to 90° (or π/2), three of them by construction (O corresponds to a 90° arc and the tangent to a point of that circle is always perpendicular onto the radial line to the same point) and the final one by computation (the sum of angles in a quadrilateral is always 360° and the other three angles add up to 270°). This makes ONPQ a rectangle. But ONPQ also has two consecutive edges equal (OQ and ON are both radial lines, equal to R in length), which makes it a square of edge R. So the lengths of NP and QP are also equal to R.
Approximating a quarter circle arc with a cubic Bézier curve (live).
The control points of the cubic curve approximating our arc are on the tangent lines NP and QP, at C·R away from the end points, where C is the constant the previously linked article computes to be .551915.
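Putting that into code, here's a small standalone sketch (not part of the final demo) approximating the quarter circle arc of radius R going from N(R,0) to Q(0,R) around the origin - the control points simply sit on the tangents, C·R away from the end points:
// cubic Bézier approximation of the quarter circle arc from N(R, 0) to Q(0, R)
const C = .551915; // constant from the referenced article

function quarterArcCurve(R) {
  return {
    start: [R, 0],    // N
    ctrl1: [R, C*R],  // on tangent NP, C·R away from N
    ctrl2: [C*R, R],  // on tangent QP, C·R away from Q
    end:   [0, R]     // Q
  };
}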
Given all of this, we can now start computing the coordinates of the end points and control points of the cubic curves making up our star.
Due to the way we've chosen to construct this heart, TO0SO1 (see figure below) is a square since it has all edges equal (all are radii of one of our two equal circles) and its diagonals are equal by construction (we said the distance between the central points equals that between the intersection points). Here, O is the intersection of the diagonals and OT is half the ST diagonal. T and S are on the y axis, so their x coordinate is 0. Their y coordinate in absolute value equals the OT segment, which is half the diagonal (as is the OS segment).
The TO0SO1 square (live).
We can split any square of edge length l into two equal right isosceles triangles where the catheti coincide with the square edges and the hypotenuse coincides with a diagonal.
Any square can be split into two congruent right isosceles triangles (live).
Using one of these right triangles, we can compute the hypotenuse (and therefore the square diagonal) using Pythagoras' theorem: d² = l² + l². This gives us the square diagonal as a function of the edge: d = √(2∙l²) = l∙√2 (conversely, the edge as a function of the diagonal is l = d/√2). It also means that half the diagonal is d/2 = (l∙√2)/2 = l/√2.
Applying this to our TO0SO1 square of edge length R, we get that the y coordinate of T (which, in absolute value, equals half this square's diagonal) is -R/√2 and the y coordinate of S is R/√2.
The coordinates of the vertices of the TO0SO1 square (live).
Similarly, the Ok points are on the x axis, so their y coordinates are 0, while their x coordinates are given by the half diagonal OOk: ±R/√2.
TO0SO1 being a square also means all of its angles are 90° (π/2 in radians) angles.
The TAkBkS quadrilaterals (live).
In the illustration above, the TBk segments are diameter segments, meaning that the TBk arcs are half circle, or 180° arcs and we've split them into two equal halves with the Ak points, getting two equal 90° arcs - TAk and AkBk, which correspond to two equal 90° angles, ∠TOkAk and ∠AkOkBk.
Given that ∠TOkS are 90° angles and ∠TOkAk are also 90° angles by construction, it results that the SAk segments are also diameter segments. This gives us that in the TAkBkS quadrilaterals, the diagonals TBk and SAk are perpendicular, equal and cross each other in the middle (TOk, OkBk, SOk and OkAk are all equal to the initial circle radius R). This means the TAkBkS quadrilaterals are squares whose diagonals are 2∙R.
From here we can get that the edge length of the TAkBkS quadrilaterals is 2∙R/√2 = R∙√2. Since all angles of a square are 90° ones and the TS edge coincides with the vertical axis, this means the TAk and SBk edges are horizontal, parallel to the x axis and their length gives us the x coordinates of the Ak and Bk points: ±R∙√2.
Since TAk and SBk are horizontal segments, the y coordinates of the Ak and Bk points equal those of the T (-R/√2) and S (R/√2) points respectively.
The coordinates of the vertices of the TAkBkS squares (live).
Another thing we get from here is that, since TAkBkS are squares, AkBk are parallel with TS, which is on the y (vertical) axis, therefore the AkBk segments are vertical. Additionally, since the x axis is parallel to the TAk and SBk segments and it cuts the TS, it results that it also cuts the AkBk segments in half.
Now let's move on to the control points.
We start with the overlapping control points for the bottom curve.
The TB0CB1 quadrilateral (live).
The TB0CB1 quadrilateral has all angles equal to 90° (∠T since TO0SO1 is a square, ∠Bk by construction since the BkC segments are tangent to the circle at Bk and therefore perpendicular onto the radial lines OkBk at that point; and finally, ∠C can only be 90° since the sum of angles in a quadrilateral is 360° and the other three angles add up to 270°), which makes it a rectangle. It also has two consecutive edges equal - TB0 and TB1 are both diameters of the initial squares and therefore both equal to 2∙R. All of this makes it a square of edge 2∙R.
From here, we can get its diagonal TC - it's 2∙R∙√2. Since C is on the y axis, its x coordinate is 0. Its y coordinate is the length of the OC segment. The OC segment is the TC segment minus the OT segment: 2∙R∙√2 - R/√2 = 4∙R/√2 - R/√2 = 3∙R/√2.
The coordinates of the vertices of the TB0CB1 square (live).
So the coordinates of the two coinciding control points for the bottom curve are (0, 3∙R/√2).
In order to get the coordinates of the control points for the other curves, we draw tangents through their endpoints and we get the intersections of these tangents at Dk and Ek.
The TOkAkDk and AkOkBkEk quadrilaterals (live).
In the TOkAkDk quadrilaterals, we have that all angles are 90° (right) angles, three of them by construction (∠DkTOk and ∠DkAkOk are the angles between the radial and tangent lines at T and Ak respectively, while ∠TOkAk are the angles corresponding to the quarter circle arcs TAk) and the fourth by computation (the sum of angles in a quadrilateral is 360° and the other three add up to 270°). This makes TOkAkDk rectangles. Since they have two consecutive edges equal (OkT and OkAk are radial segments of length R), they are also squares.
This means the diagonals TAk and OkDk are R∙√2. We already know that TAk are horizontal and, since the diagonals of a square are perpendicular, it results the OkDk segments are vertical. This means the Ok and Dk points have the same x coordinate, which we've already computed for Ok to be ±R/√2. Since we know the length of OkDk, we can also get the y coordinates - they're the diagonal length (R∙√2) with minus in front.
Similarly, in the AkOkBkEk quadrilaterals, we have that all angles are 90° (right) angles, three of them by construction (∠EkAkOk and ∠EkBkOk are the angles between the radial and tangent lines at Ak and Bk respectively, while ∠AkOkBk are the angles corresponding to the quarter circle arcs AkBk) and the fourth by computation (the sum of angles in a quadrilateral is 360° and the other three add up to 270°). This makes AkOkBkEk rectangles. Since they have two consecutive edges equal (OkAk and OkBk are radial segments of length R), they are also squares.
From here, we get the diagonals AkBk and OkEk are R∙√2. We know the AkBk segments are vertical and split into half by the horizontal axis, which means the OkEk segments are on this axis and the y coordinates of the Ek points are 0. Since the x coordinates of the Ok points are ±R/√2 and the OkEk segments are R∙√2, we can compute those of the Ek points as well - they're ±3∙R/√2.
The coordinates of the newly computed vertices of the TOₖAₖDₖ and AₖOₖBₖEₖ squares (live).
Alright, but these intersection points for the tangents are not the control points we need to get the circular arc approximations. The control points we want are on the TDk, AkDk, AkEk and BkEk segments at about 55% (this value is given by the constant C computed in the previously mentioned article) away from the curve end points (T, Ak, Bk). This means the segments from the endpoints to the control points are C∙R.
In this situation, the coordinates of our control points are 1 - C of those of the end points (T, Ak and Bk) plus C of those of the points where the tangents at the end points intersect (Dk and Ek).
So let's put all of this into JavaScript code!
Just like in the star case, we start with a getHeartPoints(f) function which depends on an arbitrary factor (f) that's going to help us get the radius of the helper circles from the viewBox size. This function also returns an array of coordinates we later use for interpolation.
Inside, we compute the stuff that doesn't change throughout the function. First off, the radius of the helper circles. From that, the half diagonal of the small squares whose edge equals this helper circle radius, half diagonal which is also the circumradius of these squares. Afterwards, the coordinates of the end points of our cubic curves (the T, Ak, Bk points), in absolute value for the ones along the horizontal axis. Then we move on to the coordinates of the points where the tangents through the end points intersect (the C, Dk, Ek points). These either coincide with the control points (C) or can help us get the control points (this is the case for Dk and Ek).
function getHeartPoints(f = .25) {
const R = f*D /* helper circle radius */,
RC = Math.round(R/Math.SQRT2) /* circumradius of square of edge R */,
XT = 0, YT = -RC /* coords of point T */,
XA = 2*RC, YA = -RC /* coords of A points (x in abs value) */,
XB = 2*RC, YB = RC /* coords of B points (x in abs value) */,
XC = 0, YC = 3*RC /* coords of point C */,
XD = RC, YD = -2*RC /* coords of D points (x in abs value) */,
XE = 3*RC, YE = 0 /* coords of E points (x in abs value) */;
}
The interactive demo below shows the coordinates of these points on click:
See the Pen by thebabydino (@thebabydino) on CodePen.
Now we can also get the control points from the end points and the points where the tangents through the end points intersect:
function getHeartPoints(f = .25) {
/* same as before */
const /* const for cubic curve approx of quarter circle */
C = .551915,
CC = 1 - C,
/* coords of ctrl points on TD segs */
XTD = Math.round(CC*XT + C*XD), YTD = Math.round(CC*YT + C*YD),
/* coords of ctrl points on AD segs */
XAD = Math.round(CC*XA + C*XD), YAD = Math.round(CC*YA + C*YD),
/* coords of ctrl points on AE segs */
XAE = Math.round(CC*XA + C*XE), YAE = Math.round(CC*YA + C*YE),
/* coords of ctrl points on BE segs */
XBE = Math.round(CC*XB + C*XE), YBE = Math.round(CC*YB + C*YE);

/* same as before */
}
Next, we need to put the relevant coordinates into an array and return this array. In the case of the star, we started with the bottom curve and then went clockwise, so we do the same here. For every curve, we push two sets of coordinates for the control points and then one set for the point where the current curve ends.
See the Pen by thebabydino (@thebabydino) on CodePen.
Note that in the case of the first (bottom) curve, the two control points coincide, so we push the same pair of coordinates twice. The code doesn't look anywhere near as nice as in the case of the star, but it will have to suffice:
return [
[XC, YC], [XC, YC], [-XB, YB],
[-XBE, YBE], [-XAE, YAE], [-XA, YA],
[-XAD, YAD], [-XTD, YTD], [XT, YT],
[XTD, YTD], [XAD, YAD], [XA, YA],
[XAE, YAE], [XBE, YBE], [XB, YB]
];
We can now take our star demo and use the getHeartPoints() function for the final state, no rotation and a crimson fill instead. Then, we set the current state to the final shape, just so that we can see the heart:
function fnStr(fname, farg) { return `${fname}(${farg})` };

(function init() {
_SVG.setAttribute('viewBox', [-.5*D, -.5*D, D, D].join(' '));

O.d = {
ini: getStarPoints(),
fin: getHeartPoints(),
afn: function(pts) {
return pts.reduce((a, c, i) => {
return a + (i%3 ? ' ' : 'C') + c
}, `M${pts[pts.length - 1]}`)
}
};

O.transform = {
ini: -180,
fin: 0,
afn: (ang) => fnStr('rotate', ang)
};

O.fill = {
ini: [255, 215, 0],
fin: [220, 20, 60],
afn: (rgb) => fnStr('rgb', rgb)
};

for(let p in O) _SHAPE.setAttribute(p, O[p].afn(O[p].fin))
})();
This gives us a nice looking heart:
See the Pen by thebabydino (@thebabydino) on CodePen.
Ensuring consistent shape alignment
However, if we place the two shapes one on top of the other with no fill or transform, just a stroke, we see the alignment looks pretty bad:
See the Pen by thebabydino (@thebabydino) on CodePen.
The easiest way to solve this issue is to shift the heart up by an amount depending on the radius of the helper circles:
return [ /* same coords */ ].map(([x, y]) => [x, y - .09*R])
We now have much better alignment, regardless of how we tweak the f factor in either case. This is the factor that determines the pentagram circumradius relative to the viewBox size in the star case (where the default is .5) and the radius of the helper circles relative to the same viewBox size in the heart case (where the default is .25).
See the Pen by thebabydino (@thebabydino) on CodePen.
Switching between the two shapes
We want to go from one shape to the other on click. In order to do this, we set a direction dir variable which is 1 when we go from star to heart and -1 when we go from heart to star. Initially, it's -1, as if we've just switched from heart to star.
Then we add a 'click' event listener on the _SHAPE element and code what happens in this situation - we change the sign of the direction (dir) variable and we change the shape's attributes so that we go from a golden star to a crimson heart or the other way around:
let dir = -1;

(function init() {
/* same as before */

_SHAPE.addEventListener('click', e => {
dir *= -1;

for(let p in O)
_SHAPE.setAttribute(p, O[p].afn(O[p][dir > 0 ? 'fin' : 'ini']));
}, false);
})();
And we're now switching between the two shapes on click:
See the Pen by thebabydino (@thebabydino) on CodePen.
Morphing from one shape to another
What we really want however is not an abrupt change from one shape to another, but a gradual one. So we use the interpolation techniques explained in the previous article to achieve this.
We first decide on a total number of frames for our transition (NF) and choose the kind of timing functions we want to use - an ease-in-out type of function for transitioning the path shape from star to heart, a bounce-ini-fin type of function for the rotation angle and an ease-out one for the fill. We only include these, though we could later add others in case we change our mind and want to explore other options as well.
/* same as before */
const NF = 50,
TFN = {
'ease-out': function(k) {
return 1 - Math.pow(1 - k, 1.675)
},
'ease-in-out': function(k) {
return .5*(Math.sin((k - .5)*Math.PI) + 1)
},
'bounce-ini-fin': function(k, s = -.65*Math.PI, e = -s) {
return (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s))
}
};
We then specify which of these timing functions we use for each property we transition:
(function init() {
/* same as before */

O.d = {
/* same as before */
tfn: 'ease-in-out'
};

O.transform = {
/* same as before */
tfn: 'bounce-ini-fin'
};

O.fill = {
/* same as before */
tfn: 'ease-out'
};

/* same as before */
})();
We move on to adding a request ID (rID) and a current frame (cf) variable, plus an update() function that we first call on click and then on every refresh of the display until the transition finishes, at which point we call a stopAni() function to exit this animation loop. Within the update() function, we... well, update the current frame cf, compute a progress k and decide whether we've reached the end of the transition (in which case we exit the animation loop) or we carry on.
We also add a multiplier m variable which we use so that we don't reverse the timing functions when we go from the final state (heart) back to the initial one (star): as we'll see in the click handler, m is 0 when going forward and 1 when going backward, which turns the interpolation factor into tfn(k) one way and 1 - tfn(1 - k) the other way - the same easing curve, just played from the other end.
let rID = null, cf = 0, m;

function stopAni() {
cancelAnimationFrame(rID);
rID = null;
};

function update() {
cf += dir;

let k = cf/NF;

if(!(cf%NF)) {
stopAni();
return
}

rID = requestAnimationFrame(update)
};
Then we need to change what we do on click:
_SHAPE.addEventListener('click', e => {
if(rID) stopAni();
dir *= -1;
m = .5*(1 - dir);
update();
}, false);
Within the update() function, we want to set the attributes we transition to some intermediate values (depending on the progress k). As seen in the previous article, it's good to have the ranges between the final and initial values precomputed at the beginning, before even setting the listener, so that's our next step: creating a function that computes the range between numbers, whether as such or in arrays, no matter how deep and then using this function to set the ranges for the properties we want to transition.
function range(ini, fin) {
return typeof ini == 'number' ?
fin - ini :
ini.map((c, i) => range(ini[i], fin[i]))
};

(function init() {
/* same as before */

for(let p in O) {
O[p].rng = range(O[p].ini, O[p].fin);
_SHAPE.setAttribute(p, O[p].afn(O[p].ini));
}

/* same as before */
})();
Now all that's left to do is the interpolation part in the update() function. Using a loop, we go through all the attributes we want to smoothly change from one end state to the other. Within this loop, we set their current value to the one we get as the result of an interpolation function which depends on the initial value(s), range(s) of the current attribute (ini and rng), on the timing function we use (tfn) and on the progress (k):
function update() {
/* same as before */

for(let p in O) {
let c = O[p];

_SHAPE.setAttribute(p, c.afn(int(c.ini, c.rng, TFN[c.tfn], k)));
}

/* same as before */
};
The last step is to write this interpolation function. It's pretty similar to the one that gives us the range values:
function int(ini, rng, tfn, k) {
return typeof ini == 'number' ?
Math.round(ini + (m + dir*tfn(m + dir*k))*rng) :
ini.map((c, i) => int(ini[i], rng[i], tfn, k))
};
This finally gives us a shape that morphs from star to heart on click and goes back to star on a second click!
See the Pen by thebabydino (@thebabydino) on CodePen.
It's almost what we wanted - there's still one tiny issue. For cyclic values like angle values, we don't want to go back by half a circle on the second click. Instead, we want to continue going in the same direction for another half circle. Adding the half circle traveled after the second click to the one traveled after the first click, we get a full circle, so we're right back where we started.
We put this into code by adding an optional continuity property and tweaking the updating and interpolating functions a bit:
function int(ini, rng, tfn, k, cnt) {
return typeof ini == 'number' ?
Math.round(ini + cnt*(m + dir*tfn(m + dir*k))*rng) :
ini.map((c, i) => int(ini[i], rng[i], tfn, k, cnt))
};

function update() {
/* same as before */

for(let p in O) {
let c = O[p];

_SHAPE.setAttribute(p, c.afn(int(c.ini, c.rng, TFN[c.tfn], k, c.cnt ? dir : 1)));
}

/* same as before */
};

(function init() {
/* same as before */

O.transform = {
ini: -180,
fin: 0,
afn: (ang) => fnStr('rotate', ang),
tfn: 'bounce-ini-fin',
cnt: 1
};

/* same as before */
})();
We now have the result we've been after: a shape that morphs from a golden star into a crimson heart and rotates clockwise by half a circle every time it goes from one state to the other:
See the Pen by thebabydino (@thebabydino) on CodePen.

Creating a Star to Heart Animation with SVG and Vanilla JavaScript is a post from CSS-Tricks
Source: CssTricks


Emulating CSS Timing Functions with JavaScript

CSS animations and transitions are great! However, while recently toying with an idea, I got really frustrated with the fact that gradients are only animatable in Edge (and IE 10+). Yes, we can do all sorts of tricks with background-position, background-size, background-blend-mode or even opacity and transform on a pseudo-element/child, but sometimes these are just not enough. Not to mention that we run into similar problems when wanting to animate SVG attributes without a CSS correspondent.
Using a lot of examples, this article is going to explain how to smoothly go from one state to another in a fashion similar to that of common CSS timing functions, using just a little bit of JavaScript and no library, so we avoid pulling in a lot of complicated and unnecessary code that may become a burden in the future.

This is not how the CSS timing functions work. This is an approach that I find simpler and more intuitive than working with Bézier curves. I'm going to show how to experiment with different timing functions using JavaScript and dissect use cases. It is not a tutorial on how to do beautiful animation.
A few examples using a linear timing function
Let's start with a left to right linear-gradient() with a sharp transition where we want to animate the first stop. Here's a way to express that using CSS custom properties:
background: linear-gradient(90deg, #ff9800 var(--stop, 0%), #3c3c3c 0);
On click, we want the value of this stop to go from 0% to 100% (or the other way around, depending on the state it's already in) over the course of NF frames. If an animation is already running at the time of the click, we stop it, change its direction, then restart it.
We also need a few variables such as the request ID (this gets returned by requestAnimationFrame), the index of the current frame (an integer in the [0, NF] interval, starting at 0) and the direction our transition is going in (which is 1 when going towards 100% and -1 when going towards 0%).
While nothing is changing, the request ID is null. We also set the current frame index to 0 initially and the direction to -1, as if we've just arrived at 0% from 100%.
const NF = 80; // number of frames transition happens over

let rID = null, f = 0, dir = -1;

function stopAni() {
cancelAnimationFrame(rID);
rID = null;
};

function update() {};

addEventListener('click', e => {
if(rID) stopAni(); // if an animation is already running, stop it
dir *= -1; // change animation direction
update();
}, false);
Now all that's left is to populate the update() function. Within it, we update the current frame index f. Then we compute a progress variable k as the ratio between this current frame index f and the total number of frames NF. Given that f goes from 0 to NF (included), this means that our progress k goes from 0 to 1. Multiply this with 100% and we get the desired stop.
After this, we check whether we've reached one of the end states. If we have, we stop the animation and exit the update() function.
function update() {
f += dir; // update current frame index

let k = f/NF; // compute progress

document.body.style.setProperty('--stop', `${+(k*100).toFixed(2)}%`);

if(!(f%NF)) {
stopAni();
return
}

rID = requestAnimationFrame(update)
};
The result can be seen in the Pen below (note that we go back on a second click):
See the Pen by thebabydino (@thebabydino) on CodePen.
The way the pseudo-element is made to contrast with the background below is explained in an older article.
The above demo may look like something we could easily achieve with an element and translating a pseudo-element that can fully cover it, but things get a lot more interesting if we give the background-size a value that's smaller than 100% along the x axis, let's say 5em:
See the Pen by thebabydino (@thebabydino) on CodePen.
This gives us a sort of a "vertical blinds" effect that cannot be replicated in a clean manner with just CSS if we don't want to use more than one element.
Another option would be not to alternate the direction and always sweep from left to right, except only odd sweeps would be orange. This requires tweaking the CSS a bit:
--c0: #ff9800;
--c1: #3c3c3c;
background: linear-gradient(90deg,
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0)
In the JavaScript, we ditch the direction variable and add a type one (typ) that switches between 0 and 1 at the end of every transition. That's when we also update all custom properties:
const S = document.body.style;

let typ = 0;

function update() {
let k = ++f/NF;

S.setProperty('--stop', `${+(k*100).toFixed(2)}%`);

if(!(f%NF)) {
f = 0;
S.setProperty('--gc1', `var(--c${typ})`);
typ = 1 - typ;
S.setProperty('--gc0', `var(--c${typ})`);
S.setProperty('--stop', `0%`);
stopAni();
return
}

rID = requestAnimationFrame(update)
};
This gives us the desired result (click at least twice to see how the effect differs from that in the first demo):
See the Pen by thebabydino (@thebabydino) on CodePen.
We could also change the gradient angle instead of the stop. In this case, the background rule becomes:
background: linear-gradient(var(--angle, 0deg),
#ff9800 50%, #3c3c3c 0);
In the JavaScript code, we tweak the update() function:
function update() {
f += dir;

let k = f/NF;

document.body.style.setProperty(
'--angle',
`${+(k*180).toFixed(2)}deg`
);

if(!(f%NF)) {
stopAni();
return
}

rID = requestAnimationFrame(update)
};
We now have a gradient angle transition in between the two states (0deg and 180deg):
See the Pen by thebabydino (@thebabydino) on CodePen.
In this case, we might also want to keep going clockwise to get back to the 0deg state instead of changing the direction. So we just ditch the dir variable altogether, discard any clicks happening during the transition, and always increment the frame index f, resetting it to 0 when we've completed a full rotation around the circle:
function update() {
let k = ++f/NF;

document.body.style.setProperty(
'--angle',
`${+(k*180).toFixed(2)}deg`
);

if(!(f%NF)) {
f = f%(2*NF);
stopAni();
return
}

rID = requestAnimationFrame(update)
};

addEventListener('click', e => {
if(!rID) update()
}, false);
The following Pen illustrates the result - our rotation is now always clockwise:
See the Pen by thebabydino (@thebabydino) on CodePen.
Something else we could do is use a radial-gradient() and animate the radial stop:
background: radial-gradient(circle,
#ff9800 var(--stop, 0%), #3c3c3c 0);
The JavaScript code is identical to that of the first demo and the result can be seen below:
See the Pen by thebabydino (@thebabydino) on CodePen.
We may also not want to go back when clicking again, but instead make another blob grow and cover the entire viewport. In this case, we add a few more custom properties to the CSS:
--c0: #ff9800;
--c1: #3c3c3c;
background: radial-gradient(circle,
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0)
The JavaScript is the same as in the case of the third linear-gradient() demo. This gives us the result we want:
See the Pen by thebabydino (@thebabydino) on CodePen.
A fun tweak to this would be to make our circle start growing from the point we clicked. To do so, we introduce two more custom properties, --x and --y:
background: radial-gradient(circle at var(--x, 50%) var(--y, 50%),
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0)
When clicking, we set these to the coordinates of the point where the click happened:
addEventListener('click', e => {
if(!rID) {
S.setProperty('--x', `${e.clientX}px`);
S.setProperty('--y', `${e.clientY}px`);
update();
}
}, false);
This gives us the following result where we have a disc growing from the point where we clicked:
See the Pen by thebabydino (@thebabydino) on CodePen.
Another option would be using a conic-gradient() and animating the angular stop:
background: conic-gradient(#ff9800 var(--stop, 0%), #3c3c3c 0%)
Note that in the case of conic-gradient(), we must use a unit for the zero value (whether that unit is % or an angular one like deg doesn't matter), otherwise our code won't work - writing conic-gradient(#ff9800 var(--stop, 0%), #3c3c3c 0) means nothing gets displayed.
The JavaScript is the same as for animating the stop in the linear or radial case, but bear in mind that this currently only works in Chrome with Experimental Web Platform Features enabled in chrome://flags.
The Experimental Web Platform Features flag enabled in Chrome Canary (63.0.3210.0).
Just for the purpose of displaying conic gradients in the browser, there's a polyfill by Lea Verou and this works cross-browser but doesn't allow using CSS custom properties.
The recording below illustrates how our code works:
Recording of how our first conic-gradient() demo works in Chrome with the flag enabled (live demo).
This is another situation where we might not want to go back on a second click. This means we need to alter the CSS a bit, in the same way we did for the last radial-gradient() demo:
--c0: #ff9800;
--c1: #3c3c3c;
background: conic-gradient(
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0%)
The JavaScript code is exactly the same as in the corresponding linear-gradient() or radial-gradient() case and the result can be seen below:
Recording of how our second conic-gradient() demo works in Chrome with the flag enabled (live demo).
Before we move on to other timing functions, there's one more thing to cover: the case when we don't go from 0% to 100%, but in between any two values. We take the example of our first linear-gradient, but with a different default for --stop, let's say 85% and we also set a --stop-fin value - this is going to be the final value for --stop:
--stop-ini: 85%;
--stop-fin: 26%;
background: linear-gradient(90deg, #ff9800 var(--stop, var(--stop-ini)), #3c3c3c 0)
In the JavaScript, we read these two values - the initial (default) and the final one - and we compute a range as the difference between them:
const S = getComputedStyle(document.body),
INI = +S.getPropertyValue('--stop-ini').replace('%', ''),
FIN = +S.getPropertyValue('--stop-fin').replace('%', ''),
RANGE = FIN - INI;
Finally, in the update() function, we take into account the initial value and the range when setting the current value for --stop:
document.body.style.setProperty(
'--stop',
`${+(INI + k*RANGE).toFixed(2)}%`
);
With these changes we now have a transition in between 85% and 26% (and the other way on even clicks):
See the Pen by thebabydino (@thebabydino) on CodePen.
If we want to mix units for the stop value, things get hairier as we need to compute more things (box dimensions when mixing % and px, font sizes if we throw em or rem in the mix, viewport dimensions if we want to use viewport units, the length of the 0% to 100% segment on the gradient line for gradients that are not horizontal or vertical), but the basic idea remains the same.
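For instance, a small helper could bring every stop value to px before we compute the range. The sketch below assumes a horizontal gradient (so % resolves against the element's width); toPx() is a hypothetical helper, not something used in the demos above:
function toPx(value, el) {
  let n = parseFloat(value);

  // % of the element's width (the gradient line for a horizontal gradient)
  if(value.indexOf('%') > -1) return n/100*el.clientWidth;
  // rem: relative to the root font size
  if(value.indexOf('rem') > -1)
    return n*parseFloat(getComputedStyle(document.documentElement).fontSize);
  // em: relative to the element's own font size
  if(value.indexOf('em') > -1)
    return n*parseFloat(getComputedStyle(el).fontSize);

  return n // otherwise assume it's already a px value
};
With both ends converted to px, the range and the interpolated values can then be computed exactly as before and written back with a px unit.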
Emulating ease-in/ ease-out
An ease-in kind of function means the change in value happens slow at first and then accelerates. ease-out is exactly the opposite - the change happens fast in the beginning, but then slows down towards the end.
The ease-in (left) and ease-out (right) timing functions (live).
The slope of the curves above gives us the rate of change. The steeper it is, the faster the change in value happens.
We can emulate these functions by tweaking the linear method described in the first section. Since k takes values in the [0, 1] interval, raising it to any positive power also gives us a number within the same interval. The interactive demo below shows the graph of a function f(k) = pow(k, p) (k raised to an exponent p) shown in purple and that of a function g(k) = 1 - pow(1 - k, p) shown in red on the [0, 1] interval versus the identity function id(k) = k (which corresponds to a linear timing function).
See the Pen by thebabydino (@thebabydino) on CodePen.
When the exponent p is equal to 1, the graphs of the f and g functions are identical to that of the identity function.
When exponent p is greater than 1, the graph of the f function is below the identity line - the rate of change increases as k increases. This is like an ease-in type of function. The graph of the g function is above the identity line - the rate of change decreases as k increases. This is like an ease-out type of function.
It seems an exponent p of about 2 gives us an f that's pretty similar to ease-in, while g is pretty similar to ease-out. With a bit more tweaking, it looks like the best approximation is for a p value of about 1.675:
See the Pen by thebabydino (@thebabydino) on CodePen.
In this interactive demo, we want the graphs of the f and g functions to be as close as possible to the dashed lines, which represent the ease-in timing function (below the identity line) and the ease-out timing function (above the identity line).
Emulating ease-in-out
The CSS ease-in-out timing function looks like in the illustration below:
The ease-in-out timing function (live).
So how can we get something like this?
Well, that's what harmonic functions are for! More exactly, the ease-in-out shape is reminiscent of the shape of the sin() function on the [-90°,90°] interval.
The sin(k) function on the [-90°,90°] interval (live).
However, we don't want a function whose input is in the [-90°,90°] interval and output is in the [-1,1] interval, so let's fix this!
This means we need to squish the hashed rectangle ([-90°,90°]x[-1,1]) in the illustration above into the unit one ([0,1]x[0,1]).
First, let's take the domain [-90°,90°]. If we change our function to be sin(k·180°) (or sin(k·π) in radians), then our domain becomes [-.5,.5] (we can check that -.5·180° = -90° and .5·180° = 90°):
The sin(k·π) function on the [-.5,.5] interval (live).
We can shift this domain to the right by .5 and get the desired [0,1] interval if we change our function to be sin((k - .5)·π) (we can check that 0 - .5 = -.5 and 1 - .5 = .5):
The sin((k - .5)·π) function on the [0,1] interval (live).
Now let's get the desired codomain. If we add 1 to our function, making it sin((k - .5)·π) + 1, this shifts our codomain up into the [0, 2] interval:
The sin((k - .5)·π) + 1 function on the [0,1] interval (live).
Dividing everything by 2 gives us the (sin((k - .5)·π) + 1)/2 function and compacts the codomain into our desired [0,1] interval:
The (sin((k - .5)·π) + 1)/2 function on the [0,1] interval (live).
This turns out to be a good approximation of the ease-in-out timing function (represented with an orange dashed line in the illustration above).
Comparison of all these timing functions
Let's say we want to have a bunch of elements with a linear-gradient() (like in the third demo). On click, their --stop values go from 0% to 100%, but with a different timing function for each.
In the JavaScript, we create a timing functions object with the corresponding function for each type of easing:
tfn = {
'linear': function(k) {
return k;
},
'ease-in': function(k) {
return Math.pow(k, 1.675);
},
'ease-out': function(k) {
return 1 - Math.pow(1 - k, 1.675);
},
'ease-in-out': function(k) {
return .5*(Math.sin((k - .5)*Math.PI) + 1);
}
};
For each of these, we create an article element:
const _ART = [];

let frag = document.createDocumentFragment();

for(let p in tfn) {
let art = document.createElement('article'),
hd = document.createElement('h3');

hd.textContent = p;
art.appendChild(hd);
art.setAttribute('id', p);
_ART.push(art);
frag.appendChild(art);
}

n = _ART.length;
document.body.appendChild(frag);
The update function is pretty much the same, except we set the --stop custom property for every element as the value returned by the corresponding timing function when fed the current progress k. Also, when resetting the --stop to 0% at the end of the animation, we also need to do this for every element.
function update() {
let k = ++f/NF;

for(let i = 0; i < n; i++) {
_ART[i].style.setProperty(
'--stop',
`${+tfn[_ART[i].id](k).toFixed(5)*100}%`
);
}

if(!(f%NF)) {
f = 0;

S.setProperty('--gc1', `var(--c${typ})`);
typ = 1 - typ;
S.setProperty('--gc0', `var(--c${typ})`);

for(let i = 0; i < n; i++)
_ART[i].style.setProperty('--stop', `0%`);

stopAni();
return;
}

rID = requestAnimationFrame(update)
};
This gives us a nice visual comparison of these timing functions:
See the Pen by thebabydino (@thebabydino) on CodePen.
They all start and finish at the same time, but while the progress is constant for the linear one, the ease-in one starts slowly and then accelerates, the ease-out one starts fast and then slows down and, finally, the ease-in-out one starts slowly, accelerates and then slows down again at the end.
Timing functions for bouncing transitions
I first came across the concept years ago, in Lea Verou's CSS Secrets talk. These happen when the y values (the second and fourth parameters) of a cubic-bezier() function are outside the [0, 1] range, and the effect they create is that of the animated value going outside the interval between its initial and final values.
This bounce can happen right after the transition starts, right before it finishes or at both ends.
A bounce at the start means that, at first, we don't go towards the final state, but in the opposite direction. For example, if we want to animate a stop from 43% to 57% and we have a bounce at the start, then, at first, our stop value doesn't increase towards 57%, but decreases below 43% before going back up to the final state. Similarly, if we go from an initial stop value of 57% to a final stop value of 43% and we have a bounce at the start, then, at first, the stop value increases above 57% before going down to the final value.
A bounce at the end means we overshoot our final state and only then go back to it. If we want to animate a stop from 43% to 57% and we have a bounce at the end, then we start going normally from the initial state to the final one, but towards the end, we go above 57% before going back down to it. And if we go from an initial stop value of 57% to a final stop value of 43% and we have a bounce at the end, then, at first, we go down towards the final state, but, towards the end, we pass it and we briefly have stop values below 43% before our transition finishes there.
If what they do is still difficult to grasp, below there's a comparative example of all three of them in action.
The three cases.
These kinds of timing functions don't have their own keywords associated, but they look cool and they are what we want in a lot of situations.
Just like in the case of ease-in-out, the quickest way of getting them is by using harmonic functions. The difference lies in the fact that now we don't start from the [-90°,90°] domain anymore.
For a bounce at the beginning, we start with the [s, 0°] portion of the sin() function, where s (the start angle) is in the (-180°,-90°) interval. The closer it is to -180°, the bigger the bounce is and the faster it will go to the final state after it. So we don't want it to be really close to -180° because the result would look too unnatural. We also want it to be far enough from -90° that the bounce is noticeable.
In the interactive demo below, you can drag the slider to change the start angle and then click on the stripe at the bottom to see the effect in action:
See the Pen by thebabydino (@thebabydino) on CodePen.
In the interactive demo above, the hashed area ([s,0]x[sin(s),0]) is the area we need to move and scale into the [0,1]x[0,1] square in order to get our timing function. The part of the curve that's below its lower edge is where the bounce happens. You can adjust the start angle using the slider and then click on the bottom bar to see how the transition looks for different start angles.
Just like in the ease-in-out case, we first squish the domain into the [-1,0] interval by dividing the argument with the range (which is the maximum 0 minus the minimum s). Therefore, our function becomes sin(-k·s) (we can check that -(-1)·s = s and -0·s = 0):
The sin(-k·s) function on the [-1,0] interval (live).
Next, we shift this interval to the right (by 1, into [0,1]). This makes our function sin(-(k - 1)·s) = sin((1 - k)·s) (it checks that 0 - 1 = -1 and 1 - 1 = 0):
The sin(-(k - 1)·s) function on the [0,1] interval (live).
We then shift the codomain up by its value at 0 (sin((1 - 0)·s) = sin(s)). Our function is now sin((1 - k)·s) - sin(s) and our codomain is [0,-sin(s)].
The sin(-(k - 1)·s) - sin(s) function on the [0,1] interval (live).
The last step is to expand the codomain into the [0,1] range. We do this by dividing by its upper limit (which is -sin(s)). This means our final easing function is 1 - sin((1 - k)·s)/sin(s)
The 1 - sin((1 - k)·s)/sin(s) function on the [0,1] interval (live).
For a bounce at the end, we start with the [0°, e] portion of the sin() function, where e (the end angle) is in the (90°,180°) interval. The closer it is to 180°, the bigger the bounce is and the faster it will move from the initial state to the final one before it overshoots it and the bounce happens. So we don't want it to be really close to 180° as the result would look too unnatural. We also want it to be far enough from 90° so that the bounce is noticeable.
See the Pen by thebabydino (@thebabydino) on CodePen.
In the interactive demo above, the hashed area ([0,e]x[0,sin(e)]) is the area we need to squish and move into the [0,1]x[0,1] square in order to get our timing function. The part of the curve that's below its upper edge is where the bounce happens.
We start by squishing the domain into the [0,1] interval by dividing the argument with the range (which is the maximum e minus the minimum 0). Therefore, our function becomes sin(k·e) (we can check that 0·e = 0 and 1·e = e):
The sin(k·e) function on the [0,1] interval (live).
What's still left to do is to expand the codomain into the [0,1] range. We do this by dividing by its upper limit (which is sin(e)). This means our final easing function is sin(k·e)/sin(e).
The sin(k·e)/sin(e) function on the [0,1] interval (live).
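As a quick numeric sanity check of the overshoot, here's what this timing function does to the 43% to 57% stop transition mentioned earlier (the end angle of 135° below is just an arbitrary pick from the (90°,180°) interval):
const e = .75*Math.PI; // 135°

const bounceFin = k => Math.sin(k*e)/Math.sin(e);

// a stop going from 43% to 57% (a range of 14)
[0, 1/3, 2/3, 1].forEach(k =>
  console.log(`${(43 + bounceFin(k)*14).toFixed(1)}%`)
);
// logs 43.0%, 57.0%, 62.8%, 57.0% - the stop shoots past 57%
// around k = 2/3 and only settles back down at the very end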
If we want a bounce at each end, we start with the [s, e] portion of the sin() function, where s is in the (-180°,-90°) interval and e in the (90°,180°) interval. The larger s and e are in absolute values, the bigger the corresponding bounces are and the more of the total transition time is spent on them alone. On the other hand, the closer their absolute values get to 90°, the less noticeable their corresponding bounces are. So, just like in the previous two cases, it's all about finding the right balance.
See the Pen by thebabydino (@thebabydino) on CodePen.
In the interactive demo above, the hashed area ([s,e]x[sin(s),sin(e)]) is the area we need to move and scale into the [0,1]x[0,1] square in order to get our timing function. The part of the curve that's beyond its horizontal edges is where the bounces happen.
We start by shifting the domain to the right into the [0,e - s] interval. This means our function becomes sin(k + s) (we can check that 0 + s = s and that e - s + s = e).
The sin(k + s) function on the [0,e - s] interval (live).
Then we shrink the domain to fit into the [0,1] interval, which gives us the function sin(k·(e - s) + s).
The sin(k·(e - s) + s) function on the [0,1] interval (live).
Moving on to the codomain, we first shift it up by its value at 0 (sin(0·(e - s) + s)), which means we now have sin(k·(e - s) + s) - sin(s). This gives us the new codomain [0,sin(e) - sin(s)].
The sin(k·(e - s) + s) - sin(s) function on the [0,1] interval (live).
Finally, we shrink the codomain to the [0,1] interval by dividing by the range (sin(e) - sin(s)), so our final function is (sin(k·(e - s) + s) - sin(s))/(sin(e) - sin(s)).
The (sin(k·(e - s) + s) - sin(s))/(sin(e) - sin(s)) function on the [0,1] interval (live).
So, in order to build a comparative demo similar to the one for the JS equivalents of the CSS linear, ease-in, ease-out and ease-in-out functions, our timing functions object becomes:
tfn = {
'bounce-ini': function(k) {
return 1 - Math.sin((1 - k)*s)/Math.sin(s);
},
'bounce-fin': function(k) {
return Math.sin(k*e)/Math.sin(e);
},
'bounce-ini-fin': function(k) {
return (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s));
}
};
The s and e variables are the values we get from the two range inputs that allow us to control the bounce amount.
The interactive demo below shows the visual comparison of these three types of timing functions:
See the Pen by thebabydino (@thebabydino) on CodePen.
Alternating animations
In CSS, setting animation-direction to alternate also reverses the timing function. In order to better understand this, consider a .box element on which we animate its transform property such that we move to the right. This means our @keyframes look as follows:
@keyframes shift {
0%, 10% { transform: none }
90%, 100% { transform: translate(50vw) }
}
We use a custom timing function that allows us to have a bounce at the end and we make this animation alternate - that is, go from the final state (translate(50vw)) back to the initial state (no translation) for the even-numbered iterations (second, fourth and so on).
animation: shift 1s cubic-bezier(.5, 1, .75, 1.5) infinite alternate
The result can be seen below:
See the Pen by thebabydino (@thebabydino) on CodePen.
One important thing to notice here is that, for the even-numbered iterations, our bounce doesn't happen at the end, but at the start - the timing function is reversed. Visually, this means it's reflected both horizontally and vertically with respect to the .5,.5 point.
The normal timing function (f, in red, with a bounce at the end) and the symmetrical reverse one (g, in purple, with a bounce at the start) (live)
In CSS, there is no way of having a different timing function other than the symmetrical one on going back if we are to use this set of keyframes and animation-direction: alternate. We can introduce the going back part into the keyframes and control the timing function for each stage of the animation, but that's outside the scope of this article.
When changing values with JavaScript in the fashion presented so far in this article, the same thing happens by default. Consider the case when we want to animate the stop of a linear-gradient() between an initial and a final position and we want to have a bounce at the end. This is pretty much the last example presented in the first section, but with a timing function that lets us have a bounce at the end (one in the bounce-fin category described before) instead of a linear one.
The CSS is exactly the same and we only make a few minor changes to the JavaScript code. We set a limit angle E and we use a custom bounce-fin kind of timing function in place of the linear one:
const E = .75*Math.PI;

/* same as before */

function timing(k) {
return Math.sin(k*E)/Math.sin(E)
};

function update() {
/* same as before */

document.body.style.setProperty(
'--stop',
`${+(INI + timing(k)*RANGE).toFixed(2)}%`
);

/* same as before */
};

/* same as before */
The result can be seen below:
See the Pen by thebabydino (@thebabydino) on CodePen.
In the initial state, the stop is at 85%. We animate it to 26% (which is the final state) using a timing function that gives us a bounce at the end. This means we go beyond our final stop position at 26% before going back up and stopping there. This is what happens during the odd iterations.
During the even iterations, this behaves just like in the CSS case, reversing the timing function, so that the bounce happens at the beginning, not at the end.
But what if we don't want the timing function to be reversed?
In this case, we need to use the symmetrical function. For any timing function f(k) defined on the [0,1] interval (this is the domain), whose values are in the [0,1] interval (the codomain), the symmetrical function we want is 1 - f(1 - k). Note that functions whose shape is actually symmetrical with respect to the .5,.5 point, like linear or ease-in-out, are identical to their symmetrical functions.
See the Pen by thebabydino (@thebabydino) on CodePen.
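In code, getting the symmetrical counterpart of any such timing function could look like the small sketch below (reverse() is a hypothetical helper, not part of the demos):
// given a timing function f on [0,1], its symmetrical counterpart is 1 - f(1 - k)
const reverse = f => k => 1 - f(1 - k);

// for example, reversing the ease-in approximation from earlier
// gives us back the ease-out one
const easeIn = k => Math.pow(k, 1.675);
const easeOut = reverse(easeIn); // equivalent to k => 1 - Math.pow(1 - k, 1.675)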
So what we do is use our timing function f(k) for the odd iterations and use 1 - f(1 - k) for the even ones. We can tell whether an iteration is odd or even from the direction (dir) variable. This is 1 for odd iterations and -1 for even ones.
This means we can combine our two timing functions into one: m + dir*f(m + dir*k).
Here, the multiplier m is 0 for the odd iterations (when dir is 1) and 1 for the even ones (when dir is -1), so we can compute it as .5*(1 - dir):
dir = +1 → m = .5*(1 - (+1)) = .5*(1 - 1) = .5*0 = 0
dir = -1 → m = .5*(1 - (-1)) = .5*(1 + 1) = .5*2 = 1
This way, our JavaScript becomes:
let m;

/* same as before */

function update() {
/* same as before */

document.body.style.setProperty(
'--stop',
`${+(INI + (m + dir*timing(m + dir*k))*RANGE).toFixed(2)}%`
);

/* same as before */
};

addEventListener('click', e => {
if(rID) stopAni();
dir *= -1;
m = .5*(1 - dir);
update();
}, false);
The final result can be seen in this Pen:
See the Pen by thebabydino (@thebabydino) on CodePen.
Even more examples
Gradient stops are not the only things that aren't animatable cross-browser with just CSS.
Gradient end going from orange to violet
For a first example of something different, let's say we want the orange in our gradient to animate to a kind of violet. We start with a CSS that looks something like this:
--c-ini: #ff9800;
--c-fin: #a048b9;
background: linear-gradient(90deg,
var(--c, var(--c-ini)), #3c3c3c)
In order to interpolate between the initial and final values, we need to know the format we get when reading them via JavaScript - is it going to be the same format we set them in? Is it going to be always rgb()/ rgba()?
Here is where things get a bit hairy. Consider the following test, where we have a gradient where we've used every format possible:
--c0: hsl(150, 100%, 50%); // springgreen
--c1: orange;
--c2: #8a2be2; // blueviolet
--c3: rgb(220, 20, 60); // crimson
--c4: rgba(255, 245, 238, 1); // seashell with alpha = 1
--c5: hsla(51, 100%, 50%, .5); // gold with alpha = .5
background: linear-gradient(90deg,
var(--c0), var(--c1),
var(--c2), var(--c3),
var(--c4), var(--c5))
We read the computed values of the gradient image and the individual custom properties --c0 through --c5 via JavaScript.
let s = getComputedStyle(document.body);

console.log(s.backgroundImage);
console.log(s.getPropertyValue('--c0'), 'springgreen');
console.log(s.getPropertyValue('--c1'), 'orange');
console.log(s.getPropertyValue('--c2'), 'blueviolet');
console.log(s.getPropertyValue('--c3'), 'crimson');
console.log(s.getPropertyValue('--c4'), 'seashell (alpha = 1)');
console.log(s.getPropertyValue('--c5'), 'gold (alpha = .5)');
The results seem a bit inconsistent.
Screenshots showing what gets logged in Chrome, Edge and Firefox (live).
Whatever we do, if we have an alpha strictly less than 1, what we get via JavaScript always seems to be an rgba() value, regardless of whether we set it with rgba() or hsla().
All browsers also agree when reading the custom properties directly, though, this time, what we get doesn't seem to make much sense: orange, crimson and seashell are returned as keywords regardless of how they were set, but we get hex values for springgreen and blueviolet. Except for orange, which was added in Level 2, all these values were added to CSS in Level 3, so why do we get some as keywords and others as hex values?
For the background-image, Firefox always returns the fully opaque values only as rgb(), while Chrome and Edge return them as either keywords or hex values, just like they do in the case when we read the custom properties directly.
Oh well, at least that lets us know we need to take into account different formats.
So the first thing we need to do is map the keywords to rgb() values. Not going to write all that manually, so a quick search finds this repo - perfect, it's exactly what we want! We can now set that as the value of a CMAP constant.
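The exact shape of that map depends on the repo we pick; for the code that follows, I'm assuming every keyword maps straight to an RGBA array, something along these lines:
const CMAP = {
  /* ... */
  'crimson': [220, 20, 60, 1],
  'orange': [255, 165, 0, 1],
  'seashell': [255, 245, 238, 1]
  /* ... and so on for all the CSS color keywords */
};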
The next step here is to create a getRGBA(c) function that would take a string representing a keyword, a hex or an rgb()/ rgba() value and return an array containing the RGBA values ([red, green, blue, alpha]).
We start by building our regular expressions for the hex and rgb()/ rgba() values. These are a bit loose and would catch quite a few false positives if we were to have user input, but since we're only using them on CSS computed style values, we can afford to take the quick and dirty path here:
let re_hex = /^#([a-f\d]{1,2})([a-f\d]{1,2})([a-f\d]{1,2})$/i,
re_rgb = /^rgba?\((\d{1,3},\s){2}\d{1,3}(,\s((0|1)?\.?\d*))?\)/;
Then we handle the three types of values we've seen we might get by reading the computed styles:
if(c in CMAP) return CMAP[c]; // keyword lookup, return rgb

if([4, 7].indexOf(c.length) !== -1 && re_hex.test(c)) {
c = c.match(re_hex).slice(1); // remove the '#'
if(c[0].length === 1) c = c.map(x => x + x);
// go from 3-digit form to 6-digit one
c.push(1); // add an alpha of 1

// return decimal valued RGBA array
return c.map(x => parseInt(x, 16))
}

if(re_rgb.test(c)) {
// extract values
c = c.replace(/rgba?\(/, '').replace(')', '').split(',').map(x => +x.trim());
if(c.length === 3) c.push(1); // if no alpha specified, use 1

return c // return RGBA array
}
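Assuming the three branches above are wrapped in that getRGBA(c) function (and that CMAP stores RGBA arrays as sketched earlier), a quick check against a couple of the values we logged before would give:
console.log(getRGBA('#8a2be2'));          // [138, 43, 226, 1] - blueviolet from its hex form
console.log(getRGBA('crimson'));          // [220, 20, 60, 1] - straight from the keyword map
console.log(getRGBA('rgb(220, 20, 60)')); // [220, 20, 60, 1] - with an alpha of 1 appended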
Now after adding the keyword to RGBA map (CMAP) and the getRGBA() function, our JavaScript code doesn't change much from the previous examples:
const INI = getRGBA(S.getPropertyValue('--c-ini').trim()),
FIN = getRGBA(S.getPropertyValue('--c-fin').trim()),
RANGE = [],
ALPHA = 1 - INI[3] || 1 - FIN[3];

/* same as before */

function update() {
/* same as before */

document.body.style.setProperty(
'--c',
`rgb${ALPHA ? 'a' : ''}(
${INI.map((c, i) => Math.round(c + k*RANGE[i])).join(',')})`
);

/* same as before */
};

(function init() {
if(!ALPHA) INI.pop(); // get rid of alpha if always 1
RANGE.splice(0, 0, ...INI.map((c, i) => FIN[i] - c));
})();

/* same as before */
This gives us a linear gradient animation:
See the Pen by thebabydino (@thebabydino) on CodePen.
We can also use a different, non-linear timing function, for example one that allows for a bounce at the end:
const E = .8*Math.PI;

/* same as before */

function timing(k) {
return Math.sin(k*E)/Math.sin(E)
}

function update() {
/* same as before */

document.body.style.setProperty(
'--c',
`rgb${ALPHA ? 'a' : ''}(
${INI.map((c, i) => Math.round(c + timing(k)*RANGE[i])).join(',')})`
);

/* same as before */
};

/* same as before */
This means we go all the way to a kind of blue before going back to our final violet:
See the Pen by thebabydino (@thebabydino) on CodePen.
Do note however that, in general, RGBA transitions are not the best place to illustrate bounces. That's because the RGB channels are strictly limited to the [0,255] range and the alpha channel is strictly limited to the [0,1] range. rgb(255, 0, 0) is as red as red gets, there's no redder red with a value of over 255 for the first channel. A value of 0 for the alpha channel means completely transparent, there's no greater transparency with a negative value.
By now, you're probably already bored with gradients, so let's switch to something else!
Smooth changing SVG attribute values
At this point, we cannot alter the geometry of SVG elements via CSS. We should be able to as per the SVG2 spec and Chrome does support some of this stuff, but what if we want to animate the geometry of SVG elements now, in a more cross-browser manner?
Well, you've probably guessed it, JavaScript to the rescue!
Growing a circle
Our first example is that of a circle whose radius goes from nothing (0) to a quarter of the minimum viewBox dimension. We keep the document structure simple, without any additional elements.
<svg viewBox='-100 -50 200 100'>
<circle/>
</svg>
For the JavaScript part, the only notable difference from the previous demos is that we read the SVG viewBox dimensions in order to get the maximum radius and we now set the r attribute within the update() function, not a CSS variable (it would be immensely useful if CSS variables were allowed as values for such attributes, but, sadly, we don't live in an ideal world):
const _G = document.querySelector('svg'),
_C = document.querySelector('circle'),
VB = _G.getAttribute('viewBox').split(' '),
RMAX = .25*Math.min(...VB.slice(2)),
E = .8*Math.PI;

/* same as before */

function update() {
/* same as before */

_C.setAttribute('r', (timing(k)*RMAX).toFixed(2));

/* same as before */
};

/* same as before */
Below, you can see the result when using a bounce-fin kind of timing function:
See the Pen by thebabydino (@thebabydino) on CodePen.
Pan and zoom map
Another SVG example is a smooth pan and zoom map demo. In this case, we take a map like those from amCharts, clean up the SVG and then create this effect by triggering a linear viewBox animation when pressing the +/ - keys (zoom) and the arrow keys (pan).
The first thing we do in the JavaScript is create a navigation map, where we take the key codes of interest and attach info about what we do when the corresponding keys are pressed (note that we need different key codes for + and - in Firefox for some reason).
const NAV_MAP = {
187: { dir: 1, act: 'zoom', name: 'in' } /* + */,
61: { dir: 1, act: 'zoom', name: 'in' } /* + Firefox ¯\_(ツ)_/¯ */,
189: { dir: -1, act: 'zoom', name: 'out' } /* - */,
173: { dir: -1, act: 'zoom', name: 'out' } /* - Firefox ¯\_(ツ)_/¯ */,
37: { dir: -1, act: 'move', name: 'left', axis: 0 } /* ⇦ */,
38: { dir: -1, act: 'move', name: 'up', axis: 1 } /* ⇧ */,
39: { dir: 1, act: 'move', name: 'right', axis: 0 } /* ⇨ */,
40: { dir: 1, act: 'move', name: 'down', axis: 1 } /* ⇩ */
}
When pressing the + key, what we want to do is zoom in. The action we perform is 'zoom' in the positive direction - we go 'in'. Similarly, when pressing the - key, the action is also 'zoom', but in the negative (-1) direction - we go 'out'.
When pressing the arrow left key, the action we perform is 'move' along the x axis (which is the first axis, at index 0) in the negative (-1) direction - we go 'left'. When pressing the arrow up key, the action we perform is 'move' along the y axis (which is the second axis, at index 1) in the negative (-1) direction - we go 'up'.
When pressing the arrow right key, the action we perform is 'move' along the x axis (which is the first axis, at index 0) in the positive direction - we go 'right'. When pressing the arrow down key, the action we perform is 'move' along the y axis (which is the second axis, at index 1) in the positive direction - we go 'down'.
We then get the SVG element, its initial viewBox, set the maximum zoom out level to these initial viewBox dimensions and set the smallest possible viewBox width to a much smaller value (let's say 8).
const _SVG = document.querySelector('svg'),
VB = _SVG.getAttribute('viewBox').split(' ').map(c => +c),
DMAX = VB.slice(2), WMIN = 8;
We also create an empty current navigation object to hold the current navigation action data and a target viewBox array to contain the final state we animate the viewBox to for the current animation.
let nav = {}, tg = Array(4);
On 'keyup', if we don't have any animation running already and the key that was pressed is one of interest, we get the current navigation object from the navigation map we created at the beginning. After this, we handle the two action cases ('zoom'/ 'move') and call the update() function:
addEventListener('keyup', e => {
if(!rID && e.keyCode in NAV_MAP) {
nav = NAV_MAP[e.keyCode];

if(nav.act === 'zoom') {
/* what we do if the action is 'zoom' */
}

else if(nav.act === 'move') {
/* what we do if the action is 'move' */
}

update()
}
}, false);
Now let's see what we do if we zoom. First off, and this is a very useful programming tactic in general, not just here in particular, we get the edge cases that make us exit the function out of the way.
So what are our edge cases here?
The first one is when we want to zoom out (a zoom in the negative direction) when our whole map is already in sight (the current viewBox dimensions are bigger or equal to the maximum ones). In our case, this should happen if we want to zoom out at the very beginning because we start with the whole map in sight.
The second edge case is when we hit the other limit - we want to zoom in, but we're at the maximum detail level (the current viewBox dimensions are smaller or equal to the minimum ones).
Putting the above into JavaScript code, we have:
if(nav.act === 'zoom') {
if((nav.dir === -1 && VB[2] >= DMAX[0]) ||
(nav.dir === 1 && VB[2] <= WMIN)) {
console.log(`cannot ${nav.act} ${nav.name} more`);
return
}

/* main case */
}
Now that we've handled the edge cases, let's move on to the main case. Here, we set the target viewBox values. We use a 2x zoom on each step, meaning that when we zoom in, the target viewBox dimensions are half the ones at the start of the current zoom action, and when we zoom out they're double. The target offsets are half the difference between the maximum viewBox dimensions and the target ones.
if(nav.act === 'zoom') {
/* edge cases */

for(let i = 0; i < 2; i++) {
tg[i + 2] = VB[i + 2]/Math.pow(2, nav.dir);
tg[i] = .5*(DMAX[i] - tg[i + 2]);
}
}
Next, let's see what we do if we want to move instead of zooming.
In a similar fashion, we get the edge cases that make us exit the function out of the way first. Here, these happen when we're at an edge of the map and we want to keep going in that direction (whatever the direction might be). Since originally the top left corner of our viewBox is at 0,0, this means we cannot go below 0 or above the maximum viewBox size minus the current one. Note that given we're initially fully zoomed out, this also means we cannot move in any direction until we zoom in.
else if(nav.act === 'move') {
if((nav.dir === -1 && VB[nav.axis] <= 0) ||
(nav.dir === 1 && VB[nav.axis] >= DMAX[nav.axis] - VB[2 + nav.axis])) {
console.log(`at the edge, cannot go ${nav.name}`);
return
}

/* main case */
}
For the main case, we move in the desired direction by half the viewBox size along that axis:
else if(nav.act === 'move') {
/* edge cases */

tg[nav.axis] = VB[nav.axis] + .5*nav.dir*VB[2 + nav.axis]
}
Now let's see what we need to do inside the update() function. This is going to be pretty similar to previous demos, except now we need to handle the 'move' and 'zoom' cases separately. We also create an array to store the current viewBox data in (cvb):
function update() {
let k = ++f/NF, j = 1 - k, cvb = VB.slice();

if(nav.act === 'zoom') {
/* what we do if the action is zoom */
}

if(nav.act === 'move') {
/* what we do if the action is move */
}

_SVG.setAttribute('viewBox', cvb.join(' '));

if(!(f%NF)) {
f = 0;
VB.splice(0, 4, ...cvb);
nav = {};
tg = Array(4);
stopAni();
return
}

rID = requestAnimationFrame(update)
};
In the 'zoom' case, we need to recompute all viewBox values. We do this with linear interpolation between the values at the start of the animation and the target values we've previously computed:
if(nav.act === 'zoom') {
for(let i = 0; i < 4; i++)
cvb[i] = j*VB[i] + k*tg[i];
}
In the 'move' case, we only need to recompute one viewBox value - the offset for the axis we move along:
if(nav.act === 'move')
cvb[nav.axis] = j*VB[nav.axis] + k*tg[nav.axis];
And that's it! We now have a working pan and zoom demo with smooth linear transitions in between states:
See the Pen by thebabydino (@thebabydino) on CodePen.
From sad square to happy circle
Another example would be morphing a sad square SVG into a happy circle. We create an SVG with a square viewBox whose 0,0 point is right in the middle. Symmetrical with respect to the origin of the SVG system of coordinates we have a square (a rect element) covering 80% of the SVG. This is our face. We create the eyes with an ellipse and a copy of it, symmetrical with respect to the vertical axis. The mouth is a cubic Bézier curve created with a path element.
- var vb_d = 500, vb_o = -.5*vb_d;
- var fd = .8*vb_d, fr = .5*fd;

svg(viewBox=[vb_o, vb_o, vb_d, vb_d].join(' '))
rect(x=-fr y=-fr width=fd height=fd)
ellipse#eye(cx=.35*fr cy=-.25*fr
rx=.1*fr ry=.15*fr)
use(xlink:href='#eye'
transform='scale(-1 1)')
path(d=`M${-.35*fr} ${.35*fr}
C${-.21*fr} ${.13*fr}
${+.21*fr} ${.13*fr}
${+.35*fr} ${.35*fr}`)
In the JavaScript, we get the face and the mouth elements. We read the face width, which is equal to the height and we use it to compute the maximum corner rounding. This is the value for which we get a circle and is equal to half the square edge. We also get the mouth path data, from where we extract the initial y coordinate of the control points and compute the final y coordinate of the same control points.
const _FACE = document.querySelector('rect'),
_MOUTH = document.querySelector('path'),
RMAX = .5*_FACE.getAttribute('width'),
DATA = _MOUTH.getAttribute('d').slice(1)
.replace('C', '').split(/\s+/)
.map(c => +c),
CPY_INI = DATA[3],
CPY_RANGE = 2*(DATA[1] - DATA[3]);
The rest is very similar to all other transition on click demos so far, with just a few minor differences (note that we use an ease-out kind of timing function):
/* same as before */

function timing(k) { return 1 - Math.pow(1 - k, 2) };

function update() {
f += dir;

let k = f/NF, cpy = CPY_INI + timing(k)*CPY_RANGE;

_FACE.setAttribute('rx', (timing(k)*RMAX).toFixed(2));
_MOUTH.setAttribute(
'd',
`M${DATA.slice(0,2)}
C${DATA[2]} ${cpy} ${DATA[4]} ${cpy} ${DATA.slice(-2)}`
);

/* same as before */
};

/* same as before */
And so we have our silly result:
See the Pen by thebabydino (@thebabydino) on CodePen.

Emulating CSS Timing Functions with JavaScript is a post from CSS-Tricks
Source: CssTricks


Creating Vue.js Transitions & Animations

My last two projects hurled me into the JAMstack. SPAs, headless content management, static generation... you name it. More importantly, they gave me the opportunity to learn Vue.js. More than "Build a To-Do App" Vue.js, I got to ship real-life, production-ready Vue apps.
The agency behind Snipcart (Spektrum) wanted to start using decoupled JavaScript frameworks for small to medium sites. Before using them on client projects, however, they chose to experiment on themselves. After a few of my peers had unfruitful experiences with React, I was given the green light to prototype a few apps in Vue. This prototyping morphed into full-blown Vue apps for Spektrum connected to a headless CMS. First, I spent time figuring out how to model and render our data appropriately. Then I dove head first into Vue transformations to apply a much-needed layer of polish on our two projects.

I've prepared live demos on CodePen and GitHub repos to go along with this article.
This post digs into Vue.js and the tools it offers with its transition system. It is assumed that you are already comfortable with the basics of Vue.js and CSS transitions. For the sake of brevity and clarity, we won't get into the "logic" used in the demo.
Handling Vue.js Transitions & Animations

Animations & transitions can bring your site to life and entice users to explore. Animations and transitions are an integral part of UX and UI design. They are, however, easy to get wrong. In complex situations like dealing with lists, they can be nearly impossible to reason about when relying on native JavaScript and CSS. Whenever I ask backend developers why they dislike front end so vehemently, their response is usually somewhere along the lines of "... animations".
Even for those of us who are drawn to the field by an urge to create intricate micro-interactions and smooth page transitions, it's not easy work. We often need to rely on CSS for performance reasons, even while working in a mostly JavaScript environment, and that break in the environment can be difficult to manage.
This is where frameworks like Vue.js step in, taking the guess-work and clumsy chains of setTimeout functions out of transitions.
The Difference Between Transitions and Animations
The terms transition and animation are often used interchangeably but are actually different things.

A transition is a change in an element's style properties, applied in a single step. Transitions are often handled purely through CSS.
An animation is more complex. They are usually multi-step and sometimes run continuously. Animations will often call on JavaScript to pick up where CSS' lack of logic drops off.

It can be confusing, as adding a class could be the trigger for a transition or an animation. Still, it is an important distinction when stepping into the world of Vue because both have very different approaches and toolboxes.
Here's an example of transitions in use on Spektrum's site:

Using Transitions
The simplest way to achieve transition effects on your page is through Vue's <transition> component. It makes things so simple, it almost feels like cheating. Vue will detect if any CSS animations or transitions are being used and will automatically toggle classes on the transitioned content, allowing for a perfectly timed transition system and complete control.
First step is to identify our scope. We tell Vue to prepend the transition classes with modal, for example, by setting the component's name attribute. Then to trigger a transition all you need to do is toggle the content's visibility using the v-if or v-show attributes. Vue will add/remove the classes accordingly.
There are two "directions" for transitions: enter (for an element going from hidden to visible) and leave (for an element going from visble to hidden). Vue then provides 3 "hooks" that represent different timeframes in the transition:

.modal-enter-active / .modal-leave-active: These will be present throughout the entire transition and should be used to apply your CSS transition declaration. You can also declare styles that need to be applied from beginning to end.
.modal-enter / .modal-leave: Use these classes to define how your element looks before it starts the transition.
.modal-enter-to / .modal-leave-to: You've probably already guessed, these determine the styles you wish to transition towards, the "complete" state.

To visualize the whole process, take a look at this chart from Vue's documentation:

How does this translate into code? Say we simply want to fade in and out, putting the pieces together would look like this:
<button class="modal__open" @click="modal = true">Help</button>

<transition name="modal">
<section v-if="modal" class="modal">
<button class="modal__close" @click="modal = false">&times;</button>
</section>
</transition>
.modal-enter-active,
.modal-leave-active { transition: opacity 350ms }

.modal-enter,
.modal-leave-to { opacity: 0 }

.modal-leave,
.modal-enter-to { opacity: 1 }
This is likely the most basic implementation you will come across. Keep in mind that this transition system can also handle content changes. For example, you could react to a change in Vue's dynamic <component>.
<transition name="slide">
<component :is="selectedView" :key="selectedView"/>
</transition>
.slide-enter { transform: translateX(100%) }
.slide-enter-to { transform: translateX(0) }
.slide-enter-active { position: absolute }

.slide-leave { transform: translateX(0) }
.slide-leave-to { transform: translateX(-100%) }

.slide-enter-active,
.slide-leave-active { transition: all 750ms ease-in-out }
Whenever the selectedView changes, the old component will slide out to the left and the new one will enter from the right!
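For context, selectedView here is just a piece of reactive data holding the name of the component to render; a minimal sketch of that logic (the component names below are made up) could be:
new Vue({
  el: '#app',
  data() {
    // 'view-home' and 'view-about' stand in for whatever components are registered
    return { selectedView: 'view-home' }
  },
  methods: {
    // switching this value is all it takes to trigger the slide transition above
    toggleView() {
      this.selectedView = this.selectedView === 'view-home' ? 'view-about' : 'view-home'
    }
  }
});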
Here's a demo that uses these concepts:
See the Pen VueJS transition & transition-group demo by Nicolas Udy (@udyux) on CodePen.
Transitions on Lists
Things get interesting when we start dealing with lists. Be it some bullet points or a grid of blog posts, Vue gives you the <transition-group> component.

It is worth noting that while the <transition> component doesn't actually render an element, <transition-group> does. The default behaviour is to use a <span> but you can override this by setting the tag attribute on the <transition-group>.
The other gotcha is that all list items need to have a unique key attribute. Vue can then keep track of each item individually and optimize its performance. In our demo, we're looping over the list of companies, each of which has a unique ID. So we can set up our list like so:
<transition-group name="company" tag="ul" class="content__list">
<li class="company" v-for="company in list" :key="company.id">
<!-- ... -->
</li>
</transition-group>
The most impressive feature of transition-group is how Vue handles changes in the list's order so seamlessly. For this, an additional transition class is available, .company-move (much like the active classes for entering and leaving), which will be applied to list items that are moving about but will remain visible.
In the demo, I broke it down a bit more to show how to leverage different states to get a cleaner end result. Here's a simplified and uncluttered version of the styles:
/* base */
.company {
backface-visibility: hidden;
z-index: 1;
}

/* moving */
.company-move {
transition: all 600ms ease-in-out 50ms;
}

/* appearing */
.company-enter-active {
transition: all 300ms ease-out;
}

/* disappearing */
.company-leave-active {
transition: all 200ms ease-in;
position: absolute;
z-index: 0;
}

/* appear at / disappear to */
.company-enter,
.company-leave-to {
opacity: 0;
}
Using backface-visibility: hidden on an element, even in the absence of 3D transforms, will ensure silky 60fps transitions and avoid fuzzy text rendering during transformations by tricking the browser into leveraging hardware acceleration.
In the above snippet, I've set the base style to z-index: 1. This ensures that elements staying on the page will always appear above elements that are leaving. I also apply absolute positioning to items that are leaving to remove them from the natural flow, triggering the move transition on the rest of the items.
That's all we need! The result is, frankly, almost magic.
Using Animations
The possibilities and approaches for animation in Vue are virtually endless, so I've chosen one of my favourite techniques to showcase how you could animate your data.
We're going to use GSAP's TweenLite library to apply easing functions to our state's changes and let Vue's lightning fast reactivity reflect this on the DOM. Vue is just as comfortable working with inline SVG as it is with HTML.
We'll be creating a line graph with 5 points, evenly spaced along the X-axis, whose Y-axis will represent a percentage. You can take a look here at the result.
See the Pen SVG path animation with VueJS & TweenLite by Nicolas Udy (@udyux) on CodePen.
Let's get started with our component's logic.
new Vue({
el: '#app',
// this is the data-set that will be animated
data() {
return {
points: { a: -1, b: -1, c: -1, d: -1, e: -1 }
}
},

// this computed property builds an array of coordinates that
// can be used as is in our path
computed: {
path() {
return Object.keys(this.points)
// we need to filter the array to remove any
// properties TweenLite has added
.filter(key => ~'abcde'.indexOf(key))
// calculate X coordinate for 5 points evenly spread
// then reverse the data-point, a higher % should
// move up but Y coordinates increase downwards
.map((key, i) => [i * 100, 100 - this.points[key]])
}
},

methods: {
// our randomly generated destination values
// could be replaced by an array.unshift process
setPoint(key) {
let duration = this.random(3, 5)
let destination = this.random(0, 100)
this.animatePoint({ key, duration, destination })
},
// start the tween on this given object key and call setPoint
// once complete to start over again, passing back the key
animatePoint({ key, duration, destination }) {
TweenLite.to(this.points, duration, {
[key]: destination,
ease: Sine.easeInOut,
onComplete: this.setPoint,
onCompleteParams: [key]
})
},
random(min, max) {
return ((Math.random() * (max - min)) + min).toFixed(2)
}
},

// finally, trigger the whole process when ready
mounted() {
Object.keys(this.points).forEach(key => {
this.setPoint(key)
})
}
});
Now for the template.
<main id="app" class="chart">
<figure class="chart__content">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-20 -25 440 125">
<path class="chart__path" :d="`M${path}`"
fill="none" stroke="rgba(255, 255, 255, 0.3)"
stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>

<text v-for="([ x, y ]) in path" :x="x - 10" :y="y - 7.5"
font-size="10" font-weight="200" fill="currentColor">
{{ 100 - (y | 0) + '%' }}
</text>
</svg>
</figure>
</main>
Notice how we bind our path computed property to the path element's d attribute. We do something similar with the text nodes that output the current value for that point. When TweenLite updates the data, Vue reacts instantly and keeps the DOM in sync.
That's really all there is to it! Of course, additional styles were applied to make things pretty, which at this point you might realize is more work than the animation itself!
Live demos (CodePen) & GitHub repo
Go ahead, browse the live demos or analyze/re-use the code in our open source repo!

The vue-animate GitHub repo
The vue-transitions GitHub repo
The Vue.js transition & transition-group demo
The SVG path animation demo

Conclusion
I've always been a fan of animations and transitions on the web, but I'm also a stickler for performance. As a result, I'm always very cautious when it comes to relying on JavaScript. However, combining Vue's blazing fast and low-cost reactivity with its ability to manage pure CSS transitions, you would really have to go overboard to have performance issues.
It's impressive that such a powerful framework can offer such a simple yet manageable API. The animation demo, including the styling, was built in only 45 minutes. And if you discount the time it took to set up the mock data used in the list-transition, it's achievable in under 2 hours. I don't even want to imagine the migraine-inducing process of building similar setups without Vue, much less how much time it would take!
Now get out there and get creative! The use cases go far beyond what we have seen in this post: the only true limitation is your imagination. Don't forget to check out the transitions and animations section in Vue.js' documentation for more information and inspiration.

This post originally appeared on Snipcart's blog. Got comments, questions? Add them below!

Creating Vue.js Transitions & Animations is a post from CSS-Tricks
Source: CssTricks


Do Your Worst

My daughter is taking swimming lessons. She’s three. It hasn’t been going well. Tears. Fear about putting her face in the water. Dread about going to the next class. I found myself telling her the age-old wisdom of “Do Your Best”, but I’m curious if that isn’t very good advice at all…

The Simpsons is one of the most successful sitcoms and animated shows in history, running 29 seasons so far. Each episode takes eight to nine months to create! That means many teams and people need to be involved to get an entire season manufactured.

But this isn’t a story about The Simpsons. It’s about South Park.

The most surprising thing to me about South Park is that a single episode takes 6 days. Sometimes even less. Of course the animation isn’t as sophisticated as The Simpsons. And I’m sure some would argue the writing isn’t either. But South Park has been going on 21 seasons with 2 more already under contract and includes its own successful spin-off games, merchandise and movie.

Couldn’t Trey Parker and Matt Stone, the creators of South Park, use more time to make the episodes better? In a wonderful short documentary about how South Park is made, Trey Parker lets us know:

I always feel like, “wow, I wish I had another day with this show.” That’s the reason that there’s so many episodes of South Park we’re able to get done, is ’cause there just is a deadline, and you can’t keep going, ’cause there would be so many shows that I’m like, “no, no, it’s not ready yet. Not ready yet.” And I would have spent four weeks on one show. All you do is start second-guessing yourself and rewriting stuff, and it gets over-thought, and it would have been 5% better.

Sure, this is a lesson about how important deadlines are. They force you to keep shipping. You aren’t given a chance to overthink anything.

But I think it’s a bigger lesson in getting stuck in a rut because we fear we could do better. Trey Parker and Matt Stone know these South Park episodes can be better. It isn’t their best. But will it make a material difference if they do more to it? No, probably not.

The pilot episode wasn’t even as sophisticated as what you see today. It was made with paper cutouts and stop-motion animation. I’m sure in Trey and Matt’s heads, it could have been better than this. But they published just to get something into the world and avoid getting stuck in obscurity.

It’s how this YouTube channel of mine has gone (youtube.com/nathankontny). I’m up to about 2500 subscribers watching me talk about business, marketing, design, and just getting through life on a daily basis. But I hesitated way too long to get even the first episode in the tank. I knew I could do a much better job than filming on my phone with crappy lighting, so I spent an inordinate amount of time researching lighting solutions, camera gear, and storyboarding.

I finally regained my sanity and just filmed on a camera phone in my bedroom. The result looks like absolute garbage. I knew it should have been better. But what difference would it have made? Ship it. It’ll get better with time. And it has. Today’s videos are drastically different from my freshman efforts.

I see this all playing out with my daughter. She has this idea of being a great swimmer. She sees her best friend swimming already and then beats herself up that she can’t do it, to the point where she didn’t even want to get in the pool anymore because she couldn’t match her friend.

But we kept encouraging. Just get in the pool. It’s ok if you don’t do what your friend does. Just dip your face in, even if it’s just one second.
Of course she quickly got a lot better. She’s burying her face in now for 12 seconds and is constantly excited to practice and return to swimming class.

But it didn’t start with her best, or what she thought should be “her best”. It started with getting comfortable doing her worst.

When Trey completes the latest episode in the South Park documentary, he lets us know his thoughts on its quality, which happens to be the same feeling he has every single week as they publish their work:

I feel like it’s the worst episode we’ve ever done.

P.S. You should also follow me on YouTube: youtube.com/nathankontny where I share more about how we run our business, do product design, market ourselves, and just get through life. And if you need a zero-learning-curve system to track leads and manage follow-ups, try Highrise.

Do Your Worst was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.


Source: 37signals


A Bone to Pick with Skeleton Screens

In the fight for the short attention span of our users, every performance gain, whether real or perceived, matters. This is especially true on mobile, where despite our best efforts at performance, a spotty signal can leave users waiting an interminable few seconds (or more) for content to load.

Design’s conventional answer to unpredictable wait times has long been the loading spinner: a looping animation that tells the user to “Hold on. Something’s coming,” whether that something is one or ten seconds away.
More recently, a design pattern known as progressive loading has gained popularity. With progressive loading, individual elements become visible on the page as soon as they’ve loaded, rather than displaying all at once. See the following example from Facebook:

Progressive loading on the Facebook app

In the Facebook example above, a skeleton of the page loads first. It’s essentially a wireframe of the page with placeholder boxes for text and images. 

Facebook's skeleton screen

Progressive loading with skeleton screens is thought to benefit the user by indicating that progress is being made, thereby shortening the perceived wait time. Google, Medium, and Slack all use skeleton screens to make their apps feel more performant.
So, should we all be using skeleton screens to make our apps feel faster? To answer this question, we decided to do some lean research into the effects of different loading techniques on perceived wait time.

Test Design
We created a short test for mobile devices that measured users’ perceived wait time for three different loading animations of identical length: a loading spinner, a skeleton screen, and a blank screen.
The quickest way to build the test was to use animated GIFs to simulate each loading animation and put them inside an existing testing framework. (We chose Chalkmark by Optimal Workshop.) Users who opted into the test from a mobile device were asked to complete a simple task and then randomly shown one of the GIFs. Following the task, which was a red herring, they were asked a series of follow-up questions about how long they waited for the page to load.

The skeleton screen variant in the test we deployed

Follow-up questions:
Based just on what you can recall, please respond to the following statement: "The recipes loaded quickly for me." [Strongly agree, Moderately agree, Neutral, Moderately disagree, Strongly disagree, I didn’t notice]
From what you can remember, estimate the amount of time it took for the meals to load. [1 second, 2 seconds, 3 seconds, 4 seconds, 5 seconds]
We also measured the time it took users in each group to complete the red herring task. Based on some of the literature we’d read, it seemed plausible that a skeleton loader might actually speed up task completion by orienting users more quickly to the structure of the page.
Roughly half (70) of the participants were sourced through Amazon Mechanical Turk and paid for their participation. The rest were organically sourced through Viget’s social channels. The results were replicated across both groups of participants.
Hypotheses:
Users in the skeleton screen group will perceive the shortest wait time.
Users in the skeleton screen group will complete the task most quickly.
Results
We gave the test to 136 people, and the skeleton screen performed the worst by all metrics. Users in the skeleton screen group took longer to complete the task, were more likely to evaluate their wait time negatively (by answering the first question with “Strongly disagree” or “Moderately disagree”), and guessed that the wait time had been longer than users who saw the loading spinner or a blank screen.

Table 1. Test Results
                                                                                 Skeleton screen   Loading spinner   Blank screen
Number of participants                                                                        39                39             58
Percentage who agreed with the statement, "The meals loaded quickly for me."                 59%               74%            66%
Percentage who disagreed with the statement, "The meals loaded quickly for me."              36%               10%            26%
Average perceived wait time (seconds)                                                       2.82              2.41           2.29
Post-load task completion time (seconds)                                                   10.54              9.49           9.50

This table shows how participants responded, on average, to each of the three simulated loading animations.

Participants in the loading spinner group were most likely to evaluate their wait time positively (by answering the first question with “Strongly agree” or “Moderately agree”) and had the shortest average perceived wait time.
Analysis
The unexpectedly weak performance of the skeleton screen may be due to one or more of the following reasons:
Skeleton screens are somewhat novel and attract more attention than the ubiquitous loading spinner.
Skeleton screens work better in familiar interfaces and can be off-putting in new settings when users don’t know what to expect.
Skeleton screens work best when wait times are very short.
Our hunch is that each of these reasons has some merit, but more testing is needed to know for certain. Either way, skeleton screens aren’t a silver bullet for increasing perceived performance and should be used thoughtfully.
Have you implemented or experienced skeleton screens in the wild? We’d love to hear your thoughts. Please leave us a comment.


Source: VigetInspire


The Art of Comments

I believe commenting code is important. Most of all, I believe commenting is misunderstood. I'm hesitant to write this article at all. I am not a commenting expert (if there is such a thing) and have definitely written code that was poorly commented, code that was not commented at all, and comments that were superfluous.

I tweeted out the other day that "I hear conflicting opinions on whether or not you should write comments. But I get thank you's from junior devs for writing them so I'll continue." The responses I received were varied, but what caught my eye was that for every person agreeing that commenting was necessary, they all had different reasons for believing this.
Commenting is a more nuanced thing than we give it credit for. There is no nomenclature for commenting (not that there should be) but lumping all comments together is an oversimplification. The example in this comic that was tweeted in response is true:
From Abstrusegoose
This is where I think a lot of the misconceptions of comments lie. The book Clean Code by Robert C. Martin talks about this: that comments shouldn't be necessary because code should be self-documenting, and that if you feel a comment is necessary, you should rewrite the code to be more legible. I both agree and disagree with this. In the process of writing a comment, you can often find things that could be written better, but it's not an either/or. I might still be able to rewrite that code to be more self-documenting and also write a comment, for the following reason:
Code can describe how, but it cannot explain why.
This isn't a new concept, but it's a common theme I notice in helpful comments that I have come across. The ability to communicate something that the code cannot, or cannot concisely.
All of that said, there is just not one right way or one reason to write a comment. In order to better learn, let's dig into some of the many beneficial types of comments that might all serve a different purpose, followed by patterns we might want to avoid.
Good comments
What is the Why
Many examples of good comments can be housed under this category. Code explains what you'd like the computer to take action on. You'll hear people talk about declarative code because it describes the logic precisely but without describing all of the steps like a recipe. It lets the computer do the heavy lifting. We could also write our comments to be a bit more declarative:
/*
We had to write this function because the browser
interprets that everything is a box
*/
This doesn't describe what the code below it will do. It doesn't describe the actions it will take. But if you found a more elegant way of rewriting this function, you could feel confident in doing so, because your code is likely solving the same problem in a different way.
Because of this, less maintenance is required (we'll dig more into this further on). If you found a better way to write this, you probably wouldn't need to rewrite the comment. You could also quickly understand whether you could rewrite another section of code to make this function unnecessary without spending a long time parsing all the steps to make the whole.
Clarifying something that is not legible by regular human beings
When you look at a long line of regex, can you immediately grok what's going on? If you can, you're in the minority, and even if you can at this moment, you might not be able to next year. What about a browser hack? Have you ever seen this in your code?
.selector { [;property: value;]; }
what about
var isFF = /a/[-1]=='a';
The first one targets Chrome ≤ 28, Safari ≤ 7, Opera ≥ 14, the second one is Firefox versions 2-3. I have written code that needs something like this. In order to avoid another maintainer or a future me assuming I took some Salvia before heading to work that day, it's great to tell people what the heck that's for. Especially in preparation for a time when we don't have to support that browser anymore, or the browser bug is fixed and we can remove it.
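As a minimal sketch of the kind of "why" comment that helps here, this wraps the Firefox hack above; the ticket number and fallback function are hypothetical, purely for illustration:

// Browser hack: this expression evaluates to true only in Firefox 2-3.
// We need it to work around a layout bug in those versions (ticket #1234,
// hypothetical). Safe to delete once we officially drop Firefox < 4.
var isFF = /a/[-1] == 'a';

if (isFF) {
  enableLegacyLayoutFix(); // hypothetical fallback for old Firefox
}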
Something that is clear and legible to you is not necessarily clear to others
Who's smart? We are! Who writes clean code? We do! We don't have to comment, look how clear it is. The problem with this way of thinking is that we all have deeper knowledge in different areas. On small teams where people's skillsets and expertise are more of a circle than a venn diagram, this is less of an issue than big groups that change teams or get junior devs or interns frequently. But I'd probably still make room for those newcomers or for future you. On bigger teams where there are junior engineers or even just engineers from all types of background, people might not outrightly tell you they need you to comment, but many of these people will also express gratitude when you do.
Comments like chapters of a book
If this very article was written as one big hunk rather than broken up into sections with whitespace and smaller headings, it would be harder to skim through. Maybe not all of what I'm saying applies to you. Commenting sections or pieces allows people to skip to a part most relevant to them. But alas! You say. We have functional programming, imports, and modules for this now.
It's true! We break things down into smaller bits so that they are more manageable, and thank goodness for that. But even in smaller sections of code, you'll necessarily come to a piece that has to be a bit longer. Being able to quickly grasp what is relevant, or to label an area that's a bit different, can speed up productivity.
A guide to keep the logic straight while writing the code
This one is an interesting one! These are not the kind of comments you keep, and thus could also be found in the "bad patterns" section. Many times when I'm working on a bigger project with a lot of moving parts, breaking things up into the actions I'm going to take is extremely helpful. This could look like
// get the request from the server and give an error if it failed
// do x thing with that request
// format the data like so
Then I can easily focus on one thing at a time. But when left in your code as is, these comments can be screwy to read later. They're so useful while you're writing it but once you're finished can merely be a duplication of what the code does, forcing the reader to read the same thing twice in two different ways. It doesn't make them any less valuable to write, though.
My perfect-world suggestion would be to use these comments at the time of writing and then revisit them after. As you delete them, you could ask, "Does this do this in the most elegant and legible way possible?" "Is there another comment I might replace this with that will explain why this is necessary?" "What would I think is the most useful thing to express to future me or others from another mother?"
This is OK to refactor
Have you ever had a really aggressive product deadline? Perhaps you implemented a feature that you yourself disagreed with, or they told you it was "temporary" and "just an AB test so it doesn't matter". *Cue horror music* … and then it lived on… forever…
As embarrassing as it might be, writing comments like
// this isn't my best work, we had to get it in by the deadline
is rather helpful. As a maintainer, when I run across comments like this, I'll save buckets of time trying to figure out what the heck is wrong with this person and envisioning ways I could sabotage their morning commute. I'll immediately stop trying to figure out what parts of this code I should preserve and instead focus on what can be refactored. The only warning I'll give is to try not to make this type of coding your fallback (we'll discuss this in detail further on).
Commenting as a teaching tool
Are you a PHP shop that just was given a client that's all Ruby? Maybe it's totally standard Ruby but your team is in slightly over their heads. Are you writing a tutorial for someone? These are the limited examples for when writing out the how can be helpful. The person is literally learning on the spot and might not be able to just infer what it's doing because they've never seen it before in their lives. Comment that sh*t. Learning is humbling enough without them having to ask you aloud what they could more easily learn on their own.
I StackOverflow'd the bejeezus outta this
Did you just copy paste a whole block of code from Stack Overflow and modify it to fit your needs? This isn't a great practice but we've all been there. Something I've done that's saved me in the past is to put the link to the post where I found it. But! Then we won't get credit for that code! You might say. You're optimizing for the wrong thing would be my answer.
Inevitably people have different coding styles and the author of the solution solved a problem in a different way than you would if you knew the area deeper. Why does this matter? Because later, you might be smarter. You might level up in this area and then you'll spend less time scratching your head at why you wrote it that way, or learn from the other person's approach. Plus, you can always look back at the post, and see if any new replies came in that shed more light on the subject. There might even be another, better answer later.
Bad Comments
Writing comments gets a bad rap sometimes, and that's because bad comments do indeed exist. Let's talk about some things to avoid while writing them.
They just say what it's already doing
John Papa made the accurate joke that this:
// if foo equals bar ...
if (foo === bar) {

} // end if
is a big pain. Why? Because you're actually reading everything twice in two different ways. It gives no more information, in fact, it makes you have to process things in two different formats, which is mental overhead rather than helpful. We've all written comments like this. Perhaps because we didn't understand it well enough ourselves or we were overly worried about reading it later. For whatever the reason, it's always good to take a step back and try to look at the code and comment from the perspective of someone reading it rather than you as the author, if you can.
It wasn't maintained
Bad documentation can be worse than no documentation. There's nothing more frustrating than coming across a block of code where the comment says something completely different than what's expressed below. Worse than time-wasting, it's misleading.
One solution to this is making sure that whatever code you are updating, you're maintaining the comments as well. And certainly, having fewer and more meaningful comments makes this upkeep less arduous. But commenting and maintaining comments are part of an engineer's job. The comment is in your code; it is your job to work on it, even if that means deleting it.
If your comments are of good quality to begin with, and express why and not the how, you may find that this problem takes care of itself. For instance, if I write
// we need to FLIP this animation to be more performant in every browser
and refactor this code later to go from using getBoundingClientRect() to getBBox(), the comment still applies. The function exists for the same reason, but the details of how are what has changed.
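As a rough sketch of that idea (the function name is made up for illustration), the same "why" comment can sit on top of either version of the code:

// we need to FLIP this animation to be more performant in every browser
function measureTarget(el) {
  // before the refactor this read el.getBoundingClientRect();
  // after moving to inline SVG we measure the same box with:
  return el.getBBox();
}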
You could have used a better name
I've definitely seen people write code (or done this myself) where the variable or function names are one letter, and then comment what the thing is. This is a waste. We all hate typing, but if you are using a variable or function name repeatedly, I don't want to scan up the whole document to where you explained what the name itself could convey. I get it, naming is hard. But some comments take the place of something that could easily be written more precisely.
The comments are an excuse for not writing the code better to begin with
This is the crux of the issue for a lot of people. If you are writing code that is haphazard, and leaning back on your comments to clarify, this means the comments are holding back your programming. This is a horse-behind-the-cart kind of scenario. Unfortunately, even as the author it's not so easy to determine which is which.
We lie to ourselves in myriad ways. We might spend the time writing a comment that could be better spent making the code cleaner to begin with. We might also tell ourselves we don't need to comment our code because our code is well-written, even if other people might not agree.
There are lazy crutches in both directions. Just do your best. Try not to rely on just one correct way and instead write your code, and then read it. Try to envision you are both the author and maintainer, or how that code might look to a younger you. What information would you need to be as productive as possible?

People tend to, lately, get on one side or the other of "whether you should write comments", but I would argue that that conversation is not nuanced enough. Hopefully opening the floor to a deeper conversation about how to write meaningful comments bridges the gap.
Even so, it can be a lot to parse. Haha get it? Anyways, I'll leave you with some (better) humor. A while back there was a Stack Overflow post about the best comments people have written or seen. You can definitely waste some time in here. Pretty funny stuff.

The Art of Comments is a post from CSS-Tricks
Source: CssTricks


Getting Nowhere on Job Titles

Last week on ShopTalk, Dave and I spoke with Mandy Michael and Lara Schenck. Mandy had just written the intentionally provocative "Is there any value in people who cannot write JavaScript?" which guided our conversation. Lara is deeply interested in this subject as well, as someone who is a job seeking web worker, but places herself on the spectrum as a non-unicorn.

Part of that discussion was about job titles. If there was a ubiquitously accepted and used job title that meant you were specifically skilled at HTML and CSS, and there was a market for that job title, there probably wouldn't be any problem at all. There isn't though. "Web developer" is too vague. "Front-end developer" maybe used to mean that, but has been largely co-opted by JavaScript.
In fact, you might say that none of us has an exactly perfect job title and the industry at large has trouble agreeing on a set of job titles.
Lara created a repo with the intent to think all this out and discuss it.
If there is already a spectrum between design and backend development, and front-end development is that place in between, perhaps front-end development, if we zoom in, is a spectrum as well:

I like the idea of spectrums, but I also agree with a comment by Sarah Drasner where she mentioned that this makes it seem like you can't be good at both. If you're a dot right in the middle of this spectrum, you are, for example, not as good at JavaScript as someone on the right.
This could probably be fixed with some different dataviz (perhaps the size of the dot), or, heaven forbid, skill-level bars.
More importantly, if you're really interested in the discussion around all this, Lara has used the issues area to open that up.
Last year, Geoff also started thinking about all our web jobs as a spectrum. We can break up our jobs into parts and map them onto those parts in different ways:
See the Pen Web Terminology Matrix by Geoff Graham (@geoffgraham) on CodePen.
See the Pen Web Terminology Venn Diagram by Geoff Graham (@geoffgraham) on CodePen.
That can certainly help us understand our world a little bit, but doesn't quite help with the job titles thing. It's unlikely we'll get people to write job descriptions that include a data visualization of what they are looking for.
Jeff Pelletier took a crack at job titles and narrowed it down to three:

Front-end Implementation (responsive web design, modular/scalable CSS, UI frameworks, living style guides, progressive enhancement & accessibility, animation and front-end performance).
Application Development (JavaScript frameworks, JavaScript preprocessors, code quality, process automation, testing).
Front-end Operations (build tools, deployment, speed: (app, tests, builds, deploys), monitoring errors/logs, and stability).

Although those don't quite feel like titles to me and converting them into something like "Front-end implementation developer" doesn't seem like something that will catch on.
Cody Lindley's Front-End Developer Handbook has a section on job titles. I won't quote it in full, but they are:

Front-End Developer
Front-End Engineer (aka JavaScript Developer or Full-stack JavaScript Developer)
CSS/HTML Developer
Front-End Web Designer
Web/Front-End User Interface (aka UI) Developer/Engineer
Mobile/Tablet Front-End Developer
Front-End SEO Expert
Front-End Accessibility Expert
Front-End Dev. Ops
Front-End Testing/QA

Note the contentious "full stack" title, in which Brad Frost says:
In my experience, “full-stack developers” always translates to “programmers who can do frontend code because they have to and it’s ‘easy’.” It’s never the other way around.
Still, these largely feel pretty good to me. And yet, weirdly, it's almost like there are both too many and too few. As in, there is good coverage here, but if you are going to cover specialties, you might as well add in performance, copywriting, analytics, and more as well. The more you add, the further away we are from locking things down. Not to mention it becomes harder when people cross over these disciplines, like they almost always do.
Oh well.

Getting Nowhere on Job Titles is a post from CSS-Tricks
Source: CssTricks


Writing Smarter Animation Code

If you've ever coded an animation that's longer than 10 seconds with dozens or even hundreds of choreographed elements, you know how challenging it can be to avoid the dreaded "wall of code". Worse yet, editing an animation that was built by someone else (or even yourself 2 months ago) can be nightmarish.
In these videos, I'll show you the techniques that the pros use to keep their code clean, manageable, and easy to revise. Scripted animation gives you the opportunity to create animations that are incredibly dynamic and flexible. My goal is for you to have fun without getting bogged down by the process.
We'll be using GSAP for all the animation. If you haven't used it yet, you'll quickly see why it's so popular - the workflow benefits are substantial.

See the Pen SVG Wars: May the morph be with you. (Craig Roblewsky) on CodePen.
The demo above from Craig Roblewsky is a great example of the types of complex animations I want to help you build.
This article is intended for those who have a basic understanding of GSAP and want to approach their code in a smarter, more efficient way. However, even if you haven't used GSAP, or prefer another animation tool, I think you'll be intrigued by these solutions to some of the common problems that all animators face. Sit back, watch and enjoy!
Video 1: Overview of the techniques
The video below will give you a quick behind-the-scenes look at how Craig structured his code in the SVG Wars animation and the many benefits of these workflow strategies.
[youtube https://www.youtube.com/watch?v=ZbTI85lmu9Q&w=560&h=315]
Although this is a detailed and complex animation, the code is surprisingly easy to work with. It's written using the same approach that we at GreenSock use for any animation longer than a few seconds. The secret to this technique is two-fold:

Break your animation into smaller timelines that get glued together in a master (parent) timeline.
Use functions to create and return those smaller timelines.

This makes your code modular and easy to edit.
Video 2: Detailed Example
I'll show you exactly how to build a sequence using functions that create and return timelines. You'll see how packing everything into one big timeline (no modular nesting) results in the intimidating "Wall of Code". I'll then break the animation down into separate timelines and use a parameterized function that does all the heavy lifting with 60% less code!
[youtube https://www.youtube.com/watch?v=8ETMjqhQRCs&w=560&h=315]
Let's review the key points...
Avoid the dreaded wall of code
A common strategy (especially for beginners) is to create one big timeline containing all of the animation code. Although a timeline offers tons of features that accommodate this style of coding, it's just a basic reality of any programming endeavor that too much code in one place will become unwieldy.
Let's upgrade the code so that we can apply the same techniques Craig used in the SVG wars animation...
See the Pen Wall of Code on CodePen.
Be sure to investigate the code in the "JS" tab. Even for something this simple, the code can be hard to scan and edit, especially for someone new to the project. Imagine if that timeline had 100 lines. Mentally parsing it all can be a chore.
Create a separate timeline for each panel
By separating the animation for each panel into its own timeline, the code becomes easier to read and edit.
var panel1 = new TimelineLite();
panel1.from(...);
...

var panel2 = new TimelineLite();
panel2.from(...);
...

var panel3 = new TimelineLite();
panel3.from(...);
...
Now it's much easier to do a quick scan and find the code for panel2. However, when these timelines are created, they will all play instantly; we want them sequenced.
See the Pen
No problem - just nest them in a parent timeline in whatever order we want.
Nest each timeline using add()
One of the greatest features of GSAP's timeline tools (TimelineLite / TimelineMax) is the ability to nest animations as deeply as you want (place timelines inside of other timelines).
The add() method allows you to add any tween, timeline, label, or callback anywhere in a timeline. By default, things are placed at the end of the timeline, which is perfect for sequencing. In order to schedule these 3 timelines to run in succession, we will add each of them to a master timeline like so:
//create a new parent timeline
var master = new TimelineMax();

//add child timelines
master.add(panel1)
      .add(panel2)
      .add(panel3);
Demo with all code for this stage:
See the Pen
The animation looks the same, but the code is much more refined and easy to parse mentally.
Some key benefits of nesting timelines are that you can:

Scan the code more easily.
Change the order of sections by just moving the add() code.
Change the speed of an individual timeline.
Make one section repeat multiple times.
Have precise control over the placement of each timeline using the position parameter (a minimal sketch follows this list, though full coverage is beyond the scope of this article).
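As promised in the list above, here's a minimal sketch of how the position parameter works. The offsets and label name are arbitrary examples, and intro/middle/outro are simply timelines returned by a function like the createPanel() shown later:

// assume these are timelines returned by a function like createPanel()
var intro = createPanel(".panel1");
var middle = createPanel(".panel2");
var outro = createPanel(".panel3");

var master = new TimelineMax();
master.add(intro)             // appended at the end of the master (time 0 here)
      .add(middle, "+=0.5")   // starts 0.5 seconds after the previous animation ends
      .add(outro, "-=0.25")   // overlaps the end of the previous animation by 0.25 seconds
      .add("finale", 10);     // strings become labels; this one lands at exactly 10 seconds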

Use functions to create and return timelines
The last step in optimizing this code is to create a function that generates the animations for each panel. Functions are inherently powerful in that they:

Can be called many times.
Can be parameterized in order to vary the animations they build.
Allow you to define local variables that won't conflict with other code.

Since each panel is built using the same HTML structure and the same animation style, there is a lot of repetitive code that we can eliminate by using a function to create the timelines. Simply tell that function which panel to operate on and it will do the rest.
Our function takes in a single panel parameter that is used in the selector string for all the tweens in the timeline:
function createPanel(panel) {
  var tl = new TimelineLite();
  tl.from(panel + " .bg", 0.4, {scale:0, ease:Power1.easeInOut})
    .from(panel + " .bg", 0.3, {rotation:90, ease:Power1.easeInOut}, 0)
    .staggerFrom(panel + " .text span", 1.1, {y:-50, opacity:0, ease:Elastic.easeOut}, 0.06)
    .addLabel("out", "+=1")
    .staggerTo(panel + " .text span", 0.3, {opacity:0, y:50, ease:Power1.easeIn}, -0.06, "out")
    .to(panel + " .bg", 0.4, {scale:0, rotation:-90, ease:Power1.easeInOut});
  return tl; //very important that the timeline gets returned
}
We can then build a sequence out of all the timelines by placing each one in a parent timeline using add().
var master = new TimelineMax();
master.add(createPanel(".panel1"))
      .add(createPanel(".panel2"))
      .add(createPanel(".panel3"));
Completed demo with full code:
See the Pen
This animation was purposefully designed to be relatively simple and use one function that could do all the heavy lifting. Your real-world projects may have more variance but even if each child animation is unique, I still recommend using functions to create each section of your complex animations.
Check out this example in the wonderful pen from Sarah Drasner that's built using functions that return timelines to illustrate how to do exactly that!
See the Pen
And of course the same technique is used on the main GSAP page animation:
See the Pen
GSDevTools
You may have noticed that fancy timeline controller used in some of the demos and the videos. GSDevTools was designed to super-charge your workflow by allowing you to quickly navigate and control any GSAP tween or timeline. To find out more about GSDevTools visit greensock.com/GSDevTools.
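If you want to try it, the basic setup is a single call once your master timeline exists (this assumes you've loaded the GSDevTools plugin on the page):

// attach the playback/scrubbing UI to the master timeline from earlier
GSDevTools.create({ animation: master });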
Conclusion
Next time you've got a moderately complex animation project, try these techniques and see how much more fun it is and how quickly you can experiment. Your coworkers will sing your praises when they need to edit one of your animations. Once you get the hang of modularizing your code and tapping into GSAP's advanced capabilities, it'll probably open up a whole new world of possibilities. Don't forget to use functions to handle repetitive tasks.
As with all projects, you'll probably have a client or art director ask:

"Can you slow the whole thing down a bit?"
"Can you take that 10-second part in the middle and move it to the end?"
"Can you speed up the end and make it loop a few times?"
"Can you jump to that part at the end so I can check the copy?"
"Can we add this new, stupid idea I just thought of in the middle?"

Previously, these requests would trigger a panic attack and put the entire project at risk, but now you can simply say "gimme 2 seconds..."
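To make that less abstract, here's a rough sketch of how those requests might map onto the master timeline built earlier. The label name is a made-up example and assumes you added it yourself with addLabel():

// "Can you slow the whole thing down a bit?"
master.timeScale(0.7);

// "Can you make the middle section repeat a few times?"
var middleLoop = new TimelineMax({ repeat: 2 });
middleLoop.add(createPanel(".panel2"));
master.add(middleLoop);

// "Can you jump to that part at the end so I can check the copy?"
master.seek("copyReview"); // assumes addLabel("copyReview") was called somewhere

// "Can you move that 10-second part to the end?"
// just reorder the add() calls that build the master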
Additional Resources
To find out more about GSAP and what it can do, check out the following links:

GreenSock Animation Platform (GSAP)
GSAP Getting Started Guide
Official GSAP Video Training
GSAP Documentation
GSAP Showcase
GreenSock Support Forums
GSDevTools
Club GreenSock (get bonus plugins/tools)

CSS-Tricks readers can use the coupon code CSS-Tricks for 25% off a Club GreenSock membership which gets you a bunch of extras like MorphSVG and GSDevTools (referenced in this article). Valid through 11/14/2017.

Writing Smarter Animation Code is a post from CSS-Tricks
Source: CssTricks


You can get pretty far in making a slider with just HTML and CSS

A "slider", as in, a bunch of boxes set in a row that you can navigate between. You know what a slider is. There are loads of features you may want in a slider. Just as one example, you might want the slider to be swiped or scrolled. Or, you might not want that, and to have the slider only respond to click or tappable buttons that navigate to slides. Or you might want both. Or you might want to combine all that with autoplay.
I'm gonna go ahead and say that sliders are complicated enough of a UI component that they're use-JavaScript territory, Flickity being a fine example. I'd also say that you can get pretty far toward a nice-looking, functional slider with HTML and CSS alone. Starting that way makes the JavaScript easier and, dare I say, makes for a decent example of progressive enhancement.

Let's consider the semantic markup first.
A bunch of boxes is probably as simple as:
<div class="slider">
  <div class="slide" id="slide-1"></div>
  <div class="slide" id="slide-2"></div>
  <div class="slide" id="slide-3"></div>
  <div class="slide" id="slide-4"></div>
  <div class="slide" id="slide-5"></div>
</div>
With a handful of lines of CSS, we can set them next to each other and let them scroll.
.slider {
  width: 300px;
  height: 300px;
  display: flex;
  overflow-x: auto;
}
.slide {
  width: 300px;
  flex-shrink: 0;
  height: 100%;
}

Might as well make it swipe smoothly on WebKit based mobile browsers.
.slider {
  ...

  -webkit-overflow-scrolling: touch;
}

We can do even better!
Let's have each slide snap into place with snap-points.
.slider {
  ...

  -webkit-scroll-snap-points-x: repeat(300px);
  -ms-scroll-snap-points-x: repeat(300px);
  scroll-snap-points-x: repeat(300px);
  -webkit-scroll-snap-type: mandatory;
  -ms-scroll-snap-type: mandatory;
  scroll-snap-type: mandatory;
}
Look how much nicer it is now:

Jump links
A slider probably has a little UI to jump to a specific slide, so let's do that semantically as well, with anchor links that jump to the correct slide:
<div class="slide-wrap">

  <a href="#slide-1">1</a>
  <a href="#slide-2">2</a>
  <a href="#slide-3">3</a>
  <a href="#slide-4">4</a>
  <a href="#slide-5">5</a>

  <div class="slider">
    <div class="slide" id="slide-1">1</div>
    <div class="slide" id="slide-2">2</div>
    <div class="slide" id="slide-3">3</div>
    <div class="slide" id="slide-4">4</div>
    <div class="slide" id="slide-5">5</div>
  </div>

</div>
These anchor links behave as links to related content, and they're semantic and accessible, so no problems there (feel free to correct me if I'm wrong).
Let's style things up a little bit... and we've got some buttons that do their job:

On both desktop and mobile, we can still make sure we get smooth sliding action, too!
.slides {
  ...

  scroll-behavior: smooth;
}
Maybe we'd only display the buttons in situations without nice snappy swiping?
If the browser supports scroll-snap-type, it's got nice snappy swiping. We could just hide the buttons if we wanted to:
@supports (scroll-snap-type: mandatory) {
  .slider > a {
    display: none;
  }
}
Need to do something special to the "active" slide?
We could use :target for that. When one of the buttons to navigate slides is clicked, the URL changes to that #hash, and that's when :target takes effect. So:
.slides > div:target {
  transform: scale(0.8);
}

There is a way to build this slider with the checkbox hack as well, and still do "active slide" stuff with :checked, but you might argue that's a bit less semantic and accessible.
Here's where we are so far.
See the Pen Real Simple Slider by Chris Coyier (@chriscoyier) on CodePen.
This is where things break down a little bit.
Using :target is a neat trick, but it doesn't work, for example, when the page loads without a hash, or if the user scrolls or flicks on their own without using the buttons. I don't think there is any way around this with just HTML and CSS, nor do I think that's entirely a failure of HTML and CSS. It's just the kind of thing JavaScript is for.
JavaScript can figure out what the active slide is. JavaScript can set the active slide. Probably worth looking into the Intersection Observer API.
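Here's a minimal sketch of that idea against the markup above. The "active" class and the 0.6 threshold are arbitrary choices, not part of the original demo:

const slides = document.querySelectorAll('.slider .slide');

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    // a slide counts as "active" when most of it is visible in the scroller
    entry.target.classList.toggle('active', entry.isIntersecting);
  });
}, {
  root: document.querySelector('.slider'),
  threshold: 0.6
});

slides.forEach((slide) => observer.observe(slide));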
What are more limitations?
We've about tapped out what HTML and CSS alone can do here.

Want to be able to flick with a mouse? That's not a mouse behavior, so you'll need to do all that with DOM events. Any kind of exotic interactive behavior (e.g. physics) will require JavaScript. Although there is a weird trick for flipping vertical scrolling for horizontal.

Want to know when a slide is changed? Like a callback? That's JavaScript territory.
Need autoplay? You might be able to do something rudimentary with a checkbox, :checked, and controlling the animation-play-state of a @keyframes animation, but it will feel limited and janky (a small JavaScript sketch follows this list).
Want to have it infinitely scroll in one direction, repeating as needed? That's going to require cloning and moving stuff around in the DOM. Or perhaps some gross misuse of <marquee>.
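And, as promised in the list above, a rudimentary JavaScript autoplay is only a few lines. This sketch assumes the 300px slides and .slider wrapper from earlier:

const slider = document.querySelector('.slider');
const slideWidth = 300; // matches the CSS above

setInterval(() => {
  // loop back to the first slide once we reach the end
  if (slider.scrollLeft + slider.clientWidth >= slider.scrollWidth - 1) {
    slider.scrollTo({ left: 0, behavior: 'smooth' });
  } else {
    slider.scrollBy({ left: slideWidth, behavior: 'smooth' });
  }
}, 3000);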

I'll leave you with those. My point is only that there is a lot you can do before you need JavaScript. Starting with that strong of a base might be a way to go that provides a happy fallback, regardless of what you do on top of it.

You can get pretty far in making a slider with just HTML and CSS is a post from CSS-Tricks
Source: CssTricks


Building a Progress Ring, Quickly

On some particularly heavy sites, the user needs to see a visual cue temporarily to indicate that resources and assets are still loading before they can take in the finished site. There are different kinds of approaches to solving for this kind of UX, from spinners to skeleton screens.
If we are using an out-of-the-box solution that provides us the current progress, like the preloader package by Jam3 does, building a loading indicator becomes easier.
For this, we will make a ring/circle, style it, animate it given a progress value, and then wrap it in a component for reuse.

Step 1: Let's make an SVG ring
From the many ways available to draw a circle using just HTML and CSS, I'm choosing SVG since it's possible to configure and style through attributes while preserving its resolution in all screens.
<svg
  class="progress-ring"
  height="120"
  width="120"
>
  <circle
    class="progress-ring__circle"
    stroke-width="1"
    fill="transparent"
    r="58"
    cx="60"
    cy="60"
  />
</svg>
Inside an <svg> element we place a <circle> tag, where we declare the radius of the ring with the r attribute, its center position in the SVG viewBox with cx and cy, and the width of the circle's stroke with stroke-width.
You might have noticed the radius is 58 and not 60, which would seem like the correct value. We need to subtract the stroke or the circle will overflow the SVG wrapper.
radius = (width / 2) - (strokeWidth * 2)
This means that if we increase the stroke to 4, then the radius should be 52.
52 = (120 / 2) - (4 * 2)
So that it looks like a ring, we need to set its fill to transparent and choose a stroke color for the circle.
See the Pen SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.
Step 2: Adding the stroke
The next step is to animate the length of the outer line of our ring to simulate visual progress.
We are going to use two CSS properties that you might not have heard of before, since they are exclusive to SVG elements: stroke-dasharray and stroke-dashoffset.
stroke-dasharray
This property is like border-style: dashed but it lets you define the width of the dashes and the gap between them.
.progress-ring__circle {
  stroke-dasharray: 10 20;
}
With those values, our ring will have 10px dashes separated by 20px.
See the Pen Dashed SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.
stroke-dashoffset
The second one allows you to move the starting point of this dash-gap sequence along the path of the SVG element.
Now, imagine if we passed the circle's circumference to both stroke-dasharray values. Our shape would have one long dash occupying the whole length and a gap of the same length which wouldn't be visible.
This will cause no change initially, but if we also set stroke-dashoffset to that same length, then the long dash will move all the way around and reveal the gap.
Decreasing stroke-dashoffset would then start to reveal our shape.
A few years ago, Jake Archibald explained this technique in this article, which also has a live example that will help you understand it better. You should go read his tutorial.
The circumference
What we need now is that length, which can be calculated with the radius and this simple formula:
circumference = radius * 2 * PI
Since we know 52 is the radius of our ring:
326.7256 ~= 52 * 2 * PI
We could also get this value by JavaScript if we want:
const circle = document.querySelector('.progress-ring__circle');
const radius = circle.r.baseVal.value;
const circumference = radius * 2 * Math.PI;
This way we can later assign styles to our circle element.
circle.style.strokeDasharray = `${circumference} ${circumference}`;
circle.style.strokeDashoffset = circumference;
Step 3: Progress to offset
With this little trick, we know that assigning the circumference value to stroke-dashoffset will reflect the status of zero progress and the 0 value will indicate progress is complete.
Therefore, as the progress grows we need to reduce the offset like this:
function setProgress(percent) {
  const offset = circumference - percent / 100 * circumference;
  circle.style.strokeDashoffset = offset;
}
By transitioning the property, we will get the animation feel:
.progress-ring__circle {
  transition: stroke-dashoffset 0.35s;
}
One particular thing about stroke-dashoffset: its starting point is vertically centered and horizontally tilted to the right. It's necessary to negatively rotate the circle to get the desired effect.
.progress-ring__circle {
  transition: stroke-dashoffset 0.35s;
  transform: rotate(-90deg);
  transform-origin: 50% 50%;
}
Putting all of this together will give us something like this.
See the Pen vegymB by Jeremias Menichelli (@jeremenichelli) on CodePen.
A numeric input was added in this example to help you test the animation.
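Wiring a numeric input up to that function takes, for example, a couple of lines. The input element here is an assumption for illustration, not part of the markup above:

const input = document.querySelector('input[type="number"]');
input.addEventListener('input', (event) => setProgress(event.target.value));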
For this to be easily coupled inside your application it would be best to encapsulate the solution in a component.
As a web component
Now that we have the logic, the styles, and the HTML for our loading ring we can port it easily to any technology or framework.
First, let's use web components.
class ProgressRing extends HTMLElement {...}

window.customElements.define('progress-ring', ProgressRing);
This is the standard declaration of a custom element, extending the native HTMLElement class, which can be configured by attributes.
<progress-ring stroke="4" radius="60" progress="0"></progress-ring>
Inside the constructor of the element, we will create a shadow root to encapsulate the styles and its template.
constructor() {
  super();

  // get config from attributes
  const stroke = this.getAttribute('stroke');
  const radius = this.getAttribute('radius');
  const normalizedRadius = radius - stroke * 2;
  this._circumference = normalizedRadius * 2 * Math.PI;

  // create shadow dom root
  this._root = this.attachShadow({mode: 'open'});
  this._root.innerHTML = `
    <svg
      height="${radius * 2}"
      width="${radius * 2}"
    >
      <circle
        stroke="white"
        stroke-dasharray="${this._circumference} ${this._circumference}"
        style="stroke-dashoffset:${this._circumference}"
        stroke-width="${stroke}"
        fill="transparent"
        r="${normalizedRadius}"
        cx="${radius}"
        cy="${radius}"
      />
    </svg>

    <style>
      circle {
        transition: stroke-dashoffset 0.35s;
        transform: rotate(-90deg);
        transform-origin: 50% 50%;
      }
    </style>
  `;
}
You may have noticed that we have not hardcoded the values into our SVG, instead we are getting them from the attributes passed to the element.
Also, we are calculating the circumference of the ring and setting stroke-dasharray and stroke-dashoffset ahead of time.
The next thing is to observe the progress attribute and modify the circle styles.
setProgress(percent) {
  const offset = this._circumference - (percent / 100 * this._circumference);
  const circle = this._root.querySelector('circle');
  circle.style.strokeDashoffset = offset;
}

static get observedAttributes() {
  return [ 'progress' ];
}

attributeChangedCallback(name, oldValue, newValue) {
  if (name === 'progress') {
    this.setProgress(newValue);
  }
}
Here setProgress becomes a class method that will be called when the progress attribute is changed.
The observedAttributes are defined by a static getter, which will trigger attributeChangedCallback when, in this case, progress is modified.
See the Pen ProgressRing web component by Jeremias Menichelli (@jeremenichelli) on CodePen.
This Pen only works in Chrome at the time of this writing. An interval was added to simulate the progress change.
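To simulate the progress change the way the demo does, one possible sketch is to bump the progress attribute on a timer (the step and interval values are arbitrary):

const ring = document.querySelector('progress-ring');

let progress = 0;
const interval = setInterval(() => {
  progress += 5;
  // attributeChangedCallback picks this up and animates the stroke
  ring.setAttribute('progress', progress);
  if (progress >= 100) clearInterval(interval);
}, 400);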
As a Vue component
Web components are great. That said, some of the available libraries and frameworks, like Vue.js, can do quite a bit of the heavy-lifting.
To start, we need to define the view component.
const ProgressRing = Vue.component('progress-ring', {});
Writing a single file component is also possible and probably cleaner but we are adopting the factory syntax to match the final code demo.
We will define the attributes as props and the calculations as data.
const ProgressRing = Vue.component('progress-ring', {
  props: {
    radius: Number,
    progress: Number,
    stroke: Number
  },
  data() {
    const normalizedRadius = this.radius - this.stroke * 2;
    const circumference = normalizedRadius * 2 * Math.PI;

    return {
      normalizedRadius,
      circumference
    };
  }
});
Since computed properties are supported out-of-the-box in Vue, we can use one to calculate the value of stroke-dashoffset.
computed: {
  strokeDashoffset() {
    return this.circumference - this.progress / 100 * this.circumference;
  }
}
Next, we add our SVG as a template. Notice that the easy part here is that Vue provides us with bindings, bringing JavaScript expressions inside attributes and styles.
template: `
  <svg
    :height="radius * 2"
    :width="radius * 2"
  >
    <circle
      stroke="white"
      fill="transparent"
      :stroke-dasharray="circumference + ' ' + circumference"
      :style="{ strokeDashoffset }"
      :stroke-width="stroke"
      :r="normalizedRadius"
      :cx="radius"
      :cy="radius"
    />
  </svg>
`
When we update the progress prop of the element in our app, Vue takes care of computing the changes and updating the element styles.
See the Pen Vue ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.
Note: An interval was added to simulate the progress change. We do that in the next example as well.
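One way that simulation could look in Vue (the #app mount point, step, and interval here are assumptions, not taken from the Pen) is to keep progress in the parent and pass it down as a prop:

new Vue({
  el: '#app', // assumes a <div id="app"></div> mount point
  template: `
    <progress-ring :radius="60" :stroke="4" :progress="progress"></progress-ring>
  `,
  data: { progress: 0 },
  mounted() {
    // bump the progress value on a timer; the computed offset reacts automatically
    const interval = setInterval(() => {
      this.progress += 5;
      if (this.progress >= 100) clearInterval(interval);
    }, 400);
  }
});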
As a React component
In a similar way to Vue.js, React helps us handle all the configuration and computed values thanks to props and JSX notation.
First, we obtain some data from props passed down.
class ProgressRing extends React.Component {
  constructor(props) {
    super(props);

    const { radius, stroke } = this.props;

    this.normalizedRadius = radius - stroke * 2;
    this.circumference = this.normalizedRadius * 2 * Math.PI;
  }
}
Our template is the return value of the component's render function where we use the progress prop to calculate the stroke-dashoffset value.
render() {
  const { radius, stroke, progress } = this.props;
  const strokeDashoffset = this.circumference - progress / 100 * this.circumference;

  return (
    <svg
      height={radius * 2}
      width={radius * 2}
    >
      <circle
        stroke="white"
        fill="transparent"
        strokeWidth={ stroke }
        strokeDasharray={ this.circumference + ' ' + this.circumference }
        style={ { strokeDashoffset } }
        r={ this.normalizedRadius }
        cx={ radius }
        cy={ radius }
      />
    </svg>
  );
}
A change in the progress prop will trigger a new render cycle recalculating the strokeDashoffset variable.
See the Pen React ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.
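A parent component driving that prop might look like this rough sketch (the component name, step, and interval are illustrative, not from the Pen):

class Demo extends React.Component {
  constructor(props) {
    super(props);
    this.state = { progress: 0 };
  }

  componentDidMount() {
    // bump the progress prop on a timer, capping it at 100
    this.interval = setInterval(() => {
      this.setState(state => ({ progress: Math.min(state.progress + 5, 100) }));
    }, 400);
  }

  componentWillUnmount() {
    clearInterval(this.interval);
  }

  render() {
    return <ProgressRing radius={60} stroke={4} progress={this.state.progress} />;
  }
}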
Wrap up
The recipe for this solution is based on SVG shapes and styles, CSS transitions, and a little JavaScript to compute special attributes that simulate the drawing of the circumference.
Once we separate out this little piece, we can port it to any modern library or framework and include it in our app. In this article we explored web components, Vue, and React.
Further reading

Vue.js official guide
React official docs
Web components introduction

Building a Progress Ring, Quickly is a post from CSS-Tricks
Source: CssTricks