Offering more inclusive user demographic forms

The Open Demographics Initiative's gender identification questions
Last week, Nikki Stevens presented "Other, Please Specify" at TEDx Arizona State University. In her talk, Nikki shares the story behind the Open Demographics Initiative, which is developing a recommended set of questions that anyone can use to ask online community members about their demographics.
Nikki demonstrates how a majority of demographic surveys require users to conform to restrictive identity fields, which can alienate minority or underrepresented groups. The Open Demographics Initiative wants to develop forms that are more inclusive, in addition to giving people more control over the data and information they choose to disclose.
Inspired by Nikki's presentation, I reached out to the engineering team at the Drupal Association to see if there are plans to implement the Open Demographics Initiative's recommendations on I was happy to learn that they are collaborating with the Open Demographics team to add the recommendations to the user registration process on
Adopting Open Demographics on will also allow us to improve reporting on diversity and inclusion, which in turn will help us better support initiatives that advance diversity and inclusion. Plus, we can lead by example and inspire other organizations to do the same.
Thank you Nikki, for sharing the story behind the Open Demographics Initiative, and for helping to inspire change in the Drupal community and beyond.
Source: Dries Buytaert

Broken Records Taps Pixeldust to Develop New Identity

Broken Records, a Spicewood, TX, record label and recording studio, has selected Pixeldust as its lead digital agency for all Drupal web integration needs. Pixeldust will design and develop the brand identity and website for both the record label and recording studio. The website will feature Broken Records artists and showcase the state-of-the-art recording studio currently under construction. Pixeldust will also develop a highly interactive 3D animation to help introduce the brand. Read more

Move Slowly and Fix Things

Ruminations on the heavy weight of software design in the 21st century.

Recently I took a monthlong sabbatical from my job as a designer at Basecamp. (Basecamp is an incredible company that gives us a paid month off every 3 years.)

When you take 30 days away from work, you have a lot of time and headspace that’s normally used up. Inevitably you start to reflect on your life. And so, I pondered what the hell I’m doing with mine. What does it mean to be a software designer in 2018, compared to when I first began my weird career in the early 2000s?

The answer is weighing on me. As software continues to invade our lives in surreptitious ways, the social and ethical implications are increasingly significant. Our work is HEAVY and it’s getting heavier all the time. I think a lot of designers haven’t deeply considered this, and they don’t appreciate the real-life effects of the work they’re doing.

Here’s a little example. About 10 years ago, Twitter looked like so:

Twitter circa 2007

How cute was that? If you weren’t paying attention back then, Twitter was kind of a joke. It was a silly viral app where people wrote about their dog or their ham sandwich.

Today, things are a wee bit different. Twitter is now the megaphone for the leader of the free world, who uses it to broadcast his every whim. It’s also the world’s best source for real-time news, and it’s full of terrible abuse problems.

That’s a massive sea change! And it all happened in only 10 years. Do you think the creators of that little 2007 status-sharing concept had any clue this is where they’d end up, just a decade later? Seems like they didn’t:

People can’t decide whether Twitter is the next YouTube, or the digital equivalent of a hula hoop. To those who think it’s frivolous, Evan Williams responds: “Whoever said that things have to be useful?”

Twitter: Is Brevity The Next Big Thing? (Newsweek, April 2007)

Considering these shallow beginnings, is it any surprise that Twitter has continually struggled at running a massive, serious global communications platform, which now affects the world order? That’s not what they originally built. It grew into a Frankenstein’s monster, and now they’re not quite sure how to handle it.

I’m not picking on Twitter in particular, but its trajectory illustrates a systemic problem. Designers and programmers are great at inventing software. We obsess over every aspect of that process: the tech we use, our methodology, the way it looks, and how it performs. Unfortunately we’re not nearly as obsessed with what happens after that, when people integrate our products into the real world. They use our stuff and it takes on a life of its own. Then we move on to making the next thing. We’re builders, not sociologists.

This approach wasn’t a problem when apps were mostly isolated tools people used to manage spreadsheets or send emails. Small products with small impacts. But now most software is so much more than that. It listens to us. It goes everywhere we go. It tracks everything we do. It has our fingerprints. Our heart rate. Our money. Our location. Our face. It’s the primary way we communicate our thoughts and feelings to our friends and family. It’s deeply personal and ingrained into every aspect of our lives. It commands our gaze more and more every day.

We’ve rapidly ceded an enormous amount of trust to software, under the hazy guise of forward progress and personal convenience. And since software is constantly evolving—one small point release at a time—each new breach of trust or privacy feels relatively small and easy to justify. Oh, they’ll just have my location. Oh, they’ll just have my identity. Oh, they’ll just have an always-on microphone in the room.

Most software products are owned and operated by corporations, whose business interests often contradict their users’ interests. Even small, harmless-looking apps might be harvesting data about you and selling it. And that’s not even counting the army of machine learning bots that will soon be unleashed to make decisions for us.

It all sounds like an Orwellian dystopia when you write it out like this, but this is not fiction. It’s the real truth.

A scene from WALL-E, or the actual software industry in 2018?

See what I mean by HEAVY? Is this what we signed up for, when we embarked on a career in tech?

15 years ago, it was a slightly different story. The Internet was a nascent and bizarre wild west, and it had an egalitarian vibe. It was exciting and aspirational — you’d get paid to make cool things in a fast-moving industry, paired with the hippie notion that design can change the world. Well, that motto was right on the money. There’s just one part we forgot: change can have a dark side too.

If you’re a designer, ask yourself this question… Is your work helpful or harmful? You might have optimistically deluded yourself into believing it’s always helpful because you’re a nice person, and design is a noble-seeming endeavor, and you have good intentions. But let’s be brutally honest for a minute.

If you’re designing sticky features that are meant to maximize the time people spend using your product instead of doing something else in their life, is that helpful?

If you’re trying to desperately inflate the number of people on your platform so you can report corporate growth to your shareholders, is that helpful?

If your business model depends on using dark patterns or deceptive marketing to con users into clicking on advertising, is that helpful?

If you’re trying to replace meaningful human culture with automated tech, is that helpful?

If your business collects and sells personal data about people, is that helpful?

If your company is striving to dominate an industry by any means necessary, is that helpful?

If you do those things… Are you even a Designer at all? Or are you a glorified Huckster—a puffed-up propaganda artist with a fancy job title in an open-plan office?

Whether we choose to recognize it or not, designers have both the authority and the responsibility to prevent our products from becoming needlessly invasive, addictive, dishonest, or harmful. We can continue to pretend this is someone else’s job, but it’s not. It’s our job. We’re the first line of defense to protect people’s privacy, safety, and sanity. In many, many cases we’re failing at that right now.

If the past 20 years of tech represent the Move Fast and Break Things era, now it’s time to slow down and take stock of what’s broken.

At Basecamp, we’re leading the charge by running an unusually supportive company, pushing back on ugly practices in the industry, and giving a shit about our customers. We design our product to improve people’s work, and to stop their work from spilling over into their personal lives. We intentionally leave out features that might keep people hooked on Basecamp all day, in favor of giving them peace and freedom from constant interruptions. And we skip doing promotional things that might grow the business, if they feel gross and violate our values. We know we have a big responsibility on our hands, and we take it seriously.

You should too. The world needs as much care and conscience as we can muster. Defend your users against anti-patterns and shady business practices. Raise your hand and object to harmful design ideas. Call out bad stuff when you see it. Thoughtfully reflect on what you’re sending out into the world every day. The stakes are high and they’ll keep getting higher. Grab those sociology and ethics textbooks and get to work.

If you like this post, hit the 👏 below or send me a message about your ham sandwich on Twitter.

Move Slowly and Fix Things was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.

Source: 37signals

Emulating CSS Timing Functions with JavaScript

CSS animations and transitions are great! However, while recently toying with an idea, I got really frustrated with the fact that gradients are only animatable in Edge (and IE 10+). Yes, we can do all sorts of tricks with background-position, background-size, background-blend-mode or even opacity and transform on a pseudo-element/child, but sometimes these are just not enough. Not to mention that we run into similar problems when wanting to animate SVG attributes without a CSS correspondent.
Using a lot of examples, this article is going to explain how to smoothly go from one state to another in a similar fashion to that of common CSS timing functions using just a little bit of JavaScript, without having to rely on a library, so without including a lot of complicated and unnecessary code that may become a big burden in the future.

This is not how the CSS timing functions work. This is an approach that I find simpler and more intuitive than working with Bézier curves. I'm going to show how to experiment with different timing functions using JavaScript and dissect use cases. It is not a tutorial on how to do beautiful animation.
A few examples using a linear timing function
Let's start with a left to right linear-gradient() with a sharp transition where we want to animate the first stop. Here's a way to express that using CSS custom properties:
background: linear-gradient(90deg, #ff9800 var(--stop, 0%), #3c3c3c 0);
On click, we want the value of this stop to go from 0% to 100% (or the other way around, depending on the state it's already in) over the course of NF frames. If an animation is already running at the time of the click, we stop it, change its direction, then restart it.
We also need a few variables such as the request ID (this gets returned by requestAnimationFrame), the index of the current frame (an integer in the [0, NF] interval, starting at 0) and the direction our transition is going in (which is 1 when going towards 100% and -1 when going towards 0%).
While nothing is changing, the request ID is null. We also set the current frame index to 0 initially and the direction to -1, as if we've just arrived to 0% from 100%.
const NF = 80; // number of frames transition happens over

let rID = null, f = 0, dir = -1;

function stopAni() {
  cancelAnimationFrame(rID);
  rID = null;
};

function update() { /* we populate this below */ };

addEventListener('click', e => {
  if(rID) stopAni(); // if an animation is already running, stop it
  dir *= -1; // change animation direction
  update(); // (re)start the animation
}, false);
Now all that's left is to populate the update() function. Within it, we update the current frame index f. Then we compute a progress variable k as the ratio between this current frame index f and the total number of frames NF. Given that f goes from 0 to NF (included), this means that our progress k goes from 0 to 1. Multiply this with 100% and we get the desired stop.
After this, we check whether we've reached one of the end states. If we have, we stop the animation and exit the update() function.
function update() {
  f += dir; // update current frame index

  let k = f/NF; // compute progress'--stop', `${+(k*100).toFixed(2)}%`);

  if(!(f%NF)) { // if we've reached an end state
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};
The result can be seen in the Pen below (note that we go back on a second click):
See the Pen by thebabydino (@thebabydino) on CodePen.
The way the pseudo-element is made to contrast with the background below is explained in an older article.
The above demo may look like something we could easily achieve with an element and translating a pseudo-element that can fully cover it, but things get a lot more interesting if we give the background-size a value that's smaller than 100% along the x axis, let's say 5em:
See the Pen by thebabydino (@thebabydino) on CodePen.
This gives us a sort of a "vertical blinds" effect that cannot be replicated in a clean manner with just CSS if we don't want to use more than one element.
Another option would be not to alternate the direction and always sweep from left to right, except only odd sweeps would be orange. This requires tweaking the CSS a bit:
--c0: #ff9800;
--c1: #3c3c3c;
background: linear-gradient(90deg,
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0)
In the JavaScript, we ditch the direction variable and add a type one (typ) that switches between 0 and 1 at the end of every transition. That's when we also update all custom properties:
const S =;

let typ = 0;

function update() {
  let k = ++f/NF;

  S.setProperty('--stop', `${+(k*100).toFixed(2)}%`);

  if(!(f%NF)) { // at the end of a transition
    f = 0;
    S.setProperty('--gc1', `var(--c${typ})`);
    typ = 1 - typ; // switch the type
    S.setProperty('--gc0', `var(--c${typ})`);
    S.setProperty('--stop', `0%`);
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};
This gives us the desired result (click at least twice to see how the effect differs from that in the first demo):
See the Pen by thebabydino (@thebabydino) on CodePen.
We could also change the gradient angle instead of the stop. In this case, the background rule becomes:
background: linear-gradient(var(--angle, 0deg),
#ff9800 50%, #3c3c3c 0);
In the JavaScript code, we tweak the update() function:
function update() {
  f += dir;

  let k = f/NF;

  S.setProperty('--angle', `${+(k*180).toFixed(2)}deg`);

  if(!(f%NF)) {
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};
We now have a gradient angle transition in between the two states (0deg and 180deg):
See the Pen by thebabydino (@thebabydino) on CodePen.
In this case, we might also want to keep going clockwise to get back to the 0deg state instead of changing the direction. So we just ditch the dir variable altogether, discard any clicks happening during the transition, and always increment the frame index f, resetting it to 0 when we've completed a full rotation around the circle:
function update() {
  let k = ++f/NF;

  S.setProperty('--angle', `${+(k*180).toFixed(2)}deg`);

  if(!(f%NF)) {
    f = f%(2*NF); // reset after a full rotation
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};

addEventListener('click', e => {
  if(!rID) update()
}, false);
The following Pen illustrates the result - our rotation is now always clockwise:
See the Pen by thebabydino (@thebabydino) on CodePen.
Something else we could do is use a radial-gradient() and animate the radial stop:
background: radial-gradient(circle,
#ff9800 var(--stop, 0%), #3c3c3c 0);
The JavaScript code is identical to that of the first demo and the result can be seen below:
See the Pen by thebabydino (@thebabydino) on CodePen.
We may also not want to go back when clicking again, but instead make another blob grow and cover the entire viewport. In this case, we add a few more custom properties to the CSS:
--c0: #ff9800;
--c1: #3c3c3c;
background: radial-gradient(circle,
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0)
The JavaScript is the same as in the case of the third linear-gradient() demo. This gives us the result we want:
See the Pen by thebabydino (@thebabydino) on CodePen.
A fun tweak to this would be to make our circle start growing from the point we clicked. To do so, we introduce two more custom properties, --x and --y:
background: radial-gradient(circle at var(--x, 50%) var(--y, 50%),
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0)
When clicking, we set these to the coordinates of the point where the click happened:
addEventListener('click', e => {
  if(!rID) {
    S.setProperty('--x', `${e.clientX}px`);
    S.setProperty('--y', `${e.clientY}px`);
    update()
  }
}, false);
This gives us the following result where we have a disc growing from the point where we clicked:
See the Pen by thebabydino (@thebabydino) on CodePen.
Another option would be using a conic-gradient() and animating the angular stop:
background: conic-gradient(#ff9800 var(--stop, 0%), #3c3c3c 0%)
Note that in the case of conic-gradient(), we must use a unit for the zero value (whether that unit is % or an angular one like deg doesn't matter), otherwise our code won't work - writing conic-gradient(#ff9800 var(--stop, 0%), #3c3c3c 0) means nothing gets displayed.
The JavaScript is the same as for animating the stop in the linear or radial case, but bear in mind that this currently only works in Chrome with Experimental Web Platform Features enabled in chrome://flags.
The Experimental Web Platform Features flag enabled in Chrome Canary (63.0.3210.0).
Just for the purpose of displaying conic gradients in the browser, there's a polyfill by Lea Verou and this works cross-browser but doesn't allow using CSS custom properties.
The recording below illustrates how our code works:
Recording of how our first conic-gradient() demo works in Chrome with the flag enabled (live demo).
This is another situation where we might not want to go back on a second click. This means we need to alter the CSS a bit, in the same way we did for the last radial-gradient() demo:
--c0: #ff9800;
--c1: #3c3c3c;
background: conic-gradient(
var(--gc0, var(--c0)) var(--stop, 0%),
var(--gc1, var(--c1)) 0%)
The JavaScript code is exactly the same as in the corresponding linear-gradient() or radial-gradient() case and the result can be seen below:
Recording of how our second conic-gradient() demo works in Chrome with the flag enabled (live demo).
Before we move on to other timing functions, there's one more thing to cover: the case when we don't go from 0% to 100%, but between any two values. We take the example of our first linear-gradient(), but with a different default for --stop (an initial value --stop-ini of, let's say, 85%), and we also set a --stop-fin value - this is going to be the final value for --stop:
--stop-ini: 85%;
--stop-fin: 26%;
background: linear-gradient(90deg, #ff9800 var(--stop, var(--stop-ini)), #3c3c3c 0)
In the JavaScript, we read these two values - the initial (default) and the final one - and we compute a range as the difference between them:
const S = getComputedStyle(document.body),
      INI = +S.getPropertyValue('--stop-ini').replace('%', ''),
      FIN = +S.getPropertyValue('--stop-fin').replace('%', ''),
      RANGE = FIN - INI;
Finally, in the update() function, we take into account the initial value and the range when setting the current value for --stop:
`${+(INI + k*RANGE).toFixed(2)}%`
With these changes we now have a transition in between 85% and 26% (and the other way on even clicks):
See the Pen by thebabydino (@thebabydino) on CodePen.
If we want to mix units for the stop value, things get hairier as we need to compute more things (box dimensions when mixing % and px, font sizes if we throw em or rem in the mix, viewport dimensions if we want to use viewport units, the length of the 0% to 100% segment on the gradient line for gradients that are not horizontal or vertical), but the basic idea remains the same.
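The INI + k*RANGE interpolation described above can be sketched as a standalone snippet; the values below are hardcoded stand-ins for what the demo reads out of the --stop-ini/--stop-fin custom properties:

```javascript
// Sketch: map progress k in [0, 1] to a stop value between an
// initial and a final percentage, as in the --stop-ini/--stop-fin demo.
const INI = 85, FIN = 26;  // stand-ins for --stop-ini/--stop-fin
const RANGE = FIN - INI;   // here: -59

function stopAt(k) {
  // round to 2 decimals, like +(...).toFixed(2) in the demo code
  return +(INI + k*RANGE).toFixed(2);
}

console.log(stopAt(0));  // 85 (the initial stop)
console.log(stopAt(.5)); // 55.5 (halfway)
console.log(stopAt(1));  // 26 (the final stop)
```

Because RANGE is negative here, the stop value decreases as the progress k grows, which is exactly the 85% to 26% transition the demo shows.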
Emulating ease-in/ ease-out
An ease-in kind of function means the change in value happens slow at first and then accelerates. ease-out is exactly the opposite - the change happens fast in the beginning, but then slows down towards the end.
The ease-in (left) and ease-out (right) timing functions (live).
The slope of the curves above gives us the rate of change. The steeper it is, the faster the change in value happens.
We can emulate these functions by tweaking the linear method described in the first section. Since k takes values in the [0, 1] interval, raising it to any positive power also gives us a number within the same interval. The interactive demo below shows the graph of a function f(k) = pow(k, p) (k raised to an exponent p) shown in purple and that of a function g(k) = 1 - pow(1 - k, p) shown in red on the [0, 1] interval versus the identity function id(k) = k (which corresponds to a linear timing function).
See the Pen by thebabydino (@thebabydino) on CodePen.
When the exponent p is equal to 1, the graphs of the f and g functions are identical to that of the identity function.
When exponent p is greater than 1, the graph of the f function is below the identity line - the rate of change increases as k increases. This is like an ease-in type of function. The graph of the g function is above the identity line - the rate of change decreases as k increases. This is like an ease-out type of function.
It seems an exponent p of about 2 gives us an f that's pretty similar to ease-in, while g is pretty similar to ease-out. With a bit more tweaking, it looks like the best approximation is for a p value of about 1.675:
See the Pen by thebabydino (@thebabydino) on CodePen.
In this interactive demo, we want the graphs of the f and g functions to be as close as possible to the dashed lines, which represent the ease-in timing function (below the identity line) and the ease-out timing function (above the identity line).
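Since k stays in the [0, 1] interval, the two power-based approximations can be checked numerically; a small sketch using the p ≈ 1.675 exponent found above:

```javascript
// Power-based approximations of ease-in and ease-out,
// using the p ≈ 1.675 exponent found by tweaking the demo.
const p = 1.675;
const easeIn  = k => Math.pow(k, p);         // f(k): below the identity line
const easeOut = k => 1 - Math.pow(1 - k, p); // g(k): above the identity line

// Both fix the endpoints:
console.log(easeIn(0), easeIn(1)); // 0 1
// Halfway through, ease-in lags behind linear progress and ease-out leads:
console.log(easeIn(.5) < .5);  // true
console.log(easeOut(.5) > .5); // true
```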
Emulating ease-in-out
The CSS ease-in-out timing function looks like in the illustration below:
The ease-in-out timing function (live).
So how can we get something like this?
Well, that's what harmonic functions are for! More exactly, the ease-in-out shape is reminiscent of the shape of the sin() function on the [-90°,90°] interval.
The sin(k) function on the [-90°,90°] interval (live).
However, we don't want a function whose input is in the [-90°,90°] interval and output is in the [-1,1] interval, so let's fix this!
This means we need to squish the hashed rectangle ([-90°,90°]x[-1,1]) in the illustration above into the unit one ([0,1]x[0,1]).
First, let's take the domain [-90°,90°]. If we change our function to be sin(k·180°) (or sin(k·π) in radians), then our domain becomes [-.5,.5] (we can check that -.5·180° = -90° and .5·180° = 90°):
The sin(k·π) function on the [-.5,.5] interval (live).
We can shift this domain to the right by .5 and get the desired [0,1] interval if we change our function to be sin((k - .5)·π) (we can check that 0 - .5 = -.5 and 1 - .5 = .5):
The sin((k - .5)·π) function on the [0,1] interval (live).
Now let's get the desired codomain. If we add 1 to our function, making it sin((k - .5)·π) + 1, this shifts our codomain up into the [0, 2] interval:
The sin((k - .5)·π) + 1 function on the [0,1] interval (live).
Dividing everything by 2 gives us the (sin((k - .5)·π) + 1)/2 function and compacts the codomain into our desired [0,1] interval:
The (sin((k - .5)·π) + 1)/2 function on the [0,1] interval (live).
This turns out to be a good approximation of the ease-in-out timing function (represented with an orange dashed line in the illustration above).
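The derivation above can be sanity-checked in a few lines; a standalone sketch of the final function:

```javascript
// The sine-based ease-in-out emulation derived above:
// (sin((k - .5)·π) + 1)/2, with k the progress in [0, 1].
const easeInOut = k => .5*(Math.sin((k - .5)*Math.PI) + 1);

console.log(easeInOut(0));  // 0   (since sin(-π/2) = -1)
console.log(easeInOut(.5)); // 0.5 (since sin(0) = 0)
console.log(easeInOut(1));  // 1   (since sin(π/2) = 1)
```

The endpooint values confirm the squishing worked: the codomain now matches the [0,1] progress interval exactly.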
Comparison of all these timing functions
Let's say we want to have a bunch of elements with a linear-gradient() (like in the third demo). On click, their --stop values go from 0% to 100%, but with a different timing function for each.
In the JavaScript, we create a timing functions object with the corresponding function for each type of easing:
tfn = {
  'linear': function(k) {
    return k;
  },
  'ease-in': function(k) {
    return Math.pow(k, 1.675);
  },
  'ease-out': function(k) {
    return 1 - Math.pow(1 - k, 1.675);
  },
  'ease-in-out': function(k) {
    return .5*(Math.sin((k - .5)*Math.PI) + 1);
  }
};
For each of these, we create an article element:
const _ART = [];

let frag = document.createDocumentFragment();

for(let p in tfn) {
  let art = document.createElement('article'),
      hd = document.createElement('h3');

  hd.textContent = p;
  art.setAttribute('id', p);
  art.appendChild(hd);
  _ART.push(art);
  frag.appendChild(art);
}

document.body.appendChild(frag);

n = _ART.length;
The update function is pretty much the same, except we set the --stop custom property for every element as the value returned by the corresponding timing function when fed the current progress k. Also, when resetting the --stop to 0% at the end of the animation, we also need to do this for every element.
function update() {
  let k = ++f/NF;

  for(let i = 0; i < n; i++) {
    _ART[i].style.setProperty('--stop', `${+(tfn[_ART[i].id](k)*100).toFixed(2)}%`);
  }

  if(!(f%NF)) {
    f = 0;

    S.setProperty('--gc1', `var(--c${typ})`);
    typ = 1 - typ;
    S.setProperty('--gc0', `var(--c${typ})`);

    for(let i = 0; i < n; i++)
      _ART[i].style.setProperty('--stop', `0%`);

    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};
This gives us a nice visual comparison of these timing functions:
See the Pen by thebabydino (@thebabydino) on CodePen.
They all start and finish at the same time, but while the progress is constant for the linear one, the ease-in one starts slowly and then accelerates, the ease-out one starts fast and then slows down and, finally, the ease-in-out one starts slowly, accelerates and then slows down again at the end.
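That ordering can also be checked numerically; a standalone sketch sampling the same four functions (arrow-function form of the tfn object above) at the halfway point:

```javascript
// Sampling the four timing functions at k = .5 to confirm
// the ordering described above.
const tfn = {
  'linear':      k => k,
  'ease-in':     k => Math.pow(k, 1.675),
  'ease-out':    k => 1 - Math.pow(1 - k, 1.675),
  'ease-in-out': k => .5*(Math.sin((k - .5)*Math.PI) + 1)
};

for(const name in tfn)
  console.log(name, `${+(tfn[name](.5)*100).toFixed(2)}%`);
// ease-in is behind linear at this point, ease-out is ahead,
// and ease-in-out has caught up with linear exactly at the midpoint
```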
Timing functions for bouncing transitions
I first came across the concept years ago, in Lea Verou's CSS Secrets talk. These happen when the y (even) values in a cubic-bezier() function are outside the [0, 1] range and the effect they create is of the animated value going outside the interval between its initial and final value.
This bounce can happen right after the transition starts, right before it finishes or at both ends.
A bounce at the start means that, at first, we don't go towards the final state, but in the opposite direction. For example, if we want to animate a stop from 43% to 57% and we have a bounce at the start, then, at first, our stop value doesn't increase towards 57%, but decreases below 43% before going back up to the final state. Similarly, if we go from an initial stop value of 57% to a final stop value of 43% and we have a bounce at the start, then, at first, the stop value increases above 57% before going down to the final value.
A bounce at the end means we overshoot our final state and only then go back to it. If we want to animate a stop from 43% to 57% and we have a bounce at the end, then we start going normally from the initial state to the final one, but towards the end, we go above 57% before going back down to it. And if we go from an initial stop value of 57% to a final stop value of 43% and we have a bounce at the end, then, at first, we go down towards the final state, but, towards the end, we pass it and we briefly have stop values below 43% before our transition finishes there.
If what they do is still difficult to grasp, below there's a comparative example of all three of them in action.
The three cases.
These kinds of timing functions don't have their own keywords associated, but they look cool and they are what we want in a lot of situations.
Just like in the case of ease-in-out, the quickest way of getting them is by using harmonic functions. The difference lies in the fact that now we don't start from the [-90°,90°] domain anymore.
For a bounce at the beginning, we start with the [s, 0°] portion of the sin() function, where s (the start angle) is in the (-180°,-90°) interval. The closer it is to -180°, the bigger the bounce is and the faster it will go to the final state after it. So we don't want it to be really close to -180° because the result would look too unnatural. We also want it to be far enough from -90° that the bounce is noticeable.
In the interactive demo below, you can drag the slider to change the start angle and then click on the stripe at the bottom to see the effect in action:
See the Pen by thebabydino (@thebabydino) on CodePen.
In the interactive demo above, the hashed area ([s,0]x[sin(s),0]) is the area we need to move and scale into the [0,1]x[0,1] area in order to get our timing function. The part of the curve that's below its lower edge is where the bounce happens. You can adjust the start angle using the slider and then click on the bottom bar to see how the transition looks for different start angles.
Just like in the ease-in-out case, we first squish the domain into the [-1,0] interval by dividing the argument with the range (which is the maximum 0 minus the minimum s). Therefore, our function becomes sin(-k·s) (we can check that -(-1)·s = s and -0·s = 0):
The sin(-k·s) function on the [-1,0] interval (live).
Next, we shift this interval to the right (by 1, into [0,1]). This makes our function sin(-(k - 1)·s) = sin((1 - k)·s) (it checks that 0 - 1 = -1 and 1 - 1 = 0):
The sin(-(k - 1)·s) function on the [0,1] interval (live).
We then shift the codomain up by its value at 0 (sin((1 - 0)·s) = sin(s)). Our function is now sin((1 - k)·s) - sin(s) and our codomain becomes [0,-sin(s)]:
The sin(-(k - 1)·s) - sin(s) function on the [0,1] interval (live).
The last step is to expand the codomain into the [0,1] range. We do this by dividing by its upper limit (which is -sin(s)). This means our final easing function is 1 - sin((1 - k)·s)/sin(s):
The 1 - sin((1 - k)·s)/sin(s) function on the [0,1] interval (live).
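The final function can be verified numerically; a sketch using an example start angle of -135° (any value in the (-180°,-90°) interval works):

```javascript
// The bounce-at-start function 1 - sin((1 - k)·s)/sin(s),
// with an example start angle s = -135° (inside the (-180°, -90°) interval).
const s = -135*Math.PI/180;
const bounceIni = k => 1 - Math.sin((1 - k)*s)/Math.sin(s);

console.log(bounceIni(0)); // 0 - we start at the initial state
console.log(bounceIni(1)); // 1 - we finish at the final state
// early in the transition the value dips below 0: that's the bounce
console.log(bounceIni(.2) < 0); // true
```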
For a bounce at the end, we start with the [0°, e] portion of the sin() function, where e (the end angle) is in the (90°,180°) interval. The closer it is to 180°, the bigger the bounce is and the faster it will move from the initial state to the final one before it overshoots it and the bounce happens. So we don't want it to be really close to 180° as the result would look too unnatural. We also want it to be far enough from 90° so that the bounce is noticeable.
See the Pen by thebabydino (@thebabydino) on CodePen.
In the interactive demo above, the hashed area ([0,e]x[0,sin(e)]) is the area we need to squish and move into the [0,1]x[0,1] square in order to get our timing function. The part of the curve that's below its upper edge is where the bounce happens.
We start by squishing the domain into the [0,1] interval by dividing the argument with the range (which is the maximum e minus the minimum 0). Therefore, our function becomes sin(k·e) (we can check that 0·e = 0 and 1·e = e):
The sin(k·e) function on the [0,1] interval (live).
What's still left to do is to expand the codomain into the [0,1] range. We do this by dividing by its upper limit (which is sin(e)). This means our final easing function is sin(k·e)/sin(e).
The sin(k·e)/sin(e) function on the [0,1] interval (live).
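Again, the endpoints and the overshoot can be checked in a few lines; a sketch with an example end angle of 135°:

```javascript
// The bounce-at-end function sin(k·e)/sin(e),
// with an example end angle e = 135° (inside the (90°, 180°) interval).
const e = 135*Math.PI/180;
const bounceFin = k => Math.sin(k*e)/Math.sin(e);

console.log(bounceFin(0)); // 0
console.log(bounceFin(1)); // 1
// towards the end we overshoot the final state before settling back:
console.log(bounceFin(.8) > 1); // true
```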
If we want a bounce at each end, we start with the [s, e] portion of the sin() function, where s is in the (-180°,-90°) interval and e in the (90°,180°) interval. The larger s and e are in absolute values, the bigger the corresponding bounces are and the more of the total transition time is spent on them alone. On the other hand, the closer their absolute values get to 90°, the less noticeable their corresponding bounces are. So, just like in the previous two cases, it's all about finding the right balance.
See the Pen by thebabydino (@thebabydino) on CodePen.
In the interactive demo above, the hashed area ([s,e]x[sin(s),sin(e)]) is the area we need to move and scale into the [0,1]x[0,1] square in order to get our timing function. The part of the curve that's beyond its horizontal edges is where the bounces happen.
We start by shifting the domain to the right into the [0,e - s] interval. This means our function becomes sin(k + s) (we can check that 0 + s = s and that e - s + s = e).
The sin(k + s) function on the [0,e - s] interval (live).
Then we shrink the domain to fit into the [0,1] interval, which gives us the function sin(k·(e - s) + s).
The sin(k·(e - s) + s) function on the [0,1] interval (live).
Moving on to the codomain, we first shift it up by its value at 0 (sin(0·(e - s) + s)), which means we now have sin(k·(e - s) + s) - sin(s). This gives us the new codomain [0,sin(e) - sin(s)].
The sin(k·(e - s) + s) - sin(s) function on the [0,1] interval (live).
Finally, we shrink the codomain to the [0,1] interval by dividing with the range (sin(e) - sin(s)), so our final function is (sin(k·(e - s) + s) - sin(s))/(sin(e) - sin(s)).
The (sin(k·(e - s) + s) - sin(s))/(sin(e) - sin(s)) function on the [0,1] interval (live).
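The final bounce-ini-fin formula translated to JavaScript (the s and e values below are just illustrative picks from their respective intervals):

```javascript
// bounce at both ends: (sin(k·(e - s) + s) - sin(s))/(sin(e) - sin(s))
// s picked from (-180°,-90°), e from (90°,180°), both in radians here
const s = -.65*Math.PI, e = .75*Math.PI;

const bounceIniFin = k =>
  (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s));

console.log(bounceIniFin(0)); // 0: the start maps to the start
console.log(bounceIniFin(.05) < 0); // true: the dip below 0 is the initial bounce
```

At k = 1 the function evaluates to 1 (up to floating-point error), so both endpoints land where they should.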
So in order to do a similar comparative demo to that for the JS equivalents of the CSS linear, ease-in, ease-out, ease-in-out, our timing functions object becomes:
tfn = {
  'bounce-ini': function(k) {
    return 1 - Math.sin((1 - k)*s)/Math.sin(s);
  },
  'bounce-fin': function(k) {
    return Math.sin(k*e)/Math.sin(e);
  },
  'bounce-ini-fin': function(k) {
    return (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s));
  }
};
The s and e variables are the values we get from the two range inputs that allow us to control the bounce amount.
The interactive demo below shows the visual comparison of these three types of timing functions:
See the Pen by thebabydino (@thebabydino) on CodePen.
Alternating animations
In CSS, setting animation-direction to alternate also reverses the timing function. In order to better understand this, consider a .box element on which we animate its transform property such that we move to the right. This means our @keyframes look as follows:
@keyframes shift {
  0%, 10% { transform: none }
  90%, 100% { transform: translate(50vw) }
}
We use a custom timing function that allows us to have a bounce at the end and we make this animation alternate - that is, go from the final state (translate(50vw)) back to the initial state (no translation) for the even-numbered iterations (second, fourth and so on).
animation: shift 1s cubic-bezier(.5, 1, .75, 1.5) infinite alternate
The result can be seen below:
See the Pen by thebabydino (@thebabydino) on CodePen.
One important thing to notice here is that, for the even-numbered iterations, our bounce doesn't happen at the end, but at the start - the timing function is reversed. Visually, this means it's reflected both horizontally and vertically with respect to the .5,.5 point.
The normal timing function (f, in red, with a bounce at the end) and the symmetrical reverse one (g, in purple, with a bounce at the start) (live)
In CSS, there is no way to use anything other than the symmetrical timing function on the way back if we use this set of keyframes and animation-direction: alternate. We could introduce the going-back part into the keyframes and control the timing function for each stage of the animation, but that's outside the scope of this article.
When changing values with JavaScript in the fashion presented so far in this article, the same thing happens by default. Consider the case when we want to animate a stop of a linear-gradient() between an initial and a final position and we want to have a bounce at the end. This is pretty much the last example presented in the first section, with a timing function that lets us have a bounce at the end (one in the bounce-fin category described before) instead of a linear one.
The CSS is exactly the same and we only make a few minor changes to the JavaScript code. We set a limit angle E and we use a custom bounce-fin kind of timing function in place of the linear one:
const E = .75*Math.PI;

/* same as before */

function timing(k) {
  return Math.sin(k*E)/Math.sin(E)
}

function update() {
  /* same as before */

  `${+(INI + timing(k)*RANGE).toFixed(2)}%`

  /* same as before */
}

/* same as before */
The result can be seen below:
See the Pen by thebabydino (@thebabydino) on CodePen.
In the initial state, the stop is at 85%. We animate it to 26% (which is the final state) using a timing function that gives us a bounce at the end. This means we go beyond our final stop position at 26% before going back up and stopping there. This is what happens during the odd iterations.
During the even iterations, this behaves just like in the CSS case, reversing the timing function, so that the bounce happens at the beginning, not at the end.
But what if we don't want the timing function to be reversed?
In this case, we need to use the symmetrical function. For any timing function f(k) defined on the [0,1] interval (the domain), whose values are in the [0,1] interval (the codomain), the symmetrical function we want is 1 - f(1 - k). Note that functions whose shape is actually symmetrical with respect to the .5,.5 point, like linear or ease-in-out, are identical to their symmetrical functions.
See the Pen by thebabydino (@thebabydino) on CodePen.
So what we do is use our timing function f(k) for the odd iterations and use 1 - f(1 - k) for the even ones. We can tell whether an iteration is odd or even from the direction (dir) variable. This is 1 for odd iterations and -1 for even ones.
This means we can combine our two timing functions into one: m + dir*f(m + dir*k).
Here, the multiplier m is 0 for the odd iterations (when dir is 1) and 1 for the even ones (when dir is -1), so we can compute it as .5*(1 - dir):
dir = +1 → m = .5*(1 - (+1)) = .5*(1 - 1) = .5*0 = 0
dir = -1 → m = .5*(1 - (-1)) = .5*(1 + 1) = .5*2 = 1
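We can verify that this combined expression really collapses to the two functions we want (f below is just a stand-in easing for the test):

```javascript
// example easing: an ease-out-ish quadratic, only for testing the combination
const f = k => 1 - Math.pow(1 - k, 2);

// combined timing: m + dir*f(m + dir*k)
const combined = (k, dir) => {
  const m = .5*(1 - dir); // 0 when dir = +1, 1 when dir = -1
  return m + dir*f(m + dir*k);
};

console.log(combined(.3, 1) === f(.3)); // true: odd iterations use f(k)
console.log(combined(.3, -1) === 1 - f(1 - .3)); // true: even ones use the reverse
```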
This way, our JavaScript becomes:
let m;

/* same as before */

function update() {
  /* same as before */

  `${+(INI + (m + dir*timing(m + dir*k))*RANGE).toFixed(2)}%`

  /* same as before */
}

addEventListener('click', e => {
  if(rID) stopAni();
  dir *= -1;
  m = .5*(1 - dir);
  update()
}, false);
The final result can be seen in this Pen:
See the Pen by thebabydino (@thebabydino) on CodePen.
Even more examples
Gradient stops are not the only things that aren't animatable cross-browser with just CSS.
Gradient end going from orange to violet
For a first example of something different, let's say we want the orange in our gradient to animate to a kind of violet. We start with a CSS that looks something like this:
--c-ini: #ff9800;
--c-fin: #a048b9;
background: linear-gradient(90deg,
var(--c, var(--c-ini)), #3c3c3c)
In order to interpolate between the initial and final values, we need to know the format we get when reading them via JavaScript - is it going to be the same format we set them in? Is it going to be always rgb()/ rgba()?
Here is where things get a bit hairy. Consider the following test, where we have a gradient where we've used every format possible:
--c0: hsl(150, 100%, 50%); // springgreen
--c1: orange;
--c2: #8a2be2; // blueviolet
--c3: rgb(220, 20, 60); // crimson
--c4: rgba(255, 245, 238, 1); // seashell with alpha = 1
--c5: hsla(51, 100%, 50%, .5); // gold with alpha = .5
background: linear-gradient(90deg,
var(--c0), var(--c1),
var(--c2), var(--c3),
var(--c4), var(--c5))
We read the computed values of the gradient image and the individual custom properties --c0 through --c5 via JavaScript.
let s = getComputedStyle(document.body);

console.log(s.getPropertyValue('--c0'), 'springgreen');
console.log(s.getPropertyValue('--c1'), 'orange');
console.log(s.getPropertyValue('--c2'), 'blueviolet');
console.log(s.getPropertyValue('--c3'), 'crimson');
console.log(s.getPropertyValue('--c4'), 'seashell (alpha = 1)');
console.log(s.getPropertyValue('--c5'), 'gold (alpha = .5)');
The results seem a bit inconsistent.
Screenshots showing what gets logged in Chrome, Edge and Firefox (live).
Whatever we do, if we have an alpha strictly less than 1, what we get via JavaScript always seems to be an rgba() value, regardless of whether we've set it with rgba() or hsla().
All browsers also agree when reading the custom properties directly, though, this time, what we get doesn't seem to make much sense: orange, crimson and seashell are returned as keywords regardless of how they were set, but we get hex values for springgreen and blueviolet. Except for orange, which was added in Level 2, all these values were added to CSS in Level 3, so why do we get some as keywords and others as hex values?
For the background-image, Firefox always returns the fully opaque values only as rgb(), while Chrome and Edge return them as either keywords or hex values, just like they do in the case when we read the custom properties directly.
Oh well, at least that lets us know we need to take into account different formats.
So the first thing we need to do is map the keywords to rgb() values. Not going to write all that manually, so a quick search finds this repo - perfect, it's exactly what we want! We can now set that as the value of a CMAP constant.
The next step here is to create a getRGBA(c) function that would take a string representing a keyword, a hex or an rgb()/ rgba() value and return an array containing the RGBA values ([red, green, blue, alpha]).
We start by building our regular expressions for the hex and rgb()/ rgba() values. These are a bit loose and would catch quite a few false positives if we were to have user input, but since we're only using them on CSS computed style values, we can afford to take the quick and dirty path here:
let re_hex = /^#([a-f\d]{1,2})([a-f\d]{1,2})([a-f\d]{1,2})$/i,
    re_rgb = /^rgba?\((\d{1,3},\s){2}\d{1,3}(,\s((0|1)?\.?\d*))?\)/;
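Here is roughly what these loose patterns accept and reject (the regexes are repeated in the snippet so it's self-contained):

```javascript
// loose patterns for hex and rgb()/rgba() computed values
const re_hex = /^#([a-f\d]{1,2})([a-f\d]{1,2})([a-f\d]{1,2})$/i,
      re_rgb = /^rgba?\((\d{1,3},\s){2}\d{1,3}(,\s((0|1)?\.?\d*))?\)/;

console.log(re_hex.test('#8a2be2')); // true, 6-digit hex
console.log(re_hex.test('#fff')); // true, 3-digit hex
console.log(re_rgb.test('rgb(220, 20, 60)')); // true
console.log(re_rgb.test('rgba(255, 245, 238, 0.5)')); // true
console.log(re_hex.test('blue')); // false: keywords need the map lookup instead
```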
Then we handle the three types of values we've seen we might get by reading the computed styles:
if(c in CMAP) return CMAP[c]; // keyword lookup, return rgb

if([4, 7].indexOf(c.length) !== -1 && re_hex.test(c)) {
  c = c.match(re_hex).slice(1); // remove the '#'
  if(c[0].length === 1) c = c.map(x => x + x);
  // go from 3-digit form to 6-digit one
  c.push(1); // add an alpha of 1

  // return decimal valued RGBA array
  return c.map(x => parseInt(x, 16))
}

if(re_rgb.test(c)) {
  // extract values
  c = c.replace(/rgba?\(/, '').replace(')', '').split(',').map(x => +x.trim());
  if(c.length === 3) c.push(1); // if no alpha specified, use 1

  return c // return RGBA array
}
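Putting the branches together, a self-contained sketch of getRGBA() could look like this (CMAP is trimmed to two entries for brevity, and the keyword branch here appends the alpha itself so that every branch returns four values):

```javascript
// minimal keyword map; the real one maps every CSS keyword to its RGB values
const CMAP = { orange: [255, 165, 0], crimson: [220, 20, 60] };

const re_hex = /^#([a-f\d]{1,2})([a-f\d]{1,2})([a-f\d]{1,2})$/i,
      re_rgb = /^rgba?\((\d{1,3},\s){2}\d{1,3}(,\s((0|1)?\.?\d*))?\)/;

function getRGBA(c) {
  if(c in CMAP) return CMAP[c].concat(1); // keyword: look up RGB, add alpha 1

  if([4, 7].indexOf(c.length) !== -1 && re_hex.test(c)) {
    c = c.match(re_hex).slice(1); // drop the '#', keep the three channel groups
    if(c[0].length === 1) c = c.map(x => x + x); // 3-digit → 6-digit form
    c = c.map(x => parseInt(x, 16)); // hex → decimal
    c.push(1); // add an alpha of 1
    return c;
  }

  if(re_rgb.test(c)) {
    c = c.replace(/rgba?\(/, '').replace(')', '').split(',').map(x => +x.trim());
    if(c.length === 3) c.push(1); // no alpha specified → 1
    return c;
  }
}

console.log(getRGBA('#8a2be2')); // [138, 43, 226, 1]
console.log(getRGBA('crimson')); // [220, 20, 60, 1]
console.log(getRGBA('rgba(255, 245, 238, .5)')); // [255, 245, 238, 0.5]
```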
Now after adding the keyword to RGBA map (CMAP) and the getRGBA() function, our JavaScript code doesn't change much from the previous examples:
const INI = getRGBA(S.getPropertyValue('--c-ini').trim()),
      FIN = getRGBA(S.getPropertyValue('--c-fin').trim()),
      RANGE = [],
      ALPHA = 1 - INI[3] || 1 - FIN[3];

/* same as before */

function update() {
  /* same as before */

  `rgb${ALPHA ? 'a' : ''}(
  ${INI.map((c, i) => Math.round(c + k*RANGE[i])).join(',')})`

  /* same as before */
}

(function init() {
  if(!ALPHA) INI.pop(); // get rid of alpha if always 1
  RANGE.splice(0, 0, ...INI.map((c, i) => FIN[i] - c));

  /* same as before */
})();
This gives us a linear gradient animation:
See the Pen by thebabydino (@thebabydino) on CodePen.
We can also use a different, non-linear timing function, for example one that allows for a bounce at the end:
const E = .8*Math.PI;

/* same as before */

function timing(k) {
  return Math.sin(k*E)/Math.sin(E)
}

function update() {
  /* same as before */

  `rgb${ALPHA ? 'a' : ''}(
  ${INI.map((c, i) => Math.round(c + timing(k)*RANGE[i])).join(',')})`

  /* same as before */
}

/* same as before */
This means we go all the way to a kind of blue before going back to our final violet:
See the Pen by thebabydino (@thebabydino) on CodePen.
Do note however that, in general, RGBA transitions are not the best place to illustrate bounces. That's because the RGB channels are strictly limited to the [0,255] range and the alpha channel is strictly limited to the [0,1] range. rgb(255, 0, 0) is as red as red gets; there's no redder red with a value over 255 for the first channel. A value of 0 for the alpha channel means completely transparent; there's no greater transparency with a negative value.
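If we did want a bouncy timing function on colors anyway, the interpolated channels would need clamping into their valid ranges first. A minimal sketch (the clamp() helper is our own addition here, not something from the original demos):

```javascript
// clamp a value into the [min, max] interval
const clamp = (x, min, max) => Math.max(min, Math.min(max, x));

// an eased progress value that overshoots 1 can push a channel out of range...
const ini = 200, range = 100, t = 1.4;
console.log(Math.round(ini + t*range)); // 340, not a valid RGB channel value

// ...so we'd clamp before writing out the color
console.log(clamp(Math.round(ini + t*range), 0, 255)); // 255
```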
By now, you're probably already bored with gradients, so let's switch to something else!
Smooth changing SVG attribute values
At this point, we cannot alter the geometry of SVG elements via CSS. We should be able to as per the SVG2 spec and Chrome does support some of this stuff, but what if we want to animate the geometry of SVG elements now, in a more cross-browser manner?
Well, you've probably guessed it, JavaScript to the rescue!
Growing a circle
Our first example is that of a circle whose radius goes from nothing (0) to a quarter of the minimum viewBox dimension. We keep the document structure simple, without any additional elements.
<svg viewBox='-100 -50 200 100'>
  <circle/>
</svg>
For the JavaScript part, the only notable difference from the previous demos is that we read the SVG viewBox dimensions in order to get the maximum radius and we now set the r attribute within the update() function, not a CSS variable (it would be immensely useful if CSS variables were allowed as values for such attributes, but, sadly, we don't live in an ideal world):
const _G = document.querySelector('svg'),
_C = document.querySelector('circle'),
VB = _G.getAttribute('viewBox').split(' '),
RMAX = .25*Math.min(...VB.slice(2)),
E = .8*Math.PI;

/* same as before */

function update() {
  /* same as before */

  _C.setAttribute('r', (timing(k)*RMAX).toFixed(2));

  /* same as before */
}

/* same as before */
Below, you can see the result when using a bounce-fin kind of timing function:
See the Pen by thebabydino (@thebabydino) on CodePen.
Pan and zoom map
Another SVG example is a smooth pan and zoom map demo. In this case, we take a map like those from amCharts, clean up the SVG and then create this effect by triggering a linear viewBox animation when pressing the +/ - keys (zoom) and the arrow keys (pan).
The first thing we do in the JavaScript is create a navigation map, where we take the key codes of interest and attach info about what we do when the corresponding keys are pressed (note that we need different key codes for + and - in Firefox for some reason).
const NAV_MAP = {
  187: { dir: 1, act: 'zoom', name: 'in' } /* + */,
  61: { dir: 1, act: 'zoom', name: 'in' } /* + Firefox ¯\_(ツ)_/¯ */,
  189: { dir: -1, act: 'zoom', name: 'out' } /* - */,
  173: { dir: -1, act: 'zoom', name: 'out' } /* - Firefox ¯\_(ツ)_/¯ */,
  37: { dir: -1, act: 'move', name: 'left', axis: 0 } /* ⇦ */,
  38: { dir: -1, act: 'move', name: 'up', axis: 1 } /* ⇧ */,
  39: { dir: 1, act: 'move', name: 'right', axis: 0 } /* ⇨ */,
  40: { dir: 1, act: 'move', name: 'down', axis: 1 } /* ⇩ */
};
When pressing the + key, what we want to do is zoom in. The action we perform is 'zoom' in the positive direction - we go 'in'. Similarly, when pressing the - key, the action is also 'zoom', but in the negative (-1) direction - we go 'out'.
When pressing the arrow keys, the action we perform is 'move' along one of the two axes: the x axis (the first axis, at index 0) for the left and right arrows, and the y axis (the second axis, at index 1) for the up and down arrows. The direction is negative (-1) for left and up and positive (1) for right and down.
We then get the SVG element, its initial viewBox, set the maximum zoom out level to these initial viewBox dimensions and set the smallest possible viewBox width to a much smaller value (let's say 8).
const _SVG = document.querySelector('svg'),
VB = _SVG.getAttribute('viewBox').split(' ').map(c => +c),
DMAX = VB.slice(2), WMIN = 8;
We also create an empty current navigation object to hold the current navigation action data and a target viewBox array to contain the final state we animate the viewBox to for the current animation.
let nav = {}, tg = Array(4);
On 'keyup', if we don't have any animation running already and the key that was pressed is one of interest, we get the current navigation object from the navigation map we created at the beginning. After this, we handle the two action cases ('zoom'/ 'move') and call the update() function:
addEventListener('keyup', e => {
  if(!rID && e.keyCode in NAV_MAP) {
    nav = NAV_MAP[e.keyCode];

    if(nav.act === 'zoom') {
      /* what we do if the action is 'zoom' */
    }
    else if(nav.act === 'move') {
      /* what we do if the action is 'move' */
    }

    update()
  }
}, false);
Now let's see what we do if we zoom. First off - and this is a very useful programming tactic in general, not just here - we get the edge cases that make us exit the function out of the way.
So what are our edge cases here?
The first one is when we want to zoom out (a zoom in the negative direction) when our whole map is already in sight (the current viewBox dimensions are bigger or equal to the maximum ones). In our case, this should happen if we want to zoom out at the very beginning because we start with the whole map in sight.
The second edge case is when we hit the other limit - we want to zoom in, but we're at the maximum detail level (the current viewBox dimensions are smaller or equal to the minimum ones).
Putting the above into JavaScript code, we have:
if(nav.act === 'zoom') {
  if((nav.dir === -1 && VB[2] >= DMAX[0]) ||
     (nav.dir === 1 && VB[2] <= WMIN)) {
    console.log(`cannot ${nav.act} ${nav.name} more`);
    return
  }

  /* main case */
}
Now that we've handled the edge cases, let's move on to the main case. Here, we set the target viewBox values. We use a 2x zoom on each step, meaning that when we zoom in, the target viewBox dimensions are half the ones at the start of the current zoom action, and when we zoom out they're double. The target offsets are half the difference between the maximum viewBox dimensions and the target ones.
if(nav.act === 'zoom') {
  /* edge cases */

  for(let i = 0; i < 2; i++) {
    tg[i + 2] = VB[i + 2]/Math.pow(2, nav.dir);
    tg[i] = .5*(DMAX[i] - tg[i + 2]);
  }
}
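For instance, zooming in one step from a fully zoomed-out 800x600 viewBox (made-up dimensions, just to trace the math):

```javascript
const DMAX = [800, 600]; // maximum (initial) viewBox dimensions
let VB = [0, 0, 800, 600], tg = Array(4);
const nav = { dir: 1, act: 'zoom', name: 'in' }; // pressing '+'

for(let i = 0; i < 2; i++) {
  tg[i + 2] = VB[i + 2]/Math.pow(2, nav.dir); // halve the dimensions
  tg[i] = .5*(DMAX[i] - tg[i + 2]); // center the new viewBox
}

console.log(tg); // [200, 150, 400, 300]
```

The target viewBox is half the size and sits centered inside the full map.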
Next, let's see what we do if we want to move instead of zooming.
In a similar fashion, we get the edge cases that make us exit the function out of the way first. Here, these happen when we're at an edge of the map and we want to keep going in that direction (whatever the direction might be). Since originally the top left corner of our viewBox is at 0,0, this means we cannot go below 0 or above the maximum viewBox size minus the current one. Note that given we're initially fully zoomed out, this also means we cannot move in any direction until we zoom in.
else if(nav.act === 'move') {
  if((nav.dir === -1 && VB[nav.axis] <= 0) ||
     (nav.dir === 1 && VB[nav.axis] >= DMAX[nav.axis] - VB[2 + nav.axis])) {
    console.log(`at the edge, cannot go ${nav.name}`);
    return
  }

  /* main case */
}
For the main case, we move in the desired direction by half the viewBox size along that axis:
else if(nav.act === 'move') {
  /* edge cases */

  tg[nav.axis] = VB[nav.axis] + .5*nav.dir*VB[2 + nav.axis]
}
Now let's see what we need to do inside the update() function. This is going to be pretty similar to previous demos, except now we need to handle the 'move' and 'zoom' cases separately. We also create an array to store the current viewBox data in (cvb):
function update() {
  let k = ++f/NF, j = 1 - k, cvb = VB.slice();

  if(nav.act === 'zoom') {
    /* what we do if the action is zoom */
  }

  if(nav.act === 'move') {
    /* what we do if the action is move */
  }

  _SVG.setAttribute('viewBox', cvb.join(' '));

  if(!(f%NF)) {
    f = 0;
    VB.splice(0, 4, ...cvb);
    nav = {};
    tg = Array(4);
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
}
In the 'zoom' case, we need to recompute all viewBox values. We do this with linear interpolation between the values at the start of the animation and the target values we've previously computed:
if(nav.act === 'zoom') {
  for(let i = 0; i < 4; i++)
    cvb[i] = j*VB[i] + k*tg[i];
}
In the 'move' case, we only need to recompute one viewBox value - the offset for the axis we move along:
if(nav.act === 'move')
cvb[nav.axis] = j*VB[nav.axis] + k*tg[nav.axis];
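Both branches are plain linear interpolation, so at k = 0 we're at the start value and at k = 1 we've reached the target:

```javascript
// linear interpolation between a start value a and a target value b
const lerp = (a, b, k) => (1 - k)*a + k*b;

console.log(lerp(100, 400, 0)); // 100, the start
console.log(lerp(100, 400, 1)); // 400, the target
console.log(lerp(100, 400, .5)); // 250, halfway through
```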
And that's it! We now have a working pan and zoom demo with smooth linear transitions in between states:
See the Pen by thebabydino (@thebabydino) on CodePen.
From sad square to happy circle
Another example would be morphing a sad square SVG into a happy circle. We create an SVG with a square viewBox whose 0,0 point is right in the middle. Centered on the origin of the SVG system of coordinates, we have a square (a rect element) covering 80% of the SVG. This is our face. We create the eyes with an ellipse and a copy of it, symmetrical with respect to the vertical axis. The mouth is a cubic Bézier curve created with a path element.
- var vb_d = 500, vb_o = -.5*vb_d;
- var fd = .8*vb_d, fr = .5*fd;

svg(viewBox=[vb_o, vb_o, vb_d, vb_d].join(' '))
  rect(x=-fr y=-fr width=fd height=fd)
  ellipse#eye(cx=.35*fr cy=-.25*fr
              rx=.1*fr ry=.15*fr)
  use(xlink:href='#eye'
      transform='scale(-1 1)')
  path(d=`M${-.35*fr} ${.35*fr}
          C${-.21*fr} ${.13*fr}
           ${+.21*fr} ${.13*fr}
           ${+.35*fr} ${.35*fr}`)
In the JavaScript, we get the face and the mouth elements. We read the face width, which is equal to the height and we use it to compute the maximum corner rounding. This is the value for which we get a circle and is equal to half the square edge. We also get the mouth path data, from where we extract the initial y coordinate of the control points and compute the final y coordinate of the same control points.
const _FACE = document.querySelector('rect'),
      _MOUTH = document.querySelector('path'),
      RMAX = .5*_FACE.getAttribute('width'),
      DATA = _MOUTH.getAttribute('d').slice(1)
                   .replace('C', '').split(/\s+/)
                   .map(c => +c),
      CPY_INI = DATA[3],
      CPY_RANGE = 2*(DATA[1] - DATA[3]);
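Tracing that extraction on a concrete path string (these numbers assume vb_d = 500, which makes fr = 200, so the control points start at .13·fr = 26):

```javascript
// the mouth path as it would be generated for fr = 200
const d = 'M-70 70 C-42 26 42 26 70 70';

const DATA = d.slice(1) // drop the 'M'
  .replace('C', '').split(/\s+/)
  .map(c => +c);

console.log(DATA); // [-70, 70, -42, 26, 42, 26, 70, 70]

const CPY_INI = DATA[3], // initial control point y: 26
      CPY_RANGE = 2*(DATA[1] - DATA[3]); // 2*(70 - 26)

console.log(CPY_RANGE); // 88: the final control point y is 26 + 88 = 114
```

Since the y axis points down in SVG, moving the control points from y = 26 (above the endpoints) to y = 114 (below them) is exactly what flips the frown into a smile.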
The rest is very similar to all other transition on click demos so far, with just a few minor differences (note that we use an ease-out kind of timing function):
/* same as before */

function timing(k) { return 1 - Math.pow(1 - k, 2) };

function update() {
  f += dir;

  let k = f/NF, cpy = CPY_INI + timing(k)*CPY_RANGE;

  _FACE.setAttribute('rx', (timing(k)*RMAX).toFixed(2));
  _MOUTH.setAttribute('d', `M${DATA.slice(0, 2)}
  C${DATA[2]} ${cpy} ${DATA[4]} ${cpy} ${DATA.slice(-2)}`);

  /* same as before */
}

/* same as before */
And so we have our silly result:
See the Pen by thebabydino (@thebabydino) on CodePen.

Emulating CSS Timing Functions with JavaScript is a post from CSS-Tricks
Source: CssTricks

Don't blame open-source software for poor security practices

Last week, Equifax, one of the largest American credit agencies, was hit by a cyberattack that may have compromised the personal data of nearly 143 million people, including names, addresses, social security numbers, birthdates and more. The stolen information reveals everything required to steal someone's identity or to take out a loan in someone else's name. Considering that the current US population is 321 million, this cyberattack is now considered to be one of the largest and most intrusive breaches in US history.
It's Equifax that is to blame, not open-source
A security breach of this scale warrants serious concern. As Equifax began to examine how the breach occurred, many unsubstantiated reports and theories surfaced in an attempt to pinpoint the vulnerability. One such theory targeted Apache Struts as the software responsible for the breach. Because Apache Struts is an open-source framework used for developing Java applications, this resulted in some unwarranted open-source shaming.
Yesterday, Equifax confirmed that the security breach was due to an Apache Struts vulnerability. However, here is what is important: it wasn't because Apache Struts is open-source or because open-source is less secure. Equifax was hacked because the firm failed to patch a well-known Apache Struts flaw that had been disclosed months earlier in March. Running an old, insecure version of software - open-source or proprietary - can and will jeopardize the security of any site. It's Equifax that is to blame, not open-source.
The importance of keeping software up-to-date
The Equifax breach is a good reminder of why organizations need to remain vigilant about properly maintaining and updating their software, especially when security vulnerabilities have been disclosed. In an ideal world, software would update itself the moment a security patch is released. WordPress, for example, offers automatic updates in an effort to promote better security, and to streamline the update experience overall. It would be interesting to consider automatic security updates for DrupalCoin Blockchain (just for patch releases, not for minor or major releases).
In the absence of automatic updates, I would encourage users to work with PaaS companies that keep not only your infrastructure secure, but also your DrupalCoin Blockchain application code. Too many organizations underestimate the effort and expertise it takes to do this themselves.
At Acquia, we provide customers with automatic security patching of both the infrastructure and DrupalCoin Blockchain code. We monitor our customers' sites for intrusion attempts, DDoS attacks, and other suspicious activity. If you prefer to do the security patching yourself, we offer continuous integration and continuous delivery tools that enable you to get security patches into production in minutes rather than weeks or months. We take pride in assisting our customers to keep their sites current with the latest patches and upgrades; it's good for our customers and helps dispel the myth that open-source software is more susceptible to security breaches.
Source: Dries Buytaert

On Achieving Sustainable Income

I’m experimenting with Patreon (and it’s pronounced “patron” for those that didn’t know…) for my brother’s growing cryptocurrency community and I find their model (and their tooling) to be very good.
I also particularly like their mission which is simple, digestible, and easy to understand (but also really exciting and measurable):

Help every creator in the world achieve sustainable income.
This is exactly what we’re trying to do and it’s exactly why we decided to investigate the platform and system to see if what they had created was worth the time and investment to put together.
The world is changing and the opportunity for independent creators to live sustainably is bigger and more possible than it has ever been. Technology like Patreon appears to be leading the way and I’m excited to dive into the system and put it to good use (also their API as well…).
There’s something inside all of us that wants to support the independent creator because I think it’s a reflection of our own identity as well.
You see, I believe that we are all creators in our own right but not all of us feel the need or desire to find community-based financing for our creations. For those that do, well, you have things like Patreon.
I remember trying to put together financial resources when I looked at going “pro” with blogging (i.e. becoming a professional blogger and living solely on my writing).
I was successful in many ways, but managing the finances and the different ways to build income was such a hassle. Patreon didn't exist back then (they were founded in 2013), but I had need of it when I first started putting things together in 2011.
And so now I wouldn't even consider trying to put together a hodge-podge of managed solutions… I'd probably just use Patreon. At least until it felt like it didn't scale (although I'm not sure how it wouldn't…).
These are exciting times you know. It’s far easier to earn an income while doing the very things that you enjoy than ever before. And as our world becomes even more inter-connected and diverse the potential becomes even greater.
The post On Achieving Sustainable Income appeared first on John Saddington.

6 Reasons Why Web Design Is Not A Dying Profession

In recent years, there have been many concerns about web design as a profession. People are worried that the major advancements in technology, especially artificial intelligence, are a threat to this field. However, a group of people still argues that there's hope. So, we'll be taking a look at reasons why web design is not a dying profession.

Why Is It Seen As A Dying Profession?
Before we get started on the reasons why web design isn’t a dying profession, let’s see why people think it is. In this article, the term web design will be used to refer to both design and front end development. This involves the process of designing the website and coding the design.
If you've been using the internet, then you've definitely noticed numerous adverts for both free and paid website builders. These types of websites allow users to create fully functional websites without knowing a single line of HTML, CSS, JavaScript, PHP or any other code. This is done by using a pre-coded interface and adding components from a menu or using drag and drop. There are also many platforms and apps that make it easy to design websites. With all this technology and more on the way, it's easy to think that web design is on the brink of extinction.
6 Reasons Why Web Designing Is Not A Dying Profession
1. Human Touch
Human-made websites will always beat automatically generated ones. When a real person works on a website, they are able to incorporate exactly what the owner wants. The final website captures the feelings and identity of a brand and also carries out all required functions. A web designer can carry out specialised branding for each client. At the same time, a human can design pages that advance a brand. Branding is an important aspect of web design. On the other hand, generated websites are merely functional. They also have a threshold that can't be exceeded. Such websites are rigid and may contain more or fewer features than required. In addition, generated or "build-it-yourself" websites aren't unique, since one design is used by different people and brands at any given time. With this in mind, it comes as no surprise that a large number of individuals and businesses will still choose to hire a web designer to create a website that meets their demands.
2. Website Security
Security is important on every website. Its absence is detrimental to an individual, company or brand. Website security breaches lead to the leaking of crucial personal and business information. This can ruin the reputation of the owner and even lead to massive losses for businesses. Websites that collect and store user information are at great risk of such attacks. To avoid these scenarios, websites have to be regularly updated to make sure that their security is up to date. This way, hackers won't be able to easily find vulnerabilities. That's why it is important to hire a competent web designer to maintain a website and patch up any problem areas.
3. Web Design Knowledge
The tools that are available for generating websites are useful only to a certain extent. Web designers have extensive knowledge on how to code different components and functionalities in a website. They also know how to rectify problematic areas and make updates. In most cases, even people who use the “build-it-yourself” websites end up consulting a professional web designer. They do this when they get stuck or are not sure what to do next. At times, they even have their websites redesigned or built from scratch by a web designer. In other cases, owners build the sites on their own but require professional help when the site expands. Even web designers hire fellow web designers to assist them with specific tasks on their personal websites. This shows that web design knowledge is important and is still viable.
4. Quality
There are many websites that are currently using WordPress, DrupalCoin Blockchain, Joomla and other similar platforms especially for blogging. They all offer standard designs and different themes for users to choose from. Despite the fact that they offer themes and offer a lot of instructions to help users, getting higher quality is not easy. It requires the intervention of web designers to make these platforms have great quality. As much as these platforms exist, they won’t eliminate the need for web designers. A professional is still needed to work on the finer details in set-up, maintenance and design to ensure high quality.
5. Web Design Is An Art
Art has been in existence for thousands of years. We can't say that art is dead; it has simply evolved. The same is true of web design. Since it's an art, it evolves with time. To keep up, web designers need the yearning to learn new things and adapt to changes. New tools, platforms, and approaches to web design come up daily. Web designers are not about to lose their jobs soon; they simply need to evolve together with the industry.
6. An Expanding Market
Website designers are in demand now more than ever. If you look at the careers page on most company websites, there's a high chance you'll find a vacancy for a web designer or front-end developer. Having an online presence is important for businesses, hence the growing demand for website designers. In addition to company websites, HTML5 and JavaScript are being used widely to develop web apps as well as mobile apps. This has opened up a whole new field for web designers. Most major brands and corporations rely on their websites to create exposure and boost business. Because of this, they always hire web designers to take care of the online face of their businesses.
All in all, although many website builders and CMS providers seem to have advanced capabilities, they still aren't capable of carrying out more complex tasks. There will always be a need for complex solutions in website design that only web designers can deliver. This is why web design is not a dying profession. It might be changing, but it surely isn't dying.
The post 6 Reasons Why Web Design Is Not A Dying Profession appeared first on Web Designer Hub.

So you need a CSS utility library?

Let's define a CSS utility library as a stylesheet with many classes available to do small little one-off things. Like classes to adjust margin or padding. Classes to set colors. Classes to set specific layout properties. Classes for sizing. Utility libraries may approach these things in different ways, but seem to share that idea. Which, in essence, brings styling to the HTML level rather than the CSS level. The stylesheet becomes a dev dependency that you don't really touch.
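To make that concrete, here's a minimal sketch of what a utility stylesheet might contain (the class names and values here are illustrative, not taken from any particular library):

```css
/* Small, single-purpose classes; each one does exactly one thing. */
.mt-1 { margin-top: 0.25rem; }
.mt-2 { margin-top: 0.5rem; }
.p-2 { padding: 0.5rem; }
.text-center { text-align: center; }
.color-primary { color: #0280ae; }
.flex { display: flex; }
```

The markup then composes them, e.g. `<div class="flex p-2 text-center">`, and the stylesheet itself rarely changes.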

Using ONLY a utility library vs. sprinkling in utilities
One of the ways you can use a utility library like the ones that follow is as an add-on to whatever else you're doing with CSS. These projects tend to have different philosophies, and perhaps don't always encourage that, but of course, you can do whatever you want. You could call that sprinkling in a utility library, and you might end up with HTML like:
<div class="module padding-2">
<h2 class="section-header color-primary">Tweener :(</h2>
Forgive a little opinion-having here, but to me, this seems like something that will feel good in the moment and then be regrettable later. Instead of having all styling done by your own named classes, styling information is now scattered: some applied directly in the HTML via utility classes, and some applied through your own naming conventions and CSS.
The other option is to go all-in on a utility library, so that you've moved all styling information away from CSS and into HTML entirely. It's not a scattered system anymore.
I can't tell you if you'll love working with an all-in utility library approach like this or not, but long-term, I imagine you'll be happier picking either all-in or not-at-all than a tweener approach.
This is one of the definitions of Atomic CSS
You can read about that here. You could call using a utility library to do all your styling a form of "static" atomic CSS. That's different from a "programmatic" version, where you'd process markup like this:
<div class="Bd Bgc(#0280ae):h C(#0280ae) C(#fff):h P(20px)">
Lorem ipsum
And out would come CSS that accommodates that.
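For a rough idea of what "comes out": a processor in the style of Atomizer parses those class names and generates one rule per class, escaping the parentheses so the selectors are valid CSS. This is a hedged sketch of that output, not any tool's exact result:

```css
/* Approximate rules generated from
   class="Bd Bgc(#0280ae):h C(#0280ae) C(#fff):h P(20px)" */
.Bd { border-width: 1px; border-style: solid; }
.Bgc\(\#0280ae\)\:h:hover { background-color: #0280ae; }
.C\(\#0280ae\) { color: #0280ae; }
.C\(\#fff\)\:h:hover { color: #fff; }
.P\(20px\) { padding: 20px; }
```

Only the classes actually used in the markup get generated, which keeps the resulting stylesheet small.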
Utility Libraries
Lemme just list a bunch of them that I've come across, pick out some quotes of what they have to say about themselves, and a code sample.

Shed.css came about after I got tired of writing CSS. All of the CSS in the world has already been written, and there's no need to rewrite it in every one of our projects.
Goal: To eliminate distraction for developers and designers by creating a set of options rather than encouraging bikeshedding, where shed gets its name.

Create fast loading, highly readable, and 100% responsive interfaces with as little CSS as possible.
<div class="mw9 center pa4 pt5-ns ph7-l">
<time class="f6 mb2 dib ttu tracked"><small>27 July, 2015</small></time>
<h3 class="f2 f1-m f-headline-l measure-narrow lh-title mv0">
<span class="bg-black-90 lh-copy white pa1 tracked-tight">
Too many tools and frameworks
<h4 class="f3 fw1 georgia i">The definitive guide to the JavaScript tooling landscape in 2015.</h4>

Using clear, humanized naming conventions, Basscss is quick to internalize and easy to reason about while speeding up development time with more scalable, more readable code.
<div class="flex flex-wrap items-center mt4">
<h1 class="m0">Basscss <span class="h5">v8.0.2</span></h1>
<p class="h3 mt1 mb1">Low-Level CSS Toolkit <span class="h6 bold caps">2.13 KB</span></p>
<div class="flex flex-wrap items-center mb2">

A CSS framework for people with better things to do
Beard's most popular and polarizing feature is its helper classes. Many people feel utility classes like the ones Beard generates for you lead to bloat and are just as bad as using inline styles. We've found that having a rich set of helper classes makes your projects easier to build, easier to reason about, and more bulletproof.
<div class="main-content md-ph6 pv3 md-pv6">
<h2 class="tcg50 ft10 fw3 mb2 md-mb3">Tools</h2>
<p class="tcg50 ft5 fw3 mb4 lh2">Beard isn't packed full of every feature you might need, but it does come with a small set of mixins to make life easier.</p>

<h3 class="tcg50 ft8 fw3 mb2 md-mb3">appearance()</h3>

Developed for design, turretcss is a styles and browser behaviour normalisation framework for rapid development of responsive and accessible websites.
<section class="background-primary padding-vertical-xl">
<div class="container">
<h1 class="display-title color-white">Elements</h1>
<p class="lead color-white max-width-s">A guide to the use of HTML elements and turretcss's default styling definitions including buttons, figure, media, nav, and tables.</p>
Expressive CSS

Classes are for visual styling. Tags are for semantics.
Start from a good foundation of base html element styles.
Use utility classes for DRY CSS.
Class names should be understandable at a glance.
Responsive layout styling should be easy (fun even).

<section class="grid-12 pad-3-vert s-pad-0">
<div class="grid-12 pad-3-bottom">
<h3 class="h1 pad-3-vert text-light text-blue">Principles</h3>
<div class="grid-12 pad-3-bottom">
<h4 class="pad-1-bottom text-blue border-bottom marg-3-bottom">Do classes need to be ‘semantic’?</h4>
<p class="grid-12 text-center">
<span class="bgr-green text-white grid-3 s-grid-12 pad-2-vert pad-1-sides">Easy to understand</span>
<span class="grid-1 s-grid-12 pad-2-vert s-pad-1-vert pad-1-sides text-green">+</span>
<span class="bgr-green text-white grid-3 m-grid-4 s-grid-12 pad-2-vert pad-1-sides">Easy to add/remove</span>
<span class="grid-1 s-grid-12 pad-2-vert s-pad-1-vert pad-1-sides text-green">=</span>
<span class="bgr-green text-white grid-2 m-grid-3 s-grid-12 pad-2-vert pad-1-sides">Expressive</span>
Tailwind CSS

A Utility-First CSS Framework for Rapid UI Development
This thing doesn't even exist yet and they have more than 700 Twitter followers. That kind of thing convinces me there is a real desire for this stuff that shouldn't be ignored. We can get a peek at their promo site, though:
<div class="constrain-md md:constrain-lg mx-auto pt-24 pb-16 px-4">
<div class="text-center border-b mb-1 pb-20">
<div class="mb-8">
<div class="pill h-20 w-20 bg-light p-3 flex-center flex-inline shadow-2 mb-5">

Utility Libraries as Style Guides

As Marvel continues to grow, both as a product and a company, one challenge we are faced with is learning how to refine the Marvel brand identity and apply it cohesively to each of our products. We created this styleguide to act as a central location where we house a live inventory of UI components, brand guidelines, brand assets, code snippets, developer guidelines and more.
<div class="marginTopBottom-l textAlign-center breakPointM-marginTop-m breakPointM-textAlign-left breakPointS-marginTopBottom-xl">
<h2 class="fontSize-xxxl">Aspect Ratio</h2>

Solid is BuzzFeed's CSS style guide. Influenced by frameworks like Basscss, Solid uses immutable, atomic CSS classes to rapidly prototype and develop features, providing consistent styling options along with the flexibility to create new layouts and designs without the need to write additional CSS.
<div class="xs-col-12 sm-col-9 lg-col-10 sm-offset-3 lg-offset-2">
<div class="xs-col-11 xs-py3 xs-px1 xs-mx-auto xs-my2 md-my4 card">
<h1 class="xs-col-11 sm-col-10 xs-mx-auto xs-border-bottom xs-pb3 xs-mb4 sm-my4">WTF is Solid?</h1>
<div class="xs-col-11 sm-col-10 xs-mx-auto">
<section class="xs-mb6">
<h2 class="bold xs-mb2">What is Solid?</h2>
<section class="xs-mb6">
<h2 class="bold xs-mb2">Installation</h2>
<p class="xs-mb2">npm install --save bf-solid</p>
<section class="xs-mb6 xs-hide sm-block">
<h2 class="bold xs-mb2">Download</h2>
<a href="#" download="" class="button button--secondary xs-mr1 xs-mb1">Source Files</a>
This is separate-but-related to the idea of CSS-in-JS
The tide in JavaScript has headed strongly toward components. Combining HTML and JavaScript has felt good to a lot of folks, so it's not terribly surprising to see styling start to come along for the ride. And it's not entirely just for the sake of it. There are understandable arguments for it, including things like the global nature of CSS leading toward conflicts and unintended side effects. If you can style things in such a way that never happens (which doesn't mean you need to give up on CSS entirely), I admit I can see the appeal.
This idea of styling components at the JavaScript level does seem to largely negate the need for utility libraries. Probably largely a one or the other kind of thing.

So you need a CSS utility library? is a post from CSS-Tricks
Source: CssTricks

Only Human

I really enjoyed this video because of how much truth there is in it.
Feel free to ignore the title of the video and its general premise about seducing someone… instead, listen to some of the concepts around relationships and identity:

We are all human. And there is no one better than us and we are not better than anyone else. We are all equal in so many more ways than not.
The post Only Human appeared first on John Saddington.

Tips In Choosing Website Color Schemes (With BONUS Online Tools)

When you create a website, one of the first things that you need to focus on is web design. Aside from picking the right layout for your pages, your choice of color schemes can make or break the whole package.
Color scheme is about considering the interplay of colors in three major aspects: complementation, contrast, and vibrancy. Choosing the right colors is one of the most difficult phases of web design, and the process can be very challenging, especially for those who are new to the field. You should not worry too much, though, because there are online tools you can use to help you select the perfect color scheme for your page.
What is the importance of colors in websites?

Here are some of the reasons why choosing the right color schemes is extremely important:
1. Creates an emotional connection
Colors generally trigger moods or emotions in your target audience or market. Therefore, whatever color scheme you choose makes all the difference.
2. Sets your company’s direction
A good website design is grounded in the use of the right colors. For example, picking too many colors for your site might create the image that your company is too informal and might not be taken seriously by your target market. Meanwhile, using conventional color schemes might make your website as forgettable as the rest.
3. Establishes branding
Your website is an effective representation of your company, and the colors you choose create your company or brand identity. It goes without saying that websites play a major role in online marketing and branding, and colors bring life to whatever information you present.
When choosing a color scheme for your website, always keep in mind that the right one leads to a strong brand, especially when a certain color becomes associated with your company.
4. Creates a visual statement
Words are powerful, but colors make your catchy phrases livelier as they emphasize words and statements with the right tones and hues.
Colors are not only a requirement in web design. They also serve as the soul of your website: they create a mark in people's minds that will later set your company apart from all others offering similar services or products.
Colors create character and personality, two of the most important factors in branding. Web design therefore stresses the importance of choosing the right color schemes, especially given how competitive the game of online marketing is.
If you want to leave a mark on your market and audience, make sure that your goal is supported with meaningful colors that represent who your company is.
Factors in Choosing Website Colors

Selecting color schemes is not considered as one of the pillars of web design for nothing. Colors are carefully and intricately selected depending on the need, style, and image being conveyed.
Here are the common factors that website designers consider in choosing between color schemes:
Demographics and the product you are selling
The demographics of your target audience play a major role in determining what types of information you wish to convey through your color schemes.
Suppose your website sells organic products, and your target audience is people who are health conscious. The best color scheme for this product line revolves around green and other earth tones, conveying the message that you support your clients' advocacy or their goal of living a healthy lifestyle.
In the example above, it would be inappropriate to use shades of black and gray, or red and yellow, on this type of website. That would likely drive your target market away, and might even give them the impression that you do not understand the products you are selling in the first place.
It is always helpful to base your color scheme on the gender of your target market. If your company sells cosmetic products or clothes for women, there are certain colors and shades that will easily draw their attention to your website. Take the time and extra effort to research the colors that men and women dislike, because their initial impression matters a lot.
Kissmetrics released an infographic showing some of the color preferences by gender:

While blue is the most preferred color in both genders, men like bright colors while women opt for soft or pastel colors.
Men like black, white, and shades of gray more than women.
Men like brown the least, while women generally don’t prefer orange.

Age group
Similar to gender, age groups have varying tastes in their choice of color. There are studies showing that a person's taste in color changes with age. Website designers should pay attention to this and consider doing their research, especially if the websites they are designing cater to a specific age group.
How long the website will be used
Choosing from a wide array of color schemes should always take into account how long the website will be used. Will it be for a specific season, or for a long-term project? Seasonal usage will require color schemes that speak to the events being celebrated, say orange and black for a Halloween-themed page.
Your company profile
Your website is for the consumption of your target audience, so it is right to consider the background of your clients. However, you also need to consider the profile of your company. Once you understand your company's objectives through the products or services you sell, it becomes easier to see how you can build a connection with your market.
How to Choose the Right Color Schemes

After you have put into consideration the factors for the possible color schemes that you will be using, make sure that you create a shortlist of the ones that could possibly work for your company’s website.
Here are three things you should do to ease your selection process:
1. Decide what dominant color you will use.
Your dominant color is your company or brand color. This is what makes a mark on your clients. This is where the factors for choosing website colors (e.g. age group, gender, company profile) become crucial, because your dominant color makes your company stand out and creates the first impression.
In choosing your dominant color, you need to know that different colors and shades have their own meanings. Before choosing which one to use, make sure that you strategically pick the best one that will effectively represent your brand.
2. Pick the accent colors that will blend and go well with your dominant color.
Website design becomes equally exciting and challenging when you get to pick the colors that will go well with the dominant color you have chosen. Of course, it would be really dull to stick to just one color throughout. Accent colors solve that.
You may use your accent colors for your tabs, subheadings, or information boxes, depending on which ones you want to highlight. Using accent colors is a fun way of making your page livelier, but do not overdo it. Choose one or two accent colors per page to avoid confusion.
3. Choose a good background color.
One of the most challenging tasks in choosing the right color scheme for your website is picking the background color. Before choosing one, first establish the purpose of the website you are designing or developing; that makes it much easier to pick the best fit.
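The three roles above translate naturally into CSS custom properties, so the whole scheme lives in one place. A minimal sketch, with made-up colors standing in for your own palette:

```css
/* Hypothetical scheme: one dominant brand color, two accents, one background. */
:root {
  --color-dominant: #2a6f4e;    /* brand color, e.g. green for an organic-products site */
  --color-accent-1: #e8b13c;    /* tabs, subheadings, info boxes */
  --color-accent-2: #7c4d2b;
  --color-background: #f7f5ef;
}

body { background-color: var(--color-background); }
h1, .brand { color: var(--color-dominant); }
.tab, .info-box { background-color: var(--color-accent-1); }
```

Swapping the palette later then means changing four values rather than hunting colors across the stylesheet.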
Online Tools to Generate Color Schemes
There are a number of good online tools that you can use in selecting the right color schemes for web design. Here are some of the tools that are highly recommended:
Kuler
Also known as Adobe Color, Kuler is a reliable online tool that can help you decide on the color palette to use for your pages. It is Adobe's color theme application, and it allows users to sync the color palettes they have created with other Adobe applications like Photoshop and Illustrator.
If you want an app with advanced but user-friendly features and interface, then Kuler is for you. However, this app can only be enjoyed by iOS users.
Color Rotate
Color Rotate is similar to Kuler, except that it looks like a 3D version of it. The way humans interpret colors is complex, and Color Rotate puts your choice of colors in 3D space to represent how human minds perceive color combinations. This helps website designers find the right color scheme for their target market by showing how colors mix and match in a three-dimensional perspective.
Instant Color Schemes
There are times when it becomes difficult to find the right hues or tones for a single image you have in mind. If you run into that while designing your website, this app can help.
Instant Color Schemes lets the user type keywords and instantly suggests the top colors commonly associated with them.
Color Explorer
As one of the most commonly featured apps online, Color Explorer offers advanced features that allow the user to try different color palettes in convenient ways. The app also allows the user to search for color schemes that can be directly used or edited based on the needs of the website.
If you already have an advanced knowledge in mixing and matching colors in web design, then this app is highly recommended for you.

Final Word
While there are a lot of online tools that can be used in selecting the perfect color schemes for websites, the number one rule is to know the taste of your target audience to make your strategy work.
Pay attention to details like how your company profile and your target market can relate through catchy phrases, icons, and colors. Keep in mind that although your website is just part of the equation to make your product or service memorable, it pays to maximize its use.
Before you settle on your final colors, try the online tools mentioned above and similar ones you can access online.
The post Tips In Choosing Website Color Schemes (With BONUS Online Tools) appeared first on Web Designer Hub.

The Reason You Need a Personal Mission

It’s just a simple fact: Most mergers and acquisitions fail.
70-90% fail, based on studies from Harvard, and even when they don't outright fail, there's still a 60%+ chance that the M&A will actually hurt shareholder value and possibly threaten the acquiring company's very existence.

But, let’s go back to this incredible number and figure for a moment, shall we? 9 out of 10 acquisitions will outright fail. That’s a seriously difficult number to swallow.
Yet despite this crazy figure people and organizations still try it and they believe that they will be different, that they will be the 1 out of 10. Why? Because if you can pull it off then the result can be quite astounding… astronomical, really.
But again, 1 out of 10 folks… 1 out of freakin’ 10…
I recently got to experience this statistic when I learned that my previous startup venture would be shutting down. It's been a little more than 2 years since it was acquired (and since I left) and a lot has happened.
But clearly not enough for the lights to continue to shine and, to make an incredibly long story very, very short, I was told by my friend and cofounder that they would be shutting the company down in a few months' time.
They then shared something public from the blog:
via TIY Blog
I’m going to skip any and all commentary on what I think about the board’s decision and let folks know that I do not necessarily have much more information than what I’m sure the leadership team has shared with the larger team. So getting that out of the way…
… I’ve spent the last week thinking about the company that I built and the people that I hired and the great times that we had putting it all together. I can distinctly remember the first folks that walked through the doors for many of the campuses that we launched and the first round of graduates that would get their certificates of completion.
But, perhaps most poignantly I remember the emails, texts, tweets, and phone calls from our students who would joyously share their offer letters for their first full-time roles as software programmers. I have kept every single one of those communiqués and have spent the past week reading through many of them.
I also captured a few candid thoughts on my vlog the other day – my thoughts are not complete and I can't say that I'm doing my own feelings much justice, but I wanted to capture a little bit of it on film.

I’m sad. I’m upset. I was a bit angry. But now I’m just hopeful. I’m grateful. I’m glad for the opportunity and it reminds me that every project that I work on isn’t forever but just a season of my life where I have the privilege and honor to invest all that I am into it.
Take the time. Listen. Understand. Image via BOSSFIGHT.
It makes me incredibly grateful for the work that I’m doing now, the team that I get to hang with, and the problems that we’re trying to solve because, again, it won’t be forever… it’ll just be for a season and a time where we can do our best work solving problems that we believe need to be solved.
And then, in time, we'll move on to another problem to solve… and then another… and another. Some of these seasons will last many years and some will be shorter. It almost doesn't matter how long they are, but rather how effective we are in the time that we've been given.
Also, one of the things I've learned from being an entrepreneur is that my own personal mission is just that: it's my own personal mission, and its alignment with my startups and my projects is what I'm aiming for when I say "Yes" to a new project.
The great thing is this though: My personal mission doesn’t stop when the company or project ends. It continues in perpetuity.
What is my personal mission you ask? Great question. My mission is to help other people and to create as much value as I possibly can with the very limited amount of time that I have left on earth.
Even more simply put it’s this: To leave the world in a better place than when I entered it, full-stop. This applies to everything that I do, wherever I am, and to whomever I work with.
If this looks like building a code school to teach software engineering, then great. If it looks like building a consumer app on a million mobile devices, then… great. If it looks like building a B2B enterprise SaaS product to help companies fulfill their greater purpose and mission… then great. Practically speaking, the specifics don't matter nearly as much as I once believed.
The reason that you and I must have our own personal mission is self-evident and obvious – if our identity and mission are married to the project, then our identity dies when the project dies.
But if we can have an independent mission outside of any project then we can mourn and grieve a project’s end as objectively as possible. And, we’ll be able to move on to our next great work in due time.
The world is waiting for your next big and important work. Image via BOSSFIGHT.
The importance of this cannot be overstated, especially in regards to my own personal mission. You see, I will and should grieve and mourn and walk through the 5 stages of grief – this is a healthy and appropriate response.
But we want to move through the process in a timely and healthy fashion. We do not need to rush the process, of course, but we need to be dedicated to it as the goal is to walk through and beyond the 5 stages so we can get back to doing our good work.
You see, the world is waiting for you and I to continue to build, to create, to create amazing value for it. The world is waiting to become a much better place.
That’s why I will grieve. I will reflect. I will write it out and dialogue about it and I will weep and mourn. I will also be overcome with gratitude and thanksgiving. I will also rejoice in the time spent.
And then I will get back to work.
The post The Reason You Need a Personal Mission appeared first on John Saddington.

Why we dismiss negative feedback

Three fallacies get in the way of hearing what we need to hear. Here's how, by recognizing them, we can overcome them.
My stomach dropped. My face flushed.
I thought to myself: "No way that's true!" and "No way that's me…"
Those were my physical and mental reactions when an acquaintance gave me some feedback a few years ago. (She told me I had "come across as fake" to her… Ouch!)
My first instinct was to completely dismiss her feedback. Now looking back, I wonder… why? Why was my first instinct to push this feedback away? Why was I so quick to say it wasn't true or that it didn't matter?
Simply put: we hate criticism. Anything negative, anything critical, we fear it. We resist, push back, and build a wall around ourselves.
In fact, as humans, our brains are hardwired to resist negative feedback. Research shows that our brains hold onto negative memories longer than positive ones, so the negative stuff always hurts more. We're more upset about losing $50 than we are happy about gaining $50. It's the same with feedback: when we hear something negative, it sticks with us more than when someone tells us something positive about ourselves.
Our distaste for negative feedback is so strong that further research shows we drop people in our network who tell us things we don't want to hear. In a recent study of 300 full-time employees, researchers found that people moved away from colleagues who provided negative feedback and instead sought out interactions with people who only affirmed their positive qualities.
Fascinating, right? In other words, whether or not we intend to, we insulate ourselves from any potentially negative self-image.
To be honest, it sounds like quite a self-absorbed way to live: to seek out only those who tell you what you want to hear, and to never have the humility to want to learn, adjust, improve, and become better.
How did we get like this? Some psychologists suggest that we associate negative feedback with criticism received in school or from our parents growing up, and that's what prevents us from hearing it.
Personally, I've found three fallacies in my own head that get in the way of being receptive to negative feedback:
1. I'm a perfectionist. I expect myself to be good at everything. So when I hear negative feedback about myself, it conflicts with what I think is true… and it makes me push the feedback away.
2. I don't trust the other person. I'm skeptical of the person who gave me the feedback. What was her intention? Does she really have the full story? Perhaps she just misinterpreted things? So I disregard the feedback as a result.
3. I conflate behavior with identity. I interpret the feedback as an assessment of my self-worth. "If I'm seen as fake by someone, that must mean I'm a bad person." It's hurtful to think about this, so I choose to ignore the feedback.
These knee-jerk reactions are the foundation of the wall I start to build around me when I hear negative feedback. To knock down this wall, and make sure my mind and heart are open to receiving criticism, I keep these three fallacies in mind. When someone gives me negative feedback, I ask myself:
1. Am I being a perfectionist? Are my perfectionist tendencies getting in the way of hearing something worth learning from this feedback?
2. Am I distrustful of the other person? Am I resisting this feedback simply because of my relationship with this person, or what I perceive her or his intentions to be?
3. Am I conflating behavior with identity? Am I shutting out this feedback because I'm projecting it onto my sense of self-worth?
Take a moment to sit with these questions. They may uncover why you tend to isolate yourself from feedback.
Understanding why you dismiss feedback is the first step to making sure you're hearing all of it. After all, you don't want to get caught inadvertently pushing away those who tell you the truth, creating a circle of yes-people who tell you only what you want to hear. Know why you dismiss feedback, first.
Next week, I'll share five specific strategies for receiving feedback well, as part of the series I write for our Knowledge Center.
Why we dismiss negative feedback was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.

Source: 37signals

Delete Google Plus

I’ve been on a really good “cleanse” recently, removing a ton of distraction from my life and even dropping services such as Twitter entirely.
The results have been great and the free space that I’ve had psychologically is too good not to repeat. I mean, if you find a great habit to kick then why not rinse and repeat?

There are two services that I still have large and “active” accounts with that definitely have a countdown clock: the first is Google+ and the second is LinkedIn.
I do not actively engage in those services other than dropping links to blog posts. I don’t have a “community” there and the value, as far as I can tell, is superficial.
With Google+ I’ve got > 4,200 friends in 3 circles…. wait… no… I have 7,000+ followers…?
Google+ Followers…?
I’m so confused. Wah…
I’m not entirely sure how I acquired these friends, nor do I know who they even are. I recently helped my dad delete his Google+ profile, and walking him through the process reminded me how easily I could do this without losing any amount of sleep.
Also, he and I do not know how he even got a Google+ profile… I think many people believe that this profile is the same thing as your Google Account, and it’s actually not – it’s just a social networking identity attached to a mediocre social network. Unsure? Check this:
Deleting your Google+ profile will not affect certain other Google products, like Search, Gmail, and your Google Account.
So, with that, I’ll be deleting the Google+ profile – there may be one or two posts that I wrote directly on that service that don’t sit anywhere else, so I just want to copy and paste those over, but only if it doesn’t take me too long to find them.
Goodbye Google+… now, and forever
The post Delete Google Plus appeared first on John Saddington.

Beyond Code - Contributing to Community Spirit

How the DrupalCoin BlockchainCon Prenote helps us laugh, learn, and be a community. Four other Acquians joined me on stage in the DrupalCoin BlockchainCon Baltimore Prenote, helping spread a little joy and silliness. DrupalCoin Blockchain gets better when companies, organizations, and individuals build or fix something they need and then share it with the rest of us. Our community becomes better, stronger, and smarter when others take it upon themselves to make a positive difference contributing their knowledge, time, and energy to DrupalCoin Blockchain. Acquia is proud to play a part, alongside thousands of others, in making tomorrow’s DrupalCoin Blockchain better than today’s.

“Welcome to the Prenote! Welcome to DrupalCoin BlockchainCon! Let’s celebrate this thing we love!”

The DrupalCoin BlockchainCon Prenote

Baltimore marked the 14th (!) time the DrupalCoin Blockchain Association invited me to open DrupalCoin BlockchainCon. I was proud to put on a fresh new DrupalCoin BlockchainCon Prenote, “Balti-more Prenote, the Balti-most fun at DrupalCoin BlockchainCon!”, bright and early on Tuesday morning with 14 other crazy, positive, fun DrupalCoin Blockchainists. Campbell Vertesi and I are the current maintainers of this odd little ongoing DrupalCoin Blockchain project. The first proto-Prenote went live, just like the DrupalCoin Blockchain 8 code branch, at DrupalCoin BlockchainCon Chicago with a cast of two--Robert Douglass and me--in 2011. It seems to have become an institution in the meantime. Our story this time around was an homage to one of the DrupalCoin Blockchain institutions Cam and I love the most: DrupalCoin BlockchainCon itself, through a newbie’s eyes ... and a lot of silly fun.

If you’ve never experienced one: every Prenote is a new creation, with a new concept, new script, new story, and new song parodies each time. We like to underscore community themes, provide useful information, talk about open source values, and help attendees kick off DrupalCoin BlockchainCon on the right foot. Campbell and I thought long and hard about how (and if at all) to do the Prenote this time around in the context of the challenges the community has been facing recently. We emphatically decided to go ahead with it. As Campbell put it in his blog, “the Prenote exists to remind us of why we should keep going. The DrupalCoin Blockchain community ... the agglomeration of people, practices, code, and rules – has a lot that’s worth fighting for. The Prenote is about why we are here, why we’ve stayed here all these years. Because it’s fun, because it’s supportive, because we love it.” Want to Prenote with us? Sign up here to be part of the greatest Tuesday morning at DrupalCoin BlockchainCon!

Check out @hcdelp's moves! #Prenote #DrupalCoin BlockchainCon
— Brick Factory (@BrickFactory) April 25, 2017

Healing in the air in Baltimore

With tensions running high in the DrupalCoin Blockchain community of late, many of us--me included--were worried how coming together in Baltimore might pan out. Though it is important, I’ve long claimed that DrupalCoin Blockchain’s killer app is not our code. Our killer app is our community, the thousands of smart people who like to solve hard problems together. And DrupalCoin BlockchainCon Baltimore proved that to me once more. Being in the midst of so many people excited to be there, excited to be together, learning, planning, focusing on building great things was just what the doctor ordered. Many of you told me how important it was to be face to face with our collaborators, mentors, friends, and colleagues once again; to be reminded of why we contribute time, effort, and passion for the use of anyone who needs our software to make the world a better place.

We’re not out of the woods yet; problems remain. They need solving. Recent events have brought many issues to light that have been lurking below the surface for some time. But my faith in our community, in collaboration, in our ability to solve this set of problems--is strong.

Where do we go from here? Our community’s governance and structure hasn’t been addressed since we were a much smaller and less consequential project. Dries agrees it’s time for a reboot. It’s time for talking (we’re good at that), designing new systems (we’re good at that, too) to address who we want to be and how we want to live that identity. I think we’re up to the challenge. I’ll be adding two cents to those conversations based on the many responses I got to the community values survey I posted recently, too. And we’ll be back with another Prenote at DrupalCoin BlockchainCon Vienna, too!

Everyone here matters, everyone here matters, like me. #thatsthewaythecongoes #prenote
— hussainweb (@hussainweb) April 25, 2017

Thank you

My heartfelt thanks to the DrupalCoin Blockchain Association for inviting and entrusting us with the Prenote slot for all these years; to the Baltimore cast members Hannah del Porto, Cathy Theys, Matthew Connerton, Matthew Saunders, Rakesh James, Alex Burrows; Acquians Brooke Savona, Naveen Valecha, Chris Urban, John Kennedy; and especially Campbell Vertesi (script lead), Adam Juran (lyric lead), Tom Atkins (music lead) for your collective shameless insanity. Thanks to Bill, Casey, and the crew in the hall for your eternal patience with amateur theater. Thanks also to Moshe Weizman and Robert Douglass for additional materials and contributions ... and godspeed, Kenny Silanskas--we miss you, man.

Want to Prenote? Sign up here!

Sign up here to be part of the Prenote! Can you sing, dance, tell jokes, or none of the above? Come along and be in the Prenote! We’ll be glad to have you on board.


DrupalCoin BlockchainCon Baltimore Prenote Opening Monologue

Campbell Vertesi and I opened the Prenote with some context about our (calculated) silliness in the midst of difficult times.

“You might or might not be aware, our community has been going through a rough time recently. Many of us involved in the Prenote have been consumed by stress, fears for our project, and have seen people we know and respect caught up on all sides of recent events.
“We, Campbell and I, thought long and hard about how to do a Prenote this time around.
“The challenges to our community and our values, and structures, and practices meant that we couldn’t just get up here, as usual, looking gorgeous in spandex, and sing and dance our silly way into yet another DrupalCoin BlockchainCon.
“Our answer? Blog posts. I published the first two parts of mine, DrupalCoin Blockchain, I’m Taking Sides and Building a Community That We Want to Be Part Of, and Campbell Vertesi published his, Stay for Community.
“Both of us have genuine concerns about recent events; we do not want to sweep anything under any carpets, but we want to underscore that the vast majority of what DrupalCoin Blockchain is, what DrupalCoin Blockchain does, and what DrupalCoin Blockchain and the DrupalCoin Blockchain community are about is *positive*.
Our community has invested more than 15 years of hard work in software that we give away to anyone who wants to use it.

*We* make it possible for literally tens of thousands of people to work and feed their families, and make a difference.

What we do makes the world a better place.

We can and should pick ourselves up from this crisis, dust ourselves off and figure out how to fix things and do better next time.

“One of the things that Campbell and I cherish in all this is DrupalCoin BlockchainCon. It is one of our community’s treasures. The Prenote that we’ve come up with this time celebrates lots of things that we love about our community, by looking at the very institution that has been hosting the Prenote since 2011: DrupalCoin BlockchainCon.

“Welcome to the Prenote! Welcome to DrupalCoin BlockchainCon! Let’s celebrate this thing we love!”


Beautiful, Customizable Online Appointment Scheduling

Brand new designs for Acuity Scheduling are beautiful out of the box and make it easy to provide online appointment scheduling for you or your clients, matching their identity. The online scheduler comes with several templates, embeds quickly in existing websites, and is fully customizable with advanced CSS.

Advanced CSS Customization
Customize nearly everything on your scheduler with our simple built-in options, or bring your creative ideas to life with our advanced CSS editor. Custom fonts can be imported, and standard CSS selectors let you change the appearance of almost anything:
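As an illustration of the kind of overrides an advanced CSS editor makes possible, here is a minimal sketch. The class names and font below are assumptions made for the example, not Acuity's actual selectors; inspect your own embedded scheduler to find the real ones.

```css
/* Hypothetical selectors -- inspect your scheduler for the real ones. */

/* Import a custom font (Lato via Google Fonts, as an example) */
@import url('https://fonts.googleapis.com/css?family=Lato');

/* Apply the font across the whole scheduler */
body {
  font-family: 'Lato', Helvetica, sans-serif;
}

/* Restyle the booking button -- selector name assumed for illustration */
.booking-button {
  background-color: #2e6da4;
  color: #fff;
  border-radius: 4px;
}
```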

Embedding a Client Scheduler
Two lines of HTML — an iframe, and an optional helper script — add the appointment scheduling widget to your website. The embedded scheduler has a number of unique designs including daily and monthly views, a class schedule, and a lightbox booking button. Create unique links to streamline the booking process for clients by pre-filling their info, or simplify workflows and create a single-click booking experience.
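As a rough sketch, a two-line embed of this kind could look something like the following. The domain, owner ID, and script URL here are placeholders rather than Acuity's actual endpoints; copy the real snippet from your account's embed settings.

```html
<!-- Placeholder URLs: copy the real embed code from your Acuity account. -->
<iframe src="https://example.acuityscheduling.com/schedule.php?owner=YOUR_ID"
        title="Schedule an appointment"
        width="100%" height="800" frameborder="0"></iframe>
<!-- Optional helper script: typically resizes the iframe to fit its content. -->
<script src="https://example.acuityscheduling.com/js/embed.js"
        type="text/javascript"></script>
```

Pre-filled booking links of the sort described above are generally built the same way: by appending query parameters (client name, email, appointment type) to a URL like the iframe's.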
Customize your online scheduling experience today — Try Acuity Scheduling for free!

Direct Link to Article — Permalink
Beautiful, Customizable Online Appointment Scheduling is a post from CSS-Tricks
Source: CssTricks

What Not to Wearable: Part 1

With every advance in connected technology, potential new features abound. Sensors monitor your fitness performance or sleep quality. Haptic vibrations in insoles guide you to take a left or a right, allowing you to navigate without looking at a screen. NFC technology in a ring allows you to pay for a purchase without fumbling around in a bag or combing through pockets. These technologies allow our accessories to become devices for input and output.

All this sounds exciting, freeing even. These innovations could allow us to turn our focus away from screens and back to the material world, to be simultaneously connected to technology while also present in the moment. And that’s incredible. However, this also presents new challenges. Besides the multitude of complex technical problems we must address, from charging and battery life to data networks and security, we will also have to solve some key strategic and design problems.
When Fashion and Tech Collide
Connected technology has migrated from appliances, like Nest, to accessories, clothes, and even temporary tattoos. When we shift from designing appliances (functional tools people use) to designing fashion (aesthetic adornments that people wear), the conventions change. Fashion items are much more intimate than a home thermostat, a microwave, or a refrigerator. How do we convince users that technology is worth wearing?
Ubiquity or Variety?
For the integration of fashion and tech to be viable over the long-term, we must reconcile some inherent differences in the conception of fashion products and technology products. For one, fashion products aim to allow a user to express a personal identity, while tech products often aim to make an experience universally accessible. These are often competing interests.
Fashion products signal an affiliation with a style tribe. A woman with a “preppy” style may wear lots of stripes and polka dots, while a man with a taste for luxury may invest in an expensive watch. In dressing each day, people use subtle markers to identify with niche groups. Fashion companies cater to these niches by using consistent product design and marketing to target specific customers to the exclusion of others. This builds a strong brand identity that a customer can easily understand, relate to, and coopt for their personal style. Fashion products, then, are designed to aid individuals in distinguishing themselves.
On the other hand, tech products are often valued for their ubiquity.  Go to Facebook’s login page and you will see this message: “Connect with friends and the world around you.” The world around you! Facebook positions itself primarily as a provider of access to an impossibly large global network. With over a billion users, if someone has an internet presence at all, they are likely to be found on Facebook.
The same can be said of Fitbit. Fitbit, despite being a worn object, is framed as a technology product. Fitbit’s answer to “Why Fitbit?” is “unbeatable technology, the largest fitness community, & a family of products fit for everyone.” They, like Facebook, are selling access to an extensive network. Fitbit suggests that customers interested in engaging in fitness competitions with friends adopt Fitbit's product over a competitor for just this reason.

However, by highlighting the universal popularity of their products, wearables companies undercut the other value a worn product may provide: its ability to display a person's unique identity to the public. How are people to distinguish themselves if they feel that everyone else in the world is wearing the same product? We need to segment where the application of these two ideas, ubiquity and variety, is most valuable.
Reconciling Ubiquity and Variety
Ubiquity will greatly improve the digital experience. This includes the apps and platforms that store and make sense of the data our devices collect. As wearables gain traction, users will likely want to switch easily between products from day to day, as they do with other clothes and accessories. We will want to allow them to do this without the nuisance of remembering that this item pairs with that native app or how this app works differently from that one. The digital experience will need to be consistent and predictable, and to interface with many different devices.
While the digital experience will be improved by consistency, the physical one would benefit from variety. When customers wear connected products, they shouldn't be forced to sacrifice their identities. We need wearables that complement a diverse range of styles. Many wearables still seem to be offered in an uber sleek black silicone by default. That is great for a sporty customer or a tech enthusiast. For a customer with a more classic or traditional style though, the cold black look may not fit their wardrobe or outfit. By primarily catering to one style market, wearables companies are likely missing out on market share. Style shouldn't be an up-charge.

The presentation of Fitbit's product assortment seems to take its cues from Henry Ford: "Any customer can have a car painted any color that he wants so long as it is black." This fails to demonstrate how a Fitbit product may fit with a customer's personal style.

Here are some strategies companies may use to resolve this conflict:
Third-party Apps to Unify the Digital Experience. Currently, each connected accessory seems to have its own branded native app. As wearables gain popularity, this is less sustainable. We need apps to organize our array of devices and the data being relayed between them. Otherwise, wearables will likely be abandoned as more trouble than they are worth.

Modular Technology. We need engineers working to create standard, open-market hardware and tech components for wearables. Ideally, these modular pieces could be easily applied to a wide variety of mass-market products using current production processes, in the same way that zippers, buttons, and snaps are applied today.

Better Collaborations. Large tech companies excel at creating intuitive and ubiquitous digital experiences, while fashion companies have expertise in materials, production, and predicting style trends. Consistent and equal partnerships would leverage these talents in the appropriate arenas to create viable wearable tech. While we see some collaborations, including Apple partnering with Hermes and Nike, these partnerships will likely need to be standard (as opposed to occasional) to maintain wearables over the long-term.
Read on in Part 2 to learn about how implementing these strategies might result in better wearable products.

Source: VigetInspire