Drupal4Gov Webinar Series: HAX

Start: 
2018-07-19 15:00 - 16:00 America/New_York

Organizers: 

jdearie

Event type: 

Training (free or commercial)

https://www.eventbrite.com/e/drupal4gov-webinar-series-hax-registration-...

HAX the web is a headless authoring solution being developed by Penn State, built on one question: Why can't all platforms have the same authoring experience (AX)? We decided that in order to build the best AX for Drupal, we needed to NOT build it just for Drupal. Learn how you can leverage HAX in your (deep breath) Drupal 6, Drupal 7, Drupal 8, GravCMS, desktop apps, BackdropCMS sites, and more!
Learn how and what you can build when we all work together across ecosystems on the front end using a technology called web components. Bryan Ollendyke (btopro) is the HAX project lead and will be demonstrating HAX, talking about its capabilities and how to extend it, what you can do to use it in your projects, and how and why web components should be the only technology implemented in your front end / theme layer of Drupal (and beyond).
Bryan Ollendyke (btopro) is a long time member of the Drupal community (13+ years) and works at Penn State on a platform called ELMS: Learning Network. Bryan is an open source absolutist, contributing 100% of his efforts back to the Drupal and web components communities in the form of modules, themes, install profiles, tutorials, design assets, tooling and more. Bryan drinks enough coffee to put down an elephant, and his "energy" is reflective of this.
Source: https://groups.drupal.org/node/512931/feed


Features as Apps

One of the most important things that I’ve learned when it comes to building technology products, especially at the super early-stage, is the reality that designing a real MVP (Minimum Viable Product) is incredibly difficult to do.
I’ve already talked about this once or twice on this blog before…

The challenges of keeping things MVP-ish are real, and they mostly stem from these two issues:

The availability of robust frameworks and APIs make it far too easy to (accidentally) scale a simple experiment based on a simple hypothesis into more than just a simple MVP.
It is psychologically difficult to minimize, constrain, and limit our “vision” of what could be with what should be, especially with so many existing examples to compare to (and the availability of great tooling – see #1).

Practice, a shit-ton of discipline, and a hyper-judicious pragmatic framework are necessary to stay trim, stay lean, and to execute the smallest technological experiment possible.
And having an exceptionally-focused cofounder / partner / team is also a very useful and practical counterweight to keep one from moving beyond what is absolutely, fundamentally necessary.
Consequently, in my own practice, I’ve started to internalize a particular mantra that I haven’t really heard before… maybe I’ll coin the term and see if anyone challenges me on it… 
The term is “Features as Apps” and it is the philosophy and practice of building single-serve apps to see if real users will, in fact, use that particular “feature” that might exist in the much larger, future-state application.
I’ll give you an example…
After we had built a small-yet-growing community @ The Bitcoin Pub we began to hear from our users that they wished “this” and “that” existed within the forum itself. Some examples were:

Charting tools for technical analysis for cryptocurrency
More communication tools (e.g. real-time chat)
Notification systems for pricing data
Deeper integration with news sources
Calculators and conversion tools
Buying / Selling of cryptocurrencies
And many, many more…

Instead of spending significant time building additional features within the larger software framework, we decided to deploy “Features as Apps” into the wild, putting together much smaller apps and websites to validate whether these requests were real and true.
As a result, we built things like CoinPuffs and CryptoYum (soon to be released) and even experimented on the “Features” themselves, essentially, Features of Features as Apps.
A great example is CoinPuffs where we recently added a small site off of the main site that allows users to create and establish email alerts for cryptocurrency pricing:

You can easily create a new email alert via the button and then manage them in a simple backend interface:

The question, of course, is whether folks would actually use it (or not). If they do, then, we can roll this feature into the much larger application layer of either CoinPuffs or even The Bitcoin Pub.
There’s not a lot of risk associated with creating these super-small, single-serve apps that serve as features for much bigger software projects. It just takes time and a serious commitment to testing one’s hypothesis over a given amount of time.
If it works out well then you’ve created more goodwill with your customers, deeper lock-in, and hopefully a high return on investment.
I like this model, and it allows us to experiment with high velocity too, which, in turn, creates a lot of momentum. These things, of course, are among our greatest strengths and assets as an early-stage startup.
The post Features as Apps appeared first on John Saddington.
Source: https://john.do/


PixelSnap

Forever I've used the macOS Command-Shift-4 screenshot utility to measure things. Pressing it gets you a little crosshairs cursor which you can click-and-drag to take a screenshot but, crucially, has little numbers that tell you the width/height of the selection in pixels. It's crude, but ever so useful.

See those teeny-tiny numbers in the bottom-right? So useful, even if they are tough to read.
PixelSnap is one of those apps that, once you see it, you're like OMG that's the best idea ever. It's the same kind of interaction (key command, then mouse around), but it's drawing lines between obvious measurement points in any window at all. Plus it has this drag around an area and snap to edges thing that's just as brilliant. Instant purchase for me.
The Product Hunt newsletter said:
Two teenage makers launched PixelSnap, a powerful design tool to measure every pixel on your screen. Hit #1 on Product Hunt, and over $5,000 in sales within 24 hours of their launch. 📝✨
Hey, even cooler!
A couple people pointed out xScope, which also has this feature. Fifty bucks, but also has a ton of other features. Tempting.
Direct Link to Article — Permalink
PixelSnap is a post from CSS-Tricks
Source: CssTricks


On Startup Competition


Worth a repost, for sure:

How much should an entrepreneur worry about competition in an early-stage startup?
The short answer is easy: Nada. Zip. Zero. Zilch. Spend none of your time worrying about competition.
But, if that’s not enough for you… Jason shares a few more meta-levels on why this makes sense:

First, markets get redefined by new entrants that change the paradigm. Who saw that Zoom would rocket to $200m+ ARR in a crowded space that seemed to be a commodity going to free? Or that Slack would remake chat apps?
Second, it’s often ok to just be 10x better at something that matters a lot to paying customers. You don’t have to be better than Salesforce at everything. You can be Pipedrive, and simply be the best CRM that’s super easy to use. That’s enough to get you to $100m ARR right there.
Third, super happy customers win. Yes, winner-takes-most is true in SaaS and frustrating. But if you are growing at a healthy clip and have super happy customers, you will probably still do fine. It’s almost impossible to kill a SaaS company at $10m in ARR with super happy customers. Those happy customers buy more and beget more customers. Even if your competitor is 3x larger.
Fourth, everybody lies. Be wary of your competitor’s press releases and fancy venture rounds. They matter. But everyone exaggerates a bit, and hides the tough stuff.
Fifth, competition is part of life. Very few spaces in B2B and SaaS in particular lend themselves to true monopolies. Instead of sweating competition — get good at it.

Love it. And… after having done this a few times myself I have the scar tissue to prove that worrying about competition does no one any good.
The greatest strength of an early-stage company is nothing more than speed. If you can move faster than the rest, working harder and smarter than anyone who might be deemed a “competitor,” then you’re already in a good spot.
But that assumes competition even matters.
A good startup isn’t trying to compete; it’s trying to annihilate the market and move towards a monopoly. Peter Thiel says this best in one of my favorite business books out there.
Own the space and you can become the “competition” that everyone is trying to beat.

Two great posts that appeared in my inbox this morning:

Outlasting
The World Needs More Modest Linear Growth Companies

Yup.
The post On Startup Competition appeared first on John Saddington.
Source: https://john.do/


Design Systems: Design-Development Collaboration


In our series on design systems, we’ve discussed the advantages and approaches to creating a system from a design perspective. In this post, I’d like to cover some of the new tools that developers and designers are using.
There’s been a lot of exciting activity around design tools in the last few years, and it’s changing how designers and developers collaborate. For those uninitiated front-end developers (if you’ve entered the industry in the past few years), building out a design used to mean wading into a designer’s world: Photoshop. Even after years of doing buildouts from Photoshop, I found the interface to be largely unintelligible. If the organization system of the designer is not on point you could be in for an even bumpier ride. Developers want to quickly get accurate build information and not worry about layer names, how to turn off a mask to get at an image, or if that turned-off layer is important.
At Viget, we’re constantly evaluating the tools to ensure that they improve our workflow rather than bog it down. In the past year, we’ve been putting the two main design-development collaboration tools, Figma and Zeplin, through their paces. The goals of these two apps are very different: Figma is a design tool with features that reveal buildout information, while Zeplin was built purely to facilitate design handoff. Zeplin still leads the pack in delivering buildout information, but Figma has become our one-tool-to-rule-them-all, particularly because their developer tools are catching up.
Benefits of a New Workflow
While some aspects of buildout are true for any project, there are a few particularly important aspects when building a design system, and the right tool or workflow can make all the difference in:
Quickly surfacing accurate information about a thing (ex. size, color, position, font).
Checking for consistency in the design across pages to help keep the parts kit small and maintainable.
Seeing modular design patterns and components that can be used as building blocks.
These new apps have made it easier to intuit and build a design system by:
Providing Ways to See More at Once
Our designers started the practice of putting every page layout in one artboard in Photoshop, but it didn’t take much for the app to get bogged down and slow. This isn’t the case with Figma, and that’s created benefits all around. For a developer, getting to see the entirety of a design system in one view is a great way to quickly move around multiple parts of a system and pick up on similarities and patterns.

Giving Quick Access to Information and Keeping Developers Out of Design Tools
When buildout information, like a font size, can be buried in nested layers, layer comps, or locked up in a mask, it can be time-consuming to navigate the advanced functionality of something as complex as Photoshop. Zeplin and Figma have both made this process light years easier by exposing developer-ready information with a single click.

Converting Design to Code
Even better than getting style information with a click is getting the code. Both Zeplin and Figma output copy-and-paste code snippets for an ultra-fast and accurate workflow. Bonus points go to Zeplin for providing a choice of CSS, Sass, SCSS, Less, and Stylus formats and allowing the developer to customize color variable names.

Measuring Everything
Getting measurements right, both of a thing and between things, can be time-consuming. In addition to getting things like font information, Zeplin and Figma provide dimensions and distances for every object, making accurate buildouts a breeze.

Facilitating Communication
In the past, questions about a design had to take place separate from the design in email or a chat app like Slack. The best workflow I ever devised was to annotate a screenshot of the design with arrows and comments and send it to the designer for feedback — very inefficient! With Zeplin and Figma’s built-in commenting system, designers and developers can talk within the context of the design in nearly real time.

Wrap Up
We’re excited to see how these tools evolve as they continue to improve the quality and speed of our workflow. At the time of this writing, along with Zeplin and Figma, there are many other promising tools like Sympli, Sketch Measure, InVision Inspect, and Avocode. These new entrants should create some great competition.
Do you have experience with one of these tools or comments about the article? Don’t be shy about jumping into the comments!


Source: VigetInspire


How Do You Todo? A Microcosm / Redux Comparison


For those who don't know, we've been working on our own React framework here at Viget called Microcosm. Development on Microcosm started before Redux had hit the scene and while the two share a number of similarities, there are a few key differences we'll be highlighting in this post.

I've taken the Todo app example from Redux's docs (complete app forked here), and implemented my own Todo app in Microcosm. We'll run through these codebases side by side comparing how the two frameworks help you with different developer tasks. Enough chatter, let's get to it!

Entry point

So you've yarnpm installed the dependency, now what?

Javascript
// Redux

// index.js
import React from 'react'
import { render } from 'react-dom'
import { Provider } from 'react-redux'
import { createStore } from 'redux'
import todoApp from './reducers/index'
import App from './components/App'

let store = createStore(todoApp)

render(
<Provider store={store}>
<App />
</Provider>,
document.getElementById('root')
)

Javascript
// Microcosm

// repo.js
import Microcosm from 'microcosm'
import Todos from './domains/todos'
import Filter from './domains/filter'

export default class Repo extends Microcosm {
setup () {
this.addDomain('todos', Todos)
this.addDomain('currentFilter', Filter)
}
}

// index.js
import { render } from 'react-dom'
import React from 'react'
import Repo from './repo'
import App from './presenters/app'

const repo = new Repo()

render(
<App repo={repo} />,
document.getElementById('root')
)

Pretty similar looking code here. In both cases, we're mounting our App component to the root element and setting up our state management piece. Redux has you creating a Store, and passing that into a wrapping Provider component. With Microcosm you instantiate a Repo instance and set up the necessary Domains. Since Microcosm Presenters (from which App extends) take care of the same underlying "magic" access to the store/repo, there's no need for a higher-order component.

State Management

This is where things start to diverge. Where Redux has a concept of Reducers, Microcosm has Domains (and Effects, but we won't go into those here). Here's some code:

Javascript
// Redux

// reducers/index.js
import { combineReducers } from 'redux'
import todos from './todos'
import visibilityFilter from './visibilityFilter'

const todoApp = combineReducers({
todos,
visibilityFilter
})

export default todoApp

// reducers/todos.js
const todo = (state = {}, action) => {
switch (action.type) {
case 'ADD_TODO':
return {
id: action.id,
text: action.text,
completed: false
}
case 'TOGGLE_TODO':
if (state.id !== action.id) {
return state
}

return Object.assign({}, state, {
completed: !state.completed
})

default:
return state
}
}

const todos = (state = [], action) => {
switch (action.type) {
case 'ADD_TODO':
return [
...state,
todo(undefined, action)
]
case 'TOGGLE_TODO':
return state.map(t =>
todo(t, action)
)
default:
return state
}
}

export default todos

// reducers/visibilityFilter.js
const visibilityFilter = (state = 'SHOW_ALL', action) => {
switch (action.type) {
case 'SET_VISIBILITY_FILTER':
return action.filter
default:
return state
}
}

export default visibilityFilter

Javascript
// Microcosm

// domains/todos.js
import { addTodo, toggleTodo } from '../actions'

class Todos {
getInitialState () {
return []
}

addTodo (state, todo) {
return state.concat(todo)
}

toggleTodo (state, id) {
return state.map(todo => {
if (todo.id === id) {
return {...todo, completed: !todo.completed}
} else {
return todo
}
})
}

register () {
return {
[addTodo] : this.addTodo,
[toggleTodo] : this.toggleTodo
}
}
}

export default Todos

// domains/filter.js
import { setFilter } from '../actions'

class Filter {
getInitialState () {
return "All"
}

setFilter (_state, newFilter) {
return newFilter
}

register () {
return {
[setFilter] : this.setFilter
}
}
}

export default Filter

There are some high level similarities here: we're setting up handlers to deal with the result of actions and updating the application state accordingly. But the implementation differs significantly.

In Redux, a Reducer is a function which takes in the current state and an action, and returns the new state. We're keeping track of a list of todos and the visibilityFilter here, so we use Redux's combineReducers to keep track of both.

In Microcosm, a Domain is a class built to manage a section of state, and handle actions individually. For each action, you specify a handler function which takes in the previous state, as well as the returned value of the action, and returns the new state.

In our Microcosm setup, we called addDomain('todos', Todos) and addDomain('currentFilter', Filter). This hooks up our two domains to the todos and currentFilter keys of our application's state object, and each domain becomes responsible for managing their own isolated section of state.

A major difference here is the way that actions are handled on a lower level, and that's because actions themselves are fundamentally different in the two frameworks (more on that later).

Todo List

Enough with the behind-the-scenes stuff though, let's take a look at how the two frameworks enable you to pull data out of state, display it, and trigger actions. You know - the things you need to do on every React app ever.

Javascript
// Redux

// containers/VisibleTodoList.js
import { connect } from 'react-redux'
import { toggleTodo } from '../actions'
import TodoList from '../components/TodoList'

const getVisibleTodos = (todos, filter) => {
switch (filter) {
case 'SHOW_ALL':
return todos
case 'SHOW_COMPLETED':
return todos.filter(t => t.completed)
case 'SHOW_ACTIVE':
return todos.filter(t => !t.completed)
default:
return todos
}
}

const mapStateToProps = (state) => {
return {
todos: getVisibleTodos(state.todos, state.visibilityFilter)
}
}

const mapDispatchToProps = (dispatch) => {
return {
onTodoClick: (id) => {
dispatch(toggleTodo(id))
}
}
}

const VisibleTodoList = connect(
mapStateToProps,
mapDispatchToProps
)(TodoList)

export default VisibleTodoList

// components/TodoList.js
import React from 'react'

const TodoList = ({ todos, onTodoClick }) => (
<ul>
{todos.map(todo =>
<li
key = {todo.id}
onClick = {() => onTodoClick(todo.id)}
style = {{
textDecoration: todo.completed ? 'line-through' : 'none'
}}
>
{todo.text}
</li>
)}
</ul>
)

export default TodoList

Javascript
// Microcosm

// presenters/todoList.js
import React from 'react'
import Presenter from 'microcosm/addons/presenter'
import { toggleTodo } from '../actions'

class VisibleTodoList extends Presenter {
getModel () {
return {
todos: (state) => {
switch (state.currentFilter) {
case 'All':
return state.todos
case 'Active':
return state.todos.filter(t => !t.completed)
case 'Completed':
return state.todos.filter(t => t.completed)
default:
return state.todos
}
}
}
}

handleToggle (id) {
this.repo.push(toggleTodo, id)
}

render () {
let { todos } = this.model

return (
<ul>
{todos.map(todo =>
<li
key={todo.id}
onClick={() => this.handleToggle(todo.id)}
style={{
textDecoration: todo.completed ? 'line-through' : 'none'
}}
>
{todo.text}
</li>
)}
</ul>
)
}
}

export default VisibleTodoList

So with Redux the setup detailed here is, shall we say ... mysterious? Define yourself some mapStateToProps and mapDispatchToProps functions, pass those into connect, which gives you a function, which you finally pass your view component to. Slightly confusing at first glance, and strange that your props become a melting pot of state and actions. But, once you become familiar with this, it's not a big deal - set up the boilerplate code once, and then add the meat of your application in between the lines.

Looking at Microcosm however, we see the power of a Microcosm Presenter. A Presenter lets you grab what you need out of state when you define getModel, and also maintains a reference to the parent Repo so you can dispatch actions in a more readable fashion. Presenters can be used to help with simple scenarios like we see here, or you can make use of their powerful forking functionality to build an "app within an app" (David Eisinger wrote a fantastic post on that), but that's not what we're here to discuss, so let's move on!

Add Todo

Let's look at what handling form input looks like in the two frameworks.

Javascript
// Redux

// containers/AddTodo.js
import React from 'react'
import { connect } from 'react-redux'
import { addTodo } from '../actions'

let AddTodo = ({ dispatch }) => {
let input

return (
<div>
<form
onSubmit={e => {
e.preventDefault() // keep the browser from reloading the page on submit
dispatch(addTodo(input.value))
}}
>
<input ref={node => {input = node}} />
<button type="submit">Add Todo</button>
</form>
</div>
)
}
AddTodo = connect()(AddTodo)

export default AddTodo

Javascript
// Microcosm

// views/addTodo.js
import React from 'react'
import ActionForm from 'microcosm/addons/action-form'
import { addTodo } from '../actions'

let AddTodo = () => {
return (
<div>
<ActionForm action={addTodo}>
<input name="text" />
<button>Add Todo</button>
</ActionForm>
</div>
)
}

export default AddTodo

With Redux, we again make use of connect, but this time without any of the dispatch/state/prop mapping (just when you thought you understood how connect worked). That passes in dispatch as an available prop to our functional component which we can then use to send actions out.

Microcosm has a bit of syntactic sugar for us here with the ActionForm addon. ActionForm will serialize the form data and pass it along to the action you specify (addTodo in this instance). Along these lines, Microcosm provides an ActionButton addon for easy button-to-action functionality, as well as withSend which operates similarly to Redux's connect/dispatch combination if you like to keep things more low-level.
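As a rough sketch of that ActionButton addon in use (this isn't part of the demo app, and the exact props, action and value, are assumptions that may vary by Microcosm version), a toggle button could look something like this:

Javascript
// views/toggleButton.js (hypothetical sketch, not in the demo app)
import React from 'react'
import ActionButton from 'microcosm/addons/action-button'
import { toggleTodo } from '../actions'

// Clicking the button pushes toggleTodo with the todo's id as its payload
const ToggleButton = ({ id }) => (
  <ActionButton action={toggleTodo} value={id}>
    Toggle
  </ActionButton>
)

export default ToggleButton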

In the interest of time, I'm going to skip over the Filter Link implementations, the comparison is similar to what we've already covered.

Actions

The way that Microcosm handles Actions is a major reason that it stands out in the pool of state management frameworks. Let's look at some code, and then I'll touch on some high level points.

Javascript
// Redux

// actions/index.js
let nextTodoId = 0

export const addTodo = text => {
return {
type: 'ADD_TODO',
id: nextTodoId++,
text
}
}

export const setVisibilityFilter = filter => {
return {
type: 'SET_VISIBILITY_FILTER',
filter
}
}

export const toggleTodo = id => {
return {
type: 'TOGGLE_TODO',
id
}
}

Javascript
// Microcosm

// actions/index.js
let nextTodoId = 0

export function addTodo(data) {
return {
id: nextTodoId++,
completed: false,
text: data.text
}
}

export function setFilter(newFilter) {
return newFilter
}

export function toggleTodo(id) {
return id
}

At first glance, things look pretty similar here. In fact, the only major difference in defining actions here is the use of action types in Redux. In Microcosm, domains register to the actions themselves instead of a type constant, removing the need for that set of boilerplate code.

The important thing to know about Microcosm actions however is how powerful they are. In a nutshell, actions are first-class citizens that get things done, and have a predictable lifecycle that you can make use of. The simple actions here return JS primitives (similar to our Redux implementation), but you can write these action creators to return functions, promises, or generators (observables supported in the next release).

Let's say you return a promise that makes an API request. Microcosm will instantiate the action with an open status, and when the promise comes back, the action's status will update automatically to represent the new situation (either update, done, or error). Any Domains (guardians of the state) that care about that action can react to the individual lifecycle steps, and easily update the state depending on the current action status.
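To make that concrete, here's a minimal, hypothetical sketch of a promise-based action and a domain that registers against its lifecycle. It is not part of the todo app above; the endpoint and state shape are invented for illustration, and the open/done/error keys are the lifecycle statuses described in the previous paragraph:

Javascript
// actions/user.js (hypothetical)
export function getUser (id) {
  // Returning a promise: the action opens immediately, then resolves
  // to done (or error) when the request settles
  return fetch(`/api/users/${id}`).then(response => response.json())
}

// domains/user.js (hypothetical)
import { getUser } from '../actions/user'

class User {
  getInitialState () {
    return { loading: false, record: null, error: null }
  }

  setLoading (state) {
    return { ...state, loading: true }
  }

  setRecord (state, record) {
    return { loading: false, record, error: null }
  }

  setError (state, error) {
    return { ...state, loading: false, error }
  }

  register () {
    return {
      [getUser.open]: this.setLoading,
      [getUser.done]: this.setRecord,
      [getUser.error]: this.setError
    }
  }
}

export default User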

Action History

The last thing I'll quickly cover is a feature that is unique to Microcosm. All Microcosm apps have a History, which maintains a chronological list of dispatched actions, and knows how to reconcile action updates in the order that they were pushed. So if a handful of actions are pushed, it doesn't matter in what order they succeed (or error out). Whenever an Action changes its status, History alerts the Domains about the Action, and then moves down the chronological line alerting the Domains of any subsequent Actions as well. The result is that your application state will always be accurate based on the order in which actions were dispatched.

This topic honestly deserves its own blog post; it's such a powerful feature that takes care of so many problems for you, but it's a bit tough to cram into one paragraph. If you'd like to learn more, or are confused by my veritably confusing description, check out the History Reconciling docs.

Closing Thoughts

Redux is a phenomenal library, and the immense community that's grown with it over the last few years has brought forth every middleware you can think of in order to get the job done. And while that community has grown, we've been plugging away on Microcosm internally, morphing it to suit our ever growing needs, making it as performant and easy to use as possible because it makes our jobs easier. We love working with it, and we'd love to share the ride with anyone who's curious.

Should you be compelled to give Microcosm a go, here are a few resources to get you running:

Microcosm Quickstart
Documentation
Github
Contributing


Source: VigetInspire


Design Systems: Why Now?

Design Systems have been a hot topic as of late—so fiery hot that books are being written, platforms developed, events organized, and tools released to help us all with this growing need. To me, it feels a lot like a ‘what’s old is new again’ kind of topic. I mean, if we’re being real, the notion of systems design has been around since at least the industrial era—it’s not exclusive to the digital age. And, in many ways, Design Systems by their very nature are simply a natural evolution of style guides—a set of standard guidelines for writing and design. Yet, style guides have been around for decades. So, why the newness and why now?
As an agency, we’re not here to define what Design Systems are and are not—there are already tons of articles that do so. If you’re looking for good starting places, I recommend Laura Kalbag’s Design Systems article (short form, 2012) and Invision’s Design System Handbook (long form, 2017). We’re interested in helping organizations, like our customers, better understand why they might need a Design System and how best to get started. With that in mind, this is the beginning of a small set of articles to give you an idea of how we (and other client services providers like us) can help.
To look deeper into why there seems to be a rising interest in Design Systems, here are a few factors that may be driving things right now:
Digital is pervasive. Where there used to be a separation between offline and online, there is no more. Businesses that were offline are now online and businesses that started online are expanding beyond. We’re even starting to see digital agencies (like Stink Studios) drop Digital from their name (formerly Stink Digital). This is happening because most agencies now serve ‘digital’—it’s no longer a separate thing. Some agencies are now using descriptive words like ‘integrated’ to mean they service both online and offline needs.
More specialized capabilities are being brought in-house. As companies have hired more and more developers, they’ve built strong engineering departments. Once that happens, it doesn’t take long for a few engineers to tell you that they are not designers. And, once you hire designers it won’t take long for a designer to tell you what kind of designer they are. Suddenly, you are hiring for specialties like Visual, UI, UX, Interactive, Motion, Sound, and more.
Agile development is widespread. It used to be that websites would go through extensive overhauls every two to five years to account for evolving needs. Once developers adopted agile processes they trained others outside of development to work in similar rapid release cycles. What used to amount to a big launch every few years has evolved from bi-annual to bi-weekly to twice daily all the way to the point where things are closer and closer to being real-time events—make a change, validate, then publish.
Platforms are expanding. At one point in time we were designing for a single digital presence—the website. Then, it was sites and apps across a universe of displays—from wristwatches to stadium displays. Lately, what we see emerging are fully immersive extended reality (XR) environments—that’s just one side of the coin. On the other, displays are becoming non-essential thanks to voice-activated digital assistants like Amazon’s Alexa and Apple’s Siri. Put simply, it’s a lot to keep up with and stay ahead of.
Consumer expectations are rising. The most successful brands are trusted by their customers because of their attention to detail, whether it be customer service, user experience, or overall impact. The more consistent and polished your brand is across your universe of touch points, the more likely it is that you are trusted no matter where you are.
To summarize, what I think we’re seeing is a natural evolution of a maturing era. Though it is still evolving, it is no longer emerging. For many of us, we’re at a point in time where we can celebrate progress, but also recognize the messes made along the way, as is natural after a significant growth period. It’s times like these that we take what we have and make things better, more efficient, and more effective—a very real promise that Design Systems offer. For more on why you might need a Design System, be sure to read our next article.
References
This being the start of a short series on this topic, we’re going to leave it here for now—so stay tuned for more from us about Design Systems. In the meantime, here are some references we’ve found helpful if you’d like to dive deeper.
Books
Atomic Design (Brad Frost)
Design Systems (Smashing Magazine)
Design Systems Handbook (DesignBetter.co)
Pocket Guide: Why Build a Design System (UXPin)
Articles
Design Systems (Laura Kalbag)
Atomic Design (Brad Frost)
Building a Visual Language (AirBNB)
Design Systems Article Series (Nathan Curtis, EightShapes)
The Minimum Viable Design System (Marcin Treder, UXPin)
Podcasts
Style Guide Podcast
Lists
Styleguides.io
Design Systems List (Github)
Examples
Carbon (IBM)
Clarity (VMware)
Lightning (Salesforce)
Nachos (Trello)
Polaris (Shopify)
Photon (Firefox)
Predix (GE)


Source: VigetInspire


mailto: for Google Apps (Not Gmail) in Chrome

You can change the default mail client to use Gmail via your browser pretty easily, and you can follow these instructions to get it started.
But, since I’ve moved from the free Gmail to a new email address, this function no longer works and it’s been really frustrating.

Oh yah.
Thankfully, I found a simple Chrome Extension which allows me to use a Google Apps domain like saddington.vc and I get the same behavior!
You can install it here… and I’m a much happier email user now.
The post mailto: for Google Apps (Not Gmail) in Chrome appeared first on John Saddington.
Source: https://john.do/


A Focus on Blockchain

I shared this with my newsletter subscribers today… I thought I’d share it here as well, especially since I’m tired af and I’ve written way too much copy and content for one day.
So, here it is…

If You Could Invest in “The Internet”…
… before it became a huge thing… knowing what you know now… you wouldn’t hesitate, right? Same thing might be said of mobile (or the iPhone and/or Apple) before it took over the entire world and accelerated our world into a mobile-first culture.
You’d go all-in, right? I mean, you’d drop everything to be part of that movement, correct?
Well, this is exactly what I feel about blockchain (and bitcoin and cryptocurrency).
Consequently, I’ve decided to share some pretty big news today, which is this: I’m working with my brother on a collection of products and apps in the blockchain space. These include a mobile app, a vibrant community, and a brand-spankin’ new YouTube Channel called “Decentralized”.
And I couldn’t be more excited. Truly. Blockchain may very well be the most significant technological advancement that I will ever have the pleasure of experiencing first-hand.
So, as I mentioned, I’m going all-in on this project and it’s going to be my singular focus for quite some time.
The timing is great too, by the way, as yesterday I finished my 365-day vlogging experiment… that was a great mental and physical exercise (and thanks for everyone who followed along!).
So, anyways, I hope you join me on this exciting new adventure. I’ll be sharing more along the way, as I typically do, and I’ll be spending more time on this brand new YouTube Channel as well.
Finally… if I can say anything (and I think I’ve said it already at this point… beating dead horse…) you should seriously take a look into Bitcoin, cryptocurrency, and perhaps most importantly blockchain technology.
It’s going to change all of our lives for the better… might as well get a piece of the action, right?
Love you all. Let me know how I can serve you!

john

The post A Focus on Blockchain appeared first on John Saddington.
Source: https://john.do/


Creating a Decoupled Drupal Application in 30 Minutes with Lightning, BLT, and DrupalVM

Overview
Brian Reese, Jason Enter, and Dane Powell, members of Acquia’s Professional Services team, recently released an open-source application that demonstrates how Drupal and Node.js can easily be paired to create beautiful and functional decoupled applications.
This demo application was split into two repositories: a Drupal-based backend (acting as a data provider) and the Node-based frontend. You can find a tutorial on how to try out this demo application yourself here, or follow the READMEs included in each repo.
The purpose of the current tutorial, however, is to illustrate how easy it was to create the Drupal backend using a combination of Acquia and Drupal community projects such as Lightning, BLT, and DrupalVM. This will allow you to follow the same process to rapidly create your own custom decoupled applications.
Understanding the components
Let’s start by briefly reviewing the open-source (read: free!) tools you will use in this tutorial.
Lightning
Lightning is a Drupal distribution that curates the best Drupal modules and patches to provide a great experience for editorial teams and developers out of the box. For our purposes, it’s most useful because it provides a preconfigured Content API feature, which automatically exposes a JSON-based REST API for content types, fields, media, and other entities.
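To give a sense of what consuming that API might look like from the decoupled front end, here is a minimal, hypothetical sketch. The host name, path, and "article" content type are assumptions for illustration only; the actual endpoints depend on how the Content API is configured in your install:

// Hypothetical example: a decoupled front end requesting article content
fetch('http://decoupled.local/jsonapi/node/article', {
  headers: { Accept: 'application/json' }
})
  .then(response => response.json())
  .then(json => {
    // Each article comes back as a JSON entity, ready to render in Node.js
    console.log(json.data)
  })
  .catch(error => console.error('Content API request failed', error))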
Headless Lightning
Headless Lightning is a sub-profile of Lightning that includes all of the same features, but additionally provides a simplified administrative interface designed especially for decoupled sites, as well as editorial teams who might not be as comfortable with Drupal’s administrative patterns.

Lightning and Headless Lightning are each great choices for decoupled applications, since they share the common Content API feature. For the purposes of this tutorial, however, we will assume you are using Headless Lightning.
Simplified content authoring interface provided by Headless Lightning
BLT
BLT is a set of tools that will assist in creating a new project, as well as deploying and testing that project, using just a few simple commands. It automates many of the tedious tasks of spinning up a new project such as setting up a local environment, enforcing best practices, managing configuration, building a test framework, and setting up continuous integration.
BLT only works with Drupal 8, but it is completely agnostic as to which distribution or contributed packages you choose to use. By default, it will build new sites based on Lightning.
DrupalVM
DrupalVM is a Vagrant-based virtual development environment that makes it easy to set up a dedicated local development environment (including a LAMP stack) for each of your Drupal projects.
Creating your application -- in Six Steps
1. Install the prerequisites for BLT and DrupalVM. We strongly recommend following this tutorial in a Unix-like environment (Mac OS or Linux). While all of these tools are generally compatible with Windows 10, there are some caveats, and the developer experience is going to be generally inferior to a native *nix environment.
2. Proceed to create a new project using BLT. BLT’s provided setup instructions should be comprehensive and self-explanatory, but we will duplicate them here for posterity. If you have any problems setting up the new project, review the BLT documentation or create an issue in the support queue.
Create a new project based on BLT by running the following command. We assume you will name the project “decoupled”, like ours:
composer create-project --no-interaction acquia/blt-project decoupled
This will create a new Drupal codebase and local Git repository in a directory named “decoupled”. When it’s complete, you should see a message like this:
Restart your terminal session so that your shell detects the new BLT alias, then change directory to your new site, i.e.
cd ~/sites/decoupled
All following steps assume that you are in this directory.
3. Set up your LAMP stack. We recommend using DrupalVM, but you can also follow the steps in the BLT instructions to configure your own LAMP stack if desired. Setting up a DrupalVM instance is as easy as running this command (this can take 10-20 minutes, go grab a coffee!):
blt vm
Important: it’s best if the major version of PHP on your host machine matches the major version in the VM. Your DrupalVM instance will use PHP 5.6 by default. Thus, if you use PHP 7+ on your host, you should configure DrupalVM to also use PHP 7:
Edit box/config.yml
Change php_version to 7.0 or 7.1 to match your host.
Run vagrant provision
4. Download and install Headless Lightning:
composer require acquia/headless_lightning:~1.1.0
This will place the Headless Lightning code at: docroot/profiles/contrib/headless_lightning
5. Tell BLT to install Headless Lightning by default by editing blt/project.yml and changing the project:profile:name key to: headless_lightning.
6. Finally, now that all of the code dependencies and your LAMP stack are in place, it’s time to install the site:
blt setup
When you run this command, BLT will automatically make sure that composer dependencies are installed, configure your local settings, and install the Headless Lightning profile.
Congratulations
You should now have a functional decoupled Drupal application! You can log in by running this command in the root of your new `decoupled` repository:
drush @decoupled.local uli
Future blog posts in this series will demonstrate how to create and populate a content model, how that content is exposed via JSON API, and how to integrate with front-end apps and deploy them to Acquia Cloud.
Source: http://dev.acquia.com/


Animating Layouts with the FLIP Technique

User interfaces are most effective when they are intuitive and easily understandable to the user. Animation plays a major role in this - as Nick Babich said, animation brings user interfaces to life. However, adding meaningful transitions and micro-interactions is often an afterthought, or something that is “nice to have” if time permits. All too often, we experience web apps that simply “jump” from view to view without giving the user time to process what just happened in the current context.

This leads to unintuitive user experiences, but we can do better, by avoiding “jump cuts” and “teleportation” in creating UIs. After all, what’s more natural than real life, where nothing teleports (except maybe car keys), and everything you interact with moves with natural motion?
In this article, we’ll explore a technique called “FLIP” that can be used to animate the positions and dimensions of any DOM element in a performant manner, regardless of how their layout is calculated or rendered (e.g., height, width, floats, absolute positioning, transform, flexbox, grid, etc.)
Why the FLIP technique?
Have you ever tried to animate height, width, top, left, or any other properties besides transform and opacity? You might have noticed that the animations look a bit janky, and there's a reason for that. When any property that triggers layout (such as `height`) changes, the browser has to recursively check whether any other element's layout has changed as a result, and that can be expensive. If that calculation takes longer than one animation frame (around 16.7 milliseconds), then the animation frame will be skipped, resulting in "jank" since that frame wasn't rendered in time. In Paul Lewis' article "Pixels are Expensive", he goes into further depth on how pixels are rendered and the various performance expenses.
In short, our goal is to be short -- we want to calculate the least amount of style changes necessary, as quickly as possible. The key to this is only animating transform and opacity, and FLIP explains how we can simulate layout changes using only transform.
What is FLIP?
FLIP is a mnemonic device and technique first coined by Paul Lewis, which stands for First, Last, Invert, Play. His article contains an excellent explanation of the technique, but I’ll outline it here:

First: before anything happens, record the current (i.e., first) position and dimensions of the element that will transition. You can use getBoundingClientRect() for this, as will be shown below.
Last: execute the code that causes the transition to instantaneously happen, and record the final (i.e., last) position and dimensions of the element.*
Invert: since the element is in the last position, we want to create the illusion that it’s in the first position, by using transform to modify its position and dimensions. This takes a little math, but it’s not too difficult.
Play: with the element inverted (and pretending to be in the first position), we can move it back to its last position by setting its transform to none.

Below is how these steps can be implemented:
const elm = document.querySelector('.some-element');

// First: get the current bounds
const first = elm.getBoundingClientRect();

// execute the script that causes layout change
doSomething();

// Last: get the final bounds
const last = elm.getBoundingClientRect();

// Invert: determine the delta between the
// first and last bounds to invert the element
const deltaX = first.left - last.left;
const deltaY = first.top - last.top;
const deltaW = first.width / last.width;
const deltaH = first.height / last.height;

// Play: animate the final element from its first bounds
// to its last bounds (which is no transform)
elm.animate([{
transformOrigin: 'top left',
transform: `
translate(${deltaX}px, ${deltaY}px)
scale(${deltaW}, ${deltaH})
`
}, {
transformOrigin: 'top left',
transform: 'none'
}], {
duration: 300,
easing: 'ease-in-out',
fill: 'both'
});
See the Pen How the FLIP technique works by David Khourshid (@davidkpiano) on CodePen.

There are two important things to note:

If the element’s size changed, you can transform scale in order to “resize” it with no performance penalty; however, make sure to set transformOrigin to 'top left' since that’s where we based our delta calculations.
We’re using the Web Animations API to animate the element here, but you’re free to use any other animation engine, such as GSAP, Anime, Velocity, Just-Animate, Mo.js and more.
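For instance, here is a rough sketch of how the Invert and Play steps could be handed off to GSAP instead of the Web Animations API. This is not from the original article; it assumes GSAP 3's gsap.fromTo() signature and reuses the elm and delta variables calculated earlier:

// Hypothetical GSAP equivalent of the Invert + Play steps (assumes GSAP 3)
import { gsap } from 'gsap';

gsap.fromTo(elm, {
  // Invert: pretend the element is still at its first position/size
  x: deltaX,
  y: deltaY,
  scaleX: deltaW,
  scaleY: deltaH,
  transformOrigin: 'top left'
}, {
  // Play: tween back to the element's natural (last) position
  x: 0,
  y: 0,
  scaleX: 1,
  scaleY: 1,
  duration: 0.3,
  ease: 'power2.inOut'
});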

Shared Element Transitions
One common use-case for transitioning an element between app views and states is that the final element might not be the same DOM element as the initial element. In Android, this is similar to a shared element transition, except that the element isn’t “recycled” from view to view in the DOM as it is on Android.
Nevertheless, we can still achieve the FLIP transition with a little magic illusion:
const firstElm = document.querySelector('.first-element');

// First: get the bounds and then hide the element (if necessary)
const first = firstElm.getBoundingClientRect();
firstElm.style.setProperty('visibility', 'hidden');

// execute the script that causes view change
doSomething();

// Last: get the bounds of the element that just appeared
const lastElm = document.querySelector('.last-element');
const last = lastElm.getBoundingClientRect();

// continue with the other steps, just as before.
// remember: you're animating the lastElm, not the firstElm.
Below is an example of how two completely disparate elements can appear to be the same element using shared element transitions. Click one of the pictures to see the effect.
See the Pen FLIP example with WAAPI by David Khourshid (@davidkpiano) on CodePen.

Parent-Child Transitions
With the previous implementations, the element bounds are based on the window. For most use cases, this is fine, but consider this scenario:

An element changes position and needs to transition.
That element contains a child element, which itself needs to transition to a different position inside the parent.

Since the previously calculated bounds are relative to the window, our calculations for the child element are going to be off. To solve this, we need to ensure that the bounds are calculated relative to the parent element instead:
const parentElm = document.querySelector('.parent');
const childElm = document.querySelector('.parent > .child');

// First: parent and child
const parentFirst = parentElm.getBoundingClientRect();
const childFirst = childElm.getBoundingClientRect();

doSomething();

// Last: parent and child
const parentLast = parentElm.getBoundingClientRect();
const childLast = childElm.getBoundingClientRect();

// Invert: parent
const parentDeltaX = parentFirst.left - parentLast.left;
const parentDeltaY = parentFirst.top - parentLast.top;

// Invert: child relative to parent
const childDeltaX = (childFirst.left - parentFirst.left)
- (childLast.left - parentLast.left);
const childDeltaY = (childFirst.top - parentFirst.top)
- (childLast.top - parentLast.top);

// Play: using the WAAPI
parentElm.animate([
{ transform: `translate(${parentDeltaX}px, ${parentDeltaY}px)` },
{ transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });

childElm.animate([
{ transform: `translate(${childDeltaX}px, ${childDeltaY}px)` },
{ transform: 'none' }
], { duration: 300, easing: 'ease-in-out' });
A few things to note here, as well:

The timing options for the parent and child (duration, easing, etc.) do not necessarily need to match with this technique. Feel free to be creative!
Changing dimensions in parent and/or child (width, height) was purposefully omitted in this example, since it is an advanced and complex topic. Let’s save that for another tutorial.
You can combine the shared element and parent-child techniques for greater flexibility.

Using Flipping.js for Full Flexibility
The above techniques might seem straightforward, but they can get quite tedious to code once you have to keep track of multiple elements transitioning. Android eases this burden by:

baking shared element transitions into the core SDK
allowing developers to identify which elements are shared by using a common android:transitionName XML attribute

I’ve created a small library called Flipping.js with the same idea in mind. By adding a data-flip-key="..." attribute to HTML elements, it’s possible to predictably and efficiently keep track of elements that might change position and dimensions from state to state.
For example, consider this initial view:
<section class="gallery">
<div class="photo-1" data-flip-key="photo-1">
<img src="/photo-1"/>
</div>
<div class="photo-2" data-flip-key="photo-2">
<img src="/photo-2"/>
</div>
<div class="photo-3" data-flip-key="photo-3">
<img src="/photo-3"/>
</div>
</section>
And this separate detail view:
<section class="details">
<div class="photo" data-flip-key="photo-1">
<img src="/photo-1"/>
</div>
<p class="description">
Lorem ipsum dolor sit amet...
</p>
</section>
Notice in the above example that there are 2 elements with the same data-flip-key="photo-1". Flipping.js tracks the “active” element by choosing the first element that meets these criteria:

The element exists in the DOM (i.e., it hasn’t been removed or detached)
The element is not hidden (hint: elm.getBoundingClientRect() will return { width: 0, height: 0 } for hidden elements)
Any custom logic specified in the selectActive option.

Getting Started with Flipping.js
There are a few different packages for Flipping, depending on your needs:

flipping.js: tiny and low-level; only emits events when element bounds change
flipping.web.js: uses WAAPI to animate transitions
flipping.gsap.js: uses GSAP to animate transitions
More adapters coming soon!

You can grab the minified code directly from unpkg:

https://unpkg.com/flipping@latest/dist/flipping.js
https://unpkg.com/flipping@latest/dist/flipping.web.js
https://unpkg.com/flipping@latest/dist/flipping.gsap.js

Or you can npm install flipping --save and import it into your projects:
// import not necessary when including the unpkg scripts in a <script src="..."> tag
import Flipping from 'flipping/adapters/web';

const flipping = new Flipping();

// First: let Flipping read all initial bounds
flipping.read();

// execute the change that causes any elements to change bounds
doSomething();

// Last, Invert, Play: the flip() method does it all
flipping.flip();
Handling FLIP transitions as a result of a function call is such a common pattern, that the .wrap(fn) method transparently wraps (or “decorates”) the given function by first calling .read(), then getting the return value of the function, then calling .flip(), then returning the return value. This leads to much less code:
const flipping = new Flipping();

const flippingDoSomething = flipping.wrap(doSomething);

// anytime this is called, FLIP will animate changed elements
flippingDoSomething();
Here is an example of using flipping.wrap() to easily achieve the shifting letters effect. Click anywhere to see the effect.
See the Pen Flipping Birthstones #Codevember by David Khourshid (@davidkpiano) on CodePen.

Adding Flipping.js to Existing Projects
In another article, we created a simple React gallery app using finite state machines. It works just as expected, but the UI could use some smooth transitions between states to prevent “jumping” and improve the user experience. Let’s add Flipping.js into our React app to accomplish this. (Keep in mind, Flipping.js is framework-agnostic.)
Step 1: Initialize Flipping.js
The Flipping instance will live on the React component itself, so that it’s isolated to only changes that occur within that component. Initialize Flipping.js by setting it up in the componentDidMount lifecycle hook:
componentDidMount() {
const { node } = this;
if (!node) return;

this.flipping = new Flipping({
parentElement: node
});

// initialize flipping with the initial bounds
this.flipping.read();
}
By specifying parentElement: node, we’re telling Flipping to only look for elements with a data-flip-key in the rendered App, instead of the entire document.
Then, modify the HTML elements with the data-flip-key attribute (similar to React’s key prop) to identify unique and “shared” elements:
renderGallery(state) {
return (
<section className="ui-items" data-state={state}>
{this.state.items.map((item, i) =>
<img
src={item.media.m}
className="ui-item"
style={{'--i': i}}
key={item.link}
onClick={() => this.transition({
type: 'SELECT_PHOTO', item
})}
data-flip-key={item.link}
/>
)}
</section>
);
}
renderPhoto(state) {
if (state !== 'photo') return;

return (
<section
className="ui-photo-detail"
onClick={() => this.transition({ type: 'EXIT_PHOTO' })}>
<img
src={this.state.photo.media.m}
className="ui-photo"
data-flip-key={this.state.photo.link}
/>
</section>
)
}
Notice how the img.ui-item and img.ui-photo are represented by data-flip-key={item.link} and data-flip-key={this.state.photo.link} respectively: when the user clicks on an img.ui-item, that item is set to this.state.photo, so the .link values will be equal.
And since they are equal, Flipping will smoothly transition from the img.ui-item thumbnail to the larger img.ui-photo.
Now we need to do two more things:

call this.flipping.read() whenever the component will update
call this.flipping.flip() whenever the component did update

Some of you might have already guessed where these method calls are going to occur: componentWillUpdate and componentDidUpdate, respectively:
componentWillUpdate() {
this.flipping.read();
}

componentDidUpdate() {
this.flipping.flip();
}
And, just like that, if you’re using a Flipping adapter (such as flipping.web.js or flipping.gsap.js), Flipping will keep track of all elements with a [data-flip-key] and smoothly transition them to their new bounds whenever they change. Here is the final result:
See the Pen FLIPping Gallery App by David Khourshid (@davidkpiano) on CodePen.

If you would rather implement custom animations yourself, you can use flipping.js as a simple event emitter. Read the documentation for more advanced use-cases.
Flipping.js and its adapters handle the shared element and parent-child transitions by default, as well as:

interrupted transitions (in adapters)
enter/move/leave states
plugin support for plugins such as mirror, which allows newly entered elements to “mirror” another element’s movement
and more planned in the future!

Resources
Similar libraries include:

FlipJS by Paul Lewis himself, which handles simple single-element FLIP transitions
React-Flip-Move, a useful React library by Josh Comeau
BarbaJS, not necessarily a FLIP library, but one that allows you to add smooth transitions between different URLs, without page jumps.

Further resources:

Animating the Unanimatable - Joshua Comeau
FLIP your Animations - Paul Lewis
Pixels are Expensive - Paul Lewis
Improving User Flow Through Page Transitions - Luigi de Rosa
Smart Transitions in User Experience Design - Adrian Zumbrunnen
What Makes a Good Transition? - Nick Babich
Motion Guidelines in Google’s Material Design
Shared Element Transition with React Native

Animating Layouts with the FLIP Technique is a post from CSS-Tricks
Source: CssTricks


Netflix functions without client-side React, and it’s a good thing

Recently Netflix removed client-side React from their landing page which caused a bit of a stir. So Jake Archibald investigated why the team did that and how it’s actually a good thing for the React community in the long term:

When the PS4 was released in 2013, one of its advertised features was progressive downloading – allowing gamers to start playing a game while it's downloading. Although this was a breakthrough for consoles, the web has been doing this for 20 years. The HTML spec (warning: 8mb document), despite its size, starts rendering once ~20k is fetched.
Unfortunately, it's a feature we often engineer-away with single page apps, by channelling everything through a medium that isn't streaming-friendly, such as a large JS bundle.

I like the whole vibe of this post because it suggests that we should be careful when we pick our tools; we should only pick the right tool for the right job, instead of treating every issue as if it needs a giant hammer made of JavaScript. Also! Burke Holland wrote a funny piece last week on this topic with some of his thoughts.
Direct Link to Article — Permalink
Netflix functions without client-side React, and it’s a good thing is a post from CSS-Tricks
Source: CssTricks


Apps Have Command Prompts Now

Command lines were an early innovation in computers, and were the dominant way to interact with them from the 1960s through the '80s. They gave way to GUIs for most users after that. I don't think we need to launch a scientific study to figure out why. Most users find it more comfortable and intuitive to accomplish things with interactive moments and visual feedback.
But command lines never went away. GUIs are generally a limited abstraction of what you could do through a command line anyway, so power users gravitate toward the closer-to-the-metal nature of the command line.
But we might be in the middle of a return to a happy medium.

Finder-ing
We know Apple is quite fond of cutting features. Particularly little-used features or power-user-only features. Curiously, this one has stuck around:
The "Go To Folder" dialog, via Command-Shift-G
William Pearson wrote:
If there’s only one keyboard shortcut you should remember in Mac OS X it’s this: Go To Folder. ... Is there a keyboard shortcut that is more useful than “Go To Folder”? I don’t think so.
I'm not sure about that, but it's clear some people take this shortcut pretty seriously! And yes, a keyboard shortcut, but one that essentially opens a command line prompt that can do one thing.
I guess that isn't terribly surprising, considering the success of apps like Alfred, which perhaps it's fair to say is a command line for finding, opening and doing things.

The Finder also has Spotlight (since OS X 10.4 Tiger anyway, 2005) which is largely a thing for search (especially these days, as it returns results from the web as well).

Spotlight has a keyboard command (Command-Space) and then you just type to do stuff, so it's very much a command prompt. Just one that's pretty decked out in user-friendliness.
And while we're on this bend, we can't forget about Quicksilver. Interestingly, both Alfred and Quicksilver postdate Spotlight. I guess that speaks to Spotlight kind of sucking in the early days, and leaving people wanting more.
Code Editors
Most developers, I'm sure, are quite aware of the literal command line. Almost all the big development tools are command line based. Everything from Git to Gulp, image optimizers to package managers, Capistrano to webpack... they are all tools you use from the command line. If you have a GUI for any of them, it's probably a light abstraction over command line methods.
But, aside from the most hardcore of all Vim users who do all their code editing from a terminal window, we don't actually write code on the command line, but in an editor with a GUI.
Code Editors are a perfect breeding ground for ideas that combine the best of GUI's and command lines.
Let's look at Sublime Text. When creating a new folder, I might want to do that with the GUI. There I can see the existing folder structure and see exactly what I'm doing.

But say I want to jump to a file I know exists. Say it's buried a number of directories deep, and I'm glad that it is because it adheres to the structure of the current project. I might have to click - click - scroll - click - scroll - click to get there with a GUI, which isn't the greatest interaction.
Instead, I can fire up a command prompt in Sublime Text, in this case its iconic Goto Anything command, type in something close to the file name, and find it.

Perhaps even more command-prompt-like is the literal Command Palette, which is an extensible command-running menu triggered by a keyboard shortcut. Perhaps I want to run a specific Emmet command, correct syntax highlighting, or trigger a find/replace extension to do its thing.

These things are in a similar boat as finding a file-in-a-haystack. There could be hundreds or thousands of commands. Your brain and typing fingers can find them quicker than your hand on a mouse in a UI can.
Sketch Runner
Sketch Runner is a popular plugin for Sketch that adds a command prompt to Sketch. Here's their product video:
Video: https://vimeo.com/208463550
If you think of elements and groups in a design document just like files in a code project, the "Jump anywhere" feature makes sense. It's just like "Goto Anything".
Perhaps your design document has hundreds of symbols. Your brain probably has a mental map of them that is quicker to navigate than moving your mouse through nested menus. Thus, a command prompt to type in the name (fuzzy search) and insert it.
Slack
Too many Slacks, amiright?
I don't think it would be terribly uncommon to have a dozen Slack teams, hundreds of channels, and thousands of people. Particularly if you're fond of joining "Public Slacks", like say the A11Y Slack.
Say I want to shoot a message to Sarah. I can open the Quick Switcher and just start typing her name and get there.

You have to be in the right Slack for names (or channels) to work, but you can get to the right Slack via the Quick Switcher and then do a new search.
Notion
Notion has a pretty awesome take on the command prompt. It's available everywhere you are in a document just by pressing the slash / key.

There are probably ~30 things you can do. The menu is nice, but being able to type what you mean quickly is even better.
In addition to searching the menu, you can just complete the word after the slash (a slash command) to do the thing.
Chrome DevTools
David Khourshid:
I've been using the command prompt in Chrome Dev Tools _so much more_ because opening the Animate tab takes like 17 clicks.
Yet another good use case!

So
There are a lot of apps doing interesting things here. I'm a fan of all of it!
Even more could probably benefit from it. Photoshop is notoriously complex, but a lot of us have familiarity with the things it can do. Seems like a perfect candidate for a fuzzy-search enabled command prompt!
Users might be able to take things into their own hands a bit here too. Alfred users could use the Menu Bar Search plugin which allows you to:
Search for and activate application menu items for the frontmost application. The workflow lists the menu items for the current application, filtering them by whatever is entered into Alfred.
Apps can easily have dozens of menu items, and this would make them all command prompt-able.
Standardization is also an interesting thing to consider. It seems some apps have followed each other. Command-Shift-P is the Sublime Text command runner, which was used by Chrome DevTools, and also by VS Code. I kinda like the Command-Space of Spotlight, but that doesn't mean web-based apps should copy it. In fact, it means they can't, because the OS would override it.
TL;DR
When UI is cumbersome or impractical, a command line makes good sense. Many apps benefit from offering both a UI and a command line, in various forms.

Apps Have Command Prompts Now is a post from CSS-Tricks
Source: CssTricks


24 Essential Apps to Manage Your PPC Campaigns from Anywhere by @jonleeclark

These are the essential PPC apps you need to manage everything from ad accounts to reporting to team collaboration.The post 24 Essential Apps to Manage Your PPC Campaigns from Anywhere by @jonleeclark appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/


Techniques To Humanize Your Website and Connect With Your Audience

Have you ever visited a website and wondered if anyone was working round the clock behind the scenes to make sure that your user experience feels personal? A lot of business owners are using their websites as a way to connect with their prospective clients, especially if they cannot personally communicate with them. This is the reason you need to go to great lengths to humanize your website.
In this article, we will look into the importance of making your websites more human, and some effective tips and tricks to pull this off.

The website represents you and your business
The success of any business involves a lot of factors to consider. Some of these include having the right mindset on how to take care of the business, and having enough resources to start the business. Meanwhile, other companies focus on hiring the right people who can do the job, as well as providing products and services that people are likely to patronize.
All of these factors are key to making any business achieve its goals. But have you realized that you can accomplish all of these things by positioning your website properly?
The website lets people know more about your company
Years ago, promoting a business was done mainly through advertisements on TV, radio and print media. However, with the technology that we have today, the internet has become central to marketing – whether for brick-and-mortar stores or online businesses. These days, practically every business owner wants to take advantage of online media to promote its brand across the globe with just a few clicks.
Putting up a website has become one of the best marketing tools to reach out to consumers. Although earlier iterations of webpages focused solely on content, functionality and user experience are now considered some of the key ingredients that make a website viral or popular.
Nevertheless, more than providing vivid colors and extraordinary web design, there are two important features that most consumers look for in a website: instantly available information, and customer satisfaction.
The website should give your prospective clients what they need
The content of the website plays a vital role in the success of any business. Website owners should stop beating around the bush and just go straight to the relevant information that a site visitor may want to have. Giving your site visitors high-value content propels them to browse through all of your site’s pages to check out what you can offer.
No matter what kind of content you provide, for as long as it is reader-friendly, then you can absolutely capture anyone’s attention.
Sure, you can configure your website to follow every detail in the SEO best practices rulebook. However, you need to remember that you are writing for humans – people who will potentially find your business on search engines. Do you think people would even bother to stay on your page for more than 5 seconds if they notice that you’re flooding them with repetitive keywords?
In other words, although using the right keywords in your site content is helpful in terms of SEO, you need to construct your website to satisfy your site visitors (and not the search bots).
Give your readers informative, interesting, and engaging material. Make sure that from the title, you already can excite their minds to read the rest of what is written on the page.

Tips to Humanize Your Website
One of the fundamental strategies to humanize your website is to make the content friendly and engaging to your intended audience. You want people to be interested in what you can offer them. Therefore, your goal is to create content that your target market likes.
It’s important for your site visitors to feel that they are being valued. They want to feel that their needs are being met by a real person, and not by a chatbot or a computer that’s running 24/7. In short, people want information with a heart.
Here are more techniques to humanize your website in order to connect with your target audience better:
Use human emotions

You’ve probably come across some websites where you feel like you’re reading your college professor’s lecture notes. That’s common among a lot of business websites, which use a strictly formal tone by default.
While doing business is something to be taken seriously, not everyone may be interested to read technical jargon or overly serious content. That is why adding human emotions is effective – it can trigger a positive response among your visitors.
To achieve this, you may start by listing the products that your company offers and identifying the features of each that people will probably appreciate. Be creative in using words that can trigger the user’s emotions, and think about how your copy makes a user feel.
This can be successfully done by incorporating a story that everyone can relate to. People are fond of reading stories that they see themselves in. This kind of emotive or affective content (to humanize your website) will push them to action, whether it’s buying your products or signing up for your weekly newsletter.
Make use of creative content tools

It may be difficult to constantly update your website and provide new stories to tell your online visitors. The solution is simple: Search for apps that can help you create compelling headlines to keep your online visitors returning for more.
People generally like unique content along with high-quality images that can tickle their imagination as they read along. Once a visitor feels that the content is worth reading, he will likely become a regular follower of your website. This kind of following may even produce a lead or a sale.
Create unique content

How can you provide a weekly update without having the right materials to use? Get inspiration or ideas by checking what is trending. Discover what other people are talking about, and share something that is interesting and engaging.
Searching for trending hashtags can give you some ideas on how to create your content that will grab attention. Letting your followers realize that you’re updated with the latest trends is an effective way to humanize your website.
Find the right balance in terms of posting frequency

You may feel obliged to constantly update your content. But realistically, posting bi-monthly is ideal as you let your online viewers have time to read your latest post.
If you feel like sharing some more thoughts, you may do so by sharing shorter posts of about 500 words. Just make sure that everything that you share with your audience is carefully researched and written in high quality.
Initiate a call-to-action

Sometimes a website visitor may need a little more convincing before finally making a purchase. It can be helpful to leave a question for readers to get them thinking, or to give them a direction (a call-to-action) so they have an opportunity to decide.
For instance, you can invite them to sign up for a weekly update, or leave them a question to answer. This not only makes them feel that there is a real person behind the website, but it also helps generate leads, who can then be given priority access to member-only promos.
Provide an opportunity for discussions

Some websites make use of a community board wherein visitors/consumers leave their comments, suggestions or opinions. This can help others in deciding to buy a certain product or service.
As a website owner, make sure that you take part in the discussions as well. By doing this, users are able to get first-hand information from you, in order to help them come up with a decision. In short, you can already humanize your website just by personally engaging with your followers.
Tap someone to check on your content

Having an amazing website should not only be based on visual aesthetics or effective social media sharing. There is also a need to make the content as perfect as possible in terms of grammar and spelling.
You may feel that you have written solid, well-crafted content, but it might be best to ask someone to review your work. This gives you better assurance that your text is 100% error-free.
At times, when we have a lot of things on our minds that we want to write about, we tend to think faster than we can write. As a result, we unknowingly drop punctuation marks, miss the plural forms of some words, misspell words, and the like. Hiring a proofreader can help you create excellent content that shows people you really know what you are talking about.
Besides, bad grammar may appear as if you used article spinning software to create your content. Aside from turning off your readers with weird text, article spinning also gets you penalized in terms of SEO.
Conclusion
Readers these days are very meticulous when it comes to browsing websites. Although web design continues to have importance in terms of catching attention, failing to humanize your website may turn off your potential clients.
The best way to connect with your prospects online is to make your website feel as human as possible. Nobody wants to experience talking to a robot while they’re browsing your site – unless they’re huge fans of the Terminator’s Skynet!
Make your business flourish by making a website that can do more than merely enumerating your products or services. Humanize your website by telling a story to keep your online audience more engaged. People will become your loyal customers when you are able to strengthen your brand through sharing great content, providing high quality products and services, and establishing human communication that encourages interaction.
The post Techniques To Humanize Your Website and Connect With Your Audience appeared first on Web Designer Hub.
Source: http://www.webdesignerhub.com


Accessible Web Apps with React, TypeScript, and AllyJS

Accessibility is an aspect of web development that is often overlooked. I would argue that it is as vital as overall performance and code reusability. We justify our endless pursuit of better performance and responsive design by citing the users, but ultimately these pursuits are done with the user's device in mind, not the user themselves and their potential disabilities or restrictions.
A responsive app should be one that delivers its content based on the needs of the user, not only their device.
Luckily, there are tools to help alleviate the learning curve of accessibility-minded development. For example, GitHub recently released their accessibility error scanner, AccessibilityJS, and Deque has aXe. This article will focus on a different one: Ally.js, a library simplifying certain accessibility features, functions, and behaviors.

One of the most common pain points regarding accessibility is dialog windows.
There are a lot of considerations to take into account in terms of communicating to the user about the dialog itself, ensuring ease of access to its content, and returning to the dialog's trigger upon close.
A demo on the Ally.js website addresses this challenge, and it helped me port its logic to my current project, which uses React and TypeScript. This post will walk through building an accessible dialog component.
Demo of accessible dialog window using Ally.js within React and TypeScript
View the live demo
Project Setup with create-react-app
Before getting into the use of Ally.js, let's take a look at the initial setup of the project. The project can be cloned from GitHub or you can follow along manually. The project was initiated using create-react-app in the terminal with the following options:
create-react-app my-app --scripts-version=react-scripts-ts
This created a project using React and ReactDOM version 15.6.1 along with their corresponding @types.
With the project created, let's go ahead and take a look at the package file and project scaffolding I am using for this demo.
Project architecture and package.json file
As you can see in the image above, there are several additional packages installed but for this post we will ignore those related to testing and focus on the primary two, ally.js and babel-polyfill.
Let's install both of these packages via our terminal.
yarn add ally.js --dev && yarn add babel-polyfill --dev
For now, let's leave `/src/index.tsx` alone and hop straight into our App container.
App Container
The App container will handle our state that we use to toggle the dialog window. Now, this could also be handled by Redux, but that will be excluded for the sake of brevity.
Let's first define the state and toggle method.
interface AppState {
showDialog: boolean;
}

class App extends React.Component<{}, AppState> {
state: AppState;

constructor(props: {}) {
super(props);

this.state = {
showDialog: false
};
}

toggleDialog() {
this.setState({ showDialog: !this.state.showDialog });
}
}
The above gets us started with our state and the method we will use to toggle the dialog. Next would be to create an outline for our render method.
class App extends React.Component<{}, AppState> {
...

render() {
return (
<div className="site-container">
<header>
<h1>Ally.js with React &amp; Typescript</h1>
</header>
<main className="content-container">
<div className="field-container">
<label htmlFor="name-field">Name:</label>
<input type="text" id="name-field" placeholder="Enter your name" />
</div>
<div className="field-container">
<label htmlFor="food-field">Favourite Food:</label>
<input type="text" id="food-field" placeholder="Enter your favourite food" />
</div>
<div className="field-container">
<button
className='btn primary'
tabIndex={0}
title='Open Dialog'
onClick={() => this.toggleDialog()}
>
Open Dialog
</button>
</div>
</main>
</div>
);
}
}
Don't worry much about the styles and class names at this point. These elements can be styled as you see fit. However, feel free to clone the GitHub repo for the full styles.
At this point we should have a basic form on our page with a button that when clicked toggles our showDialog state value. This can be confirmed by using React's Developer Tools.
So let's now have the dialog window toggle as well with the button. For this let's create a new Dialog component.
Dialog Component
Let's look at the structure of our Dialog component which will act as a wrapper of whatever content (children) we pass into it.
interface Props {
children: object;
title: string;
description: string;
close(): void;
}

class Dialog extends React.Component<Props> {
dialog: HTMLElement | null;

render() {
return (
<div
role="dialog"
tabIndex={0}
className="popup-outer-container"
aria-hidden={false}
aria-labelledby="dialog-title"
aria-describedby="dialog-description"
ref={(popup) => {
this.dialog = popup;
}
}
>
<h5
id="dialog-title"
className="is-visually-hidden"
>
{this.props.title}
</h5>
<p
id="dialog-description"
className="is-visually-hidden"
>
{this.props.description}
</p>
<div className="popup-inner-container">
<button
className="close-icon"
title="Close Dialog"
onClick={() => {
this.props.close();
}}
>
×
</button>
{this.props.children}
</div>
</div>
);
}
}
We begin this component by creating the Props interface. This will allow us to pass in the dialog's title and description, two important pieces for accessibility. We will also pass in a close method, which will refer back to the toggleDialog method from the App container. Lastly, we create the functional ref to the newly created dialog window to be used later.
The following styles can be applied to create the dialog window appearance.
.popup-outer-container {
align-items: center;
background: rgba(0, 0, 0, 0.2);
display: flex;
height: 100vh;
justify-content: center;
padding: 10px;
position: absolute;
width: 100%;
z-index: 10;
}

.popup-inner-container {
background: #fff;
border-radius: 4px;
box-shadow: 0px 0px 10px 3px rgba(119, 119, 119, 0.35);
max-width: 750px;
padding: 10px;
position: relative;
width: 100%;
}

.popup-inner-container:focus-within {
outline: -webkit-focus-ring-color auto 2px;
}

.close-icon {
background: transparent;
color: #6e6e6e;
cursor: pointer;
font: 2rem/1 sans-serif;
position: absolute;
right: 20px;
top: 1rem;
}
Now, let's tie this together with the App container and then get into Ally.js to make this dialog window more accessible.
App Container
Back in the App container, let's add a check inside of the render method so any time the showDialog state updates, the Dialog component is toggled.
class App extends React.Component<{}, AppState> {
...

checkForDialog() {
if (this.state.showDialog) {
return this.getDialog();
} else {
return false;
}
}

getDialog() {
return (
<Dialog
title="Favourite Holiday Dialog"
description="Add your favourite holiday to the list"
close={() => { this.toggleDialog(); }}
>
<form className="dialog-content">
<header>
<h1 id="dialog-title">Holiday Entry</h1>
<p id="dialog-description">Please enter your favourite holiday.</p>
</header>
<section>
<div className="field-container">
<label htmlFor="within-dialog">Favourite Holiday</label>
<input id="within-dialog" />
</div>
</section>
<footer>
<div className="btns-container">
<Button
type="primary"
clickHandler={() => { this.toggleDialog(); }}
msg="Save"
/>
</div>
</footer>
</form>
</Dialog>
);
}

render() {
return (
<div className="site-container">
{this.checkForDialog()}
...
);
}
}
What we've done here is add the methods checkForDialog and getDialog.
Inside of the render method, which runs any time the state updates, there is a call to run checkForDialog. So upon clicking the button, the showDialog state will update, causing a re-render, calling checkForDialog again. Only now, showDialog is true, triggering getDialog. This method returns the Dialog component we just built to be rendered onto the screen.
The above sample includes a Button component that has not been shown.
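For completeness, here is a minimal sketch of what that Button component might look like, inferred from its usage in getDialog() above. The prop names (type, msg, clickHandler) come from that usage; the markup and class names are assumptions, not code from the original project.
import * as React from 'react';

// Hypothetical Button component, reconstructed from its usage in getDialog().
interface ButtonProps {
  type: string;          // e.g. "primary" -- assumed to map onto a class name
  msg: string;           // the button label
  clickHandler(): void;  // called when the button is clicked
}

const Button = (props: ButtonProps) => (
  <button
    type="button" // avoid submitting the surrounding form
    className={`btn ${props.type}`}
    onClick={() => props.clickHandler()}
  >
    {props.msg}
  </button>
);

export default Button;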
Now, we should have the ability to open and close our dialog. So let's take a look at what problems exist in terms of accessibility and how we can address them using Ally.js.

Using only your keyboard, open the dialog window and try to enter text into the form. You'll notice that you must tab through the entire document to reach the elements within the dialog. This is a less-than-ideal experience. When the dialog opens, our focus should be the dialog  -  not the content behind it. So let's look at our first use of Ally.js to begin remedying this issue.
Ally.js
Ally.js is a library providing various modules to help simplify common accessibility challenges. We will be using four of these modules for the Dialog component.
The .popup-outer-container acts as a mask that lays over the page blocking interaction from the mouse. However, elements behind this mask are still accessible via keyboard, which should be disallowed. To do this the first Ally module we'll incorporate is maintain/disabled. This is used to disable any set of elements from being focussed via keyboard, essentially making them inert.
Unfortunately, implementing Ally.js into a project with TypeScript isn't as straightforward as other libraries. This is due to Ally.js not providing a dedicated set of TypeScript definitions. But no worries, as we can declare our own modules via TypeScript's types files.
In the original screenshot showing the scaffolding of the project, we see a directory called types. Let's create that and inside create a file called `global.d.ts`.
Inside of this file let's declare our first Ally.js module from the esm/ directory which provides ES6 modules but with the contents of each compiled to ES5. These are recommended when using build tools.
declare module 'ally.js/esm/maintain/disabled';
With this module now declared in our global types file, let's head back into the Dialog component to begin implementing the functionality.
Dialog Component
We will be adding all the accessibility functionality for the Dialog to its component to keep it self-contained. Let's first import our newly declared module at the top of the file.
import Disabled from 'ally.js/esm/maintain/disabled';
The goal of using this module is that, once the Dialog component mounts, everything on the page will be disabled, with the dialog itself filtered out.
So let's use the componentDidMount lifecycle hook for attaching any Ally.js functionality.
interface Handle {
disengage(): void;
}

class Dialog extends React.Component<Props, {}> {
dialog: HTMLElement | null;
disabledHandle: Handle;

componentDidMount() {
this.disabledHandle = Disabled({
filter: this.dialog,
});
}

componentWillUnmount() {
this.disabledHandle.disengage();
}
...
}
When the component mounts, we store the Disabled functionality in the newly created component property disabledHandle. Because there are no defined types yet for Ally.js, we can create a generic Handle interface containing the disengage function property. We will be using this Handle again for other Ally modules, hence keeping it generic.
By using the filter property of the Disabled import, we're able to tell Ally.js to disable everything in the document except for our dialog reference.
Lastly, whenever the component unmounts we want to remove this behaviour. So inside of the componentWillUnmount hook, we disengage() the disableHandle.

We will now follow this same process for the final steps of improving the Dialog component. We will use the additional Ally modules:

maintain/tab-focus
query/first-tabbable
when/key

Let's update the `global.d.ts` file so it declares these additional modules.
declare module 'ally.js/esm/maintain/disabled';
declare module 'ally.js/esm/maintain/tab-focus';
declare module 'ally.js/esm/query/first-tabbable';
declare module 'ally.js/esm/when/key';
As well as import them all into the Dialog component.
import Disabled from 'ally.js/esm/maintain/disabled';
import TabFocus from 'ally.js/esm/maintain/tab-focus';
import FirstTab from 'ally.js/esm/query/first-tabbable';
import Key from 'ally.js/esm/when/key';
Tab Focus
After disabling the document with the exception of our dialog, we now need to restrict tabbing access further. Currently, upon tabbing to the last element in the dialog, pressing tab again will begin moving focus to the browser's UI (such as the address bar). Instead, we want to leverage tab-focus to ensure the tab key will reset to the beginning of the dialog, not jump to the window.
class Dialog extends React.Component<Props> {
dialog: HTMLElement | null;
disabledHandle: Handle;
focusHandle: Handle;

componentDidMount() {
this.disabledHandle = Disabled({
filter: this.dialog,
});

this.focusHandle = TabFocus({
context: this.dialog,
});
}

componentWillUnmount() {
this.disabledHandle.disengage();
this.focusHandle.disengage();
}
...
}
We follow the same process here as we did for the disabled module. Let's create a focusHandle property which will assume the value of the TabFocus module import. We define the context to be the active dialog reference on mount and then disengage() this behaviour, again, when the component unmounts.
At this point, with a dialog window open, hitting tab should cycle through the elements within the dialog itself.
Now, wouldn't it be nice if the first element of our dialog was already focused upon opening?
First Tab Focus
Leveraging the first-tabbable module, we are able to set focus to the first element of the dialog window whenever it mounts.
class Dialog extends React.Component<Props> {
dialog: HTMLElement | null;
disabledHandle: Handle;
focusHandle: Handle;

componentDidMount() {
this.disabledHandle = Disabled({
filter: this.dialog,
});

this.focusHandle = TabFocus({
context: this.dialog,
});

let element = FirstTab({
context: this.dialog,
defaultToContext: true,
});
element.focus();
}
...
}
Within the componentDidMount hook, we create the element variable and assign it to our FirstTab import. This will return the first tabbable element within the context that we provide. Once that element is returned, calling element.focus() will apply focus automatically.
Now that we have the behavior within the dialog working pretty well, we want to improve keyboard accessibility. As a strict laptop user myself (no external mouse, monitor, or any peripherals) I tend to instinctively press esc whenever I want to close any dialog or popup. Normally, I would write my own event listener to handle this behavior, but Ally.js provides the when/key module to simplify this process as well.
class Dialog extends React.Component<Props> {
dialog: HTMLElement | null;
disabledHandle: Handle;
focusHandle: Handle;
keyHandle: Handle;

componentDidMount() {
this.disabledHandle = Disabled({
filter: this.dialog,
});

this.focusHandle = TabFocus({
context: this.dialog,
});

let element = FirstTab({
context: this.dialog,
defaultToContext: true,
});
element.focus();

this.keyHandle = Key({
escape: () => { this.props.close(); },
});
}

componentWillUnmount() {
this.disabledHandle.disengage();
this.focusHandle.disengage();
this.keyHandle.disengage();
}
...
}
Again, we provide a Handle property to our class which will allow us to easily bind the esc functionality on mount and then disengage() it on unmount. And like that, we're now able to easily close our dialog via the keyboard without necessarily having to tab to a specific close button.
Lastly (whew!), upon closing the dialog window, the user's focus should return to the element that triggered it: in this case, the Open Dialog button in the App container. This isn't built into Ally.js, but it's a recommended best practice that, as you'll see, can be added in with little hassle.
class Dialog extends React.Component<Props> {
dialog: HTMLElement | null;
disabledHandle: Handle;
focusHandle: Handle;
keyHandle: Handle;
focusedElementBeforeDialogOpened: HTMLInputElement | HTMLButtonElement;

componentDidMount() {
if (document.activeElement instanceof HTMLInputElement ||
document.activeElement instanceof HTMLButtonElement) {
this.focusedElementBeforeDialogOpened = document.activeElement;
}

this.disabledHandle = Disabled({
filter: this.dialog,
});

this.focusHandle = TabFocus({
context: this.dialog,
});

let element = FirstTab({
context: this.dialog,
defaultToContext: true,
});

this.keyHandle = Key({
escape: () => { this.props.close(); },
});
element.focus();
}

componentWillUnmount() {
this.disabledHandle.disengage();
this.focusHandle.disengage();
this.keyHandle.disengage();
this.focusedElementBeforeDialogOpened.focus();
}
...
}
What has been done here is a property, focusedElementBeforeDialogOpened, has been added to our class. Whenever the component mounts, we store the current activeElement within the document to this property.
It's important to do this before we disable the entire document or else document.activeElement will return null.
Then, like we had done with setting focus to the first element in the dialog, we will use the .focus() method of our stored element on componentWillUnmount to apply focus to the original button upon closing the dialog. This functionality has been wrapped in a type guard to ensure the element supports the focus() method.

Now that our Dialog component is working, accessible, and self-contained, we are ready to build our App. Except running yarn test or yarn build will result in an error, something to this effect:
[path]/node_modules/ally.js/esm/maintain/disabled.js:21
import nodeArray from '../util/node-array';
^^^^^^

SyntaxError: Unexpected token import
Despite Create React App and its test runner, Jest, supporting ES6 modules, the ESM-declared modules still cause an issue. This brings us to the final step of integrating Ally.js with React: the babel-polyfill package.
All the way back at the beginning of this post (literally, ages ago!), I showed the additional packages to install, the second of which was babel-polyfill. With it installed, let's head to our app's entry point, in this case ./src/index.tsx.
Index.tsx
At the very top of this file, let's import babel-polyfill. This will emulate a full ES2015+ environment and is intended to be used in an application rather than a library/tool.
import 'babel-polyfill';
With that, we can return to our terminal to run the test and build scripts from create-react-app without any error.
Demo of accessible dialog window using Ally.js within React and TypeScript
View the live demo

Now that Ally.js is incorporated into your React and TypeScript project, more steps can be taken to ensure your content can be consumed by all users, not just all of their devices.
For more information on accessibility, please check out these great resources:

Accessible Web Apps with React, TypeScript & Ally.js on Github
Start Building Accessible Web Applications Today
HTML Codesniffer
Web Accessibility Best Practices
Writing CSS with Accessibility in Mind
Accessibility Checklist

Accessible Web Apps with React, TypeScript, and AllyJS is a post from CSS-Tricks
Source: CssTricks


Move Slowly and Fix Things

Synoptic Table of Physiognomic Traits
Ruminations on the heavy weight of software design in the 21st century.
Recently I took a monthlong sabbatical from my job as a designer at Basecamp. (Basecamp is an incredible company that gives us a paid month off every 3 years.)
When you take 30 days away from work, you have a lot of time and headspace that’s normally used up. Inevitably you start to reflect on your life. And so, I pondered what the hell I’m doing with mine. What does it mean to be a software designer in 2018, compared to when I first began my weird career in the early 2000s?
The answer is weighing on me. As software continues to invade our lives in surreptitious ways, the social and ethical implications are increasingly significant. Our work is HEAVY and it’s getting heavier all the time. I think a lot of designers haven’t deeply considered this, and they don’t appreciate the real-life effects of the work they’re doing.
Here’s a little example. About 10 years ago, Twitter looked like so:
Twitter circa 2007
How cute was that? If you weren’t paying attention back then, Twitter was kind of a joke. It was a silly viral app where people wrote about their dog or their ham sandwich.
Today, things are a wee bit different. Twitter is now the megaphone for the leader of the free world, who uses it to broadcast his every whim. It’s also the world’s best source for real-time news, and it’s full of terrible abuse problems.
That’s a massive sea change! And it all happened in only 10 years. Do you think the creators of that little 2007 status-sharing concept had any clue this is where they’d end up, just a decade later? Seems like they didn’t:

People can’t decide whether Twitter is the next YouTube, or the digital equivalent of a hula hoop. To those who think it’s frivolous, Evan Williams responds: “Whoever said that things have to be useful?”
- Twitter: Is Brevity The Next Big Thing? (Newsweek, April 2007)

Considering these shallow beginnings, is it any surprise that Twitter has continually struggled at running a massive, serious global communications platform, which now affects the world order? That’s not what they originally built. It grew into a Frankenstein’s monster, and now they’re not quite sure how to handle it.
I’m not picking on Twitter in particular, but its trajectory illustrates a systemic problem. Designers and programmers are great at inventing software. We obsess over every aspect of that process: the tech we use, our methodology, the way it looks, and how it performs.
Unfortunately we’re not nearly as obsessed with what happens after that, when people integrate our products into the real world. They use our stuff and it takes on a life of its own. Then we move on to making the next thing. We’re builders, not sociologists.
This approach wasn’t a problem when apps were mostly isolated tools people used to manage spreadsheets or send emails. Small products with small impacts. But now most software is so much more than that. It listens to us. It goes everywhere we go. It tracks everything we do. It has our fingerprints. Our heart rate. Our money. Our location. Our face. It’s the primary way we communicate our thoughts and feelings to our friends and family. It’s deeply personal and ingrained into every aspect of our lives. It commands our gaze more and more every day.
We’ve rapidly ceded an enormous amount of trust to software, under the hazy guise of forward progress and personal convenience. And since software is constantly evolving—one small point release at a time—each new breach of trust or privacy feels relatively small and easy to justify. Oh, they’ll just have my location. Oh, they’ll just have my identity. Oh, they’ll just have an always-on microphone in the room.
Most software products are owned and operated by corporations, whose business interests often contradict their users’ interests. Even small, harmless-looking apps might be harvesting data about you and selling it. And that’s not even counting the army of machine learning bots that will soon be unleashed to make decisions for us.
It all sounds like an Orwellian dystopia when you write it out like this, but this is not fiction. It’s the real truth.
A scene from WALL-E, or the actual software industry in 2018?
See what I mean by HEAVY? Is this what we signed up for, when we embarked on a career in tech?
15 years ago, it was a slightly different story. The Internet was a nascent and bizarre wild west, and it had an egalitarian vibe. It was exciting and aspirational — you’d get paid to make cool things in a fast-moving industry, paired with the hippie notion that design can change the world. Well, that motto was right on the money. There’s just one part we forgot: change can have a dark side too.
If you’re a designer, ask yourself this question…
Is your work helpful or harmful?
You might have optimistically deluded yourself into believing it’s always helpful because you’re a nice person, and design is a noble-seeming endeavor, and you have good intentions. But let’s be brutally honest for a minute.
If you’re designing sticky features that are meant to maximize the time people spend using your product instead of doing something else in their life, is that helpful?
If you’re trying to desperately inflate the number of people on your platform so you can report corporate growth to your shareholders, is that helpful?
If your business model depends on using dark patterns or deceptive marketing to con users into clicking on advertising, is that helpful?
If you’re trying to replace meaningful human culture with automated tech, is that helpful?
If your business collects and sells personal data about people, is that helpful?
If your company is striving to dominate an industry by any means necessary, is that helpful?
If you do those things… Are you even a Designer at all? Or are you a glorified Huckster—a puffed-up propaganda artist with a fancy job title in an open-plan office?
Whether we choose to recognize it or not, designers have both the authority and the responsibility to prevent our products from becoming needlessly invasive, addictive, dishonest, or harmful. We can continue to pretend this is someone else’s job, but it’s not. It’s our job.
We’re the first line of defense to protect people’s privacy, safety, and sanity. In many, many cases we’re failing at that right now. If the past 20 years of tech represent the Move Fast and Break Things era, now it’s time to slow down and take stock of what’s broken.
At Basecamp, we’re leading the charge by running an unusually supportive company, pushing back on ugly practices in the industry, and giving a shit about our customers. We design our product to improve people’s work, and to stop their work from spilling over into their personal lives. We intentionally leave out features that might keep people hooked on Basecamp all day, in favor of giving them peace and freedom from constant interruptions. And we skip doing promotional things that might grow the business, if they feel gross and violate our values. We know we have a big responsibility on our hands, and we take it seriously.
You should too. The world needs as much care and conscience as we can muster. Defend your users against anti-patterns and shady business practices. Raise your hand and object to harmful design ideas. Call out bad stuff when you see it. Thoughtfully reflect on what you’re sending out into the world every day.
The stakes are high and they’ll keep getting higher. Grab those sociology and ethics textbooks and get to work.
If you like this post, send me a message about your ham sandwich on Twitter.
Move Slowly and Fix Things was originally published in Signal v. Noise on Medium, where people are continuing the conversation by highlighting and responding to this story.


Source: 37signals


Robust React User Interfaces with Finite State Machines

User interfaces can be expressed by two things:

The state of the UI
Actions that can change that state

From credit card payment devices and gas pump screens to the software that your company creates, user interfaces react to the actions of the user and other sources and change their state accordingly. This concept isn't just limited to technology, it's a fundamental part of how everything works:

For every action, there is an equal and opposite reaction.
- Isaac Newton

This is a concept we can apply to developing better user interfaces, but before we go there, I want you to try something. Consider a photo gallery interface with this user interaction flow:

Show a search input and a search button that allows the user to search for photos
When the search button is clicked, fetch photos with the search term from Flickr
Display the search results in a grid of small sized photos
When a photo is clicked/tapped, show the full size photo
When a full-sized photo is clicked/tapped again, go back to the gallery view

Now think about how you would develop it. Maybe even try programming it in React. I'll wait; I'm just an article. I'm not going anywhere.
Finished? Awesome! That wasn't too difficult, right? Now think about the following scenarios that you might have forgotten:

What if the user clicks the search button repeatedly?
What if the user wants to cancel the search while it's in-flight?
Is the search button disabled while searching?
What if the user mischievously enables the disabled button?
Is there any indication that the results are loading?
What happens if there's an error? Can the user retry the search?
What if the user searches and then clicks a photo? What should happen?

These are just some of the potential problems that can arise during planning, development, or testing. Few things are worse in software development than thinking that you've covered every possible use case, and then discovering (or receiving) new edge cases that will further complicate your code once you account for them. It's especially difficult to jump into a pre-existing project where all of these use cases are undocumented, but instead hidden in spaghetti code and left for you to decipher.
Stating the obvious
What if we could determine all possible UI states that can result from all possible actions performed on each state? And what if we can visualize these states, actions, and transitions between states? Designers intuitively do this, in what are called "user flows" (or "UX Flows"), to depict what the next state of the UI should be depending on the user interaction.
Picture credit: Simplified Checkout Process by Michael Pons
In computer science terms, there is a computational model called finite automata, or "finite state machines" (FSM), that can express the same type of information. That is, they describe which state comes next when an action is performed on the current state. Just like user flows, these finite state machines can be visualized in a clear and unambiguous way. For example, here is the state transition diagram describing the FSM of a traffic light:

What is a finite state machine?
A state machine is a useful way of modeling behavior in an application: for every action, there is a reaction in the form of a state change. There are 5 parts to a classical finite state machine:

A set of states (e.g., idle, loading, success, error, etc.)
A set of actions (e.g., SEARCH, CANCEL, SELECT_PHOTO, etc.)
An initial state (e.g., idle)
A transition function (e.g., transition('idle', 'SEARCH') == 'loading')
Final states (which don't apply to this article.)

Deterministic finite state machines (which is what we'll be dealing with) have some constraints, as well:

There are a finite number of possible states
There are a finite number of possible actions (these are the "finite" parts)
The application can only be in one of these states at a time
Given a currentState and an action, the transition function must always return the same nextState (this is the "deterministic" part)

Representing finite state machines
A finite state machine can be represented as a mapping from a state to its "transitions", where each transition is an action and the nextState that follows that action. This mapping is just a plain JavaScript object.
Let's consider an American traffic light example, one of the simplest FSM examples. Assume we start on green, then transition to yellow after some TIMER, then to red after another TIMER, and then back to green after another TIMER:
const machine = {
green: { TIMER: 'yellow' },
yellow: { TIMER: 'red' },
red: { TIMER: 'green' }
};
const initialState = 'green';
A transition function answers the question:
Given the current state and an action, what will the next state be?
With our setup, transitioning to the next state based on an action (in this case, TIMER) is just a look-up of the currentState and action in the machine object, since:

machine[currentState] gives us the next action mapping, e.g.: machine['green'] == {TIMER: 'yellow'}
machine[currentState][action] gives us the next state from the action, e.g.: machine['green']['TIMER'] == 'yellow':
// ...
function transition(currentState, action) {
return machine[currentState][action];
}

transition('green', 'TIMER');
// => 'yellow'

Instead of using if/else or switch statements to determine the next state, e.g., if (currentState === 'green') return 'yellow';, we moved all of that logic into a plain JavaScript object that can be serialized into JSON. That's a strategy that will pay off greatly in terms of testing, visualization, reuse, analysis, flexibility, and configurability.
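To make that payoff concrete, here is a tiny illustration (not from the original post) using the machine and transition function defined above:
// The machine is plain data, so it can be serialized, stored, or shared as-is:
const serializedMachine = JSON.stringify(machine);
// => '{"green":{"TIMER":"yellow"},"yellow":{"TIMER":"red"},"red":{"TIMER":"green"}}'

// And because transition() is a pure lookup, it is trivial to test:
console.assert(transition('green', 'TIMER') === 'yellow');
console.assert(transition('yellow', 'TIMER') === 'red');
console.assert(transition('red', 'TIMER') === 'green');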
See the Pen Simple finite state machine example by David Khourshid (@davidkpiano) on CodePen.
Finite State Machines in React
Taking a look at a more complicated example, let's see how we can represent our gallery app using a finite state machine. The app can be in one of several states:

start - the initial search page view
loading - search results fetching view
error - search failed view
gallery - successful search results view
photo - detailed single photo view

And several actions can be performed, either by the user or the app itself:

SEARCH - user clicks the "search" button
SEARCH_SUCCESS - search succeeded with the queried photos
SEARCH_FAILURE - search failed due to an error
CANCEL_SEARCH - user clicks the "cancel search" button
SELECT_PHOTO - user clicks a photo in the gallery
EXIT_PHOTO - user clicks to exit the detailed photo view

The best way to visualize how these states and actions come together, at first, is with two very powerful tools: pencil and paper. Draw arrows between the states, and label the arrows with actions that cause transitions between the states:
We can now represent these transitions in an object, just like in the traffic light example:
const galleryMachine = {
start: {
SEARCH: 'loading'
},
loading: {
SEARCH_SUCCESS: 'gallery',
SEARCH_FAILURE: 'error',
CANCEL_SEARCH: 'gallery'
},
error: {
SEARCH: 'loading'
},
gallery: {
SEARCH: 'loading',
SELECT_PHOTO: 'photo'
},
photo: {
EXIT_PHOTO: 'gallery'
}
};

const initialState = 'start';
Now let's see how we can incorporate this finite state machine configuration and the transition function into our gallery app. In the App's component state, there will be a single property that will indicate the current finite state, gallery:
class App extends React.Component {
constructor(props) {
super(props);

this.state = {
gallery: 'start', // initial finite state
query: '',
items: []
};
}
// ...
The transition function will be a method of this App class, so that we can retrieve the current finite state:
// ...
transition(action) {
const currentGalleryState = this.state.gallery;
const nextGalleryState =
galleryMachine[currentGalleryState][action.type];

if (nextGalleryState) {
const nextState = this.command(nextGalleryState, action);

this.setState({
gallery: nextGalleryState,
...nextState // extended state
});
}
}
// ...
This looks similar to the previously described transition(currentState, action) function, with a few differences:

The action is an object with a type property that specifies the string action type, e.g., type: 'SEARCH'
Only the action is passed in since we can retrieve the current finite state from this.state.gallery
The entire app state will be updated with the next finite state, i.e., nextGalleryState, as well as any extended state (nextState) that results from executing a command based on the next state and action payload (see the "Executing commands" section)
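To make the flow concrete, here is a hedged sketch of how a UI event handler might dispatch an action into this method. The handleSubmit name and the form wiring are assumptions for illustration, not code from the demo:
// Hypothetical submit handler for the search form
handleSubmit(event) {
  event.preventDefault();

  // The action is a plain object: a `type` plus whatever payload
  // the resulting command needs -- here, the search query.
  this.transition({ type: 'SEARCH', query: this.state.query });
}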

Executing commands
When a state change occurs, "side effects" (or "commands" as we'll refer to them) might be executed. For example, when a user clicks the "Search" button and a 'SEARCH' action is emitted, the state will transition to 'loading', and an async Flickr search should be executed (otherwise, 'loading' would be a lie, and developers should never lie).
We can handle these side effects in a command(nextState, action) method that determines what to execute given the next finite state and action payload, as well as what the extended state should be:
// ...
command(nextState, action) {
switch (nextState) {
case 'loading':
// execute the search command
this.search(action.query);
break;
case 'gallery':
if (action.items) {
// update the state with the found items
return { items: action.items };
}
break;
case 'photo':
if (action.item) {
// update the state with the selected photo item
return { photo: action.item };
}
break;
default:
break;
}
}
// ...
Actions can have payloads other than the action's type, which the app state might need to be updated with. For example, when a 'SEARCH' action succeeds, a 'SEARCH_SUCCESS' action can be emitted with the items from the search result:
// ...
fetchJsonp(
`https://api.flickr.com/services/feeds/photos_public.gne?lang=en-us&format=json&tags=${encodedQuery}`,
{ jsonpCallback: 'jsoncallback' })
.then(res => res.json())
.then(data => {
this.transition({ type: 'SEARCH_SUCCESS', items: data.items });
})
.catch(error => {
this.transition({ type: 'SEARCH_FAILURE' });
});
// ...
The command() method above will immediately return any extended state (i.e., state other than the finite state) that this.state should be updated with in this.setState(...), along with the finite state change.
The final machine-controlled app
Since we've declaratively configured the finite state machine for the app, we can render the proper UI in a cleaner way by conditionally rendering based on the current finite state:
// ...
render() {
const galleryState = this.state.gallery;

return (
<div className="ui-app" data-state={galleryState}>
{this.renderForm(galleryState)}
{this.renderGallery(galleryState)}
{this.renderPhoto(galleryState)}
</div>
);
}
// ...
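Each of those helpers can simply bail out, or adjust its output, based on the finite state passed to it. As a hedged sketch (the markup, class names, placeholder text, and the hypothetical handleSubmit handler from the earlier sketch are assumptions, not copied from the demo), renderForm might look something like this:
// A hedged sketch of renderForm -- details are illustrative assumptions.
renderForm(state) {
  const searching = state === 'loading';

  return (
    <form className="ui-form" onSubmit={e => this.handleSubmit(e)}>
      <input
        type="search"
        className="ui-input"
        placeholder="Search Flickr for photos..."
        value={this.state.query}
        onChange={e => this.setState({ query: e.target.value })}
      />
      <div className="ui-buttons">
        {/* Disabling the button while loading answers one of the
            edge-case questions from the start of the article. */}
        <button className="ui-button" disabled={searching}>
          {searching ? 'Searching...' : 'Search'}
        </button>
        {searching &&
          <button
            type="button"
            className="ui-button"
            onClick={() => this.transition({ type: 'CANCEL_SEARCH' })}
          >
            Cancel
          </button>}
      </div>
    </form>
  );
}
The disabled button and the Cancel button address two of the edge cases posed at the top of the article (repeated clicks and cancelling an in-flight search), and CANCEL_SEARCH maps the loading state back to gallery in galleryMachine.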
The final result:
See the Pen Gallery app with Finite State Machines by David Khourshid (@davidkpiano) on CodePen.
Finite state in CSS
You might have noticed data-state={galleryState} in the code above. By setting that data-attribute, we can conditionally style any part of our app using an attribute selector:
.ui-app {
// ...

&[data-state="start"] {
justify-content: center;
}

&[data-state="loading"] {
.ui-item {
opacity: .5;
}
}
}
This is preferable to using className because you can enforce the constraint that only a single value at a time can be set for data-state, and the specificity is the same as using a class. Attribute selectors are also supported in most popular CSS-in-JS solutions.
Advantages and resources
Using finite state machines for describing the behavior of complex applications is nothing new. Traditionally, this was done with switch and goto statements, but by describing finite state machines as a declarative mapping between states, actions, and next states, you can use that data to visualize the state transitions:

Furthermore, using declarative finite state machines allows you to:

Store, share, and configure application logic anywhere - similar components, other apps, in databases, in other languages, etc.
Make collaboration easier with designers and project managers
Statically analyze and optimize state transitions, including states that are impossible to reach
Easily change application logic without fear
Automate integration tests
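The static-analysis point above, for instance, can be a few lines of code in any test runner, because galleryMachine is plain data. A rough sketch (not from the original post; the assertion wording is just an example):
// Check that every transition in the machine targets a state that exists.
const stateNames = Object.keys(galleryMachine);

Object.keys(galleryMachine).forEach(state => {
  const transitions = galleryMachine[state];

  Object.keys(transitions).forEach(actionType => {
    const nextState = transitions[actionType];

    console.assert(
      stateNames.indexOf(nextState) !== -1,
      `"${state}" + "${actionType}" points at undefined state "${nextState}"`
    );
  });
});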

Conclusion and takeaways
Finite state machines are an abstraction for modeling the parts of your app that can be represented as finite states, and almost all apps have those parts. The FSM coding patterns presented in this article:

Can be used with any existing state management setup; e.g., Redux or MobX
Can be adapted to any framework (not just React), or no framework at all
Are not written in stone; the developer can adapt the patterns to their coding style
Are not applicable to every single situation or use-case

From now on, when you encounter "boolean flag" variables such as isLoaded or isSuccess, I encourage you to stop and think about how your app state can be modeled as a finite state machine instead. That way, you can refactor your app to represent state as state === 'loaded' or state === 'success', using enumerated states in place of boolean flags.
Resources
I gave a talk at React Rally 2017 about using finite automata and statecharts to create better user interfaces, if you want to learn more about the motivation and principles:
https://www.youtube.com/watch?v=VU1NKX6Qkxc
Slides: Infinitely Better UIs with Finite Automata
Here are some further resources:

Pure UI by Guillermo Rauch
Pure UI Control by Adam Solove
Wikipedia: Finite State Machines
Statecharts: A Visual Formalism for Complex Systems by David Harel (PDF)
Managing State in JavaScript with State Machines by Krasimir Tsonev
Rambling Thoughts on React and Finite State Machines by Ryan Florence

Robust React User Interfaces with Finite State Machines is a post from CSS-Tricks
Source: CssTricks


CSS Code Smells

Every week(ish) we publish a newsletter containing the best links, tips, and tricks about web design and development. At the end, we typically write about something we've learned during the week. Those learnings might not be directly related to CSS or front-end development at all, but they're a lot of fun to share. Here's an example of one of those segments from the newsletter, where I ramble on about code quality and dive into what I think should be considered a code smell when it comes to the CSS language.

A lot of developers complain about CSS. The cascade! The weird property names! Vertical alignment! There are many strange things about the language, especially if you're more familiar with a programming language like JavaScript or Ruby.
However, I think the real problem with the CSS language is that it's simple but not easy. What I mean by that is that it doesn't take much time to learn how to write CSS, but it takes extraordinary effort to write "good" CSS. Within a week or two, you can probably memorize all the properties and values, make really beautiful designs in the browser without any plugins or dependencies, and wow all your friends. But that's not what I mean by "good CSS."
In an effort to define what "good CSS" is, I've been thinking a lot lately about how we can first identify bad CSS. In other areas of programming, developers tend to talk of code smells when they describe bad code: hints in a program that suggest, hey, maybe this thing you've written isn't a good idea. It could be something simple like a naming convention or a particularly fragile bit of code.
In a similar vein, below is my own list of code smells that I think will help us identify bad design and CSS. Note that these points are related to my experience in building large scale design systems in complex apps, so please take this all with a grain of salt.
Code smell #1: The fact you're writing CSS in the first place
A large team will likely already have a collection of tools and systems in place to create things like buttons or styles to move elements around in a layout, so the simple fact that you're about to write CSS is probably a bad sign. If you're just about to write custom CSS for a specific edge case, then stop! You probably need to do one of the following:

Learn how the current system works and why it has the constraints it does and stick to those constraints
Rethink the underlying infrastructure of the CSS

I think this approach was perfectly described here:

About the false velocity of “quick fixes”. pic.twitter.com/91jauLyEJ3
— Pete Lacey (@chopeh) November 2, 2017
Code smell #2: File Names and Naming Conventions
Let's say you need to make a support page for your app. First thing you probably do is make a CSS file called `support.scss` and start writing code like this:
.support {
  background-color: #efefef;
  max-width: 600px;
  border: 2px solid #bbb;
}
So the problem here isn't necessarily the styles themselves but the concept of a 'support page' in the first place. When we write CSS we need to think in much larger abstractions — we need to think in templates or components instead of the specific content the user needs to see on a page. That way we can reuse something like a "card" over and over again on every page, including that one instance we need for the support page:
.card {
  background-color: #efefef;
  max-width: 600px;
  border: 2px solid #bbb;
}
This is already a little better! (My next question would be what is a card, what content can a card have inside it, when is it not okay to use a card, etc etc. – these questions will likely challenge the design and keep you focused.)
Code smell #3: Styling HTML elements
In my experience, styling an HTML element (like a section or a paragraph tag) almost always means that we're writing a hack. There's only one appropriate time to style an HTML element directly like this:
section { display: block; }
figure { margin-bottom: 20px; }
And that is in the application's global so-called "reset styles". Otherwise, we're making our codebase fractured and harder to debug, because we have no idea whether those styles are hacks for a specific purpose or whether they define the defaults for that HTML element.
Code smell #4: Indenting code
Indenting Sass code so that child components sit within a parent element is almost always a code smell and a sure sign that this design needs to be refactored. Here's one example:
.card {
  display: flex;

  .header {
    font-size: 21px;
  }
}
In this example are we saying that you can only use a .header class inside a .card? Or are we overriding another block of CSS somewhere else deep within our codebase? The fact that we even have to ask questions like this shows the biggest problem here: we have now sown doubt into the codebase. To really understand how this code works I have to have knowledge of other bits of code. And if I have to ask questions about why this code exists or how it works then it is probably either too complicated or unmaintainable for the future.
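One hedged way to refactor the example above is to give the child element its own top-level class, so neither rule depends on the other (the naming here is illustrative):
.card {
  display: flex;
}

// the header can now be understood, reused, and debugged on its own
.card-header {
  font-size: 21px;
}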
This leads to the fifth code smell...
Code smell #5: Overriding CSS
In an ideal world we have a reset CSS file that styles all our default elements and then we have separate individual CSS files for every button, form input and component in our application. Our code should be, at most, overridden by the cascade once. First, this makes our overall code more predictable and second, makes our component code (like button.scss) super readable. We now know that if we need to fix something we can open up a single file and those changes are replicated throughout the application in one fell swoop. When it comes to CSS, predictability is everything.
In that same CSS Utopia, we would then perhaps make it impossible to override certain class names with something like CSS Modules. That way we can't make mistakes by accident.
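As a rough sketch of that idea (the file and component names are hypothetical), CSS Modules imports class names into JavaScript and scopes them locally, so they can't be overridden by accident elsewhere:
// Button.js: the generated class name is unique to this component
import React from 'react';
import styles from './button.module.css';

const Button = ({ children }) => (
  <button className={styles.button}>{children}</button>
);

export default Button;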
Code smell #6: CSS files with more than 50 lines of code in them
The more CSS you write, the more complicated and fragile the codebase becomes. So whenever I get to around 50 lines of CSS, I tend to rethink what I'm designing by asking myself a couple of questions, starting and ending with: "is this a single component, or can we break it up into separate parts that work independently from one another?"
That's a difficult and time-consuming process to be practicing endlessly but it leads to a solid codebase and it trains you to write really good CSS.
Wrapping up
I suppose I now have another question, but this time for you: what do you see as a code smell in CSS? What is bad CSS? What is really good CSS? Make sure to add a comment below!

CSS Code Smells is a post from CSS-Tricks
Source: CssTricks


ARIA is Spackle, Not Rebar

Much like their physical counterparts, the materials we use to build websites have purpose. To use them without understanding their strengths and limitations is irresponsible. Nobody wants to live in a poorly-built house. So why are poorly-built websites acceptable?
In this post, I'm going to address WAI-ARIA, and how misusing it can do more harm than good.

Materials as technology
In construction, spackle is used to fix minor defects on interiors. It is a thick paste that dries into a solid surface that can be sanded smooth and painted over. Most renters become acquainted with it when attempting to get their damage deposit back.
Rebar is a lattice of steel rods used to reinforce concrete. Every modern building uses it—chances are good you'll see it walking past any decent-sized construction site.
Technology as materials
HTML is the rebar-reinforced concrete of the web. To stretch the metaphor, CSS is the interior and exterior decoration, and JavaScript is the wiring and plumbing.
Every tag in HTML has what is known as native semantics. The act of writing an HTML element programmatically communicates to the browser what that tag represents. Writing a button tag explicitly tells the browser, "This is a button. It does buttony things."
The reason this is so important is that assistive technology hooks into native semantics and uses it to create an interface for navigation. A page not described semantically is a lot like a building without rooms or windows: People navigating via a screen reader have to wander around aimlessly in the dark and hope they stumble onto what they need.
ARIA stands for Accessible Rich Internet Applications and is a relatively new specification developed to help assistive technology better communicate with dynamic, JavaScript-controlled content. It is intended to supplement existing semantic attributes by providing enhanced interactivity and context to screen readers and other assistive technology.
Using spackle to build walls
A concerning trend I've seen recently is the blind, mass-application of ARIA. It feels like an attempt by developers to conduct accessibility compliance via buckshot—throw enough of something at a target trusting that you'll eventually hit it.
Unfortunately, there is a very real danger to this approach. Misapplied ARIA has the potential to do more harm than good.
The semantics inherent in ARIA means that when applied improperly it can create a discordant, contradictory mess when read via screen reader. Instead of hearing, "This is a button. It does buttony things.", people begin to hear things along the lines of, "This is nothing, but also a button. But it's also a deactivated checkbox that is disabled and it needs to shout that constantly."
If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
– First rule of ARIA use
In addition, ARIA is a relatively new technology, which means that browser support and behavior vary. While I am optimistic that in the future the major browsers will have complete and unified support, the current landscape has gaps and bugs.
Another important consideration is who actually uses the technology. Compliance isn't some purely academic vanity metric we're striving for. We're building robust systems for real people that allow them to get what they want or need with as little complication as possible. Many people who use assistive technology are reluctant to upgrade for fear of breaking functionality. Ever get irritated when your favorite program redesigns and you have to re-learn how to use it? Yeah.
The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.
– Tim Berners-Lee
It feels disingenuous to see sites that reap the benefits of the DRY principle through massive JavaScript frameworks also slather redundant and misapplied attributes in their markup. The web is accessible by default. For better or for worse, we are free to do what we want to it after that.
The fix
This isn't to say we should completely avoid using ARIA. When applied with skill and precision, it can turn a confusing or frustrating user experience into an intuitive and effortless one, with far fewer brittle hacks and workarounds.
A little goes a long way. Before considering other options, start with markup that semantically describes the content it is wrapping. Test extensively, and only apply ARIA if deficiencies between HTML's native semantics and JavaScript's interactions arise.
Development teams will appreciate the advantage of terse code that's easier to maintain. Savvy developers will use a CSS-Trick™ and leverage CSS attribute selectors to create systems where visual presentation is tied to semantic meaning.
input:invalid,
[aria-invalid] {
  border: 4px dotted #f64100;
}
Examples
Here are a few of the more common patterns I've seen recently, and why they are problematic. This doesn't mean these are the only kinds of errors that exist, but it's a good primer on recognizing what not to do:
<li role="listitem">Hold the Bluetooth button on the speaker for three seconds to make the speaker discoverable</li>
The role is redundant. The native semantics of the li element already describe it as a list item.
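The fix is simply to drop the role and let the element speak for itself:
<li>Hold the Bluetooth button on the speaker for three seconds to make the speaker discoverable</li>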
<p role="command">Type CTRL+P to print
command is an Abstract Role. They are only used in ARIA to help describe its taxonomy. Just because an ARIA attribute seems like it is applicable doesn't mean it necessarily is. Additionally, the kbd tag could be used on "CTRL" and "P" to more accurately describe the keyboard command.
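A corrected version, following the suggestion above (a sketch, not from the original article):
<p>Type <kbd>CTRL</kbd>+<kbd>P</kbd> to print</p>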
<div role="button" class="button">Link to device specifications</div>
Failing to use a button tag runs the risk of not accommodating all the different ways a user can interact with a button and how the browser responds. In addition, the a tag should be used for links.
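Since the text describes a link to somewhere, an anchor is the better fit (the href below is illustrative, not from the original):
<a class="button" href="/device-specifications">Link to device specifications</a>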
<body aria-live="assertive" aria-atomic="true">
Usually the intent behind something like this is to expose updates to the screen reader user. Unfortunately, when scoped to the body tag, any page change—including all JS-related updates—are announced immediately. A setting of assertive on aria-live also means that each update interrupts whatever it is the user is currently doing. This is a disastrous experience, especially for single page apps.
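A hedged alternative is to scope the live region to just the element whose content actually updates, and to prefer polite announcements unless an interruption is truly warranted:
<div aria-live="polite">
  <!-- only the status updates that need announcing go here -->
</div>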
<div aria-checked="true"></div>
You can style a native checkbox element to look like whatever you want it to. Better support! Less work!
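A sketch of the native alternative (the id and label text are illustrative):
<input type="checkbox" id="setting" checked>
<label for="setting">Enable this setting</label>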
<div role="link" tabindex="40">
Link text
</div>
Yes, it's actual production code. Where to begin? First, never use a tabindex value greater than 0. Secondly, the title attribute probably does not do what you think it does. Third, the anchor tag should have a destination—links take you places, after all. Fourth, the role of link assigned to a div wrapping an a element is entirely superfluous.
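All of which collapses down to a plain anchor with a real destination (the URL is illustrative):
<a href="/some-destination">Link text</a>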
<h2 class="h3" role="heading" aria-level="1">How to make a perfect soufflé every time</h2>
Credit where credit's due: Nicolas Steenhout outlines the issues for this one.
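A hedged simplification, assuming the heading really does belong at level two in the document outline: keep the native h2, drop the redundant role and the conflicting aria-level, and let the class handle purely visual sizing.
<h2 class="h3">How to make a perfect soufflé every time</h2>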
Do better
Much like content, markup shouldn't be an afterthought when building a website. I believe most people are genuinely trying to do their best most of the time, but wielding a technology without knowing its implications is dangerous and irresponsible.
I'm usually more of a honey-instead-of-vinegar kind of person when I try to get people to practice accessibility, but not here. This isn't a soft sell about the benefits of developing and designing with an accessible, inclusive mindset. It's a post about doing your job.
Every decision a team makes affects a site's accessibility.
– Laura Kalbag
Get better at authoring
Learn about the available HTML tags, what they describe, and how to best use them. Same goes for ARIA. Give your page template semantics the same care and attention you give your JavaScript during code reviews.
Get better at testing
There's little excuse to not incorporate a screen reader into your testing and QA process. NVDA is free. macOS, Windows, iOS and Android all come with screen readers built in. Some nice people have even written guides to help you learn how to use them.
Automated accessibility testing is a huge boon, but it also isn't a silver bullet. It won't report on what it doesn't know to report, meaning it's up to a human to manually determine if navigating through the website makes sense. This isn't any different than other usability testing endeavors.
Build better buildings
Universal Design teaches us that websites, like buildings, can be both beautiful and accessible. If you're looking for a place to start, here are some resources:

A Book Apart: Accessibility for Everyone, by Laura Kalbag
egghead.io: Intro to ARIA and Start Building Accessible Web Applications Today, by Marcy Sutton
Google Developers: Introduction to ARIA, by Meggin Kearney, Dave Gash, and Alice Boxhall
YouTube: A11ycasts with Rob Dodson, by Rob Dodson
W3C: WAI-ARIA Authoring Practices 1.1
W3C: Using ARIA
Zomigi: Videos of screen readers using ARIA
Inclusive Components, by Heydon Pickering
HTML5 Accessibility
The American Foundation for the Blind: Improving Your Website's Accessibility
Designing for All: 5 Ways to Make Your Next Website Design More Accessible, by Carie Fisher
Accessible Interface Design, by Nick Babich

ARIA is Spackle, Not Rebar is a post from CSS-Tricks
Source: CssTricks