Google’s Search Crawlers to Natively Render JavaScript-based Pages by @MattGSouthern

In Q2 2018, Google’s search crawlers will begin to render JavaScript-based webpages without the assistance of the AJAX crawling scheme. The post Google’s Search Crawlers to Natively Render JavaScript-based Pages by @MattGSouthern appeared first on Search Engine Journal.
Source: https://www.searchenginejournal.com/feed/


Save 15% or More on Car Insurance by Switching to Plain JavaScript

Satire disclaimer: This article is as much satire as it is serious insight if there is even any of that at all. Don’t take it too seriously, but do tell all your friends. Also, the bit about Taco Bell is 100% true. I wouldn’t joke about something like that.

My day usually begins like this: I wake up at 6:15 a.m. (kill me) to get the kids ready for school. They’re mad. I’m mad. Everyone is on the brink of an emotional breakdown because it's 6:15 in the morning.
Usually the first thing that I do when I wake up is roll out of bed and start hammering out pushups like Christian Bale.

BWAHAHAHA. No.
Before I’m even fully awake and out of bed, I grab my phone and look at Twitter. It’s a sickness, I know. I’m not proud of it but at least I’m out here admitting that I have a problem and I believe according to the rules of science that fully negates my problem and makes me better than you.
One morning a few weeks ago I wake up to this tweet…

Removing client-side React.js (but keeping it on the server) resulted in a 50% performance improvement on our landing page pic.twitter.com/vM7JhWhYKu
— Netflix UI Engineers (@NetflixUIE) October 26, 2017
The wonderful thing about Twitter is that there is essentially zero context for anything you see, which means your crazy brain gets to fill in all the holes and, in my case, that’s a recipe for utter disaster.
Here is how I read this tweet….
Heavily doctored by me. My Photoshop skills are a huge embarrassing failure.
I believe my brain read it that way because that’s literally what the original tweet says. My brain just adds the “Your whole life is a lie” part to pretty much everything I read or hear.
Your Whole Life is a Lie
This immediately dumped me into an existential crisis.
To be fair, I’m almost constantly in a state of crisis so it’s not like this was a big leap for me. Just last night at Taco Bell I had to choose between the Beefy 5-layer Burrito and the Cheesy Gordita Crunch and I almost came apart in the drive-through. You can’t force decisions like that on people and expect an immediate response! And why do I need 50 packets of Fire sauce!?!
The point is that I’m kind of emotionally fragile as it is, so you can’t suggest to me that you got rid of React because all of a sudden people just don’t need it anymore.
I had so, so, so many questions, like:

What about binding?
What about components?
What about state?
What about templates?

You’re telling me that all of a sudden you just don’t need any of that stuff anymore? One does not simply “move to plain JavaScript” by removing React from their project. If you actually did that you would just be moving from React to your own version of React. Facebook could say that their site is built in “plain JavaScript” too. They just decided to name some of that JavaScript “React” in the process.
It was nonsensical. You might as well have said that you saved 15% on car insurance by moving to plain JavaScript. Thankfully, I only had to wait 6 agonizing days before Jake Archibald took to the blogs to clear everything up.

📝 Netflix "removed" React and improved performance.➡️ Despite appearances, this reflects well on React.https://t.co/R8SohrLX6q
— Jake Archibald (@jaffathecake) October 31, 2017

THIS IS NOT HELPING, JAKE! I’M LOSING IT OVER HERE!
The post goes on to explain that Netflix is actually deferring client-side React until it’s needed and going with server rendered React in the meantime. He also points out that it’s only logical that it would be faster because the browser is doing less work. Netflix is apparently loading client-side React in the background. It’s there when you need it, but you don’t have to parse it if you don’t.
I decided to check this out and see for myself what is going on.
Netflix Login
One of the places Jake mentions that server-side React is appropriate is on the login screen. So let’s start there. I loaded the login screen and it looks to me like client-side React is still very much in effect here.

As an aside, Netflix is great at naming things. I mean, look at these components—AkiraLayout, JawboneLinkProvider, FreezedWrapper? OK, FreezedWrapper isn’t that exciting but you can’t take AkiraLayout from me.

So I can’t find where React has been removed. The login page itself comes in at around 194KB and that’s before it loads the loginController.jsx file which bumps it up another 204KB.
I then did what I should have done the first time which is to watch the video from Netflix that was responsible for this descent into the depths of my insecurity and I noticed that they only mentioned the splash page.
The splash page is just netflix.com. No login. No videos. The splash page. That slide? The one that made its way all over the internet and into my therapy sessions? That slide is referring only to the splash page. Netflix did remove React from their splash page and replace the few interactions they had with plain JavaScript.
And there is your context. So let’s fix the slide again…

That is the actual story here.
It’s unfortunate that we latch on to a single slide taken completely out of context. This is not the fault of Netflix. Or maybe it is. I mean, they did tweet it out but, look, this is really the fault of 2017. This is how all of the news in our lives plays out.
What’s super unfortunate here, and what Jake was trying to convey in his post, is that we completely missed some actual cool things that Netflix is doing. Mainly the combination of server-side React and Prefetching. Or rather the idea that more complex code can be downloaded and parsed in the background instead of when the page loads.
Prefetching is Not a Solved Problem
We tend to forget that things like prefetching are not necessarily a solved problem. While Service Workers are awesome, Netflix can’t use them because the support is too sparse. Beyond that, the browser Prefetching API is flaky. In that same presentation, Netflix reports that the API (which is just the link tag) has a success rate as low as 30%. That means your prefetch will only work about a third of the time in some cases. 😳

The reason for this is that the API is trying to make a bunch of decisions about whether or not it should prefetch depending on your device and resources. It’s not a guarantee that your resources will be loaded at all.
What’s most remarkable to me is that Netflix hit on another solution that is so simple it hurts: just make an AJAX call and don’t do anything with the result; the browser will cache that resource.
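If you’re curious what that looks like, here’s a rough sketch of the idea (this is not Netflix’s actual code — the URL is a placeholder, and the trick assumes the file is served with cache headers that let the browser reuse it):

// Rough sketch, not Netflix's actual code. The URL is a placeholder and the
// trick only pays off if the file is served with cache-friendly headers.
function prefetch(url) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  // We never read xhr.response — the point is simply that the browser
  // downloads the file and keeps it in its HTTP cache, so the real
  // request for it later is a cache hit.
  xhr.send();
}

prefetch('/path/to/client-side-bundle.js');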
MY GOODNESS I LOVE THE WEB!
You Uh, Still Need React
So yes, you still need React on the client-side. Netflix is still using it and never said that they were not.
What they did say was that they had figured out some creative ways to make the experience better for the user and had combined that with their current React implementation. This should be exciting to you if you’re a React developer.
Maybe Netflix will open source some library for prefetching with a way cool name. Is "fakenews.js" taken?
Special Thanks to Brian Holt who reviewed this article and was still willing to be my friend.

Save 15% or More on Car Insurance by Switching to Plain JavaScript is a post from CSS-Tricks
Source: CssTricks


10 Web Design Choices That Can Kill Your Clients’ Search Ranking

As a web designer, there’s no getting away from your responsibility to make design choices with SEO in mind. Your clients want their sites to rank well in search engines – there’s not much point in having one otherwise – and this means we sometimes have to make compromises.
Compromise really is the key term, too. There’s no perfect way to design a website for search and all your other priorities (user experience, conversions, etc.). You have to make the call on a number of design choices and come to the best overall result you can.
Here are 10 design choices to avoid for the sake of your clients’ search ranking.
Indexability killers
The first thing to think about with search optimisation is indexability, and there are a number of potential issues you can come across as a designer.
#1: One page, too much content

Even basic apps like IFTTT and Pocket break their content into multiple pages.
Single page designs might work for brands with a single message to get across but they’re an SEO killer in most circumstances. Keywords end up competing with each other, messages clash and search engines have a hard time deciding which kind of queries these pages should rank for.
You also have the risk of information overload and choice fatigue, which can impact engagement factors – something we’ll come back to later. The same thing goes for most home page designs now, too. You (or your client) need to decide how much information is enough/too much.
#2: JS/Ajax dynamic content
Google says it can crawl JavaScript and Ajax without problems these days, but the jury is still out amongst webmasters. The search giant is certainly a lot better at working its way through JS but it still seems to have problems. Whether this is down to Google, some sloppy JS code or both remains unclear.
Either way, placing important content that needs to be indexable in JS/Ajax code is a potential problem. You can remove this risk by not making the important stuff dynamic, or you can take an educated risk.
#3: Providing no context for visual content
Google likes pages with visual content but it needs to know they’re relevant to the rest of the page. Search engines can’t crawl text in images, which means important text should be overlaid as real text with the correct HTML markup (h1 tags, p tags, whatever).
The same thing goes for video content and infographics. Search engines can’t watch videos or read infographics but they can crawl transcriptions – something you and your clients might want to consider adding where appropriate.
Speed killers

Loading times have been a ranking factor for Google since 2010, but user expectations are very different seven years later. These days, the industry sets a benchmark of two seconds or less for any page to load, despite the fact we’re expected to create richer experiences.
#4: Too many server requests
Something you have to think about as you’re working on a design is how many server requests you’re adding. Every Google Font you use, every video you include and every image is another server request that adds to the list and slows down loading times.
#5: Using bulky files
Those hi-res images might look the part but they all take time to download and render in the browser. They also demand more data and stronger connections, which can become problematic for mobile users in particular.
It’s not only media files that add to loading times either; the same thing goes for code files, plugins and any other resource the browser needs to download.
#6: JS overload
JavaScript can do some wonderful things, but it can also cripple web browsers when it’s used unwisely. There are only so many animations and dynamic features a browser can handle, and sloppy JS code is one of the worst speed killers around. It’s worth keeping this in mind when you’re planning lazy loading, scrolling effects and other JavaScript options.
#7: Third-party resources
Another thing worth keeping in mind is what kind of third-party resources your clients will have to use. Aside from the quantity of fonts, plugins and other add-ons, the issue of quality is also important. This can be especially true with WordPress themes and plugins, jQuery plugins, frameworks and any other integrations.
Engagement killers
Google uses a number of engagement signals to help build a picture of the user experience of pages and the value of their content. Bounce rate, pages visited, time on site and social shares are just some of the signals search engines can combine to achieve this.
#8: Popups, notifications and other intrusions
Let me start by saying high bounce rates aren’t always a bad thing (e.g. landing pages). But when you’re expecting people to navigate a site and work their way along the buying process, you have to be careful about the roadblocks you put in their way.
Popups are now a signal in themselves, meaning they can hurt rankings, but there are plenty of other intrusions that should be used with care.
#9: Designing without content
This is a really common one. We’ve all bought WordPress themes and then tried to fill them out with content. The problem is you’re cramming content into layouts and containers that weren’t designed for it. You’re instantly restricted by what you can say, which defeats the whole point of creating a website that encourages people to buy.
Your design should be bringing the content to life, not squeezing it into a misshapen box. On a more technical SEO level, you’ll have trouble formatting your headings, designing CTAs and choosing breakpoints when the content isn’t already there to work with.
#10: Designing individual pages
This is another common one with themes and frameworks being the default option for so many projects. Every page on your client’s site is supposed to guide visitors to where the action is. Whether it’s the homepage, a blog post or a landing page visitors see first, there needs to be a clear path towards the purchase (or whatever kind of conversion your client is after).
Designing individual pages means users slip away and that’s bad news for search rankings – not to mention conversion rates.
 
Designing with SEO in mind isn’t really all that difficult. Focus on creating the best experience you can for users and you’ll be covering most of the essentials by default. Aside from that, you have to make sure all the important content is crawlable and indexable.
There are no right or wrong answers to any of these specific design choices. It’s about coming to the best overall result you can through compromise and moderation.
The post 10 Web Design Choices That Can Kill Your Clients’ Search Ranking appeared first on Web Designer Hub.
Source: http://www.webdesignerhub.com


The Importance Of JavaScript Abstractions When Working With Remote Data

Recently I had the experience of reviewing a project and assessing its scalability and maintainability. There were a few bad practices here and there, a few strange pieces of code with a lack of meaningful comments. Nothing uncommon for a relatively big (legacy) codebase, right?
However, there was something that I kept finding: a pattern that repeated itself throughout this codebase and a number of other projects I've looked through. It could all be summarized as a lack of abstraction. Ultimately, this was the cause of the maintenance difficulty.

In object-oriented programming, abstraction is one of the three central principles (along with encapsulation and inheritance). Abstraction is valuable for two key reasons:
Abstraction hides certain details and only shows the essential features of the object. It tries to reduce and factor out details so that the developer can focus on a few concepts at a time. This approach improves understandability as well as maintainability of the code.
Abstraction helps us to reduce code duplication. Abstraction provides ways of dealing with crosscutting concerns and enables us to avoid tightly coupled code.

The lack of abstraction inevitably leads to problems with maintainability.
Often I've seen colleagues that want to take a step further towards more maintainable code, but they struggle to figure out and implement fundamental abstractions. Therefore, in this article, I'll share a few useful abstractions I use for the most common thing in the web world: working with remote data.
It's important to mention that, just like everything in the JavaScript world, there are tons of ways and different approaches how to implement a similar concept. I'll share my approach, but feel free to upgrade it or to tweak it based on your own needs. Or even better - improve it and share it in the comments below! ❤️
API Abstraction
I haven't had a project which doesn't use an external API to receive and send data in a while. That's usually one of the first and fundamental abstractions I define. I try to store as much API-related configuration and settings there as possible, like:

the API base url
the request headers
the global error handling logic
const API = {
  /**
   * Simple service for generating different HTTP codes. Useful for
   * testing how your own scripts deal with varying responses.
   */
  url: 'http://httpstat.us/',

  /**
   * fetch() will only reject a promise if the user is offline,
   * or some unlikely networking error occurs, such as a DNS lookup failure.
   * However, there is a simple `ok` flag that indicates
   * whether an HTTP response's status code is in the successful range.
   */
  _handleError(_res) {
    return _res.ok ? _res : Promise.reject(_res.statusText);
  },

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return window.fetch(this.url + _endpoint, {
      method: 'GET',
      headers: new Headers({
        'Accept': 'application/json'
      })
    })
    .then(this._handleError)
    .catch(error => { throw new Error(error) });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return window.fetch(this.url + _endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: _body
    })
    .then(this._handleError)
    .catch(error => { throw new Error(error) });
  }
};

In this module, we have 2 public methods, get() and post() which both return a Promise. On all places where we need to work with remote data, instead of directly calling the Fetch API via window.fetch(), we use our API module abstraction - API.get() or API.post().
Therefore, the Fetch API is not tightly coupled with our code.
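For instance, a (hypothetical) consumer elsewhere in the codebase would only ever see the abstraction:

// Hypothetical usage — the calling code never touches window.fetch() directly.
API.get('200')
  .then(response => {
    // Do something with the successful response.
  })
  .catch(error => {
    // Handle the failure in one place.
    console.error(error);
  });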
Let's say down the road we read Zell Liew's comprehensive summary of using Fetch and we realize that our error handling is not as advanced as it could be. We want to check the content type before we proceed with our logic any further. No problem. We modify only our API module; the public methods API.get() and API.post() we use everywhere else work just fine.
const API = {
  /* ... */

  /**
   * Check whether the content type is correct before you process it further.
   */
  _handleContentType(_response) {
    const contentType = _response.headers.get('content-type');

    if (contentType && contentType.includes('application/json')) {
      return _response.json();
    }

    return Promise.reject("Oops, we haven't got JSON!");
  },

  get(_endpoint) {
    return window.fetch(this.url + _endpoint, {
      method: 'GET',
      headers: new Headers({
        'Accept': 'application/json'
      })
    })
    .then(this._handleError)
    .then(this._handleContentType)
    .catch(error => { throw new Error(error) });
  },

  post(_endpoint, _body) {
    return window.fetch(this.url + _endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: _body
    })
    .then(this._handleError)
    .then(this._handleContentType)
    .catch(error => { throw new Error(error) });
  }
};
Let's say we decide to switch to zlFetch, the library which Zell introduces that abstracts away the handling of the response (so you can skip ahead and handle both your data and errors without worrying about the response). As long as our public methods return a Promise, no problem:
import zlFetch from 'zl-fetch';

const API = {
  /* ... */

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return zlFetch(this.url + _endpoint, {
      method: 'GET'
    })
    .catch(error => { throw new Error(error) });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return zlFetch(this.url + _endpoint, {
      method: 'post',
      body: _body
    })
    .catch(error => { throw new Error(error) });
  }
};
Let's say that down the road, for whatever reason, we decide to switch to jQuery Ajax for working with remote data. Not a huge deal once again, as long as our public methods return a Promise. The jqXHR objects returned by $.ajax() as of jQuery 1.5 implement the Promise interface, giving them all the properties, methods, and behavior of a Promise.
const API = {
  /* ... */

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return $.ajax({
      method: 'GET',
      url: this.url + _endpoint
    });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return $.ajax({
      method: 'POST',
      url: this.url + _endpoint,
      data: _body
    });
  }
};
But even if jQuery's $.ajax() didn't return a Promise, you can always wrap anything in a new Promise(). All good. Maintainability++!
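As a rough sketch of that wrapping (where legacyAjax is a made-up stand-in for any callback-based request function):

// Hypothetical sketch: `legacyAjax` stands in for any callback-based
// request function that doesn't return a Promise on its own.
function requestAsPromise(url) {
  return new Promise((resolve, reject) => {
    legacyAjax(url, (error, data) => {
      if (error) {
        reject(error);
      } else {
        resolve(data);
      }
    });
  });
}

// The public methods can then keep their Promise-based contract, e.g.:
// get(_endpoint) { return requestAsPromise(this.url + _endpoint); }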
Now let's abstract away the receiving and storing of the data locally.
Data Repository
Let's assume we need to fetch the current weather. The API returns the temperature, feels-like temperature, wind speed (m/s), pressure (hPa) and humidity (%). In a common pattern, in order to keep the JSON response as slim as possible, attributes are compressed down to their first letter. So here's what we receive from the server:
{
  "t": 30,
  "f": 32,
  "w": 6.7,
  "p": 1012,
  "h": 38
}
We could go ahead and use API.get('weather').t and API.get('weather').w wherever we need it, but that doesn't look semantically awesome. I'm not a fan of the one-letter-not-much-context naming.
Additionally, let's say we don't use the humidity (h) and the feels-like temperature (f) anywhere. We don't need them. Actually, the server might return a lot of other information, but we might want to use only a couple of parameters. Not restricting what our weather module actually needs (and stores) could grow into a big overhead.
Enter repository-ish pattern abstraction!
import API from './api.js'; // Import it into your code however you like

const WeatherRepository = {
  _normalizeData(currentWeather) {
    // Take only what our app needs and nothing more.
    const { t, w, p } = currentWeather;

    return {
      temperature: t,
      windspeed: w,
      pressure: p
    };
  },

  /**
   * Get current weather.
   * @return {Promise}
   */
  get() {
    return API.get('/weather')
      .then(this._normalizeData);
  }
};
Now, throughout our codebase, we use WeatherRepository.get() and access meaningful attributes like .temperature and .windspeed. Better!
Additionally, via the _normalizeData() we expose only parameters we need.
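For example, a (hypothetical) consumer would look something like this:

// Hypothetical consumer code — it only knows about the repository
// and the meaningful attribute names, not the raw API response.
WeatherRepository.get()
  .then(weather => {
    console.log(`${weather.temperature} degrees, wind ${weather.windspeed} m/s`);
  });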
There is one more big benefit. Imagine we need to wire-up our app with another weather API. Surprise, surprise, this one's response attribute names are different:
{
  "temp": 30,
  "feels": 32,
  "wind": 6.7,
  "press": 1012,
  "hum": 38
}
No worries! With our WeatherRepository abstraction, all we need to tweak is the _normalizeData() method! Not a single other module (or file).
const WeatherRepository = {
  _normalizeData(currentWeather) {
    // Take only what our app needs and nothing more.
    const { temp, wind, press } = currentWeather;

    return {
      temperature: temp,
      windspeed: wind,
      pressure: press
    };
  },

  /* ... */
};
The attribute names of the API response object are not tightly coupled with our codebase. Maintainability++!
Down the road, say we want to display the cached weather info if the currently fetched data is not older than 15 minutes. So, we choose to use localStorage to store the weather info, instead of doing an actual network request and calling the API each time WeatherRepository.get() is referenced.
As long as WeatherRepository.get() returns a Promise, we don't need to change the implementation in any other module. All other modules which want to access the current weather don't (and shouldn't) care how the data is retrieved - if it comes from the local storage, from an API request, via Fetch API or via jQuery's $.ajax(). That's irrelevant. They only care to receive it in the "agreed" format they implemented - a Promise which wraps the actual weather data.
So, we introduce two "private" methods: _isDataUpToDate(), to check if our data is older than 15 minutes or not, and _storeData(), to simply store our data in the browser storage.
const WeatherRepository = {
  /* ... */

  /**
   * Checks whether the data is up to date or not.
   * @return {Boolean}
   */
  _isDataUpToDate(_localStore) {
    const isDataMissing =
      _localStore === null || Object.keys(_localStore.data).length === 0;

    if (isDataMissing) {
      return false;
    }

    const { lastFetched } = _localStore;
    const outOfDateAfter = 15 * 60 * 1000; // 15 minutes

    const isDataUpToDate =
      (new Date().valueOf() - lastFetched) < outOfDateAfter;

    return isDataUpToDate;
  },

  _storeData(_weather) {
    window.localStorage.setItem('weather', JSON.stringify({
      lastFetched: new Date().valueOf(),
      data: _weather
    }));

    // Return the data so the promise chain in get() keeps resolving with it.
    return _weather;
  },

  /**
   * Get current weather.
   * @return {Promise}
   */
  get() {
    const localData = JSON.parse(window.localStorage.getItem('weather'));

    if (this._isDataUpToDate(localData)) {
      // Resolve with the cached weather data itself.
      return new Promise(_resolve => _resolve(localData.data));
    }

    return API.get('/weather')
      .then(this._normalizeData)
      .then(this._storeData);
  }
};
Finally, we tweak the get() method: in case the weather data is up to date, we wrap it in a Promise and we return it. Otherwise - we issue an API call. Awesome!
There could be other use-cases, but I hope you got the idea. If a change requires you to tweak only one module - that's excellent! You designed the implementation in a maintainable way!
If you decide to use this repository-ish pattern, you might notice that it leads to some code and logic duplication, because all data repositories (entities) you define in your project will probably have methods like _isDataUpToDate(), _normalizeData(), _storeData() and so on...
Since I use it heavily in my projects, I decided to create a library around this pattern that does exactly what I described in this article, and more!
Introducing SuperRepo
SuperRepo is a library that helps you implement best practices for working with and storing data on the client-side.
/**
 * 1. Define where you want to store the data,
 *    in this example, in the LocalStorage.
 *
 * 2. Then - define a name of your data repository,
 *    it's used for the LocalStorage key.
 *
 * 3. Define when the data will get out of date.
 *
 * 4. Finally, define your data model, set custom attribute name
 *    for each response item, like we did above with `_normalizeData()`.
 *    In the example, server returns the params 't', 'w', 'p',
 *    we map them to 'temperature', 'windspeed', and 'pressure' instead.
 */
const WeatherRepository = new SuperRepo({
  storage: 'LOCAL_STORAGE',          // [1]
  name: 'weather',                   // [2]
  outOfDateAfter: 5 * 60 * 1000,     // 5 min // [3]
  request: () => API.get('weather'), // Function that returns a Promise
  dataModel: {                       // [4]
    temperature: 't',
    windspeed: 'w',
    pressure: 'p'
  }
});

/**
 * From here on, you can use the `.getData()` method to access your data.
 * It will first check if our data is outdated (based on the `outOfDateAfter`).
 * If so - it will do a server request to get fresh data,
 * otherwise - it will get it from the cache (Local Storage).
 */
WeatherRepository.getData().then(data => {
  // Do something awesome.
  console.log(`It is ${data.temperature} degrees`);
});
The library does the same things we implemented before:

Gets data from the server (if it's missing or out of date on our side) or otherwise - gets it from the cache.
Just like we did with _normalizeData(), the dataModel option applies a mapping to our rough data. This means:

Throughout our codebase, we will access meaningful and semantic attributes like
.temperature and .windspeed instead of .t and .w.
Expose only parameters you need and simply don't include any others.
If the response attributes names change (or you need to wire-up another API with different response structure), you only need to tweak here - in only 1 place of your codebase.

Plus, a few additional improvements:

Performance: if WeatherRepository.getData() is called multiple times from different parts of our app, only 1 server request is triggered.
Scalability:

You can store the data in the localStorage, in the browser storage (if you're building a browser extension), or in a local variable (if you don't want to store data across browser sessions). See the options for the storage setting.
You can initiate an automatic data sync with WeatherRepository.initSyncer(). This will initiate a setInterval, which will countdown to the point when the data is out of date (based on the outOfDateAfter value) and will trigger a server request to get fresh data. Sweet.

To use SuperRepo, install (or simply download) it with NPM or Bower:
npm install --save super-repo
Then, import it into your code via one of the 3 methods available:

Static HTML:
<script src="/node_modules/super-repo/src/index.js"></script>

Using ES6 Imports:
// If transpiler is configured (Traceur Compiler, Babel, Rollup, Webpack)
import SuperRepo from 'super-repo';

… or using CommonJS Imports
// If module loader is configured (RequireJS, Browserify, Neuter)
const SuperRepo = require('super-repo');

And finally, define your SuperRepositories :)
For advanced usage, read the documentation I wrote. Examples included!
Summary
The abstractions I described above could be one fundamental part of the architecture and software design of your app. As your experience grows, try to think about and apply similar concepts not only when working with remote data, but in other cases where they make sense, too.
When implementing a feature, always try to discuss change resilience, maintainability, and scalability with your team. Future you will thank you for that!

The Importance Of JavaScript Abstractions When Working With Remote Data is a post from CSS-Tricks
Source: CssTricks


Creating a Static API from a Repository

When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people's browsers as HTML pages. Over the years, countless products used that simple model to offer all-in-one solutions for content management and delivery on the web.
Fast-forward a decade or so and developers are presented with a very different reality. With such a vast landscape of devices consuming digital content, it's now imperative to consider how content can be delivered not only to web browsers, but also to native mobile applications, IoT devices, and other mediums yet to come.
Even within the realms of the web browser, things have also changed: client-side applications are becoming more and more ubiquitous, with challenges to content delivery that didn't exist in traditional server-rendered pages.

The answer to these challenges almost invariably involves creating an API — a way of exposing data in such a way that it can be requested and manipulated by virtually any type of system, regardless of its underlying technology stack. Content represented in a universal format like JSON is fairly easy to pass around, from a mobile app to a server, from the server to a client-side application and pretty much anything else.
Embracing this API paradigm comes with its own set of challenges. Designing, building and deploying an API is not exactly straightforward, and can actually be a daunting task to less experienced developers or to front-enders that simply want to learn how to consume an API from their React/Angular/Vue/Etc applications without getting their hands dirty with database engines, authentication or data backups.
Back to Basics
I love the simplicity of static sites and I particularly like this new era of static site generators. The idea of a website using a group of flat files as a data store is also very appealing to me, and using something like GitHub means the possibility of having a data set available as a public repository on a platform that allows anyone to easily contribute, with pull requests and issues being excellent tools for moderation and discussion.
Imagine having a site where people find a typo in an article and submit a pull request with the correction, or accepting submissions for new content with an open forum for discussion, where the community itself can filter and validate what ultimately gets published. To me, this is quite powerful.
I started toying with the idea of applying these principles to the process of building an API instead of a website — if programs like Jekyll or Hugo take a bunch of flat files and create HTML pages from them, could we build something to turn them into an API instead?
Static Data Stores
Let me show you two examples that I came across recently of GitHub repositories used as data stores, along with some thoughts on how they're structured.
The first example is the ESLint website, where every single ESLint rule is listed along with its options and associated examples of correct and incorrect code. Information for each rule is stored in a Markdown file annotated with a YAML front matter section. Storing the content in this human-friendly format makes it easy for people to author and maintain, but not very simple for other applications to consume programmatically.
The second example of a static data store is MDN's browser-compat-data, a compendium of browser compatibility information for CSS, JavaScript and other technologies. Data is stored as JSON files, which conversely to the ESLint case, are a breeze to consume programmatically but a pain for people to edit, as JSON is very strict and human errors can easily lead to malformed files.
There are also some limitations stemming from the way data is grouped together. ESLint has a file per rule, so there's no way to, say, get a list of all the rules specific to ES6, unless they chuck them all into the same file, which would be highly impractical. The same applies to the structure used by MDN.
A static site generator solves these two problems for normal websites — they take human-friendly files, like Markdown, and transform them into something tailored for other systems to consume, typically HTML. They also provide ways, through their template engines, to take the original files and group their rendered output in any way imaginable.
Similarly, the same concept applied to APIs — a static API generator? — would need to do the same, allowing developers to keep data in smaller files, using a format they're comfortable with for an easy editing process, and then process them in such a way that multiple endpoints with various levels of granularity can be created, transformed into a format like JSON.
Building a Static API Generator
Imagine an API with information about movies. Each title should have information about the runtime, budget, revenue, and popularity, and entries should be grouped by language, genre, and release year.
To represent this dataset as flat files, we could store each movie and its attributes as a text, using YAML or any other data serialization language.
budget: 170000000
website: http://marvel.com/guardians
tmdbID: 118340
imdbID: tt2015381
popularity: 50.578093
revenue: 773328629
runtime: 121
tagline: All heroes start somewhere.
title: Guardians of the Galaxy
To group movies, we can store the files within language, genre and release year sub-directories, as shown below.
input/
├── english
│   ├── action
│   │   ├── 2014
│   │   │   └── guardians-of-the-galaxy.yaml
│   │   ├── 2015
│   │   │   ├── jurassic-world.yaml
│   │   │   └── mad-max-fury-road.yaml
│   │   ├── 2016
│   │   │   ├── deadpool.yaml
│   │   │   └── the-great-wall.yaml
│   │   └── 2017
│   │       ├── ghost-in-the-shell.yaml
│   │       ├── guardians-of-the-galaxy-vol-2.yaml
│   │       ├── king-arthur-legend-of-the-sword.yaml
│   │       ├── logan.yaml
│   │       └── the-fate-of-the-furious.yaml
│   └── horror
│       ├── 2016
│       │   └── split.yaml
│       └── 2017
│           ├── alien-covenant.yaml
│           └── get-out.yaml
└── portuguese
    └── action
        └── 2016
            └── tropa-de-elite.yaml
Without writing a line of code, we can get something that is kind of an API (although not a very useful one) by simply serving the `input/` directory above using a web server. To get information about a movie, say, Guardians of the Galaxy, consumers would hit:
http://localhost/english/action/2014/guardians-of-the-galaxy.yaml
and get the contents of the YAML file.
Using this very crude concept as a starting point, we can build a tool — a static API generator — to process the data files in such a way that their output resembles the behavior and functionality of a typical API layer.
Format translation
The first issue with the solution above is that the format chosen to author the data files might not necessarily be the best format for the output. A human-friendly serialization format like YAML or TOML should make the authoring process easier and less error-prone, but the API consumers will probably expect something like XML or JSON.
Our static API generator can easily solve this by visiting each data file and transforming its contents to JSON, saving the result to a new file with the exact same path as the source, except for the parent directory (e.g. `output/` instead of `input/`), leaving the original untouched.
This results in a 1-to-1 mapping between source and output files. If we now served the `output/` directory, consumers could get data for Guardians of the Galaxy in JSON by hitting:
http://localhost/english/action/2014/guardians-of-the-galaxy.json
whilst still allowing editors to author files using YAML or other human-friendly formats.
{
  "budget": 170000000,
  "website": "http://marvel.com/guardians",
  "tmdbID": 118340,
  "imdbID": "tt2015381",
  "popularity": 50.578093,
  "revenue": 773328629,
  "runtime": 121,
  "tagline": "All heroes start somewhere.",
  "title": "Guardians of the Galaxy"
}
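To give a rough idea of the translation step, here's an illustrative sketch in Node.js (using the js-yaml package; this is just a sketch of the concept, not the actual generator code):

// Illustrative sketch of the YAML-to-JSON translation step — not the
// actual static-api-generator implementation. Assumes `js-yaml` is installed.
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

function convertTree(inputDir, outputDir) {
  fs.mkdirSync(outputDir, { recursive: true });

  fs.readdirSync(inputDir, { withFileTypes: true }).forEach(entry => {
    const inputPath = path.join(inputDir, entry.name);

    if (entry.isDirectory()) {
      // Recurse into language/genre/year sub-directories.
      convertTree(inputPath, path.join(outputDir, entry.name));
    } else if (entry.name.endsWith('.yaml')) {
      // Parse the YAML source and write a JSON file at the same relative path.
      const data = yaml.load(fs.readFileSync(inputPath, 'utf8'));
      const outputName = entry.name.replace(/\.yaml$/, '.json');

      fs.writeFileSync(
        path.join(outputDir, outputName),
        JSON.stringify(data, null, 2)
      );
    }
  });
}

convertTree('input', 'output');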
Aggregating data
With consumers now able to consume entries in the best-suited format, let's look at creating endpoints where data from multiple entries are grouped together. For example, imagine an endpoint that lists all movies in a particular language and of a given genre.
The static API generator can generate this by visiting all subdirectories on the level being used to aggregate entries, and recursively saving their sub-trees to files placed at the root of said subdirectories. This would generate endpoints like:
http://localhost/english/action.json
which would allow consumers to list all action movies in English, or
http://localhost/english.json
to get all English movies.
{
  "results": [
    {
      "budget": 150000000,
      "website": "http://www.thegreatwallmovie.com/",
      "tmdbID": 311324,
      "imdbID": "tt2034800",
      "popularity": 21.429666,
      "revenue": 330642775,
      "runtime": 103,
      "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
      "title": "The Great Wall"
    },
    {
      "budget": 58000000,
      "website": "http://www.foxmovies.com/movies/deadpool",
      "tmdbID": 293660,
      "imdbID": "tt1431045",
      "popularity": 23.993667,
      "revenue": 783112979,
      "runtime": 108,
      "tagline": "Witness the beginning of a happy ending",
      "title": "Deadpool"
    }
  ]
}
To make things more interesting, we can also make it capable of generating an endpoint that aggregates entries from multiple diverging paths, like all movies released in a particular year. At first, it may seem like just another variation of the examples shown above, but it's not. The files corresponding to the movies released in any given year may be located at an indeterminate number of directories — for example, the movies from 2016 are located at `input/english/action/2016`, `input/english/horror/2016` and `input/portuguese/action/2016`.
We can make this possible by creating a snapshot of the data tree and manipulating it as necessary, changing the root of the tree depending on the aggregator level chosen, allowing us to have endpoints like http://localhost/2016.json.
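Conceptually, this boils down to re-grouping a flat list of entries by a different key. A hypothetical sketch:

// Hypothetical sketch: given a flat list of { year, movie } entries collected
// from every language/genre path, re-group them by year regardless of origin.
function groupByYear(entries) {
  return entries.reduce((groups, entry) => {
    (groups[entry.year] = groups[entry.year] || []).push(entry.movie);

    return groups;
  }, {});
}

// Each group could then be written out as its own endpoint,
// e.g. the "2016" group becomes 2016.json.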
Pagination
Just like with traditional APIs, it's important to have some control over the number of entries added to an endpoint — as our movie data grows, an endpoint listing all English movies would probably have thousands of entries, making the payload extremely large and consequently slow and expensive to transmit.
To fix that, we can define the maximum number of entries an endpoint can have, and every time the static API generator is about to write entries to a file, it divides them into batches and saves them to multiple files. If there are too many action movies in English to fit in:
http://localhost/english/action.json
we'd have
http://localhost/english/action-2.json
and so on.
For easier navigation, we can add a metadata block informing consumers of the total number of entries and pages, as well as the URL of the previous and next pages when applicable.
{
  "results": [
    {
      "budget": 150000000,
      "website": "http://www.thegreatwallmovie.com/",
      "tmdbID": 311324,
      "imdbID": "tt2034800",
      "popularity": 21.429666,
      "revenue": 330642775,
      "runtime": 103,
      "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
      "title": "The Great Wall"
    },
    {
      "budget": 58000000,
      "website": "http://www.foxmovies.com/movies/deadpool",
      "tmdbID": 293660,
      "imdbID": "tt1431045",
      "popularity": 23.993667,
      "revenue": 783112979,
      "runtime": 108,
      "tagline": "Witness the beginning of a happy ending",
      "title": "Deadpool"
    }
  ],
  "metadata": {
    "itemsPerPage": 2,
    "pages": 3,
    "totalItems": 6,
    "nextPage": "/english/action-3.json",
    "previousPage": "/english/action.json"
  }
}
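The batching itself could be as simple as this sketch (again, an illustration of the idea rather than the actual generator code):

// Minimal sketch of the batching idea: split the full list of entries
// into pages of `itemsPerPage` before writing the endpoint files.
function paginate(entries, itemsPerPage) {
  const pages = [];

  for (let i = 0; i < entries.length; i += itemsPerPage) {
    pages.push(entries.slice(i, i + itemsPerPage));
  }

  return pages; // pages[0] -> action.json, pages[1] -> action-2.json, ...
}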
Sorting
It's useful to be able to sort entries by any of their properties, like sorting movies by popularity in descending order. This is a trivial operation that takes place at the point of aggregating entries.
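For example, sorting by popularity in descending order could be a one-liner applied just before the entries are written out (an illustrative sketch, not the library's actual code):

// Sort aggregated entries by popularity, descending, before writing them out.
function sortByPopularity(entries) {
  return entries.slice().sort((a, b) => b.popularity - a.popularity);
}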
Putting it all together
With the specification done, it was time to build the actual static API generator app. I decided to use Node.js and to publish it as an npm module so that anyone can take their data and get an API off the ground effortlessly. I called the module static-api-generator (original, right?).
To get started, create a new folder and place your data structure in a sub-directory (e.g. `input/` from earlier). Then initialize a blank project and install the dependencies.
npm init -y
npm install static-api-generator --save
The next step is to load the generator module and create an API. Start a blank file called `server.js` and add the following.
const API = require('static-api-generator')
const moviesApi = new API({
  blueprint: 'source/:language/:genre/:year/:movie',
  outputPath: 'output'
})
In the example above we start by defining the API blueprint, which is essentially naming the various levels so that the generator knows whether a directory represents a language or a genre just by looking at its depth. We also specify the directory where the generated files will be written to.
Next, we can start creating endpoints. For something basic, we can generate an endpoint for each movie. The following will give us endpoints like /english/action/2016/deadpool.json.
moviesApi.generate({
  endpoints: ['movie']
})
We can aggregate data at any level. For example, we can generate additional endpoints for genres, like /english/action.json.
moviesApi.generate({
  endpoints: ['genre', 'movie']
})
To aggregate entries from multiple diverging paths of the same parent, like all action movies regardless of their language, we can specify a new root for the data tree. This will give us endpoints like /action.json.
moviesApi.generate({
  endpoints: ['genre', 'movie'],
  root: 'genre'
})
By default, an endpoint for a given level will include information about all its sub-levels — for example, an endpoint for a genre will include information about languages, years and movies. But we can change that behavior and specify which levels to include and which ones to bypass.
The following will generate endpoints for genres with information about languages and movies, bypassing years altogether.
moviesApi.generate({
  endpoints: ['genre'],
  levels: ['language', 'movie'],
  root: 'genre'
})
Finally, type npm start to generate the API and watch the files being written to the output directory. Your new API is ready to serve - enjoy!
Deployment
At this point, this API consists of a bunch of flat files on a local disk. How do we get it live? And how do we make the generation process described above part of the content management flow? Surely we can't ask editors to manually run this tool every time they want to make a change to the dataset.
GitHub Pages + Travis CI
If you're using a GitHub repository to host the data files, then GitHub Pages is a perfect contender to serve them. It works by taking all the files committed to a certain branch and making them accessible on a public URL, so if you take the API generated above and push the files to a gh-pages branch, you can access your API on http://YOUR-USERNAME.github.io/english/action/2016/deadpool.json.
We can automate the process with a CI tool, like Travis. It can listen for changes on the branch where the source files will be kept (e.g. master), run the generator script and push the new set of files to gh-pages. This means that the API will automatically pick up any change to the dataset within a matter of seconds – not bad for a static API!
After signing up to Travis and connecting the repository, go to the Settings panel and scroll down to Environment Variables. Create a new variable called GITHUB_TOKEN and insert a GitHub Personal Access Token with write access to the repository – don't worry, the token will be safe.
Finally, create a file named `.travis.yml` on the root of the repository with the following.
language: node_js

node_js:
  - "7"

script: npm start

deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN
  on:
    branch: master
  local_dir: "output"
And that's it. To see if it works, commit a new file to the master branch and watch Travis build and publish your API. Ah, GitHub Pages has full support for CORS, so consuming the API from a front-end application using Ajax requests will work like a breeze.
You can check out the demo repository for my Movies API and see some of the endpoints in action:

Movie endpoint (Deadpool)
List of genres with languages and years
List of languages and years by genre (Action)
Full list of languages with genres, years and movies

Going full circle with Staticman
Perhaps the most blatant consequence of using a static API is that it's inherently read-only – we can't simply set up a POST endpoint to accept data for new movies if there's no logic on the server to process it. If this is a strong requirement for your API, that's a sign that a static approach probably isn't the best choice for your project, much in the same way that choosing Jekyll or Hugo for a site with high levels of user-generated content is probably not ideal.
But if you just need some basic form of accepting user data, or you're feeling wild and want to go full throttle on this static API adventure, there's something for you. Last year, I created a project called Staticman, which tries to solve the exact problem of adding user-generated content to static sites.
It consists of a server that receives POST requests, submitted from a plain form or sent as a JSON payload via Ajax, and pushes data as flat files to a GitHub repository. For every submission, a pull request will be created for your approval (or the files will be committed directly if you disable moderation).
You can configure the fields it accepts, add validation, spam protection and also choose the format of the generated files, like JSON or YAML.
This is perfect for our static API setup, as it allows us to create a user-facing form or a basic CMS interface where new genres or movies can be added. When a form is submitted with a new entry, we'll have:

Staticman receives the data, writes it to a file and creates a pull request
As the pull request is merged, the branch with the source files (master) will be updated
Travis detects the update and triggers a new build of the API
The updated files will be pushed to the public branch (gh-pages)
The live API now reflects the submitted entry.

Parting thoughts
To be clear, this article does not attempt to revolutionize the way production APIs are built. More than anything, it takes the existing and ever-popular concept of statically-generated sites and translates it to the context of APIs, hopefully keeping the simplicity and robustness associated with the paradigm.
In times where APIs are such fundamental pieces of any modern digital product, I'm hoping this tool can democratize the process of designing, building and deploying them, and eliminate the entry barrier for less experienced developers.
The concept could be extended even further, introducing concepts like custom generated fields, which are automatically populated by the generator based on user-defined logic that takes into account not only the entry being created, but also the dataset as a whole – for example, imagine a rank field for movies where a numeric value is computed by comparing the popularity value of an entry against the global average.
If you decide to use this approach and have any feedback/issues to report, or even better, if you actually build something with it, I'd love to hear from you!
References

static-api-generator on GitHub
movies-api on GitHub
Staticman on GitHub

Creating a Static API from a Repository is a post from CSS-Tricks
Source: CssTricks


JavaScript Scope and Closures

Scopes and closures are important in JavaScript. But, they were confusing for me when I first started. Here's an explanation of scopes and closures to help you understand what they are.

Let's start with scopes.
Scope
A scope in JavaScript defines what variables you have access to. There are two kinds of scope – global scope and local scope.
Global scope
If a variable is declared outside all functions or curly braces ({}), it is said to be defined in the global scope.
This is true only with JavaScript in web browsers. You declare global variables in Node.js differently, but we won't go into Node.js in this article.
const globalVariable = 'some value'
Once you've declared a global variable, you can use that variable anywhere in your code, even in functions.
const hello = 'Hello CSS-Tricks Reader!'

function sayHello () {
  console.log(hello)
}

console.log(hello) // 'Hello CSS-Tricks Reader!'
sayHello() // 'Hello CSS-Tricks Reader!'
Although you can declare variables in the global scope, it is advised not to. This is because there is a chance of naming collisions, where two or more variables are named the same. If you declared your variables with const or let, you would receive an error whenever a name collision happens. This is undesirable.
// Don't do this!
let thing = 'something'
let thing = 'something else' // Error, thing has already been declared
If you declare your variables with var, your second variable overwrites the first one after it is declared. This is also undesirable, as it makes your code hard to debug.
// Don't do this!
var thing = 'something'
var thing = 'something else' // perhaps somewhere totally different in your code
console.log(thing) // 'something else'
So, you should always declare local variables, not global variables.
Local Scope
Variables that are usable only in a specific part of your code are considered to be in a local scope. These variables are also called local variables.
In JavaScript, there are two kinds of local scope: function scope and block scope.
Let's talk about function scopes first.
Function scope
When you declare a variable in a function, you can access this variable only within the function. You can't get this variable once you're out of the function.
In the example below, the variable hello is in the sayHello scope:
function sayHello () {
  const hello = 'Hello CSS-Tricks Reader!'
  console.log(hello)
}

sayHello() // 'Hello CSS-Tricks Reader!'
console.log(hello) // Error, hello is not defined
Block scope
When you declare a variable with const or let within a curly brace ({}), you can access this variable only within that curly brace.
In the example below, you can see that hello is scoped to the curly brace:
{
  const hello = 'Hello CSS-Tricks Reader!'
  console.log(hello) // 'Hello CSS-Tricks Reader!'
}

console.log(hello) // Error, hello is not defined
The block scope is a subset of a function scope since functions need to be declared with curly braces (unless you're using arrow functions with an implicit return).
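For example, an arrow function with an implicit return has no curly braces, and hence no block of its own:

// No curly braces here, so no block is created
const double = num => num * 2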
Function hoisting and scopes
Functions, when declared with a function declaration, are always hoisted to the top of the current scope. So, these two are equivalent:
// This is the same as the one below
sayHello()
function sayHello () {
  console.log('Hello CSS-Tricks Reader!')
}

// This is the same as the code above
function sayHello () {
  console.log('Hello CSS-Tricks Reader!')
}
sayHello()
When declared with a function expression, functions are not hoisted to the top of the current scope.
sayHello() // Error, sayHello is not defined
const sayHello = function () {
  console.log('Hello CSS-Tricks Reader!')
}
Because of these two variations, function hoisting can potentially be confusing, and should not be used. Always declare your functions before you use them.
Functions do not have access to each other's scopes
Functions do not have access to each other's scopes when you define them separately, even though one function may be used in another.
In this example below, second does not have access to firstFunctionVariable.
function first () {
  const firstFunctionVariable = `I'm part of first`
}

function second () {
  first()
  console.log(firstFunctionVariable) // Error, firstFunctionVariable is not defined
}
Nested scopes
When a function is defined in another function, the inner function has access to the outer function's variables. This behavior is called lexical scoping.
However, the outer function does not have access to the inner function's variables.
function outerFunction () {
  const outer = `I'm the outer function!`

  function innerFunction() {
    const inner = `I'm the inner function!`
    console.log(outer) // I'm the outer function!
  }

  console.log(inner) // Error, inner is not defined
}
To visualize how scopes work, you can imagine one-way glass. You can see the outside, but people from the outside cannot see you.
Scopes in functions behave like a one-way-glass. You can see the outside, but people outside can't see you
If you have scopes within scopes, visualize multiple layers of one-way glass.
Multiple layers of functions mean multiple layers of one-way glass
After understanding everything about scopes so far, you're well primed to figure out what closures are.
Closures
Whenever you create a function within another function, you have created a closure. The inner function is the closure. This closure is usually returned so you can use the outer function's variables at a later time.
function outerFunction () {
  const outer = `I see the outer variable!`

  function innerFunction() {
    console.log(outer)
  }

  return innerFunction
}

outerFunction()() // I see the outer variable!
Since the inner function is returned, you can also shorten the code a little by writing a return statement while declaring the function.
function outerFunction () {
  const outer = `I see the outer variable!`

  return function innerFunction() {
    console.log(outer)
  }
}

outerFunction()() // I see the outer variable!
Since closures have access to the variables in the outer function, they are usually used for two things:

To control side effects
To create private variables

Controlling side effects with closures
Side effects happen when you do something aside from returning a value from a function. Many things can be side effects, like an Ajax request, a timeout or even a console.log statement:
function doSomething (x) {
  console.log('A console.log is a side effect!')
}
When you use closures to control side effects, you're usually concerned with ones that can mess up your code flow like Ajax or timeouts.
Let's go through this with an example to make things clearer.
Let's say you want to make a cake for your friend's birthday. This cake would take a second to make, so you wrote a function that logs `Made a cake` after one second.
I'm using ES6 arrow functions here to make the example shorter, and easier to understand.
function makeCake() {
  setTimeout(_ => console.log(`Made a cake`), 1000)
}
As you can see, this cake making function has a side effect: a timeout.
Let's further say you want your friend to choose a flavor for the cake. To do so, you can add a flavor to your makeCake function.
function makeCake(flavor) {
  setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
}
When you run the function, notice the cake gets made immediately after one second.
makeCake('banana')
// Made a banana cake!
The problem here is that you don't want to make the cake immediately after knowing the flavor. You want to make it later when the time is right.
To solve this problem, you can write a prepareCake function that stores your flavor. Then, return the makeCake closure within prepareCake.
From this point on, you can call the returned function whenever you want to, and the cake will be made within a second.
function prepareCake (flavor) {
  return function () {
    setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
  }
}

const makeCakeLater = prepareCake('banana')

// And later in your code...
makeCakeLater()
// Made a banana cake!
That's how closures are used to control side effects – you create a function that activates the inner closure at your whim.
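The same pattern works for any side effect, not just cakes. Here's a minimal sketch of my own (not from the cake example) that wraps an arbitrary action in a closure so you decide when it fires:
function defer(action) {
  return function () {
    setTimeout(action, 1000)
  }
}

const logLater = defer(_ => console.log('This side effect runs only when I say so'))

// And later in your code...
logLater() // logs after one second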
Private variables with closures
As you know by now, variables created in a function cannot be accessed outside the function. Since they can't be accessed, they are also called private variables.
However, sometimes you need to access such a private variable. You can do so with the help of closures.
function secret (secretCode) {
return {
saySecretCode () {
console.log(secretCode)
}
}
}

const theSecret = secret('CSS Tricks is amazing')
theSecret.saySecretCode()
// 'CSS Tricks is amazing'
saySecretCode in this example above is the only function (a closure) that exposes the secretCode outside the original secret function. As such, it is also called a privileged function.
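If it helps to see one more example, here's a classic illustration of the same idea (my own sketch, not from the article): a counter whose value lives entirely inside the closure and can only be changed through its privileged functions.
function createCounter () {
  let count = 0 // private variable; nothing outside can touch it directly

  return {
    increment () { count = count + 1 },
    current () { return count }
  }
}

const counter = createCounter()
counter.increment()
counter.increment()
console.log(counter.current()) // 2
console.log(counter.count) // undefined, because the only way in is through the closures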
Debugging scopes with DevTools
Chrome and Firefox's DevTools make it simple for you to debug variables you can access in the current scope. There are two ways to use this functionality.
The first way is to add the debugger keyword in your code. This causes JavaScript execution in browsers to pause so you can debug.
Here's an example with the prepareCake:
function prepareCake (flavor) {
  // Adding debugger
  debugger
  return function () {
    setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
  }
}

const makeCakeLater = prepareCake('banana')
If you open your DevTools and navigate to the Sources tab in Chrome (or Debugger tab in Firefox), you would see the variables available to you.
Debugging prepareCake's scope
You can also shift the debugger keyword into the closure. Notice how the scope variables change this time:
function prepareCake (flavor) {
  return function () {
    // Adding debugger
    debugger
    setTimeout(_ => console.log(`Made a ${flavor} cake!`), 1000)
  }
}

const makeCakeLater = prepareCake('banana')
Debugging the closure scope
The second way to use this debugging functionality is to add a breakpoint to your code directly in the sources (or debugger) tab by clicking on the line number.
Debugging scopes by adding breakpoints
Wrapping up
Scopes and closures aren't incredibly hard to understand. They're pretty simple once you know how to see them through a one-way glass.
When you declare a variable in a function, you can only access it in the function. These variables are said to be scoped to the function.
If you define any inner function within another function, this inner function is called a closure. It retains access to the variables created in the outer function.
Feel free to pop by and ask any questions you have. I'll get back to you as soon as I can.
If you liked this article, you may also like other front-end-related articles I write on my blog and my newsletter. I also have a brand new (and free!) email course: JavaScript Roadmap.

JavaScript Scope and Closures is a post from CSS-Tricks
Source: CssTricks


Template Doesn’t Mean Cookie Cutter

The Challenge
The mere mention of website templates makes some clients bristle. Nobody likes being told they have to conform to a set of rules they feel weren’t written with them in mind. They also believe that their site will look like everyone else’s and not meet their unique needs.
Developers and designers also get concerned with templates, unsure if content editors will put the correct types of content in pre-built components. Sites that the development and design teams spent a lot of time building can end up looking unprofessional if the templates aren't used properly. No one wins in this scenario.
The Solution
Let’s first dispel the myth that using templates means your site will look like everyone else’s. When we talk about templates, we aren’t talking about simple differences in colors and fonts. Our Lectronimo website solution takes advantage of DrupalCoin Blockchain’s modularity and Panelizer to deliver different frameworks that solve common UX mistakes, and still allows creativity when it comes to content.

The Lectronimo templates are built for many different components that can be mixed and matched to highlight your best content, and they don’t require you to strictly adhere to a formula. People with lots of videos aren’t limited by the page structure, and people with complex written content have various ways to display that information so that users can scan and explore -- without feeling like they’re reading a novel.
To keep each Lectronimo website solution maintaining its professional appearance and supporting the content strategy, we worked by the philosophy that any content our users can place should actually work, both in terms of functionality and in design. To us this meant that we needed to place some limits on where our users can put things. We’ve applied some preprocess hooks to the Panels ‘Add Content’ dialog to ensure that whenever a user goes to add content to any region, the list of content types will have been filtered accordingly. Our custom IPE also uses Javascript variables via Ajax commands to prevent content editors from dragging & dropping existing content into invalid regions.
At the same time, we didn’t want to build a set of draconian rules that would leave users feeling trapped or limited, so we primarily assigned our region types based on where content might make sense, and avoided using this system as a crutch to resolve design limitations. For example, there’s a content plugin specifically for adding short intro text to the top of a page. From our experience we knew it would create an inconsistent experience to have that same style of text appear in the middle of the page, or in a sidebar, or anywhere other than the top of the content.
To resolve the design problems that arise when large content gets placed into small regions, our content plugins work in tandem with our layout templates. Plugins are enabled to automatically swap out some styles based on their region placement. We achieved this by establishing a convention that every region in every panel layout must follow one of three spatial patterns: Full Width, Wide, or Narrow.
A region declares its pattern just by including a class in the layout template. From there, the principles are very much like responsive design: Just as we would apply different styles on small displays vs. large displays through media queries, we can apply extra styles to content within narrow or wide columns via our standardized classnames. This contributes to a robust design experience, allowing content authors to place content freely without worrying about breaking the design. Everybody wins!
If you’re interested in learning more about our journey to develop our Lectronimo solution, check out parts 1 & 2 to this blog series: Making a Custom, Acquia-Hosted Site Affordable for Higher Ed, and Custom Theming that is Robust and Flexible Enough to Continue to Impress.
We’re excited to bring Lectronimo to market! If you’re a higher ed institution exploring options for your upcoming redesign and want to know more about Lectronimo, or if you’re in another market and want to talk about your next project, Digital Wave’s team is happy to help.
Source: http://dev.acquia.com/


Improving Conversations using the Perspective API

I recently came across an article by Rory Cellan-Jones about a new technology from Jigsaw, a development group at Google focused on making people safer online through technology. At the time they'd just released the first alpha version of what they call The Perspective API. It's a machine learning tool designed to rate a string of text (i.e. a comment) and provide you with a Toxicity Score, a number representing how toxic the text is.
The system learns by seeing how thousands of online conversations have been moderated and then scores new comments by assessing how "toxic" they are and whether similar language had led other people to leave conversations. What it's doing is trying to improve the quality of debate and make sure people aren't put off from joining in.
As the project is still in its infancy it doesn't do much more than that. Still, we can use it!

Starting with the API
To get started with using the API, you'll need to request API access from their website. I managed to get access within a few days. If you're interested in playing with this yourself, know that you might need to wait it out until they email you back. Once you get the email saying you have access, you'll need to log in to the Google Developer Console and get your API key. Create your credentials with the amount of security you'd like and then you're ready to get going!
Now you'll need to head over to the documentation on GitHub to learn a bit more about the project and find out how it actually works. The documentation includes lots of information about what features are currently available and what they're ultimately designed to achieve. Remember: the main point of the API is to provide a score of how toxic a comment is, so to do anything extra with that information will require some work.
Getting a Score with cURL
Let's use PHP's cURL functions to make the request and get the score. If you're not used to cURL, don't panic; it's relatively simple to get the hang of. If you want to try it within WordPress, it's even easier because there are native WordPress helper functions you can use. Let's start with the standard PHP method.
Whilst we walk through this, it's a good idea to have the PHP documentation open to refer to. To understand the fundamentals of cURL, we'll go through a couple of the core options we may need to use.
$params = array(
  'comment' => array(
    'text' => 'what a stupid question...'
  ),
  'languages' => array(
    'en'
  ),
  'requestedAttributes' => array(
    'TOXICITY' => new stdClass() // json_encode() turns this into an empty JSON object: {}
  )
);

$params = json_encode($params);

$req = curl_init();
curl_setopt($req, CURLOPT_URL, 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=YOUR-API-KEY');
curl_setopt($req, CURLOPT_POSTFIELDS, $params);
curl_setopt($req, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($req, CURLOPT_RETURNTRANSFER, true); // so curl_exec() returns the response body
$response = curl_exec($req);
curl_close($req);
These lines perform the different steps of making a cURL request to a server: you initialize the cURL request, set the options for the request, execute it, then close the connection. You'll then get your comment data back from the server in the form of JSON data, which is handy for a number of reasons.
Send An Ajax Request
As you get the response from the API in JSON format, you can also make an Ajax request to the API as well. This is handy if you don't want to dive too much into PHP and the method of using cURL requests. An example of an Ajax request (using jQuery) would look something like the following:
$.ajax({

data: {
comment: {
text: "this is such a stupid idea!!"
},
languages: ["en"],
requestedAttributes: {
TOXICITY: {}
}
},
type: 'post',
url: 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=YOUR-API-KEY',
success: function(response) {

console.log(response);

}

});
The data we get back is now logged to the console ready for us to debug it. Now we can decode the JSON data into an array and do something with it. Make sure you include your API key at the end of the URL in the Ajax request too, otherwise it won't work! Without it, you'll get an error about your authentication being invalid. Also, you don't have to stop here. You could take the example above a step further and log the score in a database as soon as you've got the data back, or provide feedback to the user on the front-end in the form of an alert.
The WordPress Way
If you're using WordPress (which is relevant here since WordPress has comment threads you might want to moderate) and you want to make a cURL request to the Perspective API, then it's even simpler. Using the Toxic Comments plugin as an example, you can do the following instead thanks to WordPress' exhaustive built-in functions. You won't need to do any of the following if you use the plugin, but it's worth explaining what the plugin does behind the scenes to achieve what we want to do here.
$request = wp_remote_post($url, $arguments);
This will make a POST request to the external resource for us without much legwork. There are other functions that you can use too, like one for GET requests, but we don't need to think about that right now. You then need to use another function to get the requested data back from the server. Yes, you're completely right. WordPress has a function for that:
$data = wp_remote_retrieve_body($request);
So that's great, but how do we actually use the API to get the data we want? Well, to start with if you just want to get the overall toxicity score, you'll need to use the following URL which will ask the API to read the comment and score it. It also has your API key at the end which you need to authenticate your request. Make sure you change it to yours!
https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=YOUR-API-KEY
It looks quite plain and if you visit it, it'll take you to a 404 page. But if you make a cURL request to it, either through your favorite CMS or via a simple PHP script, you'll end up getting data that might look similar to this:
{
"attributeScores": {
"TOXICITY": {
"summaryScore": {
"value": 0.567890,
"type": "PROBABILITY"
}
}
},
"languages": [
"en"
]
}
The score you'll get back from the API will be a number as a decimal. So if a comment gets a score of 50% toxicity, the score you'll actually get back from the API will be 0.5. You can then use this score to manipulate the way the comment is stored and shown to the end user by marking it as spam or creating a filter to let users show less or more toxic comments, much like Google has done in their example.
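To make that concrete, here's a rough sketch (with an arbitrary 0.8 threshold of my own choosing) of turning the score into a moderation decision once you've parsed the JSON:
// `response` stands in for the parsed JSON shown above
var response = {
  attributeScores: {
    TOXICITY: { summaryScore: { value: 0.56789, type: 'PROBABILITY' } }
  },
  languages: ['en']
};

var toxicity = response.attributeScores.TOXICITY.summaryScore.value;

if (toxicity >= 0.8) {
  console.log('Hold this comment for moderation');
} else {
  console.log('Comment looks fine, publish it');
}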

There are other bits of useful data you may want to look into as well. Things such as the context of the comment which can help you understand the intent of the comment without reading it firsthand.
Ultimately, with the kind of data we can expect to receive, it becomes possible to filter out comments with a particular intent and provide a nicer comment area in places where trolls often take over. Over time, as the API becomes more developed, we should expect the scoring to be more robust and more accurate in its analysis of the comments we send it.
Privacy and Censorship
This is a pretty hot topic these days. I can imagine some pushback on this, particularly because it involves sending your data to Google to have it analyzed and judged by Google's computers, which ultimately does have an effect on your voice and your ability to use it. Personally, I think the idea behind this is great and it works very well in practice. But when you think about its implementation on popular news websites and social media platforms, you can see how privacy and censorship could be a concern.
The Perspective API makes a great effort to score comments based on a highly complex algorithm, but it seems that there is still a long way to go yet in the fight to maintain more civil social spaces online.
Until then, play around with the API and let me know what you think! If you're not up for writing something from scratch, there are some public client libraries available now in both Node and Python so go for it! Also, remember to err on the side of caution as the API is still in an alpha phase for now so things may break. If you're feeling lazy, check out the quick start guide.

Improving Conversations using the Perspective API is a post from CSS-Tricks
Source: CssTricks


Creating Photorealistic 3D Graphics on the Web

Before becoming a web developer, I worked in the visual effects industry, creating award-winning, high-end 3D effects for movies and TV Shows such as Tron, The Thing, Resident Evil, and Vikings. To be able to create these effects, we would need to use highly sophisticated animation software such as Maya, 3Ds Max or Houdini and do long hours of offline rendering on Render Farms that consisted of hundreds of machines. It's because I worked with these tools for so long that I am now amazed by the state of the current web technology. We can now create and display high-quality 3D content right inside the web browser, in real time, using WebGL and Three.js.

Here is an example of a project that is built using these technologies. You can find more projects that use three.js on their website.
Some projects using three.js
As the examples on the three.js website demonstrate, 3D visualizations have a vast potential in the domains of e-commerce, retail, entertainment, and advertisement.
WebGL is a low-level JavaScript API that enables creation and display of 3D content inside the browser using the GPU. Unfortunately, since WebGL is a low-level API, it can be a bit hard and tedious to use. You need to write hundreds of lines of code to perform even the simplest tasks. Three.js, on the other hand, is an open source JavaScript library that abstracts away the complexity of WebGL and allows you to create real-time 3D content in a much easier manner.
In this tutorial, I will be introducing the basics of the three.js library. It makes sense to start with a simple example to communicate the fundamentals better when introducing a new programming library but I would like to take this a step further. I will also aim to build a scene that is aesthetically pleasant and even photorealistic to a degree.
We will just start out with a simple plane and sphere but in the end it will end up looking like this:
See the Pen learning-threejs-final by Engin Arslan (@enginarslan) on CodePen.
Photorealism is the pinnacle of computer graphics, but achieving it is not necessarily a matter of the processing power at your disposal so much as a smart deployment of techniques from your toolbox. Here are a few techniques that you will be learning about in this tutorial that will help your scenes achieve photorealism.

Color, Bump and Roughness Maps.
Physically Based Materials.
Lighting with Shadows.

Photorealistic 3D portrait by Ian Spriggs
The basic 3D principles and techniques that you will learn here are relevant in any other 3D content creation environment whether it is Blender, Unity, Maya or 3Ds Max.
This is going to be a long tutorial. If you are more of a video person or would like to learn more about the capabilities of three.js you should check out my video training on the subject from Lynda.com.
Requirements
When using three.js, if you are working locally, it helps to serve the HTML file through a local server so you can load in scene assets such as external 3D geometry, images, etc. If you are looking for a server that is easy to set up, you can use Python to spin up a simple HTTP server. Python is pre-installed on many operating systems.
You don't have to worry about setting up a local dev server to follow this tutorial though. You will instead rely on data URLs to load in assets like images, which removes the overhead of setting up a server. Using this method you will be able to easily execute your three.js scene in online code editors such as CodePen.
This tutorial assumes a prior, basic to intermediate, knowledge of JavaScript and some understanding of front-end web development. If you are not comfortable with JavaScript but want to get started with it in an easy manner you might want to check out the course/book "Coding for Visual Learners: Learning JavaScript with p5.js". (Disclaimer: I am the author)
Let's get started with building 3D graphics on the Web!
Getting Started
I have already prepared a Pen that you can use to follow this tutorial with.
The HTML code that you will be using is going to be super simple. It just needs to have a div element to host the canvas that is going to display the 3D graphics. It also loads up the three.js library (release 86) from a CDN.
<div id="webgl"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/86/three.min.js"></script>
CodePen hides some of the HTML structure for your convenience. If you were building this scene in another online editor, or locally, your HTML would need to look something like the code below, where main.js is the file that holds the JavaScript code.
<!DOCTYPE html>
<html>
<head>
<title>Three.js</title>
<style type="text/css">
html, body {
margin: 0;
padding: 0;
overflow: hidden;
}
</style>
</head>
<body>
<div id="webgl"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/86/three.min.js"></script>
<script src="./main.js"></script>
</body>
</html>
Notice the simple CSS declaration inside the HTML. This is what you would have in the CSS tab of Codepen:
html, body {
margin: 0;
padding: 0;
overflow: hidden;
}
This ensures that you don't have any margin or padding values that might be applied by your browser, and that you don't get a scrollbar, so the graphics can fill the entire screen. This is all we need to get started with building 3D graphics.
Part 1 - Three.js Scene Basics
When working with three.js and with 3D in general, there are a couple of required objects you need to have. These objects are scene, camera and the renderer.
First, you should create a scene. You can think of a scene object as a container for every other 3D object that you are going to work with. It represents the 3D world that you will be building. You can create the scene object by doing this:
var scene = new THREE.Scene();
Another thing that you need to have when working with 3D is the camera. Think of the camera as the eyes that you will be viewing this 3D world through. When working with a 2D visualization, the concept of a camera usually doesn't exist. What you see is what you get. But in 3D, you need a camera to define your point of view as there are many positions and angles that you could be looking at a scene from. A camera doesn't only define a position but also other information like the field of view or the aspect ratio.
var camera = new THREE.PerspectiveCamera(
45, // field of view
window.innerWidth / window.innerHeight, // aspect ratio
1, // near clipping plane (beyond which nothing is visible)
1000 // far clipping plane (beyond which nothing is visible)
);
The camera captures the scene for display purposes but for us to actually see anything, the 3D data needs to be converted into a 2D image. This process is called rendering and you need a renderer to render the scene in three.js. You can initialize a renderer like this:
var renderer = new THREE.WebGLRenderer();
And then set the size of the renderer. This will dictate the size of the output image. You will make it cover the window size.
renderer.setSize(window.innerWidth, window.innerHeight);
To be able to display the results of the render you need to append the domElement property of the renderer to your HTML content. You will use the empty div element that you created that has the id webgl for this purpose.
document.getElementById('webgl').appendChild(renderer.domElement);
And having done all this you can call the render method on the renderer by providing the scene and the camera as the arguments.
renderer.render(
scene,
camera
);
To have things a bit tidier, put everything inside a function called init and execute that function.
init();
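If it helps to see those pieces in one place, here's roughly what the init function looks like so far (a sketch of the steps above, not the exact Pen):
function init() {
  var scene = new THREE.Scene();

  var camera = new THREE.PerspectiveCamera(
    45, // field of view
    window.innerWidth / window.innerHeight, // aspect ratio
    1, // near clipping plane
    1000 // far clipping plane
  );

  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.getElementById('webgl').appendChild(renderer.domElement);

  renderer.render(scene, camera);
}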
And now you would see nothing... but a black screen. Don't worry, this is normal. The scene is working but since you didn't include any objects inside the scene, what you are looking at is basically empty space. Next, you will be populating this scene with 3D objects.
See the Pen learning-threejs-01 by Engin Arslan (@enginarslan) on CodePen.
Adding Objects to the Scene
Geometric objects in three.js are made up of two parts. A geometry that defines the shape of the object and a material that defines the surface quality, the appearance, of the object. The combination of these two things makes up a mesh in three.js which forms the 3D object.
Three.js allows you to create some simple shapes like a cube or a sphere in an easy manner. You can create a simple sphere by providing the radius value.
var geometry = new THREE.SphereGeometry(1);
There are various kinds of materials that you could use on geometries. Materials determine how an object reacts to the scene lighting. We can use a material to make an object reflective, rough, transparent, etc.. The default material that three.js objects are created with is the MeshBasicMaterial. MeshBasicMaterial is not affected by the scene lighting at all. This means that your geometry is going to be visible even when there is no lighting in the scene. You can pass an object with a color property and a hex value to the MeshBasicMaterial to be able to set the desired color for the object. You will use this material for now but later update it to have your objects be affected by the scene lighting. You don't have any lighting in the scene for now so MeshBasicMaterial should be a good enough choice.
var material = new THREE.MeshBasicMaterial({
color: 0x00ff00
});
You can combine the geometry and material to create a mesh which is going to form the 3D object.
var mesh = new THREE.Mesh(geometry, material);
Create a function to encapsulate this code that creates a sphere. You won't be creating more than one sphere in this tutorial but it is still good to keep things neat and tidy.
function getSphere(radius) {
var geometry = new THREE.SphereGeometry(radius);
var material = new THREE.MeshBasicMaterial({
color: 0x00ff00
});
var sphere = new THREE.Mesh(geometry, material);
return sphere;
}

var sphere = getSphere(1);
Then you need to add this newly created object to the scene for it to be visible.
scene.add(sphere);
Let's check out the scene again. You will still see a black screen.
See the Pen learning-threejs-02 by Engin Arslan (@enginarslan) on CodePen.
The reason why you don't see anything right now is that whenever you add an object to the scene in three.js, the object gets placed at the center of the scene, at the coordinates of 0, 0, 0 for x, y and z. This simply means that you currently have the camera and the sphere at the same position. You should change the position of either one of them to be able to start seeing things.
3D coordinates
Let's move the camera 20 units on the z axis. This is achieved by setting the position.z property on the camera. 3D objects have position, rotation and scale properties that would allow you to transform them into the 3D space.
camera.position.z = 20;
You could move the camera on the other axes as well.
camera.position.x = 0;
camera.position.y = 5;
camera.position.z = 20;
The camera is positioned higher now but the sphere is not at the center of the frame anymore. You need to point the camera to it. To be able to do so, you can call a method on the camera called lookAt. The lookAt method on the camera determines which point the camera is looking at. The points in the 3D space are represented by Vectors. So you can pass a new Vector3 object to this lookAt method to be able to have the camera look at the 0, 0, 0 coordinates.
camera.lookAt(new THREE.Vector3(0, 0, 0));
The sphere object doesn't look too smooth right now. The reason for that is that the SphereGeometry function actually accepts two additional parameters, the width and height segments, which affect the resolution of the surface. The higher these values, the smoother the curved surfaces will appear. I will set both the width and height segments to 24.
var geometry = new THREE.SphereGeometry(radius, 24, 24);
See the Pen learning-threejs-03 by Engin Arslan (@enginarslan) on CodePen.
Now you will create a simple plane geometry for the sphere to sit on. The PlaneGeometry function requires width and height parameters. In 3D, 2D objects don't have both of their sides rendered by default, so you need to pass a side property to the material to have both sides of the plane geometry render.
function getPlane(w, h) {
var geo = new THREE.PlaneGeometry(w, h);
var material = new THREE.MeshBasicMaterial({
color: 0x00ff00,
side: THREE.DoubleSide,
});
var mesh = new THREE.Mesh(geo, material);

return mesh;
}
You can now add this plane object to the scene as well. You will notice that the initial rotation of the plane geometry is parallel to the y-axis but you will likely need it to be horizontal for it to act as a ground plane. There is one important thing you should keep in mind regarding the rotations in three.js though. They use radians as a unit, not degrees. A rotation of 90 degrees in radians is equivalent to Math.PI/2.
var plane = getPlane(50, 50);
scene.add(plane);
plane.rotation.x = Math.PI/2;
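As an aside, if you'd rather think in degrees, three.js ships a small math helper for this conversion (assuming the release used here, where it lives under THREE.Math); the conversion itself is just degrees * Math.PI / 180.
// Equivalent to Math.PI/2
plane.rotation.x = THREE.Math.degToRad(90);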
When you created the sphere object, it got positioned using its center point. If you would like to move it above the ground plane then you can just increase its position.y value by the current radius amount. But that wouldn't be a programmatic way of doing things. If you would like the sphere to stay on the plane whatever its radius value is, you should make use of the radius value for the positioning.
sphere.position.y = sphere.geometry.parameters.radius;
See the Pen learning-threejs-04 by Engin Arslan (@enginarslan) on CodePen.
Animations
You are almost done with the first part of this tutorial. But before we wrap it up, I want to illustrate how to do animations in three.js. Animations in three.js make use of the requestAnimationFrame method on the window object which repeatedly executes a given function. It is somewhat like a setInterval function but optimized for the browser drawing performance.
Create an update function and pass the renderer, scene, and camera into it to execute the render method of the renderer inside this function. You will also call requestAnimationFrame inside it, and call the update function recursively from the callback that is passed to requestAnimationFrame. It is better to illustrate this in code than to write about it.
function update(renderer, scene, camera) {
renderer.render(scene, camera);

requestAnimationFrame(function() {
update(renderer, scene, camera);
});
}
Everything might look the same to you at this point, but the core difference is that the requestAnimationFrame function is making the scene render at around 60 frames per second through a recursive call to the update function. This means that if you execute a statement inside the update function, that statement will run around 60 times per second. Let's add a scaling animation to the sphere object. To be able to select the sphere object from inside the update function you could pass it as an argument but we will use a different technique. First, set a name attribute on the sphere object and give it a name of your liking.
sphere.name = 'sphere';
Inside the update function, you could find this object using its name by using the getObjectByName method on its parent object, the scene.
var sphere = scene.getObjectByName('sphere');
sphere.scale.x += 0.01;
sphere.scale.z += 0.01;
With this code, the sphere is now scaling on its x and z axes. Our intention is not to create a scaling sphere though. We are setting up the update function so that you can leverage it for different animations later on. Now that you have seen how it works you can remove this scaling animation.
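One detail worth spelling out: once the update function exists, the single renderer.render(scene, camera) call inside init can simply be replaced with a call to update, which kicks off the animation loop. A small sketch of that change:
// Inside init(), instead of calling renderer.render(scene, camera) once:
update(renderer, scene, camera);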
See the Pen learning-threejs-05 by Engin Arslan (@enginarslan) on CodePen.
Part 2 - Adding Realism to the Scene
Currently, we are using MeshBasicMaterial, which displays the given color even when there is no lighting in the scene, resulting in a very flat look. Real-world materials don't work this way though. The visibility of a surface in the real world depends on how much light is reflected from the surface back to our eyes. Three.js comes with a couple of different materials that provide a better approximation of how real-world surfaces behave, and one of them is MeshStandardMaterial. MeshStandardMaterial is a physically based rendering material that can help you achieve photorealistic results. This is the kind of material that modern game engines like Unreal or Unity use and is an industry standard in gaming and visual effects.
Let's start using the MeshStandardMaterial on our objects and change the color of the materials to white.
var material = new THREE.MeshStandardMaterial({
color: 0xffffff,
});
You will once again get a black render at this point. That is normal. For objects to be visible we need to have lights in the scene. This wasn't a requirement with MeshBasicMaterial as it is a simple material that displays the given color under all conditions, but other materials require an interaction with light to be visible. Let's create a function that builds a SpotLight. You will be creating two spotlights using this function.
function getSpotLight(color, intensity) {
var light = new THREE.SpotLight(color, intensity);

return light;
}

var spotLight_01 = getSpotLight(0xffffff, 1);
scene.add(spotLight_01);
You might start seeing something at this point. Position the light and the camera a bit differently for a better framing and shading. Also create a secondary light as well.
var spotLight_02 = getSpotLight(0xffffff, 1);
scene.add(spotLight_02);

camera.position.x = 0;
camera.position.y = 6;
camera.position.z = 6;

spotLight_01.position.x = 6;
spotLight_01.position.y = 8;
spotLight_01.position.z = -20;

spotLight_02.position.x = -12;
spotLight_02.position.y = 6;
spotLight_02.position.z = -10;
Having done this you have two light sources in the scene, illuminating the sphere from two different positions. The lighting is helping a bit in understanding the dimensionality of the scene, but things are still looking extremely fake at this point because the lighting is missing a critical component: the shadows!
Rendering a shadow in Three.js is unfortunately not too straightforward. This is because shadows are computationally expensive and we need to activate shadow rendering in multiple places. First, you need to tell the renderer to start rendering shadows:
var renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true;
Then you need to tell the light to cast shadows. Do that in the getSpotLight function.
light.castShadow = true;
You should also tell the objects to cast and/or receive shadows. In this case, you will make the sphere cast shadows and the plane to receive shadows.
mesh.castShadow = true;
mesh.receiveShadow = true;
After all these settings we should start seeing shadows in the scene. Initially, they might be a bit lower quality. You can increase the resolution of the shadows by setting the light shadow map size.
light.shadow.mapSize.x = 4096;
light.shadow.mapSize.y = 4096;
MeshStandardMaterial has a couple of properties, such as roughness and metalness, that control the interaction of the surface with the light. The properties take values between 0 and 1 and control the corresponding behavior of the surface. Increase the roughness value on the plane material to 1 to see the surface look more like rubber as the reflections get blurrier.
// material adjustments
var planeMaterial = plane.material;
planeMaterial.roughness = 1;
We won't be using 1 as a value in this tutorial though. Feel free to experiment with values but set it back to 0.65 for roughness and 0.75 for metalness.
planeMaterial.roughness = 0.65;
planeMaterial.metalness = 0.75;
Even though the scene should be looking much more promising right now it is still hard to call it realistic. The truth is, it is very hard to establish photorealism in 3D without using texture maps.
See the Pen learning-threejs-06 by Engin Arslan (@enginarslan) on CodePen.
Texture Maps
Texture maps are 2D images that can be mapped on a material for the purpose of providing surface detail. So far you were only getting solid colors on the surfaces but using a texture map you can map any image you would like on a surface. Texture maps are not only used to manipulate the color information of surfaces but they can also be used to manipulate other qualities of the surface like reflectiveness, shininess, roughness, etc.
Textures can be derived from photographic sources or painted from scratch. For a texture to be useful in a 3D context it should be captured in a certain manner. Images that have reflections or shadows in them, or images where the perspective is too distorted, wouldn't make great texture maps. There are several dedicated websites for finding textures online. One of them is textures.com, which has a pretty good archive. They have some free download options but require you to register to download. Another website for 3D textures is Megascans, which does high-resolution, high-quality environment scans of high-end production quality.
I have used a website called mb3d.co.uk for this example. This site provides seamless, free-to-use textures. A seamless texture implies a texture that can be repeated on the surface many times without any discontinuities where the edges meet. This is the link to the texture file that I have used. I have decreased the size to 512px for width and height and converted the image file to a data URI using an online service called ezgif, so it can be included as part of the JavaScript code as opposed to being loaded in as a separate asset. (Hint: if you use this service, don't include the image tag when outputting the data; you only want the raw data URI.)
Create a function that returns the data URI we have generated so that we don't have to put that huge string in the middle of our code.
function getTexture() {
var data = 'data:image/jpeg;base64,/...'; // paste your data URI inside the quotation marks.
return data
}
Next, you need to load in the texture and apply it to the plane surface. You will be using the three.js TextureLoader for this purpose. After loading the texture, you will assign it to the map property of the desired material to use it as a color map on the surface.
var textureLoader = new THREE.TextureLoader();
var texture = textureLoader.load(getTexture());
planeMaterial.map = texture;
Things would be looking rather ugly right now as the texture on the surface is pixelated. The image is stretching too much to cover the entire surface. What you can do is to make the image repeat itself instead of scaling so that it doesn't get as pixelated. To do so, you need to set the wrapS and wrapT properties on the desired map to THREE.RepeatWrapping and specify a repetition value. Since you will be doing this for other kinds of maps as well (like bump or roughness map) it is better to create a loop for this:
var repetition = 6;
var textures = ['map']; // we will add 'bumpMap' and 'roughnessMap'
textures.forEach((mapName) => {
planeMaterial[mapName].wrapS = THREE.RepeatWrapping;
planeMaterial[mapName].wrapT = THREE.RepeatWrapping;
planeMaterial[mapName].repeat.set(repetition, repetition);
});
This should look much better. Since the texture you are using is seamless you wouldn't notice any disconnections around the edges where the repetition happens.
Loading of a texture is actually an asynchronous operation. This means that your 3D scene is generated before the image file is loaded in. But since you are continuously rendering the scene using requestAnimationFrame this doesn't cause any issues in this example. If you weren't doing this, you would need to use callbacks or other async methods to manage the loading order.
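For reference, if you ever did need to wait for the image, TextureLoader's load method also accepts an onLoad callback as its second argument, so you could defer the material assignment until the texture has actually arrived. A sketch:
var textureLoader = new THREE.TextureLoader();

textureLoader.load(getTexture(), function (texture) {
  // runs only after the image data has loaded
  planeMaterial.map = texture;
  planeMaterial.needsUpdate = true;
});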
See the Pen learning-threejs-07 by Engin Arslan (@enginarslan) on CodePen.
Other Texture Maps
As mentioned in the previous chapter, textures are not only used to define the color of surfaces but to define other qualities of them as well. One other way textures can be used is as bump maps. When used as a bump map, the brightness values of the texture simulate a height effect.
planeMaterial.bumpMap = texture;
The bump map should also use the same repetition configuration as the color map, so include it in the textures array.
var textures = ['map', 'bumpMap'];
With a bump map, the brighter the value of a pixel, the higher the corresponding surface will look. But a bump map doesn't actually change the surface; it just manipulates how the light interacts with the surface to create an illusion of uneven topology. The bump amount looks a bit too much right now. Bump maps work best when they are used in subtle amounts, so let's change the bumpScale parameter to something lower for a more subtle effect.
planeMaterial.bumpScale = 0.01;
Notice how this texture made a huge difference in appearance. The reflections are not perfect anymore but nicely broken up as they would be in real life. Another kind of map slot that is available to the StandardMaterial is the roughness map. A texture map used as a roughness map allows you to control the sharpness of the reflections using the brightness values of a given image.
planeMaterial.roughnessMap = texture;
var textures = ['map', 'bumpMap', 'roughnessMap'];
According to the three.js documentation, the StandardMaterial works best when used in conjunction with an environment map. An environment map simulates a distant environment reflecting off of the reflective surfaces in the scene. It really helps when you are trying to simulate reflectivity on objects. Environment maps in three.js are in the form of cube maps. A cube map is a panoramic view of a scene that is mapped inside a cube, made up of 6 separate images that correspond to each face of the cube. Since loading 6 more images inside an online editor is going to be a bit too much work, you won't actually be using an environment map in this example. But to make this sphere object a bit more interesting, add a roughness map to it as well. You will be using this texture but 320x320px in size and as a data URI.
Create a new function called getMetalTexture
function getMetalTexture() {
var data = 'data:image/jpeg;base64,/...'; // paste your data URI inside the quotation marks.
return data
}
And apply it on the sphere material as bumpMap and roughnessMap:
var sphereMaterial = sphere.material;
var metalTexture = textureLoader.load(getMetalTexture());

sphereMaterial.bumpMap = metalTexture;
sphereMaterial.roughnessMap = metalTexture;
sphereMaterial.bumpScale = 0.01;
sphereMaterial.roughness = 0.75;
sphereMaterial.metalness = 0.25;
See the Pen learning-threejs-08 by Engin Arslan (@enginarslan) on CodePen.
Wrapping it up!
You are almost done! Here you will do just a couple of small tweaks. You can see the final version of this scene file in this Pen.
Provide a non-white color to the lights. Notice how you can actually use CSS color values as strings to specify color:
var spotLight_01 = getSpotLight('rgb(145, 200, 255)', 1);
var spotLight_02 = getSpotLight('rgb(255, 220, 180)', 1);
And add some subtle random flickering animation to the lights to add some life to the scene. First, assign a name property to the lights so you can locate them inside the update function using the getObjectByName method.
spotLight_01.name = 'spotLight_01';
spotLight_02.name = 'spotLight_02';
And then create the animation inside the update function using the Math.random() function.
var spotLight_01 = scene.getObjectByName('spotLight_01');
spotLight_01.intensity += (Math.random() - 0.5) * 0.15;
spotLight_01.intensity = Math.abs(spotLight_01.intensity);

var spotLight_02 = scene.getObjectByName('spotLight_02');
spotLight_02.intensity += (Math.random() - 0.5) * 0.05;
spotLight_02.intensity = Math.abs(spotLight_02.intensity);
And as a bonus, inside the scene file, I have included the OrbitControls script for the three.js camera which means that you can actually drag your mouse on the scene to interact with the camera! I have also made it so that the scene resizes with the changing window size. I have achieved this using an external script for convenience.
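If you'd rather not reach for an extra script, a typical resize handler is only a few lines. Something like this sketch (my own, not the external script used in the Pen) is common:
window.addEventListener('resize', function () {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
});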
See the Pen learning-threejs-final by Engin Arslan (@enginarslan) on CodePen.
Now, this scene is somewhat close to becoming photorealistic. There are still many missing pieces though. The sphere is too dark due to the lack of reflections and ambient lighting. The ground plane looks too flat at glancing angles. The profile of the sphere is too perfect - it is CG (Computer Graphics) perfect. The lighting is not actually as realistic as it could be; it doesn't decay (lose intensity) with the distance from the source. You should also probably add particle effects, camera animation, and post-processing filters if you want to go all the way with this. But this still should be a good enough example to illustrate the power of three.js and the quality of graphics that you can create inside the browser. For more information on what you could achieve using this amazing library, you should definitely check out my new course on Lynda.com about the subject!
Thanks for making it this far! Hope you enjoyed this write-up and feel free to reach to me @inspiratory on Twitter or on my website with any questions you might have!

Creating Photorealistic 3D Graphics on the Web is a post from CSS-Tricks
Source: CssTricks


More Gotchas Getting Inline SVG Into Production—Part II

The following is a guest post by Rob Levin and Chris Rumble. Rob and Chris both work on the product design team at Mavenlink. Rob is also creator and host of the SVG Immersion Podcast and wrote the original 5 Gotchas article back in '14. Chris is a UI and Motion Designer/Developer based out of San Francisco. In this article, they go over some additional issues they encountered after incorporating inline SVGs into Mavenlink's flagship application more than 2 years ago. The article illustrations were done by Rob and—in the spirit of our topic—are 100% vector SVGs!

Wow, it's been over 2 years since we posted the 5 Gotchas Getting SVG Into Production article. Well, we've encountered some new gotchas making it time for another follow up post! We'll label these 6-10 paying homage to the first 5 gotchas in the original post :)
Gotcha Six: IE Drag & Drop SVG Disappears

If you take a look at the animated GIF above, you'll notice that I have a dropdown of task icons on the left. I attempt to drag the row outside of the sortable's container element, and then, when I drop the row back, the SVG icons have completely disappeared. This insidious bug didn't seem to happen on Windows 7 IE11 in my tests, but it did happen in Windows 10's IE11! Although, in our example, the issue is happening due to a combination of jQuery UI Sortable and the nestedSortable plugin (which needs to be able to drag items off the container to achieve the nesting), any sort of detaching of DOM elements and/or moving them in the DOM could result in this disappearing behavior. Oddly, I wasn't able to find a Microsoft ticket at the time of writing, but if you have access to a Windows 10 / IE11 setup, you can see for yourself how this will happen in this simple pen which was forked from fergaldoyle. The Pen shows the same essential disappearing behavior, but this time it's caused by simply moving an element containing an SVG icon via JavaScript's appendChild.
A solution to this is to reset the href.baseVal attribute on all <use> elements that descend from the event.target container element when a callback is called. For example, in the case of using Sortable, we were able to call the following method from inside Sortable's stop callback:
function ie11SortableShim(uiItem) {
function shimUse(i, useElement) {
if (useElement.href && useElement.href.baseVal) {
// this triggers fixing of href for IE
useElement.href.baseVal = useElement.href.baseVal;
}
}

if (isIE11()) {
$(uiItem).find('use').each(shimUse);
}
};
I've left out the isIE11 implementation, as it can be done a number of ways (sadly, most reliably through sniffing the window.navigator.userAgent string and matching a regex). But the general idea is: find all the <use> elements in your container element, and then reassign their href.baseVal to trigger IE to re-fetch those external xlink:hrefs. Now, you may have an entire row of complex nested sub-views and may need to go with a more brute force approach. In my case, I also needed to do:
$(uiItem).hide().show(0);
to rerender the row. Your mileage may vary ;)
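For completeness, wiring the shim into Sortable itself looks something like this (the selector is hypothetical; jQuery UI passes the dragged row to the stop callback as ui.item):
$('#task-list').sortable({
  stop: function (event, ui) {
    // re-trigger the external <use> references once the drop settles
    ie11SortableShim(ui.item);
  }
});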
If you're experiencing this outside of Sortable, you likely just need to hook into some "after" event on whatever the parent/container element is, and then do the same sort of thing.
As I'm boggled by this IE11 specific issue, I'd love to hear if you've encountered this issue yourself, have any alternate solutions and/or greater understanding of the root IE issues, so do leave a comment if so.
Gotcha Seven: IE Performance Boosts Replacing SVG4Everybody with Ajax Strategy

In the original article, we recommended using SVG4Everybody as a means of shimming IE versions that don't support using an external SVG definitions file referenced via the xlink:href attribute. But it turns out to be problematic for performance to do so, and probably more kludgy as well, since it's based on user-agent-sniffing regexes. A more straightforward approach is to use Ajax to pull in the SVG sprite. Here's a slice of our code that does this, which is essentially the same as what you'll find in the linked article:
loadSprite = null;

(function() {
var loading = false;
return loadSprite = function(path) {
if (loading) {
return;
}
return document.addEventListener('DOMContentLoaded', function(event) {
var xhr;
loading = true;
xhr = new XMLHttpRequest();
xhr.open('GET', path, true);
xhr.responseType = 'document';
xhr.onload = function(event) {
var el, style;
el = xhr.responseXML.documentElement;
style = el.style;
style.display = 'none';
return document.body.insertBefore(el, document.body.childNodes[0]);
};
return xhr.send();
});
};
})();

module.exports = {
loadSprite: loadSprite,
};
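Usage then amounts to requiring the module early in your bundle and pointing it at the sprite's URL (both the module path and the sprite path below are placeholders):
var loadSprite = require('./load-sprite').loadSprite;

// fetch the external SVG sprite once and inject it at the top of <body>
loadSprite('/assets/icons/sprite-defs.svg');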
The interesting part about all this for us was that—on our icon-heavy pages—we went from ~15 seconds down to ~1-2 seconds (for the first uncached page hit) in IE11.
Something to consider about the Ajax approach: you'll potentially need to deal with a "flash of no SVG" until the HTTP request resolves. But in cases where you already have a heavy initial-loading SPA-style application that throws up a spinner or progress indicator, that might be a sunk cost. Alternatively, you may wish to just go ahead and inline your SVG definition/sprite and take the cache hit for better perceived performance. If so, measure just how much you're increasing the payload.
Gotcha Eight: Designing Non-Scaling Stroke Icons
In cases where you want to have various sizes of the same icon, you may want to lock down the stroke sizes of those icons…
Why, what's the issue?

Imagine you have a height: 10px; width: 10px; icon with some 1px shapes and scale it to 15px. Those 1px shapes will now be 1.5px which ends up creating a soft or fuzzy icon due to borders being displayed on sub-pixel boundaries. This softness also depends on what you scale to, as that will have a bearing on whether your icons are on sub-pixel boundaries. Generally, it's best to control the sharpness of your icons rather than leaving them up to the will of the viewer's browser.
The other problem is more of a visual weight issue. As you scale a standard icon using fills, it scales proportionately... I can hear you saying "SVGs are supposed to do that". Yes, but being able to control the stroke of your icons can help them feel more related and be seen as more of a family. I like to think of it like using a text typeface for titling rather than a display or titling typeface: you can do it, but why would you when you could have a tight and sharp UI?
Prepping the Icon
I primarily use Illustrator to create icons, but plenty of tools out there will work fine. This is just my workflow with one of those tools. I start creating an icon by focusing on what it needs to communicate not really anything technical. After I'm satisfied that it solves my visual needs I then start scaling and tweaking it to fit our technical needs. First, size and align your icon to the pixel grid (⌘⌥Y in Illustrator for pixel preview, on a Mac) at the size you are going to be using it. I try to keep diagonals on 45° and adjust any curves or odd shapes to keep them from getting weird. No formula exists for this, just get it as close as you can to something you like. Sometimes I scrap the whole idea if it's not gonna work at the size I need and start from scratch. If it's the best visual solution but no one can identify it... it's not worth anything.
Exporting AI
I usually just use the Export As "SVG" option in Illustrator; I find it gives me a standard and minimal place to start. I use the Presentation Attributes setting and save it off. It will come out looking something like this:
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 18 18">
<title>icon-task-stroke</title>
<polyline points="5.5 1.5 0.5 1.5 0.5 4.5 0.5 17.5 17.5 17.5 17.5 1.5 12.5 1.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
<rect x="5.5" y="0.5" width="7" height="4" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
<line x1="3" y1="4.5" x2="0.5" y2="4.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
<line x1="17.5" y1="4.5" x2="15" y2="4.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
<polyline points="6 10 8 12 12 8" fill="none" stroke="#ffa800" stroke-miterlimit="10" stroke-width="1"/>
</svg>
I know you see a couple of 1/2 pixels in there! Seems like there are a few schools of thought on this. I prefer to have the stroke line up to the pixel grid as that is what will display in the end. The coordinates are placed on the 1/2 pixel so that your 1px stroke is 1/2 on each side of the path. It looks something like this (in Illustrator):

Gotcha Nine: Implementing Non-Scaling Stroke
Clean Up

Our Grunt task, which Rob talks about in the previous article, cleans up almost everything. Unfortunately for the non-scaling-stroke you have some hand-cleaning to do on the SVG, but I promise it is easy! Just add a class to the paths on which you want to restrict stroke scaling. Then, in your CSS add a class and apply the attribute vector-effect: non-scaling-stroke; which should look something like this:
.non-scaling-stroke {
vector-effect: non-scaling-stroke;
}
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 18 18">
<title>icon-task-stroke</title>
<polyline class="non-scaling-stroke" points="5.5 1.5 0.5 1.5 0.5 4.5 0.5 17.5 17.5 17.5 17.5 1.5 12.5 1.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
<rect class="non-scaling-stroke" x="5.5" y="0.5" width="7" height="4" stroke="#b6b6b6" stroke-miterlimit="10"/>
<line class="non-scaling-stroke" x1="3" y1="4.5" x2="0.5" y2="4.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
<line class="non-scaling-stroke" x1="17.5" y1="4.5" x2="15" y2="4.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
<polyline class="non-scaling-stroke" stroke="currentcolor" points="6 10 8 12 12 8" stroke="#ffa800" stroke-miterlimit="10" stroke-width="1"/>
</svg>
This keeps the strokes, if specified, from changing when the SVG is scaled (in other words, the strokes will remain at 1px even if the overall SVG is scaled). We also add fill: none; to a class in our CSS, where we also control the stroke color, as the paths will fill with #000000 by default. That's it! Now you have beautiful, pixel-adherent strokes that will maintain stroke width!
And after all is said and done (and you have preprocessed via grunt-svgstore per the first article), your SVG will look like this in the defs file:
<svg>
<symbol viewBox="0 0 18 18" id="icon-task-stroke">
<title>icon-task-stroke</title>
<path class="non-scaling-stroke" stroke-miterlimit="10" d="M5.5 1.5h-5v16h17v-16h-5"/>
<path class="non-scaling-stroke" stroke-miterlimit="10" d="M5.5.5h7v4h-7zM3 4.5H.5M17.5 4.5H15"/>
<path class="non-scaling-stroke" stroke="currentColor" stroke-miterlimit="10" d="M6 10l2 2 4-4"/>
</symbol>
</svg>
CodePen Example
The icon set on the left is scaling proportionately, and on the right, we are using vector-effect: non-scaling-stroke;. If you're noticing that your resized SVG icons' strokes are starting to look out of control, the above technique will give you the ability to lock those babies down.
See the Pen SVG Icons: Non-Scaling Stroke by Chris Rumble (@Rumbleish) on CodePen.
Gotcha Ten: Accessibility

With everything involved in getting your SVG icon system up-and-running, it's easy to overlook accessibility. That's a shame, because SVGs are inherently accessible, especially if compared to icon fonts which are known to not always play well with screen readers. At bare minimum, we need to sprinkle a bit of code to prevent any text embedded within our SVG icons from being announced by screen readers. Although we'd love to just add a <title> tag with alternative text and "call it a day", the folks at Simply Accessible have found that Firefox and NVDA will not, in fact, announce the <title> text.
Their recommendation is to apply the aria-hidden="true" attribute to the <svg> itself, and then add an adjacent span element with a .visuallyhidden class. The element with that class will be hidden visually, but its text will be available for the screen reader to announce. I'm bummed that it doesn't feel very semantic, but it may be a reasonable compromise while support for the more intuitively reasonable <title> tag (and combinations of friends like role, aria-labelledby, etc.) works its way across both browser and screen reader implementations. To my mind, the aria-hidden on the SVG may be the biggest win, as we wouldn't want to inadvertently set off the screen reader for, say, 50 icons on a page!
Here's the general pattern, borrowed and altered a bit from Simply Accessible's pen:
<a href="/somewhere/foo.html">
<svg class="icon icon-close" viewBox="0 0 32 32" aria-hidden="true">
<use xlink:href="#icon-close"></use>
</svg>
<span class="visuallyhidden">Close</span>
</a>
As stated before, the two interesting things here are:

aria-hidden attribute applied to prevent screen readers from announcing any text embedded within the SVG.
The nasty but useful visuallyhidden span, which WILL be announced by screen readers.

Honestly, if you would rather just code this with the <title> tag et al. approach, I wouldn't necessarily argue with you, as this does feel kludgy. But, looking at the code we've used (which follows), you could see going with this solution as a version 1 implementation, and then making that switch quite easily when support is better…
Assuming you have some sort of centralized template helper or utils system for generating your use xlink:href fragments, it's quite easy to implement the above. We do this in CoffeeScript, but since JavaScript is more universal, here's the JavaScript it compiles down to:
templateHelpers = {
svgIcon: function(iconName, iconClasses, iconAltText, iconTitle) {
var altTextElement = iconAltText ? "<span class='visuallyhidden'>" + iconAltText + "</span>" : '';
var titleElement = iconTitle ? "<title>" + iconTitle + "</title>" : '';
iconClasses = iconClasses ? " " + iconClasses : '';
return this.safe.call(this, "<svg aria-hidden='true' class='icon-new " + iconClasses + "'><use xlink:href='#" + iconName + "'>" + titleElement + "</use></svg>" + altTextElement);
},
...
Why are we putting the <title> tag as a child of <use> instead of the <svg>? According to Amelia Bellamy-Royds (Invited Expert developing SVG & ARIA specs @w3c, and author of SVG books from @oreillymedia), you will get tooltips in more browsers.
Here's the CSS for .visuallyhidden. If you're wondering why we're doing it this particular way and not with, say, display: none; or other familiar means, see Chris Coyier's article, which explains this in depth:
.visuallyhidden {
border: 0;
clip: rect(0 0 0 0);
height: 1px;
width: 1px;
margin: -1px;
padding: 0;
overflow: hidden;
position: absolute;
}
This code is not meant to be used "copy pasta" style, as your system will likely have nuanced differences. But, it shows the general approach, and, the important bits are:

the iconAltText, which allows the caller to provide alternative text if it seems appropriate (e.g. the icon is not purely decorative)
the aria-hidden="true" which now, is always placed on the SVG element.
the .visuallyhidden class will hide the element visually, while still making the text in that element available for screen readers

As you can see, it'd be quite easy to refactor this code down the road to use the usually recommended <title> approach, and at least the maintenance hit won't be bad should we choose to do so. The relevant refactor changes would likely be similar to:
var aria = iconAltText ? 'role="img" aria-label="' + iconAltText + '"' : 'aria-hidden="true"';
return this.safe.call(this, "<svg " + aria + " class='icon-new " + iconClasses + "'><use xlink:href='#" + iconName + "'>" + titleElement + "</use></svg>");
So, in this version (credit to Amelia for the aria part!), if the caller passes alternative text in, we do NOT hide the SVG, and we also do not use the visually hidden span technique; instead we add the role and aria-label attributes to the SVG. This feels much cleaner, but the jury is out on whether screen readers support this approach as well as the visually hidden span technique. Maybe the experts (Amelia and the Simply Accessible folks) will chime in in the comments :)
Bonus Gotcha: make viewBox width and height integers or scaling gets funky
If you have an SVG icon that you export with a resulting viewBox like viewBox="0 0 100 86.81", you may have issues if you use transform: scale. For example, if you're generally setting the width and height equal as is typical (e.g. 16px x 16px), you might expect that the SVG should just center itself in its containing box, especially if you're using the defaults for preserveAspectRatio. But, if you attempt to scale it at all, you'll start to notice clipping.
In the following Adobe Illustrator screen capture, you see that "Snap to Grid" and "Snap to Pixel" are both selected:

The following pen shows the first three icons getting clipped. This particular icon (it's defined as a <symbol> and then referenced using the xlink:href strategy we've already gone over) has a viewBox with a non-integer height of 86.81, and thus we see the clipping on the sides. The next 3 examples (icons 4-6) have integer widths and heights (the third argument to viewBox is the width and the fourth is the height), and do not clip.
See the Pen SVG Icons: Scale Clip Test 2 by Rob Levin (@roblevin) on CodePen.
Conclusions
The above challenges are just some of the ones we've encountered at Mavenlink, having had a comprehensive SVG icon system in our application for well over two years now. The mysterious nature of some of these is par for the course given our splintered world of various browsers, screen readers, and operating systems. But perhaps these additional gotchas will help you and your team better harden your SVG icon implementations!

More Gotchas Getting Inline SVG Into Production—Part II is a post from CSS-Tricks
Source: CssTricks


(Now More Than Ever) You Might Not Need jQuery

The DOM and native browser APIs have improved by leaps and bounds since jQuery's release all the way back in 2006. People have been writing "You Might Not Need jQuery" articles since 2013 (see this classic site and this classic repo). I don't want to rehash old territory, but a good bit has changed in browser land since the last You Might Not Need jQuery article you might have stumbled upon. Browsers continue to implement new APIs that take the pain away from library-free development, many of them directly copied from jQuery.
Let's go through some new vanilla alternatives to jQuery methods.

Remove an element from the page
Remember the maddeningly roundabout way you had to remove an element from the page with vanilla DOM? el.parentNode.removeChild(el);? Here's a comparison of the jQuery way and the new improved vanilla way.
jQuery:
var $elem = $(".someClass") //select the element
$elem.remove(); //remove the element
Without jQuery:
var elem = document.querySelector(".someClass"); //select the element
elem.remove() //remove the element
For the rest of this post, we'll assume that $elem is a jQuery-selected set of elements, and elem is a native JavaScript-selected DOM element.
Prepend an element
jQuery:
$elem.prepend($someOtherElem);
Without jQuery:
elem.prepend(someOtherElem);
Insert an element before another element
jQuery:
$elem.before($someOtherElem);
Without jQuery:
elem.before(someOtherElem);
Replace an element with another element
jQuery:
$elem.replaceWith($someOtherElem);
Without jQuery:
elem.replaceWith(someOtherElem);
Find the closest ancestor that matches a given selector
jQuery:
$elem.closest("div");
Without jQuery:
elem.closest("div");
Browser Support of DOM manipulation methods
These methods now have a decent level of browser support:
This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.
Desktop: Chrome 54, Opera 41, Firefox 49, IE No, Edge No, Safari 10
Mobile / Tablet: iOS Safari 10.0-10.2, Opera Mobile No, Opera Mini No, Android 56, Android Chrome 59, Android Firefox 54
They are also currently being implemented in Edge.
Fade in an Element
jQuery:
$elem.fadeIn();
By writing our own CSS we have far more control over how we animate our element. Here I'll do a simple fade.
.thingy {
display: none;
opacity: 0;
transition: opacity 200ms;
}
elem.style.display = "block";
requestAnimationFrame(() => elem.style.opacity = 1);
Call an event handler callback only once
jQuery:
$elem.one("click", someFunc);
In the past when writing plain JavaScript, we had to call removeEventListener inside of the callback function.
function dostuff() {
alert("some stuff happened");
this.removeEventListener("click", dostuff);
}
var button = document.querySelector("button");
button.addEventListener("click", dostuff);
Now things are a lot cleaner. You might have seen the third optional parameter sometimes passed into addEventListener. It's a boolean to decide between event capturing or event bubbling. Nowadays, however, the third argument can alternatively be a configuration object.
elem.addEventListener('click', someFunc, { once: true, });
If you still want to use event capturing as well as have the callback called only once, then you can specify that in the configuration object as well:
elem.addEventListener('click', myClickHandler, {
once: true,
capture: true
});
Animation
jQuery's .animate() method is pretty limited.
$elem.animate({
width: "70%",
opacity: 0.4,
marginLeft: "0.6in",
fontSize: "3em",
borderWidth: "10px"
}, 1500);
The docs say "All animated properties should be animated to a single numeric value, except as noted below; most properties that are non-numeric cannot be animated using basic jQuery functionality." This rules out transforms, and you need a plugin just to animate colors. You'd be far better off with the new Web Animations API.
var elem = document.querySelector('.animate-me');
elem.animate([
{
transform: 'translateY(-1000px) scaleY(2.5) scaleX(.2)',
transformOrigin: '50% 0',
filter: 'blur(40px)',
opacity: 0
},
{
transform: 'translateY(0) scaleY(1) scaleX(1)',
transformOrigin: '50% 50%',
filter: 'blur(0)',
opacity: 1
}
], 1000);
Ajax
Another key selling point of jQuery in the past has been Ajax. jQuery abstracted away the ugliness of XMLHttpRequest:
$.ajax('https://some.url', {
success: (data) => { /* do stuff with the data */ }
});
The new fetch API is a superior replacement for XMLHttpRequest and is now supported by all modern browsers.
fetch('https://some.url')
.then(response => response.json())
.then(data => {
// do stuff with the data
});
Admittedly fetch can be a bit more complicated than this small code sample. For example, the Promise returned from fetch() won't reject on HTTP error status. It is, however, far more versatile than anything built on top of XMLHttpRequest.
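For instance, a 404 or 500 still resolves the Promise, so you typically check response.ok (or response.status) yourself. A minimal sketch of that pattern:
fetch('https://some.url')
.then(response => {
// fetch only rejects on network failure, so check the HTTP status ourselves
if (!response.ok) {
throw new Error('HTTP error ' + response.status);
}
return response.json();
})
.then(data => {
// do stuff with the data
})
.catch(error => {
// network failures and the HTTP errors thrown above both land here
});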
If we want ease of use though, there is a simpler option that has gained popularity - but it's not native to the browser, which brings me onto...
The Rise of the Micro-Library
Axios is a popular library for Ajax. It is a great example of a micro-library - a library designed to do just one thing. While most libraries will not be as well tested as jQuery, they can often be an appealing alternative to the jQuery behemoth.
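For comparison, the same GET request with Axios might look something like this (assuming Axios has been loaded via a script tag or import; the URL is a placeholder):
axios.get('https://some.url')
.then(response => {
// Axios parses JSON for you and rejects the Promise on HTTP error statuses
console.log(response.data);
})
.catch(error => {
// handle network errors and HTTP error statuses here
});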
(Almost) Everything Can Be Polyfilled
So now you're aware that the DOM is pretty nice to work with! But perhaps you've looked at these developments only to think "oh well, still need to support IE 9 so I better use jQuery". Most of the time it doesn't really matter what Can I Use says about a certain feature you want to utilize. You can use whatever you like and polyfills can fill in the gaps. There was a time when if you wanted to use a fancy new browser feature, you had to find a polyfill, and then include it on your page. Doing this for all the features missing in IE9 would be an arduous task. Now it's as simple as:
<script src="https://cdn.polyfill.io/v2/polyfill.min.js"></script>
This simple script tag can polyfill just about anything. If you haven't heard about this polyfill service from the Financial Times you can read about it at polyfill.io.
Iterating a NodeList in 2017
jQuery's massive adoption hasn't solely been fostered by its reassuring ironing out of browser bugs and inconsistencies in IE relics. Today jQuery has one remaining selling point: iteration.

Iterable NodeLists are so fundamentally important to the quality of the DOM. Unsurprisingly I now use React for most of my coding instead. — John Resig (@jeresig) April 29, 2016
It defies rationality that NodeLists aren't iterable. Developers have had to jump through hoops to make them so. A classic for loop may be the most performance-optimised approach, but it sure isn't something I enjoy typing. And so we ended up with this ugliness:
var myArrayFromNodeList = [].slice.call(document.querySelectorAll('li'));
Or:
[].forEach.call(myNodeList, function (item) {...});
More recently we've been able to use Array.from, a terser, more elegant way of turning a nodeList into an array.
Array.from(document.querySelectorAll('li')).forEach((li) => /* do something with li */);
But the big news is that NodeLists are now iterable by default.

It's about time we have iterable NodeLists! https://t.co/nIT5uHALpW 🎉🎉🎉 Been asking for this for years! https://t.co/edb0TTSdop
— John Resig (@jeresig) April 29, 2016
Now simply type:
document.querySelectorAll('li').forEach((li) => /* do some stuff */);
Edge is the last modern browser to not support iterable NodeLists but is currently working on it.
Is jQuery Slow?
jQuery may be faster than sloppily written vanilla JS, but that's just a good reason to learn JavaScript better! Paul Irish was a contributor to the jQuery project and concluded:

Performance recommendation: Do not use jQuery's hide() method. Ever. https://t.co/zEQf6F54p6 Classes are your friend.
— Paul Irish (@paul_irish) February 8, 2015
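In other words, rather than reaching for .hide() and .show(), toggle a class and let CSS do the work. A minimal sketch (the class name is arbitrary):
// CSS: .is-hidden { display: none; }
elem.classList.add('is-hidden'); // hide
elem.classList.remove('is-hidden'); // show
elem.classList.toggle('is-hidden'); // flip between the two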
Here's what the creator of jQuery has to say about learning the native DOM in his (totally essential) JavaScript book Secrets of the JavaScript Ninja:
"Why do you need to understand how it works if the library will take care of it for you? The most compelling reason is performance. Understanding how DOM modification works in libraries can allow you to write better and faster code."
What I Dislike About jQuery
Rather than smoothing over only the remaining ugly parts of certain browser API's, jQuery seeks to replace them all wholesale. By returning a jQuery object rather than a NodeList, built-in browser methods are essentially off limits, meaning you're locked into the jQuery way of doing everything. For beginners, what once made front-end scripting approachable is now a hindrance, as it essentially means there are two duplicate ways of doing everything. If you want to read others code with ease and apply to both jobs that require vanilla JS and jobs that require jQuery, you have twice as much to learn. There are, however, libraries that have adopted an API that will be reassuringly familiar to jQuery addicts, but that return a NodeList rather than an object...
Can't Live Without $?
Perhaps you've grown fond of that jQuery $. Certain micro-libraries have sought to emulate the jQuery API.

Lea Verou, an Invited Expert at the W3C CSS Working Group, who herself penned the article jQuery Considered Harmful, is the author of Bliss.js. Bliss uses a familiar $ syntax but returns a NodeList.
Paul Irish, meanwhile, released Bling.js "because you want the $ of jQuery without the jQuery."
Remy Sharp offered a similar micro-library, aptly named min.js.

I'm no anti-jQuery snob. Some great developers still choose to use it. If you're already comfortable using it and at home with its API, there's no huge reason to ditch it. Ultimately there are people who use jQuery and know what a closure is and who write enterprise-level web apps, and people who use vanilla JS who don't. Plenty of jobs still list it as a required skill. For anybody starting out though, it looks like an increasingly bad choice. Internet Explorer 11 is thankfully the final version of that infernal contraption. As soon as IE dies the entire browser landscape will be evergreen, and jQuery will increasingly be seen as a bygone relic from the DOM's dirty past.

(Now More Than Ever) You Might Not Need jQuery is a post from CSS-Tricks
Source: CssTricks


Introducing Microcosm: Our Data Layer For React

One of my favorite things about working in client-services is the interval with which we start new work. As a React shop, this means we build a lot of new apps from the ground up.

Along the way, we've distilled what we've learned and baked it into a tool that I, finally, want to talk about.

Microcosm is our general purpose tool for keeping React apps organized. We use it to work with application state, split large projects into manageable chunks, and as the guiding star for our application architecture.

Before I go too much further: check out the project on GitHub! In this post, I'll provide a high level overview of Microcosm and some of the features I find particularly valuable.

At a glance

Microcosm was born out of the Flux mindset. From there it draws similar pieces:

Actions

Actions are a general abstraction for performing a job. In Microcosm, actions move through a standard lifecycle: (open, update, resolve, reject, cancel).

Actions can process a variety of data types out of the box. For example, a basic networking request might look like:

import request from 'superagent'

function getUser(id) {
// This will return a promise. Microcosm automatically handles promises.
return request(`/users/${id}`)
}

let repo = new Microcosm()
let action = repo.push(getUser, '2')

action.onDone(function (user) {
console.log("Hurrah!")
})

action.onError(function (reason) {
console.log("Darn!", reason)
})

However they can also expose fine grained control over their lifecycle:

import Microcosm from 'microcosm'
import request from 'superagent'

function getUser (id) {
return function (action) {
let xhr = request(`/users/${id}`)

// The request has started
action.open(id)

// Show download progress
xhr.on('progress', action.update)

// Make the request cancellable
action.onCancel(xhr.abort)

// Normal pass/fail behavior
xhr.then(action.resolve, action.reject)
}
}

let repo = new Microcosm()

let action = repo.push(getUser, 2)

action.onUpdate(event => console.log(event.percent)) // 0... 10... 20... 70...

// Wait, I no longer care about this!
action.cancel()

Domains

Domains define the rules by which actions are converted into new state. Conceptually, they are sort of like stores in Flux, or reducers in Redux. They register to specific actions, performing some transformation over data:

const Users = {
getInitialState() {
return []
},
addUser(users, record) {
return users.concat(record)
},
register() {
return {
[getUser]: this.addUser
}
}
}

repo.addDomain('users', Users)

Basically: mount a data processor at repo.state.users that appends a user to a list whenever getUser finishes.

Effects

Effects provide an outlet for side-effects after domains have updated state. We use them for flash notifications, persistence in local storage, and other behavior that doesn't relate to managing state:

const Notifier = {
warn(repo, error) {
alert(error.message)
},
register() {
return {
[getUser]: {
error: this.warn
}
}
}
}

repo.addEffect(Notifier)

New here: Domains and Effects can subscribe to specific action states. The effect above will listen for when getUser fails, alerting the user that something went wrong.

Altogether, this looks something like:

import Microcosm from 'microcosm'
import request from 'superagent'

let repo = new Microcosm()

function getUser(id) {
return request(`/users/${id}`)
}

repo.addDomain('users', {
getInitialState() {
return []
},
addUser(users, record) {
return users.concat(record)
},
register() {
return {
[getUser]: this.addUser
}
}
})

// Listen to failures. What happens if the AJAX request fails?
repo.addEffect({
warn(repo, error) {
alert(error.message)
},
register() {
return {
[getUser]: {
error: this.warn
}
}
}
})

// Push an action, a request to perform some kind of work
let action = repo.push(getUser, 2)

action.onDone(function() {
console.log(repo.state.users) // [{ id: 2, name: "Bob" }]
})

// You could also handle errors in a domain's register method
// by hooking into `getUser.error`
action.onError(function() {
alert("Something went terribly wrong!")
})

It's 2017, why aren't you using Redux?

We do! As a client services company, we use whatever tool best serves our clients. In some cases, that means using Redux, particularly if it's a client preference or the existing framework for a project.

However there are a few features of Microcosm that we think are compelling:

Action State

We've found that, when actions are treated as static events, the state around the work performed is often discarded. Networking requests are a story, not an outcome.

What if a user leaves a page before a request finishes? Or they get tired of a huge file uploading too slowly? What if they dip into a subway tunnel and lose connectivity? They might want to retry a request, cancel it, or just see what’s happening.

Microcosm makes this easier by providing a standard interface for interacting with outstanding work. For example, let's say we want to stop asking for data if a user no longer cares about the related presentation:

import React from 'react'
import { getPlanets } from '../actions/planets'

class PlanetsList extends React.Component {
componentWillMount() {
// We could avoid needing to pass down a "repo" prop by
// using some options shown later
const { repo } = this.props

this.action = repo.push(getPlanets)
}
componentWillUnmount() {
this.action.cancel()
}
render() {
//... render some planets
}
}

Assuming we give this component a Microcosm "repo" prop, and a list of planets, this component will fetch planets data, stopping whenever the component unmounts. We don't need to care if the request is represented by a Promise, Observable, error-first callback, etc.

Reducing boilerplate

Since actions move through consistent states, we can leverage these constraints to build boilerplate reducing React components for common problems. For example, we frequently need to dispatch an action to perform some task, so Microcosm ships with an <ActionButton /> component:

import React from 'react'
import ActionButton from 'microcosm/addons/action-button'
import { deleteUser } from '../actions/user'

class DeleteUserButton extends React.Component {
render () {
const { userId } = this.props

return (
<ActionButton action={deleteUser} value={userId}>
Delete User
</ActionButton>
)
}
}

Because the lifecycle is predictable, we can expose hooks to make further improvements around that lifecycle:

import React from 'react'
import ActionButton from 'microcosm/addons/action-button'
import { deleteUser } from '../actions/user'

class DeleteUserButton extends React.Component {
state = {
loading: false
}

setLoading = () => {
this.setState({ loading: true })
}

handleError = (reason) => {
alert(reason)
this.setState({ loading: false })
}

render () {
const { userId } = this.props
const { loading } = this.state

return (
<ActionButton action={deleteUser} value={userId} disabled={loading} onOpen={this.setLoading} onError={this.handleError}>
Delete User
</ActionButton>
)
}
}

This makes one-time, use-case-specific display requirements, like error reporting or tracking file upload progress, easy. In a lot of cases, the data layer doesn't need to get involved whatsoever. This makes state management simpler - it doesn't need to account for all of the specific user experience requirements within an interface.

Optimistic updates - Taking a historical approach

Actions are placed within a history of all outstanding work. This is maintained by a tree:

Taken from the Chatbot example.

Microcosm will never clean up an action that precedes incomplete work. When an action moves from open to done, or cancelled, the historical account of actions rolls back to the last state, rolling forward with the new action states. This makes optimistic updates simpler because action states are self cleaning; interstitial states are reverted automatically:

import { send } from 'actions/chat'

const Messages = {
getInitialState () {
return []
},

setPending(messages, item) {
return messages.concat({ ...item, pending: true })
},

setError(messages, item) {
return messages.concat({ ...item, error: true })
},

addMessage(messages, item) {
return messages.concat(item)
},

register () {
return {
[send]: {
open: this.setPending,
error: this.setError,
done: this.addMessage
}
}
}
}

In this example, as chat messages are sent, we optimistically update state with the pending message. At this point, the action is in an open state. The request has not finished.

On completion, when the action moves into error or done, Microcosm recalculates state starting from the point prior to the open state update. The message stops being in a loading state because, as far as Microcosm is now concerned, the open status never occurred.

Separating responsibility with Presenters

The Presenter addon is a special React component that can build a view model around a given Microcosm state, sending it to child "passive view" components.

When a Presenter is instantiated, it creates a fork of a Microcosm instance. A fork is a "downstream" Microcosm that gets the same state updates as the original but can add additional Domains and Effects without impacting the "upstream" Microcosm.

This sandbox allows you to break up complicated apps into smaller sections. Share state that you need everywhere, but keep context specific state isolated to a section of your application:

class EmailPreferences extends Presenter {
setup (repo, props) {
repo.add('settings', UserSettings)

repo.push(getUserSettings, props.user.id)
}

getModel (props) {
return {
settings: state => state.settings
}
}

render () {
const { settings } = this.model

return (
<aside>
{ /* Email preferences UI omitted for brevity */ }
</aside>
)
}
}

In this example, we can keep a user's email preferences local to this component. We could even lazy load this entire feature, state management included, using a library like react-loadable. For large applications, we've found this is essential for keeping build sizes down.
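A rough sketch of what that lazy loading could look like with react-loadable (the file path and loading component here are made up for illustration):
import React from 'react'
import Loadable from 'react-loadable'

const LoadableEmailPreferences = Loadable({
// Hypothetical path to the Presenter shown above
loader: () => import('./presenters/email-preferences'),
loading: () => <p>Loading…</p>
})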

David wrote a fantastic article that goes into further detail on this subject.

What's next

At Viget, we're excited about the future of Microcosm, and have a few areas we want to focus on in the next few months:

Developer tools. First class developer tools have become the baseline for JavaScript frameworks. Since Microcosm "knows" more about the state of actions, presenters, and other pieces, we're excited about opportunities to build fantastic tooling.
Support for Preact, Glimmer, Vue, and other frameworks. We'd love to stop calling our apps "React apps". What would it look like for the presentation layer to take on less responsibility?
Observables. The similarities between Actions and Observables are striking. We're curious about how we can use Observables more under the hood to provide greater interoperability with other tools.

So check it out! We're always willing to accept feedback and would love to hear about how you build apps.


Source: VigetInspire


Form Validation – Part 4: Validating the MailChimp Subscribe Form

Over the last few articles in this series, we've learned how to use a handful of input types and validation attributes to natively validate forms.
We've learned how to use the Constraint Validation API to enhance the native browser validation process for a better overall user experience. And we wrote a polyfill to extend support all the way back to IE9 (and plug a few feature holes in some newer versions).
Now, let's take what we've learned and apply it to a real example: the MailChimp signup form.

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript
A Validity State API Polyfill
Validating the MailChimp Subscribe Form (You are here!)

A simple form with a large footprint
When you embed a MailChimp signup form on your site, it comes with a JavaScript validation script named `mc-validate.js`.
This file is 140kb (minified), and includes the entire jQuery library, two third-party plugins, and some custom MailChimp code. We can do better!
See the Pen Form Validation: The MailChimp Standard Signup Form by Chris Ferdinandi (@cferdinandi) on CodePen.
Removing the bloat
First, let's grab a MailChimp form without any of the bloat.
In MailChimp, where you get the code for your embeddable form, click on the tab labelled "Naked." This version includes none of the MailChimp CSS or JavaScript.

<div id="mc_embed_signup">
<form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate>
<div id="mc_embed_signup_scroll">
<h2>Subscribe to our mailing list</h2>
<div class="indicates-required"><span class="asterisk">*</span> indicates required</div>
<div class="mc-field-group">
<label for="mce-FNAME">First Name </label>
<input type="text" value="" name="FNAME" class="" id="mce-FNAME">
</div>
<div class="mc-field-group">
<label for="mce-EMAIL">Email Address <span class="asterisk">*</span></label>
<input type="email" value="" name="EMAIL" class="required email" id="mce-EMAIL">
</div>
<div id="mce-responses" class="clear">
<div class="response" id="mce-error-response" style="display:none"></div>
<div class="response" id="mce-success-response" style="display:none"></div>
</div> <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_f2d244c0df42a0431bd08ddea_aeaa9dd034" tabindex="-1" value=""></div>
<div class="clear"><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
</div>
</form>
</div>
This is better, but it still includes some markup we don't need. Let's trim this down as much as possible.

We can remove the div#mc_embed_signup wrapper from around the form.
Similarly, we can remove the div#mc_embed_signup_scroll wrapper around the fields inside the form.
We can also remove the text informing visitors that "* indicates required."
Let's remove the .mc-field-group classes from around our form fields, and the empty class attributes on the fields themselves.
We should also remove the .required and .email classes from our email field, since they were only used as hooks for MailChimp's validation script.
I went ahead and removed the * from the email label. It's totally up to you how you want to label required fields, though.
We can delete the div#mce-responses container, which is only used by the MailChimp JavaScript file.
We can also remove the .clear class from the div around the submit button.
Let's remove all of the empty value attributes.
Finally, we should remove the novalidate attribute from the form element. We'll let our script add that for us when it loads.

All of this leaves us with a much cleaner, more modest looking form. Since the MailChimp CSS is removed, it will inherit your site's default form styles.
<form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank">
<h2>Subscribe to our mailing list</h2>
<div>
<label for="mce-FNAME">First Name</label>
<input type="text" name="FNAME" id="mce-FNAME">
</div>
<div>
<label for="mce-EMAIL">Email Address</label>
<input type="email" name="EMAIL" id="mce-EMAIL">
</div>
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_f2d244c0df42a0431bd08ddea_aeaa9dd034" tabindex="-1" value=""></div>
<div><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
</form>
See the Pen Form Validation: The MailChimp Simple Signup Form by Chris Ferdinandi (@cferdinandi) on CodePen.
Adding Constraint Validation
Now, let's add in a few input types and validation attributes so that the browser can natively validate the form for us.
The type for the email field is already set to email, which is great. Let's also add the required attribute, and a pattern to force emails to include a TLD (the .com part of an address). We should also include a title letting people know they have to have a TLD.
<form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank">
<h2>Subscribe to our mailing list</h2>
<div>
<label for="mce-FNAME">First Name</label>
<input type="text" name="FNAME" id="mce-FNAME">
</div>
<div>
<label for="mce-EMAIL">Email Address</label>
<input type="email" name="EMAIL" id="mce-EMAIL" title="The domain portion of the email address is invalid (the portion after the @)." pattern="^([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x22([^x0dx22x5cx80-xff]|x5c[x00-x7f])*x22))*x40([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d)(x2e([^x00-x20x22x28x29x2cx2ex3a-x3cx3ex40x5b-x5dx7f-xff]+|x5b([^x0dx5b-x5dx80-xff]|x5c[x00-x7f])*x5d))*(.w{2,})+$" required>
</div>
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_f2d244c0df42a0431bd08ddea_aeaa9dd034" tabindex="-1" value=""></div>
<div><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
</form>
Enhancing with the Constraint Validation API
This is a great starting point, but we can enhance the user experience by adding the form validation script we wrote earlier in this series.
See the Pen Form Validation: MailChimp with the Constraint Validation API by Chris Ferdinandi (@cferdinandi) on CodePen.
Our validation script is just 6.7kb before minification, making it 20x smaller than the one MailChimp provides. If we want to ensure support back to IE9, though, we should include our Validity State polyfill and Eli Grey's classList.js polyfill.
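Loading them might look something like this, with both polyfills included before the validation script (the file names and paths are placeholders for wherever you host them):
<script src="/js/validityState-polyfill.js"></script>
<script src="/js/classList.min.js"></script>
<script src="/js/form-validation.js"></script>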
See the Pen Form Validation: MailChimp with the API Script and Polyfills by Chris Ferdinandi (@cferdinandi) on CodePen.
That brings our total file size up to 15.5kb unminified—still 9× smaller than the MailChimp validation script.
Submitting the form with Ajax
The `mc-validate.js` script provided by MailChimp doesn't just validate the form. It also submits it with Ajax and displays a status message.
When you click submit on our modified form, it redirects the visitor to the MailChimp site. That's a totally valid way to do things.
But, we can also recreate MailChimp's Ajax form submission without jQuery for a better user experience.
The first thing we want to do is prevent the form from submitting via a page reload like it normally would. In our submit event listener, we're calling event.preventDefault if there are errors. Instead, let's call it no matter what.
// Check all fields on submit
document.addEventListener('submit', function (event) {

// Only run on forms flagged for validation
if (!event.target.classList.contains('validate')) return;

// Prevent form from submitting
event.preventDefault();

...

}, false);
See the Pen Form Validation: MailChimp and Prevent Default on Submit by Chris Ferdinandi (@cferdinandi) on CodePen.
Using JSONP
The mc-validate.js script uses JSONP to get around cross-domain security errors.
JSONP works by loading the returned data as a script element in the document, which then passes that data into a callback function that does all of the heavy lifting.
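To make that concrete, here's a bare-bones sketch of the JSONP pattern in general (the endpoint and callback name are made up for illustration; we'll build the MailChimp-specific version below):
// 1. Define a global callback for the response to call
window.myCallback = function (data) {
console.log(data); // the JSON payload from the server
};

// 2. Load the endpoint as a script. The server responds with JavaScript
// that calls the function, e.g. myCallback({"result": "success", "msg": "..."})
var script = document.createElement('script');
script.src = 'https://example.com/api?callback=myCallback';
document.head.appendChild(script);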
Setting up our Submit URL
First, let's set up a function we can run when our form is ready to be submitted, and call it in our submit event listener.
// Submit the form
var submitMailChimpForm = function (form) {
// Code goes here...
};

// Check all fields on submit
document.addEventListener('submit', function (event) {

...

// Otherwise, let the form submit normally
// You could also bolt in an Ajax form submit process here
submitMailChimpForm(event.target);

}, false);
The first thing we need to do is get the URL from the form's action attribute.
// Submit the form
var submitMailChimpForm = function (form) {

// Get the Submit URL
var url = form.getAttribute('action');

};
In the `mc-validate.js` script, the /post?u= in the URL is replaced with /post-json?u=. We can do that quite easily with the replace() method.
// Submit the form
var submitMailChimpForm = function (form) {

// Get the Submit URL
var url = form.getAttribute('action');
url = url.replace('/post?u=', '/post-json?u=');

};
Serializing our form data
Next, we want to grab all of the form field data and create a query string of key/value pairs from it. For example, FNAME=Freddie%20Chimp&EMAIL=freddie@mailchimp.com.
Let's create another function to handle this for us.
// Serialize the form data into a query string
var serialize = function (form) {
// Code goes here...
};
Now, we want to loop through all of our form fields and create key/value pairs. I'll be building off of the work done by Simon Steinberger for this.
First, we'll create a serialized variable set as an empty string.
// Serialize the form data into a query string
// Forked and modified from https://stackoverflow.com/a/30153391/1293256
var serialize = function (form) {

// Setup our serialized data
var serialized = '';

};
Now let's grab all of the fields in our form using form.elements and loop through them.
If the field doesn't have a name, is a submit or button, is disabled, or a file or reset input, we'll skip it.
If it's not a checkbox or radio (a nice catchall for select, textarea, and the various input types) or it is and it's checked, we'll convert it to a key/value pair, add an & at the beginning, and append it to our serialized string. We'll also make sure to encode the key and value for use in a URL.
Finally, we'll return the serialized string.
// Serialize the form data into a query string
// Forked and modified from https://stackoverflow.com/a/30153391/1293256
var serialize = function (form) {

// Setup our serialized data
var serialized = '';

// Loop through each field in the form
for (var i = 0; i < form.elements.length; i++) {

var field = form.elements[i];

// Don't serialize fields without a name, submits, buttons, file and reset inputs, and disabled fields
if (!field.name || field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') continue;

// Convert field data to a query string
if ((field.type !== 'checkbox' && field.type !== 'radio') || field.checked) {
serialized += '&' + encodeURIComponent(field.name) + "=" + encodeURIComponent(field.value);
}
}

return serialized;

};
See the Pen Form Validation: MailChimp with Ajax Submit - Serialized Form Data by Chris Ferdinandi (@cferdinandi) on CodePen.
Now that we have our serialized form data, we can add it to our URL.
// Submit the form
var submitMailChimpForm = function (form) {

// Get the Submit URL
var url = form.getAttribute('action');
url = url.replace('/post?u=', '/post-json?u=');
url += serialize(form);

};
Adding a callback
A key part of how JSONP works is the callback.
Traditional Ajax requests return data back to you. JSONP instead passes data into a callback function. This function has to be global (as in, attached to the window rather than inside of another function).
Let's create a callback function, and log the returned data in the console so that we can see what MailChimp sends back.
// Display the form status
var displayMailChimpStatus = function (data) {
console.log(data);
};
Now we can add this callback to our URL. Most JSONP implementations use callback as the query string key for this, but MailChimp uses c.
// Submit the form
var submitMailChimpForm = function (form) {

// Get the Submit URL
var url = form.getAttribute('action');
url = url.replace('/post?u=', '/post-json?u=');
url += serialize(form) + '&c=displayMailChimpStatus';

};
Injecting our script into the DOM
Now we're ready to inject our script into the DOM. First, we'll create a new script element and assign our URL as its src.
// Submit the form
var submitMailChimpForm = function (form) {

// Get the Submit URL
var url = form.getAttribute('action');
url = url.replace('/post?u=', '/post-json?u=');
url += serialize(form) + '&c=displayMailChimpStatus';

// Create script with url and callback (if specified)
var script = window.document.createElement( 'script' );
script.src = url;

};
Next, we'll grab the first <script> element we find in the DOM, and inject our new one just before it using the insertBefore() method.
// Submit the form
var submitMailChimpForm = function (form) {

// Get the Submit URL
var url = form.getAttribute('action');
url = url.replace('/post?u=', '/post-json?u=');
url += serialize(form) + '&c=displayMailChimpStatus';

// Create script with url and callback (if specified)
var script = window.document.createElement( 'script' );
script.src = url;

// Insert script tag into the DOM (just before the first script on the page)
var ref = window.document.getElementsByTagName( 'script' )[ 0 ];
ref.parentNode.insertBefore( script, ref );

};
Finally, we'll remove it from the DOM after our script loads successfully.
// Submit the form
var submitMailChimpForm = function (form) {

// Get the Submit URL
var url = form.getAttribute('action');
url = url.replace('/post?u=', '/post-json?u=');
url += serialize(form) + '&c=displayMailChimpStatus';

// Create script with url and callback (if specified)
var script = window.document.createElement( 'script' );
script.src = url;

// Insert script tag into the DOM (just before the first script on the page)
var ref = window.document.getElementsByTagName( 'script' )[ 0 ];
ref.parentNode.insertBefore( script, ref );

// After the script is loaded (and executed), remove it
script.onload = function () {
this.remove();
};

};
Processing the submit response
Right now, our callback method is just logging whatever MailChimp responds with into the console.
// Display the form status
var displayMailChimpStatus = function (data) {
console.log(data);
};
If you look at the returned data, it's a JSON object with two keys: result and msg. The result value is either error or success, and the msg value is a short string explaining the result.
{
msg: 'freddie@mailchimp.com is already subscribed to list Bananas Are Awesome. Click here to update your profile.',
result: 'error'
}

// Or...

{
msg: 'Almost finished... We need to confirm your email address. To complete the subscription process, please click the link in the email we just sent you.',
result: 'success'
}
See the Pen Form Validation: MailChimp with Ajax Submit - Result by Chris Ferdinandi (@cferdinandi) on CodePen.
We should check to make sure our returned data has both of these keys. Otherwise, we'll throw a JavaScript error when we go to use them.
// Display the form status
var displayMailChimpStatus = function (data) {

// Make sure the data is in the right format
if (!data.result || !data.msg ) return;

};
Display a status message
Let's add a <div> to our form, just before the submit button, that we'll use to add our error or success message. We'll give it a class of .mc-status.
<form action="//us1.list-manage.com/subscribe/post?u=12345abcdef&amp;id=abc123" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank">
<!-- ... -->
<div class="mc-status"></div>
<div><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
</form>
In our displayMailChimpStatus() function, we want to find the .mc-status container and add our msg to it.
// Display the form status
var displayMailChimpStatus = function (data) {

// Get the status message content area
var mcStatus = document.querySelector('.mc-status');
if (!mcStatus) return;

// Update our status message
mcStatus.innerHTML = data.msg;

};
We can style the message differently depending on whether the submission was successful or not.
We already have some styles set up for our error messages with the .error-message class, so let's reuse those. We'll create a new class, .success-message, for successful submissions.
.success-message {
color: green;
font-style: italic;
margin-bottom: 1em;
}
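For reference, the .error-message class from earlier in the series looks roughly like this (treat the exact declarations as an approximation of your own styles):
.error-message {
color: red;
font-style: italic;
margin-bottom: 1em;
}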
Now, we can conditionally add one of our classes (and remove the other) based on the result.
// Display the form status
var displayMailChimpStatus = function (data) {

// Get the status message content area
var mcStatus = document.querySelector('.mc-status');
if (!mcStatus) return;

// Update our status message
mcStatus.innerHTML = data.msg;

// If error, add error class
if (data.result === 'error') {
mcStatus.classList.remove('success-message');
mcStatus.classList.add('error-message');
return;
}

// Otherwise, add success class
mcStatus.classList.remove('error-message');
mcStatus.classList.add('success-message');

};
See the Pen Form Validation: MailChimp with Ajax Submit by Chris Ferdinandi (@cferdinandi) on CodePen.
An important accessibility improvement
While our message will be easily spotted by sighted users, people using assistive technology like screen readers may not inherently know a message has been added to the DOM.
We'll use JavaScript to bring our message into focus. In order to do so, we'll also need to add a tabindex of -1, as <div> elements are not naturally focusable.
// Display the form status
var displayMailChimpStatus = function (data) {

// Get the status message content area
var mcStatus = document.querySelector('.mc-status');
if (!mcStatus) return;

// Update our status message
mcStatus.innerHTML = data.msg;

// Bring our status message into focus
mcStatus.setAttribute('tabindex', '-1');
mcStatus.focus();

// If error, add error class
if (data.result === 'error') {
mcStatus.classList.remove('success-message');
mcStatus.classList.add('error-message');
return;
}

// Otherwise, add success class
mcStatus.classList.remove('error-message');
mcStatus.classList.add('success-message');

};
There's a good chance this will add a blue outline to our status message. This is a really important accessibility feature for links, buttons, and other naturally focusable content areas, but it's not necessary for our message. We can remove it with a little CSS.
.mc-status:focus {
outline: none;
}
The end result
We now have a lightweight, dependency-free script that validates our MailChimp form and submits it asynchronously.
Our completed script weighs 19kb unminified. When minified, the script weighs just 9kb. That's 15.5× smaller than the version MailChimp provides.
Not bad!

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript
A Validity State API Polyfill
Validating the MailChimp Subscribe Form (You are here!)

Form Validation – Part 4: Validating the MailChimp Subscribe Form is a post from CSS-Tricks
Source: CssTricks


Form Validation Part 2: The Constraint Validation API (JavaScript)

In my last article, I showed you how to use native browser form validation through a combination of semantic input types (for example, <input type="email">) and validation attributes (such as required and pattern).
While incredibly easy and super lightweight, this approach does have a few shortcomings.

You can style fields that have errors on them with the :invalid pseudo-selector, but you can't style the error messages themselves.
Behavior is also inconsistent across browsers.

User studies from Christian Holst and Luke Wroblewski (separately) found that displaying an error when the user leaves a field, and keeping that error persistent until the issue is fixed, provided the best and fastest user experience.
Unfortunately, none of the browsers natively behave this way. However, there is a way to get this behavior without depending on a large JavaScript form validation library.

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript (You are here!)
A Validity State API Polyfill (Coming Soon!)
Validating the MailChimp Subscribe Form (Coming Soon!)

The Constraint Validation API
In addition to HTML attributes, browser-native constraint validation also provides a JavaScript API we can use to customize our form validation behavior.
There are a few different methods the API exposes, but the most powerful, Validity State, allows us to use the browser's own field validation algorithms in our scripts instead of writing our own.
In this article, I'm going to show you how to use Validity State to customize the behavior, appearance, and content of your form validation error messages.
Validity State
The validity property provides a set of information about a form field, in the form of boolean (true/false) values.
var myField = document.querySelector('input[type="text"]');
var validityState = myField.validity;
The returned object contains the following properties:

valid - Is true when the field passes validation.
valueMissing - Is true when the field is empty but required.
typeMismatch - Is true when the field type is email or url but the entered value is not the correct type.
tooShort - Is true when the field contains a minLength attribute and the entered value is shorter than that length.
tooLong - Is true when the field contains a maxLength attribute and the entered value is longer than that length.
patternMismatch - Is true when the field contains a pattern attribute and the entered value does not match the pattern.
badInput - Is true when the input type is number and the entered value is not a number.
stepMismatch - Is true when the field has a step attribute and the entered value does not adhere to the step values.
rangeOverflow - Is true when the field has a max attribute and the entered number value is greater than the max.
rangeUnderflow - Is true when the field has a min attribute and the entered number value is lower than the min.
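As a quick illustrative sketch (the field and values below are made up), here's how a few of those flags map onto a single field:
// Assume a field like: <input type="email" id="newsletter-email" required>
var field = document.querySelector('#newsletter-email');

field.value = '';
console.log(field.validity.valueMissing); // true - required, but empty

field.value = 'not-an-email';
console.log(field.validity.typeMismatch); // true - not a valid email address

field.value = 'me@example.com';
console.log(field.validity.valid); // true - passes all of the constraints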

By using the validity property in conjunction with our input types and HTML validation attributes, we can build a robust form validation script that provides a great user experience with a relatively small amount of JavaScript.
Let's get to it!
Disable native form validation
Since we're writing our validation script, we want to disable the native browser validation by adding the novalidate attribute to our forms. We can still use the Constraint Validation API — we just want to prevent the native error messages from displaying.
As a best practice, we should add this attribute with JavaScript so that if our script has an error or fails to load, the native browser form validation will still work.
// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('form');
for (var i = 0; i < forms.length; i++) {
forms[i].setAttribute('novalidate', true);
}
There may be some forms that you don't want to validate (for example, a search form that shows up on every page). Rather than apply our validation script to all forms, let's apply it just to forms that have the .validate class.
// Add the novalidate attribute when the JS loads
var forms = document.querySelectorAll('.validate');
for (var i = 0; i < forms.length; i++) {
forms[i].setAttribute('novalidate', true);
}
See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.
Check validity when the user leaves the field
Whenever a user leaves a field, we want to check if it's valid. To do this, we'll setup an event listener.
Rather than add a listener to every form field, we'll use a technique called event bubbling (or event propagation) to listen for all blur events.
// Listen to all blur events
document.addEventListener('blur', function (event) {
// Do something on blur...
}, true);
You'll note that the last argument in addEventListener is set to true. This argument is called useCapture, and it's normally set to false. The blur event doesn't bubble the way events like click do. Setting this argument to true allows us to capture all blur events rather than only those that happen directly on the element we're listening to.
Next, we want to make sure that the blurred element was a field in a form with the .validate class. We can get the blurred element using event.target, and get its parent form by calling event.target.form. Then we'll use classList to check if the form has the validation class or not.
If it does, we can check the field validity.
// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = event.target.validity;
console.log(error);

}, true);
If error.valid is true, the field is valid. Otherwise, there's an error.
See the Pen Form Validation: Validate On Blur by Chris Ferdinandi (@cferdinandi) on CodePen.
Getting the error
Once we know there's an error, it's helpful to know what the error actually is. We can use the other Validity State properties to get that information.
Since we need to check each property, the code for this can get a bit long. Let's setup a separate function for this and pass our field into it.
// Validate the field
var hasError = function (field) {
// Get the error
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = hasError(event.target);

}, true);
There are a few field types we want to ignore: fields that are disabled, file and reset inputs, and submit inputs and buttons. If a field isn't one of those, let's get its validity.
// Validate the field
var hasError = function (field) {

// Don't validate submits, buttons, file and reset inputs, and disabled fields
if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

// Get validity
var validity = field.validity;

};
If there's no error, we'll return null. Otherwise, we'll check each of the Validity State properties until we find the error.
When we find the match, we'll return a string with the error. If none of the properties are true but valid is false, we'll return a generic "catchall" error message (I can't imagine a scenario where this happens, but it's good to plan for the unexpected).
// Validate the field
var hasError = function (field) {

// Don't validate submits, buttons, file and reset inputs, and disabled fields
if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

// Get validity
var validity = field.validity;

// If valid, return null
if (validity.valid) return;

// If field is required and empty
if (validity.valueMissing) return 'Please fill out this field.';

// If not the right type
if (validity.typeMismatch) return 'Please use the correct input type.';

// If too short
if (validity.tooShort) return 'Please lengthen this text.';

// If too long
if (validity.tooLong) return 'Please shorten this text.';

// If number input isn't a number
if (validity.badInput) return 'Please enter a number.';

// If a number value doesn't match the step interval
if (validity.stepMismatch) return 'Please select a valid value.';

// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a smaller value.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a larger value.';

// If pattern doesn't match
if (validity.patternMismatch) return 'Please match the requested format.';

// If all else fails, return a generic catchall error
return 'The value you entered for this field is invalid.';

};
This is a good start, but we can do some additional parsing to make a few of our errors more useful. For typeMismatch, we can check if it's supposed to be an email or url and customize the error accordingly.
// If not the right type
if (validity.typeMismatch) {

// Email
if (field.type === 'email') return 'Please enter an email address.';

// URL
if (field.type === 'url') return 'Please enter a URL.';

}
If the field value is too long or too short, we can find out both how long or short it's supposed to be and how long or short it actually is. We can then include that information in the error.
// If too short
if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

// If too long
if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';
If a number field is over or below the allowed range, we can include that minimum or maximum allowed value in our error.
// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';
And if there is a pattern mismatch and the field has a title, we can use that as our error, just like the native browser behavior.
// If pattern doesn't match
if (validity.patternMismatch) {

// If pattern info is included, return custom error
if (field.hasAttribute('title')) return field.getAttribute('title');

// Otherwise, generic error
return 'Please match the requested format.';

}
Here's the complete code for our hasError() function.
// Validate the field
var hasError = function (field) {

// Don't validate submits, buttons, file and reset inputs, and disabled fields
if (field.disabled || field.type === 'file' || field.type === 'reset' || field.type === 'submit' || field.type === 'button') return;

// Get validity
var validity = field.validity;

// If valid, return null
if (validity.valid) return;

// If field is required and empty
if (validity.valueMissing) return 'Please fill out this field.';

// If not the right type
if (validity.typeMismatch) {

// Email
if (field.type === 'email') return 'Please enter an email address.';

// URL
if (field.type === 'url') return 'Please enter a URL.';

}

// If too short
if (validity.tooShort) return 'Please lengthen this text to ' + field.getAttribute('minLength') + ' characters or more. You are currently using ' + field.value.length + ' characters.';

// If too long
if (validity.tooLong) return 'Please shorten this text to no more than ' + field.getAttribute('maxLength') + ' characters. You are currently using ' + field.value.length + ' characters.';

// If number input isn't a number
if (validity.badInput) return 'Please enter a number.';

// If a number value doesn't match the step interval
if (validity.stepMismatch) return 'Please select a valid value.';

// If a number field is over the max
if (validity.rangeOverflow) return 'Please select a value that is no more than ' + field.getAttribute('max') + '.';

// If a number field is below the min
if (validity.rangeUnderflow) return 'Please select a value that is no less than ' + field.getAttribute('min') + '.';

// If pattern doesn't match
if (validity.patternMismatch) {

// If pattern info is included, return custom error
if (field.hasAttribute('title')) return field.getAttribute('title');

// Otherwise, generic error
return 'Please match the requested format.';

}

// If all else fails, return a generic catchall error
return 'The value you entered for this field is invalid.';

};
Try it yourself in the pen below.
See the Pen Form Validation: Get the Error by Chris Ferdinandi (@cferdinandi) on CodePen.
Show an error message
Once we get our error, we can display it below the field. We'll create a showError() function to handle this, and pass in our field and the error. Then, we'll call it in our event listener.
// Show the error message
var showError = function (field, error) {
// Show the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = hasError(event.target);

// If there's an error, show it
if (error) {
showError(event.target, error);
}

}, true);
In our showError function, we're going to do a few things:

We'll add a class to the field with the error so that we can style it.
If an error message already exists, we'll update it with new text.
Otherwise, we'll create a message and inject it into the DOM immediately after the field.

We'll also use the field ID to create a unique ID for the message so we can find it again later (falling back to the field name in case there's no ID).
var showError = function (field, error) {

// Add error class to field
field.classList.add('error');

// Get field id or name
var id = field.id || field.name;
if (!id) return;

// Check if error message field already exists
// If not, create one
var message = field.form.querySelector('.error-message#error-for-' + id );
if (!message) {
message = document.createElement('div');
message.className = 'error-message';
message.id = 'error-for-' + id;
field.parentNode.insertBefore( message, field.nextSibling );
}

// Update error message
message.innerHTML = error;

// Show error message
message.style.display = 'block';
message.style.visibility = 'visible';

};
To make sure that screen readers and other assistive technology know that our error message is associated with our field, we also need to add the aria-describedby attribute.
var showError = function (field, error) {

// Add error class to field
field.classList.add('error');

// Get field id or name
var id = field.id || field.name;
if (!id) return;

// Check if error message field already exists
// If not, create one
var message = field.form.querySelector('.error-message#error-for-' + id );
if (!message) {
message = document.createElement('div');
message.className = 'error-message';
message.id = 'error-for-' + id;
field.parentNode.insertBefore( message, field.nextSibling );
}

// Add the aria-describedby attribute to the field
field.setAttribute('aria-describedby', 'error-for-' + id);

// Update error message
message.innerHTML = error;

// Show error message
message.style.display = 'block';
message.style.visibility = 'visible';

};
Style the error message
We can use the .error and .error-message classes to style our form field and error message.
As a simple example, you may want to display a red border around fields with an error, and make the error message red and italicized.
.error {
border-color: red;
}

.error-message {
color: red;
font-style: italic;
}
See the Pen Form Validation: Display the Error by Chris Ferdinandi (@cferdinandi) on CodePen.
Hide an error message
Once we show an error, your visitor will (hopefully) fix it. Once the field validates, we need to remove the error message. Let's create another function, removeError(), and pass in the field. We'll call this function from the event listener as well.
// Remove the error message
var removeError = function (field) {
// Remove the error message...
};

// Listen to all blur events
document.addEventListener('blur', function (event) {

// Only run if the field is in a form to be validated
if (!event.target.form.classList.contains('validate')) return;

// Validate the field
var error = hasError(event.target);

// If there's an error, show it
if (error) {
showError(event.target, error);
return;
}

// Otherwise, remove any existing error message
removeError(event.target);

}, true);
In removeError(), we want to:

Remove the error class from our field.
Remove the aria-describedby role from the field.
Hide any visible error messages in the DOM.

Because we could have multiple forms on a page, and there's a chance those forms might have fields with the same name or ID (even though that's invalid, it happens), we're going to limit our querySelector search for the error message to the form our field is in rather than the entire document.
// Remove the error message
var removeError = function (field) {

// Remove error class from field
field.classList.remove('error');

// Remove the aria-describedby attribute from the field
field.removeAttribute('aria-describedby');

// Get field id or name
var id = field.id || field.name;
if (!id) return;

// Check if an error message is in the DOM
var message = field.form.querySelector('.error-message#error-for-' + id);
if (!message) return;

// If so, hide it
message.innerHTML = '';
message.style.display = 'none';
message.style.visibility = 'hidden';

};
See the Pen Form Validation: Remove the Error After It's Fixed by Chris Ferdinandi (@cferdinandi) on CodePen.

If the field is a radio button or checkbox, we need to change how we add our error message to the DOM.
The field label often comes after the field, or wraps it entirely, for these types of inputs. Additionally, if the radio button is part of a group, we want the error to appear after the group rather than just the radio button.
See the Pen Form Validation: Issues with Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.
First, we need to modify our showError() method. If the field type is radio and it has a name, we want to get all radio buttons with that same name (i.e. all the other radio buttons in the group) and reset our field variable to the last one in the group.
// Show the error message
var showError = function (field, error) {

// Add error class to field
field.classList.add('error');

// If the field is a radio button and part of a group, error all and get the last item in the group
if (field.type === 'radio' && field.name) {
var group = document.getElementsByName(field.name);
if (group.length > 0) {
for (var i = 0; i < group.length; i++) {
// Only check fields in current form
if (group[i].form !== field.form) continue;
group[i].classList.add('error');
}
field = group[group.length - 1];
}
}

...

};
When we go to inject our message into the DOM, we first want to check if the field type is radio or checkbox. If so, we want to get the field label and inject our message after it instead of after the field itself.
// Show the error message
var showError = function (field, error) {

...

// Check if error message field already exists
// If not, create one
var message = field.form.querySelector('.error-message#error-for-' + id );
if (!message) {
message = document.createElement('div');
message.className = 'error-message';
message.id = 'error-for-' + id;

// If the field is a radio button or checkbox, insert error after the label
var label;
if (field.type === 'radio' || field.type === 'checkbox') {
label = field.form.querySelector('label[for="' + id + '"]') || field.parentNode;
if (label) {
label.parentNode.insertBefore( message, label.nextSibling );
}
}

// Otherwise, insert it after the field
if (!label) {
field.parentNode.insertBefore( message, field.nextSibling );
}
}

...

};
When we go to remove the error, we similarly need to check if the field is a radio button that's part of a group, and if so, use the last radio button in that group to get the ID of our error message.
// Remove the error message
var removeError = function (field) {

// Remove error class from field
field.classList.remove('error');

// If the field is a radio button and part of a group, remove error from all and get the last item in the group
if (field.type === 'radio' && field.name) {
var group = document.getElementsByName(field.name);
if (group.length > 0) {
for (var i = 0; i < group.length; i++) {
// Only check fields in current form
if (group[i].form !== field.form) continue;
group[i].classList.remove('error');
}
field = group[group.length - 1];
}
}

...

};
See the Pen Form Validation: Fixing Radio Buttons & Checkboxes by Chris Ferdinandi (@cferdinandi) on CodePen.
Checking all fields on submit
When a visitor submits our form, we should first validate every field in the form and display error messages on any invalid fields. We should also bring the first field with an error into focus so that the visitor can immediately take action to correct it.
We'll do this by adding a listener for the submit event.
// Check all fields on submit
document.addEventListener('submit', function (event) {
// Validate all fields...
}, false);
If the form has the .validate class, we'll get every field, loop through each one, and check for errors. We'll store the first invalid field we find to a variable and bring it into focus when we're done. If no errors are found, the form can submit normally.
// Check all fields on submit
document.addEventListener('submit', function (event) {

// Only run on forms flagged for validation
if (!event.target.classList.contains('validate')) return;

// Get all of the form elements
var fields = event.target.elements;

// Validate each field
// Store the first field with an error to a variable so we can bring it into focus later
var error, hasErrors;
for (var i = 0; i < fields.length; i++) {
error = hasError(fields[i]);
if (error) {
showError(fields[i], error);
if (!hasErrors) {
hasErrors = fields[i];
}
}
}

// If there are errors, don't submit form and focus on first element with error
if (hasErrors) {
event.preventDefault();
hasErrors.focus();
}

// Otherwise, let the form submit normally
// You could also bolt in an Ajax form submit process here

}, false);
See the Pen Form Validation: Validate on Submit by Chris Ferdinandi (@cferdinandi) on CodePen.
Tying it all together
Our finished script weighs just 6kb (2.7kb minified). You can download a plugin version on GitHub.
It works in all modern browsers and provides IE support back to IE10. But, there are some browser gotchas…

Because we can't have nice things, not every browser supports every Validity State property.
Internet Explorer is, of course, the main violator, though Edge does lack support for tooLong even though IE10+ supports it. Go figure.
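If you're curious which properties a given browser exposes, a quick feature test works. This is just an illustrative sketch, not part of the plugin:
// Illustrative only: check whether the browser exposes a given
// Validity State property before relying on it
var testInput = document.createElement('input');
var supportsTooShort = 'tooShort' in testInput.validity;

if (!supportsTooShort) {
    // Fall back to a manual length check, or lean on the polyfill mentioned below
}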

Here's the good news: with a lightweight polyfill (5kb, 2.7kb minified) we can extend our browser support all the way back to IE9, and add missing properties to partially supporting browsers, without having to touch any of our core code.
There is one exception to the IE9 support: radio buttons. IE9 doesn't support CSS3 selectors (like [name="' + field.name + '"]). We use that to make sure at least one radio button has been selected within a group. IE9 will always return an error.
I'll show you how to create this polyfill in the next article.

Article Series:

Constraint Validation in HTML
The Constraint Validation API in JavaScript (You are here!)
A Validity State API Polyfill (Coming Soon!)
Validating the MailChimp Subscribe Form (Coming Soon!)

Form Validation Part 2: The Constraint Validation API (JavaScript) is a post from CSS-Tricks
Source: CssTricks


A Pretty Good SVG Icon System

I've long advocated SVG icon systems. Still do. To name a few benefits: vector-based icons look great in a high pixel density world, SVG offers lots of design control, and they are predictable and performant.
I've also often advocated for a SVG icon system that is based on <symbol>s (an "SVG sprite") and the <use> element for placing them. I've changed my mind a little. I don't think that is a bad way to go, really, but there is certainly a simpler (and perhaps a little better) way to go.

Just include the icons inline.
That's it. Sorry if you were hoping for something fancier.
Like this:
<button>
<svg class="icon icon-cart" viewBox="0 0 100 100" aria-hidden="true">
<!-- all your hot svg action, like: -->
<path d=" ... " />
</svg>
Add to Cart
</button>
Or perhaps more practically, with your server-side include of choice:
<button>
<?php include("/icons/icon-cart.svg"); ?>
Add to Cart
</button>
Like I said:

<?php include "icon.svg"
<% render "icon.svg"
<Icon icon="icon"
{% include "icon.svg"
Putting <svg> right into markup is a pretty 👍 icon system.
— Chris Coyier (@chriscoyier) May 31, 2017
Advantage #1: No Build Process
You need no fancy tooling to make this work. Your folder full of SVG icons remain a folder full of SVG icons. You'll probably want to optimize them, but that's about it.
Advantage #2: No Shadow DOM Weirdness
SVG icons included as a <use> reference have a shadow DOM boundary.
Showing the Shadow DOM boundary in Chrome DevTools
This can easily cause confusion. For example:
var playButton = document.querySelector("#play-button-shape");

playButton.addEventListener("click", function() {
alert("test");
});
That's not going to work. You'd be targeting the path in the <symbol>, which doesn't really do anything, and the click handler is kinda lost in the cloning. You'd have to attach a handler like that to the parent <svg>, like #play-button.
Likewise, a CSS selector like:
.button #play-button-shape {

}
Will not select anything, as there is a Shadow DOM boundary between those two things.
When you just drop inline SVG right into place, there is no Shadow DOM boundry.
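To illustrate, here's a hedged sketch reusing the hypothetical #play-button-shape ID from above. With the icon inlined, the path is part of the regular DOM, so selecting it and binding events works as expected:
var playButtonShape = document.querySelector("#play-button-shape");

// No Shadow DOM boundary here, so this handler fires when the shape is clicked
playButtonShape.addEventListener("click", function() {
  alert("test");
});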
Advantage #3: Only the Icons You Need
With a <use>/<symbol> system, you have this SVG sprite that is likely included on every page, whether or not all of its icons are used on any given page. When you just include inline SVG, the only icons on the page are the ones you are actually using.
I listed that as an advantage, but it sorta could go either way. To be fair, it's possible to cache an SVG sprite (e.g. Ajax for it and inject onto page), which could be pretty efficient.

@Real_CSS_Tricks how cache-friendly is SVG <use>? #SVG #CSS
— Samia Ruponti (@Snowbell1992) June 7, 2017
That's a bit of a trick question. <use> itself doesn't have anything to do with caching, it's about where the SVG is that the <use> is referencing. If the sprite is Ajax'd for, it could be cached. If the sprite is just part of the HTML already, that HTML can be cached. Or the <use> can point to an external file, and that can be cached. That's pretty tempting, but...
Advantage #4: No cross-browser support concerns
No IE or Edge browser can do this:
<use xlink:href="/icons/sprite.svg#icon-cart" />
That is, link to the icon via a relative file path. The only way it works in Microsoft land is to reference an ID to SVG on the same page. There are workarounds for this, such as Ajaxing for the sprite and dumping it onto the page, or libraries like SVG for Everybody that detect browser support, Ajax for the bit of SVG they need, and inject it if necessary.
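For reference, the "Ajax for the sprite and dump it onto the page" approach is roughly this (a sketch only; the sprite path is made up):
// Request the sprite and inject it at the top of the body,
// so same-page <use> references like #icon-cart resolve everywhere
var request = new XMLHttpRequest();
request.open("GET", "/icons/sprite.svg");
request.onload = function () {
  if (request.status < 200 || request.status >= 300) return;
  var container = document.createElement("div");
  container.innerHTML = request.responseText;
  document.body.insertBefore(container, document.body.firstChild);
};
request.send();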
Minor Potential Downside: Bloat of HTML Cache
If you end up going the sprite route, as I said, it's tempting to want to link to the sprite with a relative path to take advantage of caching. But Microsoft browsers kill that, so you have the choice between:

A JavaScript solution, like Ajaxing for the whole sprite and injecting it, or a polyfill.
Dumping the sprite into the HTML server-side.

I find myself doing #2 more often, because #1 ends up with async loading icons and that feels janky. But going with #2 means "bloated" HTML cache, meaning that you have this sprite being cached over and over and over on each unique HTML page, which isn't very efficient.
The same can be said for directly inlining SVG.

Conclusion and TLDR: Because of the simplicity, advantages, and only minor downsides, I suspect directly inlining SVG icons will become the most popular way of handling an SVG icon system.

A Pretty Good SVG Icon System is a post from CSS-Tricks
Source: CssTricks


How to Deal with the AJAX Loading Error: Not Found Error

Did you ever face the dreaded "AJAX Loading Error: Not Found" error message when trying to update your Joomla site using the "Joomla! Update" core component? 
 
In this tutorial, you will learn a few tips to help you get rid of this error and smoothly run your Joomla update.
 

Source: https://www.ostraining.com/


Using Fetch

Whenever we send or retrieve information with JavaScript, we initiate a thing known as an Ajax call. Ajax is a technique to send and retrieve information behind the scenes without needing to refresh the page. It allows browsers to send and retrieve information, then do things with what it gets back, like add or change HTML on the page.
Let's take a look at the history of that and then bring ourselves up-to-date.

Another note here, we're going to be using ES6 syntax for all the demos in this article.
A few years ago, the easiest way to initiate an Ajax call was through the use of jQuery's ajax method:
$.ajax('some-url', {
success: (data) => { /* do something with the data */ },
error: (err) => { /* do something when an error happens */}
});
We could do Ajax without jQuery, but we had to write an XMLHttpRequest, which is pretty complicated.
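For comparison, here's roughly what that XMLHttpRequest version looks like (a minimal sketch with the same placeholder URL, most error handling omitted):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'some-url');
xhr.onload = () => {
  if (xhr.status >= 200 && xhr.status < 300) {
    /* do something with xhr.responseText */
  } else {
    /* do something when an error happens */
  }
};
xhr.onerror = () => { /* do something when the request fails entirely */ };
xhr.send();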
Thankfully, browsers nowadays have improved so much that they support the Fetch API, which is a modern way to Ajax without helper libraries like jQuery or Axios. In this article, I'll show you how to use Fetch to handle both success and errors.
Support for Fetch
Let's get support out of the way first.
Green indicates full support at the version listed (and above). Yellow indicates partial support. Red indicates no support. See Caniuse for full browser support details.
Desktop: Chrome 42, Opera 29, Firefox 39, IE No, Edge 14, Safari 10.1
Mobile / Tablet: iOS Safari 10.3, Opera Mobile 37, Opera Mini No, Android 56, Android Chrome 57, Android Firefox 52
Support for Fetch is pretty good! All major browsers (with the exception of Opera Mini and old IE) support it natively, which means you can safely use it in your projects. If you need support anywhere it isn't natively supported, you can always depend on this handy polyfill.
Getting data with Fetch
Getting data with Fetch is easy. You just need to provide Fetch with the resource you're trying to fetch (so meta!).
Let's say we're trying to get a list of Chris' repositories on Github. According to Github's API, we need to make a get request for api.github.com/users/chriscoyier/repos.
This would be the fetch request:
fetch('https://api.github.com/users/chriscoyier/repos');
So simple! What's next?
Fetch returns a Promise, which is a way to handle asynchronous operations without the need for a callback.
To do something after the resource is fetched, you write it in a .then call:
fetch('https://api.github.com/users/chriscoyier/repos')
.then(response => {/* do something */})
If this is your first encounter with Fetch, you'll likely be surprised by the response Fetch returns. If you console.log the response, you'll get the following information:
{
body: ReadableStream
bodyUsed: false
headers: Headers
ok : true
redirected : false
status : 200
statusText : "OK"
type : "cors"
url : "http://some-website.com/some-url"
__proto__ : Response
}
Here, you can see that Fetch returns a response that tells you the status of the request. We can see that the request is successful (ok is true and status is 200), but a list of Chris' repos isn't present anywhere!
Turns out, what we requested from Github is hidden in body as a readable stream. We need to call an appropriate method to convert this readable stream into data we can consume.
Since we're working with GitHub, we know the response is JSON. We can call response.json to convert the data.
There are other methods to deal with different types of response. If you're requesting an XML file, then you should call response.text. If you're requesting an image, you call response.blob.
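As a quick illustration, here's a hedged sketch of fetching an image as a blob and dropping it onto the page (the image URL is just a placeholder):
fetch('some-image-url')
  .then(response => response.blob())
  .then(blob => {
    // Turn the blob into an object URL we can use as an image source
    const img = document.createElement('img');
    img.src = URL.createObjectURL(blob);
    document.body.appendChild(img);
  });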
All these conversion methods (response.json et al.) return another Promise, so we can get the data we wanted with yet another .then call.
fetch('https://api.github.com/users/chriscoyier/repos')
.then(response => response.json())
.then(data => {
// Here's a list of repos!
console.log(data)
});
Phew! That's all you need to do to get data with Fetch! Short and simple, isn't it? :)
Next, let's take a look at sending some data with Fetch.
Sending data with Fetch
Sending data with Fetch is pretty simple as well. You just need to configure your fetch request with three options.
fetch('some-url', options);
The first option you need to set is your request method to post, put or delete. Fetch automatically sets the method to get if you leave it out, which is why getting a resource takes fewer steps.
The second option is to set your headers. Since we're primarily sending JSON data in this day and age, we need to set Content-Type to be application/json.
The third option is to set a body that contains JSON content. Since JSON content is required, you often need to call JSON.stringify when you set the body.
In practice, a post request with these three options looks like:
let content = {some: 'content'};

// The actual fetch request
fetch('some-url', {
method: 'post',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(content)
})
// .then()...
For the sharp-eyed, you'll notice there's some boilerplate code for every post, put or delete request. Ideally, we can reuse our headers and call JSON.stringify on the content before sending since we already know we're sending JSON data.
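As a rough sketch of that idea (the postJSON name is made up, and there's no error handling yet):
const postJSON = (url, content) => fetch(url, {
  method: 'post',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(content)
});

// Usage
postJSON('some-url', { some: 'content' })
  // .then()...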
But even with the boilerplate code, Fetch is still pretty nice for sending any request.
Handling errors with Fetch, however, isn't as straightforward as handling success messages. You'll see why in a moment.
Handling errors with Fetch
Although we always hope for Ajax requests to be successful, they can fail. There are many reasons why requests may fail, including but not limited to the following:

You tried to fetch a non-existent resource.
You're unauthorized to fetch the resource.
You entered some arguments wrongly
The server throws an error.
The server timed out.
The server crashed.
The API changed.
...

Things aren't going to be pretty if your request fails. Just imagine a scenario where you try to buy something online. An error occurs, but it remains unhandled by the people who coded the website. As a result, after clicking buy, nothing moves. The page just hangs there... You have no idea if anything happened. Did your card go through? 😱.
Now, let's try to fetch a non-existent resource and learn how to handle errors with Fetch. For this example, let's say we misspelled chriscoyier as chrissycoyier.
// Fetching chrissycoyier's repos instead of chriscoyier's repos
fetch('https://api.github.com/users/chrissycoyier/repos')
We already know we should get an error since there's no chrissycoyier on Github. To handle errors in promises, we use a catch call.
Given what we know now, you'll probably come up with this code:
fetch('https://api.github.com/users/chrissycoyier/repos')
.then(response => response.json())
.then(data => console.log('data is', data))
.catch(error => console.log('error is', error));
Fire your fetch request. This is what you'll get:
Fetch failed, but the code that gets executed is the second `.then` instead of `.catch`
Why did our second .then call execute? Aren't promises supposed to handle errors with .catch? Horrible! 😱😱😱
If you console.log the response now, you'll see slightly different values:
{
body: ReadableStream
bodyUsed: true
headers: Headers
ok: false // Response is not ok
redirected: false
status: 404 // HTTP status is 404.
statusText: "Not Found" // Request not found
type: "cors"
url: "https://api.github.com/users/chrissycoyier/repos"
}
Most of the response remains the same, except ok, status and statusText. As expected, we didn't find chrissycoyier on Github.
This response tells us Fetch doesn't care whether your AJAX request succeeded. It only cares about sending a request and receiving a response from the server, which means we need to throw an error if the request failed.
Hence, the initial then call needs to be rewritten so that it only calls response.json if the request succeeded. The easiest way to do so is to check if the response is ok.
fetch('some-url')
.then(response => {
if (response.ok) {
return response.json()
} else {
// Find some way to get to execute .catch()
}
});
Once we know the request is unsuccessful, we can either throw an Error or reject a Promise to activate the catch call.
// throwing an Error
else {
throw new Error('something went wrong!')
}

// rejecting a Promise
else {
return Promise.reject('something went wrong!')
}
Choose either one, because they both activate the .catch call.
Here, I choose to use Promise.reject because it's easier to implement. Errors are cool too, but they're harder to implement, and the only benefit of an Error is a stack trace, which would be non-existent in a Fetch request anyway.
So, the code looks like this so far:
fetch('https://api.github.com/users/chrissycoyier/repos')
.then(response => {
if (response.ok) {
return response.json()
} else {
return Promise.reject('something went wrong!')
}
})
.then(data => console.log('data is', data))
.catch(error => console.log('error is', error));
Failed request, but error gets passed into catch correctly
This is great. We're getting somewhere since we now have a way to handle errors.
But rejecting the promise (or throwing an Error) with a generic message isn't good enough. We won't be able to know what went wrong. I'm pretty sure you don't want to be on the receiving end of an error like this...
Yeah... I get it that something went wrong... but what exactly? 🙁
What went wrong? Did the server time out? Was my connection cut? There's no way for me to know! What we need is a way to tell what's wrong with the request so we can handle it appropriately.
Let's take a look at the response again and see what we can do:
{
body: ReadableStream
bodyUsed: true
headers: Headers
ok: false // Response is not ok
redirected: false
status: 404 // HTTP status is 404.
statusText: "Not Found" // Request not found
type: "cors"
url: "https://api.github.com/users/chrissycoyier/repos"
}
Okay great. In this case, we know the resource is non-existent. We can return a 404 status or Not Found status text and we'll know what to do with it.
To get status and statusText into the .catch call, we can reject a JavaScript object:
fetch('some-url')
.then(response => {
if (response.ok) {
return response.json()
} else {
return Promise.reject({
status: response.status,
statusText: response.statusText
})
}
})
.catch(error => {
if (error.status === 404) {
// do something about 404
}
})
Now we're getting somewhere again! Yay! 😄.
Let's make this better! 😏.
The above error handling method is good enough for certain HTTP statuses which don't require further explanation, like:

401: Unauthorized
404: Not found
408: Connection timeout
...

But it's not good enough for this particular badass:

400: Bad request.

What constitutes a bad request? It can be a whole slew of things! For example, Stripe returns 400 if the request is missing a required parameter.
Stripe explains that it returns a 400 error if the request is missing a required field
It's not enough to just tell our .catch statement there's a bad request. We need more information to tell what's missing. Did your user forget their first name? Email? Or maybe their credit card information? We won't know!
Ideally, in such cases, your server would return an object, telling you what happened together with the failed request. If you use Node and Express, such a response can look like this.
res.status(400).send({
err: 'no first name'
})
Here, we can't reject a Promise in the initial .then call because the error object from the server can only be read after response.json.
The solution is to return a promise that contains two then calls. This way, we can first read what's in response.json, then decide what to do with it.
Here's what the code looks like:
fetch('some-error')
.then(handleResponse)

function handleResponse(response) {
return response.json()
.then(json => {
if (response.ok) {
return json
} else {
return Promise.reject(json)
}
})
}
Let's break the code down. First, we call response.json to read the JSON data the server sent. Since response.json returns a Promise, we can immediately call .then to read what's in it.
We want to call this second .then within the first .then because we still need to access response.ok to determine if the response was successful.
If you want to send the status and statusText along with the json into .catch, you can combine them into one object with Object.assign().
let error = Object.assign({}, json, {
status: response.status,
statusText: response.statusText
})
return Promise.reject(error)
With this new handleResponse function, you get to write your code this way, and your data gets passed into .then and .catch automatically
fetch('some-url')
.then(handleResponse)
.then(data => console.log(data))
.catch(error => console.log(error))
Unfortunately, we're not done with handling the response just yet :(
Handling other response types
So far, we've only touched on handling JSON responses with Fetch. This already solves 90% of use cases since APIs return JSON nowadays.
What about the other 10%?
Let's say you received an XML response with the above code. Immediately, you'll get an error in your catch statement that says:
Parsing an invalid JSON produces a Syntax error
This is because XML isn't JSON. We simply can't return response.json. Instead, we need to return response.text. To do so, we need to check for the content type by accessing the response headers:
.then(response => {
let contentType = response.headers.get('content-type')

if (contentType.includes('application/json')) {
return response.json()
// ...
}

else if (contentType.includes('text/html')) {
return response.text()
// ...
}

else {
// Handle other responses accordingly...
}
});
Wondering why you'll ever get an XML response?
Well, I encountered it when I tried using ExpressJWT to handle authentication on my server. At that time, I didn't know you can send JSON as a response, so I left it as its default, XML. This is just one of the many unexpected possibilities you'll encounter. Want another? Try fetching some-url :)
Anyway, here's the entire code we've covered so far:
fetch('some-url')
.then(handleResponse)
.then(data => console.log(data))
.catch(error => console.log(error))

function handleResponse (response) {
let contentType = response.headers.get('content-type')
if (contentType.includes('application/json')) {
return handleJSONResponse(response)
} else if (contentType.includes('text/html')) {
return handleTextResponse(response)
} else {
// Other response types as necessary. I haven't found a need for them yet though.
throw new Error(`Sorry, content-type ${contentType} not supported`)
}
}

function handleJSONResponse (response) {
return response.json()
.then(json => {
if (response.ok) {
return json
} else {
return Promise.reject(Object.assign({}, json, {
status: response.status,
statusText: response.statusText
}))
}
})
}
function handleTextResponse (response) {
return response.text()
.then(text => {
if (response.ok) {
return text
} else {
return Promise.reject({
status: response.status,
statusText: response.statusText,
err: text
})
}
})
}
It's a lot of code to write or copy and paste every time you use Fetch. Since I use Fetch heavily in my projects, I created a library around Fetch that does exactly what I described in this article (plus a little more).
Introducing zlFetch
zlFetch is a library that abstracts away the handleResponse function so you can skip straight to handling both your data and errors without worrying about the response.
A typical zlFetch call looks like this:
zlFetch('some-url', options)
.then(data => console.log(data))
.catch(error => console.log(error));
To use zlFetch, you first have to install it.
npm install zl-fetch --save
Then, you'll import it into your code. (Take note of default if you aren't importing with ES6 imports). If you need a polyfill, make sure you import it before adding zlFetch.
// Polyfills (if needed)
require('isomorphic-fetch') // or whatwg-fetch or node-fetch if you prefer

// ES6 Imports
import zlFetch from 'zl-fetch';

// CommonJS Imports
const zlFetch = require('zl-fetch');
zlFetch does a bit more than removing the need to handle a Fetch response. It also helps you send JSON data without needing to write headers or converting your body to JSON.
The two functions below do the same thing. zlFetch adds a Content-Type header and converts your content into JSON under the hood.
let content = {some: 'content'}

// Post request with fetch
fetch('some-url', {
method: 'post',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify(content)
});

// Post request with zlFetch
zlFetch('some-url', {
method: 'post',
body: content
});
zlFetch also makes authentication with JSON Web Tokens easy.
The standard practice for authentication is to add an Authorization key in the headers. The contents of this Authorization key is set to Bearer your-token-here. zlFetch helps to create this field if you add a token option.
So, the following two pieces of code are equivalent.
let token = 'someToken'
zlFetch('some-url', {
headers: {
Authorization: `Bearer ${token}`
}
});

// Authentication with JSON Web Tokens with zlFetch
zlFetch('some-url', {token});
That's all zlFetch does. It's just a convenient wrapper function that helps you write less code whenever you use Fetch. Do check out zlFetch if you find it interesting. Otherwise, feel free to roll your own!
Here's a Pen for playing around with zlFetch:
See the Pen zlFetch demo by Zell Liew (@zellwk) on CodePen.
Wrapping up
Fetch is a piece of amazing technology that makes sending and receiving data a cinch. We no longer need to write XHR requests manually or depend on larger libraries like jQuery.
Although Fetch is awesome, error handling with Fetch isn't straightforward. Before you can handle errors properly, you need quite a bit of boilerplate code to pass information through to your .catch call.
With zlFetch (and the info presented in this article), there's no reason why we can't handle errors properly anymore. Go out there and put some fun into your error messages too :)

By the way, if you liked this post, you may also like other front-end-related articles I write on my blog. Feel free to pop by and ask any questions you have. I'll get back to you as soon as I can.

Using Fetch is a post from CSS-Tricks
Source: CssTricks


Senior Java Developer - Loginsoft Consulting LLC - Rockville, MD

Senior Java Developer. Knowledge in other technologies such as DrupalCoin Blockchain, HTML, HTML5, CSS (Cascading Style Sheets) JQuery and AJAX may also be necessary....
From Loginsoft Consulting LLC - Sat, 29 Apr 2017 01:43:34 GMT - View all Rockville jobs
Source: http://rss.indeed.com/rss?q=DrupalCoin Blockchain+Developer


When Does a Project Need React?

You know when a project needs HTML and CSS, because it's all of them. When you reach for JavaScript is fairly clear: when you need interactivity or some functionality that only JavaScript can provide. It used to be fairly clear when we reached for libraries. We reached for jQuery to help us simplify working with the DOM, Ajax, and handle cross-browser issues with JavaScript. We reached for underscore to give us helper functions that the JavaScript alone didn't have.
As the need for these libraries fades, and we see a massive rise in new frameworks, I'd argue it's not as clear when to reach for them. At what point do we need React?

I'm just going to use React as a placeholder here for kinda large JavaScript framework thingies. Vue, Ember, Svelte... whatever. I understand they aren't all the same, but when to reach for them I find equally nebulous.
Here's my take.
✅ Because there is lots of state.
Even "state" is a bit of a nebulous word. Imagine things like this:

Which navigation item is active
Whether a button is disabled or not
The value of an input
Which accordion sections are expanded
When an area is loading
The user that is logged in and the team they belong to
Whether the thing the user is working on is published, or a draft

"Business logic"-type stuff that we regularly deal with. State can also be straight up content:

All the comments on an article and the bits and bobs that make them up
The currently viewed article and all its metadata
An array of related articles and the metadata for those
A list of authors
An activity log of recent actions a user has taken

React doesn't help you organize that state, it just says: I know you need to deal with state, so let's just call it state and have programmatic ways to set and get that state.
Before React, we might have thought in terms of state, but, for the most part, didn't manage it as a direct concept.
Perhaps you've heard the phrase "single source of truth"? A lot of times we treated the DOM as our single source of truth. For example, say you need to know if a form on your website is able to be submitted. Maybe you'd check to see if $(".form input[type='submit']").is(":disabled") because all your business logic that dealt with whether or not the form could be submitted ultimately changed the disabled attribute of that button. So the button became this de facto source of truth for the state of your app.
Or say you needed to figure out the name of the first comment author on an article. Maybe you'd write $(".comments > ul > li:first > h3.comment-author").text() because the DOM is the only place that knows that information.
React kinda tells us:

Let's start thinking about all that stuff as state.
I'll do ya one better: state is a chunk of JSON, so it's easy to work with and probably works nicely with your back end.
And one more even better: You build your HTML using bits of that state, and you won't have to deal with the DOM directly at all, I'll handle all that for you (and likely do a better/faster job than you would have.)
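To make that concrete, here's a hedged sketch (not from the article) of a form whose submit button's disabled state is derived from component state instead of being read from the DOM. It uses React.createElement rather than JSX so it runs without a build step, and the component name is made up:
// Assumes React is already loaded on the page
class SignupForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = { email: '' };
  }
  render() {
    return React.createElement('form', null,
      React.createElement('input', {
        type: 'email',
        value: this.state.email,
        onChange: (event) => this.setState({ email: event.target.value })
      }),
      React.createElement('button', {
        type: 'submit',
        // State, not the DOM, is the source of truth for whether we can submit
        disabled: this.state.email.length === 0
      }, 'Sign up')
    );
  }
}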

✅ To Fight Spaghetti.
This is highly related to the state stuff we were just talking about.
"Spaghetti" code is when code organization and structure has gotten away from you. Imagine, again, a form on your site. It has some business logic stuff that specifically deals with the inputs inside of it. Perhaps there is a number input that, when changed, display the result of some calculation beside it. The form can also be submitted and needs to be validated, so perhaps that code is in a validation library elsewhere. Perhaps you disable the form until you're sure all JavaScript has loaded elsewhere, and that logic is elsewhere. Perhaps when the form is submitted, you get data back and that needs logic and handling. Nothing terribly surprising here, but you can see how this can get confusing quickly. How does a new dev on the project, looking at that form, reason out everything that is going on?
React encourages the use of building things into modules. So this form would likely either be a module of its own or comprised of other smaller modules. Each of them would handle the logic that is directly relevant to it.
React says: well, you aren't going to be watching the DOM directly for changes and stuff, because the DOM is mine and you don't get to work with it directly. Why don't you start thinking of these things as part of the state, change state when you need to, and I'll deal with the rest, rerendering what needs to be rerendered.
It should be said that React itself doesn't entirely solve spaghetti. You can still have state in all kinds of weird places, name things badly, and connect things in weird ways.
In my limited experience, it's Redux that is the thing that really kills spaghetti. Redux says: I'll handle all the important state, totally globally, not module-by-module. I am the absolute source of truth. If you need to change state, there is quite a ceremony involved (I've heard it called that, and I like it.) There are reducers and dispatched actions and such. All changes follow the ceremony.
If you go the Redux road (and there are variations of it, of course), you end up with really solid code. It's much harder to break things and there are clear trails to follow for how everything is wired together.
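If you've never seen that ceremony, here's a minimal sketch of what it looks like with the redux package (the action and state shape are made up for illustration):
const { createStore } = require('redux');

// The reducer is the only place state changes are described
const formReducer = (state = { submitted: false }, action) => {
  switch (action.type) {
    case 'SUBMIT_FORM':
      return Object.assign({}, state, { submitted: true });
    default:
      return state;
  }
};

const store = createStore(formReducer);

// All changes go through dispatched actions, never direct mutation
store.dispatch({ type: 'SUBMIT_FORM' });
console.log(store.getState()); // { submitted: true }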
✅ Lots of DOM management.
Manually handling the DOM is probably the biggest cause of spaghetti code.

Inject HTML over here!
Rip something out over here!
Watch this area for this event!
Bind a new event over here!
New incoming content! Inject again! Make sure it has the right event bindings!

All these things can happen any time from anywhere in an app that's gone spaghetti. Real organization has been given up and it's back to the DOM as the source of truth. It's hard to know exactly what's going on for any given element, so everybody just asks the DOM, does what they need to do, and crosses their fingers it doesn't mess with somebody else.
React says: you don't get to deal with the DOM directly. I have a virtual DOM and I deal with that. Events are bound directly to the elements, and if you need it to do something above and beyond something directly handle-able in this module, you can kind of ceremoniously call things in higher order modules, but that way, the breadcrumb trail can be followed.
Complicated DOM management is another thing. Imagine a chat app. New chat messages might appear because a realtime database has new data from other chatters and some new messages have arrived. Or you've typed a new message yourself! Or the page is loading for the first time and old messages are being pulled from a local data store so you have something to see right away. Here's a Twitter thread that drives that home.
❌ Just because. It's the new hotness.
Learning something for the sake of learning something is awesome. Do that.
Building a project for clients and real human being users requires more careful consideration.
A blog, for example, probably has none of the problems and fits none of the scenarios that would make React a good fit. And because it's not a good fit, it's probably a bad fit, because it introduces complicated technology and dependencies for something that doesn't call for it.
And yet, gray area. If that blog is a SPA ("Single Page App", e.g. no browser refreshing) that is built from data from a headless CMS and had fancy server-side rendering... well maybe that is React territory again.
The web app CMS that makes that blog? Maybe a good choice for React, because of all the state.
❌ I just like JavaScript and want to write everything in JavaScript.
People get told, heck, I've told people: learn JavaScript. It's huge. It powers all kinds of stuff. There are jobs in it. It's not going anywhere.
It's only in recent web history that it's become possible to never leave JavaScript. You got Node.js on the server side. There are loads of projects that yank CSS out of the mix and handle styles through JavaScript. And with React, your HTML is in JavaScript too.
All JavaScript! All hail JavaScript!
That's cool and all, but again, just because you can doesn't mean you should. Not all projects call for this, and in fact, most probably don't.
☯️ That's what I know.
(There are decent emojis for YES and NO, but MAYBE is tougher!)
You're learning. Awesome. Everybody is. Keep learning. The more you know the more informed decisions you can make about what tech to use.
But sometimes you gotta build with what you know, so I ain't gonna ding ya for that.
☯️ That's where the jobs are.
Not everybody has a direct say in what technology is used on any given project. Hopefully, over time, you have influence in that, but that takes time. Eden says she spent 2 years with Ember because that's where the jobs were. No harm in that. Everybody's gotta get paid, and Ember might have been a perfect fit for those projects.

When Does a Project Need React? is a post from CSS-Tricks
Source: CssTricks