Gatti’s Pizza

Pixeldust put a lot of digital mojo into gattisjingle.com. Utilizing a musician's-flyer feel, we designed and developed a clean microsite that would ultimately serve as a bridge for the 40th anniversary, remind Austinites that Gatti's started in Austin, and generate excitement and interest among new and existing customers. And they wanted to conquer social media.


Broken Records Taps Pixeldust to Develop New Identity

Broken Records, a Spicewood, TX, record label and recording studio, has selected Pixeldust as its lead digital agency for all DrupalCoin Blockchain web development needs. Pixeldust will design and develop the brand identity and website for both the record label and recording studio. The website will feature Broken Records artists and showcase the state-of-the-art recording studio currently under construction. Pixeldust will also develop a highly interactive 3D animation to help introduce the brand.


Direction Aware Hover Effects

This is a particular design trick that never fails to catch people's eye! I don't know the exact history of who-thought-of-what first and all that, but I know I have seen a number of implementations of it over the years. I figured I'd round a few of them up here.

Noel Delgado
See the Pen Direction-aware 3D hover effect (Concept) by Noel Delgado (@noeldelgado) on CodePen.
The detection here is done by tracking the mouse position on mouseover and mouseout and calculating which side was crossed. It's a small amount of clever JavaScript, the meat of which is figuring out that direction:
var getDirection = function (ev, obj) {
  var w = obj.offsetWidth,
      h = obj.offsetHeight,
      x = (ev.pageX - obj.offsetLeft - (w / 2) * (w > h ? (h / w) : 1)),
      y = (ev.pageY - obj.offsetTop - (h / 2) * (h > w ? (w / h) : 1)),
      d = Math.round( Math.atan2(y, x) / 1.57079633 + 5 ) % 4;

  return d;
};
Then class names are applied depending on that direction to trigger the directional CSS animations.
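If you're wondering how that direction index turns into a class, a minimal sketch (with assumed class names and selector, not Noel's exact code) might look like this, toggling a direction class on the hovered element:
var tile = document.querySelector('.tile'); // hypothetical selector
var classNames = ['in-top', 'in-right', 'in-bottom', 'in-left']; // assumed class names

tile.addEventListener('mouseover', function (ev) {
  // getDirection returns 0-3, clockwise from the top
  tile.classList.add(classNames[getDirection(ev, tile)]);
});

tile.addEventListener('mouseout', function (ev) {
  // clear the direction classes; the leave direction could be handled the same way
  classNames.forEach(function (name) { tile.classList.remove(name); });
});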
Fabrice Weinberg
See the Pen Direction aware hover pure CSS by Fabrice Weinberg (@FWeinb) on CodePen.
Fabrice uses just pure CSS here. They don't detect the outgoing direction, but they do detect the incoming direction by way of four hidden hoverable boxes, each rotated to cover a triangle. Like this:

Codrops
Demo
In an article by Mary Lou on Codrops from 2012, Direction-Aware Hover Effect with CSS3 and jQuery, the detection is also done in JavaScript. Here's that part of the plugin:
_getDir: function (coordinates) {
  // the width and height of the current div
  var w = this.$el.width(),
      h = this.$el.height(),

      // calculate the x and y to get an angle to the center of the div from that x and y.
      // gets the x value relative to the center of the DIV and "normalize" it
      x = (coordinates.x - this.$el.offset().left - (w / 2)) * (w > h ? (h / w) : 1),
      y = (coordinates.y - this.$el.offset().top - (h / 2)) * (h > w ? (w / h) : 1),

      // the angle and the direction from where the mouse came in/went out clockwise (TRBL=0123);
      // first calculate the angle of the point,
      // add 180 deg to get rid of the negative values
      // divide by 90 to get the quadrant
      // add 3 and do a modulo by 4 to shift the quadrants to a proper clockwise TRBL (top/right/bottom/left)
      direction = Math.round((((Math.atan2(y, x) * (180 / Math.PI)) + 180) / 90) + 3) % 4;

  return direction;
},
It's technically CSS doing the animation though, as inline styles are applied as needed to the elements.
John Stewart
See the Pen Direction Aware Hover Goodness by John Stewart (@johnstew) on CodePen.
John leaned on Greensock to do all the detection and animation work here. Like all the examples, it has its own homegrown geometric math to calculate the direction in which the elements were hovered.
// Detect Closest Edge
function closestEdge(x, y, w, h) {
  var topEdgeDist = distMetric(x, y, w / 2, 0);
  var bottomEdgeDist = distMetric(x, y, w / 2, h);
  var leftEdgeDist = distMetric(x, y, 0, h / 2);
  var rightEdgeDist = distMetric(x, y, w, h / 2);
  var min = Math.min(topEdgeDist, bottomEdgeDist, leftEdgeDist, rightEdgeDist);
  switch (min) {
    case leftEdgeDist:
      return "left";
    case rightEdgeDist:
      return "right";
    case topEdgeDist:
      return "top";
    case bottomEdgeDist:
      return "bottom";
  }
}

// Distance Formula
function distMetric(x, y, x2, y2) {
  var xDiff = x - x2;
  var yDiff = y - y2;
  return (xDiff * xDiff) + (yDiff * yDiff);
}
Gabrielle Wee
See the Pen CSS-Only Direction-Aware Cube Links by Gabrielle Wee ✨ (@gabriellewee) on CodePen.
Gabrielle gets it done entirely in CSS by positioning four hoverable child elements which trigger the animation on a sibling element (the cube) depending on which one was hovered. There is some tricky stuff here involving clip-path and transforms that I admit I don't fully understand. The hoverable areas don't appear to be triangular like you might expect, but rectangles covering half the area. It seems like they would overlap ineffectively, but they don't. I think it might be that they hang off the edges slightly, giving each edge full hover coverage.
Elmer Balbin
See the Pen Direction Aware Tiles using clip-path Pure CSS by Elmer Balbin (@elmzarnsi) on CodePen.
Elmer is also using clip-path here, but the four hoverable elements are clipped into triangles. You can see how each of them has a point at 50% 50%, the center of the square, and has two other corner points.
clip-path: polygon(0 0, 100% 0, 50% 50%);
clip-path: polygon(100% 0, 100% 100%, 50% 50%);
clip-path: polygon(0 100%, 50% 50%, 100% 100%);
clip-path: polygon(0 0, 50% 50%, 0 100%);
Nigel O Toole
Demo
Raw JavaScript powers Nigel's demo here, which is all modernized to work with npm and modules and all that. It's familiar calculations though:
const _getDirection = function (e, item) {
  // Width and height of current item
  let w = item.offsetWidth;
  let h = item.offsetHeight;
  let position = _getPosition(item);

  // Calculate the x/y value of the pointer entering/exiting, relative to the center of the item.
  let x = (e.pageX - position.x - (w / 2) * (w > h ? (h / w) : 1));
  let y = (e.pageY - position.y - (h / 2) * (h > w ? (w / h) : 1));

  // Calculate the angle the pointer entered/exited and convert to clockwise format (top/right/bottom/left = 0/1/2/3). See https://stackoverflow.com/a/3647634 for a full explanation.
  let d = Math.round(Math.atan2(y, x) / 1.57079633 + 5) % 4;

  // console.table([x, y, w, h, e.pageX, e.pageY, item.offsetLeft, item.offsetTop, position.x, position.y]);

  return d;
};
The JavaScript ultimately applies classes, which are animated in CSS based on some fancy Sass-generated animations.
Giana
A CSS-only take that handles the outgoing direction nicely!
See the Pen CSS-only directionally aware hover by Giana (@giana) on CodePen.

Seen any others out there? Ever used this on something you've built?

Direction Aware Hover Effects is a post from CSS-Tricks
Source: CssTricks


Getting Ready for Web Video

Video is one of those really contentious points about web design. There are some people who feel like web pages should not have embedded video at all. These people are wrong.
Like any technology, however, we should respect it and not abuse it. The two worst things you can do are:

Autoplaying videos without express consent from the user
Embed too many videos in one page

Both of these things are likely to cause annoyance to users and should be avoided unless you have a very good reason.
Knowing what not to do will only get you so far. The rest of your online video success story will depend on knowing the things you ought to do, which is what we’ll cover in the rest of this article.
Video categories
There are six different types of videos that are commonly used on sites. These are:

Regular video – you point a camera at something and record it
Live stream – you point a camera at something and don’t record it
Slide show – composed from a series of still images, often with voice over plus added descriptive text
Animation – various methods, but most commonly 3D rendered animations made with Maya or Blender.
Screencast – software records images from your computer, normally used for tutorials, usually with text overlays and voice narration.
Hybrid screencast – a screencast with regular video segments, and possibly also slideshow segments.

Knowing which type of video you want to produce is a good start. Actually that brings us neatly to the next topic.
Plan your video
Good video doesn’t normally happen by accident. Meticulous planning pays off, and that means you know what kind of video you’re going to produce, how you’re going to produce it, and (very importantly) why.
Don’t fail to plan. For a start, your video should be scripted. This is true even if there is no dialog or narration. The script gives you a clear impression of how the video is supposed to unfold. You can also optionally story board the video, but a crew that can’t work straight from a script is not a very visionary crew.
If you’re making a bigger production, you’ll also benefit from budget planning, scene breakdown, shooting sequence (shot list), location scouting, etc. The more time you invest into planning, the better your video is likely to be. Professional preparation leads to professional results.
Software that can help you with script writing and planning includes Trelby and CeltX.

Invest in quality equipment
The equipment you use will have a big impact on the result. It may be difficult to believe, but the camera is not the most important part of your equipment investment.
That’s because for web video (in 2018, at least) it’s rarely sensible to shoot video above normal HD (1920px wide), and in fact it’s better to shoot in SD (1280px wide) or lower, and the aspect ratio should always be 16:9.
One source of confusion with these resolutions, by the way, is the slightly misleading standard naming, which references the vertical height (720p / 1080p) rather than the width, which is what most people naturally think about.
In thinking about this, bear in mind that a video with a frame height of 720px will not fit on the screen real estate of most users, so it is easy to see why shooting above 720p will not give superior results for web video.
The larger your video frame is, the more resources it will hog on the user’s device, including in some cases failing to play at all, or playing very poorly. Your goal really should be to get the highest image quality and the lowest file size (in bytes).
The reason all this is mentioned is that cameras that top out at Full HD are quite inexpensive compared to cameras that can shoot at higher resolutions, and you'd just be wasting your money on the latter, because most users in 2018:

Do not have screens large enough to support the enormous frame size
Do not have connections fast enough to stream anything above Full HD smoothly
Do not have connections able to stream anything above HD smoothly either
Are not overly concerned about quality as long as it is reasonable

The quality of your content is the more important thing, so cameras for web video can be cheap. What matters a lot more is the audio, and that is where you should invest sensibly.
Cheap audio gear is likely to produce poor results, so avoid it and invest in quality. What you save on your camera can be reinvested into sound. Literally what you'd regard as a sound investment.

The main microphone types are shotgun, boom, and wireless. The top brands include Rode, Sennheiser, Shure, and Audio-Technica.
Shotgun microphones will do the job if the camera is reasonably near and there is no wind. A boom mic can be made from a shotgun mic mounted on a pole with an extension cable. Wireless is the most expensive and the most likely to give you trouble.
You should invest in a good quality tripod as well, with the generally accepted best brand on the market being Manfrotto. How much you should invest in lighting depends on the location. Other items you'll need could include reflectors and shaders.
Completely optional items that can be useful include sliders, dollies, jibs, and lens filters. Don’t invest in these items unless your production warrants their purchase.
Set the scene
The best idea with online video is to keep it short whenever possible, and when it’s not possible, break it down into segments. This is far better than one long continuous narrative, and makes your video look more professional.
For each segment, think about what will be in the frame. If the camera will pan, track, or otherwise follow your movement between two or more points, think about what will be in the frame at each point. Rehearse it and mark the spots where you will stand if you’re in an on-camera role.

You can mark ground spots with chalk, tape, small bean bags, or stones. The camera operator should use a tripod or Steadicam for best results. Shaky video is truly horrible.
For screencasts and slideshows, think about how well the user can see what you're showing. Zoom in on key elements if necessary, and be willing to switch between different zoomed and unzoomed views, as the situation requires.
Make your own green screen
If you are presenting from behind a desk, a green screen can be a big improvement to your presentation. Simply get yourself a large, flat, solid surface, which should be smooth and unblemished, and paint it a bright shade of green.

For ultimate compatibility, also create magenta and cyan screens that can be swapped in if you need to show anything green colored in your frame.
With a green screen (or magenta, or cyan) you can use a technology called chroma key to replace the solid color with any image, including another video.
Obviously there’s not much point in making a video if nobody wants to watch it, so try to keep things interesting. Beware, however, not to be insincere or act out of character, because poor acting is worse than no acting at all.
Humor can be powerful if it is done well, and used only where it is appropriate. Likewise solemn, somber, and scandalous tones can also create interest when used appropriately.
Product videos and testimonials should be delivered enthusiastically and highlight the best features, however product reviews should be brutally honest in order to boost your credibility and win the trust of your viewers. Nothing is more valuable than trust.
Editing
Editing your video is the biggest task of all. For this, you'll need software, and that software must be a nonlinear video editor (NLE). With this you can mix and match the various clips you've shot to make a coherent narrative.

Not all editing software is equal. The best video editors are Cinelerra, Adobe Premiere Pro, Blender, and Sony Vegas Pro.
Rendering
Rendering is usually done, at least on the first pass, by the video editing software. When rendering for DVD, your goal is to get maximum video quality, regardless of the file size. Rendering for the web is a whole different thing.
The only formats worth considering are MP4 and WebM, and while the latter will give you a better file size, it is not yet supported by all browsers. It is worth keeping in mind for the future.
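If you do want to offer WebM today, you can feature-detect support at runtime and fall back to MP4. Here's a rough sketch (the file names are hypothetical) using the standard canPlayType API:
var probe = document.createElement('video');
var canWebM = probe.canPlayType('video/webm; codecs="vp8, vorbis"'); // "", "maybe" or "probably"
var canMP4  = probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

// Prefer WebM when the browser reports support, otherwise use the MP4 render
var src = canWebM ? 'clip.webm' : 'clip.mp4';
console.log('WebM: ' + (canWebM || 'no') + ', MP4: ' + (canMP4 || 'no') + ', using ' + src);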
Although your sound capture needs to be first rate, your rendered audio definitely should not be. In fact this is where most people go wrong, leaving their sound at ridiculously high fidelity when it’s not necessary. Reducing the audio quality will go a long way towards reducing file size while not noticeably affecting the outcome.

Codecs are a hotly debated topic, but the general consensus of professionals is to use the H.264 codec (or equivalent), because this will ensure maximum compatibility and a good balance between quality and file size.
Finally, consider shrinking the physical dimensions of the video if it is going to be viewed within a pre-defined space, and the user would not be expected to view it in full screen mode (doing so will work, but results in pixelation… their problem, not yours).
You can also use video transcoders such as Handbrake for your final render to fine tune the resulting file and ensure maximum compatibility. In some regions ISPs have restricted access to Handbrake downloads, but that’s just a testament to how good it is.
Captioning
Don’t under-estimate the power of captioning. Investing the time to create proper closed captions (subtitles) for your video production will be a very good investment. At the very least, allow auto-captions, but creating your own, especially if you allow a choice of languages, is always a good idea except when your video contains no speech.
Hosting
Considering how many mobile users there are and the prevalence of 3G connections, with 4G still a (slowly growing) minority, HD video is not the best idea. And since Vimeo's support for captioning is not on a par with Google's, Google is the better choice for online video hosting at present.

Notice, however, that it was Google, not YouTube, that got the mention there. For numerous reasons, YouTube is not the best way to host your video; however, there is nothing to prevent you from uploading multiple versions of your video, one hosted on a private Google account and one hosted on YouTube.
The version embedded on your site should be the version hosted on your Google account.
The one exception to the rule is if you’re producing feature content, where you are showing off your film making prowess. In this case, Vimeo may have the edge.
For low bandwidth sites (those that attract less traffic than the bandwidth they have available), you could consider hosting the video on your own server. This can provide some advantages, especially in terms of loading time.
This post Getting Ready for Web Video was written by Inspired Mag Team and first appeared on Inspired Magazine.
Source: inspiredm.com


You Just Changed My Workflow

Good products challenge people to rethink their existing workflows and consider changing them so that they might adopt the new tool.
This requires time and patience, trial and error, and can even create a lot of anxiety.

The difference between a good product and a great one is that a great one simply changes the user’s workflow magically.
And, it really does feel like magic when it works well. Instead of anxiety it can create excitement. Instead of feeling as if they're investing time they can't afford, users feel that their time is well spent.
This is the aim of any good product designer and developer. Our goal is to fundamentally transform your existing workflow(s) for the better. And, if we do our job well, then, we can be handsomely rewarded for it.
CryptoYum Development — Nov, 2017
I’m working on two current mobile (iOS) applications right now that I hope impacts the users in these positive ways. I hope that they are seen and understood as great products instead of just good products.
I believe that CryptoYum has a unique opportunity to transform a bitcoin, blockchain, and cryptocurrency enthusiast’s reading workflow permanently, becoming a daily source of inspiration and news and education for them regarding one of the most exciting technology movements of our time.
I really can’t wait to get it into our Alpha / Beta user’s hands so that they can help me refine the native workflow so that it works well with their own.
George App Development — Nov, 2017
With George I’ve built a mobile workflow around creating accountability for my own tasks and get-shit-done system that mirrors my own natural behavior and enhances it for the better.
I barely spend any time in the app itself because there are opportunities for me to engage with the app quickly without ever opening it (à la 3D Touch):
Add it, remove it… get back to work.
My personal productivity has increased and it couldn’t be better timed since having a new kiddo arrive recently.
But most important is the fact that I was more than willing to change my own personal workflow to accommodate the increased performance. It's been a joy to use and I'm glad to finally have a to-do list app that actually works and that I've actually kept around for longer than a week (or two).
The real tests have been when I’ve demo’d it for friends and unaffiliated folks during my own user-tests. When I show it to them and they reply:
Whoa… you just changed my workflow.
Then I know I’m onto something special.
We want to build products that do just that. Nothing special… just magical.
The post You Just Changed My Workflow appeared first on John Saddington.
Source: https://john.do/


Creating Vue.js Transitions & Animations

My last two projects hurled me into the JAMstack. SPAs, headless content management, static generation... you name it. More importantly, they gave me the opportunity to learn Vue.js. More than "Build a To-Do App" Vue.js, I got to ship real-life, production-ready Vue apps.
The agency behind Snipcart (Spektrum) wanted to start using decoupled JavaScript frameworks for small to medium sites. Before using them on client projects, however, they chose to experiment on themselves. After a few of my peers had unfruitful experiences with React, I was given the green light to prototype a few apps in Vue. This prototyping morphed into full-blown Vue apps for Spektrum connected to a headless CMS. First, I spent time figuring out how to model and render our data appropriately. Then I dove head first into Vue transitions to apply a much-needed layer of polish on our two projects.

I've prepared live demos on CodePen and GitHub repos to go along with this article.
This post digs into Vue.js and the tools it offers with its transition system. It is assumed that you are already comfortable with the basics of Vue.js and CSS transitions. For the sake of brevity and clarity, we won't get into the "logic" used in the demo.
Handling Vue.js Transitions & Animations

Animations & transitions can bring your site to life and entice users to explore. Animations and transitions are an integral part of UX and UI design. They are, however, easy to get wrong. In complex situations like dealing with lists, they can be nearly impossible to reason about when relying on native JavaScript and CSS. Whenever I ask backend developers why they dislike front end so vehemently, their response is usually somewhere along the lines of "... animations".
Even for those of us who are drawn to the field by an urge to create intricate micro-interactions and smooth page transitions, it's not easy work. We often need to rely on CSS for performance reasons, even while working in a mostly JavaScript environment, and that break in the environment can be difficult to manage.
This is where frameworks like Vue.js step in, taking the guess-work and clumsy chains of setTimeout functions out of transitions.
The Difference Between Transitions and Animations
The terms transition and animation are often used interchangeably but are actually different things.

A transition is a change in the style properties on an element to be transitioned in a single step. They are often handled purely through CSS.
An animation is more complex. They are usually multi-step and sometimes run continuously. Animations will often call on JavaScript to pick up where CSS' lack of logic drops off.

It can be confusing, as adding a class could be the trigger for a transition or an animation. Still, it is an important distinction when stepping into the world of Vue because both have very different approaches and toolboxes.
Here's an example of transitions in use on Spektrum's site:

Using Transitions
The simplest way to achieve transition effects on your page is through Vue's <transition> component. It makes things so simple, it almost feels like cheating. Vue will detect if any CSS animations or transitions are being used and will automatically toggle classes on the transitioned content, allowing for a perfectly timed transition system and complete control.
The first step is to identify our scope. We tell Vue to prepend the transition classes with modal, for example, by setting the component's name attribute. Then to trigger a transition, all you need to do is toggle the content's visibility using the v-if or v-show attributes. Vue will add/remove the classes accordingly.
There are two "directions" for transitions: enter (for an element going from hidden to visible) and leave (for an element going from visible to hidden). Vue then provides 3 "hooks" that represent different timeframes in the transition:

.modal-enter-active / .modal-leave-active: These will be present throughout the entire transition and should be used to apply your CSS transition declaration. You can also declare styles that need to be applied from beginning to end.
.modal-enter / .modal-leave: Use these classes to define how your element looks before it starts the transition.
.modal-enter-to / .modal-leave-to: You've probably already guessed, these determine the styles you wish to transition towards, the "complete" state.

To visualize the whole process, take a look at this chart from Vue's documentation:

How does this translate into code? Say we simply want to fade in and out, putting the pieces together would look like this:
<button class="modal__open" @click="modal = true">Help</button>

<transition name="modal">
  <section v-if="modal" class="modal">
    <button class="modal__close" @click="modal = false">&times;</button>
  </section>
</transition>
.modal-enter-active,
.modal-leave-active { transition: opacity 350ms }

.modal-enter,
.modal-leave-to { opacity: 0 }

.modal-leave,
.modal-enter-to { opacity: 1 }
This is likely the most basic implementation you will come across. Keep in mind that this transition system can also handle content changes. For example, you could react to a change in Vue's dynamic <component>.
<transition name="slide">
  <component :is="selectedView" :key="selectedView"/>
</transition>
.slide-enter { transform: translateX(100%) }
.slide-enter-to { transform: translateX(0) }
.slide-enter-active { position: absolute }

.slide-leave { transform: translateX(0) }
.slide-leave-to { transform: translateX(-100%) }

.slide-enter-active,
.slide-leave-active { transition: all 750ms ease-in-out }
Whenever the selectedView changes, the old component will slide out to the left and the new one will enter from the right!
Here's a demo that uses these concepts:
See the Pen VueJS transition & transition-group demo by Nicolas Udy (@udyux) on CodePen.
Transitions on Lists
Things get interesting when we start dealing with lists. Be it some bullet points or a grid of blog posts, Vue gives you the <transition-group> component.

It is worth noting that while the <transition> component doesn't actually render an element, <transition-group> does. The default behaviour is to use a <span> but you can override this by setting the tag attribute on the <transition-group>.
The other gotcha is that all list items need to have a unique key attribute. Vue can then keep track of each item individually and optimize its performance. In our demo, we're looping over the list of companies, each of which has a unique ID. So we can set up our list like so:
<transition-group name="company" tag="ul" class="content__list">
  <li class="company" v-for="company in list" :key="company.id">
    <!-- ... -->
  </li>
</transition-group>
The most impressive feature of transition-group is how Vue handles changes in the list's order so seamlessly. For this, an additional transition class is available, .company-move (much like the active classes for entering and leaving), which will be applied to list items that are moving about but will remain visible.
In the demo, I broke it down a bit more to show how to leverage different states to get a cleaner end result. Here's a simplified and uncluttered version of the styles:
/* base */
.company {
  backface-visibility: hidden;
  z-index: 1;
}

/* moving */
.company-move {
  transition: all 600ms ease-in-out 50ms;
}

/* appearing */
.company-enter-active {
  transition: all 300ms ease-out;
}

/* disappearing */
.company-leave-active {
  transition: all 200ms ease-in;
  position: absolute;
  z-index: 0;
}

/* appear at / disappear to */
.company-enter,
.company-leave-to {
  opacity: 0;
}
Using backface-visibility: hidden on an element, even in the absence of 3D transforms, will ensure silky 60fps transitions and avoid fuzzy text rendering during transformations by tricking the browser into leveraging hardware acceleration.
In the above snippet, I've set the base style to z-index: 1. This ensures that elements staying on the page will always appear above elements that are leaving. I also apply absolute positioning to items that are leaving to remove them from the natural flow, triggering the move transition on the rest of the items.
That's all we need! The result is, frankly, almost magic.
Using Animations
The possibilities and approaches for animation in Vue are virtually endless, so I've chosen one of my favourite techniques to showcase how you could animate your data.
We're going to use GSAP's TweenLite library to apply easing functions to our state's changes and let Vue's lightning fast reactivity reflect this on the DOM. Vue is just as comfortable working with inline SVG as it is with HTML.
We'll be creating a line graph with 5 points, evenly spaced along the X-axis, whose Y-axis will represent a percentage. You can take a look here at the result.
See the Pen SVG path animation with VueJS & TweenLite by Nicolas Udy (@udyux) on CodePen.
Let's get started with our component's logic.
new Vue({
  el: '#app',

  // this is the data-set that will be animated
  data() {
    return {
      points: { a: -1, b: -1, c: -1, d: -1, e: -1 }
    }
  },

  // this computed property builds an array of coordinates that
  // can be used as is in our path
  computed: {
    path() {
      return Object.keys(this.points)
        // we need to filter the array to remove any
        // properties TweenLite has added
        .filter(key => ~'abcde'.indexOf(key))
        // calculate X coordinate for 5 points evenly spread
        // then reverse the data-point, a higher % should
        // move up but Y coordinates increase downwards
        .map((key, i) => [i * 100, 100 - this.points[key]])
    }
  },

  methods: {
    // our randomly generated destination values
    // could be replaced by an array.unshift process
    setPoint(key) {
      let duration = this.random(3, 5)
      let destination = this.random(0, 100)
      this.animatePoint({ key, duration, destination })
    },
    // start the tween on this given object key and call setPoint
    // once complete to start over again, passing back the key
    animatePoint({ key, duration, destination }) {
      TweenLite.to(this.points, duration, {
        [key]: destination,
        ease: Sine.easeInOut,
        onComplete: this.setPoint,
        onCompleteParams: [key]
      })
    },
    random(min, max) {
      return ((Math.random() * (max - min)) + min).toFixed(2)
    }
  },

  // finally, trigger the whole process when ready
  mounted() {
    Object.keys(this.points).forEach(key => {
      this.setPoint(key)
    })
  }
});
Now for the template.
<main id="app" class="chart">
  <figure class="chart__content">
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="-20 -25 440 125">
      <path class="chart__path" :d="`M${path}`"
        fill="none" stroke="rgba(255, 255, 255, 0.3)"
        stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>

      <text v-for="([ x, y ]) in path" :x="x - 10" :y="y - 7.5"
        font-size="10" font-weight="200" fill="currentColor">
        {{ 100 - (y | 0) + '%' }}
      </text>
    </svg>
  </figure>
</main>
Notice how we bind our path computed property to the path element's d attribute. We do something similar with the text nodes that output the current value for that point. When TweenLite updates the data, Vue reacts instantly and keeps the DOM in sync.
That's really all there is to it! Of course, additional styles were applied to make things pretty, which at this point you might realize is more work than the animation itself!
Live demos (CodePen) & GitHub repo
Go ahead, browse the live demos or analyze/re-use the code in our open source repo!

The vue-animate GitHub repo
The vue-transitions GitHub repo
The Vue.js transition & transition-group demo
The SVG path animation demo

Conclusion
I've always been a fan of animations and transitions on the web, but I'm also a stickler for performance. As a result, I'm always very cautious when it comes to relying on JavaScript. However, combining Vue's blazing fast and low-cost reactivity with its ability to manage pure CSS transitions, you would really have to go overboard to have performance issues.
It's impressive that such a powerful framework can offer such a simple yet manageable API. The animation demo, including the styling, was built in only 45 minutes. And if you discount the time it took to set up the mock data used in the list-transition, it's achievable in under 2 hours. I don't even want to imagine the migraine-inducing process of building similar setups without Vue, much less how much time it would take!
Now get out there and get creative! The use cases go far beyond what we have seen in this post: the only true limitation is your imagination. Don't forget to check out the transitions and animations section in Vue.js' documentation for more information and inspiration.

This post originally appeared on Snipcart's blog. Got comments, questions? Add them below!

Creating Vue.js Transitions & Animations is a post from CSS-Tricks
Source: CssTricks


Exploring Data with Serverless and Vue: Filtering and Using the Data

In this second article of this tutorial, we'll take the data we got from our serverless function and use Vue and Vuex to disseminate the data, update our table, and modify the data to use in our WebGL globe. This article assumes some base knowledge of Vue. By far the coolest/most useful thing we'll address in this article is the use of the computed properties in Vue.js to create the performant filtering of the table. Read on!

Article Series:

Automatically Update GitHub Files With Serverless Functions
Filtering and Using the Data (you are here!)

You can check out the live demo here, or explore the code on GitHub.
First, we'll spin up an entire Vue app with server-side rendering, routing, and code-splitting with a tool called Nuxt. (This is similar to Zeit's Next.js for React). If you don't already have the Vue CLI tool installed, run
npm install -g vue-cli
# or
yarn global add vue-cli
This installs the Vue CLI globally so that we can use it whenever we wish. Then we'll run:
vue init nuxt/starter my-project
cd my-project
yarn
That creates this application in particular. Now we can kick off our local dev server with:
npm run dev
If you're not already familiar with Vuex, it's similar to React's Redux. There's more in depth information on what it is and does in this article here.
import Vuex from 'vuex';
import speakerData from './../assets/cda-data.json';

const createStore = () => {
  return new Vuex.Store({
    state: {
      speakingColumns: ['Name', 'Conference', 'From', 'To', 'Location'],
      speakerData
    }
  });
};

export default createStore;
Here, we're pulling the speaker data from our `cda.json` file that has now been updated with latitude and longitude from our Serverless function. As we import it, we're going to store it in our state so that we have application-wide access to it. You may also notice that now that we've updated the JSON with our Serverless function, the columns no longer correspond to what we want to use in our table. That's fine! We'll also store only the columns we need to create the table.
Now in the pages directory of our app, we'll have an `Index.vue` file. If we wanted more pages, we would merely need to add them to this directory. We're going to use this index page for now and use a couple of components in our template.
<template>
  <section>
    <h1>Cloud Developer Advocate Speaking</h1>
    <h3>Microsoft Azure</h3>
    <div class="tablecontain">
      ...
      <speaking-table></speaking-table>
    </div>
    <more-info></more-info>
    <speaking-globe></speaking-globe>
  </section>
</template>
We're going to bring all of our data in from the Vuex store, and we'll use a computed property for this. We'll also create a way to filter that data in a computed property here as well. We'll end up passing that filtered property to both the speaking table and the speaking globe.
computed: {
  speakerData() {
    return this.$store.state.speakerData;
  },
  columns() {
    return this.$store.state.speakingColumns;
  },
  filteredData() {
    const x = this.selectedFilter,
          filter = new RegExp(this.filteredText, 'i')
    return this.speakerData.filter(el => {
      if (el[x] !== undefined) { return el[x].match(filter) }
      else return true;
    })
  }
}
}</script>
You'll note that we're using the names of the computed properties, even in other computed properties, the same way that we use data, i.e., speakerData() becomes this.speakerData in the filter. It would also be available to us as {{ speakerData }} in our template and so forth. This is how they are used. Quickly sorting and filtering a lot of data in a table based on user input is definitely a job for computed properties. In this filter, we'll also check and make sure we're not throwing things out for case-sensitivity, or trying to match up a row that's undefined, as our data sometimes has holes in it.
Here's an important part to understand, because computed properties in Vue are incredibly useful. They are calculations that will be cached based on their dependencies and will only update when needed. This means they're extremely performant when used well. Computed properties aren't used like methods, though at first they might look similar: we register them in a similar way, typically with some accompanying logic, but they're actually used more like data. You can consider them another view into your data.
Computed values are very valuable for manipulating data that already exists. Anytime you're building something where you need to sort through a large group of data, and you don't want to rerun those calculations on every keystroke, think about using a computed value. Another good candidate would be when you're getting information from your Vuex store. You'd be able to gather that data and cache it.
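To make the caching point concrete, here's a minimal sketch (a hypothetical component, not from the demo) contrasting a computed property with a method that does the same work. The computed result is cached until query or rows changes; the method re-runs every time it's called:
new Vue({
  el: '#example',
  data: {
    query: '',
    rows: ['Toronto', 'Redmond', 'Austin']
  },
  computed: {
    // cached: recalculates only when `query` or `rows` changes
    matchingRows() {
      const filter = new RegExp(this.query, 'i');
      return this.rows.filter(row => filter.test(row));
    }
  },
  methods: {
    // not cached: runs on every call, e.g. on every re-render
    matchingRowsAsMethod() {
      const filter = new RegExp(this.query, 'i');
      return this.rows.filter(row => filter.test(row));
    }
  }
});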
Creating the inputs
Now, we want to allow the user to pick which type of data they are going to filter. In order to use that computed property to filter based on user input, we can create a value as an empty string in our data, and use v-model to establish a relationship between what is typed in this search box and the data we want filtered in that filteredData function from earlier. We'd also like them to be able to pick a category to narrow down their search. In our case, we already have access to these categories; they are the same as the columns we used for the table. So we can create a select with a corresponding label:
<label for="filterLabel">Filter By</label>
<select id="filterLabel" name="select" v-model="selectedFilter">
  <option v-for="column in columns" :key="column" :value="column">
    {{ column }}
  </option>
</select>
We'll also wrap that extra filter input in a v-if directive, because it should only be available to the user if they have already selected a column:
<span v-if="selectedFilter">
  <label for="filteredText" class="hidden">{{ selectedFilter }}</label>
  <input id="filteredText" type="text" name="textfield" v-model="filteredText">
</span>
Creating the table
Now, we'll pass the filtered data down to the speaking table and speaking globe:
<speaking-globe :filteredData="filteredData"></speaking-globe>
Which makes it available for us to update our table very quickly. We can also make good use of directives to keep our table small, declarative, and legible.
<table class="scroll">
  <thead>
    <tr>
      <th v-for="key in columns">
        {{ key }}
      </th>
    </tr>
  </thead>
  <tbody>
    <tr v-for="(post, i) in filteredData">
      <td v-for="entry in columns">
        <a :href="post.Link" target="_blank">
          {{ post[entry] }}
        </a>
      </td>
    </tr>
  </tbody>
</table>
Since we're using that computed property we passed down that's being updated from the input, it will take this other view of the data and use that instead, and will only update if the data is somehow changed, which will be pretty rare.
And now we have a performant way to scan through a lot of data on a table with Vue. The directives and computed properties are the heroes here, making it very easy to write this declaratively.

I love how fast it filters the information with very little effort on our part. Computed properties leverage Vue's ability to cache wonderfully.
Creating the Globe Visualization
As mentioned previously, I'm using a library from Google dataarts for the globe, found in this repo.
The globe is beautiful out of the box but we need two things in order to work with it: we need to modify our data to create the JSON that the globe expects, and we need to know enough about three.js to update its appearance and make it work in Vue.
It's an older repo, so it's not available to install as an npm module, which is actually just fine in our case, because we're going to manipulate the way it looks a bit because I'm a control freak (ahem, I mean, we'd like to play with it to make it our own).
Dumping all of this repo's contents into a method isn't that clean though, so I'm going to make use of a mixin. The mixin allows us to do two things: it keeps our code modular so that we're not scanning through a giant file, and it allows us to reuse this globe if we ever wanted to put it on another page in our app.
I register the globe like this:
import * as THREE from 'three';
import { createGlobe } from './../mixins/createGlobe';

export default {
  mixins: [createGlobe]
}
and create a separate file in a directory called mixins (in case I'd like to make more mixins) named `createGlobe.js`. For more information on mixins and how they work and what they do, check out this other article I wrote on how to work with them.
Modifying the data
If you recall from the first article, in order to create the globe, we need to feed it values that look like this:
var data = [
  [
    'seriesA', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];
So far, the filteredData computed value we're returning from our store will give us our latitude and longitude for each entry, because we got that information from our computed property. For now we just want one view of that dataset, just my team's data, but in the future we might want to collect information from other teams as well so we should build it out to add new values fairly easily.
Let's make another computed value that returns the data the way that we need it. We're going to make it as an object first because that will be more efficient while we're building it, and then we'll create an array.
teamArr() {
  //create it as an object first because that's more efficient than an array
  var endUnit = {};
  //our logic to build the data will go here

  //we'll turn it into an array here
  let x = Object.entries(endUnit);
  let area = [],
      places,
      all;

  for (let i = 0; i < x.length; i++) {
    [all, places] = x[i];
    area.push([all, [].concat(...Object.values(places))]);
  }
  return area;
}
In the object we just created, we'll see if our values exist already, and if not, we'll create a new one. We'll also have to create a key from the latitude and longitude put together so that we can check for repeat instances. This is particularly helpful because I don't know if my teammates will put the location in as just the city or the city and the state. The Google Maps API is pretty forgiving in this way: it'll be able to find one consistent location for either string.
We'll also decide what the smallest and incremental value of the magnification will be. Our decision for the magnification will mainly come from trial and error, adjusting this value and seeing what fits in a way that makes sense for the viewer. My first try here produced long, stringy, wobbly poles that looked like a balding, broken porcupine; it took a minute or so to find a value that worked.
this.speakerData.forEach(function(index) {
  let lat = index.Latitude,
      long = index.Longitude,
      key = lat + ", " + long,
      magBase = 0.1,
      val = 'Microsoft CDAs';

  //if either the latitude or longitude is missing, skip it
  if (lat === undefined || long === undefined) return;

  //because the pins are grouped together by magnitude, as we build out the data, we need to check if one exists or increment the value
  if (val in endUnit) {

    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      //we'll increase the magnification here
    }
  } else {
    //we'll create the new values here
  }

})
Now, we'll check if the location already exists, and if it does, we'll increment it. If not, we'll create new values for them.
this.speakerData.forEach(function(index) {
  ...

  if (val in endUnit) {
    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      endUnit[val][key][2] += magBase;
    } else {
      endUnit[val][key] = [lat, long, magBase];
    }
  } else {
    let y = {};
    y[key] = [lat, long, magBase];
    endUnit[val] = y;
  }

})
Make it look interesting
I mentioned earlier that part of the reason we'd want to store the base dataarts JavaScript in a mixin is that we'd want to make some modifications to its appearance. Let's talk about that for a minute as well because it's an aspect of any interesting data visualization.
If you don't know very much about working with three.js, it's a library that's pretty well documented and has a lot of examples to work off of. The real breakthrough in my understanding of what it was and how to work with it didn't really come from either of those sources, though. I got a lot out of Rachel Smith's series on CodePen and Chris Gammon's (not to be confused with Chris Gannon) excellent YouTube series. If you don't know much about three.js and would like to use it for 3D data visualization, my suggestion is to start there.
The first thing we'll do is adjust the colors of the pins on the globe. The ones out of the box are beautiful, but they don't fit the style of our page, or the magnification we need for this data. The code to update is on line 11 of our mixin:
const colorFn = opts.colorFn || function(x) {
  let c = new THREE.Color();
  c.setHSL(0.1 - x * 0.19, 1.0, 0.6);
  return c;
};
If you're not familiar with it, HSL is a wonderfully human-readable color format, which makes it easy to update the colors of our pins on a range:

H stands for hue, which is given to us as a circle. This is great for generative projects like this because unlike a lot of other color formats, it will never fail. 20 degrees will give us the same value as 380 degrees, and so on. The x that we pass in here has a relationship with our magnification, so we'll want to figure out where that range begins, and what it will increase by.
The second value will be Saturation, which we'll pump up to full blast here so that it will stand out; on a range from 0 to 1, 1.0 is the highest.
The third value is Lightness. Like Saturation, we'll get a value from 0 to 1, and we'll use this halfway at 0.5.
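To make that relationship concrete, here's a hypothetical helper (not part of the mixin) that normalizes a magnitude against a known maximum and feeds it into the same HSL range used above:
function magnitudeToColor(magnitude, maxMagnitude) {
  const x = magnitude / maxMagnitude;     // normalize to a 0-1 range
  const c = new THREE.Color();
  c.setHSL(0.1 - x * 0.19, 1.0, 0.6);     // same hue range as the colorFn above
  return c;
}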

You can see that if I made just a slight modification to that one line of code, changing it to c.setHSL(0.6 - x * 0.7, 1.0, 0.4), it would change the color range dramatically.

We'll also make some other fine-tuned adjustments: the globe will be a sphere, but it will use an image for the texture. If we wanted to change that shape to an icosahedron or even a torus knot, we could do so; we'd only need to change one line of code here:
//from
const geometry = new THREE.SphereGeometry(200, 40, 30);
//to
const geometry = new THREE.IcosahedronGeometry(200, 0);
and we'd get something like this; you can see that the texture will still map to this new shape:

Strange and cool, and maybe not useful in this instance, but it's really nice that creating a three-dimensional shape is so easy to update with three.js. Custom shapes get a bit more complex, though.
We load that texture differently in Vue than the library would: we'll need to get it as the component is mounted and load it in, passing it in as a parameter when we instantiate the globe. You'll notice that we don't have to create a relative path to the assets folder because Nuxt and Webpack will do that for us behind the scenes. We can easily use static image files this way.
mounted() {
  let earthmap = THREE.ImageUtils.loadTexture('https://cdn.css-tricks.com/world4.jpg');
  this.initGlobe(earthmap);
}
We'll then apply that texture we passed in here, when we create the material:
uniforms = THREE.UniformsUtils.clone(shader.uniforms);
uniforms['texture'].value = imageLoad;

material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: shader.vertexShader,
  fragmentShader: shader.fragmentShader
});
There are so many ways we could work with this data and change the way it outputs: we could adjust the white bands around the globe, we could change the shape of the globe with one line of code, we could surround it in particles. The sky's the limit!

And there we have it! We're using a serverless function to interact with the Google Maps API, we're using Nuxt to create the application with Server Side Rendering, we're using computed values in Vue to make that table slick, declarative and performant. Working with all of these technologies can yield really fun exploratory ways to look at data.

Article Series:

Automatically Update GitHub Files With Serverless Functions
Filtering and Using the Data (you are here!)

Exploring Data with Serverless and Vue: Filtering and Using the Data is a post from CSS-Tricks
Source: CssTricks


Creating Your First WebVR App using React and A-Frame

Today, we'll be running through a short tutorial on creating our own WebVR application using A-Frame and React. We'll cover the setup process, build out a basic 3D scene, and add interactivity and animation. A-Frame has an excellent third-party component registry, so we will be using some of those in addition to writing one from scratch. In the end, we'll go through the deployment process through surge.sh so that you can share your app with the world and test it out live on your smartphone (or Google Cardboard if you have one available). For reference, the final code is in this repo. Over the course of this tutorial, we will be building a scene like this. Check out the live demo as well.

Exciting, right? Without further ado, let's get started!

What is A-Frame?

A-Frame is a framework for building rich 3D experiences on the web. It's built on top of three.js, an advanced 3D JavaScript library that makes working with WebGL extremely fun. The cool part is that A-Frame lets you build WebVR apps without writing a single line of JavaScript (to some extent). You can create a basic scene in a few minutes writing just a few lines of HTML. It provides an excellent HTML API for you to scaffold out the scene, while still giving you full flexibility by letting you access the rich three.js API that powers it. In my opinion, A-Frame strikes an excellent balance of abstraction this way. The documentation is an excellent place to learn more about it in detail.

Setup

The first thing we're going to be doing is setting up A-Frame and React. I've already gone ahead and done that for you so you can simply clone this repo, cd into it, and run yarn install to get all the required dependencies. For this app, we're actually going to be using Preact, a fast and lightweight alternative to React, in order to reduce our bundle size. Don't worry, it's still the same API, so if you've worked with React before you shouldn't notice any differences. Go ahead and run yarn start to fire up the development server. Hit up http://localhost:3333 and you should be presented with a basic scene including a spinning cube and some text. I highly suggest that you spend some time going through the README in that repo. It has some essential information about A-Frame and React. It also goes more into detail on what and how to install everything. Now on to the fun stuff.

Building Blocks

Fire up the editor on the root of the project directory and inspect the file app/main.js (or view it on GitHub), that's where we'll be building out our scene. Let's take a second to break this down.

The Scene component is the root node of an A-Frame app. It's what creates the stage for you to place 3D objects in, initializes the camera, the WebGL renderer and handles other boilerplate. It should be the outermost element wrapping everything else inside it. You can think of Entity like an HTML div. Entities are the basic building blocks of an A-Frame Scene. Every object inside the A-Frame scene is an Entity.

A-Frame is built on the Entity-component-system (ECS) architecture, a very common pattern utilized in 3D and game development, most notably popularized by Unity, a powerful game engine. What ECS means in the context of an A-Frame app is that we create a bunch of Entities that quite literally do nothing, and attach components to them to describe their behavior and appearance. Because we're using React, this means that we'll be passing props into our Entity to tell it what to render. For example, passing in a-box as the value of the prop primitive will render a box for us. Same goes for a-sphere, or a-cylinder. Then we can pass in other values for attributes like position, rotation, material, height, etc. Basically, anything listed in the A-Frame documentation is fair game. I hope you see how powerful this really is. You're grabbing just the bits of functionality you need and attaching them to Entities. It gives us maximum flexibility and reusability of code, and is very easy to reason about. This is called composition over inheritance.
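To illustrate the idea, here's a small, hypothetical Entity (the values are made up) built the way aframe-react lets us compose components through props:

<Entity
  primitive="a-box"
  position={{ x: 0, y: 1, z: -3 }}
  rotation={{ x: 0, y: 45, z: 0 }}
  material={{ color: '#9564F2', roughness: 0.6 }}
/>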

But, Why React?

Sooooo, all we need is markup and a few scripts. What's the point of using React, anyway? Well, if you wanted to attach state to these objects, then manually doing it would be a lot of hard work. A-Frame handles almost all of its rendering through the use of HTML attributes (or components as mentioned above), and updating different attributes of many objects in your scene manually can be a massive headache. Since React is excellent at binding state to markup, diffing it for you, and re-rendering, we'll be taking advantage of that. Keep in mind that we won't be handling any WebGL render calls or manipulating the animation loop with React. A-Frame has a built in animation engine that handles that for us. We just need to pass in the appropriate props and let it do the hard work for us. See how this is pretty much like creating your ordinary React app, except the result is WebGL instead of raw markup? Well, technically, it is still markup. But A-Frame converts that to WebGL for us. Enough with the talking, let's write some code.

Setting Up the Scene

The first thing we should do is to establish an environment. Let's start with a blank slate. Delete everything inside the Scene element. For the sake of making things look interesting right away, we'll utilize a third-party component called aframe-environment to generate a nice environment for us. Third-party components pack a lot of WebGL code inside them, but expose a very simple interface in the markup. It's already been imported in the app/initialize.js file, so all we need to do is attach it to the Scene element. I've already configured some nice defaults for us to get started, but feel free to modify to your taste. As an aside, you can press CTRL + ALT + I to load up the A-Frame Scene Inspector and change parameters in real-time. I find this super handy in the initial stage when designing the app. Our file should now look something like:

import { h, Component } from 'preact'
import { Entity, Scene } from 'aframe-react'

// Color palette to use for later
const COLORS = ['#D92B6A', '#9564F2', '#FFCF59']

class App extends Component {
  constructor() {
    super()

    // We'll use this state later on in the tutorial
    this.state = {
      colorIndex: 0,
      spherePosition: { x: 0.0, y: 4, z: -10.0 }
    }
  }

  render() {
    return (
      <Scene
        environment={{
          preset: 'starry',
          seed: 2,
          lightPosition: { x: 0.0, y: 0.03, z: -0.5 },
          fog: 0.8,
          ground: 'canyon',
          groundYScale: 6.31,
          groundTexture: 'walkernoise',
          groundColor: '#8a7f8a',
          grid: 'none'
        }}
      >
      </Scene>
    )
  }
}

Was that too easy? That's the power of A-Frame components. Don't worry. We'll dive into writing some of our own stuff from scratch later on. We might as well take care of the camera and the cursor here. Let's define another Entity inside the Scene tags. This time, we'll pass in different primitives (a-camera and a-cursor).

<Entity primitive="a-camera" look-controls>
  <Entity
    primitive="a-cursor"
    cursor={{ fuse: false }}
    material={{ color: 'white', shader: 'flat', opacity: 0.75 }}
    geometry={{ radiusInner: 0.005, radiusOuter: 0.007 }}
  />
</Entity>

See how readable and user-friendly this is? It's practically English. You can look up every single prop here in the A-Frame docs. Instead of string attributes, I'm passing in objects.

Populating the Environment

Now that we've got this sweet scene set up, we can populate it with objects. They can be basic 3D geometry objects like cubes, spheres, cylinders, octahedrons, or even custom 3D models. For the sake of simplicity, we'll use the defaults provided by A-Frame, and then write our own component and attach it to the default object to customize it. Let's build a low poly count sphere because they look cool. We'll define another entity and pass in our attributes to make it look the way we want. We'll be using the a-octahedron primitive for this. This snippet of code will live in-between the Scene tags as well.

<Entity
  primitive="a-octahedron"
  detail={2}
  radius={2}
  position={this.state.spherePosition}
  color="#FAFAF1"
/>

You may just be seeing a dark sphere now. We need some lighting. Let there be light:

<Entity
  primitive="a-light"
  type="directional"
  color="#FFF"
  intensity={1}
  position={{ x: 2.5, y: 0.0, z: 0.0 }}
/>

This adds a directional light, which casts parallel rays from a given direction, as if the source were very far away (like the sun). You can also try using ambient or point lights, but in this situation, I prefer a directional light to emulate light coming from the sun's direction.
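For reference, the ambient and point lights mentioned above can be declared the same way. The values below are just examples, not part of our final scene:

{/* Ambient light: no position, softly illuminates everything equally */}
<Entity primitive="a-light" type="ambient" color="#BBB" />

{/* Point light: radiates in all directions from its position, like a bare bulb */}
<Entity primitive="a-light" type="point" intensity={0.75} position={{ x: 0, y: 10, z: 5 }} />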

Building Your First A-Frame Component

Baby steps. We now have a 3D object and an environment that we can walk/look around in. Now let's take it up a notch and build our own custom A-Frame component from scratch. This component will alter the appearance of our object, and also attach interactive behavior to it. Our component will take the provided shape, and create a slightly bigger wireframe of the same shape on top of it. That'll give it a really neat geometric, meshy (is that even a word?) look. To do that, we'll define our component in the existing js file app/components/aframe-custom.js.

First, we'll register the component using the global AFRAME reference, define our schema for the component, and add our three.js code inside the init function. You can think of schema as arguments, or properties that can be passed to the component. We'll be passing in a few options like color, opacity, and other visual properties. The init function will run as soon as the component gets attached to the Entity. The template for our A-Frame component looks like:

AFRAME.registerComponent('lowpoly', {
  schema: {
    // Here we define our properties, their types and default values
    color: { type: 'string', default: '#FFF' },
    nodes: { type: 'boolean', default: false },
    opacity: { type: 'number', default: 1.0 },
    wireframe: { type: 'boolean', default: false }
  },

  init: function() {
    // This block gets executed when the component gets initialized.
    // Then we can use our properties like so:
    console.log('The color of our component is ', this.data.color)
  }
})

Let's fill the init function in. First things first, we change the color of the object right away. Then we attach a new shape which becomes the wireframe. In order to create any 3D object programmatically in WebGL, we first need to define a geometry, a mathematical formula that defines the vertices and the faces of our object. Then, we need to define a material, a pixel by pixel map which defines the appearance of the object (color, light reflection, texture). We can then compose a mesh by combining the two.

We then need to position it correctly, and attach it to the scene. Don't worry if this code looks a little verbose, I've added some comments to guide you through it.

init: function() {
  // Get the ref of the object to which the component is attached
  const obj = this.el.getObject3D('mesh')

  // Grab the reference to the main WebGL scene
  const scene = document.querySelector('a-scene').object3D

  // Modify the color of the material
  obj.material = new THREE.MeshPhongMaterial({
    color: this.data.color,
    shading: THREE.FlatShading
  })

  // Define the geometry for the outer wireframe
  const frameGeom = new THREE.OctahedronGeometry(2.5, 2)

  // Define the material for it
  const frameMat = new THREE.MeshPhongMaterial({
    color: '#FFFFFF',
    opacity: this.data.opacity,
    transparent: true,
    wireframe: true
  })

  // The final mesh is a composition of the geometry and the material
  const icosFrame = new THREE.Mesh(frameGeom, frameMat)

  // Set the position of the mesh to the position of the sphere
  const { x, y, z } = obj.position
  icosFrame.position.set(0.0, 4, -10.0)

  // If the wireframe prop is set to true, then we attach the new object
  if (this.data.wireframe) {
    scene.add(icosFrame)
  }

  // If the nodes attribute is set to true
  if (this.data.nodes) {
    let spheres = new THREE.Group()
    let vertices = icosFrame.geometry.vertices

    // Traverse the vertices of the wireframe and attach small spheres
    for (var i in vertices) {
      // Create a basic sphere
      let geometry = new THREE.SphereGeometry(0.045, 16, 16)
      let material = new THREE.MeshBasicMaterial({
        color: '#FFFFFF',
        opacity: this.data.opacity,
        shading: THREE.FlatShading,
        transparent: true
      })

      let sphere = new THREE.Mesh(geometry, material)
      // Reposition them correctly
      sphere.position.set(
        vertices[i].x,
        vertices[i].y + 4,
        vertices[i].z + -10.0
      )

      spheres.add(sphere)
    }
    scene.add(spheres)
  }
}

Let's go back to the markup to reflect the changes we've made to the component. We'll add a lowpoly prop to our Entity and give it an object of the parameters we defined in our schema. It should now look like:

<Entity
  lowpoly={{
    color: '#D92B6A',
    nodes: true,
    opacity: 0.15,
    wireframe: true
  }}
  primitive="a-octahedron"
  detail={2}
  radius={2}
  position={{ x: 0.0, y: 4, z: -10.0 }}
  color="#FAFAF1"
/>

Adding Interactivity

We have our scene, and we've placed our objects. They look the way we want. Now what? This is still very static. Let's add some user input by changing the color of the sphere every time it gets clicked on.

A-Frame comes with a fully functional raycaster out of the box. Raycasting gives us the ability to detect when an object is 'gazed at' or 'clicked on' with our cursor, and to execute code based on those events. Although the math behind it is fascinating, we don't have to worry about how it's implemented. Just know what it is and how to use it. To add a raycaster, we provide the raycaster prop to the camera with the class of objects which we want to be clickable. Our camera node should now look like:

<Entity primitive="a-camera" look-controls>
<Entity
primitive="a-cursor"
cursor={{ fuse: false }}
material={{ color: 'white', shader: 'flat', opacity: 0.75 }}
geometry={{ radiusInner: 0.005, radiusOuter: 0.007 }}
event-set__1={{
_event: 'mouseenter',
scale: { x: 1.4, y: 1.4, z: 1.4 }
}}
event-set__1={{
_event: 'mouseleave',
scale: { x: 1, y: 1, z: 1 }
}}
raycaster="objects: .clickable"
/>
</Entity>

We've also added some feedback by scaling the cursor when it enters and leaves an object targeted by the raycaster. We're using the aframe-event-set-component to make this happen. It lets us define events and their effects accordingly. Now go back and add a class="clickable" prop to the 3D sphere Entity we created a bit ago. While you're at it, attach an event handler so we can respond to clicks accordingly.

<Entity
  class="clickable"
  // ... all the other props we've already added before
  events={{
    click: this._handleClick.bind(this)
  }}
/>

Now let's define this _handleClick function. Outside of the render call, define it and use setState to change the color index. We're just cycling through the numbers 0 to 2 on every click.

_handleClick() {
  this.setState({
    colorIndex: (this.state.colorIndex + 1) % COLORS.length
  })
}

Great, now we're changing the state of the app. Let's hook that up to the A-Frame object. Use the colorIndex variable to cycle through a globally defined array of colors. I've already added that for you, so you just need to change the color prop of the sphere Entity we created. Like so:

<Entity
  class="clickable"
  lowpoly={{
    color: COLORS[this.state.colorIndex],
    // The rest stays the same
  }}
/>

One last thing: we need to modify the component to swap the color property of the material, since we pass it a new one when clicked. Underneath the init function, define an update function, which gets invoked whenever a prop of the component gets modified. Inside the update function, we simply swap out the color of the material like so:

AFRAME.registerComponent('lowpoly', {
  schema: {
    // We've already filled this out
  },

  init: function() {
    // We've already filled this out
  },

  update: function() {
    // Get the ref of the object to which the component is attached
    const obj = this.el.getObject3D('mesh')

    // Modify the color of the material during runtime
    obj.material.color = new THREE.Color(this.data.color)
  }
})

You should now be able to click on the sphere and cycle through colors.

Animating Objects

Let's add a little bit of movement to the scene. We can use the aframe-animation-component to make that happen. It's already been imported so let's add that functionality to our low poly sphere. To the same Entity, add another prop named animation__rotate. That's just a name we give it, you can call it whatever you want. The inner properties we pass are what's important. In this case, it rotates the sphere by 360 degrees on the Y axis. Feel free to play with the duration and property parameters.

<Entity
  class="clickable"
  lowpoly
  // A whole buncha props that we wrote already...
  animation__rotate={{
    property: 'rotation',
    dur: 60000,
    easing: 'linear',
    loop: true,
    to: { x: 0, y: 360, z: 0 }
  }}
/>

To make this a little more interesting, let's add another animation prop to oscillate the sphere up and down ever so slightly.

animation__oscillate={{
  property: 'position',
  dur: 2000,
  dir: 'alternate',
  easing: 'linear',
  loop: true,
  from: this.state.spherePosition,
  to: {
    x: this.state.spherePosition.x,
    y: this.state.spherePosition.y + 0.25,
    z: this.state.spherePosition.z
  }
}}

Polishing Up

We're almost there! Post-processing effects in WebGL are extremely fast and can add a lot of character to your scene. There are many shaders available for use depending on the aesthetic you're going for. If you want to add post-processing effects to your scene, you can utilize the additional shaders provided by three.js to do so. Some of my favorites are bloom, blur, and noise shaders. Let's run through that very briefly here.

Post-processing effects operate on your scene as a whole. Think of it as a bitmap that's rendered every frame. This is called the framebuffer. The effects take this image, process it, and output it back to the renderer. The aframe-effects-component has already been imported for your convenience, so let's throw the props at our Scene tag. We'll be using a mix of bloom, film, and FXAA to give our final scene a touch of personality:

<Scene
  effects="bloom, film, fxaa"
  bloom="radius: 0.99"
  film="sIntensity: 0.15; nIntensity: 0.15"
  fxaa
  // Everything else that was already there
/>

Boom, we're done. There's an obscene amount of shader math going on behind the scenes (pun intended), but you don't need to know any of it. That's the beauty of abstraction. If you're curious, you can always dig into the source files and look at the shader wizardry that's happening back there. It's a world of its own. We're pretty much done here. Onto the final step...

Deployment

It's time to deploy. The final step is letting it live on someone else's server and not your dev server. We'll use the super awesome tool called surge to make this painfully easy. First, we need a production build of our app. Run yarn build. It will output the final build to the public/ directory. Install surge by running npm install -g surge. Now run surge public/ to push the contents of that directory live. It should prompt you to log in or create an account, and you'll have the choice to change your domain name. The rest should be very straightforward, and you will get a URL of your deployed site at the end. That's it. I've hosted mine at http://eventide.surge.sh.
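In short, the whole deployment boils down to three commands (the global surge install is a one-time step):

yarn build            # outputs a production build to the public/ directory
npm install -g surge  # installs surge globally (only needed once)
surge public/         # pushes the contents of public/ live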

Fin

I hope you enjoyed this tutorial and that you see the power of A-Frame and its capabilities. By combining third-party components and cooking up our own, we can create some neat 3D scenes with relative ease. Extending all this with React, we're able to manage state efficiently and go crazy with dynamic props. We've only scratched the surface, and now it's up to you to explore the rest. As 2D content fails to meet the rising demand for immersive content on the web, tools like A-Frame and three.js have come into the limelight. The future of WebVR is looking bright. Go forth and unleash your creativity, for the browser is an empty 3D canvas and code is your brush. If you end up making something cool, feel free to tweet it at @_prayash and @aframevr so everyone else can see it too.

Additional Resources

Check out these additional resources to advance your knowledge of A-Frame and WebVR.

Crash Course: VR Design for N00bs for tips on designing for VR.
A-Frame School for more A-Frame knowledge.
A Week of A-Frame for inspiration.
A-Frame Slack for the community.
A-Frame Stack Overflow for common problems that you will run into.
Awesome A-Frame for a general hub for anything A-Frame.
Three.js 101 for an awesome intro to Three.js.


Source: VigetInspire


Tips In Choosing Website Color Schemes (With BONUS Online Tools)

When you create a website, one of the first things that you need to focus on is web design. Aside from picking the right layout for your pages, your choice of color schemes can make or break the whole package.
A color scheme is about considering the interplay of colors in three major aspects: complementation, contrast, and vibrancy. Choosing the right colors is one of the most difficult phases of web design, and the process can be very challenging, especially for those who are new to the field. You should not worry too much, though, because there are online tools available to help you select the perfect color scheme for your page.
What is the importance of colors in websites?

Here are some of the reasons why choosing the right color schemes is extremely important:
1. Creates an emotional connection
Colors generally trigger moods or emotions in your target audience or market. Therefore, whatever color scheme you choose makes all the difference.
2. Sets your company’s direction
A good website design is grounded on the use of the right colors. For example, picking too many colors for your site might create an image that your company is too informal and might not be taken seriously by your target market. Meanwhile, using conventional color schemes might make your website too forgettable like the others.
3. Establishes branding
Your website is an effective representation of your company, and the colors you choose for it create your company or brand identity. It goes without saying that websites play a major role in online marketing and branding, and colors put life into whatever information you present.
When choosing a color scheme for your website, always keep in mind that the right one leads to a strong brand, especially when a certain color becomes associated with your company.
4. Creates a visual statement
Words are powerful, but colors make your catchy phrases livelier as they emphasize words and statements with the right tones and hues.
Colors are not only a requirement in web design. They also serve as the soul of your website: they create a mark in people's minds that will later set your company apart from all the others offering similar services or products.
Colors create character and personality, two of the most important factors in branding. Web design therefore puts stress on the importance of choosing the right color scheme, especially considering how competitive the game of online marketing is.
If you want to leave a mark on your market and audience, make sure that your goal is supported with meaningful colors that represent who your company is.
Factors in Choosing Website Colors

Selecting a color scheme is not considered one of the pillars of web design for nothing. Colors are carefully and intricately selected depending on the need, style, and image being conveyed.
Here are the common factors that website designers consider in choosing between color schemes:
Demographics and the product you are selling
The demographics of your target audience play a major role in deciding what kind of message you wish to convey through your color scheme.
For instance, say your website sells organic products and your target audience is health-conscious people. The best color scheme for this product line revolves around green and other earth colors, to convey the message that you support your clients' advocacy or their goal of living a healthy lifestyle.
In the example above, it would be inappropriate to use shades of black and gray, or red and yellow, on this type of website. That would likely drive your target market away, and might even give them the impression that you do not understand the products you are selling in the first place.
Gender
It is always helpful to base your color scheme on the gender of your target market. If your company sells cosmetic products or clothes for women, there are certain colors and shades that will easily draw their attention to your website. Take the time and extra effort to research the colors that men and women typically dislike, because their initial impression of these things matters a lot.
Kissmetrics released an infographic showing some of the color preferences by gender:

While blue is the most preferred color in both genders, men like bright colors while women opt for soft or pastel colors.
Men like black, white, and shades of gray more than women.
Men like brown the least, while women generally don’t prefer orange.

Age group
Similar to gender, age groups have varying tastes in color. There are studies showing that a person's taste in color changes with age. Website designers should also pay attention to this and consider doing their research, especially if the websites they are designing cater to a specific age group.
How long the website will be used
Choosing from a wide array of color schemes should always take into account how long the website will be used. Will it be for a specific season, or is it a long-term project? Seasonal usage will require color schemes that speak to the events being celebrated, say orange and black for a Halloween-themed page.
Your company profile
Your website is for the consumption of your target audience, and it is just right to consider the background of your clients. However, you also need to consider the profile of your company. Once you get to know the objectives of your company through the products or services you are selling, then it will be easier for you to understand how you can build a connection with your market.
How to Choose the Right Color Schemes

After you have put into consideration the factors for the possible color schemes that you will be using, make sure that you create a shortlist of the ones that could possibly work for your company’s website.
Here are three things you should do to ease your selection process:
1. Decide what dominant color you will use.
Your dominant color is your company or brand color. This is what makes a mark among your clients. This is where the factors for choosing website colors (e.g. age group, gender, company profile) become crucial, because your dominant color is what makes your company stand out and creates the first impression.
In choosing your dominant color, you need to know that different colors and shades have their own meanings. Before choosing which one to use, make sure that you strategically pick the best one that will effectively represent your brand.
2. Pick the accent colors that will blend and go well with your dominant color.
Website design becomes equally exciting and challenging when you are already in the process of picking the colors that will go well with the dominant color you have chosen to use. Of course, it will be really dull to stick to just one color all throughout. Accent colors will solve that.
You may use your accent colors for your tabs, subheadings, or information boxes, depending on which ones you want to highlight further. Using accent colors is a fun way of making your page livelier, but do not overdo it. Choose one or two accent colors per page to avoid confusion.
3. Choose a good background color.
One of the most challenging tasks in choosing the right color scheme for your website is picking the background color to use. Before choosing a background color, know first the purpose of the website you are designing or developing to easily pick the best one to use.
Online Tools to Generate Color Schemes
There are a number of good online tools that you can use in selecting the right color schemes for web design. Here are some of the tools that are highly recommended:
Kuler
Also known as Adobe Color, Kuler is a reliable online tool that can help you decide on the color palette to use for your pages. It is Adobe’s color theme application that allows users to sync the color palettes they have created in other Adobe applications like Photoshop and Illustrator.
If you want an app that comes with advanced but user-friendly features and interface, then Kuler is for you. However, this app can only be enjoyed by iOS users.
Color Rotate
Color Rotate is similar to Kuler, only it looks like a 3D version of the latter. The way humans interpret colors is a complex system, and Color Rotate is an effective tool that puts your choice of colors in 3D space to create a representation of how human minds perceive color combinations. This app helps website designers find the right color scheme to suit the taste of the target market by showing how colors mix and match in a three-dimensional perspective.
Instant Color Schemes
There are times when it becomes difficult to look for the right hues or tones to use for a single image in mind. If you have the same issue in the process of designing your website, then this app can help you in resolving that.
Instant Color Schemes allows the user to type keywords and will instantly suggest the top colors that are commonly associated with the keywords typed.
Color Explorer
As one of the most commonly featured apps online, Color Explorer offers advanced features that allow the user to try different color palettes in convenient ways. The app also allows the user to search for color schemes that can be directly used or edited based on the needs of the website.
If you already have advanced knowledge of mixing and matching colors in web design, then this app is highly recommended for you.

Final Word
While there are a lot of online tools that can be used in selecting the perfect color schemes for websites, the number one rule is to know the taste of your target audience to make your strategy work.
Pay attention to details like how your company profile and your target market can relate through catchy phrases, icons, and colors. Keep in mind that although your website is just part of the equation to make your product or service memorable, it pays to maximize its use.
Before you settle on your colors, it is highly recommended that you try the abovementioned online tools and similar ones that you can access online.
The post Tips In Choosing Website Color Schemes (With BONUS Online Tools) appeared first on Web Designer Hub.
Source: http://www.webdesignerhub.com


Creating Photorealistic 3D Graphics on the Web

Before becoming a web developer, I worked in the visual effects industry, creating award-winning, high-end 3D effects for movies and TV Shows such as Tron, The Thing, Resident Evil, and Vikings. To be able to create these effects, we would need to use highly sophisticated animation software such as Maya, 3Ds Max or Houdini and do long hours of offline rendering on Render Farms that consisted of hundreds of machines. It's because I worked with these tools for so long that I am now amazed by the state of the current web technology. We can now create and display high-quality 3D content right inside the web browser, in real time, using WebGL and Three.js.

Here is an example of a project that is built using these technologies. You can find more projects that use three.js on their website.
Some projects using three.js
As the examples on the three.js website demonstrate, 3D visualizations have a vast potential in the domains of e-commerce, retail, entertainment, and advertisement.
WebGL is a low-level JavaScript API that enables creation and display of 3D content inside the browser using the GPU. Unfortunately, since WebGL is a low-level API, it can be a bit hard and tedious to use. You need to write hundreds of lines of code to perform even the simplest tasks. Three.js, on the other hand, is an open source JavaScript library that abstracts away the complexity of WebGL and allows you to create real-time 3D content in a much easier manner.
In this tutorial, I will be introducing the basics of the three.js library. It makes sense to start with a simple example to communicate the fundamentals better when introducing a new programming library but I would like to take this a step further. I will also aim to build a scene that is aesthetically pleasant and even photorealistic to a degree.
We will just start out with a simple plane and sphere but in the end it will end up looking like this:
See the Pen learning-threejs-final by Engin Arslan (@enginarslan) on CodePen.
Photorealism is the pinnacle of computer graphics, but achieving it is not necessarily a matter of the processing power at your disposal but of a smart deployment of techniques from your toolbox. Here are a few techniques that you will be learning about in this tutorial that will help your scenes achieve photorealism.

Color, Bump and Roughness Maps.
Physically Based Materials.
Lighting with Shadows.

Photorealistic 3D portrait by Ian Spriggs
The basic 3D principles and techniques that you will learn here are relevant in any other 3D content creation environment whether it is Blender, Unity, Maya or 3Ds Max.
This is going to be a long tutorial. If you are more of a video person or would like to learn more about the capabilities of three.js you should check out my video training on the subject from Lynda.com.
Requirements
When using three.js, if you are working locally, it helps to serve the HTML file through a local server to be able to load in scene assets such as external 3D geometry, images, etc. If you are looking for a server that is easy to set up, you can use Python to spin up a simple HTTP server. Python is pre-installed on many operating systems.
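For example, one of these commands (depending on your Python version), run from your project folder, should do the trick; the port number is arbitrary:

# Python 3
python -m http.server 8000

# Python 2
python -m SimpleHTTPServer 8000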
You don't have to worry about setting up a local dev server to follow this tutorial though. You will instead rely on data URLs to load in assets like images, to remove the overhead of setting up a server. Using this method you will be able to easily execute your three.js scene in online code editors such as CodePen.
This tutorial assumes a prior, basic to intermediate, knowledge of JavaScript and some understanding of front-end web development. If you are not comfortable with JavaScript but want to get started with it in an easy manner you might want to check out the course/book "Coding for Visual Learners: Learning JavaScript with p5.js". (Disclaimer: I am the author)
Let's get started with building 3D graphics on the Web!
Getting Started
I have already prepared a Pen that you can use to follow this tutorial with.
The HTML code that you will be using is going to be super simple. It just needs a div element to host the canvas that is going to display the 3D graphics. It also loads up the three.js library (release 86) from a CDN.
<div id="webgl"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/86/three.min.js"></script>
Codepen hides some of the HTML structure that is currently present for your convenience. If you were building this scene in some other online editor or locally, your HTML would need to have something like the code below, where main.js would be the file that holds the JavaScript code.
<!DOCTYPE html>
<html>
  <head>
    <title>Three.js</title>
    <style type="text/css">
      html, body {
        margin: 0;
        padding: 0;
        overflow: hidden;
      }
    </style>
  </head>
  <body>
    <div id="webgl"></div>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/86/three.min.js"></script>
    <script src="./main.js"></script>
  </body>
</html>
Notice the simple CSS declaration inside the HTML. This is what you would have in the CSS tab of Codepen:
html, body {
  margin: 0;
  padding: 0;
  overflow: hidden;
}
This is to ensure that you don't have any margin and padding values that might be applied by your browser and you don't get a scrollbar so that you can have the graphics fill the entire screen. This is all we need to get started with building 3D Graphics.
Part 1 - Three.js Scene Basics
When working with three.js and with 3D in general, there are a couple of required objects you need to have. These objects are scene, camera and the renderer.
First, you should create a scene. You can think of a scene object as a container for every other 3D object that you are going to work with. It represents the 3D world that you will be building. You can create the scene object by doing this:
var scene = new THREE.Scene();
Another thing that you need to have when working with 3D is the camera. Think of camera like the eyes that you will be viewing this 3D world through. When working with a 2D visualization, the concept of a camera usually doesn't exist. What you see is what you get. But in 3D, you need a camera to define your point of view as there are many positions and angles that you could be looking at a scene from. A camera doesn't only define a position but also other information like the field of view or the aspect ratio.
var camera = new THREE.PerspectiveCamera(
  45, // field of view
  window.innerWidth / window.innerHeight, // aspect ratio
  1, // near clipping plane (beyond which nothing is visible)
  1000 // far clipping plane (beyond which nothing is visible)
);
The camera captures the scene for display purposes but for us to actually see anything, the 3D data needs to be converted into a 2D image. This process is called rendering and you need a renderer to render the scene in three.js. You can initialize a renderer like this:
var renderer = new THREE.WebGLRenderer();
And then set the size of the renderer. This will dictate the size of the output image. You will make it cover the window size.
renderer.setSize(window.innerWidth, window.innerHeight);
To be able to display the results of the render you need to append the domElement property of the renderer to your HTML content. You will use the empty div element that you created that has the id webgl for this purpose.
document.getElementById('webgl').appendChild(renderer.domElement);
And having done all this you can call the render method on the renderer by providing the scene and the camera as the arguments.
renderer.render(
  scene,
  camera
);
To have things a bit tidier, put everything inside a function called init and execute that function.
init();
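For reference, here is a minimal sketch of what that init function might contain at this point, using the same objects created above; it is then executed with the init() call shown here.

function init() {
  var scene = new THREE.Scene();

  var camera = new THREE.PerspectiveCamera(
    45,                                     // field of view
    window.innerWidth / window.innerHeight, // aspect ratio
    1,                                      // near clipping plane
    1000                                    // far clipping plane
  );

  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.getElementById('webgl').appendChild(renderer.domElement);

  renderer.render(scene, camera);
}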
And now you would see nothing... but a black screen. Don't worry, this is normal. The scene is working but since you didn't include any objects inside the scene, what you are looking at is basically empty space. Next, you will be populating this scene with 3D objects.
See the Pen learning-threejs-01 by Engin Arslan (@enginarslan) on CodePen.
Adding Objects to the Scene
Geometric objects in three.js are made up of two parts. A geometry that defines the shape of the object and a material that defines the surface quality, the appearance, of the object. The combination of these two things makes up a mesh in three.js which forms the 3D object.
Three.js allows you to create some simple shapes like a cube or a sphere in an easy manner. You can create a simple sphere by providing the radius value.
var geo = new THREE.SphereGeometry(1);
There are various kinds of materials that you could use on geometries. Materials determine how an object reacts to the scene lighting. We can use a material to make an object reflective, rough, transparent, etc.. The default material that three.js objects are created with is the MeshBasicMaterial. MeshBasicMaterial is not affected by the scene lighting at all. This means that your geometry is going to be visible even when there is no lighting in the scene. You can pass an object with a color property and a hex value to the MeshBasicMaterial to be able to set the desired color for the object. You will use this material for now but later update it to have your objects be affected by the scene lighting. You don't have any lighting in the scene for now so MeshBasicMaterial should be a good enough choice.
var material = new THREE.MeshBasicMaterial({
  color: 0x00ff00
});
You can combine the geometry and material to create a mesh which is going to form the 3D object.
var mesh = new THREE.Mesh(geometry, material);
Create a function to encapsulate this code that creates a sphere. You won't be creating more than one sphere in this tutorial but it is still good to keep things neat and tidy.
function getSphere(radius) {
  var geometry = new THREE.SphereGeometry(radius);
  var material = new THREE.MeshBasicMaterial({
    color: 0x00ff00
  });
  var sphere = new THREE.Mesh(geometry, material);
  return sphere;
}

var sphere = getSphere(1);
Then you need to add this newly created object to the scene for it to be visible.
scene.add(sphere);
Let's check out the scene again. You will still see a black screen.
See the Pen learning-threejs-02 by Engin Arslan (@enginarslan) on CodePen.
The reason why you don't see anything right now is that whenever you add an object to the scene in three.js, the object gets placed at the center of the scene, at the coordinates of 0, 0, 0 for x, y and z. This simply means that you currently have the camera and the sphere at the same position. You should change the position of either one of them to be able to start seeing things.
3D coordinates
Let's move the camera 20 units on the z axis. This is achieved by setting the position.z property on the camera. 3D objects have position, rotation and scale properties that would allow you to transform them into the 3D space.
camera.position.z = 20;
You could move the camera on other axes as well.
camera.position.x = 0;
camera.position.y = 5;
camera.position.z = 20;
The camera is positioned higher now but the sphere is not at the center of the frame anymore. You need to point the camera to it. To be able to do so, you can call a method on the camera called lookAt. The lookAt method on the camera determines which point the camera is looking at. The points in the 3D space are represented by Vectors. So you can pass a new Vector3 object to this lookAt method to be able to have the camera look at the 0, 0, 0 coordinates.
camera.lookAt(new THREE.Vector3(0, 0, 0));
The sphere object doesn't look too smooth right now. The reason for that is that the SphereGeometry function actually accepts two additional parameters, the width and height segments, which affect the resolution of the surface. The higher these values, the smoother the curved surfaces will appear. I will set this value to 24 for both the width and height segments.
var geo = new THREE.SphereGeometry(radius, 24, 24);
See the Pen learning-threejs-03 by Engin Arslan (@enginarslan) on CodePen.
Now you will create a simple plane geometry for the sphere to sit on. The PlaneGeometry function requires a width and height parameter. In 3D, 2D objects don't have both of their sides rendered by default, so you need to pass a side property to the material to have both sides of the plane geometry render.
function getPlane(w, h) {
  var geo = new THREE.PlaneGeometry(w, h);
  var material = new THREE.MeshBasicMaterial({
    color: 0x00ff00,
    side: THREE.DoubleSide,
  });
  var mesh = new THREE.Mesh(geo, material);

  return mesh;
}
You can now add this plane object to the scene as well. You will notice that the initial rotation of the plane geometry is parallel to the y-axis but you will likely need it to be horizontal for it to act as a ground plane. There is one important thing you should keep in mind regarding the rotations in three.js though. They use radians as a unit, not degrees. A rotation of 90 degrees in radians is equivalent to Math.PI/2.
var plane = getPlane(50, 50);
scene.add(plane);
plane.rotation.x = Math.PI/2;
When you created the sphere object, it got positioned using its center point. If you would like to move it above the ground plane then you can just increase its position.y value by the current radius amount. But that wouldn't be a programmatic way of doing things. If you would like the sphere to stay on the plane whatever its radius value is, you should make use of the radius value for the positioning.
sphere.position.y = sphere.geometry.parameters.radius;
See the Pen learning-threejs-04 by Engin Arslan (@enginarslan) on CodePen.
Animations
You are almost done with the first part of this tutorial. But before we wrap it up, I want to illustrate how to do animations in three.js. Animations in three.js make use of the requestAnimationFrame method on the window object which repeatedly executes a given function. It is somewhat like a setInterval function but optimized for the browser drawing performance.
Create an update function and pass the renderer, scene, and camera in there to execute the render method of the renderer inside this function. You will also declare a requestAnimationFrame function inside there and call this update function recursively from a callback function that is passed to the requestAnimationFrame function. It is better to illustrate this in code than to write about it.
function update(renderer, scene, camera) {
  renderer.render(scene, camera);

  requestAnimationFrame(function() {
    update(renderer, scene, camera);
  });
}
Everything might look the same to you at this point, but the core difference is that the requestAnimationFrame function is making the scene render at around 60 frames per second with a recursive call to the update function. This means that if you execute a statement inside the update function, that statement will get executed around 60 times per second. Let's add a scaling animation to the sphere object. To be able to select the sphere object from inside the update function you could pass it as an argument, but we will use a different technique. First, set a name attribute on the sphere object and give it a name of your liking.
sphere.name = 'sphere';
Inside the update function, you could find this object using its name by using the getObjectByName method on its parent object, the scene.
var sphere = scene.getObjectByName('sphere');
sphere.scale.x += 0.01;
sphere.scale.z += 0.01;
With this code, the sphere is now scaling on its x and z axes. Our intention is not to create a scaling sphere, though. We are setting up the update function so that you can leverage it for different animations later on. Now that you have seen how it works, you can remove this scaling animation.
See the Pen learning-threejs-05 by Engin Arslan (@enginarslan) on CodePen.
Part 2 - Adding Realism to the Scene
Currently, we are using MeshBasicMaterial which displays the given color even when there is no lighting in the scene which results in a very flat look. Real-world materials don't work this way though. The visibility of the surface in the real world depends on how much light is reflecting back from the surface back to our eyes. Three.js comes with a couple of different materials that provide a better approximation of how real-world surfaces behave and one of them is the MeshStandardMaterial. MeshStandardMaterial is a physically based rendering material that can help you achieve photorealistic results. This is the kind of material that modern game engines like Unreal or Unity use and is an industry standard in gaming and visual effects.
Let's start using the MeshStandardMaterial on our objects and change the color of the materials to white.
var material = new THREE.MeshStandardMaterial({
  color: 0xffffff,
});
You will once again get a black render at this point. That is normal. For objects to be visible we need to have lights in the scene. This wasn't a requirement with MeshBasicMaterial as it is a simple material that displays the given color at all conditions but other materials require an interaction with light to be visible. Let's create a SpotLight creating function. You will be creating two spotlights using this function.
function getSpotLight(color, intensity) {
  var light = new THREE.SpotLight(color, intensity);

  return light;
}

var spotLight_01 = getSpotLight(0xffffff, 1);
scene.add(spotLight_01);
You might start seeing something at this point. Position the light and the camera a bit differently for better framing and shading. Also create a secondary light.
var spotLight_02 = getSpotLight(0xffffff, 1);
scene.add(spotLight_02);

camera.position.x = 0;
camera.position.y = 6;
camera.position.z = 6;

spotLight_01.position.x = 6;
spotLight_01.position.y = 8;
spotLight_01.position.z = -20;

spotLight_02.position.x = -12;
spotLight_02.position.y = 6;
spotLight_02.position.z = -10;
Having done this you have two light sources in the scene, illuminating the sphere from two different positions. The lighting is helping a bit in understanding the dimensionality of the scene, but things are still looking extremely fake at this point because the lighting is missing a critical component: the shadows!
Rendering a shadow in three.js is unfortunately not too straightforward. This is because shadows are computationally expensive and we need to activate shadow rendering in multiple places. First, you need to tell the renderer to start rendering shadows:
var renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true;
Then you need to tell the light to cast shadows. Do that in the getSpotLight function.
light.castShadow = true;
You should also tell the objects to cast and/or receive shadows. In this case, you will make the sphere cast shadows and the plane to receive shadows.
mesh.castShadow = true;
mesh.receiveShadow = true;
After all these settings we should start seeing shadows in the scene. Initially, they might be a bit lower quality. You can increase the resolution of the shadows by setting the light shadow map size.
light.shadow.mapSize.x = 4096;
light.shadow.mapSize.y = 4096;
MeshStandardMaterial has a couple of properties, such as roughness and metalness, that control the interaction of the surface with the light. The properties take values between 0 and 1 and control the corresponding behavior of the surface. Increase the roughness value on the plane material to 1 to see the surface look more like rubber as the reflections get blurrier.
// material adjustments
var planeMaterial = plane.material;
planeMaterial.roughness = 1;
We won't be using 1 as a value in this tutorial though. Feel free to experiment with values but set it back to 0.65 for roughness and 0.75 for metalness.
planeMaterial.roughness = 0.65;
planeMaterial.metalness = 0.75;
Even though the scene should be looking much more promising right now it is still hard to call it realistic. The truth is, it is very hard to establish photorealism in 3D without using texture maps.
See the Pen learning-threejs-06 by Engin Arslan (@enginarslan) on CodePen.
Texture Maps
Texture maps are 2D images that can be mapped on a material for the purpose of providing surface detail. So far you were only getting solid colors on the surfaces but using a texture map you can map any image you would like on a surface. Texture maps are not only used to manipulate the color information of surfaces but they can also be used to manipulate other qualities of the surface like reflectiveness, shininess, roughness, etc.
Textures can be derived from photographic sources or painted from scratch. For a texture to be useful in a 3D context, it should be captured in a certain manner. Images that have reflections or shadows in them, or images where the perspective is too distorted, wouldn't make great texture maps. There are several websites dedicated to finding textures online. One of them is textures.com, which has a pretty good archive. They have some free download options but require you to register to be able to download. Another website for 3D textures is Megascans, which does high-resolution environment scans of high-end production quality.
I have used a website called mb3d.co.uk for this example. This site provides seamless, free-to-use textures. A seamless texture is one that can be repeated on the surface many times without any discontinuities where the edges meet. This is the link to the texture file that I have used. I have decreased the size to 512px for width and height and converted the image file to a data URI using an online service called ezgif, to be able to include it as part of the JavaScript code as opposed to loading it in as a separate asset. (Hint: if you use this service, output just the raw data URI without any surrounding tags.)
Create a function that returns the data URI we have generated so that we don't have to put that huge string in the middle of our code.
function getTexture() {
  var data = 'data:image/jpeg;base64,/...'; // paste your data URI inside the quotation marks.
  return data;
}
Next, you need to load in the texture and apply it to the plane surface. You will be using the three.js TextureLoader for this purpose. After loading in the texture, you will assign it to the map property of the desired material to have it act as a color map on the surface.
var textureLoader = new THREE.TextureLoader();
var texture = textureLoader.load(getTexture());
planeMaterial.map = texture;
Things would be looking rather ugly right now as the texture on the surface is pixelated. The image is stretching too much to cover the entire surface. What you can do is to make the image repeat itself instead of scaling so that it doesn't get as pixelated. To do so, you need to set the wrapS and wrapT properties on the desired map to THREE.RepeatWrapping and specify a repetition value. Since you will be doing this for other kinds of maps as well (like bump or roughness map) it is better to create a loop for this:
var repetition = 6;
var textures = ['map']; // we will add 'bumpMap' and 'roughnessMap'
textures.forEach((mapName) => {
  planeMaterial[mapName].wrapS = THREE.RepeatWrapping;
  planeMaterial[mapName].wrapT = THREE.RepeatWrapping;
  planeMaterial[mapName].repeat.set(repetition, repetition);
});
This should look much better. Since the texture you are using is seamless you wouldn't notice any disconnections around the edges where the repetition happens.
Loading a texture is actually an asynchronous operation. This means that your 3D scene is generated before the image file is loaded in. But since you are continuously rendering the scene using requestAnimationFrame, this doesn't cause any issues in this example. If you weren't doing this, you would need to use callbacks or other async methods to manage the loading order.
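For illustration, a sketch of that callback-based approach might look like the following (it isn't needed in our Pen). TextureLoader.load accepts an onLoad callback as its second argument:

var textureLoader = new THREE.TextureLoader();
textureLoader.load(getTexture(), function (texture) {
  // This callback only runs once the image data has finished loading
  planeMaterial.map = texture;
  planeMaterial.needsUpdate = true;

  // Now it is safe to render a frame (or start the render loop)
  renderer.render(scene, camera);
});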
See the Pen learning-threejs-07 by Engin Arslan (@enginarslan) on CodePen.
Other Texture Maps
As mentioned in the previous chapter, textures are not only used to define the color of surfaces but to define other qualities of them as well. One other way textures can be used is as bump maps. When used as a bump map, the brightness values of the texture simulate a height effect.
planeMaterial.bumpMap = texture;
The bump map should also use the same repetition configuration as the color map, so include it in the textures array.
var textures = ['map', 'bumpMap'];
With a bump map, the brighter the value of a pixel, the higher the corresponding surface will look. But a bump map doesn't actually change the surface; it just manipulates how the light interacts with the surface to create an illusion of uneven topology. The bump amount looks a bit too strong right now. Bump maps work best when they are used in subtle amounts, so let's change the bumpScale parameter to something lower for a more subtle effect.
planeMaterial.bumpScale = 0.01;
Notice how this texture made a huge difference in appearance. The reflections are not perfect anymore but nicely broken up as they would be in real life. Another kind of map slot that is available to the StandardMaterial is the roughness map. A texture map used as a roughness map allows you to control the sharpness of the reflections using the brightness values of a given image.
planeMaterial.roughnessMap = texture;
var textures = ['map', 'bumpMap', 'roughnessMap'];
According to the three.js documentation, the StandardMaterial works best when used in conjunction with an environment map. An environment map simulates a distant environment reflecting off of the reflective surfaces in the scene. It really helps when you are trying to simulate reflectivity on objects. Environment maps in three.js are in the form of cube maps. A cube map is a panoramic view of a scene that is mapped onto the inside of a cube, made up of 6 separate images that correspond to each face of the cube. Since loading 6 more images inside an online editor is going to be a bit too much work, you won't actually be using an environment map in this example. But to make the sphere object a bit more interesting, add a roughness map to it as well. You will be using this texture, but 320x320px in size and as a data URI.
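As an aside, if you did want to wire up an environment map outside of an online editor, a rough sketch using three.js's CubeTextureLoader could look like this; the six file names are purely hypothetical:

// Load the six faces of the cube map: positive/negative x, y and z
var cubeLoader = new THREE.CubeTextureLoader();
var envMap = cubeLoader.load([
  'px.jpg', 'nx.jpg',
  'py.jpg', 'ny.jpg',
  'pz.jpg', 'nz.jpg'
]);

// Assign it to a material so its reflections have something to pick up
planeMaterial.envMap = envMap;
planeMaterial.needsUpdate = true;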
Create a new function called getMetalTexture
function getMetalTexture() {
  var data = 'data:image/jpeg;base64,/...'; // paste your data URI inside the quotation marks.
  return data;
}
And apply it on the sphere material as bumpMap and roughnessMap:
var sphereMaterial = sphere.material;
var metalTexture = textureLoader.load(getMetalTexture());

sphereMaterial.bumpMap = metalTexture;
sphereMaterial.roughnessMap = metalTexture;
sphereMaterial.bumpScale = 0.01;
sphereMaterial.roughness = 0.75;
sphereMaterial.metalness = 0.25;
See the Pen learning-threejs-08 by Engin Arslan (@enginarslan) on CodePen.
Wrapping it up!
You are almost done! Here you will do just a couple of small tweaks. You can see the final version of this scene file in this Pen.
Provide a non-white color to the lights. Notice how you can actually use CSS color values as strings to specify color:
var spotLight_01 = getSpotLight('rgb(145, 200, 255)', 1);
var spotLight_02 = getSpotLight('rgb(255, 220, 180)', 1);
And add some subtle random flickering animation to the lights to add some life to the scene. First, assign a name property to the lights so you can locate them inside the update function using the getObjectByName method.
spotLight_01.name = 'spotLight_01';
spotLight_02.name = 'spotLight_02';
And then create the animation inside the update function using the Math.random() function.
var spotLight_01 = scene.getObjectByName('spotLight_01');
spotLight_01.intensity += (Math.random() - 0.5) * 0.15;
spotLight_01.intensity = Math.abs(spotLight_01.intensity);

var spotLight_02 = scene.getObjectByName('spotLight_02');
spotLight_02.intensity += (Math.random() - 0.5) * 0.05;
spotLight_02.intensity = Math.abs(spotLight_02.intensity);
And as a bonus, inside the scene file, I have included the OrbitControls script for the three.js camera which means that you can actually drag your mouse on the scene to interact with the camera! I have also made it so that the scene resizes with the changing window size. I have achieved this using an external script for convenience.
See the Pen learning-threejs-final by Engin Arslan (@enginarslan) on CodePen.
Now, this scene is somewhat close to becoming photorealistic. There are still many missing pieces though. The sphere ball is too dark due to lack of reflections and ambient lighting. The ground plane is looking too flat in the glancing angles. The profile of the sphere is too perfect - it is CG (Computer Graphics) perfect. The lighting is not actually as realistic as it could be; It doesn't decay (lose intensity) with the distance from the source. You should also probably add particle effects, camera animation, and post-processing filters if you want to go all the way with this. But this still should be a good enough example to illustrate the power of three.js and the quality of graphics that you can create inside the browser. For more information on what you could achieve using this amazing library, you should definitely check out my new course on Lynda.com about the subject!
Thanks for making it this far! I hope you enjoyed this write-up, and feel free to reach out to me @inspiratory on Twitter or on my website with any questions you might have!

Creating Photorealistic 3D Graphics on the Web is a post from CSS-Tricks
Source: CssTricks


Simple Server Side Rendering, Routing, and Page Transitions with Nuxt.js

A bit of a wordy title, huh? What is server side rendering? What does it have to do with routing and page transitions? What the heck is Nuxt.js? Funnily enough, even though it sounds complex, working with Nuxt.js and exploring its benefits isn't too difficult. Let's get started!

Server side rendering
You might have heard people talking about server side rendering as of late. We looked at one method to do that with React recently. One particularly compelling aspect is the performance benefits. When we render our HTML, CSS, and JavaScript on the server, we often have less JavaScript to parse both initially and on subsequent updates. This article does a really good job of going into more depth on the subject. My favorite takeaway is:
By rendering on the server, you can cache the final shape of your data.
Instead of grabbing JSON or other information from the server, parsing it, then using JavaScript to create layouts of that information, we're doing a lot of those calculations upfront, and only sending down the actual HTML, CSS, and JavaScript that we need. This can reap a lot of benefits with caching, SEO, and speed up our apps and sites.
What is Nuxt.js?
Server side rendering sounds pretty nice, but you're probably wondering if it's difficult to set up. I've been using Nuxt.js for my Vue applications lately and found it surprisingly simple to work with. To be clear: you don't need to use Nuxt.js in particular to do server side rendering. I'm just a fan of this tool for many reasons. I ran some tests last month and found that Nuxt.js had even higher lighthouse scores out of the gate than Vue's PWA template, which I thought was impressive.
Nuxt.js is a higher-level framework, with a CLI command, that you can use to create universal Vue applications. Here are some, but not all, of the benefits:

Server-Side Rendering
Automatic Code Splitting
Powerful Routing System
Great lighthouse scores out of the gate 🐎
Static File Serving
ES6/ES7 Transpilation
Hot reloading in Development
Pre-processors: SASS, LESS, Stylus, etc
Write Vue Files to create your pages and layouts!
My personal favorite: easily add transitions to your pages

Let's set up a basic application with some routing to see the benefits for ourselves.
Getting Set up
The first thing we need to do if you haven't already is download Vue's CLI. You can do so globally with this command:
npm install -g vue-cli

# ... or ...

yarn global add vue-cli
You will only need to do this once, not every time you use it.
Next, we'll use the CLI to scaffold a new project, but we'll use Nuxt.js as the template:
vue init nuxt/starter my-project
cd my-project
yarn # or... npm install
npm run dev
You'll see the progress of the app being built and it will give you a dedicated development server to check out: http://127.0.0.1:3000/. This is what you'll see right away (with a pretty cool little animation):

Let's take a look at what's creating this initial view of our application at this point. We can go to the `pages` directory, and inside see that we have an `index.vue` page. If we open that up, we'll see all of the markup that it took to create that page. We'll also see that it's a `.vue` file, using single file components just like any ordinary `vue` file, with a template tag for the HTML, a script tag for our scripts, where we're importing a component, and some styles in a style tag. (If you aren't familiar with these, there's more info on what those are here.) The coolest part of this whole thing is that this .vue file doesn't require any special setup. It's placed in the `pages` directory, and Nuxt.js will automatically make this server-side rendered page!
Let's create a new page and set up some routing between them. In pages/index.vue, dump the content that's already there, and replace it with:
<template>
  <div class="container">
    <h1>Welcome!</h1>
    <p><nuxt-link to="/product">Product page</nuxt-link></p>
  </div>
</template>

<style>
.container {
  font-family: "Quicksand", "Source Sans Pro", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif; /* 1 */
  padding: 60px;
}
</style>
Then let's create another page in the `pages` directory; we'll call it `product.vue` and put this content inside of it:
<template>
<div class="container">
<h1>This is the product page</h1>
<p><nuxt-link to="/">Home page</nuxt-link></p>
</div>
</template>
Right away, you'll see this:

Ta-da! 🏆
Right away, we have server side rendering, routing between pages (if you check out the URL you can see it's going between the index page and product page), and we even have a sweet little green loader that zips across the top. We didn't have to do much at all to get that going.
You might have noticed in here that there's a special little element: <nuxt-link to="/">. This tag can be used like an <a> tag: it wraps around a bit of content and sets up an internal routing link between our pages. We'll use to="/page-title-here" instead of an href.
Now, let's add some transitions. We’ll do this in a few stages: simple to complex.
Creating Page Transitions
We already have a really cool progress bar that runs across the top of the screen as we’re routing and makes the whole thing feel very zippy. (That’s a technical term). While I like it very much, it won’t really fit the direction we’re headed in, so let’s get rid of it for now.
We're going to go into our `nuxt.config.js` file and change the lines:
/*
** Customize the progress-bar color
*/
loading: { color: '#3B8070' },
to
loading: false,
You'll also notice a few other things in this nuxt.config.js file. You'll see our meta and head tags as well as the content that will be rendered inside of them. That's because we won't have a traditional `index.html` file as we do in our normal CLI build; Nuxt.js is going to parse and build our `index.vue` file together with these tags and then render the content for us, on the server. If we need to add CSS files, fonts, or the like, we would use this config file to do so.
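For reference, a trimmed-down nuxt.config.js might look something like the sketch below; the head contents and the global CSS path are illustrative placeholders rather than anything from this particular demo:

module.exports = {
  // rendered into the document <head> on the server
  head: {
    title: 'my-project',
    meta: [
      { charset: 'utf-8' },
      { name: 'viewport', content: 'width=device-width, initial-scale=1' },
      { hid: 'description', name: 'description', content: 'Nuxt.js project' }
    ],
    link: [
      { rel: 'icon', type: 'image/x-icon', href: '/favicon.ico' }
    ]
  },
  // global CSS files or fonts would be listed here
  css: ['~assets/css/main.css'],
  // the progress bar we just turned off
  loading: false
}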
Now that we have all that down, let's understand what's available to us to create page transitions. In order to understand what's happening on the page that we're plugging into, we need to review how the transition component in Vue works. I've written an article all about this here, so if you'd like deeper knowledge on the subject, you can check that out. But what you really need to know is this: under the hood, Nuxt.js plugs into the functionality of Vue's transition component and gives us some defaults and hooks to work with:

You can see here that we have a hook for what we want to happen right before the animation starts (enter), during the animation/transition (enter-active), and when it finishes. We have these same hooks for when something is leaving, prepended with leave instead. We can make simple transitions that just interpolate between states, or we can plug a full CSS or JavaScript animation into them.
Usually in a Vue application, we would wrap a component or element in <transition> in order to use this slick little functionality, but Nuxt.js provides it for us out of the box. Our hook for the page will, thankfully, begin with page. All we have to do to create an animation between pages is add a bit of CSS that plugs into these hooks:
.page-enter-active, .page-leave-active {
transition: all .25s ease-out;
}
.page-enter, .page-leave-active {
opacity: 0;
transform-origin: 50% 50%;
}
I'm also going to add an extra bit of styling here so that you can see the page transitions a little easier:
html, body {
font-family: "Quicksand", "Source Sans Pro", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif; /* 1 */
background: #222;
color: white;
width: 100vw;
height: 100vh;
}

a, a:visited {
color: #3edada;
text-decoration: none;
}

.container {
padding: 60px;
width: 100vw;
height: 100vh;
background: #444;
}

Right now we're using a CSS transition. This only gives us the ability to designate what to do in the middle of two states. We could do something a little more interesting by having an animation adjust in a way that suggests where something is coming from and going to. For that to happen, we could separate out the transitions for the page-enter and page-leave-active classes, but it's a little more DRY to use a CSS animation, specify where things are coming from and going to, and plug one keyframe animation into .page-enter-active and another into .page-leave-active:
.page-enter-active {
animation: acrossIn .45s ease-out both;
}

.page-leave-active {
animation: acrossOut .65s ease-in both;
}

@keyframes acrossIn {
0% {
transform: translate3d(-100%, 0, 0);
}
100% {
transform: translate3d(0, 0, 0);
}
}

@keyframes acrossOut {
0% {
transform: translate3d(0, 0, 0);
}
100% {
transform: translate3d(100%, 0, 0);
}
}
Let's also add a little bit of special styling to the product page so we can see the difference between these two pages:
<style scoped>
.container {
background: #222;
}
</style>
This scoped tag is pretty cool because it will apply the styles for this page/vue file only. If you have heard of CSS Modules, you'll be familiar with this concept.
We would see this (this page is for demo purposes only; that's probably too much movement for a typical page transition):
[youtube https://www.youtube.com/watch?v=uFU_GqeZ5sw&w=560&h=315]
Now, let's say we have a page with a totally different interaction. For this page, all of that movement is too much; we just want a simple fade. For this case, we'd need to rename our transition hook to separate it out.
Let's create another page; we'll call it the contact page and create it in the `pages` directory.
<template>
<div class="container">
<h1>This is the contact page</h1>
<p><nuxt-link to="/">Home page</nuxt-link></p>
</div>
</template>

<script>
export default {
transition: 'fadeOpacity'
}
</script>

<style>
.fadeOpacity-enter-active, .fadeOpacity-leave-active {
transition: opacity .35s ease-out;
}

.fadeOpacity-enter, .fadeOpacity-leave-active {
opacity: 0;
}
</style>
Now we have two different page transitions:
[youtube https://www.youtube.com/watch?v=jGUiAl4ov3M&w=560&h=315]
You can see how we could build on these further and create more and more streamlined CSS animations per page. But from here let's dive into my favorite, JavaScript animations, and create page transitions with a bit more horsepower.
JavaScript Hooks
Vue's <transition> component offers some hooks to use JavaScript animation in place of CSS as well. They are as follows, and each hook is optional. The :css="false" binding lets Vue know we're going to use JS for this animation:
<transition
@before-enter="beforeEnter"
@enter="enter"
@after-enter="afterEnter"
@enter-cancelled="enterCancelled"

@before-leave="beforeLeave"
@leave="leave"
@after-leave="afterLeave"
@leave-cancelled="leaveCancelled"
:css="false">

</transition>
The other thing we have available to us is transition modes. I'm a big fan of these, as you can state that one animation will wait for the other animation to finish transitioning out before transitioning in. The transition mode we'll work with is called out-in.
We can do something really wild with JavaScript and the transition mode. Again, we're going a little nuts here for the purposes of the demo; we would usually do something much more subtle:
[youtube https://www.youtube.com/watch?v=8t1PdiziI_U&w=560&h=315]
In order to do something like this, I've run yarn add gsap because I'm using GreenSock for this animation. In my index.vue page, I can remove the existing CSS animation and add this into the <script> tags:
import { TweenMax, Back } from 'gsap'

export default {
transition: {
mode: 'out-in',
css: false,
beforeEnter (el) {
TweenMax.set(el, {
transformPerspective: 600,
perspective: 300,
transformStyle: 'preserve-3d'
})
},
enter (el, done) {
TweenMax.to(el, 1, {
rotationY: 360,
transformOrigin: '50% 50%',
ease: Back.easeOut
})
done()
},
leave (el, done) {
TweenMax.to(el, 1, {
rotationY: 0,
transformOrigin: '50% 50%',
ease: Back.easeIn
})
done()
}
}
}
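One aside of my own (not from the article): because done() is called synchronously above, Nuxt treats each transition as finished as soon as the tween starts. If you wanted the out-in mode to actually wait for the leave animation, you could hand done to GreenSock's onComplete instead, something like:

leave (el, done) {
  TweenMax.to(el, 1, {
    rotationY: 0,
    transformOrigin: '50% 50%',
    ease: Back.easeIn,
    onComplete: done // tell Nuxt the leave is finished only after the tween completes
  })
}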
All of the code for these demos exists in my Intro to Vue repo as starter materials if you're getting ramped up learning Vue.
One thing I want to call out here is that currently there is a bug for transition modes in Nuxt.js. This bug is fixed, but the release hasn't come out yet. It should be all fixed and up to date in the upcoming 1.0 release, but in the meantime, here is a working simple sample demo, and the issue to track.
With this working code and those JavaScript hooks we can start to get much fancier and create unique effects, with different transitions on every page:
[youtube https://www.youtube.com/watch?v=m0SPGT3Vai8?rel=0&w=560&h=315]
Here's the site that the demo was deployed to if you'd like to see it live: https://nuxt-type.now.sh/ as well as the repo that houses the code for it: https://github.com/sdras/nuxt-type
Navigation
In that last demo, you might have noticed we had a common navigation across all of the pages that we routed to. In order to create this, we can go into the `layouts` directory, and we'll see a file called `default.vue`. This directory will house the base layouts for all of our pages, "default" being the, uhm, default :)
Right away you'll see this:
<template>
<div>
<nuxt/>
</div>
</template>
That special <nuxt/> tag will be where our `.vue` page files will be inserted, so in order to create a navigation, we could insert a navigation component like this:
<template>
<div>
<img class="moon" src="~assets/FullMoon2010.png" />
<Navigation />
<nuxt/>
</div>
</template>

<script>
import Navigation from '~components/Navigation.vue'

export default {
components: {
Navigation
}
}
</script>
I love this because everything is kept nice and organized between our global and local needs.
I then have a component called Navigation in a directory I've called `components` (this is pretty standard fare for a Vue app). In this file, you'll see a bunch of links to the different pages:
<nav>
<div class="title">
<nuxt-link to="/rufina">Rufina</nuxt-link>
<nuxt-link to="/prata">Prata</nuxt-link>
<nuxt-link exact to="/">Playfair</nuxt-link>
</div>
</nav>
You'll notice I'm using that <nuxt-link> tag again and, even though the component lives in another directory, the routing will still work. But that last link has one extra attribute, the exact attribute: <nuxt-link exact to="/">Playfair</nuxt-link>. This is because there are many routes that match the / path; all of them do, in fact. So if we specify exact, Nuxt will know that we only mean the index page in particular.
Further Resources
If you'd like more information about Nuxt, their documentation is pretty sweet and has a lot of examples to get you going. If you'd like to learn more about Vue, I've just made a course on Frontend Masters and all of the materials are open source here, or you can check out our Guide to Vue, or you can go to the docs which are extremely well-written. Happy coding!

Simple Server Side Rendering, Routing, and Page Transitions with Nuxt.js is a post from CSS-Tricks
Source: CssTricks


Connect: behind the front-end experience

Some fantastic behind-the-scenes stuff about Stripe's design work by Benjamin De Cock. Absolutely everything is clever and uses very modern techniques.

Using CSS grid for their iconic background stripes
Using 3D cubes for aesthetic flair
Using reduced motion media queries to accommodate that preference
Using the Web Animation API for event-triggered keyframe-like animations in JavaScript

Plus one I'd never seen before:
Connect's landing page uses the new Intersection Observer API which provides a much more robust and performant way to detect the visibility of an element ... The observeScroll helper simplifies our detection behavior (i.e. when an element is fully visible, the callback is triggered once) without executing anything on the main thread.
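The observeScroll helper itself isn't shown in the excerpt, but a minimal sketch of that kind of helper (the function name and the fire-once behavior are assumptions based on the description) could look like this:

// Calls `callback` once, the first time `element` is fully visible in the viewport.
function observeScroll (element, callback) {
  const observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      if (entry.intersectionRatio >= 1) {
        callback(entry.target)
        observer.disconnect() // fire once, then stop observing
      }
    })
  }, { threshold: 1 })

  observer.observe(element)
}

// Example usage (the selector and class are placeholders):
observeScroll(document.querySelector('.hero-illustration'), function (el) {
  el.classList.add('is-visible')
})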
Direct Link to Article — Permalink
Connect: behind the front-end experience is a post from CSS-Tricks
Source: CssTricks


From imagination to (augmented) reality in 48 hours

Every spring, members of Acquia's Product, Engineering and DevOps teams gather at our Boston headquarters for "Build Week". Build Week gives our global team the opportunity to meet face-to-face, to discuss our product strategy and roadmap, to make plans, and to collaborate on projects.
One of the highlights of Build Week is our annual Hackathon; more than 20 teams of 4-8 people are given 48 hours to develop any project of their choosing. There are no restrictions on the technology or solutions that a team can utilize. Projects ranged from an Amazon Dash Button that spins up a new Acquia Cloud environment with one click, to a DrupalCoin Blockchain module that allows users to visually build page layouts, to a proposed security solution that would automate pen testing against DrupalCoin Blockchain sites.
This year's projects were judged on innovation, ship-ability, technical accomplishment and flair. The winning project, Lift HoloDeck, was particularly exciting because it showcases an ambitious digital experience that is possible with Acquia and DrupalCoin Blockchain today. The Lift HoloDeck takes a physical experience and amplifies it with a digital one using augmented reality. The team built a mobile application that superimposes product information and smart notifications over real-life objects that are detected on a user's smartphone screen. It enables customers to interact with brands in new ways that improve the customer's experience.

At the hackathon, the Lift HoloDeck team showed how augmented reality can change how both online and physical storefronts interact with their consumers. In their presentation, they followed a customer, Neil, as he used the mobile application to inform his purchases in a coffee shop and clothing store. When Neil entered his favorite coffee shop, he held up his phone to the posted “deal of the day”. The Lift HoloDeck application superimposed nutrition facts, directions on how to order, and product information on top of the beverage. Neil contemplated the nutrition facts before ordering his preferred drink through the Lift HoloDeck application. Shortly after, he received a notification that his order was ready for pick up. Because Acquia Lift is able to track Neil's click and purchase behavior, it is also possible for Acquia Lift to push personalized product information and offerings through the Lift HoloDeck application.
Check out the demo video, which showcases the Lift HoloDeck prototype:
[youtube https://www.youtube.com/watch?v=3XJK_sn8bng&w=640&h=360]
The Lift HoloDeck prototype is exciting because it was built in less than 48 hours and uses technology that is commercially available today. The Lift HoloDeck experience was powered by Unity (a 3D game engine), Vuforia (an augmented reality library), Acquia Lift (a personalization engine) and DrupalCoin Blockchain as a content store.
The Lift HoloDeck prototype is a great example of how an organization can use Acquia and DrupalCoin Blockchain to support new user experiences and distribution platforms that engage customers in captivating ways. It's incredible to see our talented teams at Acquia develop such an innovative project in under 48 hours; especially one that could help reshape how customers interact with their favorite brands.
Congratulations to the entire Lift HoloDeck team: Ted Ottey, Robert Burden, Chris Nagy, Emily Feng, Neil O'Donnell, Stephen Smith, Roderik Muit, Rob Marchetti and Yuan Xie.
Source: Dries Buytaert www.buytaert.net


Solving the Last Item Problem for a Circular Distribution with Partially Overlapping Items

Let's say we wanted to have something like this:
Clockwise circular (cyclic) distribution with partially overlapping items.
At first, this doesn't seem too complicated. We start with 12 numbered items:

- 12.times do |i|
.item #{i}
We give these items dimensions, position them absolutely in the middle of their container, give them a background, a box-shadow (or a border) and tweak the text-related properties a bit so that everything looks nice.
$d: 2em;

.item {
position: absolute;
margin: calc(50vh - #{.5*$d}) 0 0 calc(50vw - #{.5*$d});
width: $d; height: $d;
box-shadow: inset 0 0 0 4px;
background: gainsboro;
font: 900 2em/ #{$d} trebuchet ms, tahoma, verdana, sans-serif;
text-align: center;
}
So far, so good:
See the Pen by thebabydino (@thebabydino) on CodePen.
Now all that's left is to distribute them on a circle, right? We get a base angle $ba for our distribution, we rotate each item by its index times this $ba angle and then translate it along its x axis:
$n: 12;
$ba: 360deg/$n;

.item {
transform: rotate(var(--a, 0deg)) translate(1.5*$d);

@for $i from 1 to $n { &:nth-child(#{$i + 1}) { --a: $i*$ba } }
}
The result seems fine at first:
See the Pen by thebabydino (@thebabydino) on CodePen.
However, on closer inspection, we notice that we have a problem: item 11 is above both item 0 and item 10, while item 0 is below both item 1 and 11:
Highlighting the issue we encounter with our circular distribution.
There are a number of ways to get around this, but they feel kind of hacky and tedious because they involve either duplicating elements, cutting corners with clip-path, adding pseudo-elements to cover the corners or cut them out via overflow. Some of these are particularly inefficient if we also need to animate the position of the items or if we want the items to be semi transparent.
So, what's the best solution then?
3D to the rescue! A really neat thing we can do in this case is to rotate these items in 3D such that their top part goes towards the back (behind the plane of the screen) and their bottom part comes forward (in front of the plane of the screen). We do this by chaining a third transform function - a rotateX():
transform: rotate(var(--a, 0deg)) translate(1.5*$d) rotateX(40deg)
At this point, nothing seems to have changed for the better - we still have the same problem as before and, in addition to that, our items appear to have shrunk along their y axes, which isn't something we wanted.
See the Pen by thebabydino (@thebabydino) on CodePen.
Let's tackle these issues one by one. First off, we need to make all our items belong to the same 3D rendering context and we do this by setting transform-style: preserve-3d on their parent (which in this case happens to be the body element).
The result after ensuring all our items are within the same 3D rendering context (live demo).
Those on current Firefox may have noticed we have a different kind of issue now. Item 8 appears both above the previous one (7) and above the next one (9), while item 7 appears both below the previous one (6) and below the next one (8).
Screenshot illustrating the Firefox issue.
This doesn't happen in Chrome or in Edge and it's due to a known Firefox bug where 3D transformed elements are not always rendered in the correct 3D order. Fortunately, this is now fixed in Nightly (55).
Now let's move on to the issue of the shrinking height. If we look at the first item from the side after the last rotation, this is what we see:
First item and its projection onto the plane of the screen, side view.
The AB line, rotated at 40° away from the vertical, is the actual height of our item (h). The CD line is the projection of this AB line onto the plane of the screen. This is the size we perceive our item's height to be after the rotation. We want this to be equal to d, which is also equal to the other dimension of our item (its width).
We draw a rectangle whose left edge is this projection (CD) and whose top right corner is the A point. Since the opposing edges in a rectangle are equal, the right edge AF of this rectangle equals the projection d. Since the opposing edges of a rectangle are also parallel, we also get that the ∠OAF (or ∠BAF, same thing) angle equals the ∠AOC angle (they're alternate angles).
Creating the CDFA rectangle.
Now let's remove everything but the right triangle AFB. In this triangle, the AB hypotenuse has a length of h, the ∠BAF angle is a 40° one and the AF cathetus is d.
The right triangle AFB
From here, we have that the cosine of the ∠BAF angle is d/h:
cos(40°) = d/h → h = d/cos(40°)
So the first thing that comes to mind is that, if we want the projection of our item to look as tall as it is wide, we need to give it a height of $d/cos(40deg). However, this doesn't fix the squished text or any squished backgrounds, so it's a better idea to leave it with its initial height, $d, and to chain another transform - a scaleY() using a factor of 1/cos(40deg). Even better, we can store the rotation angle into a variable $ax and then we have:
$d: 2em;
$ax: 40deg;

.item {
transform: rotate(var(--a, 0deg)) translate(1.5*$d) rotateX($ax) scaleY(1/cos($ax));
}
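As a quick sanity check (my own arithmetic, not part of the original demo): with $ax: 40deg, cos(40°) ≈ 0.766, so scaleY(1/cos(40deg)) stretches each item vertically by roughly 1.305×, which is exactly enough to undo the foreshortening introduced by the rotateX().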
The above changes give us the desired result (well, in browsers that support CSS variables and don't have 3D order issues):
The final result after fixing the height issue (live demo).
This method is really convenient because it doesn't require us to do anything different for any one item in particular and it works nicely, without any other extra tweaks needed, in the case of semitransparent items. However, the above demo isn't too exciting, so let's take a look at a few slightly more interesting use cases.
Note that the following demos only work in WebKit browsers, but this is not something related to the method presented in the article, it's just a result of the currently poor support of calc() for anything other than length values.
The first is a tic toc loader, which is a pure CSS recreation of a gif from the Geometric Animations tumblr. The animation is pretty fast in this case, so it may be a bit hard to notice the effect here. It only works in WebKit browsers as Firefox and Edge don't support calc() as an animation-delay value and Firefox doesn't support calc() in rgb() either.
Tic toc loader (see the live demo, WebKit only)
The second is a sea shell loader, also a pure CSS recreation of a gif from the same Tumblr and also WebKit only for the same reasons as the previous one.
Sea shell loader (see the live demo, WebKit only)
The third demo is a diagram. It only works in WebKit browsers because Firefox and Edge don't support calc() values inside rotate() functions and Firefox doesn't support calc() inside hsl() either:
Diagram (see the live demo, WebKit only)
The fourth is a circular image gallery, WebKit only for the same reason as the diagram above.
Circular image gallery (see the live demo, WebKit only)
The fifth and last is another loading animation, this time inspired by the Disc Buddies .gif by Dave Whyte.
Disc Buddies loading animation (see the live demo, WebKit only)

Solving the Last Item Problem for a Circular Distribution with Partially Overlapping Items is a post from CSS-Tricks
Source: CssTricks


Beautiful Social Media Icons To Use In Your Website

In this age of social media, getting your readers engaged can go a long way in driving traffic to your website. For this reason, you need to consider adding social media icons to your website.
It is very important to have buttons that allow your readers to share what they are reading on your blog or website. Watching which posts get shared will also give you an idea of what is most appealing to your readers.
Fortunately, these icons can be easily downloaded and installed on your website. However, you need to make sure that the buttons you use come from reputable and safe sources. Here are some websites where you can get high-quality social media buttons from clean, malware-free repositories:
Handycons

Handycons is a free, hand-drawn social media icon set consisting of 12 icons. The pack contains icons for del.icio.us, Digg, Mixx, DesignBump, StumbleUpon, Reddit, Technorati, Twitter, Developer Zone, Design Float, RSS Feed, and Email.
Long Shadow Icons

This set contains 30 social icons in all. It was designed by Maksym Khomenko, a designer from Lviv, Ukraine.
PSD Flat Social Icons

This set contains 16 flat social icons in PSD format. It is based on the new trend of long shadows that pop from the background.
Circle Icons

Each icon in this set comes in both dark and colored versions. It includes the most popular social media buttons used on websites.
Simple Flat Icons

Although they were originally designed for the iPhone 5, these social media buttons can be easily resized and used for other mobile devices or the web. They were designed with full shape layers.
Stamp Social Media Icons

This is a set of 40 icons from Lokas Software. It contains the most popular social media buttons such as Facebook, Twitter, Instagram, and others.
Square Shadow Icons

This is a set of 10 social media icons designed by Hakan Ertan.
Wooden Social Media Icons

This icon set consists of 36 social media buttons that you can use for your website. They were designed by Design Bolts.
Cute Social Media Icons

This set contains 40 icons designed by uiconstock.
Volumetric Social Media Icons

This set contains 40 social icons designed by Softicons.com.
Classic Social Media Icons

This set has 40 classic social media buttons from Brainleaf.
Flat 3D Social Media Icons

This set carries 32 icons from Lokas Software.
Flower Social Media Icons

This set contains 28 social media buttons from Lunar Templates. It contains some of the most popular social icons used on websites.
Hex Social Media Icons

This set has 32 icons, all created by Lokas Software.
Leaf Social Media Icons

Created by Jerry Low, this set contains most of the commonly used social media buttons.
Nail Polish Social Media Icons

This set contains 40 social icons by uiconstock. It consists of the most commonly used social media buttons.
Download Social Media Icons Now!
As soon as you find the best icons, use them on your website right away. That way, you can connect your website to social media channels and gain more followers.
[Featured banner image from Blogtrepreneur via Flickr Creative Commons]
The post Beautiful Social Media Icons To Use In Your Website appeared first on Web Designer Hub.
Source: http://www.webdesignerhub.com