Building a Website using Drupal 8 @IPRC Kigali

Start: 
2018-12-01 09:00 - 17:00 Africa/Kigali

Organizers: 

Bikino

Event type: 

Training (free or commercial)

Welcome to Drupal Training Day, Kigali.
This one day training is FREE to everyone and will cover the following:
Introduction to Drupal 8
Drupal Installation
Content
Extend
Layout
People
Manage.
At the end of the training, everyone will be able to create a basic website with full functionality.
Source: https://groups.drupal.org/node/512931/feed


ARIA is Spackle, Not Rebar

Much like their physical counterparts, the materials we use to build websites have purpose. To use them without understanding their strengths and limitations is irresponsible. Nobody wants to live in a poorly-built house. So why are poorly-built websites acceptable?
In this post, I'm going to address WAI-ARIA, and how misusing it can do more harm than good.

Materials as technology
In construction, spackle is used to fix minor defects on interiors. It is a thick paste that dries into a solid surface that can be sanded smooth and painted over. Most renters become acquainted with it when attempting to get their damage deposit back.
Rebar is a lattice of steel rods used to reinforce concrete. Every modern building uses it—chances are good you'll see it walking past any decent-sized construction site.
Technology as materials
HTML is the rebar-reinforced concrete of the web. To stretch the metaphor, CSS is the interior and exterior decoration, and JavaScript is the wiring and plumbing.
Every tag in HTML has what is known as native semantics. The act of writing an HTML element programmatically communicates to the browser what that tag represents. Writing a button tag explicitly tells the browser, "This is a button. It does buttony things."
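To make the contrast concrete, here is a minimal sketch: both elements below can be styled to look identical, but only the first tells the browser (and assistive technology) what it actually is.

```html
<!-- Announced as a button; focusable and keyboard-operable for free -->
<button type="button">Save draft</button>

<!-- Announced as nothing; needs role, tabindex and key handlers bolted on -->
<div class="button">Save draft</div>
```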
The reason this is so important is that assistive technology hooks into native semantics and uses it to create an interface for navigation. A page not described semantically is a lot like a building without rooms or windows: People navigating via a screen reader have to wander around aimlessly in the dark and hope they stumble onto what they need.
ARIA stands for Accessible Rich Internet Applications and is a relatively new specification developed to help assistive technology better communicate with dynamic, JavaScript-controlled content. It is intended to supplement existing semantic attributes by providing enhanced interactivity and context to screen readers and other assistive technology.
Using spackle to build walls
A concerning trend I've seen recently is the blind, mass-application of ARIA. It feels like an attempt by developers to conduct accessibility compliance via buckshot—throw enough of something at a target trusting that you'll eventually hit it.
Unfortunately, there is a very real danger to this approach. Misapplied ARIA has the potential to do more harm than good.
The semantics inherent in ARIA mean that, when applied improperly, it can create a discordant, contradictory mess when read via screen reader. Instead of hearing, "This is a button. It does buttony things.", people begin to hear things along the lines of, "This is nothing, but also a button. But it's also a deactivated checkbox that is disabled and it needs to shout that constantly."
If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
– First rule of ARIA use
In addition, ARIA is a new technology. This means that browser support and behavior is varied. While I am optimistic that in the future the major browsers will have complete and unified support, the current landscape has gaps and bugs.
Another important consideration is who actually uses the technology. Compliance isn't some purely academic vanity metric we're striving for. We're building robust systems for real people that allow them to get what they want or need with as little complication as possible. Many people who use assistive technology are reluctant to upgrade for fear of breaking functionality. Ever get irritated when your favorite program redesigns and you have to re-learn how to use it? Yeah.
The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.
– Tim Berners-Lee
It feels disingenuous to see massive JavaScript frameworks reap the benefits of the DRY principle while also slathering redundant and misapplied attributes into their markup. The web is accessible by default. For better or for worse, we are free to do what we want to it after that.
The fix
This isn't to say we should completely avoid using ARIA. When applied with skill and precision, it can turn a confusing or frustrating user experience into an intuitive and effortless one, with far fewer brittle hacks and workarounds.
A little goes a long way. Before considering other options, start with markup that semantically describes the content it is wrapping. Test extensively, and only apply ARIA if deficiencies between HTML's native semantics and JavaScript's interactions arise.
Development teams will appreciate the advantage of terse code that's easier to maintain. Savvy developers will use a CSS-Trick™ and leverage CSS attribute selectors to create systems where visual presentation is tied to semantic meaning.
input:invalid,
[aria-invalid] {
  border: 4px dotted #f64100;
}
Examples
Here are a few of the more common patterns I've seen recently, and why they are problematic. This doesn't mean these are the only kinds of errors that exist, but it's a good primer on recognizing what not to do:
<li role="listitem">Hold the Bluetooth button on the speaker for three seconds to make the speaker discoverable</li>
The role is redundant. The native semantics of the li element already describe it as a list item.
<p role="command">Type CTRL+P to print</p>
command is an Abstract Role. They are only used in ARIA to help describe its taxonomy. Just because an ARIA attribute seems like it is applicable doesn't mean it necessarily is. Additionally, the kbd tag could be used on "CTRL" and "P" to more accurately describe the keyboard command.
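A corrected sketch of that snippet drops the abstract role and marks up the keys:

```html
<p>Type <kbd>CTRL</kbd>+<kbd>P</kbd> to print</p>
```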
<div role="button" class="button">Link to device specifications</div>
Failing to use a button tag runs the risk of not accommodating all the different ways a user can interact with a button and how the browser responds. In addition, the a tag should be used for links.
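Since the text reads like a link, a sketch of more appropriate markup is a plain anchor (the destination here is a placeholder):

```html
<a class="button" href="/device-specifications">Link to device specifications</a>
```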
<body aria-live="assertive" aria-atomic="true">
Usually the intent behind something like this is to expose updates to the screen reader user. Unfortunately, when scoped to the body tag, any page change—including all JS-related updates—is announced immediately. A setting of assertive on aria-live also means that each update interrupts whatever it is the user is currently doing. This is a disastrous experience, especially for single page apps.
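If the goal is to announce updates, a better-behaved sketch scopes a polite live region to the one element that actually changes:

```html
<!-- Only changes inside this element are announced, and "polite"
     waits until the user is idle instead of interrupting them. -->
<div id="status-message" aria-live="polite" aria-atomic="true"></div>
```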
<div aria-checked="true"></div>
You can style a native checkbox element to look like whatever you want it to. Better support! Less work!
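A sketch of the native alternative, which carries its checked state, keyboard support, and screen reader announcement for free:

```html
<label>
  <input type="checkbox" checked>
  Subscribe to updates
</label>
```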
<div role="link" tabindex="40" title="Link text">
  <a>Link text</a>
</div>
Yes, it's actual production code. Where to begin? First, never use a tabindex value greater than 0. Secondly, the title attribute probably does not do what you think it does. Third, the anchor tag should have a destination—links take you places, after all. Fourth, the role of link assigned to a div wrapping an a element is entirely superfluous.
<h2 class="h3" role="heading" aria-level="1">How to make a perfect soufflé every time</h2>
Credit is where credit's due: Nicolas Steenhout outlines the issues for this one.
Do better
Much like content, markup shouldn't be an afterthought when building a website. I believe most people are genuinely trying to do their best most of the time, but wielding a technology without knowing its implications is dangerous and irresponsible.
I'm usually more of a honey-instead-of-vinegar kind of person when I try to get people to practice accessibility, but not here. This isn't a soft sell about the benefits of developing and designing with an accessible, inclusive mindset. It's a post about doing your job.
Every decision a team makes affects a site's accessibility.
– Laura Kalbag
Get better at authoring
Learn about the available HTML tags, what they describe, and how to best use them. Same goes for ARIA. Give your page template semantics the same care and attention you give your JavaScript during code reviews.
Get better at testing
There's little excuse to not incorporate a screen reader into your testing and QA process. NVDA is free. macOS, Windows, iOS and Android all come with screen readers built in. Some nice people have even written guides to help you learn how to use them.
Automated accessibility testing is a huge boon, but it also isn't a silver bullet. It won't report on what it doesn't know to report, meaning it's up to a human to manually determine if navigating through the website makes sense. This isn't any different than other usability testing endeavors.
Build better buildings
Universal Design teaches us that websites, like buildings, can be both beautiful and accessible. If you're looking for a place to start, here are some resources:

A Book Apart: Accessibility for Everyone, by Laura Kalbag
egghead.io: Intro to ARIA and Start Building Accessible Web Applications Today, by Marcy Sutton
Google Developers: Introduction to ARIA, by Meggin Kearney, Dave Gash, and Alice Boxhall
YouTube: A11ycasts with Rob Dodson, by Rob Dodson
W3C: WAI-ARIA Authoring Practices 1.1
W3C: Using ARIA
Zomigi: Videos of screen readers using ARIA
Inclusive Components, by Heydon Pickering
HTML5 Accessibility
The American Foundation for the Blind: Improving Your Website's Accessibility
Designing for All: 5 Ways to Make Your Next Website Design More Accessible, by Carie Fisher
Accessible Interface Design, by Nick Babich

ARIA is Spackle, Not Rebar is a post from CSS-Tricks
Source: CssTricks


The Ultimate Uploading Experience in 5 Minutes

Filestack is a web service that completely handles file uploads for your app.
Let's imagine a little web app together. The web app allows people to write reviews for anything they want. They give the review a name, type up their review, upload a photo, and publish it. Saving a name and text to a database is fairly easy, so the trickiest part about this little app is handling those photo uploads. Here are just a few considerations:

You'll need to design a UI. What does the area look like that encourages folks to pick a photo and upload it? What happens when they are ready to upload that photo and interact? You'll probably want to design that experience.
You'll likely want to support drag and drop. How is that going to work?
You'll probably want to show upload progress. That's just good UX.
A lot of people keep their files in Dropbox or other cloud services these days; can you upload from there?
What about multiple files? Might make sense to upload three or four images for a review!
Are you going to restrict sizes? The app should probably just handle that automatically, right?

That's certainly not a comprehensive list, but I think you can see how every bit of that is a bunch of design and integration work. Well, hey, that's the job, right? It is, but the job is even more so about being smart with your time and money to make your app a success. Being smart here, in my opinion, is seriously looking at Filestack to give you a fantastic uploading experience, while you spend your time on your product vision, not already-solved problems.

With Filestack, as a developer, implementation is beautifully easy:
var client = filestack.init('yourApiKey');

client.pick(pickerOptions).then(function(result) {
  console.log(JSON.stringify(result.filesUploaded));
});
And you get an incredibly full-featured picker like this:
Notice how the upload is so fast you barely notice it.
And that is completely configurable, of course. Their documentation is super nice, so figuring out how to do all that is no problem. Want to limit it to 3 files? Sure. Only images? Yep. Allow only particular upload integrations? That's your choice.
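For example, a picker configuration along these lines is possible. The option names below are illustrative; verify the exact schema against Filestack's documentation.

```javascript
// Illustrative options object; check Filestack's docs for the exact schema.
var pickerOptions = {
  maxFiles: 3,                                  // limit the upload to three files
  accept: ['image/*'],                          // images only
  fromSources: ['local_file_system', 'dropbox'] // allow only particular integrations
};
```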
We started this talking about photos. Filestack is particularly good at handling those for you, in part because it means that you can offer a really great UX (upload whatever!) while also making sure you do all the right developer stuff with those photos. Like if a user uploads a 5 MB photo, no problem, you can allow them to drop it, alter it, and then you can serve a resized and optimized version of it.

Similar stuff exists for video, audio, and other special file types. And don't let me stop you from checking out all the really fancy stuff, like facial recognition, filtering, collages, and all that.
Serve it from where? Well that's your call. If you really need to store your files somewhere specific, you can do that. Easier, you can let Filestack be your document store as well as your CDN for delivering the files.
Another thing to think about with this type of integration work is maintenance. Rolling your own system means you're on your own when things change. Browsers evolve. The mobile landscape is ever-changing. APIs change. Third parties come and go and make life difficult. All of that stuff is abstracted away with Filestack.
With all this happening in JavaScript, Filestack works great with literally any way you're building a website. And if you have an iOS app as well, don't worry, they have an SDK for that too!
Direct Link to Article — Permalink
The Ultimate Uploading Experience in 5 Minutes is a post from CSS-Tricks
Source: CssTricks


Zoey is An Advanced Ecommerce Platform for Web Designers and Agencies

Inspired Magazine
Inspired Magazine - creativity & inspiration daily
It’s often tough to argue that one ecommerce building platform is faster than another, unless you complete an unbiased speed test with a large sample size. However, sometimes you can objectively say that a system is faster just by playing around with it.
That seems to be the case with Zoey: during my tests I was able to launch a few ecommerce websites within minutes, they actually looked great out of the box, and I was technically able to start collecting payments from customers.
Granted, my sites were made for testing, but the same experience can be transferred over to regular developers. And that’s exactly what Zoey is trying to achieve. The company isn’t necessarily attempting to bring in regular ecommerce entrepreneurs, but rather the web designers who are making hundreds, or thousands, of ecommerce sites for clients.
We recommend you request a demo of Zoey to get a better feel for what the platform can do for you. It's completely free, and it can give you a more hands-on test to show you whether or not it's something that might work for your business.
What’s Zoey Really All About?

As we mentioned above, the Zoey platform is for agencies and web designers, so you’re better off looking somewhere else if you only want to make one ecommerce site.
It’s an ecommerce platform that has two advantages: first, it includes the power and flexibility of a modern-day open source solution, similar to how WordPress is so flexible. Secondly, Zoey also works like a SaaS platform, making it easier to use and more robust in its feature set.
When looking at the features, I noticed that Zoey provides a wonderful drag and drop builder, along with robust ecommerce capabilities. I like this quite a bit, since it’s much harder to design with solutions like Shopify. In fact, Shopify doesn’t even have a drag and drop builder, so most developers are left with lots of settings they need to adjust for a new site.
Zoey also provides a blazing fast infrastructure, meaning that developers are able to service just about any type of client. For instance, let’s say you’re an experienced developer with multiple ecommerce clients. A new client wants a sophisticated B2B website, while another one is more interested in a basic startup online store, with everything they need for selling a handful of products.
Zoey has the cutting-edge platform to serve both of these clients.
Is There Anything Truly Unique About Zoey?

When you look at other ecommerce platforms like Shopify, Volusion and Bigcommerce, you’ll notice that these are more akin to consumer products, being marketed to every possible person who might want to make their own website. Yes, developers have been known to utilize Shopify and Volusion when making ecommerce sites for clients, but neither are built to make the process easier for developers.
Zoey, on the other hand, was made for designers and agencies, cutting out the amount of code needed, and expanding the number of possibilities when it comes to satisfying clients. In short, web designers should theoretically cut down the amount of time they spend developing with Zoey, and in turn, make more money.
Oh yeah, this also makes support and client satisfaction a little nicer.
The deep ecommerce functionality serves far more businesses than SaaS platforms like Shopify and Bigcommerce. Developers often have to utilize multiple ecommerce platforms for different clients. However, Zoey generally cuts down on the number of platforms needed, since Zoey is technically the only one required for all company sizes. Therefore, you could build an extremely complicated site right after pumping out a small shop with five products, all with the same platform.
I would compare this to a web designer who mainly generates blogs and business sites with WordPress. It’s far easier to stick with one CMS.
The Need for Speed
Zoey claims that building a website with its platform is 4X faster than if you were to go with a competitor like Shopify or Volusion. Does this hold up? I would say so, considering my tests brought me directly to the elements I needed, and the drag and drop editor is far more intuitive than some of the others I have used on the market.
Along with the design tools, Zoey provides rich functionality and a beautiful SaaS infrastructure, both of which allow agencies and freelancers to boost their production levels.

With this time saved, your company can save money and cut down on the number of people working on one project. One site may require you to allocate two or three designers for full-time work. However, Zoey tends to eliminate the need for these integration teams. Even the most complex ecommerce sites can be generated by one designer, freeing up time and resources for your other designers to handle other jobs. Overall, scaling up becomes more realistic, since your large freelance teams aren’t all tangled up in one project.
Affordability for Design Clients

One impressive part about Zoey involves wholesale and B2B tools. Zoey is the only SaaS system with a built-in wholesale and B2B suite. Therefore, web integration clients aren’t forced to pay tens of thousands of dollars for a more custom/complex ecommerce site.

Not to mention, the B2B tools make it possible for freelancers and agencies to take on projects like these. In the past, a freelancer might pass up a large, custom site, but now there’s no reason to skip out on this revenue stream. The interface is simple enough: you’re not burdened with tons of coding, and you even get some nice templates.
Who Should Use Zoey?
This answer is simple: developers and agencies that want to stick to one platform and cut down on integration time when making ecommerce sites. I wouldn’t say it’s the first recommendation for individuals constructing online shops, but developers should be all over Zoey.
We encourage all agencies and developers to request a demo of Zoey. Let us know in the comments section if you have any questions about this gem.
This post Zoey is An Advanced Ecommerce Platform for Web Designers and Agencies was written by Inspired Mag Team and first appeared on Inspired Magazine.
Source: inspiredm.com


Building a Website Performance Monitor

A couple of months ago I wrote about using WebPageTest, and more specifically its RESTful API, to monitor the performance of a website. Unarguably, the data it provides can translate to precious information for engineers to tweak various parts of a system to make it perform better.
But how exactly does this tool sit within your integration workflow? When should you run tests, and what exactly do you do with the results? How do you visualise them?

Now that we have the ability to obtain performance metrics programmatically through the RESTful API, we should be looking into ways of persisting that data and tracking its progress over time. This means being able to see how the load time of a particular page is affected by new features, assets or infrastructural changes.
I set out to create a tool that allowed me to compile and visualise all this information, and I wanted to build it in a way that allowed others to do it too.

What I had in mind. Roughly.

The wish list
I wanted this tool to be capable of:

Running tests manually or have them triggered by a third-party, like a webhook fired after a GitHub release commit
Running recurrent tests with a configurable time interval
Testing multiple URLs, with the ability to configure different test locations, devices and connectivity types
Grouping any number of performance metrics and display them on a chart
Defining budgets for any performance metric and visualise them on the charts, alongside the results
Configuring alerts (email and Slack) to be sent when metrics exceed their budget

Before proceeding any further, I have to point out that there are established solutions in the market that deliver all of the above. Companies like SpeedCurve or Calibre offer a professional monitoring tool as a service that you should seriously consider if you’re running a business. They use private instances of WebPageTest and don’t rely on the public one, which means no usage limits and no unpredictable availability.
The tool I created and that I'll introduce to you during the course of this article is a modest and free alternative to those services. I built it because I don't have a budget that allows me to pay a monthly fee for a performance monitoring service, and I'm sure other individuals, non-profit organisations and open-source projects are in the same boat. My aim was to bring this type of tooling to people that otherwise might not have access to it.
The idea
The system I had in mind had to have three key components:

An application that listens for test requests and communicates with the WebPageTest API
A data store to persist the test results
A visualisation layer to display them, with a series of graphs to show the progress of the various metrics over time

I really wanted to build something that people of all levels of expertise could set up and use for free, and that heavily influenced the decisions I made about the architecture and infrastructure of the platform.
It may seem like an unusual approach, but GitHub is actually a pretty interesting choice to achieve #2 and #3. With GitHub’s API, you can easily read and write files from and to a repository, so you can effectively use it as a persistent data store. On top of that, GitHub Pages makes the same repository a great place to serve a website from. You get a fast and secure hosting service, with the option to use a custom domain. All this comes for free, if you’re okay with using a public repository.
As for #1, I built a small Node.js application that receives test requests, sends them to WebPageTest, retrieves the results and pushes them to a GitHub repository as data files, which will then be picked up by the visualisation layer. I’ve used this approach before when I built Staticman and it worked really well.
The diagram below shows the gist of the idea.

The system architecture

Oh, at some point I needed a name. I called it SpeedTracker.
You can see it in action here or jump straight into using it by following this link. If you want to know more about how it works under the hood, what it was like to build it and where I see it going, then read on.
Building the dashboard
I’m a big fan of Jekyll. For those of you who are not familiar with it, Jekyll is a program that takes structured content from files in various formats (Markdown, JSON, YAML or even CSV) and generates HTML pages. It’s part of a larger family of static site generators.
It’s particularly relevant to this project because of its native integration with GitHub Pages, which enables any repository to automatically build a Jekyll site every time it receives new or updated content and instantly serve the generated HTML files on a designated URL. With this in mind, I could make the API layer write the test results to JSON files and have Jekyll read and output them to a web page.
By storing the data in a GitHub repository, we're putting people in control of their data. It's not hidden somewhere in some service's database, it's on a free, open repository that can easily be downloaded as a ZIP file. And by using JSON, we're choosing a universally-accepted format for the data, making it easier to re-use it somewhere else.
To cater for the requirement of being able to test multiple sites with different devices, connection types and locations, I introduced the concept of profiles. Every test must run against a profile, which consists of a file (see example) that holds information about the URL to be tested and any parameters to be passed to WebPageTest.
In this file, you can also define an interval, in hours, at which tests for the given profile will be repeated. You can change this value, or remove scheduled tests altogether, by changing the interval property in the profile file.
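Purely as an illustration (the actual schema lives in the example profile mentioned above), a profile file could look something like this; the field names here are hypothetical:

```yaml
# Hypothetical profile sketch; field names are illustrative only.
interval: 24          # re-run the test every 24 hours
parameters:
  url: https://example.com
  location: ec2-eu-west-1:Chrome
  connectivity: Cable
```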
The challenge now was how to deliver results for multiple profiles and offer some basic date filtering functionality (like being able to drill down on results for the past week, month or year) from a static site backed by a bunch of JSON files. I couldn’t simply have Jekyll dump the entire dataset to a page, or the generated HTML files would quickly get prohibitively large.
Jekyll meets React
I started by organising the files in a folder and file structure so that test results were grouped by date and profile. Jekyll could cycle through this structure and generate a list of all the available data files for each site, along with their full paths.
With that list in place, I could build a client-side application that given a profile and a date range, could asynchronously fetch just the files required to display the affected results, extract and compile the various metrics and plot them on a series of interactive charts.
I built that using React.
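The date filtering at the heart of that application boils down to computing which data files a given range touches. As a sketch (the function name and path scheme below are hypothetical, not SpeedTracker's actual layout):

```javascript
// Hypothetical helper: given a profile name and a date range, build the
// list of per-month JSON data files the dashboard needs to fetch.
// The path scheme is illustrative only.
function dataFilesForRange(profile, from, to) {
  var files = [];
  var cursor = new Date(from.getFullYear(), from.getMonth(), 1);

  while (cursor <= to) {
    var month = ('0' + (cursor.getMonth() + 1)).slice(-2);
    files.push('results/' + profile + '/' + cursor.getFullYear() + '/' + month + '.json');
    cursor.setMonth(cursor.getMonth() + 1);
  }

  return files;
}

// Files needed to chart mid-October through early December 2016
console.log(dataFilesForRange('my-site', new Date(2016, 9, 15), new Date(2016, 11, 1)));
```

Each returned file can then be fetched asynchronously and its metrics merged before plotting.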

Jekyll powering the React application

Performance budgets
A good way to get a team in the right mindset about web performance is to define budgets for one or more metrics and abide by them religiously. Tim Kadlec explains it in this article a lot better than I ever could, but the basic idea is that you specify that your website must load in under a certain amount of time on a certain type of connection.
That threshold must then be taken into account every time you plan on adding a new feature or asset to the site. If the new addition takes you over the budget, you have to abandon it, or otherwise find a way to remove or optimise an existing feature or asset to make room for the new one.
I wanted to give budgets a prominent place in the platform. When creating a profile, you can set a budget for any of the metrics captured and a horizontal line will show in the respective chart alongside the data, giving you a visual indication of how well your site is doing.

Paul Irish recommends a 1000ms budget for SpeedIndex

It's also possible to define alerts that are triggered when any of the budgets is exceeded, so that you and your team can instantly be notified via email or Slack when things aren't looking so great.
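At its core, that alerting reduces to a simple comparison per metric. As a hedged sketch (the function and metric names below are hypothetical, not the project's actual code):

```javascript
// Hypothetical helper: return the names of metrics whose latest result
// exceeds the budget configured for them.
function findBudgetViolations(results, budgets) {
  return Object.keys(budgets).filter(function (metric) {
    return results[metric] !== undefined && results[metric] > budgets[metric];
  });
}

// SpeedIndex is over its 1000ms budget; TTFB is within its 300ms budget
console.log(findBudgetViolations(
  { SpeedIndex: 1400, TTFB: 180 },
  { SpeedIndex: 1000, TTFB: 300 }
)); // → [ 'SpeedIndex' ]
```

Anything returned by a check like this would then be handed off to the email or Slack notifier.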
A centralised service
The core idea behind this project was to make this type of tooling free and accessible to everyone. Making it open-source is obviously a big first step, and the fact that you can use free services to deploy both the front-end (GitHub Pages or Netlify) and the back-end (Heroku or now) definitely help. But still, I felt that having to install and deploy the API layer would create barriers for less experienced people.
For that reason, I built the application in such a way that a single instance can be used to deliver test results to multiple sites and GitHub repositories, so effectively it can work as a centralised service that many people can use. There's a server running a public instance of the API, available for anyone to use for free.
This means that all you need to get started is to install the Jekyll site on a GitHub repository, add the username speedtracker-bot as a collaborator, configure a profile and a couple of other things and you're set.
The screencast below can guide you through the process.
[vimeo 185952137 w=640 h=360]
Where to go from here
If this tool succeeds at helping some of you improve the performance of your sites, I'll be very happy. If you use it and decide to donate some of your time to help make it better for everyone, I'll be even happier!
Straight away, I can think of some things I'd like to see happening:

Add support for annotations on the charts to mark specific events, like an infrastructural change or important feature release
It's already possible to have a GitHub webhook triggering a new test, but we could go a step further and actually read the contents of the webhook payload to create annotations on the charts to mark a commit or release
Make it easier to display custom metrics
Add support for scripting
Better documentation and tests

If you feel you can help, by all means pitch in. If you have any questions or issues in getting started, send me a tweet.
Happy tests!

Building a Website Performance Monitor is a post from CSS-Tricks
Source: CssTricks


A Guide to Usability: Your Friendly Neighborhood Grocery Store

"It's a fact: People won't use your website if they can't find their way around it."
--Steve Krug, author of Don't Make Me Think: A Common Sense Approach To Web Usability

Navigation: It's everywhere in our lives. We have it in our cars, phones, malls, grocery stores, street signs and our own homes. After all, if you didn't know how to navigate anything, it would be impossible to get anywhere. The same is true for websites.