Employing specific JavaScript animation frameworks
Making use of all the powerful properties of CSS Transforms and CSS Animation
Trying out emerging CSS3 animation tools like Sencha Animator
If it wasn’t already apparent, these demos are exaggerated examples and probably aren’t practical in a lot of environments. However, there are a handful of great sites out there that honor animation techniques—metaphor, physics, and misdirection, among others—like Benjamin De Cock’s vCard, 20 Things I Learned About Browsers and the Web by Fantasy Interactive, and the Nike Snowboarding site by Ian Coyle and HEGA. They’re wonderful testaments to what you can do to aid interaction for users.
My goal was to show you the ‘why’ and the ‘how.’ Your charge is to discern the ‘where’ and the ‘when.’ Happy animating!
Extreme Design
by Hannah Donovan

Recently, I set out with twelve other designers and developers for a 19th-century fortress on the Channel Island of Alderney. We were going to /dev/fort, a sort of band camp for geeks. Our cohort’s mission: to think up, build and finish something – without readily available internet access.
Alderney runway, photo by Chris Govias
Wait, no internet?
Well, pretty much. As the creators of /dev/fort James Aylett and Mark Norman Francis put it: “Imagine a place with no distractions – no IM, no Twitter”. But also no way to quickly look up a design pattern, code sample or source material. Like packing for camping, /dev/fort means bringing everything you’ll need on your back or your hard drive: from long johns to your favourite icon set.
We got to work the first night discussing ideas for what we wanted to build. By the time breakfast was cleared up the next morning, we’d settled on Russ’s idea to make the Apollo 13 (PDF) transcript accessible. Days two and three were spent collaboratively planning (KJ style) what features we wanted to build, and unravelling the larger UX challenges of the project. The next five days were spent building it. Within 36 hours of touchdown at Southampton Airport, we launched our creation: spacelog.org
The weather was cold, the coal fire less than ideal, food and supplies a hike away, and the process lightning-fast. A week of designing under extreme circumstances called for an extreme process. Some of this was driven by James’s and Norm’s experience running these things, but a lot of it materialised while we were there – especially for our three-strong design team (myself, Gavin O’Carroll and Chris Govias) who, though we knew each other, had never worked together as a group in this kind of scenario before.
The outcome was a pretty spectacular process, with some key takeaways useful for any small group trying to build something quickly.
What it’s like inside the fort
/dev/fort has the pressure and pace of a hack day without being a hack day – primarily, no workshops or interruptions, but also a different mentality. While hack days are typically developer-driven with a ‘hack first, design later (if at all)’ attitude, James was quick to tell the team to hold off from writing any code until we had a plan. This put a healthy pressure on the design and product folks to slash through the UX problems before we started building.
While the fort definitely had more of a hack day feel, all of us were familiar with Agile methods, so we borrowed a few useful techniques such as morning stand-ups and an emphasis on teamwork. We cut some really good features to make our launch date, and chunked the work based on user goals, iterating as we went.
What made this design process work?
A golden ratio of teams
In my personal experience, both professionally and in free-form situations like this, there’s a tendency to get or hire just one designer. Leaders of businesses, founders of start-ups, organisers of events: one designer is not enough! Finding one ace-blooded designer who can ‘do everything’ will always result in bottlenecks and burnout. Like the nuances between different development languages, design is a multifaceted discipline, and very few can claim to be equally strong in every aspect. Overlap in skill set will result in a stronger, more robust interface.
More importantly, however, having lots of designers to go around meant that we all had the opportunity to pair with developers, polishing the details that don’t usually get polished. As soon as we launched, the public reception of the design and UX was overwhelmingly positive (proof!). But also, a lot of people asked us who the designer was, attributing it to one person.
While it’s important to note that everyone in our team was multitalented (and could easily shift between roles, helping us all stay unblocked), the golden ratio James and Norm devised was two product/developer folks, three interaction designers and eight developers.
photo by Ben Firshman
Equality inside the fortress walls
Something magical about the fort is how everyone leaves the outside world on the drawbridge. Job titles, professional status, Twitter followers, and so on. Like scout camp, a mutual respect and trust is expected of all the participants. Like extreme programming, extreme design requires us all to be equal partners in a collaborative team. I think this is especially worth noting for designers; our past is filled with the clear hierarchy of the traditional studio system which, however important for taste and style, seems less compatible with modern web/software development methods.
Being equal doesn’t mean being the same, however. We established clear roles and teams for ourselves on the second day, deferring to that person when a decision needed to be made. As the interface coalesced, the designers and developers took ownership over certain parts to ensure the details got looked after, while staying open to ideas and revisions from the rest of the cohort.
Create a space where everyone who enters is equal, but be sure to establish clear roles. Even if it’s just for a short while, the environment will be beneficial.
photo by Ben Firshman
Hang your heraldry from the rafters
Forts and castles are full of lore: coats of arms; paintings of battles; suits of armour. It’s impossible not to be surrounded by these stories, words and ways of thinking. Like the whiteboards on the walls, putting organisational lore in your physical surroundings makes it impossible not to see.
Ryan Alexander brought some of those static-cling whiteboard sheets which were quickly filled with use cases; IA; team roles; and, most importantly, a glossary. As soon as we started working on the project, we realised we needed to get clear on what certain words meant: what was a logline, a range, a phase, a key moment? Were the back-end people using these words in the same way design and product was? Quickly writing up a glossary of terms meant everyone was instantly speaking the same language. There was no “Ah, I misunderstood because in the data structure x means y” or, even worse, accidental seepage of technical language into the user interface copy.
Put a glossary of your internal terminology somewhere big and fat on the wall. Stand around it and argue until you agree on what it says. Leave it up; don’t underestimate the power of ambient communication and physical reference.
Plan more, download less
While the internet is forbidden inside the fort, we did go on downloading expeditions: NASA photography; code documentation; and so on. The project wouldn’t have been possible without a few trips to the web. We had two lists on the wall: groceries and supplies; internets – “loo roll; Tom Stafford photo”.
This changed our usual design process, forcing us to plan carefully and think of what we needed ahead of time. Getting to the internet was a thirty-minute hike up a snow covered cliff to the town airport, so you really had to need it, too.
The path to the internet
For the visual design, especially, this resulted in more focus up front, and communication between the designers on what assets we required. It made us make decisions earlier and stick with them, creating less distraction and churn later in the process.
Try it at home: unplug once you’ve got the things you need. As an artist, it’s easier to let your inner voice shine through if you’re not looking at other people’s work while creating.
Social design
Finally, our design team experimented with a collaborative approach to wireframing. Once we had collectively nailed down use cases, IA, user journeys and other critical artefacts, we tried a pairing approach. One person drew in Illustrator in real time as the other two articulated what to draw. (This would work equally well with two people, but with three it meant that one of us could jump up and consult the lore on the walls or clarify a technical detail.) The result: we ended up considering more alternatives and quickly rallying around one solution, and resolved difficult problems more quickly.
At a certain stage we discovered it was more efficient for one person to take over – this happened around the time when the basic wireframes existed in Illustrator and we’d collectively run through the use cases, making sure that everything was accounted for in a broad sense. At this point, take a break, go have a beer, and give yourself a pat on the back.
Put the files somewhere accessible so everyone can use them as their base, and divide up the more detailed UI problems, screens or journeys. At this level of detail it’s better to have your personal headspace.
Gavin called this ‘social design’. Chatting and drawing in real time turned what was normally a rather solitary act into a very social process, with some really promising results. I’d tried something like this before with product or developer folks, and it can work – but there’s something really beautiful about switching places and everyone involved being equally quick at drawing. That’s not something you get with non-designers, and frequent swapping of the ‘driver’ and ‘observer’ roles is a key aspect to pairing.
Tackle the forest collectively and the trees individually – it will make your framework more robust and your details more polished. Win/win.
The return home
Our flight off the island was delayed, and, grateful to see a 3G signal on our phones again, we filled the time with a flurry of domain name look-ups, Twitter catch-up, and e-mails to loved ones. A week in an isolated fort really made me appreciate continuous connectivity, but also just how unique some of these processes might be.
You just never know what crazy place you might be designing from next.
Circles of Confusion
by Andy Clarke

Long before I worked on the web, I specialised in training photographers how to use large format, 5×4″ and 10×8″ view cameras – film cameras with swing and tilt movements, bellows and upside down, back to front images viewed on dim, ground glass screens. It’s been fifteen years since I clicked a shutter on a view camera, but some things have stayed with me from those years.
In photography, even the best lenses don’t focus light onto a point (infinitely small in size) but onto ‘spots’ or circles in the ‘film/image plane’. These circles of light have dimensions, despite being microscopically small. They’re known as ‘circles of confusion’.
As the circles of light become larger, parts of a photograph appear more unsharp. On the flip side, when circles are smaller, an image looks sharper and more in focus. This is the basis for photographic depth of field, and with it comes the knowledge that no photograph can be perfectly focused, never truly sharp. Instead, photographs can only be ‘acceptably unsharp’.
Acceptable unsharpness is now a concept that’s relevant to the work we make for the web, because often – unless we compromise – websites cannot look or be experienced exactly the same across browsers, devices or platforms. Accepting that fact, and learning to look upon these natural differences as creative opportunities instead of imperfections, can be tough. Deciding which aspects of a design must remain consistent and, therefore, possibly require more time, effort or compromises can be tougher. Circles of confusion can help us, our bosses and our customers make better, more informed decisions.
Acceptable unsharpness
Many clients still demand that every aspect of a design should be ‘sharp’ – that every user must see rounded boxes, gradients and shadows – without regard for the implications. I believe that this stems largely from the fact that they have previously been shown designs – and asked for sign-off – using static images.
It’s also true that in the past, organisations have invested heavily in style guides which, while maybe still useful in offline media, have a strictness that often fails to allow for the flexibility that we need to create experiences that are appropriate to a user’s browser or device capabilities.
We live in an era where web browsers and devices have wide-ranging capabilities, and websites can rarely look or be experienced exactly the same across them. Is a particular typeface vital to a user’s experience of a brand? How important are gradients or shadows? Are rounded corners really that necessary? These decisions determine how ‘sharp’ an element should be across browsers with different capabilities and, therefore, how much time, effort or extra code and images we devote to achieving consistency between them. To help our clients make those decisions, we can use circles of confusion.
Circles of confusion
Using circles of confusion involves plotting aspects of a visual design into a series of concentric circles, starting at the centre with elements that demand the most consistency. Then, work outwards, placing elements in order of their priority so that they become progressively ‘softer’, more defocused as they’re plotted into outer rings.
If layout and typography must remain consistent, place them in the centre circle as they’re aspects of a design that must remain ‘sharp’.
When gradients are important – but not vital – to a user’s experience of a brand, plot them close to, but not in the centre. This makes everyone aware that to achieve consistency, you’ll need to carve out extra images for browsers that don’t support CSS gradients.
If achieving rounded corners or shadows in all browsers isn’t important, place them into outer circles, allowing you to save time by not creating images or employing JavaScript workarounds.
I’ve found plotting aspects of a visual design into circles of confusion is a useful technique when explaining the natural differences between browsers to clients. It sets more realistic expectations and creates an environment for more meaningful discussions about progressive and emerging technologies. Best of all, it enables everyone to make better and more informed decisions about design implementation priorities.
Involving clients makes the implications of their decisions more transparent. For me, this has sometimes meant shifting deadlines, or it has allowed me to more easily justify an increase in fees. Most important of all, circles of confusion have helped the people that I work with move beyond yesterday’s one-size-fits-all thinking about visual design, towards accepting the rich diversity of today’s web.
Everything You Wanted To Know About Gradients (And a Few Things You Didn’t)
by Ethan Marcotte

Hello. I am here to discuss CSS3 gradients. Because, let’s face it, what the web really needed was more gradients.
Still, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer’s vocabulary. There’s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work.
Of course, that whole ‘proper application’ thing is the tricky bit.
But given their place in our toolkit and their prominence online, it really is heartening to see that we can create gradients directly with CSS. They’re part of the draft images module, and implemented in two of the major rendering engines.
Still, I’ve always found CSS gradients to be one of the more confusing aspects of CSS3. So if you’ll indulge me, let’s take a quick look at how to create CSS gradients—hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser.
Gradient theory 101 (I hope that’s not really a thing)
Right. So before we dive into the code, let’s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here’s a straightforward one:
I spent seconds (ahem, hours) designing this gradient. I hope you like it.
At either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different:
(Don’t ever really do this. Please. I beg you.)
It’s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way.
Now, color stops alone do not a gradient make. Between each is a transition point, the fail-over point between the two stops. Now, the transition point doesn’t need to fall exactly between stops: it can be brought closer to one stop or the other, influencing the overall shape of the gradient.
A tale of two syntaxes
Armed with our new vocabulary, let’s look at a CSS gradient in the wild. Behold, the simple input button:
There’s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here’s the CSS that makes it happen:
input[type=submit] {
background-color: #F47A20;
background-image: -moz-linear-gradient(
#FAA51A,
#F47A20
);
background-image: -webkit-gradient(linear, 0 0, 0 100%,
color-stop(0, #FAA51A),
color-stop(1, #F47A20)
);
}
I’ve borrowed David DeSandro’s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that’s perfectly understandable—heck, it sort of turned mine. But let’s step through the CSS slowly, and see if we can’t make it a little less terrifying.
Verbose WebKit is verbose
Here’s the syntax for our little gradient on WebKit:
background-image: -webkit-gradient(linear, 0 0, 0 100%,
color-stop(0, #FAA51A),
color-stop(1, #F47A20)
);
Woof. Quite a mouthful, no? Well, here’s what we’re looking at:
WebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients.
The next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). Linear gradients are simply drawn along the path between those two points, which allows us to change the direction of our gradient simply by altering its start and end points.
Afterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop’s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself.
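For instance, changing our button’s gradient from vertical to horizontal is just a matter of moving the end point. A quick sketch, reusing the same two colors:

background-image: -webkit-gradient(linear, 0 0, 100% 0,
  color-stop(0, #FAA51A),
  color-stop(1, #F47A20)
);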
For a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us:
background-image: -webkit-gradient(linear, 0 0, 0 100%,
from(#FAA51A),
to(#F47A20)
);
from(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#F47A20) is the same as color-stop(1, #F47A20) or color-stop(100%, #F47A20)—in both cases, we’re simply declaring the first and last color stops in our gradient.
Terse Gecko is terse
WebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3.
Naturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented.
Here’s how we get gradient-y in Gecko:
background-image: -moz-linear-gradient(
#FAA51A,
#F47A20
);
Wait, what? Done already? That’s right. By default, -moz-linear-gradient assumes you’re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that’s the case, then you simply need to specify your color stops, delimited with a few commas.
I know: that was almost… painless. But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them.
We can specify an origin point for our gradient:
background-image: -moz-linear-gradient(50% 100%,
#FAA51A,
#F47A20
);
As well as an angle, to give it a direction:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#F47A20
);
And we can specify multiple stops, simply by adding to our comma-delimited list:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#FCC,
#F47A20
);
By adding a percentage after a given color value, we can determine its position along the gradient path:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#FCC 20%,
#F47A20
);
So that’s some of the flexibility implicit in the W3C/Mozilla-style syntax.
Now, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit’s more verbose approach to the, well, looseness behind the -moz syntax. À chacun son gradient syntax.
Still, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same.
Reusing color stops for fun and profit
But CSS gradients aren’t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects.
Tim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it’s the feature image that really catches the eye.
You see, there’s nothing that says you can’t reuse color stops. And Tim’s exploited that perfectly.
He’s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he’s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo.
But then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss.
And his final color stop ends at the same fully transparent white, completing the effect. Hot? I do believe so.
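Pieced together from that description, the CSS might look something like the following sketch in the -moz syntax (the values are my approximation of the effect, not Tim’s actual code):

background-image: -moz-linear-gradient(top left,
  rgba(255, 255, 255, 0),
  rgba(255, 255, 255, 0.1) 50%,
  rgba(255, 255, 255, 0) 50%,
  rgba(255, 255, 255, 0)
);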
Rocking the radials
We’ve been looking at linear gradients pretty exclusively. But I’d be remiss if I didn’t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar:
And here’s the relevant CSS:
background: -moz-radial-gradient(50% 100%, farthest-side,
rgb(204, 255, 255) 1%,
rgb(85, 85, 85) 15%,
rgba(85, 85, 85, 0)
);
background: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15,
from(rgb(204, 255, 255)),
to(rgba(85, 85, 85, 0))
);
Now, the syntax builds on what we’ve already learned about linear gradients, so much of it might be familiar to you: picking out color stops and transition points, as well as the two syntaxes’ reliance on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, …)) to shift into circular mode.
Mozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There’s also a size constant defined (farthest-side), which determines the reach and shape of our gradient.
WebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each point also accepts a radius in pixels, allowing you to control the skew and breadth of the circle.
Again, this is a fairly modest little radial gradient. Time and article length (and, let’s be honest, your author’s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can’t recommend the references at Mozilla and Apple strongly enough.
Leave no browser behind
But no matter the kind of gradients you’re working with, there is a large swathe of browsers that simply don’t support gradients. Thankfully, it’s fairly easy to declare a sensible fallback—it just depends on the kind of fallback you’d like. Essentially, gradient-blind browsers will disregard any properties containing references to -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties.
For example: if you’d like to fall back to a flat color, simply declare a separate background-color:
.nav {
background-color: #000;
background-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
Or perhaps just create three separate background properties.
.nav {
background: #000;
background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
We can even build on this to fall back to a non-gradient image:
.nav {
background: #000 url(""faux-gradient-lol.png"") repeat-x ;
background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
No matter the approach you feel most appropriate to your design, it’s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings.
(If you’re feeling especially masochistic, there’s even a way to get simple linear gradients working in IE via Microsoft’s proprietary filters. Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I’d recommend avoiding those.
And don’t tell Andy Clarke I told you, or he’ll probably unload his Derringer at me. Or something.)
Go forth and, um, gradientify!
It’s entirely possible your head’s spinning. Heck, mine is, but that might be the effects of the ’nog. But maybe you’re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine.
Well, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they’re easily modifiable by tweaking a few CSS properties. Because, let’s face it, less time spent yelling at Photoshop is a very, very good thing.
Of course, CSS-generated gradients are not without their drawbacks. The syntax can be confusing, and it’s still under development at the W3C. As we’ve seen, browser support is still very much in flux. And it’s possible that gradients themselves have some real performance drawbacks—so test thoroughly, and gradient carefully.
But still, as syntaxes converge, and support improves, I think generated gradients can make a compelling tool in our collective belts. The tasteful design is, of course, entirely up to you.
So have fun, and get gradientin’.
Using the WebFont Loader to Make Browsers Behave the Same
by Richard Rutter

Web fonts give us designers a whole new typographic palette with which to work. However, browsers handle the loading of web fonts in different ways, and this can lead to inconsistent user experiences.
Safari, Chrome and Internet Explorer leave a blank space in place of the styled text while the web font is loading. Opera and Firefox show text with the default font, which switches over when the web font has loaded, resulting in the so-called Flash of Unstyled Text (aka FOUT). Some people prefer Safari’s approach as it eliminates FOUT, while others think the Firefox way is more appropriate as content can be read whilst fonts download. Whatever your preference, the WebFont Loader can make all browsers behave the same way.
The WebFont Loader is a JavaScript library that gives you extra control over font loading. It was co-developed by Google and Typekit, and released as open source. The WebFont Loader works with most web font services as well as with self-hosted fonts.
The WebFont Loader tells you when the following events happen as a browser downloads web fonts (or loads them from cache):
when fonts start to download (‘loading’)
when fonts finish loading (‘active’)
if fonts fail to load (‘inactive’)
If your web page requires more than one font, the WebFont Loader will trigger events for individual fonts, and for all the fonts as a whole. This means you can find out when any single font has loaded, and when all the fonts have loaded (or failed to do so).
The WebFont Loader notifies you of these events in two ways: by applying special CSS classes when each event happens; and by firing JavaScript events. For our purposes, we’ll be using just the CSS classes.
Implementing the WebFont Loader
As stated above, the WebFont Loader works with most web font services as well as with self-hosted fonts.
Self-hosted fonts
To use the WebFont Loader when you are hosting the font files on your own server, paste the following code into your web page:
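<!-- a reconstruction based on the WebFont Loader’s custom module documentation;
     adjust the loader URL if you’d rather self-host the script -->
<script src="http://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js"></script>
<script>
WebFont.load({
  custom: {
    families: ['Font Family Name', 'Another Font Family'],
    urls: ['http://yourwebsite.com/styles.css']
  }
});
</script>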
Replace Font Family Name and Another Font Family with a comma-separated list of the font families you want to check against, and replace http://yourwebsite.com/styles.css with the URL of the style sheet where your @font-face rules reside.
Fontdeck
Assuming you have added some fonts to a website project in Fontdeck, use the afore-mentioned code for self-hosted solutions and replace http://yourwebsite.com/styles.css with the URL of the CSS link tag in your Fontdeck website settings page. It will look something like http://f.fontdeck.com/s/css/xxxx/domain/nnnn.css.
Typekit
Typekit’s JavaScript-based implementation incorporates the WebFont Loader events by default, so you won’t need to include any WebFont Loader code.
Making all browsers behave like Safari
To make Firefox and Opera work in the same way as WebKit browsers (Safari, Chrome, etc.) and Internet Explorer, and thus minimise FOUT, you need to hide the text while the fonts are loading.
While fonts are loading, the WebFont Loader adds a class of wf-loading to the html element. Once the fonts have loaded, the wf-loading class is removed and replaced with a class of wf-active (or wf-inactive if all of the fonts failed to load). This means you can style elements on the page while the fonts are loading and then style them differently when the fonts have finished loading.
So, let’s say the text you need to hide while fonts are loading is contained in all paragraphs and top-level headings. By writing the following style rule into your CSS, you can hide the text while the fonts are loading:
.wf-loading h1, .wf-loading p {
visibility:hidden;
}
Because the wf-loading class is removed once the fonts have loaded, the visibility:hidden rule will stop being applied, and the text revealed. You can see this in action on this simple example page.
That works nicely across the board, but the situation is slightly more complicated. WebKit doesn’t wait for all fonts to load before displaying text: it displays text elements as soon as the relevant font is loaded.
To emulate WebKit more accurately, we need to know when individual fonts have loaded, and apply styles accordingly. Fortunately, as mentioned earlier, the WebFont Loader has events for individual fonts too.
When a specific font is loading, a class of the form wf-fontfamilyname-n4-loading is applied. Assuming headings and paragraphs are styled in different fonts, we can make our CSS more specific as follows:
.wf-fontfamilyname-n4-loading h1,
.wf-anotherfontfamily-n4-loading p {
visibility:hidden;
}
Note that the font family name is transformed to lower case, with all spaces removed. The n4 is a shorthand for the weight and style of the font family. In most circumstances you’ll use n4 but refer to the WebFont Loader documentation for exceptions.
You can see it in action on this Safari example page (you’ll probably need to disable your cache to see any change occur).
Making all browsers behave like Firefox
To make WebKit browsers and Internet Explorer work like Firefox and Opera, you need to explicitly show text while the fonts are loading. In order to make this happen, you need to specify a font family which is not a web font while the fonts load, like this:
.wf-fontfamilyname-n4-loading h1 {
font-family: 'arial narrow', sans-serif;
}
.wf-anotherfontfamily-n4-loading p {
font-family: arial, sans-serif;
}
You can see this in action on the Firefox example page (again you’ll probably need to disable your cache to see any change occur).
And there’s more
That’s just the start of what can be done with the WebFont Loader. More areas to explore would be tweaking font sizes to reduce the impact of reflowing text and to better cater for very narrow fonts. By using the JavaScript events much more can be achieved too, such as fading in text as the fonts load.
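As a parting sketch of that last idea (fontactive is part of the WebFont Loader’s JavaScript API; the fade itself is left to your own CSS or script):

WebFont.load({
  custom: {
    families: ['Font Family Name'],
    urls: ['http://yourwebsite.com/styles.css']
  },
  // fires once per font as it becomes available
  fontactive: function (familyName, fvd) {
    // e.g. add a class that transitions the text’s opacity from 0 to 1
  }
});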
My CSS Wish List
by Inayaili de León Persson

I love Christmas. I love walking around the streets of London, looking at the beautifully decorated windows, seeing the shiny lights that hang above Oxford Street and listening to Christmas songs.
I’m not going to lie though. Not only do I like buying presents, I love receiving them too. I remember making long lists that I would send to Father Christmas with all of the Lego sets I wanted to get. I knew I could only get one a year, but I would spend days writing the perfect list.
The years have gone by, but I still enjoy making wish lists. And I’ll tell you a little secret: my mum still asks me to send her my Christmas list every year.
This time I’ve made my CSS wish list. As before, I’d be happy with just one present.
Before I begin…
… this list includes:
things that don’t exist in the CSS specification (if they do, please let me know in the comments – I may have missed them);
others that are in the spec, but it’s incomplete or lacks use cases and examples (which usually means that properties haven’t been implemented by even the most recent browsers).
Like with any other wish list, the further down I go, the more unrealistic my expectations – but that doesn’t mean I can’t wish. Some of the things we wouldn’t have thought possible a few years ago have been implemented and our wishes fulfilled (think multiple backgrounds, gradients and transformations, for example).
The list
Cross-browser implementation of font-size-adjust
When one of the fall-back fonts from your font stack is used, rather than the preferred (first) one, you can retain the aspect ratio by using this very useful property. It is incredibly helpful when the fall-back fonts are smaller or larger than the initial one, which can make layouts look less polished.
What font-size-adjust does is divide the original font-size of the fall-back fonts by the font-size-adjust value. This preserves the x-height of the preferred font in the fall-back fonts. Here’s a simple example:
p {
font-family: Calibri, ""Lucida Sans"", Verdana, sans-serif;
font-size-adjust: 0.47;
}
In this case, if the user doesn’t have Calibri installed, both Lucida Sans and Verdana will keep Calibri’s aspect ratio, based on the font’s x-height. This property is a personal favourite and one I keep pointing to.
Firefox supported this property from version three. So far, it’s the only browser that does. Fontdeck provides the font-size-adjust value along with its fonts, and has a handy tool for calculating it.
More control over overflowing text
The text-overflow property lets you control text that overflows its container. The most common use for it is to show an ellipsis to indicate that there is more text than what is shown. To be able to use it, the container should have overflow set to something other than visible, and white-space: nowrap:
div {
white-space: nowrap;
width: 100%;
overflow: hidden;
text-overflow: ellipsis;
}
This, however, only works for blocks of text on a single line. In the wish list of many CSS authors (and in mine) is a way of defining text-overflow: ellipsis on a block of multiple text lines. Opera has taken the first step and added support for the -o-ellipsis-lastline value, which can be used instead of ellipsis. This value is not part of the CSS3 spec, but we could certainly make good use of it if it were…
WebKit has -webkit-line-clamp to specify how many lines to show before cutting with an ellipsis, but support is patchy at best and there is no control over where the ellipsis shows in the text. Many people have spent time wrangling JavaScript to do this for us, but the methods used are very processor intensive, and introduce a JavaScript dependency.
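For the curious, the WebKit approach looks something like this sketch (note that it only works together with the old-style flexible box properties):

div {
  display: -webkit-box;
  -webkit-box-orient: vertical;
  -webkit-line-clamp: 3;
  overflow: hidden;
}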
Indentation and hanging punctuation properties
You might notice a trend here: almost half of the items in this list relate to typography. The lack of fine-grained control over typographical detail is a general concern among designers and CSS authors. Indentation and hanging punctuation fall into this category.
The CSS3 specification introduces two new possible values for the text-indent property: each-line; and hanging. each-line would indent the first line of the block container and each line after a forced line break; hanging would invert which lines are affected by the indentation.
The proposed hanging-punctuation property would allow us to specify whether opening and closing brackets and quotes should hang outside the edge of the first and last lines. The specification is still incomplete, though, and asks for more examples and use cases.
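Since the specification is still in flux the syntax may well change, but as drafted a usage sketch could be as simple as:

blockquote {
  hanging-punctuation: first last;
}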
Text alignment and hyphenation properties
Following the typographic trend of this list, I’d like to add better control over text alignment and hyphenation properties. The CSS3 module on Generated Content for Paged Media already specifies five new hyphenation-related properties (namely: hyphenate-dictionary; hyphenate-before and hyphenate-after; hyphenate-lines; and hyphenate-character), but it is still being developed and lacks examples.
In the text alignment realm, the new text-align-last property allows you to define how the last line of a block (or a line just before a forced break) is aligned, if your text is set to justify. Its value can be: start; end; left; right; center; and justify. The text-justify property should also allow you to have more control over text set to text-align: justify but, for now, only Internet Explorer supports this.
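A quick sketch of how these two might combine, bearing in mind the patchy support just mentioned:

p {
  text-align: justify;
  text-align-last: center;
}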
calc()
This is probably my favourite item in the list: the calc() function. This function is part of the CSS3 Values and Units module, but it has only been implemented by Firefox (4.0). To take advantage of it now, you need to use the Mozilla vendor prefix, -moz-calc().
Imagine you have a fluid two-column layout where the sidebar column has a fixed width of 240 pixels, and the main content area fills the rest of the width available. This is how you could create that using -moz-calc():
#main {
width: -moz-calc(100% - 240px);
}
Can you imagine how many hacks and headaches we could avoid were this function available in more browsers? Transitions and animations are really nice and lovely but, for me, it’s the ability to do the things that calc() allows you to that deserves the spotlight and to be pushed for implementation.
Selector grouping with -moz-any()
The -moz-any() selector grouping has been introduced by Mozilla but it’s not part of any CSS specification (yet?); it’s currently only available on Firefox 4.
This would be especially useful with the way HTML5 outlines documents, where we can have any number of variations of several levels of headings within numerous types of containers (think sections within articles within sections…).
Here is a quick example (copied from the Mozilla blog post on the subject) of how -moz-any() works. Instead of writing:
section section h1, section article h1, section aside h1,
section nav h1, article section h1, article article h1,
article aside h1, article nav h1, aside section h1,
aside article h1, aside aside h1, aside nav h1, nav section h1,
nav article h1, nav aside h1, nav nav h1 {
font-size: 24px;
}
You could simply write:
:-moz-any(section, article, aside, nav)
:-moz-any(section, article, aside, nav) h1 {
font-size: 24px;
}
Nice, huh?
More control over styling form elements
Some are of the opinion that form elements shouldn’t be styled at all, since a user might not recognise them as such if they don’t match the operating system’s controls. I partially agree: I’d rather put the choice in the hands of designers and expect them to be capable of deciding whether their particular design hampers or improves usability.
I would say the same idea applies to font-face: while some fear designers might go crazy and litter their web pages with dozens of different fonts, most welcome the freedom to use something other than Arial or Verdana.
There will always be someone who will take this freedom too far, but it would be useful if we could, for example, style the default Opera date picker, or Safari’s slider control (think star movie ratings, for example).
Parent selector
I don’t think there is one CSS author out there who has never come across a case where he or she wished there was a parent selector. There have been many suggestions as to how this could work, but a variation of the child selector is usually the most popular:
article < h1 {
…
}
One can dream…
Flexible box layout
The Flexible Box Layout Module sounds a bit like magic: it introduces a new box model to CSS, allowing you to distribute and order boxes inside other boxes, and determine how the available space is shared.
Two of my favourite features of this new box model are:
the ability to redistribute boxes in a different order from the markup
the ability to create flexible layouts, where boxes shrink (or expand) to fill the available space
Let’s take a quick look at the second case. Imagine you have a three-column layout, where the first column takes up twice as much horizontal space as the other two:
With the flexible box model, you could specify it like this:
body {
display: box;
box-orient: horizontal;
}
#main {
box-flex: 2;
}
#links {
box-flex: 1;
}
aside {
box-flex: 1;
}
If you decide to add a fourth column to this layout, there is no need to recalculate units or percentages; it’s as easy as that.
Browser support for this property is still in its early stages (Firefox and WebKit need their vendor prefixes), but we should start to see it being gradually introduced as more attention is drawn to it (I’m looking at you…). You can read a more comprehensive write-up about this property on the Mozilla developer blog.
It’s easy to understand why it’s harder to start playing with this module than with things like animations or other more decorative properties, which don’t really break your layouts when users don’t see them. But it’s important that we do, even if only in very experimental projects.
Nested selectors
Anyone who has never wished they could do something like the following in CSS, cast the first stone:
article {
h1 { font-size: 1.2em; }
ul { margin-bottom: 1.2em; }
}
Even though it can easily turn into a specificity nightmare and promote redundancy in your style sheets (if you abuse it), it’s easy to see how nested selectors could be useful. CSS compilers such as Less or Sass let you do this already, but not everyone wants or can use these compilers in their projects.
Every wish list has an item that could easily be dropped. In my case, I would say this is one that I would ditch first – it’s the least useful, and also the one that could cause more maintenance problems. But it could be nice.
Implementation of the ::marker pseudo-element
The CSS Lists module introduces the ::marker pseudo-element, that allows you to create custom list item markers. When an element’s display property is set to list-item, this pseudo-element is created.
Using the ::marker pseudo-element you could create something like the following:
Footnote 1: Both John Locke and his father, Anthony Cooper, are
named after 17th- and 18th-century English philosophers; the real
Anthony Cooper was educated as a boy by the real John Locke.
Footnote 2: Parts of the plane were used as percussion instruments
and can be heard in the soundtrack.
where the footnote marker is generated by the following CSS:
li::marker {
content: ""Footnote "" counter(notes) "":"";
text-align: left;
width: 12em;
}
li {
counter-increment: notes;
}
You can read more about how to use counters in CSS in my article from last year.
Bear in mind that the CSS Lists module is still a Working Draft and is listed as “Low priority”. I did say this wish list would start to grow more unrealistic closer to the end…
Variables
The sight of the word ‘variables’ may make some web designers shy away, but when you think of them applied to things such as repeated colours in your stylesheets, it’s easy to see how having variables available in CSS could be useful.
Think of a website where the main brand colour is applied to elements like the main text, headings, section backgrounds, borders, and so on. In a particularly large website, where the colour is repeated countless times in the CSS and where it’s important to keep the colour consistent, using variables would be ideal (some big websites are already doing this by using server-side technology).
Again, Less and Sass allow you to use variables in your CSS but, again, not everyone can (or wants to) use these.
If you are using Less, you could, for instance, set the font-family value in one variable, and simply call that variable later in the code, instead of repeating the complete font stack, like so:
@fontFamily: Calibri, "Lucida Grande", "Lucida Sans Unicode", Helvetica, Arial, sans-serif;
body {
font-family: @fontFamily;
}
Other features of these CSS compilers might also be useful, like the ability to ‘call’ a property value from another selector (accessors):
header {
background: #000000;
}
footer {
background: header['background'];
}
or the ability to define functions (with arguments), saving you from writing large blocks of code when you need to write something like, for example, a CSS gradient:
.gradient (@start:"""", @end:"""") {
background: -webkit-gradient(linear, left top, left bottom, from(@start), to(@end));
background: -moz-linear-gradient(-90deg,@start,@end);
}
button {
.gradient(#D0D0D0,#9F9F9F);
}
Standardised comments
Each CSS author has his or her own style for commenting their style sheets. While this isn’t a massive problem on smaller projects, where maybe only one person will edit the CSS, in larger scale projects, where dozens of hands touch the code, it would be nice to start seeing a more standardised way of commenting.
One attempt at creating a standard for CSS comments is CSSDOC, an adaptation of Javadoc (a documentation generator that extracts comments from Java source code into HTML). CSSDOC uses ‘DocBlocks’, a term borrowed from the phpDocumentor Project. A DocBlock is a human- and machine-readable block of data which has the following structure:
/**
* Short description
*
* Long description (this can have multiple lines and contain tags)
*
* @tags (optional)
*/
CSSDOC includes a standard for documenting bug fixes and hacks, colours, versioning and copyright information, amongst other important bits of data.
I know this isn’t a CSS feature request per se; rather, it’s just me pointing you at something that is usually overlooked but that could contribute towards keeping style sheets easier to maintain and to hand over to new developers.
Final notes
I understand that if even some of these were implemented in browsers now, it would be a long time until all vendors were up to speed. But if we don’t talk about them and experiment with what’s available, then it will definitely never happen.
Why haven’t I mentioned better browser support for existing CSS3 properties? Because that would be the same as adding chocolate to your Christmas wish list – you don’t need to ask, everyone knows you want it.
The list could go on. There are dozens of other things I would love to see integrated in CSS or further developed. These are my personal favourites: some might be less useful than others, but I’ve wished for all of them at some point.
Part of the research I did while writing this article was asking some friends what they would add to their lists; other than a couple of items I already had in mine, everything else was different. I’m sure your list would be different too. So tell me, what’s on your CSS wish list?
Jank-Free Image Loads
by Eric Portis

There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then —
[video: an image loads in and pushes the text down the page]
— oops! — an image pops in above it, pushing said text down the page, and our poor reader loses their place.
By default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I’ll chart a path into a jank-free future – one in which it’s easy and natural to author img elements that load like this:
[video: the same image loads into space reserved for it, without moving the text]
Jank is a very old problem, and there is a very old solution to it: the width and height attributes on img. The idea is: if we stick an image’s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.
width
Specifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.
—The HTML 3.2 Specification, published on January 14 1997
Unfortunately for us, when width and height were first spec’d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:
See the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.
width and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.
I have good news, bad news, and great news.
The good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.
The bad news is that these techniques are all terrible, cumbersome hacks. They’re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.
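To give a taste of the genre: the best known of these is the padding hack, which leans on the fact that percentage padding is calculated from an element’s width. A sketch, assuming a 4:3 image wrapped in a container div (the class name is mine):

.ratio-box {
  position: relative;
  height: 0;
  /* 3 / 4 = 0.75, so this reserves a 4:3 slot */
  padding-top: 75%;
}
.ratio-box img {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
}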
So, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.
aspect-ratio in CSS
The first proposed feature? An aspect-ratio property in CSS!
This would allow us to write CSS like this:
img {
width: 100%;
}
.thumb {
aspect-ratio: 1/1;
}
.hero {
aspect-ratio: 16/9;
}
This’ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.
Alas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It’s images – possibly user generated images – that can have any aspect ratio. The really tricky problem is unknown-when-you’re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:
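<!-- a sketch; the ratio is per-image data, presumably pulled from your CMS -->
<img src="image.jpg" style="aspect-ratio: 400 / 300" />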
And inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style="" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I’m cutting off my (or anyone else’s) ability to change those aspect ratios, for whatever reason, later.
How might we specify aspect ratios at a lower level? How might we give browsers information about an image’s dimensions, without giving them explicit instructions about how to style it?
I’ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!
A brief note on intrinsic and extrinsic sizing
What do I mean by “intrinsic” and “extrinsic?”
The intrinsic size of an image is, put simply, how big it’d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800×600 image has an intrinsic width of 800px.
The extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800×600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn’t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.
It surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).
CSS aspect-ratio lets us avoid setting extrinsic heights and widths – and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.
The last tool I’m going to talk about gets us out of the extrinsic sizing game all together — which, I think, is only appropriate for a feature that we’re going to be using in HTML.
intrinsicsize in HTML
The proposed intrinsicsize attribute will let you do this:
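<img src="image.jpg" intrinsicsize="800x600" />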
That tells the browser, “hey, this image.jpg that I’m using here – I know you haven’t loaded it yet but I’m just going to let you know right away that it’s going to have an intrinsic size of 800×600.” This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image’s intrinsic size.
You may ask (I did!): wait, what if my img references multiple resources, which all have different intrinsic sizes? Well, if you’re using srcset, intrinsicsize is a bit of a misnomer – what the attribute will do then is specify an intrinsic aspect ratio:
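<!-- the srcset file names here are placeholders; the important parts are sizes and intrinsicsize -->
<img src="image.jpg"
  srcset="image-small.jpg 400w, image-medium.jpg 800w, image-large.jpg 1600w"
  sizes="75vw"
  intrinsicsize="3x2" />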
In the future (and behind the “Experimental Web Platform Features” flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 – it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.
Can’t wait
I seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!
Don’t let all of these details bury the big takeaway here: sometime soon (🤞 2019‽ 🤞), we’ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can’t wait!
Creating My First Chrome Extension
by Jennifer Wong

Writing a Chrome extension isn’t as scary as it seems!
Not too long ago, I used a Chrome extension called 20 Cubed. I’m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.
Unfortunately, the developer stopped updating the extension and removed it from Chrome’s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?
Fortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the “old-fashioned” way. Sky’s the limit!
Setup
But first, some things you’ll need to know about before getting started:
Callbacks
Timeouts
Chrome Dev Tools
Developing with Chrome extension methods requires a lot of callbacks. If you’ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I’d highly recommend brushing up on that subject before getting started.
Hyperbole and a Half
Timeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn’t consider the fact that I’d be juggling three timers. And I probably would’ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.
On the note of organization, abstraction is important! You might have any combination of the following:
The Chrome extension options page
The popup from the Chrome Menu
The windows or tabs you create
The background scripts
And that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you’ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.
Alright, now that you know all that up front, let’s get going!
Documentation
TL;DR READ THE DOCS.
A few things to get started:
Read Google’s primer on browser extensions
Have a look at their Getting started tutorial
Check out their overview on Chrome Extensions
This overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.
The manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here’s what my manifest.json looked like, for example:
https://github.com/jennz0r/eye-rest/blob/master/manifest.json
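In outline, it contained something like the following – a paraphrase rather than the file verbatim, so see the repository for the real thing:
{
  "manifest_version": 2,
  "name": "Eye Rest",
  "version": "1.0",
  "background": {
    "scripts": ["helpers.js", "background.js"],
    "persistent": false
  },
  "page_action": {
    "default_popup": "popup.html"
  },
  "permissions": ["alarms", "declarativeContent", "storage"]
}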
Because I’m a visual learner, I found the images that describe the extension’s architecture most helpful.
To clarify this diagram, the background.js file is the extension’s event handler. It’s constantly listening for browser events, which you’ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.
The Popup is the little window that appears when you click on an extension’s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { "default_popup": FILE_NAME_HERE }.
The Options page is exactly as it says. It displays customizable options, visible only when the user right-clicks on the extension’s icon in the Chrome menu and chooses “Options”. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.
Content scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension.
Debugging
A quick note: don’t forget the debugging tutorial!
Just like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you’re likely to have several inspector windows open – one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.
For example, I kept seeing the error “This request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.” Well, it turns out there are limitations on how often you can sync stored information.
Another error I kept seeing was “Alarm delay is less than minimum of 1 minutes. In released .crx, alarm “ALARM_NAME_HERE” will fire in approximately 1 minutes”. Well, it turns out there are minimum interval times for alarms.
Chrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help!
Me adding a ton of `console.log`s while trying to debug my alarms.
Eye Rest Functionality
Ok, so what is the extension I created? Again, it’s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:
If the extension is running AND
If the user has not clicked Pause in the Popup HTML AND
If the counter in the Popup HTML is down to 00:00 THEN
Open a new window with Timer HTML AND
Start a 20 sec countdown in Timer HTML AND
Reset the Popup HTML counter to 20:00
If the Timer HTML is down to 0 sec THEN
Close that window. Rinse. Repeat.
Sounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I could have created, I decided to make one that’s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn’t realize until I started coding.
For visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it’s a pun.)
API
Now let’s discuss the APIs that I used to build this extension.
Alarms
What even are alarms? I didn’t know either.
Alarms are basically Chrome’s setTimeout and setInterval. They exist because, as Google says…
DOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.
For more information, check out this background migration doc.
One interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn’t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clearing any other alarms without that unique name.
Background Scripts
For Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file.
I wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.
I also wanted my Popup script to do the same things when the user has unpaused the extension’s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.
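A sketch of what that shared helper might look like – the function name matches the article, but the body is my approximation:
// helpers.js – shared by the Background and Popup scripts
function clearAndCreateAlarm() {
  // A unique name per load works around unreliable alarm garbage collection (see Alarms above)
  var alarmName = 'eye-rest-' + Date.now();
  chrome.alarms.clearAll(function () {
    chrome.alarms.create(alarmName, { periodInMinutes: 20 });
    localStorage.setItem('nextAlarmTime', Date.now() + 20 * 60 * 1000);
  });
}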
Other APIs
Windows
I use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.
One day, while coding late into the evening, I found it very confusing that the windows.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window’s HTML. Until then, it hadn’t dawned on me that the url could be relative. Duh. I was tired!
I pass the timer.html as the url option, as well as type, size, position, and other visual options.
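A sketch of that call – the dimensions and position here are placeholders:
// background.js – open the Timer window when the alarm fires
chrome.alarms.onAlarm.addListener(function () {
  chrome.windows.create({
    url: 'timer.html',
    type: 'popup',
    width: 400,
    height: 300,
    top: 0,
    left: 0
  });
});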
Storage
Maybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome’s storage is avoiding quotas and write operation maximums.
I wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it’s quite complicated and requires lots of writes. That’s why I went with the user’s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.
Declarative Content
The Declarative Content API allows you to show your extension’s page action based on several types of matches, without needing to take a host permission or inject a content script. So you’ll need this to get your extension to work in the browser!
You can see me set this in my Background script. Because I want my extension’s popup to appear on every page one is browsing, I leave the page matchers empty.
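The pattern looks roughly like this, following Google’s getting-started boilerplate:
chrome.runtime.onInstalled.addListener(function () {
  chrome.declarativeContent.onPageChanged.removeRules(undefined, function () {
    chrome.declarativeContent.onPageChanged.addRules([{
      // An empty matcher means the page action is shown on every page
      conditions: [new chrome.declarativeContent.PageStateMatcher({})],
      actions: [new chrome.declarativeContent.ShowPageAction()]
    }]);
  });
});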
There are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!
The Extension
Here’s what my original Popup looked like before I added styles.
And here’s what it looks like with new styles. I guess I’m going for a Nickelodeon feel.
And here’s the Timer window and Popup together!
Publishing
Publishing is a cinch. You just zip up your files, create a new Google Developer account (or use an existing one), upload the files, add some details, and pay a one-time $5 fee. That’s all! Then your extension will be available on the Chrome extension store! Neato :D
My extension is now available for you to install.
Conclusion
I thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it’s definitely achievable with some time, persistence, and good ole Google searches.
Eventually, I’d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup!
Either way, with Eye Rest’s framework built out, I’m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.
Researching a Property in the CSS Specifications
Rachel Andrew

I frequently joke that I’m “reading the specs so you don’t have to”, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However, waiting for someone like me to write an article about something is a pretty slow way to get the information you need. Sometimes people like me get things wrong, or specifications change after we write a tutorial.
What if you could just look it up yourself? That’s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs.
Where are the CSS Specifications?
The easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4.
Who are the specifications for?
CSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected.
Which version of a spec should I look at?
There are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (which is always the newest version) or at a date, which will be the date of that publication. If I’m referring to a particular Working Draft in an article I’ll typically link to the dated version. That way if the information changes it is possible for someone to see where I got the information from at the time of writing.
If you want the very latest additions and changes to the spec, then the Editor’s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor’s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor’s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed.
If you are especially keen on seeing updates to specifications keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated.
How to approach a spec
The first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. Therefore, if you are looking at a vast spec, know that you probably won’t need to read all the way to the bottom, or read every section in detail.
The second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. There are some key specifications that many other specifications draw on, such as:
Values and Units
Intrinsic and Extrinsic Sizing
Box Alignment
When something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore.
Researching your property
As an example we will take a look at the property grid-auto-rows, this property defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property.
You might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties.
Clicking on a property will take you to the TR version of the spec, the latest published draft, and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns.
Value
For value we can see that the property accepts a value of <track-size>+. The next thing to do is to find out what that actually means; clicking <track-size> will take you to where it is defined in the Grid spec.
The <track-size> value is defined as accepting various values:
<track-breadth>
minmax( <inflexible-breadth> , <track-breadth> )
fit-content( <length-percentage> )
We need to head down the rabbit hole to find out what each of these mean. From here we essentially go down line by line until we have unpacked the value of <track-size>.
<track-breadth> is defined just below as:
<length-percentage>
<flex>
min-content
max-content
auto
So these are all things that would be valid to use as a value for grid-auto-rows.
The first value of <length-percentage> is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a <length>, in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values.
grid-auto-rows: 100px;
grid-auto-rows: 1em;
grid-auto-rows: 30%;
When using percentages, it is important to know what it is a percentage of, as a percentage has to resolve from something. There is text in the spec which explains how column and row percentages work.
“<percentage> values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.”
This means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid.
The second value of <flex> is also defined here, as “A non-negative dimension with the unit fr specifying the track’s flex factor.” This is the fr unit, and the spec links to a fuller definition of fr as this unit is only used in Grid Layout, so it is therefore defined in the grid spec. We now know that a valid value would be:
grid-auto-rows: 1fr;
There is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum. This means that 1fr is really minmax(auto, 1fr). This is why having a number of tracks all at 1fr does not mean that all are equal sized, as a larger item in any of the tracks would have a large auto size and therefore would be larger after spare space had been distributed.
We then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out.
For example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary.
We can now add min-content and max-content to our available values.
grid-auto-rows: min-content;
grid-auto-rows: max-content;
The final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track will grow to fit the content used. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, that means these tracks stretch by default. Tracks using other types of length will not behave like this.
grid-auto-rows: auto;
So, this was the list for <track-breadth>; the next possible value is minmax( <inflexible-breadth> , <track-breadth> ). This is telling us that we can use minmax() as a value: the final (max) value will be a <track-breadth>, and we have already unpacked all of the allowable values there. The first value (min) is detailed as an <inflexible-breadth>. If we look at the values for this, we discover that they are the same as <track-breadth>, minus the <flex> value:
<length-percentage>
min-content
max-content
auto
We already know what all of these do, so we can add possible minmax() values to our list of values for <track-size>.
grid-auto-rows: minmax(100px, 200px);
grid-auto-rows: minmax(20%, 1fr);
grid-auto-rows: minmax(1em, auto);
grid-auto-rows: minmax(min-content, max-content);
Finally, we can use fit-content( <length-percentage> ). We can see that fit-content takes a value of <length-percentage>, which we already know to be either a length unit or a percentage. The spec details how fit-content is worked out, and it essentially allows a track which acts as if you had used the max-content keyword, however the track stops growing when it hits the length passed to it.
grid-auto-rows: fit-content(200px);
grid-auto-rows: fit-content(20%);
Those are all of our possible values, and to round things off, check again the value <track-size>+: you can see it has a little + sign next to it. Click that and you will be taken to the CSS Values and Units module to find that, “A plus (+) indicates that the preceding type, word, or group occurs one or more times.” This means that we can pass a single track size to grid-auto-rows, or multiple track sizes as a space separated list. Below the box is an explanation of what happens if you pass in more than one track size:
“If multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.”
Therefore with the following CSS, if five implicit rows were needed they would be as follows:
100px
1fr
auto
100px
1fr
.grid {
display: grid;
grid-auto-rows: 100px 1fr auto;
}
Initial
We can now move to the next line in the box, and you’ll be glad to know that it isn’t going to require as much unpacking! This simply defines the initial value for grid-auto-rows. If you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that they will use if they are invoked as part of the usage of the specification they are in, but you do not set a value for them. In the case of grid-auto-rows it is used whenever rows are created in the implicit grid, so it needs to have a value to be used even if you do not set one.
Applies to
This line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers.
Inherited
This tells us if the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows it is not inherited. A property such as color is inherited, so you do not need to set it on each element.
Percentages
Are percentages allowed for this property, and if so, how are they calculated? In this case we are referred to the definition for grid-template-columns and grid-template-rows, where we discover that the percentage is from the corresponding dimension of the content area.
Media
This defines the media group that the property belongs to. In this case visual.
Computed Value
This details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute.
Canonical Order
If you have a property–generally a shorthand property–which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property. In general you don’t need to worry about this value in the table.
Animation Type
This details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet!
That’s (mostly) it!
Sometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at as they will highlight usage of the property that the spec editor has felt could use an example. There may also be some additional text explaining anything specific to this property.
In selecting grid-auto-rows I chose a fairly complex property in terms of the work we needed to do to unpack the value. Many properties are far simpler than this. However ultimately, even when you come across a complex value, it really is just a case of stepping through the definitions until you come to the bottom of the rabbit hole.
Being able to work out what is valid for each property is incredibly useful. It means you don’t waste time trying to use a value that doesn’t work for that property. You also may find that there are values you weren’t aware of, that solve problems for you.
Further reading
Specifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need.
You may also find useful:
Value Definition Syntax on MDN.
The MDN Glossary defines many common terms.
Understanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax.
How to read W3C Specs - from 2001 but still relevant.
I hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else can be very useful indeed.
It’s Beginning to Look a Lot Like XSSmas
Anna Debenham

I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional.
It’s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it.
With platforms like GitHub and npm, giving the gift of code is so easy it’s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it’s also risky.
Vulnerabilities and Open Source
Open source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps.
Graph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017
In the first 24 ways article this year, Katie referenced a few different types of vulnerability:
Cross-site Request Forgery (also known as CSRF)
SQL Injection
Cross-site Scripting (also known as XSS)
There are many more types of vulnerability, and those that live under the same category share similarities.
For example, my favourite – is it weird to have a favourite vulnerability? – is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks – share it generously with your colleagues.
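A minimal illustration (my example, not the article’s): a page that writes a query-string value straight into the DOM is open to XSS:
// Vulnerable: anything in ?name= is parsed as HTML (e.g. <img onerror=…>)
document.querySelector('#greeting').innerHTML =
  'Hello, ' + new URLSearchParams(location.search).get('name');

// Safer: treat the value as text, never as markup
document.querySelector('#greeting').textContent =
  'Hello, ' + new URLSearchParams(location.search).get('name');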
Most vulnerabilities like this are not added intentionally – they’re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library.
Others, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer’s curiosity.
Another tactic to get developers to unwittingly install malicious packages into their codebase is “typosquatting” – back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env).
This is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas.
Santa’s on his sleigh
If you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn’t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too.
Let’s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that’s just the tip of the iceberg – have a look at the full dependency tree which contains 109 dependencies – more dependencies than there are Christmas puns in this article. Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies.
Fixing vulnerabilities – the ultimate christmas gift
If you’re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy.
When you find out about a new vulnerability, you have some options:
Fix the vulnerability via an upgrade
You can often fix a vulnerability by upgrading the library to the latest version. Make sure you’re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version.
Patch the vulnerable code
Sometimes, a fix for a vulnerable library isn’t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won’t change anything else.
Switch to a different library
If the library you’re using has no fix or patch, you may be better off switching it out for another one, particularly if it looks like it’s no longer being maintained.
Responsibly disclosing vulnerabilities
Knowing how to responsibly disclose vulnerabilities is something I’m ashamed to admit that I didn’t know about before I joined a security company. But it’s so important! On discovering a new vulnerability, a developer has a few options:
A malicious developer will exploit that vulnerability for their own gain.
A reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited.
An ethical and aware developer will follow what’s known as a “responsible disclosure process”. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too.
It’s important to understand this process if you’re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code.
So what does responsible disclosure look like? I’ll take Node.js’s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There’s a separate process for bugs found in third-party npm packages). Once you’ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they’ll give you regular updates about the progress they’re making towards fixing it. As part of this, they’ll figure out which versions are affected, and prepare fixes for them. They’ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org.
Tim Kadlec published an in-depth article about responsible disclosures if you’re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed.
Encourage responsible disclosure
Add a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability – 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn’t published one.
Stats from the State of Open Source Security 2017
Bug bounties
Some companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, it also reduces the enticement of exploiting a vulnerability over reporting it and getting a quick cash reward. Hackerone is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. Wordpress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program.
If you don’t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets for a report about a critical vulnerability in exchange for money.
“On one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question.”
A gift worth giving
It’s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool – I’m just a little biased towards Snyk, but as Katie mentioned, there’s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities.
Stats from the State of Open Source Security 2017
Most importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too.
Web Content Accessibility Guidelines 2.1—for People Who Haven’t Read the Update
Alan Dalton

Happy United Nations International Day of Persons with Disabilities 2018! The United Nations chose “Empowering persons with disabilities and ensuring inclusiveness and equality” as this year’s theme. We’ve seen great examples of that in 2018; for example, Paul Robert Lloyd has detailed how he improved the accessibility of this very website.
On social media, US Congressmember-Elect Alexandria Ocasio-Cortez started using the Clipomatic app to add live captions to her Instagram live stories, conforming to success criterion 1.2.4, “Captions (Live)” of the Web Content Accessibility Guidelines (figure 1) …and British Vogue Contributing Editor Sinéad Burke has used the split-screen feature of Instagram live stories to invite an interpreter to provide live Sign Language interpretation, going above and beyond success criterion 1.2.6, “Sign Language (Prerecorded)” of the Web Content Accessibility Guidelines (figure 2).
Figure 1: Screenshot of Alexandria Ocasio-Cortez’s Instagram story with live captions
Figure 2: Screenshot of Sinéad Burke’s Instagram story with Sign Language Interpretation
That theme chimes with this year’s publication of the World Wide Web Consortium (W3C)’s Web Content Accessibility Guidelines (WCAG) 2.1. In last year’s “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I mentioned the scale of the project to produce this update during 2018: “the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible”.
The WCAG working group have added 17 success criteria to the 61 that they released way back in 2008—for context, that was 1½ years before Apple released their first iPad! These new criteria make it easier than ever for us web geeks to produce work that is more accessible to people using mobile devices and touchscreens, people with low vision, and people with cognitive and learning disabilities.
Once again, let’s rip off all the legalese and ambiguous terminology like wrapping paper, and get up to date.
Can your users perceive the information on your website?
The first guideline has criteria that help you prevent your users from asking, “What the **** is this thing here supposed to be?” We’ve seven new criteria for this guideline.
1.3.4 Some people can’t easily change the orientation of the device that they use to browse the web, and so you should make sure that your users can use your website in portrait orientation and in landscape orientation. Consider how people slowly twirl presents that they have plucked from under the Christmas tree, to find the appropriate orientation—and expect your users to do likewise with your websites and apps. We’ve had 18½ years since John Allsopp’s revelatory Dao of Web Design enlightened us to “embrace the fact that the web doesn’t have the same constraints” as printed pages, and to “design for this flexibility”. So, even though this guideline doesn’t apply to websites where “a specific display orientation is essential,” such as a piano tutorial, always ask yourself, “What would John Allsopp do?”
1.3.5 You should help the user’s browser to automatically complete–or not complete–form fields, to save the user some time and effort. The surprisingly powerful and flexible autocomplete attribute for input elements should prove most useful here. If you’ve used microformats or microdata to mark up information about a person, the autocomplete attribute’s range of values should seem familiar. I like how the W3C’s “Using HTML 5.2 autocomplete attributes” says that autocompleted values in forms help “those with dexterity disabilities who have trouble typing, those who may need more time, and anyone who wishes to reduce effort to fill out a form” (emphasis mine). Um…🙋‍♂️
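For example, a minimal sketch using one of the standard autocomplete tokens:
<label for="given-name">First name</label>
<input id="given-name" name="given-name" type="text" autocomplete="given-name">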
1.3.6 I like this one a lot, because it can help a huge audience to overcome difficulties that might prevent them from ever using the web. Some people have cognitive difficulties that affect their memory, focus, attention, language processing, and/or decision-making. Those users often rely on assistive technologies that present information through proprietary symbols, summaries of content, and keyboard shortcuts. You could use ARIA landmarks to identify the regions of each webpage. You could also keep an eye on the W3C’s ongoing work on Personalisation Semantics.
1.4.10 If you were to find a Nintendo Switch and “Super Mario Odyssey” under your Christmas tree, you would have many hours of enjoyably scrolling horizontally and vertically to play the game. On the other hand, if you had to zoom a webpage to 400% so that you could read the content, you might have many hours of frustratedly scrolling horizontally and vertically to read the content. Learned reader, I assume you understand the purpose and the core techniques of Responsive Web Design. I also assume you’re getting up to speed with the new Grid, Flexbox, and Box Alignment techniques for layout, and overflow-wrap. Using those skills, you should make sure that all content and functionality remain available when the browser is 320px wide, without your user needing to scroll horizontally. (For vertical text, you should make sure that all content and functionality remain available when the browser is 256px high, without your user needing to scroll vertically.) You don’t have to do this for anything that would lose meaning if you restructured it into one narrow column. That includes some images, maps, diagrams, video, games, presentations, and data tables. Remember to check how your media queries affect font size: your user might find that text becomes smaller as they zoom into the webpage. So, test this one on real devices, or—better yet—test it with real users.
1.4.11 In “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I recommended bookmarking Lea Verou’s Contrast Ratio calculator for checking that text contrasts enough with its background (for success criteria 1.4.3 and 1.4.6), so that more people can read it more easily. For this update, you should make sure that form elements and their focus states have a 3:1 contrast ratio with the colour around them. This doesn’t apply to controls that use the browser’s default styling. Also, you should make sure that graphics that convey information have a 3:1 contrast ratio with the colour around them.
1.4.12 Some people, due to low vision or dyslexia, might need to modify the typography that you agonised over. Research indicates that you should make sure that all content and functionality would remain available if a user were to set:
line height to at least 1½ × the font size;
space below paragraphs to at least 2 × the font size;
letter spacing to at least 0.12 × the font size;
word spacing to at least 0.16 × the font size.
To test this, check for text overlapping, text hiding behind other elements, or text disappearing.
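One way to apply those overrides is with a small test stylesheet – a minimal sketch, not an official tool:
* {
  line-height: 1.5 !important;
  letter-spacing: 0.12em !important;
  word-spacing: 0.16em !important;
}
p {
  margin-bottom: 2em !important;
}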
1.4.13 Sometimes when visiting a website, you hover over—or tab on to—something that unleashes a newsletter subscription pop-up, some suggested “related content”, and/or a GDPR-related pop-up. On a well-designed website, you can press the Esc key on your keyboard or click a prominent “Close” button or “X” button to vanquish such intrusions. If the Esc key fails you, or if you either can’t see or can’t click the “Close” button…well, you’ll probably just close that browser tab. This situation can prove even more infuriating for users with low vision or cognitive disabilities. So, if new content appears when your user hovers over or tabs on to some element, you should make sure that:
your user can dismiss that content without needing to move their pointer or tab on to some other element (this doesn’t apply to error warnings, or well-behaved content that doesn’t obscure or replace other content);
the new content remains visible while your user moves their cursor over it;
the new content remains visible until your user stops hovering over that element, dismisses that content, or the content is no longer valid.
This doesn’t apply to situations such as hovering over an element’s title attribute, where the user’s browser controls the display of the content that appears.
Can users operate the controls and links on your website?
The second guideline has criteria that help you prevent your users from asking, “How the **** does this thing work?” We’ve nine new criteria for this guideline.
2.1.4 Some websites offer keyboard shortcuts for users. For example, the keyboard shortcuts for Gmail allow the user to press the ⇧ key and u to mark a message as unread. Usually, shortcuts on websites include modifier keys, such as Ctrl, along with a letter, number, or punctuation symbol. Unfortunately, users who have dexterity challenges sometimes trigger those shortcuts by accident, and that can make a website impossible to use. Also, speech input technology can sometimes trigger those shortcuts. If your website offers single-character keyboard shortcuts, you must allow your user to turn off or remap those shortcuts. This doesn’t apply to single-character keyboard shortcuts that only work when a control, such as drop-down list, has focus.
2.2.6 If your website uses a timeout for some process, you could store the user’s data for at least 20 hours, so that users with cognitive disabilities can take a break or take longer than usual to complete the process without losing their place or losing their data. Alternatively, you could warn the user, at the start of the process, that the website will time out after whatever amount of time you have chosen.
2.3.3 If your website has some non-essential animation (such as parallax scrolling) that starts when the user does some particular action, you could allow the user to turn off that animation so that you avoid harming users with vestibular disorders. The prefers-reduced-motion media query currently has limited browser support, but you can start using it now to avoid showing animations to users who select the “Reduce Motion” setting (or equivalent) in their device’s operating system:
@media (prefers-reduced-motion: reduce) {
.MrFancyPants {
animation: none;
}
}
2.5.1 Some websites let users use multi-touch gestures on touchscreen devices. For example, Google Maps allows users to pinch with two fingers to zoom out and “unpinch” with two fingers to zoom in. Also, some websites allow users to drag a finger to do some action, such as changing the value on an input element with type="range", or swiping sideways to the next photograph in a gallery. Some users with dexterity challenges, and some users who use a head pointer, an eye-gaze system, or speech-controlled mouse emulation, might find multi-touch gestures or dragging impossible. You must make sure that your website supports single-tap alternatives to any multi-touch gestures or dragging actions that it provides. For example, if your website lets someone pinch and unpinch a map to zoom in and out, you must also provide buttons that a user can tap to zoom in and out.
2.5.2 This might be my favourite accessibility criterion ever! Did you ever touch or press a “Send” button but then immediately realise that you really didn’t want to send the message, and so move your finger or cursor away from the “Send” button before lifting your finger?! Imagine how many arguments that functionality has prevented. 😌 You must make sure that touching or pressing does not cause anything to happen before the user raises their finger or cursor, or make sure that the user can move their finger or cursor away to prevent the action. In JavaScript, prefer onclick to onmousedown, unless your website has actions that need onmousedown. Also, this doesn’t apply to actions that need to happen as soon as the user clicks or touches. For example, a user playing a “Whac-A-Mole” game or a piano emulator needs the action to happen as soon as they click or touch the screen.
2.5.3 Recently, entrepreneur and social media guru Gary Vaynerchuk has emphasised the rise of audio and voice as output and input. He quotes a Google statistic that says one in five search queries use voice input. Once again, users with disabilities have been ahead of the curve here, having used screen readers and/or dictation software for many years. You must make sure that the text that appears on a form control or image matches how your HTML identifies that form control or image. Use proper semantic HTML to achieve this:
use the label element to pair text with the corresponding input element;
use an alt attribute value that exactly matches any text that appears in an image;
use an aria-labelledby attribute value that exactly matches the text that appears in any complex component.
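Putting those together, a sketch with placeholder names:
<label for="email">Email address</label>
<input id="email" name="email" type="email">

<!-- The alt text matches the word shown in the image -->
<img src="send-button.png" alt="Send">

<!-- The accessible name comes from the visible heading -->
<nav aria-labelledby="sitenav-heading">
  <h2 id="sitenav-heading">Site navigation</h2>
  …
</nav>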
2.5.4 Modern Web APIs allow web developers to specify how their website will react to the user shaking, tilting, or gesturing towards their device. Some users might find those actions difficult, impossible, or embarrassing to perform. If you make any functionality available when the user shakes, tilts, or gestures towards their device, you must provide form controls that make that same functionality available. As usual, this doesn’t apply to websites that require shaking, tilting, or gesturing; this includes some games and music programmes. John Gruber describes the iPhone’s “Shake to Undo” gesture as “dreadful — impossible to discover through exploration of the on-screen [user interface], bad for accessibility, and risks your phone flying out of your hand”. This accessibility criterion seems to empathise with John: you must make sure that your user can prevent your website from responding to shaking, tilting and/or gesturing towards their device.
2.5.5 Homer Simpson’s telephone famously complained, “The fingers you have used to dial are too fat.” I think we’ve all felt like that when using phones and tablets, particularly when trying to dismiss pop-ups and ads. You could make interactive elements at least 44px wide × 44px high. Apple’s “Human Interface Guidelines” agree: “Provide ample touch targets for interactive elements. Try to maintain a minimum tappable area of 44pt x 44pt for all controls.” This doesn’t apply to links within inline text, or to unstyled elements.
2.5.6 Expect your users to use a variety of input devices, and to change from one to another whenever they please. For example, a user with a tablet and keyboard might jab icons on the screen while typing on the keyboard, or a user might dictate text while alone and then type on a keyboard when a colleague arrives. You could make sure that your website allows your users to use whichever available input modality they choose. Once again, this doesn’t apply to websites that require a specific modality; this includes typing tutors and music programmes.
Can users understand your content?
The third guideline has criteria that help you prevent your users from asking, “What the **** does this mean?” We’ve no new criteria for this guideline.
Have you made your website robust enough to work on your users’ browsers and assistive technologies?
The fourth and final guideline has criteria that help you prevent your users from asking, “Why the **** doesn’t this work on my device?” We’ve one new criterion for this guideline.
4.1.3 Sometimes you need to let your user know the status of something: “Did it work OK? What was the error? How far through it are we?” However, you should avoid making your user lose their place on the webpage, and so you should let them know the status without opening a new window, focusing on another element, or submitting a form. To do this properly for assistive technology users, choose the appropriate ARIA role for the new content; for example:
if your user needs to know, “Did it work OK?”, add role="status";
if your user needs to know, “What was the error?”, add role="alert";
if your user needs to know, “How far through it are we?”, add role="log" (for a chat window) or role="progressbar" (for, well, a progress bar).
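For example, a couple of minimal sketches:
<div role="status">Your message has been sent.</div>
<div role="alert">Your message could not be sent.</div>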
Better design for humans
My favourite of Luke Wroblewski’s collection of Design Quotes is, “Design is the art of gradually applying constraints until only one solution remains,” from that most prolific author, “Unknown”. I’ve always viewed the Web Content Accessibility Guidelines as people-based constraints, and liked how they help the design process. With these 17 new web content accessibility criteria, go forth and create solutions that more people than ever before can use.
Spending those book vouchers you got for Christmas
What next? If you’re looking for something to do to keep you busy this Christmas, I thoroughly recommend these four books for increasing your accessibility expertise:
“Pro HTML5 Accessibility” by Joshue O Connor (Head of Accessibility (Interim) at the UK Government Digital Service, Director of InterAccess, and one of the editors of the Web Content Accessibility Guidelines 2.1): Although this book is six years old—a long time in web design—I find it an excellent go-to resource. It begins by explaining how people with disabilities use the web, and then expertly explains modern HTML in that context.
“A Web for Everyone—Designing Accessible User Experiences” by Sarah Horton (the Paciello Group’s UX Strategy Lead) and Whitney Quesenbery (the Center for Civic Design’s co-director): This book covers the Web Content Accessibility Guidelines 2.0, the principles of Universal Design, and design thinking. Its personas for Accessible UX and its profiles of well-known industry figures—including some 24ways authors—keep its content practical and relevant throughout.
“Accessibility For Everyone” by Laura Kalbag (Ind.ie’s co-founder and designer, and 24ways author): This book is just over a year old, and so serves as a great resource for up-to-date coverage of guidelines, laws, and accessibility features of operating systems—as well as content, design, coding, and testing. The audiobook, which Laura narrates, can help you and your colleagues go from having little or no understanding of web accessibility, to becoming familiar with all aspects of web accessibility—in less than four hours.
“Just Ask: Integrating Accessibility Throughout Design” by Shawn Lawton Henry (the World Wide Web Consortium (W3C)’s Web Accessibility Initiative (WAI)’s Outreach Coordinator): Although this book is 11½ years old, the way it presents accessibility as part of the User-Centered Design process is timeless. I found its section on Usability Testing with people with disabilities particularly useful.
Designing Your Site Like It’s 1998
Andy Clarke

It’s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary—and my fourteenth contribution to 24 ways—I’d like to explain how I would’ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.
My design for Planes, Trains and Automobiles is fixed at 800px wide.
Developing a framework
I’ll start by using frames to set up the framework for this new website. Frames are individual pages—one for navigation, the other for my content—pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a <frameset> element.
I add two rows to my <frameset>; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don’t want frame borders or any space between my frames, I set the frameborder and framespacing attributes to 0:
[…]
Next I add the source of my two frame documents. I don’t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame:
I do want links from my navigation to open in the content frame, so I give each frame a name so I can specify where I want links to open:
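Pulling those pieces together, the frameset markup would look something like this – the frame file names are my assumptions:
<frameset rows="50,*" frameborder="0" framespacing="0">
  <frame src="navigation.html" noresize>
  <frame src="content.html" name="content">
</frameset>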
The framework for this website is simple as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets—and as many individual documents—as I need:
Letterbox framesets were a common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space.
Handling older browsers
Sadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content:
<noframes>
This page uses frames, but your browser doesn’t support them. Please upgrade your browser.
</noframes>
Forcing someone back into a frame
Sometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a frameset, people could easily miss out on other parts of a design. This short script will prevent that from happening, and because it’s vanilla JavaScript, it doesn’t require a library such as jQuery:
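A minimal version of such a script (assuming the frameset page is index.html) might be:
<script>
// If this page has been opened outside its frameset, load the frameset instead
if (window.top === window.self) {
    window.top.location.href = 'index.html';
}
</script>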
Laying out my page
Before starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website:
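For example (the colour values here are illustrative, not the exact palette):
<body bgcolor="#ffffff" text="#333333" link="#8B1212" vlink="#8B1212" alink="#DD3A3C">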
I want absolute control over how people experience my design and don’t want to allow it to stretch, so I first need a <table> which limits the width of my layout to 800px. The align attribute will keep this table in the centre of someone’s screen:
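A sketch of that wrapper:
<table width="800" align="center">
    […]
</table>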
Although they were developed for displaying tabular information, the cells and rows which make up the <table> element make it ideal for the precise implementation of a design. I need several tables—often nested inside each other—to implement my design. These include tables for a banner and three rows of content:
The width of the first table—used for my banner—is fixed to match the logo it contains. As I don’t need borders, padding, or spacing between these cells, I use attributes to remove them:
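For example, if the logo were 400px wide (a made-up figure), the banner table might be:
<table width="400" border="0" cellpadding="0" cellspacing="0">
    <tr>
        <td><img src="logo.gif" width="400" height="80" alt="Planes, Trains and Automobiles"></td>
    </tr>
</table>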
The next table—which contains the largest image, introduction, and a call-to-action—is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images:
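A sketch of that extra row, with spacer.gif standing in for the transparent image and the widths purely illustrative:
<tr>
    <td><img src="spacer.gif" width="300" height="1" alt=""></td>
    <td><img src="spacer.gif" width="300" height="1" alt=""></td>
    <td><img src="spacer.gif" width="200" height="1" alt=""></td>
</tr>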
The height and width of these “shims” or “spacers” is only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.
For the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:
[…]
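Each slice becomes a cell background, with the cell height matched to the image (file names and dimensions are illustrative):
<tr>
    <td background="hero-left.jpg" width="300" height="350"></td>
    <td background="hero-centre.jpg" width="300" height="350"></td>
    <td background="hero-right.jpg" width="200" height="350"></td>
</tr>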
I use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table—this time with a white background—inside it:
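A sketch of that nesting (the blue hex value is illustrative):
<table width="800" bgcolor="#000066" border="0" cellpadding="0" cellspacing="1">
    <tr>
        <td>
            <table width="100%" bgcolor="#ffffff" border="0" cellpadding="10" cellspacing="0">
                <tr>
                    <td>[…]</td>
                </tr>
            </table>
        </td>
    </tr>
</table>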
Adding details
Tables are fabulous tools for laying out a page, but they’re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my “Buy the DVD” call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button:
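A sketch of that button markup, with the image names invented for the example:
<table border="0" cellpadding="0" cellspacing="0">
    <tr>
        <td><img src="button-left.gif" width="12" height="32" alt=""></td>
        <td background="button-middle.gif"><font color="#ffffff">Buy the DVD</font></td>
        <td><img src="button-right.gif" width="12" height="32" alt=""></td>
    </tr>
</table>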
I use those same elements to add details to headlines and lists too. Adding a “bullet” to each item in a list needs only two additional table cells, a circular graphic, and a spacer:
Directed by John Hughes
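As a sketch (bullet.gif and spacer.gif are placeholder names), the row for that item might be:
<tr>
    <td><img src="bullet.gif" width="8" height="8" alt=""></td>
    <td><img src="spacer.gif" width="10" height="1" alt=""></td>
    <td>Directed by John Hughes</td>
</tr>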
Implementing a typographic hierarchy
So far I’ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use <font> elements to change the typeface from the browser’s default to any font installed on someone’s device:
<font face="Arial, Helvetica, sans-serif">Planes, Trains and Automobiles is a comedy film […]</font>
To adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser’s default. I use a size of 4 for this headline and 2 for the text which follows:
<font size="4" face="Arial, Helvetica, sans-serif">Steve Martin</font>
<font size="2" face="Arial, Helvetica, sans-serif">An American actor, comedian, writer, producer, and musician.</font>
When I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every <font> element on every page of my website.
NB: I use as many <br> elements as needed to create space between headlines and paragraphs.
View the final result (and especially the source).
My modern day design for Planes, Trains and Automobiles.
I can imagine many people reading this and thinking “This is terrible advice because we don’t develop websites like this in 2018.” That’s true.
We have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:
font-variant-caps: titling-caps;
font-variant-ligatures: common-ligatures;
font-variant-numeric: oldstyle-nums;
Grid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:
body {
display: grid;
grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;
grid-template-rows: auto;
grid-column-gap: 2vw;
grid-row-gap: 1vh;
}
Flexbox has made it easy to develop flexible components such as navigation links:
nav ul { display: flex; }
nav li { flex: 1; }
Just one line of CSS can create multiple columns of fluid type:
main { column-width: 12em; }
CSS Shapes enable text to flow around irregular shapes including polygons:
[src*="main-img"] {
float: left;
shape-outside: polygon(…);
}
Today, we wouldn’t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:
.btn {
background: linear-gradient(#8B1212, #DD3A3C);
border-radius: 1em;
box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);
}
CSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we’re certain we know how to design and develop products and websites better than we did at the end of 1998.
Strange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That’s why it’s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today—tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass—will be any more relevant in the future than <font> elements, frames, layout tables, and spacer images are today.
I have no prediction for what the web will be like twenty years from now. However, I want to believe we’ll build on what we’ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won’t be repeated.
Head over to my website if you’d like to read about how I’d implement my design for ‘Planes, Trains and Automobiles’ today.",,246,0
247,Managing Flow and Rhythm with CSS Custom Properties,Andy Bell,"An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better, it can also improve their scannability.
Browsers ship with default CSS, and these styles often create consistent rhythm for flow elements out of the box. The problem is that we often wipe those styles out with a CSS reset. Elements such as <div> and <section> also have no default margin or padding associated with them.
I’ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS while also levelling the playing field for component-driven front-ends, with very little success. This experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in!
The Flow utility
With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid.
That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy.
The code
.flow {
--flow-space: 1em;
}
.flow > * + * {
margin-top: var(--flow-space);
}
What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them.
The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty of setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need to. Pretty cool, right? Let’s look at some examples.
A basic example
See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/LXqerj
What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility.
Because our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that an <h2> in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em, for example, we’d affect everything because margin values would be calculated as 1.1X the font size.
This example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though?
See the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/qQgxaY
I like lots of whitespace with my article layouts, so the 1em space isn’t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances:
h2 {
--flow-space: 3rem;
}
Notice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured.
A more advanced example
Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space.
See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/ZmPGyL
In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself:
...
Something to remember though: the custom properties cascade in the same way that other CSS values do, so you’ve got to keep that in mind. We’ve got a great example of that in this example where because we’ve got the flow utility on our .features component, which has a --flow-space override: the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element.
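For instance, the overrides in that demo boil down to something like this (the values here are illustrative):
.features {
    --flow-space: 3rem;
}
.features__list {
    --flow-space: 1.5rem;
}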
“But what about old browsers?”, I hear you cry
We’re using CSS Custom Properties which, at the time of writing, have about 88% browser support. One thing we can do to help the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size:
.flow {
--flow-space: 1em;
}
.flow > * + * {
margin-top: 1em;
margin-top: var(--flow-space);
}
Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement!
Wrapping up
This tiny little utility can bring great power when you want to consistently space elements vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS.
If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on!",,247,0
248,How to Use Audio on the Web,Ruth John,"I know what you’re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! 🙉
You’re having flashbacks, flashbacks to the days of yore, when we had a <bgsound> element and yup did everyone think that was the most rad thing since <blink>. I mean put those two together with a <marquee>, only use CSS colour names, make sure your borders were all set to ridge and you’ve got yourself the neatest website since 1998.
The sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then.
Yes it is 2018, the end of it in fact, soon to be 2019. We are certainly living in the future. Hoverboards? Self-driving cars. Holodecks? VR headsets. Rocket boots? Drone racing. Sound on websites? Get real, Ruth.
We can’t help but be jaded, even though the <bgsound> element is deprecated, and the autoplay policy appeared this year. Although still in its infancy, the policy “controls when video and audio is allowed to autoplay”, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future.
But then of course comes the question, having lived in a muted present for so long, where and why would you use audio?
✨ Showcase Time ✨
There are some incredible uses of audio on websites today. This is my personal favourite futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience.
futurelibrary.no
Another site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good.
pottermore.com
Eighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive, it adds to the experience.
Eighty-six and a half years
Sound can be powerful and in some cases useful. Last year I wrote about using them to help validate forms. Audiochart is a library which “allows the user to explore charts on web pages using sound and the keyboard”. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here.
Then there’s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the powers we have with the Web Audio API today.
lightnote.co
learningmusic.ableton.com
Considerations
Yes, ’tis the season, so let’s be more thoughtful about our audio. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to ‘enter’ the site and that headphones are recommended. This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) sets the user’s expectations by stating headphones are recommended: they will expect sound and, if in a public setting, can enlist the use of a common electronic device to cause less embarrassment.
Eighty-six and a half years
Allowing mute and/or volume control clearly within the user interface is a good idea. It won’t draw the user out of the experience; it gives the user more control over what audio they hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a clearly visible volume control than to fumble with device settings.
Indicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn’t always the first place to look for everyone.
To The Future
So let’s go!
We see amazing demos built with Web Audio, and I’m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that.
But audio doesn’t actually need to be all bells and whistles (hey, it’s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started to introduce some good sound design in your web design.
Isn’t it great then that there’s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers).
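To give you a taste of what that covers, here’s a minimal sketch (the element selectors and the gain and pan values are placeholders, not the tutorial’s exact code): route an audio element through gain and panner nodes, starting everything from a user gesture to satisfy the autoplay policy:
const button = document.querySelector('button');
const audio = document.querySelector('audio');
button.addEventListener('click', () => {
    // Creating the context inside a user gesture keeps the autoplay policy happy
    const context = new AudioContext();
    const source = context.createMediaElementSource(audio);
    const gain = context.createGain();
    const panner = context.createStereoPanner();
    // Route the sound: source -> gain -> panner -> speakers
    source.connect(gain).connect(panner).connect(context.destination);
    gain.gain.value = 0.5;   // halve the volume
    panner.pan.value = -0.4; // drift towards the left speaker
    audio.play();
}, { once: true }); // a media element can only be connected to one source node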
This year I believe we have all experienced the web as a shopping mall more than ever. It’s shining store fronts, flashing adverts, fast food, loud noises.
Let’s use 2019 to create more forests to explore, oceans to dive and mountains to climb.",,248,0
249,Fast Autocomplete Search for Your Website,Simon Willison,"Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.
I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.
Try it out here, then read on to see how I built it.
First step: crawling the data
The first step in building a search engine is to grab a copy of the data that you plan to make searchable.
There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.
I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites.
We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.
Then we’ll tell wget to recursively crawl the website, using the --recursive flag.
We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017
We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2
The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X "/*/*/comments".
Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this.
Tie all of those options together and we get this command:
wget --recursive --wait 2 --no-clobber \
    -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 \
    -X "/*/*/comments" \
    https://24ways.org/archives/
If you leave this running for a few minutes, you’ll end up with a folder structure something like this:
$ find 24ways.org
24ways.org
24ways.org/2013
24ways.org/2013/why-bother-with-accessibility
24ways.org/2013/why-bother-with-accessibility/index.html
24ways.org/2013/levelling-up
24ways.org/2013/levelling-up/index.html
24ways.org/2013/project-hubs
24ways.org/2013/project-hubs/index.html
24ways.org/2013/credits-and-recognition
24ways.org/2013/credits-and-recognition/index.html
...
As a quick sanity check, let’s count the number of HTML pages we have retrieved:
$ find 24ways.org | grep index.html | wc -l
328
There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked from the /archives/ page yet, so we need to point our crawler at the site’s front page instead:
wget --recursive --wait 2 --no-clobber \
    -I /2018 \
    -X "/*/*/comments" \
    https://24ways.org/
Thanks to --no-clobber, this is safe to run every day in December to pick up any new content.
We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index.
Building a search index using SQLite
There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.
I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.
SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!
SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.
This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.
It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.
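To give a flavour of what FTS5 looks like at the SQL level (a standalone sketch; sqlite-utils will wire the real thing up to our articles table for us shortly), you declare a virtual table and query it with the match operator:
-- A minimal sketch of FTS5 usage
CREATE VIRTUAL TABLE articles_fts USING fts5(title, author, contents);
SELECT * FROM articles_fts WHERE articles_fts MATCH 'accessibility';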
I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday.
If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages:
$ python3 -m venv ./jupyter-venv
$ ./jupyter-venv/bin/pip install jupyter
# ... lots of installer output
# Now let's install some extra packages we will need later
$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib
# And start the notebook web application
$ ./jupyter-venv/bin/jupyter-notebook
# This will open your browser to Jupyter at http://localhost:8888/
You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.
A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account.
Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on.
from pathlib import Path
from bs4 import BeautifulSoup as Soup

base = Path("/Users/simonw/Dropbox/Development/24ways-search")
articles = list(base.glob("*/*/*/*.html"))
# articles is now a list of paths that look like this:
# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')
docs = []
for path in articles:
    year = str(path.relative_to(base)).split("/")[1]
    url = 'https://' + str(path.relative_to(base).parent) + '/'
    soup = Soup(path.open().read(), "html5lib")
    author = soup.select_one(".c-continue")["title"].split(
        "More information about"
    )[1].strip()
    author_slug = soup.select_one(".c-continue")["href"].split(
        "/authors/"
    )[1].split("/")[0]
    published = soup.select_one(".c-meta time")["datetime"]
    contents = soup.select_one(".e-content").text.strip()
    title = soup.find("title").text.split(" ◆")[0]
    try:
        topic = soup.select_one(
            '.c-meta a[href^="/topics/"]'
        )["href"].split("/topics/")[1].split("/")[0]
    except TypeError:
        topic = None
    docs.append({
        "title": title,
        "contents": contents,
        "year": year,
        "author": author,
        "author_slug": author_slug,
        "published": published,
        "url": url,
        "topic": topic,
    })
After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:
[
    {
        "title": "Why Bother with Accessibility?",
        "contents": "Web accessibility (known in other fields as inclus...",
        "year": "2013",
        "author": "Laura Kalbag",
        "author_slug": "laurakalbag",
        "published": "2013-12-10T00:00:00+00:00",
        "url": "https://24ways.org/2013/why-bother-with-accessibility/",
        "topic": "design"
    },
    {
        "title": "Levelling Up",
        "contents": "Hello, 24 ways. I’m Ashley and I sell property ins...",
        "year": "2013",
        "author": "Ashley Baxter",
        "author_slug": "ashleybaxter",
        "published": "2013-12-06T00:00:00+00:00",
        "url": "https://24ways.org/2013/levelling-up/",
        "topic": "business"
    },
...
My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries.
import sqlite_utils
db = sqlite_utils.Database("/tmp/24ways.db")
db["articles"].insert_all(docs)
That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.
(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)
You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this:
$ sqlite3 /tmp/24ways.db
sqlite> .headers on
sqlite> .mode column
sqlite> select title, author, year from articles;
title author year
------------------------------ ------------ ----------
Why Bother with Accessibility? Laura Kalbag 2013
Levelling Up Ashley Baxte 2013
Project Hubs: A Home Base for Brad Frost 2013
Credits and Recognition Geri Coady 2013
Managing a Mind Christopher 2013
Run Ragged Mark Boulton 2013
Get Started With GitHub Pages Anna Debenha 2013
Coding Towards Accessibility Charlie Perr 2013
...
There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:
db[""articles""].enable_fts([""title"", ""author"", ""contents""])
Introducing Datasette
Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet.
We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that?
If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article.
If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:
./jupyter-venv/bin/pip install datasette
This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.
Now you can run Datasette against the 24ways.db file we created earlier like so:
./jupyter-venv/bin/datasette /tmp/24ways.db
This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.
If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db
Publishing the database to the internet
One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish:
$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways
-----> Python app detected
-----> Installing requirements with pip
-----> Running post-compile hook
-----> Discovering process types
Procfile declares types -> web
-----> Compressing...
Done: 47.1M
-----> Launching...
Released v8
https://search-24ways.herokuapp.com/ deployed to Heroku
If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways.
You can run this command as many times as you like to deploy updated versions of the underlying database.
Searching and faceting
Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.
SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for access* which returns articles on accessibility:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A
A neat feature of Datasette is the ability to calculate facets against your data. Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic
Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:
http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A
Better search using custom SQL
The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.
This exposes the underlying SQL query, which looks like this:
select rowid, * from articles where rowid in (
select rowid from articles_fts where articles_fts match :search
) order by rowid limit 101
We can do better than this by constructing a custom SQL query. Here’s the query we will use instead:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || "*"
order by rank limit 10;
You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.
There’s a lot going on here! Let’s break the SQL down line-by-line:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on.
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.
from articles
join articles_fts on articles.rowid = articles_fts.rowid
articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.
where articles_fts match :search || "*"
order by rank limit 10;
:search || "*" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.
How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects:
JSON API call to search for articles matching SVG
The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.
A simple JavaScript autocomplete search interface
I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.
We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:
function debounce(func, wait, immediate) {
    let timeout;
    return function() {
        let context = this, args = arguments;
        let later = () => {
            timeout = null;
            if (!immediate) func.apply(context, args);
        };
        let callNow = immediate && !timeout;
        clearTimeout(timeout);
        timeout = setTimeout(later, wait);
        if (callNow) func.apply(context, args);
    };
}
We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.
Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these:
const htmlEscape = (s) => s.replace(
    /&/g, '&amp;'
).replace(
    />/g, '&gt;'
).replace(
    /</g, '&lt;'
);
The page itself needs only a little markup for the interface, with IDs matching the ones used in the script below:
<h1>Autocomplete search</h1>
<input id="searchbox" type="search" placeholder="Search 24ways">
<div id="results"></div>
And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:
// Embed the SQL query in a multi-line backtick string:
const sql = `select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || "*"
order by rank limit 10`;
// Grab a reference to the <input> element:
const searchbox = document.getElementById("searchbox");
// Used to avoid race-conditions:
let requestInFlight = null;
searchbox.onkeyup = debounce(() => {
const q = searchbox.value;
// Construct the API URL, using encodeURIComponent() for the parameters
const url = (
""https://search-24ways.herokuapp.com/24ways-866073b.json?sql="" +
encodeURIComponent(sql) +
`&search=${encodeURIComponent(q)}&_shape=array`
);
// Unique object used just for race-condition comparison
let currentRequest = {};
requestInFlight = currentRequest;
fetch(url).then(r => r.json()).then(d => {
if (requestInFlight !== currentRequest) {
// Avoid race conditions where a slow request returns
// after a faster one.
return;
}
let results = d.map(r => `
    <div class="result">
        <h3><a href="${r.url}">${htmlEscape(r.title)}</a></h3>
        <p><small>${htmlEscape(r.author)} - ${r.year}</small></p>
        <p>${highlight(r.snippet)}</p>
    </div>
`).join("");
document.getElementById("results").innerHTML = results;
});
}, 100); // debounce every 100ms
There’s just one more utility function, used to help construct the HTML results:
const highlight = (s) => htmlEscape(s).replace(
    /b4de2a49c8/g, '<b>'
).replace(
    /8c94a2ed4b/g, '</b>'
);
This is what those unique strings passed to the snippet() function were for.
Avoiding race conditions in autocomplete
One trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:
User types acces
Browser sends request A - querying documents matching acces*
User continues to type accessibility
Browser sends request B - querying documents matching accessibility*
Request B returns. It was fast, because there are fewer documents matching the full term
The results interface updates with the documents from request B, matching accessibility*
Request A returns results (this was the slower of the two requests)
The results interface updates with the documents from request A - results matching acces*
This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on.
Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.
Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.
When the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results.
It’s not a lot of code, really
And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like.
How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with.
A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.
More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.
For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.",,249,0
250,Build up Your Leadership Toolbox,Mazz Mosley,"Leadership. It can mean different things to different people and vary widely between companies. Leadership is more than just a job title. You won’t wake up one day and magically be imbued with all you need to do a good job at leading. If we don’t have a shared understanding of what a Good Leader looks like, how can we work on ourselves towards becoming one? How do you know if you even could be a leader? Can you be a leader without the title?
What even is it?
I got very frustrated way back in my days as a senior developer when I was given “advice” about my leadership style; at the time I didn’t have the words to describe the styles and ways in which I was leading to be able to push back. I heard these phrases a lot:
you need to step up
you need to take charge
you need to grab the bull by its horns
you need to have thicker skin
you need to just be more confident in your leading
you need to just make it happen
I appreciate some people’s intent was to help me, but honestly it did my head in. WAT?! What did any of this even mean? How exactly do you “step up”, and how are you evaluating what step I’m on? I am confident; what does being even more confident achieve when leading? Does that not lead you down the path of becoming an arrogant door knob? >___<
While there is no One True Way to Lead, there is an overwhelming pattern within the tech industry of positions of leadership being held by men. It felt a lot like what people were fundamentally telling me to do was to be more like an extroverted man. I was being asked to demonstrate more masculine-associated qualities (#notallmen). I’ll leave the gendered nature of leadership qualities as an exercise in googling for the reader.
I’ve never had a good manager and at the time had no one else to ask for help, so I turned to my trusted best friends. Books.
I <3 books
I refused to buy into that style of leadership as being the only accepted way to be. There had to be room for different kinds of people to be leaders and have different leadership styles.
There are three books that changed me forever in how I approach and think about leadership.
Primal leadership, by Daniel Goleman, Richard Boyatzis and Annie McKee
Quiet, by Susan Cain
Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead, by Brené Brown
I recommend you read them. Ignore the slightly cheesy titles and trust me, just read them.
Primal leadership helped to give me the vocabulary and understanding I needed about the different styles of leadership there are, how and when to apply them.
Quiet really helped me realise how much I was being undervalued and misunderstood in an extroverted world. If I’d had managers or support from someone who valued introverts’ strengths, things would’ve been very different. I would’ve had someone telling others to step down and shut up for a change rather than pushing on me to step up and talk louder over everyone else. It’s OK to be different and to need different things, like time to recharge or time to think before speaking. It also improved my ability to work alongside my more extroverted colleagues by giving me an understanding of their world so I could communicate my needs in a language they would get.
Brené Brown’s book I am forever in debt to. Her work gave me the courage to stand up and be my own kind of leader. Even when no-one around me looked or sounded like me, I found my own voice.
It takes great courage to be vulnerable and open about what you can and can’t do. Open about your mistakes. Vocalise what you don’t know and asking for help. In some lights, these are seen as weaknesses and many have tried to use them against me, to pull me down and exclude me for talking about them. Dear reader, it did not work, they failed. The truth is, they are my greatest strengths. The privileges I have, I use for good as best and often as I can.
Just like gender, leadership is not binary
If you google for what a leader is, you’ll get many different answers. I personally think Brené’s version is the best as it is one that can apply to a wider range of people, irrespective of job title or function.
I define a leader as anyone who takes responsibility for finding potential in people and processes, and who has the courage to develop that potential.
Brené Brown
Being a leader isn’t about being the loudest in a room, having veto power, talking over people or ignoring everyone else’s ideas. It’s not about “telling people what to do”. It’s not about an elevated status that means you’re better than others. Nor is it about creating a hand-wavy, far-away vision and forgetting to support people in how to get there.
Being a Good Leader is about having a toolbox of leadership styles and skills to choose from depending on the situation. Knowing how and when to apply them is part of the challenge and difficulty in becoming good at it. It is something you will have to continuously work on, forever. There is no Done.
Leaders are Made, they are not Born.
Be flexible in your leadership style
Typically, the best, most effective leaders act according to one or more of six distinct approaches to leadership and skillfully switch between the various styles depending on the situation.
The book Primal Leadership summarises six distinct leadership styles:
Visionary
Coaching
Affiliative
Democratic
Pacesetting
Commanding
Visionary, moves people toward a shared dream or future. When change requires a new vision or a clear direction is needed, using a visionary style of leadership helps communicate that picture. By learning how to effectively communicate a story you can help people to move in that direction and give them clarity on why they’re doing what they’re doing.
Coaching, is about connecting what a person wants and helping to align that with organisation’s goals. It’s a balance of helping someone improve their performance to fulfil their role and their potential beyond.
Affiliative, creates harmony by connecting people to each other and requires effective communication to aid facilitation of those connections. This style can be very impactful in healing rifts in a team or to help strengthen connections within and across teams. During stressful times having a positive and supportive connection to those around us really helps see us through those times.
Democratic, values people’s input and gets commitment through participation. Taking this approach can help build buy-in or consensus and is a great way to get valuable input from people. The tricky part about this style, I find, is that when I gather and listen to everyone’s input, that doesn’t mean the end result is that I have to please everyone.
The next two, sadly, are the ones wielded far too often and have the greatest negative impact. It’s where the “telling people what to do” comes from. When used sparingly and in the right situations, they can be a force for good. However, they must not be your default style.
Pacesetting, when used well, is about meeting challenging and exciting goals. When you need to get high-quality results from a motivated and well performing team, this can be great to help achieve real focus and drive. Sadly it is so overused and poorly executed that it becomes the “just make it happen” driver of unrealistic workloads, which contributes to burnout.
Commanding, when used appropriately, soothes fears by giving clear direction in an emergency or crisis. When shit is on fire, you want to know that your leadership ability can help kick-start a turnaround and bring clarity. Then switch to another style. This approach is also required when dealing with problematic employees or unacceptable behaviour.
Commanding style seems to be what a lot of people think being a leader is, taking control and commanding a situation. It should be used sparingly and only when absolutely necessary.
Be responsible for the power you wield
If reading through those you find yourself feeling a bit guilty that maybe you overuse some of the styles, or overwhelmed that you haven’t got all of these down and ready to use in your toolbox…
Take a breath. Take responsibility. Take action.
No one is perfect, and it’s OK. You can start right now working on those. You can have a conversation with your team and try being open about how you’re going to try some different styles. You can be vulnerable and own up to mistakes you might’ve made followed with an apology. You can order those books and read them. Those books will give you more examples on those leadership styles and help you to find your own voice.
The impact you can have on the lives of those around you when you’re a leader, is huge. You can help be that positive impact, help discover and develop potential in someone.
Time spent understanding people is never wasted.
Cate Huston.
I believe in you. <3 Mazz.",,250,0
251,"The System, the Search, and the Food Bank",Lisa Maria Martin,"Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center.
On the belt are plastic bins filled with assortments of shelf-stable food—one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like “Bottled Water” or “Cereal” or “Candy.”
Such was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins.
I could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there’s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into.
Look, I can’t quite explain it: I just know that I love sorting, organizing, and classifying things—groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask?
The opportunity to create meaning from nothing is at the core of my excitement, which is why I’ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. “I can’t believe they’re letting me do this,” I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey.
The jumble of donated items coming into the center need to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It’s not just a nice-to-have that we spent our morning separating cookies from carrots—it’s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we’re talking about donated groceries or—there it is—web content.
This happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.)
Is X a Y? was the question at the heart of our food sorting—but it’s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves:
What are the criteria that define Y?
Does X meet those criteria?
We don’t usually articulate it so concretely because it’s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn’t require much thought to place them in the Canned Soup box. Boxed broth, on the other hand, wasn’t allowed, causing a small cognitive hiccup—this X is NOT a Y—that sometimes meant having to re-sort our boxes.
On the web, we’re interested—I would hope—in reducing cognitive hiccups for our users. We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access. After all, most of the time, the process of using the internet is one of uniting a question with an answer—Is this article from a trustworthy source? Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y?
We have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means—well, this means a lot of things, and I’ve got limited space here, so let’s focus on these three lessons from the food bank:
Use plain, familiar language. This advice seems to be given constantly, but that’s because it’s solid and it’s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it’s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I’ve seen it), or other internally focused language when trying to provide wayfinding for users. Choose words that your audience knows—not only will they be more likely to spot what they’re looking for on your site or app, but you’ll turn up more often in search results.
Create consistency in all things. Missteps in consistency look like my earlier boxed broth example—changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts—these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall.
Make the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it.
This idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. Like any new visitor to a website, we came into the system not knowing the full picture. We didn’t know every category label around the conveyer belt, nor what criteria each category warranted.
The system wasn’t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We’d ask staff members. We’d ask more seasoned volunteers. We’d ask each other. We’d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn.
The more we learned, the easier the sorting became. That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it.
The same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away—or, worse, misinform or hurt them.
This is why we must make choices that prioritize transparency, consistency, and familiarity. Our users want to know if X is a Y—well-sorted content can give them the answer.

Turn Jekyll up to Eleventy
Paul Lloyd

Sometimes it pays not to overcomplicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper — and more secure — option.
The JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it’s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal Home Page (PHP). Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.
By combining three approachable languages — Markdown for content, YAML for data and Liquid for templating — the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it’s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself “this, but in Node”. Thankfully, one of Santa’s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.
Introducing Eleventy
Eleventy is a more flexible alternative to Jekyll. Besides being written in Node, it’s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).
As content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I’ve discovered, there are a few gotchas. If you’ve been considering making the switch, here are a few tips and tricks to help you on your way.¹
Note: Throughout this article, I’ll be converting Matt Cone’s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:
git clone https://github.com/mattcone/markdown-guide.git
cd markdown-guide
Before you start
If you’ve used tools like Grunt, Gulp or Webpack, you’ll be familiar with Node.js. But if you’ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now’s the time to install Node.js and set up your project to work with its package manager, NPM:
Install Node.js:
Mac: If you haven’t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.
Windows: Download the Windows installer from the Node.js website and follow the instructions.
Initialise NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like Bundler’s Gemfile, this file contains a list of your project’s third-party dependencies.
If you’re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn’t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.
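For reference, a minimal .gitignore for this kind of project might contain just the following (ignoring _site is an assumption; you may prefer to keep your build output under version control):

node_modules
_site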
Installing Eleventy
With Node.js installed and your project set up to work with NPM, we can now install Eleventy as a dependency:
npm install --save-dev @11ty/eleventy
If you open package.json you should see the following:
…
"devDependencies": {
  "@11ty/eleventy": "^0.6.0"
}
…
We can now run Eleventy from the command line using NPM’s npx command. For example, to convert the README.md file to HTML, we can run the following:
npx eleventy --input=README.md --formats=md
This command will generate a rendered HTML file at _site/README/index.html. Eleventy uses the same default name for its output directory (_site) as Jekyll, a pattern we will see repeatedly during the transition.
Configuration
Whereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we’ll see later on.
We’ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:
module.exports = function(eleventyConfig) {
  return {
    dir: {
      input: "./",      // Equivalent to Jekyll's source property
      output: "./_site" // Equivalent to Jekyll's destination property
    }
  };
};
A few other things to bear in mind:
Whereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).
By default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you’ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins.
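For example, here’s a sketch of how you might hand Eleventy a configured markdown-it instance using its setLibrary method (the choice of the markdown-it-footnote plugin is an assumption, purely for illustration):

const markdownIt = require("markdown-it");
const markdownItFootnote = require("markdown-it-footnote"); // hypothetical plugin choice

module.exports = function(eleventyConfig) {
  // Configure markdown-it with inline HTML enabled and footnote support,
  // then hand the instance to Eleventy
  const markdownLibrary = markdownIt({ html: true }).use(markdownItFootnote);
  eleventyConfig.setLibrary("md", markdownLibrary);

  return {
    dir: {
      input: "./",
      output: "./_site"
    }
  };
};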
Layouts
One area where Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).
Wanting to keep our layouts together, we’ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy’s config:
module.exports = function(eleventyConfig) {
  // Aliases are in relation to the _includes folder
  eleventyConfig.addLayoutAlias('about', 'layouts/about.html');
  eleventyConfig.addLayoutAlias('book', 'layouts/book.html');
  eleventyConfig.addLayoutAlias('default', 'layouts/default.html');

  return {
    dir: {
      input: "./",
      output: "./_site"
    }
  };
};
Determining which template language to use
Eleventy will transform Markdown (.md) files using Liquid by default, but we’ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.
Variables
On the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.
Site variables
Alongside build settings, Jekyll lets you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:
title: ""Markdown Guide""
url: https://www.markdownguide.org
baseurl: """"
repo: http://github.com/mattcone/markdown-guide
comments: false
author:
name: ""Matt Cone""
og_locale: ""en_US""
Eleventy’s configuration uses JavaScript, which isn’t intended to act as a store for values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.
{
  "title": "Markdown Guide",
  "url": "https://www.markdownguide.org",
  "baseurl": "",
  "repo": "http://github.com/mattcone/markdown-guide",
  "comments": false,
  "author": {
    "name": "Matt Cone"
  },
  "og_locale": "en_US"
}
Page variables
The table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:
Jekyll → Eleventy
page.url → page.url
page.date → page.date
page.path → page.inputPath
page.id → page.outputPath
page.name → page.fileSlug
page.content → content
page.title → title
page.foobar → foobar
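As a quick illustration, a Liquid template might mix the two like so (the markup is illustrative):

<h1>{{ title }}</h1>
<p>Written on {{ page.date }}, output to {{ page.outputPath }}</p>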
When iterating through pages, frontmatter values are available via the data object while content is available via templateContent:
Jekyll → Eleventy
item.url → item.url
item.date → item.date
item.path → item.inputPath
item.name → item.fileSlug
item.id → item.outputPath
item.content → item.templateContent
item.title → item.data.title
item.foobar → item.data.foobar
Ideally the discrepancy between the page and item variables will be resolved in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.
Pagination variables
Whereas Jekyll’s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:
Jekyll → Eleventy
paginator.page → pagination.pageNumber
paginator.per_page → pagination.size
paginator.posts → pagination.items
paginator.previous_page_path → pagination.previousPageHref
paginator.next_page_path → pagination.nextPageHref
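To see these in context, here’s a sketch of Eleventy pagination combining front matter and template code (the collections.posts collection is an assumption):

---
pagination:
  data: collections.posts
  size: 5
---
{% for post in pagination.items %}
  {{ post.data.title }}
{% endfor %}
<a href="{{ pagination.nextPageHref }}">Older posts</a>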
Filters
Although Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few — more than can be covered by this article — but you can replicate them by using Eleventy’s addFilter configuration option. Let’s convert two used by our Markdown Guide: jsonify and where.
The jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:
// {{ variable | jsonify }}
eleventyConfig.addFilter('jsonify', function (variable) {
  return JSON.stringify(variable);
});
Jekyll’s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:
{{ site.members | where: "graduation_year","2014" }}
To account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:
// {{ array | where: key,value }}
eleventyConfig.addFilter('where', function (array, key, value) {
  return array.filter(item => {
    const keys = key.split('.');
    const reducedKey = keys.reduce((object, key) => {
      return object[key];
    }, item);
    return (reducedKey === value ? item : false);
  });
});
There’s quite a bit going on within this filter, but I’ll try to explain. Essentially we’re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it’s removed. Phew!
Includes
As with filters, Jekyll provides a set of tags that aren’t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you’re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:
{% include include.html value="key" %}
{{ include.value }}
in Eleventy, you would do this:
{% include ""include.html"", value: ""key"" %}
{{ value }}
A downside of Shopify’s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.
Tweaking Liquid
You may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS’s dynamicPartials option to false. Additionally, Eleventy doesn’t support the include_relative tag, meaning you can’t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option.
Thankfully, Eleventy allows us to pass options to LiquidJS:
eleventyConfig.setLiquidOptions({
  dynamicPartials: false,
  root: [
    '_includes',
    '.'
  ]
});
Collections
Jekyll’s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.
Collections in Jekyll
In Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. Our Markdown Guide has two collections:
collections:
  - basic-syntax
  - extended-syntax
These correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:
{% for syntax in site.extended-syntax %}
  {{ syntax.title }}
{% endfor %}
Collections in Eleventy
There are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tags property in content files:
---
title: Strikethrough
syntax-id: strikethrough
syntax-summary: "~~The world is flat.~~"
tags: extended-syntax
---
We can then iterate over tagged content like this:
{% for syntax in collections.extended-syntax %}
  {{ syntax.data.title }}
{% endfor %}
Eleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):
eleventyConfig.addCollection('basic-syntax', collection => {
  return collection.getFilteredByGlob('_basic-syntax/*.md');
});

eleventyConfig.addCollection('extended-syntax', collection => {
  return collection.getFilteredByGlob('_extended-syntax/*.md');
});
We can extend this further. For example, say we wanted to sort a collection by the display_order property in our document’s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript’s sort method to sort the result:
eleventyConfig.addCollection('example', collection => {
  return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {
    return a.data.display_order - b.data.display_order;
  });
});
Hopefully, this gives you just a hint of what’s possible using this approach.
Using directory data to manage defaults
By default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:
---
title: Lists
syntax-id: lists
api: "no"
permalink: /basic-syntax/lists.html
---
This is probably not something we want to manage on a file-by-file basis, but again Eleventy has features that can help: directory data and permalink variables.
For example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:
{
  "layout": "syntax",
  "tags": "basic-syntax",
  "permalink": "basic-syntax/{{ title | slug }}.html"
}
However, Markdown Guide doesn’t publish syntax examples at individual permanent URLs; it merely uses content files to store data. So let’s change things around a little. No longer tied to Jekyll’s rules about where collection folders should be saved and how they should be labelled, we’ll move them into a folder called _content:
markdown-guide
└── _content
├── basic-syntax
├── extended-syntax
├── getting-started
└── _content.json
We will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:
{
  "permalink": false
}
Static files
Eleventy only transforms files whose template language it’s familiar with. But often we may have static assets that don’t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:
module.exports = function(eleventyConfig) {
  …

  // Copy the `assets` directory to the compiled site folder
  eleventyConfig.addPassthroughCopy('assets');

  return {
    dir: {
      input: "./",
      output: "./_site"
    },
    passthroughFileCopy: true
  };
};
Final considerations
Assets
Unlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts — we have plenty of choices in that department already. If you’ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, which sadly are beyond the scope of this article.
Publishing to GitHub Pages
One of the benefits of Jekyll is its deep integration with GitHub Pages. Publishing an Eleventy-generated site — or any site not built with Jekyll — to GitHub Pages is more involved: typically you copy the generated site to the gh-pages branch, or include that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It’s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged, such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere!
Going one louder
If you’ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder why it can be prudent to avoid jumping aboard bandwagons.
While it’s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy’s appeal, it’s only a year old so has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it.
I moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that’s perfectly fine!
But these go to 11.
¹ Information provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5.

Clip Paths Know No Bounds
Dan Wilson

CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance.
The basics of a clip path
Before we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everywhere in the element that is not within the bounds of our shape will be visually removed.
Using the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely’s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element’s dimensions.
See the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen.
So for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines.
clip-path: polygon(
30% 0%,
70% 0%,
100% 30%,
100% 70%,
70% 100%,
30% 100%,
0% 70%,
0% 30%
);
A shape with fewer vertices than the eye can see
It’s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box — or more specifically when we think outside the range of 0% - 100%.
Our element’s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element.
See the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen.
By going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons.
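As a sketch of the idea (the values are illustrative), here is a three-point polygon whose vertices all sit outside the box; because only the overlap with the element’s box is visible, it displays as a hexagon:

/* Three vertices, all beyond the 0%–100% range… */
clip-path: polygon(50% -50%, 150% 120%, -50% 120%);
/* …but the visible area (the intersection with the box) has six sides */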
Our earlier octagon can similarly be made with only four points.
See the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen.
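One way to do this (a sketch; the exact values depend on how deep you want the corner cuts) is a four-point ‘diamond’ whose vertices overshoot the box. Its intersection with the element’s box reproduces the same octagon we defined earlier with eight points:

clip-path: polygon(
  50% -20%,  /* above the box */
  120% 50%,  /* right of the box */
  50% 120%,  /* below the box */
  -20% 50%   /* left of the box */
);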
Multiple shapes, one clip path
We can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon().
See the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen.
Depending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element’s box, we can draw extra lines to help us get where we need to go next as needed.
It can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles.
See the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen.
Different shapes with fill rules
A polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification — the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.
See the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen.
As lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path’s lines, the point is considered outside, and if it crosses an odd number the point is inside.
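For example, a five-pointed star can be drawn by connecting every second point of a pentagon so the lines cross. With the default nonzero rule the centre is filled; switching to evenodd leaves it hollow, because a ray from the central pentagon crosses two lines on its way out (the coordinates below are approximate):

/* A self-intersecting star outline with a hollow centre */
clip-path: polygon(evenodd,
  50% 0%,
  21% 90%,
  98% 35%,
  2% 35%,
  79% 90%
);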
Order of operations
It is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more.
These compositing effects are applied in the order:
CSS Filters (e.g. filter: blur(2px))
Clipping (e.g. what this article is about)
Masking (Clipping’s cousin)
Blend Modes (e.g. mix-blend-mode: multiply)
Opacity
This means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element’s box edges.
See the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen.
If we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally.
Revealing content with animation
CSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points need to be the same for each keyframe, as well as the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values.
See the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen.
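Here’s a sketch of such a reveal (the class and animation names are illustrative): both keyframes use four points and the default fill rule, with the starting polygon collapsed against the left edge:

.reveal {
  /* Fully clipped to start: all four points sit on the left edge */
  clip-path: polygon(0% 0%, 0% 0%, 0% 100%, 0% 100%);
  animation: wipe-in 1s ease-out forwards;
}

@keyframes wipe-in {
  to {
    /* The same four points, now covering the whole element */
    clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%);
  }
}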
Don’t keep CSS Shapes in a box
Clip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible, which makes this property fairly powerful.

What I Learned in Six Years at GDS
Anna Shipman

When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public.
Lots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned.
1. What is the user need?
The main discipline I learned from my time at GDS was to always ask ‘what is the user need?’ It’s very easy to build something that seems like a good idea, but until you’ve identified what problem you are solving for the user, you can’t be sure that you are building something that is going to help solve an actual problem.
A really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a “where’s my stuff” for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what’s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres.
The project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users’ needs. At the end of this discovery, the team realised that a status tracker wasn’t the way to address the problem. As they wrote in this blog post:
Status tracking tools are often just ‘channel shift’ for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place.
What would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it’s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes.
Making sure you know your user
At GDS user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you’d observe users working with things you’d built, but even if they weren’t, it was an incredibly valuable experience, and something you should seek out if you are able to.
Even if we think we understand our users very well, it is very enlightening to see how users actually use your stuff. Partly because in technology we tend to be power users and the average user doesn’t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged.
User needs is not just about building things
Asking the question “what is the user need?” really helps focus on why you are doing what you are doing. It keeps things on track, and helps the team think about what the actual desired end goal is (and should be).
Thinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What’s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change and any areas you need particular help on with the review.
Or you are writing an email to a colleague. What’s the user need? What are you hoping the reader will learn, understand or do as a result of your email?
2. Make things open: it makes things better
The second important thing I learned at GDS was ‘make things open: it makes things better’. This works on many levels: being open about your strategy, blogging about what you are doing and what you’ve learned (including mistakes), and – the part that I got most involved in – coding in the open.
Talking about your work helps clarify it
One thing we did really well at GDS was blogging – a lot – about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you’re doing it; and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions.
It’s also really valuable to blog about what you’ve learned, especially if you’ve made a mistake. It makes sure you’ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning; which helps make things better.
Coding in the open has a lot of benefits
In my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed).
When I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people’s minds at rest about these things is why it’s crucial to have good standards around communication and positive behaviour - even a critical code review should be considerately given).
But I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won’t be looking) makes everyone raise their game slightly. The very fact that you know it’s open, makes you make it a bit better.
It helps with lots of other things as well, for example it makes it easier to collaborate with people and share your work. And now that I’ve left GDS, it’s so useful to be able to look back at code I worked on to remember how things worked.
Share what you learn
It’s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practice. It helps you clarify your thoughts and follow through on what you’ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well.
3. Do the hard work to make it simple (tech edition)
‘Start with user needs’ and ‘Make things open: it makes things better’ are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: ‘Do the hard work to make it simple’, and specifically, how this manifests itself in the way we build technology.
At GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages is taken very seriously. There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make a commit message clearer.
We worked very hard on making pull requests good, keeping the reviewer in mind and making it clear to the user how best to review it.
Reviewing others’ pull requests is the highest priority so that no-one is blocked, and teams have screens showing the status of open pull requests (using fourth wall) and we even had a ‘pull request seal’, a bot that publishes pull requests to Slack and gets angry if they are uncommented on for more than two days.
Making it easier for developers to support the site
Another example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone went to to be open and inclusive to developers.
The team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and also patiently explaining things whenever other devs in similar positions came with questions.
The main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual which detailed what the alert meant and some suggested actions that could be taken to address it.
This was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they’d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day.
Developers are users too
Doing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work.
These three principles help us make great things
I learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing. I learned about the importance of good comms, working late responsibly, and the value of content design.
And the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I’ve talked about here: think about the user need, make things open, and do the hard work to make it simple.

Inclusive Considerations When Restyling Form Controls
Scott O'Hara

I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility.
I would like to begin by saying all these things. However, they’re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I’m thinking of next year.
Returning to reality, styling form controls these days is more tricky and time-consuming than flat-out “hard”. In fact, depending on the length of the styling leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match.
However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people the experiences they deserve.
Quick level setting
Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following:
Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels.
It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics.
Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing.
Visually resetting and restyling form controls
Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors.
As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation.
See the Pen form control styling comparisons by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/gZOrZm/
Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated.
If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browsers respect without you even realizing.
You ever try playing with the accessibility settings of your display on macOS, or similar Windows setting?
It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements:
Non-standard
This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future.
While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all.
Can’t make it? Fake it.
Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended.
Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you’ll need to take a different approach in styling.
Getting clever with markup and CSS
The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox.
See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/RqEayN/
Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen?
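One pattern that can mitigate this (a sketch, not the only approach; the selectors and sizes are illustrative) is to keep the native input at a touchable size and merely transparent, layered in the same spot as its styled stand-in, rather than shrinking it to a pixel or positioning it off screen:

/* The native checkbox stays in the layout at a touchable size,
   visually transparent rather than removed */
input[type="checkbox"] {
  position: absolute;
  opacity: 0;
  width: 1.5em;
  height: 1.5em;
  margin: 0;
}

/* The styled stand-in is drawn where the transparent input sits */
input[type="checkbox"] + label::before {
  content: "";
  display: inline-block;
  width: 1.5em;
  height: 1.5em;
  margin-right: 0.5em;
  border: 0.125em solid currentColor;
  vertical-align: middle;
}

input[type="checkbox"]:checked + label::before {
  background-color: currentColor;
}

/* Recreate the focus state the browser would otherwise draw */
input[type="checkbox"]:focus + label::before {
  outline: 0.125em solid;
  outline-offset: 0.125em;
}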
As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others.
Limitations to cleverness
Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to.
However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider. Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach.
Using ARIA when appropriate
Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we’re not leveraging a native HTML control as our foundation, it doesn’t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA).
ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren’t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA:
If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible.
Exceptions to the rule
One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control.
Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized.
While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain which communicate a different way of interacting with the control than what you might expect from a native switch control.
See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/XyvoeE/
By adding role="switch" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, along with its inherent ability to be focused with the Tab key and toggled with the Space key.
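In markup, that might look something like this sketch (the id and label text are illustrative):

<!-- A native checkbox announced as a switch: Tab still focuses it,
     Space still toggles it, and role="switch" changes what ATs report -->
<input type="checkbox" role="switch" id="notifications">
<label for="notifications">Enable notifications</label>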
But while this is a valid approach to take in building a switch, how does this actually match up to reality?
Does it pass the test(s)?
Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we’ve restyled or rebuilt works the way people expect it to, if not better.
What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example:
Is the control implemented in all supported browsers?
If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field?
If so: is each browser’s implementation a good user experience? Is there room for improvement that can be tested against the native baseline?
Test with multiple input devices.
Where the control is implemented, what is the quality of the user experience when using different input devices, such as a mouse, touchscreen, keyboard, speech recognition software or switch device, to name a few?
You’ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or may not even be keyboard accessible. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well.
How well does the control take to custom styles?
If a control can be styled enough to not need to be rebuilt from scratch, that’s great! But make sure that there are no adverse effects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling.
Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled.
Do specifications match reality?
If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations?
For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don’t match people’s expectations, then maybe another solution is in order?
Wrapping up
While styling form controls is definitely easier than it’s ever been, that doesn’t mean that it’s at all simple, nor will it likely ever be. The level of difficulty you’re going to face is going to depend entirely on what it is you’re hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you’ll still be reliant on browsers and assistive technologies being able to fully understand the component they’ve been presented.
Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option.
2018 didn’t end up being the year we got full customization of form controls sorted out. But that’s OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs.
And hey. There’s always next year, right?

Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist
Natalie Downe

We’re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.
*(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!)
Looking for critters in rocky intertidal habitats
One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’.
A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.)
They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons.
My favourite place to go tidepooling here is Pillar Point in Half Moon Bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, with varied types of water-coverage habitat zones, as well as being relatively accessible.
I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist.
Using technology and the crowdsourced species observations of the iNaturalist community we can shortcut our way to this superpower!
Finding nearby animals with iNaturalist
We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.
I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting.
This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.
iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the ‘Try it out’ button.
You’ll then get a URL that looks a bit like
https://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at
which you can call and interrogate using a programming language of your choice.
If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).
Rapid development using Observable Notebooks
We’re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn’t require any setup at all.
You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example.
Each ‘notebook’ consists of a page with a column of ‘cells’, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction one from Observable (and D3) creator Mike Bostock.
Developing your Naturalist superpowers
If you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you.
For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.
We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com
The tool lets you export the bounding box in several formats using the dropdown at the bottom left under the map. We are going to use the ‘DublinCore’ format as it’s closest to the format needed by the iNaturalist API.
westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811
A quick map primer:
The higher the latitude the more north it is
The lower the latitude the more south it is
Latitude 0 = the equator
The higher the longitude the more east it is of Greenwich
The lower the longitude the more west it is of Greenwich
Longitude 0 = Greenwich
In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:
nelat = highest latitude = north limit = 37.499811
nelng = highest longitude = east limit = -122.492738
swlat = smallest latitude = south limit = 37.492805
swlng = smallest longitude = west limit = -122.50542
As API parameters these look like this:
?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542
These parameters in this format can be used for most of the iNaturalist API methods.
Nudibranch seasonality in Pillar Point
We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.
In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist’s internal number for the Nudibranch taxon. By using this we can get all species which are under the parent ‘Order Nudibranchia’.
Another useful piece of naturalist knowledge is understanding the biological classification scheme of taxonomic rank. Roughly, when a species has a Latin name of two words, e.g. ‘Glaucus atlanticus’, the first word is the genus, like a family name (‘Glaucus’), and the second word identifies that particular species, like a given name (‘atlanticus’).
The two Latin words together indicate a specific species. The term we use colloquially to refer to a type of animal often differs wildly from region to region, and sometimes the same common name in two countries can refer to two different species. The common names for Glaucus atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using the Latin name format instead.
The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:
pillar_point_counts_per_week = fetch(
  "https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true"
).then(response => {
  return response.json();
})
Our next step is to take this data and draw a graph! We’ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks.
(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)
The iNaturalist API returns data that looks like this:
{
  "total_results": 53,
  "page": 1,
  "per_page": 53,
  "results": {
    "week_of_year": {
      "1": 136,
      "2": 20,
      "3": 150,
      "4": 65,
      "5": 186,
      "6": 74,
      "7": 47,
      "8": 87,
      "9": 64,
      "10": 56,
      ...
But for our Vega-Lite graph we need data that looks like this:
[{
  "week": "01",
  "value": 136
}, {
  "week": "02",
  "value": 20
}, ...]
We can convert what we get back from the API to the second format using a loop that iterates over the object keys:
objects_to_plot = {
  let objects = [];
  Object.keys(pillar_point_counts_per_week.results.week_of_year).forEach(function(week_index) {
    objects.push({
      week: `Wk ${week_index}`,
      observations: pillar_point_counts_per_week.results.week_of_year[week_index]
    });
  });
  return objects;
}
We can then plug this into Vega-Lite to draw us a graph:
vegalite({
  data: {values: objects_to_plot},
  mark: "bar",
  encoding: {
    x: {field: "week", type: "nominal", sort: null},
    y: {field: "observations", type: "quantitative"}
  },
  width: width * 0.9
})
It’s worth noting that we have a lot of observations of Nudibranchs at Pillar Point, due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences.
So, what if we want to look at the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create an object with the iNaturalist species ID and the common and Latin names.
pillar_point_nudibranches = {
  let api_results = await fetch(
    "https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true"
  ).then(r => r.json());
  let species_list = api_results.results.map(i => ({
    value: i.taxon.id,
    label: `${i.taxon.preferred_common_name} (${i.taxon.name})`
  }));
  return species_list;
}
We can create an interactive select box by importing code from Jeremy Ashkenas’ Observable Notebook: add import {select} from "@jashkenas/inputs" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn’t matter. If a cell is referenced by any other cell, then whenever it updates, all the cells that depend on it refresh themselves. You can also import and reference one notebook from another!
viewof select_species = select({
  title: "Which Nudibranch do you want to see seasonality for?",
  options: [{value: "", label: "All the Nudibranchs!"}, ...pillar_point_nudibranches],
  value: ""
})
Then we go back to our old favourite, the histogram API, just like before – only this time we are calling it with the value created by our select box, ${select_species}, as the taxon_id instead of the number 47113.
pillar_point_counts_per_month_per_species = fetch(
`https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`
).then(r => r.json())
Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite:
objects_to_plot_species_month = {
  let objects = [];
  Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).forEach(function(month_index) {
    objects.push({
      month: (new Date(2018, (month_index - 1), 1)).toLocaleString("en", {month: "long"}),
      observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]
    });
  });
  return objects;
}
(Note that in the above code we are creating a date object for our specific month, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month.)
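For example (a quick sketch you can try in any JavaScript console):
// month_index "3" means March, because JavaScript Dates count months from 0
new Date(2018, 3 - 1, 1).toLocaleString("en", {month: "long"}); // "March"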
And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!
vegalite({
  data: {values: objects_to_plot_species_month},
  mark: "bar",
  encoding: {
    x: {field: "month", type: "nominal", sort: null},
    y: {field: "observations", type: "quantitative"}
  },
  width: width * 0.9
})
Now we can see the best time of year to plan a tidepooling trip to Pillar Point if we want to find a specific species of Nudibranch.
This tool is great for planning when to go rockpooling at Pillar Point, but what if you are going this month and want to pre-train your eye with what to look for, in order to impress your friends with your knowledge of Nudibranchs?
Well… we can create ourselves a dynamic guide, with a list of the species, their photos, names and how many times they have been observed in that month of the year!
Our select box this time is simpler than before, assigning the month value to the variable selected_month.
viewof selected_month = select({
  title: "When do you want to see Nudibranchs?",
  options: [
    { label: "Whenever", value: "" },
    { label: "January", value: "1" },
    { label: "February", value: "2" },
    { label: "March", value: "3" },
    { label: "April", value: "4" },
    { label: "May", value: "5" },
    { label: "June", value: "6" },
    { label: "July", value: "7" },
    { label: "August", value: "8" },
    { label: "September", value: "9" },
    { label: "October", value: "10" },
    { label: "November", value: "11" },
    { label: "December", value: "12" },
  ],
  value: ""
})
We can then use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We’ll be able to reference this response object and its values later with the variable we just created, e.g. all_species_data.results[0].taxon.name.
all_species_data = fetch(
`https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`
).then(r => r.json())
You can render HTML directly in a notebook cell using Observable’s html tagged template literal:
html`If you go to Pillar Point ${
  {"": "",
   "1": "in January",
   "2": "in February",
   "3": "in March",
   "4": "in April",
   "5": "in May",
   "6": "in June",
   "7": "in July",
   "8": "in August",
   "9": "in September",
   "10": "in October",
   "11": "in November",
   "12": "in December",
  }[selected_month]
} you might see…
${all_species_data.results.map(s => `
  ${s.taxon.name}
  Seen ${s.count} times
`)}`
These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month!
Play with it yourself in this Observable Notebook.
Conclusion
I hope that by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks, and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower.
Lastly, I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.
Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.
The (Switch)-Case for State Machines in User Interfaces
David Khourshid

You’re tasked with creating a login form. Email, password, submit button, done.
“This will be easy,” you think to yourself.
Login form by Selecto
You’ve made similar forms many times in the past; it’s essentially muscle memory at this point. You’re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you’ll have to translate the pixels to meaningful, responsive CSS values, but that’s the least of your problems.
As you’re writing up the HTML structure and CSS layout and styles for this form, you realize that you don’t know what the successful “logged in” page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.
What if login fails? Where do those errors show up?
Should we show errors differently if the user forgot to enter their email, or password, or both?
Or should the submit button be disabled?
Should we validate the email field?
When should we show validation errors – as they’re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)
When should the errors disappear?
What do we show during the login process? Some loading spinner?
What if loading takes too long, or a server error occurs?
Many more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.
Modeling Behavior
Describing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story – they often leave out potential edge-cases or small yet important bits of information.
However, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.
The general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:
start - not submitted yet
loading - submitted and logging in
success - successfully logged in
error - login failed
Additionally, we can describe an application as accepting a finite number of events – that is, all the possible events that can be “sent” to the application, either from the user or some other external entity:
SUBMIT - pressing the submit button
RESOLVE - the server responds, indicating that login is successful
REJECT - the server responds, indicating that login failed
Then, we can combine these states and events to describe the transitions between them. That is, when the application is in one state and an event occurs, we can specify what the next state should be:
From the start state, when the SUBMIT event occurs, the app should be in the loading state.
From the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.
If login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.
From the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.
Otherwise, if any other event occurs, don’t do anything and stay in the same state.
That’s a pretty thorough description, similar to a user story! It’s also a bit more symbolic than a user story (e.g., “when the SUBMIT event occurs” instead of “when the user presses the submit button”), and that’s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:
Every state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.
From Visuals to Code
Drawing a state machine doesn’t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn’t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.
With the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements:
function loginMachine(state, event) {
  switch (state) {
    case 'start':
      if (event === 'SUBMIT') {
        return 'loading';
      }
      break;
    case 'loading':
      if (event === 'RESOLVE') {
        return 'success';
      } else if (event === 'REJECT') {
        return 'error';
      }
      break;
    case 'success':
      // Accept no further events
      break;
    case 'error':
      if (event === 'SUBMIT') {
        return 'loading';
      }
      break;
    default:
      // This should never occur
      return undefined;
  }
  // Otherwise, if any other event occurs, stay in the same state
  return state;
}
console.log(loginMachine('start', 'SUBMIT'));
// => 'loading'
This is fine (I suppose) but personally, I find it much easier to use objects:
const loginMachine = {
  initial: "start",
  states: {
    start: {
      on: { SUBMIT: 'loading' }
    },
    loading: {
      on: {
        REJECT: 'error',
        RESOLVE: 'success'
      }
    },
    error: {
      on: {
        SUBMIT: 'loading'
      }
    },
    success: {}
  }
};
function transition(state, event) {
  const transitions = loginMachine.states[state].on || {}; // Look up the state's transitions ('success' has none)
  return transitions[event] // Look up the next state based on the event
    || state;               // If not found, stay in the current state
}
console.log(transition('start', 'SUBMIT'));
// => 'loading'
As you might have noticed, loginMachine is a plain JavaScript object, and can be written in JSON. This is important because it allows the machine to be visualized by a third-party tool, as demonstrated here:
A Common Language Between Designers and Developers
Although finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can:
identify all possible states, and potentially missing states
describe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)
eliminate impossible states and identify states that are “unreachable” (have no entry transition) or “sunken” (have no exit transition)
add features with full confidence of knowing what other states they might affect
simplify redundant states or complex user flows
create test paths for almost every possible user flow, and easily identify edge cases
collaborate better by understanding the entire application model equally.
Not a New Idea
I’m not the first to suggest that state machines can help bridge the gap between design and development.
Vince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines
User flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.
In 1999, Ian Horrocks wrote a book titled “Constructing the User Interface with Statecharts”, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today.
More than a decade earlier, David Harel published “Statecharts: A Visual Formalism for Complex Systems”, in which the statechart - an extended hierarchical state machine model - is born.
State machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:
Visualized modeling
Precise diagrams
Automatic code generation
Comprehensive test coverage
Accommodation of late-breaking requirements changes
Moving Forward
It’s time we improved how we communicate between designers and developers, and, just as importantly, the way we develop UIs, to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:
The World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications
The Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling
I gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.
I have also been working on XState, which is a library for “state machines and statecharts for the modern web”. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).
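As a taste, here’s a minimal sketch of the login machine running under XState’s v4-style API (my own code, not an excerpt from the library docs):
// The machine config mirrors the loginMachine object from earlier.
import { Machine, interpret } from 'xstate';

const loginMachine = Machine({
  id: 'login',
  initial: 'start',
  states: {
    start:   { on: { SUBMIT: 'loading' } },
    loading: { on: { RESOLVE: 'success', REJECT: 'error' } },
    error:   { on: { SUBMIT: 'loading' } },
    success: { type: 'final' }
  }
});

// interpret() creates a running service that tracks the current state for us.
const service = interpret(loginMachine)
  .onTransition(state => console.log(state.value)) // logs 'start', then each new state
  .start();

service.send('SUBMIT'); // => 'loading'
service.send('REJECT'); // => 'error'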
I’m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.
Mistletoe Offline
Jeremy Keith

It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.
I am, of course, talking about Murphy, and the golden rule he gave unto us:
Anything that can go wrong will go wrong.
So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit?
But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong.
I guess there’s nothing we can do about those particular situations, right?
Wrong!
A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.
If you’ve got a custom 404 page, why not make a custom offline page too?
Get your server in order
Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.
If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.
Make an offline page
Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference.
When you’re done, publish the offline page at a suitably imaginative URL, like, say, /offline.html.
Pre-cache your offline page
Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:
addEventListener('install', installEvent => {
// put your instructions here.
}); // end addEventListener
In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      return JohnnyCache.addAll([ // return the promise so waitUntil waits for caching to finish
        '/offline.html'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener
I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      return JohnnyCache.addAll([
        '/offline.html',
        '/path/to/stylesheet.css',
        '/path/to/javascript.js',
        '/path/to/image.jpg'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener
Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.
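If you’d rather not have one bad URL spoil the whole batch, here’s a more tolerant variation (my own sketch, not part of the article’s approach) that caches each file individually and ignores any that fail:
// Cache files one at a time, swallowing individual failures,
// instead of addAll's all-or-nothing behaviour.
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      return Promise.all(
        ['/offline.html', '/path/to/stylesheet.css', '/path/to/image.jpg']
        .map( url => JohnnyCache.add(url).catch( () => null ) )
      );
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener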
Intercept requests
The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:
addEventListener('fetch', fetchEvent => {
// What happens next is up to you!
}); // end addEventListener
Let’s write a fairly conservative script with the following logic:
Whenever a file is requested,
First, try to fetch it from the network,
But if that doesn’t work, try to find it in the cache,
But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead.
Here’s how that translates into JavaScript:
// Whenever a file is requested
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
fetchEvent.respondWith(
// First, try to fetch it from the network
fetch(request)
.then( responseFromFetch => {
return responseFromFetch;
}) // end fetch.then
// But if that doesn't work
.catch( fetchError => {
// try to find it in the cache
caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
return responseFromCache;
// But if that doesn't work
} else {
// and it's a request for a web page
if (request.headers.get('Accept').includes('text/html')) {
// show the custom offline page instead
return caches.match('/offline.html');
} // end if
} // end if/else
}) // end match.then
}) // end fetch.catch
); // end respondWith
}); // end addEventListener
I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas.
Hook up your service worker script
You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling in to every page on your site, or add this in a script element at the end of every page’s HTML:
if (navigator.serviceWorker) {
navigator.serviceWorker.register('/serviceworker.js');
}
That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.
You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted.
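(An aside from me, not the article: browsers will let you widen a service worker’s scope explicitly, but only if your server also sends a Service-Worker-Allowed header for the script, which is why the root-directory approach is the simpler one.)
// A sketch: this only works if the response serving serviceworker.js
// includes the header "Service-Worker-Allowed: /".
navigator.serviceWorker.register('/assets/js/serviceworker.js', { scope: '/' });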
Go further!
Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!
This is just the beginning. You can do more with service workers.
What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version.
Or, what if instead of reaching out the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.
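Here’s a rough sketch of that cache-first idea for images (my own code, reusing the ‘Johnny’ cache from earlier; treat it as a starting point rather than part of the article):
// Serve images from the cache if we can, and refresh the cached copy
// from the network in the background either way.
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('image')) {
    fetchEvent.respondWith(
      caches.match(request)
      .then( responseFromCache => {
        const fetchPromise = fetch(request)
        .then( responseFromFetch => {
          const copy = responseFromFetch.clone(); // a response body can only be read once
          caches.open('Johnny')
          .then( JohnnyCache => JohnnyCache.put(request, copy) );
          return responseFromFetch;
        });
        // Prefer the (blazingly fast) cached copy; fall back to the network
        return responseFromCache || fetchPromise;
      }) // end match.then
    ); // end respondWith
  } // end if
}); // end addEventListener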
So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript.
Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action.
Designing Your Future
Christopher Murphy

I’ve had the pleasure of working for a variety of clients – both large and small – over the last 25 years. In addition to my work as a design consultant, I’ve worked as an educator, leading the Interaction Design team at Belfast School of Art, for the last 15 years.
In July, 2018 – frustrated with formal education, not least the ever-present hand of ‘austerity’ that has ravaged universities in the UK for almost a decade – I formally reduced my teaching commitment, moving from a full-time role to a half-time role.
Making the move from a (healthy!) monthly salary towards a position as a freelance consultant is not without its challenges: one month your salary’s arriving in your bank account (and promptly disappearing to pay all of your bills); the next month, that salary’s been drastically reduced. That can be a shock to the system.
In this article, I’ll explore the challenges encountered when taking a life-changing leap of faith. To help you confront ‘the fear’ – the nervousness, the sleepless nights and the ever-present worry about paying the bills – I’ll provide a set of tools that will enable you to take a leap of faith and pursue what deep down drives you.
In short: I’ll bare my soul and share everything I’m currently working on to – once and for all – make a final bid for freedom.
This isn’t easy. I’m sharing my innermost hopes and aspirations, and I might open myself up to ridicule, but I believe that by doing so, I might help others, by providing them with tools to help them make their own leap of faith.
The power of visualisation
As designers we have skills that we use day in, day out to imagine future possibilities, which we then give form. In our day-to-day work, we use those abilities to design products and services, but I also believe we can use those skills to design something every bit as important: ourselves.
In this article I’ll explore three tools that you can use to design your future:
Product DNA
Artefacts From the Future
Tomorrow Clients
Each of these tools is designed to help you visualise your future. By giving that future form, and providing a concrete goal to aim for, you put the pieces in place to make that future a reality.
Brian Eno – the noted musician, producer and thinker – states, “Humans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds.” Eno helpfully provides a powerful example:
When Martin Luther King said, “I have a dream,” he was inviting others to dream that dream with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it.
The dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real.
When you imagine your future – designing an alternate, imagined reality in your mind – you begin the process of making that future real.
Product DNA
The first tool, which I use regularly – for myself and for client work – is a tool called Product DNA. The intention of this tool is to identify beacons from which you can learn, helping you to visualise your future.
We all have heroes – individuals or organisations – that we look up to. Ask yourself, “Who are your heroes?” If you had to pick three, who would they be and what could you learn from them? (You probably have more than three, but distilling down to three is an exercise in itself.)
Earlier this year, when I was putting the pieces in place for a change in career direction, I started with my heroes. I chose three individuals that inspired me:
Alan Moore: the author of ‘Do Design: Why Beauty is Key to Everything’;
Mark Shayler: the founder of Ape, a strategic consultancy; and
Seth Godin: a writer and educator I’ve admired and followed for many years.
Looking at each of these individuals, I ‘borrowed’ a little DNA from each of them. That DNA helped me to paint a picture of the kind of work I wanted to do and the direction I wanted to travel.
Moore’s book - ‘Do Design’ – had a powerful influence on me, but the primary inspiration I drew from him was the sense of gravitas he conveyed in his work. Moore’s mission is an important one and he conveys that with an appropriate weight of expression.
Shayler’s work appealed to me for its focus on equipping big businesses with a startup mindset. As he puts it: “I believe that you can do the things that you do better.” That sense – of helping others to be their best selves – appealed to me.
Finally, the words Godin uses to describe himself – “An Author, Entrepreneur and Most of All, a Teacher” – resonated with me. The way he positions himself, as, “most of all, a teacher,” gave me the belief I needed that I could work as an educator, but beyond the ivory tower of academia.
I’ve been exploring each of these individuals in depth, learning from them and applying what I learn to my practice. They don’t all know it, but they are all ‘mentors from afar’.
In a moment of serendipity – and largely, I believe, because I’d used this tool to explore his work – I was recently invited by Alan Moore to help him develop a leadership programme built around his book.
The key lesson here is that not only has this exercise helped me to design my future and give it tangible form, it’s also led to a fantastic opportunity to work with Alan Moore, a thinker who I respect greatly.
Artefacts From the Future
The second tool, which I also use regularly, is a tool called ‘Artefacts From the Future’. These artefacts – especially when designed as ‘finished’ pieces – are useful for creating provocations to help you see the future more clearly.
‘Artefacts From the Future’ can take many forms: they might be imagined magazine articles, news items, or other manifestations of success. By imagining these end points and giving them form, you clarify your goals, establishing something concrete to aim for.
Earlier this year I revisited this tool to create a provocation for myself. I’d just finished Alla Kholmatova’s excellent book on ‘Design Systems’, which I would recommend highly. The book wasn’t just filled with valuable insights, it was also beautifully designed.
Once I’d finished reading Kholmatova’s book, I started thinking: “Perhaps it’s time for me to write a new book?” Using the magic of ‘Inspect Element’, I created a fictitious page for a new book I wanted to write: ‘Designing Delightful Experiences’.
I wrote a description for the book, considering how I’d pitch it.
This imagined page was just what I needed to paint a picture in my mind of a possible new book. I contacted the team at Smashing Magazine and pitched the idea to them. I’m happy to say that I’m now working on that book, which is due to be published in 2019.
Without this fictional promotional page from the future, the book would have remained as an idea – loosely defined – rolling around my mind. By spending some time, turning that idea into something ‘real’, I had everything I needed to tell the story of the book, sharing it with the publishing team at Smashing Magazine.
Of course, they could have politely informed me that they weren’t interested, but I’d have lost nothing – truly – in the process.
As designers, creating these imaginary ‘Artefacts From the Future’ is firmly within our grasp. All we need to do is let go a little and allow our imaginations to wander.
In my experience, working with clients and – to a lesser extent, students – it’s the ‘letting go’ part that’s the hard part. It can be difficult to let down your guard and share a weighty goal, but I’d encourage you to do so. At the end of the day, you have nothing to lose.
The key lesson here is that your ‘Artefacts From the Future’ will focus your mind. They’ll transform your unformed ideas into ‘tangible evidence’ of future possibilities, which you can use as discussion points and provocations, helping you to shape your future reality.
Tomorrow Clients
The third tool, which I developed more recently, is a tool called ‘Tomorrow Clients’. This tool is designed to help you identify a list of clients that you aspire to work with.
The goal is to pinpoint who you would like to work with – in an ideal world – and define how you’d position yourself to win them over. Again, this involves ‘letting go’ and allowing your mind to imagine the possibilities, asking, “What if…?”
Before I embarked upon the design of my new website, I put together a ‘soul searching’ document that acted as a focal point for my thinking. I contacted a number of designers for a second opinion to see if my thinking was sound.
One of my graduates – Chris Armstrong, the founder of Niice – replied with the following: “Might it be useful to consider five to ten companies you’d love to work for, and consider how you’d pitch yourself to them?”
This was just the provocation I needed. To add a little focus, I reduced the list to three, asking: “Who would my top three clients be?”
By distilling the list down I focused on who I’d like to work for and how I’d position myself to entice them to work with me. My list included: IDEO, Adobe and IBM. All are companies I admire and I believed each would be interesting to work for.
This exercise might – on the surface – appear a little like indulging in fantasy, but I believe it helps you to clarify exactly what it is you are good at and, just as importantly, put that into words.
For each company, I wrote a short pitch outlining why I admired them and what I thought I could add to their already existing skillset.
Focusing first on Adobe, I suggested establishing an emphasis on educational resources, designed to help those using Adobe’s creative tools to get the most out of them.
A few weeks ago, I signed a contract with the team working on Adobe XD to create a series of ‘capsule courses’, focused on UX design. The first of these courses – exploring UI design – will be out in 2019.
I believe that Armstrong’s provocation – asking me to shift my focus from clients I have worked for in the past to clients I aspire to work for in the future – made all the difference.
The key lesson here is that this exercise encouraged me to raise the bar and look to the future, not the past. In short, it enabled me to proactively design my future.
In closing…
I hope these three tools will prove a welcome addition to your toolset. I use them when working with clients, I also use them when working with myself.
I passionately believe that you can design your future. I also firmly believe that you’re more likely to make that future a reality if you put some thought into defining what it looks like.
As I say to my students and the clients I work with: it’s not enough to want to be a success; the word ‘success’ is too vague to be a destination. A far better approach is to define exactly what success looks like.
The secret is to visualise your future in as much detail as possible. With that future vision in hand as a map, you give yourself something tangible to translate into a reality.
The Art of Mathematics: A Mandala Maker Tutorial
Hagar Shilo

In front-end development, there’s often a great deal of focus on tools that aim to make our work more efficient. But what if you’re new to web development? When you’re just starting out, the amount of new material can be overwhelming, particularly if you don’t have a solid background in Computer Science. But the truth is, once you’ve learned a little bit of JavaScript, you can already make some pretty impressive things.
A couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days:
Mandala Maker user interface
The coolest part about it is the fact that it’s a tool: anyone can use it to create something original and brand new.
In this tutorial, we’ll build a smaller version of this app – a symmetrical drawing tool in ES5 JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we’re done, you’re on your own and can tweak it as you please. Be creative!
Preparations: a blank canvas
The first thing you’ll need for this project is a designated drawing space. We’ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like).
Files
Create 3 files: index.html, styles.css, main.js. Don’t forget to include your JS and CSS files in your HTML.
<canvas width="600" height="600">
  Your browser doesn't support canvas.
</canvas>
I’ll ask you to update your HTML file at a later point, but the CSS file we’ll start with will stay the same throughout the project. This is the full CSS we are going to use:
body {
background-color: #ccc;
text-align: center;
}
canvas {
touch-action: none;
background-color: #fff;
}
button {
font-size: 110%;
}
Next steps
We are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts:
Building a simple drawing app with one line and one color
Adding a Clear button and a color picker
Adding more functionality: 2 line drawing (add the first reflection)
Adding more functionality: 8 line drawing (add 6 more reflections!)
Interactive demos
This tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet.
Part 1: A simple drawing app
Let’s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables:
var canvas, context, w, h,
prevX = 0, currX = 0, prevY = 0, currY = 0,
draw = false;
The variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY) repeatedly many times while we move the pointer upon the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true.
1. init
Responsible for canvas set up, this listens to pointer events and the location of their coordinates and sets everything in motion by calling other functions, which in turn handle touch and movement events.
function init() {
canvas = document.querySelector("canvas");
context = canvas.getContext("2d");
w = canvas.width;
h = canvas.height;
canvas.onpointermove = handlePointerMove;
canvas.onpointerdown = handlePointerDown;
canvas.onpointerup = stopDrawing;
canvas.onpointerout = stopDrawing;
}
2. drawLine
This is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial.
lineWidth and lineCap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY).
function drawLine() {
var a = prevX,
b = prevY,
c = currX,
d = currY;
context.lineWidth = 4;
context.lineCap = "round";
context.beginPath();
context.moveTo(a, b);
context.lineTo(c, d);
context.stroke();
context.closePath();
}
3. stopDrawing
This is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout).
function stopDrawing() {
draw = false;
}
4. recordPointerLocation
This tracks the pointer’s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it.
function recordPointerLocation(e) {
prevX = currX;
prevY = currY;
currX = e.clientX - canvas.offsetLeft;
currY = e.clientY - canvas.offsetTop;
}
5. handlePointerMove
This is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it.
function handlePointerMove(e) {
if (draw) {
recordPointerLocation(e);
drawLine();
}
}
6. handlePointerDown
This is set by init to run when the pointer is down (a finger is on the touchscreen, or the mouse is clicked). If so, it calls recordPointerLocation to get the path and sets draw to true. That’s because we only want movement events from handlePointerMove to cause drawing if the pointer is down.
function handlePointerDown(e) {
recordPointerLocation(e);
draw = true;
}
Finally, we have a working drawing app. But that’s just the beginning!
See the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen.
Part 2: Add a Clear button and a color picker
Now we’ll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear.
<canvas width="600" height="600">
  Your browser doesn't support canvas.
</canvas>
<div class="menu">
  <input type="color" class="color">
  <button type="button" class="clear">Clear</button>
</div>
Color picker
This is our new color picker function. It targets the input element by its class and gets its value.
function getColor() {
  return document.querySelector(".color").value;
}
Up until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We’ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.
function drawLine() {
  //...code...
  context.strokeStyle = getColor();
  context.lineWidth = 4;
  context.lineCap = "round";
  //...code...
}
Clear button
This is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.
function clearCanvas() {
  if (confirm("Want to clear?")) {
    context.clearRect(0, 0, w, h);
  }
}
The method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner.
If we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up.
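For example:
// Clear only the left half of the canvas, from the origin to (w/2, h)
context.clearRect(0, 0, w / 2, h);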
Let’s add this event handler to init:
function init() {
  //...code...
  canvas.onpointermove = handlePointerMove;
  canvas.onpointerdown = handlePointerDown;
  canvas.onpointerup = stopDrawing;
  canvas.onpointerout = stopDrawing;
  document.querySelector(".clear").onclick = clearCanvas;
}
See the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.
Part 3: Draw with 2 lines
It’s time to make a line appear where no pointer has gone before. A ghost line!
For that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it’s going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn’t matter which one we choose. Let’s go with the x-axis.
Here is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!).
Now, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.
A sketch illustrating the mathematics of reflecting a point.
What happens to the x coordinates?
The variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them “the x coordinates”. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c.
What happens to the y coordinates?
What about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height.
This is the new code for the app’s variables and the two lines: the one that fills the pointer’s path and the one mirroring it across the x-axis.
function drawLine() {
var a = prevX, a_ = a,
b = prevY, b_ = h-b,
c = currX, c_ = c,
d = currY, d_ = h-d;
//... code ...
// Draw line #1, at the pointer's location
context.moveTo(a, b);
context.lineTo(c, d);
// Draw line #2, mirroring the line #1
context.moveTo(a_, b_);
context.lineTo(c_, d_);
//... code ...
}
In case this was too abstract for you, let’s look at some actual numbers to see how this works.
Let’s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.
So b' = 10 - 2 = 8 and d' = 10 - 3 = 7.
We use the top and the left as references. For the y coordinates this means we count from the top: 8 from the top is also 2 from the bottom, and similarly, 7 from the top is 3 from the bottom of the canvas. That’s it, really. This is how a single point is reflected, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that behave just like points.
If you are still confused, I don’t blame you.
Here is the result. Draw something and see what happens.
See the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.
Part 4: Draw with 8 lines
I have made yet another confusing sketch, with points C and D, so you understand what we’re trying to do. Later on we’ll look at points E, F, G and H as well. The circled point is the one we’re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas.
A sketch illustrating points C and D.
This is the part where the math gets a bit mathier, as our drawLine function evolves further. We’ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let’s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).
function drawLine() {
//... code ...
// Reassign values
a_ = w-a; b_ = b;
c_ = w-c; d_ = d;
// Draw the 3rd line
context.moveTo(a_, b_);
context.lineTo(c_, d_);
// Reassign values
a_ = w-a; b_ = h-b;
c_ = w-c; d_ = h-d;
// Draw the 4th line
context.moveTo(a_, b_);
context.lineTo(c_, d_);
//... code ...
What is happening?
You might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That’s because we want the symmetry to hold for a rectangular canvas as well, and this way it will.
Also, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It’s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous).
What happens to the x coordinates?
As you recall, our x coordinates are a (prevX) and c (currX).
For the third line we are adding, a' = w - a and c' = w - c, which means…
For the fourth line, the same thing happens to our x coordinates a and c.
What happens to the y coordinates?
As you recall, our y coordinates are b (prevY) and d (currY).
For the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis.
For the fourth line, b' = h - b and d' = h - d, which we’ve seen before: that’s a reflection across the x-axis.
We have four more lines, or locations, to define. Note: the part of the code that’s responsible for drawing a micro-line between the newly calculated coordinates is always the same:
context.moveTo(a_, b_);
context.lineTo(c_, d_);
We can leave it out of the next code snippets and just focus on the calculations, i.e, the reassignments.
Once again, we need some concrete examples to see where we’re going, so here’s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code.
A sketch illustrating points E and F.
This is the code for E and F:
// Reassign for 5
a_ = w/2+h/2-b; b_ = w/2+h/2-a;
c_ = w/2+h/2-d; d_ = w/2+h/2-c;
// Reassign for 6
a_ = w/2+h/2-b; b_ = h/2-w/2+a;
c_ = w/2+h/2-d; d_ = h/2-w/2+c;
Their x coordinates are identical and their y coordinates are reversed to one another.
This one will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).
A sketch illustrating points G and H.
This is the code:
// Reassign for 7
a_ = w/2-h/2+b; b_ = w/2+h/2-a;
c_ = w/2-h/2+d; d_ = w/2+h/2-c;
// Reassign for 8
a_ = w/2-h/2+b; b_ = h/2-w/2+a;
c_ = w/2-h/2+d; d_ = h/2-w/2+c;
//...code...
}
Once again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won’t go into the full details, since this has been a long enough journey as it is, and I think we’ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.
I hope you had fun learning! This is our final app:
See the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.
Surviving—and Thriving—as a Remote Worker
Mel Choyce

Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there’s an entire world of talent out there?
I’ve been working remotely, full-time, for five and a half years. I’ve reached the point where I can’t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I’ve grown attached to my current level of freedom and flexibility.
However, it took me a lot of trial and error to reach success as a remote worker — and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn’t for everyone, but most people, with enough effort, can make it work — or even thrive. Here’s what I’ve learned in over five years of working remotely.
Experiment with your environment
As a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space — whether that’s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch… but I’ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn’t completely destroy your eyesight.
Working remotely doesn’t always mean working from home. If you don’t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful if you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don’t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you’re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls.
Co-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They’re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am an impulsive and impatient person. When I want a book now, I mean now.)
Just be polite — make sure your headphones don’t leak, and don’t work from a library if you have a day full of calls.
Remember, too, that you don’t have to stay in the same spot all day. It’s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don’t force yourself to sit at your desk for eight hours if that doesn’t work for you.
Set boundaries
If you’re a workaholic, working remotely can be a challenge. It’s incredibly easy to just… work. All the time. My work computer is almost always with me. If I remember at 11pm that I wanted to do something, there’s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it’s time to figure this out, eh?
Learning how to set boundaries is one of the most important lessons I’ve learned working remotely. (And honestly, it’s something I still struggle with).
For a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I need to react to, and then rolling over to my couch with my computer. Suddenly, it’s noon, I’m unwashed, unfed, starting to get a headache, and wondering why suddenly I hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot.
I recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time.
I too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can’t get sucked into work before I’m even dressed. I’ve gotten pretty good at forbidding myself from working until I’m ready, but building any new habit requires intentionality.
Something else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn’t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I’ve heard other coworkers have figured out ways to work through these technical issues, so I’m hoping to give it another try soon.
You might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It’s true! If you’re a more disciplined person, you might not need any of these coping mechanisms. If you’re struggling, finding ways to subvert your own bad habits can be the difference between thriving and burning out.
Create intentional transition time
I know it’s a stereotype that people who work from home stay in their pajamas all day, but… sometimes, it’s very easy to do. I’ve found that in order to reach peak focus, I need to create intentional transition time.
The most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it’s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort.
I’ve found it helpful to take similar steps at the end of my day. If I’ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch.
This is another reason working from outside your home is advantageous. Commutes, even if it’s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen — not next to your computer.
Embrace async
If you’re used to working in an office, you’ve probably gotten pretty used to being able to pop over to a colleague’s desk if you need to ask a question. They’re pretty much forced to engage with you at that point. When you’re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary’s in Australia and it’s 3am there right now.
For many remote workers, that’s part of the package. When you’re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers.
Asynchronous communication is great. Not everyone can be present for every meeting or office conversation — and the same goes for working remotely. However, when you’re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (“p2s”) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase — “p2 or it didn’t happen.”
Working remotely has made me a better communicator largely because I’ve gotten into the habit of making written updates. I’ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it!
(That said, if you’re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.)
Seek out social opportunities
Even introverts can feel lonely or isolated. When you work remotely, there isn’t a built-in community you’re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide.
I have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.)
Every now and then, I’ll also hop on a video call with some work friends and just hang out for a little while. It feels great to actually see someone laugh.
If you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work.
If you don’t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you’ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can’t fall back on your work to provide community, you need to build your own.
These are tips that I’ve found help me, but not everyone works the same way. Remember that it’s okay to experiment — just because you’ve worked one way, doesn’t mean that’s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment!
Hope to see you all online soon 🙌",,261,0
262,Be the Villain,Eric Bailey,"Inclusive Design is the practice of making products and services accessible to, and usable by as many people as reasonably possible without the need for specialized accommodations. The practice was popularized by author and User Experience Design Director Kat Holmes. If getting you to discover her work is the only thing this article succeeds in doing then I’ll consider it a success.
As a framework for creating resilient solutions to problems, Inclusive Design is incredible. However, the aimless idealistic aspirations many of its newer practitioners default to can oftentimes run into trouble. Without outlining concrete, actionable outcomes that are then vetted by the people you intend to serve, there is the potential to do more harm than good.
When designing, you take a user flow and make sure it can’t be broken. Ensuring that if something is removed, it can be restored. Or that something editable can also be updated at a later date—you know, that kind of thing. What we want to do is avoid surprises. Much like a water slide with a section of pipe missing, a broken flow forcibly ejects a user, to great surprise and frustration. Interactions within a user flow also have to be small enough to be self-contained, so as to avoid creating a none pizza with left beef scenario.
Lately, I’ve been thinking about how to expand on this practice. Watertight user flows make for a great immediate experience, but it’s all too easy to miss the forest for the trees when you’re a product designer focused on cranking out features.
What I’m proposing is that while trying to envision how a user flow could be broken, you also think about how it could be subverted. In addition to preventing the removal of a section of water slide, you also keep someone from mugging the user when they shoot out the end.
If you pay attention, you’ll start to notice this subversion with increasing frequency:
Domestic abusers using internet-controlled devices to spy on and control their partner.
Zealots tanking a business’ rating on Google because its owners spoke out against unchecked gun violence.
Forcing people to choose between TV and stalking, because the messaging center portion of a cable provider’s entertainment package lacks muting or blocking features.
White supremacists tricking celebrities into endorsing anti-Semitic conspiracy theories.
Facebook repeatedly allowing housing, credit, and employment advertisers to discriminate against users by their race, ability, and religion.
White supremacists also using a video game chat service as a recruiting tool.
The unchecked harassment of minors on Instagram.
Swatting.
If I were to guess why we haven’t heard more about this problem, I’d say that optimistically, people have settled out of court. Pessimistically, it’s most likely because we ignore, dismiss, downplay, and suppress those who try to bring it to our attention.
Subverted design isn’t the practice of employing Dark Patterns to achieve your business goals. If you are not familiar with the term, Dark Patterns are the use of cheap user interface tricks and psychological manipulation to get users to act against their own best interests. User Experience consultant Chris Nodder wrote Evil By Design, a fantastic book that unpacks how to detect and think about them, if you’re interested in this kind of thing.
Subverted design also isn’t beholden design, or simple lack of attention. This phenomenon isn’t even necessarily premeditated. I think it arises from naïve (or willfully ignorant) design decisions being executed at a historically unprecedented pace and scale. These decisions are then preyed on by the shrewd and opportunistic, used to control and inflict harm on the undeserving. Have system, will game.
This is worth discussing. As the field of design continues to industrialize empathy, it also continues to ignore the very established practice of threat modeling. Most times, framing user experience in terms of how to best funnel people into a service comes with an implicit agreement that the larger system that necessitates the service is worth supporting.
To achieve success in the eyes of their superiors, designers may turn to emotional empathy exercises. By projecting themselves into the perceived surface-level experiences of others, they play-act at understanding how to nudge their targeted demographics into a conversion funnel. This roleplaying exercise has the effect of scoping concerns to the immediate, while simultaneously reinforcing the idea of engagement at all cost within the identified demographic.
The thing is, pure engagement leaves the door wide open for bad actors. Even within the scope of a limited population, the assumption that everyone entering into the funnel is acting with good intentions is a poor one. Security researchers, network administrators, and other professionals who practice threat modeling understand that the opposite is true. By preventing everyone save for well-intentioned users from operating a system within the parameters you set for them, you intentionally limit the scope of abuse that can be enacted.
Don’t get me wrong: being able to escort as many users as you can to the happy path is a foundational skill. But we should also be having uncomfortable conversations about why something unthinkable may in fact not be.
They’re not going to be fun conversations. It’s not going to be easy convincing others that these aren’t paranoid delusions best tucked out of sight in the darkest, dustiest corner of the backlog. Realistically, talking about it may even harm your career.
But consider the alternative. The controlled environment of the hypothetical allows us to explore these issues without propagating harm. Better to be viewed as the office’s resident villain than to have to live with the real-world consequences.
If the past few years have taught us anything, it’s that the choices we make—or avoid making—have consequences. Design has been doing a lot of growing up as of late, including waking up to the idea that technology isn’t neutral.
You’re going to have to start thinking the way a monster does—if you can imagine it, chances are someone else can as well. To get into this kind of mindset, inverting the Inclusive Design Principles is a good place to start:
Providing a comparable experience becomes forcing a single path.
Considering situation becomes ignoring circumstance.
Being consistent becomes acting capriciously.
Giving control becomes removing autonomy.
Offering choice becomes limiting options.
Prioritizing content becomes obfuscating purpose.
Adding value becomes filling with gibberish.
Combined, these inverted principles start to paint a picture of something we’re all familiar with: a half-baked, unscrupulous service that will jump at the chance to take advantage of you. This environment is also a perfect breeding ground for spawning bad actors.
These kinds of services limit you in the ways you can interact with them. They kick you out or lock you in if you don’t meet their unnamed criteria. They force you to parse layout, prices, and policies that change without notification or justification. Their controls operate in ways that are unexpected and may shift throughout the experience. Their terms are dictated to you, gaslighting you to extract profit. Heaps of jargon and flashy, unnecessary features are showered on you to distract from larger structural and conceptual flaws.
So, how else can we go about preventing subverted design? Marli Mesibov, Content Strategist and Managing Editor of UX Booth, wrote a brilliant article about how to use Dark Patterns for good—perhaps the most important takeaway being admitting you have a problem in the first place.
Another exercise is asking the question, “What is the evil version of this feature?” Ask it during the ideation phase. Ask it as part of acceptance criteria. Heck, ask it over lunch. I honestly don’t care when, so long as the question is actually raised.
In keeping with the spirit of this article, we can also expand on this line of thinking. Author, scientist, feminist, and pacifist Ursula Franklin urges us to ask, “Whose benefits? Whose risks?” instead of “What benefits? What risks?” in her talk, When the Seven Deadly Sins Became the Seven Cardinal Virtues. Inspired by the talk, Ethan Marcotte discusses how this relates to the web platform in his powerful post, Seven into seven.
Few things in this world are intrinsically altruistic or good—it’s just the nature of the beast. However, that doesn’t mean we have to stand idly by when harm is created. If we can add terms like “anti-pattern” to our professional vocabulary, we can certainly also incorporate phrases like “abuser flow.”
Design finally got a seat at the table. We should use this newfound privilege wisely. Listen to women. Listen to minorities, listen to immigrants, the unhoused, the less economically advantaged, and the less technologically-literate. Listen to the underrepresented and the underprivileged.
Subverted design is a huge problem, likely one that will never completely go away. However, the more of us who put the hard work into being the villain, the more we can lessen the scope of its impact.",,262,0
263,Securing Your Site like It’s 1999,Katie Fenn,"Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.
As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.
What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.
Bad input validation: Trusting anything the user sends you
Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.
One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong.
One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile.
Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each-other under the cover of complete anonymity. What happened?
There aren’t any official rules for developing software on the web. But if there were, my golden rule would be:
Never trust user input. Ever.
Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.
Validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.
Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed to access. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner.
Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.
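As a rough sketch of what that can look like on the server (using Express here purely for illustration; the route, field name, and length limit are all assumptions):
const express = require('express');
const app = express();
app.use(express.json());

app.post('/profile', (req, res) => {
  const { displayName } = req.body;
  // Check the type and length of the value before trusting it
  if (typeof displayName !== 'string' || displayName.length > 50) {
    return res.status(400).send('Invalid display name');
  }
  // Identify the user from the server-side session, never from a
  // hidden form field (session middleware omitted for brevity)
  const userId = req.session.userId;
  // ...check that userId may edit this profile, then save it...
  res.sendStatus(204);
});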
SQL injection: Allowing the user to run their own database queries
A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.
One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.
It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?
' OR '1'='1
SQL is a computer language that is used to query databases. When you fill out a login form, your username and your password are usually inserted into an SQL query like this:
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Alice'
AND PASSWORD='hunter2'
This query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead!
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Admin'
AND PASSWORD='' OR '1'='1'
Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!
So how can you protect yourself from SQL injection?
Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.js has the knex package. Alternatively, you can use an ORM tool, such as Propel or Sequelize.
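For illustration, here’s a minimal sketch of the login check rewritten with knex (the table and column names are assumptions, and a real application would store password hashes rather than plaintext):
const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL
});

async function countMatchingUsers(username, password) {
  // knex sends username and password as bound parameters, so input
  // like ' OR '1'='1 is treated as data, never as part of the SQL
  const rows = await knex('users')
    .count('* as count')
    .where({ username, password });
  return Number(rows[0].count);
}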
Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!
Cross site request forgery: Getting other users to do your dirty work for you
Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.
Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky…
Let’s just say there are older and fouler things than Rick Astley in the dark places of the web.
What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it:
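The image markup looked something like this (reconstructed here from public accounts of the attack; the movie ID is a placeholder):
<img src="http://www.netflix.com/AddToQueue?movieid=12345" alt="" />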
Did you notice the source URL of the image? It’s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge!
This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from.
The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be programmatically set.
Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies.
You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.
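In Express, for example, that might look like this (the cookie name and value are placeholders):
// Sends: Set-Cookie: session=abc123; Path=/; HttpOnly; Secure; SameSite=Strict
res.cookie('session', 'abc123', {
  httpOnly: true,
  secure: true,
  sameSite: 'strict'
});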
Cross site scripting: Someone else’s code running on your website
In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.
Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then…
Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject <a> and <div> tags into his headline, but <script> was filtered. MySpace wasn’t about to let someone else run their own code on MySpace.
Intrigued, Samy set about finding out exactly what he could do with <a> and <div> tags. He found that you could add style properties to <div> tags to style them with CSS.
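The payload looked something like this (reconstructed from public accounts of the Samy worm, with alert standing in for the real code):
<div style="background:url('javascript:alert(1)')">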
This code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from <div> tags.
Samy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace’s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code.
alert(document.body.innerHTML)
Samy wondered if he could inspect the page’s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too.
alert(eval('document.body.inne' + 'rHTML'))
This isn’t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onReadyStateChange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends’ pages.
One final obstacle stood in his way. The same origin policy is a security mechanism that prevents scripts hosted on one domain interacting with sites hosted on another domain.
if (location.hostname == 'profile.myspace.com') {
document.location = 'http://www.myspace.com'
+ location.pathname + location.search
}
Samy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser’s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace’s API. Samy installed this code on his profile page, and he waited.
Over the course of the next day, over a million people unwittingly installed Samy’s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy’s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to 90 days of community service.
This is the power of installing a little bit of JavaScript on someone else’s website. It is called cross site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people.
So how can you help protect yourself from cross-site scripting?
Always sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don’t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down.
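A minimal example of sanitize-html at work (the whitelist here is just an illustration):
const sanitizeHtml = require('sanitize-html');

const userInput = '<img src=x onerror=alert(1)>Hello <b>world</b>';
const clean = sanitizeHtml(userInput, {
  // Strip everything except a small whitelist of harmless tags
  allowedTags: ['b', 'i', 'em', 'strong', 'a'],
  allowedAttributes: { a: ['href'] }
});
// clean is now: Hello <b>world</b>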
You can also use an auto-escaping templating language to make sure nobody else’s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use.
You should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function.
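A policy is delivered as an HTTP response header; a minimal example (the CDN domain is a placeholder) might look like this:
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com
Because script-src here doesn’t include 'unsafe-eval', this policy also blocks eval().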
For content not under your control, consider setting up sub-resource integrity protection. This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing.
npm audit: Protecting yourself from code you don’t own
JavaScript and npm run the modern web. Together, they make it easy to take advantage of the world’s largest public registry of open source software. How do you protect yourself from code written by someone you’ve never met? Enter npm audit.
npm audit reviews the security of your website’s dependency tree. You can start using it by upgrading to the latest version of npm:
npm install npm -g
npm audit
When you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed.
If your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What’s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you!
Securing your site like it’s 2019
The truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes.
However, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies.
As we roll over into 2019, let’s take the opportunity to reflect on the security of the code we write and be grateful for everything we’ve learned in the last twenty years.",,263,0
264,Dynamic Social Sharing Images,Drew McLellan,"Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you’d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days!
With so many social channels and a proliferation of content and promotions flying past in everyone’s streams, it’s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader’s attention if you’re trying to get as many eyes as possible on that sweet, sweet social content.
One of the best ways to grab attention with your posts or tweets is to include an image. There’s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures anywhere from 35% to 150% improvement just from having an image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.
So without hard stats to quote, we’ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.
Adding sharing images
The process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.
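For example, the Open Graph image tag read by Facebook, with Twitter’s equivalent card tag alongside it, looks something like this (the URL is a placeholder):
<meta property="og:image" content="https://example.com/images/sharing.png">
<meta name="twitter:card" content="summary_large_image">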
There’s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.
This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don’t necessarily have an image? One approach is to use stock photography, but that’s not going to be right for every situation.
This was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don’t usually lend themselves directly to being the main ‘hero’ image for an article.
Putting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.
One of the hand-made sharing images from 2017
Each day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.
Hatching a new plan
One initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.
The more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!
In fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn’t this just be another page on the site generated by the CMS?
Breaking it down, I figured the steps needed would be something like:
Create the CSS to lay out a component that could be turned into an image
Add a new field to articles in the CMS to hold a handpicked quote
Build a new article template in the CMS to output the author name and quote dynamically for any article
… um … screenshot?
I thought I’d get cracking and see if I could figure out the final steps later.
Building the page
The first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul.
Paul’s code uses a fixed dimension container in the HTML, set to 600 × 315px. This is to make it the correct aspect ratio for Facebook’s recommended image size. It’s useful to remember here that it doesn’t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser.
With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul’s markup.
I also added the quote as a new field on the article in the CMS, so each ‘image’ could be quickly and easily customised in the editing process.
I added a new field to the article template to capture the quote.
With very little effort, we quickly had a page to dynamically generate our ‘image’ right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.
Our automatically generated layout direct from the CMS
It soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that’s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought… how could I automatically take a screenshot?
Enter Puppeteer
Puppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It’s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.
Headless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or… get this… rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.
Using Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it’s actually Puppeteer’s ‘hello world’ example.
First you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don’t need to worry about installing that. At the command line:
npm i puppeteer
Then save a new file as example.js with this code:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
await browser.close();
})();
and then run it using Node:
node example.js
This will output an image file example.png to disk, which contains a screenshot of, in this case https://example.com. The logic of the code is reasonably easy to follow:
Launch a browser
Open up a new page
Goto a URL
Take a screenshot
Close the browser
The async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That’s useful with actions like loading a web page that might take some time to complete. They’re used with Promises, and the effect is to make asynchronous code behave as if it’s synchronous. You can read more about async and await at MDN if you’re interested.
That’s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there?
Our content is up in the corner of the image with lots of empty space.
That’s not great. It’s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 × 600, so we need to find out how to adjust that. Fortunately, the docs aren’t the worst and I was able to find the page.setViewport() method pretty easily.
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');
await page.setViewport({
width: 600,
height: 315
});
await page.screenshot({path: 'example.png'});
await browser.close();
})();
This worked! The screenshot is now 600 × 315 as expected. That’s exactly what we asked for. Trouble is, that’s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.
await page.setViewport({
width: 600,
height: 315,
deviceScaleFactor: 2
});
Perfect! We now have a programmatic way of turning a URL into an image.
Improving the script
Rather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site’s build process to automate the generation of the image.
Our goal is to call the script like this:
node shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/
And I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.
// Get the URL and the slug segment from it
const url = process.argv[2];
const segments = url.split('/');
// Get the second-to-last segment (the slug)
const slug = segments[segments.length-2];
We can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url + 'sharing');
await page.setViewport({
width: 600,
height: 315,
deviceScaleFactor: 2
});
await page.screenshot({path: slug + '.png'});
await browser.close();
})();
Once you’re generating the image with Node, there’s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project.
You can also run optimisations on the file. I’m using imagemin with pngquant to reduce the file size a little.
const imagemin = require('imagemin');
const imageminPngquant = require('imagemin-pngquant');
await imagemin([slug + '.png'], 'build', {
plugins: [
imageminPngquant({quality: '75-90'})
]
});
You can see the completed example as a gist.
Integrating it with your CMS
So we now have a command we can run to take a URL and generate a custom image for that URL. It’s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you’re using, but it’s likely not too hard as long as you can run a command as part of the process.
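As one hypothetical sketch, a Node-based publishing hook could simply shell out to the script (onArticlePublished is an invented name; shoot-sharing-image.js is the script from earlier):
const { execFile } = require('child_process');

function onArticlePublished(articleUrl) {
  // Generate the sharing image as soon as the article goes live
  execFile('node', ['shoot-sharing-image.js', articleUrl], (error) => {
    if (error) {
      console.error('Sharing image generation failed:', error);
    }
  });
}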
For 24 ways this year, I’ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I’ll look to automate this one step further.
We may also look at having a few slightly different layouts to choose from, so that each day isn’t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there’s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!
By using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we’ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images.
I hope you’ll give it a try!",,264,0
265,Designing for Perfection,Greg Wood,"Hello, 24 ways readers. I hope you’re having a nice run up to Christmas. This holiday season I thought I’d share a few things with you that have been particularly meaningful in my work over the last year or so. They may not make you wet your santa pants with new-idea-excitement, but in the context of 24 ways I think they may serve as a nice lesson and a useful seasonal reminder going into the New Year. Enjoy!
Story
Despite being a largely scruffy individual for most of my life, I had some interesting experiences regarding kitchen tidiness during my third year at university.
As a kid, my room had always been pretty tidy, and as a teenager I used to enjoy reordering my CDs regularly (by artist, label, colour of spine – you get the picture); but by the time I was twenty I’d left most of these traits behind me, mainly due to a fear that I was turning into my mother. The one anally retentive part of me that remained, however, lived in the kitchen. For some reason, I couldn’t let all the pots and crockery be strewn across the surfaces after cooking. I didn’t care if they were washed up or not, I just needed them tidied. The surfaces needed to be continually free of grated cheese, breadcrumbs and ketchup spills. Also, the sink always needed to be clear. Always. Even a lone teabag, discarded casually into the sink hours previously, would give me what I used to refer to as “kitchen rage”.
Whilst this behaviour didn’t cause any direct conflicts, it did often create weirdness. We would be happily enjoying a few pre-night out beverages (Jack Daniels and Red Bull – nice) when I’d notice the state of the kitchen following our round of customized 49p Tesco pizzas. Kitchen rage would ensue, and I’d have to blitz the kitchen, which usually resulted in me having to catch everyone up at the bar afterwards.
One evening as we were just about to go out, I was stood there, in front of the shithole that was our kitchen with the intention of cleaning it all up, when a realization popped into my head. In hindsight, it was a pretty obvious one, but it went along the lines of “What the fuck are you doing? Sort your life out”. I sodded the washing up, rolled out with my friends, and had a badass evening of partying.
After this point, whenever I got the urge to clean the kitchen, I repeated that same realization in my head. My tidy kitchen obsession strived for a level of perfection that my housemates just didn’t share, so it was ultimately pointless. It didn’t make me feel that good, either; it was like having a cigarette after months of restraint – initially joyous but soon slightly shameful.
Lesson
Now, around seven years later, I’m a designer on the web and my life is chaotic. It features no planning for significant events, no day-to-day routine or structure, no thought about anything remotely long-term, and I like to think I do precisely what I want. It seems my days of striving for something ordered and tidy, in most parts of my life, are long gone.
For much of my time as a designer, though, it’s been a different story. I relished industry-standard terms such as ‘pixel perfection’ and ‘polished PSDs’, taking them into my stride as I strove to design everything that was put on my plate perfectly. Even down to grids and guidelines, all design elements would be painstakingly aligned to a five-pixel grid. There were no seven-pixel margins or gutters to be found in my design work, that’s for sure. I put too much pride and, inadvertently, too much ego into my work. Things took too long to create, and because of the amount of effort put into the work, significant changes, based on client feedback for example, were more difficult to stomach.
Over the last eighteen months I’ve made a conscious effort to change the way I approach designing for the web. Working on applications has probably helped with this; they seem to have a more organic development than rigid content-based websites. Mostly though, a realization similar to my kitchen rage one came about when I had to make significant changes to a painstakingly crafted Photoshop document I had created. The changes shouldn’t have been difficult or time-consuming to implement, but they were turning out to be. One day, frustrated with how long it was taking, the refrain “What the fuck are you doing? Sort your life out” again entered my head. I blazed the rest of the work, not rushing or doing scruffy work, but just not adhering to the insane levels of perfection I had previously set for myself. When the changes were presented, everything went down swimmingly. The client in this case (and I’d argue most cases) cared more about the ideas than the perfect way in which they had been implemented. I had taken myself and my ego out of the creative side of the work, and it had been easier to succeed.
Argument
I know many other designers who work on the web share such aspirations to perfection. I think it’s a common part of the designer DNA, but I’m not sure it really has a place when designing for the web.
First, there’s the environment. The landscape in which we work is continually shifting and evolving. The inherent imperfection of the medium itself makes attempts to create perfect work for it redundant. Whether you consider it a positive or negative point, the products we make are never complete. They’re always scaling and changing.
Like many aspects of web design, this striving for perfection in our design work is a way of thinking borrowed from other design industries where it’s more suited. A physical product cannot be as easily altered or developed after it has been manufactured, so the need to achieve perfection when designing is more apt.
Designers who can relate to anything I’ve talked about can easily let go of that anal retentiveness if given the right reasons to do so. Striving for perfection isn’t a bad thing, but I simply don’t think it can be achieved in such a fast-moving, unique industry. I think design for the web works better when it begins with quick and simple, followed by iteration and polish over time.
To let go of ego and to publish something that you’re not completely happy with is perhaps the most difficult part of the job for designers like us, but it’s followed by a satisfaction of knowing your product is alive and breathing, whereas others (possibly even competitors) may still be sitting in Photoshop, agonizing over whether a margin should be twenty or forty pixels.
I keep telling myself to stop sitting on those two hundred ideas that are all half-finished. Publish them, clean them up and iterate over time. I’ve been telling myself this for months and, hopefully, writing this article will give me the kick in the arse I need. Hopefully, it will also give someone else the same kick.",,265,0
266,Collaborative Development for a Responsively Designed Web,Paul Lloyd,"In responsive web design we’ve found a technique that allows us to design for the web as a medium in its own right: one that presents a fluid, adaptable and ever changing canvas.
Until this point, we gave little thought to the environment in which users will experience our work, caring more about the aggregate than the individual. The applications we use encourage rigid layouts, whilst linear processes focus on clients signing off paintings of websites that have little regard for behaviour and interactions. The handover of pristine, pixel-perfect creations to developers isn’t dissimilar to farting before exiting a crowded lift, leaving front-end developers scratching their heads as they fill in the inevitable gaps. If you haven’t already, I recommend reading Drew’s checklist of things to consider before handing over a design.
Somehow, this broken methodology has survived for the last fifteen years or so. Even the advent of web standards has had little impact. Now, as we face an onslaught of different devices, the true universality of the web can no longer be ignored.
Responsive web design is just the thin end of the wedge. Largely concerned with layout, its underlying philosophy could ignite a trend towards interfaces that adapt to any number of different variables: input methods, bandwidth availability, user preference – you name it!
With such adaptability, a collaborative and iterative process is required. Ethan Marcotte, who worked with the team behind the responsive redesign of the Boston Globe website, talked about such an approach in his book:
The responsive projects I’ve worked on have had a lot of success combining design and development into one hybrid phase, bringing the two teams into one highly collaborative group.
Whilst their process still involved the creation of desktop-centric mock-ups, these were presented to the entire team early on, where questions about how pages might adapt and behave at different sizes were asked. Mock-ups were quickly converted into HTML prototypes, meaning further decisions could be based on usage rather than guesswork (and endless hours spent in Photoshop).
Regardless of the exact process, it’s clear that the relationship between our two disciplines is more crucial than ever. Yet, historically, it seems a wedge has been driven between us – perhaps a result of segregation and waterfall-style processes – resulting in animosity.
So how can we improve this relationship? Ultimately, we’ll need to adapt, but even within existing workflows we can start to overlap. Simply adjusting our attitude can effect change, and bring design and development teams closer together.
Good design is constant contact.
Mark Otto
The way we work needs to be more open and inclusive. For example, ensuring members of the development team attend initial kick-off meetings and design workshops will not only ensure technical concerns are raised, but mean that those implementing our designs better understand the problems we’re trying to solve.
It can also be useful at this stage to explain how you work and the sort of deliverables you expect to produce. This will give developers a chance to make recommendations on how these can be optimized for their own needs.
You may even find opportunities to share the load. On a recent project I worked on, our development partners offered to produce the interactive prototypes needed for user testing. This allowed us to concentrate on refining the experience, whilst they were able to get a head start on building the product.
While developers should be involved at the beginning of projects, it’s also important that designers are able to review and contribute to a product as it’s being built. Any handover should be done in person, and ideally you’ll have a day set aside to do so. Having additional budget available for follow-up design reviews is also recommended. Learning how to use version control tools like Subversion or Git will allow you to work within the same environment as developers, and allow you to contribute code or graphic assets directly to a project if needed.
Don’t underestimate the benefits of designer and developer sitting next to each other. Subtle nuances can be explored far more easily than if they were conducted over email or phone. As Ethan writes, “‘Design’ is the means, not merely the end; the path we walk over the course of a project, the choices we make”.
It’s from collaboration like this that I’ve become fond of producing visual style guides. These demonstrate typographic treatments for common markup and patterns (blockquotes, lists, pagination, basic form controls and so on). Thinking in terms of components rather than individual pages not only fits in better with how a developer will implement a site, but can also ensure your design works as a coherent whole.
Despite the amount of research and design produced, when it comes to the crunch, there will always be a need for compromise. As the old saying goes, ‘fast, cheap and good – pick two.’ It’s important that you know which pieces are crucial to a design and which areas can allow for movement. Pick your battles wisely. Having an agreed set of design principles can be useful when making such decisions, as they help everyone focus on the goals of the project.
The best compromises are reached when both sides understand the issues of the other.
Richard Rutter
Ultimately, better collaboration comes through a shared understanding of the different competencies required to build a website. Instead of viewing ourselves in terms of discrete roles, we should instead look to emphasize our range of abilities, and work with others whose skills are complementary.
Perhaps somebody who actively seeks to broaden their knowledge is the mark of a professional. Seek these people out.
The best developers I’ve worked with have a respect for design, probably having attempted to do some themselves! Having wrangled with a few MySQL databases myself, I certainly believe the obverse is true. While knowing HTML won’t necessarily make you a better designer, it will help you understand the issues being faced by a front-end developer and, more importantly, allow you to offer solutions or alternative approaches.
So take a moment to think about how you work with developers and how you could improve your relationship with them. What are you doing to ease the path towards our collaborative future?",,266,0
267,Taming Complexity,Simon Collison,"I’m going to step into my UX trousers for this one. I wouldn’t usually wear them in public, but it’s Christmas, so there’s nothing wrong with looking silly.
Anyway, to business. Wherever I roam, I hear the familiar call for simplicity and the denouncement of complexity. I read often that the simpler something is, the more usable it will be. We understand that simple is hard to achieve, but we push for it nonetheless, convinced it will make what we build easier to use. Simple is better, right?
Well, I’ll try to explore that. Much of what follows will not be revelatory to some but, like all good lessons, I think this serves as a welcome reminder that as we live in a complex world it’s OK to sometimes reflect that complexity in the products we build.
Myths and legends
Less is more, we’ve been told, ever since master of poetic verse Robert Browning used the phrase in 1855. Well, I’ve conducted some research, and it appears he knew nothing of web design. Neither did modernist architect Ludwig Mies van der Rohe, a later pedlar of this worthy yet contradictory notion. Broad is narrow. Tall is short. Eggs are chips. See: anyone can come up with this stuff.
To paraphrase Einstein, simple doesn’t have to be simpler. In other words, simple doesn’t dictate that we remove the complexity. Complex doesn’t have to be confusing; it can be beautiful and elegant. On the web, complex can be necessary and powerful. A website that simplifies the lives of its users by offering them everything they need in one site or screen is powerful. For some, the greater the density of information, the more useful the site.
In our decision-making process, principles such as Occam’s razor (in a nutshell: simple is better than complex) are useful, but simple is for the user to determine through their initial impression and subsequent engagement. What appears simple to me or you might appear very complex to someone else, based on their own mental model or needs. We can aim to deliver simple, but they’ll be the judge.
As a designer, developer, content alchemist, user experience discombobulator, or whatever you call yourself, you’re often wrestling with a wealth of material, a huge number of features, and numerous objectives. In many cases, much of that stuff is extraneous, and goes in the dustbin. However, it can be just as likely that there’s a truckload of suggested features and content because it all needs to be there. Don’t be afraid of that weight.
In the right hands, less can indeed mean more, but it’s just as likely that less can very often lead to, well… less.
Complexity is powerful
Simple is the ability to offer a powerful experience without overwhelming the audience or inducing information anxiety. Giving them everything they need, without having them ferret off all over a site to get things done, is important.
It’s useful to ask throughout a site’s lifespan, “does the user have everything they need?” It’s so easy to let our designer egos get in the way and chop stuff out, reduce down to only the things we want to see. That benefits us in the short term, but compromises the audience long-term.
The trick is not to be afraid of complexity in itself, but to avoid creating the perception of complexity. Give a user a flight simulator and they’ll crash the plane or jump out. Give them everything they need and more, but make it feel simple, and you’re building a relationship, empowering people.
This can be achieved carefully with what some call gradual engagement, and often the sensible thing might be to unleash complexity in carefully orchestrated phases, initially setting manageable levels of engagement and interaction, gradually increasing the inherent power of the product and fostering an empowered community.
The design aesthetic
Here’s a familiar scenario: the client or project lead gets overexcited and skips most of the important decision-making, instead barrelling straight into a bout of creative direction Tourette’s. Visually, the design needs to be minimal, white, crisp, full of white space, have big buttons, and quite likely be “clean”. Of course, we all like our websites to be clean as that’s more hygienic.
But what do these words even mean, really? Early in a project they’re abstract distractions, unnecessary constraints. This premature narrowing forces us to think much more about throwing stuff out rather than acknowledging that what we’re building is complex, and that many of the components are perhaps necessary.
Simple is not a formula. It cannot be achieved just by using a white background, by throwing things away, or by breathing a bellowsful of air in between every element and having it all float around in space. Simple is not a design treatment. Simple is hard. Simple requires deep investigation, a thorough understanding of every aspect of a project, in line with the needs and expectations of the audience.
Recognizing this helps us empathize a little more with those most vocal of UX practitioners. They usually appreciate that our successes depend on a thorough understanding of the user’s mental models and expected outcomes. I personally still consider UX people to be web designers like the rest of us (mainly to wind them up), but they’re web designers that design every decision, and by putting the user experience at the heart of their process, they have a greater chance of finding simplicity in complexity. The visual design aesthetic — the façade — is only a part of that.
Divide and conquer
I’m currently working on an app that’s complex in architecture, and complex in ambition. We’ll be releasing in carefully orchestrated private phases, gradually introducing more complexity in line with the unavoidably complex nature of the objective, but my job is to design the whole, the complete system as it will be when it’s out of beta and beyond.
I’ve noticed that I’m not throwing much out; most of it needs to be there. Therefore, my responsibility is to consider interesting and appropriate methods of navigation and bring everything together logically.
I’m using things like smart defaults, graphical timelines and colour keys to make sense of the complexity, techniques that are sympathetic to the content. They act as familiar points of navigation and reference, yet are malleable enough to change subtly to remain relevant to the information they connect. It’s really OK to have a lot of stuff, so long as we make each component work smartly.
It’s a divide and conquer approach. By finding simplicity and logic in each content bucket, I’ve made more sense of the whole, allowing me to create key layouts where most of the simplified buckets are collated and sometimes combined, providing everything the user needs and expects in the appropriate places.
I’m also making sure I don’t reduce the app’s power. I need to reflect the scale of opportunity, and provide access to or knowledge of the more advanced tools and features for everyone: a window into what they can do and how they can help. I know it’s the minority who will be actively building the content, but the power is in providing those opportunities for all.
Much of this will be familiar to the responsible practitioners who build websites for government, local authorities, utility companies, newspapers, magazines, banking, and we-sell-everything-ever-made online shops. Across the web, there are sites and tools that thrive on complexity.
Alas, the majority of such sites have done little to make navigation intuitive, or empower audiences. Where we can make a difference is by striving to make our UIs feel simple, look wonderful, not intimidating — even if they’re mind-meltingly complex behind that façade.
Embrace, empathize and tame
So, there are loads of ways to exploit complexity, and make it seem simple. I’ve hinted at some methods above, and we’ve already looked at gradual engagement as a way to make sense of complexity, so that’s a big thumbs-up for a release cycle that increases audience power.
Prior to each and every release, it’s also useful to rest on the finished thing for a while and use it yourself, even if you’re itching to release. ‘Ready’ often isn’t, and ‘finished’ never is, and the more time you spend browsing around the sites you build, the more you learn what to question, where to add, or subtract. It’s definitely worth building in some contingency time for sitting on your work, so to speak.
One thing I always do is squint at my layouts. By squinting, I get a sort of abstract idea of the overall composition, and general feel for the thing. It makes my face look stupid, but helps me see how various buckets fit together, and how simple or complex the site feels overall.
I mentioned the need to put our design egos to one side and not throw out anything useful, and I think that’s vital. I’m a big believer in economy, reduction, and removing the extraneous, but I’m usually referring to decoration, bells and whistles, and fluff. I wouldn’t ever advocate the complete removal of powerful content from a project roadmap.
Above all, don’t fear complexity. Embrace and tame it. Work hard to empathize with audience needs, and you can create elegant, playful, risky, surprising, emotive, delightful, and ultimately simple things.",,267,0
268,Getting the Most Out of Google Analytics,Matt Curry,"Something a bit different for today’s 24 ways article. For starters, I’m not a designer or a developer. I’m an evil man who sells things to people on the internet. Second, this article will likely be a little more nebulous than you’re used to, since it covers quite a number of points in a relatively short space.
This isn’t going to be the complete Google Analytics Conversion University IQ course compressed into a single article, obviously. What it will be, however, is a primer on setting up and using Google Analytics in real life, and a great deal of what I’ve learned using Google Analytics nearly every working day for the past six (crikey!) years.
Also, to be clear, I’ll be referencing new Google Analytics here; old Google Analytics is for loooosers (and those who want reliable e-commerce conversion data per site search term, natch).
You may have been running your Analytics account for several years now, dipping in and out, checking traffic levels, seeing what’s popular… and that’s about it. Google Analytics provides so much more than that, but the number of reports available can often intimidate users, and documentation and case studies on their use are minimal at best.
Let’s start! Setting up your Analytics profile
Before we plough on, I just want to run through a quick checklist to make sure some basic settings have been enabled for your profile. If you haven’t clicked it, click the big cog on the top-right of Google Analytics and we’ll have a poke about.
If you have an e-commerce site, e-commerce tracking has been enabled
If your site has a search function, site search tracking has been enabled.
Query string parameters that you do not want tracked as separate pages have been excluded (for example, any parameters needed for your platform to function, otherwise you’ll get multiple entries for the same page appearing in your reports)
Filters have been enabled on your main profile to exclude your office IP address and any IPs of people who frequently access the site for work purposes. In decent numbers they tend to throw data off a tad.
You may also find the need to set up multiple profiles prefiltered for specific audience segments. For example, at Lovehoney we have seventeen separate profiles that allow me quick access to certain countries, devices and traffic sources without having to segment first. You’ll also find load time for any complex reports much improved. Use the same filter screen as above to set up a series of profiles that only include, say, mobile visits, or UK visitors, so you can quickly analyse important segments.
Matt, what’s a segment?
A segment is a subsection of your visitor base, which you define and then call on in reports to see specific data for that subsection. For example, in this report I’ve defined two segments, the first for IE6 users and the second for IE7.
Segments are easily created by clicking the Advanced Segments tabs at the top of any report and clicking +New Custom Segment.
What does your site do?
Understanding the goals of your site is an oft-covered topic, but it’s necessary not just to form a better understanding of your business and prioritize your time. Understanding what you wish visitors to do on your site translates well into a goal-driven analytics package like Google Analytics.
Every site exists essentially to sell something: either financially through e-commerce; to sell an idea or impart information; to get people to download a CV or enquire about a service; or to sell space on that website to advertisers. If the site did not provide a positive benefit to its owners, it would not have a reason for being.
Once you have understood the reason why you have a site, you can map that reason on to one of the three goal types Google Analytics provides.
E-commerce
This conversion type registers transactions as part of a sales process which requires a monetary value, what products have been bought, an SKU (stock keeping unit), affiliation (if you’re then attributing the sale to a third party or franchise) and so on.
The benefit of e-commerce tracking is not only assigning non-arbitrary monetary value to the behaviour of visitors on your site, and being able to see ancillary costs such as shipping, but also seeing product-level information: which products are preferred from various channels, popular categories, and so on.
However, I find the e-commerce tracking options also useful for non-e-commerce sites. For example, if you’re offering downloads or subscriptions and having an email address or user’s details is worth something to you, you can set up e-commerce tracking to understand how much value your site is bringing. For example, an email address might be worth 20p to you, but if it also includes a name it’s worth 50p. A contact telephone number is worth £2, and so on.
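As a sketch (using the async ga.js syntax; the transaction ID and lead names here are invented), recording an email-plus-name signup worth 50p might look like this:

// Record the signup as a 50p 'transaction'
_gaq.push(['_addTrans',
    '1234',              // transaction ID, e.g. the new subscriber's ID
    '',                  // affiliation (unused here)
    '0.50'               // total value of the lead
]);
_gaq.push(['_addItem',
    '1234',              // same transaction ID as above
    'EMAIL-NAME',        // 'SKU': the type of lead captured
    'Newsletter signup', // 'product' name
    'Leads',             // category
    '0.50',              // unit value
    '1'                  // quantity
]);
_gaq.push(['_trackTrans']); // send it to Google Analytics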
Page goals
Page goals, unsurprisingly, track a visit to a page (often with a sequence of pages leading up to that page). This is what’s referred to as a goal funnel, and is generally used to track how visitors behave in a multistep checkout.
Interestingly, the page doesn’t have to actually exist. For example, if you have a single page checkout, you can register virtual page views using trackPageview() when a visitor clicks into a particular section of the checkout or other form. If your site is geared towards getting someone to a particular page, but where there isn’t a transaction (for example, a subscription page) this is for you.
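For instance, with the async snippet, firing a virtual page view when a visitor reaches the delivery step of a one-page checkout might look like this (the path is invented):

// Register a virtual page view for this step of the form
_gaq.push(['_trackPageview', '/checkout/delivery-details']);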
There are also behavioural goals, such as time on site and number of pages viewed, which are geared towards sites that make money from advertising.
But, going back to the page goals, these can be abstracted using regular expressions, meaning that you can define a funnel based on page type rather than having to set individual folders.
In this example, I’ve created regexes for the main page types on my site, so I can create a wide funnel that captures visitors from where they enter through to checkout.
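As a sketch (the URL structure here is invented, not from any real site), the funnel steps might be defined with patterns like these:

^/category/.+     (any category listing page)
^/product/.+      (any product page)
^/basket$         (the basket)
^/checkout/.+     (any step of the checkout)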
Events
Event tracking registers a predefined event, such as playing a video, or some interaction that can trigger JavaScript, such as a Tweet This button. Events can then be triggered using the trackEvent() call. If you want someone to complete watching a video, you would code your player to fire trackEvent() upon completion.
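As a sketch (the category, action and label are up to you):

// Fire when the player reports that playback has finished
_gaq.push(['_trackEvent', 'Videos', 'Completed', 'product-demo']);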
While I don’t use events as goals, I use events elsewhere to see how well a video play aids to conversion. This not only helps me justify the additional spend on creating video content, but also quickly highlights which videos are underperforming as sales tools.
What a visitor can tell you
Now you have some proper goals set up, we can start to see how changes in content (on-site and external) affect those goals.
Ultimately, when a visitor comes to your site, they bring information with them:
where they came from (a search engine – including: keyword searched for; a referral; direct; affiliate; or ad campaign)
demographics (country; whether they’re new or returning, within thirty days)
technical information (browser; screen size; device; bandwidth)
site-specific information (landing page; next click; previous values assigned to them as custom variables*)
* A note about custom variables. There’s no hope in hell that I can cover custom variables in this article. Go research them. Custom variables are the single best way to hack Google Analytics and bend it to your will. Custom variables allow you to record anything you want about a visitor, which that visitor will then carry around with them between visits. It’s also great for plugging other services into Google Analytics (as shown by the marvelous way Visual Website Optimizer allows you to track and segment tests within the GA interface). Just make sure not to breach the terms of service, eh?
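To give you a taste (the slot, name and value here are illustrative), setting a visitor-level custom variable looks like this:

// Store a visitor-scoped value in slot 1 of the 5 available
// (the final argument is the scope: 1 = visitor, 2 = session, 3 = page)
_gaq.push(['_setCustomVar', 1, 'CustomerType', 'repeat-buyer', 1]);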
CSI your website
Police procedural TV shows are all the same: the investigators are called to a crime and come across a clue; there’s then an autopsy; new evidence leads them to a new location; they find a new clue; they put two and two together; they solve the mystery.
This is your life now. Exciting!
So, now you’re gathering a wealth of information about what sort of people visit your site, what they do when they’re there, and what eventually gets them to drive value to you. It’s now your job to investigate all these little clues to see which types of people drive the most value, and what you can change to improve it.
Maybe not that exciting.
However, Google Analytics comes pre-armed with extensive reports for you to delve into. As an e-commerce guy (as opposed to a page goal guy) my day pretty much follows the pattern below.
Look at e-commerce conversion rate by traffic source compared to the same day in the previous week and previous month. As ours is an e-commerce site, we have weekly and monthly trends. A big spike on Sundays and Mondays, and payday towards the end of the month is always good; on the third week of a month there tends to be a lull. Spend time letting your Google Analytics data brew, understand your own trends and patterns, and you’ll start to get a feel for when something isn’t quite right.
Traffic Sources → Sources → All Traffic
Look at the conversion rate by landing page for any traffic source that feels significantly different to what’s expected. Check bounce rates, drill down to likely landing pages and check search keyword or referral site to see if it’s a particular subset of visitor. You can do this by clicking Secondary Dimension and choosing Keyword or Source. If it’s direct, choose Visitor Type to break down by new or returning visitor.
Content → Site Content → Landing Pages
I then tend to flip into Content Drilldown to see what the next clicks were from those landing pages, and whether they changed significantly from the date I’m comparing with. If they have, that’s usually an indicator of changed content (or its relevancy). Remember, if a bunch of people have found their way to your page via a method you’re not expecting (such as a mention on a Spanish radio station – this actually happened to me once), while the content hasn’t changed, the relevancy of it to the audience may have.
Content → Site Content → Content Drilldown
Once I have an idea of what content was consumed, and whether it was relevant to the user, I then look at the visitor specifics, such as browser or demographic data, to see again whether the change was limited to a specific subset. Site speed, for example, normally has a strong bearing on bounce rate, so compare that with previous data as well.
Now, to be investigating at this level you still need a serious amount of data, in order to tell what’s a significant change or not. If you’re struggling with a small number of visitors, you might find reporting on a weekly or fortnightly basis more appropriate.
However, once you’ve looked into the basics of why changes happen to the value of your site, you’ll soon find yourself limited by the reports offered in Standard Reporting. So, it’s time to build your own. Hooray!
Custom reporting
Google Analytics provides the tools to build reports specific to the types of investigations you frequently perform.
Welcome to my world.
Custom reports are quite simple to build: first, you determine the metric you want the report to cover (number of visitors, bounce rate, conversion rate, and so on), then choose a set of dimensions that you’d like to segment the report by (say, the source of the traffic, and whether they were new or returning users). You can filter the report, including or excluding particular dimension values, and you can assign the report to any of the profiles you created earlier.
In the example below, I’ve created a report that shows me visits and conversion rate for any Google traffic that landed directly on a product page. I can then drill down on each product page to see the complete phrases used to search. I can use this information in two ways:
I can see which products aren’t converting, which shows me where I need to work harder on merchandising.
I can give this information to my content team, showing them the actual phrases visitors used to reach our product content, helping them write better targeted product descriptions.
The possibilities here are nearly endless, but here are a few examples of reports I find useful:
Non-brand inbound search
By creating a report that shows inbound search traffic which doesn’t include your brand, you can see more clearly the behaviour of visitors most likely to be unfamiliar with your site and brand values, without having to rely on the clumsy new or returning demographic data.
Traffic/conversion/sales by hour
This is pure stats porn, but actually more useful than real-time data. By seeing this data broken down at an hourly level, you can not only compare the current day to previous days, but also see the best performing times for email broadcasts and tweets.
Visits, load time, conversion and sales by page and browser
Page speed can often kill conversion rates, but it’s difficult to prove the value of focusing on speed in monetary terms. Having this report to hand helps me drive Operation Greenbelt, our effort to get into the sub-1.5 second band in Google Webmaster Tools.
Useful things you can’t do in custom reporting
If you have a search function on your website, then Conversion Rate and Products Bought by Site Search Term is an incredibly useful report that allows you to measure the effectiveness of your site’s search engine at returning products and content related to the search term used. By including the products actually bought by visitors who searched for each term, you can use this information to better searchandise these results, escalating high propensity and high value products to the top of the results.
However, it’s not possible to get this information out of new Google Analytics.
Try it, select the following in the report builder:
Metrics: total unique searches; e-commerce or goal conversion rate
Dimensions: search term; product
You’ll see that the data returned is a little nonsensical, though a 2,000% conversion rate would be nice. However, you can get more accurate information using advanced segments. By creating individual segments to define users who have searched for a particular term, you can run the sales performance and product performance reports as normal. It’s laborious, but it teaches a good lesson: data that seems inaccessible can normally be found another way!
Reporting infrastructure
Now that you have a series of reports that you can refer to on a daily or weekly basis, it’s time to put together a regular reporting infrastructure.
Even if you’re not reporting to someone, having a set of key performance indicators that you can use to see how your performance is improving over time allows you to set yourself business goals on a monthly and annual basis.
For my own reporting, I take some high-level metrics (such as visitors, conversion rate and average order value), and segment them by traffic source and, separately, landing page. These statistics I record weekly and report:
current week compared with previous week
same week previous year (if available)
4 week average
13 week average
52 week average (if available)
This takes into account weekly, monthly, seasonal and annual trends, and gives you a much clearer view of your performance.
Getting data in other ways
If you’re using Google Analytics frequently, with any large site you’ll come to a couple of conclusions:
Doing any kind of practical comparative analysis is unwieldy.
Boy, Google Analytics is slow!
As you work with bigger datasets and put together more complex queries, you’ll see the loading graphic more than you’ll see actual data. So when you reach that level, there are ways to completely bypass the Google Analytics interface altogether, and get data into your own spreadsheet application for manipulation.
Data Feed Query Explorer
If you just want to pull down some quick statistics but still use complex filters and exotic metric and dimension combinations, the Data Feed Query Explorer is the quickest way of doing so. Authenticate with your Google Analytics account, select a profile, and you can start selecting metrics and dimensions to be generated in a handy, selectable tabulated format.
Google Analytics API
If you’re feeling clever, you can bypass having to copy and paste data by pulling it directly into Excel, Google Docs or your own application using the Google Analytics API. There are several scripts and plugins available to do this. I use Automate Analytics Google Docs code (there’s also a paid version that simplifies setup and creates some handy reports for you).
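To give you a flavour, a Core Reporting API request is just an authenticated GET; something like this (the profile ID and dates are placeholders) returns visits and revenue broken down by traffic source:

https://www.googleapis.com/analytics/v3/data/ga
    ?ids=ga:12345678
    &start-date=2011-11-01
    &end-date=2011-11-30
    &metrics=ga:visits,ga:transactionRevenue
    &dimensions=ga:source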
New shiny things
Well, now that that’s over, I can show you some cool stuff. Well, at least it’s cool to me. Google Analytics is being constantly improved and new functionality is introduced nearly every month. Here are a couple of my favourites.
Multichannel attribution
Not every visitor converts on your site on the first visit. They may not even do so on the second visit, or third. If they convert on the fourth visit, but each time they visit they do so via a different channel (for example, Search PPC, Search Organic, Direct, Email), which channel do you attribute the conversion to? The last channel, or the first? Dilemma!
Google now has a Multichannel Attribution report, available in the Conversion category, which shows how each channel assists in converting, the overlap between channels, and where in the process that channel was important.
For example, you may have analysed your blog traffic from Twitter and become disheartened that not many people were subscribing after visiting from Twitter links, but instead your high-value subscribers were coming from natural search. On the face of it, you’d spend less time tweeting, but a multichannel report may tell you that visitors first arrived via a Twitter link and didn’t subscribe, but then came back later after searching for your blog name on Google, after which they did. Don’t pack Twitter in yet!
Visitor and goal flow
Visitor and goal flow are amazing reports that help you visualize the flow of traffic through your site and, ultimately, into your checkout funnel or similar goal path. Flow reports are perfect for understanding drop-off points in your process, as well as what the big draws are on each page.
Previously, if you wanted to visualize this data you had to set up several abstracted microgoals and chain them together in custom reports. Frankly, it was a pain in the arse and burned through your precious and limited goal allocation.
Visitor flow bypasses all that and produces the report in an interactive flow diagram. While it doesn’t show you the holy grail of conversion likelihood by each path, you can segment visitor flow so that you can see very specifically how different segments of your visitor base behave.
Go play with it now!",,268,0
269,Adaptive Images for Responsive Designs… Again,Jake Archibald,"When I was asked to write an article for 24 ways I jumped at the chance, as I’d been wanting to write about some fun hacks for responsive images and related parsing behaviours. My heart sank a little when Matt Wilcox beat me to the subject, but it floated back up when I realized I disagreed with his method and still had something to write about.
So, Matt Wilcox, if that is your real name (and I’m pretty sure it is), I disagree. I see your dirty server-based hack and raise you an even dirtier client-side hack. Evil laugh, etc., etc.
You guys can stomach yet another article about responsive design, right? Right?
Half the room gets up to leave
Whoa, whoa… OK, I’ll cut to the chase…
TL;DR
In a previous episode, we were introduced to Debbie and her responsive cat poetry page. Well, now she’s added some reviews of cat videos and some images of cats. Check out her new page and have a play around with the browser window. At smaller widths, the images change and the design responds. The benefits of this method are:
it’s entirely client-side
images are still shown to users without JavaScript
your media queries stay in your CSS file
no repetition of image URLs
no extra downloads per image
it’s fast enough to work on resize
it’s pure filth
What’s wrong with the server-side solution?
Responsive design is a client-side issue; involving the server creates a boatload of problems.
It sets a cookie at the top of the page which is read in subsequent requests. However, the cookie is not guaranteed to be set in time for requests on the same page, so the server may see an old value or no value at all.
Serving images via server scripts is much slower than plain old static hosting.
The URL can only be cached with Vary: Cookie, so the cache breaks when the cookie changes, even if the change is unrelated. Also, far-future caching is out for devices that can change width.
It depends on detecting screen width, which is rather messy on mobile devices.
Responding to things other than screen width (such as DPI) means packing more information into the cookie, and a more complicated script at the top of each page.
So, why isn’t this straightforward on the client?
Client-side solutions to the problem involve JavaScript testing user agent properties (such as screen width), looping through some images and setting their URLs accordingly. However, by the time JavaScript has sprung into action, the original image source has already started downloading. If you change the source of an image via JavaScript, you’re setting off yet another request.
Images are downloaded as soon as their DOM node is created. They don’t need to be visible, they don’t need to be in the document.
new Image().src = url
The above will start an HTTP request for url. This is a handy trick for quick requests and preloading, but also shows the browser’s eagerness to download images.
Here’s an example of that in action. Check out the network tab in Web Inspector (other non-WebKit development aids are available) to see the image requests.
Because of this, some client-side solutions look like this:
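Something along these lines (the attribute and file names are invented):

<img src="t.gif" data-src="cat-small.jpg" alt="A cat">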
where t.gif is a 1×1px tiny transparent GIF.
This results in no images if JavaScript isn’t available. Dealing with the absence of JavaScript is still important, even on mobile. I was recently asked to make a website work on an old Blackberry 9000. I was able to get most of the way there by preventing that OS parsing any JavaScript, and that was only possible because the site didn’t depend on it.
We need to delay loading images for JavaScript users, but ensure they load for users without JavaScript. How can we conditionally parse markup depending on JavaScript support?
Oh yeah!
<noscript>!
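Something like this (file name invented):

<noscript>
    <img src="cat.jpg" alt="A cat">
</noscript>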
Whoa! First spacer GIFs and now <noscript>? This really is the future! The image above will only load for users without JavaScript support. Now all we need to do is send JavaScript in there to get the textContent of the <noscript> element, then we can alter the image source before handing it to the DOM for parsing.
Here’s an example of that working … unless you’re using Internet Explorer.
Internet Explorer doesn’t retain the content of <noscript> elements. As soon as it’s parsed, it considers it an empty element. FANKS INTERNET EXPLORER. This is why some solutions do this:
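That is, something like (names invented again):

<noscript data-src="cat.jpg">
    <img src="cat.jpg" alt="A cat">
</noscript>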
so JavaScript can still get at the URL via the data-src attribute. However, repeating stuff isn’t great. Surely we can do better than that.
A dirty, dirty hack
Thankfully, I managed to come up with a solution, and by me, I mean someone cleverer than me. Pornel’s solution uses <noscript>, but surely we don’t need that.
Now, before we look at this, I can’t stress how dirty it is. It’s so dirty that if you’ve seen it, schools will refuse to employ you.
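It goes something like this (a sketch – the file name is invented, and the comment opener is split so the parser doesn’t treat the script’s contents as the start of a comment):

<script>document.write('<' + '!--')</script>
<img src="cat.jpg" alt="A cat">
<!--dummy-->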
Phwoar! Dirty, isn’t it? I’ll stop for a moment, so you can go have a wash.
Done? Excellent.
With this, the image is wrapped in a comment only for users with JavaScript. Without JavaScript, we get the image. Unlike the example above, we can get the text content of the comment pretty easily.
Hurrah! But wait… Some browsers are sometimes downloading the image, even with JavaScript enabled. Notably Firefox. Huh?
Images are downloaded in comments now? What?
No. What we’re seeing here is the effect of speculative parsing. Here’s what’s happening:
While the browser is parsing the script, it parses the rest of the document. This is usually a good thing, as it can download subsequent images and scripts without waiting for the script to complete. The problem here is we create an unbalanced tree.
An unbalanced tree, yesterday.
In this case, the browser must throw away its speculative parsing and reparse from the end of the <script> element, this time taking the written comment into account.
And there we have it. We can now prevent images loading for users with JavaScript, but we can still get at the markup.
We’re still creating an unbalanced tree and there’s a performance impact in that. However, the parser won’t have got far by the time our script executes, so the impact is small. Unbalanced trees are more of a concern for external scripts; a lot of parsing can happen by the time the script has downloaded and parsed.
Using dirtiness to create responsive images
Now all we need to do is give each of our dirty scripts a class name, then JavaScript can pick them up, grab the markup from the comment and decide what to do with the images.
This technique isn’t exclusively useful for responsive images. It could also be used to delay images loading until they’ve scrolled into view. But to do that you’ll need a bulletproof way of detecting when elements are in view. This involves getting the height of the viewport, which is extremely unreliable on mobile devices.
Here’s a hastily thrown together example showing how it can be used for responsive images.
I adjust the end of the image URLs conditionally depending on the result of media queries. This is done on page load, and on resize.
I’m using regular expressions to alter the URLs. Using regex to deal with HTML is usually a sign of insanity, but parsing it with the browser’s DOM parser would trigger the download of images before we change the URLs. My implementation currently requires double-quoted image URLs, because I’m lazy. Wanna fight about it?
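The substitution amounts to something like this (the suffixes are those from the API example below; the variable name is invented):

// Swap the '-mobile' suffix for '-desktop' in double-quoted src attributes
markup = markup.replace(/src="([^"]+)-mobile\.jpg"/g, 'src="$1-desktop.jpg"');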
Media querying via JavaScript
Jeremy Keith used document.documentElement.clientWidth in his example, which is great as a proof of concept, but unfortunately is rather unreliable across mobile devices.
Thankfully, standards are coming to the rescue with window.matchMedia, which lets us provide a media query string and get a boolean result. There’s even a great polyfill for browsers that don’t support it (as long as they support media queries in CSS).
I didn’t go with that for three reasons:
I’d like to keep media queries in the CSS file, if possible.
I wanted something a little lighter to keep things speedy while resizing.
It’s just not dirty enough yet.
To make things ultra-dirty, I add a test element to the page with a specific class, let’s say media-test. Then, I control the width of it using media queries in my CSS file:
@media all and (min-width: 640px) {
    .media-test {
        width: 1px;
    }
}
@media all and (min-width: 926px) {
    .media-test {
        width: 2px;
    }
}
The JavaScript part changes the URL suffix depending on the width of media-test. I’m using a min-width media query, but you can use others such as pixel-ratio to detect high DPI displays. Basically, it’s a hacky way for CSS to set a value that can be picked up by JavaScript. It means the bit that signals changes to the images sits with the rest of the responsive code, without duplication.
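Reading that value back is a one-liner (the look-up method here is my choice; anything that returns the rendered width will do):

// Returns 1 or 2 when a media query matches;
// set a base width of 0 on .media-test for the default case
var width = document.querySelector('.media-test').offsetWidth;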
Also, phwoar, dirty!
The API
I threw a script together to demonstrate the technique. I’m not particularly attached to it, I’m not even sure I like it, but here’s the API:
responsiveGallery({
    // Class name of dirty script element(s) to target
    scriptClass: 'dirty-gallery-script',
    // Class name for our test element
    testClass: 'dirty-gallery-test',
    // The initial suffix of URLs, the bit that changes.
    initialSuffix: '-mobile.jpg',
    // A map of suffixes, for each width of 'dirty-gallery-test'
    suffixes: {
        '1': '-desktop.jpg',
        '2': '-large-desktop.jpg',
        '3': '-mobile-retina.jpg'
    }
});
The API can cover individual images or multiple galleries at once. In the example I gave at the start of the article I make two calls to the API, one for both galleries, and one for the single image above the video reviews. They’re separate calls because they respond slightly differently.
The future
Hopefully, we’ll get a proper solution to this soon. My favourite suggestion is the <picture> element that Bruce Lawson covers.
Unfortunately, we’re nowhere near that yet, and I’d still rather have my media queries stay in CSS. Perhaps the <source> elements could be skipped if they’re display:none; then they could have class names and be controlled via CSS. Sigh.
Well, I’m tired of writing now and I’m sure you’re tired of reading. I realize what I’ve presented is a yet another dirty hack to the responsive image problem (perhaps the dirtiest?) and may be completely unfeasible in professional situations. But isn’t that the true spirit of Christmas?
No.",,269,0
270,From Side Project to Not So Side Project,Elliot Jay Stocks,"In the last article I wrote for 24 ways, back in 2009, I enthused about the benefits of having a pet project, suggesting that we should all have at least one so that we could collaborate with our friends, escape our day jobs, fulfil our own needs, help others out, raise our profiles, make money, and — most importantly — have fun. I don’t think I need to offer any further persuasions: it seems that designers and developers are launching their own pet projects left, right and centre. This makes me very happy.
However, there still seems to be something of a disconnect between having a side project and turning it into something that is moderately successful; in particular, the challenge of making enough money to sustain the project and perhaps even elevating it from the sidelines so that it becomes something not so on the side at all.
Before we even begin this, let’s spend a moment talking about money, also known as…
Evil, nasty, filthy money
Over the last couple of years, I’ve started referring to myself as an accidental businessman. I say accidental because my view of the typical businessman is someone who is driven by money, and I usually can’t stand such people. Those who are motivated by profit, obsessed with growth, and take an active interest in the world’s financial systems don’t tend to be folks with whom I share a beer, unless it’s to pour it over them. Especially if they’re wearing pinstriped suits.
That said, we all want to make money, don’t we? And most of us want to make a relatively decent amount, too. I don’t think there’s any harm in admitting that, is there? Hello, I’m Elliot and I’m a capitalist.
The key is making money from doing what we love. For most people I know in our community, we’ve already achieved that — I’m hard-pressed to think of anyone who isn’t extremely passionate about working in our industry and I think it’s one of the most positive, unifying benefits we enjoy as a group of like-minded people — but side projects usually arise from another kind of passion: a passion for something other than what we do as our day jobs. Perhaps it’s because your clients are driving you mental and you need a break; perhaps it’s because you want to create something that is truly your own; perhaps it’s because you’re sick of seeing your online work disappear so fast and you want to try your hand at print in order to make a more permanent mark.
The three factors I listed there led me to create 8 Faces, a printed magazine about typography that started as a side project and is now a very significant part of my yearly output and income.
Like many things that prove fruitful, 8 Faces’ success was something of an accident, too. For a start, the magazine was never meant to be profitable; its only purpose at all was to scratch my own itch. Then, after the first issue took off and I realized how much time I needed to spend in order to make the next one decent, it became clear that I would have to cover more than just the production costs: I’d have to take time out from client work as well. Doing this meant I’d have to earn some money. Probably not enough to equate to the exact amount of time lost when I could be doing client work (not that you could ever describe time as being lost when you work on something you love), but enough to survive; for me to feel that I was getting paid while doing all of the work that 8 Faces entailed. The answer was to raise money through partnerships with some cool companies who were happy to be associated with my little project.
A sustainable business model
Business model! I can’t believe I just wrote those words! But a business model is really just a loose plan for how not to screw up. And all that stuff I wrote in the paragraph above about partnering with companies so I could get some money in while I put the magazine together? Well, that’s my business model.
If you’re making any product that has some sort of production cost, whether that’s physical print run expenses or up-front dev work to get an app built, covering those costs before you even release your product means that you’ll be in profit from the first copy you sell. This is no small point: production expenses are pretty much the only cost you’ll ever need to recoup, so having them covered before you launch anything is pretty much the best possible position in which you could place yourself. Happy days, as Jamie Oliver would say.
Obtaining these initial funds through partnerships has another benefit. Sure, it’s a form of advertising but, done right, your partners can potentially provide you with great content, too. In the case of 8 Faces, the ads look as nice as the rest of the magazine, and a couple of our partners also provide proper articles: genuinely meaningful, relevant, reader-pleasing articles at that. You’d be amazed at how many companies are willing to become partners and, as the old adage goes, if you don’t ask, you don’t get.
With profit comes responsibility
Don’t forget about the responsibility you have to your audience if you engage in a relationship with a partner or any type of advertiser: although I may have freely admitted my capitalist leanings, I’m still essentially a hairy hippy, and I feel that any partnership should be good for me as a publisher, good for the partner and — most importantly — good for the reader. Really, the key word here is relevance, and that’s where 99.9% of advertising fails abysmally.
(99.9% is not a scientific figure, but you know what I’m on about.)
The main grey area when a side project becomes profitable is how you share that profit, partly because — in my opinion, at least — the transition from non-profitable side project to relatively successful source of income can be a little blurred. Asking for help for nothing when there’s no money to be had is pretty normal, but sometimes it’s easy to get used to that free help even once you start making money. I believe the best approach is to ask for help with the promise that it will always be rewarded as soon as there’s money available. (Oh, god: this sounds like one of those nightmarish client proposals. It’s not, honest.) If you’re making something cool, people won’t mind helping out while you find your feet.
Events often think that they’re exempt from sharing profit. Perhaps that’s because many event organizers think they’re doing the speakers a favour rather than the other way around (that’s a whole separate article), but it’s shocking to see how many people seem to think they can profit from content-makers — speakers, for example — and yet not pay for that content. It was for this reason that Keir and I paid all of our speakers for our Insites: The Tour side project, which we ran back in July. We probably could’ve got away without paying them, especially as the gig was so informal, but it was the right thing to do.
In conclusion: money as a by-product
Let’s conclude by returning to the slightly problematic nature of money, because it’s the pivot on which your side project’s success can swing, regardless of whether you measure success by monetary gain. I would argue that success has nothing to do with profit — it’s about you being able to spend the time you want on the project. Unfortunately, that is almost always linked to money: money to pay yourself while you work on your dream idea; money to pay for more servers when your web app hits the big time; money to pay for efforts to get the word out there. The key, then, is to judge success on your own terms, and seek to generate as much money as you see fit, whether it’s purely to cover your running costs, or enough to buy a small country. There’s nothing wrong with profit, as long as you’re ethical about it. (Pro tip: if you’ve earned enough to buy a small country, you’ve probably been unethical along the way.)
The point at which individuals and companies fail — in the moral sense, for sure, but often in the competitive sense, too — is when money is the primary motivation. It should never be the primary motivation. If you’re not passionate enough about something to do it as an unprofitable side project, you shouldn’t be doing it at all.
Earning money should be a by-product of doing what you love. And who doesn’t want to spend their life doing what they love?",,270,0
271,Creating Custom Font Stacks with Unicode-Range,Drew McLellan,"Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We’ve all used it — either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts.
If you’re like me, however, you’ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec.
One such property — the unicode-range descriptor — sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways.
Unicode-range
The unicode-range descriptor is designed to help when using fonts that don’t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers.
@font-face {
    font-family: BBCBengali;
    src: url(fonts/BBCBengali.ttf) format("opentype");
    unicode-range: U+00-FF;
}
In this example, the font is to be used for characters in the range of U+00 to U+FF, which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to ÿ at U+FF – covering the Basic Latin and Latin-1 Supplement ranges.
By adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts.
When I say that it’s possible to specify the range of characters the font covers, that’s true, but what you’re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together.
The best available ampersand
A few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a <span> element with a class applied:

<span class="amp">&amp;</span>

A CSS rule can then be written to select the <span> and apply a different font:
span.amp {
    font-family: Baskerville, Palatino, "Book Antiqua", serif;
}
That’s a perfectly serviceable technique, but the drawbacks are clear — you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn’t always possible when working with a CMS.
Perhaps we could do this with unicode-range.
A better best available ampersand
The Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so:
@font-face {
    font-family: 'Ampersand';
    src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
    unicode-range: U+26;
}
What we’ve done here is specify a new family called Ampersand and created a font stack for it with the user’s locally installed copies of Baskerville, Palatino or Book Antiqua. We’ve then limited it to a single character range — the ampersand. Of course, those don’t need to be local fonts — they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life.
We can then use that new family in a regular font stack.
h1 {
    font-family: Ampersand, Arial, sans-serif;
}
With this in place, any <h1> elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn’t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial.
You didn’t think it was that easy, did you?
Oh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all.
If you’re familiar with how CSS works when it comes to unsupported properties, you’ll know that if a browser encounters a property it doesn’t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius — if the browser can’t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect.
Less perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters — the whole range. If you’re using a fancy font for flamboyant ampersands, you probably don’t want that applied to all your text if unicode-range isn’t supported. That would be bad. Really bad.
Ensuring good fallbacks
As ever, the trick is to make sure that there’s a sensible fallback in place if a browser doesn’t have support for whatever technology you’re trying to use. This is where being a super nerd about understanding the spec you’re working with really pays off.
We can make use of the rules of the CSS cascade to make sure that if unicode-range isn’t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren’t implemented.
@font-face {
    font-family: 'Ampersand';
    src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
    unicode-range: U+26;
}
@font-face {
    font-family: 'Ampersand';
    src: local('Arial');
}
In theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don’t have support, the second rule should just supersede the first, setting the font to Arial.
Unfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That’s both unexpected and unfortunate. Bad Firefox. On your rug.
If that doesn’t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That’s really what we’re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we’re never going to use in our page? Then it wouldn’t affect the outcome for browsers that do support ranges.
@font-face {
    font-family: 'Ampersand';
    src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
    unicode-range: U+26;
}
@font-face {
    /* Ampersand fallback font */
    font-family: 'Ampersand';
    src: local('Arial');
    unicode-range: U+270C;
}
By specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we’re not using in any of our pages — U+270C.
So we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect!
That obscure character, my friends, is what Unicode defines as the VICTORY HAND.
✌
So, how can we use this?
Ampersands are a neat trick, and it works well in browsers that support ranges, but that’s not really the point of all this. Styling ampersands is fun, but they’re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting.
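For example, a sketch for numerals (the font choice here is arbitrary):

@font-face {
    font-family: 'Numerals';
    src: local('Georgia');
    unicode-range: U+30-39; /* the digits 0-9 */
}
body {
    font-family: Numerals, Helvetica, sans-serif;
}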
How do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end.
Of course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you’re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you’ll have to use your own best judgement based on your needs and audience.
One thing to keep in mind is that if you’re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn’t be downloaded if none of the characters within the Unicode range are present in a given page.
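To make that download behaviour concrete, the ampersand trick with a downloadable font might look like this, with fancy-amp.woff standing in as a made-up file name:
@font-face {
  font-family: 'Ampersand';
  src: url('fancy-amp.woff') format('woff'); /* hypothetical web font */
  unicode-range: U+26;
}
In browsers that implement ranges, that file should only be fetched when a page actually contains an ampersand.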
As ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.",,271,0
272,Crafting the Front-end,Ben Bodien,"Much has been spoken and written recently about the virtues of craftsmanship in the context of web design and development. It seems that we as fabricators of the web are finally tiring of seeking out parallels between ourselves and architects, and are turning instead to the fabled specialist artisans.
Identifying oneself as a craftsman or craftswoman (let’s just say craftsperson from here onward) will likely be a trend of early 2012. In this pre-emptive strike, I’d like to expound on this movement as I feel it pertains to front-end development, and encourage care and understanding of the true qualities of craftsmanship (craftspersonship).
The core values
I’ll begin by defining craftspersonship. What distinguishes a craftsperson from a technician? Dictionaries tend to define a craftsperson as one who possesses great skill in a chosen field. The badge of a craftsperson for me, though, is a very special label that should be revered and used sparingly, only where it is truly deserved. A genuine craftsperson encompasses a few other key traits, far beyond raw skill, each of which must be learned and mastered.
A craftsperson has:
An appreciation of good work, in both the work of others and their own. And not just good as in ‘hey, that’s pretty neat’, I mean a goodness like a shining purity – the kind of good that feels right when you see it.
A belief in quality at every level: every facet of the craftsperson’s product is as crucial as any other, without exception, even those normally hidden from view.
Vision: an ability to visualize their path ahead, pre-empting the obstacles that may be encountered to plan a route around them.
A preference for simplicity: an almost Bauhausesque devotion to undecorated functionality, with no unjustifiable parts included.
Sincerity: producing work that speaks directly to its purpose with flawless clarity.
Only when you become a custodian of such values in your work can you consider calling yourself a craftsperson. Now let’s take a look at some steps we front-end developers can take on our journey of enlightenment toward craftspersonhood.
Speaking of the craftsman’s journey, be sure to watch out for the video of The Standardistas’ stellar talk at the Build 2011 conference titled The Journey, which should be online sometime soon.
Building your own toolbox
My grandfather was a carpenter and trained as a young apprentice under a master. After observing and practising the many foundation theories, principles and techniques of carpentry, he was tasked with creating his own set of woodworking tools, which he would use and maintain throughout his career. By going through the process of having to create his own tools, he would be connected at the most direct level with every piece of wood he touched, his tools being his own creations and extensions of his own skilled hands. The depth of his knowledge of these tools must have surpassed the intricate as he fathered, used, cleaned and repaired them, day in and day out over many years.
And so it should be, ideally, with all crafts. We must understand our tools right down to the most fundamental level. I firmly believe that a level of true craftsmanship cannot be reached while there exists a layer that remains not wholly understood between a creator and his canvas. Of course, our tools as front-end developers are somewhat more complex than those of other crafts – it may seem reasonable to require that a carpenter create his or her own set of chisels, but somewhat less so to ask a front-end developer to code their own CSS preprocessor, or design their own computer.
However, it is still vitally important that you understand how your tools work. This is particularly critical when it comes to things like preprocessors, libraries and frameworks which aim to save you time by automating common processes and functions. For the most part, anything that saves you time is a Good Thing™ but it cannot be stressed enough that using tools like these in earnest should be avoided until you understand exactly what they are doing for you (and, to an extent, how they are doing it).
In particular, you must understand any drawbacks to using your tools, and any shortcuts they may be taking on your behalf. I’m not suggesting that you steer clear of paid work until you’ve studied each of jQuery’s 9,266 lines of JavaScript source code but, all levity aside, it will further you on your journey to look at interesting or relevant bits of jQuery, and any other libraries you might want to use. Such libraries often directly link to corresponding sections of their source code on sites like GitHub from their official documentation. Better yet, they’re almost always written in high level languages (easy to read), so there’s no excuse not to don your pith helmet and go on something of an exploration. Any kind of tangential learning like this will drive you further toward becoming a true craftsperson, so keep an open mind and always be ready to step out of your comfort zone.
Downtime and tool honing
With any craft, it is essential to keep your tools in good condition, and a good idea to stay up-to-date with the latest equipment. This is especially true on the web, which, as we like to tell anyone who is still awake more than a minute after asking what it is that we do, advances at a phenomenal pace. A tool or technique that could be considered best practice this week might be the subject of haughty derision in a comment thread within six months.
I have little doubt that you already spend a chunk of time each day keeping up with the latest material from our industry’s finest Interblogs and Twittertubes, but do you honestly put aside time to collect bookmarks and code snippets from things you read into a slowly evolving toolbox? At @media in 2009, Simon Collison delivered a candid talk on his ‘Ultimate Package’. Those of us who didn’t flee the room anticipating a newfound and unwelcome intimacy with the contents of his trousers were shown how he maintained his own toolkit – a collection of files and folders all set up and ready to go for a new project. By maintaining a toolkit in this way, he has consistency across projects and a dependable base upon which to learn and improve.
The assembly and maintenance of such a personalized and familiar toolkit is probably as close as we will get to emulating the tool making stage of more traditional craft trades. Keep a master copy of your toolkit somewhere safe, making copies of it for new projects. When you learn of a way in which part of it can be improved, make changes to the master copy.
Simplicity through modularity
I believe that the user interfaces of all web applications should be thought of as being made up primarily of modular components. Modules in this context are patterns in design that appear repeatedly throughout the app. These can be small collections of elements, like a user profile summary box (profile picture, username, meta data), as well as atomic elements such as headings and list items.
Well-crafted front-end architectures have the ability to support this kind of repeating pattern as modules, with as close to no repetition of CSS (or JavaScript) as possible, and as close to no variations in HTML between instances as possible.
One of the most fundamental and well known tenets of software engineering is the DRY rule – don’t repeat yourself. It requires that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system.”
As craftspeople, we must hold this rule dear and apply it to the modules we have identified in our site designs. The moment you commit a second style definition for a module, the quality of your output (the front-end code) takes a huge hit. There should only ever be one base style definition for each distinct module or component. Keep these in a separate, sacred place in your CSS. I use a _modules.scss Sass include file, imported near the top of my main CSS files.
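As a rough sketch of what I mean (the file structure and module names here are hypothetical, not a prescription):
/* main.scss */
@import 'modules'; /* single, authoritative module definitions */

/* _modules.scss */
.profile-summary { overflow: hidden; } /* base definition: one per module */
.profile-summary .avatar { float: left; margin-right: 12px; }
.profile-summary .username { font-weight: bold; }
Variants then extend that base definition rather than redefining it.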
Be sure, of course, to avoid making changes to this file lightly, as the smallest adjustment can affect multiple pages (hint: keep a structure list of which modules are used on which pages). Avoid the inevitable temptation to duplicate code late in the project. Sticking to this rule becomes more important the more complex the codebase becomes.
If you can stick to this rule, using sensible class names and consistent HTML, you can reach a joyous, self-fulfilling plateau stage in each project where you are assembling each interface from your own set of carefully crafted building blocks.
Old school markup
Let’s take a step back. Before we fret about creating a divinely pure modular CSS framework, we need to know the site’s design and what it is made of. The best way to gain this knowledge is to go old school. Print out every comp, mockup, wireframe, sketch or whatever you have. If there are sections of pages that are hidden until some user action takes place, or if the page has multiple states, be sure that you have everything that could become visible to the user on paper.
Once you have your wedge of paper designs, lay out all the pages on the floor, or stick them to the wall if you can, arranging them logically according to the site hierarchy, by user journey, or whatever guidelines make most sense to you. Once you have the site laid out before you, study it for a while, familiarizing yourself with every part of every interface. This will eliminate nasty surprises late in the project when you realize you’ve duplicated something, or left an interface on the drawing board altogether.
Now that you know the site like it’s your best friend, get out your pens or pencils of choice and attack it. Mark it up like there’s no tomorrow. Pretend you’re a spy trying to identify communications from an enemy network hiding their messages in newspapers. Look for patterns and similarities, drawing circles around them. These are your modules. Start also highlighting the differences between each instance of these modules, working out which is the most basic or common type that will become the base definition from which all other representations are extended.
This simple but empowering exercise will equip you for your task of actually crafting, instead of just building, the front-end. Without the knowledge gained from this kind of research phase, you will be blundering forward, improvising as best you can, but ultimately making quality-compromising mistakes that could have been avoided.
For more on this theme, read Anna Debenham’s Front-end Style Guides which recommends a similar process, and the sublime idea of extending this into a guide to refer to during development and beyond.
Design homogeneity
Moving forward again, you now have your modules defined and things are looking good. I mentioned that many instances of these modules will carry minor differences. These differences must be given significant thinking time, and discussion time with your designer(s).
It should be common knowledge by now that successful software projects are not the product of distinct design and build phases with little or no bidirectional feedback. The crucial nature of the designer-developer relationship has been covered in depth this year by Paul Robert Lloyd, and a joint effort from both teams throughout the project lifecycle is pivotal to your ability to craft and ship successful products.
This relationship comes into play when you’re well into the development of the site, and you start noticing these differences between instances of modules (they’ll start to stand out very clearly to you and your carefully regimented modular CSS system). Before you start overriding your base styles, question the differences with the designer to work out why they exist. Perhaps they are required and are important to their context, but perhaps they were oversights from earlier design revisions, or simple mistakes.
The craftsperson’s gland
As you grow towards the levels of expertise and experience where you can proudly and honestly consider yourself a craftsperson, you will find that you steadily develop what initially feels like a kind of sixth sense. I think of it more as a new hormonal gland, secreting into your bloodstream a powerful messenger chemical that can either reward or punish your brain. This gland is connected directly to your core understanding of what good quality work looks and feels like, an understanding that itself improves with experience.
This gland will make itself known to you in two ways. First, when you solve a problem in a beautifully elegant way with clean and unobtrusive code that looks good and just feels right, your craftsperson’s gland will ooze something delicious that makes your brain and soul glow from the inside out. You will beam triumphantly at the succinct lines of code on your computer display before bounding outside with a spring in your step to swim up glittering rainbows and kiss soft fluffy puppies.
The second way that you may become aware of your craftsperson’s gland, though, is somewhat less pleasurable. In an alternate reality, your parallel self is faced with the same problem, but decides to take a shortcut and get around it by some dubious means – the kind of technical method that the words hack, kludge and bodge are reserved for. As soon as you have done this, or even as you are doing it, your craftsperson’s gland will damn well let you know that you took the wrong fork in the road. As your craftsperson’s gland begins to secrete a toxic pus, you will at first become entranced into a vacant stare at the monstrous mess you are considering unleashing upon your site’s visitors, before writhing in the horrible agony of an itch that can never be scratched, and a feeling of being coated with the devil’s own deep and penetrating filth that no shower will ever cleanse.
Perhaps I exaggerate slightly, but it is no overstatement to suggest that you will find yourself being guided by proverbial angels and demons perched on opposite shoulders, or a whispering voice inside your head. If you harness this sense, sharpening it as if it were another tool in your kit and letting it guide or at least advise your decision making, you will transcend the rocky realm of random trial and error when faced with problems, and tend toward the right answers instinctively.
This gland can also empower your ability to assess your own work, becoming a judge before whom all your work is cross-examined. A good craftsperson regularly takes a step back from their work, and questions every facet of their product for its precise alignment with their core values of quality and sincerity, and even the very necessity of each component.
The wrapping
By now, you may be thinking that I take this kind of thing far too seriously, but to terrify you further, I haven’t even shared the half of it. Hopefully, though, this gives you an idea of the kind of levels of professionalism and dedication that it should take to get you on your way to becoming a craftsperson. It’s a level of accomplishment and ability toward which we all should strive, both for our personal fulfilment and the betterment of the products we use daily. I look forward to seeing your finely crafted work throughout 2012.",,272,0
273,There’s No Formula for Great Designs,Andy Clarke,"Before he combined them with fluid images and CSS3 media queries to coin responsive design, Ethan Marcotte described fluid grids — one of the most enjoyable parts of responsive design. Enjoyable that is, if you like working with math(s). But fluid grids aren’t perfect and, unless we’re careful when applying them, they can sometimes result in a design that feels disconnected.
Recapping fluid grids
If you haven’t read Ethan’s Fluid Grids, now would be a good time to do that. It centres around a simple formula for converting pixel widths to percentages:
(target ÷ context) × 100 = result
How does that work in practice? Well, take that Fireworks or Photoshop comp you’re working on (I call them static design visuals, or just visuals.) Of course, everything on that visual — column divisions, inline images, navigation elements, everything — is measured in pixels. Now:
Pick something in the visual and measure its width. That’s our target.
Take that target measurement and divide it by the width of its parent (context).
Multiply what you’ve got by 100 (shift two decimal places).
What you’re left with is a percentage width to drop into your style sheets.
For example, divide this 300px wide sidebar division by its 948px parent and then multiply by 100: your original 300px is neatly converted to 31.646%.
.content-sub {
width : 31.646%; /* 300px ÷ 948px = .31646 */ }
That formula makes it surprisingly simple for even die-hard fixed width aficionados to convert their visuals to percentage-based, fluid layouts.
It’s a handy formula for those who still design using static visuals, and downright essential for those situations where one person in an organization designs in Fireworks or Photoshop and another develops with CSS. Why?
Well, although I think that designing in a browser makes the best sense — particularly when designing for multiple devices — I’ll wager most designers still make visuals in Fireworks or Photoshop and use them for demonstrations and to get feedback and sign-off. That’s OK. If you haven’t made the transition to content-out designing in a browser yet, the fluid grids formula helps you carry on pushing pixels a while longer.
You can carry on moving pixel width measurements from your visuals to your style sheets, too, in the same way you always have. You can be precise to the pixel and even apply a grid image as a CSS background to help you keep everything lined up perfectly.
Once you’re done, and the fixed width layout in the browser matches your visual, loop back through your style sheets and convert those pixels to percentages using the fluid grids formula. With very little extra work, you’ll have a fluid implementation of your fixed width layout.
The fluid grids formula is simple and incredibly effective, but not long after I started working responsively I realized that the formula shouldn’t (always) be a one-fix, set-and-forget calculation. I noticed that unless we compensate for problems it sometimes creates, the result can be a disconnected design.
Staying connected
Good design relies on connectedness, a feeling of natural balance between elements and the grid they’re placed on. Give an element greater prominence or position in a visual hierarchy and you can fundamentally alter the balance and sometimes the meaning of a design.
Different from a browser’s page zooming feature — where images, text and overall layout change size by the same ratio — fluid grids flex a layout in response to a window or device width. Columns expand and contract, and within them fluid media (images and videos) can also change size. This can be one of the most impressive demonstrations of responsive design.
But not every element within a fluid grid can change size along with the window or device width. For example, type size and leading won’t change along with a column’s width.
When columns and elements within them change width, all too easily a visual hierarchy can be broken and along with it the relationship between element sizes and the outer window or viewport. This can happen quickly if you make just one set of fluid grid calculations and use those percentages across every screen width, from smartphones through tablets and up to large desktops.
The answer? Make several sets of fluid grid calculations, each one at a significant window or device width breakpoint. Then apply those new percentages, when needed, to help keep elements in proportion and maintain balance and connectedness. Here’s how I work.
Avoiding disconnection
I’ve never been entirely happy with grid frameworks such as the 960 Grid System, so I start almost every project by creating a custom grid to inform my layout decisions. Here’s a plain version of a grid from a recent project that I’ll use as an illustration.
This project’s grid comprises 84px columns and 24px gutters. This creates an odd number of columns at common tablet and desktop widths, and allows for 300px fixed width assets — useful when I need to fit advertising into a desktop layout’s sidebar.
Showing common advertising sizes
For this project I chose three 320 and Up breakpoints above 320px and, after placing as many columns as would fit those breakpoint widths, I derived three content widths:
Breakpoint    Columns    Content width
768px         7          732px
992px         9          948px
1,382px       13         1,380px
Here’s my grid again, this time with pixel measurements and breakpoints overlaid.
Showing pixel measurements and breakpoints
Now cast your mind back to the fluid grids calculation I made earlier. I divided a 300px element by 948px and arrived at 31.646%. For some elements it’s possible to use that percentage across all screen widths, but others will feel too small in relation to a narrower 768px and too large inside 1,380px.
To help maintain connectedness, I make a set of fluid grid calculations based on each of the content widths I established earlier. Now I can shift an element’s percentage width up or down when I switch to a new breakpoint and content width. For example:
300px is 40.984% of 732px
300px is 31.646% of 948px
300px is 21.739% of 1,380px
I’ll add all those fluid grid percentages to my grid image and save it for quick reference.
Showing percentages at all breakpoints
Then I can apply those different percentage widths to elements at each breakpoint using CSS3 media queries. For example, that sidebar division again:
/* 732px, 7-column width */
@media only screen and (min-width: 768px) {
.content-sub {
width : 40.984%; /* 300px ÷ 732px = .40984 */ }
}
/* 948px, 9-column width */
@media only screen and (min-width: 992px) {
.content-sub {
width : 31.646%; /* 300px ÷ 948px = .31646 */ }
}
/* 1380px, 13-column width */
@media only screen and (min-width: 1382px) {
.content-sub {
width : 21.739%; /* 300px ÷ 1380px = .21739 */ }
}
The number of changes you make to a layout at different breakpoints will, of course, depend on the specifics of the design you’re working on. Yes, this is additional work, but the result will be a layout that feels better balanced and within which elements remain in harmony with each other while they respond to new screen or device widths.
Putting the design in responsive web design
Until now, many of the conversations around responsive web design have been about aspects of technical implementation, rather than design. I believe we’re only beginning to understand what’s involved in designing responsively. In future, we’ll likely be making design decisions not just about proportions but also about responsive typography. We’ll also need to learn how to adapt our designs to device characteristics such as touch targets and more.
Sometimes we’ll make decisions to improve function, other times because they make a design ‘feel’ right. You’ll know when you’ve made a right decision. You’ll feel it.
After all, there really is no formula for making great designs.",,273,0
274,Adaptive Images for Responsive Designs,Matt Wilcox,"So you’ve been building some responsive designs and you’ve been working through your checklist of things to do:
You started with the content and designed around it, with mobile in mind first.
You’ve gone liquid and there’s nary a px value in sight; % is your weapon of choice now.
You’ve baked in a few media queries to adapt your layout and tweak your design at different window widths.
You’ve made your images scale to the container width using the fluid image technique.
You’ve even done the same for your videos using a nifty bit of JavaScript.
You’ve done a good job so pat yourself on the back. But there’s still a problem and it’s as tricky as it is important: image resolutions.
HTML has a problem
CSS is great at adapting a website design to different window sizes – it allows you not only to tweak layout but also to send rescaled versions of the design’s images. And you want to do that because, after all, a smartphone does not need a 1,900-pixel background image [1].
HTML is less great. In the same way that you don’t want CSS background images to be larger than required, you don’t want that happening with <img>s either. A smartphone only needs a small image but desktop users need a large one. Unfortunately <img>s can’t adapt like CSS, so what do we do?
Well, you could just use a high resolution image and the fluid image technique would scale it down to fit the viewport; but that’s sending an image five or six times the file size that’s really needed, which makes it slow to download and unpleasant to use. Smartphones are pretty impressive devices – my ancient iPhone 3G is more powerful in every way than my first proper computer – but they’re still terribly slow in comparison to today’s desktop machines. Sending a massive image means it has to be manipulated in memory and redrawn as you scroll. You’ll find phones rapidly run out of RAM and slow to a crawl.
Well, OK. You went mobile first with everything else so why not put in mobile resolution <img>s too? Because even though mobile devices are rapidly gaining share in your analytics stats, they’re still not likely to be the major share of your user base. I don’t think desktop users would be happy with pokey little mobile resolution images, do you? What we need are adaptive images.
Adaptive image techniques
There are a number of possible solutions, each with pros and cons, and it’s not as simple to find a graceful solution as you might expect.
Your first thought might be to use JavaScript to trawl through the markup and rewrite the source attribute. That’ll get you the right end result, but it’ll have done it in a way you absolutely don’t want. That’s because of the way browsers load resources. It starts to load the HTML and builds the page on-the-fly; as soon as it finds an <img> element it immediately asks the server for that image. After the HTML has finished loading, the JavaScript will run, change the src attribute, and then the browser will request that new image too. Not instead of, but as well as. Not good: that’s added more bloat instead of cutting it.
Plain JavaScript is out then, which is a problem, because what other tools do we have to work with as web designers? Let’s ignore that for now and I’ll outline another issue with the concept of serving different resolution images for different window widths: a basic file management problem. To request a different image, that image has to exist on the server. How’s it going to get there? That’s not a trivial problem, especially if you have non-technical users that update content through a CMS. Let’s say you solve that – do you plan on a simple binary switch: big image|little image? Is that really efficient or future-proof? What happens if you have an archive of existing content that needs to behave this way? Can you apply such a solution to existing content or markup?
There’s a detailed round-up of potential techniques for solving the adaptive images problem over at the Cloud Four blog if you fancy a dig around exploring all the options currently available. But I’m here to show you what I think is the most flexible and easy to implement solution, so here we are.
Adaptive Images
Adaptive Images aims to mitigate most of the issues surrounding the problems of bringing the venerable <img> tag into the 21st century. And it works by leaving that tag completely alone – just add that desktop resolution image into the markup as you’ve been doing for years now. We’ll fix it using secret magic techniques and bottled pixie dreams. Well, fine: with one .htaccess file, one small PHP file and one line of JavaScript. But you’re killing the mystique with that kind of talk.
So, what does this solution do?
It allows <img>s to adapt to the same break points you use in your media queries, giving granular control in the same way you get with your CSS.
It installs on your server in five minutes or less and after that is automatic and you don’t need to do anything.
It generates its own rescaled images on the server and doesn’t require markup changes, so you can apply it to existing web content.
If you wish, it will make all of your images go mobile-first (just in case that’s what you want if JavaScript and cookies aren’t available).
Sound good? I hope so. Here’s what you do.
Setting up and rolling out
I’ll assume you have some basic server knowledge along with that wealth of front-end wisdom exploding out of your head: that you know not to overwrite any existing .htaccess file for example, and how to set file permissions on your server. Feeling up to it? Excellent.
Download the latest version of Adaptive Images either from the website or from the GitHub repository.
Upload the included .htaccess and adaptive-images.php files into the root folder of your website.
Create a directory called ai-cache and make sure the server can write to it (CHMOD 755 should do it).
Add the following line of JavaScript into the <head> of your site:
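(From memory, the stock snippet records the device’s larger screen dimension in a ‘resolution’ cookie, along the lines of the sketch below; double-check it against the copy you download.)
<script>
/* stock Adaptive Images cookie line, quoted from memory */
document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';
</script>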
That’s it, unless you want to tweak the default settings. You likely do, but essentially you’re already up and running.
How it works
Adaptive Images does a number of things depending on the scenario the script has to handle, but here’s a basic overview of what it does when you load a page running it:
A session cookie is written with the value of the visitor’s screen size in pixels.
The browser encounters an <img> tag in the HTML and sends a request to the server for that image. It also sends the cookie, because that’s how browsers work.
Apache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL.
There are! The .htaccess says “Hey, server! Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.”
The PHP file then does some intelligent thinking which can cover a number of scenarios, but I’ll illustrate one path that can happen:
The PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px.
The PHP has a look at the available media query sizes that were configured and decides which one matches the user’s device.
It then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there.
We’ll pretend it doesn’t – the PHP then goes to the actual requested URI and finds that the original file does exist.
It has a look to see how wide that image is. If it’s already smaller than the user’s screen width it sends it along and stops there. But, let’s pretend the image is 1,000px wide.
The PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it.
It also does a few other things when needs arise, for example:
It sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it.
It handles cases where there isn’t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size.
It compares timestamps between the source image and the generated cache image – to ensure that if the source image gets updated, the old cached file won’t be sent.
Customizing
There are a few options you can customize if you don’t like the default values. By looking in the PHP’s configuration section at the top of the file (there’s a rough sketch of it after this list), you can:
Set the resolution breakpoints to match your media query break points.
Change the name and location of the ai-cache folder.
Change the quality level any generated JPG images are saved at.
Have it perform a subtle sharpen on rescaled images to help keep detail.
Toggle whether you want it to compare the files in the cache folder with the source ones or not.
Set how long the browser should cache the images for.
Switch between a mobile-first or desktop-first approach when a cookie isn’t found.
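From memory, that configuration block looks something like the sketch below. Treat the variable names and default values as illustrative, $mobile_first aside (it’s quoted later in this article), and verify them against the copy you download:
<?php
/* illustrative configuration sketch; check names against your copy */
$resolutions   = array(1382, 992, 768, 480); // match your media query breakpoints
$cache_path    = "ai-cache";                 // name/location of the cache folder
$jpg_quality   = 80;                         // quality of generated JPGs
$sharpen       = TRUE;                       // subtly sharpen rescaled images
$watch_cache   = TRUE;                       // compare cached files against sources
$browser_cache = 60*60*24*7;                 // browser cache lifetime, in seconds
$mobile_first  = TRUE;                       // what to send when no cookie is set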
More importantly, you probably want to omit a few folders from the AI behaviour. You don’t need or want it resizing the images you’re using in your CSS, for example. That’s fine – just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. Or, if you’re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it’ll only apply AI behaviour to a given list of folders.
Caveats
As I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I’ve seen so far.
This is a PHP solution
I wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don’t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence [2] and I would welcome anyone to contribute a port of the code [3].
Content delivery networks
Adaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won’t allow that to happen. Adaptive Images will not work if you’re using a CDN to deliver your website.
A minor but interesting cookie issue
As Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested – even though the JavaScript that sets the cookie is loaded by the browser before it finds any <img> tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem:
The $mobile_first toggle allows you to choose what to send to a browser if a cookie isn’t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest.
Even if set to TRUE, Adaptive Images checks the User Agent String. If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE.
This means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn’t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest.
The best way to get a cookie written is to use JavaScript as I’ve explained above, because it’s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP ‘image’ to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won’t be set until after images have been requested.
The future
For today, this is a pretty good solution. It works, and as it doesn’t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you’re good to go – you’d never know AI had been there.
However, this isn’t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future.
First, we could do with browsers sending far more information about the user’s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn’t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for.
Secondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does.
I hope you’ve found this interesting and will find Adaptive Images useful.
Footnotes
[1] While I’m talking about preventing smartphones from downloading resources they don’t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files.
[2] Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository.
[3] There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images.",,274,0
275,Context First: Web Strategy in Four Handy Ws,Alex Morris,"Many, many years ago, before web design became my proper job, I trained and worked as a journalist. I studied publishing in London and spent three fun years learning how to take a few little nuggets of information and turn them into a story. I learned a bunch of stuff that has all been a huge help to my design career. Flatplanning, layout, typographic theory. All of these disciplines have since translated really well to web design, but without doubt the most useful thing I learned was how to ask difficult questions.
Pretty much from day one of journalism school they hammer into you the importance of the Five Ws. Five disarmingly simple lines of enquiry that eloquently manage to provide the meat of any decent story. And with alliteration thrown in too. For a young journo, it’s almost too good to be true.
Who? What? Where? When? Why? It seems so obvious to almost be trite but, fundamentally, any story that manages to answer those questions for the reader is doing a pretty good job. You’ll probably have noticed feeling underwhelmed by certain news pieces in the past – disappointed, like something was missing. Some irritating oversight that really lets the story down. No doubt it was one of the Ws – those innocuous little suckers are generally only noticeable by their absence, but they sure get missed when they’re not there.
Question everything
I’ve always been curious. An inveterate tinkerer with things and asker of dopey questions, often to the point of abject annoyance for anyone unfortunate enough to have ended up in my line of fire. So, naturally, the Five Ws started drifting into other areas of my life. I’d scrutinize everything, trying to justify or explain my rationale using these Ws, but I’d also find myself ripping apart the stuff that clearly couldn’t justify itself against the same criteria.
So when I started working as a designer I applied the same logic and, sure enough, the Ws pretty much mapped to the exact same needs we had for gathering requirements at the start of a project. It seemed so obvious, such a simple way to establish the purpose of a product. What was it for? Why were we making it? And, of course, who were we making it for? It forced clients to stop and think, when really what they wanted was to get going and see something shiny. Sometimes that was a tricky conversation to have, but it’s no coincidence that those who got it also understood the value of strategy and went on to have good solid products, while those that didn’t often ended up with arrogantly insular and very shiny but ultimately unsatisfying and expendable products. Empty vessels make the most noise and all that…
Content first
I was both surprised and pleased when the whole content first idea started to rear its head a couple of years back. Pleased, because without doubt it’s absolutely the right way to work. And surprised, because personally it’s always been the way I’ve done it – I wasn’t aware there was even an alternative way. Content in some form or another is the whole reason we were making the things we were making. I can’t even imagine how you’d start figuring out what a site needs to do, how it should be structured, or how it should look without a really good idea of what that content might be. It baffles me still that this was somehow news to a lot of people. What on earth were they doing? Design without purpose is just folly, surely?
It’s great to see the idea gaining momentum but, having watched it unfold, it occurred to me recently that although it’s fantastic to see a tangible shift in thinking – away from those bleak times, where making things up was somehow deemed an appropriate way to do things – there’s now a new bad guy in town.
With any buzzword solution of the moment, there’s always a catch, and it seems like some have taken the content first approach a little too literally. By which I mean, it’s literally the first thing they do. The project starts, there’s a very cursory nod towards gathering requirements, and off they go, cranking content. Writing copy, making video, commissioning illustrations.
All that’s happened is that the ‘making stuff up’ part has shifted along the line, away from layout and UI, back to the content.
Starting is too easy
I can’t remember where I first heard that phrase, but it’s a great sentiment which applies to so much of what we do on the web. The medium is so accessible and to an extent disposable; throwing things together quickly carries far less burden than in any other industry. We’re used to tweaking as we go, changing bits, iterating things into shape. The ubiquitous beta tag has become the ultimate caveat, and has made the unfinished and unpolished acceptable. Of course, that can work brilliantly in some circumstances. Occasionally, a product offers such a paradigm shift it’s beyond the level of deep planning and prelaunch finessing we’d ideally like. But, in the main, for most client sites we work on, there really is no excuse not to do things properly. To ask the tricky questions, to challenge preconceptions and really understand the Ws behind the products we’re making before we even start.
The four Ws
For product definition, only four of the five Ws really apply, although there’s a lot of discussion around the idea of when being an influencing factor. For example, the context of a user’s engagement with your product is something you can make a call on depending on the specifics of the project.
So, here’s my take on the four essential Ws. I’ll point out here that, of course, these are not intended to be autocratic dictums. Your needs may differ, your clients’ needs may differ, but these four starting points will get you pretty close to where you need to be.
Who
It’s surprising just how many projects start without a real understanding of the intended audience. Many clients think they have an idea, but without really knowing – it’s presumptive at best, and we all know what presumption is the mother of, right? Of course, we can’t know our audiences in the same way a small shop owner might know their customers. But we can at least strive to find out what type of people are likely to be using the product. I’m not talking about deep user research. That should come later.
These are the absolute basics. What’s the context for their visit? How informed are they? What’s their level of comprehension? Are they able to self-identify and relate to categories you have created? I could go on, and it changes on a per-project basis. You’ll only find this out by speaking to them, if not in person, then indirectly through surveys, questionnaires or polls. The mechanism is less important than actually reaching out and engaging with them, because without that understanding it’s impossible to start to design with any empathy.
What
Once you become deeply involved directly with a product or service, it’s notoriously difficult to see things as an outsider would. You learn the thing inside and out, you develop shortcuts and internal phraseology. Colloquialisms creep in. You become too close. So it’s no surprise when clients sometimes struggle to explain what it is their product actually does in a way that others can understand.
Often products are complex but, really, the core reasons behind someone wanting to use that product are very simple. There’s a value proposition for the customer and, if they choose to engage with it, there’s a value exchange. If that proposition or exchange isn’t transparent, then people become confused and will likely go elsewhere. Make sure both your client and you really understand what that proposition is and, in turn, what the expected exchange should be. In a nutshell: what is the intended outcome of that engagement? Often the best way to do this is strip everything back to nothing. Verbosity is rife on the web. Just because it’s easy to create content, that shouldn’t be a reason to do so. Figure out what the value proposition is and then reintroduce content elements that genuinely help explain or present that to a level that is appropriate for the audience.
Why
In advertising, they talk about the truths behind a product or service. Truths can be tangible or abstract, but the most important part is the resonance those truths hit with a customer. In a digital product or service those truths are often exposed as benefits. Why is this what I need? Why will it work for me? Why should I trust you? The why is one of the more fluffy Ws, yet it’s such an important one to nail. Clients can get prickly when you ask them to justify the why behind their product, but it’s a fantastic way to make sure the value proposition is clear, realistic and meets with the expectations of both client and customer.
It’s our job as designers to question things: we’re not just a pair of hands for clients. Just recently I spoke to a potential client about a site for his business. I asked him why people would use his product and also why his product seemed so fractured in its direction. He couldn’t answer that question so, instead of ploughing on regardless, he went back to his directors and is now re-evaluating that business. It was awkward but he thanked me and hopefully he’ll have a better product as a result.
Where
In this instance, where is not so much a geographical thing, although in some cases that level of context may indeed become an influencing factor… The where we’re talking about here is the position of the product in relation to others around it. By looking at competitors or similar services around the one you are designing, you can start to get a sense for many of the things that are otherwise hard to pin down or have yet to be defined. For example, in a collection of sites all selling cars, where does yours fit most closely? Where are the overlaps? How are they communicating to their customers? How is the product range presented or categorized?
It’s good to look around and see how others are doing it. Not in a quest for homogeneity but more to reference or to avoid certain patterns that may or may not make sense for your own particular product. Clients often strive to be different for the sake of it. They feel they need to provide distinction by going against the flow a bit. We know different. We know users love convention. They embrace familiar mental models. They’re comfortable with things that they’ve experienced elsewhere. By showing your client that position is a vital part of their strategy, you can help shape their product into something great.
To conclude
So there we have it – the four Ws. Each part tells a different and vital part of the story you need to be able to make a really good product. It might sound like a lot of work, particularly when the client is breathing down your neck expecting to see things, but without those pieces in place, the story you’re building your product on, and the content that you’re creating to form that product can only ever fit into one genre. Fiction.",,275,0
276,Your jQuery: Now With 67% Less Suck,Scott Kosman,"Fun fact: more websites are now using jQuery than Flash.
jQuery is an amazing tool that’s made JavaScript accessible to developers and designers of all levels of experience. However, as Spiderman taught us, “with great power comes great responsibility.” The unfortunate downside to jQuery is that while it makes it easy to write JavaScript, it makes it easy to write really really f*ing bad JavaScript. Scripts that slow down page load, unresponsive user interfaces, and spaghetti code knotted so deep that it should come with a bottle of whiskey for the next sucker developer that has to work on it.
This becomes more important for those of us who have yet to move into the magical fairy wonderland where none of our clients or users view our pages in Internet Explorer. The IE JavaScript engine moves at the speed of an advancing glacier compared to more modern browsers, so optimizing our code for performance takes on an even higher level of urgency.
Thankfully, there are a few very simple things anyone can add into their jQuery workflow that can clear up a lot of basic problems. When undertaking code reviews, three of the areas where I consistently see the biggest problems are: inefficient selectors; poor event delegation; and clunky DOM manipulation. We’ll tackle all three of these and hopefully you’ll walk away with some new jQuery batarangs to toss around in your next project.
Selector optimization
Selector speed: fast or slow?
Saying that the power behind jQuery comes from its ability to select DOM elements and act on them is like saying that Photoshop is a really good tool for selecting pixels on screen and making them change color – it’s a bit of a gross oversimplification, but the fact remains that jQuery gives us a ton of ways to choose which element or elements in a page we want to work with. However, a surprising number of web developers are unaware that all selectors are not created equal; in fact, it’s incredible just how drastic the performance difference can be between two selectors that, at first glance, appear nearly identical. For instance, consider these two ways of selecting all paragraph tags inside a <div> with an ID.
$(""#id p"");
$(""#id"").find(""p"");
Would it surprise you to learn that the second way can be more than twice as fast as the first? Knowing which selectors outperform others (and why) is a pretty key building block in making sure your code runs well and doesn’t frustrate your users waiting for things to happen.
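If you’d like to see that for yourself, a crude way is to time both forms from your browser’s console with console.time() (#id here is a stand-in for any ID that exists on the page you’re testing; absolute numbers will vary wildly between browsers):
// crude timing sketch: run each selector a few thousand times
console.time('descendant selector');
for (var i = 0; i < 10000; i++) { $('#id p'); }
console.timeEnd('descendant selector');
console.time('find()');
for (var i = 0; i < 10000; i++) { $('#id').find('p'); }
console.timeEnd('find()');
For anything rigorous, use a purpose-built tool like jsPerf instead.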
There are many different ways to select elements using jQuery, but the most common ways can be basically broken down into five different methods. In order, roughly, from fastest to slowest, these are:
$(""#id"");
This is without a doubt the fastest selector jQuery provides because it maps directly to the native document.getElementbyId() JavaScript method. If possible, the selectors listed below should be prefaced with an ID selector in conjunction with jQuery’s .find() method to limit the scope of the page that has to be searched (as in the $(""#id"").find(""p"") example shown above).
$(""p"");, $(""input"");, $(""form""); and so on
Selecting elements by tag name is also fast, since it maps directly to the native document.getElementsByTagname() method.
$("".class"");
Selecting by class name is a little trickier. While still performing very well in modern browsers, it can cause some pretty significant slowdowns in IE8 and below. Why? IE9 was the first IE version to support the native document.getElementsByClassName() JavaScript method. Older browsers have to resort to using much slower DOM-scraping methods that can really impact performance.
$(""[attribute=value]"");
There is no native JavaScript method for this selector to use, so the only way that jQuery can perform the search is by crawling the entire DOM looking for matches. Modern browsers that support the querySelectorAll() method will perform better in certain cases (Opera, especially, runs these searches much faster than any other browser) but, generally speaking, this type of selector is Slowey McSlowersons.
$("":hidden"");
Like attribute selectors, there is no native JavaScript method for this one to use. Pseudo-selectors can be painfully slow since the selector has to be run against every element in your search space. Again, modern browsers with querySelectorAll() will perform slightly better here, but try to avoid these if at all possible. If you must use one, try to limit the search space to a specific portion of the page: $(""#list"").find("":hidden"");
But, hey, proof is in the performance testing, right? It just so happens that said proof is sitting right here. Be sure to notice the class selector numbers beside IE7 and 8 compared to other browsers and then wonder how the people on the IE team at Microsoft manage to sleep at night. Yikes.
Chaining
Almost all jQuery methods return a jQuery object. This means that when a method is run, its results are returned and you can continue executing more methods on them. Rather than writing out the same selector multiple times over, just making a selection once allows multiple actions to be run on it.
Without chaining
$(""#object"").addClass(""active"");
$(""#object"").css(""color"",""#f0f"");
$(""#object"").height(300);
With chaining
$(""#object"").addClass(""active"").css(""color"", ""#f0f"").height(300);
This has the dual effect of making your code shorter and faster. Chained methods will be slightly faster than multiple methods made on a cached selector, and both ways will be much faster than multiple methods made on non-cached selectors. Wait… “cached selector”? What is this new devilry?
Caching
Another easy way to speed up your code that seems to be a mystery to developers is the idea of caching your selectors. Think of how many times you end up writing the same selector over and over again in any project. Every $(".element") selector has to search the entire DOM each time, regardless of whether or not that selector had been previously run. Running the selection once and then storing the results in a variable means that the DOM only has to be searched once. Once the results of a selector have been cached, you can do anything with them.
First, run your search (here we’re selecting all of the <li> elements inside #blocks):
var blocks = $("#blocks").find("li");
Now, you can use the blocks variable wherever you want without having to search the DOM every time.
$(""#hideBlocks"").click(function() {
blocks.fadeOut();
});
$(""#showBlocks"").click(function() {
blocks.fadeIn();
});
My advice? Any selector that gets run more than once should be cached. This jsperf test shows just how much faster a cached selector runs compared to a non-cached one (and even throws some chaining love in to boot).
Event delegation
Event listeners cost memory. In complex websites and apps it’s not uncommon to have a lot of event listeners floating around, and thankfully jQuery provides some really easy methods for handling event listeners efficiently through delegation.
In a bit of an extreme example, imagine a situation where a 10×10 cell table needs to have an event listener on each cell; let’s say that clicking on a cell adds or removes a class that defines the cell’s background color. A typical way that this might be written (and something I’ve often seen during code reviews) is like so:
$('table').find('td').click(function() {
$(this).toggleClass('active');
});
jQuery 1.7 has provided us with a new event listener method, .on(). It acts as a utility that wraps all of jQuery’s previous event listeners into one convenient method, and the way you write it determines how it behaves. To rewrite the above .click() example using .on(), we’d simply do the following:
$('table').find('td').on('click',function() {
$(this).toggleClass('active');
});
Simple enough, right? Sure, but the problem here is that we’re still binding one hundred event listeners to our page, one to each individual table cell. A far better way to do things is to create one event listener on the table itself that listens for events inside it. Since the majority of events bubble up the DOM tree, we can bind a single event listener to one element (in this case, the <table>) and wait for events to bubble up from its children. The way to do this using the .on() method requires only one change from our code above:
$('table').on('click','td',function() {
$(this).toggleClass('active');
});
All we’ve done is moved the td selector to an argument inside the .on() method. Providing a selector to .on() switches it into delegation mode, and the event is only fired for descendants of the bound element (table) that match the selector (td). With that one simple change, we’ve gone from having to bind one hundred event listeners to just one. You might think that the browser having to do one hundred times less work would be a good thing and you’d be completely right. The difference between the two examples above is staggering.
(Note that if your site is using a version of jQuery earlier than 1.7, you can accomplish the very same thing using the .delegate() method. The syntax of how you write the function differs slightly; if you’ve never used it before, it’s worth checking the API docs for that page to see how it works.)
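For reference, the delegated version of the table example written with .delegate() looks like this; note that the selector comes first:
// pre-1.7 equivalent of the delegated .on() example above
$('table').delegate('td', 'click', function() {
  $(this).toggleClass('active');
});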
DOM manipulation
jQuery makes it very easy to manipulate the DOM. It’s trivial to create new nodes, insert them, remove other ones, move things around, and so on. While the code to do this is simple to write, every time the DOM is manipulated, the browser has to repaint and reflow content which can be extremely costly. This is no more evident than in a long loop, whether it be a standard for() loop, while() loop, or jQuery $.each() loop.
In this case, let’s say we’ve just received an array full of image URLs from a database or Ajax call or wherever, and we want to put all of those images in an unordered list. Commonly, you’ll see code like this to pull this off:
var arr = [reallyLongArrayOfImageURLs];
$.each(arr, function(count, item) {
var newImg = '<li><img src="' + item + '"></li>';
$('#imgList').append(newImg);
});
There are a couple of problems with this. For one (which you should have already noticed if you’ve read the earlier part of this article), we’re making the $("#imgList") selection once for each iteration of our loop. The other problem here is that each time the loop iterates, it’s adding a new <li> to the DOM. Each of those insertions is going to be costly, and if our array is quite large then this could lead to a massive slowdown or even the dreaded ‘A script is causing this page to run slowly’ warning.
var arr = [reallyLongArrayOfImageURLs],
tmp = '';
$.each(arr, function(count, item) {
tmp += '<li><img src="' + item + '"></li>';
});
$('#imgList').append(tmp);
All we’ve done here is create a tmp variable that each <li> is added to as it’s created. Once our loop has finished iterating, that tmp variable will contain all of our list items in memory, and can be appended to our <ul>