Employing specific JavaScript animation frameworks
Making use of all the powerful properties of CSS Transforms and CSS Animation
Trying out emerging CSS3 animation tools like Sencha Animator
If it wasn’t already apparent, these demos show an exaggerated example and probably aren’t practical in a lot of environments. However, there are a handful of great sites out there that honor animation techniques—metaphor, physics, and misdirection, among others—like Benjamin De Cock’s vCard, 20 Things I Learned About Browsers and the Web by Fantasy Interactive, and the Nike Snowboarding site by Ian Coyle and HEGA. They’re wonderful testaments to what you can do to aid interaction for users.
My goal was to show you the ‘why’ and the ‘how.’ Your charge is to discern the ‘where’ and the ‘when.’ Happy animating!",2010,Dan Mall,danmall,2010-12-15T00:00:00+00:00,https://24ways.org/2010/real-animation-using-javascript-css3-and-html5-video/,code
236,Extreme Design,"Recently, I set out with twelve other designers and developers for a 19th century fortress on the Channel Island of Alderney. We were going to /dev/fort, a sort of band camp for geeks. Our cohort’s mission: to think up, build and finish something – without readily available internet access.
Alderney runway, photo by Chris Govias
Wait, no internet?
Well, pretty much. As the creators of /dev/fort, James Aylett and Mark Norman Francis, put it: “Imagine a place with no distractions – no IM, no Twitter”. But there’s also no way to quickly look up a design pattern, code sample or source material. Like packing for camping, /dev/fort means bringing everything you’ll need on your back or your hard drive: from long johns to your favourite icon set.
We got to work the first night discussing ideas for what we wanted to build. By the time breakfast was cleared up the next morning, we’d settled on Russ’s idea to make the Apollo 13 (PDF) transcript accessible. Days two and three were spent collaboratively planning (KJ style) what features we wanted to build, and unravelling the larger UX challenges of the project. The next five days were spent building it. Within 36 hours of touchdown at Southampton Airport, we launched our creation: spacelog.org
The weather was cold, the coal fire less than ideal, food and supplies a hike away, and the process lightning-fast. A week of designing under extreme circumstances called for an extreme process. Some of this was driven by James’s and Norm’s experience running these things, but a lot of it materialised while we were there – especially for our three-strong design team (myself, Gavin O’Carroll and Chris Govias) who, though we knew each other, had never worked together as a group in this kind of scenario before.
The outcome was a pretty spectacular process, with some key takeaways useful for any small group trying to build something quickly.
What it’s like inside the fort
/dev/fort has the pressure and pace of a hack day without being a hack day – primarily, no workshops or interruptions, but also a different mentality. While hack days are typically developer-driven with a ‘hack first, design later (if at all)’ attitude, James was quick to tell the team to hold off from writing any code until we had a plan. This put a healthy pressure on the design and product folks to slash through the UX problems before we started building.
While the fort definitely had more of a hack day feel, all of us were familiar with Agile methods, so we borrowed a few useful techniques such as morning stand-ups and an emphasis on teamwork. We cut some really good features to make our launch date, and chunked the work based on user goals, iterating as we went.
What made this design process work?
A golden ratio of teams
In my personal experience, both professionally and in free-form situations like this, the tendency is to get (or hire) just one designer. Leaders of businesses, founders of start-ups, organisers of events: one designer is not enough! Finding one ace-blooded designer who can ‘do everything’ will always result in a bottleneck and burnout. Like the nuances between different development languages, design is a multifaceted discipline, and very few can claim to be equally strong in every aspect. Overlap in skill sets will result in a stronger, more robust interface.
More importantly, however, having lots of designers to go around meant that we all had the opportunity to pair with developers, polishing the details that don’t usually get polished. As soon as we launched, the public reception of the design and UX was overwhelmingly positive (proof!). But also, a lot of people asked us who the designer was, attributing it to one person.
While it’s important to note that everyone in our team was multitalented (and could easily shift between roles, helping us all stay unblocked), the golden ratio James and Norm devised was two product/developer folks, three interaction designers and eight developers.
photo by Ben Firshman
Equality inside the fortress walls
Something magical about the fort is how everyone leaves the outside world on the drawbridge. Job titles, professional status, Twitter followers, and so on. Like scout camp, a mutual respect and trust is expected of all the participants. Like extreme programming, extreme design requires us all to be equal partners in a collaborative team. I think this is especially worth noting for designers; our past is filled with the clear hierarchy of the traditional studio system which, however important for taste and style, seems less compatible with modern web/software development methods.
Being equal doesn’t mean being the same, however. We established clear roles and teams for ourselves on the second day, deferring to that person when a decision needed to be made. As the interface coalesced, the designers and developers took ownership over certain parts to ensure the details got looked after, while staying open to ideas and revisions from the rest of the cohort.
Create a space where everyone who enters is equal, but be sure to establish clear roles. Even if it’s just for a short while, the environment will be beneficial.
photo by Ben Firshman
Hang your heraldry from the rafters
Forts and castles are full of lore: coats of arms; paintings of battles; suits of armour. It’s impossible not to be surrounded by these stories, words and ways of thinking. Like the whiteboards on the walls, putting organisational lore in your physical surroundings makes it impossible not to see.
Ryan Alexander brought some of those static-cling whiteboard sheets, which were quickly filled with use cases; IA; team roles; and, most importantly, a glossary. As soon as we started working on the project, we realised we needed to get clear on what certain words meant: what was a logline, a range, a phase, a key moment? Were the back-end people using these words in the same way design and product were? Quickly writing up a glossary of terms meant everyone was instantly speaking the same language. There was no “Ah, I misunderstood because in the data structure x means y” or, even worse, accidental seepage of technical language into the user interface copy.
Put a glossary of your internal terminology somewhere big and fat on the wall. Stand around it and argue until you agree on what it says. Leave it up; don’t underestimate the power of ambient communication and physical reference.
Plan more, download less
While the internet is forbidden inside the fort, we did go on downloading expeditions: NASA photography; code documentation; and so on. The project wouldn’t have been possible without a few trips to the web. We had two lists on the wall: groceries and supplies; internets – “loo roll; Tom Stafford photo”.
This changed our usual design process, forcing us to plan carefully and think of what we needed ahead of time. Getting to the internet was a thirty-minute hike up a snow-covered cliff to the town airport, so you really had to need it, too.
The path to the internet
For the visual design, especially, this resulted in more focus up front, and communication between the designers on what assets we required. It made us make decisions earlier and stick with them, creating less distraction and churn later in the process.
Try it at home: unplug once you’ve got the things you need. As an artist, it’s easier to let your inner voice shine through if you’re not looking at other people’s work while creating.
Social design
Finally, our design team experimented with a collaborative approach to wireframing. Once we had collectively nailed down use cases, IA, user journeys and other critical artefacts, we tried a pairing approach. One person drew in Illustrator in real time as the other two articulated what to draw. (This would work equally well with two people, but with three it meant that one of us could jump up and consult the lore on the walls or clarify a technical detail.) The result: we considered more alternatives, rallied around a single solution more quickly, and resolved difficult problems faster.
At a certain stage we discovered it was more efficient for one person to take over – this happened around the time when the basic wireframes existed in Illustrator and we’d collectively run through the use cases, making sure that everything was accounted for in a broad sense. At this point, take a break, go have a beer, and give yourself a pat on the back.
Put the files somewhere accessible so everyone can use them as their base, and divide up the more detailed UI problems, screens or journeys. At this level of detail it’s better to have your personal headspace.
Gavin called this ‘social design’. Chatting and drawing in real time turned what was normally a rather solitary act into a very social process, with some really promising results. I’d tried something like this before with product or developer folks, and it can work – but there’s something really beautiful about switching places and everyone involved being equally quick at drawing. That’s not something you get with non-designers, and frequent swapping of the ‘driver’ and ‘observer’ roles is a key aspect to pairing.
Tackle the forest collectively and the trees individually – it will make your framework more robust and your details more polished. Win/win.
The return home
Our flight off the island was delayed, and we were grateful to see a 3G signal on our phones again: it allowed for a flurry of domain name look-ups, Twitter catch-up, and e-mails to loved ones. A week in an isolated fort really made me appreciate continuous connectivity, but also just how unique some of these processes might be.
You just never know what crazy place you might be designing from next.",2010,Hannah Donovan,hannahdonovan,2010-12-09T00:00:00+00:00,https://24ways.org/2010/extreme-design/,process
237,Circles of Confusion,"Long before I worked on the web, I specialised in training photographers how to use large format, 5×4″ and 10×8″ view cameras – film cameras with swing and tilt movements, bellows, and upside-down, back-to-front images viewed on dim, ground-glass screens. It’s been fifteen years since I clicked a shutter on a view camera, but some things have stayed with me from those years.
In photography, even the best lenses don’t focus light onto a point (infinitely small in size) but onto ‘spots’ or circles in the ‘film/image plane’. These circles of light have dimensions, despite being microscopically small. They’re known as ‘circles of confusion’.
The larger these circles of light become, the more unsharp parts of a photograph appear. On the flip side, when the circles are smaller, an image looks sharper and more in focus. This is the basis for photographic depth of field, and with that comes the knowledge that no photograph can be perfectly focused, never truly sharp. Instead, photographs can only be ‘acceptably unsharp’.
Acceptable unsharpness is now a concept that’s relevant to the work we make for the web, because often – unless we compromise – websites cannot look or be experienced exactly the same across browsers, devices or platforms. Accepting that fact, and learning to look upon these natural differences as creative opportunities instead of imperfections, can be tough. Deciding which aspects of a design must remain consistent and, therefore, possibly require more time, effort or compromises can be tougher. Circles of confusion can help us, our bosses and our customers make better, more informed decisions.
Acceptable unsharpness
Many clients still demand that every aspect of a design should be ‘sharp’ – that every user must see rounded boxes, gradients and shadows – without regard for the implications. I believe that this stems largely from the fact that they have previously been shown designs – and asked for sign-off – using static images.
It’s also true that in the past, organisations have invested heavily in style guides which, while maybe still useful in offline media, have a strictness that often fails to allow for the flexibility that we need to create experiences that are appropriate to a user’s browser or device capabilities.
We live in an era where web browsers and devices have wide-ranging capabilities, and websites can rarely look or be experienced exactly the same across them. Is a particular typeface vital to a user’s experience of a brand? How important are gradients or shadows? Are rounded corners really that necessary? These decisions determine how ‘sharp’ an element should be across browsers with different capabilities and, therefore, how much time, effort or extra code and images we devote to achieving consistency between them. To help our clients make those decisions, we can use circles of confusion.
Circles of confusion
Using circles of confusion involves plotting aspects of a visual design into a series of concentric circles, starting at the centre with elements that demand the most consistency. Then, work outwards, placing elements in order of their priority so that they become progressively ‘softer’, more defocused as they’re plotted into outer rings.
If layout and typography must remain consistent, place them in the centre circle as they’re aspects of a design that must remain ‘sharp’.
When gradients are important – but not vital – to a user’s experience of a brand, plot them close to, but not in the centre. This makes everyone aware that to achieve consistency, you’ll need to carve out extra images for browsers that don’t support CSS gradients.
If achieving rounded corners or shadows in all browsers isn’t important, place them into outer circles, allowing you to save time by not creating images or employing JavaScript workarounds.
I’ve found plotting aspects of a visual design into circles of confusion is a useful technique when explaining the natural differences between browsers to clients. It sets more realistic expectations and creates an environment for more meaningful discussions about progressive and emerging technologies. Best of all, it enables everyone to make better and more informed decisions about design implementation priorities.
Involving clients makes the implications of the decisions they make more transparent. For me, this has sometimes meant shifting deadlines, or it has allowed me to more easily justify an increase in fees. Most important of all, circles of confusion have helped the people that I work with move beyond yesterday’s one-size-fits-all thinking about visual design, towards accepting the rich diversity of today’s web.",2010,Andy Clarke,andyclarke,2010-12-23T00:00:00+00:00,https://24ways.org/2010/circles-of-confusion/,process
238,Everything You Wanted To Know About Gradients (And a Few Things You Didn’t),"Hello. I am here to discuss CSS3 gradients. Because, let’s face it, what the web really needed was more gradients.
Still, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer’s vocabulary. There’s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work.
Of course, that whole ‘proper application’ thing is the tricky bit.
But given their place in our toolkit and their prominence online, it really is heartening to see we can create gradients directly with CSS. They’re part of the draft images module, and implemented in two of the major rendering engines.
Still, I’ve always found CSS gradients to be one of the more confusing aspects of CSS3. So if you’ll indulge me, let’s take a quick look at how to create CSS gradients—hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser.
Gradient theory 101 (I hope that’s not really a thing)
Right. So before we dive into the code, let’s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here’s a straightforward one:
I spent seconds (well, hours) designing this gradient. I hope you like it.
At either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different:
(Don’t ever really do this. Please. I beg you.)
It’s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way.
Now, color stops alone do not a gradient make. Between each is a transition point, the fail-over point between the two stops. Now, the transition point doesn’t need to fall exactly between stops: it can be brought closer to one stop or the other, influencing the overall shape of the gradient.
A tale of two syntaxes
Armed with our new vocabulary, let’s look at a CSS gradient in the wild. Behold, the simple input button:
There’s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here’s the CSS that makes it happen:
input[type=submit] {
background-color: #F47A20;
background-image: -moz-linear-gradient(
#FAA51A,
#F47A20
);
background-image: -webkit-gradient(linear, 0 0, 0 100%,
color-stop(0, #FAA51A),
color-stop(1, #F47A20)
);
}
I’ve borrowed David DeSandro’s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that’s perfectly understandable—heck, it sort of turned mine. But let’s step through the CSS slowly, and see if we can’t make it a little less terrifying.
Verbose WebKit is verbose
Here’s the syntax for our little gradient on WebKit:
background-image: -webkit-gradient(linear, 0 0, 0 100%,
color-stop(0, #FAA51A),
color-stop(1, #F47A20)
);
Woof. Quite a mouthful, no? Well, here’s what we’re looking at:
WebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients.
The next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). Linear gradients are simply drawn along the path between those two points, which means we can change the direction of our gradient just by altering its start and end points.
Afterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop’s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself.
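For instance, to run the same gradient horizontally rather than vertically, we only need to move the end point across instead of down (a quick sketch based on the button example above):
background-image: -webkit-gradient(linear, 0 0, 100% 0,
color-stop(0, #FAA51A),
color-stop(1, #F47A20)
);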
For a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us:
background-image: -webkit-gradient(linear, 0 0, 0 100%,
from(#FAA51A),
to(#F47A20)
);
from(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#F47A20) is the same as color-stop(1, #F47A20) or color-stop(100%, #F47A20)—in both cases, we’re simply declaring the first and last color stops in our gradient.
Terse Gecko is terse
WebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3.
Naturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented.
Here’s how we get gradient-y in Gecko:
background-image: -moz-linear-gradient(
#FAA51A,
#F47A20
);
Wait, what? Done already? That’s right. By default, -moz-linear-gradient assumes you’re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that’s the case, then you simply need to specify your color stops, delimited with a few commas.
I know: that was almost… painless. But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them.
We can specify an origin point for our gradient:
background-image: -moz-linear-gradient(50% 100%,
#FAA51A,
#F47A20
);
As well as an angle, to give it a direction:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#F47A20
);
And we can specify multiple stops, simply by adding to our comma-delimited list:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#FCC,
#F47A20
);
By adding a percentage after a given color value, we can determine its position along the gradient path:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#FCC 20%,
#F47A20
);
So that’s some of the flexibility implicit in the W3C/Mozilla-style syntax.
Now, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit’s more verbose approach to the, well, looseness behind the -moz syntax. À chacun son gradient syntax.
Still, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same.
Reusing color stops for fun and profit
But CSS gradients aren’t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects.
Tim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it’s the feature image that really catches the eye.
You see, there’s nothing that says you can’t reuse color stops. And Tim’s exploited that perfectly.
He’s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he’s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo.
But then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss.
And his final color stop ends at the same fully transparent white, completing the effect. Hot? I do believe so.
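Pieced together from that description, the gradient might look something like this in the Mozilla syntax (my approximation of the effect, not Tim’s actual code):
background-image: -moz-linear-gradient(0 0, 45deg,
rgba(255, 255, 255, 0),
rgba(255, 255, 255, 0.1) 50%,
rgba(255, 255, 255, 0) 50%,
rgba(255, 255, 255, 0)
);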
Rocking the radials
We’ve been looking at linear gradients pretty exclusively. But I’d be remiss if I didn’t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar:
And here’s the relevant CSS:
background: -moz-radial-gradient(50% 100%, farthest-side,
rgb(204, 255, 255) 1%,
rgb(85, 85, 85) 15%,
rgba(85, 85, 85, 0)
);
background: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15,
from(rgb(204, 255, 255)),
to(rgba(85, 85, 85, 0))
);
Now, the syntax builds on what we’ve already learned about linear gradients, so much of it might be familiar to you, picking out color stops and transition points, as well as the two syntaxes’ reliance on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, …)) to shift into circular mode.
Mozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There’s also a size constant defined (farthest-side), which determines the reach and shape of our gradient.
WebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each also accepts a radius in pixels, allowing you to control the skew and breadth of the circle.
Again, this is a fairly modest little radial gradient. Time and article length (and, let’s be honest, your author’s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can’t recommend the references at Mozilla and Apple strongly enough.
Leave no browser behind
But no matter the kind of gradients you’re working with, there is a large swathe of browsers that simply don’t support gradients. Thankfully, it’s fairly easy to declare a sensible fallback—it just depends on the kind of fallback you’d like. Essentially, gradient-blind browsers will disregard any properties containing references to either -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties.
For example: if you’d like to fall back to a flat color, simply declare a separate background-color:
.nav {
background-color: #000;
background-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
Or perhaps just create three separate background properties.
.nav {
background: #000;
background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
We can even build on this to fall back to a non-gradient image:
.nav {
background: #000 url(""faux-gradient-lol.png"") repeat-x;
background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
No matter the approach you feel most appropriate to your design, it’s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings.
(If you’re feeling especially masochistic, there’s even a way to get simple linear gradients working in IE via Microsoft’s proprietary filters. Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I’d recommend avoiding those.
And don’t tell Andy Clarke I told you, or he’ll probably unload his Derringer at me. Or something.)
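For the record, the filter in question looks something like the following, shown here only so you know what you’re avoiding:
.nav {
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FAA51A', endColorstr='#F47A20');
}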
Go forth and, um, gradientify!
It’s entirely possible your head’s spinning. Heck, mine is, but that might be the effects of the ’nog. But maybe you’re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine.
Well, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they’re easily modifiable by tweaking a few CSS properties. Because, let’s face it, less time spent yelling at Photoshop is a very, very good thing.
Of course, CSS-generated gradients are not without their drawbacks. The syntax can be confusing, and it’s still under development at the W3C. As we’ve seen, browser support is still very much in flux. And it’s possible that gradients themselves have some real performance drawbacks—so test thoroughly, and gradient carefully.
But still, as syntaxes converge, and support improves, I think generated gradients can make a compelling tool in our collective belts. The tasteful design is, of course, entirely up to you.
So have fun, and get gradientin’.",2010,Ethan Marcotte,ethanmarcotte,2010-12-22T00:00:00+00:00,https://24ways.org/2010/everything-you-wanted-to-know-about-gradients/,code
239,Using the WebFont Loader to Make Browsers Behave the Same,"Web fonts give us designers a whole new typographic palette with which to work. However, browsers handle the loading of web fonts in different ways, and this can lead to inconsistent user experiences.
Safari, Chrome and Internet Explorer leave a blank space in place of the styled text while the web font is loading. Opera and Firefox show text with the default font which switches over when the web font has loaded, resulting in the so-called Flash of Unstyled Text (aka FOUT). Some people prefer Safari’s approach as it eliminates FOUT, others think the Firefox way is more appropriate as content can be read whilst fonts download. Whatever your preference, the WebFont Loader can make all browsers behave the same way.
The WebFont Loader is a JavaScript library that gives you extra control over font loading. It was co-developed by Google and Typekit, and released as open source. The WebFont Loader works with most web font services as well as with self-hosted fonts.
The WebFont Loader tells you when the following events happen as a browser downloads web fonts (or loads them from cache):
when fonts start to download (‘loading’)
when fonts finish loading (‘active’)
if fonts fail to load (‘inactive’)
If your web page requires more than one font, the WebFont Loader will trigger events for individual fonts, and for all the fonts as a whole. This means you can find out when any single font has loaded, and when all the fonts have loaded (or failed to do so).
The WebFont Loader notifies you of these events in two ways: by applying special CSS classes when each event happens; and by firing JavaScript events. For our purposes, we’ll be using just the CSS classes.
Implementing the WebFont Loader
As stated above, the WebFont Loader works with most web font services as well as with self-hosted fonts.
Self-hosted fonts
To use the WebFont Loader when you are hosting the font files on your own server, paste the following code into your web page:
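<script src=""http://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js""></script>
<script>
WebFont.load({
custom: {
families: ['Font Family Name', 'Another Font Family'],
urls: ['http://yourwebsite.com/styles.css']
}
});
</script>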
Replace Font Family Name and Another Font Family with a comma-separated list of the font families you want to check against, and replace http://yourwebsite.com/styles.css with the URL of the style sheet where your @font-face rules reside.
Fontdeck
Assuming you have added some fonts to a website project in Fontdeck, use the aforementioned code for self-hosted solutions and replace http://yourwebsite.com/styles.css with the URL of the <link> tag given in your Fontdeck website settings page. It will look something like http://f.fontdeck.com/s/css/xxxx/domain/nnnn.css.
Typekit
Typekit’s JavaScript-based implementation incorporates the WebFont Loader events by default, so you won’t need to include any WebFont Loader code.
Making all browsers behave like Safari
To make Firefox and Opera work in the same way as WebKit browsers (Safari, Chrome, etc.) and Internet Explorer, and thus minimise FOUT, you need to hide the text while the fonts are loading.
While fonts are loading, the WebFont Loader adds a class of wf-loading to the <html> element. Once the fonts have loaded, the wf-loading class is removed and replaced with a class of wf-active (or wf-inactive if all of the fonts failed to load). This means you can style elements on the page while the fonts are loading and then style them differently when the fonts have finished loading.
So, let’s say the text you need to hide while fonts are loading is contained in all paragraphs and top-level headings. By writing the following style rule into your CSS, you can hide the text while the fonts are loading:
.wf-loading h1, .wf-loading p {
visibility:hidden;
}
Because the wf-loading class is removed once the fonts have loaded, the visibility:hidden rule will stop being applied, and the text is revealed. You can see this in action on this simple example page.
That works nicely across the board, but the situation is slightly more complicated. WebKit doesn’t wait for all fonts to load before displaying text: it displays text elements as soon as the relevant font is loaded.
To emulate WebKit more accurately, we need to know when individual fonts have loaded, and apply styles accordingly. Fortunately, as mentioned earlier, the WebFont Loader has events for individual fonts too.
When a specific font is loading, a class of the form wf-fontfamilyname-n4-loading is applied. Assuming headings and paragraphs are styled in different fonts, we can make our CSS more specific as follows:
.wf-fontfamilyname-n4-loading h1,
.wf-anotherfontfamily-n4-loading p {
visibility:hidden;
}
Note that the font family name is transformed to lower case, with all spaces removed. The n4 is a shorthand for the weight and style of the font family. In most circumstances you’ll use n4 but refer to the WebFont Loader documentation for exceptions.
You can see it in action on this Safari example page (you’ll probably need to disable your cache to see any change occur).
Making all browsers behave like Firefox
To make WebKit browsers and Internet Explorer work like Firefox and Opera, you need to explicitly show text while the fonts are loading. In order to make this happen, you need to specify a font family which is not a web font while the fonts load, like this:
.wf-fontfamilyname-n4-loading h1 {
font-family: 'arial narrow', sans-serif;
}
.wf-anotherfontfamily-n4-loading p {
font-family: arial, sans-serif;
}
You can see this in action on the Firefox example page (again you’ll probably need to disable your cache to see any change occur).
And there’s more
That’s just the start of what can be done with the WebFont Loader. More areas to explore would be tweaking font sizes to reduce the impact of reflowing text and to better cater for very narrow fonts. By using the JavaScript events much more can be achieved too, such as fading in text as the fonts load.",2010,Richard Rutter,richardrutter,2010-12-02T00:00:00+00:00,https://24ways.org/2010/using-the-webfont-loader-to-make-browsers-behave-the-same/,code
240,My CSS Wish List,"I love Christmas. I love walking around the streets of London, looking at the beautifully decorated windows, seeing the shiny lights that hang above Oxford Street and listening to Christmas songs.
I’m not going to lie though. Not only do I like buying presents, I love receiving them too. I remember making long lists that I would send to Father Christmas with all of the Lego sets I wanted to get. I knew I could only get one a year, but I would spend days writing the perfect list.
The years have gone by, but I still enjoy making wish lists. And I’ll tell you a little secret: my mum still asks me to send her my Christmas list every year.
This time I’ve made my CSS wish list. As before, I’d be happy with just one present.
Before I begin…
… this list includes:
things that don’t exist in the CSS specification (if they do, please let me know in the comments – I may have missed them);
others that are in the spec, but it’s incomplete or lacks use cases and examples (which usually means that properties haven’t been implemented by even the most recent browsers).
Like with any other wish list, the further down I go, the more unrealistic my expectations – but that doesn’t mean I can’t wish. Some of the things we wouldn’t have thought possible a few years ago have been implemented and our wishes fulfilled (think multiple backgrounds, gradients and transformations, for example).
The list
Cross-browser implementation of font-size-adjust
When one of the fall-back fonts from your font stack is used, rather than the preferred (first) one, you can retain the aspect ratio by using this very useful property. It is incredibly helpful when the fall-back fonts are smaller or larger than the initial one, which can make layouts look less polished.
What font-size-adjust does is scale the font-size of the fall-back fonts so that the x-height of the preferred font is preserved. Here’s a simple example:
p {
font-family: Calibri, ""Lucida Sans"", Verdana, sans-serif;
font-size-adjust: 0.47;
}
In this case, if the user doesn’t have Calibri installed, both Lucida Sans and Verdana will keep Calibri’s aspect ratio, based on the font’s x-height. This property is a personal favourite and one I keep pointing to.
Firefox supported this property from version three. So far, it’s the only browser that does. Fontdeck provides the font-size-adjust value along with its fonts, and has a handy tool for calculating it.
More control over overflowing text
The text-overflow property lets you control text that overflows its container. The most common use for it is to show an ellipsis to indicate that there is more text than what is shown. To be able to use it, the container should have overflow set to something other than visible, and white-space: nowrap:
div {
white-space: nowrap;
width: 100%;
overflow: hidden;
text-overflow: ellipsis;
}
This, however, only works for blocks of text on a single line. In the wish list of many CSS authors (and in mine) is a way of defining text-overflow: ellipsis on a block of multiple text lines. Opera has taken the first step and added support for the -o-ellipsis-lastline property, which can be used instead of ellipsis. This property is not part of the CSS3 spec, but we could certainly make good use of it if it were…
WebKit has -webkit-line-clamp to specify how many lines to show before cutting with an ellipsis, but support is patchy at best and there is no control over where the ellipsis shows in the text. Many people have spent time wrangling JavaScript to do this for us, but the methods used are very processor intensive, and introduce a JavaScript dependency.
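For the record, the WebKit approach looks something like this (the .teaser class is purely illustrative, and all four declarations are needed for the ellipsis to appear):
.teaser {
display: -webkit-box;
-webkit-box-orient: vertical;
-webkit-line-clamp: 3;
overflow: hidden;
}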
Indentation and hanging punctuation properties
You might notice a trend here: almost half of the items in this list relate to typography. The lack of fine-grained control over typographical detail is a general concern among designers and CSS authors. Indentation and hanging punctuation fall into this category.
The CSS3 specification introduces two new possible values for the text-indent property: each-line; and hanging. each-line would indent the first line of the block container and each line after a forced line break; hanging would invert which lines are affected by the indentation.
The proposed hanging-punctuation property would allow us to specify whether opening and closing brackets and quotes should hang outside the edge of the first and last lines. The specification is still incomplete, though, and asks for more examples and use cases.
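If the drafts settle as they stand, usage might look something like this (entirely speculative, since no browser supports either property yet):
p {
text-indent: 1.5em each-line;
}
blockquote {
hanging-punctuation: first last;
}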
Text alignment and hyphenation properties
Following the typographic trend of this list, I’d like to add better control over text alignment and hyphenation properties. The CSS3 module on Generated Content for Paged Media already specifies five new hyphenation-related properties (namely: hyphenate-dictionary; hyphenate-before and hyphenate-after; hyphenate-lines; and hyphenate-character), but it is still being developed and lacks examples.
In the text alignment realm, the new text-align-last property allows you to define how the last line of a block (or a line just before a forced break) is aligned, if your text is set to justify. Its value can be: start; end; left; right; center; and justify. The text-justify property should also allow you to have more control over text set to text-align: justify but, for now, only Internet Explorer supports this.
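In practice, the two would be paired something like this (a sketch; as mentioned, only Internet Explorer currently honours text-justify):
p {
text-align: justify;
text-align-last: center;
text-justify: inter-word;
}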
calc()
This is probably my favourite item in the list: the calc() function. This function is part of the CSS3 Values and Units module, but it has only been implemented by Firefox (4.0). To take advantage of it now you need to use the Mozilla vendor-prefixed version, -moz-calc().
Imagine you have a fluid two-column layout where the sidebar column has a fixed width of 240 pixels, and the main content area fills the rest of the width available. This is how you could create that using -moz-calc():
#main {
width: -moz-calc(100% - 240px);
}
Can you imagine how many hacks and headaches we could avoid were this function available in more browsers? Transitions and animations are really nice and lovely but, for me, it’s the ability to do the things that calc() allows you to that deserves the spotlight and to be pushed for implementation.
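And since only Firefox 4 understands it for now, a sensible pattern might be to declare a static approximation first, so other browsers still get a usable layout (the 75% here is purely illustrative):
#main {
width: 75%;
width: -moz-calc(100% - 240px);
}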
Selector grouping with -moz-any()
The -moz-any() selector grouping has been introduced by Mozilla but it’s not part of any CSS specification (yet?); it’s currently only available on Firefox 4.
This would be especially useful with the way HTML5 outlines documents, where we can have any number of variations of several levels of headings within numerous types of containers (think sections within articles within sections…).
Here is a quick example (copied from the Mozilla blog post introducing the selector) of how -moz-any() works. Instead of writing:
section section h1, section article h1, section aside h1,
section nav h1, article section h1, article article h1,
article aside h1, article nav h1, aside section h1,
aside article h1, aside aside h1, aside nav h1, nav section h1,
nav article h1, nav aside h1, nav nav h1 {
font-size: 24px;
}
You could simply write:
:-moz-any(section, article, aside, nav)
:-moz-any(section, article, aside, nav) h1 {
font-size: 24px;
}
Nice, huh?
More control over styling form elements
Some are of the opinion that form elements shouldn’t be styled at all, since a user might not recognise them as such if they don’t match the operating system’s controls. I partially agree: I’d rather put the choice in the hands of designers and expect them to be capable of deciding whether their particular design hampers or improves usability.
I would say the same idea applies to font-face: while some fear designers might go crazy and litter their web pages with dozens of different fonts, most welcome the freedom to use something other than Arial or Verdana.
There will always be someone who will take this freedom too far, but it would be useful if we could, for example, style the default Opera date picker, or Safari’s slider control (think star movie ratings, for example).
Parent selector
I don’t think there is one CSS author out there who has never come across a case where he or she wished there was a parent selector. There have been many suggestions as to how this could work, but a variation of the child selector is usually the most popular:
article < h1 {
…
}
One can dream…
Flexible box layout
The Flexible Box Layout Module sounds a bit like magic: it introduces a new box model to CSS, allowing you to distribute and order boxes inside other boxes, and determine how the available space is shared.
Two of my favourite features of this new box model are:
the ability to redistribute boxes in a different order from the markup
the ability to create flexible layouts, where boxes shrink (or expand) to fill the available space
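The first of those is handled by the box-ordinal-group property; a sketch using the draft, unprefixed property names might look like this:
#links {
box-ordinal-group: 1;
}
#main {
box-ordinal-group: 2; /* displayed after #links, regardless of markup order */
}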
Let’s take a quick look at the second case. Imagine you have a three-column layout, where the first column takes up twice as much horizontal space as the other two:
With the flexible box model, you could specify it like this:
body {
display: box;
box-orient: horizontal;
}
#main {
box-flex: 2;
}
#links {
box-flex: 1;
}
aside {
box-flex: 1;
}
If you decide to add a fourth column to this layout, there is no need to recalculate units or percentages; it’s as easy as that.
Browser support for this property is still in its early stages (Firefox and WebKit need their vendor prefixes), but we should start to see it being gradually introduced as more attention is drawn to it (I’m looking at you…). You can read a more comprehensive write-up about this property on the Mozilla developer blog.
It’s easy to understand why it’s harder to start playing with this module than with things like animations or other more decorative properties, which don’t really break your layouts when users don’t see them. But it’s important that we do, even if only in very experimental projects.
Nested selectors
Anyone who has never wished they could do something like the following in CSS, cast the first stone:
article {
h1 { font-size: 1.2em; }
ul { margin-bottom: 1.2em; }
}
Even though it can easily turn into a specificity nightmare and promote redundancy in your style sheets (if you abuse it), it’s easy to see how nested selectors could be useful. CSS compilers such as Less or Sass let you do this already, but not everyone wants or can use these compilers in their projects.
Every wish list has an item that could easily be dropped. In my case, I would say this is one that I would ditch first – it’s the least useful, and also the one that could cause more maintenance problems. But it could be nice.
Implementation of the ::marker pseudo-element
The CSS Lists module introduces the ::marker pseudo-element, that allows you to create custom list item markers. When an element’s display property is set to list-item, this pseudo-element is created.
Using the ::marker pseudo-element you could create something like the following:
Footnote 1: Both John Locke and his father, Anthony Cooper, are
named after 17th- and 18th-century English philosophers; the real
Anthony Cooper was educated as a boy by the real John Locke.
Footnote 2: Parts of the plane were used as percussion instruments
and can be heard in the soundtrack.
where the footnote marker is generated by the following CSS:
li::marker {
content: ""Footnote "" counter(notes) "":"";
text-align: left;
width: 12em;
}
li {
counter-increment: notes;
}
You can read more about how to use counters in CSS in my article from last year.
Bear in mind that the CSS Lists module is still a Working Draft and is listed as “Low priority”. I did say this wish list would start to grow more unrealistic closer to the end…
Variables
The sight of the word ‘variables’ may make some web designers shy away, but when you think of them applied to things such as repeated colours in your stylesheets, it’s easy to see how having variables available in CSS could be useful.
Think of a website where the main brand colour is applied to elements like the main text, headings, section backgrounds, borders, and so on. In a particularly large website, where the colour is repeated countless times in the CSS and where it’s important to keep the colour consistent, using variables would be ideal (some big websites are already doing this by using server-side technology).
Again, Less and Sass allow you to use variables in your CSS but, again, not everyone can (or wants to) use these.
If you are using Less, you could, for instance, set the font-family value in one variable, and simply call that variable later in the code, instead of repeating the complete font stack, like so:
@fontFamily: Calibri, ""Lucida Grande"", ""Lucida Sans Unicode"", Helvetica, Arial, sans-serif;
body {
font-family: @fontFamily;
}
Other features of these CSS compilers might also be useful, like the ability to ‘call’ a property value from another selector (accessors):
header {
background: #000000;
}
footer {
background: header['background'];
}
or the ability to define functions (with arguments), saving you from writing large blocks of code when you need to write something like, for example, a CSS gradient:
.gradient (@start:"""", @end:"""") {
background: -webkit-gradient(linear, left top, left bottom, from(@start), to(@end));
background: -moz-linear-gradient(-90deg,@start,@end);
}
button {
.gradient(#D0D0D0,#9F9F9F);
}
Standardised comments
Each CSS author has his or her own style for commenting their style sheets. While this isn’t a massive problem on smaller projects, where maybe only one person will edit the CSS, in larger scale projects, where dozens of hands touch the code, it would be nice to start seeing a more standardised way of commenting.
One attempt at creating a standard for CSS comments is CSSDOC, an adaptation of Javadoc (a documentation generator that extracts comments from Java source code into HTML). CSSDOC uses ‘DocBlocks’, a term borrowed from the phpDocumentor Project. A DocBlock is a human- and machine-readable block of data which has the following structure:
/**
* Short description
*
* Long description (this can have multiple lines and contain tags)
*
* @tags (optional)
*/
CSSDOC includes a standard for documenting bug fixes and hacks, colours, versioning and copyright information, amongst other important bits of data.
I know this isn’t a CSS feature request per se; rather, it’s just me pointing you at something that is usually overlooked but that could contribute towards keeping style sheets easier to maintain and to hand over to new developers.
Final notes
I understand that if even some of these were implemented in browsers now, it would be a long time until all vendors were up to speed. But if we don’t talk about them and experiment with what’s available, then it will definitely never happen.
Why haven’t I mentioned better browser support for existing CSS3 properties? Because that would be the same as adding chocolate to your Christmas wish list – you don’t need to ask, everyone knows you want it.
The list could go on. There are dozens of other things I would love to see integrated in CSS or further developed. These are my personal favourites: some might be less useful than others, but I’ve wished for all of them at some point.
Part of the research I did while writing this article was asking some friends what they would add to their lists; other than a couple of items I already had in mine, everything else was different. I’m sure your list would be different too. So tell me, what’s on your CSS wish list?",2010,Inayaili de León Persson,inayailideleon,2010-12-03T00:00:00+00:00,https://24ways.org/2010/my-css-wish-list/,code
265,Designing for Perfection,"Hello, 24 ways readers. I hope you’re having a nice run up to Christmas. This holiday season I thought I’d share a few things with you that have been particularly meaningful in my work over the last year or so. They may not make you wet your santa pants with new-idea-excitement, but in the context of 24 ways I think they may serve as a nice lesson and a useful seasonal reminder going into the New Year. Enjoy!
Story
Despite being a largely scruffy individual for most of my life, I had some interesting experiences regarding kitchen tidiness during my third year at university.
As a kid, my room had always been pretty tidy, and as a teenager I used to enjoy reordering my CDs regularly (by artist, label, colour of spine – you get the picture); but by the time I was twenty I’d left most of these traits behind me, mainly due to a fear that I was turning into my mother. The one remaining anally retentive part of me that remained however, lived in the kitchen. For some reason, I couldn’t let all the pots and crockery be strewn across the surfaces after cooking. I didn’t care if they were washed up or not, I just needed them tidied. The surfaces needed to be continually free of grated cheese, breadcrumbs and ketchup spills. Also, the sink always needed to be clear. Always. Even a lone teabag, discarded casually into the sink hours previously, would give me what I used to refer to as “kitchen rage”.
Whilst this behaviour didn’t cause any direct conflicts, it did often create weirdness. We would be happily enjoying a few pre-night out beverages (Jack Daniels and Red Bull – nice) when I’d notice the state of the kitchen following our round of customized 49p Tesco pizzas. Kitchen rage would ensue, and I’d have to blitz the kitchen, which usually resulted in me having to catch everyone up at the bar afterwards.
One evening as we were just about to go out, I was stood there, in front of the shithole that was our kitchen with the intention of cleaning it all up, when a realization popped into my head. In hindsight, it was a pretty obvious one, but it went along the lines of “What the fuck are you doing? Sort your life out”. I sodded the washing up, rolled out with my friends, and had a badass evening of partying.
After this point, whenever I got the urge to clean the kitchen, I repeated that same realization in my head. My tidy kitchen obsession strived for a level of perfection that my housemates just didn’t share, so it was ultimately pointless. It didn’t make me feel that good, either; it was like having a cigarette after months of restraint – initially joyous but soon slightly shameful.
Lesson
Now, around seven years later, I’m a designer on the web and my life is chaotic. It features no planning for significant events, no day-to-day routine or structure, no thought about anything remotely long-term, and I like to think I do precisely what I want. It seems my days at striving for something ordered and tidy, in most parts of my life, are long gone.
For much of my time as a designer, though, it’s been a different story. I relished industry-standard terms such as ‘pixel perfection’ and ‘polished PSDs’, taking them into my stride as I strove to design everything that was put on my plate perfectly. Even down to grids and guidelines, all design elements would be painstakingly aligned to a five-pixel grid. There were no seven-pixel margins or gutters to be found in my design work, that’s for sure. I put too much pride and, inadvertently, too much ego into my work. Things took too long to create, and because of the amount of effort put into the work, significant changes, based on client feedback for example, were more difficult to stomach.
Over the last eighteen months I’ve made a conscious effort to change the way I approach designing for the web. Working on applications has probably helped with this; they seem to have a more organic development than rigid content-based websites. Mostly though, a realization similar to my kitchen rage one came about when I had to make significant changes to a painstakingly crafted Photoshop document I had created. The changes shouldn’t have been difficult or time-consuming to implement, but they were turning out to be. One day, frustrated with how long it was taking, the refrain “What the fuck are you doing? Sort your life out” again entered my head. I blazed the rest of the work, not rushing or doing scruffy work, but just not adhering to the insane levels of perfection I had previously set for myself. When the changes were presented, everything went down swimmingly. The client in this case (and I’d argue most cases) cared more about the ideas than the perfect way in which they had been implemented. I had taken myself and my ego out of the creative side of the work, and it had been easier to succeed.
Argument
I know many other designers who work on the web share such aspirations to perfection. I think it’s a common part of the designer DNA, but I’m not sure it really has a place when designing for the web.
First, there’s the environment. The landscape in which we work is continually shifting and evolving. The inherent imperfection of the medium itself makes attempts to create perfect work for it redundant. Whether you consider it a positive or negative point, the products we make are never complete. They’re always scaling and changing.
Like many aspects of web design, this striving for perfection in our design work is a way of thinking borrowed from other design industries where it’s more suited. A physical product cannot be as easily altered or developed after it has been manufactured, so the need to achieve perfection when designing is more apt.
Designers who can relate to anything I’ve talked about can easily let go of that anal retentiveness if given the right reasons to do so. Striving for perfection isn’t a bad thing, but I simply don’t think it can be achieved in such a fast-moving, unique industry. I think design for the web works better when it begins with quick and simple, followed by iteration and polish over time.
To let go of ego and to publish something that you’re not completely happy with is perhaps the most difficult part of the job for designers like us, but it’s followed by a satisfaction of knowing your product is alive and breathing, whereas others (possibly even competitors) may still be sitting in Photoshop, agonizing over whether a margin should be twenty or forty pixels.
I keep telling myself to stop sitting on those two hundred ideas that are all half-finished. Publish them, clean them up and iterate over time. I’ve been telling myself this for months and, hopefully, writing this article will give me the kick in the arse I need. Hopefully, it will also give someone else the same kick.",2011,Greg Wood,gregwood,2011-12-17T00:00:00+00:00,https://24ways.org/2011/designing-for-perfection/,process
266,Collaborative Development for a Responsively Designed Web,"In responsive web design we’ve found a technique that allows us to design for the web as a medium in its own right: one that presents a fluid, adaptable and ever changing canvas.
Until this point, we gave little thought to the environment in which users will experience our work, caring more about the aggregate than the individual. The applications we use encourage rigid layouts, whilst linear processes focus on clients signing off paintings of websites that have little regard for behaviour and interactions. The handover of pristine, pixel-perfect creations to developers isn’t dissimilar to farting before exiting a crowded lift, leaving front-end developers scratching their heads as they fill in the inevitable gaps. If you haven’t already, I recommend reading Drew’s checklist of things to consider before handing over a design.
Somehow, this broken methodology has survived for the last fifteen years or so. Even the advent of web standards has had little impact. Now, as we face an onslaught of different devices, the true universality of the web can no longer be ignored.
Responsive web design is just the thin end of the wedge. Largely concerned with layout, its underlying philosophy could ignite a trend towards interfaces that adapt to any number of different variables: input methods, bandwidth availability, user preference – you name it!
With such adaptability, a collaborative and iterative process is required. Ethan Marcotte, who worked with the team behind the responsive redesign of the Boston Globe website, talked about such an approach in his book:
The responsive projects I’ve worked on have had a lot of success combining design and development into one hybrid phase, bringing the two teams into one highly collaborative group.
Whilst their process still involved the creation of desktop-centric mock-ups, these were presented to the entire team early on, where questions about how pages might adapt and behave at different sizes were asked. Mock-ups were quickly converted into HTML prototypes, meaning further decisions could be based on usage rather than guesswork (and endless hours spent in Photoshop).
Regardless of the exact process, it’s clear that the relationship between our two disciplines is more crucial than ever. Yet, historically, it seems a wedge has been driven between us – perhaps a result of segregation and waterfall-style processes – resulting in animosity.
So how can we improve this relationship? Ultimately, we’ll need to adapt, but even within existing workflows we can start to overlap. Simply adjusting our attitude can effect change, and bring design and development teams closer together.
Good design is constant contact.
Mark Otto
The way we work needs to be more open and inclusive. For example, ensuring members of the development team attend initial kick-off meetings and design workshops will not only ensure technical concerns are raised, but mean that those implementing our designs better understand the problems we’re trying to solve.
It can also be useful at this stage to explain how you work and the sort of deliverables you expect to produce. This will give developers a chance to make recommendations on how these can be optimized for their own needs.
You may even find opportunities to share the load. On a recent project I worked on, our development partners offered to produce the interactive prototypes needed for user testing. This allowed us to concentrate on refining the experience, whilst they were able to get a head start on building the product.
While developers should be involved at the beginning of projects, it’s also important that designers are able to review and contribute to a product as it’s being built. Any handover should be done in person, and ideally you’ll have a day set aside to do so. Having additional budget available for follow-up design reviews is also recommended. Learning how to use version control tools like Subversion or Git will allow you to work within the same environment as developers, and allow you to contribute code or graphic assets directly to a project if needed.
Don’t underestimate the benefits of designer and developer sitting next to each other. Subtle nuances can be explored far more easily than if they were conducted over email or phone. As Ethan writes, “‘Design’ is the means, not merely the end; the path we walk over the course of a project, the choices we make”.
It’s from collaboration like this that I’ve become fond of producing visual style guides. These demonstrate typographic treatments for common markup and patterns (blockquotes, lists, pagination, basic form controls and so on). Thinking in terms of components rather than individual pages not only fits in better with how a developer will implement a site, but can also ensure your design works as a coherent whole.
Despite the amount of research and design produced, when it comes to the crunch, there will always be a need for compromise. As the old saying goes, ‘fast, cheap and good – pick two.’ It’s important that you know which pieces are crucial to a design and which areas can allow for movement. Pick your battles wisely. Having an agreed set of design principles can be useful when making such decisions, as they help everyone focus on the goals of the project.
The best compromises are reached when both sides understand the issues of the other.
Richard Rutter
Ultimately, better collaboration comes through a shared understanding of the different competencies required to build a website. Instead of viewing ourselves in terms of discrete roles, we should instead look to emphasize our range of abilities, and work with others whose skills are complementary.
Perhaps somebody who actively seeks to broaden their knowledge is the mark of a professional. Seek these people out.
The best developers I’ve worked with have a respect for design, probably having attempted to do some themselves! Having wrangled with a few MySQL databases myself, I certainly believe the obverse is true. While knowing HTML won’t necessarily make you a better designer, it will help you understand the issues being faced by a front-end developer and, more importantly, allow you to offer solutions or alternative approaches.
So take a moment to think about how you work with developers and how you could improve your relationship with them. What are you doing to ease the path towards our collaborative future?",2011,Paul Lloyd,paulrobertlloyd,2011-12-05T00:00:00+00:00,https://24ways.org/2011/collaborative-development-for-a-responsively-designed-web/,business
267,Taming Complexity,"I’m going to step into my UX trousers for this one. I wouldn’t usually wear them in public, but it’s Christmas, so there’s nothing wrong with looking silly.
Anyway, to business. Wherever I roam, I hear the familiar call for simplicity and the denouncement of complexity. I read often that the simpler something is, the more usable it will be. We understand that simple is hard to achieve, but we push for it nonetheless, convinced it will make what we build easier to use. Simple is better, right?
Well, I’ll try to explore that. Much of what follows will not be revelatory to some but, like all good lessons, I think this serves as a welcome reminder that as we live in a complex world it’s OK to sometimes reflect that complexity in the products we build.
Myths and legends
Less is more, we’ve been told, ever since master of poetic verse Robert Browning used the phrase in 1855. Well, I’ve conducted some research, and it appears he knew nothing of web design. Neither did modernist architect Ludwig Mies van der Rohe, a later pedlar of this worthy yet contradictory notion. Broad is narrow. Tall is short. Eggs are chips. See: anyone can come up with this stuff.
To paraphrase Einstein, simple doesn’t have to be simpler. In other words, simple doesn’t dictate that we remove the complexity. Complex doesn’t have to be confusing; it can be beautiful and elegant. On the web, complex can be necessary and powerful. A website that simplifies the lives of its users by offering them everything they need in one site or screen is powerful. For some, the greater the density of information, the more useful the site.
In our decision-making process, principles such as Occam’s razor (in a nutshell: simple is better than complex) are useful, but simple is for the user to determine through their initial impression and subsequent engagement. What appears simple to me or you might appear very complex to someone else, based on their own mental model or needs. We can aim to deliver simple, but they’ll be the judge.
As a designer, developer, content alchemist, user experience discombobulator, or whatever you call yourself, you’re often wrestling with a wealth of material, a huge number of features, and numerous objectives. In many cases, much of that stuff is extraneous, and goes in the dustbin. However, it can be just as likely that there’s a truckload of suggested features and content because it all needs to be there. Don’t be afraid of that weight.
In the right hands, less can indeed mean more, but it’s just as likely that less can very often lead to, well… less.
Complexity is powerful
Simple is the ability to offer a powerful experience without overwhelming the audience or inducing information anxiety. Giving them everything they need, without having them ferret off all over a site to get things done, is important.
It’s useful to ask throughout a site’s lifespan, “does the user have everything they need?” It’s so easy to let our designer egos get in the way and chop stuff out, reduce down to only the things we want to see. That benefits us in the short term, but compromises the audience long-term.
The trick is not to be afraid of complexity in itself, but to avoid creating the perception of complexity. Give a user a flight simulator and they’ll crash the plane or jump out. Give them everything they need and more, but make it feel simple, and you’re building a relationship, empowering people.
This can be achieved carefully with what some call gradual engagement, and often the sensible thing might be to unleash complexity in carefully orchestrated phases, initially setting manageable levels of engagement and interaction, gradually increasing the inherent power of the product and fostering an empowered community.
The design aesthetic
Here’s a familiar scenario: the client or project lead gets overexcited and skips most of the important decision-making, instead barrelling straight into a bout of creative direction Tourette’s. Visually, the design needs to be minimal, white, crisp, full of white space, have big buttons, and quite likely be “clean”. Of course, we all like our websites to be clean as that’s more hygienic.
But what do these words even mean, really? Early in a project they’re abstract distractions, unnecessary constraints. This premature narrowing forces us to think much more about throwing stuff out rather than acknowledging that what we’re building is complex, and many of the components perhaps necessary.
Simple is not a formula. It cannot be achieved just by using a white background, by throwing things away, or by breathing a bellowsful of air in between every element and having it all float around in space. Simple is not a design treatment. Simple is hard. Simple requires deep investigation, a thorough understanding of every aspect of a project, in line with the needs and expectations of the audience.
Recognizing this helps us empathize a little more with those most vocal of UX practitioners. They usually appreciate that our successes depend on a thorough understanding of the user’s mental models and expected outcomes. I personally still consider UX people to be web designers like the rest of us (mainly to wind them up), but they’re web designers that design every decision, and by putting the user experience at the heart of their process, they have a greater chance of finding simplicity in complexity. The visual design aesthetic — the façade — is only a part of that.
Divide and conquer
I’m currently working on an app that’s complex in architecture, and complex in ambition. We’ll be releasing in carefully orchestrated private phases, gradually introducing more complexity in line with the unavoidably complex nature of the objective, but my job is to design the whole, the complete system as it will be when it’s out of beta and beyond.
I’ve noticed that I’m not throwing much out; most of it needs to be there. Therefore, my responsibility is to consider interesting and appropriate methods of navigation and bring everything together logically.
I’m using things like smart defaults, graphical timelines and colour keys to make sense of the complexity, techniques that are sympathetic to the content. They act as familiar points of navigation and reference, yet are malleable enough to change subtly to remain relevant to the information they connect. It’s really OK to have a lot of stuff, so long as we make each component work smartly.
It’s a divide and conquer approach. By finding simplicity and logic in each content bucket, I’ve made more sense of the whole, allowing me to create key layouts where most of the simplified buckets are collated and sometimes combined, providing everything the user needs and expects in the appropriate places.
I’m also making sure I don’t reduce the app’s power. I need to reflect the scale of opportunity, and provide access to or knowledge of the more advanced tools and features for everyone: a window into what they can do and how they can help. I know it’s the minority who will be actively building the content, but the power is in providing those opportunities for all.
Much of this will be familiar to the responsible practitioners who build websites for government, local authorities, utility companies, newspapers, magazines, banking, and we-sell-everything-ever-made online shops. Across the web, there are sites and tools that thrive on complexity.
Alas, the majority of such sites have done little to make navigation intuitive, or empower audiences. Where we can make a difference is by striving to make our UIs feel simple, look wonderful, not intimidating — even if they’re mind-meltingly complex behind that façade.
Embrace, empathize and tame
So, there are loads of ways to exploit complexity, and make it seem simple. I’ve hinted at some methods above, and we’ve already looked at gradual engagement as a way to make sense of complexity, so that’s a big thumbs-up for a release cycle that increases audience power.
Prior to each and every release, it’s also useful to rest on the finished thing for a while and use it yourself, even if you’re itching to release. ‘Ready’ often isn’t, and ‘finished’ never is, and the more time you spend browsing around the sites you build, the more you learn what to question, where to add, or subtract. It’s definitely worth building in some contingency time for sitting on your work, so to speak.
One thing I always do is squint at my layouts. By squinting, I get a sort of abstract idea of the overall composition, and general feel for the thing. It makes my face look stupid, but helps me see how various buckets fit together, and how simple or complex the site feels overall.
I mentioned the need to put our design egos to one side and not throw out anything useful, and I think that’s vital. I’m a big believer in economy, reduction, and removing the extraneous, but I’m usually referring to decoration, bells and whistles, and fluff. I wouldn’t ever advocate the complete removal of powerful content from a project roadmap.
Above all, don’t fear complexity. Embrace and tame it. Work hard to empathize with audience needs, and you can create elegant, playful, risky, surprising, emotive, delightful, and ultimately simple things.",2011,Simon Collison,simoncollison,2011-12-21T00:00:00+00:00,https://24ways.org/2011/taming-complexity/,ux
268,Getting the Most Out of Google Analytics,"Something a bit different for today’s 24 ways article. For starters, I’m not a designer or a developer. I’m an evil man who sells things to people on the internet. Second, this article will likely be a little more nebulous than you’re used to, since it covers quite a number of points in a relatively short space.
This isn’t going to be the complete Google Analytics Conversion University IQ course compressed into a single article, obviously. What it will be, however, is a primer on setting up and using Google Analytics in real life, and a great deal of what I’ve learned using Google Analytics nearly every working day for the past six (crikey!) years.
Also, to be clear, I’ll be referencing new Google Analytics here; old Google Analytics is for loooosers (and those who want reliable e-commerce conversion data per site search term, natch).
You may have been running your Analytics account for several years now, dipping in and out, checking traffic levels, seeing what’s popular… and that’s about it. Google Analytics provides so much more than that, but the number of reports available can often intimidate users, and documentation and case studies on their use are minimal at best.
Let’s start! Setting up your Analytics profile
Before we plough on, I just want to run through a quick checklist that some basic settings have been enabled for your profile. If you haven’t clicked it, click the big cog on the top-right of Google Analytics and we’ll have a poke about.
If you have an e-commerce site, e-commerce tracking has been enabled
If your site has a search function, site search tracking has been enabled.
Query string parameters that you do not want tracked as separate pages have been excluded (for example, any parameters needed for your platform to function, otherwise you’ll get multiple entries for the same page appearing in your reports)
Filters have been enabled on your main profile to exclude your office IP address and any IPs of people who frequently access the site for work purposes. In decent numbers they tend to throw data off a tad.
You may also find the need to set up multiple profiles prefiltered for specific audience segments. For example, at Lovehoney we have seventeen separate profiles that allow me quick access to certain countries, devices and traffic sources without having to segment first. You’ll also find load time for any complex reports much improved. Use the same filter screen as above to set up a series of profiles that only include, say, mobile visits, or UK visitors, so you can quickly analyse important segments.
Matt, what’s a segment?
A segment is a subsection of your visitor base, which you define and then call on in reports to see specific data for that subsection. For example, in this report I’ve defined two segments, the first for IE6 users and the second for IE7.
Segments are easily created by clicking the Advanced Segments tabs at the top of any report and clicking +New Custom Segment.
What does your site do?
Understanding the goals of your site is an oft-covered topic, but it’s necessary not just to form a better understanding of your business and to prioritize your time. Understanding what you wish visitors to do on your site translates well into a goal-driven analytics package like Google Analytics.
Every site exists essentially to sell something, either financially through e-commerce, or to sell an idea or impart information, get people to download a CV or enquire about a service, or to sell space on that website to advertisers. If the site did not provide a positive benefit to its owners, it would not have a reason for being.
Once you have understood the reason why you have a site, you can map that reason on to one of the three goal types Google Analytics provides.
E-commerce
This conversion type registers transactions as part of a sales process which requires a monetary value, what products have been bought, an SKU (stock keeping unit), affiliation (if you’re then attributing the sale to a third party or franchise) and so on.
The benefit of e-commerce tracking is not only that it assigns non-arbitrary monetary value to the behaviour of visitors on your site, and lets you see ancillary costs such as shipping, but that it exposes product-level information: which products are preferred from various channels, popular categories, and so on.
However, I find the e-commerce tracking options also useful for non-e-commerce sites. For example, if you’re offering downloads or subscriptions and having an email address or user’s details is worth something to you, you can set up e-commerce tracking to understand how much value your site is bringing. For example, an email address might be worth 20p to you, but if it also includes a name it’s worth 50p. A contact telephone number is worth £2, and so on.
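As a sketch of that idea using the asynchronous ga.js syntax of the time (the order ID, names and values here are all invented), a captured lead could be recorded as a tiny transaction:

_gaq.push(['_addTrans',
  'lead-1234',        // invented order ID
  'Newsletter',       // affiliation
  '0.50',             // total: 50p for an email address with a name
  '', '', '', '', ''  // no tax, shipping or location data
]);
_gaq.push(['_addItem',
  'lead-1234',                // same order ID as above
  'EMAIL-NAME',               // invented SKU
  'Email address with name',  // product name
  'Leads',                    // category
  '0.50',                     // unit price
  '1'                         // quantity
]);
_gaq.push(['_trackTrans']); // send the transaction to Google Analytics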
Page goals
Page goals, unsurprisingly, track a visit to a page (often with a sequence of pages leading up to that page). This is what’s referred to as a goal funnel, and is generally used to track how visitors behave in a multistep checkout.
Interestingly, the page doesn’t have to actually exist. For example, if you have a single page checkout, you can register virtual page views using trackPageview() when a visitor clicks into a particular section of the checkout or other form. If your site is geared towards getting someone to a particular page, but where there isn’t a transaction (for example, a subscription page) this is for you.
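For a single-page checkout, that might look something like this (a sketch; the virtual path is invented):

// fire a virtual page view when the payment section is revealed
_gaq.push(['_trackPageview', '/checkout/payment-details']);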
There are also behavioural goals, such as time on site and number of pages viewed, which are geared towards sites that make money from advertising.
But, going back to the page goals, these can be abstracted using regular expressions, meaning that you can define a funnel based on page type rather than having to set individual folders.
In this example, I’ve created regexes for the main page types on my site, so I can create a wide funnel that captures visitors from where they enter through to checkout.
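As a sketch (the URL patterns here are invented), funnel steps defined by page type might look like:

Step 1 – Category page: ^/category/.+
Step 2 – Product page: ^/product/.+
Step 3 – Basket: ^/basket
Goal – Order confirmation: ^/checkout/complete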
Events
Event tracking registers a predefined event, such as playing a video, or some interaction that can trigger JavaScript, such as a Tweet This button. Events can then be triggered using the trackEvent() call. If you want someone to complete watching a video, you would code your player to fire trackEvent() upon completion.
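Such a call might look like this (a sketch; the category, action and label are whatever makes sense for your reports):

// record that a visitor watched a video to the end
_gaq.push(['_trackEvent', 'Videos', 'Completed', 'Product demo']);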
While I don’t use events as goals, I use events elsewhere to see how well a video play aids to conversion. This not only helps me justify the additional spend on creating video content, but also quickly highlights which videos are underperforming as sales tools.
What a visitor can tell you
Now you have some proper goals set up, we can start to see how changes in content (on-site and external) affect those goals.
Ultimately, when a visitor comes to your site, they bring information with them:
where they came from (a search engine – including: keyword searched for; a referral; direct; affiliate; or ad campaign)
demographics (country; whether they’re new or returning, within thirty days)
technical information (browser; screen size; device; bandwidth)
site-specific information (landing page; next click; previous values assigned to them as custom variables*)
* A note about custom variables. There’s no hope in hell that I can cover custom variables in this article. Go research them. Custom variables are the single best way to hack Google Analytics and bend it to your will. Custom variables allow you to record anything you want about a visitor, which that visitor will then carry around with them between visits. It’s also great for plugging other services into Google Analytics (as shown by the marvelous way Visual Website Optimizer allows you to track and segment tests within the GA interface). Just make sure not to breach the terms of service, eh?
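For reference, setting one looks like this (a sketch; the slot, name and value are invented):

// store 'Repeat Customer' in slot 1 with visitor-level scope (1),
// so the value persists across this visitor's future sessions
_gaq.push(['_setCustomVar', 1, 'Customer Type', 'Repeat Customer', 1]);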
CSI your website
Police procedural TV shows are all the same: the investigators are called to a crime and come across a clue; there’s then an autopsy; new evidence leads them to a new location; they find a new clue; they put two and two together; they solve the mystery.
This is your life now. Exciting!
So, now you’re gathering a wealth of information about what sort of people visit your site, what they do when they’re there, and what eventually gets them to drive value to you. It’s now your job to investigate all these little clues to see which types of people drive the most value, and what you can change to improve it.
Maybe not that exciting.
However, Google Analytics comes pre-armed with extensive reports for you to delve into. As an e-commerce guy (as opposed to a page goal guy) my day pretty much follows the pattern below.
Look at e-commerce conversion rate by traffic source compared to the same day in the previous week and previous month. As ours is an e-commerce site, we have weekly and monthly trends. A big spike on Sundays and Mondays, and payday towards the end of the month is always good; on the third week of a month there tends to be a lull. Spend time letting your Google Analytics data brew, understand your own trends and patterns, and you’ll start to get a feel for when something isn’t quite right.
Traffic Sources → Sources → All Traffic
Look at the conversion rate by landing page for any traffic source that feels significantly different to what’s expected. Check bounce rates, drill down to likely landing pages and check search keyword or referral site to see if it’s a particular subset of visitor. You can do this by clicking Secondary Dimension and choosing Keyword or Source. If it’s direct, choose Visitor Type to break down by new or returning visitor.
Content → Site Content → Landing Pages
I then tend to flip into Content Drilldown to see what the next clicks were from those landing pages, and whether they changed significantly to the date I’m comparing with. If they have, that’s usually an indicator of changed content (or its relevancy). Remember, if a bunch of people have found their way to your page via a method you’re not expecting (such as a mention on a Spanish radio station – this actually happened to me once), while the content hasn’t changed, the relevancy of it to the audience may have.
Content → Site Content → Content Drilldown
Once I have an idea of what content was consumed, and whether it was relevant to the user, I then look at the visitor specifics, such as browser or demographic data, to see again whether the change was limited to a specific subset. Site speed, for example, is normally a significant factor in bounce rate, so compare that with previous data as well.
Now, to be investigating at this level you still need a serious amount of data, in order to tell what’s a significant change or not. If you’re struggling with a small number of visitors, you might find reporting on a weekly or fortnightly basis more appropriate.
However, once you’ve looked into the basics of why changes happen to the value of your site, you’ll soon find yourself limited by the reports offered in Standard Reporting. So, it’s time to build your own. Hooray!
Custom reporting
Google Analytics provides the tools to build reports specific to the types of investigations you frequently perform.
Welcome to my world.
Custom reports are quite simple to build: first, you determine the metric you want the report to cover (number of visitors, bounce rate, conversion rate, and so on), then choose a set of dimensions that you’d like to segment the report by (say, the source of the traffic, and whether they were new or returning users). You can filter the report, including or excluding particular dimension values, and you can assign the report to any of the profiles you created earlier.
In the example below, I’ve created a report that shows me visits and conversion rate for any Google traffic that landed directly only on a product page. I can then drill down on each product page to see the complete phrases used to search. I can use this information in two ways:
I can see which products aren’t converting, which shows me where I need to work harder on merchandising.
I can give this information to my content team, showing them the actual phrases visitors used to reach our product content, helping them write better targeted product descriptions.
The possibilities here are nearly endless, but here are a few examples of reports I find useful:
Non-brand inbound search
By creating a report that shows inbound search traffic which doesn’t include your brand, you can see more clearly the behaviour of visitors most likely to be unfamiliar with your site and brand values, without having to rely on the clumsy new or returning demographic data.
Traffic/conversion/sales by hour
This is pure stats porn, but actually more useful than real-time data. By seeing this data broken down at an hourly level, you can not only compare the current day to previous days, but also see the best performing times for email broadcasts and tweets.
Visits, load time, conversion and sales by page and browser
Page speed can often kill conversion rates, but it’s difficult to prove the value of focusing on speed in monetary terms. Having this report to hand helps me drive Operation Greenbelt, our effort to get into the sub-1.5 second band in Google Webmaster Tools.
Useful things you can’t do in custom reporting
If you have a search function on your website, then Conversion Rate and Products Bought by Site Search Term is an incredibly useful report that allows you to measure the effectiveness of your site’s search engine at returning products and content related to the search term used. By including the products actually bought by visitors who searched for each term, you can use this information to better searchandise these results, escalating high propensity and high value products to the top of the results.
However, it’s not possible to get this information out of new Google Analytics.
Try it, select the following in the report builder:
Metrics: total unique searches; e-commerce or goal conversion rate
Dimensions: search term; product
You’ll see that the data returned is a little nonsensical, though a 2,000% conversion rate would be nice. However, you can get more accurate information using advanced segments. By creating individual segments to define users who have searched for a particular term, you can run the sales performance and product performance reports as normal. It’s laborious, but it teaches a good lesson: data that seems inaccessible can normally be found another way!
Reporting infrastructure
Now that you have a series of reports that you can refer to on a daily or weekly basis, it’s time to put together a regular reporting infrastructure.
Even if you’re not reporting to someone, having a set of key performance indicators that you can use to see how your performance is improving over time allows you to set yourself business goals on a monthly and annual basis.
For my own reporting, I take some high-level metrics (such as visitors, conversion rate and average order value), and segment them by traffic source and, separately, landing page. These statistics I record weekly and report:
current week compared with previous week
same week previous year (if available)
4 week average
13 week average
52 week average (if available)
This takes into account weekly, monthly, seasonal and annual trends, and gives you a much clearer view of your performance.
Getting data in other ways
If you’re using Google Analytics frequently, with any large site you’ll come to a couple of conclusions:
Doing any kind of practical comparative analysis is unwieldy.
Boy, Google Analytics is slow!
As you work with bigger datasets and put together more complex queries, you’ll see the loading graphic more than you’ll see actual data. So when you reach that level, there are ways to bypass the Google Analytics interface altogether, and get data into your own spreadsheet application for manipulation.
Data Feed Query Explorer
If you just want to pull down some quick statistics but still use complex filters and exotic metric and dimension combinations, the Data Feed Query Explorer is the quickest way of doing so. Authenticate with your Google Analytics account, select a profile, and you can start selecting metrics and dimensions to be generated in a handy, selectable tabulated format.
Google Analytics API
If you’re feeling clever, you can bypass having to copy and paste data by pulling it directly into Excel, Google Docs or your own application using the Google Analytics API. There are several scripts and plugins available to do this. I use the Automate Analytics Google Docs code (there’s also a paid version that simplifies setup and creates some handy reports for you).
New shiny things
Well, now that that’s over, I can show you some cool stuff. Well, at least it’s cool to me. Google Analytics is being constantly improved and new functionality is introduced nearly every month. Here are a couple of my favourites.
Multichannel attribution
Not every visitor converts on your site on the first visit. They may not even do so on the second visit, or third. If they convert on the fourth visit, but each time they visit they do so via a different channel (for example, Search PPC, Search Organic, Direct, Email), which channel do you attribute the conversion to? The last channel, or the first? Dilemma!
Google now has a Multichannel Attribution report, available in the Conversion category, which shows how each channel assists in converting, the overlap between channels, and where in the process that channel was important.
For example, you may have analysed your blog traffic from Twitter and become disheartened that not many people were subscribing after visiting from Twitter links, but instead your high-value subscribers were coming from natural search. On the face of it, you’d spend less time tweeting, but a multichannel report may tell you that visitors first arrived via a Twitter link and didn’t subscribe, but then came back later after searching for your blog name on Google, after which they did. Don’t pack Twitter in yet!
Visitor and goal flow
Visitor and goal flow are amazing reports that help you visualize the flow of traffic through your site and, ultimately, into your checkout funnel or similar goal path. Flow reports are perfect for understanding drop-off points in your process, as well as what the big draws are on each page.
Previously, if you wanted to visualize this data you had to set up several abstracted microgoals and chain them together in custom reports. Frankly, it was a pain in the arse and burned through your precious and limited goal allocation.
Visitor flow bypasses all that and produces the report in an interactive flow diagram. While it doesn’t show you the holy grail of conversion likelihood by each path, you can segment visitor flow so that you can see very specifically how different segments of your visitor base behave.
Go play with it now!",2011,Matt Curry,mattcurry,2011-12-18T00:00:00+00:00,https://24ways.org/2011/getting-the-most-out-of-google-analytics/,business
269,Adaptive Images for Responsive Designs… Again,"When I was asked to write an article for 24 ways I jumped at the chance, as I’d been wanting to write about some fun hacks for responsive images and related parsing behaviours. My heart sank a little when Matt Wilcox beat me to the subject, but it floated back up when I realized I disagreed with his method and still had something to write about.
So, Matt Wilcox, if that is your real name (and I’m pretty sure it is), I disagree. I see your dirty server-based hack and raise you an even dirtier client-side hack. Evil laugh, etc., etc.
You guys can stomach yet another article about responsive design, right? Right?
Half the room gets up to leave
Whoa, whoa… OK, I’ll cut to the chase…
TL;DR
In a previous episode, we were introduced to Debbie and her responsive cat poetry page. Well, now she’s added some reviews of cat videos and some images of cats. Check out her new page and have a play around with the browser window. At smaller widths, the images change and the design responds. The benefits of this method are:
it’s entirely client-side
images are still shown to users without JavaScript
your media queries stay in your CSS file
no repetition of image URLs
no extra downloads per image
it’s fast enough to work on resize
it’s pure filth
What’s wrong with the server-side solution?
Responsive design is a client-side issue; involving the server creates a boatload of problems.
It sets a cookie at the top of the page which is read in subsequent requests. However, the cookie is not guaranteed to be set in time for requests on the same page, so the server may see an old value or no value at all.
Serving images via server scripts is much slower than plain old static hosting.
The URL can only cache with vary: cookie, so the cache breaks when the cookie changes, even if the change is unrelated. Also, far-future caching is out for devices that can change width.
It depends on detecting screen width, which is rather messy on mobile devices.
Responding to things other than screen width (such as DPI) means packing more information into the cookie, and a more complicated script at the top of each page.
So, why isn’t this straightforward on the client?
Client-side solutions to the problem involve JavaScript testing user agent properties (such as screen width), looping through some images and setting their URLs accordingly. However, by the time JavaScript has sprung into action, the original image source has already started downloading. If you change the source of an image via JavaScript, you’re setting off yet another request.
Images are downloaded as soon as their DOM node is created. They don’t need to be visible, they don’t need to be in the document.
new Image().src = url
The above will start an HTTP request for url. This is a handy trick for quick requests and preloading, but also shows the browser’s eagerness to download images.
Here’s an example of that in action. Check out the network tab in Web Inspector (other non-WebKit development aids are available) to see the image requests.
Because of this, some client-side solutions look like this:
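A sketch of that pattern (the data attribute name is mine; implementations vary):

<img src="t.gif" data-src="cat-large.jpg" alt="Cat">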
where t.gif is a 1×1px tiny transparent GIF.
This results in no images if JavaScript isn’t available. Dealing with the absence of JavaScript is still important, even on mobile. I was recently asked to make a website work on an old Blackberry 9000. I was able to get most of the way there by preventing that OS parsing any JavaScript, and that was only possible because the site didn’t depend on it.
We need to delay loading images for JavaScript users, but ensure they load for users without JavaScript. How can we conditionally parse markup depending on JavaScript support?
Oh yeah! <noscript>!
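An image inside <noscript> does the trick (the URL is a stand-in):

<noscript>
  <img src="cat-large.jpg" alt="Cat">
</noscript>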
Whoa! First spacer GIFs and now <noscript>? This really is the future! The image above will only load for users without JavaScript support. Now all we need to do is use JavaScript to get the textContent of the <noscript> element, then we can alter the image source before handing it to the DOM for parsing.
Here’s an example of that working … unless you’re using Internet Explorer.
Internet Explorer doesn’t retain the content of <noscript> elements. As soon as it’s parsed, it considers it an empty element. FANKS INTERNET EXPLORER. This is why some solutions do this:
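Here’s a sketch of that pattern (the URL is a stand-in; note it now appears twice):

<noscript data-src="cat-large.jpg">
  <img src="cat-large.jpg" alt="Cat">
</noscript>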
so JavaScript can still get at the URL via the data-src attribute. However, repeating stuff isn’t great. Surely we can do better than that.
A dirty, dirty hack
Thankfully, I managed to come up with a solution, and by me, I mean someone cleverer than me. Pornel’s solution uses <noscript>, but surely we don’t need that.
Now, before we look at this, I can’t stress how dirty it is. It’s so dirty that if you’ve seen it, schools will refuse to employ you.
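It goes something like this (a sketch of the idea; the image URL is a stand-in, and the comment-opener string is split so the parser doesn’t mistake the script’s contents for a real comment):

<script>document.write('<' + '!--')</script>
<img src="cat-mobile.jpg" alt="Cat">
<!-- -->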
Phwoar! Dirty, isn’t it? I’ll stop for a moment, so you can go have a wash.
Done? Excellent.
With this, the image is wrapped in a comment only for users with JavaScript. Without JavaScript, we get the image. Unlike the example above, we can get the text content of the comment pretty easily.
Hurrah! But wait… Some browsers are sometimes downloading the image, even with JavaScript enabled. Notably Firefox. Huh?
Images are downloaded in comments now? What?
No. What we’re seeing here is the effect of speculative parsing. Here’s what’s happening:
While the browser is parsing the script, it parses the rest of the document. This is usually a good thing, as it can download subsequent images and scripts without waiting for the script to complete. The problem here is we create an unbalanced tree.
An unbalanced tree, yesterday.
In this case, the browser must throw away its speculative parsing and reparse from the end of the <script> element.
And there we have it. We can now prevent images loading for users with JavaScript, but we can still get at the markup.
We’re still creating an unbalanced tree and there’s a performance impact in that. However, the parser won’t have got far by the time our script executes, so the impact is small. Unbalanced trees are more of a concern for external scripts; a lot of parsing can happen by the time the script has downloaded and parsed.
Using dirtiness to create responsive images
Now all we need to do is give each of our dirty scripts a class name, then JavaScript can pick them up, grab the markup from the comment and decide what to do with the images.
This technique isn’t exclusively useful for responsive images. It could also be used to delay images loading until they’ve scrolled into view. But to do that you’ll need a bulletproof way of detecting when elements are in view. This involves getting the height of the viewport, which is extremely unreliable on mobile devices.
Here’s a hastily thrown together example showing how it can be used for responsive images.
I adjust the end of the image URLs conditionally depending on the result of media queries. This is done on page load, and on resize.
I’m using regular expressions to alter the URLs. Using regex to deal with HTML is usually a sign of insanity, but parsing it with the browser’s DOM parser would trigger the download of images before we change the URLs. My implementation currently requires double-quoted image URLs, because I’m lazy. Wanna fight about it?
Media querying via JavaScript
Jeremy Keith used document.documentElement.clientWidth in his example, which is great as a proof of concept, but unfortunately is rather unreliable across mobile devices.
Thankfully, standards are coming to the rescue with window.matchMedia, which lets us provide a media query string and get a boolean result. There’s even a great polyfill for browsers that don’t support it (as long as they support media queries in CSS).
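For completeness, usage looks like this (the query string is just an example):

// true if the viewport is at least 640px wide
if (window.matchMedia('(min-width: 640px)').matches) {
  // swap in larger image sources here
}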
I didn’t go with that for three reasons:
I’d like to keep media queries in the CSS file, if possible.
I wanted something a little lighter to keep things speedy while resizing.
It’s just not dirty enough yet.
To make things ultra-dirty, I add a test element to the page with a specific class, let’s say media-test. Then, I control the width of it using media queries in my CSS file:
@media all and (min-width: 640px) {
.media-test {
width: 1px;
}
}
@media all and (min-width: 926px) {
.media-test {
width: 2px;
}
}
The JavaScript part changes the URL suffix depending on the width of media-test. I’m using a min-width media query, but you can use others such as pixel-ratio to detect high DPI displays. Basically, it’s a hacky way for CSS to set a value that can be picked up by JavaScript. It means the bit that signals changes to the images sits with the rest of the responsive code, without duplication.
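Reading that value back is straightforward (a sketch, assuming a single .media-test element given a base width of 0 in the stylesheet):

// offsetWidth will be 0, 1 or 2 depending on which media query matches
var test = document.querySelector('.media-test');
var size = test.offsetWidth;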
Also, phwoar, dirty!
The API
I threw a script together to demonstrate the technique. I’m not particularly attached to it, I’m not even sure I like it, but here’s the API:
responsiveGallery({
// Class name of dirty script element(s) to target
scriptClass: 'dirty-gallery-script',
// Class name for our test element
testClass: 'dirty-gallery-test',
// The initial suffix of URLs, the bit that changes.
initialSuffix: '-mobile.jpg',
// A map of suffixes, for each width of 'dirty-gallery-test'
suffixes: {
'1': '-desktop.jpg',
'2': '-large-desktop.jpg',
'3': '-mobile-retina.jpg'
}
});
The API can cover individual images or multiple galleries at once. In the example I gave at the start of the article I make two calls to the API, one for both galleries, and one for the single image above the video reviews. They’re separate calls because they respond slightly differently.
The future
Hopefully, we’ll get a proper solution to this soon. My favourite suggestion is the <picture> element that Bruce Lawson covers.
Unfortunately, we’re nowhere near that yet, and I’d still rather have my media queries stay in CSS. Perhaps the <source> elements could be skipped if they’re display:none; then they could have class names and be controlled via CSS. Sigh.
Well, I’m tired of writing now and I’m sure you’re tired of reading. I realize what I’ve presented is a yet another dirty hack to the responsive image problem (perhaps the dirtiest?) and may be completely unfeasible in professional situations. But isn’t that the true spirit of Christmas?
No.",2011,Jake Archibald,jakearchibald,2011-12-08T00:00:00+00:00,https://24ways.org/2011/adaptive-images-for-responsive-designs-again/,ux
270,From Side Project to Not So Side Project,"In the last article I wrote for 24 ways, back in 2009, I enthused about the benefits of having a pet project, suggesting that we should all have at least one so that we could collaborate with our friends, escape our day jobs, fulfil our own needs, help others out, raise our profiles, make money, and — most importantly — have fun. I don’t think I need to offer any further persuasions: it seems that designers and developers are launching their own pet projects left, right and centre. This makes me very happy.
However, there still seems to be something of a disconnect between having a side project and turning it into something that is moderately successful; in particular, the challenge of making enough money to sustain the project and perhaps even elevating it from the sidelines so that it becomes something not so on the side at all.
Before we even begin this, let’s spend a moment talking about money, also known as…
Evil, nasty, filthy money
Over the last couple of years, I’ve started referring to myself as an accidental businessman. I say accidental because my view of the typical businessman is someone who is driven by money, and I usually can’t stand such people. Those who are motivated by profit, obsessed with growth, and take an active interest in the world’s financial systems don’t tend to be folks with whom I share a beer, unless it’s to pour it over them. Especially if they’re wearing pinstriped suits.
That said, we all want to make money, don’t we? And most of us want to make a relatively decent amount, too. I don’t think there’s any harm in admitting that, is there? Hello, I’m Elliot and I’m a capitalist.
The key is making money from doing what we love. For most people I know in our community, we’ve already achieved that — I’m hard-pressed to think of anyone who isn’t extremely passionate about working in our industry and I think it’s one of the most positive, unifying benefits we enjoy as a group of like-minded people — but side projects usually arise from another kind of passion: a passion for something other than what we do as our day jobs. Perhaps it’s because your clients are driving you mental and you need a break; perhaps it’s because you want to create something that is truly your own; perhaps it’s because you’re sick of seeing your online work disappear so fast and you want to try your hand at print in order to make a more permanent mark.
The three factors I listed there led me to create 8 Faces, a printed magazine about typography that started as a side project and is now a very significant part of my yearly output and income.
Like many things that prove fruitful, 8 Faces’ success was something of an accident, too. For a start, the magazine was never meant to be profitable; its only purpose at all was to scratch my own itch. Then, after the first issue took off and I realized how much time I needed to spend in order to make the next one decent, it became clear that I would have to cover more than just the production costs: I’d have to take time out from client work as well. Doing this meant I’d have to earn some money. Probably not enough to equate to the exact amount of time lost when I could be doing client work (not that you could ever describe time as being lost when you work on something you love), but enough to survive; for me to feel that I was getting paid while doing all of the work that 8 Faces entailed. The answer was to raise money through partnerships with some cool companies who were happy to be associated with my little project.
A sustainable business model
Business model! I can’t believe I just wrote those words! But a business model is really just a loose plan for how not to screw up. And all that stuff I wrote in the paragraph above about partnering with companies so I could get some money in while I put the magazine together? Well, that’s my business model.
If you’re making any product that has some sort of production cost, whether that’s physical print run expenses or up-front dev work to get an app built, covering those costs before you even release your product means that you’ll be in profit from the first copy you sell. This is no small point: production expenses are pretty much the only cost you’ll ever need to recoup, so having them covered before you launch anything is pretty much the best possible position in which you could place yourself. Happy days, as Jamie Oliver would say.
Obtaining these initial funds through partnerships has another benefit. Sure, it’s a form of advertising but, done right, your partners can potentially provide you with great content, too. In the case of 8 Faces, the ads look as nice as the rest of the magazine, and a couple of our partners also provide proper articles: genuinely meaningful, relevant, reader-pleasing articles at that. You’d be amazed at how many companies are willing to become partners and, as the old adage goes, if you don’t ask, you don’t get.
With profit comes responsibility
Don’t forget about the responsibility you have to your audience if you engage in a relationship with a partner or any type of advertiser: although I may have freely admitted my capitalist leanings, I’m still essentially a hairy hippy, and I feel that any partnership should be good for me as a publisher, good for the partner and — most importantly — good for the reader. Really, the key word here is relevance, and that’s where 99.9% of advertising fails abysmally.
(99.9% is not a scientific figure, but you know what I’m on about.)
The main grey area when a side project becomes profitable is how you share that profit, partly because — in my opinion, at least — the transition from non-profitable side project to relatively successful source of income can be a little blurred. Asking for help for nothing when there’s no money to be had is pretty normal, but sometimes it’s easy to get used to that free help even once you start making money. I believe the best approach is to ask for help with the promise that it will always be rewarded as soon as there’s money available. (Oh, god: this sounds like one of those nightmarish client proposals. It’s not, honest.) If you’re making something cool, people won’t mind helping out while you find your feet.
Event organizers often seem to think that they’re exempt from sharing profit. Perhaps that’s because many of them think they’re doing the speakers a favour rather than the other way around (that’s a whole separate article), but it’s shocking to see how many people seem to think they can profit from content-makers — speakers, for example — and yet not pay for that content. It was for this reason that Keir and I paid all of our speakers for our Insites: The Tour side project, which we ran back in July. We probably could’ve got away without paying them, especially as the gig was so informal, but it was the right thing to do.
In conclusion: money as a by-product
Let’s conclude by returning to the slightly problematic nature of money, because it’s the pivot on which your side project’s success can swing, regardless of whether you measure success by monetary gain. I would argue that success has nothing to do with profit — it’s about you being able to spend the time you want on the project. Unfortunately, that is almost always linked to money: money to pay yourself while you work on your dream idea; money to pay for more servers when your web app hits the big time; money to pay for efforts to get the word out there. The key, then, is to judge success on your own terms, and seek to generate as much money as you see fit, whether it’s purely to cover your running costs, or enough to buy a small country. There’s nothing wrong with profit, as long as you’re ethical about it. (Pro tip: if you’ve earned enough to buy a small country, you’ve probably been unethical along the way.)
The point at which individuals and companies fail — in the moral sense, for sure, but often in the competitive sense, too — is when money is the primary motivation. It should never be the primary motivation. If you’re not passionate enough about something to do it as an unprofitable side project, you shouldn’t be doing it at all.
Earning money should be a by-product of doing what you love. And who doesn’t want to spend their life doing what they love?",2011,Elliot Jay Stocks,elliotjaystocks,2011-12-22T00:00:00+00:00,https://24ways.org/2011/from-side-project-to-not-so-side-project/,business
271,Creating Custom Font Stacks with Unicode-Range,"Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We’ve all used it — either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts.
If you’re like me, however, you’ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec.
One such property — the unicode-range descriptor — sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways.
Unicode-range
The unicode-range descriptor is designed to help when using fonts that don’t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers.
@font-face {
font-family: BBCBengali;
src: url(fonts/BBCBengali.ttf) format("opentype");
unicode-range: U+00-FF;
}
In this example, the font is to be used for characters in the range of U+00 to U+FF which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to ÿ at U+FF – the extent of the Basic Latin and Latin-1 Supplement character ranges.
By adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts.
When I say that it’s possible to specify the range of characters the font covers, that’s true, but what you’re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together.
The best available ampersand
A few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a <span> element with a class applied:

<span class="amp">&amp;</span>
A CSS rule can then be written to select the <span> and apply a different font:
span.amp {
font-family: Baskerville, Palatino, "Book Antiqua", serif;
}
That’s a perfectly serviceable technique, but the drawbacks are clear — you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn’t always possible when working with a CMS.
Perhaps we could do this with unicode-range.
A better best available ampersand
The Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so:
@font-face {
font-family: 'Ampersand';
src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
unicode-range: U+26;
}
What we’ve done here is specify a new family called Ampersand and created a font stack for it with the user’s locally installed copies of Baskerville, Palatino or Book Antiqua. We’ve then limited it to a single character range — the ampersand. Of course, those don’t need to be local fonts — they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life.
We can then use that new family in a regular font stack.
h1 {
font-family: Ampersand, Arial, sans-serif;
}
With this in place, any h1 elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn’t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial.
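Better still, no extra markup is needed at all. An ordinary heading (the text of this hypothetical example is mine) picks up the fancy ampersand automatically:
<h1>Fish &amp; Chips</h1>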
You didn’t think it was that easy, did you?
Oh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all.
If you’re familiar with how CSS works when it comes to unsupported properties, you’ll know that if a browser encounters a property it doesn’t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius — if the browser can’t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect.
Less perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters — the whole range. If you’re using a fancy font for flamboyant ampersands, you probably don’t want that applied to all your text if unicode-range isn’t supported. That would be bad. Really bad.
Ensuring good fallbacks
As ever, the trick is to make sure that there’s a sensible fallback in place if a browser doesn’t have support for whatever technology you’re trying to use. This is where being a super nerd about understanding the spec you’re working with really pays off.
We can make use of the rules of the CSS cascade to make sure that if unicode-range isn’t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren’t implemented.
@font-face {
font-family: 'Ampersand';
src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
unicode-range: U+26;
}
@font-face {
font-family: 'Ampersand';
src: local('Arial');
}
In theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don’t have support, the second rule should just supersede the first, setting the font to Arial.
Unfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That’s both unexpected and unfortunate. Bad Firefox. On your rug.
If that doesn’t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That’s really what we’re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we’re never going to use in our page? Then it wouldn’t affect the outcome for browsers that do support ranges.
@font-face {
font-family: 'Ampersand';
src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
unicode-range: U+26;
}
@font-face {
/* Ampersand fallback font */
font-family: 'Ampersand';
src: local('Arial');
unicode-range: U+270C;
}
By specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we’re not using in any of our pages — U+270C.
So we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect!
That obscure character, my friends, is what Unicode defines as the VICTORY HAND.
✌
So, how can we use this?
Ampersands are a neat trick, and one that works well in browsers that support ranges, but that’s not really the point of all this. Styling ampersands is fun, but it only really scratches the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting.
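For instance, here’s a sketch of a numerals-only stack (Georgia is just an illustrative choice here, picked for its old-style figures):
@font-face {
font-family: Numerals;
src: local('Georgia');
unicode-range: U+30-39; /* the digits 0-9 */
}
body {
font-family: Numerals, Helvetica, sans-serif;
}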
How do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end.
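If you only need the odd code point in a hurry, a browser’s JavaScript console can also do the conversion:
'&'.charCodeAt(0).toString(16); // returns '26', the hex code point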
Of course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you’re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you’ll have to use your own best judgement based on your needs and audience.
One thing to keep in mind is that if you’re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn’t be downloaded if none of the characters within the Unicode range are present in a given page.
As ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.",2011,Drew McLellan,drewmclellan,2011-12-01T00:00:00+00:00,https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/,code
272,Crafting the Front-end,"Much has been spoken and written recently about the virtues of craftsmanship in the context of web design and development. It seems that we as fabricators of the web are finally tiring of seeking out parallels between ourselves and architects, and are turning instead to the fabled specialist artisans.
Identifying oneself as a craftsman or craftswoman (let’s just say craftsperson from here onward) will likely be a trend of early 2012. In this pre-emptive strike, I’d like to expound on this movement as I feel it pertains to front-end development, and encourage care and understanding of the true qualities of craftsmanship (craftspersonship).
The core values
I’ll begin by defining craftspersonship. What distinguishes a craftsperson from a technician? Dictionaries tend to define a craftsperson as one who possesses great skill in a chosen field. The badge of a craftsperson for me, though, is a very special label that should be revered and used sparingly, only where it is truly deserved. A genuine craftsperson encompasses a few other key traits, far beyond raw skill, each of which must be learned and mastered.
A craftsperson has:
An appreciation of good work, in both the work of others and their own. And not just good as in ‘hey, that’s pretty neat’, I mean a goodness like a shining purity – the kind of good that feels right when you see it.
A belief in quality at every level: every facet of the craftsperson’s product is as crucial as any other, without exception, even those normally hidden from view.
Vision: an ability to visualize their path ahead, pre-empting the obstacles that may be encountered to plan a route around them.
A preference for simplicity: an almost Bauhausesque devotion to undecorated functionality, with no unjustifiable parts included.
Sincerity: producing work that speaks directly to its purpose with flawless clarity.
Only when you become a custodian of such values in your work can you consider calling yourself a craftsperson. Now let’s take a look at some steps we front-end developers can take on our journey of enlightenment toward craftspersonhood.
Speaking of the craftsman’s journey, be sure to watch out for the video of The Standardistas’ stellar talk at the Build 2011 conference titled The Journey, which should be online sometime soon.
Building your own toolbox
My grandfather was a carpenter and trained as a young apprentice under a master. After observing and practising the many foundation theories, principles and techniques of carpentry, he was tasked with creating his own set of woodworking tools, which he would use and maintain throughout his career. By going through the process of having to create his own tools, he would be connected at the most direct level with every piece of wood he touched, his tools being his own creations and extensions of his own skilled hands. The depth of his knowledge of these tools must have surpassed the intricate as he fathered, used, cleaned and repaired them, day in and day out over many years.
And so it should be, ideally, with all crafts. We must understand our tools right down to the most fundamental level. I firmly believe that a level of true craftsmanship cannot be reached while there exists a layer that remains not wholly understood between a creator and his canvas. Of course, our tools as front-end developers are somewhat more complex than those of other crafts – it may seem reasonable to require that a carpenter create his or her own set of chisels, but somewhat less so to ask a front-end developer to code their own CSS preprocessor, or design their own computer.
However, it is still vitally important that you understand how your tools work. This is particularly critical when it comes to things like preprocessors, libraries and frameworks which aim to save you time by automating common processes and functions. For the most part, anything that saves you time is a Good Thing™ but it cannot be stressed enough that using tools like these in earnest should be avoided until you understand exactly what they are doing for you (and, to an extent, how they are doing it).
In particular, you must understand any drawbacks to using your tools, and any shortcuts they may be taking on your behalf. I’m not suggesting that you steer clear of paid work until you’ve studied each of jQuery’s 9,266 lines of JavaScript source code but, all levity aside, it will further you on your journey to look at interesting or relevant bits of jQuery, and any other libraries you might want to use. Such libraries often directly link to corresponding sections of their source code on sites like GitHub from their official documentation. Better yet, they’re almost always written in high level languages (easy to read), so there’s no excuse not to don your pith helmet and go on something of an exploration. Any kind of tangential learning like this will drive you further toward becoming a true craftsperson, so keep an open mind and always be ready to step out of your comfort zone.
Downtime and tool honing
With any craft, it is essential to keep your tools in good condition, and a good idea to stay up-to-date with the latest equipment. This is especially true on the web, which, as we like to tell anyone who is still awake more than a minute after asking what it is that we do, advances at a phenomenal pace. A tool or technique that could be considered best practice this week might be the subject of haughty derision in a comment thread within six months.
I have little doubt that you already spend a chunk of time each day keeping up with the latest material from our industry’s finest Interblogs and Twittertubes, but do you honestly put aside time to collect bookmarks and code snippets from things you read into a slowly evolving toolbox? At @media in 2009, Simon Collison delivered a candid talk on his ‘Ultimate Package’. Those of us who didn’t flee the room anticipating a newfound and unwelcome intimacy with the contents of his trousers were shown how he maintained his own toolkit – a collection of files and folders all set up and ready to go for a new project. By maintaining a toolkit in this way, he has consistency across projects and a dependable base upon which to learn and improve.
The assembly and maintenance of such a personalized and familiar toolkit is probably as close as we will get to emulating the tool making stage of more traditional craft trades. Keep a master copy of your toolkit somewhere safe, making copies of it for new projects. When you learn of a way in which part of it can be improved, make changes to the master copy.
Simplicity through modularity
I believe that the user interfaces of all web applications should be thought of as being made up primarily of modular components. Modules in this context are patterns in design that appear repeatedly throughout the app. These can be small collections of elements, like a user profile summary box (profile picture, username, meta data), as well as atomic elements such as headings and list items.
Well-crafted front-end architectures have the ability to support this kind of repeating pattern as modules, with as close to no repetition of CSS (or JavaScript) as possible, and as close to no variations in HTML between instances as possible.
One of the most fundamental and well known tenets of software engineering is the DRY rule – don’t repeat yourself. It requires that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system.”
As craftspeople, we must hold this rule dear and apply it to the modules we have identified in our site designs. The moment you commit a second style definition for a module, the quality of your output (the front-end code) takes a huge hit. There should only ever be one base style definition for each distinct module or component. Keep these in a separate, sacred place in your CSS. I use a _modules.scss Sass include file, imported near the top of my main CSS files.
Be sure, of course, to avoid making changes to this file lightly, as the smallest adjustment can affect multiple pages (hint: keep a structure list of which modules are used on which pages). Avoid the inevitable temptation to duplicate code late in the project. Sticking to this rule becomes more important the more complex the codebase becomes.
If you can stick to this rule, using sensible class names and consistent HTML, you can reach a joyous, self-fulfilling plateau stage in each project where you are assembling each interface from your own set of carefully crafted building blocks.
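As a hedged sketch of that arrangement (the module and class names here are invented for illustration), it might look like this:
/* near the top of your main Sass file */
@import 'modules';

/* inside _modules.scss: one authoritative base definition per module */
.profile-summary {
overflow: hidden;
}
.profile-summary .avatar {
float: left;
margin-right: 12px;
}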
Old school markup
Let’s take a step back. Before we fret about creating a divinely pure modular CSS framework, we need to know the site’s design and what it is made of. The best way to gain this knowledge is to go old school. Print out every comp, mockup, wireframe, sketch or whatever you have. If there are sections of pages that are hidden until some user action takes place, or if the page has multiple states, be sure that you have everything that could become visible to the user on paper.
Once you have your wedge of paper designs, lay out all the pages on the floor, or stick them to the wall if you can, arranging them logically according to the site hierarchy, by user journey, or whatever guidelines make most sense to you. Once you have the site laid out before you, study it for a while, familiarizing yourself with every part of every interface. This will eliminate nasty surprises late in the project when you realize you’ve duplicated something, or left an interface on the drawing board altogether.
Now that you know the site like it’s your best friend, get out your pens or pencils of choice and attack it. Mark it up like there’s no tomorrow. Pretend you’re a spy trying to identify communications from an enemy network hiding their messages in newspapers. Look for patterns and similarities, drawing circles around them. These are your modules. Start also highlighting the differences between each instance of these modules, working out which is the most basic or common type that will become the base definition from which all other representations are extended.
This simple but empowering exercise will equip you for your task of actually crafting, instead of just building, the front-end. Without the knowledge gained from this kind of research phase, you will be blundering forward, improvising as best you can, but ultimately making quality-compromising mistakes that could have been avoided.
For more on this theme, read Anna Debenham’s Front-end Style Guides which recommends a similar process, and the sublime idea of extending this into a guide to refer to during development and beyond.
Design homogeneity
Moving forward again, you now have your modules defined and things are looking good. I mentioned that many instances of these modules will carry minor differences. These differences must be given significant thinking time, and discussion time with your designer(s).
It should be common knowledge by now that successful software projects are not the product of distinct design and build phases with little or no bidirectional feedback. The crucial nature of the designer-developer relationship has been covered in depth this year by Paul Robert Lloyd, and a joint effort from both teams throughout the project lifecycle is pivotal to your ability to craft and ship successful products.
This relationship comes into play when you’re well into the development of the site, and you start noticing these differences between instances of modules (they’ll start to stand out very clearly to you and your carefully regimented modular CSS system). Before you start overriding your base styles, question the differences with the designer to work out why they exist. Perhaps they are required and are important to their context, but perhaps they were oversights from earlier design revisions, or simple mistakes.
The craftsperson’s gland
As you grow towards the levels of expertise and experience where you can proudly and honestly consider yourself a craftsperson, you will find that you steadily develop what initially feels like a kind of sixth sense. I think of it more as a new hormonal gland, secreting into your bloodstream a powerful messenger chemical that can either reward or punish your brain. This gland is connected directly to your core understanding of what good quality work looks and feels like, an understanding that itself improves with experience.
This gland will make itself known to you in two ways. First, when you solve a problem in a beautifully elegant way with clean and unobtrusive code that looks good and just feels right, your craftsperson’s gland will ooze something delicious that makes your brain and soul glow from the inside out. You will beam triumphantly at the succinct lines of code on your computer display before bounding outside with a spring in your step to swim up glittering rainbows and kiss soft fluffy puppies.
The second way that you may become aware of your craftsperson’s gland, though, is somewhat less pleasurable. In an alternate reality, your parallel self is faced with the same problem, but decides to take a shortcut and get around it by some dubious means – the kind of technical method that the words hack, kludge and bodge are reserved for. As soon as you have done this, or even as you are doing it, your craftsperson’s gland will damn well let you know that you took the wrong fork in the road. As your craftsperson’s gland begins to secrete a toxic pus, you will at first become entranced into a vacant stare at the monstrous mess you are considering unleashing upon your site’s visitors, before writhing in the horrible agony of an itch that can never be scratched, and a feeling of being coated with the devil’s own deep and penetrating filth that no shower will ever cleanse.
Perhaps I exaggerate slightly, but it is no overstatement to suggest that you will find yourself being guided by proverbial angels and demons perched on opposite shoulders, or a whispering voice inside your head. If you harness this sense, sharpening it as if it were another tool in your kit and letting it guide or at least advise your decision making, you will transcend the rocky realm of random trial and error when faced with problems, and tend toward the right answers instinctively.
This gland can also empower your ability to assess your own work, becoming a judge before whom all your work is cross-examined. A good craftsperson regularly takes a step back from their work, and questions every facet of their product for its precise alignment with their core values of quality and sincerity, and even the very necessity of each component.
The wrapping
By now, you may be thinking that I take this kind of thing far too seriously, but to terrify you further, I haven’t even shared the half of it. Hopefully, though, this gives you an idea of the kind of levels of professionalism and dedication that it should take to get you on your way to becoming a craftsperson. It’s a level of accomplishment and ability toward which we all should strive, both for our personal fulfilment and the betterment of the products we use daily. I look forward to seeing your finely crafted work throughout 2012.",2011,Ben Bodien,benbodien,2011-12-24T00:00:00+00:00,https://24ways.org/2011/crafting-the-front-end/,process
273,There’s No Formula for Great Designs,"Before he combined them with fluid images and CSS3 media queries to coin responsive design, Ethan Marcotte described fluid grids — one of the most enjoyable parts of responsive design. Enjoyable that is, if you like working with math(s). But fluid grids aren’t perfect and, unless we’re careful when applying them, they can sometimes result in a design that feels disconnected.
Recapping fluid grids
If you haven’t read Ethan’s Fluid Grids, now would be a good time to do that. It centres around a simple formula for converting pixel widths to percentages:
(target ÷ context) × 100 = result
How does that work in practice? Well, take that Fireworks or Photoshop comp you’re working on (I call them static design visuals, or just visuals.) Of course, everything on that visual — column divisions, inline images, navigation elements, everything — is measured in pixels. Now:
Pick something in the visual and measure its width. That’s our target.
Take that target measurement and divide it by the width of its parent (context).
Multiply what you’ve got by 100 (shift two decimal places).
What you’re left with is a percentage width to drop into your style sheets.
For example, divide this 300px wide sidebar division by its 948px parent and then multiply by 100: your original 300px is neatly converted to 31.646%.
.content-sub {
width : 31.646%; /* 300px ÷ 948px = .31646 */ }
That formula makes it surprisingly simple for even die-hard fixed width aficionados to convert their visuals to percentage-based, fluid layouts.
It’s a handy formula for those who still design using static visuals, and downright essential for those situations where one person in an organization designs in Fireworks or Photoshop and another develops with CSS. Why?
Well, although I think that designing in a browser makes the best sense — particularly when designing for multiple devices — I’ll wager most designers still make visuals in Fireworks or Photoshop and use them to demonstrate designs, gather feedback and get sign-off. That’s OK. If you haven’t made the transition to content-out designing in a browser yet, the fluid grids formula helps you carry on pushing pixels a while longer.
You can carry on moving pixel width measurements from your visuals to your style sheets, too, in the same way you always have. You can be precise to the pixel and even apply a grid image as a CSS background to help you keep everything lined up perfectly.
Once you’re done, and the fixed width layout in the browser matches your visual, loop back through your style sheets and convert those pixels to percentages using the fluid grids formula. With very little extra work, you’ll have a fluid implementation of your fixed width layout.
The fluid grids formula is simple and incredibly effective, but not long after I started working responsively I realized that the formula shouldn’t (always) be a one-fix, set-and-forget calculation. I noticed that unless we compensate for problems it sometimes creates, the result can be a disconnected design.
Staying connected
Good design relies on connectedness, a feeling of natural balance between elements and the grid they’re placed on. Give an element greater prominence or position in a visual hierarchy and you can fundamentally alter the balance and sometimes the meaning of a design.
Different from a browser’s page zooming feature — where images, text and overall layout change size by the same ratio — fluid grids flex a layout in response to a window or device width. Columns expand and contract, and within them fluid media (images and videos) can also change size. This can be one of the most impressive demonstrations of responsive design.
But not every element within a fluid grid can change size along with the window or device width. For example, type size and leading won’t change along with a column’s width.
When columns and elements within them change width, all too easily a visual hierarchy can be broken and along with it the relationship between element sizes and the outer window or viewport. This can happen quickly if you make just one set of fluid grid calculations and use those percentages across every screen width, from smartphones through tablets and up to large desktops.
The answer? Make several sets of fluid grids calculations, each one at a significant window or device width breakpoint. Then apply those new percentages, when needed, to help keep elements in proportion and maintain balance and connectedness. Here’s how I work.
Avoiding disconnection
I’ve never been entirely happy with grid frameworks such as the 960 Grid System, so I start almost every project by creating a custom grid to inform my layout decisions. Here’s a plain version of a grid from a recent project that I’ll use as an illustration.
This project’s grid comprises 84px columns and 24px gutters. This creates an odd number of columns at common tablet and desktop widths, and allows for 300px fixed width assets — useful when I need to fit advertising into a desktop layout’s sidebar.
Showing common advertising sizes
For this project I chose three 320 and Up breakpoints above 320px and, after placing as many columns as would fit those breakpoint widths, I derived three content widths:
768px breakpoint: 7 columns, 732px content width
992px breakpoint: 9 columns, 948px content width
1,382px breakpoint: 13 columns, 1,380px content width
Here’s my grid again, this time with pixel measurements and breakpoints overlaid.
Showing pixel measurements and breakpoints
Now cast your mind back to the fluid grids calculation I made earlier. I divided a 300px element by 948px and arrived at 31.646%. For some elements it’s possible to use that percentage across all screen widths, but others will feel too small in relation to a narrower 768px and too large inside 1,380px.
To help maintain connectedness, I make a set of fluid grids calculations based on each of the content widths I established earlier. Now I can shift an element’s percentage width up or down when I switch to a new breakpoint and content width. For example:
300px is 40.984% of 732px
300px is 31.646% of 948px
300px is 21.739% of 1,380px
I’ll add all those fluid grid percentages to my grid image and save it for quick reference.
Showing percentages at all breakpoints
Then I can apply those different percentage widths to elements at each breakpoint using CSS3 media queries. For example, that sidebar division again:
/* 732px, 7-column width */
@media only screen and (min-width: 768px) {
.content-sub {
width : 40.984%; /* 300px ÷ 732px = .40984 */ }
}
/* 948px, 9-column width */
@media only screen and (min-width: 992px) {
.content-sub {
width : 31.646%; /* 300px ÷ 948px = .31646 */ }
}
/* 1380px, 13-column width */
@media only screen and (min-width: 1382px) {
.content-sub {
width : 21.739%; /* 300px ÷ 1380px = .21739 */ }
}
The number of changes you make to a layout at different breakpoints will, of course, depend on the specifics of the design you’re working on. Yes, this is additional work, but the result will be a layout that feels better balanced and within which elements remain in harmony with each other while they respond to new screen or device widths.
Putting the design in responsive web design
Until now, many of the conversations around responsive web design have been about aspects of technical implementation, rather than design. I believe we’re only beginning to understand what’s involved in designing responsively. In future, we’ll likely be making design decisions not just about proportions but also about responsive typography. We’ll also need to learn how to adapt our designs to device characteristics such as touch targets and more.
Sometimes we’ll make decisions to improve function, other times because they make a design ‘feel’ right. You’ll know when you’ve made a right decision. You’ll feel it.
After all, there really is no formula for making great designs.",2011,Andy Clarke,andyclarke,2011-12-23T00:00:00+00:00,https://24ways.org/2011/theres-no-formula-for-great-designs/,ux
274,Adaptive Images for Responsive Designs,"So you’ve been building some responsive designs and you’ve been working through your checklist of things to do:
You started with the content and designed around it, with mobile in mind first.
You’ve gone liquid and there’s nary a px value in sight; % is your weapon of choice now.
You’ve baked in a few media queries to adapt your layout and tweak your design at different window widths.
You’ve made your images scale to the container width using the fluid image technique.
You’ve even done the same for your videos using a nifty bit of JavaScript.
You’ve done a good job so pat yourself on the back. But there’s still a problem and it’s as tricky as it is important: image resolutions.
HTML has an img problem
CSS is great at adapting a website design to different window sizes – it allows you not only to tweak layout but also to send rescaled versions of the design’s images. And you want to do that because, after all, a smartphone does not need a 1,900-pixel background image[1].
HTML is less great. In the same way that you don’t want CSS background images to be larger than required, you don’t want that happening with img elements either. A smartphone only needs a small image but desktop users need a large one. Unfortunately img elements can’t adapt like CSS, so what do we do?
Well, you could just use a high resolution image and the fluid image technique would scale it down to fit the viewport; but that’s sending an image five or six times the file size that’s really needed, which makes it slow to download and unpleasant to use. Smartphones are pretty impressive devices – my ancient iPhone 3G is more powerful in every way than my first proper computer – but they’re still terribly slow in comparison to today’s desktop machines. Sending a massive image means it has to be manipulated in memory and redrawn as you scroll. You’ll find phones rapidly run out of RAM and slow to a crawl.
Well, OK. You went mobile first with everything else so why not put in mobile resolution img elements too? Because even though mobile devices are rapidly gaining share in your analytics stats, they’re still not likely to be the major share of your user base. I don’t think desktop users would be happy with pokey little mobile resolution images, do you? What we need are adaptive images.
Adaptive image techniques
There are a number of possible solutions, each with pros and cons, and it’s not as simple to find a graceful solution as you might expect.
Your first thought might be to use JavaScript to trawl through the markup and rewrite the source attribute. That’ll get you the right end result, but it’ll have done it in a way you absolutely don’t want. That’s because of the way browsers load resources. It starts to load the HTML and builds the page on-the-fly; as soon as it finds an img element it immediately asks the server for that image. After the HTML has finished loading, the JavaScript will run, change the src attribute, and then the browser will request that new image too. Not instead of, but as well as. Not good: that’s added more bloat instead of cutting it.
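To make that concrete, here’s a sketch of the naive approach being warned against (the 'large'/'small' file naming is invented for the example). By the time this loop runs, every full-size image has already been requested:
var imgs = document.getElementsByTagName('img');
for (var i = 0; i < imgs.length; i++) {
// Too late: the browser requested the original src while parsing the HTML
imgs[i].src = imgs[i].src.replace('large', 'small');
}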
Plain JavaScript is out then, which is a problem, because what other tools do we have to work with as web designers? Let’s ignore that for now and I’ll outline another issue with the concept of serving different resolution images for different window widths: a basic file management problem. To request a different image, that image has to exist on the server. How’s it going to get there? That’s not a trivial problem, especially if you have non-technical users that update content through a CMS. Let’s say you solve that – do you plan on a simple binary switch: big image|little image? Is that really efficient or future-proof? What happens if you have an archive of existing content that needs to behave this way? Can you apply such a solution to existing content or markup?
There’s a detailed round-up of potential techniques for solving the adaptive images problem over at the Cloud Four blog if you fancy a dig around exploring all the options currently available. But I’m here to show you what I think is the most flexible and easy to implement solution, so here we are.
Adaptive Images
Adaptive Images aims to mitigate most of the issues surrounding the problems of bringing the venerable img tag into the 21st century. And it works by leaving that tag completely alone – just add that desktop resolution image into the markup as you’ve been doing for years now. We’ll fix it using secret magic techniques and bottled pixie dreams. Well, fine: with one .htaccess file, one small PHP file and one line of JavaScript. But you’re killing the mystique with that kind of talk.
So, what does this solution do?
It allows img elements to adapt to the same break points you use in your media queries, giving granular control in the same way you get with your CSS.
It installs on your server in five minutes or less and after that is automatic and you don’t need to do anything.
It generates its own rescaled images on the server and doesn’t require markup changes, so you can apply it to existing web content.
If you wish, it will make all of your images go mobile-first (just in case that’s what you want if JavaScript and cookies aren’t available).
Sound good? I hope so. Here’s what you do.
Setting up and rolling out
I’ll assume you have some basic server knowledge along with that wealth of front-end wisdom exploding out of your head: that you know not to overwrite any existing .htaccess file for example, and how to set file permissions on your server. Feeling up to it? Excellent.
Download the latest version of Adaptive Images either from the website or from the GitHub repository.
Upload the included .htaccess and adaptive-images.php files into the root folder of your website.
Create a directory called ai-cache and make sure the server can write to it (CHMOD 755 should do it).
Add the following line of JavaScript into the head of your site:
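(The snippet itself was lost in transcription; going by the description under “How it works” below, it amounts to a one-line cookie writer along these lines:)
<script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script>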
That’s it, unless you want to tweak the default settings. You likely do, but essentially you’re already up and running.
How it works
Adaptive Images does a number of things depending on the scenario the script has to handle, but here’s a basic overview of what it does when you load a page running it:
A session cookie is written with the value of the visitor’s screen size in pixels.
The HTML encounters an img tag and sends a request to the server for that image. It also sends the cookie, because that’s how browsers work.
Apache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL.
There are! The .htaccess says “Hey, server! Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.”
The PHP file then does some intelligent thinking which can cover a number of scenarios, but I’ll illustrate one path that can happen:
The PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px.
The PHP has a look at the available media query sizes that were configured and decides which one matches the user’s device.
It then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there.
We’ll pretend it doesn’t – the PHP then goes to the actual requested URI and finds that the original file does exist.
It has a look to see how wide that image is. If it’s already smaller than the user’s screen width it sends it along and stops there. But, let’s pretend the image is 1,000px wide.
The PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it.
It also does a few other things when needs arise, for example:
It sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it.
It handles cases where there isn’t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size.
It compares timestamps between the source image and the generated cache image – to ensure that if the source image gets updated, the old cached file won’t be sent.
Customizing
There are a few options you can customize if you don’t like the default values (a sketch of the configuration block follows this list). By looking in the PHP’s configuration section at the top of the file, you can:
Set the resolution breakpoints to match your media query break points.
Change the name and location of the ai-cache folder.
Change the quality level any generated JPG images are saved at.
Have it perform a subtle sharpen on rescaled images to help keep detail.
Toggle whether you want it to compare the files in the cache folder with the source ones or not.
Set how long the browser should cache the images for.
Switch between a mobile-first or desktop-first approach when a cookie isn’t found.
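For reference, here’s a sketch of what that configuration block looks like. The variable names and defaults below are recalled from the project at the time of writing and may differ in your copy, so treat them as illustrative rather than definitive:
$resolutions = array(1382, 992, 768, 480); // screen widths to match your media query break points
$cache_path = 'ai-cache'; // where generated re-sized images are stored
$jpg_quality = 80; // quality of any generated JPGs, on a scale of 0 to 100
$sharpen = TRUE; // perform a subtle sharpen on re-scaled images
$watch_cache = TRUE; // compare cached images against the source and re-generate if the source is newer
$browser_cache = 60*60*24*7; // how long the browser should cache images, in seconds
$mobile_first = TRUE; // if no cookie is found, send the smallest (TRUE) or largest (FALSE) size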
More importantly, you probably want to omit a few folders from the AI behaviour. You don’t need or want it resizing the images you’re using in your CSS, for example. That’s fine – just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. Or, if you’re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it’ll only apply AI behaviour to a given list of folders.
Caveats
As I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I’ve seen so far.
This is a PHP solution
I wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don’t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence[2] and I would welcome anyone to contribute a port of the code[3].
Content delivery networks
Adaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won’t allow that to happen. Adaptive Images will not work if you’re using a CDN to deliver your website.
A minor but interesting cookie issue.
As Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested – even though the JavaScript that sets the cookie is loaded by the browser before it finds any img tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem:
The $mobile_first toggle allows you to choose what to send to a browser if a cookie isn’t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest.
Even if set to TRUE, Adaptive Images checks the User Agent String. If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE.
This means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn’t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest.
The best way to get a cookie written is to use JavaScript as I’ve explained above, because it’s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP ‘image’ to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won’t be set until after images have been requested.
The future
For today, this is a pretty good solution. It works, and as it doesn’t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you’re good to go – you’d never know AI had been there.
However, this isn’t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future.
First, we could do with browsers sending far more information about the user’s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn’t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for.
Secondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does.
I hope you’ve found this interesting and will find Adaptive Images useful.
Footnotes
[1] While I’m talking about preventing smartphones from downloading resources they don’t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files.
[2] Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository.
[3] There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images.
275,Context First: Web Strategy in Four Handy Ws,"Many, many years ago, before web design became my proper job, I trained and worked as a journalist. I studied publishing in London and spent three fun years learning how to take a few little nuggets of information and turn them into a story. I learned a bunch of stuff that has all been a huge help to my design career. Flatplanning, layout, typographic theory. All of these disciplines have since translated really well to web design, but without doubt the most useful thing I learned was how to ask difficult questions.
Pretty much from day one of journalism school they hammer into you the importance of the Five Ws. Five disarmingly simple lines of enquiry that eloquently manage to provide the meat of any decent story. And with alliteration thrown in too. For a young journo, it’s almost too good to be true.
Who? What? Where? When? Why? It seems so obvious to almost be trite but, fundamentally, any story that manages to answer those questions for the reader is doing a pretty good job. You’ll probably have noticed feeling underwhelmed by certain news pieces in the past – disappointed, like something was missing. Some irritating oversight that really lets the story down. No doubt it was one of the Ws – those innocuous little suckers are generally only noticeable by their absence, but they sure get missed when they’re not there.
Question everything
I’ve always been curious. An inveterate tinkerer with things and asker of dopey questions, often to the point of abject annoyance for anyone unfortunate enough to have ended up in my line of fire. So, naturally, the Five Ws started drifting into other areas of my life. I’d scrutinize everything, trying to justify or explain my rationale using these Ws, but I’d also find myself ripping apart the stuff that clearly couldn’t justify itself against the same criteria.
So when I started working as a designer I applied the same logic and, sure enough, the Ws pretty much mapped to the exact same needs we had for gathering requirements at the start of a project. It seemed so obvious, such a simple way to establish the purpose of a product. What was it for? Why we were making it? And, of course, who were we making it for? It forced clients to stop and think, when really what they wanted was to get going and see something shiny. Sometimes that was a tricky conversation to have, but it’s no coincidence that those who got it also understood the value of strategy and went on to have good solid products, while those that didn’t often ended up with arrogantly insular and very shiny but ultimately unsatisfying and expendable products. Empty vessels make the most noise and all that…
Content first
I was both surprised and pleased when the whole content first idea started to rear its head a couple of years back. Pleased, because without doubt it’s absolutely the right way to work. And surprised, because personally it’s always been the way I’ve done it – I wasn’t aware there was even an alternative way. Content in some form or another is the whole reason we were making the things we were making. I can’t even imagine how you’d start figuring out what a site needs to do, how it should be structured, or how it should look without a really good idea of what that content might be. It baffles me still that this was somehow news to a lot of people. What on earth were they doing? Design without purpose is just folly, surely?
It’s great to see the idea gaining momentum but, having watched it unfold, it occurred to me recently that although it’s fantastic to see a tangible shift in thinking – away from those bleak times, where making things up was somehow deemed an appropriate way to do things – there’s now a new bad guy in town.
With any buzzword solution of the moment, there’s always a catch, and it seems like some have taken the content first approach a little too literally. By which I mean, it’s literally the first thing they do. The project starts, there’s a very cursory nod towards gathering requirements, and off they go, cranking content. Writing copy, making video, commissioning illustrations.
All that’s happened is that the ‘making stuff up’ part has shifted along the line, away from layout and UI, back to the content.
Starting is too easy
I can’t remember where I first heard that phrase, but it’s a great sentiment which applies to so much of what we do on the web. The medium is so accessible and to an extent disposable; throwing things together quickly carries far less burden than in any other industry. We’re used to tweaking as we go, changing bits, iterating things into shape. The ubiquitous beta tag has become the ultimate caveat, and has made the unfinished and unpolished acceptable. Of course, that can work brilliantly in some circumstances. Occasionally, a product offers such a paradigm shift it’s beyond the level of deep planning and prelaunch finessing we’d ideally like. But, in the main, for most client sites we work on, there really is no excuse not to do things properly. To ask the tricky questions, to challenge preconceptions and really understand the Ws behind the products we’re making before we even start.
The four Ws
For product definition, only four of the five Ws really apply, although there’s a lot of discussion around the idea of when as an influencing factor. For example, the context of a user’s engagement with your product is something you can make a call on depending on the specifics of the project.
So, here’s my take on the four essential Ws. I’ll point out here that, of course, these are not intended to be autocratic dictums. Your needs may differ, your clients’ needs may differ, but these four starting points will get you pretty close to where you need to be.
Who
It’s surprising just how many projects start without a real understanding of the intended audience. Many clients think they have an idea, but without really knowing – it’s presumptive at best, and we all know what presumption is the mother of, right? Of course, we can’t know our audiences in the same way a small shop owner might know their customers. But we can at least strive to find out what type of people are likely to be using the product. I’m not talking about deep user research. That should come later.
These are the absolute basics. What’s the context for their visit? How informed are they? What’s their level of comprehension? Are they able to self-identify and relate to categories you have created? I could go on, and it changes on a per-project basis. You’ll only find this out by speaking to them, if not in person, then indirectly through surveys, questionnaires or polls. The mechanism is less important than actually reaching out and engaging with them, because without that understanding it’s impossible to start to design with any empathy.
What
Once you become deeply involved directly with a product or service, it’s notoriously difficult to see things as an outsider would. You learn the thing inside and out, you develop shortcuts and internal phraseology. Colloquialisms creep in. You become too close. So it’s no surprise when clients sometimes struggle to explain what it is their product actually does in a way that others can understand.
Often products are complex but, really, the core reasons behind someone wanting to use that product are very simple. There’s a value proposition for the customer and, if they choose to engage with it, there’s a value exchange. If that proposition or exchange isn’t transparent, then people become confused and will likely go elsewhere. Make sure both your client and you really understand what that proposition is and, in turn, what the expected exchange should be. In a nutshell: what is the intended outcome of that engagement? Often the best way to do this is strip everything back to nothing. Verbosity is rife on the web. Just because it’s easy to create content, that shouldn’t be a reason to do so. Figure out what the value proposition is and then reintroduce content elements that genuinely help explain or present that to a level that is appropriate for the audience.
Why
In advertising, they talk about the truths behind a product or service. Truths can be both tangible or abstract, but the most important part is the resonance those truths hit with a customer. In a digital product or service those truths are often exposed as benefits. Why is this what I need? Why will it work for me? Why should I trust you? The why is one of the more fluffy Ws, yet it’s such an important one to nail. Clients can get prickly when you ask them to justify the why behind their product, but it’s a fantastic way to make sure the value proposition is clear, realistic and meets with the expectations of both client and customer.
It’s our job as designers to question things: we’re not just a pair of hands for clients. Just recently I spoke to a potential client about a site for his business. I asked him why people would use his product and also why his product seemed so fractured in its direction. He couldn’t answer that question so, instead of ploughing on regardless, he went back to his directors and is now re-evaluating that business. It was awkward but he thanked me and hopefully he’ll have a better product as a result.
Where
In this instance, where is not so much a geographical thing, although in some cases that level of context may indeed become an influencing factor… The where we’re talking about here is the position of the product in relation to others around it. By looking at competitors or similar services around the one you are designing, you can start to get a sense for many of the things that are otherwise hard to pin down or have yet to be defined. For example, in a collection of sites all selling cars, where does yours fit most closely? Where are the overlaps? How are they communicating to their customers? How is the product range presented or categorized?
It’s good to look around and see how others are doing it. Not in a quest for homogeneity but more to reference or to avoid certain patterns that may or may not make sense for your own particular product. Clients often strive to be different for the sake of it. They feel they need to provide distinction by going against the flow a bit. We know different. We know users love convention. They embrace familiar mental models. They’re comfortable with things that they’ve experienced elsewhere. By showing your client that position is a vital part of their strategy, you can help shape their product into something great.
To conclude
So there we have it – the four Ws. Each part tells a different and vital part of the story you need to be able to make a really good product. It might sound like a lot of work, particularly when the client is breathing down your neck expecting to see things, but without those pieces in place, the story you’re building your product on, and the content that you’re creating to form that product can only ever fit into one genre. Fiction.",2011,Alex Morris,alexmorris,2011-12-10T00:00:00+00:00,https://24ways.org/2011/context-first/,content
276,Your jQuery: Now With 67% Less Suck,"Fun fact: more websites are now using jQuery than Flash.
jQuery is an amazing tool that’s made JavaScript accessible to developers and designers of all levels of experience. However, as Spiderman taught us, “with great power comes great responsibility.” The unfortunate downside to jQuery is that while it makes it easy to write JavaScript, it makes it easy to write really really f*ing bad JavaScript. Scripts that slow down page load, unresponsive user interfaces, and spaghetti code knotted so deep that it should come with a bottle of whiskey for the next sucker developer that has to work on it.
This becomes more important for those of us who have yet to move into the magical fairy wonderland where none of our clients or users view our pages in Internet Explorer. The IE JavaScript engine moves at the speed of an advancing glacier compared to more modern browsers, so optimizing our code for performance takes on an even higher level of urgency.
Thankfully, there are a few very simple things anyone can add into their jQuery workflow that can clear up a lot of basic problems. When undertaking code reviews, three of the areas where I consistently see the biggest problems are: inefficient selectors; poor event delegation; and clunky DOM manipulation. We’ll tackle all three of these and hopefully you’ll walk away with some new jQuery batarangs to toss around in your next project.
Selector optimization
Selector speed: fast or slow?
Saying that the power behind jQuery comes from its ability to select DOM elements and act on them is like saying that Photoshop is a really good tool for selecting pixels on screen and making them change color – it’s a bit of a gross oversimplification, but the fact remains that jQuery gives us a ton of ways to choose which element or elements in a page we want to work with. However, a surprising number of web developers are unaware that all selectors are not created equal; in fact, it’s incredible just how drastic the performance difference can be between two selectors that, at first glance, appear nearly identical. For instance, consider these two ways of selecting all paragraph tags inside a div with an ID.
$(""#id p"");
$(""#id"").find(""p"");
Would it surprise you to learn that the second way can be more than twice as fast as the first? Knowing which selectors outperform others (and why) is a pretty key building block in making sure your code runs well and doesn’t frustrate your users waiting for things to happen.
There are many different ways to select elements using jQuery, but the most common ways can be basically broken down into five different methods. In order, roughly, from fastest to slowest, these are:
$(""#id"");
This is without a doubt the fastest selector jQuery provides because it maps directly to the native document.getElementById() JavaScript method. If possible, the selectors listed below should be prefaced with an ID selector in conjunction with jQuery’s .find() method to limit the scope of the page that has to be searched (as in the $(""#id"").find(""p"") example shown above).
$(""p"");, $(""input"");, $(""form""); and so on
Selecting elements by tag name is also fast, since it maps directly to the native document.getElementsByTagName() method.
$("".class"");
Selecting by class name is a little trickier. While still performing very well in modern browsers, it can cause some pretty significant slowdowns in IE8 and below. Why? IE9 was the first IE version to support the native document.getElementsByClassName() JavaScript method. Older browsers have to resort to using much slower DOM-scraping methods that can really impact performance.
$(""[attribute=value]"");
There is no native JavaScript method for this selector to use, so the only way that jQuery can perform the search is by crawling the entire DOM looking for matches. Modern browsers that support the querySelectorAll() method will perform better in certain cases (Opera, especially, runs these searches much faster than any other browser) but, generally speaking, this type of selector is Slowey McSlowersons.
$("":hidden"");
Like attribute selectors, there is no native JavaScript method for this one to use. Pseudo-selectors can be painfully slow since the selector has to be run against every element in your search space. Again, modern browsers with querySelectorAll() will perform slightly better here, but try to avoid these if at all possible. If you must use one, try to limit the search space to a specific portion of the page: $(""#list"").find("":hidden"");
But, hey, proof is in the performance testing, right? It just so happens that said proof is sitting right here. Be sure to notice the class selector numbers beside IE7 and 8 compared to other browsers and then wonder how the people on the IE team at Microsoft manage to sleep at night. Yikes.
Chaining
Almost all jQuery methods return a jQuery object. This means that when a method is run, its results are returned and you can continue executing more methods on them. Rather than writing out the same selector multiple times over, just making a selection once allows multiple actions to be run on it.
Without chaining
$(""#object"").addClass(""active"");
$(""#object"").css(""color"",""#f0f"");
$(""#object"").height(300);
With chaining
$(""#object"").addClass(""active"").css(""color"", ""#f0f"").height(300);
This has the dual effect of making your code shorter and faster. Chained methods will be slightly faster than multiple methods made on a cached selector, and both ways will be much faster than multiple methods made on non-cached selectors. Wait… “cached selector”? What is this new devilry?
Caching
Another easy way to speed up your code that seems to be a mystery to many developers is the idea of caching your selectors. Think of how many times you end up writing the same selector over and over again in any project. Every $("".element"") selector has to search the entire DOM each time, regardless of whether or not that selector has been run before. Running the selection once and then storing the results in a variable means that the DOM only has to be searched once. Once the results of a selector have been cached, you can do anything with them.
First, run your search (here we’re selecting all of the <li> elements inside <ul id=""blocks"">):
var blocks = $(""#blocks"").find(""li"");
Now, you can use the blocks variable wherever you want without having to search the DOM every time.
$(""#hideBlocks"").click(function() {
blocks.fadeOut();
});
$(""#showBlocks"").click(function() {
blocks.fadeIn();
});
My advice? Any selector that gets run more than once should be cached. This jsperf test shows just how much faster a cached selector runs compared to a non-cached one (and even throws some chaining love in to boot).
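Caching also plays nicely with the chaining we covered a moment ago. A minimal sketch, reusing the hypothetical blocks variable from above:
// One DOM search, then chain as many methods as you like on the cached result
var blocks = $('#blocks').find('li');
blocks.addClass('highlight').fadeIn(300);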
Event delegation
Event listeners cost memory. In complex websites and apps it’s not uncommon to have a lot of event listeners floating around, and thankfully jQuery provides some really easy methods for handling event listeners efficiently through delegation.
In a bit of an extreme example, imagine a situation where a 10×10 cell table needs to have an event listener on each cell; let’s say that clicking on a cell adds or removes a class that defines the cell’s background color. A typical way that this might be written (and something I’ve often seen during code reviews) is like so:
$('table').find('td').click(function() {
$(this).toggleClass('active');
});
jQuery 1.7 has provided us with a new event listener method, .on(). It acts as a utility that wraps all of jQuery’s previous event listeners into one convenient method, and the way you write it determines how it behaves. To rewrite the above .click() example using .on(), we’d simply do the following:
$('table').find('td').on('click',function() {
$(this).toggleClass('active');
});
Simple enough, right? Sure, but the problem here is that we’re still binding one hundred event listeners to our page, one to each individual table cell. A far better way to do things is to create one event listener on the table itself that listens for events inside it. Since the majority of events bubble up the DOM tree, we can bind a single event listener to one element (in this case, the <table>) and wait for events to bubble up from its children. The way to do this using the .on() method requires only one change from our code above:
$('table').on('click','td',function() {
$(this).toggleClass('active');
});
All we’ve done is moved the td selector to an argument inside the .on() method. Providing a selector to .on() switches it into delegation mode, and the event is only fired for descendants of the bound element (table) that match the selector (td). With that one simple change, we’ve gone from having to bind one hundred event listeners to just one. You might think that the browser having to do one hundred times less work would be a good thing and you’d be completely right. The difference between the two examples above is staggering.
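If you’d like to see that for yourself, jQuery keeps its bound handlers in an internal data store that you can peek at from the console. This is an undocumented internal (fine for poking around while debugging, not for production code), but in jQuery 1.7 something like this works:
// With delegation there's a single click handler on the table itself...
var events = $._data($('table')[0], 'events');
console.log(events.click.length); // 1
// ...and the individual cells have no handlers bound at all
console.log($._data($('td')[0], 'events')); // undefined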
(Note that if your site is using a version of jQuery earlier than 1.7, you can accomplish the very same thing using the .delegate() method. The syntax of how you write the function differs slightly; if you’ve never used it before, it’s worth checking the API docs for that page to see how it works.)
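For reference, here’s a minimal sketch of the same delegated listener written with .delegate(); note that the selector comes first and the event type second, the reverse of .on():
$('table').delegate('td', 'click', function() {
$(this).toggleClass('active');
});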
DOM manipulation
jQuery makes it very easy to manipulate the DOM. It’s trivial to create new nodes, insert them, remove other ones, move things around, and so on. While the code to do this is simple to write, every time the DOM is manipulated, the browser has to repaint and reflow content, which can be extremely costly. Nowhere is this more evident than in a long loop, whether it be a standard for() loop, while() loop, or jQuery $.each() loop.
In this case, let’s say we’ve just received an array full of image URLs from a database or Ajax call or wherever, and we want to put all of those images in an unordered list. Commonly, you’ll see code like this to pull this off:
var arr = [reallyLongArrayOfImageURLs];
$.each(arr, function(count, item) {
var newImg = '<li><img src=""' + item + '"" /></li>';
$('#imgList').append(newImg);
});
There are a couple of problems with this. For one (which you should have already noticed if you’ve read the earlier part of this article), we’re making the $(""#imgList"") selection once for each iteration of our loop. The other problem here is that each time the loop iterates, it’s adding a new <li> to the DOM. Each of those insertions is going to be costly, and if our array is quite large then this could lead to a massive slowdown or even the dreaded ‘A script is causing this page to run slowly’ warning. A much better approach is to take the DOM manipulation out of the loop entirely:
var arr = [reallyLongArrayOfImageURLs],
tmp = '';
$.each(arr, function(count, item) {
tmp += '<li><img src=""' + item + '"" /></li>';
});
$('#imgList').append(tmp);
All we’ve done here is create a tmp variable that each <li> is added to as it’s created. Once our loop has finished iterating, that tmp variable will contain all of our list items in memory, and can be appended to our <ul> all in one go.
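A variation on the same idea that you’ll sometimes see (sketched here with the same hypothetical array and markup) pushes each string into an array and joins it once at the end, which also sidesteps repeated string concatenation in older browsers:
var arr = [reallyLongArrayOfImageURLs],
parts = [];
$.each(arr, function(count, item) {
parts.push('<li><img src=""' + item + '"" /></li>');
});
$('#imgList').append(parts.join(''));
Either way, the important thing is that the DOM is touched once, not once per image.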