rowid,title,contents,year,author,author_slug,published,url,topic 54,Putting My Patterns through Their Paces,"Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work. The thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me. Meet the Teaser Here’s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.) In its simplest form, a teaser contains a headline, which links to an article: Fairly straightforward, sure. But it’s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types – some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile. Nearly every element visible on this page is built out of our generic “teaser” pattern. But the teaser variation I’d like to call out is the one that appears on The Toast’s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline. The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts. And this is, as it happens, the teaser variation that gave me pause. Back in the old days – you know, like six months ago – I probably would’ve marked this module up to match the design. In other words, I would’ve looked at the module’s visual hierarchy (metadata up top, headline and content below) and written the following HTML:
<div class=""teaser"">
	<p class=""teaser-byline"">By Author Name</p>
	<p class=""comment-count"">126 comments</p>
	<h2><a href=""#"">Article Title</a></h2>
	<p class=""teaser-excerpt"">Lorem ipsum dolor sit amet, consectetur…</p>
</div>
But then I caught myself, and realized this wasn’t the best approach. Moving Beyond Layout Since I’ve started working responsively, there’s a question I work into every step of my design process. Whether I’m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself: What if someone doesn’t browse the web like I do? …Okay, that doesn’t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it’s been invaluable to so many aspects of my practice. If I’m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I’m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It’s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web. And that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it’d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they’re associated. That’s why I find it’s helpful to begin designing a pattern’s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I’m trying to support. In other words, if someone’s encountering my design without the CSS I’ve written, what should their experience be? So I took a step back, and came up with a different approach:
<div class=""teaser"">
	<h2><a href=""#"">Article Title</a></h2>
	<p class=""teaser-byline"">By Author Name</p>
	<p class=""teaser-excerpt"">Lorem ipsum dolor sit amet, consectetur… <a class=""comment-count"" href=""#"">126 comments</a></p>
</div>
Much, much better. This felt like a better match for the content I was designing: the headline – easily the most important element – was at the top, followed by the author’s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that’s why it’s at the very end of the excerpt, the last element within our teaser. And with some light styling, we’ve got a respectable-looking hierarchy in place: Yeah, you’re right – it’s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we’ll bolster the markup with an extra element around our title and byline:
<div class=""teaser-hed"">
	<h2><a href=""#"">Article Title</a></h2>
	<p class=""teaser-byline"">By Author Name</p>
</div>
With that in place, we can use flexbox to tweak our layout, like so: .teaser-hed { display: flex; flex-direction: column-reverse; } flex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children. Getting closer! But as great as flexbox is, it doesn’t do anything for elements outside our container, like our little comment count, which is, as you’ve probably noticed, still stranded at the very bottom of our teaser. Flexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser’s using a sufficiently modern version of flexbox. Here’s the one we used: var doc = document.body || document.documentElement; var style = doc.style; if ( style.webkitFlexWrap == '' || style.msFlexWrap == '' || style.flexWrap == '' ) { doc.className += "" supports-flex""; } Eagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser’s design: .supports-flex .teaser-hed { display: flex; flex-direction: column-reverse; } .supports-flex .teaser .comment-count { position: absolute; right: 0; top: 1.1em; } If the supports-flex class is present, we can apply our flexbox layout to the title area, sure – but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don’t meet our threshold for our advanced styles are left with an attractive design that matches our HTML’s content hierarchy; but the ones that pass our test receive the finished, final design. And with that, our teaser’s complete. Diving Into Device-Agnostic Design This is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I’d recommend Heydon Pickering’s “Flexbox Grid Finesse”, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you’d be right! In fact, it’s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web. And what’s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well. Open video Even a simple search form can be conditionally enhanced, given a little layered thinking. This more layered approach to interface design isn’t a new one, mind you: it’s been championed by everyone from Filament Group to the BBC. And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I’ve found to practice responsive design. 
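One more technical aside before moving on: the @supports route mentioned above is worth seeing, even though it wasn’t an option for this project. A minimal sketch, assuming IE support isn’t a requirement – it tests the same flex-wrap property as the JavaScript version, and needs no supports-flex class at all:

@supports (flex-wrap: wrap) {
	.teaser-hed {
		display: flex;
		flex-direction: column-reverse;
	}
	.teaser .comment-count {
		position: absolute;
		right: 0;
		top: 1.1em;
	}
}

Browsers that don’t understand @supports (including every version of IE) skip the block entirely, and simply keep the lightly styled content hierarchy.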
As Trent Walton once wrote, Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability. We have a weird job, working on the web. We’re designing for the latest mobile devices, sure, but we’re increasingly aware that our definition of “smartphone” is much too narrow. Browsers have started appearing on our wrists and in our cars’ dashboards, but much of the world’s mobile data flows over sub-3G networks. After all, the web’s evolution has never been charted along a straight line: it’s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don’t yet know about, a more device-agnostic, more layered design process can better prepare our patterns – and ourselves – for the future. (It won’t help you get enough to eat at holiday parties, though.)",2015,Ethan Marcotte,ethanmarcotte,2015-12-10T00:00:00+00:00,https://24ways.org/2015/putting-my-patterns-through-their-paces/,code 115,"Charm Clients, Win Pitches","Over the years I have picked up a number of sales techniques that have led to us doing pretty well in the pitches we go for. Of course, up until now, these top secret practices have remained firmly locked in the company vault but now I am going to share them with you. They are cunningly hidden within the following paragraphs so I’m afraid you’re going to have to read the whole thing. Ok, so where to start? I guess a good place would be getting invited to pitch for work in the first place. Shameless self promotion What not to do You’re as keen as mustard to ‘sell’ what you do, but you have no idea as to the right approach. From personal experience (sometimes bitter!), the following methods are as useful as the proverbial chocolate teapot: Cold calling Advertising Bidding websites Sales people Networking events Ok, I’m exaggerating; sometimes these things work. For example, cold calling can work if you have a story – a reason to call and introduce yourself other than “we do web design and you have a website”. “We do web design and we’ve just moved in next door to you” would be fine. Advertising can work if your offering is highly specialist. However, paying oodles of dollars a day to Google Ads to appear under the search term ‘web design’ is probably not the best use of your budget. Specialising is, in fact, probably a good way to go. Though it can feel counter-intuitive in that you are not spreading yourself as widely as you might, you will eventually become an expert and therefore gain a reputation in your field. Specialism doesn’t necessarily have to be in a particular skillset or technology, it could just as easily be in a particular supply chain or across a market. Target audience ‘Who to target?’ is the next question. If you’re starting out then do tap-up your family and friends. Anything that comes your way from them will almost certainly come with a strong recommendation. Also, there’s nothing wrong with calling clients you had dealings with in previous employment (though beware of any contractual terms that may prevent this). You are informing your previous clients that your situation has changed; leave it up to them to make any move towards working with you. After all, you’re simply asking to be included on the list of agencies invited to tender for any new work. Look to target clients similar to those you have worked with previously. Again, you have a story – hopefully a good one!
So how do you reach these people? Mailing lists Forums Writing articles Conferences / Meetups Speaking opportunities Sharing Expertise In essence: blog, chat, talk, enthuse, show off (a little)… share. There are many ways you can do this. There’s the traditional portfolio, almost obligatory blog (regularly updated of course), podcast, ‘giveaways’ like Wordpress templates, CSS galleries and testimonials. Testimonials are your greatest friend. Always ask clients for quotes (write them and ask for their permission to use) and even better, film them talking about how great you are. Finally, social networking sites can offer a way to reach your target audiences. You do have to be careful here though. You are looking to build a reputation by contributing value. Do not self promote or spam! Writing proposals Is it worth it? Ok, so you have been invited to respond to a tender or brief in the form of a proposal. Good proposals take time to put together so you need to be sure that you are not wasting your time. There are two fundamental questions that you need to ask prior to getting started on your proposal: Can I deliver within the client’s timescales? Does the client’s budget match my price? The timescales that clients set are often plucked from the air and a little explanation about how long projects usually take can be enough to change expectations with regard to delivery. However, if a deadline is set in stone ask yourself if you can realistically meet it. Agreeing to a deadline that you know you cannot meet just to win a project is a recipe for an unhappy client, no chance of repeat business and no chance of any recommendations to other potential clients. Price is another thing altogether. So why do we need to know? The first reason, and most honest reason, is that we don’t want to do a lot of unpaid pitch work when there is no chance that our price will be accepted. Who would? But this goes both ways – the client’s time is also being wasted. It may only be the time to read the proposal and reject it, but what if all the bids are too expensive? Then the client needs to go through the whole process again. The second reason why we need to know budgets relates to what we would like to include in a proposal over what we need to include. For example, take usability testing. We always highly recommend that a client pays for at least one round of usability testing because it will definitely improve their new site – no question. But, not doing it doesn’t mean they’ll end up with an unusable turkey. It’s just more likely that any usability issues will crop up after launch. I have found that the best way to discover a budget is to simply provide a ballpark total, usually accompanied by a list of ‘likely tasks for this type of project’, in an initial email or telephone response. Expect a lot of people to dismiss you out of hand. This is good. Don’t be tempted to ‘just go for it’ anyway because you like the client or work is short – you will regret it. Others will say that the ballpark is ok. This is not as good as getting into a proper discussion about what priorities they might have but it does mean that you are not wasting your time and you do have a chance of winning the work. The only real risk with this approach is that you misinterpret the requirements and produce an inaccurate ballpark. Finally, there is a less confrontational approach that I sometimes use that involves modular pricing. 
We break down our pricing into quite detailed tasks for all proposals but when I really do not have a clue about a client’s budget, I will often separate pricing into ‘core’ items and ‘optional’ items. This has proved to be a very effective method of presenting price. What to include So, what should go into a proposal? It does depend on the size of the piece of work. If it’s a quick update for an existing client then they don’t want to read through all your blurb about why they should choose to work with you – a simple email will suffice. But, for a potential new client I would look to include the following: Your suitability Summary of tasks Timescales Project management methodology Pricing Testing methodology Hosting options Technologies Imagery References Financial information Biographies However, probably the most important aspect of any proposal is that you respond fully to the brief. In other words, don’t ignore the bits that either don’t make sense to you or you think irrelevant. If something is questionable, cover it and explain why you don’t think it is something that warrants inclusion in the project. Should you provide speculative designs? If the brief doesn’t ask for any, then certainly not. If it does, then speak to the client about why you don’t like to do speculative designs. Explain that any designs included as part of a proposal are created to impress the client and not the website’s target audience. Producing good web design is a partnership between client and agency. This can often impress and promote you as a professional. However, if they insist then you need to make a decision because not delivering any mock-ups will mean that all your other work will be a waste of time. Walking away As I have already mentioned, all of this takes a lot of work. So, when should you be prepared to walk away from a potential job? I have already covered unrealistic deadlines and insufficient budget but there are a couple of other reasons. Firstly, would this new client damage your reputation, particularly within current sectors you are working in? Secondly, can you work with this client? A difficult client will almost certainly lead to a loss-making project. Perfect pitch Requirements If the original brief didn’t spell out what is expected of you at a presentation then make sure you ask beforehand. The critical element is how much time you have. It seems that panels are providing less and less time these days. The usual formula is that you get an hour; half of which should be a presentation of your ideas followed by 30 minutes of questions. This isn’t that much time, particularly for a big project that covers all aspects of web design and production. Don’t be afraid to ask for more time, though it is very rare that you will be granted any. Ask a) if there are any areas they particularly want you to cover and b) if there are any areas of your proposal that were weak. Ask who will be attending. The main reason for this is to see if the decision maker(s) will be present but it’s also good to know if you’re presenting to 3 or 30 people. Who should be there Generally speaking, I think two is the ideal number. Though I have done many presentations on my own, I always feel that having two people to bounce ideas around with, and have a bit of banter with, works well. You are not only trying to sell your ideas and expertise but also yourselves.
One of the main things in the panel’s minds will be – “can I work with these people?” Having more than two people at a presentation often looks like you’re wheeling people out just to demonstrate that they exist. What makes a client want to hire you? In a nutshell: Confidence, Personality, Enthusiasm. You can impart confidence by being well prepared and professional, providing examples and demonstrations and talking about your processes. You may find project management boring but pretty much every potential client will want to feel reassured that you manage your projects effectively. As well as demonstrating that you know what you’re talking about, it is important to encourage, and be part of, discussion about the project. Be prepared to suggest and challenge and be willing to say “I don’t know”. Also, no-one likes a show-off so don’t over-promote yourself; encourage them to contact your existing clients. What makes a client like you? Engaging with a potential client is tricky and it’s probably the area where you need to be most on your toes and try to gauge the reaction of the client. We recommend the following: Encourage questions throughout Ask if you make sense – which encourages questions if you’re not getting any Humour – though don’t keep trying to be funny if you’re not getting any laughs! Be willing to go off track Read your audience Empathise with the process – chances are, most of the people in front of you would rather be doing something else Think about what you wear – this sounds daft but do you want to be seen as either the ‘stiff in the suit’ or the ‘scruffy art student’? Chances are neither character would get hired. Differentiation Sometimes, especially if you think you are an outsider, it’s worth taking a few risks. I remember my colleague Paul starting off a presentation once with the line (backed up on screen) – “Headscape is not a usability consultancy”. This was in response to the client’s request to engage a usability consultancy. The thrust of Paul’s argument was that we are a lot more than that. This really worked. We were the outside choice but they ended up hiring us. Basically, this differentiated us from the crowd. It showed that we are prepared to take risks and think, dare I say it, outside of the box. Dealing with difficult characters How you react to tricky questioning is likely to be what determines whether you have a good or bad presentation. Here are a few of those characters that so often turn up in panels: The techie – this is likely to be the situation where you need to say “I don’t know”. Don’t bluff as you are likely to dig yourself a great big embarrassment-filled hole. Promise to follow up with more information and make sure that you do so as quickly as possible after the pitch. The ‘hard man’ MD – this is the guy who thinks it is his duty to throw ‘curve ball’ questions to see how you react. Focus on your track record (big name clients will impress this guy) and emphasise your processes. The ‘no clue’ client – you need to take control and be the expert though you do need to explain the reasoning behind any suggestions you make. This person will be judging you on how much you are prepared to help them deliver the project. The price negotiator – be prepared to discuss price but do not reduce your rate or the effort associated with your proposal. Fall back on modular pricing and try to reduce scope to come within budget. You may wish to offer a one-off discount to win a new piece of work but don’t get into detail at the pitch.
Don’t panic… If you go into a presentation thinking ‘we must win this’ then, chances are, you won’t. Relax and be yourself. If you’re not hitting it off with the panel then so be it. You have to remember that quite often you will be making up the numbers in a tendering process. This is massively frustrating but, unfortunately, part of it. If it’s not going well, concentrate on what you are offering and try to demonstrate your professionalism rather than your personality. Finally, be on your toes, watch people’s reactions and pay attention to what they say and try to react accordingly. So where are the secret techniques I hear you ask? Well, using the words ‘secret’ and ‘technique’ was probably a bit naughty. Most of this stuff is about being keen, using your brain and believing in yourself and what you are selling rather than following a strict set of rules.",2008,Marcus Lillington,marcuslillington,2008-12-09T00:00:00+00:00,https://24ways.org/2008/charm-clients-win-pitches/,business 203,Jobs-to-Be-Done in Your UX Toolbox,"Part 1: What is JTBD? The concept of a “job” in “Jobs-To-Be-Done” is neatly encapsulated by an oft-quoted line from Theodore Levitt: “People want a quarter-inch hole, not a quarter inch drill”. Even so, Don Norman pointed out that perhaps Levitt “stopped too soon” at what the real customer goal might be. In “The Design of Everyday Things”, he wrote: “Levitt’s example of the drill implying that the goal is really a hole is only partially correct, however. When people go to a store to buy a drill, that is not their real goal. But why would anyone want a quarter-inch hole? Clearly that is an intermediate goal. Perhaps they wanted to hang shelves on the wall. Levitt stopped too soon. Once you realize that they don’t really want the drill, you realize that perhaps they don’t really want the hole, either: they want to install their bookshelves. Why not develop methods that don’t require holes? Or perhaps books that don’t require bookshelves.” In other words, a “job” in JTBD lingo is a way to express a user need or provide a customer-centric problem frame that’s independent of a solution. As Tony Ulwick says: “A job is stable, it doesn’t change over time.” An example of a job is “tiding you over from breakfast to lunch.” You could hire a donut, a flapjack or a banana for that mid-morning snack—whatever does the job. If you can arrive at a clearly identified primary job (and likely some secondary ones too), you can be more creative in how you come up with an effective solution while keeping the customer problem in focus. The team at Intercom wrote a book on their application of JTBD. In it, Des Traynor cleverly characterised how JTBD provides a different way to think about solutions that compete for the same job: “Economy travel and business travel are both capable candidates applying for [the job: Get me face-to-face with my colleague in San Francisco], though they’re looking for significantly different salaries. Video conferencing isn’t as capable, but is willing to work for a far smaller salary. I’ve a hiring choice to make.” So far so good: it’s relatively simple to understand what a job is, once you understand how it’s different from a “task”. Business consultant and Harvard professor Clay Christensen talks about the concept of “hiring” a product to do a “job”, and firing it when something better comes along. If you’re a company that focuses solutions on the customer job, you’re more likely to succeed.
You’ll find these concepts often referred to as “Jobs-to-be-Done theory”. But the application of Jobs-to-Be-Done theory is a little more complicated; it comprises several related approaches. I particularly like Jim Kalbach’s description of how JTBD is a “lens through which to understand value creation”. But it is also more. In my view, it’s a family of frameworks and methods—and perhaps even a philosophy. Different facets in a family of frameworks JTBD has its roots in market research and business strategy, and so it comes to the research table from a slightly different place compared to traditional UX or design research—we have our roots in human-computer interaction and ergonomics. I’ve found it helpful to keep in mind that the application of JTBD theory is an evolving beast, so it’s common to find contradictions across different resources. My own use of it has varied from project to project. In speaking to others who have adopted it in different measures, it seems that we have all applied it in somewhat multifarious ways. As we often like to say in interviews: there are no wrong answers. Outcome Driven Innovation Tony Ulwick’s version of the JTBD history began with Outcome Driven Innovation (ODI), and this approach is best outlined in his seminal article published in the Harvard Business Review in 2002. To understand his more current JTBD approach in his new book “Jobs to Be Done: Theory to Practice”, I actually found it beneficial to read his approach in the original 2002 article for a clearer reference point. In the earlier article, Ulwick presented a rigorous approach that combines interviews, surveys and an “opportunity” algorithm—a sequence of steps to determine the business opportunity. ODI centres around working with “desired outcome statements” that you unearth through interviews, followed by a means to quantify the gap between importance and satisfaction in a survey to different types of customers. Since 2008, Ulwick has written about using job maps to make sense of what the customer may be trying to achieve. In a recent article, he describes the aim of the activity as being “to discover what the customer is trying to get done at different points in executing a job and what must happen at each juncture in order for the job to be carried out successfully.” A job map is not strictly a journey map, however tempting it is to see it that way. From a UX perspective, it is one of many models we can use—and as our research team at Clearleft have found, how we use the model can depend on the nature of the jobs we’ve uncovered in interviews and the characteristics of the problem we’re attempting to solve. Figure 1. Universal job map Ulwick’s current methodology is outlined in his new book, where he describes a complete end-to-end process: from customer and competitor research to framing market and product strategy. The Jobs-To-Be-Done Interview Back in 2013, I attended a workshop by Chris Spiek and Bob Moesta from the ReWired Group on JTBD at the behest of a then-MailChimp colleague, and I came away excited about their approach to product research. It felt different from anything I’d done before and for the first time in years, I felt that I was genuinely adding something new to my research toolbox. A key idea is that if you focus on the stories of those who switched to you, and those who switch away from you, you can uncover the core jobs through looking at these opposite ends of engagement.
This framework centres around the JTBD interview method, which harnesses the power of a narrative framework to elicit the real reasons why someone “hired” something to do a job—be it something physical like a new coffee maker, or a digital service, such as a to-do list app. As you interview, you are trying to unearth the context around the key moments on the JTBD timeline (Figure 2). A common approach is to begin from the point the customer might have purchased something, back to the point where the thought of buying this thing first occurred to them. Figure 2. JTBD Timeline Figure 3. The Four Forces The Forces Diagram (Figure 3) is a post-interview analysis tool where you can map out what causes customers to switch to something new and what holds them back. The JTBD interview is effective at identifying core and secondary jobs, as well as some context around the user need. Because this method is designed to extract the story from the interviewee, it’s a powerful way to facilitate recall. Having done many such interviews, I’ve noticed one interesting side effect: participants often remember more details later on, after the conversation has formally ended. It is worth scheduling a follow-up phone call, or keeping the channels open. Strengths aside, it’s good to keep in mind that the JTBD interview is still primarily an interview technique, so you are relying on the context from the interviewee’s self-reported perspective. For example, a stronger research methodology combines JTBD interviews with contextual research and quantitative methods. Job Stories Alan Klement is credited with coming up with the term “job story” to describe the framing of jobs for product design by the team at Intercom: “When … I want to … so I can ….” Figure 4. Anatomy of a Job Story Unlike a user story that traditionally frames a requirement around personas, job stories frame the user need based on the situation and context. Paul Adams, the VP of Product at Intercom, wrote: “We frame every design problem in a Job, focusing on the triggering event or situation, the motivation and goal, and the intended outcome. […] We can map this Job to the mission and prioritise it appropriately. It ensures that we are constantly thinking about all four layers of design. We can see what components in our system are part of this Job and the necessary relationships and interactions required to facilitate it. We can design from the top down, moving through outcome, system, interactions, before getting to visual design.” Systems of Progress Apart from advocating using job stories, Klement believes that a core tenet of applying JTBD revolves around our desire for “self-betterment”—and that focusing on everyone’s desire for self-betterment is core to a successful strategy. In his book, Klement takes JTBD further to being a tool for change through applying systems thinking. There, he introduces the systems of progress and how they can help focus product strategy to be more innovative. Coincidentally, I applied similar thinking on mapping systemic change when we were looking to improve users’ trust with a local government forum earlier this year. It’s not just about capturing and satisfying the immediate job-to-be-done, it’s about framing the job so that you have a clear vision forward on how you can help your users improve their lives in the ways they want to. This is really the point where JTBD becomes a philosophy of practice.
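A brief aside before Part 2: the “opportunity” algorithm from Ulwick’s ODI, mentioned earlier, is commonly summarised as opportunity = importance + max(importance - satisfaction, 0), where importance and satisfaction come from survey ratings. Here’s a minimal sketch in JavaScript; the outcome statements and scores are entirely made up for illustration:

// Ulwick's opportunity score: importance plus the unmet gap.
// importance and satisfaction are average survey ratings on a 0-10 scale.
var outcomes = [
	{ statement: 'Minimise the time it takes to find a past order', importance: 8.4, satisfaction: 4.1 },
	{ statement: 'Minimise the risk of paying for the wrong item', importance: 9.1, satisfaction: 7.8 }
];
outcomes.forEach(function (outcome) {
	var gap = Math.max(outcome.importance - outcome.satisfaction, 0);
	var opportunity = outcome.importance + gap;
	console.log(outcome.statement + ': ' + opportunity.toFixed(1));
});

Outcomes that are important but poorly satisfied score highest – exactly the gaps ODI is designed to surface.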
Part 2: Mixing It Up There has been some misunderstanding about how adopting JTBD means ditching personas or some of our existing design tools or research techniques. This couldn’t have been more wrong. Figure 5: Jim Kalbach’s JTBD model Jim Kalbach has used Outcome-Driven Innovation for around 10 years. In a 2016 article, he presents a synthesised model of how to think about JTBD that has key elements from ODI, Christensen’s theories and the structure of the job story. More interestingly, Kalbach has also combined the use of mental models with JTBD. Claire Menke of Udemy has written a comprehensive article about using personas, JTBD and customer journey maps together in order to communicate a more complete story from the users’ perspective. Claire highlights an especially interesting point in her article as she describes her challenges: “After much trial and error, I arrived at a foundational research framework to suit every team’s needs — allowing everyone to share the same holistic understanding, but extract the type of information most applicable to their work.” In other words, the organisational context you are in can likely dictate what works best—after all, the goal is to arrive at the best user experience for your audiences. Intercom can afford to go full-on in applying JTBD theory as a dominant approach because they are a start-up, but a large company or organisation with multiple business units may require a mix of tools, outputs and outcomes. JTBD is an immensely powerful approach on many fronts—you’ll find many different references that list the ways you can apply JTBD. However, in the context of this discussion, it might also be useful to examine where it lies in our models of how we think about our UX and product processes. JTBD in the UX ecosystem Figure 6. The Elements of User Experience (source) There are many ways we have tried to explain the UX discipline, but I think Jesse James Garrett’s Elements of User Experience is a good place to begin. I sometimes also use a little diagram to help me describe the different levels you might work at when you work through the complexity of designing and developing a product. A holistic UX strategy needs to address all the different levels for a comprehensive experience: your individual product UI, product features, product propositions and brand need to have a cohesive definition. Figure 7. Which level of product focus? We could, of course, also think about where it fits best within the double diamond. Again, bearing in mind that JTBD has its roots in business strategy and market research, it is excellent at clarifying user needs, defining high-level specifications and content requirements. It is excellent for validating brand perception and value proposition—all the way down to your feature set. In other words, it can be extremely powerful all the way through to the halfway point of the second diamond. You could quite readily combine the different JTBD approaches because they have differences as much as overlaps. However, JTBD generally starts getting a little difficult to apply once we get to the details of UI design. The clue lies in JTBD’s raison d’être: a job statement is solution independent. Hence, once we get to designing solutions, we potentially fall into an existential black hole. That said, Jim Kalbach has a quick case study on applying JTBD to content design tucked inside the main article on a synthesised JTBD model. Alan Klement has a great example of how you could use UI to resolve job stories.
You’ll notice that the available language of “jobs” drops off at around that point. Job statements and outcome statements provide excellent “mini north-stars” as customer-oriented focal points, but purely satisfying these statements would not necessarily guarantee that you have created a seamless and painless user experience. Playing well with others You will find that JTBD plays well with Lean, and other strategy tools like the Value Proposition Canvas. With every new project, there is potential to harness the power of JTBD alongside our established toolbox. When we need to understand complex contexts where cultural or socioeconomic considerations have to be taken into account, we are better placed with combining JTBD with more anthropological approaches. And while we might be able to evaluate if our product, website or app satisfies the customer jobs through interviews or surveys, without good old-fashioned usability testing we are unlikely to be able to truly validate why the job isn’t being represented as it should. In this case, individual jobs solved on the UI can be set up as hypotheses to be proven right or wrong. The application of Jobs-to-be-Done is still evolving. I’ve found it to be very powerful and I struggle to remember what my UX professional life was like before I encountered it—it has completely changed my approach to research and design. The fact JTBD is still evolving as a practice means we need to be watchful of dogma—there’s no right way to get a UX job done after all, it nearly always depends. At the end of the day, isn’t it about having the right tool for the right job?",2017,Steph Troeth,stephtroeth,2017-12-04T00:00:00+00:00,https://24ways.org/2017/jobs-to-be-done-in-your-ux-toolbox/,ux 151,Get In Shape,"Pop quiz: what’s wrong with the following navigation? Maybe nothing. But then again, maybe there’s something bugging you about the way it comes together, something you can’t quite put your finger on. It seems well-designed, but it also seems a little… off. The design decisions that led to this eventual form were no doubt well-considered: Client: The top level needs to have a “current page” status indicator of some sort. Designer: How about a white tab? Client: Great! The second level needs to show up underneath the first level though… Designer: Okay, but that white tab I just added makes it hard to visually connect the bottom nav to the top. Client: Too late, we’ve seen the white tab and we love it. Try and make it work. Designer: Right. So I placed the second level in its own box. Client: Hmm. They seem too separated. I can’t tell that the yellow nav is the second level of the first. Designer: How about an indicator arrow? Client: Brilliant! The problem is that the end result feels awkward and forced. During the design process, little decisions were made that ultimately affect the overall shape of the navigation. What started out as a neatly contained rounded rectangle ended up as an ambiguous double shape that looks funny, though it’s often hard to pinpoint precisely why. The Shape of Things Well the why in this case is because seemingly unrelated elements in a design still end up visually interacting. Adding a new item to a page impacts everything surrounding it. In this navigation example, we’re looking at two individual objects that are close enough to each other that they form a relationship; if we reduce them to strictly their outlines, it’s a little easier to see that this particular combination registers oddly. 
The two shapes float with nothing really grounding them. If they were connected, perhaps it would be a different story. The white tab divides the top shape in half, leaving a gap in the middle of it. There’s very little balance in this pairing because the overall shape of the navigation wasn’t considered during the design process. Here’s another example: Gmail. Great email client, but did you ever closely look at what’s going on in that left hand navigation? The continuous blue bar around the message area spills out into the navigation. If we remove all text, we’re left with this odd configuration: Though the reasoning for anchoring the navigation highlight against the message area might be sound, the result is an irregular shape that doesn’t correspond with anything in reality. You may never consciously notice it, but once you do it’s hard to miss. One other example courtesy of last.fm: The two header areas are the same shade of pink so they appear to be closely connected. When reduced to their outlines it’s easy to see that this combination is off-balance: the edges don’t align, the sharp corners of the top shape aren’t consistent with the rounded corners of the bottom, and the part jutting out on the right of the bottom one seems fairly random. The result is a duo of oddly mis-matched shapes. Design Strategies Our minds tend to pick out familiar patterns. A clever designer can exploit this by creating references in his or her work to shapes and combinations with which viewers are already familiar. There are a few simple ideas that can be employed to help you achieve this: consistency, balance, and completion. Consistency A fairly simple way to unify the various disparate shapes on a page is by designing them with a certain amount of internal consistency. You don’t need to apply an identical size, colour, border, or corner treatment to every single shape; devolving a design into boring repetition isn’t what we’re after here. But it certainly doesn’t hurt to apply a set of common rules to most shapes within your work. Consider purevolume and its multiple rounded-corner panels. From the bottom of the site’s main navigation to the grey “Extras” panels halfway down the page (shown above), multiple shapes use a common border radius for unity. Different colours, different sizes, different content, but the consistent outlines create a strong sense of similarity. Not that every shape on the site follows this rule; they break the pattern right at the top with a darker sharp-cornered header, and again with the thumbnails below. But the design remains unified, nonetheless. Balance Arguably the biggest problem with the last.fm example earlier is one of balance. The area poking out of the bottom shape created a fairly obvious imbalance for no apparent reason. The right hand side is visually emphasized due to the greater area of pink coverage, but with the white gap left beside it, the emphasis seems unwarranted. It’s possible to create tension in your design by mismatching shapes and throwing off the balance, but when that happens unintentionally it can look like a mistake. Above are a few examples of design elements in balanced and unbalanced configurations. The examples in the top row are undeniably more pleasing to the eye than those in the bottom row. If these were fleshed out into full designs, those derived from the templates in the top row would naturally result in stronger work. Take a look at the header on 9Rules for a study in well-considered balance. 
On the left you’ll see a couple of paragraphs of text, on the right you have floating navigational items, and both flank the site’s logo. This unusual layout combines multiple design elements that look nothing alike, and places them together in a way that anchors each so that no one weighs down the header. Completion And finally we come to the idea of completion. Shapes don’t necessarily need hard outlines to be read visually as shapes, which can be exploited for various purposes. Notice how Zend’s mid-page “Business Topics” and “News” items (below) fade out to the right and bottom, but the placement of two of these side-by-side creates an impression of two panels rather than three disparate floating columns. By allowing the viewer’s eye to complete the shapes, they’ve lightened up the design of the page and removed inessential lines. In a busy design this technique could prove quite handy. Along the same lines, the individual shapes within your design may also be combined visually to form outlines of larger shapes. The differently-coloured header and main content/sidebar shapes on Veerle’s blog come together to form a single central panel, further emphasized by the slight drop shadow to the right. Implementation Studying how shape can be used effectively in design is simply a starting point. As with all things design-related, there are no hard and fast rules here; ultimately you may choose to bring these principles into your work more often, or break them for effect. But understanding how shapes interact within a page, and how that effect is ultimately perceived by viewers, is a key design principle you can use to impress your friends.",2007,Dave Shea,daveshea,2007-12-16T00:00:00+00:00,https://24ways.org/2007/get-in-shape/,design 73,How to Make Your Site Look Half-Decent in Half an Hour,"Programmers like me are often intimidated by design – but a little effort can give a huge return on investment. Here are one coder’s tips for making any site quickly look more professional. I am a programmer. I am not a designer. I have a degree in computer science, and I don’t mind Comic Sans. (It looks cheerful. Move on.) But although I am a programmer, I want to make my sites look attractive. This is partly out of vanity, and partly realism. Vanity because I want people to think my work is good, and realism because the research shows that people won’t think a site is credible unless it also looks attractive. For a very long time after I became a programmer, I was scared of design. Design seemed to consist of complicated rules that weren’t written down anywhere, plus an unlearnable sense of taste, possessed only by a black-clad elite. But a little while ago, I decided to do my best to hack what it took to make my own projects look vaguely attractive. And although this doesn’t come close to the effect a professional designer could achieve, gathering these resources for improving a site’s look and feel has been really helpful. If I hadn’t figured out some basic design shortcuts, it’s unlikely that a weekend hack of mine would have ended up on page three of the Daily Mail. And too often now, I see excellent programming projects that don’t reach the audience they deserve, simply because their design doesn’t match their execution. So, if you are a developer, my Christmas present to you is this: my own collection of hacks that, used rightly, can make your personal programming projects look professional, quickly. None are hard to learn, most are free, and they let you focus on writing code. 
One thing to note about these tips, though. They are a personal, pragmatic compilation. They are suggestions, not a definitive guide. You will definitely get better results by working with a professional designer, and by studying design more deeply. If you are a designer, I would love to hear your suggestions for the best tools that I’ve missed, and I’d love to know how we programmers can do things better. With that, on to the tools… 1. Use Bootstrap If you’re not already using Bootstrap, start now. I really think that Bootstrap is one of the most significant technical achievements of the last few years: it democratizes the whole process of web design. Essentially, Bootstrap is a grid system, with a bunch of common elements. So you can lay your site out how you want, drop in simple elements like forms and tables, and get a good-looking, consistent result, without spending hours fiddling with CSS. You just need HTML. Another massive upside is that it makes it easy to make any site responsive, so you don’t have to worry about writing media queries. Go, get Bootstrap and check out the examples. To keep your site lightweight, you can customize your download to include only the elements you want. If you have more time, then Mark Otto’s article on why and how Bootstrap was created at Twitter is well worth a read. 2. Pimp Bootstrap Using Bootstrap is already a significant advance on not using Bootstrap, and massively reduces the tedium of front-end development. But you also run the risk of creating Yet Another Bootstrap Site, or Hack Day Design, as it’s known. If you’re really pressed for time, you could buy a theme from Wrap Bootstrap. These are usually created by professional designers, and will give a polish that we can’t achieve ourselves. Your site won’t be unique, but it will look good quickly. Luckily, it’s pretty easy to make Bootstrap not look too much like Bootstrap – using fonts, CSS effects, background images, colour schemes and so on. Most of the rest of this article covers different ways to achieve this. We are going to customize this Bootstrap example page. This already has some custom CSS in the <head>. We’ll pull it all out, and create a new CSS file, custom.css. Then we add a reference to it in the header. Now we’re ready to start customizing things. 3. Fonts Web fonts are one of the quickest ways to make your site look distinctive, modern, and less Bootstrappy, so we’ll start there. First, we can add some sweet fonts, from Google Web Fonts. The intimidating bit is choosing fonts that look nice together. Luckily, there are plenty of suggestions around the web: we’re going to use one of DesignShack’s suggested free Google Fonts pairings. Our fonts are Corben (for headings) and Nobile (for body copy). Then we add these files to our <head>: <link href=""http://fonts.googleapis.com/css?family=Corben:bold"" rel=""stylesheet"" type=""text/css""> <link href=""http://fonts.googleapis.com/css?family=Nobile"" rel=""stylesheet"" type=""text/css""> …and this to custom.css: h1, h2, h3, h4, h5, h6 { font-family: 'Corben', Georgia, Times, serif; } p, div { font-family: 'Nobile', Helvetica, Arial, sans-serif; } Now our example looks like this. It’s not going to win any design awards, but it’s immediately better: I also recommend the web font services Fontdeck or Typekit – these have a wider selection of fonts, and are worth the investment if you regularly need to make sites look good. For more font combinations, Just My Type suggests appealing pairings from Typekit. Finally, you can experiment with type pairing ideas at Type Connection. For the design background on pairing fonts, Typekit’s post is worth a read.
You know the grey, stripy, indefinably elegant background on 24ways.org? That. If only there was a superb resource listing attractive, free, ready-to-use textures… Oh wait, there’s Atle Mo’s Subtle Patterns. We’re going to use Cream Dust, for an effect that can only be described as subtle. We download the file to a new /img/ directory, then add this to the CSS file: body { background: url(/img/cream_dust.png) repeat 0 0; } Bang: For some design background on patterns, I recommend reading through Smashing Magazine’s guidelines on textures. (TL;DR version: use textures to enhance beauty, and clarify the information architecture of your site; but don’t overdo it, or inadvertently obscure your text.) Still more to do, though. Onwards. 5. Icons Last year’s 24 ways taught us to use icon fonts for our site icons. This is great for the time-pressed coder, because icon fonts don’t just cut down on HTTP requests – they’re a lot quicker to set up than image-based icons, too. Bootstrap ships with an extensive, free-for-commercial-use icon set in the shape of Font Awesome. Its icons are safe for screen readers, and can even be made to work in IE7 if needed (we’re not going to bother here). To start using these icons, just download Font Awesome, and add the /fonts/ directory to your site and the font-awesome.css file into your /css/ directory. Then add a reference to the CSS file in your header: <link rel=""stylesheet"" href=""/css/font-awesome.css""> Finally, we’ll add a truck icon to the main action button, as follows. Why a truck? Why not? <a class=""btn btn-large btn-success"" href=""#""><i class=""icon-truck""></i> Sign up today</a> We’ll also tweak our CSS file to stop the icon nudging up against the button text: .jumbotron .btn i { margin-right: 8px; } And this is how it looks: Not the most exciting change ever, but it livens up the page a bit. The licence is CC-BY-3.0, so we also include a mention of Font Awesome and its URL in the source code. If you’d like something a little more distinctive, Shifticons lets you pay a few cents for individual icons, with the bonus that you only have to serve the icons you actually use, which is more efficient. Its icons are also friendly to screen readers, but won’t work in IE7. 6. CSS3 The next thing you could do is add some CSS3 goodness. It can really help the key elements of the site stand out. If you are pressed for time, just adding box-shadow and text-shadow to emphasize headings and standouts can be useful: h1 { text-shadow: 1px 1px 1px #ccc; } .div-that-you-want-to-stand-out { box-shadow: 0 0 1em 1em #ccc; } We have a little more time though, so we’re going to do something more subtle. We’ll add a radial gradient behind the main heading, using an online gradient editor. The output is hefty, but you can see it in the CSS. Note that we also need to add the following to our HTML, for IE9 support: And the effect – I don’t know what a designer would think, but I like the way it makes the heading pop. For a crash course in useful modern CSS effects, I highly recommend CodeSchool’s online course in Functional HTML5 and CSS3. It costs money ($25 a month to subscribe), but it’s worth it for the time you’ll save. As a bonus, you also get access to some excellent JavaScript, Ruby and GitHub courses. (Incidentally, if you find yourself fighting with basic float and display attributes in CSS – and there’s no shame in it, CSS layout is not intuitive – I recommend the CSS Cross-Country course at CodeSchool.) 7. Add a twist We could leave it there, but we’re going to add a background image, and give the site some personality. This is the area of design that I think many programmers find most intimidating.
How do we create the graphics and photographs that a designer would use? The answer is iStockPhoto and its competitors – online image libraries where you can find and pay for images. They won’t be unique, but for our purposes, that’s fine. We’re going to use a Christmassy image. For a twist, we’re going to use Backstretch to make it responsive. We must pay for the image, then download it to our /img/ directory. Then, we set it as our <body>’s background-image, by including a JavaScript file with just the following line: $.backstretch(""/img/winter.jpg""); We also reset the subtle pattern to become the background for our container. It would look much better transparent, so we can use this technique in GIMP to make it see-through: .container-narrow { background: url(/img/cream_dust_transparent.png) repeat 0 0; } We also fiddle with the padding on body and .container-narrow a bit, and this is the result: (Aside: If this were a real site, I’d want to buy images in multiple sizes and ensure that Backstretch chose the most appropriately sized image for our screen, perhaps using responsive images.) How to find the effects that make a site interesting? I keep a set of bookmarks for interesting JavaScript and CSS effects I might want to use someday, from realistic shadows to animating grids. The JavaScript Weekly newsletter is a great source of ideas. 8. Colour schemes We’re just about getting there – though we’re probably past half an hour now – but that button and that menu still both look awfully Bootstrappy. Real sites, with real designers, have a colour palette, carefully chosen to harmonize and match the brand profile. For our purposes, we’re just going to borrow some colours from the image. We use GIMP’s colour picker tool to identify the hex values of the blue of the snow. Then we can use Color Scheme Designer to find contrasting, but complementary, colours. Finally, we use those colours for our central button. There are lots of tools to help us do this, such as Bootstrap Buttons. The new HTML is quite long, so I won’t paste it all here, but you can find it in the CSS file. We also reset the colour of the pills in the navigation menu, which is a bit easier: .nav-pills > .active > a, .nav-pills > .active > a:hover { background-color: #FF9473; } I’m not sure if the result is great to be honest, but at least we’ve lost those Bootstrap-blue buttons: Another way to do it, if you didn’t have an image to match, would be to borrow an attractive colour scheme. Colourlovers is a community where people create and share ready-made colour palettes. The key thing is to find a palette with an open licence, so you can legitimately use it. Unfortunately, you can’t search for palettes by licence type, but many do have open licences. Here’s a popular palette with a CC-BY-SA licence that allows reuse with attribution. As above, you can use the hex values from the palette in your custom CSS, and bask in the newly colourful results. 9. Read on With the above techniques, you can make a site that is starting to look slightly more professional, pretty quickly. If you have the time to invest, it’s well worth learning some design principles, if only so that design seems less intimidating and more like fun. As part of my design learning, I read a few introductory design books aimed at coders. The best I found was David Kadavy’s Design for Hackers: Reverse-Engineering Beauty, which explains the basic principles behind choosing colours, fonts, typefaces and layout.
In the introduction to his book, David writes: No group stands to gain more from design literacy than hackers do… The one subject that is exceedingly frustrating for hackers to try to learn is design. Hackers know that in order to compete against corporate behemoths with just a few lines of code, they need to have good, clear design, but the resources with which to learn design are simply hard to find. Well said. If you have half a day to invest, rather than half an hour, I recommend getting hold of David’s book. And the journey is over. Perhaps that took slightly more than half an hour, but with practice, using the above techniques can become second nature. What useful tools have I missed? Designers, how would you improve on the above? I would love to know, so please give me your views in the comments.",2012,Anna Powell-Smith,annapowellsmith,2012-12-16T00:00:00+00:00,https://24ways.org/2012/how-to-make-your-site-look-half-decent/,design 236,Extreme Design,"Recently, I set out with twelve other designers and developers for a 19th century fortress on the Channel Island of Alderney. We were going to /dev/fort, a sort of band camp for geeks. Our cohort’s mission: to think up, build and finish something – without readily available internet access. Alderney runway, photo by Chris Govias Wait, no internet? Well, pretty much. As the creators of /dev/fort, James Aylett and Mark Norman Francis, put it: “Imagine a place with no distractions – no IM, no Twitter”. But also no way to quickly look up a design pattern, code sample or source material. Like packing for camping, /dev/fort means bringing everything you’ll need on your back or your hard drive: from long johns to your favourite icon set. We got to work the first night discussing ideas for what we wanted to build. By the time breakfast was cleared up the next morning, we’d settled on Russ’s idea to make the Apollo 13 (PDF) transcript accessible. Days two and three were spent collaboratively planning (KJ style) what features we wanted to build, and unravelling the larger UX challenges of the project. The next five days were spent building it. Within 36 hours of touchdown at Southampton Airport, we launched our creation: spacelog.org The weather was cold, the coal fire less than ideal, food and supplies a hike away, and the process lightning-fast. A week of designing under extreme circumstances called for an extreme process. Some of this was driven by James’s and Norm’s experience running these things, but a lot of it materialised while we were there – especially for our three-strong design team (myself, Gavin O’Carroll and Chris Govias) who, though we knew each other, had never worked together as a group in this kind of scenario before. The outcome was a pretty spectacular process, with some key takeaways useful for any small group trying to build something quickly. What it’s like inside the fort /dev/fort has the pressure and pace of a hack day without being a hack day – primarily, no workshops or interruptions, but also a different mentality. While hack days are typically developer-driven with a ‘hack first, design later (if at all)’ attitude, James was quick to tell the team to hold off from writing any code until we had a plan. This put a healthy pressure on the design and product folks to slash through the UX problems before we started building. While the fort definitely had more of a hack day feel, all of us were familiar with Agile methods, so we borrowed a few useful techniques such as morning stand-ups and an emphasis on teamwork.
We cut some really good features to make our launch date, and chunked the work based on user goals, iterating as we went. What made this design process work? A golden ratio of teams My personal experience, both professionally and in free-form situations like this, is that there’s a tendency to get/hire a single designer. Leaders of businesses, founders of start-ups, organisers of events: one designer is not enough! Finding one ace-blooded designer who can ‘do everything’ will always result in bottlenecks and burnout. Like the nuances between different development languages, design is a multifaceted discipline, and very few can claim to be equally strong in every aspect. Overlap in skill set will result in a stronger, more robust interface. More importantly, however, having lots of designers to go around meant that we all had the opportunity to pair with developers, polishing the details that don’t usually get polished. As soon as we launched, the public reception of the design and UX was overwhelmingly positive (proof!). But also, a lot of people asked us who the designer was, attributing it to one person. While it’s important to note that everyone in our team was multitalented (and could easily shift between roles, helping us all stay unblocked), the golden ratio James and Norm devised was two product/developer folks, three interaction designers and eight developers. photo by Ben Firshman Equality inside the fortress walls Something magical about the fort is how everyone leaves the outside world on the drawbridge. Job titles, professional status, Twitter followers, and so on. Like scout camp, a mutual respect and trust is expected of all the participants. Like extreme programming, extreme design requires us all to be equal partners in a collaborative team. I think this is especially worth noting for designers; our past is filled with the clear hierarchy of the traditional studio system which, however important for taste and style, seems less compatible with modern web/software development methods. Being equal doesn’t mean being the same, however. We established clear roles and teams for ourselves on the second day, deferring to the relevant person when a decision needed to be made. As the interface coalesced, the designers and developers took ownership over certain parts to ensure the details got looked after, while staying open to ideas and revisions from the rest of the cohort. Create a space where everyone who enters is equal, but be sure to establish clear roles. Even if it’s just for a short while, the environment will be beneficial. photo by Ben Firshman Hang your heraldry from the rafters Forts and castles are full of lore: coats of arms; paintings of battles; suits of armour. It’s impossible not to be surrounded by these stories, words and ways of thinking. Like the whiteboards on the walls, putting organisational lore in your physical surroundings makes it impossible not to see. Ryan Alexander brought some of those static-cling whiteboard sheets which were quickly filled with use cases; IA; team roles; and, most importantly, a glossary. As soon as we started working on the project, we realised we needed to get clear on what certain words meant: what was a logline, a range, a phase, a key moment? Were the back-end people using these words in the same way design and product were? Quickly writing up a glossary of terms meant everyone was instantly speaking the same language.
There was no “Ah, I misunderstood because in the data structure x means y” or, even worse, accidental seepage of technical language into the user interface copy. Put a glossary of your internal terminology somewhere big and fat on the wall. Stand around it and argue until you agree on what it says. Leave it up; don’t underestimate the power of ambient communication and physical reference. Plan more, download less While internet access is forbidden inside the fort, we did go on downloading expeditions: NASA photography; code documentation; and so on. The project wouldn’t have been possible without a few trips to the web. We had two lists on the wall: groceries and supplies; internets – “loo roll; Tom Stafford photo”. This changed our usual design process, forcing us to plan carefully and think of what we needed ahead of time. Getting to the internet was a thirty-minute hike up a snow-covered cliff to the town airport, so you really had to need it, too. The path to the internet For the visual design, especially, this resulted in more focus up front, and communication between the designers on what assets we required. It made us make decisions earlier and stick with them, creating less distraction and churn later in the process. Try it at home: unplug once you’ve got the things you need. As an artist, it’s easier to let your inner voice shine through if you’re not looking at other people’s work while creating. Social design Finally, our design team experimented with a collaborative approach to wireframing. Once we had collectively nailed down use cases, IA, user journeys and other critical artefacts, we tried a pairing approach. One person drew in Illustrator in real time as the other two articulated what to draw. (This would work equally well with two people, but with three it meant that one of us could jump up and consult the lore on the walls or clarify a technical detail.) The result: we ended up considering more alternatives and quickly rallying around one solution, and resolved difficult problems more quickly. At a certain stage we discovered it was more efficient for one person to take over – this happened around the time when the basic wireframes existed in Illustrator and we’d collectively run through the use cases, making sure that everything was accounted for in a broad sense. At this point, take a break, go have a beer, and give yourself a pat on the back. Put the files somewhere accessible so everyone can use them as their base, and divide up the more detailed UI problems, screens or journeys. At this level of detail it’s better to have your personal headspace. Gavin called this ‘social design’. Chatting and drawing in real time turned what was normally a rather solitary act into a very social process, with some really promising results. I’d tried something like this before with product or developer folks, and it can work – but there’s something really beautiful about switching places and everyone involved being equally quick at drawing. That’s not something you get with non-designers, and frequent swapping of the ‘driver’ and ‘observer’ roles is a key aspect to pairing. Tackle the forest collectively and the trees individually – it will make your framework more robust and your details more polished. Win/win. The return home Our flight off the island was delayed, which – grateful as we were to see a 3G signal on our phones again – allowed for a flurry of domain name look-ups, Twitter catch-up, and e-mails to loved ones.
A week in an isolated fort really made me appreciate continuous connectivity, but also just how unique some of these processes might be. You just never know what crazy place you might be designing from next.",2010,Hannah Donovan,hannahdonovan,2010-12-09T00:00:00+00:00,https://24ways.org/2010/extreme-design/,process 202,Design Systems and CSS Grid,"Recently, my client has been looking at creating a few new marketing pages for their website. They currently have a design system in place but they’re looking to push this forward into 2018 with some small and possibly some larger changes. To start with we are creating a couple of new marketing pages. As well as making use of existing components within the design system’s component library there are a couple of new components. Looking at the first couple of Sketch files, I felt this would be a great opportunity to use CSS Grid: the newer components need to be laid out on the page, and Grid would help with this perfectly. As well as the layout of the new components and the text within them, imagery would be used that breaks out of the grid and pushes itself into the spaces where the text is aligned. The existing grid system When the site was rebuilt in 2015 the team decided to make use of Sass and Susy, a “lightweight grid-layout engine using Sass”. It was built separating the grid system from the components that would be laid out on the page with a container, a row, an optional column, and a block. To make use of the grid system on a page for a component that would take the full width of the row you would have to write something like this:
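A sketch of that markup, from memory – treat the class names as illustrative rather than the codebase’s exact ones:

<div class=""container"">
  <div class=""row"">
    <div class=""block"">
      <!-- the component’s own markup goes in here -->
    </div>
  </div>
</div>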
Using a grid system similar to this can easily create quite the tag soup. It could fill the HTML with divs that may become complex to understand and difficult to edit. Although there is this reliance on several
divs to lay out the components on a page, it does allow a tidy way to place the component code within that page. It abstracts the layout of the page to its own code, its own system, so the components can ‘fit’ where needed. The requirements of the new grid system Moving forward I set myself some goals for what I’d like to have achieved in this new grid system: It needs to behave like the existing grid systems We are not ripping up the existing grid system, it would be too much work, for now, to retrofit all of the existing components to work in a grid that has a different number of columns, and spacing at various viewport widths. Allow full-width components Currently the grid system is a 14 column grid that becomes centred on the page when the viewport is wide enough. We have, in the past, written some CSS that would allow for a full-width component, but this had always felt like a hack. We want the option to have a full-width element as part of the new grid system, not something that needs CSS to fight against. Less of a tag soup Ideally we want to end up writing less HTML to lay out the page. Although the existing system can be quite clear as to what each element is doing, it can also become a little laborious in working out what each grid row or block is doing where. I would like to move the layout logic to CSS as much as is possible, potentially creating some utility classes or additional ‘layout classes’ for the components. Easier for people to use and author With many people using the existing design system’s codebase we need to create a new grid system that is as easy or easier to use than the existing one. I think and hope this would be helped by removing as many
divs as needed, and would require new documentation and examples, and potentially some initial training. Separating layout from style There still needs to be a separation of layout from the styles for the component. To allow for the component itself to be placed wherever needed in the page we need to make sure that the CSS for the layout is a separate entity to the CSS for that styling. With these base requirements I took to CodePen and started working on some throwaway code to get started. Making the new grid(s) The Full-Width Grid To start with I created a grid that had three columns, one for the left, one for the middle, and one for the right. This would give the full-width option to components. Thankfully, one of Rachel Andrew’s many articles on Grid discussed this exact requirement of the new grid system to break out with Grid. I took some of the code in the examples and edited it to make the grid we needed. .container { display: grid; grid-template-columns: [full-start] minmax(.75em, 1fr) [main-start] minmax(0, 1008px) [main-end] minmax(.75em, 1fr) [full-end]; } We are declaring a grid, we have four grid column lines which we name and we define how the three columns they create react to the viewport width. We have a left and right column that have a minimum of 12px, and a central column with a maximum width of 1008px. Both left and right columns fill up any additional space if the viewport is wider than 1032px. We are also not declaring any gutters to this grid, the left and right columns would act as gutters at smaller viewports. At this point I noticed that older versions of Sass cannot parse the brackets in this code. To combat this I used Sass’ unquote method to wrap around the value of grid-template-columns. .container { display: grid; grid-template-columns: unquote("" [full-start] minmax(.75em, 1fr) [main-start] minmax(0, 1008px) [main-end] minmax(.75em, 1fr) [full-end] ""); } The existing codebase makes use of Sass variables, mixins and functions so to remove that would be a problem, but luckily the version of Sass used is up-to-date (note: example CodePens will be using CSS). The initial full-width grid displays on a webpage as below: The 14 column grid I decided to work out the 14 column grid as a separate prototype before working out how it would fit within the full-width grid. This grid is very similar to the 12 column grids that have been used in web design. Here we need 14 columns with a gutter between each one. Along with the many other resources on Grid, Mozilla’s MDN site had a page on common layouts using CSS Grid. This gave me the perfect CSS I needed to create my grid and I edited it as required: .inner { display: grid; grid-template-columns: repeat(14, [col-start] 1fr); grid-gap: .75em; } We, again, are declaring a grid, and we are splitting up the available space by creating 14 columns with 1 fr-unit and giving each one a starting line named col-start. This grid would display on a web page as below: Bringing the grids together Now that we have got the two grids we need to help fulfil our requirements, we need to put them together so that they are actually what we need. The subgrid There is no subgrid in CSS, yet. To work around this for the new grid system we could nest the 14 column grid inside the full-width grid. In the HTML we nest the 14 column inner grid inside the full-width container.
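In sketch form the nesting looks like this – illustrative markup, with the class names matching the CSS:

<div class=""container"">
  <div class=""inner"">
    <!-- components are laid out on the 14 column grid -->
  </div>
</div>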
So that the inner knows where to be laid out within the container, we tell it what column to start and end with – with this code it is the start and end of the main column. .inner { display: grid; grid-column: main-start / main-end; grid-template-columns: repeat(14, [col-start] 1fr); grid-gap: .75em; } The CSS for the container remains unchanged. This works, but we have added another div to our HTML. One of our requirements is to try and remove the potential for tag soup. The faux subgrid I wanted to see if it would be possible to place the CSS for the 14 column grid within the CSS for the full-width grid. I replaced the CSS for the main grid and added the grid-gap to the .container. .container { display: grid; grid-gap: .75em; grid-template-columns: [full-start] minmax(.75em, 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(.75em, 1fr) [full-end]; } What this gave me was a 16 column grid. I was unable to find a way to tell the main grid, the grid betwixt main-start and main-end, to be a maximum of 1008px as required. I trawled the internet to find if it was possible to create our main requirement, a 14 column grid which also allows for full-width components. I found that we could not reverse minmax to minmax(1fr, 72px) as 1fr is not allowed as a minimum if there is a maximum. I tried working out if we could make the min larger than its max but in minmax it would be ignored. I was struggling; I was hoping for a cleaner version of the grid system we currently use but was getting to the point where needing that extra
div would be a necessity. At 3 in the morning, when I was failing to get to sleep, my mind happened upon a question: “Could you use calc?” At some point I drifted back to sleep so the next day I set about seeing if this was possible. I knew that the maximum width of the central grid needed to be 1008px. The left and right columns needed to be however many pixels were left in the viewport divided by 2. In CSS it looked like I would need to use calc twice. The first to take away 1008px from 100% of the viewport width, and the second to divide that result by 2. calc(calc(100% - 1008px) / 2) The CSS above was part of the value that I would need to include in the declaration for the grid. .container { display: grid; grid-gap: .75em; grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; } We have created the grid required. A full-width grid, with a central 14 column grid, using fewer
div elements. See the Pen Design Systems and CSS Grid, 6 by Stuart Robson (@sturobson) on CodePen. Success! Progressive enhancement Now that we have created the grid system required, we need to back-track a little. Not all browsers support Grid, although over the last nine months or so this has gotten a lot better. However, there will still be visitors whose browsers potentially won’t have support. The effort required to make the grid system fall back for these browsers depends on your product or site’s browser support. To determine if we will be using Grid or not for a browser we will make use of feature queries. This would mean that any version of Internet Explorer will not get Grid, as well as some mobile browsers and older versions of other browsers. @supports (display: grid) { /* Styles for browsers that support Grid */ } If a browser does not pass the requirements for @supports we will fall back to using flexbox where possible, and if that is not supported we are happy for the page to be laid out in one column. A website doesn’t have to look the same in every browser after all. A responsive grid We started with the big picture, how the grid would be at a large viewport, and the grid system we have created gets a little silly when the viewport gets smaller. At smaller viewports we have a single column layout where every item of content, every component, stacks atop each other. We don’t start to define a grid before the viewport gets to 700px wide. At this point we have an 8 column grid and if the viewport gets to 1100px or wider we have our 14 column grid. /* * to start with there is no 'grid' just a single column */ .container { padding: 0 .75em; } /* * when we get to 700px we create an 8 column grid with * a left and right area to breakout of the grid. */ @media (min-width: 700px) { .container { display: grid; grid-gap: .75em; grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(8, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; padding: 0; } } /* * when we get to 1100px we create a 14 column grid with * a left and right area to breakout of the grid. */ @media (min-width: 1100px) { .container { grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; } } Being explicit in creating this there is some repetition that we could avoid, so we will define the number of columns for the inner grid by using a Sass variable or CSS custom properties (more commonly termed CSS variables). Let’s use CSS custom properties. We need to declare the variable first by adding it to our stylesheet. :root { --inner-grid-columns: 8; } We then need to edit a few more lines. First make use of the variable for this line. repeat(8, [col-start] 1fr) /* replace with */ repeat(var(--inner-grid-columns), [col-start] 1fr) Then at the 1100px breakpoint we would only need to change the value of --inner-grid-columns. @media (min-width: 1100px) { .container { grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; } } /* replace with */ @media (min-width: 1100px) { .container { --inner-grid-columns: 14; } } See the Pen Design Systems and CSS Grid, 8 by Stuart Robson (@sturobson) on CodePen. The final grid system We have finally created our new grid for the design system.
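Pulled together – consolidating the fragments above into one place – the responsive container looks like this:

/* no grid below 700px, just a single padded column */
:root { --inner-grid-columns: 8; }

.container { padding: 0 .75em; }

@media (min-width: 700px) {
  .container {
    display: grid;
    grid-gap: .75em;
    grid-template-columns:
      [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr)
      [main-start] repeat(var(--inner-grid-columns), [col-start] 1fr) [main-end]
      minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end];
    padding: 0;
  }
}

@media (min-width: 1100px) {
  .container { --inner-grid-columns: 14; }
}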
It stays true to the existing grid in place, adds the ability to break out of the grid, and removes a
div that could have been needed for the nested 14 column grid. We can move on to the new component. Creating a new component Back to the new components we need to create. To me there are two components, one of which is a slight variant of the first. This component contains a title, subtitle, a paragraph (potentially paragraphs) of content, a list, and a call to action. To start with we should write the HTML for the component, something like this:
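A sketch of that component markup – the class names match the CSS that follows, while the choice of elements is illustrative:

<div class=""features"">
  <h2 class=""features__title"">Title</h2>
  <h3 class=""features__subtitle"">Subtitle</h3>
  <div class=""features__content"">
    <p>Content</p>
  </div>
  <ul class=""features__list"">
    <li>List item</li>
  </ul>
  <a class=""features__link"" href=""/"">Call to action</a>
</div>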

To place the component on the existing grid is fine, but as child elements are not affected by the container grid we need to define another grid for the features component. As the grid doesn’t get invoked until 700px it is possible to negate the need for a media query. .features { grid-column: col-start 1 / span 6; } @supports (display: grid) { @media (min-width: 1100px) { .features { grid-column-end: 9; } } } We can also avoid duplication of declarations by making use of the grid-column shorthand and longhand. We need to write a little more CSS for the variant component, the one that will sit on the right side of the page too. .features:nth-of-type(even) { grid-column-start: 4; grid-row: 2; } @supports (display: grid) { @media (min-width: 1100px) { .features:nth-of-type(even) { grid-column-start: 9; grid-column-end: 16; } } } We cannot place the items within features on the container grid as they are not direct children. To make this work we have to define a grid for the features component. We can do this by defining the grid at the first breakpoint of 700px, making use of CSS custom properties again to define how many columns there will need to be. .features { grid-column: col-start 1 / span 6; --features-grid-columns: 5; } @supports (display: grid) { @media (min-width: 700px) { .features { display: grid; grid-gap: .75em; grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr); } } } @supports (display: grid) { @media (min-width: 1100px) { .features { grid-column-end: 9; --features-grid-columns: 7; } } } See the Pen Design Systems and CSS Grid, 10 by Stuart Robson (@sturobson) on CodePen. Laying out the parts Looking at the spec and reading several articles, I feel there are two ways that I could lay out the text of this component on the grid. We could use the grid-column shorthand that incorporates grid-column-start and grid-column-end, or we can make use of grid-template-areas. grid-template-areas allows for a nice visual way of representing how the parts of the component would be laid out. We can take the mock of the features on the grid and represent them in text in our CSS. Within the .features rule we can add the relevant grid-template-areas value to represent the above. .features { display: grid; grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr); grid-template-areas: "". title title title title title title"" "". subtitle subtitle subtitle subtitle subtitle . "" "". content content content content . . "" "". list list list . . . "" "". . . . link link link ""; } In order to make the variant of the component we would have to create the grid-template-areas for that component too. We then need to tell each element of the component what grid-area it should be placed in within the grid. .features__title { grid-area: title; } .features__subtitle { grid-area: subtitle; } .features__content { grid-area: content; } .features__list { grid-area: list; } .features__link { grid-area: link; } See the Pen Design Systems and CSS Grid, 12 by Stuart Robson (@sturobson) on CodePen. The other way would be to use the grid-column shorthand and the grid-column-start and grid-column-end we have used previously.
.features .features__title { grid-column: col-start 2 / span 6; } .features .features__subtitle { grid-column: col-start 2 / span 5; } .features .features__content { grid-column: col-start 2 / span 4; } .features .features__list { grid-column: col-start 2 / span 4; } .features .features__link { grid-column: col-start 5 / span 3; } For the variant of the component we can use the grid-column-start property as it will inherit the span defined in the grid-column shorthand. .features:nth-of-type(even) .features__title { grid-column-start: col-start 1; } .features:nth-of-type(even) .features__subtitle { grid-column-start: col-start 1; } .features:nth-of-type(even) .features__content { grid-column-start: col-start 3; } .features:nth-of-type(even) .features__list { grid-column-start: col-start 3; } .features:nth-of-type(even) .features__link { grid-column-start: col-start 1; } See the Pen Design Systems and CSS Grid, 14 by Stuart Robson (@sturobson) on CodePen. I think, for now, we will go with using grid-column properties rather than grid-template-areas. The repetition needed to create the variant feels like too much when we can change the grid-column-start instead, keeping the component’s layout properties tied a little closer to its elements rather than to the grid. Some additional decisions The current component library has existing styles for titles, subtitles, lists, paragraphs of text and calls to action. These are name-spaced so that they shouldn’t clash with any other components. Looking forward there will be a chance that other products adopt the component library, but they may bring their own styles for titles, subtitles, etc. One way that we could write our code now for that near-future possibility is to make sure our classes are working hard. Using class-attribute selectors we can target part of the class attributes that we know the elements in the component will have using *=. .features [class*=""title""] { grid-column: col-start 2 / span 6; } .features [class*=""subtitle""] { grid-column: col-start 2 / span 5; } .features [class*=""content""] { grid-column: col-start 2 / span 4; } .features [class*=""list""] { grid-column: col-start 2 / span 4; } .features [class*=""link""] { grid-column: col-start 5 / span 3; } See the Pen Design Systems and CSS Grid, 15 by Stuart Robson (@sturobson) on CodePen. Although the component we have created has a title, subtitle, paragraphs, a list, and a call to action, there may be a time where one or more of these is not required or available. One thing I found out is that if the element doesn’t exist then grid will not create space for it. This may be obvious, but it can be really helpful in making a nice malleable component. We have only looked at columns; as existing components have their own spacing for the vertical rhythm of the page, we don’t really want to have them take up equal space in the component and just take up the space as needed. We can do this by adding grid-auto-rows: min-content; to our .features. This is useful if you also need your component to take up a height that is more than the component itself. The grid of the future From prototyping this new grid and components in CSS Grid, I’ve found it a fantastic way to reimagine how we can create a layout or grid system for our sites. It gives us options to create the same layouts in differing ways that could suit a project and its needs. It allows us to carry on – if we choose to – using a
div-based grid but swapping out floats for CSS Grid, or to tie it to our components so they have specific places to go depending on what component is being used. Or we could have several ‘grid components’ in our design system that we could use to lay out various components throughout a page. If you find yourself tasked with creating some new components for your design system, try it. If you are starting from scratch, I believe you really should start with CSS Grid for your layout. It really feels like the possibilities are endless in terms of layout for the web. Resources Here are just a few resources I have pawed over these last few weeks whilst getting acquainted with CSS Grid. A collection of CodePens from this article Grid by Example from Rachel Andrew A Complete Guide to CSS Grid on Codrops from Hui Jing Chen Rachel Andrew’s Blog Archive tagged: cssgrid CSS Grid Layout Examples MDN’s CSS Grid Layout A Complete Guide to Grid from CSS-Tricks CSS Grid Layout Module Level 1 Specification",2017,Stuart Robson,stuartrobson,2017-12-12T00:00:00+00:00,https://24ways.org/2017/design-systems-and-css-grid/,code 91,Infinite Canvas: Moving Beyond the Page,"Remember Web 2.0? I do. In fact, that phrase neatly bifurcates my life on the internet. Pre-2.0, I was occupied by chatting on AOL and eventually by learning HTML so I could build sites on Geocities. Around 2002, however, I saw a WYSIWYG demo in Dreamweaver. The instructor was dragging boxes and images around a canvas. With a few clicks he was able to build a dynamic, single-page interface. Coming from the world of tables and inline HTML styles, I was stunned. As I entered college the next year, the web was blossoming: broadband, Wi-Fi, mobile (proud PDA owner, right here), CSS, Ajax, Bloglines, Gmail and, soon, Google Maps. I was a technology fanatic and a hobbyist web developer. For me, the web had long been informational. It was now rapidly becoming something else, something more: sophisticated, presentational, actionable. In 2003 we watched as the internet changed. The predominant theme of those early Web 2.0 years was the withering of Internet Explorer 6 and the triumph of web standards. Upon cresting that mountain, we looked around and collectively breathed the rarefied air of pristine HTML and CSS, uncontaminated by toxic hacks and forks – only to immediately begin hurtling down the other side at what is, frankly, terrifying speed. Ten years later, we are still riding that rocket. Our days (and nights) are spent cramming for exams on CSS3 and RWD and Sass and RESS. We are the proud, frazzled owners of tiny pocket computers that annihilate the best laptops we could have imagined, and the architects of websites that are no longer restricted to big screens nor even segregated by device. We dragoon our sites into working any time, anywhere. At this point, we can hardly ask the spec developers to slow down to allow us to catch our breath, nor should we. It is, without a doubt, a most wonderful time to be a web developer. But despite the newfound luxury of rounded corners, gradients, embeddable fonts, low-level graphics APIs, and, glory be, shadows, the canyon between HTML and native appears to be as wide as ever. The improvements in HTML and CSS have, for the most part, been conveniences rather than fundamental shifts. What I’d like to do now, if you’ll allow me, is outline just a few of the remaining gaps that continue to separate web sites and applications from their native companions.
What I’d like for Christmas There is one irritant which is the grandfather of them all, the one from which all others flow and have their being, and it is, simply, the page refresh. That’s right, the foundational principle of the web is our single greatest foe. To paraphrase a patron saint of designers everywhere, if you see a page refresh, we blew it. The page refresh brings with it, of course, many noble and lovely benefits: addressability, for one; and pagination, for another. (See also caching, resource loading, and probably half a dozen others.) Still, those concerns can be answered (and arguably answered more compellingly) by replacing the weary page with the young and hearty document. Flash may be dead, but it has many lessons yet to bequeath. Preparing a single document when the site loads allows us to engage the visitor in a smooth and engrossing experience. We have long known this, of course. Twitter was not the first to attempt, via JavaScript, to envelop the user in a single-page application, nor the first to abandon it. Our shared task is to move those technologies down the stack, to make them more primitive, so that the next Twitter can be built with the most basic combination of HTML and CSS rather than relying on complicated, slow, and unreliable scripted solutions. So, let’s take a look at what we can do, right now, that we might have a better idea of where our current tools fall short. A print magazine in HTML clothing Like many others, I suspect, one of my earliest experiences with publishing was laying out newsletters and newspapers on a computer for print. If you’ve ever used InDesign or Quark or even Microsoft Publisher, you’ll remember reflowing content from page to page. The advent of the internet signaled, in many ways, the abandonment of that model. Articles were no longer constrained by the physical limitations of paper. In shedding our chains, however, it is arguable that we’ve lost something useful. We had a self-contained and complete package, a closed loop. It was a thing that could be handled and finished, and doing so provided a sense of accomplishment that our modern, infinitely scrolling, ever-fractal web of content has stolen. For our purposes today, we will treat 24 ways as the online equivalent of that newspaper or magazine. A single year’s worth of articles could easily be considered an issue. Right now, navigating between articles means clicking on the article you’d like to view and being taken to that specific address via a page reload. If Drew wanted to, it wouldn’t be difficult to update the page in place (via JavaScript) and change the address (again via JavaScript with the History API) to reflect the new content found at the new location. But what if Drew wanted to do that without JavaScript? And what if he wanted the site to not merely load the content but actually whisk you along the page in a compelling and delightful way, à la the Mag+ demo we all saw a few years ago when the iPad was first introduced? Uh, no. We’re all familiar with websites that have attempted to go beyond the page by weaving many chunks of content together into a large document and for good reason. There is tremendous appeal in opening and exploring the canvas beyond the edges of our screens. In one rather straightforward example from last year, Mozilla contacted Full Stop to build a website promoting Aza Raskin’s proposal for a set of Creative Commons-style privacy icons. 
Like a lot of the sites we build (including our own), the amount of information we were presenting was minimal. In these instances, we encourage our clients to consider including everything on a single page. The result was a horizontally driven site that was, if not whimsical, at least clever and attractive to the intended audience. An experience that is taken for granted when using device-native technology is utterly, maddeningly impossible to replicate on the web without jumping through JavaScript hoops. In another, more complex example, we again had the pleasure of working with Aza earlier this year, this time on a redesign of the Massive Health website. Our assignment was to design and build a site that communicated Massive’s commitment to modern personal health. The site had to be visually and interactively stunning while maintaining a usable and clear interface for the casual visitor. Our solution was to extend the infinite company logo into a ribbon that carried the visitor through the site narrative. It also meant we’d be asking the browser to accommodate something it was never designed to handle: a non-linear design. (Be sure to play around. There’s a lot going on under the hood. We were also this close to a ZUI, if WebKit didn’t freak out when pages were scaled beyond 10×.) Despite the apparent and deliberate design simplicity, the techniques necessary to implement it are anything but. From updating the URL to moving the visitor from section to section, we’re firmly in JavaScript territory. And that’s a shame. What can we do? We might not be able to specify these layouts in HTML and CSS just yet, but that doesn’t mean we can’t learn a few new tricks while we wait. Let’s see how close we can come to recreating the privacy icons design, the Massive design, or the Mag+ design without resorting to JavaScript. A horizontally paginated site The first thing we’re going to need is the concept of a page within our HTML document. Using plain old HTML and CSS, we can stack a series of
divs sideways (with a little assist from our new friend, the viewport-width unit, not that he was strictly necessary). All we need to know is how many pages we have. (And, boy, wouldn’t it be nice to be able to know that without having to predetermine it or use JavaScript?) .window { overflow: hidden; width: 100%; } .pages { width: 200vw; } .page { float: left; overflow: hidden; width: 100vw; } If you look carefully, you’ll see that the conceit we’ll use in the rest of the demos is in place. Despite the document containing multiple pages, only one is visible at any given time. This allows us to keep the user focused on the task (or content) at hand. By the way, you’ll need to use a modern, WebKit-based browser for these demos. I recommend downloading the WebKit nightly builds, Chrome Canary, or being comfortable with setting flags in Chrome. A horizontally paginated site, with transitions Ah, here’s the rub. We have functional navigation, but precious few cues for the user. It’s not much good shoving the visitor around various parts of the document if they don’t get the pleasant whooshing experience of the journey. You might be thinking, what about that new CSS selector, target-something…? Well, my friend, you’re on the right track. Let’s test it. We’re going to need to use a bit of sleight of hand. While we’d like to simply offset the containing element by the number of pages we’re moving (like we did on Massive), CSS alone can’t give us that information, and that means we’re going to need to fake it by expanding and collapsing pages as you navigate. Here are the bits we’re going to need: .page { -webkit-transition: width 1s; // Naturally you're going to want to include all the relevant prefixes here float: left; left: 0; overflow: hidden; position: relative; width: 100vw; } .page:not(:target) { width: 0; } Ah, but we’re not fooling anyone with that trick. As soon as you move beyond a single page, the visitor’s disbelief comes tumbling down when the linear page transitions are unaffected by the distance the pages are allegedly traveling. And you may have already noticed an even more fatal flaw: I secretly linked you to the first page rather than the unadorned URL. If you visit the same page with no URL fragment, you get a blank screen. Sure, we could force a redirect with some server-side trickery, but that feels like cheating. Perhaps if we had the CSS4 subject selector we could apply styles to the parent based on the child being targeted by the URL. We might also need a few more abilities, like determining the total number of pages and having relative sibling selectors (e.g. nth-sibling), but we’d sure be a lot closer. A horizontally paginated site, with transitions – no cheating Well, what other cards can we play? How about the checkbox hack? Sure, it’s a garish trick, but it might be the best we can do today. Check it out. label { cursor: pointer; } input { display: none; } input:not(:checked) + .page { max-height: 100vh; width: 0; } Finally, we can see the first page thanks to the state we are able to set on the appropriate radio button. Of course, now we don’t have URLs, so maybe this isn’t a winning plan after all. While our HTML and CSS toolkit may feel primitive at the moment, we certainly don’t want to sacrifice the addressability of the web. If there’s one bedrock principle, that’s it. A horizontally paginated site, with transitions – no cheating and a gorgeous homepage Gorgeous may not be the right word, but our little magazine is finally shaping up.
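Before we move on: the markup underpinning that radio button trick looks something like this (a sketch – the demo’s actual structure may differ slightly, but each input has to sit immediately before its page for the sibling selector to work):

<div class=""pages"">
  <label for=""page-1"">One</label>
  <label for=""page-2"">Two</label>
  <input type=""radio"" name=""page"" id=""page-1"" checked>
  <div class=""page"">Page one content</div>
  <input type=""radio"" name=""page"" id=""page-2"">
  <div class=""page"">Page two content</div>
</div>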
Thanks to the CSS regions spec, we’ve got an exciting new power, the ability to begin an article in one place and bend it to our will. (Remember, your everyday browser isn’t going to work for these demos. Try the WebKit nightly build to see what we’re talking about.) As with the rest of the examples, we’re clearly abusing these features. Off-canvas layouts (you can thank Luke Wroblewski for the name) are simply not considered to be normal patterns… yet. Here’s a quick look at what’s going on: .excerpt-container { float: left; padding: 2em; position: relative; width: 100%; } .excerpt { height: 16em; } .excerpt_name_article-1, .page-1 .article-flow-region { -webkit-flow-from: article-1; } .article-content_for_article-1 { -webkit-flow-into: article-1; } The regions pattern is comprised of at least three components: a beginning; an ending; and a source. Using CSS, we’re able to define specific elements that should be available for the content to flow through. If magazine-style layouts are something you’re interested in learning more about (and you should be), be sure to check out the great work Adobe has been doing. Looking forward, and backward As designers, builders, and consumers of the web, we share a desire to see the usability and enjoyability of websites continue to rise. We are incredibly lucky to be working in a time when a three-month-old website can be laughably outdated. Our goal ought to be to improve upon both the weaknesses and the strengths of the web platform. We seek not only smoother transitions and larger canvases, but fine-grained addressability. Our URLs should point directly and unambiguously to specific content elements, be they pages, sections, paragraphs or words. Moreover, off-screen design patterns are essential to accommodating and empowering the multitude of devices we use to access the web. We should express the desire that interpage links take advantage of the CSS transitions which have been put to such good effect in every other aspect of our designs. Transitions aren’t just nice to have, they’re table stakes in the highly competitive world of native applications. The tools and technologies we have right now allow us to create smart, beautiful, useful webpages. With a little help, we can begin removing the seams and sutures that bind the web to an earlier, less sophisticated generation.",2012,Nathan Peretic,nathanperetic,2012-12-21T00:00:00+00:00,https://24ways.org/2012/infinite-canvas-moving-beyond-the-page/,code 136,Making XML Beautiful Again: Introducing Client-Side XSL,"Remember that first time you saw XML and got it? When you really understood what was possible and the deep meaning each element could carry? Now when you see XML, it looks ugly, especially when you navigate to a page of XML in a browser. Well, with every modern browser now supporting XSL 1.0, I’m going to show you how you can turn something as simple as an ATOM feed into a customised page using a browser, Notepad and some XSL. What on earth is this XSL? XSL is a family of recommendations for defining XML document transformation and presentation. It consists of three parts: XSLT 1.0 – Extensible Stylesheet Language Transformation, a language for transforming XML XPath 1.0 – XML Path Language, an expression language used by XSLT to access or refer to parts of an XML document. 
(XPath is also used by the XML Linking specification) XSL-FO 1.0 – Extensible Stylesheet Language Formatting Objects, an XML vocabulary for specifying formatting semantics XSL transformations are usually a one-to-one transformation, but with newer versions (XSL 1.1 and XSL 2.0) it’s possible to create many-to-many transformations too. So now you have an overview of XSL, on with the show… So what do I need? So to get going you need a browser that supports client-side XSL transformations, such as Firefox, Safari, Opera or Internet Explorer. Second, you need a source XML file – for this we’re going to use an ATOM feed from Flickr.com. And lastly, you need an editor of some kind. I find Notepad++ quick for short XSLs, while I tend to use XMLSpy or Oxygen for complex XSL work. Because we’re doing a client-side transformation, we need to modify the XML file to tell it where to find our yet-to-be-written XSL file. Take a look at the source XML file, which originates from my Flickr photos tagged sky, in ATOM format. The top of the ATOM file now has an additional instruction, as can be seen on Line 2 below. This instructs the browser to use the XSL file to transform the document. Your first transformation Your first XSL will look something like the sketch below, and is pretty much the starting point for most XSL files. You will notice the standard XML processing instruction at the top of the file (line 1). We then switch into XSL mode using the XSL namespace on all XSL elements (line 2). In this case, we have added namespaces for ATOM (line 4) and Dublin Core (line 5). This means the XSL can now read and understand those elements from the source XML. After we define all the namespaces, we then move on to the xsl:output element (line 6). This enables you to define the final method of output. Here we’re specifying html, but you could equally use XML or Text, for example. The encoding attributes on each element do what they say on the tin. As with all XML, of course, we close every element including the root. The next stage is to add a template – in this case one which matches the root of the document and writes out the start of our HTML page, with the title “Making XML beautiful again : Transforming ATOM”. The beautiful thing about XSL is its English syntax, if you say it out loud it tends to make sense. The / value for the match attribute on line 8 is our first example of XPath syntax. The expression / matches any element – so this will match against any element in the document. As the first element in any XML document is the root element, this will be the one matched and processed first. Once we get past our standard start of a HTML document, the only instruction remaining in this template is to look for and match the feed element using the apply-templates instruction in line 14.
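Here is a condensed sketch of that opening (the line numbers quoted in the prose refer to the full-length sample in the download, which this approximates):

<?xml version=""1.0"" encoding=""utf-8""?>
<xsl:stylesheet version=""1.0""
    xmlns:xsl=""http://www.w3.org/1999/XSL/Transform""
    xmlns:atom=""http://www.w3.org/2005/Atom""
    xmlns:dc=""http://purl.org/dc/elements/1.1/"">
  <xsl:output method=""html"" encoding=""utf-8""/>

  <xsl:template match=""/"">
    <html>
      <head>
        <title>Making XML beautiful again : Transforming ATOM</title>
      </head>
      <body>
        <xsl:apply-templates select=""atom:feed""/>
      </body>
    </html>
  </xsl:template>

  <xsl:template match=""atom:feed"">
    <h1><xsl:value-of select=""atom:title""/></h1>
    <ul>
      <xsl:apply-templates select=""atom:entry""/>
    </ul>
  </xsl:template>

  <!-- the entry and category templates, sketched further on, follow here -->
</xsl:stylesheet>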

This new template matches the feed element and starts to write the new HTML elements out to the output stream. The value-of does exactly what you’d expect – it finds the value of the item specified in its select attribute. With XPath you can select any element or attribute from the source XML. The last part is a repeat of the now familiar apply-templates from before, but this time we’re using it inside of a called template. Yep, XSL is full of recursion…
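The entry template itself looks something like this – again a sketch, assuming the published date is the one being trimmed; the line numbers in the prose below refer to the full-length sample:

<xsl:template match=""atom:entry"">
  <li>
    <a href=""{atom:link/@href}"">
      <xsl:value-of select=""atom:title""/>
    </a>
    (<xsl:value-of select=""substring-before(atom:published, 'T')""/>)
    <xsl:value-of select=""atom:content"" disable-output-escaping=""yes""/>
    <xsl:apply-templates select=""atom:category""/>
  </li>
</xsl:template>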
The template which matches atom:entry (line 1) runs every time there is an entry element in the source XML file – in this feed, 20 times in total; this is naturally why XSLT is full of recursion. It has been matched and therefore called higher up in the document, so we can start writing list elements directly to the output stream. The first part is simply a list item with a link wrapped within it (lines 3-7). We can select attributes using XPath using @. The second part of this template selects the date, but performs an XPath string function on it. This means that we only get the date and not the time from the string (line 9). This is achieved by getting only the part of the string that exists before the T. Regular Expressions are not part of the XPath 1.0 string functions, although XPath 2.0 does include them. Because of this, in XSL we tend to rely heavily on the available XML output. The third part of the template (line 12) is a value-of again, but this time we use an attribute called disable-output-escaping to turn escaped characters back into XML. The very last section is another apply-templates call, taking us three templates deep. Do not worry, it is not uncommon to write XSL files which go 20 or more templates deep! The category tag In our final template, we see a combination of what we have done before with a couple of twists. Once we match atom:category we then count how many elements there are at that same level (line 2). The XPath . means ‘self’, so we count how many category elements are within the entry element. Following that, we start to output a link with a rel attribute of the predefined text, tag (lines 4-6). In XSL you can just type text, but results can end up with strange whitespace if you do (although there are ways to simply remove all whitespace). The only new XPath function in this example is concat(), which simply combines what XPaths or text there might be in the brackets. We end the output for this tag with the actual tag name (line 10) and we add a space afterwards (line 12) so it won’t touch the next tag. (There are better ways to do this in XSL using the last() XPath function.) After that, we go back to the start of the loop if there is another category element; otherwise we end the loop and end this template. A touch of style Because we’re using recursion through our templates, you will find this is the end of the templates and the rest of the XML will be ignored by the parser. Finally, we can add our CSS to finish up. (I have created one for Flickr and another for News feeds.) So we end up with a nice, simple-to-understand but also quick-to-write XSL which can be used on ATOM Flickr feeds and ATOM News feeds. With a little playing around with XSL, you can make XML beautiful again. All the files can be found in the zip file (14k)",2006,Ian Forrester,ianforrester,2006-12-07T00:00:00+00:00,https://24ways.org/2006/beautiful-xml-with-xsl/,code 261,Surviving—and Thriving—as a Remote Worker,"Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there’s an entire world of talent out there? I’ve been working remotely, full-time, for five and a half years. I’ve reached the point where I can’t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I’ve grown attached to my current level of freedom and flexibility. However, it took me a lot of trial and error to reach success as a remote worker — and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn’t for everyone, but most people, with enough effort, can make it work — or even thrive.
Here’s what I’ve learned in over five years of working remotely. Experiment with your environment As a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space — whether that’s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch… but I’ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn’t completely destroy your eyesight. Working remotely doesn’t always mean working from home. If you don’t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful if you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don’t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you’re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls. Co-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They’re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am an impulsive and impatient person. When I want a book now, I mean now.) Just be polite — make sure your headphones don’t leak, and don’t work from a library if you have a day full of calls. Remember, too, that you don’t have to stay in the same spot all day. It’s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull over a problem. Don’t force yourself to sit at your desk for eight hours if that doesn’t work for you. Set boundaries If you’re a workaholic, working remotely can be a challenge. It’s incredibly easy to just… work. All the time. My work computer is almost always with me. If I remember at 11pm that I wanted to do something, there’s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it’s time to figure this out, eh? Learning how to set boundaries is one of the most important lessons I’ve learned working remotely. (And honestly, it’s something I still struggle with.) For a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I needed to react to, and then rolling over to my couch with my computer. Suddenly, it’s noon, I’m unwashed, unfed, starting to get a headache, and wondering why suddenly I hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot. I recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed.
Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time. I too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can’t get sucked into work before I’m even dressed. I’ve gotten pretty good at forbidding myself from working until I’m ready, but building any new habit requires intentionality. Something else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn’t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I’ve heard other coworkers have figured out ways to work through these technical issues, so I’m hoping to give it another try soon. You might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It’s true! If you’re a more disciplined person, you might not need any of these coping mechanisms. If you’re struggling, finding ways to subvert your own bad habits can be the difference between thriving and burning out. Create intentional transition time I know it’s a stereotype that people who work from home stay in their pajamas all day, but… sometimes, it’s very easy to do. I’ve found that in order to reach peak focus, I need to create intentional transition time. The most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it’s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort. I’ve found it helpful to take similar steps at the end of my day. If I’ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch. This is another reason working from outside your home is advantageous. Commutes, even if it’s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen — not next to your computer. Embrace async If you’re used to working in an office, you’ve probably gotten pretty used to being able to pop over to a colleague’s desk if you need to ask a question. They’re pretty much forced to engage with you at that point. When you’re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary’s in Australia and it’s 3am there right now. For many remote workers, that’s part of the package. When you’re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers. 
Asynchronous communication is great. Not everyone can be present for every meeting or office conversation — and the same goes for working remotely. However, when you’re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (“p2s”) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase — “p2 or it didn’t happen.” Working remotely has made me a better communicator largely because I’ve gotten into the habit of making written updates. I’ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it! (That said, if you’re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.) Seek out social opportunities Even introverts can feel lonely or isolated. When you work remotely, there isn’t a built-in community you’re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide. I have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.) Every now and then, I’ll also hop on a video call with some work friends and just hang out for a little while. It feels great to actually see someone laugh. If you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work. If you don’t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you’ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can’t fall back on your work to provide community, you need to build your own. These are tips that I’ve found help me, but not everyone works the same way. Remember that it’s okay to experiment — just because you’ve worked one way, doesn’t mean that’s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment! 
Hope to see you all online soon 🙌",2018,Mel Choyce,melchoyce,2018-12-09T00:00:00+00:00,https://24ways.org/2018/thriving-as-a-remote-worker/,process 92,Redesigning the Media Query,"Responsive web design is showing us that designing content is more important than designing containers. But if you’ve given RWD a serious try, you know that shifting your focus from the container is surprisingly hard to do. There are many factors and instincts working against you, and one culprit is a perpetrator you’d least suspect. The media query is the ringmaster of responsive design. It lets us establish the rules of the game and gives us what we need most: control. However, like some kind of evil double agent, the media query is actually working against you. Its very nature diverts your attention away from content and forces you to focus on the container. The very act of choosing a media query value means choosing a screen size. Look at the history of the media query—it’s always been about the container. Values like screen, print, handheld and tv don’t have anything to do with content. The modern media query lets us choose screen dimensions, which is great because it makes RWD possible. But it’s still the act of choosing something that is completely unpredictable. Content should dictate our breakpoints, not the container. In order to get our focus back to the only thing that matters, we need a reengineered media query—one that frees us from thinking about screen dimensions. A media query that works for your content, not the window. Fortunately, Sass 3.2 is ready and willing to take on this challenge. Thinking in Columns Fluid grids never clicked for me. I feel so disoriented and confused by their squishiness. Responsive design demands their use though, right? I was ready to surrender until I found a grid that turned my world upright again. The Frameless Grid by Joni Korpi demonstrates that column and gutter sizes can stay fixed. As the screen size changes, you simply add or remove columns to accommodate. This made sense to me and armed with this concept I was able to give Sass the first component it needs to rewrite the media query: fixed column and gutter size variables. $grid-column: 60px; $grid-gutter: 20px; We’re going to want some resolution independence too, so let’s create a function that converts those nasty pixel values into ems. @function em($px, $base: $base-font-size) { @return ($px / $base) * 1em; } We now have the components needed to figure out the width of multiple columns in ems. Let’s put them together in a function that will take any number of columns and return the fixed width value of their size. @function fixed($col) { @return $col * em($grid-column + $grid-gutter) } With the math in place we can now write a mixin that takes a column count as a parameter, then generates the perfect media query necessary to fit that number of columns on the screen. We can also build in some left and right margin for our layout by adding an additional gutter value (remembering that we already have one gutter built into our fixed function). @mixin breakpoint($min) { @media (min-width: fixed($min) + em($grid-gutter)) { @content } } And, just like that, we’ve rewritten the media query. Instead of picking a minimum screen size for our layout, we can simply determine the number of columns needed. Let’s add a wrapper class so that we can center our content on the screen. 
@mixin breakpoint($min) { @media (min-width: fixed($min) + em($grid-gutter)) { .wrapper { width: fixed($min) - em($grid-gutter); margin-left: auto; margin-right: auto; } @content } } Designing content with a column count gives us nice, easy, whole numbers to work with. Sizing content, sidebars or widgets is now as simple as specifying a single-digit number. @include breakpoint(8) { .main { width: fixed(5); } .sidebar { width: fixed(3); } } Those four lines of Sass just created a responsive layout for us. When the screen is big enough to fit eight columns, it will trigger a fixed width layout. And give widths to our main content and sidebar. The following is the outputted CSS… @media (min-width: 41.25em) { .wrapper { width: 38.75em; margin-left: auto; margin-right: auto; } .main { width: 25em; } .sidebar { width: 15em; } } Demo I’ve created a Codepen demo that demonstrates what we’ve covered so far. I’ve added to the demo some grid classes based on Griddle by Nicolas Gallagher to create a floatless layout. I’ve also added a CSS gradient overlay to help you visualize columns. Try changing the column variable sizes or the breakpoint includes to see how the layout reacts to different screen sizes. Responsive Images Responsive images are a serious problem, but I’m excited to see the community talk so passionately about a solution. Now, there are some excellent stopgaps while we wait for something official, but these solutions require you to mirror your breakpoints in JavaScript or HTML. This poses a serious problem for my Sass-generated media queries, because I have no idea what the real values of my breakpoints are anymore. For responsive images to work, JavaScript needs to recognize which media query is active so that proper images can be loaded for that layout. What I need is a way to label my breakpoints. Fortunately, people much smarter than I have figured this out. Jeremy Keith devised a labeling method by using CSS-generated content as the storage method for breakpoint labels. We can use this technique in our breakpoint mixin by passing a label as another argument. @include breakpoint(8, 'desktop') { /* styles */ } Sass can take that label and use it when writing the corresponding media query. We just need to slightly modify our breakpoint mixin. @mixin breakpoint($min, $label) { @media (min-width: fixed($min) + em($grid-gutter)) { // label our mq with CSS generated content body::before { content: $label; display: none; } .wrapper { width: fixed($min) - em($grid-gutter); margin-left: auto; margin-right: auto; } @content } } This allows us to label our breakpoints with a user-friendly string. Now that our media queries are defined and labeled, we just need JavaScript to step in and read which label is active. // get css generated label for active media query var label = getComputedStyle(document.body, '::before')['content']; JavaScript now knows which layout is active by reading the label in the current media query—we just need to match that label to an image. I prefer to store references to different image sizes as data attributes on my image tag. These data attributes have names that match the labels set in my CSS. So while there is some duplication going on, setting a keyword like ‘tablet’ in two places is much easier than hardcoding media query values. With matching labels in CSS and HTML our script can marry the two and load the right sized image for our layout. 
// get css generated label for active media query var label = getComputedStyle(document.body, '::before')['content']; // select image var $image = $('.responsive-image'); // create source from data attribute $image.attr('src', $image.data(label)); Demo With some slight additions to our previous Codepen demo you can see this responsive image technique in action. While the above JavaScript will work, it is not nearly robust enough for production so the demo uses a jQuery plugin that can accommodate multiple images, reloading on screen resize and fallbacks if something doesn’t match up. Creating a Framework This media query mixin and responsive image JavaScript are the centerpiece of a front end framework I use to develop websites. It’s a fluid, mobile first foundation that uses the breakpoint mixin to structure fixed width layouts for tablet and desktop. Significant effort was focused on making this framework completely cross-browser. For example, one of the problems with using media queries is that essential desktop structure code ends up being hidden from legacy Internet Explorer. Respond.js is an excellent polyfill, but if you’re comfortable serving a single desktop layout to older IE, we don’t need JavaScript. We simply need to capture layout code outside of a media query and sandbox it under an IE only class name. // set IE fallback layout to 8 columns $ie-support: 8; // inside of our breakpoint mixin (but outside the media query) @if ($ie-support and $min <= $ie-support) { .lt-ie9 { @content; } } Perspective Regained Thinking in columns means you are thinking about content layout. How big of a screen do you need for 12 columns? Who cares? Having Sass write media queries means you can use intuitive numbers for content layout. A fixed grid means more layout control and fewer edge cases to test than a fluid grid. Using CSS labels for activating responsive images means you don’t have to duplicate breakpoints across separations of concern. It’s a harmonious blend of approaches that gives us something we need—responsive design that feels intuitive. And design that, from the very outset, focuses on what matters most. Just like our kindergarten teachers taught us: It’s what’s inside that counts.",2012,Les James,lesjames,2012-12-13T00:00:00+00:00,https://24ways.org/2012/redesigning-the-media-query/,code 263,Securing Your Site like It’s 1999,"Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities. As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned. What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate. Bad input validation: Trusting anything the user sends you Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web. 
One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong. One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile. Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each other under the cover of complete anonymity. What happened? There aren’t any official rules for developing software on the web. But if there were, my golden rule would be: Never trust user input. Ever. Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe. Validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length that you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML. Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed to access. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner. Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world. SQL injection: Allowing the user to run their own database queries A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day. One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned. It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it? ' OR '1'='1 SQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this: SELECT COUNT(*) FROM USERS WHERE USERNAME='Alice' AND PASSWORD='hunter2' This query selects users from the database that match the username Alice and the password hunter2. 
If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead! SELECT COUNT(*) FROM USERS WHERE USERNAME='Admin' AND PASSWORD='' OR '1'='1' Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum! So how can you protect yourself from SQL injection? Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.js has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize. Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can! Cross site request forgery: Getting other users to do your dirty work for you Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge. Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky… Let’s just say there are older and fouler things than Rick Astley in the dark places of the web. What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it: Did you notice the source URL of the image? It’s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge! This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from. The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be programmatically set. Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies. You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers. Cross site scripting: Someone else’s code running on your website In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends. Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. 
For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then… Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject and
tags into his headline, but That’s it, unless you want to tweak the default settings. You likely do, but essentially you’re already up and running. How it works Adaptive Images does a number of things depending on the scenario the script has to handle, but here’s a basic overview of what it does when you load a page running it: A session cookie is written with the value of the visitor’s screen size in pixels. The HTML encounters an <img> tag and sends a request to the server for that image. It also sends the cookie, because that’s how browsers work. Apache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL. There are! The .htaccess says “Hey, server! Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.” The PHP file then does some intelligent thinking which can cover a number of scenarios, but I’ll illustrate one path that can happen: The PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px. The PHP has a look at the available media query sizes that were configured and decides which one matches the user’s device. It then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there. We’ll pretend it doesn’t – the PHP then goes to the actual requested URI and finds that the original file does exist. It has a look to see how wide that image is. If it’s already smaller than the user’s screen width it sends it along and stops there. But, let’s pretend the image is 1,000px wide. The PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it. It also does a few other things when needs arise, for example: It sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it. It handles cases where there isn’t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size. It compares timestamps between the source image and the generated cache image – to ensure that if the source image gets updated, the old cached file won’t be sent. Customizing There are a few options you can customize if you don’t like the default values. By looking in the PHP’s configuration section at the top of the file, you can: Set the resolution breakpoints to match your media query break points. Change the name and location of the ai-cache folder. Change the quality level any generated JPG images are saved at. Have it perform a subtle sharpen on rescaled images to help keep detail. Toggle whether you want it to compare the files in the cache folder with the source ones or not. Set how long the browser should cache the images for. Switch between a mobile-first or desktop-first approach when a cookie isn’t found. More importantly, you probably want to omit a few folders from the AI behaviour. You don’t need or want it resizing the images you’re using in your CSS, for example. That’s fine – just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. Or, if you’re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it’ll only apply AI behaviour to a given list of folders. 
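For reference, the session cookie from the first step of that walkthrough is written by a small piece of JavaScript placed in the head of the page, before any <img> tags are encountered. A minimal sketch of that kind of snippet follows – the cookie name here is my assumption, and it needs to match whatever name the adaptive-images.php file is configured to look for:

// Record the device's largest screen dimension in a session cookie so
// the PHP on the server can decide which size of image to send back.
// 'resolution' is an assumed cookie name; match it to your PHP config.
document.cookie = 'resolution=' + Math.max(screen.width, screen.height) + '; path=/';

Because this runs inline in the head, it is the fastest available way to get the cookie written before the browser starts requesting images.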
Caveats As I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I’ve seen so far. This is a PHP solution. I wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don’t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence2 and I would welcome anyone to contribute a port of the code3. Content delivery networks Adaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won’t allow that to happen. Adaptive Images will not work if you’re using a CDN to deliver your website. A minor but interesting cookie issue. As Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested – even though the JavaScript that sets the cookie is loaded by the browser before it finds any <img> tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem: The $mobile_first toggle allows you to choose what to send to a browser if a cookie isn’t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest. Even if set to TRUE, Adaptive Images checks the User Agent String. If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE. This means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn’t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest. The best way to get a cookie written is to use JavaScript as I’ve explained above, because it’s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP ‘image’ to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won’t be set until after images have been requested. The future For today, this is a pretty good solution. It works, and as it doesn’t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you’re good to go – you’d never know AI had been there. However, this isn’t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future. First, we could do with browsers sending far more information about the user’s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn’t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for. 
Secondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does. I hope you’ve found this interesting and will find Adaptive Images useful. Footnotes 1 While I’m talking about preventing smartphones from downloading resources they don’t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files. 2 Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository. 3 There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images.",2011,Matt Wilcox,mattwilcox,2011-12-04T00:00:00+00:00,https://24ways.org/2011/adaptive-images-for-responsive-designs/,ux 66,Solve the Hard Problems,"So, here we find ourselves on the cusp of 2016. We’ve had a good year – the web is still alive, no one has switched it off yet. Clients still have websites, teenagers still have phone apps, and there continue to be plenty of online brands to meaningfully engage with each day. Good job team, high fives all round. As it’s the time to make resolutions, I wanted to share three small ideas to take into the new year. Get good at what you do “How do you get to Carnegie Hall?” the old joke goes. “Practise, practise, practise.” We work in an industry where there is an awful lot to learn. There’s a lot to learn to get started and then once you do, there’s a lot more to learn to keep your skills current. Just when you think you’ve mastered something, it changes. This is true of many industries, of course, but the sheer pace of change for us makes learning not an annual activity, but daily. Learning takes time, and while I’m not convinced that every skill takes the fabled ten thousand hours to master, there is certainly no escaping that to remain current we must reinvest time in keeping our skills up to date. Picking where to spend your time One of the hardest aspects of this thing of ours is just choosing what to learn. If you, like me, invested any time in learning the Less CSS preprocessor over the last few years, you’ll probably now be spending your time relearning Sass instead. If you spent time learning Grunt, chances are you’ll now be thinking about whether you should switch to Gulp. It’s not just that there are new types of tools, there are new tools and frameworks to do the things you’re already doing, but, well, differently. Deciding what to learn is hard and the costs of backing the wrong horse can seriously mount up; so much so that by the time you’ve learned and then relearned the tools everyone says you need for your job, there’s rarely enough time to spend really getting to know how best to use them.  Practise, practise, practise Do you know how you don’t get to Carnegie Hall? By learning a new instrument each week. It takes time and experience to really learn something well. That goes for a new JavaScript framework as much as a violin. If you flit from one shiny new thing to another, you’re destined to produce amateurish work forever. Learn the new thing, but then stick with it long enough to get really good at it – even if Twitter trolls try to convince you it’s not cool. What’s really not cool is living as a forevernoob. If you’re still not sure what to learn, go back to basics. Considering a new CSS or JavaScript framework? 
Invest that time in learning the underlying CSS or JavaScript really well instead. Those skills will stand the test of time. Audience and purpose Back when I was in school, my English teacher (a nice Welsh lady, who I appreciate more now than I did back then) used to love to remind us that every piece of writing should have an audience and a purpose. So much so that audience and purpose almost became her catch phrase. For every essay, article or letter, we were reminded to consider who we were writing it for and what we were trying to achieve. It’s something I think about a lot; certainly when writing, but also in almost every other creative endeavour. Asking who is this for and what am I trying to achieve applies equally to designing a logo or website, through to composing music or writing software. Being productive It seems like everyone wants to have a product these days. As someone who used to do client services work and now has a product company, I often talk with people who are interested in taking something they’ve built in-house and turning it into a product. You know the sort of thing: a design agency with its own CMS or project management web app; the very logical thought process of: if this helps our business, maybe others will find it valuable too; the question that inevitably follows: could we turn this into a product? Whether consciously or not, the audience and purpose influence nearly every aspect of your creative process. Once written or designed or developed or created, revising a work to change the audience and purpose can be quite a challenge. No matter how much you want to turn the tension-building, atmospheric music for a horror film into a catchy chart hit, it’s going to be a struggle. Yes, it’s music, but that’s neither the audience nor purpose for which it was created. The same is absolutely true for your in-house tools – those were also designed for a specific audience and purpose. Your in-house CMS would have been designed with an audience of your own development team, who are busy implementing sites for clients. The purpose is to make that team more productive overall, taking into account considerations of maintaining multiple sites on a common codebase, training clients, a more mature and stable platform and all the other benefits of reusing the same code for each project. The audience is your team and the purpose increased productivity. That’s very different from a customer who wants to buy a polished system to use off-the-shelf. If their needs perfectly aligned with yours then they wouldn’t be in the market for your product – they would have built their own. Sometimes you hear the advice to “scratch your own itch” when it comes to product design. I don’t completely agree. Got an itch? Great. Find other itchy people and sell them a backscratcher. Building a product, like designing a website, is a lot of work. It requires knowing your audience and purpose inside out. You can’t fudge it and you can’t just hope you’ll find an audience for some old thing you have lying around. Always consider the audience and purpose for everything you create. It’s often the difference between success and failure. Solve the hard problems Human beings have a natural tendency to avoid hard problems. In digital design (websites, software, whatever) the received wisdom is often that we can get 80% of the way towards doing the hard thing by doing something that’s not very hard. Do you know what you get at the end of it? Paid. But nothing really great ever happens that way. 
I worked on a client project a while back where one of the big challenges was making full use of the massive image library they had built up over the years. The client had tens of thousands of photographs, along with a fair amount of video and a large MP3 audio library too. If it wasn’t managed carefully, storage sizes would get out of control, content would go unattributed, and everything would get very messy very quickly. I could tell from the outset that this aspect of the project was going to be a constant problem. So we tackled it head-on. We designed and built a media management system to hold and process all the assets, and added an API so the content management system could talk to it. Every time the site needed a photo at a new size, it made an API request to the system and everything was handled seamlessly. It was a daunting job to invest all the time and effort in building that dedicated system and API, but it really paid off. Instead of having the constant troubles of a vast library of media, it became one of the strongest parts of the project. Turn your hardest problems into your biggest strengths There’s a funny thing about hard problems. The hardest problems are the most fun to solve and have the biggest impact. Maybe you’re the sort of person who clocks in for work, does their job and clocks out at 5pm without another thought. But I don’t think you are, because you’re here reading this. If you really love what you do, I don’t think you can be satisfied in your work unless you’re seeking out and working on those hard problems. That’s where the magic is. The new year is a helpful time to think about breaking bad habits. Whether it’s smoking a bit less, or going to the gym a bit more, the ticking over of the calendar can provide the motivation for a new start. I have some suggestions for you. Get good at what you do. Practise your skills and don’t just flit from one shiny thing to the next. Remember who you’re doing it for and why. Consider the audience and purpose for everything you create. Solve the hard problems. It’s more interesting, more satisfying, and has a greater impact. As we move into 2016, these are the things I’m going to continue to work on. Maybe you’d like to join me.",2015,Drew McLellan,drewmclellan,2015-12-24T00:00:00+00:00,https://24ways.org/2015/solve-the-hard-problems/,process 218,Put Yourself in a Corner,"Some backstory, and a shameful confession For the first couple years of high school I was one of those jerks who made only the minimal required effort in school. Strangely enough, how badly I behaved in a class was always in direct proportion to how skilled I was in the subject matter. In the subjects where I was confident that I could pass without trying too hard, I would give myself added freedom to goof off in class. Because I was a closeted lit-nerd, I was most skilled in English class. I’d devour and annotate required reading over the weekend, I knew my biblical and mythological allusions up and down, and I could give you a postmodern interpretation of a text like nobody’s business. But in class, I’d sit in the back and gossip with my friends, nap, or scribble patterns in the margins of my textbooks. I was nonchalant during discussion, I pretended not to listen during lectures. I secretly knew my stuff, so I did well enough on tests, quizzes, and essays. But I acted like an ass, and wasn’t getting the most I could out of my education. The day of humiliation, but also epiphany One day in Ms. 
Kaney’s AP English Lit class, I was sitting in the back doodling. An earbud was dangling under my sweater hood, attached to the CD player (remember those?) sitting in my desk. Because of this auditory distraction, the first time Ms. Kaney called my name, I barely noticed. I definitely heard her the second time, when she didn’t call my name so much as roar it. I can still remember her five-foot frame stomping across the room and grabbing an empty desk. It screamed across the worn tile as she slammed it next to hers. She said, “This is where you sit now.” My face gets hot just thinking about it. I gathered my things, including the CD player (which was now impossible to conceal), and made my way up to the newly appointed Seat of Shame. There I sat, with my back to the class, eye-to-eye with Ms. Kaney. From my new vantage point I couldn’t see my friends, or the clock, or the window. All I saw were Ms. Kaney’s eyes, peering at me over her reading glasses while I worked. In addition to this punishment, I was told that from now on, not only would I participate in class discussions, but I would serve detention with her once a week until an undetermined point in the future. During these detentions, Ms. Kaney would give me new books to read, outside the curriculum and added on top of my normal homework. They ranged from classics to modern novels, and she read over my notes on each book. We’d discuss them at length after class, and I grew to value not only our private discussions, but the ones in class as well. After a few weeks, there wasn’t even a question of this being punishment. It was heaven, and I was more productive than ever. To the point Please excuse this sentimental story. It’s not just about honoring a teacher who cared enough to change my life, it’s really about sharing a lesson. The most valuable education Ms. Kaney gave me had nothing to do with literature. She taught me that I (and perhaps other people who share my special brand of crazy) need to be put in a corner to flourish. When we have physical and mental constraints applied, we accomplish our best work. For those of you still reading, now seems like a good time to insert a pre-emptive word of mediation. Many of you, maybe all of you, are self-disciplined enough that you don’t require the rigorous restrictions I use to maximize productivity. Also, I know many people who operate best in a stimulating and open environment. I would advise everyone to seek and execute techniques that work best for them. But, for those of you who share my inclination towards daydreams and digressions, perhaps you’ll find something useful in the advice to follow. In which I pretend to be Special Agent Olivia Dunham Now that I’m an adult, and no longer have Ms. Kaney to rein me in, I have to find ways to put myself in the corner. By rejecting distraction and shaping an environment designed for intense focus, I’m able to achieve improved productivity. Lately I’ve been obsessed with the TV show Fringe, a sci-fi series about an FBI agent and her team of genius scientists who save the world (no, YOU’RE a nerd). There’s a scene in the show where the primary character has to delve into her subconscious to do extraordinary things, and she accomplishes this by immersing herself in a sensory deprivation tank. The premise is this: when enclosed in a space devoid of sound, smell, or light, she will enter a new plane of consciousness wherein she can tap into new levels of perception. 
This might sound a little nuts, but to me this premise has some real-world application. When I am isolated from distraction, and limited to only the task at hand, I’m able to be productive on a whole new level. Since I can’t actually work in an airtight iron enclosure devoid of input, I find practical ways to create an interruption-free environment. Since I work from home, many of my methods for coping with distractions wouldn’t be necessary for my office-bound counterpart. However for some of you 9-to-5-ers, the principles will still apply. Consider your visual input First, I have to limit my scope to the world I can (and need to) affect. In the largest sense, this means closing my curtains to the chaotic scene of traffic, birds, the post office, a convenience store, and generally lovely weather that waits outside my window. When the curtains are drawn and I’m no longer surrounded by this view, my sphere is reduced to my desk, my TV, and my cat. Sometimes this step alone is enough to allow me to focus. But, my visual input can be whittled down further still. For example, the desk where I usually keep my laptop is littered with twelve owl figurines, a globe, four books, a three-pound weight, and various nerdy paraphernalia (hard drives, Wacom tablets, unnecessary bluetooth accessories, and so on). It’s not so much a desk as a dumping ground for wacky flea market finds and impulse technology buys. Therefore, in addition to this Official Desk, I have an adult version of Ms. Kaney’s Seat of Shame. It’s a rusty old student’s desk I picked up at the Salvation Army, almost an exact replica of the model Ms. Kaney dragged across the classroom all those years ago. This tiny reproduction Seat of Shame is literally in a corner, where my only view is a blank wall. When I truly need to focus, this is where I take refuge, with only a notebook and a pencil (and occasionally an iPad). Find out what works for your ears Even from my limited sample size of two people, I know there are lots of different ways to cope with auditory distraction. I prefer silence when focused on independent work, and usually employ some form of a white noise generator. I’ve yet to opt for the fancy ‘real’ white noise machines; instead, I use a desktop fan or our allergy filter machine. This is usually sufficient to block out the sounds of the dishwasher and the cat, which allows me to think only about the task of hand. My boyfriend, the other half of my extensive survey, swears by another method. He calls it The Wall of Sound, and it’s basically an intense blast of raucous music streamed directly into his head. The outcome of his technique is really the same as mine; he’s blocking out unexpected auditory input. If you can handle the grating sounds of noisy music while working, I suggest you give The Wall of Sound a try. Don’t count the minutes When I sat in the original Seat of Shame in lit class, I could no longer see the big classroom clock slowly ticking away the seconds until lunch. Without the marker of time, the class period often flew by. The same is true now when I work; the less aware of time I am, the less it feels like time is passing too quickly or slowly, and the more I can focus on the task (not how long it takes). Nowadays, to assist in my effort to forget the passing of time, I sometimes put a sticky note over the clock on my monitor. If I’m writing, I’ll use an app like WriteRoom, which blocks out everything but a simple text editor. There are situations when it’s not advisable to completely lose track of time. 
If I’m working on a project with an hourly rate and a tight scope, or if I need to be on time to a meeting or call, I don’t want to lose myself in the expanse of the day. In these cases, I’ll set an alarm that lets me know it’s time to rein myself back in (or on some days, take a shower). Put yourself in a mental corner, too When Ms. Kaney took action and forced me to step up my game, she had the insight to not just change things physically, but to challenge me mentally as well. She assigned me reading material outside the normal coursework, then upped the pressure by requiring detailed reports of the material. While this additional stress was sometimes uncomfortable, it pushed me to work harder than I would have had there been less of a demand. Just as there can be freedom in the limitations of a distraction-free environment, I’d argue there is liberty in added mental constraints as well. Deadlines as a constraint Much has been written about the role of deadlines in the creative process, and they seem to serve different functions in different cases. I find that deadlines usually act as an important constraint and, without them, it would be nearly impossible for me to ever consider a project finished. There are usually limitless ways to improve upon the work I do and, if there’s no imperative for me to be done at a certain point, I will revise ad infinitum. (Hence, the personal site redesign that will never end – Coming Soon, Forever!). But if I have a clear deadline in mind, there’s a point when the obsessive tweaking has to stop. I reach a stage where I have to gather up the nerve to launch the thing. Putting the pro in procrastination Sometimes I’ve found that my tendency to procrastinate can help my productivity. (Ducks, as half the internet throws things at her.) I understand the reasons why procrastination can be harmful, and why it’s usually a good idea to work diligently and evenly towards a goal. I try to divide my projects up in a practical way, and sometimes I even pull it off. But for those tasks where you work aimlessly and no focus comes, or you find that every other to-do item is more appealing, sometimes you’re forced to bring it together at the last moment. And sometimes, this environment of stress is a formula for magic. Often when I’m down to the wire and have no choice but to produce, my mind shifts towards a new level of clarity. There’s no time to endlessly browse for inspiration, or experiment with convoluted solutions that lead nowhere. Obviously a life lived perpetually on the edge of a deadline would be a rather stressful one, so it’s not a state of being I’d advocate for everyone, all the time. But every now and then, the work done when I’m down to the wire is my best. Keep one toe outside your comfort zone When I’m choosing new projects to take on, I often seek out work that involves an element of challenge. Whether it’s a design problem that will require some creative thinking, or a coding project that lends itself to using new technology like HTML5, I find a manageable level of difficulty to be an added bonus. The tension that comes from learning a new skill or rethinking an old standby is a useful constraint, as it keeps the work interesting, and ensures that I continue learning. There you have it Well, I think I’ve spilled most of my crazy secrets for forcing my easily distracted brain to focus. As with everything we web workers do, there are an infinite number of ways to encourage productivity. 
I hope you’ve found a few of these to be helpful, and please share your personal techniques in the comments. Have a happy and productive new year!",2010,Meagan Fisher,meaganfisher,2010-12-20T00:00:00+00:00,https://24ways.org/2010/put-yourself-in-a-corner/,process 223,Calculating Color Contrast,"Some websites and services allow you to customize your profile by uploading pictures, changing the background color or other aspects of the design. As a customer, this personalization turns a web app into your little nest where you store your data. As a designer, letting your customers have free rein over the layout and design is a scary prospect. So what happens to all the stock text and images that are designed to work on nice white backgrounds? Even the Mac only lets you choose between two colors for the OS, blue or graphite! Opening up the ability to customize your site’s color scheme can be a recipe for disaster unless you are flexible and understand how to find maximum color contrasts. In this article I will walk you through two simple equations to determine if you should be using white or black text depending on the color of the background. The equations are both easy to implement and produce similar results. It isn’t a matter of which is better, but more the fact that you are using one at all! That way, even with the craziest of Geocities color schemes that your customers choose, at least your text will still be readable. Let’s have a look at a range of various possible colors. Maybe these are pre-made color schemes, corporate colors, or plucked from an image. Now that we have these potential background colors and their hex values, we need to find out whether the corresponding text should be in white or black, based on which has a higher contrast, therefore affording the best readability. This can be done at runtime with JavaScript or in the back-end before the HTML is served up. There are two functions I want to compare. The first, I call ’50%’. It takes the hex value and compares it to the value halfway between pure black and pure white. If the hex value is less than half, meaning it is on the darker side of the spectrum, it returns white as the text color. If the result is greater than half, it’s on the lighter side of the spectrum and returns black as the text value. In PHP: function getContrast50($hexcolor){ return (hexdec($hexcolor) > 0xffffff/2) ? 'black':'white'; } In JavaScript: function getContrast50(hexcolor){ return (parseInt(hexcolor, 16) > 0xffffff/2) ? 'black':'white'; } It doesn’t get much simpler than that! The function converts the six-character hex color into an integer and compares that to one half the integer value of pure white. The function is easy to remember, but is naive when it comes to understanding how we perceive parts of the spectrum. Different wavelengths have greater or lesser impact on the contrast. The second equation is called ‘YIQ’ because it converts the RGB color space into YIQ, which takes into account the different impacts of its constituent parts. Again, the equation returns white or black and it’s also very easy to implement. In PHP: function getContrastYIQ($hexcolor){ $r = hexdec(substr($hexcolor,0,2)); $g = hexdec(substr($hexcolor,2,2)); $b = hexdec(substr($hexcolor,4,2)); $yiq = (($r*299)+($g*587)+($b*114))/1000; return ($yiq >= 128) ? 
'black' : 'white'; } In JavaScript: function getContrastYIQ(hexcolor){ var r = parseInt(hexcolor.substr(0,2),16); var g = parseInt(hexcolor.substr(2,2),16); var b = parseInt(hexcolor.substr(4,2),16); var yiq = ((r*299)+(g*587)+(b*114))/1000; return (yiq >= 128) ? 'black' : 'white'; } You’ll notice first that we have broken down the hex value into separate RGB values. This is important because each of these channels is scaled in accordance with its visual impact. Once everything is scaled and normalized, it will be in a range between zero and 255. Much like the previous ’50%’ function, we now need to check if the input is above or below halfway. Depending on where that value is, we’ll return the corresponding highest contrasting color. That’s it: two simple contrast equations which work really well to determine the best readability. If you are interested in learning more, the W3C has a few documents about color contrast and how to determine if there is enough contrast between any two colors. This is important for accessibility to make sure there is enough contrast between your text and link colors and the background. There is also a great article by Kevin Hale on Particletree about his experience with choosing light or dark themes. To round it out, Jonathan Snook created a color contrast picker which allows you to play with RGB sliders to get values for YIQ, contrast and others. That way you can quickly fiddle with the knobs to find the right balance. Comparing results Let’s revisit our color schemes and see which text color is recommended for maximum contrast based on these two equations. If we use the simple ’50%’ contrast function, we can see that it recommends black against all the colors except the dark green and purple on the second row. In general, the equation feels the colors are light and that black is a better choice for the text. The more complex ‘YIQ’ function, with its weighted colors, has slightly different suggestions. White text is still recommended for the very dark colors, but there are some surprises. The red and pink values show white text rather than black. This equation takes into account the weight of the red value and determines that the hue is dark enough for white text to show the most contrast. As you can see, the two contrast algorithms agree most of the time. There are some instances where they conflict, but overall you can use the equation that you prefer. I don’t think it is a major issue if some edge-case colors get one contrast over another; they are still very readable. Now let’s look at some common colors and then see how the two functions compare. You can quickly see that they do pretty well across the whole spectrum. In the first few shades of grey, the white and black contrasts make sense, but as we test other colors in the spectrum, we do get some unexpected deviation. Pure red #FF0000 has a flip-flop. This is due to how the ‘YIQ’ function weights the RGB parts. While you might have a personal preference for one style over another, both are justifiable. In this second round of colors, we go deeper into the spectrum, off the beaten track. Again, most of the time the contrasting algorithms are in sync, but every once in a while they disagree. You can select which you prefer, neither of which is unreadable. Conclusion Contrast in color is important, especially if you cede all control and take a hands-off approach to the design. It is important to select smart defaults by making the contrast between colors as high as possible. 
This makes it easier for your customers to read, increases accessibility and is generally just easier on the eyes. Sure, there are plenty of other equations out there to determine contrast; what is most important is that you pick one and implement it into your system. So, go ahead and experiment with color in your design. You now know how easy it is to guarantee that your text will be the most readable in any circumstance.",2010,Brian Suda,briansuda,2010-12-24T00:00:00+00:00,https://24ways.org/2010/calculating-color-contrast/,code 268,Getting the Most Out of Google Analytics,"Something a bit different for today’s 24 ways article. For starters, I’m not a designer or a developer. I’m an evil man who sells things to people on the internet. Second, this article will likely be a little more nebulous than you’re used to, since it covers quite a number of points in a relatively short space. This isn’t going to be the complete Google Analytics Conversion University IQ course compressed into a single article, obviously. What it will be, however, is a primer on setting up and using Google Analytics in real life, and a great deal of what I’ve learned using Google Analytics nearly every working day for the past six (crikey!) years. Also, to be clear, I’ll be referencing new Google Analytics here; old Google Analytics is for loooosers (and those who want reliable e-commerce conversion data per site search term, natch). You may have been running your Analytics account for several years now, dipping in and out, checking traffic levels, seeing what’s popular… and that’s about it. Google Analytics provides so much more than that, but the number of reports available can often intimidate users, and documentation and case studies on their use are minimal at best. Let’s start! Setting up your Analytics profile Before we plough on, I just want to run through a quick checklist to make sure some basic settings have been enabled for your profile. If you haven’t clicked it, click the big cog on the top-right of Google Analytics and we’ll have a poke about. If you have an e-commerce site, e-commerce tracking has been enabled
 If your site has a search function, site search tracking has been enabled. Query string parameters that you do not want tracked as separate pages have been excluded (for example, any parameters needed for your platform to function, otherwise you’ll get multiple entries for the same page appearing in your reports) Filters have been enabled on your main profile to exclude your office IP address and any IPs of people who frequently access the site for work purposes. In decent numbers they tend to throw data off a tad.
You may also find the need to set up multiple profiles prefiltered for specific audience segments. For example, at Lovehoney we have seventeen separate profiles that allow me quick access to certain countries, devices and traffic sources without having to segment first. You’ll also find load time for any complex reports much improved. Use the same filter screen as above to set up a series of profiles that only include, say, mobile visits, or UK visitors, so you can quickly analyse important segments. Matt, what’s a segment? A segment is a subsection of your visitor base, which you define and then call on in reports to see specific data for that subsection. For example, in this report I’ve defined two segments, the first for IE6 users and the second for IE7. Segments are easily created by clicking the Advanced Segments tabs at the top of any report and clicking +New Custom Segment. What does your site do? Understanding the goals of your site is an oft-covered topic, but it’s necessary not just to form a better understanding of your business and prioritize your time. Understanding what you wish visitors to do on your site translates well into a goal-driven analytics package like Google Analytics. Every site exists essentially to sell something, either financially through e-commerce, or to sell an idea or impart information, get people to download a CV or enquire about a service, or to sell space on that website to advertisers. If the site did not provide a positive benefit to its owners, it would not have a reason for being. Once you have understood the reason why you have a site, you can map that reason on to one of the three goal types Google Analytics provides. E-commerce This conversion type registers transactions as part of a sales process which requires a monetary value, what products have been bought, an SKU (stock keeping unit), affiliation (if you’re then attributing the sale to a third party or franchise) and so on. The benefit of e-commerce tracking is not only assigning non-arbitrary monetary value to the behaviour of visitors on your site, as well as being able to see ancillary costs such as shipping, but also seeing product-level information, like which products are preferred from various channels, popular categories, and so on. However, I find the e-commerce tracking options also useful for non-e-commerce sites. For example, if you’re offering downloads or subscriptions and having an email address or user’s details is worth something to you, you can set up e-commerce tracking to understand how much value your site is bringing. For example, an email address might be worth 20p to you, but if it also includes a name it’s worth 50p. A contact telephone number is worth £2, and so on. Page goals Page goals, unsurprisingly, track a visit to a page (often with a sequence of pages leading up to that page). This is what’s referred to as a goal funnel, and is generally used to track how visitors behave in a multistep checkout. Interestingly, the page doesn’t have to actually exist. For example, if you have a single page checkout, you can register virtual page views using trackPageview() when a visitor clicks into a particular section of the checkout or other form (there’s a short sketch of this below). If your site is geared towards getting someone to a particular page, but where there isn’t a transaction (for example, a subscription page) this is for you. There are also behavioural goals, such as time on site and number of pages viewed, which are geared towards sites that make money from advertising.
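To make that virtual page view idea concrete, here’s a minimal sketch using the asynchronous ga.js snippet of the era (my own illustration, assuming the standard tracking snippet is already on the page; the path is invented):

// report a virtual page view when a visitor reaches the payment
// section of a single-page checkout
_gaq.push(['_trackPageview', '/checkout/payment']);

A goal or funnel step can then point at /checkout/payment as if it were a real page.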
But, going back to the page goals, these can be abstracted using regular expressions, meaning that you can define a funnel based on page type rather than having to set individual folders. In this example, I’ve created regexes for the main page types on my site, so I can create a wide funnel that captures visitors from where they enter through to checkout. Events Event tracking registers a predefined event, such as playing a video, or some interaction that can trigger JavaScript, such as a Tweet This button. Events can then be triggered using the trackEvent() call. If you want someone to complete watching a video, you would code your player to fire trackEvent() upon completion. While I don’t use events as goals, I use events elsewhere to see how well a video play aids conversion. This not only helps me justify the additional spend on creating video content, but also quickly highlights which videos are underperforming as sales tools. What a visitor can tell you
Now you have some proper goals set up, we can start to see how changes in content (on-site and external) affect those goals. Ultimately, when a visitor comes to your site, they bring information with them: where they came from (a search engine – including: keyword searched for; a referral; direct; affiliate; or ad campaign) demographics (country; whether they’re new or returning, within thirty days) technical information (browser; screen size; device; bandwidth) site-specific information (landing page; next click; previous values assigned to them as custom variables*) * A note about custom variables. There’s no hope in hell that I can cover custom variables in this article. Go research them. Custom variables are the single best way to hack Google Analytics and bend it to your will. Custom variables allow you to record anything you want about a visitor, which that visitor will then carry around with them between visits. It’s also great for plugging other services into Google Analytics (as shown by the marvelous way Visual Website Optimizer allows you to track and segment tests within the GA interface). Just make sure not to breach the terms of service, eh? CSI your website Police procedural TV shows are all the same: the investigators are called to a crime and come across a clue; there’s then an autopsy; new evidence leads them to a new location; they find a new clue; they put two and two together; they solve the mystery. This is your life now. Exciting! So, now you’re gathering a wealth of information about what sort of people visit your site, what they do when they’re there, and what eventually gets them to drive value to you. It’s now your job to investigate all these little clues to see which types of people drive the most value, and what you can change to improve it. Maybe not that exciting. However, Google Analytics comes pre-armed with extensive reports for you to delve into. As an e-commerce guy (as opposed to a page goal guy) my day pretty much follows the pattern below. Look at e-commerce conversion rate by traffic source compared to the same day in the previous week and previous month. As ours is an e-commerce site, we have weekly and monthly trends. A big spike on Sundays and Mondays, and payday towards the end of the month is always good; on the third week of a month there tends to be a lull. Spend time letting your Google Analytics data brew, understand your own trends and patterns, and you’ll start to get a feel for when something isn’t quite right. Traffic Sources → Sources → All Traffic Look at the conversion rate by landing page for any traffic source that feels significantly different to what’s expected. Check bounce rates, drill down to likely landing pages and check search keyword or referral site to see if it’s a particular subset of visitor. You can do this by clicking Secondary Dimension and choosing Keyword or Source. If it’s direct, choose Visitor Type to break down by new or returning visitor. Content → Site Content → Landing Pages I then tend to flip into Content Drilldown to see what the next clicks were from those landing pages, and whether they changed significantly to the date I’m comparing with. If they have, that’s usually an indicator of changed content (or its relevancy). Remember, if a bunch of people have found their way to your page via a method you’re not expecting (such as a mention on a Spanish radio station – this actually happened to me once), while the content hasn’t changed, the relevancy of it to the audience may have. 
Content → Site Content → Content Drilldown Once I have an idea of what content was consumed, and whether it was relevant to the user, I then look at the visitor specifics, such as browser or demographic data, to see again whether the change was limited to a specific subset. Site speed, for example, is normally a good factor towards bounce rate, so compare that with previous data as well. Now, to be investigating at this level you still need a serious amount of data, in order to tell what’s a significant change or not. If you’re struggling with a small number of visitors, you might find reporting on a weekly or fortnightly basis more appropriate. However, once you’ve looked into the basics of why changes happen to the value of your site, you’ll soon find yourself limited by the reports offered in Standard Reporting. So, it’s time to build your own. Hooray! Custom reporting Google Analytics provides the tools to build reports specific to the types of investigations you frequently perform. Welcome to my world. Custom reports are quite simple to build: first, you determine the metric you want the report to cover (number of visitors, bounce rate, conversion rate, and so on), then choose a set of dimensions that you’d like to segment the report by (say, the source of the traffic, and whether they were new or returning users). You can filter the report, including or excluding particular dimension values, and you can assign the report to any of the profiles you created earlier. In the example below, I’ve created a report that shows me visits and conversion rate for any Google traffic that landed directly only on a product page. I can then drill down on each product page to see the complete phrases used to search. I can use this information in two ways: I can see which products aren’t converting, which shows me where I need to work harder on merchandising. I can give this information to my content team, showing them the actual phrases visitors used to reach our product content, helping them write better targeted product descriptions. The possibilities here are nearly endless, but here are a few examples of reports I find useful: Non-brand inbound search By creating a report that shows inbound search traffic which doesn’t include your brand, you can see more clearly the behaviour of visitors most likely to be unfamiliar with your site and brand values, without having to rely on the clumsy new or returning demographic data. Traffic/conversion/sales by hour This is pure stats porn, but actually more useful than real-time data. By seeing this data broken down at an hourly level, you can not only compare the current day to previous days, but also see the best performing times for email broadcasts and tweets. Visits, load time, conversion and sales by page and browser Page speed can often kill conversion rates, but it’s difficult to prove the value of focusing on speed in monetary terms. Having this report to hand helps me drive Operation Greenbelt, our effort to get into the sub-1.5 second band in Google Webmaster Tools. Useful things you can’t do in custom reporting If you have a search function on your website, then Conversion Rate and Products Bought by Site Search Term is an incredibly useful report that allows you to measure the effectiveness of your site’s search engine at returning products and content related to the search term used.
By including the products actually bought by visitors who searched for each term, you can use this information to better searchandise these results, escalating high propensity and high value products to the top of the results. However, it’s not possible to get this information out of new Google Analytics. Try it: select the following in the report builder:

Metrics: total unique searches; e-commerce or goal conversion rate
Dimensions: search term; product

You’ll see that the data returned is a little nonsensical, though a 2,000% conversion rate would be nice. However, you can get more accurate information using advanced segments. By creating individual segments to define users who have searched for a particular term, you can run the sales performance and product performance reports as normal. It’s laborious, but it teaches a good lesson: data that seems inaccessible can normally be found another way! Reporting infrastructure Now that you have a series of reports that you can refer to on a daily or weekly basis, it’s time to put together a regular reporting infrastructure. Even if you’re not reporting to someone, having a set of key performance indicators that you can use to see how your performance is improving over time allows you to set yourself business goals on a monthly and annual basis. For my own reporting, I take some high-level metrics (such as visitors, conversion rate and average order value), and segment them by traffic source and, separately, landing page. These statistics I record weekly and report:

current week compared with previous week
same week previous year (if available)
4 week average
13 week average
52 week average (if available)

This takes into account weekly, monthly, seasonal and annual trends, and gives you a much clearer view of your performance. Getting data in other ways If you’re using Google Analytics frequently, with any large site you’ll come to a couple of conclusions: Doing any kind of practical comparative analysis is unwieldy. Boy, Google Analytics is slow! As you work with bigger datasets and put together more complex queries, you’ll see the loading graphic more than you’ll see actual data. So when you reach that level, there are ways to completely bypass the Google Analytics interface altogether, and get data into your own spreadsheet application for manipulation. Data Feed Query Explorer If you just want to pull down some quick statistics but still use complex filters and exotic metric and dimension combinations, the Data Feed Query Explorer is the quickest way of doing so. Authenticate with your Google Analytics account, select a profile, and you can start selecting metrics and dimensions to be generated in a handy, selectable tabulated format. Google Analytics API If you’re feeling clever, you can bypass having to copy and paste data by pulling it directly into Excel, Google Docs or your own application using the Google Analytics API. There are several scripts and plugins available to do this. I use Automate Analytics Google Docs code (there’s also a paid version that simplifies setup and creates some handy reports for you). New shiny things Well, now that that’s over, I can show you some cool stuff. Well, at least it’s cool to me. Google Analytics is being constantly improved and new functionality is introduced nearly every month. Here are a couple of my favourites. Multichannel attribution Not every visitor converts on your site on the first visit. They may not even do so on the second visit, or third.
If they convert on the fourth visit, but each time they visit they do so via a different channel (for example, Search PPC, Search Organic, Direct, Email), which channel do you attribute the conversion to? The last channel, or the first? Dilemma! Google now has a Multichannel Attribution report, available in the Conversion category, which shows how each channel assists in converting, the overlap between channels, and where in the process that channel was important. For example, you may have analysed your blog traffic from Twitter and become disheartened that not many people were subscribing after visiting from Twitter links, but instead your high-value subscribers were coming from natural search. On the face of it, you’d spend less time tweeting, but a multichannel report may tell you that visitors first arrived via a Twitter link and didn’t subscribe, but then came back later after searching for your blog name on Google, after which they did. Don’t pack Twitter in yet! Visitor and goal flow Visitor and goal flow are amazing reports that help you visualize the flow of traffic through your site and, ultimately, into your checkout funnel or similar goal path. Flow reports are perfect for understanding drop-off points in your process, as well as what the big draws are on each page. Previously, if you wanted to visualize this data you had to set up several abstracted microgoals and chain them together in custom reports. Frankly, it was a pain in the arse and burned through your precious and limited goal allocation. Visitor flow bypasses all that and produces the report in an interactive flow diagram. While it doesn’t show you the holy grail of conversion likelihood by each path, you can segment visitor flow so that you can see very specifically how different segments of your visitor base behave. Go play with it now!",2011,Matt Curry,mattcurry,2011-12-18T00:00:00+00:00,https://24ways.org/2011/getting-the-most-out-of-google-analytics/,business 252,Turn Jekyll up to Eleventy,"Sometimes it pays not to overcomplicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper — and more secure — option. The JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it’s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013. By combining three approachable languages — Markdown for content, YAML for data and Liquid for templating — the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it’s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself “this, but in Node”. Thankfully, one of Santa’s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree. Introducing Eleventy Eleventy is a more flexible alternative to Jekyll.
Besides being written in Node, it’s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains). As content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I’ve discovered, there are a few gotchas. If you’ve been considering making the switch, here are a few tips and tricks to help you on your way¹. Note: Throughout this article, I’ll be converting Matt Cone’s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:

git clone https://github.com/mattcone/markdown-guide.git
cd markdown-guide

Before you start If you’ve used tools like Grunt, Gulp or Webpack, you’ll be familiar with Node.js but, if you’ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now’s the time to install Node.js and set up your project to work with its package manager, NPM: Install Node.js: Mac: If you haven’t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node. Windows: Download the Windows installer from the Node.js website and follow the instructions. Initiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems’s Gemfile, this file contains a list of your project’s third-party dependencies. If you’re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn’t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too. Installing Eleventy With Node.js installed and your project set up to work with NPM, we can now install Eleventy as a dependency:

npm install --save-dev @11ty/eleventy

If you open package.json you should see the following:

…
""devDependencies"": {
  ""@11ty/eleventy"": ""^0.6.0""
}
…

We can now run Eleventy from the command line using NPM’s npx command. For example, to convert the README.md file to HTML, we can run the following:

npx eleventy --input=README.md --formats=md

This command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition. Configuration Whereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we’ll see later on.
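As a quick taste of what a scripted configuration makes possible, here’s a hedged sketch (entirely my own illustration; the environment variable and the dist folder are invented) that switches the output directory at build time. We’ll create the real configuration file next:

module.exports = function(eleventyConfig) {
  // write production builds to a different folder
  const isProduction = process.env.ELEVENTY_ENV === 'production';
  return {
    dir: {
      input: './',
      output: isProduction ? './dist' : './_site'
    }
  };
};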
We’ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:

module.exports = function(eleventyConfig) {
  return {
    dir: {
      input: ""./"",     // Equivalent to Jekyll's source property
      output: ""./_site"" // Equivalent to Jekyll's destination property
    }
  };
};

A few other things to bear in mind: Whereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore). By default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you’ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins. Layouts One area where Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub). Wanting to keep our layouts together, we’ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy’s config:

module.exports = function(eleventyConfig) {
  // Aliases are in relation to the _includes folder
  eleventyConfig.addLayoutAlias('about', 'layouts/about.html');
  eleventyConfig.addLayoutAlias('book', 'layouts/book.html');
  eleventyConfig.addLayoutAlias('default', 'layouts/default.html');

  return {
    dir: {
      input: ""./"",
      output: ""./_site""
    }
  };
}

Determining which template language to use Eleventy will transform Markdown (.md) files using Liquid by default, but we’ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do. Variables On the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating. Site variables Alongside build settings, Jekyll lets you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:

title: ""Markdown Guide""
url: https://www.markdownguide.org
baseurl: """"
repo: http://github.com/mattcone/markdown-guide
comments: false
author:
  name: ""Matt Cone""
og_locale: ""en_US""

Eleventy’s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.

{
  ""title"": ""Markdown Guide"",
  ""url"": ""https://www.markdownguide.org"",
  ""baseurl"": """",
  ""repo"": ""http://github.com/mattcone/markdown-guide"",
  ""comments"": false,
  ""author"": {
    ""name"": ""Matt Cone""
  },
  ""og_locale"": ""en_US""
}

Page variables The table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.)
get prefixed with the page.* namespace:

Jekyll        Eleventy
page.url      page.url
page.date     page.date
page.path     page.inputPath
page.id       page.outputPath
page.name     page.fileSlug
page.content  content
page.title    title
page.foobar   foobar

When iterating through pages, frontmatter values are available via the data object while content is available via templateContent:

Jekyll        Eleventy
item.url      item.url
item.date     item.date
item.path     item.inputPath
item.name     item.fileSlug
item.id       item.outputPath
item.content  item.templateContent
item.title    item.data.title
item.foobar   item.data.foobar

Ideally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data. Pagination variables Whereas Jekyll’s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:

Jekyll                        Eleventy
paginator.page                pagination.pageNumber
paginator.per_page            pagination.size
paginator.posts               pagination.items
paginator.previous_page_path  pagination.previousPageHref
paginator.next_page_path      pagination.nextPageHref

Filters Although Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few — more than can be covered by this article — but you can replicate them by using Eleventy’s addFilter configuration option. Let’s convert two used by our Markdown Guide: jsonify and where. The jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:

// {{ variable | jsonify }}
eleventyConfig.addFilter('jsonify', function (variable) {
  return JSON.stringify(variable);
});

Jekyll’s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:

{{ site.members | where: ""graduation_year"",""2014"" }}

To account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:

// {{ array | where: key,value }}
eleventyConfig.addFilter('where', function (array, key, value) {
  return array.filter(item => {
    const keys = key.split('.');
    const reducedKey = keys.reduce((object, key) => {
      return object[key];
    }, item);

    return (reducedKey === value ? item : false);
  });
});

There’s quite a bit going on within this filter, but I’ll try to explain. Essentially we’re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it’s removed. Phew! Includes As with filters, Jekyll provides a set of tags that aren’t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you’re not passing variables to your includes, everything should work without modification.
Otherwise, note that whereas with Jekyll you would do this:

{% include include.html value=""key"" %}
{{ include.value }}

in Eleventy, you would do this:

{% include ""include.html"", value: ""key"" %}
{{ value }}

A downside of Shopify’s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments. Tweaking Liquid You may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS’s dynamicPartials option to false. Additionally, Eleventy doesn’t support the include_relative tag, meaning you can’t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option. Thankfully, Eleventy allows us to pass options to LiquidJS:

eleventyConfig.setLiquidOptions({
  dynamicPartials: false,
  root: [
    '_includes',
    '.'
  ]
});

Collections Jekyll’s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way. Collections in Jekyll In Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. Our Markdown Guide has two collections:

collections:
  - basic-syntax
  - extended-syntax

These correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:

{% for syntax in site.extended-syntax %}
  {{ syntax.title }}
{% endfor %}

Collections in Eleventy There are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:

---
title: Strikethrough
syntax-id: strikethrough
syntax-summary: ""~~The world is flat.~~""
tag: extended-syntax
---

We can then iterate over tagged content like this:

{% for syntax in collections.extended-syntax %}
  {{ syntax.data.title }}
{% endfor %}

Eleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):

eleventyConfig.addCollection('basic-syntax', collection => {
  return collection.getFilteredByGlob('_basic-syntax/*.md');
});

eleventyConfig.addCollection('extended-syntax', collection => {
  return collection.getFilteredByGlob('_extended-syntax/*.md');
});

We can extend this further. For example, say we wanted to sort a collection by the display_order property in our document’s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript’s sort method to sort the result:

eleventyConfig.addCollection('example', collection => {
  return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {
    return a.data.display_order - b.data.display_order;
  });
});

Hopefully, this gives you just a hint of what’s possible using this approach. Using directory data to manage defaults By default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property.
For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:

---
title: Lists
syntax-id: lists
api: ""no""
permalink: /basic-syntax/lists.html
---

Again, this is probably not something we want to manage on a file-by-file basis but again, Eleventy has features that can help: directory data and permalink variables. For example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:

{
  ""layout"": ""syntax"",
  ""tag"": ""basic-syntax"",
  ""permalink"": ""basic-syntax/{{ title | slug }}.html""
}

However, Markdown Guide doesn’t publish syntax examples at individual permanent URLs; it merely uses content files to store data. So let’s change things around a little. No longer tied to Jekyll’s rules about where collection folders should be saved and how they should be labelled, we’ll move them into a folder called _content:

markdown-guide
└── _content
    ├── basic-syntax
    ├── extended-syntax
    ├── getting-started
    └── _content.json

We will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:

{
  ""permalink"": false
}

Static files Eleventy only transforms files whose template language it’s familiar with. But often we may have static assets that don’t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:

module.exports = function(eleventyConfig) {
  …

  // Copy the `assets` directory to the compiled site folder
  eleventyConfig.addPassthroughCopy('assets');

  return {
    dir: {
      input: ""./"",
      output: ""./_site""
    },
    passthroughFileCopy: true
  };
}

Final considerations Assets Unlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts — we have plenty of choices in that department already. If you’ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, options which are beyond the scope of this article, sadly. Publishing to GitHub Pages One of the benefits of Jekyll is its deep integration with GitHub Pages. To publish an Eleventy generated site — or any site not built with Jekyll — to GitHub Pages can be quite involved, but typically involves copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It’s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere! Going one louder If you’ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder why it can be prudent to avoid jumping aboard bandwagons. While it’s fun to try new software and emerging technologies, doing so can require a lot of work and compromise.
For all of Eleventy’s appeal, it’s only a year old so has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it. I moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that’s perfectly fine! But these go to 11. Information provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5 ↩",2018,Paul Lloyd,paulrobertlloyd,2018-12-11T00:00:00+00:00,https://24ways.org/2018/turn-jekyll-up-to-eleventy/,content 81,Science!,"Sometimes we want to capture people’s attention at a glance to communicate something fast. At other times we want to have the interface fade away into the background, letting people paint pictures in their minds with our words (if you’ll forgive a little flowery festive flourish). I tend to distinguish between these two broad objectives as designing for impact on the one hand, and designing for immersion on the other. What defines them is interruption. Impact needs an attention-grabbing interruption. Immersion requires us to remove interruption from the interface. Careful design deliberately interrupts but doesn’t accidentally disrupt. If that seems to make sense to you, then you’ll find the following snippets of science as useful as I did. Saccades and fixations As you’re reading this your eyes are skipping along the lines in tiny jumps. During each jump everything is blurred. Each jump ends in a small pause so your brain can take a snapshot of the letters. It arranges them into words, and then parses out the meaning — fast — in around a quarter of a second. The jumps are called saccades. The pauses are called fixations. Sometimes we take regressive saccades, skipping back to reread. There’s a simple example in the excellent little book, Detail in Typography, by Jost Hochuli. If you want to explore the science of reading in much more depth, I recommend the excellent paper, “The Science of Word Recognition”, by Dr Kevin Larson of Microsoft. To design for legibility and readability is to design for saccades and fixations. It’s the craft of making it easy for people’s brains to extract meaning, using techniques like good contrast, font size, spacing and structure, and only interrupting the reading experience deliberately. Scan paths At some point when visiting 24 ways you probably scanned the screen to get orientated. The journey your eyes took is known as a scan path. Scan paths are made up of saccades and fixations. Right now you’re following a scan path as you read, along one line, and down to the next. This is a map of the scan paths found by Olivier Le Meur from observing people looking at Rembrandt’s Leçon d’anatomie: For websites, the scan path is a little different. This is an aggregate scan path of Google from LC Technologies: The average shape of a website scan path becomes clearer in this average scan path taken by forty-six people during research by the Poynter Institute, the Estlow Center for Journalism & New Media, and Eyetools: Just like when we read text arranged left to right in a vertical column, scan paths follow a roughly Z-shaped pattern from the top left to bottom right. 
Sometimes we skip back to reread a word or sentence, or glance again at a specific element, but the Z-shaped scan path persists. Designing for scan paths is to organise content to help people move through an interface to get orientated, and to read. The elements that are important enough to need impact must interrupt the scan path and clearly call attention to themselves. However, they don’t always need to clip people round the ear from multiple directions at once to get attention. It helps to list elements by importance. That gives us an interruption hierarchy to work with. Elements can then interrupt the design with degrees of contrast to the rest of the content using either positioning, treatment, or both. Ta-da! Impact achieved, but gently. No clips round the ear required. Swinging mood Human beings are resilient. Among the immersion and occasional interruptions, we even like a little disruption, especially if it’s absurd and funny. The Ling’s Cars website proves it. In fact, we’re so resilient that we can work around all kinds of mayhem to get a seemingly simple task done. In one study, “The Aesthetics of Reading” (PDF, 480Kb), Dr Kevin Larson of Microsoft and Dr Rosalind Picard of MIT explored the effect of good typography on mood. Two versions of the New Yorker ePeriodical were created. One was typeset well and the other poorly. They engaged twenty volunteers — half male, half female — and showed the good version to half of the participants. The other half saw the poor version. The good doctors found that, “there are important differences between good and poor typography that appear to have little effect on common performance measures such as reading speed and comprehension.” In short, good typography didn’t help people read faster or comprehend better. Oh. On the face of it that seems to invalidate what we designers do. Hold your horses, though! They also found that “the participants who received the good typography performed better on relative subjective duration and on certain cognitive tasks”, and that “good typography induces a good mood.” This means that even though there were no actual differences in reading speed and comprehension, the people who read the version with good typography thought that it took less time to read, and were induced into a good mood by doing so. Not only that, but by being in a good mood, people were more capable of completing creative tasks faster. That was a revelation to me. It means that the study showed there is a positive, measurable, emotional and perceptual benefit to good typography and design. To paraphrase: time and tasks fly when you’re having fun! Source: Nationaal Archief of the Netherlands: Cheering man after the first goal, Netherlands vs. Belgium, Amsterdam, 1931. So, among all my talk of saccades, fixations, scan paths and typesetting, there is science, and the science helps us qualify our design decisions when we need to, and do our jobs better. The science helps us understand how people will interact with our work, and what the actual benefits are for them, and the products or organisations we serve. Good design equals a subjectively quicker experience, a good mood, objectively faster completion of tasks, and happiness for everyone. Thank you, science!",2012,Jon Tan,jontan,2012-12-24T00:00:00+00:00,https://24ways.org/2012/science/,design 219,Speed Up Your Site with Delayed Content,"Speed remains one of the most important factors influencing the success of any website, and the first rule of performance (according to Yahoo!) 
is reducing the number of HTTP requests. Over the last few years we’ve seen techniques like sprites and combo CSS/JavaScript files used to reduce the number of HTTP requests. But there’s one area where large numbers of HTTP requests are still a fact of life: the small avatars attached to the comments on articles like this one. Avatars Many sites like 24 ways use a fantastic service called Gravatar to provide user images. As a user, you can sign up to Gravatar, give them your e-mail address, and upload an image to represent you. Sites can then include your image by generating a one way hash of your e-mail address and using that to build an image URL. For example, the markup for the comments on this page looks something like this:

<div class=""comment"">
  <h4>
    <a href=""http://allinthehead.com/"">
      <img width=""100"" height=""100"" src=""http://www.gravatar.com/avatar/13734b0cb20708f79e730809c29c3c48?s=100"" alt="""">
      Drew McLellan
    </a>
  </h4>
  <p>This is a great article!</p>
</div>

    The Gravatar URL contains two parts. 100 is the size in pixels of the image we want. 13734b0cb20708f79e730809c29c3c48 is an MD5 digest of Drew’s e-mail address. Using MD5 means we can request an image for a user without sharing their e-mail address with anyone who views the source of the page. So what’s wrong with avatars? The problem is that a popular article can easily get hundreds of comments, and every one of them means another image has to be individually requested from Gravatar’s servers. Each request is small and the Gravatar servers are fast but, when you add them up, it can easily add seconds to the rendering time of a page. Worse, they can delay the loading of more important assets like the CSS required to render the main content of the page. These images aren’t critical to the page, and don’t need to be loaded up front. Let’s see if we can delay loading them until everything else is done. That way we can give the impression that our site has loaded quickly even if some requests are still happening in the background. Delaying image loading The first problem we find is that there’s no way to prevent Internet Explorer, Chrome or Safari from loading an image without removing it from the HTML itself. Tricks like removing the images on the fly with JavaScript don’t work, as the browser has usually started requesting the images before we get a chance to stop it. Removing the images from the HTML means that people without JavaScript enabled in their browser won’t see avatars. As Drew mentioned at the start of the month, this can affect a large number of people, and we can’t completely ignore them. But most sites already have a textual name attached to each comment and the avatars are just a visual enhancement. In most cases it’s OK if some of our users don’t see them, especially if it speeds up the experience for the other 98%. Removing the images from the source of our page also means we’ll need to put them back at some point, so we need to keep a record of which images need to be requested. All Gravatar images have the same URL format; the only thing that changes is the e-mail hash. Storing this is a great use of HTML5 data attributes. HTML5 data what? Data attributes are a new feature in HTML5. The latest version of the spec says: A custom data attribute is an attribute in no namespace whose name starts with the string “data-”, has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z). […] Custom data attributes are intended to store custom data private to the page or application, for which there are no more appropriate attributes or elements. These attributes are not intended for use by software that is independent of the site that uses the attributes. In other words, they’re attributes of an HTML element that start with “data-” which you can use to share data with scripts running on your site. They’re great for adding small bits of metadata that don’t fit into an existing markup pattern the way microformats do. Let’s see this in action Take a look at the markup for comments again:

<div class=""comment"">
  <h4>
    <a href=""http://allinthehead.com/"">
      <img width=""100"" height=""100"" src=""http://www.gravatar.com/avatar/13734b0cb20708f79e730809c29c3c48?s=100"" alt="""">
      Drew McLellan
    </a>
  </h4>
  <p>This is a great article!</p>
</div>

Let’s replace the <img> element with a data-gravatar-hash attribute on the <a> element:

<div class=""comment"">
  <h4>
    <a href=""http://allinthehead.com/"" data-gravatar-hash=""13734b0cb20708f79e730809c29c3c48"">
      Drew McLellan
    </a>
  </h4>
  <p>This is a great article!</p>
</div>

Once we’ve done this, we’ll need a small bit of JavaScript to find all these attributes, and replace them with images after the page has loaded. Here’s an example using jQuery:

$(window).load(function() {
  $('a[data-gravatar-hash]').prepend(function(index){
    var hash = $(this).attr('data-gravatar-hash');
    return '<img width=""100"" height=""100"" src=""http://www.gravatar.com/avatar/' + hash + '?s=100"">';
  });
});

This code waits until everything on the page is loaded, then uses jQuery.prepend to insert an image into every link containing a data-gravatar-hash attribute. It’s short and relatively simple, but in tests it reduced the rendering time of a sample page from over three seconds to well under one. Finishing touches We still need to consider the appearance of the page before the avatars have loaded. When our script adds extra content to the page it will cause a browser reflow, which is visually annoying. We can avoid this by using CSS to reserve some space for each image before it’s inserted into the HTML:

#comments div {
  padding-left: 110px;
  min-height: 100px;
  position: relative;
}
#comments div h4 img {
  display: block;
  position: absolute;
  top: 0;
  left: 0;
}

In a real world example, we’ll also find that the HTML for a comment is more varied as many users don’t provide a web page link. We can make small changes to our JavaScript and CSS to handle this case. Put this all together and you get this example. Taking this idea further There’s no reason to limit this technique to sites using Gravatar; we can use similar code to delay loading any images that don’t need to be present immediately. For example, this year’s redesigned Flickr photo page uses a “data-defer-src” attribute to describe any image that doesn’t need to be loaded straight away, including avatars and map tiles. You also don’t have to limit yourself to loading the extra resources once the page loads. You can get further bandwidth savings by waiting until the user takes an action before downloading extra assets. Amazon has taken this tactic to the extreme on its product pages – extra content is loaded as you scroll down the page. So next time you’re building a page, take a few minutes to think about which elements are peripheral and could be delayed to allow more important content to appear as quickly as possible.",2010,Paul Hammond,paulhammond,2010-12-18T00:00:00+00:00,https://24ways.org/2010/speed-up-your-site-with-delayed-content/,ux 148,Typesetting Tables,"Tables have suffered in recent years on the web. They were used for laying out web pages. Then, following the Web Standards movement, they’ve been renamed by the populace as ‘data tables’ to ensure that we all know what they’re for. There have been some great tutorials for designing tables using CSS for presentation and focussing on the semantics in the displaying of data in the correct way. However, typesetting tables is a subtle craft that has hardly had a mention. Table design can often end up being a technical exercise. What data do we need to display? Where is the data coming from and what form will it take? When was the last time you heard someone talk about lining numerals? Or designing to the reading direction? Tables are not read like sentences When a reader looks at, and tries to understand, tabular data, they’re doing a bunch of things at the same time. Generally, they’re task based; they’re looking for something. They are reading horizontally AND vertically Reading a table is not like reading a paragraph in a novel, and therefore shouldn’t be typeset in the same way.
Designing tables is information design; it’s functional typography—it’s not a time for eye candy. Typesetting tables Typesetting great looking tables is largely an exercise in restraint. Minimal interference with the legibility of the table should be at the forefront of any designer’s mind. When I’m designing tables I apply some simple rules:

Plenty of negative space
Use the right typeface
Go easy on the background tones, unless you’re giving reading direction visual emphasis
Design to the reading direction

By way of explanation, here are those rules as applied to the following badly typeset table. Your default table This table is a mess. There is no consideration for the person trying to read it. Everything is too tight. The typeface is wrong. It’s flat. A grim table indeed. Let’s see what we can do about that. Plenty of negative space The badly typeset table has been set with default padding. There has been little consideration for the ascenders and descenders in the type interfering with the many horizontal rules. The first thing we do is remove most of the lines, or rules. You don’t need them – the data in the rows forms its own visual rules. Now, with most of the rules removed, the ones that remain mean something; they are indicating some kind of hierarchy to help the reader understand what the different table elements mean – in this case the column headings. Now we need to give the columns and rows more negative space. Note the framing of the column headings. I’m giving them more room at the bottom. This negative space is active—it’s empty for a reason. The extra air in here also gives more hierarchy to the column headings. Use the right typeface The default table is set in a serif typeface. This isn’t ideal for a couple of reasons. This serif typeface has a standard set of text numerals. These dip below the baseline and are designed for using figures within text, not on their own. What you need to use is a typeface with lining numerals. These align to the baseline and are more legible when used for tables. Sans serif typefaces generally have lining numerals. They are also arguably more legible when used in tables. Go easy on the background tones, unless you’re giving reading direction visual emphasis We’ve all seen background tones on tables. They have their use, but my feeling is that use should be functional and not decorative. If you have a table that is long, but only a few columns wide, then alternate row shading isn’t that useful for showing the different lines of data. It’s a common misconception that alternate row shading is to increase legibility on long tables. That’s not the case. Shaded rows are to aid horizontal reading across multiple table columns. On wide tables they are incredibly useful for helping the reader find what they want. Background tone can also be used to give emphasis to the reading direction. If we want to emphasise a column, that can be given a background tone. Hierarchy As I said earlier, people may be reading a table vertically, and horizontally in order to find what they want. Sometimes, especially if the table is complex, we need to give them a helping hand. Visually emphasising the hierarchy in tables can help the reader scan the data. Column headings are particularly important. Column headings are often what a reader will go to first, so we need to help them understand that the column headings are different to the stuff beneath them, and we also need to give them more visual importance.
We can do this by making them bold, giving them ample negative space, or by including a thick rule above them. We can also give the row titles the same level of emphasis. In addition to background tones, you can give emphasis to reading direction by typesetting those elements in bold. You shouldn’t use italics—with sans serif typefaces the difference is too subtle. So, there you have it. A couple of simple guidelines to make your tables cleaner and more readable.",2007,Mark Boulton,markboulton,2007-12-07T00:00:00+00:00,https://24ways.org/2007/typesetting-tables/,design 55,How Tabs Should Work,"Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They’ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented. So this post is my definition of how a tabbing system should work, and one approach of implementing that. But… tabs are easy, right? I’ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system:

var tabs = $('.tab').click(function () {
  tabs.hide().filter(this.hash).show();
}).map(function () {
  return $(this.hash)[0];
});

$('.tab:first').click();

Simple, right? Nearly fits in a tweet (ignoring the whole jQuery library…). Still, it’s riddled with problems that make it a far from perfect solution. Requirements: what makes the perfect tab?

1. All content is navigable and available without JavaScript (crawler-compatible and low JS-compatible).
2. ARIA roles.
3. The tabs are anchor links that:
   - are clickable
   - have block layout
   - have their href pointing to the id of the panel element
   - use the correct cursor (i.e. cursor: pointer).
4. Since tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open.
5. Right-clicking (and Shift-clicking) doesn’t cause the tab to be selected.
6. Native browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place).

The first three points are all to do with the semantics of the markup and how the markup has been styled. I think it’s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work. The last three points are JavaScript problems. Let’s investigate that. The shitmus test Like a litmus test, here’s a couple of quick ways you can tell if a tabbing system is poorly implemented: Change tab, then use the Back button (or keyboard shortcut) and it breaks The tab isn’t a link, so you can’t open it in a new tab These two basic things are, to me, the bare minimum that a tabbing system should have. Why is this important? The people who push their so-called native apps on users can’t have more reasons why the web sucks. If something as basic as a tab doesn’t work, obviously there’s more ammo to push a closed native app or platform on your users. If you’re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn’t mean don’t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. :breath: URI fragment, absolute URL or query string? A URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel.
A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content. This decision really depends on the context of your tabbing system. For something like GitHub’s tabs to view a pull request, it makes sense that the full URL changes. For our problem though, I want to solve the issue when the page doesn’t do a full URL update; that is, your regular run-of-the-mill tabbing system. I used to be from the school of using the hash to show the correct tab, but I’ve recently been exploring whether the query string can be used. The biggest reason is that multiple hashes don’t work, and comma-separated hash fragments don’t make any sense to control multiple tabs (since it doesn’t actually link to anything). For this article, I’ll keep focused on using a single tabbing system and a hash on the URL to control the tabs. Markup I’m going to assume subcontent, so my markup would look like this (yes, this is a cat demo…):
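Something along these lines – a sketch based on the .tab and .panel class names and the #dizzy hash used in the code that follows; the other panel ids are placeholders:

<ul>
  <li><a class="tab" href="#dizzy">Dizzy</a></li>
  <li><a class="tab" href="#ninja">Ninja</a></li>
  <li><a class="tab" href="#missy">Missy</a></li>
</ul>

<div class="panel" id="dizzy"><!-- panel content --></div>
<div class="panel" id="ninja"><!-- panel content --></div>
<div class="panel" id="missy"><!-- panel content --></div>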
It’s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we’ll see once we’re on to writing the code. URL-driven tabbing systems Instead of making the code responsive to the user’s input, we’re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free. With that in mind, let’s start building up our code. I’ll assume we have the jQuery library, but I’ve also provided the full code working without a library (vanilla, if you will), though it depends on relatively new (polyfillable) tech like classList and dataset (which generally have IE10 and all other browser support). Note that I’ll start with the simplest solution, and I’ll refactor the code as I go along, like in places where I keep calling jQuery selectors. function show(id) { // remove the selected class from the tabs, // and add it back to the one the user selected $('.tab').removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it $('.panel').hide().filter(id).show(); } $(window).on('hashchange', function () { show(location.hash); }); // initialise by showing the first panel show('#dizzy'); This works pretty well for such little code. Notice that we don’t have any click handlers for the user and the Back button works right out of the box. However, there are a number of problems we need to fix: The initialised tab is hard-coded to the first panel, rather than what’s on the URL. If there’s no hash on the URL, all the panels are hidden (and thus broken). If you scroll to the bottom of the example, you’ll find a “top” link; clicking that will break our tabbing system. I’ve purposely made the page long, so that when you click on a tab, you’ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying. From our criteria at the start of this post, we’ve already solved items 4 and 5. Not a terrible start. Let’s solve items 1 through 3 next. Using the URL to initialise correctly and protect from breakage Instead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it’s available. The problem is: what if the hash on the URL isn’t actually for a tab? The solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won’t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. The result of $('.tab') should be cached. So now the code will collect all the tabs, then find the related panels from those tabs, and we’ll use that list to double-check the values we give the show function (during initialisation, for instance).
// collect all the tabs var tabs = $('.tab'); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')); function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', function () { var hash = location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } }); // initialise show(targets.indexOf(location.hash) !== -1 ? location.hash : ''); The core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab. The second breakage we saw in the original demo was that clicking the “top” link would break our tabs. This was due to the hashchange event firing without the code validating the hash that was passed. Now that validation happens, the panels don’t break. So far we’ve got a tabbing system that: Works without JavaScript. Supports right-click and Shift-click (and doesn’t select in these cases). Loads the correct panel if you start with a hash. Supports native browser navigation. Supports the keyboard. The only annoying problem we have now is that the page jumps when a tab is selected. That’s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it’s all for a good cause. Removing the jump to tab You’d be forgiven for thinking you just need to hook a click handler and return false. It’s what I started with. Only that’s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support. There may be another way to solve this, but what follows is the way I found – and it works. It’s just a bit… hairy, as I said. We’re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won’t jump the page. The change involves the following: Add a click handler that removes the id from the target panel, and cache this in a target variable that we’ll use later in hashchange (see point 4). In the same click handler, set the location.hash to the current link’s hash. This is important because it forces a hashchange event regardless of whether the URL actually changed, which prevents the tabs breaking (try it yourself by removing this line). For each panel, put a backup copy of the id attribute in a data property (I’ve called it old-id). When the hashchange event fires, if we have a target value, let’s put the id back on the panel.
These changes result in this final code: /*global $*/ // a temp value to cache *what* we're about to show var target = null; // collect all the tabs var tabs = $('.tab').on('click', function () { target = $(this.hash).removeAttr('id'); // if the URL isn't going to change, then hashchange // event doesn't fire, so we trigger the update manually if (location.hash === this.hash) { // but this has to happen after the DOM update has // completed, so we wrap it in a setTimeout 0 setTimeout(update, 0); } }); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')).each(function () { // keep a copy of what the original el.id was $(this).data('old-id', this.id); }); function update() { if (target) { target.attr('id', target.data('old-id')); target = null; } var hash = window.location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } } function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', update); // initialise if (targets.indexOf(window.location.hash) !== -1) { update(); } else { show(); } This version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add. ARIA roles This article on ARIA tabs made it very easy to get the tabbing system working as I wanted. The tasks were simple: Add a role attribute set to tab for the tabs, and tabpanel for the panels. Set aria-controls on the tabs to point to their related panel (by id). I use JavaScript to add tabindex=0 to all the tab elements. When I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false. When I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false. And that’s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible. Check out the final version (and the non-jQuery version as promised). In conclusion There are a lot of tab implementations out there, and an equal number that break the browsing paradigm and the simple linkability of content. Clearly there’s a special hell for those tab systems that don’t even use links, but I think it’s clear that even in something that’s relatively simple, it’s the small details that make or break the user experience. Obviously there are corners I’ve not explored, like when there’s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that’s for another year!",2015,Remy Sharp,remysharp,2015-12-22T00:00:00+00:00,https://24ways.org/2015/how-tabs-should-work/,code 147,Christmas Is In The AIR,"That’s right, Christmas is coming up fast and there’s plenty of things to do. Get the tree and lights up, get the turkey, buy presents and who knows what else. And what about Santa? He’s got a list. I’m pretty sure he’s checking it twice.
Sure, we could use an existing list-making website or even a desktop widget. But we’re geeks! What’s the fun in that? Let’s build our own to-do list application and do it with Adobe AIR! What’s Adobe AIR? Adobe AIR, formerly codenamed Apollo, is a runtime environment that runs on both Windows and OSX (with Linux support to follow). This runtime environment lets you build desktop applications using Adobe technologies like Flash and Flex. Oh, and HTML. That’s right, you web standards lovin’ maniac. You can build desktop applications that can run cross-platform using the trio of technologies, HTML, CSS and JavaScript. If you’ve tried developing with AIR before, you’ll need to get re-familiarized with the latest beta release as many things have changed since the last one (such as the API and restrictions within the sandbox). To get started To get started in building an AIR application, you’ll need two basic things: The AIR runtime. The runtime is needed to run any AIR-based application. The SDK. The software development kit gives you all the pieces to test your application. Unzip the SDK into any folder you wish. You’ll also want to get your hands on the JavaScript API documentation which you’ll no doubt find yourself getting into before too long. (You can download it, too.) Also of interest, some development environments have support for AIR built right in. Aptana doesn’t have support for beta 3 yet but I suspect it’ll be available shortly. Within the SDK, there are two main tools that we’ll use: one to test the application (ADL) and another to build a distributable package of our application (ADT). I’ll get into this some more when we get to that stage of development. Building our To-do list application The first step to building an application within AIR is to create an XML file that defines our default application settings. I call mine application.xml, mostly because Aptana does that by default when creating a new AIR project. It makes sense though and I’ve stuck with it. Included in the templates folder of the SDK is an example XML file that you can use. The first key part to this, after specifying things like the application ID, version, and filename, is to specify what the default content should be within the content tags. Enter in the name of the HTML file you wish to load. Within this HTML file will be our application. ui.html Create a new HTML document and name it ui.html and place it in the same directory as the application.xml file. The first thing you’ll want to do is copy over the AIRAliases.js file from the frameworks folder of the SDK and add a link to it within your HTML document. The aliases create shorthand links to all of the Flash-based APIs. Now is probably a good time to explain how to debug your application. Debugging our application So, with our XML file created and HTML file started, let’s try testing our ‘application’. We’ll need the ADL application located in the BIN folder of the SDK and tell it to run the application.xml file. /path/to/adl /path/to/application.xml You can also just drag the XML file onto ADL and it’ll accomplish the same thing. If you just did that and noticed that your blank application didn’t load, you’d be correct. It’s running but isn’t visible. Which at this point means you’ll have to shut down the ADL process. Sorry about that! Changing the visibility You have two ways to make your application visible. You can do it automatically by placing true in the visible tag within the application.xml file.
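For reference, the relevant part of the descriptor would look something like this. Treat it as a sketch: the exact element names and nesting vary between SDK versions, so check the example file in the templates folder:

<initialWindow>
    <content>ui.html</content>
    <visible>true</visible>
</initialWindow>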
The other way is to do it programmatically from within your application. You’d want to do it this way if you had other startup tasks to perform before showing the interface. To turn the UI on programmatically, simply set the visible property of nativeWindow to true. Sandbox Security Now that we have an application that we can see when we start it, it’s time to build the to-do list application. In doing so, you’d probably think that using a JavaScript library is a really good idea — and it can be, but there are some limitations within AIR that have to be considered. An HTML document, by default, runs within the application sandbox. You have full access to the AIR APIs but once the onload event of the window has fired, you’ll have a limited ability to make use of eval and other dynamic script injection approaches. This stops external sources from gaining access to everything the AIR API offers, such as database and local file system access. You’ll still be able to make use of eval for evaluating JSON responses, which is probably the most important use case if you wish to consume JSON-based services. If you wish to create a greater wall of security between AIR and your HTML document loading in external resources, you can create a child sandbox. We won’t need to worry about it for our application so I won’t go any further into it but definitely keep this in mind. Finally, our application Getting tired of all this preamble? Let’s actually build our to-do list application. I’ll use jQuery because it’s small and should suit our needs nicely. Let’s begin with some structure:
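A sketch of that structure follows. The text, add and list ids are the ones the code later on relies on; the rest of the markup is up to you:

<body>
    <input type="text" id="text" />
    <input type="button" id="add" value="Add" />
    <ul id="list"></ul>
</body>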
Now we need to wire up that button to actually add a new item to our to-do list. And just like that, we’ve got a to-do list! That’s it! Just never close your application and you’ll remember everything. Okay, that’s not very practical. You need to have some way of storing your to-do items until the next time you open up the application. Storing Data You’ve essentially got 4 different ways that you can store data: Using the local database. AIR comes with SQLite built in. That means you can create tables and insert, update and select data from that database just like on a web server. Using the file system. You can also create files on the local machine. You have access to a few folders on the local system such as the documents folder and the desktop. Using EncryptedLocalStore. I like using the EncryptedLocalStore because it allows you to easily save key/value pairs and have that information encrypted. All this within just a couple of lines of code. Sending the data to a remote API. Our to-do list could sync up with Remember the Milk, for example. To demonstrate some persistence, we’ll use the file system to store our files. In addition, we’ll let the user specify where the file should be saved. This way, we can create multiple to-do lists, keeping them separate and organized. The application is now broken down into 4 basic tasks: Load data from the file system. Perform any interface bindings. Manage creating and deleting items from the list. Save any changes to the list back to the file system. Loading in data from the file system When the application starts up, we’ll prompt the user to select a file or specify a new to-do list. Within AIR, there are 3 main file objects: File, FileMode, and FileStream. File handles file and path names, while FileMode is used as a parameter for the FileStream to specify whether the file should be read-only or for write access. The FileStream object handles all the read/write activity. The File object has a number of shortcuts to default paths like the documents folder, the desktop, or even the application store. In this case, we’ll specify the documents folder as the default location and then use the browseForSave method to prompt the user to specify a new or existing file. If the user specifies an existing file, they’ll be asked whether they want to overwrite it. var store = air.File.documentsDirectory; var fileStream = new air.FileStream(); store.browseForSave("Choose To-do List"); Then we add an event listener for when the user has selected a file. When the file is selected, we check to see if the file exists and if it does, read in the contents, splitting the file on new lines and creating our list items within the interface. store.addEventListener(air.Event.SELECT, fileSelected); function fileSelected() { air.trace(store.nativePath); // load in any stored data var byteData = new air.ByteArray(); if(store.exists) { fileStream.open(store, air.FileMode.READ); fileStream.readBytes(byteData, 0, store.size); fileStream.close(); if(byteData.length > 0) { var s = byteData.readUTFBytes(byteData.length); var oldlist = s.split("\r\n"); // create todolist items for(var i=0; i < oldlist.length; i++) { createItem(oldlist[i], (new Date()).getTime() + i ); } } } } Perform Interface Bindings This is similar to before where we set the click event on the Add button but we’ve moved the code to save the list into a separate function.
$('#add').click(function(){ var t = $('#text').val(); if(t){ // create an ID using the time createItem(t, (new Date()).getTime() ); } }) Manage creating and deleting items from the list The list management is now in its own function, similar to before but with some extra information to identify list items and with calls to save our list after each change. function createItem(t, id) { if(t.length == 0) return; // add it to the todo list todolist[id] = t; // use DOM methods to create the new list item var li = document.createElement('li'); // the extra space at the end creates a buffer between the text // and the delete link we're about to add li.appendChild(document.createTextNode(t + ' ')); // create the delete link var del = document.createElement('a'); // this makes it a true link. I feel dirty doing this. del.setAttribute('href', '#'); del.addEventListener('click', function(evt){ var id = this.id.substr(1); delete todolist[id]; // remove the item from the list this.parentNode.parentNode.removeChild(this.parentNode); saveList(); }); del.appendChild(document.createTextNode('[del]')); del.id = 'd' + id; li.appendChild(del); // append everything to the list $('#list').append(li); //reset the text box $('#text').val(''); saveList(); } Save changes to the file system Any time a change is made to the list, we update the file. The file will always reflect the current state of the list and we’ll never have to click a save button. It just iterates through the list, adding a new line to each one. function saveList(){ if(store.isDirectory) return; var packet = ''; for(var i in todolist) { packet += todolist[i] + '\r\n'; } var bytes = new air.ByteArray(); bytes.writeUTFBytes(packet); fileStream.open(store, air.FileMode.WRITE); fileStream.writeBytes(bytes, 0, bytes.length); fileStream.close(); } One important thing to mention here is that we check if the store is a directory first. The reason we do this goes back to our browseForSave call. If the user cancels the dialog without selecting a file first, then the store points to the documentsDirectory that we set it to initially. Since we haven’t specified a file, there’s no place to save the list. Hopefully by this point, you’ve been thinking of some cool ways to pimp out your list. Now we need to package this up so that we can let other people use it, too. Creating a Package Now that we’ve created our application, we need to package it up so that we can distribute it. This is a two step process. The first step is to create a code signing certificate (or you can pay for one from Thawte which will help authenticate you as an AIR application developer). To create a self-signed certificate, run the following command. This will create a PFX file that you’ll use to sign your application. adt -certificate -cn todo24ways 1024-RSA todo24ways.pfx mypassword After you’ve done that, you’ll need to create the package with the certificate adt -package -storetype pkcs12 -keystore todo24ways.pfx todo24ways.air application.xml . The important part to mention here is the period at the end of the command. We’re telling it to package up all files in the current directory. After that, just run the AIR file, which will install your application and run it. Important things to remember about AIR When developing an HTML application, the rendering engine is Webkit. You’ll thank your lucky stars that you aren’t struggling with cross-browser issues. (My personal favourites are multiple backgrounds and border radius!) Be mindful of memory leaks. 
Things like Ajax calls and event binding can cause applications to slowly leak memory over time. Web pages are normally short-lived but desktop applications are often open for hours, if not days, and you may find your little desktop application taking up more memory than anything else on your machine! The WebKit runtime itself can also be a memory hog, usually taking about 15MB just for itself. If you create multiple HTML windows, it’ll add another 15MB to your memory footprint. Our little to-do list application shouldn’t be much of a concern, though. The other important thing to remember is that you’re still essentially running within a Flash environment. While you probably won’t notice this working in small applications, the moment you need to move to multiple windows or need to accomplish stuff beyond what HTML and JavaScript can give you, the need to understand some of the Flash-based elements will become more important. Lastly, the other thing to remember is that HTML links will load within the AIR application. If you want a link to open in the user’s web browser, you’ll need to capture that event and handle it on your own. The following code takes the HREF from a clicked link and opens it in the default web browser. air.navigateToURL(new air.URLRequest(this.href)); Only the beginning Of course, this is only the beginning of what you can do with Adobe AIR. You don’t have the same level of control as building a native desktop application, such as being able to launch other applications, but you do have more control than you could have within a web application. Check out the Adobe AIR Developer Center for HTML and Ajax for tutorials and other resources. Now, go forth and create your desktop applications and hopefully you finish all your shopping before Christmas! Download the example files.",2007,Jonathan Snook,jonathansnook,2007-12-19T00:00:00+00:00,https://24ways.org/2007/christmas-is-in-the-air/,code 215,Teach the CLI to Talk Back,"The CLI is a daunting tool. It’s quick, powerful, but it’s also incredibly easy to screw things up in – either with a mistyped command, or a correctly typed command used at the wrong moment. This puts a lot of people off using it, but it doesn’t have to be this way. If you’ve ever interacted with Slack’s Slackbot to set a reminder or ask a question, you’re basically using a command line interface, but it feels more like having a conversation. (My favourite Slack app is Lunch Train which helps with the thankless task of herding colleagues to a particular lunch venue on time.) Same goes with voice-operated assistants like Alexa, Siri and Google Home. There are even games, like Lifeline, where you interact with a stranded astronaut via pseudo SMS, and KOMRAD where you chat with a Soviet AI. I’m not aiming to build an AI here – my aspirations are a little more down to earth. What I’d like is to make the CLI a friendlier, more forgiving, and more intuitive tool for new or reluctant users. I want to teach it to talk back. Interactive command lines in the wild If you’ve used dev tools in the command line, you’ve probably already used an interactive prompt – something that asks you questions and responds based on your answers. Here are some examples: Yeoman If you have Yeoman globally installed, running yo will start a command prompt. The prompt asks you what you’d like to do, and gives you options for how to proceed.
Seasoned users will run specific commands for these options rather than go through this prompt, but it’s a nice way to start someone off with using the tool. npm If you’re a Node.js developer, you’re probably familiar with typing npm init to initialise a project. This brings up prompts that will populate a package.json manifest file for that project. The alternative would be to expect the user to craft their own package.json, which is more error-prone since it’s in JSON format, so something as trivial as an extraneous comma can throw an error. Snyk Snyk is a dev tool that checks for known vulnerabilities in your dependencies. Running snyk wizard in the CLI brings up a list of all the known vulnerabilities, and gives you options on how to deal with each one – such as patching the issue, applying a fix by upgrading the problematic dependency, or ignoring the issue (you are then prompted for a reason). These decisions get mapped to the manifest and a .snyk file, and committed into the repo so that the settings are the same for everyone who uses that project. I work at Snyk, and running the wizard is what made me think about building my own personal assistant in the command line to help me with some boring, repetitive tasks. Writing your own Something I do a lot is add bookmarks to styleguides.io – I pull down the entire repo, copy and paste a template YAML file, and edit its contents. Sometimes I get it wrong and break the site. So I’ve been putting together a tool to help me add bookmarks. It’s called bookmarkbot – it’s a personal assistant squirrel called Mark who will collect and bury your bookmarks for safekeeping.* *Fortunately, this metaphor also gives me a charming excuse for any situation where bookmarks sometimes get lost – it’s not my poorly-written code, honest, it’s just being realistic because sometimes squirrels forget where they buried things! When you run bookmarkbot, it will ask you for some information, and save that information as a Markdown file with YAML front matter. For this demo, I’m going to use a Node.js package called inquirer, which is a well-supported tool for creating command line prompts. I like it because it has a bunch of different question types: input, which asks for some text back; confirm, which expects a yes/no response; and list, which gives you a set of options to choose from. You can even nest questions, Choose Your Own Adventure style. Prerequisites Node.js npm RubyGems (Only if you want to go as far as serving a static site for your bookmarks, and you want to use Jekyll for it) Disclaimer Bear in mind that this is a really simplified walkthrough. It doesn’t have any error states, and it doesn’t handle the situation where we save a file with the same name. But it gets you in a good place to start building out your tool. Let’s go! Create a new folder wherever you keep your projects, and give it an awesome name (I’ve called mine bookmarks and put it in the Sites directory because I’m unimaginative). Now cd to that directory. cd Sites/bookmarks Let’s use that example I gave earlier, the trusty npm init. npm init Pop in the information you’d like to provide, or hit ENTER to skip through and save the defaults. Your directory should now have a package.json file in it. Now let’s install some of the dependencies we’ll need. npm install --save inquirer npm install --save slugify Next, add the following snippet to your package.json to tell it to run this file when you run npm start.
""scripts"": { … ""start"": ""node index.js"" } That index.js file doesn’t exist yet, so let’s create it in the root of our folder, and add the following: // Packages we need var fs = require('fs'); // Creates our file (part of Node.js so doesn't need installing) var inquirer = require('inquirer'); // The engine for our questions prompt var slugify = require('slugify'); // Will turn a string into a usable filename // The questions var questions = [ { type: 'input', name: 'name', message: 'What is your name?', }, ]; // The questions prompt function askQuestions() { // Ask questions inquirer.prompt(questions).then(answers => { // Things we'll need to generate the output var name = answers.name; // Finished asking questions, show the output console.log('Hello ' + name + '!'); }); } // Kick off the questions prompt askQuestions(); This is just some barebones where we’re including the inquirer package we installed earlier. I’ve stored the questions in a variable, and the askQuestions function will prompt the user for their name, and then print “Hello ” in the console. Enough setup, let’s see some magic. Save the file, go back to the command line and run npm start. Extending what we’ve learnt At the moment, we’re just saving a name to a file, which isn’t really achieving our goal of saving bookmarks. We don’t want our tool to forget our information every time we talk to it – we need to save it somewhere. So I’m going to add a little function to write the output to a file. Saving to a file Create a folder in your project’s directory called _bookmarks. This is where the bookmarks will be saved. I’ve replaced my questions array, and instead of asking for a name, I’ve extended out the questions, asking to be provided with a link and title (as a regular input type), a list of tags (using inquirer’s checkbox type), and finally a description, again, using the input type. So this is how my code looks now: // Packages we need var fs = require('fs'); // Creates our file var inquirer = require('inquirer'); // The engine for our questions prompt var slugify = require('slugify'); // Will turn a string into a usable filename // The questions var questions = [ { type: 'input', name: 'link', message: 'What is the url?', }, { type: 'input', name: 'title', message: 'What is the title?', }, { type: 'checkbox', name: 'tags', message: 'Would you like me to add any tags?', choices: [ { name: 'frontend' }, { name: 'backend' }, { name: 'security' }, { name: 'design' }, { name: 'process' }, { name: 'business' }, ], }, { type: 'input', name: 'description', message: 'How about a description?', }, ]; // The questions prompt function askQuestions() { // Say hello console.log('🐿 Oh, hello! Found something you want me to bookmark?\n'); // Ask questions inquirer.prompt(questions).then((answers) => { // Things we'll need to generate the output var title = answers.title; var link = answers.link; var tags = answers.tags + ''; var description = answers.description; var output = '---\n' + 'title: ""' + title + '""\n' + 'link: ""' + link + '""\n' + 'tags: [' + tags + ']\n' + '---\n' + description + '\n'; // Finished asking questions, show the output console.log('\n🐿 All done! Here is what I\'ve written down:\n'); console.log(output); // Things we'll need to generate the filename var slug = slugify(title); var filename = '_bookmarks/' + slug + '.md'; // Write the file fs.writeFile(filename, output, function () { console.log('\n🐿 Great! 
I have saved your bookmark to ' + filename); }); }); } // Kick off the questions prompt askQuestions(); The output is formatted into YAML metadata as a Markdown file, which will allow us to turn it into a static HTML file using a build tool later. Run npm start again and have a look at the file it outputs. Getting confirmation Before the user makes critical changes, it’s good to verify those changes first. We’re going to add a confirmation step to our tool, before writing the file. More seasoned CLI users may favour speed over a “hey, can you wait a sec and just check this is all ok” step, but I always think it’s worth adding one so you can occasionally save someone’s butt. So, underneath our questions array, let’s add a confirmation array. // Packages we need … // The questions … // Confirmation questions var confirm = [ { type: 'confirm', name: 'confirm', message: 'Does this look good?', }, ]; // The questions prompt … As we’re adding the confirm step before the file gets written, we’ll need to add the following inside the askQuestions function: // The questions prompt function askQuestions() { // Say hello … // Ask questions inquirer.prompt(questions).then((answers) => { … // Things we'll need to generate the output … // Finished asking questions, show the output … // Confirm output is correct inquirer.prompt(confirm).then(answers => { // Things we'll need to generate the filename var slug = slugify(title); var filename = '_bookmarks/' + slug + '.md'; if (answers.confirm) { // Save output into file fs.writeFile(filename, output, function () { console.log('\n🐿 Great! I have saved your bookmark to ' + filename); }); } else { // Ask the questions again console.log('\n🐿 Oops, let\'s try again!\n'); askQuestions(); } }); }); } // Kick off the questions prompt askQuestions(); Now run npm start and give it a go! Typing y will write the file, and n will take you back to the start. Ideally, I’d store the answers already given as defaults so the user doesn’t have to start from scratch, but I want to keep this demo simple. Serving the files Now that your bookmarking tool is successfully saving formatted Markdown files to a folder, the next step is to serve those files in a way that lets you share them online. The easiest way to do this is to use a static-site generator to convert your YAML files into HTML, and pop them all on one page. Now, you’ve got a few options here and I don’t want to force you down any particular path, as there are plenty out there – it’s just a case of using the one you’re most comfortable with. I personally favour Jekyll because of its tight integration with GitHub Pages – I don’t want to mess around with hosting and deployment, so it’s really handy to have my bookmarks publish themselves on my site as soon as I commit and push them using Git. I’ll give you a very brief run-through of how I’m doing this with bookmarkbot, but I recommend you read my Get Started With GitHub Pages (Plus Bonus Jekyll) guide if you’re unfamiliar with them, because I’ll be glossing over some bits that are already covered in there. Setting up a build tool If you haven’t already, install Jekyll and Bundler globally through RubyGems. Jekyll is our static-site generator, and Bundler is what we use to install Ruby dependencies. gem install jekyll bundler In my project folder, I’m going to run the following which will install the Jekyll files we’ll need to build our listing page. I’m using --force, otherwise it will complain that the directory isn’t empty. jekyll new . 
--force If you check your project folder, you’ll see a bunch of new files. Now run the following to start the server: bundle exec jekyll serve This will build a new directory called _site. This is where your static HTML files have been generated. Don’t touch anything in this folder because it will get overwritten the next time you build. Now that serve is running, go to http://127.0.0.1:4000/ and you’ll see the default Jekyll page and know that things are set up right. Now, instead, we want to see our list of bookmarks that are saved in the _bookmarks directory (make sure you’ve got a few saved). So let’s get that set up next. Open up the _config.yml file that Jekyll added earlier. In here, we’re going to tell it about our bookmarks. Replace everything in your _config.yml file with the following:

title: My Bookmarks
description: These are some of my favourite articles about the web.
markdown: kramdown
baseurl: /bookmarks # This needs to be the same name as whatever you call your repo on GitHub.
collections:
- bookmarks

This will make Jekyll aware of our _bookmarks folder so that we can call it later. Next, create a new directory and file at _layouts/home.html and paste in the following, adjusting the HTML scaffolding to taste – the Liquid tags are the part that matters:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>{{site.title}}</title>
</head>
<body>
  <h1>{{site.title}}</h1>
  <p>{{site.description}}</p>
  <ul>
    {% for bookmark in site.bookmarks %}
    <li>
      <a href="{{bookmark.link}}">{{bookmark.title}}</a>
      {{bookmark.content}}
      {% if bookmark.tags %}
      <ul>
        {% for tags in bookmark.tags %}
        <li>{{tags}}</li>
        {% endfor %}
      </ul>
      {% endif %}
    </li>
    {% endfor %}
  </ul>
</body>
</html>
      Restart Jekyll for your config changes to kick in, and go to the url it provides you (probably http://127.0.0.1:4000/bookmarks, unless you gave something different as your baseurl). It’s a decent start – there’s a lot more we can do in this area but now we’ve got a nice list of all our bookmarks, let’s get it online! If you want to use GitHub Pages to host your files, your first step is to push your project to GitHub. Go to your repository and click “settings”. Scroll down to the section labelled “GitHub Pages”, and from here you can enable it. Select your master branch, and it will provide you with a url to view your published pages. What next? Now that you’ve got a framework in place for publishing bookmarks, you can really go to town on your listing page and make it your own. First thing you’ll probably want to do is add some CSS, then when you’ve added a bunch of bookmarks, you’ll probably want to have some filtering in place for the tags, perhaps extend the types of questions that you ask to include an image (if you’re feeling extra-fancy, you could just ask for a url and pull in metadata from the site itself). Maybe you’ve got an idea that doesn’t involve bookmarks at all. You could use what you’ve learnt to build a place where you can share quotes, a list of your favourite restaurants, or even Christmas gift ideas. Here’s one I made earlier My demo, bookmarkbot, is on GitHub, and I’ve reused a lot of the code from styleguides.io. Feel free to grab bits of code from there, and do share what you end up making!",2017,Anna Debenham,annadebenham,2017-12-11T00:00:00+00:00,https://24ways.org/2017/teach-the-cli-to-talk-back/,code 96,Unwrapping the Wii U Browser,"The Wii U was released on 18 November 2012 in the US, and 30 November in the UK. It’s the first eighth generation home console, the first mainstream second-screen device, and it has some really impressive browser specs. Consoles are not just for games now: they’re marketed as complete entertainment solutions. Internet connectivity and browser functionality have gone from a nice-to-have feature in game consoles to a selling point. In Nintendo’s case, they see it as a challenge to design an experience that’s better than browsing on a desktop. Let’s make a browser that users can use on a daily basis, something that can really handle everything we’ve come to expect from a browser and do it more naturally. Sasaki – Iwata Asks on Nintendo.com With 11% of people using console browsers to visit websites, it’s important to consider these devices right from the start of projects. Browsing the web on a TV or handheld console is a very different experience to browsing on a desktop or a mobile phone, and has many usability implications. Console browser testing When I’m testing a console browser, one of the first things I do is run Niels Leenheer’s HTML5 test and Lea Verou’s CSS3 test. I use these benchmarks as a rough comparison of the standards each browser supports. In October, IE9 came out for the Xbox 360, scoring 120/500 in the HTML5 test and 32% in the CSS3 test. The PS Vita also had an update to its browser in recent weeks, jumping from 58/500 to 243/500 in the HTML5 test, and 32% to 55% in the CSS3 test. Manufacturers have been stepping up their game, trying to make their browsing experiences better. To give you an idea of how the Wii U currently compares to other devices, here are the test results of the other TV consoles I’ve tested. I’ve written more in-depth notes on TV and portable console browsers separately. 
Console | Year of release | HTML5 score | CSS3 score | Notes
Wii U | 2012 | 258/500 | 48% | Runs a Netfront browser (WebKit).
Wii | 2006 | 89/500 | Wouldn’t run | Runs an Opera browser.
PS3 | 2006 | 68/500 | 38% | Runs a Netfront browser (WebKit).
Xbox 360 | 2005 | 120/500 | 32% | A browser for the Xbox (IE9) was only recently released in October 2012. The Kinect provides voice and gesture support. There’s also SmartGlass, a second-screen app for platforms including Android and iOS.

The Wii U browser is Nintendo’s fifth attempt at a console browser. Based on these tests, it’s already looking promising. Why console browsers used to suck It takes a lot of system memory to run a good browser, and the problem with older consoles is that they don’t have much memory available. The original Nintendo DS needs a memory expansion pack just to run the browser, because the 4MB it has on board isn’t enough. I noticed that even on newer devices, some sites fail to load because the system runs out of memory. The Wii came out six years ago with an Opera browser. Still being used today and with such low resources available, the latest browser features can’t reasonably be supported. There’s also pressure to add features such as tabs, and enable gamers to use the browser while a game is paused. Nintendo’s browser team have the advantage of higher specs to play with on their new console (1GB of memory dedicated to games, 1GB for the system), which makes it easier to support the latest standards. But it’s still a challenge to fit everything in. …even though we have more memory, the amount of memory we can use for the browser is limited compared to a PC, so we’ve worked in ways that efficiently allocates the available memory per tab. To work on this, the experience working on the browser for the Nintendo 3DS system under a limited memory constraint helped us greatly. Sasaki – Iwata Asks on Nintendo.com In the box The Wii U consists of a console unit which plugs into a TV (Nintendo’s first console to support HD), and a wireless controller known as a gamepad. The gamepad is a lot bigger than typical TV console controllers, and it has a touchscreen on the front. The touchscreen is resistive, responding to pressure rather than electrical current. It’s intended to be used with a stylus (provided) but fingers can be used. It might look a bit like one, but the gamepad isn’t a portable console designed to be taken out like the PS Vita. The gamepad can be used as a standalone screen with the TV switched off, as long as it’s within range of the console unit – it basically piggybacks off it. It’s surprisingly lightweight for its size. It has a wealth of detectors including 9-axis control. Sensors wake the device from sleep when it’s picked up. There’s also a camera on the front, and a headphone port and speakers, with audio coming through both the TV and the gamepad giving a surround sound feel. Up to six tabs can be opened at once, and the browser can be used while games are paused. There’s a really nice little feature here – the current game’s name is saved as a search option, so it’s really quick to look up contextual content such as walk-throughs. Controls Only one gamepad can be used to control the browser, but if there are Wiimotes connected, they can be used as pointers. This doesn’t let the user do anything except point (they each get a little hand icon with a number on it displayed on the screen), but it’s interesting that multiple people can be interacting with a site at once.
The gamepad can also be used as a simple TV remote control, with basic functions such as bringing up the programme guide, adjusting volume and changing channel. I found the simplified interface much more usable than a full-featured remote control. I’m used to scrolling being sluggish on consoles, but the Wii U feels almost as snappy as a desktop browser. Sites load considerably faster compared with others I’ve tested. Tilt-scroll Holding down ZL and ZR while tilting the screen activates an Instapaper-style tilt to scroll for going up and down the page quickly, useful for navigating very long pages. Second screen The TV mirrors most of what’s on the gamepad, although the TV screen just displays the contents of the browser window, while the gamepad displays the site along with the browser toolbar. When the user with the gamepad is typing, the keyboard is hidden from the TV screen – there’s just a bit of text at the top indicating what’s happening on the gamepad. Pressing X draws an on-screen curtain over the TV, hiding the content that’s on the gamepad from the TV. Pressing X again opens the curtains, revealing what’s on the gamepad. Holding the button down plays a drumroll before it’s released and the curtains are opened. I can imagine this being used in meetings as a fun presentation tool. In a sense, browsing is a personal activity, but you get the idea that people will be coming and going through the room. When I first saw the curtain function, it made a huge impression on me. I walked around with it all over the company saying, “They’ve really come up with something amazing!” Iwata – Iwata Asks on Nintendo.com Text Writing text Unlike the capacitive screens on smartphones, the Wii U’s resistive screen needs to be pressed harder than you’re probably used to in order to register a touch event. The gamepad screen is big, which makes it much easier to type on this device than other handheld consoles, even without the stylus. It’s still more fiddly than a full-sized keyboard though. When you’re designing forms, consider the extra difficulty console users experience. Although TV screens are physically big, they are typically viewed from further away than desktop screens. This makes readability an issue, so Nintendo have provided not one, but four ways to zoom in and out: Double-tapping on the screen. Tapping the on-screen zoom icons in the browser toolbar. Pressing the + and - buttons on the device. Moving the right analogue stick up and down. As well as making it easy to zoom in and out, Nintendo have done a few other things to improve the reading experience on the TV. System font One thing you’ll notice pretty quickly is that the browser lacks all the fonts we’re used to falling back to. Serif fonts are replaced with the system’s sans-serif font. I couldn’t get Typekit’s font loading method to work but Fontdeck, which works slightly differently, does display custom fonts. The system font has been optimised for reading at a distance and is easy to distinguish because the lowercase e has a quirky little tilt. Don’t lose :focus Using the D-pad to navigate is similar to using a keyboard. Individual links are focused on, with a blue outline drawn around them. The recently redesigned An Event Apart site is an example that improves the experience for keyboard and D-pad users. They’ve added a yellow background colour to links on focus. It feels nicer than the default blue outline on its own. Media This year, television overtook PCs as the primary way to watch online video content.
TV is the natural environment for video, and 42% of online TVs in the US are connected to the internet via a console. Unfortunately, the