{"rowid": 81, "title": "Science!", "contents": "Sometimes we want to capture people\u2019s attention at a glance to communicate something fast. At other times we want to have the interface fade away into the background, letting people paint pictures in their minds with our words (if you\u2019ll forgive a little flowery festive flourish).\n\nI tend to distinguish between these two broad objectives as designing for impact on the one hand, and designing for immersion on the other. What defines them is interruption. Impact needs an attention-grabbing interruption. Immersion requires us to remove interruption from the interface. Careful design deliberately interrupts but doesn\u2019t accidentally disrupt. If that seems to make sense to you, then you\u2019ll find the following snippets of science as useful as I did.\n\nSaccades and fixations\n\nAs you\u2019re reading this your eyes are skipping along the lines in tiny jumps. During each jump everything is blurred. Each jump ends in a small pause so your brain can take a snapshot of the letters. It arranges them into words, and then parses out the meaning \u2014 fast \u2014 in around a quarter of a second.\n\nThe jumps are called saccades. The pauses are called fixations. Sometimes we take regressive saccades, skipping back to reread. There\u2019s a simple example in the excellent little book, Detail in Typography, by Jost Hochuli.\n\n\n\nIf you want to explore the science of reading in much more depth, I recommend the excellent paper, \u201cThe Science of Word Recognition\u201d, by Dr Kevin Larson of Microsoft.\n\nTo design for legibility and readability is to design for saccades and fixations. It\u2019s the craft of making it easy for people\u2019s brains to extract meaning, using techniques like good contrast, font size, spacing and structure, and only interrupting the reading experience deliberately.\n\nScan paths\n\nAt some point when visiting 24 ways you probably scanned the screen to get orientated. The journey your eyes took is known as a scan path. Scan paths are made up of saccades and fixations. Right now you\u2019re following a scan path as you read, along one line, and down to the next. This is a map of the scan paths found by Olivier Le Meur from observing people looking at Rembrandt\u2019s Le\u00e7on d\u2019anatomie:\n\n\n\nFor websites, the scan path is a little different. This is an aggregate scan path of Google from LC Technologies:\n\n\n\nThe average shape of a website scan path becomes clearer in this average scan path taken by forty-six people during research by the Poynter Institute, the Estlow Center for Journalism & New Media, and Eyetools:\n\n\n\nJust like when we read text arranged left to right in a vertical column, scan paths follow a roughly Z-shaped pattern from the top left to bottom right. Sometimes we skip back to reread a word or sentence, or glance again at a specific element, but the Z-shaped scan path persists.\n\nDesigning for scan paths is to organise content to help people move through an interface to get orientated, and to read.\n\nThe elements that are important enough to need impact must interrupt the scan path and clearly call attention to themselves. However, they don\u2019t always need to clip people round the ear from multiple directions at once to get attention. It helps to list elements by importance. That gives us an interruption hierarchy to work with. Elements can then interrupt the design with degrees of contrast to the rest of the content using either positioning, treatment, or both. Ta-da! 
Impact achieved, but gently. No clips round the ear required.\n\nSwinging mood\n\nHuman beings are resilient. Among the immersion and occasional interruptions, we even like a little disruption, especially if it\u2019s absurd and funny. The Ling\u2019s Cars website proves it. In fact, we\u2019re so resilient that we can work around all kinds of mayhem to get a seemingly simple task done.\n\nIn one study, \u201cThe Aesthetics of Reading\u201d (PDF, 480Kb), Dr Kevin Larson of Microsoft and Dr Rosalind Picard of MIT explored the effect of good typography on mood. Two versions of the New Yorker ePeriodical were created. One was typeset well and the other poorly.\n\n\n\nThey engaged twenty volunteers \u2014 half male, half female \u2014 and showed the good version to half of the participants. The other half saw the poor version.\n\nThe good doctors found that, \u201cthere are important differences between good and poor typography that appear to have little effect on common performance measures such as reading speed and comprehension.\u201d In short, good typography didn\u2019t help people read faster or comprehend better.\n\nOh. On the face of it that seems to invalidate what we designers do. Hold your horses, though! They also found that \u201cthe participants who received the good typography performed better on relative subjective duration and on certain cognitive tasks\u201d, and that \u201cgood typography induces a good mood.\u201d\n\nThis means that even though there were no actual differences in reading speed and comprehension, the people who read the version with good typography thought that it took less time to read, and were induced into a good mood by doing so. Not only that, but by being in a good mood, people were more capable of completing creative tasks faster.\n\nThat was a revelation to me. It means that the study showed there is a positive, measurable, emotional and perceptual benefit to good typography and design. To paraphrase: time and tasks fly when you\u2019re having fun!\n\n\n\nSource: Nationaal Archief of the Netherlands: Cheering man after the first goal, Netherlands vs. Belgium, Amsterdam, 1931.\n\nSo, among all my talk of saccades, fixations, scan paths and typesetting, there is science, and the science helps us qualify our design decisions when we need to, and do our jobs better. The science helps us understand how people will interact with our work, and what the actual benefits are for them, and the products or organisations we serve. Good design equals a subjectively quicker experience, a good mood, objectively faster completion of tasks, and happiness for everyone. Thank you, science!", "year": "2012", "author": "Jon Tan", "author_slug": "jontan", "published": "2012-12-24T00:00:00+00:00", "url": "https://24ways.org/2012/science/", "topic": "design"} {"rowid": 84, "title": "Responsive Responsive Design", "contents": "Now more than ever, we\u2019re designing work meant to be viewed along a gradient of different experiences. Responsive web design offers us a way forward, finally allowing us to \u201cdesign for the ebb and flow of things.\u201d\n\n\nWith those two sentences, Ethan closed the article that introduced the web to responsive design. Since then, responsive design has taken the web by storm. Seemingly every day, some company is touting their new responsive redesign. 
Large brands such as Microsoft, Time and Disney are getting in on the action, blowing away the once common criticism that responsive design was a technique only fit for small blogs.\n\nCertainly, this is a good thing. As Ethan and John Allsopp before him, were right to point out, the inherent flexibility of the web is a feature, not a bug. The web\u2019s unique ability to be consumed and interacted with on any number of devices, with any number of input methods is something to be embraced.\n\nBut there\u2019s one part of the web\u2019s inherent flexibility that seems to be increasingly overlooked: the ability for the web to be interacted with on any number of networks, with a gradient of bandwidth constraints and latency costs, on devices with varying degrees of hardware power.\n\nA few months back, Stephanie Rieger tweeted\n\n\n\t\u201cShoot me now\u2026responsive design has seemingly become confused with an opportunity to reduce performance rather than improve it.\u201d\n\n\nI would love to disagree, but unfortunately the evidence is damning. Consider the size and number of requests for four highly touted responsive sites that were launched this year:\n\n\n\t74 requests, 1,511kb\n\t114 requests, 1,200kb\n\t99 requests, 1,298kb\n\t105 requests, 5,942kb\n\n\nAnd those numbers were for the small screen versions of each site!\n\nThese sites were praised for their visual design and responsive nature, and rightfully so. They\u2019re very easy on the eyes and a lot of thought went into their appearance. But the numbers above tell an inconvenient truth: for all the time spent ensuring the visual design was airtight, seemingly very little (if any) attention was given to their performance.\n\nIt would be one thing if these were the exceptions, but unfortunately they\u2019re not. Guy Podjarny, who has done a lot of research around responsive performance, discovered that 86% of the responsive sites he tested were either the same size or larger on the small screen as they were on the desktop.\n\nThe reality is that high performance should be a requirement on any web project, not an afterthought. Poor performance has been tied to a decrease in revenue, traffic, conversions, and overall user satisfaction. Case study after case study shows that improving performance, even marginally, will impact the bottom line. The situation is no different on mobile where 71% of people say they expect sites to load as quickly or faster on their phone when compared to the desktop.\n\nThe bottom line: performance is a fundamental component of the user experience.\n\nSo, given it\u2019s extreme importance in the success of any web project, why is it that we\u2019re seeing so many bloated responsive sites?\n\nFirst, I adamantly disagree with the belief that poor performance is inherent to responsive design. That\u2019s not a rule \u2013 it\u2019s a cop-out. It\u2019s an example of blaming the technique when we should be blaming the implementation. This argument also falls flat because it ignores the fact that the trend of fat sites is increasing on the web in general. While some responsive sites are the worst offenders, it\u2019s hardly an issue resigned to one technique.\n\nTo fix the issue, we need to stop making excuses and start making improvements instead. 
Here, then, are some things we can do to start improving the state of responsive performance, and performance in general, right now.\n\nCreate a culture of performance\n\nIf you understand just how important performance is to the success of a project, the natural next step is to start creating a culture where high performance is a key consideration. \n\nOne of the things you can do is set a baseline. Determine the maximum size and number of requests you are going to allow, and don\u2019t let a page go live if either of those numbers is exceeded. The BBC does this with its responsive mobile site.\n\nA variation of that, which Steve Souders discussed in a recent podcast is to create a performance budget based on those numbers. Once you have that baseline set, if someone comes along and wants to add a something to the page, they have to make sure the page remains under budget. If it exceeds the budget, you have three options:\n\n\n\tOptimize an existing feature or asset on the page\n\tRemove an existing feature or asset from the page\n\tDon\u2019t add the new feature or asset\n\n\nThe idea here is that you make performance part of the process instead of something that may or may not get tacked on at the end.\n\nEmbrace the pain\n\nThis troubling trend of web bloat can be blamed in part on the lack of pain associated with poor performance. Most of us work on high-speed connections with low latency. When we fire up a 4Mb site, it doesn\u2019t feel so bad. \n\nWhen I tested the previously mentioned 5,942kb site on a 3G network, it took over 93 seconds to load. A minute and a half just staring at a white screen. Had anyone working on that project experienced that, you can bet the site wouldn\u2019t have launched in that state.\n\nDon\u2019t just crunch numbers. Fire up your site on a slower network and see what it feels like to wait. If you don\u2019t have access to a slow network, simulate one using a tool like Slowy, Throttle or the Network Conditioner found in Mac OS X 10.7.\n\nWatch for low-hanging fruit\n\nThere are a bunch of general performance improvements that apply to any site (responsive or not) but often aren\u2019t made. A great starting point is to refer to Yahoo!\u2018s list of rules.\n\nSome of this might sound complicated or intimidating, but it doesn\u2019t have to be. You can grab an .htaccess file from HTML 5 Boilerplate or use Sergey Chernyshev\u2019s drop-in .htaccess file. You can use tools like SpriteMe to simplify the creation of sprites, and ImageOptim to compress images.\n\nJust by implementing these simple optimizations you will achieve a noticeable improvement in terms of weight and page load time.\n\nBe careful with images\n\nThe most common offender for poor responsive performance is downloading unnecessarily large images, or worse yet, multiple sizes of the same image. \n\nFor background images, simply being careful with where and how you include the image can ensure you don\u2019t get caught in the trap of multiple background images being downloaded without being used. Don\u2019t count on display:none to help. While it may hide elements from displaying on screen, those images will still be requested and downloaded.\n\nContent images can be a little trickier. Whatever you do, don\u2019t serve a large image that works on a large screen display to small screens. It\u2019s wasteful, not only in terms of adding weight to the page, but also in wasting precious memory. 
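\n\nFor background images, one way to avoid that multiple-download trap is to scope each image to the widths where it actually applies, so that nothing is requested unconditionally. The following is only a rough sketch; the class name, the file names and the 40em breakpoint are placeholders rather than recommendations:\n\n/* Each image lives inside its own media query; nothing is set outside one, so nothing downloads unconditionally. The 0.01em offset simply keeps the two queries from overlapping at exactly 40em. */\n@media (max-width: 40em) {\n\t.masthead {\n\t\tbackground-image: url(masthead-small.jpg); /* hypothetical small-screen image */\n\t}\n}\n\n@media (min-width: 40.01em) {\n\t.masthead {\n\t\tbackground-image: url(masthead-large.jpg); /* hypothetical large-screen image */\n\t}\n}\n\nIn most browsers that means a narrow viewport never requests the large artwork at all. For content images, though, CSS alone will not get you there.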
Instead, use a tool like Adaptive Images or src.sencha.io to make sure only appropriately sized images are being downloaded. \n\nThe new element that has been so often discussed is another excellent solution if you\u2019re feeling particularly future-oriented. A picture polyfill exists so that you can start using the element now without any worries about support.\n\nConditional loading\n\nDon\u2019t load any more than you absolutely need to. If a script isn\u2019t needed at certain sizes, use the matchMedia polyfill to ensure it only loads when needed. Use eCSSential to do the same for unnecessary CSS files.\n\nLast year on 24 ways, Jeremy Keith wrote an article about conditional loading of content in a responsive design based on the screen width. The technique was later refined by the Filament Group into what they dubbed the Ajax-Include Pattern. It\u2019s a powerful and simple way to lighten the load on small screens as well as reduce clutter.\n\nGo vanilla?\n\nIf you take a look at the HTTP Archive you\u2019ll see that other than image size, JavaScript is the heaviest asset on a page weighing in at 215kb on average. It also boasts the fifth highest correlation to load time as well as the second highest correlation to render time. \n\nMuch of the weight can be attributed to our industry\u2019s increasing reliance on frameworks. This is especially a concern on mobile devices. PPK recently exclaimed that current JavaScript libraries are just \u201ctoo heavy for mobile\u201d. \u201cResearch from Stoyan Stefanov on parse times supports this. On some Android and iOS devices, it can take as long as 200-300ms just to parse jQuery.\n\nThere\u2019s nothing wrong about using a framework, but the problem is that they\u2019ve become the default. Before dropping another framework or plugin into a page, we should stop to consider the value it adds and whether we could accomplish what we need to do using a combination of vanilla JavaScript and CSS instead. (This is a great example of a scenario where a performance budget could help.)\n\nStart thinking beyond visual aesthetics\n\nWe love to tout the web\u2019s universality when discussing the need for responsive design. But that universality is not limited simply to screen size. Networks and hardware capabilities must factor in as well.\n\nThe web is an incredibly dynamic and interactive medium, and designing for it demands that we consider more than just visual aesthetics. Let\u2019s not forget to give those other qualities the attention they deserve.", "year": "2012", "author": "Tim Kadlec", "author_slug": "timkadlec", "published": "2012-12-05T00:00:00+00:00", "url": "https://24ways.org/2012/responsive-responsive-design/", "topic": "design"} {"rowid": 93, "title": "Design Systems", "contents": "The most important part of responsive web design is that, no matter what the viewport width, the content is accessible in an optimum display. The best responsive designs are those that allow you to go from one optimised display to another, but with the feeling that these experiences are part of a greater product whole.\n\nResponsive design: where we\u2019ve been going wrong\n\nResponsive web design was a shock to my web designer system. Those of us who had already been designing sites for mobile probably had the biggest leap to make. 
We might have been detecting user agents in order to deliver a mobile-specific site, or using the slightly more familiar Bushido technique to deliver sites optimised for device type and viewport size, but either way our focus was on devices. A site was optimised for either a mobile phone or a desktop.\n\nResponsive web design brought us back to pre-table layout fluid sites that expanded or contracted to fit the viewport. This was a big difference to get our heads around when we were so used to designing for fixed-width layouts. Suddenly, an element could be any width or, at least, we needed to consider its maximum and minimum widths. Pixel perfection, while pretty, became wholly unrealistic, and a whole load of designers who prided themselves in detailed and precise designs got a bit scared.\n\nHanging on to our previous processes and typical deliverables led us to continue to optimise our sites for particular devices and provide pixel-perfect mockups for those device widths.\n\nWith all this we were concentrating on devices, not content, deliverables and not process, and making assumptions about users and their devices based on nothing but the width of the viewport.\n\nI don\u2019t think this is a crime, I think it was inevitable.\n\nWe can be up to date with our principles and ideals, but it\u2019s never as easy in practice. That\u2019s why it\u2019s more important than ever to share our successful techniques and processes. Let\u2019s drag each other into modern web design.\n\nDesign systems: the principles\n\nWhat are design systems?\n\nA visual design system is built out of the core components of typography, layout, shape or form, and colour. When considering the design of a whole product, a design system should also include patterns in user flow, content strategy, copy, and tone of voice. These concepts, design decisions or rules, created around the core components are used consistently across your product to create a cohesive feel, whether it\u2019s from one element to another, page to page, or viewport width to viewport width.\n\nResponsive design is one of the most important considerations in the components of a design system. For each component, you must decide what will unite the design across the viewports to maintain that consistent feel, and what parts of the design will differentiate in order to provide a flexible and optimal experience for different viewport sizes.\n\nComponents you might keep the same across viewports\n\n\n\ttypeface\n\tbase unit\n\tcolour\n\tshape/form\n\n\nComponents you might differentiate across viewports\n\n\n\tgrids\n\tlayout\n\tfont size\n\tmeasure (line length)\n\tleading (line height)\n\n\nContent: it must always be the same\n\nThe focus of a design system is the optimum display of content. As Mark Boulton put it, designing \u201ccontent out, not canvas in.\u201d Chris Armstrong puts the emphasis on not designing for viewports but for content \u2013 \u201cwe need to build on what we do know: content.\u201d In order to do this, we must share the same content across all devices and focus on how best to display and represent content through design system components.\n\nThe practical: core visual components\n\nTypography first\n\nWhen you work with a lot of text content, typography is the easiest way to set the visual tone of the design across all viewport widths. 
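To make that concrete, here is a minimal sketch of a typographic base that keeps the typeface constant but lets size and measure flex; the values and the 50em breakpoint are illustrative, not prescriptive:\n\nbody {\n\tfont-family: Georgia, serif; /* the same typeface at every viewport width */\n\tfont-size: 100%;\n\tline-height: 1.4;\n}\n\narticle {\n\tmax-width: 30em; /* a comfortable measure on narrow screens */\n}\n\n@media (min-width: 50em) {\n\tbody {\n\t\tfont-size: 112.5%; /* slightly larger type where there is room for it */\n\t\tline-height: 1.5; /* a touch more leading to suit the longer measure */\n\t}\n\tarticle {\n\t\tmax-width: 36em; /* a longer measure for wider viewports */\n\t}\n}\n\nThe typeface stays constant to unite the widths, while font size, measure and leading differentiate them.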
It\u2019s likely that you\u2019ll choose one or two typefaces to use across the whole system, but you might change the most legible font size, balanced with the most comfortable measure, as the viewport width changes.\n\nWhere typography meets layout\n\nThe unit on which you choose to base the grid and layout design, font sizes and leading could be based on the typeface, an optimal reading size, or something more arbitrary. Sometimes I\u2019ll choose a unit based on multiples of ten because it makes the maths in the CSS easier. Tim Brown suggests trying a modular scale. Chris Armstrong suggests basing it on your ideal measure, or the width of a fixed item of content such as an ad unit.\n\nGrids and layouts\n\nSensible grid design can be a flexible yet solid foundation for your design system layout component. But you must be wary in responsive design that a grid might not work across all widths: even four columns could make for very cramped content and one-word measures on smaller screens.\n\nMaybe the grid columns are something you differentiate across widths, but you can keep the concept of the grid consistent. If the content has blocks in groups of three, you might decide on a three-column grid which folds down to one column for narrow viewports. If the grid focuses on the idea of symmetry and has a four-column grid on larger viewports, it might fold down to two columns for narrower viewports. These consistencies may seem subtle, not at all obvious to many except the designer, but it\u2019s all these little constants and patterns across the whole of the design system that makes design decisions easier to make (as they adhere to the guiding concepts of your system), and give the product a uniform feel no matter what the device.\n\nShape or form\n\nThe shape or form components are concepts you already use in fixed-width web design for a strong, consistent look and feel. \n\nSince CSS border-radius became widely supported by browsers, a lot of designs feature circle themes. These are very distinctive and can be used across viewport widths giving them the same united feel, even if they\u2019re not used in the same way. This could also apply to border styles, consistent shadows and any number of decorative details and textures. These are the elements that make up the shape or form of a design system.\n\nColour\n\nColour is the most basic way to reinforce a brand and unite experiences across viewports. The same hex colour used system-wide is instantly recognisable, no matter what the viewport width.\n\nThe process\n\nWhile using a design system isn\u2019t necessarily attached to any particular process, it does lend itself to some process ideals.\n\nDetaching design considerations from viewport widths\n\nA design system allows you to focus separately on the components that make up the system, disconnecting the look and feel from the layout. This helps prevent us getting stuck in the rut of the Apple breakpoints (brilliantly coined by Simon Foster) of mobile, tablet and desktop. It also forces us to design for variation in viewport experiences side by side, not one after the other.\n\nDesign in the browser\n\nI can\u2019t start off designing in the browser \u2013 it just doesn\u2019t seem to bring out my creative side (and I\u2019m incredibly envious of you if you can; I just have to start on paper) \u2013 but static mock-ups aren\u2019t the only alternative. Style guides and style tiles are perfect for expressing the concepts of your design system. 
Pattern libraries could also work well.\n\nMock-ups and breakpoints\n\nAt some point, whether it\u2019s to test your system ideas, or because a client needs help visualising how your system might work, you may end up producing some static mock-ups. It\u2019s not the end of the world, but you must ensure that these consider all the viewports, not just those of the iDevices, or even the devices currently on the market. You need to decide the breakpoints where the states of your design change. The blocks within your content will always have optimum points for their display (based on their hierarchy, density, width, or type of interaction) and so your breakpoints should be based around these points.\n\nThese are probably the ideal points at which to produce static mockups; treat them as snapshots. They\u2019re not necessarily mock-ups, so much as a way of capturing how your design system would be interpreted when frozen at that particular viewport width.\n\nThe future\n\nCreating design systems will give us the flexibility we need for working with the unknown devices of the future. It may be a change in process, but it shouldn\u2019t be too much of a difference in thinking. The pioneers in responsive design have a hard job. Some of these problems may have already been solved in other technologies or industries, but it\u2019s up to the pioneers to find those connections and help us formulate solutions and standards that will make responsive design the best it can possibly be. We need to keep experimenting and communicating, particularly in the area of design, as good user experiences are the true sign of whether our products are a success.", "year": "2012", "author": "Laura Kalbag", "author_slug": "laurakalbag", "published": "2012-12-12T00:00:00+00:00", "url": "https://24ways.org/2012/design-systems/", "topic": "design"} {"rowid": 102, "title": "Art Directing with Looking Room", "contents": "Using photographic composition techniques to start to art direct on the template-driven web.\n\nThink back to last night. There you are, settled down in front of the TV, watching your favourite soap opera, with nice hot cup of tea in hand. Did you notice \u2013 whilst engrossed in the latest love-triangle \u2013 that the cameraman has worked very hard to support your eye\u2019s natural movement on-screen? He\u2019s carefully framed individual shots to create balance.\n\nThink back to last week. There you were, sat with your mates watching the big match. Did you notice that the cameraman frames the shot to go with the direction of play? A player moving right will always be framed so that he is on the far left, with plenty of \u2018room\u2019 to run into.\n\nBoth of these cameramen use a technique called Looking Room, sometimes called Lead Room. Looking Room is the space between the subject (be it a football, or a face), and the edge of the screen. Specifically, Looking Room is the negative space on the side the subject is looking or moving. The great thing is, it\u2019s not just limited to photography, film or television; we can use it in web design too.\n\nBasic Framing\n\nBefore we get into Looking Room, and how it applies to web, we need to have a look at some basics of photographic composition.\n\nMany web sites use imagery, or photographs, to enhance the content. But even with professionally shot photographs, without a basic understanding of framing or composition, you can damage how the image is perceived. \n\nA simple, easy way to make photographs more interesting is to fill the frame. 
\n\nTake this rather mundane photograph of a horse:\n\n\n\nA typical point and click affair. But, we can work with this.\n\nBy closely cropping, and filling the frame, we can instantly change the mood of the shot.\n\n\n\nI\u2019ve also added Looking Room on the right of the horse. This is space that the horse would be walking into. It gives the photograph movement.\n\nSubject, Space, and Movement\n\nGenerally speaking, a portrait photograph will have a subject and space around them. Visual interest in portrait photography can come from movement; how the eye moves around the shot. To get the eye moving, the photographer modifies the space around the subject.\n\nLook at this portrait:\n\n\n\nThe photography has framed the subject on the right, allowing for whitespace, or Looking Room, in the direction the subject is looking. The framing of the subject (1), with the space to the left (2) \u2013 the Looking Room \u2013 creates movement, shown by the arrow (3).\n\n\n\nNote the subject is not framed centrally (shown by the lighter dotted line).\n\nIf the photographer had framed the subject with equal space either side, the resulting composition is static, like our horse.\n\n\n\nIf the photographer framed the subject way over on the left, as she is looking that way, the resulting whitespace on the right leads a very uncomfortable composition.\n\n\n\nThe root of this discomfort is what the framing is telling our eye to do. The subject, looking to the left, suggests to us that we should do the same. However, the Looking Room on the right is telling our eye to occupy this space. The result is a confusing back and forth.\n\nHow Looking Room applies to the web\n\nWe can apply the same theory to laying out a web page or application. Taking the three same elements \u2013 Subject, Space, and resulting Movement \u2013 we can guide a user\u2019s eye to the elements we need to. As designers, or content editors, framing photographs correctly can have a subtle but important effect on how a page is visually scanned. Take this example:\n\n\n\nThe BBC homepage uses great photography as a way of promoting content. Here, they have cropped the main photograph to guide the user\u2019s eye into the content. \n\nBy applying the same theory, the designer or content editor has applied considerable Looking Room (2) to the photograph to create balance with the overall page design, but also to create movement of the user\u2019s eye toward the content (1)\n\n\n\nIf the image was flipped horizontally. The Looking Room is now on the right. The subject of the photograph is looking off the page, drawing the user\u2019s eye away from the content. Once again, this results in a confusing back and forth as your eye fights its way over to the left of the page.\n\n\n\nA little bit of Art Direction\n\nArt Direction can be described as the act or process of managing the visual presentation of content. Art Direction is difficult to do on the web, because content and presentation are, more often than not, separated. But where there are images, and when we know the templates that those images will populate, we can go a little way to bridging the gap between content and presentation.\n\nBy understanding the value of framing and Looking Room, and the fact that it extends beyond just a good looking photograph, we can start to see photography playing more of an integral role in the communication of content. \n\nWe won\u2019t just be populating templates. 
We\u2019ll be art directing.\n\nPhoto credits: \n\n\n\tPortrait by Carsten Tolkmit\n\tHorse by Mike Pedroncelli", "year": "2008", "author": "Mark Boulton", "author_slug": "markboulton", "published": "2008-12-05T00:00:00+00:00", "url": "https://24ways.org/2008/art-directing-with-looking-room/", "topic": "design"} {"rowid": 108, "title": "A Festive Type Folly", "contents": "\u2018Tis the season to be jolly, so the carol singers tell us. At 24 ways, we\u2019re keeping alive another British tradition that includes the odd faux-Greco-Roman building dotted around the British countryside, Tower Bridge built in 1894, and your Dad\u2019s Christmas jumper with the dancing reindeer motif. \u2018Tis the season of the folly!\n\n \n 24 Ways to impress your friends\n \n\nThe example is not an image, just text. You may wish to see a screenshot in Safari to compare with your own operating system and browser rendering.\n\nLike all follies this is an embellishment\u200a\u2014\u200aa bit of web typography fun. It\u2019s similar to the masthead text at my place, but it\u2019s also a hyperlink. Unlike the architectural follies of the past, no child labour was used to fund or build it, just some HTML flavoured with CSS, and a heavy dose of Times New Roman. Why Times New Roman, you ask? Well, after a few wasted hours experimenting with heaps of typefaces, seeking an elusive consistency of positioning and rendering across platforms, it proved to be the most consistent. Who\u2019d\u2018a thought? To make things more interesting, I wanted to use a traditional scale and make the whole thing elastic by using relative lengths that would react to a person\u2019s font size. So, to the meat of this festive frippery:\n\nThere are three things we rely on to create this indulgence:\n\n\n\tDescendant selectors\n\tAbsolute positioning\n\tInheritance\n\n\nHTML & Descendant Selectors\n\nThe markup for the folly might seem complex at first glance. To semantics pedants and purists it may seem outrageous. If that\u2019s you, read on at your peril! Here it is with lots of whitespace:\n\n
<div id=\"type\">\n\t<h1>\n\t\t<a href=\"/\">\n\t\t\t<em>2\n\t\t\t\t<span>4 \n\t\t\t\t\t<span>w\n\t\t\t\t\t\t<span>a\n\t\t\t\t\t\t\t<span>y\n\t\t\t\t\t\t\t\t<span>s</span>\n\t\t\t\t\t\t\t</span>\n\t\t\t\t\t\t</span>\n\t\t\t\t\t</span>\n\t\t\t\t</span>\n\t\t\t</em>\n\t\t\t<strong>to \n\t\t\t\t<span>i\n\t\t\t\t\t<span>m\n\t\t\t\t\t\t<span>pre\n\t\t\t\t\t\t\t<span>s\n\t\t\t\t\t\t\t\t<span>s \n\t\t\t\t\t\t\t\t\t<span>your \n\t\t\t\t\t\t\t\t\t\t<span>friends</span>\n\t\t\t\t\t\t\t\t\t</span>\n\t\t\t\t\t\t\t\t</span>\n\t\t\t\t\t\t\t</span>\n\t\t\t\t\t\t</span>\n\t\t\t\t\t</span>\n\t\t\t\t</span>\n\t\t\t</strong>\n\t\t</a>\n\t</h1>\n</div>
\n\nWhy so much markup? Well, we want to individually style many of the glyphs. By nesting the elements, we can pick out the bits we need as descendant selectors.\n\nTo retain a smidgen of semantics, the text is wrapped in h1, a,
and elements. The two phrases, \u201c24 ways\u201d and \u201cto impress your friends\u201d are wrapped in and tags, respectively. Within those loving arms, their descendant s cascade invisibly, making a right mess of our source, but ready to be picked out in our CSS rules.\n\nSo, to select the \u201c2\u201d from the example we can simply write, #type h1 em{ }. Of course, that selects everything within the tags, but as we drill down the document tree, selecting other glyphs, any property / value styles can be reset or changed as required.\n\nPixels Versus Ems\n\nBefore we get stuck into the CSS, I should say that the goal here is to have everything expressed in relative \u201cem\u201d lengths. However, when I\u2019m starting out, I use pixels for all values, and only convert them to ems after I\u2019ve finished. It saves re-calculating the em length for every change I make as the folly evolves, but still makes the final result elastic, without relying on browser zoom.\n\nTo skip ahead, see the complete CSS.\n\nAbsolutely Positioned Glyphs\n\nIf a parent element has position: relative, or position: absolute applied to it, all children of that parent can be positioned absolutely relative to it. (See Dave Shea\u2019s excellent introduction to this.) That\u2019s exactly how the folly is achieved. As the parent, #type also has a font-size of 16px set, a width and height, and some basic style with a background and border:\n\n#type{\n\tfont-size: 16px;\n\ttext-align: left;\n\tbackground: #e8e9de;\n\tborder: 0.375em solid #fff;\n\twidth: 22.5em;\n\theight: 13.125em;\n\tposition: relative;\n}\n\nThe h1 is also given a default style with a font-size of 132px in ems relative to the parent font-size of 16px:\n\n#type h1{\n\tfont-family: \"Times New Roman\", serif;\n\tfont-size: 8.25em; /* 132px */\n\tline-height: 1em;\n\tfont-weight: 400;\n\tmargin: 0;\n\tpadding: 0;\n}\n\nTo get the em value, we divide the required size in pixels by the actual parent font-size in pixels\n\n132 \u00f7 16 = 8.25\n\nWe also give the descendants of the h1 some default properties. The line height, style and weight are normalised, they are positioned absolutely relative to #type, and a border and padding is applied:\n\n#type h1 em,\n#type h1 strong,\n#type h1 span{\n\tline-height: 1em;\n\tfont-style: normal;\n\tfont-weight: 400;\n\tposition: absolute;\n\tpadding: 0.1em;\n\tborder: 1px solid transparent;\n}\n\nThe padding ensures that some browsers don\u2019t forget about parts of a glyph that are drawn outside of their invisible container. When this happens, IE will trim the glyph, cutting off parts of descenders, for example. The border is there to make sure the glyphs have layout. Without this, positioning can be problematic. IE6 will not respect the transparent border colour\u200a\u2014\u200ait uses the actual text colour\u200a\u2014\u200abut in all other respects renders the example. You can hack around it, but it seemed unnecessary for this example.\n\nOnce these defaults are established, the rest is trial and error. As a quick example, the numeral \u201c2\u201d is first to be positioned:\n\n#type h1 a em{\n\tfont-size: 0.727em; /* (2) 96px */\n\tleft: 0.667em;\n\ttop: 0;\n}\n\nEvery element of the folly is positioned in exactly the same way as you can see in the complete CSS. When converting pixels to ems, the font-size is set first. 
Then, because we know what that is, we calculate the equivalent x- and y-position accordingly.\n\nInheritance\n\nCSS inheritance gave me a headache a long time ago when I first encountered it. After the penny dropped I came to experience something disturbingly close to affection for this characteristic. What it basically means is that children inherit the characteristics of their parents. For example:\n\n\n\tWe gave #type a font-size of 16px.\n\tFor #type h1 we changed it by setting font-size: 8.25em;. Than means that #type h1 now has a computed font-size of 8.25 \u00d7 16px = 132px.\n\tNow, all children of #type h1 in the document tree will inherit a font-size of 132px unless we explicitly change it as we did for #type h1 a em.\n\n\nThe \u201c2\u201d in the example\u200a\u2014\u200aselected with #type h1 a em\u200a\u2014\u200ais set at 96px with left and top positioning calculated relatively to that. So, the left position of 0.667em is 0.667 \u00d7 96 = 64px, approximately (three decimal points in em lengths don\u2019t always give exact pixel equivalents).\n\nOne way to look at inheritance is as a cascade of dependancy: In our example, the computed font size of any given element depends on that of the parent, and the absolute x- and y-position depends on the computed font size of the element itself.\n\nLink Colours\n\nThe same descendant selectors we use to set and position the type are also used to apply the colour by combining them with pseudo-selectors like :focus and :hover. Because the descendant selectors are available to us, we can pretty much pick out any glyph we like. First, we need to disable the underline:\n\n#type h1 a:link,\n#type h1 a:visited{\n\ttext-decoration: none;\n}\n\nIn our example, the \u201c24\u201d has a unique default state (colour):\n\n#type h1 a:link em,\n#type h1 a:visited em{\n\tcolor: #624;\n}\n\nThe rest of the \u201cWays\u201d text has a different colour, which it shares with the large \u201cs\u201d in \u201cimpress\u201d:\n\n#type h1 a:link em span span,\n#type h1 a:visited em span span,\n#type h1 a:link strong span span span span,\n#type h1 a:visited strong span span span span{\n\tcolor: #b32720;\n}\n\n\u201c24\u201d changes on :focus, :hover and :active. Critically though, the whole of the \u201c24 Ways\u201d text, and the large \u201cs\u201d in \u201cimpress\u201d all have the same style in this instance:\n\n#type h1 a:focus em,\n#type h1 a:hover em,\n#type h1 a:active em,\n#type h1 a:focus em span span,\n#type h1 a:hover em span span,\n#type h1 a:active em span span,\n#type h1 a:focus strong span span span span,\n#type h1 a:hover strong span span span span,\n#type h1 a:active strong span span span span{\n\tcolor: #804;\n}\n\nIf a descendant selector has a :link and :visited state set as a pseudo element, it needs to also have the corresponding :focus, :hover and :active states set.\n\nA Final Note About Web Typography\n\nFrom grids to basic leading to web fonts, and even absolute positioning, there\u2019s a wealth of things we can do to treat type on the Web with love and respect. However, experiments like this can highlight the vagaries of rasterisation and rendering that limit our ability to achieve truly subtle and refined results. At the operating system level, the differences in type rendering are extreme, and even between sequential iterations in Windows\u200a\u2014\u200afrom Standard to ClearType\u200a\u2014\u200athey can be daunting. 
Add to that huge variations in screen quality, and even the paper we print our type onto has many potential variations. Compare our example in Safari 3.2.1 / OS X 10.5.5 (left) and IE7 / Win XP (right). Both rendered on a 23\u201d Apple Cinema HD (LCD):\n\n\n\nBrowser developers continue to make great strides. However, those of us who set type on the Web need more consistency and quality if we want to avoid technologies like Flash and evolve web typography. Although web typography is inevitably\u200a\u2014\u200aand mistakenly\u200a\u2014\u200acompared unfavourably to print, it has the potential to achieve the same refinement in a different way. Perhaps one day, the glyphs of our favourite faces, so carefully crafted, kerned and hinted for the screen, will be rendered with the same precision with which they were drawn by type designers and styled by web designers. That would be my wish for the new year. Happy holidays!", "year": "2008", "author": "Jon Tan", "author_slug": "jontan", "published": "2008-12-17T00:00:00+00:00", "url": "https://24ways.org/2008/a-festive-type-folly/", "topic": "design"} {"rowid": 111, "title": "Geometric Background Patterns", "contents": "When the design is finished and you\u2019re about to start the coding process, you have to prepare your graphics. If you\u2019re working with a pattern background you need to export only the repeating fragment. It can be a bit tricky to isolate a fragment to achieve a seamless pattern background. For geometric patterns there is a method I always follow and that I want to share with you. Take for example a perfect 45\u00b0 diagonal line pattern. \n\n\n\nHow do you define this pattern fragment so it will be rendered seamlessly?\n\n\n\nHere is the method I usually follow to avoid a mismatch. First, zoom in so you see enough detail and you can distinguish the pixels. Select the Rectangular Marquee Selection tool and start your selection at the intersection of 2 different colors of a diagonal line. Hold down the Shift key while dragging so you drag a perfect square.\n\n\n\nRelease the mouse when you reach the exact same intesection (as your starting) point at the top right. \n\n\n\nCopy this fragment (using Copy Merged: Cmd/Ctrl + Shift + C) and paste the fragment in a new layer. Give this layer the name \u2018pattern\u2019. Now hold down the Command Key (Control Key on Windows) and click on the \u2018pattern\u2019 layer in the Layers Palette to select the fragment. Now go to Edit > Define Pattern, enter a name for your pattern and click OK. Test your pattern in a new document. Create a new document of 600 px by 400px, hit Cmd/Ctrl + A and go to Edit > Fill\u2026 and choose your pattern. If the result is OK, you have created a perfect pattern fragment.\n\n\n\nBelow you see this pattern enlarged. The guides show the boundaries of the pattern fragment and the red pixels are the reference points. The red pixels at the top right, bottom right and bottom left should match the red pixel at the top left.\n\n\n\nThis technique should work for every geometric pattern. Some patterns are easier than others, but this, and the Photoshop pattern fill test, has always been my guideline.\n\nOther geometric pattern examples\n\nExample 1\n\nNot all geometric pattern fragments are squares. 
Some patterns look easy at first sight, because they look very repetitive, but they can be a bit tricky.\n\n\n\nZoomed in pattern fragment with point of reference shown:\n\n\n\nExample 2\n\nSome patterns have a clear repeating point that can guide you, such as the blue small circle of this pattern as you can see from this zoomed in screenshot:\n\n\n\nZoomed in pattern fragment with point of reference shown:\n\n\n\nExample 3\n\nThe different diagonal colors makes a bit more tricky to extract the correct pattern fragment. \n\n\n\nThe orange dot, which is the starting point of the selection is captured a few times inside the fragment selection:", "year": "2008", "author": "Veerle Pieters", "author_slug": "veerlepieters", "published": "2008-12-02T00:00:00+00:00", "url": "https://24ways.org/2008/geometric-background-patterns/", "topic": "design"} {"rowid": 131, "title": "Random Lines Made With Mesh", "contents": "I know that Adobe Illustrator can be a bit daunting for people who aren\u2019t really advanced users of the program, but you would be amazed by how easy you can create cool effects or backgrounds. In this short tutorial I show you how to create a cool looking background only in 5 steps. \n\nStep 1 \u2013 Create Lines\n\n\n\nCreate lines using random widths and harmonious suitable colors. If you get stuck on finding the right colors, check out Adobe\u2019s Kuler and start experimenting.\n\nStep 2 \u2013 Convert Strokes to Fills\n\n\n\nSelect all lines and convert them to fills. Go to the Object menu, select Path > Outline Stroke. Select the Rectangle tool and draw 1 big rectangle on top the lines. Give the rectangle a suitable color. With the rectangle still selected, go to the Object menu, select Arrange > Send to Back.\n\nStep 3 \u2013 Convert to Mesh\n\n\n\nSelect all objects by pressing the command key (for Mac users), control key (for Windows users) + the \u201ca\u201d key. Go to the Object menu and select the Envelope Distort > Make with Mesh option. Enter 2 rows and 2 columns. Check the preview box to see what happens and click the OK button.\n\nStep 4 \u2013 Play Around with The Mesh Points \n\n\n\nPlay around with the points of the mesh using the Direct Selection tool (the white arrow in the Toolbox). Click on the top right point of the mesh. Once you\u2019re starting to drag hold down the shift key and move the point upwards. \n\n\n\nNow start dragging the bezier handles on the mesh to achieve the effect as shown in the above picture. Of course you can try out all kind of different effects here. \n\nThe Final Result\n\n\n\nThis is an example of how the final result can look. You can try out all kinds of different shapes dragging the handles of the mesh points. This is just one of the many results you can get. 
So next time you haven\u2019t got inspiration for a background of a header, a banner or whatever, just experiment with a few basic shapes such as lines and try out the \u2018Envelope Distort\u2019 options in Illustrator or the \u2018Make with Mesh\u2019 option and experiment, you\u2019ll be amazed by the unexpected creative results.", "year": "2006", "author": "Veerle Pieters", "author_slug": "veerlepieters", "published": "2006-12-08T00:00:00+00:00", "url": "https://24ways.org/2006/random-lines-made-with-mesh/", "topic": "design"} {"rowid": 133, "title": "Gravity-Defying Page Corners", "contents": "While working on Stikkit, a \u201cpage curl\u201d came to be.\nNot being as crafty as Veerle, you see.\nI fired up Photoshop to see what could be.\n\u201cAnother copy is running on the network\u201c \u2026 oopsie.\n\nWith license issues sorted out and a concept in mind\nI set out to create something flexible and refined.\nOne background image and code that is sure to be lean.\nA simple solution for lazy people like me.\n\nThe curl I\u2019ll be showing isn\u2019t a curl at all.\nIt\u2019s simply a gradient that\u2019s 18 pixels tall.\nWith a fade to the left that\u2019s diagonally aligned\nand a small fade on the right that keeps the illusion defined.\n\n\n\nCreate a selection with the marquee tool (keeping in mind a reasonable minimum width) and drag a gradient (black to transparent) from top to bottom.\n\n\n\nNow drag a gradient (the background color of the page to transparent) from the bottom left corner to the top right corner.\n\n\n\nFinally, drag another gradient from the right edge towards the center, about 20 pixels or so.\n\nBut the top is flat and can be positioned precisely\njust under the bottom right edge very nicely.\nAnd there it will sit, never ever to be busted\nby varying sizes of text when adjusted.\n\n
<div id=\"page\">\n\t<div id=\"page-contents\">\n\t\t<h2>Gravity-Defying!</h2>\n\t\t<p>Lorem ipsum dolor ...</p>\n\t</div>\n</div>
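\n\nThe verse below explains the thinking; as a rough sketch of the styles it describes (the width, padding values, colour and image name are placeholders), the CSS might look something like this:\n\n#page {\n\twidth: 30em;\n\tpadding-bottom: 18px; /* room equal to the height of the gradient image */\n\tbackground: url(page-curl.gif) no-repeat bottom right; /* hypothetical file name */\n}\n\n#page-contents {\n\tpadding: 1.5em; /* ems, so the spacing scales with the text size */\n\tbackground: #fff; /* fills the inner box but stops short of the shadow strip below */\n}\n\nNothing exotic is going on: the bottom padding on #page simply reserves the 18 pixel strip where the gradient shows through.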
\n\nLet\u2019s dive into code and in the markup you\u2019ll see\n\u201cis that an extra div?\u201d \u2026 please don\u2019t kill me?\nThe #page div sets the width and bottom padding\nwhose height is equal to the shadow we\u2019re adding.\n\n\n\nThe #page-contents div will set padding in ems\nto scale with the text size the user intends.\nThe background color will be added here too\nbut not overlapping the shadow where #page\u2019s padding makes room.\n\nA simple technique that you may find amusing\nis to substitute a PNG for the GIF I was using.\nFor that would be crafty and future-proof, too.\nThe page curl could sit on any background hue.\n\nI hope you\u2019ve enjoyed this easy little trick.\nIt\u2019s hardly earth-shattering, and arguably slick.\nBut it could come in handy, you just never know.\nHappy Holidays! And pleasant dreams of web three point oh.", "year": "2006", "author": "Dan Cederholm", "author_slug": "dancederholm", "published": "2006-12-24T00:00:00+00:00", "url": "https://24ways.org/2006/gravity-defying-page-corners/", "topic": "design"} {"rowid": 134, "title": "Photographic Palettes", "contents": "How many times have you seen a colour combination that just worked, a match so perfect that it just seems obvious? \n\nNow, how many times do you come up with those in your own work? A perfect palette looks easy when it\u2019s done right, but it\u2019s often maddeningly difficult and time-consuming to accomplish.\n\nChoosing effective colour schemes will always be more art than science, but there are things you can do that will make coming up with that oh-so-smooth palette just a little a bit easier. A simple trick that can lead to incredibly gratifying results lies in finding a strong photograph and sampling out particularly harmonious colours.\n\nPhoto Selection\n\nNot all photos are created equal. You certainly want to start with imagery that fits the eventual tone you\u2019re attempting to create. A well-lit photo of flowers might lead to a poor colour scheme for a funeral parlour\u2019s web site, for example. It\u2019s worth thinking about what you\u2019re trying to say in advance, and finding a photo that lends itself to your message.\n\nAs a general rule of thumb, photos that have a lot of neutral or de-saturated tones with one or two strong colours make for the best palette; bright and multi-coloured photos are harder to derive pleasing results from. Let\u2019s start with a relatively neutral image.\n\nSampling\n\n\n\nIn the above example, I\u2019ve surrounded the photo with three different background colours directly sampled from the photo itself. Moving from left to right, you can see how each of the sampled colours is from an area of increasingly smaller coverage within the photo, and yet there\u2019s still a strong harmony between the photo and the background image. I don\u2019t really need to pick the big obvious colours from the photo to create that match, I can easily concentrate on more interesting colours that might work better for what I intend.\n\nUsing a similar palette, let\u2019s apply those colour choices to a more interesting layout:\n\n\n\nIn this mini-layout, I\u2019ve re-used the same tan colour from the previous middle image as a background, and sampled out a nicely matching colour for the top and bottom overlays, as well as the two different text colours. Because these colours all fall within a narrow range, the overall balance is harmonious.\n\nWhat if I want to try something a little more daring? 
I have a photo of stacked chairs of all different colours, and I\u2019d like to use a few more of those. No problem, provided I watch my colour contrast:\n\n\n\nThough it uses varying shades of red, green, and yellow, this palette actually works because the values are even, and the colours muted. Placing red on top of green is usually a hideous combination of death, but if the green is drab enough and the red contrasts well enough, the result can actually be quite pleasing. I\u2019ve chosen red as my loudest colour in this palette, and left green and yellow to play the quiet supporting roles.\n\nObviously, there are no hard and fast rules here. You might not want to sample absolutely every colour in your scheme from a photo. There are times where you\u2019ll need a variation that\u2019s just a little bit lighter, or a blue that\u2019s not in the photo. You might decide to start from a photo base and tweak, or add in colours of your own. That\u2019s okay too.\n\nTonal Variations\n\nI\u2019ll leave you with a final trick I\u2019ve been using lately, a way to bring a bit more of a formula into the equation and save some steps.\n\n\n\nStarting with the same base palette I sampled from the chairs, in the above image I\u2019ve added a pair of overlaying squares that produce tonal variations of each primary. The lighter variation is simply a solid white square set to 40% opacity, the darker one is a black square at 20%. That gives me a highlight and shadow for each colour, which would prove handy if I had to adapt this colour scheme to a larger layout.\n\nI could add a few more squares of varying opacities, or adjust the layer blending modes for different effects, but as this looks like a great place to end, I\u2019ll leave that up to your experimental whims. Happy colouring!", "year": "2006", "author": "Dave Shea", "author_slug": "daveshea", "published": "2006-12-22T00:00:00+00:00", "url": "https://24ways.org/2006/photographic-palettes/", "topic": "design"} {"rowid": 137, "title": "Cheating Color", "contents": "Have you ever been strapped to use specific colors outlined in a branding guide? Felt restricted because those colors ended up being too light or dark for the way you want to use them?\n\nHere\u2019s the solution: throw out your brand guide.\n\ngasp!\n\nOK, don\u2019t throw it out. Just put it in a drawer for a few minutes.\n\nBranding Guides be Damned\n\nWhen dealing with color on screen, it\u2019s easy to get caught up in literal values from hex colors, you can cheat colors ever so slightly to achieve the right optical value. This is especially prevalent when trying to bring a company\u2019s identity colors to a screen design. Because the most important idea behind a brand guide is to help a company maintain the visual integrity of their business, consider hex numbers to be guidelines rather than law. Once you are familiar enough with the colors your company uses, you can start to flex them a bit, and take a few liberties.\n\nThis is a quick method for cheating to get the color you really want. With a little sleight of design, we can swap a color that might be part of your identity guidelines, with one that works better optically, and no one will be the wiser!\n\nColor is a Wily Beast\n\nThis might be hard: You might have to break out of the idea that a color can only be made using one method. Color is fluid. It interacts and changes based on its surroundings. Some colors can appear lighter or darker based on what color they appear on or next to. 
The RGB gamut is additive color, and as such, has a tendency to push contrast in the direction that objects may already be leaning\u2014increasing the contrast of light colors on dark colors and decreasing the contrast of light on light. Obviously, because we are talking about monitors here, these aren\u2019t hard and fast rules.\n\nCheat and Feel Good About It\n\nOn a light background, when you have a large element of a light color, a small element of the same color will appear lighter.\n\nEnter our fake company: Double Dagger. They manufacture footnotes. Take a look at Fig. 1 below. The logo (Double Dagger), rule, and small text are all #6699CC. Because the logo so large, we get a good sense of the light blue color. Unfortunately, the rule and small text beneath it feel much lighter because we can\u2019t create enough contrast with such small shapes in that color.\n\nNow take a look at Fig. 2. Our logo is still #6699CC, but now the rule and smaller text have been cheated to #4477BB, effectively giving us the same optical color that we used in the logo. You will find that we get a better sense of the light blue, and the added benefit of more contrast for our text. Doesn\u2019t that feel good?\n\n\n\nConversely, when you have a large element of a dark color, a small element of the same color will appear darker.\n\nLet\u2019s look at Fig. 3 below. Double Dagger has decided to change its identity colors from blue to red. In Fig. 3, our logo, rule, and small text are all #330000, a very dark red. If you look at the rule and small text below the logo, you will notice that they seem dark enough to be confused with black. The dark red can\u2019t be sustained by the smaller shapes. Now let\u2019s look at Fig. 4. The logo is still #33000, but we\u2019ve now cheated the rule and smaller text to #550000. This gives us a better sense of a red, but preserves the dark and moody direction the company has taken.\n\n\n\nBut we\u2019ve only touched on color against a white background. For colors against a darker background, you may find lighter colors work fine, but darker colors need to be cheated a bit to the lighter side in order to reach a good optical equivalent. Take a look below at Fig. 5 and Fig. 6. Both use the same exact corresponding colors as Fig. 1 and Fig. 2 above, but now they are set against a dark background. Where the blue used in Fig. 1 above was too light for the smaller elements, we find it is just right for them in Fig. 5, and the darker blue we used in Fig. 2 has now proven too dark for a dark background, as evidenced in Fig. 6.\n\n\n\nYour mileage may vary, and this may not be applicable in all situations, but consider it to be just another tool on your utility belt for dealing with color problems.", "year": "2006", "author": "Jason Santa Maria", "author_slug": "jasonsantamaria", "published": "2006-12-23T00:00:00+00:00", "url": "https://24ways.org/2006/cheating-color/", "topic": "design"} {"rowid": 140, "title": "Styling hCards with CSS", "contents": "There are plenty of places online where you can learn about using the hCard microformat to mark up contact details at your site (there are some resources at the end of the article). But there\u2019s not yet been a lot of focus on using microformats with CSS. So in this installment of 24 ways, we\u2019re going to look at just that \u2013 how microformats help make CSS based styling simpler and more logical.\n\nBeing rich, quite complex structures, hCards provide designers with a sophisticated scaffolding for styling them. 
A recent example of styling hCards I saw, playing on the business card metaphor, was by Andy Hume, at http://thedredge.org/2005/06/using-hcards-in-your-blog/. While his approach uses fixed width cards, let\u2019s take a look at how we might style a variable width business card style for our hCards.\n\nLet\u2019s take a common hCard, which includes address, telephone and email details\n\n
<div class=\"vcard\">\n\t<p class=\"fn org\">Web Directions North\n\t\t<a href=\"\u2026\">\n\t\t\t<img src=\"\u2026\" alt=\"download vCard\" />\n\t\t</a>\n\t</p>\n\n\t<p class=\"adr\">\n\t\t<span class=\"street-address\">1485 Laperri\u00e8re Avenue</span>\n\t\t<span class=\"locality\">Ottawa</span> <span class=\"region\">ON</span> <span class=\"postal-code\">K1Z 7S8</span>\n\t\t<span class=\"country-name\">Canada</span>\n\t</p>\n\n\t<div class=\"telecommunications\">\n\t\t<p class=\"tel\">Phone/Fax: <span class=\"type\">Work</span>: <span class=\"value\">61 2 9365 5007</span></p>\n\t\t<p class=\"email\">Email: <a href=\"mailto:info@webdirections.org\"><span class=\"value\">info@webdirections.org</span></a></p>\n\t</div>\n</div>\n\nWe\u2019ll be using a variation on the now well established \u201csliding doors\u201d technique (if you create a CSS technique, remember it\u2019s very important to give it a memorable name or acronym, and bonus points if you get your name in there!) by Douglas Bowman, enhanced by Scott Schiller (see http://www.schillmania.com/projects/dialog/), which will give us a design which looks like this\n\n\n\nThe technique, in a nutshell, uses background images on four elements, two at the top, and two at the bottom, to add each rounded corner.\n\nWe are going to make this design \u201cfluid\u201d in the sense that it grows and shrinks in proportion with the size of the font that the text of the element is displayed with. This is sometimes referred to as an \u201cem driven design\u201d (we\u2019ll see why in a moment).\n\nTo see how this works in practice, here\u2019s the same design with the text \u201czoomed\u201d up in size\n\n\n\nand the same design again, when we zoom the text size down\n\n\n\nBy the way, the hCard image comes from Chris Messina, and you can download it and other microformat icons from the microformats wiki.\n\nNow, with CSS3, this whole task would be considerably easier, because we can add multiple background images to an element, and border images for each edge of an element. Safari, version 1.3 up, actually supports multiple background images, but sadly, it\u2019s not supported in Firefox 1.5, or even Firefox 2.0 (let\u2019s not mention IE7 eh?). So it\u2019s probably too little supported to use now. So instead we\u2019ll use a technique that only involves CSS2, and works in pretty much any browser.\n\nVery often, developers add div or span elements as containers for these background images, and in fact, if you visit Scott Schiller\u2019s site, that\u2019s what he has done there. But if at all possible we shouldn\u2019t be adding any HTML simply for presentational purposes, even if the presentation is done via CSS. What we can do is to use the HTML we have already, as much as is possible, to add the style we want. This can take some creative thinking, but once you get the hang of this approach it becomes a more natural way of using HTML compared with simply adding divs and spans at will as hooks for style. Of course, this technique isn\u2019t always simple, and in fact sometimes simply not possible, requiring us to add just a little HTML to provide the \u201chooks\u201d for CSS.\n\nLet\u2019s go to work\n\nThe first step is to add a background image to the whole vCard element.\n\n\n\nWe make this wide enough (for example 1000 or more pixels) and tall enough that no matter how large the content of the vCard grows, it will never overflow this area. We can\u2019t simply repeat the image, because the top left corner will show when the image repeats.\n\nWe add this as the background image of the vCard element using CSS.\n\nWhile we are at it, let\u2019s give the text a sans-serif font, some color so that it will be visible, and stop the image repeating.\n\n.vcard {\n\tbackground-image: url(images/vcardfill.png);\n\tbackground-repeat: no-repeat;\n\tcolor: #666;\n\tfont-family: \"Lucida Grande\", Verdana, Helvetica, Arial, sans-serif;\n}\n\nWhich in a browser, will look something like this.\n\n\n\nNext step we need to add the top right hand corner of the hCard.
In keeping with our aim of not adding HTML simply for styling purposes, we want to use the existing structure of the page where possible. Here, we\u2019ll use the paragraph of class fn and org, which is the first child element of the vcard element.\n\n

<p class=\"fn org\">Web Directions Conference Pty Ltd <a href=\"\u2026\"><img src=\"\u2026\" alt=\"download vCard\" /></a></p>

\n\nHere\u2019s our CSS for this element\n\n.fn {\n\tbackground-image: url(images/topright.png);\n\tbackground-repeat: no-repeat;\n\tbackground-position: top right;\n\tpadding-top: 2em;\n\tfont-weight: bold;\n\tfont-size: 1.1em;\n}\n\nAgain, we don\u2019t want it to repeat, but this time, we\u2019ve specified a background position for the image. This will make the background image start from the top, but its right edge will be located at the right edge of the element. I also made the font size a little bigger, and the weight bold, to differentiate it from the rest of the text in the hCard.\n\nHere\u2019s the image we are adding as the background to this element.\n\n\n\nSo, putting these two CSS statements together we get\n\n\n\nWe specified a padding-top of 2em to give some space between the content of the fn element and the edge of the fn element. Otherwise the top of the hCard image would be hard against the border. To see this in action, just remove the padding-top: 2em; declaration and preview in a browser.\n\nSo, with just two statements, we are well under way. We\u2019ve not even had to add any HTML so far. Let\u2019s turn to the bottom of the element, and add the bottom border (well, the background image which will serve as that border).\n\nNow, which element are we going to use to add this background image to?\n\nOK, here I have to admit to a little, teensie bit of cheating. If you look at the HTML of the hCard, I\u2019ve grouped the email and telephone properties into a div, with a class of telecommunications. This grouping is not strictly requred for our hCard.\n\n
<div class=\"telecommunications\">\n\n\t<p class=\"tel\">Phone/Fax: <span class=\"type\">Work</span>:\n\t\t<span class=\"value\">61 2 9365 5007</span></p>\n\n\t<p class=\"email\">Email: <a href=\"mailto:info@webdirections.org\"><span class=\"value\">info@webdirections.org</span></a></p>\n\n</div>
\n\nNow, I chose that class name because that is what the vCard specification calls this group of properties. And typically, I do tend to group together related elements using divs when I mark up content. I find it makes the page structure more logical and readable. But strictly speaking, this isn\u2019t necessary, so you may consider it cheating. But my lesson in this would be, if you are going to add markup, try to make it as meaningful as possible.\n\nAs you have probably guessed by now, we are going to add one part of the bottom border image to this element. We\u2019re going to add this image as the background-image.\n\n\n\nAgain, it will be a very wide image, like the top left one, so that no matter how wide the element might get, the background image will still be wide enough. Now, we\u2019ll need to make this image sit in the bottom left of the element we attach it to, so we use a backgound position of left bottom (we put the horizontal position before the vertical). Here\u2019s our CSS statement for this\n\n.telecommunications {\n\tbackground-image: url(images/bottom-left.png);\n\tbackground-repeat: no-repeat;\n\tbackground-position: left bottom;\n\tmargin-bottom: 2em;\n}\n\nAnd that will look like this\n\n\n\nNot quite there, but well on the way. Time for the final piece in the puzzle.\n\nOK, I admit, I might have cheated just a little bit more in this step. But like the previous step, all valid, and (hopefully) quite justifiable markup. If we look at the HTML again, you\u2019ll find that our email address is marked up like this\n\n

<p class=\"email\">Email: <a href=\"mailto:info@webdirections.org\"><span class=\"value\">info@webdirections.org</span></a></p>

\n\nTypically, in hCard, the value part of this property isn\u2019t required, and we could get away with\n\ninfo@webdirections.org\n\nThe form I\u2019ve used, with the span of class value is however, perfectly valid hCard markup (hard allows for multiple email addresses of different types, which is where this typically comes in handy). Why have I gone to all this trouble? Well, when it came to styling the hCard, I realized I needed a block element to attach the background image for the bottom right hand corner to. Typically the last block element in the containing element is the ideal choice (and sometimes it\u2019s possible to take an inline element, for example the link here, and use CSS to make it a block element, and attach it to that, but that really doesn\u2019t work with this design).\n\nSo, if we are going to use the paragraph which contains the email link, we need a way to select it exclusively, which means that with CSS2 at least, we need a class or id as a hook for our CSS selector (in CSS3 we could use the last-child selector, which selects the last child element of a specified element, but again, as last child is not widely supported, we won\u2019t rely on it here.)\n\nSo, the least worst thing we could do is take an existing element, and add some reasonably meaningful markup to it. That\u2019s why we gave the paragraph a class of email, and the email address a class of value. Which reminds me a little of a moment in Hamlet\n\n\n\tThe lady doth protest too much, methinks\n\n\nOK, let\u2019s get back to the CSS.\n\nWe add the bottom right corner image, positioning it in the bottom right of the element, and making sure it doesn\u2019t repeat. We also add some padding to the bottom, to balance out the padding we added to the top of the hCard.\n\np.email {\n\tbackground-image: url(images/bottom-right.png);\n\tbackground-position: right bottom;\n\tbackground-repeat: no-repeat;\n\tpadding-bottom: 2em;\n}\n\nWhich all goes to make our hCard look like this\n\n\n\nIt just remains for us to clean up a little.\n\nLet\u2019s start from the top. We\u2019ll float the download image to the right like this\n\n.vcard img {\n\tfloat: right;\n\tpadding-right: 1em;\n\tmargin-top: -1em\n}\n\nSee how we didn\u2019t have to add a class to style the image, we used the fact that the image is a descendent of the vcard element, and a descendent selector. In my experience, the very widely supported, powerful descendent selector is one of the most underused aspects of CSS. So if you don\u2019t use it frequently, look into it in more detail.\n\nWe added some space to the right of the image, and pulled it up a bit closer to the top of the hCard, like this\n\n\n\nWe also want to add some whitespace between the edge of the hCard and the text. We would typically add padding to the left of the containing element, (in this case the vcard element) but this would break our bottom left hand corner, like this\n\n\n\nThat\u2019s because the div element we added this bottom left background image to would be moved in by the padding on its containing element.\n\nSo instead, we add left margin to all the paragraphs in the hCard\n\n.vcard p {\n\tmargin-left: 1em;\n}\n\n(there is the descendent selector again \u2013 it is the swiss army knife of CSS)\n\nNow, we\u2019ve not yet made the width of the hCard a function of the size of the text inside it (or \u201cem driven\u201d as we described it earlier). We do this by giving the hCard a width that is specified in em units. 
Here we\u2019ll set a width of say 28em, which makes the hCard always roughly as wide as 28 characters (strictly speaking 28 times the width of the letter capital M). \n\nSo the statement for our containing vcard element becomes\n\n.vcard {\n\tbackground-image: url(images/vcardfill.png);\n\tbackground-repeat: no-repeat;\n\tcolor: #666;\n\tfont-family: \"Lucida Grande\", Verdana, Helvetica, Arial, sans-serif;\n\twidth: 28em;\n}\n\nand now our element will look like this\n\n\n\nWe\u2019ve used almost entirely the existing HTML from our original hCard (adding just a little, and trying as much as possible to keep that additional markup meaningful), and just 6 CSS statements.\n\nHoliday Bonus \u2013 a downloadable vCard\n\nDid you notice this part of the HTML\n\n\n \"download\n\nWhat\u2019s with the odd looking url\n\nbody {\n\tfont-size: 12px; \n}\np {\n\tline-height 1.5em; \n}\n\nThere are many ways to size text in CSS and the above approach provides and accessible method of achieving the pixel-precision solid typography requires. By way of explanation, the first font-size reduces the body text from the 16px default (common to most browsers and OS set-ups) down to the 12px we require. This rule is primarily there for Internet Explorer 6 and below on Windows: the percentage value means that the text will scale predictably should a user bump the text size up or down. The second font-size sets the text size specifically and is ignored by IE6, but used by Firefox, Safari, IE7, Opera and other modern browsers which allow users to resize text sized in pixels.\n\nSpacing between paragraphs\n\nWith our rhythmic unit set at 18px we need to ensure that it is maintained throughout the body copy. A common place to lose the rhythm is the gaps set between margins. The default treatment by web browsers of paragraphs is to insert a top- and bottom-margin of 1em. In our case this would give a spacing between the paragraphs of 12px and hence throw the text out of rhythm. If the rhythm of the page is to be maintained, the spacing of paragraphs should be related to the basic line height unit. This is achieved simply by setting top- and bottom-margins equal to the line height.\n\nIn order that typographic integrity is maintained when text is resized by the user we must use ems for all our vertical measurements, including line-height, padding and margins.\n\np {\n\tfont-size:1em;\n\tmargin-top: 1.5em;\n\tmargin-bottom: 1.5em; \n}\n\nBrowsers set margins on all block-level elements (such as headings, lists and blockquotes) so a way of ensuring that typographic attention is paid to all such elements is to reset the margins at the beginning of your style sheet. You could use a rule such as:\n\nbody,div,dl,dt,dd,ul,ol,li,h1,h2,h3,h4,h5,h6,pre,form,fieldset,p,blockquote,th,td { \n\tmargin:0; \n\tpadding:0; \n}\n\nAlternatively you could look into using the Yahoo! UI Reset style sheet which removes most default styling, so providing a solid foundation upon which you can explicitly declare your design intentions.\n\nVariations in text size\n\nWhen there is a change in text size, perhaps with a heading or sidenotes, the differing text should also take up a multiple of the basic leading. This means that, in our example, every diversion from the basic text size should take up multiples of 18px. This can be accomplished by adjusting the line-height and margin accordingly, as described following.\n\nHeadings\n\nSubheadings in the example page are set to 14px. 
In order that the height of each line is 18px, the line-height should be set to 18\u00a0\u00f7\u00a014 =\u00a01.286. Similarly the margins above and below the heading must be adjusted to fit. The temptation is to set heading margins to a simple 1em, but in order to maintain the rhythm, the top and bottom margins should be set at 1.286em so that the spacing is equal to the full 18px unit.\n\nh2 {\n\tfont-size:1.1667em;\n\tline-height: 1.286em;\n\tmargin-top: 1.286em;\n\tmargin-bottom: 1.286em; \n}\n\nOne can also set asymmetrical margins for headings, provided the margins combine to be multiples of the basic line height. In our example, a top margin of 1\u00bd lines is combined with a bottom margin of half a line as follows:\n\nh2 {\n\tfont-size:1.1667em;\n\tline-height: 1.286em;\n\tmargin-top: 1.929em;\n\tmargin-bottom: 0.643em; \n}\n\nAlso in our example, the main heading is given a text size of 18px, therefore the line-height has been set to 1em, as has the margin:\n\nh1 {\n\tfont-size:1.5em;\n\tline-height: 1em;\n\tmargin-top: 0;\n\tmargin-bottom: 1em; \n}\n\nSidenotes\n\nSidenotes (and other supplementary material) are often set at a smaller size to the basic text. To keep the rhythm, this smaller text should still line up with body copy, so a calculation similar to that for headings is required. In our example, the sidenotes are set at 10px and so their line-height must be increased to 18\u00a0\u00f7\u00a010 =\u00a01.8.\n\n.sidenote {\n\tfont-size:0.8333em;\n\tline-height:1.8em; \n}\n\nBorders\n\nOne additional point where vertical rhythm is often lost is with the introduction of horizontal borders. These effectively act as shims pushing the subsequent text downwards, so a two pixel horizontal border will throw out the vertical rhythm by two pixels. A way around this is to specify horizontal lines using background images or, as in our example, specify the width of the border in ems and adjust the padding to take up the slack. \n\nThe design of the footnote in our example requires a 1px horizontal border. The footnote contains 12px text, so 1px in ems is 1\u00a0\u00f7\u00a012 =\u00a00.0833. I have added a margin of 1\u00bd lines above the border (1.5\u00a0\u00d7\u00a018\u00a0\u00f7\u00a012 =\u00a02.5ems), so to maintain the rhythm the border + padding must equal a \u00bd (9px). We know the border is set to 1px, so the padding must be set to 8px. To specify this in ems we use the familiar calculation: 8\u00a0\u00f7\u00a012 =\u00a00.667.\n\nHit me with your rhythm stick\n\nComposing to a vertical rhythm helps engage and guide the reader down the page, but it takes typographic discipline to do so. It may seem like a lot of fiddly maths is involved (a few divisions and multiplications never hurt anyone) but good type setting is all about numbers, and it is this attention to detail which is the key to success.", "year": "2006", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2006-12-12T00:00:00+00:00", "url": "https://24ways.org/2006/compose-to-a-vertical-rhythm/", "topic": "design"} {"rowid": 146, "title": "Increase Your Font Stacks With Font Matrix", "contents": "Web pages built in plain old HTML and CSS are displayed using only the fonts installed on users\u2019 computers (@font-face implementations excepted). To enable this, CSS provides the font-family property for specifying fonts in order of preference (often known as a font stack). 
For example:\n\nh1 {font-family: 'Egyptienne F', Cambria, Georgia, serif}\n\nSo in the above rule, headings will be displayed in Egyptienne F. If Egyptienne F is not available then Cambria will be used, failing that Georgia or the final fallback default serif font. This everyday bit of CSS will be common knowledge among all 24 ways readers.\n\nIt is also a commonly held belief that the only fonts we can rely on being installed on users\u2019 computers are the core web fonts of Arial, Times New Roman, Verdana, Georgia and friends. But is that really true?\n\nIf you look in the fonts folder of your computer, or even your Mum\u2019s computer, then you are likely to find a whole load of fonts besides the core ones. This is because many software packages automatically install extra typefaces. For example, Office 2003 installs over 100 additional fonts. Admittedly not all of these fonts are particularly refined, and not all are suitable for the Web. However they still do increase your options. \n\nThe Matrix\n\nI have put together a matrix of (western) fonts showing which are installed with Mac and Windows operating systems, which are installed with various versions of Microsoft Office, and which are installed with Adobe Creative Suite.\n\n\n\nThe matrix is available for download as an Excel file and as a CSV. There are no readily available statistics regarding the penetration of Office or Creative Suite, but you can probably take an educated guess based on your knowledge of your readers.\n\nThe idea of the matrix is that use can use it to help construct your font stack. First of all pick the font you\u2019d really like for your text \u2013 this doesn\u2019t have to be in the matrix. Then pick the generic family (serif, sans-serif, cursive, fantasy or monospace) and a font from each of the operating systems. Then pick any suitable fonts from the Office and Creative Suite lists.\n\nFor example, you may decide your headings should be in the increasingly ubiquitous Clarendon. This is a serif type face. At OS-level the most similar is arguably Georgia. Adobe CS2 comes with Century Old Style which has a similar feel. Century Schoolbook is similar too, and is installed with all versions of Office. Based on this your font stack becomes:\n\nfont-family: 'Clarendon Std', 'Century Old Style Std', 'Century Schoolbook', Georgia, serif\n\n\n\n\n\n\nNote the \u2018Std\u2019 suffix indicating a \u2018standard\u2019 OpenType file, which will normally be your best bet for more esoteric fonts.\n\nI\u2019m not suggesting the process of choosing suitable fonts is an easy one. Firstly there are nearly two hundred fonts in the matrix, so learning what each font looks like is tricky and potentially time consuming (if you haven\u2019t got all the fonts installed on a machine to hand you\u2019ll be doing a lot of Googling for previews). And it\u2019s not just as simple as choosing fonts that look similar or have related typographic backgrounds, they need to have similar metrics as well, This is especially true in terms of x-height which gives an indication of how big or small a font looks.\n\nOver to You\n\nThe main point of all this is that there are potentially more fonts to consider than is generally accepted, so branch out a little (carefully and tastefully) and bring a little variety to sites out there. 
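For instance, if you were after a humanist sans serif in the Lucida family, a stack assembled the same way might look something like this (treat it as a sketch rather than a recommendation):\n\nfont-family: \"Lucida Grande\", \"Lucida Sans Unicode\", \"Lucida Sans\", Verdana, sans-serif;\n\nLucida Grande covers Mac OS X, Lucida Sans Unicode ships with Windows, and Verdana plus the generic sans-serif are the fallbacks of last resort.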
If you come up with any novel font stacks based on this approach, please do blog them (tagged as per the footer) and at some point they could all be combined in one place for everyone to consider.\n\nAppendix\n\nWhat about Linux?\n\nThe only operating systems in the matrix are those from Microsoft and Apple. For completeness, Linux operating systems should be included too, although these are many and varied and very much in a minority, so I omitted them for time being. For the record, some Linux distributions come packaged with Microsoft\u2019s core fonts. Others use the Vera family, and others use the Liberation family which comprises fonts metrically identical to Times New Roman and Arial.\n\nSources\n\nThe sources of font information for the matrix are as follows:\n\n\n\tWindows XP SP2\n\tWindows Vista\n\tOffice 2003\n\tOffice 2007\n\tMac OSX Tiger\n\tMac OSX Leopard (scroll down two thirds)\n\tOffice 2004 (Mac) by inspecting my Microsoft Office 2004/Office/Fonts folder\n\tOffice 2008 (Mac) is expected to be as Office 2004 with the addition of the Vista ClearType fonts\n\tCreative Suite 2 (see pdf link in first comment)\n\tCreative Suite 3", "year": "2007", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2007-12-17T00:00:00+00:00", "url": "https://24ways.org/2007/increase-your-font-stacks-with-font-matrix/", "topic": "design"} {"rowid": 148, "title": "Typesetting Tables", "contents": "Tables have suffered in recent years on the web. They were used for laying out web pages. Then, following the Web Standards movement, they\u2019ve been renamed by the populous as `data tables\u2019 to ensure that we all know what they\u2019re for. There have been some great tutorials for the designing tables using CSS for presentation and focussing on the semantics in the displaying of data in the correct way. However, typesetting tables is a subtle craft that has hardly had a mention.\n\nTable design can often end up being a technical exercise. What data do we need to display? Where is the data coming from and what form will it take? When was the last time your heard someone talk about lining numerals? Or designing to the reading direction?\n\nTables are not read like sentences\n\nWhen a reader looks at, and tries to understand, tabular data, they\u2019re doing a bunch of things at the same time.\n\n\n\tGenerally, they\u2019re task based; they\u2019re looking for something.\n\tThey are reading horizontally AND vertically\n\n\nReading a table is not like reading a paragraph in a novel, and therefore shouldn\u2019t be typeset in the same way. Designing tables is information design, it\u2019s functional typography\u2014it\u2019s not a time for eye candy.\n\nTypesetting tables\n\nTypesetting great looking tables is largely an exercise in restraint. Minimal interference with the legibility of the table should be in the forefront of any designers mind.\n\nWhen I\u2019m designing tables I apply some simple rules:\n\n\n\tPlenty of negative space\n\tUse the right typeface\n\tGo easy on the background tones, unless you\u2019re giving reading direction visual emphasis\n\tDesign to the reading direction\n\n\nBy way of explanation, here are those rules as applied to the following badly typeset table.\n\nYour default table\n\nThis table is a mess. There is no consideration for the person trying to read it. Everything is too tight. The typeface is wrong. It\u2019s flat. 
A grim table indeed.\n\n\n\nLet\u2019s see what we can do about that.\n\nPlenty of negative space\n\nThe badly typeset table has been set with default padding. There has been little consideration for the ascenders and descenders in the type interfering with the many horizontal rules.\n\nThe first thing we do is remove most of the lines, or rules. You don\u2019t need them \u2013 the data in the rows forms its own visual rules. Now, with most of the rules removed, the ones that remain mean something; they are indicating some kind of hierarchy to the help the reader understand what the different table elements mean \u2013 in this case the column headings.\n\n\n\nNow we need to give the columns and rows more negative space. Note the framing of the column headings. I\u2019m giving them more room at the bottom. This negative space is active\u2014it\u2019s empty for a reason. The extra air in here also gives more hierarchy to the column headings.\n\n\n\nUse the right typeface\n\nThe default table is set in a serif typeface. This isn\u2019t ideal for a couple of reasons. This serif typeface has a standard set of text numerals. These dip below the baseline and are designed for using figures within text, not on their own. What you need to use is a typeface with lining numerals. These align to the baseline and are more legible when used for tables.\n\n\n\nSans serif typefaces generally have lining numerals. They are also arguably more legible when used in tables.\n\nGo easy on the background tones, unless you\u2019re giving reading direction visual emphasis \n\nWe\u2019ve all seen background tones on tables. They have their use, but my feeling is that use should be functional and not decorative.\n\nIf you have a table that is long, but only a few columns wide, then alternate row shading isn\u2019t that useful for showing the different lines of data. It\u2019s a common misconception that alternate row shading is to increase legibility on long tables. That\u2019s not the case. Shaded rows are to aid horizontal reading across multiple table columns. On wide tables they are incredibly useful for helping the reader find what they want.\n\n\n\nBackground tone can also be used to give emphasis to the reading direction. If we want to emphasis a column, that can be given a background tone.\n\n\n\nHierarchy\n\nAs I said earlier, people may be reading a table vertically, and horizontally in order to find what they want. Sometimes, especially if the table is complex, we need to give them a helping hand.\n\nVisually emphasising the hierarchy in tables can help the reader scan the data. Column headings are particularly important. Column headings are often what a reader will go to first, so we need to help them understand that the column headings are different to the stuff beneath them, and we also need to give them more visual importance. We can do this by making them bold, giving them ample negative space, or by including a thick rule above them. We can also give the row titles the same level of emphasis.\n\n\n\nIn addition to background tones, you can give emphasis to reading direction by typesetting those elements in bold. You shouldn\u2019t use italics\u2014with sans serif typefaces the difference is too subtle.\n\nSo, there you have it. 
A couple of simple guidelines to make your tables cleaner and more readable.", "year": "2007", "author": "Mark Boulton", "author_slug": "markboulton", "published": "2007-12-07T00:00:00+00:00", "url": "https://24ways.org/2007/typesetting-tables/", "topic": "design"} {"rowid": 149, "title": "Underpants Over My Trousers", "contents": "With Christmas approaching faster than a speeding bullet, this is the perfect time for you to think about that last minute present to buy for the web geek in your life. If you\u2019re stuck for ideas for that special someone, forget about that svelte iPhone case carved from solid mahogany and head instead to your nearest comic-book shop and pick up a selection of comics or graphic novels. (I\u2019ll be using some of my personal favourite comic books as examples throughout). \n\nTrust me, whether your nearest and dearest has been reading comics for a while or has never peered inside this four-colour world, they\u2019ll thank-you for it.\n\nAside from indulging their superhero fantasies, comic books can provide web designers with a rich vein of inspiring ideas and material to help them create shirt button popping, trouser bursting work for the web. I know from my own personal experience, that looking at aspects of comic book design, layout and conventions and thinking about the ways that they can inform web design has taken my design work in often-unexpected directions. \n\nThere are far too many fascinating facets of comic book design that provide web designers with inspiration to cover in the time that it takes to pull your underpants over your trousers. So I\u2019m going to concentrate on one muscle bound aspect of comic design, one that will make you think differently about how you lay out the content of your pages in panels. \n\nA suitcase full of Kryptonite\n\nNow, to the uninitiated onlooker, the panels of a comic book may appear to perform a similar function to still frames from a movie. But inside the pages of a comic, panels must work harder to help the reader understand the timing of a story. It is this method for conveying narrative timing to a reader that I believe can be highly useful to designers who work on the web as timing, drama and suspense are as important in the web world as they are in worlds occupied by costumed crime fighters and superheroes.\n\nI\u2019d like you to start by closing your eyes and thinking about your own process for laying out panels of content on a page. OK, you\u2019ll actually be better off with your eyes open if you\u2019re going to carry on reading.\n\nI\u2019ll bet you a suitcase full of Kryptonite that you often, if not always, structure your page layouts, and decide on the dimensions of those panels according to either:\n\n\n\tThe base grid that you are working to\n\tThe Golden Ratio or another mathematical schema\n\n\nMore likely, I bet that you decide on the size and the number of your panels based on the amount of content that will be going into them. From today, I\u2019d like you to think about taking a different approach. This approach not only addresses horizontal and vertical space, but also adds the dimension of time to your designs.\n\nSlowing down the action\n\nA comic book panel not only acts as a container for its content but also indicates to a reader how much time passes within the panel and as a result, how much time the reader should focus their attention on that one panel. \n\nSmaller panels create swift eye movement and shorter bursts of attention. 
Larger panels give the perception of more time elapsing in the story and subconsciously demands that a reader devotes more time to focus on it. \n\nConcrete by Paul Chadwick (Dark Horse Comics)\n\nThis use of panel dimensions to control timing can also be useful for web designers in designing the reading/user experience. Imagine a page full of information about a product or service. You\u2019ll naturally want the reader to focus for longer on the key benefits of your offering rather than perhaps its technical specifications.\n\nNow take a look at this spread of pages from Watchmen by Alan Moore and Dave Gibbons.\n\nWatchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004)\n\nThroughout this series of (originally) twelve editions, artist Dave Gibbons stuck rigidly to his 3\u00d73 panels per page design and deviated from it only for dramatic moments within the narrative. \n\nIn particular during the last few pages of chapter eleven, Gibbons adds weight to the impending doom by slowing down the action by using larger panels and forces the reader to think longer about what was coming next. The action then speeds up through twelve smaller panels until the final panel: nothing more than white space and yet one of the most iconic and thought provoking in the entire twelve book series.\n\nWatchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004)\n\nOn the web it is common for clients to ask designers to fill every pixel of screen space with content, perhaps not understanding the drama that can be added by nothing more than white space.\n\nIn the final chapter, Gibbons emphasises the carnage that has taken place (unseen between chapters eleven and twelve) by presenting the reader with six full pages containing only single, large panels. \n\nWatchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004)\n\nThis drama, created by the artist\u2019s use of panel dimensions to control timing, is a technique that web designers can also usefully employ when emphasising important areas of content.\n\nThink back for a moment to the home page of Apple Inc., during the launch of their iconic iPhone, where the page contained nothing more than a large image and the phrase \u201cSay hello to iPhone\u201d. Rather than fill the page with sales messages, Apple\u2019s designers allowed the space itself to tell the story and created a real sense of suspense and expectation among their readers.\n\nBorders\n\nWhereas on the web, panel borders are commonly used to add emphasis to particular areas of content, in comic books they take on a different and sometimes opposite role. \n\nIn the examples so far, borders have contained all of the action. Removing a border can have the opposite effect to what you might at first think. Rather than taking emphasis away from their content, in comics, borderless panels allow the reader\u2019s eyes to linger for longer on the content adding even stronger emphasis.\n\nConcrete by Paul Chadwick (Dark Horse Comics)\n\nThis effect is amplified when the borderless content is allowed to bleed to the edges of a page. Because the content is no longer confined, except by the edges of the page (both comic and web) the reader\u2019s eye is left to wander out into open space. \n\nConcrete by Paul Chadwick (Dark Horse Comics)\n\nThis type of open, borderless content panel can be highly useful in placing emphasis on the most important content on a page in exactly the very opposite way that we commonly employ on the web today. 
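If you wanted to try the same trick with CSS, one very rough sketch (with made-up class names) is to let an emphasised block shed the border and background that contain the ordinary panels around it, so that it bleeds out into the surrounding space:\n\n.panel {\n\tborder: 1px solid #ccc;\n\tbackground: #f6f6f6;\n\tpadding: 1em;\n}\n.panel-feature {\n\tborder: none;\n\tbackground: transparent;\n\tpadding: 2em 0;\n\tfont-size: 1.4em;\n}\n\nThe contained .panel boxes read as quick, self-contained beats, while the unboxed .panel-feature is given room to breathe and, as in the comics, tends to hold the eye for longer.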
\n\nSo why is time an important dimension to think about when designing your web pages? On one level, we are often already concerned with the short attention spans of visitors to our pages and should work hard to allow them to quickly and easily find and read the content that both they and we think is important. Learning lessons from comic book timing can only help us improve that experience.\n\nOn another: timing, suspense and drama are already everyday parts of the web browsing experience. Will a reader see what they expect when they click from one page to the next? Or are they in for a surprise? \n\nMost importantly, I believe that the web, like comics, is about story telling: often the story of the experiences that a customer will have when they use our product or service or interact with our organisation. It is this element of story telling than can be greatly improved by learning from comics.\n\nIt is exactly this kind of learning and adapting from older, more established and at first glance unrelated media that you will find can make a real distinctive difference to the design work that you create.\n\nFill your stockings\n\nIf you\u2019re a visual designer or developer and are not a regular reader of comics, from the moment that you pick up your first title, I know that you will find them inspiring. \n\nI will be writing more, and speaking about comic design applied to the web at several (to be announced) events this coming year. I hope you\u2019ll be slipping your underpants over your trousers and joining me then. In the meantime, here is some further reading to pick up on your next visit to a comic book or regular bookshop and slip into your stockings:\n\n\n\tComics and Sequential Art by Will Eisner (Northern Light Books 2001)\n\tUnderstanding Comics: The Invisible Art by Scott McCloud (Harper Collins 1994)\n\n\nHave a happy superhero season.\n\n(I would like to thank all of the talented artists, writers and publishers whose work I have used as examples in this article and the hundreds more who inspire me every day with their tall tales and talent.)", "year": "2007", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2007-12-14T00:00:00+00:00", "url": "https://24ways.org/2007/underpants-over-my-trousers/", "topic": "design"} {"rowid": 151, "title": "Get In Shape", "contents": "Pop quiz: what\u2019s wrong with the following navigation?\n\n\n\nMaybe nothing. But then again, maybe there\u2019s something bugging you about the way it comes together, something you can\u2019t quite put your finger on. It seems well-designed, but it also seems a little\u2026 off.\n\nThe design decisions that led to this eventual form were no doubt well-considered:\n\n\n\tClient: The top level needs to have a \u201ccurrent page\u201d status indicator of some sort.\n\tDesigner: How about a white tab?\n\tClient: Great! The second level needs to show up underneath the first level though\u2026\n\tDesigner: Okay, but that white tab I just added makes it hard to visually connect the bottom nav to the top.\n\tClient: Too late, we\u2019ve seen the white tab and we love it. Try and make it work.\n\tDesigner: Right. So I placed the second level in its own box.\n\tClient: Hmm. They seem too separated. I can\u2019t tell that the yellow nav is the second level of the first.\n\tDesigner: How about an indicator arrow?\n\tClient: Brilliant!\n\n\nThe problem is that the end result feels awkward and forced. 
During the design process, little decisions were made that ultimately affect the overall shape of the navigation. What started out as a neatly contained rounded rectangle ended up as an ambiguous double shape that looks funny, though it\u2019s often hard to pinpoint precisely why.\n\nThe Shape of Things\n\nWell the why in this case is because seemingly unrelated elements in a design still end up visually interacting. Adding a new item to a page impacts everything surrounding it. In this navigation example, we\u2019re looking at two individual objects that are close enough to each other that they form a relationship; if we reduce them to strictly their outlines, it\u2019s a little easier to see that this particular combination registers oddly.\n\n\n\nThe two shapes float with nothing really grounding them. If they were connected, perhaps it would be a different story. The white tab divides the top shape in half, leaving a gap in the middle of it. There\u2019s very little balance in this pairing because the overall shape of the navigation wasn\u2019t considered during the design process.\n\nHere\u2019s another example: Gmail. Great email client, but did you ever closely look at what\u2019s going on in that left hand navigation? The continuous blue bar around the message area spills out into the navigation. If we remove all text, we\u2019re left with this odd configuration:\n\n\n\nThough the reasoning for anchoring the navigation highlight against the message area might be sound, the result is an irregular shape that doesn\u2019t correspond with anything in reality. You may never consciously notice it, but once you do it\u2019s hard to miss. One other example courtesy of last.fm:\n\n\n\nThe two header areas are the same shade of pink so they appear to be closely connected. When reduced to their outlines it\u2019s easy to see that this combination is off-balance: the edges don\u2019t align, the sharp corners of the top shape aren\u2019t consistent with the rounded corners of the bottom, and the part jutting out on the right of the bottom one seems fairly random. The result is a duo of oddly mis-matched shapes.\n\nDesign Strategies\n\nOur minds tend to pick out familiar patterns. A clever designer can exploit this by creating references in his or her work to shapes and combinations with which viewers are already familiar. There are a few simple ideas that can be employed to help you achieve this: consistency, balance, and completion.\n\nConsistency\n\nA fairly simple way to unify the various disparate shapes on a page is by designing them with a certain amount of internal consistency. You don\u2019t need to apply an identical size, colour, border, or corner treatment to every single shape; devolving a design into boring repetition isn\u2019t what we\u2019re after here. But it certainly doesn\u2019t hurt to apply a set of common rules to most shapes within your work.\n\n\n\nConsider purevolume and its multiple rounded-corner panels. From the bottom of the site\u2019s main navigation to the grey \u201cExtras\u201d panels halfway down the page (shown above), multiple shapes use a common border radius for unity. Different colours, different sizes, different content, but the consistent outlines create a strong sense of similarity. Not that every shape on the site follows this rule; they break the pattern right at the top with a darker sharp-cornered header, and again with the thumbnails below. 
But the design remains unified, nonetheless.\n\nBalance\n\nArguably the biggest problem with the last.fm example earlier is one of balance. The area poking out of the bottom shape created a fairly obvious imbalance for no apparent reason. The right hand side is visually emphasized due to the greater area of pink coverage, but with the white gap left beside it, the emphasis seems unwarranted. It\u2019s possible to create tension in your design by mismatching shapes and throwing off the balance, but when that happens unintentionally it can look like a mistake.\n\n\n\nAbove are a few examples of design elements in balanced and unbalanced configurations. The examples in the top row are undeniably more pleasing to the eye than those in the bottom row. If these were fleshed out into full designs, those derived from the templates in the top row would naturally result in stronger work.\n\nTake a look at the header on 9Rules for a study in well-considered balance. On the left you\u2019ll see a couple of paragraphs of text, on the right you have floating navigational items, and both flank the site\u2019s logo. This unusual layout combines multiple design elements that look nothing alike, and places them together in a way that anchors each so that no one weighs down the header.\n\nCompletion\n\nAnd finally we come to the idea of completion. Shapes don\u2019t necessarily need hard outlines to be read visually as shapes, which can be exploited for various purposes. Notice how Zend\u2019s mid-page \u201cBusiness Topics\u201d and \u201cNews\u201d items (below) fade out to the right and bottom, but the placement of two of these side-by-side creates an impression of two panels rather than three disparate floating columns. By allowing the viewer\u2019s eye to complete the shapes, they\u2019ve lightened up the design of the page and removed inessential lines. In a busy design this technique could prove quite handy.\n\n\n\nAlong the same lines, the individual shapes within your design may also be combined visually to form outlines of larger shapes. The differently-coloured header and main content/sidebar shapes on Veerle\u2019s blog come together to form a single central panel, further emphasized by the slight drop shadow to the right.\n\nImplementation\n\nStudying how shape can be used effectively in design is simply a starting point. As with all things design-related, there are no hard and fast rules here; ultimately you may choose to bring these principles into your work more often, or break them for effect. But understanding how shapes interact within a page, and how that effect is ultimately perceived by viewers, is a key design principle you can use to impress your friends.", "year": "2007", "author": "Dave Shea", "author_slug": "daveshea", "published": "2007-12-16T00:00:00+00:00", "url": "https://24ways.org/2007/get-in-shape/", "topic": "design"} {"rowid": 152, "title": "CSS for Accessibility", "contents": "CSS is magical stuff. In the right hands, it can transform the plainest of (well-structured) documents into a visual feast. But it\u2019s not all fur coat and nae knickers (as my granny used to say). Here are some simple ways you can use CSS to improve the usability and accessibility of your site. \n\nEven better, no sexy visuals will be harmed by the use of these techniques. 
Promise.\n\nNae knickers\n\nThis is less of an accessibility tip, and more of a reminder to check that you\u2019ve got your body background colour specified.\n\nIf you\u2019re sitting there wondering why I\u2019m mentioning this, because it\u2019s a really basic thing, then you might be as surprised as I was to discover that from a sample of over 200 sites checked last year, 35% of UK local authority websites were missing their body background colour.\n\nForgetting to specify your body background colour can lead to embarrassing gaps in coverage, which are not only unsightly, but can prevent your users reading the text on your site if they use a different operating system colour scheme.\n\nAll it needs is the following line to be added to your CSS file:\n\nbody {background-color: #fff;}\n\nIf you pair it with \n\ncolor: #000;\n\n\u2026 you\u2019ll be assured of maintaining contrast for any areas you inadvertently forget to specify, no matter what colour scheme your user needs or prefers.\n\nEven better, if you\u2019ve got standard reset CSS you use, make sure that default colours for background and text are specified in it, so you\u2019ll never be caught with your pants down. At the very least, you\u2019ll have a white background and black text that\u2019ll prompt you to change them to your chosen colours.\n\nElbow room\n\nPaying attention to your typography is important, but it\u2019s not just about making it look nice. \n\nCareful use of the line-height property can make your text more readable, which helps everyone, but is particularly helpful for those with dyslexia, who use screen magnification or simply find it uncomfortable to read lots of text online. \n\nWhen lines of text are too close together, it can cause the eye to skip down lines when reading, making it difficult to keep track of what you\u2019re reading across. \n\nSo, a bit of room is good.\n\nThat said, when lines of text are too far apart, it can be just as bad, because the eye has to jump to find the next line. 
That not only breaks up the reading rhythm, but can make it much more difficult for those using Screen Magnification (especially at high levels of magnification) to find the beginning of the next line which follows on from the end of the line they\u2019ve just read.\n\nUsing a line height of between 1.2 and 1.6 times normal can improve readability, and using unit-less line heights help take care of any pesky browser calculation problems.\n\nFor example: \n\np {\n\tfont-family: \"Lucida Grande\", Lucida, Verdana, Helvetica, sans-serif;\n\tfont-size: 1em;\n\tline-height: 1.3;\n}\n\nor if you want to use the shorthand version:\n\np {\n\tfont: 1em/1.3 \"Lucida Grande\", Lucida, Verdana, Helvetica, sans-serif;\n}\n\nView some examples of different line-heights, based on default text size of 100%/1em.\n\nFurther reading on Unitless line-heights from Eric Meyer.\n\nTransformers: Initial case in disguise\n\nNobody wants to shout at their users, but there are some occasions when you might legitimately want to use uppercase on your site.\n\nAvoid screen-reader pronunciation weirdness (where, for example, CONTACT US would be read out as Contact U S, which is not the same thing \u2013 unless you really are offering your users the chance to contact the United States) caused by using uppercase by using title case for your text and using the often neglected text-transform property to fake uppercase.\n\nFor example:\n\n.uppercase {\n\ttext-transform: uppercase\n} \n\nDon\u2019t overdo it though, as uppercase text is harder to read than normal text, not to mention the whole SHOUTING thing.\n\nLinky love\n\nWhen it comes to accessibility, keyboard only users (which includes those who use voice recognition software) who can see just fine are often forgotten about in favour of screen reader users.\n\nThis Christmas, share the accessibility love and light up those links so all of your users can easily find their way around your site.\n\nThe link outline\n\nAKA: the focus ring, or that dotted box that goes around links to show users where they are on the site.\n\nThe techniques below are intended to supplement this, not take the place of it. You may think it\u2019s ugly and want to get rid of it, especially since you\u2019re going to the effort of tarting up your links.\n\nDon\u2019t. \n\nJust don\u2019t.\n\nThe non-underlined underline\n\nIf you listen to Jacob Nielsen, every link on your site should be underlined so users know it\u2019s a link.\n\nYou might disagree with him on this (I know I do), but if you are choosing to go with underlined links, in whatever state, then remove the default underline and replacing it with a border that\u2019s a couple of pixels away from the text. 
\n\nThe underline is still there, but it\u2019s no longer cutting off the bottom of letters with descenders (e.g., g and y) which makes it easier to read.\n\nThis is illustrated in Examples 1 and 2.\n\nYou can modify the three lines of code below to suit your own colour and border style preferences, and add it to whichever link state you like.\n\ntext-decoration: none;\nborder-bottom: 1px #000 solid;\npadding-bottom: 2px;\n\nStanding out from the crowd\n\nWhatever way you choose to do it, you should be making sure your links stand out from the crowd of normal text which surrounds them when in their default state, and especially in their hover or focus states.\n\nA good way of doing this is to reverse the colours when on hover or focus.\n\nWell-focused\n\nEveryone knows that you can use the :hover pseudo class to change the look of a link when you mouse over it, but, somewhat ironically, people who can see and use a mouse are the group who least need this extra visual clue, since the cursor handily (sorry) changes from an arrow to a hand.\n\nSo spare a thought for the non-pointing device users that visit your site and take the time to duplicate that hover look by using the :focus pseudo class.\n\nOf course, the internets being what they are, it\u2019s not quite that simple, and predictably, Internet Explorer is the culprit once more with it\u2019s frustrating lack of support for :focus. Instead it applies the :active pseudo class whenever an anchor has focus.\n\nWhat this means in practice is that if you want to make your links change on focus as well as on hover, you need to specify focus, hover and active.\n\nEven better, since the look and feel necessarily has to be the same for the last three states, you can combine them into one rule.\n\nSo if you wanted to do a simple reverse of colours for a link, and put it together with the non-underline underlines from before, the code might look like this:\n\na:link {\n\tbackground: #fff;\n\tcolor: #000;\n\tfont-weight: bold;\n\ttext-decoration: none; \n\tborder-bottom: 1px #000 solid; \n\tpadding-bottom: 2px;\n}\na:visited {\n\tbackground: #fff;\n\tcolor: #800080;\n\tfont-weight: bold;\n\ttext-decoration: none; \n\tborder-bottom: 1px #000 solid; \n\tpadding-bottom: 2px;\n}\na:focus, a:hover, a:active {\n\tbackground: #000;\n\tcolor: #fff;\n\tfont-weight: bold;\n\ttext-decoration: none; \n\tborder-bottom: 1px #000 solid; \n\tpadding-bottom: 2px;\n}\n\nExample 3 shows what this looks like in practice.\n\nLocation, Location, Location\n\nTo take this example to it\u2019s natural conclusion, you can add an id of current (or something similar) in appropriate places in your navigation, specify a full set of link styles for current, and have a navigation which, at a glance, lets users know which page or section they\u2019re currently in.\n\nExample navigation using location indicators.\n\nand the source code\n\nConclusion\n\nAll the examples here are intended to illustrate the concepts, and should not be taken as the absolute best way to format links or style navigation bars \u2013 that\u2019s up to you and whatever visual design you\u2019re using at the time.\n\nThey\u2019re also not the only things you should be doing to make your site accessible. 
\n\nAbove all, remember that accessibility is for life, not just for Christmas.", "year": "2007", "author": "Ann McMeekin", "author_slug": "annmcmeekin", "published": "2007-12-13T00:00:00+00:00", "url": "https://24ways.org/2007/css-for-accessibility/", "topic": "design"} {"rowid": 167, "title": "Back To The Future of Print", "contents": "By now we have weathered the storm that was the early days of web development, a dangerous time when we used tables, inline CSS and separate pages for print only versions. We can reflect in a haggard old sea-dog manner (\u201cyarrr\u2026 I remember back in the browser wars\u2026\u201d) on the bad practices of the time. We no longer need convincing that print stylesheets are the way to go1, though some of the documentation for them is a little outdated now.\n\nI am going to briefly cover 8 tips and 4 main gotchas when creating print stylesheets in our more enlightened era.\n\nGetting started\n\nAs with regular stylesheets, print CSS can be included in a number of ways2, for our purposes we are going to be using the link\nelement.\n\n\n\nThis is still my favourite way of linking to CSS files, its easy to see what files are being included and to what media they are being applied to. Without the media attribute specified the link element defaults to the media type \u2018all\u2019 which means that the styles within then apply to print and screen alike. The media type \u2018screen\u2019 only applies to the screen and wont be picked up by print, this is the best way of hiding styles from print.\n\nMake sure you include your print styles after all your other CSS, because you will need to override certain rules and this is a lot easier if you are flowing with the cascade than against it!\n\nAnother thing you should be thinking is \u2018does it need to be printed\u2019. Consider the context3, if it is not a page that is likely to be printed, such as a landing page or a section index then the print styles should resemble the way the page looks on the screen.\n\nContext is really important for the design of your print stylesheet, all the tips and tricks that follow should be taken in the context of the page. If for example you are designing a print stylesheet for an item in a shopping cart, it is irrelevant for the user to know the exact url of the link that takes them to your checkout.\n\nTips and tricks\n\nDuring these tip\u2019s we are going to build up print styles for a textileRef:11112857385470b854b8411:linkStartMarker:\u201csimple\nexample\u201d:/examples/back-to-the-future-of-print/demo-1.html\n\n1. Remove the cruft\n\nFirst things first, navigation, headers and most page furniture are pretty much useless and dead space in print so they will need to be removed, using display:none;.\n\n2. Linearise your content\n\nContent doesn\u2019t work so well in columns in print, especially if the content columns are long and intend to stretch over multiple columns (as mentioned in the gotcha section below). You might want to consider Lineariseing the content to flow down the page. If you have your source order in correct priority this will make things a lot easier4.\n\n3. Improve your type\n\nOnce you have removed all the useless cruft and jiggled things about a bit, you can concentrate more on the typography of the page.\n\nTypography is a complex topic5, but simply put serif-ed fonts such as Georgia work better on print and sans serif-ed fonts such as Verdana are more appropriate for screen use. 
You will probably want to increase font size and line height and change from px to pt (which is specifically a print measurement).\n\n4. Go wild on links\n\nThere are some incredibly fun things you can do with links in print using CSS. There are two schools of thought here: one considers it best to disguise inline links as body text because they are not clickable on paper; personally, I believe it is useful to know for reference that the document did link to somewhere originally.\n\nWhen deciding which approach to take, consider the context of your document: do people need to know where they would have gone to? Will it help or hinder them to know this information? Also, for an alternative to the approach below, take a look at Aaron Gustafson\u2019s article on generating footnotes for print6.\n\nUsing some clever selector trickery and CSS generated content you can have the location of the link generated after the link itself.\n\nHTML:\n\n
<p>I wish <a href=\"http://www.google.com/\">Google</a> could find my keys</p>
\n\nCSS:\n\na:link:after,\na:visited:after,\na:hover:after,\na:active:after {\n\tcontent: \" <\" attr(href) \"> \";\n}\n\nBut this is not perfect, in the above example the content of the href is just naively plonked after the link text:\n\nI wish Google would find my keys \n\nAs looking back over this printout the user is not immediately aware of the location of the link, a better solution is to use even more crazy selectors to deal with relative links. We can also add a style to the generated content so it is distinguishable from the link text itself.\n\nCSS:\n\na:link:after,\na:visited:after,\na:hover:after,\na:active:after {\n\tcontent: \" <\" attr(href) \"> \";\n\tcolor: grey;\n\tfont-style: italic;\n\tfont-weight: normal;\n}\na[href^=\"/\"]:after {\n\tcontent: \" \";\n}\n\nThe output is now what we were looking for (you will need to replace example.com with your own root URL):\n\nI wish Google would find my keys \n\nUsing regular expressions on the attribute selectors, one final thing you can do is to suppress the generated content on mailto: links, if for example you know the link text always reflects the email address. Eg:\n\nHTML:\n\nme@example.com\n\nCSS:\n\na[href^=\"mailto\"]:after {\n\tcontent: \"\";\n}\n\nThis example shows the above in action.\n\nOf course with this clever technique, there are the usual browser support issues. While it won\u2019t look as intended in browsers such as Internet Explorer 6 and 7 (IE6 and IE7) it will not break either and will just degrade gracefully because IE cannot do generated content. To the best of my knowledge Safari 2+ and Opera 9.X support a colour set on generated content whereas Firefox 2 & Camino display this in black regardless of the link or inherited text colour.\n\n5. Jazz your headers for print\n\nThis is more of a design consideration, don\u2019t go too nuts though; there are a lot more limitations in print media than on screen. For this example we are going to go for is having a bottom border on h2\u2019s and styling other headings with graduating colors or font sizes.\n\nAnd here is the example complete with jazzy headers.\n\n6. Build in general hooks\n\nIf you are building a large site with many different types of page, you may find it useful to build into your CSS structure, classes that control what is printed (e.g. noprint and printonly). This may not be semantically ideal, but in the past I have found it really useful for maintainability of code across large and varied sites\n\n7. For that extra touch of class\n\nWhen printing pages from a long URL, even if the option is turned on to show the location of the page in the header, browsers may still display a truncated (and thus useless) version.\n\nUsing the above tip (or just simply setting to display:none in screen and display:block in print) you can insert the URL of the page you are currently on for print only, using JavaScript\u2019s window.location.href variable.\n\nfunction addPrintFooter() {\n\tvar p = document.createElement('p');\n\tp.className = 'print-footer';\n\tp.innerHTML = window.location.href;\n\tdocument.body.appendChild(p);\n}\n\nYou can then call this function using whichever onload or ondomready handler suits your fancy. Here is our familiar demo to show all the above in action\n\n8. Tabular data across pages\n\nIf you are using tabled data in your document there are a number of things you can do to increase usability of long tables over several pages. 
If you use the element this should repeat your table headers on the next page should your table be split. You will need to set thead {display: table-header-group;} explicitly for IE even though this should be the default value.\n\nAlso if you use tr {page-break-inside: avoid;} this should (browser support depending) stop your table row from breaking across two pages. For more information on styling tables for print please see the CSS discuss wiki7.\n\nGotchas\n\n1. Where did all my content go?\n\nAbsolutely the most common mistake I see with print styles is the truncated content bug. The symptom of this is that only the first page of a div\u2019s content will be printed, the rest will look truncated after this.\n\nFloating long columns may still have this affect, as mentioned in Eric Meyer\u2019s article on \u2018A List Apart\u2019 article from 20028; though in testing I am no longer able to replicate this. Using overflow:hidden on long content in Firefox however still causes this truncation. Overflow hidden is commonly used to clear floats9.\n\nA simple fix can be applied to resolve this, if the column is floated you can override this with float:none similarly overflow:hidden can be overridden with overflow:visible or the offending rules can be banished to a screen only stylesheet.\n\nUsing position:absolute on long columns also has a very similar effect in truncating the content, but can be overridden in print with position:static;\n\nWhilst I only recommend having a print stylesheet for content pages on your site; do at least check other landing pages, section indexes and your homepage. If these are inaccessible in print possibly due to the above gotcha, it might be wise to provide a light dusting of print styles or move the offending overflow / float rules to a screen only stylesheet to fix the issues.\n\n2. Damn those background browser settings\n\nOne of the factors of life you now need to accept is that you can\u2019t control the user\u2019s browser settings, no more than you can control whether or not they use IE6. Most browsers by default will not print background colours or images unless explicitly told to by the user.\n\nNaturally this causes a number of problems, for starters you will need to rethink things like branding. At this point it helps if you are doing the print styles early in the project so that you can control the logo not being a background image for example.\n\nWhere colour is important to the meaning of the document, for example a status on an invoice, bear in mind that a textural representation will also need to be supplied as the user may be printing in black and white. Borders will print however regardless of setting, so assuming the user is printing in colour you can always use borders to indicate colour.\n\nCheck the colour contrast of the text against white, this may need to be altered without backgrounds. You should check how your page looks with backgrounds turned on too, for consistency with the default browser settings you may want to override your background anyway.\n\nOne final issue with backgrounds being off is list items. It is relatively common practice to suppress the list-style-type and replace with a background image to finely control the bullet positioning. This technique doesn\u2019t translate to print, you will need to disable this background bullet and re-instate your trusty friend the list-style-type.\n\n3. Using JavaScript in your CSS? 
\u2026 beware IE6\n\nInternet explorer has an issue that when Javascript is used in a stylesheet it applies this to all media types even if only applied to screen. For example, if you happen to be using expressions to set a width for IE, perhaps to mimic min-width, a simple width:100% !important rule can overcome the effects the expression has on your print styles10.\n\n4. De-enhance your Progressive enhancements\n\nQuite a classic \u201cdoh\u201d moment is when you realise that, of course paper doesn\u2019t support Javascript. If you have any dynamic elements on the page, for example a document collapsed per section, you really should have been using Progressive enhancement techniques11 and building for browsers without Javascript first, adding in the fancy stuff later.\n\nIf this is the case it should be trivial to override your wizzy JS styles in your print stylesheet, to display all your content and make it accessible for print, which is by far the best method of achieving this affect.\n\nAnd Finally\u2026\n\nI refer you back to the nature of the document in hand, consider the context of your site and the page. Use the tips here to help you add that extra bit of flair to your printed media.\n\nBe careful you don\u2019t get caught out by the common gotchas, keep the design simple, test cross browser and relish in the medium of print.\n\nFurther Reading\n\n1 For more information constantly updated, please see the CSS discuss wiki on print stylesheets\n\n2 For more information on media types and ways of including CSS please refer to the CSS discuss wiki on Media Stylesheets\n\n3 Eric Meyer talks to ThinkVitamin about the importance of context when designing your print strategy.\n\n4 Mark Boulton describes how he applies a newspaper like print stylesheet to an article in the Guardian website. Mark also has some persuasive arguments that print should not be left to last\n\n5 Richard Rutter Has a fantastic resource on typography which also applies to print.\n\n6 Aaron Gustafson has a great solution to link problem by creating footnotes at the end of the page.\n\n7 The CSS discuss wiki has more detailed information on printing tables and detailed browser support\n\n8 This \u2018A List Apart\u2019 article is dated May 10th 2002 though is still mostly relevant\n\n9 Float clearing technique using \u2018overflow:hidden\u2019\n\n10 Autistic Cuckoo describes the interesting insight with regards to expressions specified for screen in IE6 remaining in print\n\n11 Wikipedia has a good article on the definition of progressive enhancement\n\n12 For a really neat trick involving a dynamically generated column to displaying and meanings (as well as somewhere for the user to write notes), try print previewing on Brian Suda\u2019s site", "year": "2007", "author": "Natalie Downe", "author_slug": "nataliedowne", "published": "2007-12-09T00:00:00+00:00", "url": "https://24ways.org/2007/back-to-the-future-of-print/", "topic": "design"} {"rowid": 173, "title": "Real Fonts and Rendering: The New Elephant in the Room", "contents": "My friend, the content strategist Kristina Halvorson, likes to call content \u201cthe elephant in the room\u201d of web design. She means it\u2019s the huge problem that no one on the web development team or client side is willing to acknowledge, face squarely, and plan for. 
\n\nA typical web project will pass through many helpful phases of research, and numerous beneficial user experience design iterations, while the content\u2014which in most cases is supposed to be the site\u2019s primary focus\u2014gets handled haphazardly at the end. Hence, elephant in the room, and hence also artist Kevin Cornell\u2019s recent use of elephantine imagery to illustrate A List Apart articles on the subject. But I digress.\n\nWithout discounting the primacy of the content problem, we web design folk have now birthed ourselves a second lumbering mammoth, thanks to our interest in \u201creal fonts on the web\u201c (the unfortunate name we\u2019ve chosen for the recent practice of serving web-licensed fonts via CSS\u2019s decade-old @font-face declaration\u2014as if Georgia, Verdana, and Times were somehow unreal). \n\nFor the fact is, even bulletproof and mo\u2019 bulletproofer @font-face CSS syntax aren\u2019t really bulletproof if we care about looks and legibility across browsers and platforms.\n\nHyenas in the Breakfast Nook\n\nThe problem isn\u2019t just that foundries have yet to agree on a standard font format that protects their intellectual property. And that, even when they do, it will be a while before all browsers support that standard\u2014leaving aside the inevitable politics that impede all standardization efforts. Those are problems, but they\u2019re not the elephant. Call them the coyotes in the room, and they\u2019re slowly being tamed.\n\nNor is the problem that workable, scalable business models (of which Typekit\u2018s is the most visible and, so far, the most successful) are still being shaken out and tested. The quality and ease of use of such services, their stability on heavily visited sites (via massively backed-up server clusters), and the fairness and sustainability of their pricing will determine how licensing and serving \u201creal fonts\u201d works in the short and long term for the majority of designer/developers.\n\nNor is our primary problem that developers with no design background may serve ugly or illegible fonts that take forever to load, or fonts that take a long time to download and then display as ordinary system fonts (as happens on, say, about.validator.nu). Ugliness and poor optimization on the web are nothing new. That support for @font-face in Webkit and Mozilla browsers (and for TrueType fonts converted to Embedded OpenType in Internet Explorer) adds deadly weapons to the non-designer\u2019s toolkit is not the technology\u2019s fault. JavaScript and other essential web technologies are equally susceptible to abuse. \n\nBeauty is in the Eye of the Rendering Engine\n\nNo, the real elephant in the room\u2014the thing few web developers and no \u201cweb font\u201d enthusiasts are talking about\u2014has to do with legibility (or lack thereof) and aesthetics (or lack thereof) across browsers and platforms. Put simply, even fonts optimized for web use (which is a whole thing: ask a type designer) will not look good in every browser and OS. That\u2019s because every browser treats hinting differently, as does every OS, and every OS version. \n\nFirefox does its own thing in both Windows and Mac OS, and Microsoft is all over the place because of its need to support multiple generations of Windows and Cleartype and all kinds of hardware simultaneously. Thus \u201creal type\u201d on a single web page can look markedly different, and sometimes very bad, on different computers at the same company. 
If that web page is your company\u2019s, your opinion of \u201cweb fonts\u201d may suffer, and rightfully. (The advantage of Apple\u2019s closed model, which not everyone likes, is that it allows the company to guarantee the quality and consistency of user experience.) \n\nAs near as my font designer friends and I can make out, Apple\u2019s Webkit in Safari and iPhone ignores hinting and creates its own, which Apple thinks is better, and which many web designers think of as \u201cwhat real type looks like.\u201d The forked version of Webkit in Chrome, Android, and Palm Pre also creates its own hinting, which is close to iPhone\u2019s\u2014close enough that Apple, Palm, and Google could propose it as a standard for use in all browsers and platforms. Whether Firefox would embrace a theoretical Apple and Google standard is open to conjecture, and I somehow have difficulty imagining Microsoft buying in\u2014even though they know the web is more and more mobile, and that means more and more of their customers are viewing web content in some version of Webkit.\n\nThe End of Simple\n\nThere are ways around this ugly type ugliness, but they involve complicated scripting and sniffing\u2014the very nightmares from which web standards and the simplicity of @font-face were supposed to save us. I don\u2019t know that even mighty Typekit has figured out every needed variation yet (although, working with foundries, they probably will). \n\nFor type foundries, the complexity and expense of rethinking classic typefaces to survive in these hostile environments may further delay widespread adoption of web fonts and the resolution of licensing and formatting issues. The complexity may also force designers (even those who prefer to own) to rely on a hosted rental model simply to outsource and stay current with the detection and programming required.\n\nForgive my tears. I stand in a potter\u2019s field of ideas like \u201cKeep it simple,\u201d by a grave whose headstone reads \u201cWrite once, publish everywhere.\u201d", "year": "2009", "author": "Jeffrey Zeldman", "author_slug": "jeffreyzeldman", "published": "2009-12-22T00:00:00+00:00", "url": "https://24ways.org/2009/real-fonts-and-rendering/", "topic": "design"} {"rowid": 174, "title": "Type-Inspired Interfaces", "contents": "One of the things that terrifies me most about a new project is the starting point. How is the content laid out? What colors do I pick? Once things like that are decided, it becomes significantly easier to continue design, but it\u2019s the blank page where I spend the most time.\n\nTo that end, I often start by choosing type. I don\u2019t need to worry about colors or layout or anything else\u2026 just the right typefaces that support the art direction. (This article won\u2019t focus on how to choose a typeface, but there are some really great resources if you interested in that sort of thing.)\n\nAnd just like that, all your work is done. \u201cHold it just a second,\u201d you might say. \u201cAll I\u2019ve done is pick type. I still have to do the rest!\u201d\n\nTo which I would reply, \u201cSilly rabbit. You already have!\u201d You see, picking the right typeface gets you farther than you might think. Here are a few tips on taking cues from type to design interfaces and interface elements.\n\nPerfecting Web 2.0\n\nIf you\u2019re going for that beloved rounded corner look, you might class it up a bit by choosing the wonderful Omnes Pro by Joshua Darden. 
As the typeface already has a rounded aesthetic, making buttons that fit the style should be pretty easy.\n\nI\u2019ve found that using multiples helps to keep your interfaces looking balanced and proportional. Noticing that the top left edge of the letter \u201cP\u201d has about an 12px corner radius, let\u2019s choose a 24px radius for our button (a multiple of 2), so that we get proper rounded corners. By taking mathematical measurements from the typeface, our button looks more thought out than just \u201cplace arbitrary text on arbitrarily-sized button.\u201d Pretty easy, eh?\n\n\n\nWhat\u2019s in a name(plate)?\n\nRounded buttons are pretty popular buttons nowadays, so let\u2019s try something a bit more stylized.\n\nHave a gander at Brothers, a sturdy face from Emigre. The chiseled edges give us a perfect cue for a stylized button. Using the same slope, you can make plated-looking buttons that fit a different kind of style.\n\n\n\nHeadlining\n\nYou might even take some cues from the style of the typeface itself. Didone serifs are known for their lack of brackets\u30fcthat is, a gradual transition from the stem to the serif. Instead, they typically connect at a right angle. Another common characteristic is the high contrast in the strokes: very thick stems, very thin serifs.\n\nSo, when using a high contrast typeface, you can use it to your advantage to enhance hierarchy. Following our \u201cmultiples\u201d guideline, a 12px measurement from the stems helps us create a top rule with a height of 24px (a multiple of 2). We can take the exact 1px measurement from the serif\u2014a multiple of 1\u2014to create the bottom rule. Voil\u00e0! I use this technique a lot.\n\n\n\nSwashbucklers\n\nAnd don\u2019t forget the importance of visual \u201cspeed bumps\u201d to break up long passages of text. A beautiful face like Alejandro Paul\u2019s Ministry Script has over a thousand characters that can be manipulated or even combined to create elegant interface elements. Altering the partial differential character (\u2202) creates a delightful ornament that can help to guide the eye through content.\n\n\n\nStagger & Swagger\n\nWhat about layout? How can we use typography to inform how our content is displayed?\n\nLet\u2019s take a typeface like Assembler. We might use this for a design that needs to feel uneasy or uncomfortable. In design terms, that might translate into using irregular shapes and asymmetry. Using the proportional distances and degrees from the perpendiculars, we could easily create a multi-column layout that jives with the general tone. And for all you skeptics that don\u2019t think a layout like this is doable on the web, stranger things have happened.\n\n Background texture generously offered by Bittbox.\n\nOverall Design Direction\n\nFinally, your typography could impact the entire look of the site, from the navigation to the interaction and everything in between. Check out how the (now-defunct) Nike Free site\u2019s typography echoes the product itself, and in turn influences the navigation.\n\n\n\nFind Your Type\n\nWith thousands of fonts to choose from, the possibilities are ridiculously open. From angles to radii to color to weight, you\u2019ve got endless fodder before you. Great type designers spent countless hours slaving over these detailed letterforms; take advantage of it! Don\u2019t feel like you have to limit yourself to the same old Helvetica and wet floors\u2026 unless your design calls for it. 
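If you want to see how the first of those ideas translates into code, a quick sketch of the rounded Omnes Pro button might go something like this (the 24 pixel radius comes from the measurement taken earlier; the padding and fallback fonts are just assumptions to make the example complete):\n\n.button {\n\tfont-family: \"Omnes Pro\", \"Helvetica Neue\", Arial, sans-serif;\n\tpadding: 12px 24px;\n\t/* roughly double the 12px corner radius measured from the letterforms */\n\tborder-radius: 24px;\n}\n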
\n\nHappy hunting!", "year": "2009", "author": "Dan Mall", "author_slug": "danmall", "published": "2009-12-07T00:00:00+00:00", "url": "https://24ways.org/2009/type-inspired-interfaces/", "topic": "design"} {"rowid": 183, "title": "Designing For The Switch", "contents": "For a long time on the web, we\u2019ve been typographically spoilt. Yes, you heard me correctly. Think about it: our computers come with web fonts already installed; fonts that have been designed specifically to work well online and at small size; and fonts that we can be sure other people have too. \n\nYes, we\u2019ve been spoilt. We don\u2019t need to think about using Verdana, Arial, Georgia or Cambria. \n\nYet, for a long time now, designers have felt we needed more. We want to choose whatever typeface we feel necessary for our designs. We did bad things along the way in pursuit of this goal such as images for text. Smart people dreamt up tools to help us such as sIFR, or Cuf\u00f3n. Only fairly recently, @font-face is supported in most browsers. The floodgates are opening. It really is the dawn of a new typographic era on the web. And we must tread carefully. \n\nThe New Typesetters \n\nMany years ago, before the advent of desktop publishing, if you wanted words set in a particular typeface, you had to go to a Typesetter. A Typesetter, or Compositor, as they were sometimes called, was a person whose job it was to take the written word (in the form of a document or manuscript) and \u2018set\u2019 the type in the desired typeface. The designer would chose what typeface they wanted \u2013 and all the ligatures, underlines, italics and whatnot \u2013 and then scribble all over the manuscript so the typesetter could set the correct type. \n\nThen along came Desktop Publishing and every Tom, Dick and Harry could choose type on their computer and an entire link in the typographic chain was removed within just a few years. Well, that\u2019s progress I guess. That was until six months ago when Typesetting was reborn on the web in the guise of a font service: Typekit. \n\nTypekit \u2013 and services like Typekit such as Typotheque, Kernest and the upcoming Fontdeck \u2013 are typesetting services for the web. You supply them with your content, in the form of a webpage, and they provide you with some JavaScript to render that webpage in the typeface you\u2019ve specified simply by adding the font name in your CSS file. \n\nThanks to services like these, font foundries are now talking to create licensing structures to allow us to embed fonts into our web pages legally \u2013 which has always been a sticking point in the past. So, finally, us designers can get what we want: whatever typeface we want on the web. \n\nYes, but\u2026 there are hurdles. One of which is the subject of this article. \n\nThe differences between Web Fonts and other fonts \n\nWeb fonts are different to normal fonts. They differ in a whole bunch of ways, from loose letter spacing to larger x-heights. But perhaps the most notable practical difference is file size. Let\u2019s take a look at one of Typekit\u2019s latest additions from the FontFont library, Meta. \n\nMeta Roman weighs in at 42 KB. This is a fairly typical file size for a single weight of a good font. Now, let\u2019s have a look at Verdana. Verdana is 186 KB. For one weight. The four weight family for Verdana weighs in at 686 KB. Four weights for half a megabyte!? Why so huge? \n\nWell, Verdana has a lot of information packed into its 186 KB. 
It has the largest hinting data table of any typeface (the information carried by a font that tells it how to align itself to the pixels on your screen). As it has been shipped with Microsoft products since 1996, it has had time to grow to support many, many languages. Along with its cousin, Georgia (283 KB), Verdana was a new breed of typeface. And it\u2019s grown fat. \n\nIf really serious web typography takes off \u2013 and by that I mean typefaces specifically designed for the screen \u2013 then we\u2019re going to see more fonts increase in file size as the font files include more data. So, if you\u2019re embedding a font weighing in at 100 KB, what happens? \n\nThe Flash of Unstyled Text \n\nWe all remember the Flash of Unstyled Content bug on Internet Explorer, right? That annoying bug that caused a momentary flash of unstyled HTML page. Well, the same thing can happen with embedding fonts using @font-face. An effect called The Flash of Unstyled Text (FOUT), first coined by Paul Irish. Personally, I prefer to call it the Flash of UnTypeset Text (still FOUT), as the text is styled, just not with what you want. \n\nIf you embed a typeface in your CSS, then the browser will download that typeface. Typically, browsers differ in the way they handle this procedure. \n\nFirefox and Opera will render the text using the next font in your font stack until the first (embedded) font is loaded. It will then switch to the embedded font. \n\nWebkit takes the approach that you asked for that font so it will wait until it\u2019s completely loaded before showing it you. \n\nIn Opera and Firefox, you get a FOUT. In Webkit, you don\u2019t. You wait. \n\nHang on there. Didn\u2019t I say that good web fonts weigh in considerably more than \u2018normal\u2019 fonts? And whilst the browser is downloading the font, the user gets what to look at? Some pictures, background colours and whatever else isn\u2019t HTML? I believe Webkit\u2019s handling of font embedding \u2013 as deliberate as it is \u2013 is damaging to the practice of font embedding. Why? Well, we can design to a switch in typeface (as jarring as that is for the user), but we can\u2019t design to blank space. \n\nLet\u2019s have a closer look at how we can design to FOUT. \n\nMore considered font stacks \n\nWe all know that font stacks in CSS are there for when a user doesn\u2019t have a font; the browser will jump to the next one in the stack. Adding embedded fonts into the font stack means that because of FOUT (in gecko and Opera), the user can see a switch, and depending on their connection that switch could happen well into any reading that the user may be doing. \n\nThe practicalities of this are that a user could be reading and be towards the end of a line when the paragraph they are reading changes shape. The word they were digesting suddenly changes to three lines down. It\u2019s the online equivalent of someone turning the page for you when you least expect it. So, how can we think about our font stacks slightly differently so we can minimise the switch? \n\nTwo years ago, Richard Rutter wrote on this very site about increasing our font stacks. By increasing the font stacks (by using his handy matrix) we can begin to experiment with different typefaces. However, when we embed a typeface, we must look very carefully at the typefaces in the font stack and the relationship between them. Because, previously, the user would not see a switch from one typeface to another, they\u2019d just get either one or the other. Not both. 
With FOUT, the user sees two typefaces. \n\nBy carefully looking at the characteristics of the typefaces you choose, you can minimise the typographic \u2018distance\u2019 between the type down the stack. In doing so, you minimise the jarring effect of the switch. \n\nLet\u2019s take a look at an example of how to go about this. \n\nMicro Typography to build better font stacks \n\nLet\u2019s say I want to use a recent edition to Typekit \u2013 Meta Serif Book \u2013 as my embedded font. My font stack would start like this: \n\nfont-family: 'Meta Serif Bold'; \n\nWhere do you go from here? Well, first, familiarise yourself with Richard\u2019s Font Matrix so you get an idea of what fonts are available for different people. Then start by looking closely at the characters of the embedded font and then compare them to different fonts from the matrix. \n\nWhen I do this, I\u2019m looking to match type characteristics such as x-height, contrast (the thickness and thinness of strokes), the stress (the angle of contrast) and the shape of the serifs (if the typeface has any). \n\n\n\nUsing just these simple comparative metrics means you can get to a \u2018best fit\u2019 reasonably quickly. And remember, you\u2019re not after an ideal match. You\u2019re after a match that means the switch is less painful for the reader, but also a typeface that carries similar characteristics so your design doesn\u2019t change too much. \n\nBuilding upon my choice of embedded font, I can quickly build up a stack by comparing letters. \n\n\n\nThis then creates my \u2018best fit\u2019 stack. \n\n\n\nThis translates to the CSS as: \n\nfont-family: 'Meta Serif Bold', 'Lucida Bright', Cambria, Georgia, serif \n\nFollowing this process, and ending up with considered font stacks, means that we can design to the Flash of UnTypeset Content and ensure that our readers don\u2019t get a diminished experience.", "year": "2009", "author": "Mark Boulton", "author_slug": "markboulton", "published": "2009-12-16T00:00:00+00:00", "url": "https://24ways.org/2009/designing-for-the-switch/", "topic": "design"} {"rowid": 210, "title": "Stop Leaving Animation to the Last Minute", "contents": "Our design process relies heavily on static mockups as deliverables and this makes it harder than it needs to be to incorporate UI animation in our designs. Talking through animation ideas and dancing out the details of those ideas can be fun; but it\u2019s not always enough to really evaluate or invest in animated design solutions. \nBy including deliverables that encourage discussing animation throughout your design process, you can set yourself (and your team) up for creating meaningful UI animations that feel just as much a part of the design as your colour palette and typeface. You can get out of that \u201crunning out of time to add in the animation\u201d trap by deliberately including animation in the early phases of your design process. This will give you both the space to treat animation as a design tool, and the room to iterate on UI animation ideas to come up with higher quality solutions. Two deliverables that can be especially useful for this are motion comps and animated interactive prototypes. \nMotion comps - an animation deliverable\nMotion comps (also called animatics or motion mock-ups) are usually video representation of UI animations. They are used to explore the details of how a particular animation might play out. And they\u2019re most often made with timeline-based tools like Adobe After Effects, Adobe Animate, or Tumult Hype. 
\nThe most useful things about motion comps is how they allow designers and developers to share the work of creating animations. (Instead of pushing all the responsibility of animation on one group or the other.) For example, imagine you\u2019re working on a design that has a content panel that can either be open or closed. You might create a mockup like the one below including the two different views: the closed state and the open state. If you\u2019re working with only static deliverables, these two artboards might be exactly what you handoff to developers along with the instruction to animate between the two. \n\nOn the surface that seems pretty straight forward, but even with this relatively simple transition there\u2019s a lot that those two artboards don\u2019t address. There are seven things that change between the closed state and the open state. That\u2019s seven things the developer building this out has to figure out how to move in and out of view, when, and in what order. And all of that is even before starting to write the code to make it work. \nBy providing only static comps, all the logic of the animation falls on the developer. This might go ok if she has the bandwidth and animation knowledge, but that\u2019s making an awful lot of assumptions.\nInstead, if you included a motion mock up like this with your static mock ups, you could share the work of figuring out the logic of the animation between design and development. Designers could work out the logic of the animation in the motion comp, exploring which items move at which times and in which order to create the opening and closing transitions. \n\nThe motion comp can also be used to iterate on different possible animation approaches before any production code has to be committed too. Sharing the work and giving yourself time to explore animation ideas before you\u2019re backed up again the deadline will lead to happier teammates and better design solutions. \nWhen to use motion comps\nI\u2019m not a fan of making more deliverables just for the sake of having more things to make, so I find it helps to narrow down what question I\u2019m trying answer before choosing which sort of deliverable to make to investigate. \nMotion comps can be most helpful for answering questions like: \n\nExactly how should this animation look? \nWhich items should move? Where? And when? \nDo the animation qualities reflect our brand or our voice and tone?\nOne of the added bonuses of creating motion comps to answer these questions is that you\u2019ll have a concrete thing to bring to design critiques or reviews to get others\u2019 input on them as well.\n\nUsing motion comps as handoff\nMotion comps are often used to handoff animation ideas from design to development. They can be super useful for this, but they\u2019re even more useful when you include the details of the motion specs with them. (It\u2019s difficult, if not impossible, to glean these details from playing back a video.)\nMore specifically, you\u2019ll want to include:\n\nDurations and the properties animated for each animation\nEasing curve values or spring values used\nDelay values and repeat counts\n\nIn many cases you\u2019ll have to collect these details up manually. But this isn\u2019t necessarily something that that will take a lot of time. If you take note of them as you\u2019re creating the motion comp, chances are most of these details will already be top of mind. 
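One way to make that handoff concrete is to jot the values down in the form they will eventually take in code. For the panel example above, the note passed to a developer might boil down to a few transition declarations like these (the class names and every number here are purely illustrative):\n\n.panel {\n\t/* 300ms slide, starting immediately */\n\ttransition: transform 300ms ease-out 0ms;\n}\n.panel-title {\n\t/* fades in over 200ms, after a 100ms delay */\n\ttransition: opacity 200ms ease-in 100ms;\n}\n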
(Also, if you use After Effects for your motion comps, the Inspector Spacetime plugin might be helpful for this task.)\nAnimated prototypes - an interactive deliverable\nMaking prototypes isn\u2019t a new idea for web work by any stretch, but creating prototypes that include animation \u2013 or even creating prototypes specifically to investigate potential animation solutions \u2013 can go a long way towards having higher quality animations in your final product.\nInteractive prototypes are web or app-based, or displayed in a particular tool\u2019s preview window to create a useable version of interactions that might end up in the end product. They\u2019re often made with prototyping apps like Principle, Framer, or coded up in HTML, CSS and JS directly like the example below.\nSee the Pen Prototype example by Val Head (@valhead) on CodePen.\n\nThe biggest different between motion comps and animated prototypes is the interactivity. Prototypes can reposed to taps, drags or gestures, while motion comps can only play back in a linear fashion. Generally speaking, this makes prototypes a bit more of an effort to create, but they can also help you solve different problems. The interactive nature of prototypes can also make them useful for user testing to further evaluate potential solutions. \nWhen to use prototypes\nWhen it comes to testing out animation ideas, animated prototypes can be especially helpful in answering questions like these: \n\nHow will this interaction feel to use? (Interactive animations often have different timing needs than animations that are passively viewed.)\nWhat will the animation be like with real data or real content? \nDoes this animation fit the context of the task at hand? \n\nPrototypes can be used to investigate the same questions that motion comps do if you\u2019re comfortable working in code or your prototyping tool of choice has capabilities to address high fidelity animation details. There are so many different prototyping tools out there at the moment, you\u2019re sure to be able to find one that fits your needs. \nAs a quick side note: If you\u2019re worried that your coding skills might not be up to par to prototype in code, know that prototype code doesn\u2019t have to be production quality code. Animated prototypes\u2019 main concern is working out the animation details. Once you\u2019ve arrived at a combination of animations that works, the animation specifics can be extracted or the prototype can be refactored for production.\nMotion comp or prototype?\nBoth motion comps and prototypes can be extremely useful in the design process and you can use whichever one (or ones) that best fits your team\u2019s style. The key thing that both offer is a way to make animation ideas visible and sharable. When you and your teammate are both looking at the same deliverable, you can be confident you\u2019re talking about the same thing and discuss its pros and cons more easily than just describing the idea verbally. \nMotion comps tend to be more useful earlier in the design process when you want to focus on the motion without worrying about the underlying structure or code yet. Motion comps also be great when you want to try something completely new. Some folks prefer motion comps because the tools for making them feel more familiar to them which means they can work faster. \nPrototypes are most useful for animations that rely heavily on interaction. (Getting the timing right for interactions can be tough without the interaction part sometimes.) 
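To give a sense of how rough-and-ready prototype code can be, an interaction like the panel example could be sketched with nothing more than a class toggle driving a CSS transition (the selectors are made up for the sake of the sketch, and this is throwaway code rather than anything you would ship):\n\ndocument.querySelector('.panel-toggle').addEventListener('click', function () {\n\t// the .is-open styles carry the transition being evaluated\n\tdocument.querySelector('.panel').classList.toggle('is-open');\n});\n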
Prototypes can also be helpful to investigate and optimize performance if that\u2019s a specific concern.\nGive them a try\nWhichever deliverables you choose to highlight your animation decisions, including them in your design reviews, critiques, or other design discussions will help you make better UI animation choices. More discussion around UI animation ideas during the design phase means greater buy-in, more room for iteration, and higher quality UI animations in your designs. Why not give them a try for your next project?", "year": "2017", "author": "Val Head", "author_slug": "valhead", "published": "2017-12-08T00:00:00+00:00", "url": "https://24ways.org/2017/stop-leaving-animation-to-the-last-minute/", "topic": "design"} {"rowid": 217, "title": "Beyond Web Mechanics \u2013 Creating Meaningful Web Design", "contents": "It was just over three years ago when I embarked on becoming a web designer, and the first opinion piece about the state of web design I came across was a conference talk by Elliot Jay Stocks called \u2018Destroy the Web 2.0 Look\u2019. Elliot\u2019s presentation was a call to arms, a plea to web designers the world over to stop the endless reproductions of the so called \u2018Web 2.0 look\u2019.\n\nThree and a half years on from Elliot\u2019s talk, what has changed? Well, from an aesthetic standpoint, not a whole lot. The Web 2.0 look has evolved, but it\u2019s still with us and much of the web remains filled with cookie cutter websites that bear a striking resemblance to one another. This wouldn\u2019t matter so much if these websites were selling comparable services or products, but they\u2019re not. They look similar, they follow the same web design trends; their aesthetic style sends out a very similar message, yet they\u2019re selling completely different services or products. How can you be communicating effectively with your users when your online book store is visually indistinguishable from an online cosmetic store? This just doesn\u2019t make sense. \n\nI don\u2019t want to belittle the current version of the Web 2.0 look for the sake of it. I want to talk about the opportunity we have as web designers to create more meaningful experiences for the people using our websites. Using design wisely gives us the ability to communicate messages, ideas and attitudes that our users will understand and connect with.\n\nBeing human\n\nAs human beings we respond emotionally to everything around us \u2013 people, objects, posters, packaging or websites. We also respond in different ways to different kinds of aesthetic design and style. We care about style and aesthetics deeply, whether we realise it or not. Aesthetic design has the power to attract or repel. We often make decisions based purely on aesthetics and style \u2013 and don\u2019t retailers the world over know it! We connect attitudes and strongly held beliefs to style. Individuals will proudly associate themselves with a certain style or aesthetic because it\u2019s an expression of who they are. You know that old phrase, \u2018Don\u2019t judge a book by its cover\u2019? Well, the problem is that people do, so it\u2019s important we get the cover right.\n\nMuch is made of how to structure web pages, how to create a logical information hierarchy, how to use layout and typography to clearly communicate with your users. It\u2019s important, however, not to mistake clarity of information or legibility with getting your message across. 
Few users actually read websites word by word: it\u2019s far more likely they\u2019ll just scan the page. If the page is copy-heavy and nothing grabs their attention, they may well just move on. This is why it\u2019s so important to create a visual experience that actually means something to the user. \n\nMeaningful design\n\nWhen we view a poster or website, we make split-second assessments and judgements of what is in front of us. Our first impressions of what a website does or who it is aimed at are provoked by the style and aesthetic of the website. For example, with clever use of colour, typography, graphic design and imagery we can communicate to users that an organisation is friendly, edgy, compassionate, fun or environmentally conscious.\n\nUsing a certain aesthetic we can convey the personality of that organisation, target age ranges, different sexes or cultural groups, communicate brand attributes, and more. We can make our users feel like they\u2019re part of something and, perhaps even more importantly, we can make new users want to be a part of something. And we can achieve all this before the user has read a single word. \n\nBy establishing a website\u2019s aesthetic and creating a meaningful visual language, a design is no longer just a random collection of pretty gradients that have been plucked out of thin air. There can be a logic behind the design decisions we make. So, before you slap another generic piece of ribbon or an ultra shiny icon into the top-left corner of your website, think about why you are doing it. If you can\u2019t come up with a reason better than \u201cI saw it on another website\u201d, it\u2019s probably a poor application of style.\n\nDesign and style\n\nThere are a number of reasons why the web suffers from a lack meaningful design. Firstly, there are too many preconceptions of what a website should look like. It\u2019s too easy for designers to borrow styles from other websites, thereby limiting the range of website designs we see on the web. Secondly, many web designers think of aesthetic design as of secondary importance, which shouldn\u2019t be the case. Designing websites that are accessible and easy to use is, of course, very important but this is the very least a web designer should be delivering. Easy to use websites should come as standard \u2013 it\u2019s equally important to create meaningful, compelling and beautiful experiences for everyone who uses our websites. The aesthetics of your site are part of the design, and to ignore this and play down the role of aesthetic design is just a wasted opportunity. \n\nNo compromise necessary\n\nEasy to use, accessible websites and beautiful, meaningful aesthetics are not mutually exclusive. The key is to apply style and aesthetic design appropriately. We need to think about who and what we\u2019re designing for and ask ourselves why we\u2019re applying a certain kind of aesthetic style to our design. If you do this, there\u2019s no reason why effective, functional design should come at the expense of jaw-dropping, meaningful aesthetics.\n\nWeb designers need to understand the differences between functional design and aesthetic design but, even more importantly, they need to know how to make them work together. 
It\u2019s combining these elements of design successfully that makes for the best web design in the world.", "year": "2010", "author": "Mike Kus", "author_slug": "mikekus", "published": "2010-12-05T00:00:00+00:00", "url": "https://24ways.org/2010/beyond-web-mechanics-creating-meaningful-web-design/", "topic": "design"} {"rowid": 222, "title": "Golden Spirals", "contents": "As building blocks go, the rectangle is not one to overwhelm the designer with decisions. On the face of it, you have two options: you can set the width, and the height. But despite this apparent simplicity, there are combinations of width and height that can look unbalanced. If a rectangle is too tall and slim, it might appear precarious. If it is not tall enough, it may simply look flat. But like a guitar string that\u2019s out of tune, you can tweak the proportions little by little until a rectangle feels, as Goldilocks said, just right.\n\nA golden rectangle has its height and width in the golden ratio, which is approximately 1:1.618. These proportions have long been recognised as being aesthetically harmonious. Whether through instruction or by intuition, artists have understood how to exploit these proportions over the centuries. Examples can be found in classical architecture, medieval book construction, and even in the recent #newtwitter redesign.\n\nA mathematical curiosity\n\n\n\n\n\n\n\nThe golden rectangle is unique, in that if you remove a square section from it, what is left behind is itself a golden rectangle. The removal of a square can be repeated on the rectangle that is left behind, and then repeated again, as many times as you like. This means that the golden rectangle can be treated as a building block for recursive patterns. In this article, we will exploit this property to build a golden spiral, using only HTML and CSS.\n\nThe markup\n\nThe HTML we\u2019ll use for this study is simply a series of nested
<div>s.\n\n<div id=\"container\">\n\t<div class=\"cycle\">\n\t\t<div>\n\t\t\t<div>\n\t\t\t\t<div>\n\t\t\t\t\t<div class=\"cycle\">\n\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t<div class=\"cycle\">\n\t\t\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"cycle\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div></div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t</div>\n\t\t\t\t\t</div>\n\t\t\t\t</div>\n\t\t\t</div>\n\t\t</div>\n\t</div>\n</div>\n\nThe first of these has the class cycle, and so does every fourth descendant thereafter. The spiral completes a cycle every four steps, so this class allows styles to be reused on <div>
s that appear at the same position in each cycle.\n\nGolden proportions\n\nTo create our spiral we are going to exploit the unique properties of the golden rectangle, so our first priority is to ensure that we have a golden rectangle to begin with. If we pick a length for the short edge \u2013 say, 288 pixels \u2013 we can then calculate the length of the long edge by multiplying this value by 1.618. In this case, 288\u2009\u00d7\u20091.618\u2009=\u2009466, so our starting point will be a
with these properties:\n\n#container > div {\n width: 466px;\n height: 288px;\n}\n\nThe greater than symbol is used here to single out the immediate child of the #container element, without affecting the grandchild or any of the more distant descendants.\n\nWe could go on to specify the precise pixel dimensions of every child element, but that means doing a lot of sums. It would be much easier if we just specified the dimensions for each element as a percentage of the width and height of its parent. This also has the advantage that if you change the size of the outermost container, all nested elements would be resized automatically \u2013 something that we shall exploit later.\n\n\n\n\n\n\n\nThe approximate value of 38.2% can be derived from (100\u2009\u00d7\u20091\u2009\u2212\u2009phi)\u2009\u00f7\u2009phi, where the Greek letter phi (\u03d5) stands for the golden ratio. The value of phi can be expressed as phi\u2009=\u2009(1\u2009+\u2009\u221a5\u2009)\u2009\u00f7\u20092, which is approximately 1.618. You don\u2019t have to understand the derivation to use it. Just remember that if you start with a golden rectangle, you can slice 38.2% from it to create a new golden rectangle.\n\nThis can be expressed in CSS quite simply:\n\n.cycle,\n.cycle > div > div {\n height: 38.2%;\n width: 100%;\n}\n.cycle > div,\n.cycle > div > div > div {\n width: 38.2%;\n height: 100%;\n}\n\nYou can see the result so far by visiting Demo One. With no borders or shading, there is nothing to see yet, so let\u2019s address that next.\n\nShading with transparency\n\nWe\u2019ll need to apply some shading to distinguish each segment of the spiral from its neighbours. We could start with a white background, then progress through shades of grey: #eee, #ddd, #ccc and so on, but this means hard-coding the background-color for every element. A more elegant solution would be to use the same colour for every element, but to make each one slightly transparent.\n\nThe nested
s that we are working with could be compared to layers in Photoshop. By applying a semi-transparent shade of grey, each successive layer can build on top of the darker layers beneath it. The effect accumulates, causing each successive layer to appear slightly darker than the last. In his 2009 article for 24 ways, Drew McLellan showed how to create a semi-transparent effect by working with RGBA colour. Here, we\u2019ll use the colour black with an alpha value of 0.07.\n\n#container div { background-color: rgba(0,0,0,0.07) }\n\nNote that I haven\u2019t used the immediate child selector here, which means that this rule will apply to all
elements inside the #container, no matter how deeply nested they are. You can view the result in Demo Two. As you can see, the golden rectangles alternate between landscape and portrait orientation.\n\n\n\nDemo Three).\n\n\n\nCSS3 specification indicates that a percentage can be used to set the border-radius property, but using percentages does not achieve consistent results in browsers today. Luckily, if you specify a border-radius in pixels using a value that is greater than the width and height of the element, then the resulting curve will use the shorter length side as its radius. This produces exactly the effect that we want, so we\u2019ll use an arbitrarily high value of 10,000 pixels for each border-radius:\n\n.cycle {\n border-radius: 0px;\n border-bottom-left-radius: 10000px;\n}\n.cycle > div {\n border-radius: 0px;\n border-bottom-right-radius: 10000px;\n}\n.cycle > div > div {\n border-radius: 0px;\n border-top-right-radius: 10000px;\n}\n.cycle > div > div > div {\n border-radius: 0px;\n border-top-left-radius: 10000px;\n}\n\nNote that the specification for the border-radius property is still in flux, so it is advisable to use vendor-specific prefixes. I have omitted them from the example above for the sake of clarity, but if you view source on Demo Four then you\u2019ll see that the actual styles are not quite as brief.\n\n\n\n\n\n\n\nFilling the available space\n\nWe have created an approximation of the Golden Spiral using only HTML and CSS. Neat! It\u2019s a shame that it occupies just a fraction of the available space. As a finishing touch, let\u2019s make the golden spiral expand or contract to use the full space available to it.\n\nIdeally, the outermost container should use the full available width or height that could accomodate a rectangle of golden proportions. This behaviour is available for background images using the \u201c background-size: contain; property, but I know of no way to make block level HTML elements behave in this fashion (if I\u2019m missing something, please enlighten me). Where CSS fails to deliver, JavaScript can usually provide a workaround. This snippet requires jQuery:\n\n$(document).ready(function() {\n\tvar phi = (1 + Math.sqrt(5))/2;\n\n\t$(window).resize(function() {\n\t\tvar goldenWidth = windowWidth = $(this).width(),\n\t\t\tgoldenHeight = windowHeight = $(this).height();\n\n\t\tif (windowWidth/windowHeight > phi) {\n\t\t\t// panoramic viewport \u2013 use full height\n\t\t\tgoldenWidth = windowHeight * phi;\n\t\t} else {\n\t\t\t// portrait viewport \u2013 use full width\n\t\t\tgoldenHeight = windowWidth / phi;\n\t\t};\n\n\t$(\"#container > div.cycle\")\n\t\t.width(goldenWidth)\n\t\t.height(goldenHeight);\n\n\t}).resize();\n\n});\n\nYou can view the result by visiting Demo Five.\n\n\n\n\n\n\n\nIs it just me, or can you see an elephant in there?\n\nYou can probably think of many ways to enhance this further, but for this study we\u2019ll leave it there. It has been a good excuse to play with proportions, positioning and the immediate child selector, as well as new CSS3 features such as border-radius and RGBA colours. If you are not already designing with golden proportions, then perhaps this will inspire you to begin.", "year": "2010", "author": "Drew Neil", "author_slug": "drewneil", "published": "2010-12-07T00:00:00+00:00", "url": "https://24ways.org/2010/golden-spirals/", "topic": "design"} {"rowid": 248, "title": "How to Use Audio on the Web", "contents": "I know what you\u2019re thinking. 
I never never want to hear sound anywhere near a browser, ever ever, wow! \ud83d\ude49\nYou\u2019re having flashbacks, flashbacks to the days of yore, when we had a element and yup did everyone think that was the most rad thing since . I mean put those two together with a , only use CSS colour names, make sure your borders were all set to ridge and you\u2019ve got yourself the neatest website since 1998.\nThe sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then.\nYes it is 2018, the end of in fact, soon to be 2019. We are certainly living in the future. Hoverboards self driving cars, holodecks VR headsets, rocket boots drone racing, sound on websites get real, Ruth.\nWe can\u2019t help but be jaded, even though the element is depreciated, and the autoplay policy appeared this year. Although still in it\u2019s infancy, the policy \u201ccontrols when video and audio is allowed to autoplay\u201d, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future.\nBut then of course comes the question, having lived in a muted present for so long, where and why would you use audio?\n\u2728 Showcase Time \u2728\nThere are some incredible uses of audio on websites today. This is my personal favourite futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience.\nfuturelibrary.no\nAnother site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good.\npottermore.com\nEighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive, it adds to the experience.\nEighty-six and a half years\nSound can be powerful and in some cases useful. Last year I wrote about using them to help validate forms. Audiochart is a library which \u201callows the user to explore charts on web pages using sound and the keyboard\u201d. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here.\nThen there\u2019s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the powers with have with the Web Audio API today.\nlightnote.co\nlearningmusic.ableton.com\nConsiderations\nYes, tis the season, let\u2019s be more thoughtful about our audios. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to \u2018enter\u2019 the site and headphones are recommended. 
This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) by stating headphones are recommended you are setting the user\u2019s expectations: they will expect sound, and if in a public setting can enlist the use of a common electronic device to cause less embarrassment.\nEighty-six and a half years\nAllowing mute and/or volume control clearly within the user interface is a good idea. It won\u2019t draw the user out of the experience, it\u2019ll give more control to the user about what audio they want to hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a very visible volume control than to fumble with device settings.\nIndicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn\u2019t always the first place to look for everyone.\nTo The Future\nSo let\u2019s go!\nWe see amazing demos built with Web Audio, and I\u2019m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that.\nBut audio doesn\u2019t actually need to be all bells and whistles (hey, it\u2019s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started to introduce some good sound design in your web design.\nIsn\u2019t it great then that there\u2019s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers).\nThis year I believe we have all experienced the web as a shopping mall more than ever. Its shining store fronts, flashing adverts, fast food, loud noises.\nLet\u2019s use 2019 to create more forests to explore, oceans to dive and mountains to climb.", "year": "2018", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2018-12-22T00:00:00+00:00", "url": "https://24ways.org/2018/how-to-use-audio-on-the-web/", "topic": "design"} {"rowid": 277, "title": "Raising the Bar on Mobile", "contents": "One of the primary challenges of designing for mobile devices is that screen real estate is often in limited supply. Through the advocacy of Luke W and others, we\u2019ve drawn comfort from the idea that this constraint ends up benefiting users and designers alike, from obvious advantages like portability and reach, to influencing our content strategy decisions through focus and restraint. But that doesn\u2019t mean we shouldn\u2019t take advantage of every last pixel of that screen we can snag!\n\nAs anyone who has designed a website for use on a smartphone can attest, there\u2019s an awful lot of space on mobile screens dedicated to browser functions that would be better off toggled out of view. Unfortunately, the visibility of some of these elements is beyond our control, such as the buttons fixed to the bottom of the viewport in iOS\u2019s Safari and the WebOS browser. 
However, in many devices, the address bar at the top can be manually hidden, and its absence frees up enough pixel room for a large, impactful heading, a critical piece of navigation, or even just a little more white space to air things out.\n\nSo, as my humble contribution to this most festive of web publications, today I\u2019ll dig into the approach I used to hide the address bar in a browser-agnostic fashion for sites like BostonGlobe.com, and the jQuery Mobile framework.\n\nSurveying the land\n\nFirst, let\u2019s assess the chromes of some popular, current mobile browsers. For example purposes, the following screen-captures feature the homepage of the Boston Globe site, without any address-bar-hiding logic in place.\n\nNote: these captures are just mockups \u2013 actual experience on these platforms may vary.\n\n On the left is iOS5\u2019s Safari (running on iPhone), and on the right is Windows Phone 7 (pre-Mango).\n\n BlackBerry 7 (left), and Android 2.3 (right).\n\n WebOS (left), Opera Mini (middle), and Opera Mobile (right).\n\nSome browsers, such the default browsers on WebOS and BlackBerry 5, hide the bar automatically without any developer intervention, but many of them don\u2019t. Of these, we can only manually hide the address bar on iOS Safari and Android (according to Opera Web Opener, Mike Taylor, some discussion is underway for support in Opera Mini and Mobile as well, which would be great!). This is unfortunate, but iOS and Android are incredibly popular, so let\u2019s direct our focus there.\n\nGreat API, or greatest API?\n\nAs it turns out, iOS and Android not only allow you to hide the address bar, they use the same JavaScript method to do so, too (this shouldn\u2019t be surprising, given that they are both WebKit browsers, but nothing expected happens in mobile). However, the method they use is not exactly intuitive. You might set out looking for a JavaScript API dedicated to this purpose, like, say, window.toolbar.hide(), but alas, to hide the address bar you need to use the window.scrollTo method!\n\nwindow.scrollTo(0, 0);\n\nThe scrollTo method is not new, it\u2019s just this particular use of it that is. For the uninitiated, scrollTo is designed to scroll a document to a particular set of coordinates, assuming the document is large enough to scroll to that spot. The method accepts two arguments: a left coordinate; and a top coordinate. It\u2019s both simple and supported well pretty much everywhere. In iOS and Android, these coordinates are calculated from the top of the browser\u2019s viewport, just below the address bar (interestingly, it seems that some platforms like BlackBerry 6 treat the top of the browser chrome as 0 instead, meaning the page content is closer to 20px from the top).\n\nAnyway, by passing the coordinates 0, 0 to the scrollTo method, the browser will jump to the top of the page and pull the address bar out of view! Of course, if a quick call to scrollTo was all we need to do to hide the address bar in iOS and Android, this article would be pretty short, and nothing new. Unfortunately, the first issue we need to deal with is that this method alone will not usually do the trick: it must be called after the page has finished loading.\n\nThe browser gives us a load event for just that purpose, so we\u2019ll wrap our scrollTo method in it and continue on our merry way! 
We\u2019ll use the standard addEventListener method to bind the load event, passing arguments for the event name, load, and a callback function to execute when the event is triggered.\n\nwindow.addEventListener(\"load\",function() {\n window.scrollTo(0, 0);\n});\n\nFor the sake of preventing errors in those using browsers that don\u2019t support addEventListener, such as Internet Explorer 8 and under, let\u2019s make sure that method exists before we use it:\n\nif( window.addEventListener ){\n window.addEventListener(\"load\",function() {\n window.scrollTo(0, 0);\n });\n}\n\nNow we\u2019re getting somewhere, but we must also call the method after the load event\u2019s default behavior has been applied. For this, we can use the setTimeout method, delaying its execution to after the load event has run its course.\n\nif( window.addEventListener ){\n window.addEventListener(\"load\",function() {\n setTimeout(function(){\n window.scrollTo(0, 0);\n }, 0);\n });\n}\n\nSweet sugar of Christmas! Hit this demo in iOS and watch that address bar drift up and away!\n\nNot so fast\u2026\n\nWe\u2019ve got a little problem: the approach above does work in iOS but, in some cases, it works a little too well. In the process of applying this behavior, we\u2019ve broken one of the primary tenets of responsible web development: don\u2019t break the browser\u2019s default behaviour. This usability rule of thumb is often violated by developers with even the best of intentions, from breaking the browser\u2019s back button through unrecorded Ajax page refreshes, to fancy momentum touch scrolling scripts that can wreak havoc in all but the most sophisticated of devices. In this case, we\u2019ve prevented the browser\u2019s native support of deep-linking to sections of a page (a hash identifier in the URL matching a page element\u2019s id attribute, for example, http://example.com#contact) from working properly, because our script always scrolls to the top.\n\nTo avoid this collision, we\u2019ll need to detect whether a deep link, or hash, is present in the URL before applying our logic. We can do this by ensuring that the location.hash property is falsey:\n\nif( !window.location.hash && window.addEventListener ){\n window.addEventListener( \"load\",function() {\n setTimeout(function(){\n window.scrollTo(0, 0);\n }, 0);\n });\n}\n\nStill works great! And a quick test using a hash-based URL confirms that our script will not execute when a deep anchor is in play. Now iOS is looking sharp, and we\u2019ve added our feature defensively to avoid conflicts.\n\n\n\nNow, on to Android\u2026\n\nWait. You didn\u2019t expect that we could write code for one browser and be finished, right? Of course you didn\u2019t. I mentioned earlier that Android uses the same method for getting rid of the address bar, but I left out the fact that the arguments it prefers vary slightly, but significantly, from iOS. Bah!\n\nDiffering from the earlier logic for iOS, to remove the address bar on Android\u2019s default browser, you need to pass a Y coordinate of 1 instead of 0. 
Aside from being just plain odd, this is particularly unfortunate because to any other browser on the planet, 1px is a very real, however small, distance from the top of the page!\n\nwindow.scrollTo( 0, 1 );\n\nLooks like we\u2019re going to need a fork\u2026\n\nR UA Android?\n\nAt this point, some developers might decide to simply not support this feature in Android, and more determined devs might decide that a quick check of the User Agent string would be a reliable way to determine the browser and tweak the scroll value accordingly. Neither of those decisions would be tragic, but in the spirit of cross-browser and future-friendly development, I\u2019ll propose an alternative.\n\nBy this point, it should be clear that neither of the implementations above offer a particularly intuitive way to hide an address bar. As such, one might be skeptical that these approaches will stick around very long in their present state in either browser. Perhaps at some point, Android will decide to use 0 like iOS, making our lives a little easier, or maybe some new browser will decide to model their address bar hiding method after one of these implementations. In any case, detecting the User Agent only allows us to apply logic based on the known present, and in the world of mobile, let\u2019s face it, the present is already the past.\n\nWriting a check\n\nIn this next step of today\u2019s technique, we\u2019ll apply some logic to quickly determine the behavior model of the browser we\u2019re using, then capitalize on that model \u2013 without caring which browser it happens to come from \u2013 by applying the appropriate scroll distance.\n\nTo do this, we\u2019ll rely on a fortunate side effect of Android\u2019s implementation, which is when you programatically scroll the page to 1 using scrollTo, Android will report that it\u2019s still at 0 because oddly enough, it is! Of course, any other browser in this situation will report a scroll distance of 1. Thus, by scrolling the page to 1, then asking the browser its scroll distance, we can use this artifact of their wacky implementation to our advantage and scroll to the location that makes sense for the browser in play.\n\nGetting the scroll distance\n\nTo pull off our test, we\u2019ll need to ask the browser for its current scroll distance. The methods for getting scroll distance are not entirely standardized across popular browsers, so we\u2019ll need to use some cross-browser logic. The following scroll distance function is similar to what you\u2019d find in a library like jQuery. It checks the few common ways of getting scroll distance before eventually falling back to 0 for safety\u2019s sake (that said, I\u2019m unaware of any browsers that won\u2019t return a numeric value from one of the first three properties).\n\n// scrollTop getter\nfunction getScrollTop(){\n return scrollTop = window.pageYOffset ||\ndocument.compatMode === \"CSS1Compat\" && document.documentElement.scrollTop ||\ndocument.body.scrollTop || 0;\n}\n\nIn order to execute that code above, the body object (referenced here as document.body) will need to be defined already, or we\u2019ll risk an error. 
To determine that it\u2019s defined, we can run a quick timer to execute code as soon as that object is defined and ready for use.\n\nvar bodycheck = setInterval(function(){\n if( document.body ){\n clearInterval( bodycheck );\n //more logic can go here!!\n } \n}, 15 );\n\nAbove, we\u2019ve defined a 15 millisecond interval called bodycheck that checks if document.body is defined and, if so, clears itself of running again. Within that if statement, we can extend our logic further to run other code, such as our check for the scroll distance, defined via the variable scrollTop below:\n\nvar scrollTop,\n bodycheck = setInterval(function(){\n if( document.body ){\n clearInterval( bodycheck );\n scrollTop = getScrollTop();\n } \n}, 15 );\n\nWith this working, we can immediately scroll to 1, then check the scroll distance when the body is defined. If the distance reports 1, we\u2019re likely in a non-Android browser, so we\u2019ll scroll back to 0 and clean up our mess.\n\nwindow.scrollTo( 0, 1 );\n\nvar scrollTop,\n bodycheck = setInterval(function(){\n if( document.body ){\n clearInterval( bodycheck );\n scrollTop = getScrollTop();\n window.scrollTo( 0, scrollTop === 1 ? 0 : 1 );\n } \n}, 15 );\n\nCashing in\n\nAll of the pieces are written now, so all we need to do is combine them with our previous logic for scrolling when the window is loaded, and we\u2019ll have a cross-browser solution of which John Resig would be proud. Here\u2019s our combined code snippet, with some formatting updates rolled in as well:\n\n(function( win ){\n\tvar doc = win.document;\n\n\t// If there\u2019s a hash, or addEventListener is undefined, stop here\n\tif( !location.hash && win.addEventListener ){\n\t\t//scroll to 1\n\t\twindow.scrollTo( 0, 1 );\n\t\tvar scrollTop = 1,\n\t\t\tgetScrollTop = function(){\n\t\t\t\treturn win.pageYOffset || doc.compatMode === \"CSS1Compat\" && doc.documentElement.scrollTop || doc.body.scrollTop || 0;\n\t\t\t},\n\t\t\t//reset to 0 on bodyready, if needed\n\t\t\tbodycheck = setInterval(function(){\n\t\t\t\tif( doc.body ){\n\t\t\t\t\tclearInterval( bodycheck );\n\t\t\t\t\tscrollTop = getScrollTop();\n\t\t\t\t\twin.scrollTo( 0, scrollTop === 1 ? 0 : 1 );\n\t\t\t\t}\n\t\t\t}, 15 );\n\t\twin.addEventListener( \"load\", function(){\n\t\t\tsetTimeout(function(){\n\t\t\t\t\t//reset to hide addr bar at onload\n\t\t\t\t\twin.scrollTo( 0, scrollTop === 1 ? 0 : 1 );\n\t\t\t}, 0);\n\t\t} );\n\t}\n})( this );\nView code example\n\nAnd with that, we\u2019ve got a bunch more room to play with on both iOS and Android.\n\n\n\nBreak out the eggnog\n\n\u2026because we\u2019re not done yet! In the spirit of making our script act more defensively, there\u2019s still another use case to consider. It was essential that we used the window\u2019s load event to trigger our scripting, but on pages with a lot of content, its use can come at a cost. Often, a user will begin interacting with a page, scrolling down as they read, before the load event has fired. In those situations, our script will jump the user back to the top of the page, resulting in a jarring experience.\n\nTo prevent this problem from occurring, we\u2019ll need to ensure that the page has not been scrolled beyond a certain amount. We can add a simple check using our getScrollTop function again, this time ensuring that its value is not greater than 20 pixels or so, accounting for a small tolerance.\n\nif( getScrollTop() < 20 ){\n //reset to hide addr bar at onload\n window.scrollTo( 0, scrollTop === 1 ? 
0 : 1 );\n}\n\nAnd with that, we\u2019re pretty well protected! Here\u2019s a final demo.\n\nThe completed script can be found on Github (full source: https://gist.github.com/1183357 ). It\u2019s MIT licensed. Feel free to use it anywhere or any way you\u2019d like!\n\nYour thoughts?\n\nI hope this article provides you with a browser-agnostic approach to hiding the address bar that you can use in your own projects today. Perhaps alternatively, the complications involved in this approach convinced you that doing this well is more trouble than it\u2019s worth and, depending on the use case, that could be a fair decision. But at the very least, I hope this demonstrates that there\u2019s a lot of work involved in pulling off this small task in only two major platforms, and that there\u2019s a real need for standardization in this area.\n\nFeel free to leave a comment or criticism and I\u2019ll do my best to answer in a timely fashion.\n\nThanks, everyone!\n\nSome parting notes\n\nI scream, you scream\u2026\n\nAt the time of writing, I was not able to test this method on the latest Android 4.0 (Ice Cream Sandwich) build. According to Sencha Touch\u2019s browser scorecard, the browser in 4.0 may have a different way of managing the address bar, so I\u2019ll post in the comments once I get a chance to dig into it further.\n\nShort pages get no love\n\nToday\u2019s technique only works when the page is as tall, or taller than, the device\u2019s available screen height, so that the address bar may be scrolled out of view. On a short page, you might work around this issue by applying a minimum height to the body element ( body { min-height: 460px; } ), but given the variety of screen sizes out there, not to mention changes in orientation, it\u2019s tough to find a value that makes much sense (unless you manipulate it with JavaScript).", "year": "2011", "author": "Scott Jehl", "author_slug": "scottjehl", "published": "2011-12-20T00:00:00+00:00", "url": "https://24ways.org/2011/raising-the-bar-on-mobile/", "topic": "design"} {"rowid": 279, "title": "Design the Invisible to Tell Better Stories on the Web", "contents": "For design to be meaningful we need to tell stories. We need to design the invisible, the cues, the messages and the extra detail hidden beneath the aesthetics. It\u2019s all about the story.\n\n\n\nFrom verbal exchanges around the campfire to books, the web and everything in between, storytelling allows us to share, organize and process information more efficiently. It helps us understand our surroundings and make emotional connections to people, places and experiences.\n\nWeb design lends itself perfectly to the conventions of storytelling, a universal process. However, the stories vary because they\u2019re defined by culture, society, politics and religion. All of which need considering if you are to design stories that are relevant to your target audience.\n\nThe benefit of approaching design with storytelling in mind from the very start of the project is that we are creating considered design that allows users to quickly gather meaning from the website. They do this by reading between the lines and drawing on the wealth of knowledge they have acquired about the associations between colours, typefaces and signs.\n\nWith so much recognition and analysis happening subconsciously you have to consider how design communicates on this level. 
This invisible layer has a significant impact on what you say, how you say it and who you say it to.\n\nHow can we design something that\u2019s invisible?\n\nBy researching and making conscious decisions about exactly what you are communicating, you can make the invisible visible. As is often quoted, good design is like air, you only notice it when it\u2019s bad. So by designing the invisible the aim is to design stories that the audience receive subliminally, so that they go somewhat unnoticed, like good air.\n\nStorytelling strands\n\nTo share these stories through design, you can break it down into several strands. Each strand tells a story on its own, but when combined they may start to tell a different story altogether. These strands are colour, typefaces, branding, tone of voice and symbols. All are literal and visible but the invisible element is the meaning behind them \u2013 meaning that you can extract and share. In this article I want to focus on colour, typefaces and tone of voice and on how combining story strands can change the meaning.\n\nColour\n\nLet\u2019s start with colour. Red represents emotions such as love but can also signify war. Green is commonly used for all things environmental and purple is a colour that connotes wealth and royalty. These associations between colour and emotion or value have been learned over time and are continually reinforced through media and culture. \n\n\n\nWith this knowledge come expectations from your users. For example, they will expect Valentine\u2019s Day sites to be red and kids\u2019 sites to be bright and colourful. This is true in the same way audiences have expectations of certain genres of film or music. These conventions help savvy audiences decode texts and read between the lines or, rather, to draw meaning from the invisible. It\u2019s practically an innate skill. This is why you need to design the invisible: because users will quickly deduce meaning from your site and fill in the story\u2019s gaps, it\u2019s important to give them as much of that story to begin with. A story relevant to their culture.\n\n\n\nOf all the ways you can tell stories through web design, colour is the most fascinating and important. Not only does it evoke emotions in users but its meaning varies significantly between cultures. In the west, for example, white is a colour associated with weddings, and black is the colour of mourning. This is signified by the traditions of brides wearing white and those in mourning wearing black. In other cultures the meanings are reversed, as black is a colour that represents good luck and white is a colour that signifies mourning. If you assume the same values are true in all cultures then you risk offending the very people you are targeting.\n\nWhen colours combine, the story being told can change. If you design using red, white and blue then it\u2019s going to be difficult to shake off patriotic connotations because this colour combination is so ingrained as being American or British or French thanks largely to their flags. This extends to politics too. Each party has its own representative colour. In the UK, the Conservatives are blue and Labour is red so it would be inappropriate storytelling to design a Labour-related website in blue as there would be a conflict between the content and the design, a conflict that would result in a poor user experience.\n\nConflicts become more of an issue when you start to combine story strands. 
I once saw a No VAT advert use the symbol on the left:\n\n\n\nThere\u2019s a complete conflict in storytelling here between the sign and its colour. The prohibition sign was used over the word VAT to mean no VAT; that makes sense. But this is a symbol that is used to communicate to people that something is being prohibited or prevented, it mustn\u2019t continue. So to use green contradicts the message of the sign itself; green is used as a colour to say yes, go, proceed, enter. The same would be true if we had a tick in red and a cross in green. Bad design here means the story is flawed and the user experience is compromised.\n\nTypefaces\n\nTypefaces also tell stories. They are so much more than the words that are written with them because they connote different values. Here are a few:\n\n\n\nSerif fonts are more formal and are associated with tradition, sophistication and high-end values. Sans serif fonts, on the other hand, are synonymous with modernity, informality and friendliness. These perceptions are again reinforced through more traditional media such as newspaper mastheads, where the serious news-focused broadsheets have serif titles, and the showbiz and gossip-led tabloids have sans serif titles. This translates to the web as well. With these associations already familiar to users, they may see copy and focus on the words, but if the way that copy is displayed jars with the context then we are back to having conflicting stories like the No VAT sign earlier.\n\nLet\u2019s take official institutions, for example. The White House, the monarchy, 10 Downing Street and other government departments are formal, traditional and important organisations. If the copy on their websites were written in a typeface like Cooper Black, it would erase any authority and respect that they were due. They need people to take them seriously and trust them, and part of the way to do this is to have a typeface that represents those values.\n\nIt works both ways though. If Innocent, Threadless or other fun companies used traditional typefaces, they wouldn\u2019t be accurate reflections of their core values, brand and personality. They are better positioned to use friendly, informal and modern typefaces. But still never Comic Sans.\n\nTone of voice\n\nClosely tied to this is tone of voice, my absolute bugbear on the web. Tone of voice isn\u2019t what is said but, rather, how it is said. When we interact with others in person we don\u2019t just listen to the words they say, but we also draw meaning from their body language, and pitch and tone of voice. Just because the web removes that face-to-face interaction with your audience it doesn\u2019t mean you can\u2019t have a tone of voice. \n\n\n\nInnocent pioneered the informal chatty tone of voice that so many others have since emulated, but unless it is representative of your company, then it\u2019s not appropriate. It works for Innocent because the tone of voice is consistent across all the company\u2019s materials, both online and offline. Ben and Jerry\u2019s takes the same approach, as does Threadless, but maybe you need a more formal or corporate tone of voice. It really depends on what your business or service is and who it is for, and that is why I think LoveFilm has it all wrong. \n\nLoveFilm offers a film and game rental service, something fun for people in their downtime. While they aren\u2019t particularly stuffy, neither is their tone of voice very friendly or informal, which is what I would expect from a service like theirs. 
The reason they have it wrong is in the language they use and the way their sentences are constructed.\n\nThis is the first time we\u2019ve discussed language because, on the whole, designing the invisible isn\u2019t concerned with language at all. But that doesn\u2019t mean that these strands can\u2019t still elicit an emotional response in users. Jon Tan quoted Dr Mazviita Chirimuuta in his New Adventures in Web Design talk in January 2011:\n\n\n\tAlthough there is no absolute separation between language and emotion, there will still be countless instances where you have emotional response without verbal input or linguistic cognition. In general language is not necessary for emotion.\n\n\nThis is even more pertinent when the emotions evoked are connected to people\u2019s culture, surroundings and way of life. It makes design personal, something that audiences can connect with at more than just face value but, rather, on a subliminal or, indeed, invisible level. \n\nIt also means that when you are asked the inevitable question of why \u2013 why is blue the dominant colour? why have you used that typeface? why don\u2019t we sound like Innocent? \u2013 you will have a rationale behind each design decision that can explain what story you are telling, how you discovered the story and how it is targeted at the core audience.\n\nResearch\n\nThis is where research plays a vital role in the project cycle. If you don\u2019t know and understand your audience then you don\u2019t know what story to design. Every project lends itself to some level of research, but how in-depth and what methods are most appropriate will be dictated by project requirements and budget restrictions \u2013 but do your research. \n\nEven if you think you know your audience, it doesn\u2019t hurt to research and reaffirm that because cultures and society do change, albeit slowly, but they can change. So ask questions at the start of the project during the research phase:\n\n\n\tWhat do different colours mean for your audience\u2019s culture?\n\tDo the typeface and tone of voice appeal to the demographic?\n\tDoes the brand identity represent the values and personality of your service?\n\tAre there any social, political or religious significances associated with your audience that you need to take into consideration so you don\u2019t offend them?\n\n\nAsk questions, understand your audience, design your story based on these insights, and create better user experiences in context that have good, solid storytelling at their heart.\n\nMajor hat tip to Gareth Strange for the beautiful graphics used within this article.", "year": "2011", "author": "Robert Mills", "author_slug": "robertmills", "published": "2011-12-14T00:00:00+00:00", "url": "https://24ways.org/2011/design-the-invisible/", "topic": "design"} {"rowid": 285, "title": "Composing the New Canon: Music, Harmony, Proportion", "contents": "Ohne Musik w\u00e4re das Leben ein Irrtum\n\u2014Friedrich NIETZSCHE, G\u00f6tzen-D\u00e4mmerung, Spr\u00fcche und Pfeile 33, 1889\n\n\nSomehow, music is hardcoded in human beings. It is something we understand and respond to without prior knowledge. Music exercises the emotions and our imaginative reflex, not just our hearing. It behaves so much like our emotions that music can seem to symbolize them, to bear them from one person to another. Not surprisingly, it conjures memories: the word music derives from Greek \u03bc\u03bf\u03c5\u03c3\u03b9\u03ba\u03ae (mousike), art of the Muses, whose mythological mother was Mnemosyne, memory. 
But it can also summon up the blood, console the bereaved, inspire fanaticism, bolster governments and dissenters alike, help us learn, and make web designers dance. And what would Christmas be without music?\n\nMusic moves us, often in ways we can\u2019t explain. By some kind of alchemy, music frees us from the elaborate nuisance and inadequacy of words. Across the world and throughout recorded history \u2013 and no doubt well before that \u2013 people have listened and made (and made out to) music.\n\n\n\t[I]t appears probable that the progenitors of man, either the males or females or both sexes, before acquiring the power of expressing their mutual love in articulate language, endeavoured to charm each other with musical notes and rhythm.\n\u2014Charles DARWIN, The Descent of Man, and Selection in Relation to Sex, 1871\n\n\nIt\u2019s so integral to humankind, we\u2019ve sent it into space as a totem for who we are. (Who knows? It might be important.) Music is essential, a universal compulsion; as Nietzsche wrote, without music life would be a mistake.\n\nMusic, design and web design\n\nThere are some obvious and notable similarities between music and visual design. Both can convey mood and evoke emotion but, even under close scrutiny, how they do that remains to a great extent mysterious. Each has formal qualities or parts that can be abstracted, analysed and discussed, often using the same terminology: composition, harmony, rhythm, repetition, form, theme; even colour, texture and tone.\n\nA possible reason for these shared aspects is that both visual design and music are means to connect with people in deep and lasting ways. Furthermore, I believe the connections to be made can complement direct emotional appeal. Certain aesthetic qualities in music work on an unconscious and, it could be argued, universal level. Using musical principles in our designs, then, can help provide the connectedness between content, device and user that we now seek as web designers.\n\nYet, when we talk about music and web design, the conversation is almost always about the music designers listen to while working, a theme finding its apotheosis in Designers.MX. Sometimes, articles in that dreary list format seek inspiration from music industry websites. There\u2019s even a service offering pre-templated web designs for bands, and at least one book surveyed the landscape back in 2006. Occasionally, discussions broaden somewhat into whether and how different kinds of music can inspire and influence the design work we produce.\n\nSuch enquiries, it seems to me, are beside the point. Do I really design differently when I listen to Bach rather than Bacharach? Will the barely restrained energy of Count Basie\u2019s The Kid from Red Bank mean I choose a lively colour palette, and rural, autumnal shades when inspired by Fleet Foxes? Mahler means a thirteen-column layout? Gillian Welch leads to distressed black and white photography? While reflecting the importance we place in music and how it seems to help us in our work, surveys on musical taste and lists of favourite artists fail to recognize that some of the fundamental aesthetic characteristics of music can be adapted and incorporated into modern web design.\n\nAntiphonal geometry\n\nOver recent years, web designers have embraced grid systems as powerful tools to help create good-looking and intuitive user experiences. 
With the advent of responsive design, these grids and their contents must adapt to the different screen sizes and properties of all kinds of user devices. Finding and using grid values that can scale well and retain or enhance their proportions and relationships while making the user experience meaningful in several different contexts is more important than ever.\n\nIn print, this challenge has always started with the dimensions and proportions of the page. Content can thereby be made to belong inside the page and be bound to it. And music has been used for centuries to further this aim. As Robert Bringhurst says in The Elements of Typographical Style:\n\n\n\tIndeed, one of the simplest of all systems of page proportions is based on the familiar intervals of the diatonic scale. Pages that embody these basic musical proportions have been in common use in Europe for more than a thousand years.\n\n\nVery well. But while he goes on to list (from the full chromatic scale, rather than just diatonic) the proportions and the musical intervals they\u2019re based on, Bringhurst fails to mention what they\u2019re ratios of or their potential effects. Shame. In his favour, however, he later touches on how proportions in print might be considered to work:\n\n\n\tThe page is a piece of paper. It is also a visible and tangible proportion, silently sounding the thoroughbass of the book. On it lies the textblock, which must answer to the page. The two together \u2013 page and textblock \u2013 produce an antiphonal geometry. That geometry alone can bond the reader to the book. Or conversely, it can put the reader to sleep, or put the reader\u2019s nerves on edge, or drive the reader away.\n\n\nSo what does Bringhurst mean by antiphonal geometry, a phrase that marries the musical to the spatial? By stating that the textblock \u201cmust answer to the page\u201d, the implication is that the relationship between the proportions of the page and the shape of the textblock printed on it embodies a spatial (geometrical) call-and-response (antiphony) that can be appealing or not.\n\nBoulton\u2019s new canon\n\nBut, as Mark Boulton has pointed out, on the web \u201cthere are no edges. There are no \u2018pages\u2019. We\u2019ve made them up.\u201d So, what is to be done? In January 2011 at the New Adventures in Web Design conference, Boulton presented his vision of a new canon of web design, a set of principles to guide us as we design the web. There are three overlapping tenets:\n\n\n\tdesign from the content out\n\tcreate connectedness between the different content elements\n\tbind the content to the web device\n\n\nRather than design from the edges in, we need to design layout systems from the content out. To this end, Boulton asserts that grid system design should begin with a constraint, and he suggests we use the size of a fixed content element, such as an advertising unit or image, as a starting point for online grid calculations. Khoi Vinh advocates the same approach in his book, Ordering Disorder: Grid Principles for Web Design.\n\nBoulton\u2019s second and third tenets, however, are more complex and overlap significantly with each other. 
Connecting the different parts of the content and binding the content to the device share many characteristics and solutions:\n\n\n\tadopting ems and percentages as units of size for layout elements\n\taltering text size, line length and line height for different viewport dimensions\n\tproviding higher resolution images for devices with greater pixel densities\n\tfluid layout grids, flexible images and responsive design\n\n\nAll can help relate the presentation of the content to its delivery in a certain context.\n\nBut how do we determine the relationship between one element of a layout and another? How can we avoid making arbitrary decisions about the relative sizes of parts of our designs? What can we use to connect the parts of our design to one another, and how can we bind the presentation of the content to the user\u2019s device?\n\nTim Brown\u2019s application of modular typographic scales hints at an answer. In the very useful tool he created for calculating such scales, Brown includes two musical ratios: the perfect fifth (2:3); and the perfect fourth (3:4). Why? Where do they come from? And what do they mean?\n\nHarmonies musical and visual\n\nFundamental to music are rhythm and harmony.\n\nAs any drummer will tell you, without rhythm there is no music. Even when there\u2019s no regular beat, any tune follows a rhythm, however irregular, simply because a change of note is a point of change in the music. Although rhythm, timing and pacing are all relevant to interaction design, right now it\u2019s harmony we\u2019re interested in.\n\nSometimes harmony is called the vertical aspect of music, and melody the horizontal. But this conceit overlooks the fact that harmony is both vertical and horizontal. A single melodic line, as it is played, implies various sets of harmonies on which it is grounded, whether or not those harmonies are played. So, harmony doesn\u2019t just sit vertically beneath the horizontal melody, but moves horizontally as well, through harmonic progression.\n\nTo stretch this arrangement pixel-thin, we could argue that in onscreen design melody is the content, and the layout and arrangement of the content is the harmony. We sometimes say a design is harmonious when the interplay of different elements of a design is pleasing or balanced or in proportion, and the content (the melody) is set off or conveyed well by or appropriate to the design.\n\nWe seem to know instinctively whether a layout is harmonious\u2026\n\nIn the design of The Great Discontent, the relationships between different elements combine to form a balanced whole.\n\n\u2026or not.\n\nThere\u2019s no harmony in the Department for Education\u2019s website because the different parts of the content don\u2019t feel related to one another.\n\nWhat is it that makes one design harmonious and another dissonant? It\u2019s not just whether things line up, though that\u2019s a start. I believe there are much deeper aesthetic forces at work, forces we can tap into in our onscreen designs. Now, I\u2019m not going start a difficult discussion about aesthetics. For our purposes, we just need to know that it\u2019s the branch of philosophy dealing with the nature of beauty, and the creation and perception of beauty. And among the key components in the perception of beauty are harmony and proportion. These have been part of traditional western aesthetics since Plato (about 2,500 years).\n\nOne of the ways we appreciate the beauty of music is through the harmonic intervals we hear. 
A musical interval is a combination of two notes and it describes the distance between the two pitches. For example, the distance between C and the G above it (if we take C as the tonic or root) is called a perfect fifth.\n\nLeft: C to G, a perfect fifth. Right: C and G, not a perfect fifth.\n\n \n\nAnd, to get superficially scientific for a moment, each musical interval can be expressed as a ratio of the wavelength frequencies of the notes; for our perfect fifth, with every two wavelengths of C, there are three of G. And what is a ratio, if not a proportion of one thing to another?\n\nSo, simple musical harmony (using what\u2019s known as just intonation1) affords us a set of proportions, expressed as ratios. Where better to apply these ideas of harmony and proportion from music in web design, than grid systems?\n\nA digression: whither \u03c6?\n\nQuite often in our discussions of pure design and aesthetics, we mention the golden ratio and regurgitate the same justifications for its use: roots in antiquity; embodied in classical and Renaissance architecture and art; occurrence in nature; the New Twitter, and so forth (oh, really?).\n\nYet the ratios of musical intervals from just intonation are equally venerable and much more widespread: described by Pythagorus; employed in Palladian architecture, and printing, books and art from the Renaissance onwards; in modern times, film and television dimensions; standard international paper sizes (ISO 216, the A and B series); and, again and again, screen dimensions \u2013 chances are that screen you\u2019re probably looking at right now has the proportions 2:3 (iPhone and iPod Touch), 3:4 (iPad and Kindle), 3:5 (many smartphones), 5:8 or 16:9 (many widescreen monitors), all ratios of musical intervals.\n\nBack to our theme\u2026\n\nMusical interval ratios\n\nLet\u2019s take a look at most of the ratios within a couple of octaves and crunch some numbers to generate some percentages and other values that we can use in our designs. 
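A minimal sketch of that arithmetic, should you want to crunch the numbers in the browser rather than by hand: the interval list here is abridged and the describeInterval helper is only an illustrative name, not something from any library, but it yields the same normalized ratios and percentages as the tables that follow.

// A handful of just intonation intervals, written as [smaller, larger] pairs.
var intervals = {
	"minor third": [5, 6],
	"major third": [4, 5],
	"perfect fourth": [3, 4],
	"perfect fifth": [2, 3],
	"minor sixth": [5, 8],
	"octave": [1, 2]
};

// Derive the 1:x form of the ratio and the two percentage forms for a named interval.
function describeInterval(name) {
	var small = intervals[name][0],
		large = intervals[name][1];
	return {
		ratio: "1:" + parseFloat((large / small).toFixed(3)),
		percentOfLarger: parseFloat((small / large * 100).toFixed(3)) + "%",
		percentOfSmaller: parseFloat((large / small * 100).toFixed(3)) + "%"
	};
}

describeInterval("perfect fifth");
// { ratio: "1:1.5", percentOfLarger: "66.667%", percentOfSmaller: "150%" }

The same handful of ratios feeds everything that follows, whether the result ends up as a font size, a line height or a column width.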
First, the intervals and their ratios in just intonation and expressed as ratios of one:\n\n\n\t\t\n\t\t\tName \n\t\t\tInterval in C \n\t\t\tRatio \n\t\t\tRatio (1:x) \n\t\t\n\t\t\n\t\t\t unison \n\t\t\t C\u2192C \n\t\t\t 1:1 \n\t\t\t 1:1 \n\t\t\n\t\t\n\t\t\t minor second \n\t\t\t C\u2192D\u266d \n\t\t\t 15:16 \n\t\t\t 1:1.067 \n\t\t\n\t\t\n\t\t\t major second \n\t\t\t C\u2192D \n\t\t\t 8:9 \n\t\t\t 1:1.125 \n\t\t\n\t\t\n\t\t\t minor third \n\t\t\t C\u2192E\u266d \n\t\t\t 5:6 \n\t\t\t 1:1.2 \n\t\t\n\t\t\n\t\t\t major third \n\t\t\t C\u2192E \n\t\t\t 4:5 \n\t\t\t 1:1.25 \n\t\t\n\t\t\n\t\t\t perfect fourth \n\t\t\t C\u2192F \n\t\t\t 3:4 \n\t\t\t 1:1.333 \n\t\t\n\t\t\n\t\t\t augmented fourth \nor diminished fifth \n\t\t\t C\u2192F\u266f/G\u266d \n\t\t\t 1:\u221a2 \n\t\t\t 1:1.414 \n\t\t\n\t\t\n\t\t\t perfect fifth \n\t\t\t C\u2192G \n\t\t\t 2:3 \n\t\t\t 1:1.5 \n\t\t\n\t\t\n\t\t\t minor sixth \n\t\t\t C\u2192A\u266d \n\t\t\t 5:8 \n\t\t\t 1:1.6 \n\t\t\n\t\t\n\t\t\t major sixth \n\t\t\t C\u2192A \n\t\t\t 3:5 \n\t\t\t 1:1.667 \n\t\t\n\t\t\n\t\t\t minor seventh \n\t\t\t C\u2192B\u266d \n\t\t\t 9:16 \n\t\t\t 1:1.778 \n\t\t\n\t\t\n\t\t\t major seventh \n\t\t\t C\u2192B \n\t\t\t 8:15 \n\t\t\t 1:1.875 \n\t\t\n\t\t\n\t\t\t octave \n\t\t\t C\u2192C\u2191 \n\t\t\t 1:2 \n\t\t\t 1:2 \n\t\t\n\t\t\n\t\t\t major tenth \n\t\t\t C\u2192E\u2191 \n\t\t\t 2:5 \n\t\t\t 1:2.5 \n\t\t\n\t\t\n\t\t\t major eleventh \n\t\t\t C\u2192F\u2191 \n\t\t\t 3:8 \n\t\t\t 1:2.667 \n\t\t\n\t\t\n\t\t\t major twelfth \n\t\t\t C\u2192G\u2191 \n\t\t\t 1:3 \n\t\t\t 1:3 \n\t\t\n\t\t\n\t\t\t double octave \n\t\t\t C\u2192C\u2191 \n\t\t\t 1:4 \n\t\t\t 1:4 \n\t\t\n\t\t\n\t\t\tName \n\t\t\tInterval in C \n\t\t\tRatio \n\t\t\tRatio (1:x) \n\t\t\n\n\nAnd now as percentages, of both the larger and smaller values in the ratios:\n\n\n\t\t\n\t\t\tName \n\t\t\tRatio \n\t\t\t% of larger value \n\t\t\t% of smaller value \n\t\t\n\t\t\n\t\t\t unison \n\t\t\t 1:1 \n\t\t\t 100% \n\t\t\t 100% \n\t\t\n\t\t\n\t\t\t minor second \n\t\t\t 15:16 \n\t\t\t 93.75% \n\t\t\t 106.667% \n\t\t\n\t\t\n\t\t\t major second \n\t\t\t 8:9 \n\t\t\t 88.889% \n\t\t\t 112.5% \n\t\t\n\t\t\n\t\t\t minor third \n\t\t\t 5:6 \n\t\t\t 83.333% \n\t\t\t 120% \n\t\t\n\t\t\n\t\t\t major third \n\t\t\t 4:5 \n\t\t\t 80% \n\t\t\t 125% \n\t\t\n\t\t\n\t\t\t perfect fourth \n\t\t\t 3:4 \n\t\t\t 75% \n\t\t\t 133.333% \n\t\t\n\t\t\n\t\t\t augmented fourth \nor diminished fifth \n\t\t\t 1:\u221a2 \n\t\t\t 70.711% \n\t\t\t 141.421% \n\t\t\n\t\t\n\t\t\t perfect fifth \n\t\t\t 2:3 \n\t\t\t 66.667% \n\t\t\t 150% \n\t\t\n\t\t\n\t\t\t minor sixth \n\t\t\t 5:8 \n\t\t\t 62.5% \n\t\t\t 160% \n\t\t\n\t\t\n\t\t\t major sixth \n\t\t\t 3:5 \n\t\t\t 60% \n\t\t\t 166.667% \n\t\t\n\t\t\n\t\t\t minor seventh \n\t\t\t 9:16 \n\t\t\t 56.25% \n\t\t\t 177.778% \n\t\t\n\t\t\n\t\t\t major seventh \n\t\t\t 8:15 \n\t\t\t 53.333% \n\t\t\t 187.5% \n\t\t\n\t\t\n\t\t\t octave \n\t\t\t 1:2 \n\t\t\t 50% \n\t\t\t 200% \n\t\t\n\t\t\n\t\t\t major tenth \n\t\t\t 2:5 \n\t\t\t 40% \n\t\t\t 250% \n\t\t\n\t\t\n\t\t\t major eleventh \n\t\t\t 3:8 \n\t\t\t 37.5% \n\t\t\t 266.667% \n\t\t\n\t\t\n\t\t\t major twelfth \n\t\t\t 1:3 \n\t\t\t 33.333% \n\t\t\t 300% \n\t\t\n\t\t\n\t\t\t double octave \n\t\t\t 1:4 \n\t\t\t 25% \n\t\t\t 400% \n\t\t\n\t\t\n\t\t\tName \n\t\t\tRatio \n\t\t\t% of larger value \n\t\t\t% of smaller value \n\t\t\n\n\nAs you can see, the simple musical intervals are expressed as ratios of small whole numbers (integers). 
We can then normalize them as ratios of one, as well as derive percentage values, both in terms of the smaller value to the larger, and vice versa. These are the numbers we can incorporate into our designs. If you\u2019ve ever written something like body { font: 100%/1.5 \"Museo Sans\", Helvetica, sans-serif; } in your CSS, you\u2019re already using a musical ratio: the perfect fifth.\n\nModular scales allow us to generate a set of numbers based on a musical interval that can be used for all kinds of typographic and layout decisions to create harmony in a visual design for the web. As Tim Brown said at the 2010 Build conference:\n\n\n\tI think that from that most atomic unit \u2013 type \u2013 whole experiences can resonate, whole experiences can be harmonious. And whole experiences can have a purpose suited to our design intentions.\n\n\nOnce more, with feeling: connectedness\n\nAs well as modular scales, there are other methods of incorporating musical interval ratios into our work. Setting the ratio of font size to line height in CSS is one such example. We could also create a typographic hierarchy using the same principle and combining several ratios that we know harmonize well musically in a chord:\n\nbody { font-size: 75%; } /* =12px = base size or tonic */\n\nh1 { font-size: 32px; font-size: 2.667rem; }\n /* =32px = 3:8 = major eleventh (C\u2192F\u2191) */\n\nh2 { font-size: 24px; font-size: 2rem; }\n /* =24px = 1:2 = octave (C\u2192C\u2191) */\n\nh3 { font-size: 20px; font-size: 1.667rem; }\n /* =20px = 3:5 = major sixth (C\u2192A) */\n\nfigcaption, small { font-size: 9px; font-size : 0.75rem }\n /* =9px = 3:4 = perfect fourth (C\u2192F) */\n\nWhoa! Hold your reindeer, Santa! How can we know what interval combinations work well together to form chords? Well, I\u2019m a classically trained musician, so perhaps I have an advantage. To avoid a long, technically complex digression into musical harmony, here are a few basic combinations of intervals that are harmonious in one way or another:\n\n\n\tunison; major third; perfect fifth; octave\n\tunison; perfect fourth; major sixth; octave\n\tunison; minor third; minor sixth; octave\n\tunison; minor third; diminished fifth; major sixth; octave\n\n\nThis isn\u2019t to say that other combinations can\u2019t be used to interesting effect and particular purpose \u2013 they surely can \u2013 but I have to make sure there\u2019s something left for you to experiment with in the wee small hours over the holiday. Bear in mind, though, were I to play you two notes from the same scale to form a minor second, for example, you\u2019d probably say it was dissonant and maybe that quality of the 15:16 ratio would be translated to the design.\n\n \n\nIn the typographic hierarchy above, you\u2019ll notice I used an interval in the higher octave, which affords a broader range of ratios while retaining the harmony. Thus, a perfect fifth (2:3) becomes a major twelfth (1:3), or a major sixth (3:5) becomes a major thirteenth (3:10).\n\nThe harmonic ratios can obviously be used as proportions for layout as well, in several different ways:\n\n\n\timage width and height (for example, 450\u00d7800px = 9:16 = minor seventh)\n\tmain content to page width (67%:100% = 2:3 = perfect fifth)\n\tpage width to viewport width (80%:100% = 4:5 = major third)\n\n\nOne great benefit of using such ratios in web design work is that they can be applied in responsive web design. 
Proportional values, based on percentages or equivalent em units, will scale with changing viewports, so your layout and image proportions can be maintained or deliberately changed, as we\u2019re about to find out, across devices.\n\nSmall speakers, tall speakers: binding to the device\n\nThe musical interval ratios also provide an opportunity, not only to create connectedness between the parts of a layout, but to bind the content to a device \u2013 that elusive antiphonal geometry. Just as a textblock and page resonate together, so too can web content and the screen. Earlier, I mentioned that several common screen aspect ratios match musical interval ratios. It would seem, then, that we have a set of proportions that we can use in different ways to establish and retain a sense of harmony that can be based on and change with those contexts. Using musical interval ratios, we can bind the display of our content to the device it\u2019s displayed on.\n\nIf you haven\u2019t met already, let me introduce you to the device-aspect-ratio property of CSS media queries.\n\n@media only screen and (device-aspect-ratio: 3/4) { }\n@media only screen and (device-aspect-ratio: 480/640) { }\n@media only screen and (device-aspect-ratio: 600/800) { }\n@media only screen and (device-aspect-ratio: 768/1024) { }\n\nRegardless of the precise pixel values, each of these media queries would apply to devices whose display area has an aspect ratio of 3:4. It works by comparing the device-width with the device-height. (It\u2019s not to be confused with aspect-ratio, which is defined by the width and height of the browser within the device.) The values in the media query are always presented as width/height, with portrait being the default orientation for smartphones and tablets; that is, to match an iPhone screen, you\u2019d use device-aspect-ratio: 2/3, not 3/2, which won\u2019t work.\n\nHere\u2019s a table of the musical intervals with their corresponding screens.\n\n\n\t\t\n\t\t\tName \n\t\t\tdevice-aspect-ratio \n\t\t\tScreens \n\t\t\tCommon resolutions (pixels) \n\t\t\n\t\t\n\t\t\t major third \n\t\t\t 5/4 \n\t\t\t TFT LCD computer screens \n\t\t\t 1,280\u00d71,024 \n\t\t\n\t\t\n\t\t\t perfect fourth \n\t\t\t 3/4 or 4/3 \n\t\t\t iPad, Kindle and other tablets, PDAs \n\t\t\t 320\u00d7240, 768\u00d71,024 \n\t\t\n\t\t\n\t\t\t perfect fifth \n\t\t\t 2/3 \n\t\t\t iPhone, iPod Touch \n\t\t\t 320\u00d7480, 640\u00d7960 \n\t\t\n\t\t\n\t\t\t minor sixth \n\t\t\t 8/5 (16/10) \n\t\t\t Many widescreens \n\t\t\t 1,152\u00d7720, 1,440\u00d7900, 1,920\u00d71,200 \n\t\t\n\t\t\n\t\t\t major sixth \n\t\t\t 3/5 \n\t\t\t Many smartphones \n\t\t\t 240\u00d7400, 480\u00d7800 \n\t\t\n\t\t\n\t\t\t minor seventh \n\t\t\t 16/9 or 9/16 \n\t\t\t Many widescreens and some smartphones \n\t\t\t 720\u00d71,280, 1,366\u00d7768, 1,920\u00d71,080, 2,560\u00d71,440 \n\t\t\n\n\n[You might argue that I\u2019m playing fast and loose with the ratios. I suppose, mathematically speaking, 9:16 is not the same as 16:9: I\u2019m no expert. 
But let\u2019s not throw the baby out with the bath water, particularly at Christmas.]\n\nWith this in mind, we can begin to write media queries that will influence various typographic and layout values in line with the aspect ratios of specific screens and in combination with em-based min-width queries that work from smaller, mobile screens to larger, desktop screens.\n\nHere\u2019s a very simple demo page with only some text, an image with a caption and a little basic layout: no seasonal overindulgence here.\n\nDemo: Sample page using device-aspect-ratio media queries based on musical interval ratios\n\nOur initial styles for all devices are based on the perfect fifth, with the major third and octave rounding things out into a harmonious whole, whether or not media queries are supported. For example:\n\nhtml { font-size: 100%; line-height: 1.5; }\n /* font-size:line-height = 16:24 = 2:3 = perfect fifth */\n\nh1 { font-size: 32px; font-size: 2rem; line-height: 1.25; }\n /* font-size:line-height = 32:40 = 4:5 = major third\n body:h1 = 16:32 = 1:2 = octave */\n\nWhile we should really consider methods of delivering images appropriate to the screen size, let\u2019s just stick to a single image for all devices. But why don\u2019t we change its aspect ratio from 4:3 to 3:2, to fit with our harmonic scheme? It\u2019s easy enough to do with overflow:hidden on the
element to hide a part of the image, and a negative margin fudge:\n\nfigure img { margin: -8.5% 0 0 0; width: 100%; max-width: 100%; }\n\nOur first break point targets devices 320 pixels wide with an aspect ratio of 2:3, namely the iPhone and iPod Touch:\n\n/* 320px = 20\u00d716 */\n@media only screen and (min-width: 20em) and (device-aspect-ratio: 2/3) { }\n\nWe\u2019re actually already there, of course, as the intervals we\u2019ve chosen resonate with this aspect ratio \u2013 the content is already bound to the device.\n\nOur next media query, then, will make some changes to match a different ratio, the major sixth (3:5), which is same as that of many smartphones:\n\n/* 480px = 30\u00d716 */\n@media only screen and (min-width: 30em) and (device-aspect-ratio: 3/5) { }\n\nA different aspect ratio might require a change in harmony. For devices with these proportions, we\u2019ll now use the perfect fourth (3:4) and the major sixth (3:5) along with the octave we already have to create a new resonating harmony. For instance, a slightly wider screen means we can increase the line-height to aid the legibility of longer lines:\n\nhtml { line-height: 1.667; }\n /* font-size:line-height = 16:26.672 = 3:5 = major sixth */\n\nh1 { font-size: 32px; font-size: 2rem; line-height: 1.667; }\n /* font-size:line-height = 32:53.333 = 3:5 = major sixth\n body:h1 = 16:32 = 1:2 = octave */\n\nand we can remove the negative margin to display our 4:3 image in its entirety.\n\nEach screen displays content styled using relationships related to its own proportions. On the left, an iPhone 4 (2:3); on the right, a Samsung Nexus S (3:5). Your mileage may vary.\n\nAnother device, another media query. At 768 pixels, screens are wide enough to add columns. The ratios we\u2019ve used for the 3:5 screens include the perfect fourth (3:4) so we don\u2019t need to change any of the font measurements, but we can base the proportions of the columns on the major sixth interval:\n\narticle { float: left; width: 56%; }\n /* width of main column 3:5 (60% of 100%, major sixth)\n incorporating gutter width */\n\naside { float : right; width : 36%; }\n\nOn devices with a 3:4 aspect ratio, this works even better in landscape orientation.\n\nWhile not every screen over 768 pixels wide will have 3:4 proportions, the range of intervals informing the design ensure harmonious relationships between the different parts of the layout.\n\nFor wide screens proper (break point at 1,280 pixels) we can employ a new set of harmonious intervals. Many laptop and desktop screens have a 16:10 aspect ratio, which boils down to 8:5, equivalent to the minor sixth (5:8). Combined with a minor third (5:6) and the octave (1:2), this creates a new harmony appropriate to these devices. 
Let\u2019s increase the font size and change the image\u2019s aspect ratio to match:\n\nhtml { font-size: 120%; line-height: 1.6; }\n /* font-size increased for wider screens from 16px to 19.2px\n (5:6 = minor third)\n font-size:line-height = 19.2:30.72 = 5:8 = minor sixth */\n\nfigure img { margin: -12.5% 0 0 ; }\t\n /* using -ve margin combined with overflow:hidden\n on the figure element\n to crop the image from 4:3 to 8:5 = minor sixth */\n\nA wide screen with a 8:5 (16:10) aspect ratio and an image to match.\n\nWith more pixels at our disposal, we can also now use the musical interval ratios to determine the width of the layout, and change the column proportions as well:\n\nsection { margin: 0 auto; width: 83.333%; }\n /* content width:screen width = 5:6 = minor third */\n\narticle { width: 60%; }\n /* width of main column 5:8 (62.5% of 100%, minor sixth)\n incorporating gutter width */\n\naside { width: 35%; }\n\nWith some carefully targeted media queries, we can begin to reach towards fulfilling the second and third tenets of Boulton\u2019s new canon for web design: connecting the parts of content through relationships embodied in the layout design; and binding the content to the devices people use to access it.\n\nCoda\n\nMusical interval ratios and screen aspect ratios reveal more than convenient correspondence. These proportions work on a deep aesthetic level. Much is claimed for the golden ratio \u03c6, but none of the screens pervading our lives use it. Perhaps that\u2019s an accident of technology, but can making screens to \u03c6\u2019s proportions be more difficult or expensive than 2:3 or 3:4 or 16:10? Here, then, is not just one but a set of proportions with a uniquely human focus, originating in nature, recognized in antiquity, fundamental still.\n\nWe find music to be an art steeped with meaning, yet, unlike literary and representational arts, purely instrumental music has no obvious semantic content. It boasts an ability to express emotions while remaining an abstract art in some sense, which makes it very like design. These days, we\u2019re rightly encouraged to design for emotion, to make our users\u2019 experience meaningful, seductive, delightful. Using musical ideas and principles in our designs can help achieve those ends.\n\nLet\u2019s not be na\u00efve, of course; designing web pages is even less like composing music than it\u2019s like designing for print. In visual design, the eye will always be sovereign to the ear; following these principles will only get us so far. We cannot truly claim that a carefully composed web page layout will have the same qualities and effect as any musical patterns that inform it. In music, a set of intervals is always harmonious in relation to other sets of intervals: music rarely stands still. What aspect ratios will future screens take? Already today there is great variation in devices and support for media queries (and within that, support for device-aspect-ratio). And what of non-western musical traditions? Or rhythm, form, tempo and dynamics? What I\u2019ve demonstrated above is only a suggestion, a tentative exploration of one possible way forward.\n\nBut as our discipline matures and we become more articulate about what we do, we must look longer and deeper into areas of human endeavour already rich with value. 
Music is a fertile ground to explore and has the potential to yield up new approaches for web design.\n\nFootnotes\n\n \n Just intonation is a system of tuning that uses small integers to describe the musical intervals, based initially on the perfect fifth, that most consonant of intervals. Simple instruments such as vibrating strings and natural horns, as well as unaccompanied voices, tend to fall into just intonation naturally.", "year": "2011", "author": "Owen Gregory", "author_slug": "owengregory", "published": "2011-12-09T00:00:00+00:00", "url": "https://24ways.org/2011/composing-the-new-canon/", "topic": "design"} {"rowid": 311, "title": "Designing Imaginative Style Guides", "contents": "(Living) style guides and (atomic) patterns libraries are \u201call the rage,\u201d as my dear old Nana would\u2019ve said. If articles and conference talks are to be believed, making and using them has become incredibly popular. I think there are plenty of ways we can improve how style guides look and make them better at communicating design information to creatives without it getting in the way of information that technical people need.\nGuides to libraries of patterns\nMost of my consulting work and a good deal of my creative projects now involve designing style guides. I\u2019ve amassed a huge collection of brand guidelines and identity manuals as well as, more recently, guides to libraries of patterns intended to help designers and developers make digital products and websites.\nTwo pages from one of my Purposeful style guide packs. Designs \u00a9 Stuff & Nonsense.\n\u201cStyle guide\u201d is an umbrella term for several types of design documentation. Sometimes we\u2019re referring to static style or visual identity guides, other times voice and tone. We might mean front-end code guidelines or component/pattern libraries. These all offer something different but more often than not they have something in common. They look ugly enough to have been designed by someone who enjoys configuring a router.\nOK, that was mean, not everyone\u2019s going to think an unimaginative style guide design is a problem. After all, as long as a style guide contains information people need, how it looks shouldn\u2019t matter, should it?\nInspiring not encyclopaedic\nWell here\u2019s the thing. Not everyone needs to take the same information away from a style guide. If you\u2019re looking for markup and styles to code a \u2018media\u2019 component, you\u2019re probably going to be the technical type, whereas if you need to understand the balance of sizes across a typographic hierarchy, you\u2019re more likely to be a creative. What you need from a style guide is different.\nSure, some people1 need rules:\n\n\u201cDo this (responsive pattern)\u201d or \u201cdon\u2019t do that (auto-playing video.)\u201d \n\nThose people probably also want facts:\n\n\u201cUse this (hexadecimal value)\u201d and not that inaccessible colour combination.\u201d\n\nStyle guides need to do more than list facts and rules. They should demonstrate a design, not just document its parts. The best style guides are inspiring not encyclopaedic. I\u2019ll explain by showing how many style guides currently present information about colour. \nColours communicate\nI\u2019m sure you\u2019ll agree that alongside typography, colour\u2019s one of the most important ingredients in a design. Colour communicates personality, creates mood and is vital to an easily understandable interactive vocabulary. 
So you\u2019d think that an average style guide would describe all this in any number of imaginative ways. Well, you\u2019d be disappointed, because the most inspiring you\u2019ll find looks like a collection of chips from a paint chart.\n\nLonely Planet\u2019s Rizzo does a great job of separating its Design Elements from UI Components, and while its \u2018Click to copy\u2019\u00a0colour values are a thoughtful touch, you\u2019ll struggle to get a feeling for Lonely Planet\u2019s design by looking at their colour chips.\nLonely Planet\u2019s Rizzo style guide.\nLonely Planet approach is a common way to display colour information and it\u2019s one that you\u2019ll also find at Greenpeace, Sky, The Times and on countless more style guides.\n\n\n\n\n\nGreenpeace, Sky and The Times style guides.\nGOV.UK\u2014not a website known for its creative flair\u2014varies this approach by using circles, which I find strange as circles don\u2019t feature anywhere else in its branding or design. On the plus side though, their designers have provided some context by categorising colours by usage such as text, links, backgrounds and more.\nGOV.UK style guide.\nGoogle\u2019s Material Design offers an embarrassment of colours but most helpfully it also advises how to combine its primary and accent colours into usable palettes.\nGoogle\u2019s Material Design.\nWhile the ability to copy colour values from a reference might be all a technical person needs, designers need to understand why particular colours were chosen as well as how to use them. \nInspiration not documentation\nFew style guides offer any explanation and even less by way of inspiring examples. Most are extremely vague when they describe colour:\n\n\u201cUse colour as a presentation element for either decorative purposes or to convey information.\u201d\n\nThe Government of Canada\u2019s Web Experience Toolkit states, rather obviously.\n\n\u201cCertain colors have inherent meaning for a large majority of users, although we recognize that cultural differences are plentiful.\u201d\n\nSalesforce tell us, without actually mentioning any of those plentiful differences. \nI\u2019m also unsure what makes the Draft U.S. Web Design Standards colours a \u201cdistinctly American palette\u201d but it will have to work extremely hard to achieve its goal of communicating \u201cwarmth and trustworthiness\u201d now. \nIn Canada, \u201cbold and vibrant\u201d colours reflect Alberta\u2019s \u201cdiverse landscape.\u201d \nAdding more colours to their palette has made Adobe \u201crich, dynamic, and multi-dimensional\u201d and at Skype, colours are \u201cbold, colourful (obviously) and confident\u201d although their style guide doesn\u2019t actually provide information on how to use them.\nThe University of Oxford, on the other hand, is much more helpful by explaining how and why to use their colours:\n\n\u201cThe (dark) Oxford blue is used primarily in general page furniture such as the backgrounds on the header and footer. This makes for a strong brand presence throughout the site. Because it features so strongly in these areas, it is not recommended to use it in large areas elsewhere. However it is used more sparingly in smaller elements such as in event date icons and search/filtering bars.\u201d\n\nOpenTable style guide.\nThe designers at OpenTable have cleverly considered how to explain the hierarchy of their brand colours by presenting them and their supporting colours in various size chips. 
It\u2019s also obvious from OpenTable\u2019s design which colours are primary, supporting, accent or neutral without them having to say so.\nArt directing style guides\nFor the style guides I design for my clients, I go beyond simply documenting their colour palette and type styles and describe visually what these mean for them and their brand. I work to find distinctive ways to present colour to better represent the brand and also to inspire designers. \nFor example, on a recent project for SunLife, I described their palette of colours and how to use them across a series of art directed pages that reflect the lively personality of the SunLife brand. Information about HEX and RGB values, Sass variables and when to use their colours for branding, interaction and messaging is all there, but in a format that can appeal to both creative and technical people.\nSunLife style guide. Designs \u00a9 Stuff & Nonsense.\nPurposeful style guides\nIf you want to improve how you present colour information in your style guides, there\u2019s plenty you can do.\nFor a start, you needn\u2019t confine colour information to the palette page in your style guide. Find imaginative ways to display colour across several pages to show it in context with other parts of your design. Here are two CSS gradient filled \u2018cover\u2019 pages from my Purposeful style sheets.\nColour impacts other elements too, including typography, so make sure you include colour information on those pages, and vice-versa.\nPurposeful. Designs \u00a9 Stuff & Nonsense.\nA visual hierarchy can be easier to understand than labelling colours as \u2018primary,\u2019 \u2018supporting,\u2019 or \u2018accent,\u2019 so find creative ways to present that hierarchy. You might use panels of different sizes or arrange boxes on a modular grid to fill a page with colour.\nDon\u2019t limit yourself to rectangular colour chips, use circles or other shapes created using only CSS. If irregular shapes are a part of your brand, fill SVG silhouettes with CSS and then wrap text around them using CSS shapes. \nPurposeful. Designs \u00a9 Stuff & Nonsense.\nSumming up\nIn many ways I\u2019m as frustrated with style guide design as I am with the general state of design on the web. Style guides and pattern libraries needn\u2019t be dull in order to be functional. In fact, they\u2019re the perfect place for you to try out new ideas and technologies. There\u2019s nowhere better to experiment with new properties like CSS Grid than on your style guide.\nThe best style guide designs showcase new approaches and possibilities, and don\u2019t simply document the old ones. Be as creative with your style guide designs as you are with any public-facing part of your website.\n\nPurposeful are HTML and CSS style guides templates designed to help you develop creative style guides and pattern libraries for your business or clients. Save time while impressing your clients by using easily customisable HTML and CSS files that have been designed and coded to the highest standards. Twenty pages covering all common style guide components including colour, typography, buttons, form elements, and tables, plus popular pattern library components. 
Purposeful style guides will be available to buy online in January.\n\n\n\n\nBoring people\u00a0\u21a9", "year": "2016", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2016-12-13T00:00:00+00:00", "url": "https://24ways.org/2016/designing-imaginative-style-guides/", "topic": "design"} {"rowid": 325, "title": "\"Z's not dead baby, Z's not dead\"", "contents": "While Mr. Moll and Mr. Budd have pipped me to the post with their predictions for 2006, I\u2019m sure they won\u2019t mind if I sneak in another. The use of positioning together with z-index will be one of next year\u2019s hot techniques \n\nBoth has been a little out of favour recently. For many, positioned layouts made way for the flexibility of floats. Developers I speak to often associate z-index with Dreamweaver\u2019s layers feature. But in combination with alpha transparency support for PNG images in IE7 and full implementation of position property values, the stacking of elements with z-index is going to be big. I\u2019m going to cover the basics of z-index and how it can be used to create designs which \u2018break out of the box\u2019.\n\nNo positioning? No Z!\n\nRemember geometry? The x axis represents the horizontal, the y axis represents the vertical. The z axis, which is where we get the z-index, represents /depth/. Elements which are stacked using z-index are stacked from front to back and z-index is only applied to elements which have their position property set to relative or absolute. No positioning, no z-index. Z-index values can be either negative or positive and it is the element with the highest z-index value appears closest to the viewer, regardless of its order in the source. Furthermore, if more than one element are given the same z-index, the element which comes last in source order comes out top of the pile. \n\nLet\u2019s take three
<div>s.\n\n<div id=\"one\"></div>\n<div id=\"two\"></div>\n<div id=\"three\"></div>\n\n #one { \n position: relative; \n z-index: 3;\n }\n\n #two { \n position: relative; \n z-index: 1;\n }\n\n #three {\n position: relative; \n z-index: 2;\n }\n\n\n\nAs you can see, the <div> with the z-index of 3 will appear closest, even though it comes before its siblings in the source order. As these three <div>s have no defined positioning context in the form of a positioned parent such as a <div>, their stacking order is defined from the root element <html>. Simple stuff, but these building blocks are the basis on which we can create interesting interfaces (particularly when used in combination with image replacement and transparent PNGs).\n\nBrand building\n\nNow let\u2019s take three more basic elements, an <h1>, <blockquote> and <p>, all inside a branding <div>
which acts as a new positioning context. By enclosing them inside a positioned parent, we establish a new stacking order which is independent of either the root element or other positioning contexts on the page.\n\n<div id=\"branding\">\n<h1>Worrysome.com</h1>\n<blockquote><p>Don' worry 'bout a thing...</p></blockquote>\n<p>Take the weight of the world off your shoulders.</p>\n</div>
\n\nApplying a little positioning and z-index magic we can both set the position of these elements inside their positioning context and their stacking order. As we are going to use background images made from transparent PNGs, each element will allow another further down the stacking order to show through. This makes for some novel effects, particularly in liquid layouts.\n\n(Ed: We\u2019re using n below to represent whatever values you require for your specific design.) \n\n#branding {\n position: relative;\n width: n;\n height: n;\n background-image: url(n);\n }\n\n #branding>h1 {\n position: absolute;\n left: n;\n top: n;\n width: n;\n height: n;\n background-image: url(h1.png);\n text-indent: n;\n }\n\n #branding>blockquote {\n position: absolute;\n left: n;\n top: n;\n width: n;\n height: n;\n background-image: url(bq.png);\n text-indent: n;\n\n }\n\n #branding>p {\n position: absolute;\n right: n;\n top: n;\n width: n;\n height: n;\n background-image: url(p.png);\n text-indent: n;\n }\n\nNext we can begin to see how the three elements build upon each other.\n\n\n1. Elements outlined\n\n\n2. Positioned elements overlayed to show context\n\n\n3. Our final result\n\nMultiple stacking orders\n\nNot only can elements within a positioning context be given a z-index, but those positioning contexts themselves can also be stacked.\n\n\nTwo positioning contexts, each with their own stacking order\n\nInterestingly each stacking order is independent of that of either the root element or its siblings and we can exploit this to make complex layouts from just a few semantic elements. This technique was used heavily on my recent redesign of Karova.com.\n\nDissecting part of Karova.com\n\nFirst the XHTML. The default template markup used for the site places
<div id=\"content\"> and <div id=\"nav_main\"> as siblings inside their container.\n\n<div id=\"content\">\n...\n</div>\n<div id=\"nav_main\">\n...\n</div>\n\nBy giving the navigation <div> a lower z-index than the content <div>
we can ensure that the positioned content elements will always appear closest to the viewer, despite the fact that the navigation comes after the content in the source.\n\n#content { \n position: relative; \n z-index: 2; \n }\n\n #nav_main { \n position: absolute; \n z-index: 1; \n }\n\nNow applying absolute positioning with a negative top value to the

<h2> and a higher z-index value than the following <p>
ensures that the header sits not only on top of the navigation but also the styled paragraph which follows it.\n\nh2 {\n position: absolute;\n z-index: 200;\n top: -n;\n }\n\n h2+p {\n position: absolute;\n z-index: 100;\n margin-top: -n;\n padding-top: n;\n }\n\n\nDissecting part of Karova.com\n\nYou can see the full effect in the wild on the Karova.com site.\n\nHave a great holiday season!", "year": "2005", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2005-12-16T00:00:00+00:00", "url": "https://24ways.org/2005/zs-not-dead-baby-zs-not-dead/", "topic": "design"} {"rowid": 329, "title": "Broader Border Corners", "contents": "A note from the editors: Since this article was written the CSS border-radius property has become widely supported in browsers. It should be preferred to this image technique.\n \n \n \n A quick and easy recipe for turning those single-pixel borders that the kids love so much into into something a little less right-angled.\n\nHere\u2019s the principle: We have a box with a one-pixel wide border around it. Inside that box is another box that has a little rounded-corner background image sitting snugly in one of its corners. The inner-box is then nudged out a bit so that it\u2019s actually sitting on top of the outer box. If it\u2019s all done properly, that little background image can mask the hard right angle of the default border of the outer-box, giving the impression that it actually has a rounded corner.\n\nTake An Image, Finely Chopped\n\n\n\nAdd A Sprinkle of Markup\n\n

\n

<div id=\"content\">\n<p>Lorem ipsum etc. etc. etc.</p>\n</div>
\n\nThrow In A Dollop of CSS\n\n#content { \n border: 1px solid #c03;\n}\n\n#content p {\n background: url(corner.gif) top left no-repeat;\n position: relative;\n left: -1px;\n top: -1px;\n padding: 1em;\n margin: 0;\n}\n\nBubblin\u2019 Hot\n\n\n\tThe content div has a one-pixel wide red border around it.\n\tThe paragraph is given a single instance of the background image, created to look like a one-pixel wide arc.\n\tThe paragraph is shunted outside of the box \u2013 back one pixel and up one pixel \u2013 so that it is sitting over the div\u2019s border. The white area of the image covers up that part of the border\u2019s corner, and the arc meets up with the top and left border.\n\tBecause, in this example, we\u2019re applying a background image to a paragraph, its top margin needs to be zeroed so that it starts at the top of its container.\n\n\nEt voil\u00e0. Bon app\u00e9tit.\n\nExtra Toppings\n\n\n\tIf you want to apply a curve to each one of the corners and you run out of meaningful markup to hook the background images on to, throw some spans or divs in the mix (there\u2019s nothing wrong with this if that\u2019s the effect you truly want \u2013 they don\u2019t hurt anybody) or use some nifty DOM Scripting to put the scaffolding in for you.\n\tNote that if you\u2019ve got more than one of these relative corners, you will need to compensate for the starting position of each box which is nested in an already nudged parent.\n\tYou\u2019re not limited to one pixel wide, rounded corners \u2013 the same principles apply to thicker borders, or corners with different shapes.", "year": "2005", "author": "Patrick Griffiths", "author_slug": "patrickgriffiths", "published": "2005-12-14T00:00:00+00:00", "url": "https://24ways.org/2005/broader-border-corners/", "topic": "design"} {"rowid": 330, "title": "An Explanation of Ems", "contents": "Ems are so-called because they are thought to approximate the size of an uppercase letter M (and so are pronounced emm), although 1em is actually significantly larger than this. The typographer Robert Bringhurst describes the em thus:\n\n\n\tThe em is a sliding measure. One em is a distance equal to the type size. In 6 point type, an em is 6 points; in 12 point type an em is 12 points and in 60 point type an em is 60 points. Thus a one em space is proportionately the same in any size.\n\n\nTo illustrate this principle in terms of CSS, consider these styles:\n\n#box1 {\n font-size: 12px;\n width: 1em;\n height: 1em;\n border:1px solid black;\n}\n\n#box2 {\n font-size: 60px;\n width: 1em;\n height: 1em;\n border: 1px solid black;\n}\n\nThese styles will render like:\n\n M\n\nand\n\n M\n\nNote that both boxes have a height and width of 1em but because they have different font sizes, one box is bigger than the other. Box 1 has a font-size of 12px so its width and height is also 12px; similarly the text of box 2 is set to 60px and so its width and height are also 60px.", "year": "2005", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2005-12-02T00:00:00+00:00", "url": "https://24ways.org/2005/an-explanation-of-ems/", "topic": "design"} {"rowid": 9, "title": "How to Write a Book", "contents": "Were you recently inspired to write a book after reading Owen Gregory\u2019s compendium of author insights? Maybe so inspired to strike out on your own and self-publish? \n\nBased on personal experience, writing a book is hard. It requires a great deal of research, experience, and patience. 
To be able to consolidate your thoughts and what you\u2019ve learned into a sensible and readable tome is an admirable feat. To decide to self-publish and take on yourself all of the design, printing, distribution, and so much more is tantamount to insanity. Again, based on personal experience.\n\nSo, why might you want to self-publish?\n\nIf you\u2019ve spent many a late night doing cross-browser testing just to know that your site works flawlessly in twenty-four different browsers \u2014 including Mosaic, of course \u2014 then maybe you\u2019ll understand the fun that comes from doing it all.\n\nWorking with a publisher, you\u2019re left to focus on one core thing: writing. That\u2019s a good thing. A good publisher has the right resources to help you get your idea polished and the distribution network to get your book on store shelves around the world. It\u2019s a very proud moment to be able to walk into a book store and see your book sitting there on the shelf.\n\nSelf-publishing can also be a wonderful process as you get to own it from beginning to end. Every decision is yours and if you\u2019re a control freak like me, this can be a very rewarding experience. \n\nWhile there are many aspects to self-publishing, I\u2019m going to speak to just one of them: creating an ebook.\n\nFormats \n\nIn creating an ebook, you first need to decide what formats you wish to support. There are three main formats, each with their own pros and cons:\n\n\n\tPDF\n\tEPUB\n\tMOBI\n\n\nPDFs are supported on almost every device (Windows, Mac, Kindle, iPad, Android, etc.) and can even be a stepping stone to creating a print version of your book. PDFs allow for full typographic and design control, but at the cost of needing to fit things into a predefined page layout. Is it US Letter or A4? Or is it a format that isn\u2019t easily printed by readers on their home printers?\n\nEPUB is a more fluid format that is supported by the Apple iPad, iPhone, and now on the desktop with OS X Mavericks. It\u2019s also supported by Google Play for Android devices. While EPUB is supported on other devices, you\u2019re likely to choose EPUB because you\u2019re targeting your book at the Apple audience. The EPUB format is HTML-based with support for some CSS and even video and interactive elements. You can create very rich and exciting experiences using the EPUB format that just aren\u2019t possible with PDF or MOBI. However, if you decide to support multiple file formats, you\u2019ll likely find \u2014 as I did \u2014 that a consistent experience between all formats is easier to build and maintain, and therefore the extra benefits of interactivity go out the window.\n\nMOBI is a format originally developed for the Mobipocket Reader but more popularly supported by the Amazon Kindle. If you\u2019re looking to attract the Kindle audience or publish to Amazon via the Kindle Direct Publishing platform then the HTML-based MOBI format is the format you\u2019ll want to go with. \n\nDistribution will probably factor in heavily with what format you decide to go with. Many people I know who self-publish go with PDF only due to its ubiquity. 
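If PDF is the primary format, the predefined page layout question raised above (US Letter or A4?) is usually settled in CSS paged media, the feature set that CSS-to-PDF formatters understand. What follows is a minimal sketch only, with illustrative values rather than anything prescriptive:\n\n@page {\n size: A4; /* or: size: letter; */\n margin: 20mm;\n @bottom-center {\n content: counter(page); /* running page numbers */\n }\n}\n\n@page :left { margin-right: 25mm; }\n@page :right { margin-left: 25mm; }\n /* alternating margins leave room for binding */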
\n\nIf you want to garner a wider audience by distributing via Amazon or the iBookstore then you\u2019ll need to think about supporting all three formats (as I did).\n\nWhat tools should I use?\n\nI spent a lot of time figuring out the right toolset and finally got something that suits me just right.\n\nIn the past, when working with a publisher, I was given a Microsoft Word template that was passed back and forth between myself, the editor, and tech reviewer. This template has been the bane of any book writer that I\u2019ve spoken to. Not every publisher is like that, though. Some publishers, like O\u2019Reilly, use DocBook, an XML-based format that can be converted into PDF, EPUB, and MOBI.\n\nPublishers already have a style guide and whether it\u2019s DocBook or a Word template, they have the tools already in place to easily convert your work into multiple formats.\n\nSelf-publishing means that you\u2019ll likely have to do a lot of tweaking to get things looking and working the way you want them to. I tried DocBook and the open source export tools didn\u2019t create HTML to my liking. Fixing even the most mundane things required fiddling with XSL transformations for hours on end. Not the way I like to spend my time. I can only imagine the hoops I would\u2019ve had to go through to get a PDF to look half-decent.\n\nTools like Pages or Scrivener offer up the ability to publish to multiple formats, too, but none offered me the control over the output that I truly desired. Have a mentioned that I\u2019m a control freak?\n\nI ended up writing my book using a technology that I already knew quite well: HTML. By writing in HTML, I already had something that I could post on my website, use for the EPUB and use for the MOBI format. All without having to change a thing. (That\u2019s right: the same HTML that is used on SMACSS.com is used in the EPUB and is used in the MOBI.)\n\nWhat about PDF? I could open up the HTML in a web browser, choose Save as PDF and be done with it but let\u2019s face it: the filename and date attached to every single page doesn\u2019t exactly scream professional. Web browsers actually do a surprisingly poor job with supporting the CSS paged media spec.\n\nI had resorted to copying and pasting the content into Pages and saving as PDF from there. It wasn\u2019t elegant but it worked. However, any changes to my HTML source required redoing those changes in Pages, as well. \n\nThen I met my Prince Charming: Prince XML. It\u2019s pricey but it works incredibly well. It takes HTML and CSS (that very format I\u2019ve been using for all of my other file formats) and will generate a PDF via a command line interface. Prince supports CSS paged media including headers, footers, page counts, and alternating page styles. \n\nFrom one format, HTML, I can now easily publish to PDF, MOBI, and EPUB, and even my website. I use the PDF version to send to the printer along with cover art to be bound and ready to ship around the world. It\u2019s amazing how versatile HTML (and CSS) is.\n\nTo learn more about writing books with HTML and CSS, I recommend reading Building Books with CSS3 over at A List Apart.\n\nCreating an EPUB\n\nLet\u2019s take a step back. Prince gets us from HTML to PDF but how do we make an EPUB out of the HTML? \n\nAn EPUB file is essentially a ZIP file with a renamed extension. There are some core files that you need to start with:\n\nRoot\n META-INF\n container.xml\n mimetype\n content.opf\n toc.ncx\n\nAfter that, you can start adding your content to the project. 
Be sure to update the toc.ncx (Table of Contents) and content.opf (the ebook manifest) with any changes you make to your project.\n\nYou can learn more about the file formats with the EPUB Format Construction Guide.\n\nOnce all your files are in place, you\u2019ll need to create the EPUB file by running two commands (on OS X, at least):\n\nzip -X0 your-ebook.epub mimetype\nzip -Xur9D your-ebook.epub *\n\nThe mimetype needs to be the first file inside the ZIP file and therefore gets added first. Then, the rest of the files are added. \n\nI\u2019ve added a function to my .bash_profile to make this even easier:\n\nfunction epub()\n{\n zip -q0X $@ mimetype; zip -qXr9D $@ *\n}\n\nThen, within the folder from which I want to create an ebook, I just run epub your-ebook.epub from the Terminal command line and the EPUB file should be ready to go.\n\nCreating the MOBI\n\nWe have our EPUB and we have our PDF. The last step is the MOBI file. For this, I call upon Calibre. Calibre can be used as a reader and as a library but I use it exclusively to export my EPUB files to MOBI. \n\nCalibre includes a command line utility to convert from EPUB to MOBI. (To install the command line tools, go to Preferences > Advanced > Miscellaneous and click Install Command Line Tools.)\n\nebook-convert your-ebook.epub your-ebook.mobi \n\nSpread the joy\n\nNow that you have all of your different file formats, you need to get them into the hands of people who want to (ho-ho-hopefully) buy your book!\n\nThere are a number of marketplaces such as Amazon\u2019s Kindle Direct Publishing, iBookstore, Google Play, and NOOK Press.\n\nSome publishers, like PragProg and O\u2019Reilly will also add self-published books to their roster if they feel it\u2019s a good fit for their audience.\n\nWith any distribution, you\u2019ll have to give up a percentage of your sales\u2014from 30% to 70% of each sale, so consider your options wisely.\n\nOf course, you can always open your own online store and reap as much of the revenue as possible, assuming you can get the traffic to your site. Handling your own distribution allows you to create a deeper one-on-one connection with your customers, something that is impossible with other distribution channels since you don\u2019t get customer information through other services\u2014even though you are giving them a huge chunk of your sales!\n\nGo forth and prosper\n\nThere\u2019s a lot of thought and time that goes into writing a book and just as much thought and time can go into creating, publishing, and marketing your book once you\u2019re done. \n\nIn the end, self-publishing can be a very rewarding process and well worth the time that goes into it.", "year": "2013", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2013-12-19T00:00:00+00:00", "url": "https://24ways.org/2013/how-to-write-a-book/", "topic": "content"} {"rowid": 19, "title": "In Their Own Write: Web Books and their Authors", "contents": "The currency of written communication \u2014 words on the page, words on the screen \u2014 comprises many denominations. To further our ends in web design and development, we freely spend and receive several: tweets aphoristic and trenchant, banal and perfunctory; blog posts and articles that call us to action or reflection; anecdotes, asides, comments, essays, guides, how-tos, manuals, musings, notes, opinions, stories, thoughts, tips pro and not-so-pro. 
So many, many words.\n\nOur industry (so much more than this, but what on earth are we, collectively?), our community thrives on writing and sharing knowledge and experience. 24 ways is a case in point. Everyone can learn and contribute through reading and writing \u2014 it\u2019s what we\u2019ve always done.\n\nTo web authors and readers seeking greater returns, though, broader culture has vouchsafed an enduring and singular artefact: the book.\n\nLast month I asked a small sample of web book authors if they would be prepared to answer a few questions; most of them kindly agreed. In spirit, the survey was informal: I had neither hypothesis nor unground axe. I work closely with writers \u2014 and yes, I\u2019ve edited or copy-edited books by several of the authors I surveyed \u2014 and wanted to share their thoughts about what it was like to write a book (\u201c\u2026it was challenging to find a coherent narrative\u201d), why they did it (\u201cWho wouldn\u2019t want to?\u201d) and what they learned from the experience (\u201cThat I could!\u201d).\n\nReasons for writing a book\n\nIn web development the connection between authors and readers is unusually close and immediate. Working in our medium precipitates a unity that\u2019s rare elsewhere. Yet writing and publishing a book, even during the current books revolution, is something only a few of us attempt and it remains daunting and a little remote. What spurs an author to try it? For some, it\u2019s a deeply held resistance to prevailing trends:\n\nI felt that designers and developers needed to be shaken out of what seemed to me had been years of stagnation.\n\u2014Andrew Clarke\n\n\nOr even a desire to protect us from ourselves:\n\nI felt that without a book that clearly defined progressive enhancement in a very approachable and succinct fashion, the web was at risk. I was seeing Tim Berners-Lee\u2019s vision of universal availability slip away\u2026\n\u2014Aaron Gustafson\n\n\nSometimes, there\u2019s a knowledge gap to be filled by an author with the requisite excitement and need to communicate. Jon Hicks took his \u201cpet subject\u201d and was \u201centhused enough to want to spend all that time writing\u201d, particularly because:\n\n\n\t\u2026there was a gap in the market for it. No one had done it before, and it\u2019s still on its own out there, with no competition. It felt like I was able to contribute something.\n\n\nCennydd Bowles felt a professional itch at a particular point in his career, understanding that\n\n\n\t[a]s a designer becomes more senior, they start looking for ways to scale the effects of their work. For some, that leads into management. For others, into writing.\n\n\nOften, though, it\u2019s also simply a personal challenge and ambition to explore a subject at length and create something substantial. Anna Debenham describes a motivation shared by several authors:\n\nTo be able to point to something more tangible than an article and be able to say \u201cI did that.\u201d\n\n\nThat sense of a book\u2019s significance, its heft and gravity even, stems partly from the cultural esteem which honours books and their authors. Books have a long history as sources of wisdom, truth and power. 
Even with more books being published each year than ever before, writing one is still commonly considered a laudable achievement, including in our field.\n\nChallenges of writing a book\n\nReceived wisdom has it that writing online should be brief and chunky and approachable: get to the point; divide it all up; subheadings and lists are our friends; write like you\u2019re talking; no one has time to read. Much of such advice is true. Followed well, it lends our writing punch and pith, vigour and vim. The web is nimble, the web keeps up, and it suits what we write about developing for it. It\u2019s perfect for delivering our observations, queries and investigations into all the various aspects of the work, professional and personal.\n\nYet even for digital natives like web authors, books printed and electronic retain an attractive glister. \n\nIdeas can be developed more fully, their consequences explored to greater depth and extended with more varied examples, and the whole conveyed with more eloquence, more style. Why shouldn\u2019t authors delay their conclusions if the intervening text is apposite, rich with value and helps to flesh out the skeleton of an argument? Conclusions might or might not be reached, of course, but a writer is at greater liberty in a book to digress in tangential and interesting ways.\n\nWriting a book involves committing time, energy, thought and money. As Brian Suda found, it can be tough \u201cgetting the ideas out of my head into a cohesive blob of text.\u201d Some authors end up talking to themselves\u2026\n\nIt helps me to keep a real person in mind, someone who I\u2019m talking to as I write. Sometimes I have the same conversations over and over in my head.\n\u2014Andrew Clarke\n\n\n\u2026while others are thinking ahead, concerned with how their book will be received:\n\nWould anyone want to read it? Would they care? Would it be respected by my peers?\n\u2014Joe Leech\n\n\nChallenges that arose time and again included \u201cstarting\u201d and \u201cgetting words on the page\u201d as well as \u201cknowing when to stop\u201d or \u201cletting go\u201d. Personal organization problems and those caused by publishers were also widely mentioned. Time loomed large. Making time, finding time. Giving up \u201csleep and some sanity\u201d and realizing \u201cit will take you far, far, far longer than you naively assumed\u201d. Importantly, writing time is time away from gainful employment: Aaron Gustafson found the hardest thing about writing a book to be \u201cthe loss of income while I was writing.\u201d\n\nPerils and pleasures of editing\n\nEditing, be it structural, technical or copy editing, is founded on reciprocity. Without openness and a shared belief that the book is worthwhile, work can founder in acrimony and mistrust. Editors are a book\u2019s first and most critical (in every sense) readers. Effective and perceptive editing makes a book as good as it can be, finding the book within the draft like sculpture reveals the statue in the stone.\n\nA good editor calls you out on poor assumptions and challenges you to really clarify your thinking. Whilst it can be difficult during the process to have your thinking challenged, it\u2019s always been worth it \u2014 for me personally \u2014 in the long run. 
A good editor also reins you in when you\u2019ve perhaps wandered off track or taken a little too long to make a point.\n\u2014Christopher Murphy\n\n\nAndy Croll found editing \u201call positive\u201d and Aaron Gustafson loves \u201cworking with a strong editor [\u2026] I want someone to tell it to me straight.\u201d But it can be a rollercoaster, \u201cboth terrifying and the real moment of elation\u201d. Mixed emotions during the editing process are common:\n\nIt was very uncomfortable! I knew it was making the work stronger, but it was awkward having my inconsistencies and waffle picked apart.\n\u2014Jon Hicks\n\n\nIt can be distressing to have written work looked over by a professional, particularly for first-time book authors whose expertise lies elsewhere:\n\nI was a little nervous because I don\u2019t consider myself a skilled writer \u2014 I never dreamed of becoming an author. I\u2019m a designer, after all.\n\u2014Geri Coady\n\n\nCommunication is key, particularly when it comes to checking or changing the author\u2019s words.\n\nI like a good banter between me and the tech editor \u2014 if we can have a proper argument in Word comments, that\u2019s great.\n\u2014Rachel Andrew\n\n\nBut if handled poorly, small battles can break out. Rachel Andrew again:\n\n\n\tHowever, having had plenty of times where the technical editor has done nothing more than give a cursory glance, I started to leave little issues in for them to spot. If they picked them up I knew they were actually testing the code and I could be sure the work was being properly tech edited. If they didn\u2019t spot them, I\u2019d find someone myself to read through and check it!\n\n\nA major concern for writers is that their voices will be altered, filtered, mangled or otherwise obscured by the editing process. Good copy editing must remain unnoticed while enhancing the author\u2019s voice in print. Donna Spencer appreciated the way her editor \u201ctidied up my work and made it a million times better, but left it sounding exactly like me.\u201d Similarly, Andrew Travers \u201cwas incredibly impressed at how well my editor tightened up my own writing without it feeling like another\u2019s voice\u201d and Val Head sums up the consensus that:\n\n\n\tthe editor was able to help me express what I was trying to say in a better way [\u2026] I want to have editors for everything now.\n\n\nAt the keyboard, keep your friends close, but your editors closer.\n\nPublishing and publishers\n\nConditions ought to militate against the allure of writing a book about web design and development. More books are published each year than ever before, so readerships elude new authors and readers can struggle to find authors to trust in their fields of interest. New spaces for more expansive online writing about working on and with the web are opening up (sites like Contents Magazine and STET), and seminal online web development texts are emerging. Publishing online is simple, far-reaching and immediate.\n\nMuch more so than articles and blog posts, books take time to research, write and read; add the complexity of commissioning, editing, designing, proofreading, printing, marketing and distribution processes, and it can take many months, even years to publish. 
The ceaseless headlong momentum of the web can leave articles more than a few weeks old whimpering in its wake, but updating them at least is straightforward; printed books about web development can depreciate as rapidly as the technology and techniques they describe, while retaining the \u201cterrifying permanence that print bestows: your opinions will follow you forever\u201d.\n\nSo much moves on, and becomes out of date. Companies featured get bought by larger companies and die, techniques improve and solutions featured become terribly out of date. Unlike a website, which could be updated continuously, a book represents the thinking \u2018at that time\u2019.\n\u2014Jon Hicks\n\n\nPublishers work hard to mitigate these issues, promoting new books and new authors, bringing authors and readers together under a trusted banner. When a publisher packages up and releases a writer\u2019s words, it confers a seal of approval and \u201cbadge of quality\u201d, very important to new authors.\n\nPublishers have other benefits to offer, from expert knowledge:\n\nMy publisher was extraordinarily supportive (and patient). Her expertise in my chosen subject was both a pressure (I didn\u2019t want to let her down) and a reassurance (if she liked it, I knew it was going to be fine).\n\u2014Andrew Travers\n\n\n\u2026to systems and support mechanisms set up specifically to encourage writers and publish books:\n\nWorking as a team means you\u2019re bringing in everyone\u2019s expertise.\n\u2014Chui Chui Tan\n\n\nAs a writer, the best part about writing for a publisher was the writing infrastructure offered.\n\u2014Christopher Murphy\n\n\nThere can be drawbacks, however, and the occasional horror story:\n\nWe were just one small package on a huge conveyor belt. The publisher\u2019s process ruled all.\n\u2014Cennydd Bowles\n\n\nIt\u2019s only looking back I realise how poorly some publishers treat writers \u2014 especially when the work is so poorly remunerated.My worst experience was when a publisher decided, after I had completed the book, that they wanted to push a different take on the subject than the brief I had been given. Instead of talking to me, they rewrote chunks of my words, turning my advice into something that I would never have encouraged. Ultimately, I refused to let the book go out under my name alone, and I also didn\u2019t really promote the book as I would have had to point out the things I did not agree with that had been inserted!\n\u2014Rachel Andrew\n\n\nSelf-publishing is now a realistic option for web authors, and can offers \u201ccomplete control over the end product\u201d as well as the possibility of earning more than a \u201cpathetic author revenue percentage\u201d. There can be substantial barriers, of course, as self-publishing authors must face for themselves the risks and challenges conventional publishers usually bear. Ideally, creating a book is a collaboration between author and publisher. Geri Coady found that \u201cworking with my publisher felt more like working with a partner or co-worker, rather than working for a boss.\u201d\n\nWise words\n\nSo, after meeting the personal costs of writing and publishing a web book \u2014 fear, uncertainty, doubt, typing (so much typing) \u2014 and then smelling the roses of success, what\u2019s left for an author to say? 
Some words, perhaps, to people thinking of writing a book.\n\nDonna Spencer identifies a stumbling block common to many writers with an insight into the writing process:\n\n\n\tHaving talked to a lot of potential authors, I think most have the problem that they haven\u2019t actually figured out the \u2018answer\u2019 to their premise yet. They feel like they are stuck in the writing, but they are actually stuck in the thinking.\n\n\nFor some no-nonsense, straightforward advice to cut through any anxiety or inadequacy, Rachel Andrew encourages authors to \u201ctreat it like any other work. There is no mystery to writing, you just have to write. Schedule the time, sit down, write words.\u201d Tim Brown notes the importance of the editing process to refine a book and help authors reach their readers:\n\n\n\tHire good editors. Editors are amazing thinkers who can vastly improve the quality and clarity of a piece of writing.\n\n\nWe are too much beholden to the practical demands and challenges of technology, so Aaron Gustafson suggests a writer should \u201cfavor philosophies over techniques and your book will have a longer shelf life.\u201d\n\nMost intimations of renown and recognition are nipped in the bud by Joe Leech\u2019s warning: \u201cDon\u2019t expect fame and fortune.\u201d Although Cennydd Bowles\u2019 bitter experience can be discouraging:\n\n\n\tThe sacrifices required are immense. You probably won\u2019t make it.\n\n\n\u2026he would do things differently for a future book:\n\n\n\tI would approach the book with [\u2026] far more concern about conveying the damn joy of what I do for a living.\n\n\nThe pleasure of writing, not just having written is captured by James Chudley when he recalls:\n\n\n\tHow much I enjoy writing and also how much I enjoy the discipline or having a side project like this. It\u2019s a really good supplement to working life.\n\n\nAnd Jon Hicks has words that any author will find comforting:\n\n\n\tIt will be fine. Everything will be fine. Just get on with it!\n\n\n\n\nAs the web expands effortlessly and ceaselessly to make room for all our words, yet it can also discourage the accumulation of any particular theme in one space, dividing rich seams and scattering knowledge across the web\u2019s surface and into its deepest reaches. How many words become weightless and insubstantial, signals lost in the constant white noise of indistinguishable voices, unloved, unlinked? The web forgets constantly, despite the (somewhat empty) promise of digital preservation: articles and data are sacrificed to expediency, profit and apathy; online attention, acknowledgement and interest wax and wane in days, hours even.\n\nBooks can encourage deeper engagement in readers, and foster faith in an author, particularly if released under the imprint of a recognized publisher within the field. And books are changing. Although still not widely adopted, EPUB3 is the new standard in ebooks, bringing with it new possibilities for interaction and connection: readers with the text; readers with readers; and readers with authors. EPUB3 is built on HTML, CSS and JavaScript \u2014 sound familiar? In the past, we took what we could from the printed page to make the web; now books are rubbing up against what we\u2019ve made.\n\nSo: a book.\n\nEver thought you could write one? Should write one? 
Would?\n\n\n\nI\u2019d like to thank all the authors who wrote their books and answered my questions.\n\n\n\tRachel Andrew \u00b7 CSS3 Layout Modules, The CSS3 Anthology and more\n\tCennydd Bowles \u00b7 Undercover User Experience Design, with James Box\n\tTim Brown \u00b7 Combining Typefaces\n\tJames Chudley \u00b7 Usability of Web Photos\n\tAndrew Clarke \u00b7 Hardboiled Web Design\n\tGeri Coady \u00b7 Colour Accessibility\n\tAndy Croll \u00b7 HTML Email\n\tAnna Debenham \u00b7 Front-end Style Guides\n\tAaron Gustafson \u00b7 Adaptive Web Design\n\tVal Head \u00b7 CSS Animations\n\tJon Hicks \u00b7 The Icon Handbook\n\tJoe Leech \u00b7 Psychology for Designers\n\tChristopher Murphy \u00b7 The Craft of Words, with Niklas Persson\n\tDonna Spencer \u00b7 Information Architecture, Card Sorting and How to Write Great Copy for the Web\n\tBrian Suda \u00b7 Designing with Data\n\tChui Chui Tan \u00b7 International User Research\n\tAndrew Travers \u00b7 Interviewing for Research", "year": "2013", "author": "Owen Gregory", "author_slug": "owengregory", "published": "2013-12-15T00:00:00+00:00", "url": "https://24ways.org/2013/web-books/", "topic": "content"} {"rowid": 24, "title": "Kill It With Fire! What To Do With Those Dreaded FAQs", "contents": "In the mid-1640s, a man named Matthew Hopkins attempted to rid England of the devil\u2019s influence, primarily by demanding payment for the service of tying women to chairs and tossing them into lakes.\n\nUnsurprisingly, his methods garnered criticism. Hopkins defended himself\u00a0in The Discovery of Witches\u00a0in 1647, subtitled \u201cCertaine Queries answered, which have been and are likely to be objected against MATTHEW HOPKINS, in his way of finding out Witches.\u201d\n\nEach \u201cquerie\u201d was written in the voice of an imagined detractor, and answered in the voice of an imagined defender (always referring to himself as \u201cthe discoverer,\u201d or \u201chim\u201d):\n\n\n\tQuer. 14.\n\n\tAll that the witch-finder doth is to fleece the country of their money, and therefore rides and goes to townes to have imployment, and promiseth them faire promises, and it may be doth nothing for it, and possesseth many men that they have so many wizzards and so many witches in their towne, and so hartens them on to entertaine him.\n\n\tAns.\n\n\tYou doe him a great deale of wrong in every of these particulars.\n\n\nHopkins\u2019 self-defense was an early modern English FAQ.\n\nDigital beginnings\n\nQuestion and answer formatting certainly isn\u2019t new, and stretches back much further than witch-hunt days. But its most modern, most notorious, most reviled incarnation is the internet\u2019s frequently asked questions page.\n\nFAQs began showing up on pre-internet mailing lists\u00a0as a way for list members to answer and pre-empt newcomers\u2019 repetitive questions:\n\n\n\tThe presumption was that new users would download archived past messages through ftp. In practice, this rarely happened and the users tended to post questions to the mailing list instead of searching its archives. Repeating the \u201cright\u201d answers becomes tedious\u2026\n\n\nWhen all the users of a system can hear all the other users, FAQs make a lot of sense: the conversation needs to be managed and manageable. FAQs were a stopgap for the technological limitations of the time.\n\nBut the internet moved past mailing lists. Online information can be stored, searched, filtered, and muted; we choose and control our conversations. 
New users no longer rely on the established community to answer their questions for them.\n\nAnd yet, FAQs are still around. They\u2019re a content anti-pattern, replicated from site to site to solve a problem we no longer have.\n\nWhat we hate when we hate FAQs\n\nAs someone who creates and structures online content \u2013 always with the goal of making that content as useful as possible to people \u2013 FAQs drive me absolutely batty. Almost universally, FAQs represent the opposite of useful. A brief list of their sins:\n\n\nDouble trouble\nDuplicated content is practically a given with FAQs. They\u2019re written as though they\u2019ll be accessed in a vacuum \u2013 but search results, navigation patterns, and curiosity ensure that users will seek answers throughout the site. Is our goal to split their focus? To make them uncertain of where to look? To divert them to an isolated microcosm of the website? Duplicated content means user confusion (to say nothing of the duplicated workload for maintaining content).\nLeaving the job unfinished\nMany FAQs fail before they\u2019re even out of the gate, presenting a list of questions that\u2019s incomplete (too short and careless to be helpful) or irrelevant (avoiding users\u2019 real concerns in favor of soundbites). Alternately, if the right questions are there, the answers may be convoluted, jargon-heavy, or otherwise difficult to understand.\nLong lists of not-my-question\nGetting a single answer often means sifting through a haystack of questions. For each potential question, the user must read, comprehend, assess, move on, rinse, repeat. That\u2019s a lot of legwork for little reward \u2013 and a lot of opportunity for mistakes. Users may miss their question, or they may fail to recognize a differently worded version of their question, or they may not notice when their sought-after answer appears somewhere they didn\u2019t expect.\nThe ventriloquist act\nFAQs shift the point of view. While websites speak on behalf of the organization (\u201cour products,\u201d \u201cour services,\u201d \u201cyou can call us for assistance,\u201d etc.), FAQs speak as the user \u2013 \u201cI can\u2019t find my password\u201d or \u201cHow do I sign up?\u201d Both voices are written from the first-person perspective, but speak for different entities, which is disorienting: it breaks the tone and messaging across the website. It\u2019s also presumptuous: why do you get to speak for the user?\n\n\nThese all underscore FAQs\u2019 fatal flaw: they are content without context, delivered without regard for the larger experience of the website. You can hear the absurdity in the name itself: if users are asking the same questions so frequently, then there is an obvious gulf between their needs and the site content. (And if not, then we have a labeling problem.)\n\nInstead of sending users to a jumble of maybe-it\u2019s-here-maybe-it\u2019s-not questions, the answers to FAQs should be found naturally throughout a website. They are not separated, not isolated, not other. They are\u00a0the content.\n\nTo present it otherwise is to create a runaround, and users know it. Jay Martel\u2019s parody, \u201cF.A.Q.s about F.A.Q.s\u201d\u00a0captures the silliness and frustration of such a system:\n\n\n\tQ: Why are you so rude?\n\n\tA: For that answer, you would have to consult an F.A.Q.s about F.A.Q.s about F.A.Q.s. 
But your time might be better served by simply abandoning your search for a magic answer and taking responsibility for your own profound ignorance.\n\n\nFAQs aren\u2019t magic answers. They don\u2019t resolve a content dilemma or even help users. Yet they keep cropping up, defiant, weedy, impossible to eradicate.\n\nWhere are they all coming from?\n\nBlame it on this: writing is hard. When generating content, most of us do whatever it takes to get some words on the screen. And the format of question and answer makes it easy: a reactionary first stab at content development.\n\nAfter all, the point of website content is to answer users\u2019 questions. So this \u2013 to give everyone credit \u2013 is a really good move. Content creators who think in terms of questions and answers are actually thinking of their users, particularly first-time users, trying to anticipate their needs and write towards them.\n\nIt\u2019s a good start. But it\u2019s scaffolding: writing that helps you get to the writing you\u2019re supposed to be doing. It supports you while you write your way to the heart of your content. And once you get there, you have to look back and take the scaffolding down.\n\nLeaving content in the Q&A format that helped you develop it is missing the point. You\u2019re not there to build scaffolding. You have to see your content in its naked purpose and determine the best method for communicating that purpose \u2013 and it usually won\u2019t be what got you there.\n\nThe goal (to borrow a lesson from content management systems) is to separate the content from its presentation, to let the meaning of the content inform its display.\n\nThis is, of course, a nice theory.\n\nAn occasionally necessary evil\n\nI have a lot of clients who adore FAQs. They\u2019ve developed their content over a long period of time. They\u2019ve listened to the questions their users are asking. And they\u2019ve answered them all on a page that I simply cannot get them to part with.\n\nWhich means I\u2019ve had to consider that there may be occasions where an FAQ page is appropriate.\n\nAs an example: one of my clients is a financial office in a large institution. Because this office manages several third-party systems that serve a range of niche audiences, they had developed FAQs that addressed hyper-specific instances of dysfunction within systems for different users \u2013 \u00e0 la \u201cI\u2019m a financial director and my employee submitted an expense report in such-and-such system and it returned such-and-such error. What do I do?\u201d\n\nYes, this content could be removed from the question format and rewritten. But I\u2019m not sure it would be an improvement. It won\u2019t necessarily resolve concerns about length and searchability, and the different audiences may complicate the delivery. And since the work of rewriting it didn\u2019t fit into the client workflow (small team, no writers, pressed for time), I didn\u2019t recommend the change.\n\nI\u2019ve had to make peace with not being to torch all the FAQs on the internet. Some content, like troubleshooting information or complex procedures, may be better in that format. 
It may be the smartest way for a particular client to handle that particular information.\n\nOf course, this has to be determined on a case-by-case basis, taking into account the amount of content, the subject matter, the skill levels of the content creators, the publishing workflow, and the search habits of the users.\n\nIf you determine that an FAQ page is the only way to go, ask yourself:\n\n\n\tIs there a better label or more specific term for the page (support, troubleshooting, product concerns, etc.)?\n\tIs there a way to structure the page, categorize the questions, or otherwise make it easier for users to navigate quickly to the answer they need?\n\tIs a question and answer format absolutely the best way to communicate this information?\n\n\nForm follows function\n\nJust as a question and answer format isn\u2019t necessarily required to deliver the content, neither is it an inappropriate method in and of itself. Content professionals have developed a knee-jerk reaction:\u00a0It\u2019s an FAQ page! Quick, burn it! Buuuuurn it!\n\nBut there\u2019s no inherent evil in questions and answers. Framing content in an interrogatory construct is no more a deal with the devil than subheads and paragraphs, or narrative arcs, or bullet points.\n\nYes, FAQs are riddled with communication snafus. They deserve, more often than not, to be tied to a chair and thrown into a lake. But that wouldn\u2019t fix our content problems. FAQs are a shiny and obvious target for our frustration, but they\u2019re not unique in their flaws. In any format, in any display, in any kind of page, weak content can rear its ugly, poorly written head.\n\nIt\u2019s not the Q&A that\u2019s to blame, it\u2019s bad content. Content without context will always fail users. That\u2019s the real witch in our midst.", "year": "2013", "author": "Lisa Maria Martin", "author_slug": "lisamariamartin", "published": "2013-12-08T00:00:00+00:00", "url": "https://24ways.org/2013/what-to-do-with-faqs/", "topic": "content"} {"rowid": 43, "title": "Content Production Planning", "contents": "While everyone agrees that getting the content of a website right is vital to its success, unless you\u2019re lucky enough to have an experienced editor or content strategist on board, planning content production often seems to fall through the cracks. One reason is that, for most of the team, it feels like someone else\u2019s problem. Not necessarily a specific person\u2019s problem. Just someone else\u2019s. It\u2019s only when everyone starts urgently asking when the content is going to be ready, that it becomes clear the answer is, \u201cNot as soon as we\u2019d like it\u201d.\n\nThe good news is that there are some quick and simple things you can do, even if you\u2019re not the official content person on a project, to get everyone on the same content planning page. 
\n\nContent production planning boils down to answering three deceptively simple questions:\n\n\n\tWhat content do you need?\n\tHow much of it do you need?\n\tWho\u2019s going to make it?\n\n\nEven if it\u2019s not your job to come up with the answers, by asking these questions early enough and agreeing who is going to come up with the answers, you\u2019ll be a long way towards avoiding the last-minute content problems which so often plague projects.\n\nHow much content do we need?\n\nPeople tend to underestimate two crucial things about content: how much content they need, and how long that content takes to produce.\n\nWhen I ask someone how big their website is \u2013 how many pages it contains \u2013 I usually double or triple the answer I get. That\u2019s because almost everyone\u2019s mental model of their website greatly underestimates its true size. You can see the problem for yourself if you look at a site map. Site maps are great at representing a mental model of a website. But because they\u2019re a deliberate simplification they naturally lead us to underestimate how much content is involved in populating them.\n\nSeveral years ago I was asked to help a client create a new microsite (their word) which they wanted ready in two weeks for a conference they were attending. Here\u2019s the site map they had in mind. At first glance it looks like a pretty small website. Maybe twenty to thirty pages?\n\n\n\nThat\u2019s what the client thought.\n\nBut see those boxes which are multiple boxes stacked on top of one another, for product categories, descriptions and supporting material? They\u2019re known as page stacks, and page stacks are the content strategy equivalent of Here Be Dragons. \n\n\n\nSay we have:\n\n\n\tfive product categories\n\teach with five products\n\twhich all have two or three supporting documents\n\n\nThose are still fairly small numbers. But small numbers multiplied by other small numbers tend to lead to big numbers.\n\n\n\n5 categories = 5 category descriptions\n\nplus\n\n5 categories \u00d7 5 products each = 25 product descriptions\n\nplus\n\n25 products \u00d7 2.5 (average) supporting documents = 63 supporting documents\n\nequals\n\n93 pages\n\n\n\nSuddenly our twenty- or thirty-page website is running towards one hundred.\n\nThat\u2019s probably enough to get most project teams to sit up and take notice. But there\u2019s still the danger of underestimating how long it\u2019s going to take to create the content. After all, assuming the supporting documents already exist in some form, there are only about twenty-five to thirty pages of new copy to write.\n\nHow much work is it?\n\nAgain, we have the problem that small numbers when multiplied by other small numbers tend to lead to big numbers. Let\u2019s make a rough guess that it\u2019ll take four hours to write each product category and description page we need. That feels a little conservative if we\u2019re writing stuff from scratch, but assuming the person doing it already knows the products fairly well it\u2019s not unreasonable.\n\n\n\n30 pages \u00d7 4 hours each = 120 hours\n\n120 hours \u00f7 7.5 working hours a day = 16 days\n\n\n\nOuch.\n\nAt this point it\u2019s pretty clear we\u2019re not getting this site launched in two weeks. \n\nThe goal is the conversation\n\nBy breaking down the site into its content components, and putting some rough estimates on how long each might take to produce, the client instantly realised that there was no way they would be ready to launch it in two weeks. 
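\n\nIf it helps to make those sums concrete, they are simple enough to turn into a few lines of code. The following is only a rough, hypothetical sketch in JavaScript \u2013 the figures are the example numbers from above, and the variable names and the 7.5-hour working day are illustrative assumptions rather than part of any real tool:\n\n// Rough page count, using the example numbers above\nvar categories = 5;\nvar productsPerCategory = 5;\nvar docsPerProduct = 2.5; // average supporting documents per product\nvar products = categories * productsPerCategory; // 25 product descriptions\nvar supportingDocs = Math.round(products * docsPerProduct); // 63 supporting documents\nvar totalPages = categories + products + supportingDocs;\nconsole.log(totalPages); // 93 pages\n\n// Rough writing effort for the new copy\nvar newCopyPages = 30;\nvar hoursPerPage = 4;\nvar workingDay = 7.5; // hours per working day\nvar writingDays = (newCopyPages * hoursPerPage) / workingDay;\nconsole.log(writingDays); // 16 days\n\nChange any one of those small numbers and the totals swing around dramatically, which is exactly why it pays to do the sums early.\n\n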
Although we still didn\u2019t know exactly when it would be ready, getting to that realisation right at the start of the project was a major win for everybody. Without it, the design agency would have bust a gut to get the design, front-end and CMS all done in double-quick time, only to find it was all for nothing as barely half the content was ready. As it was, an early discussion about content, albeit a brief one, bought everyone time to tackle the project properly, without pulling any long nights or working weekends.\n\nIf you haven\u2019t been able to get people to discuss content plans for the project, these kinds of rough estimates should give you enough evidence to get everyone to start taking it seriously. Your goal is to get everyone on the project to a place where they are ready to talk in detail about who is going to create this content, and how long it\u2019s really going to take them, and to get to those conversations before lack of content becomes a problem.\n\nBe careful though. It\u2019s best to talk in ranges and round numbers when your estimates are this uncertain. And watch those multipliers. Given small numbers multiplied by other small numbers lead to big numbers, changing just one number can greatly change the overall estimate. I like to run a couple of different scenarios to check what things look like if I\u2019ve under- or overestimated either how many pages we\u2019re going to need, or how long they\u2019re going to take to create. For example:\n\n\n\nTop end: 30 pages \u00d7 5 hours = 150 hours, or 20 days\n\nBottom end: 25 pages \u00d7 4 hours = 100 hours, or 13.3 days\n\n\n\nSo rather than say, \u201cI estimate the content will take around sixteen days to produce\u201d, I\u2019m going to say, \u201cI think the content will take about three to four weeks to produce\u201d. Even with qualifiers like estimate and around, sixteen days sounds too precise. Whereas three to four weeks instantly conveys that this is just a rough figure.\n\nWho\u2019s going to make it?\n\nSo, people tend to underestimate two crucial things about content: how much content they need, and how long content takes to write. At this stage, you\u2019re still in danger of the latter, because it\u2019s tempting to simply estimate how much time content takes to write (or record, if we\u2019re talking audio or visual content), and overlook all the other work that needs to go on around it. \n\nTake 24 ways as an example. In terms of our three deceptively simple questions: what is practical articles about web design; how many is twenty-four, one for each day of Advent; and who are experts working on the web, one to write each article. \n\nBut there\u2019s another who you might not have considered. \n\nSomeone needs to select those authors in the first place, make sure they deliver their articles on time (and find someone to replace them if they don\u2019t), review drafts, copy-edit and proofread final versions, upload them to the site, promote them, keep an eye on the comments and make sure there are still presents under the tree on Christmas morning.\n\nEven if each of those tasks only takes an hour or so, it then needs multiplying by twenty-four (except the presents, obviously). And as we\u2019ve already seen, small numbers multiplied by small numbers quickly turn into much bigger numbers. 
Just a few hours per article, when multiplied by twenty-four articles, easily multiplies up to days or even weeks of effort.\n\nTo get a more accurate estimate of how long the different kinds of content are going to take, you need to break down the content production work into its constituent stages, starting with planning, moving on through the main work of creation, to reviewing, approvals and finally publishing. You need to think about who needs to be involved at each step, and how much time they\u2019ll need to do their bit. \n\nTaken together, these things make up your content workflow. The workflow will be different for each organisation, but might look something like this:\n\n\n\tEddie the web editor will work out the key messages and objectives for each page, and agree them with Mo the marketing director.\n\tEddie will then get Cal, the copywriter, to write the first draft.\n\tAs part of that, Cal will interview Sam the subject expert to understand the intricacies of the subject and get all the facts straight.\n\tOnce Cal\u2019s done the first draft, it\u2019ll go to Sam to check for accuracy, while Eddie reviews it for style and message.\n\tOnce Cal has incorporated their feedback it\u2019s time to get Mo to have a look at the final draft.\n\tIf Mo\u2019s happy, it\u2019ll get a final proofread, be uploaded to the CMS, and Mo will give the final sign-off and release it for publishing.\n\n\nYou can plot this on a table, with the stages of the content production process down the side, and the key roles or personnel along the top. Then the team can estimate how much time they think each of them needs at each stage.\n\nStage | Mo (marketing director) | Sam (subject expert) | Eddie (web editor) | Cal (copywriter)\nOutline: define key messages and objectives | \u2013 | \u2013 | 30 min | \u2013\nReview outline | 15 min | \u2013 | \u2013 | \u2013\nFirst draft | \u2013 | 30 min | \u2013 | 3 hours\nReview 1st draft | \u2013 | 30 min | 30 min | \u2013\n2nd draft | \u2013 | \u2013 | \u2013 | 1 hour\nReview 2nd draft | 15 min | 15 min | 15 min | \u2013\nFinal amendments | \u2013 | \u2013 | \u2013 | 30 min\nProofread | \u2013 | \u2013 | 15 min | \u2013\nUpload | \u2013 | \u2013 | \u2013 | 15 min\nSign-off | 10 min | \u2013 | \u2013 | \u2013\nTOTAL | 40 min | 1 hour 15 min | 1 hour 30 min | 4 hours 45 min\n\nYou can then bring out your calculator again, and come up with some more big scary numbers showing how much time it\u2019s going to take for the whole team to get all the content needed not just written, but also planned, reviewed, approved and published.\n\nWith an experienced team you can run this exercise as a group workshop and get some fairly accurate estimates pretty quickly. If this is all a bit new to you, check out Gather Content\u2019s Content Production Planning for Agencies ebook for a useful guide to common content roles, ballpark estimates for how much time each one needs on a typical piece of content, and how to run a process and estimating workshop to dig into them in more detail. \n\nOn a small team, one person might play many roles, but you should still sanity-check your estimates by breaking down the process and putting a rough estimate on each stage. With only a couple of people involved, it\u2019s even easier to only include the core activity like writing or recording in your estimates, and forget to allow time for the planning, reviewing, proofreading, publishing and promoting you\u2019ll still need to do. 
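\n\nTo see how quickly those per-stage minutes mount up once you multiply them by a realistic number of pages, here is a rough, hypothetical JavaScript sketch (the minutes come from the example table above; the thirty pages and the 7.5-hour day are illustrative assumptions, not a real tool):\n\n// Minutes each person needs per page, from the example workflow table\nvar perPageMinutes = { Mo: 40, Sam: 75, Eddie: 90, Cal: 285 };\nvar pages = 30;\nvar minutesPerDay = 7.5 * 60;\nObject.keys(perPageMinutes).forEach(function (person) {\n  var days = (perPageMinutes[person] * pages) / minutesPerDay;\n  console.log(person, days.toFixed(1));\n});\n// Mo 2.7, Sam 5.0, Eddie 6.0, Cal 19.0 \u2013 those are the big scary numbers\n\nEven a crude script like this makes it obvious where the bottlenecks are likely to be, and how much the non-writing stages add to the total.\n\n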
And even in a team of one, if at all possible you should find at least one other person to act as a second pair of eyes, and give anything you produce a quick once-over and proofread before it\u2019s published.\n\nDepending on the kind of content you\u2019re making, you should also consider what will happen after it\u2019s published. The full content life cycle should include promotion, monitoring and regular reviews to make sure content stays accurate and up to date. Making sure you have the time and resources available to do all those things for each piece of content is essential for creating a sustainable content programme.\n\nThe proof of the pudding\n\nEven after digging into workflow and getting the whole team involved in estimating, you\u2019re still largely in the realm of the guesstimate. The good news, though, is that you can quite quickly start finding out if your guesstimates are right or not. As soon as you can, pilot the production process with some real content. This is a double-win: you start finding out how long it really takes to produce all this fab new content, and you get real content to work with in designs and prototypes.\n\nOnce you\u2019ve run a few things through your process, you\u2019ll be able to refine your estimates, confirm your workflow, and give everyone involved a clear idea of when it will all be ready, and what you need from them.\n\nKeeping it all on track\n\nAt this point I like to pull everything together into the content strategist\u2019s favourite tool: the spreadsheet.\n\nA simple content production checklist is a bit like a content inventory or audit, but for the content you don\u2019t yet have, not the stuff already done. You can grab an example here.\n\nEach piece of content gets its own row, with columns for basic information like page title, ID (which should match the site map), and who\u2019s responsible for making it. You can capture simple details like target audience and key messages here too, though for more complex content, page description tables like those described by Relly Annett-Baker in \u201cExtracting the Content\u201d may be a better tool to use. Just adapt these columns to whatever makes sense for your content.\n\nI then have columns to track where each piece is in the production process. I usually keep this simple, with a column each to mark whether it\u2019s draft, final or uploaded. The status column on the left automatically shows the item\u2019s status, using a simple traffic light colour scheme for whether the item is still to do (red), in draft (amber), or done (green). Seeing the whole thing slowly turn from red to green is a nice motivator.\n\nIf you want to track the workflow in more detail, a kanban board in a tool like Trello is a great way for a team to collaborate on content production, track each item\u2019s progress, and keep an eye out for bottlenecks and delays. \n\nGetting to the content strategy conversation\n\nIt\u2019s a relatively simple exercise, then, to decide not just what kinds of pages you need, but also how many of them: put some rough estimates of effort on the tasks needed to create those pages \u2013 not just the writing, but all the other stages of planning, reviewing, approving, publishing and promoting \u2013 and then multiply all those things together. This will quickly bring some reality to grand visions and overambitious plans. 
Do it early enough, and even when the final big scary number is a lot bigger and scarier than everyone thought, you\u2019ll still have time to do something about it.\n\nAs well as getting everyone on board for some proper content planning activities, that big scary number is your opportunity to get to the real core questions of content strategy: do we really need all this content? Where can existing content be reused and repurposed? How do we prioritise our efforts? What really matters to our readers and users?\n\nTime and again, case studies show that less content delivers more: more leads, more sales, more self-service support and savings in the call centre. Although that argument is primarily one you should make from a good-for-the-users perspective, it doesn\u2019t hurt to be able to make it from the cheaper-for-the-business perspective as well, and to have some big scary numbers to back that up.", "year": "2014", "author": "Sophie Dennis", "author_slug": "sophiedennis", "published": "2014-12-17T00:00:00+00:00", "url": "https://24ways.org/2014/content-production-planning/", "topic": "content"} {"rowid": 57, "title": "Cooking Up Effective Technical Writing", "contents": "Merry Christmas! May your preparations for this festive season of gluttony be shaping up beautifully. By the time you read this I hope you will have ordered your turkey, eaten twice your weight in Roses/Quality Street (let\u2019s not get into that argument), and your Christmas cake has been baked and is now quietly absorbing regular doses of alcohol.\nSome of you may be reading this and scoffing Of course! I\u2019ve also made three batches of mince pies, a seasonal chutney and enough gingerbread men to feed the whole street! while others may be laughing Bake? Oh no, I can\u2019t cook to save my life.\nFor beginners, recipes are the step-by-step instructions that hand-hold us through the cooking process, but even as a seasoned expert you\u2019re likely to refer to a recipe at some point. Recipes tell us what we need, what to do with it, in what order, and what the outcome will be. It\u2019s the documentation behind our ideas, and allows us to take the blueprint for a tasty morsel and to share it with others so they can recreate it. In fact, this is a little like the open source documentation and tutorials that we put out there, similarly aiming to guide other developers through our creations.\nThe \u2018just\u2019ification of documentation\nLately it feels like we\u2019re starting to consider the importance of our words, and the impact they can have on others. Brad Frost warned us of the dangers of \u201cJust\u201d when it comes to offering up solutions to queries:\n\n\u201cJust use this software/platform/toolkit/methodology\u2026\u201d\n\u201cJust\u201d makes me feel like an idiot. \u201cJust\u201d presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. \u201cJust\u201d is a dangerous word.\n\u201cJust\u201d by Brad Frost\n\nI can really empathise with these sentiments. My relationship with code started out as many good web tales do, with good old HTML, CSS and JavaScript. University years involved some time with Perl, PHP, Java and C. In my first job I worked primarily with ColdFusion, a bit of ActionScript, some classic ASP and a pinch of Java. I\u2019d do a bit of PHP outside work every now and again. 
.NET came in, but we never really got on, and eventually I started learning some Ruby, Python and Node. It was a broad set of learnings, and I enjoyed the similarities and differences that came with new languages. I don\u2019t develop day in, day out any more, and my interests and work have evolved over the years, away from full-time development and more into architecture and strategy. But I still make things, and I still enjoy learning.\nI have often found myself bemoaning the lack of tutorials or courses that cater for the middle level \u2013 someone who may be learning a new language, but who has enough programming experience under their belt to not need to revise the concepts of how loops or objects work, and is perfectly adept at googling the syntax for getting a substring. I don\u2019t want snippets out of context; I want an understanding of architectural principles, of the strengths and weaknesses, of the type of applications that work well with the language.\nI\u2019m caught in the place between snoozing off when \u2018Using the Instagram API with Ruby\u2019 hand-holds me through what REST is, and feeling like I\u2019m stupid and need to go back to dev school when I can\u2019t get my environment and dependencies set up, let alone work out how I\u2019m meant to get any code to run.\nIt seems I\u2019m not alone with this \u2013 Erin McKean seems to have been here too:\n\n\u201cSome tutorials (especially coding tutorials) like to begin things in media res. Great for a sense of dramatic action, bad for getting to \u201cStep 1\u201d without tears. It can be really discouraging to fire up a fresh terminal window only to be confronted by error message after error message because there were obligatory steps 0.1.0 through 0.9.9 that you didn\u2019t even know about.\u201d\n\u201cTips for Learning What You Don\u2019t Know You Don\u2019t Know\u201d by Erin McKean\n\nI\u2019m sure you\u2019ve been here too. Many tutorials suffer badly from the fabled \u2018how to draw an owl\u2019-itis.\n\nIt\u2019s the kind of feeling you can easily get when sifting through recipes as well as with code. Far from being the simple instructions that let us just follow along, they too can be a minefield. Fall in too low and you may be skipping over an explanation of what simmering is, or set your sights too high and you may get stuck at the point where you\u2019re trying to sous vide a steak using your bathtub and a Ziploc bag.\nDon\u2019t be a turkey, use your loaf!\nMy mum is a great cook in my eyes (aren\u2019t all mums?). I love her handcrafted collection of gathered recipes from over the years, including the one below, which is a great example of how something may make complete sense to the writer, but could be impermeable to a reader.\n\nDepending on your level of baking knowledge, you may ask: What\u2019s SR flour? What\u2019s a tsp? Should I use salted or unsalted butter? Do I use sticks of cinnamon or ground? Why is chopped chocolate better? How do I cream things? How big should the balls be? How well is \u201cwell spaced\u201d? How much leeway do I have for \u201c(ish!!)\u201d? Does the \u201c20\u201d on the other cookie note mean I\u2019ll end up with twenty? 
At any point, making a wrong call could lead to rubbish cookies, and lead to someone heading down the path of an I can\u2019t cook mentality.\nYou may be able to cook (or follow recipes), but you may not understand the local terms for ingredients, may not be able to acquire something and need to know what kind of substitutes you can use, or may need to actually do some prep before you jump into the main bit.\nHowever, if we look at good examples of recipes, I think there\u2019s a lot we can apply when it comes to technical writing on the web. I\u2019ve written before about the benefit of breaking documentation into small, reusable parts, and this will help us, but we can also take it a bit further. Here are my five top tips for better technical writing.\n1. Structure and standardise your information\nThink of the structure of a recipe. We very often have some common elements and they usually follow roughly the same format. We have standards and conventions that allow us to understand very quickly what a recipe is and how it should be used.\u00a0\n\nGreat recipes help their chefs know what they need to get ready in advance, both in terms of buying ingredients and putting together their kit. They then talk through the process, using appropriate language, and without making assumptions that the person can fill in any gaps for themselves; they explain why things are done the way they are. The best recipes may also suggest how you can take what you\u2019ve done and put your own spin on it. For instance, a good recipe for the simple act of boiling an egg will explain cooking time in relation to your preference for yolk gooiness. There are also different flavour combinations to try, accompaniments, or presentation suggestions.\u00a0\nBy breaking down your technical writing into similar sections, you can help your audience understand the elements they\u2019ll be working with, what they need to do once they have these, and how they can move on from your self-contained illustration.\nTitle\n \n Ensure your title is suitably descriptive and representative of the result. Getting Started with Python perhaps isn\u2019t as helpful as Learn Python: General Syntax and Basics.\n \n Result\n \n Many recipes include a couple of lines as an overview of what you\u2019ll end up with, and many include a photo of the finished dish. With our technical writing we can do the same:\n In this tutorial we\u2019re going to learn how to set up our development environment, and we\u2019ll then undertake some exercises to explore the general syntax, finishing by building a mini calculator.\n \n Ingredients\n \n What are the components we\u2019ll be working with, whether in terms of versions, environment, languages or the software packages and libraries you\u2019ll need along the way? Listing these up front gives the reader a great summary of the things they\u2019ll be using, and any gotchas.\n Being able to provide a small amount of supporting information will also help less experienced users. Ideally, explain briefly what things are and why we\u2019re using it.\n \n Prep\n \n As we heard from Erin above, not fully understanding the prep needed can be a huge source of frustration. Attempting to run a code snippet without context will often lead to failure when the prerequisites and process aren\u2019t clear. 
Be sure to include information around any environment set-up, installation or config you\u2019ll need to have done before you start.\nStu Robson\u2019s Simple Sass documentation aims to do this before getting into specifics, although ideally this would also include setting up Sass itself.\n \n Instructions\n \nThe body of the tutorial itself is the whole point of our writing. The next four tips will hopefully make your tutorial much more successful.\n \n Variations\n \n Like our ingredients section, as important as explaining why we\u2019re using something in this context is, it\u2019s also great to explain alternatives that could be used instead, and the impact of doing so.\n Perhaps go a step further, explaining ways that people can change what you have done in your tutorial/readme for use in different situations, or to provide further reading around next steps. What happens if they want to change your static array of demo data to use JSON, for instance? By giving some thought to follow-up questions, you can better support your readers.\n While not in a separate section, the source code for GreenSock\u2019s GSAP JS basics explains:\n We\u2019ll use a window.onload for simplicity, but typically it is best to use either jQuery\u2019s $(document).ready() or $(window).load() or cross-browser event listeners so that you\u2019re not limited to one.\n Keep in mind to both:\n Explain what variations are possible.\n Explain why certain options may be more desirable than others in different situations.\n \n \n2. Small, reusable components\nReusable components are for life, not just for Christmas, and they\u2019re certainly not just for development. If you start to apply the structure above to your writing, you\u2019re probably going to keep coming across the same elements: Do I really have to explain how to install Sass and Node.js again, Sally? The danger with more clarity is that our writing becomes bloated and overly convoluted for advanced readers, those who don\u2019t need to be told how to beat an egg for the hundredth time.\u00a0\nInstead, by making our writing reusable and modular, and by creating smaller, central resources, we can provide context and extra detail where needed without diluting our core message. These could be references we create, or those already created well by others.\n\nThis recipe for katsudon makes use of this concept. Rather than explaining how to make tonkatsu or dashi stock, these each have their own page. Once familiar, more advanced readers will likely skip over the instructions for the component parts.\n\n3. Provide context to aid accessibility\nHere I\u2019m talking about accessibility in the broadest sense. Small, isolated snippets can be frustrating to those who don\u2019t fully understand the wider context of how our examples work.\nShowing an exciting standalone JavaScript function is great, but giving someone the full picture of how and when this is called, and how it should be included in relation to other HTML and CSS is even better. Giving your readers the ability to view a big picture version, and ideally the ability to download a full version of the source, will help to reduce some of the frustrations of trying to get your component to work in their set-up.\u00a0\n4. Be your own tech editor\nA good editor can be invaluable to your work, and wherever possible I\u2019d recommend that you try to get a neutral party to read over your writing. 
This may not always be possible, though, and you may need to rely on yourself to cast a critical eye over your work.\nThere are many tips out there around general editing, including printing out your work onto paper, or changing the font size: both will force your eyes to review it in a new light. Beyond this, I\u2019d like to encourage you to think about the following:\n\nExplain what things are. For example, instead of referencing Grunt, in the first instance perhaps reference \u201cGrunt (a JavaScript task runner that minimises repetitive activities through automation).\u201d\nExplain how you get things, even if this is a link to official installers and documentation. Don\u2019t leave your readers having to search.\nWhy are you using this approach/technology over other options?\nWhat happens if I use something else? What depends on this?\nAvoid exclusionary lingo or acronyms.\n\nAirbnb\u2019s JavaScript Style Guide includes useful pointers around their reasoning:\n\nUse computed property names when creating objects with dynamic property names.\nWhy? They allow you to define all the properties of an object in one place.\n\nThe language we use often makes assumptions, as we saw with \u201cjust\u201d. An article titled \u201cES6 for Beginners\u201d is hugely ambiguous: is this truly for beginner coders, or actually for people who have a good pre-existing understanding of JavaScript but are new to these features? Review your writing with different types of readers in mind. How might you confuse or mislead them? How can you better answer their questions?\nThis doesn\u2019t necessarily mean supporting everyone \u2013 your audience may need to have advanced skills \u2013 but even if you\u2019re providing low-level, deep-dive, reference material, trying not to make assumptions or take shortcuts will hopefully lead to better, clearer writing.\n5. A picture is worth a thousand words\u2026\n\u2026or even better: use a thousand pictures, stitched together into a quick video or animated GIF. People learn in different ways. Just as recipes often provide visual references or a video to work along with, providing your technical information with alternative demonstrations can really help get your point across. Your audience will be able to see exactly what you\u2019re doing, what they should expect as interaction responses, and what the process looks like at different points.\nThere are many, many options for recording your screen, including QuickTime Player on Mac OS X (File \u2192 New Screen Recording), GifGrabber, or Giffing Tool on Windows.\nPaul Swain, a UX designer, uses GIFs to provide additional context within his documentation, improving communication:\n\n\u201cMy colleagues (from across the organisation) love animated GIFs. Any time an interaction is referenced, it\u2019s accompanied by a GIF and a shared understanding of what\u2019s being designed. The humble GIF is worth so much more than a thousand words; and it\u2019s great for cats.\u201d\nPaul Swain\n\n\nNext time you\u2019re cooking up some instructions for readers, think back to what we can learn from recipes to help make your writing as accessible as possible. Use structure, provide reusable bitesize morsels, give some context, edit wisely, and don\u2019t scrimp on the GIFs. 
And above all, have a great Christmas!", "year": "2015", "author": "Sally Jenkinson", "author_slug": "sallyjenkinson", "published": "2015-12-18T00:00:00+00:00", "url": "https://24ways.org/2015/cooking-up-effective-technical-writing/", "topic": "content"} {"rowid": 87, "title": "Content Planning Demystified", "contents": "The first thing you learn as a junior editor is that you can\u2019t do everything yourself. You must rely on someone else to do at least part of what must be done: the long-range planning, the initial drafting or shooting or recording, the editing, the production, the final polish. All of those pieces of work that belong to someone else take quite a lot of time \u2014 days, weeks, sometimes months. If you\u2019re the sort of person who wrote college term papers the night before they were due, this can come as a bit of a shock. To my twenty-two-year-old self, it certainly did. \n\nIt turns out that the only real way to avoid a trainwreck with editorial work is to get ahead of the trouble, line everything up carefully, and leave oodles of room for all the pieces to connect on time. The same is true of content strategy, content planning, and just about everything to do with content on the web, except for the writing itself \u2014 and that, too, usually takes far longer than anyone expects. If you\u2019re not a professional editor and you suddenly find yourself dealing with content creation, you\u2019re almost certainly going to underestimate the time and effort involved, or to skip something important in the planning process that pops up to bite you later. \n\nWithout good content, it doesn\u2019t matter how well designed or coded your web project is, because it won\u2019t be doing the thing it\u2019s meant to do. And even if content is far from your specialty, you may well end up being the only one willing to coordinate it far enough in advance to avoid a chaotic ending. Whether you\u2019re hiring writers and editors for a big project, working with a small client, or coaxing some editorial help out of a co-worker, getting the planning work done correctly \u2014 and ahead of time \u2014 will allow you to orchestrate a glorious ballet of togetherness, instead of feverishly scraping together something to put on your site when the deadline looms. So get out the graph paper and the pocket protector, because we\u2019re going to go Full Nerd on this problem.\n\nKnow your poison\n\nAnyone who\u2019s seen a project delayed for six months by content trouble, or derailed by content that\u2019s bland and unhelpful, knows this stuff can make you feel like a dead sock. To get ahead of the problem, you\u2019re going to have to learn to spot common problems and plan your way around them. On web projects without a dedicated editorial lead, you\u2019re likely to encounter content that is:\n\n\n\tUseless \u2013 Content that doesn\u2019t serve your readers\u2019 needs in some way is pointless. And because it takes up your time and crowds out genuinely helpful things, it\u2019s actually damaging. The logic is simple: you can make content that\u2019s all about you, and that serves your stated messaging goals, but if no one is motivated to read it, it\u2019s a waste of everyone\u2019s time.\n\tBadly written \u2013 When you publish articles or instructions or other content that is too stiffly formal, overly wordy, hard to understand, offensive, unintentionally cheesy, or otherwise off in tone or style, you\u2019re doing two things. 
First, you\u2019re weakening the information you\u2019re trying to convey by making it obscure or annoying. Second \u2014 and this one is even more damaging \u2014 you\u2019re demonstrating bad taste. When you get the cultural elements of publishing wrong, you encourage your readers to believe that you either don\u2019t understand them or don\u2019t care about getting it wrong.\n\tGooey \u2013 Content strategists have been talking about structured content (that\u2019s chunks versus blobs) for years. If you\u2019re publishing more than a few dozen pages without thinking through the structure of your content, you\u2019re probably missing a chance to improve your long-term efficiency. If you\u2019re publishing more than a couple of thousand pages without taking care of your content structure, you\u2019re probably doing a lot more manual wrangling (or cumbersome CMS work) than you need to be, especially when it comes to cross-platform publishing.\n\tUnregulated \u2013 If you\u2019re not tracking what works and what doesn\u2019t \u2014 and especially if you don\u2019t know what \u201cworks\u201d means for your project or organization \u2014 you\u2019re almost certainly getting worse results than you should be, for more work.\n\tOverabundant \u2013 As demonstrated by the cinnamon challenge, too much of a delicious thing can be a giant and publicly embarrassing disaster. For most projects and organizations, if you\u2019re making more stuff than your readers can handle, or if you\u2019re spreading your creative and editorial resources too thinly, that\u2019s bad. Spammers, content farms, and barrel-bottom tabloids have their own special math, the side effects of which include insomnia, irritability, and crying in traffic while silently mouthing Wilson Phillips lyrics.\n\n\n\nPrevent all preventable damage\n\nOnce you know what kind of trouble to look for, you can prevent a lot of it by doing some smart planning well before someone starts writing (or recording or shooting video).\n\n\n\tTo prevent uselessness: Know your readers and decide what you\u2019re trying to accomplish \u2014 with your website as a whole, and with each piece of content, always. Once you know what you\u2019re trying to achieve, you can evaluate your work as you go to make sure that it\u2019s actually doing the right thing. (I\u2019ve written a lot more about this for A List Apart and in The Elements of Content Strategy.)\n\tTo prevent bad writing: Establish a consistent and appropriate style using examples (and a style guide if you need one), designate an editor, hire good writers, and make time for quality control. Kate Kiefer\u2019s style guide for MailChimp is a superb example of style-wrangling that everyone can use.\n\tTo prevent repulsive goo: Give your content as much structure as possible, and know how structure relates to your entire publishing ecosystem, including all those mobile devices. Sara Wachter-Boettcher\u2019s Content Everywhere and Karen McGrane\u2019s Content Strategy for Mobile offer brilliant yet friendly introductions to the wide world of structured content.\n\tTo prevent unregulated chaos: Measure everything that matters to your project, your client, your organization, and especially your readers \u2014 not generic measures of someone else\u2019s success. Measure it all regularly. Be disciplined. Adjust at regular intervals. 
Rick Allen\u2019s series on content strategy analytics is an excellent place to begin (part one; part two).\n\tTo prevent overabundance: Stop trying to do everything and focus on giving your readers just a few things they want and genuinely need. Don\u2019t establish a schedule your writers might not be able to keep, and focus on differentiating yourself with quality, not quantity. (And while you\u2019re at it, scratch the auto-posting to social networks and the cross-posting between them. It\u2019s about as engaging as an automated phone system.)\n\n\nAt a slightly higher level, pick the right content person (or team) for the work. If you really only need a few pages of copy, find a smart writer who does good work for multi-platform readers. If you\u2019re slinging tens of thousands of pages of content, get someone with field experience in high-level editorial planning and the ability to turn blobs into chunks and melted goo into Legos. If you\u2019re starting a project that involves making a lot of content over time, bring in someone with journalism experience (or get your client to do so). \n\n\u201cBut wait!\u201d you may say. \u201cI\u2019m not hiring anyone. I have to do this all myself.\u201d That\u2019s not uncommon at all. The bad news is, you have to learn a bunch of stuff. The good news is, you get to learn a bunch of awesome stuff. Figure out what the project needs, just as though you were going to hire someone, and then give yourself time to get up to speed. If it\u2019s a really complicated project, you\u2019re probably going to have trouble unless you eventually get professional help. But if it\u2019s small and you can do it in steps, you can certainly do much better by giving yourself a plan and working on the things that matter most.\n\n\nPlan for the marathon, not the sprint\n\nLaunching with awesome content is a tiny fraction of a victory, which is why it\u2019s so important that your content not be gooey or unregulated. It also means that if you don\u2019t plan for a realistic publication schedule, you are going to slam into reality in a really unpleasant way not too long after you\u2019ve begun. If you\u2019re asking people to make words (or videos or whatever) for you, they\u2019re going to have to do less of something else, so plan for that beforehand. \n\nAnd while you\u2019re at it, unless publishing is your core business, ditch the feed-the-beast plan that leads to fluffy blog posts and spiritless, unhelpful social media content. It\u2019s antisocial for your reading community, offers short-term gains at best, and will burn you out or lower your standards until you don\u2019t even know you\u2019re doing lousy work. Good content is expensive, no matter how you do it, but spreading yourself too thin is a much worse investment than doing a smaller thing well and gradually building up a body of superb content that people want to share and keep and return to.", "year": "2012", "author": "Erin Kissane", "author_slug": "erinkissane", "published": "2012-12-20T00:00:00+00:00", "url": "https://24ways.org/2012/content-planning-demystified/", "topic": "content"} {"rowid": 172, "title": "The Construction of Instruction", "contents": "If the world were made to my specifications, all your clients would be happy to pay for a web writer to craft every sentence into something as elegant as it was functional, and the client would have planned the content so that you had it just when you asked, but we both know that won\u2019t happen every time. 
Sometimes you just know they are going to write the About page, two company blog pages and a Facebook fan page before resigning their position as chief content writer and you are going to end up filling in all the details that will otherwise just be Lorem Ipsum.\n\nWelcome to the big world of microcopy:\n\n\n\tA man walks into a bar. The bartender nods a greeting and watches as the man scans the bottles behind the bar. \n\u201cEr, you have a lot of gin here. Is there one you would recommend?\u201d \n\u201cYes sir.\u201d\nLong pause. \n\u201c\u2026 Never mind, I\u2019ll have the one in the green bottle.\u201d \n\u201cCertainly, sir. But you can\u2019t buy it from this part of the bar. You need to go through the double doors there.\u201d \n\u201cBut they look like they lead into the kitchen.\u201d \n\u201cReally, sir? Well, no, that\u2019s where we allow customers to purchase gin.\u201d \nThe man walks through the doors. On the other side he is greeted by the same bartender. \n\u201cY-you!\u201d he stammers but the reticent bartender is now all but silent. \nUnnerved, the man points to a green bottle, \u201cEr, I\u2019d like to buy a shot of that please. With ice and tonic water.\u201d \nThe bartender mixes the drink and puts it on the bar just out of the reach of the man and looks up. \n\u201cUm, do you take cards?\u201d the man asks, ready to present his credit card. \nThe bartender goes to take the card to put it through the machine. \n\u201cWait! How much was it \u2013 with sales tax and everything? Do you take a gratuity?\u201d \nThe bartender simply shrugs. \nThe man eyes him for a moment and decides to try his luck at the bar next door.\n\n\nIn the Choose Your Own Adventure version of this story there are plenty of ways to stop the man giving up. You could let him buy the gin right where he was; you could make the price more obvious; you could signpost the place to buy gin. The mistakes made by the bar and bartender are painfully obvious. And yet, there are websites losing users everyday due to the same lack of clear instruction.\n\nA smidgen of well written copy goes a long way to reassure the nervous prospect. Just imagine if our man walked into the bar and the bartender explained that although the bar was here, sales were conducted in the next room because people were not then able to overhear the man\u2019s card details. Instead, he is left to fend for himself. Online, we kick customers through the anonymous double doors with a merry \u2018Paypal will handle your transaction!\u2019.\n\nRecently I worked on a site where the default error message, to account for anything happening that the developers hadn\u2019t accounted for, was \u2018SOMETHING HAS GONE WRONG!\u2019. It might have been technically accurate but this is not how to inspire confidence in your customers that they can make a successful purchase through you. As everyone knows they can shop just fine, thank you very much, it is your site they will blame. Card declined? It\u2019s the site. Didn\u2019t know my email address has changed? It\u2019s the site. Can\u2019t log in? It\u2019s the site.\n\nYes, yes. I know. None of these things are related to your site, or you the developer, but drop outs will be high and you\u2019ll get imploring emails from your client asking you to wade knee deep into the site analytics to find a solution by testing 41 shades of blue because if it worked for Google\u2026? 
Before you try a visual fix involving the Dulux paint chart breeding with a Pantone swatch, take an objective look at the information you are giving customers. How much are you assuming they know? How much are you relying on age-old labels and prompts without clarification?\n\nHere\u2019s a fun example for non-North Americans: ask your Granny to write out her billing address. If she looks at you blankly, tell her it is the address where the bank sends her statements. Imagine how many fewer instances of the wrong address there would be if we routinely added that information when people purchased from the UK? Instead, we rely on a language convention that hasn\u2019t much common usage without explanation because, well, because we always have since the banks told us how we could take payments online.\n\nSo. Your client is busying themselves with writing the ultimate Facebook fan page about themselves and here you are left with creating a cohesive signup process or basket or purchase instructions. Here are five simple rules for bending puny humans to your will creating instructive instructions and constructive error messages that ultimately mean less hassle for you.\n\nPlan what you want to say and plan it out as early as possible \n\nThis goes for all content. Walk a virtual mile in the shoes of your users. What specific help can you offer customers to actively encourage continuation and ensure a minimal amount of dropouts? Make space for that information. One of the most common web content mistakes is jamming too much into a space that has been defined by physical boundaries rather than planned out. If you manage it, the best you can hope for is that no-one notices it was a last-minute job. Mostly it reads like a bad game of Tetris with content sticking out all over the place.\n\nUse your words\n\nMicrocopy often says a lot in a few words but without those words you could leave room for doubt. When doubt creeps in a customer wants reassurance just like Alice:\n\n\n\tThis time (Alice) found a little bottle\u2026 with the words \u2018DRINK ME\u2019 beautifully printed on it in large letters. It was all very well to say \u2018Drink me,\u2019 but the wise little Alice was not going to do that in a hurry. \u2018No, I\u2019ll look first,\u2019 she said, \u2018and see whether it\u2019s marked \u201cpoison\u201d or not\u2019\n\n\nAlice in Wonderland, Lewis Carroll.\n\nValue clarity over brevity. Or a little more prosaically, \u201cIf in doubt, spell it out.\u201d Thanks, Jeremy!\n\nBe prepared to help\n\n\n\t\u2018Login failed: email/password combination is incorrect.\u2019\n\n\nOh.\n\n\n\t\u2018Login failed: email/password combination is incorrect. \nAre you typing in all capitals? Caps Lock may be on. \nHave you changed your email address recently and not updated your account with us? Try your old email address first. \nCan\u2019t remember your password? We can help you reset it.\u2019 \n\n\nAh!\n\nBe direct and be informative\n\nThere is rarely a site that doesn\u2019t suffer from some degree of jargon. Squash it early by setting a few guidelines about what language and tone of voice you will use to converse with your users. Be consistent. 
Equally, try to be as specific as possible when giving error messages or instructions and allay fears upfront.\n\nCard payments are handled by paypal but you do not need a paypal account to pay.\n\nWe will not display your email address but we might need it to contact you.\n\nSign up for our free trial (no credit card required).\n\nCombine copy and visual cues, learn from others and test new combinations\n\nWhile visual design and copy can work independently, they work best together. New phrases and designs are being tested all the time so take a peek at abtests.com for more ideas, then test some new ideas and add your own results. Have a look at the microcopy pool on Flickr for some wonderful examples of little words and pictures working together. And yes, you absolutely should join the group and post more examples.\n\n\n\tA man walks into a bar. The bartender greets him in a friendly manner and asks him what he would like to drink. \n\u201cGin and Tonic, please.\u201d \n\u201cYes sir, we have our house gin on offer but we also have a particularly good import here too.\u201d\n\u201cThe import, please.\u201d \n\u201cHow would you like it? With a slice of lemon? Over ice?\u201d \n\u201cBoth\u201d \n\u201cThat\u2019s \u00a33.80. We accept cash, cards or you could open a tab.\u201d \n\u201cCard please.\u201d \n\u201cCertainly sir. Move just over here so that you can\u2019t be observed. Now, please enter your pin number.\u201d \n\u201cThank you.\u201d \n\u201cAnd here is your drink. Do let me know if there is a problem with it. I shall just be here at the bar. Enjoy.\u201d\n\n\nCheers!", "year": "2009", "author": "Relly Annett-Baker", "author_slug": "rellyannettbaker", "published": "2009-12-08T00:00:00+00:00", "url": "https://24ways.org/2009/the-construction-of-instruction/", "topic": "content"} {"rowid": 198, "title": "Is Your Website Accidentally Sexist?", "contents": "Women make up 51% of the world\u2019s population. More importantly, women make 85% of all purchasing decisions about consumer goods, 75% of the decisions about buying new homes, and 81% of decisions about groceries. The chances are, you want your website to be as attractive to women as it is to men. But we are all steeped in a male-dominated culture that subtly influences the design and content decisions we make, and some of those decisions can result in a website that isn\u2019t as welcoming to women as it could be. \nTypography tells a story\nStudies show that we make consistent judgements about whether a typeface is masculine or feminine: Masculine typography has a square or geometric form with hard corners and edges, and is emphatically either blunt or spiky. Serif fonts are also considered masculine, as is bold type and capitals.\nFeminine typography favours slim lines, curling or flowing shapes with a lot of ornamentation and embellishment, and slanted letters. Sans-serif, cursive and script fonts are seen as feminine, as are lower case letters. \nThe effect can be so subtle that even choosing between bold and regular styles within a single font family can be enough to indicate masculinity or femininity.\nIf you want to appeal to both men and women, search for fonts that are gender neutral, or at least not too masculine. When you\u2019re choosing groups of fonts that need to work harmoniously together, consider which fonts you are prioritising in your design. Is the biggest word on the page in a masculine or feminine font? What about the smallest words? 
Is there an imbalance between the prominence of masculine and feminine fonts, and what does this imply? \nTypography is a language in and of itself, so be careful what you say with it. \nColour me unsurprised\nColour also has an obvious gender bias. We associate pinks and purples, especially in combination, with girls and women, and a soft pink has become especially strongly related to breast cancer awareness campaigns. On the other hand, pale blue is strongly associated with boys and men, despite the fact that pastels are usually thought of as more feminine. \nThese associations are getting stronger and stronger as more and more marketers use them to define products as \u201cfor girls\u201d and \u201cfor boys\u201d, setting expectations from an incredibly young age \u2014 children as young as four understand gender stereotypes. It should be obvious that using these highly gender-associated colours sends an incredibly strong message to your visitors about who you think your target audience is. If you want to appeal to both men and women, then avoid pinks and pale blues.\nBut men and women also have different colour preferences. Men tend to prefer intense primary colours and deeper colours (shades), and tolerate greys better, whilst women prefer pastels (tints). When choosing colours, consider not just the hue itself, but also tint, tone and shade.\nSlightly counterintuitively, everyone likes blue, but no one seems to particularly like brown or orange. \nA picture is worth a thousand words, or none\nStock photos are the quickest and easiest way to add a little humanity to your website, directly illustrating the kind of people you believe are in your audience. But the wrong photo can put a woman off before she\u2019s even read your text. \nA website about a retirement home will, for example, obviously include photos of older people, and a baby clothes retailer will obviously show photos of babies. But, in the latter case, should they also show only photographs of mothers with their children, or should they include fathers too? It\u2019s true that women take on the majority of childcare responsibilities, but that\u2019s a cultural holdover from a previous era, rather than some rule of law. We are seeing an increasing number of stay at home dads as well as single dads, so showing only photographs of women both enforces the stereotype that only women can care, as well as marginalising male carers. \nEqually, featuring prominent photographs of women on sites about male-dominated topics such as science, technology or engineering helps women feel welcomed and appreciated in those fields. Photos really do speak volumes, so make sure that you also represent other marginalised groups, especially ethnic groups. If people do not see themselves represented on your site, they are not going to engage with it as much as they might. \nAnother form of picture that we often ignore is the icon. When you do use icons, make sure that they are gender neutral. For example, avoid using an icon of a man to denote engineers, or of a woman to denote nurses. Avoid overly masculine or feminine metaphors, such as a hammer to denote DIY or a flower to denote gardens. Not only are these gendered, they\u2019re also trite and unappealing, so come up with more exciting and novel metaphors. \nUse gender-neutral language\nLast, but not least, be very careful in your use of gender in language. \nPronouns are an obvious pitfall. 
A lot of web content is written in the second person, using the clearly gender-neutral \u2018you\u2019, but if you have to write in the third person, which uses \u2018she\u2019, \u2018he\u2019, \u2018it\u2019, and \u2018they\u2019, then be very careful which pronouns you use. The singular \u2018they\u2019 is becoming more widely acceptable, and is a useful gender-neutral option. If you must use generic \u2018he\u2019 and \u2018she\u2019 (as opposed to talking about a specific person), then vary the order that they come in, so don\u2019t always put the male pronoun first. \nWhen you are talking about people, make sure that you use the same level of formality for both men and women. The tendency is to refer to men by their surname and women by their first name so, for example, when people are talking about Ada Lovelace and Charles Babbage, they often talk about \u201cAda and Babbage\u201d, rather than \u201cLovelace and Babbage\u201d or \u201cAda and Charles\u201d. As a rule, it\u2019s best to use people\u2019s surnames in formal and semi-formal writing, and their first names only in very informal writing. \nIt\u2019s also very important to make sure that you respect people\u2019s honorifics, especially academic titles such as Dr or Professor, and that you use titles consistently. Studies show that women and people of colour are the most likely to have their honorifics dropped, which is not only disrespectful, it gives readers the idea that women and people of colour are less qualified than white men.\nIf you mention job titles, avoid old-fashioned gendered titles such as \u2018chairman\u2019, and instead look for a neutral version, like \u2018chair\u2019 or \u2018chairperson\u2019. Where neutral terms have strong gender associations, such as nurse or engineer, take special care that the surrounding text, especially pronouns, is diverse and/or neutral. Do not assume engineers are male and nurses female. \nMore subtle intimations of gender can be found in the descriptors people use. Military metaphors and phrases, out-sized claims, competitive words, and superlatives are masculine, such as \u2018ground-breaking\u2019, \u2018best\u2019, \u2018genius\u2019, \u2018world-beating\u2019, or \u2018killer\u2019. Excessive unnecessary factual detail is also very masculine. \nWomen tend to relate to more cooperative, non-competitive, future-focused, and warmer language, paired with more general information. Women\u2019s language includes words like \u2018global\u2019, \u2018responsive\u2019, \u2018support\u2019, \u2018include\u2019, \u2018engage\u2019 and \u2018imagine\u2019. Focus more on the kind of relationship you can build with your customers, how you can help make their lives easier, and less on your company or product\u2019s status. \nSmash the patriarchy, one assumption at a time\nWe\u2019re all brought up in a cultural stew that prioritises men\u2019s needs, feelings and assumptions over women\u2019s. This is the patriarchy, and it\u2019s been around for thousands of years. But given women\u2019s purchasing power, adhering to the patriarchy\u2019s norms is unlikely to be good for your business. If you want to tap into the female market, pay attention to the details of your design and content, and make sure that you\u2019re not inadvertently putting women off. 
A gender neutral website that designs away gender stereotypes will attract both men and women, expanding your market and helping your business flourish.", "year": "2017", "author": "Suw Charman-Anderson", "author_slug": "suwcharmananderson", "published": "2017-12-20T00:00:00+00:00", "url": "https://24ways.org/2017/is-your-website-accidentally-sexist/", "topic": "content"} {"rowid": 227, "title": "A Contentmas Epiphany", "contents": "The twelve days of Christmas fall between 25 December, Christmas Day, and 6 January, the Epiphany of the Kings. Traditionally, these have been holidays and a lot of us still take a good proportion of these days off. Equally, a lot of us have a got a personal site kicking around somewhere that we sigh over and think, \u201cOne day I\u2019ll sort you out!\u201d Why not take this downtime to give it a big ol\u2019 refresh? I know, good idea, huh?\n\nHEY WAIT! WOAH! NO-ONE\u2019S TOUCHING PHOTOSHOP OR DOING ANY CSS FANCYWORK UNTIL I\u2019M DONE WITH YOU!\n\nBe honest, did you immediately think of a sketch or mockup you have tucked away? Or some clever little piece of code you want to fiddle with? Now ask yourself, why would you start designing the container if you haven\u2019t worked out what you need to put inside?\n\nAnyway, forget the content strategy lecture; I haven\u2019t given you your gifts yet.\nI present The Twelve Days of Contentmas!\n\nThis is a simple little plan to make sure that your personal site, blog or portfolio is not just looking good at the end of these twelve days, but is also a really useful repository of really useful content.\n\nWARNING KLAXON: There are twelve parts, one for each day of Christmas, so this is a lengthy article. I\u2019m not expecting anyone to absorb this in one go. Add to Instapaper. There is no TL;DR for this because it\u2019s a multipart process, m\u2019kay? Even so, this plan of mine cuts corners on a proper applied strategy for content. You might find some aspects take longer than the arbitrary day I\u2019ve assigned. And if you apply this to your company-wide intranet, I won\u2019t be held responsible for the mess.\n\nThat said, I encourage you to play along and sample some of the practical aspects of organising existing content and planning new content because it is, honestly, an inspiring and liberating process. For one thing, you get to review all the stuff you have put out for the world to look at and see what you could do next. This always leaves me full of ideas on how to plug the gaps I\u2019ve found, so I hope you are similarly motivated come day twelve.\n\nLet\u2019s get to it then, shall we?\n\nOn the first day of Contentmas, Relly gave to me:\n\n1. A (partial) content inventory\n\nI\u2019m afraid being a site owner isn\u2019t without its chores. With great power comes great responsibility and all that. There are the domain renewing, hosting helpline calls and, of course, keeping on top of all the content that you have published.\n\nIf you just frowned a little and thought, \u201cWell, there\u2019s articles and images and\u2026 stuff\u201d, then I\u2019d like to introduce you to the idea of a content inventory. \n\nA content inventory is a list of all your content, in a simple spreadsheet, that allows you to see at a glance what is currently on your site: articles; about me page; contact form, and so on.\n\nYou add the full URL so that you can click directly to any page listed. You add a brief description of what it is and what tags it has. In fact, I\u2019ll show you. 
I\u2019ve made a Google Docs template for you. Sorry, it isn\u2019t wrapped.\n\nDoes it seem like a mammoth task? Don\u2019t feel you have to do this all in one day. But do do it. For one thing, looking back at all the stuff you\u2019ve pushed out into the world gives you a warm fuzzy feeling which keeps the heating bill down.\n\nGrab a glass of mulled cider and try going month-by-month through your blog archives, or project-by-project through your portfolio. Do a little bit each day for the next twelve days and you\u2019ll have done something awesome. The best bit is that this exploration of your current content helps you with the next day\u2019s task.\n\nBonus gift: for more on content auditing and inventory, check out Jeff Veen\u2019s article on just this topic, which is also suitable for bigger business sites too.\n\nOn the second day of Contentmas, Relly gave to me:\n\n2. Website loves\n\nRemember when you were a kid, you\u2019d write to Santa with a wish list that would make your parents squirm, because your biggest hope for your stocking would be either impossible or impossibly expensive. Do you ever get the same thing now as a grown-up where you think, \u201cWouldn\u2019t it be great if I could make a video blog every week\u201d, or \u201cI could podcast once a month about this\u201d, and then you push it to the back of your mind, assuming that you won\u2019t have time or you wouldn\u2019t know what to talk about anyway?\n\nTrue fact: content doesn\u2019t just have to be produced when we are so incensed that we absolutely must blog about a topic. Neither does it have to be a drain to a demanding schedule. You can plan for it. In fact, you\u2019re about to.\n\nSo, today, get a pen and a notebook. Move away from your computer. My gift to you is to grab a quiet ten minutes between turkey sandwiches and relatives visiting and give your site some of the attention it deserves for 2011.\n\nWhat would you do with your site if you could? I don\u2019t mean what would you do purely visually \u2013 although by all means note those things down too \u2013 but to your site as a whole. Here are some jumping off points:\n\n\n\tWould you like to individually illustrate and design some of your articles?\n\tWhat about a monthly exploration of your favourite topic through video or audio?\n\tWho would you like to collaborate with?\n\tWhat do you want your site to be like for a user?\n\tWhat tone of voice would you like to use?\n\tHow could you use imagery and typography to support your content?\n\tWhat would you like to create content about in the new year?\n\n\nIt\u2019s okay if you can\u2019t do these things yet. It\u2019s okay to scrub out anything where you think, \u201cNah, never gonna happen.\u201d But do give some thought to what you might want to do next. The best inspiration for this comes from what you\u2019ve already done, so keep on with that inventory.\n\nBonus gift: a Think Vitamin article on podcasting using Skype, so you can rope in a few friends to join in, too.\n\nOn the third day of Contentmas, Relly gave to me:\n\n3. Red pens\n\nShock news, just in: the web is not print!\n\nOne of the hardest things as a writer is to reach the point where you say, \u201cYeah, okay, that\u2019s it. I\u2019m done\u201d and send off your beloved manuscript or article to print. I\u2019m convinced that if deadlines didn\u2019t exist, nothing would get finished. Why? Well, at the point you hand it over to the publishing presses, you can make no more changes. 
At best, you can print an erratum or produce an updated second edition at a later date. And writers love to \u2013 no, they live to \u2013 tweak their creations, so handing them over is quite a struggle. Just one more comma and\u2026\n\nOnline, we have no such constraints. We can edit, correct, test, tweak, twiddle until we\u2019re blooming sick of it. Our red pens never run out of ink. It is time for you to run a more critical eye over your content, especially the stuff already published. Relish in the opportunity to change stuff on the fly. I am not so concerned by blog articles and such (although feel free to apply this concept to those, too), but mainly by your more concrete content: about pages; contact pages; home page navigation; portfolio pages; 404 pages.\n\nNow, don\u2019t go running amok with the cut function yet. First, put all these evergreen pages into your inventory. In the notes section, write a quick analysis of how useful this copy is. Example questions:\n\n\n\tIs your contact page up-to-date?\n\tDoes your about page link to the right places?\n\tIs your portfolio current?\n\tDoes your 404 page give people a way to find what they were looking for?\n\n\nWe\u2019ll come back to this in a few days once we have a clearer idea of how to improve our content.\n\nBonus gift: the audio and slides of a talk I gave on microcopy and 404 pages at @media WebDirections last year.\n\nOn the fourth day of Contentmas, Relly gave to me:\n\n4. Stalling nerds\n\nActually, I guess more accurately this is something I get given a lot. Designers and developers particularly can find a million ways to extract themselves from the content of a site but, as the site owner, and this being your personal playground and all, you mustn\u2019t. You actually can\u2019t, sorry. \n\nBut I do understand that at this point, \u2018sorting out your site\u2019 suddenly seems a lot less exciting, especially if you are a visually-minded person and words and lists aren\u2019t really your thing. So far, there has been a lot of not-very-exciting exercises in planning, and there\u2019s probably a nice pile of DVDs and video games that you got from Santa worth investigating. \n\nStay strong my friend. By now, you have probably hit upon an idea of some sort you are itching to start on, so for every half-hour you spend doing inventory, gift yourself another thirty minutes to play with that idea.\n\nBonus gift: the Pomodoro Technique. Take one kitchen timer and a to-do list and see how far you can go.\n\nOn the fifth day of Contentmas, Relly gave to me:\n\n5. Golden rules\n\nHere are some guidelines for writing online:\n\n\n\tMake headlines for tutorials and similar content useful and descriptive; use a subheading for any terrible pun you want to work in.\n\n\n\n\tCreate a broad opening paragraph that addresses what your article is about. Part of the creative skill in writing is to do this in a way that both informs the reader and captures their attention. If you struggle with this, consider a boxout giving a summary of the article.\n\n\n\n\tUse headings to break up chunks of text and allow people to scan. Most people will have a scoot about an article before starting at the beginning to give it a proper read. These headings should be equal parts informative and enticing. 
Try them out as questions that might be posed by the reader too.\n\n\n\n\tFinish articles by asking your reader to take an affirmative action: subscribe to your RSS feed; leave a comment (if comments are your thing \u2013 more on that later); follow you on Twitter; link you to somewhere they have used your tutorial or code. The web is about getting excited, making things and sharing with others, so give your readers the chance to do that.\n\n\n\n\tFor portfolio sites, this call to action is extra important as you want to pick up new business. Encourage people to e-mail you or call you \u2013 don\u2019t just rely on a number in the footer or an e-mail link at the top. Think up some consistent calls-to-action you can use and test them out.\n\n\nSo, my gift to you today is a simplified page table for planning out your content to make it as useful as possible.\n\nFeel free to write a new article or tutorial, or work on that great idea from yesterday and try out these guidelines for yourself. \n\nIt\u2019s a simple framework \u2013 good headline; broad opening; headings to break up volume; strong call to action \u2013 but it will help you recognise if what you\u2019ve written is in good shape to face the world. It doesn\u2019t tell you anything about how to create it \u2013 that\u2019s your endeavour \u2013 but it does give you a start. No more staring at a blank page.\n\nBonus gift: okay, you have to buy yourself this one, but it is the gift that keeps on giving: Ginny Reddish\u2019s Letting Go of the Words \u2013 the hands down best guide to web writing there is, with a ton of illustrative examples.\n\nOn the sixth day of Contentmas, Relly gave to me:\n\n6. Foundation-a-laying\n\nYesterday, we played with a page table for articles. Today, we are going to set the foundations for your new, spangly, spruced up, relaunched site (for when you\u2019re ready, of course). We\u2019ve checked out what we\u2019ve got, we\u2019ve thought about what we\u2019d like, we have a wish list for the future. Now is the time for a small reality check. \n\nBe realistic with yourself. Can you really give your site some attention every day? Record a short snippet of audio once a week? A photo diary post once a month? Look back at the wish list you made.\n\n\n\tWhat can you do?\n\tWhat can you aim for?\n\tWhat just isn\u2019t possible right now?\n\n\nAs much as we\u2019d all love to be producing a slick video podcast and screencast three times a week, it\u2019s better to set realistic expectations and work your way up.\n\nWhere does your site sit in your online world?\n\n\n\tDo you want it to be the hub of all your social interactions, a lifestream, a considered place of publication or a free for all?\n\tDo you want to have comments (do you have the personal resource to monitor comments?) or would you prefer conversation to happen via Twitter, Facebook or not at all?\n\tDoes this apply to all pages, posts and content types or just some?\n\tGet these things straight in your head and it\u2019s easier to know what sort of environment you want to create and what content you\u2019ll need to sustain it.\n\n\nGet your notebook again and think about specific topics you\u2019d like to cover, or aspects of a project you want to go into more, and how you can go ahead and do just that. 
A good motivator is to think what you\u2019ll get out of doing it, even if that is \u201cAnd I\u2019ll finally show the poxy $whatever_community that my $chosen_format is better than their $other_format.\u201d\n\nWhat topics have you really wanted to get off your chest? Look through your inventory again. What gaps are there in your content just begging to be filled?\n\nToday, you\u2019re going to give everyone the gift of your opinion. Find one of those things where someone on the internet is wrong and create a short but snappy piece to set them straight. Doesn\u2019t that feel good? Soon you\u2019ll be able to do this in a timely manner every time someone is wrong on the internet!\n\nBonus gift: we\u2019re halfway through, so I think something fun is in order. How about a man sledding naked down a hill in Brighton on a tea tray? Sometimes, even with a whole ton of content planning, it\u2019s the spontaneous stuff that is still the most fun to share.\n\nOn the seventh day of Contentmas, Relly gave to me:\n\n7. Styles-a-guiding\n\nNot colour style guides or brand style guides or code style guides. Content style guides. You could go completely to town and write yourself a full document defining every aspect of your site\u2019s voice and personality, plus declaring your view on contracted phrases and the Oxford comma, but this does seem a tad excessive. Unless you\u2019re writing an entire site as a fictional character, you probably know your own voice and vocabulary better than anyone. It\u2019s in your head, after all.\n\nInstead, equip yourself with a good global style guide (I like the Chicago Manual of Style because I can access it fully online, but the Associated Press (AP) Stylebook has a nifty iPhone app and, if I\u2019m entirely honest, I\u2019ve found a copy of Eats, Shoots and Leaves has set me right on all but the most technical aspects of punctuation). Next, pick a good dictionary and bookmark thesaurus.com. Then have a go at Kristina Halvorson\u2019s \u2018Voice and Tone\u2019 exercise from her book Content Strategy for the Web, to nail down what you\u2019d like your future content to be like:\n\nTo introduce the voice and tone qualities you\u2019re [looking to create], a good approach is to offer contrasting values. For example:\n\n\n\tProfessional, not academic.\n\tConfident, not arrogant.\n\tClever, not cutesy.\n\tSavvy, not hipster.\n\tExpert, not preachy.\n\n\n\nTake a look around some of your favourite sites and examine the writing and stylistic handling of content. What do you like? What do you want to emulate? What matches your values list?\n\nToday\u2019s gift to you is an idea. Create a \u2018swipe file\u2019 through Evernote or Delicious and save all the stuff you come across that, regardless of topic, makes you think, \u201cThat\u2019s really cool.\u201d This isn\u2019t the same as an Instapaper list you\u2019d like to read. This is stuff you have read or have seen that is worth looking at in closer detail.\n\n\n\tWhy is it so good?\n\tWhat is the language and style like?\n\tWhat impact does the typography have?\n\tHow does the imagery work to enhance the message?\n\n\nThis isn\u2019t about creating a personal brand or any such piffle. It\u2019s about learning to recognise how good content works and how to create something awesome yourself. 
Obviously, your ideas are brilliant, so take the time to understand how best to spring them on the unsuspecting public for easier world domination.\n\nBonus gift: a nifty style guide is a must when you do have to share content creation duties with others. Here is Leeds University\u2019s publicly available PDF version for you to take a gander at. I especially like the Rationale sections for chopping off dissenters at the knees. \n\nOn the eighth day of Contentmas, Relly gave to me:\n\n8. Times-a-making\n\nYou have an actual, real plan for what you\u2019d like to do with your site and how it is going to sound (and probably some ideas on how it\u2019s going to look, too). I hope you are full of enthusiasm and Getting Excited To Make Things. Just before we get going and do exactly that, we are going to make sure we have made time for this creative outpouring.\n\nHave you tried to blog once a week before and found yourself losing traction after a month or two? Are there a couple of podcasts lurking neglected in your archives? Whereas half of the act of running is showing up for training, half of creating is making time rather than waiting for it to become urgent. It\u2019s okay to write something and set a date to come back to it (which isn\u2019t the same as leaving it to decompose in your drafts folder).\n\nPutting a date in your calendar to do something for your site means that you have a forewarning to think of a topic to write about, and space in your schedule to actually do it. Crucially, you\u2019ve actually made some time for this content lark.\n\nTo do this, you need to think about how long it takes to get something out of the door/shipped/published/whatever you want to call it. It might take you just thirty minutes to record a podcast, but also a further hour to research the topic beforehand and another hour to edit and upload the clips. Suddenly, doing a thirty minute podcast every day seems a bit unlikely. But, on the flipside, it is easy to see how you could schedule that in three chunks weekly. \n\nPut it in your calendar. Do it, publish it, book yourself in for the next week. Keep turning up.\n\nToday my gift to you is the gift of time. Set up your own small content calendar, using your favourite calendar system, and schedule time to play with new ways of creating content, time to get it finished and time to get it on your site. Don\u2019t let good stuff go to your drafts folder to die of neglect.\n\nBonus gift: lots of writers swear by the concept of \u2018daily pages\u2019. That is, churning out whatever is in your head to see if there is anything worth building upon, or just to lose the grocery list getting in the way. 750words.com is a site built around this concept. Go have a play.\n\nOn the ninth day of Contentmas, Relly gave to me:\n\n9. Copy enhancing\n\nAn incredibly radical idea for day number nine. We are going to look at that list of permanent pages you made back on day three and rewrite the words first, before even looking at a colour palette or picking a font! Crazy as it sounds, doing it this way round could influence your design. It could shape the imagery you use. It could affect your choice of typography. IMAGINE THE POSSIBILITIES!\n\nLook at the page table from day five. Print out one for each of your homepage, about page, contact page, portfolio, archive, 404 page or whatever else you have. Use these as a place to brainstorm your ideas and what you\u2019d like each page to do for your site. 
Doodle in the margin, choose words you think sound fun to say, daydream about pictures you\u2019d like to use and colours you think would work, but absolutely, completely and utterly fill in those page tables to understand how much (or how little) content you\u2019re playing with and what you need to do to get to \u2018launch\u2019.\n\nThen, use them for guidance as you start to write. Don\u2019t skimp. Don\u2019t think that a fancy icon of an envelope encourages people to e-mail you. Use your words.\n\nPeople get antsy at this bit. Writing can be hard work and it\u2019s easy for me to say, \u201cGo on and write it then!\u201d I know this. I mean, you should see the faces I pull when I have to do anything related to coding. The closest equivalent would be when scientists have to stick their hands in big gloves attached to a glass box to do dangerous experiments.\n\nHere\u2019s today\u2019s gift, a little something about writing that I hope brings you comfort: \n\n\n\tTo write something fantastic you almost always have to write a rubbish draft first.\n\n\nNow, you might get lucky and write a \u2018good enough\u2019 draft first time and that\u2019s fab \u2013 you\u2019ve cut some time getting to \u2018fantastic\u2019. If, however, you\u2019ve always looked at your first attempt to write more than the bare minimum and sighed in despair, and resigned yourself to adding just a title, date and a screenshot, be cheered because you have taken the first step to being able to communicate with clarity, wit and panache.\n\nKeep going. Look at writing you admire and emulate it. Think about how you will lovingly design those words when they are done. Know that you can go back and change them. Check back with your page table to keep you on track. Do that first draft.\n\nBonus gift: becoming a better writer helps you to explain design concepts to clients.\n\nOn the tenth day of Contentmas, Relly gave to me:\n\n10. Ideas for keeping\n\nHurrah! You have something down on paper, ready to start evolving your site around it. Here\u2019s where the words and visuals and interaction start to come together. Because you have a plan, you can think ahead and do things you wouldn\u2019t be able to pull together otherwise.\n\n\n\tHow about finding a fresh-faced stellar illustrator on Dribbble to create you something perfect to pep up your contact page or visualize your witty statement on statements of work. A List Apart has been doing it for years and it hasn\u2019t worked out too badly for them, has it?\n\n\n\n\tWhat about spending this month creating a series of introductory tutorials on a topic, complete with screencasts and audio and give them a special home on your site?\n\n\n\n\tHow about putting in some hours creating a glorious about me page, with a biography, nice picture, and where you spend your time online?\n\n\n\n\tYou could even do the web equivalent of getting up in the attic and sorting out your site\u2019s search to make it easier to find things in your archives. Maybe even do some manual recommendations for relevant content and add them as calls to action.\n\n\n\n\tHow about writing a few awesome case studies with individual screenshots of your favourite work, and creating a portfolio that plays to your strengths? Don\u2019t just rely on the pretty pictures; use your words. Otherwise no-one understands why things are the way they are on that screenshot and BAM! you\u2019ll be judged on someone else\u2019s tastes. 
(Elliot has a head start on you for this, so get to it!)\n\n\n\n\tDo you have a serious archive of content? What\u2019s it like being a first-time visitor to your site? Could you write them a guide to introduce yourself and some of the most popular stuff on your site? Ali Edwards is a massively popular crafter and every day she gets new visitors who have found her multiple papercraft projects on Flickr, Vimeo and elsewhere, so she created a welcome guide just for them.\n\n\n\n\tWhat about your microcopy? Can you improve on your blogging platform\u2019s defaults for search, comment submission and labels? I\u2019ll bet you can.\n\n\n\n\tMaybe you could plan a collaboration with other like-minded souls. A week of posts about the more advanced wonders of HTML5 video. A month-long baton-passing exercise in extolling the virtues of IE (shut up, it could happen!). Just spare me any more online advent calendars.\n\n\n\n\tWatch David McCandless\u2019s TED talk on his jawdropping infographic work and make something as awesome as the Billion Dollar O Gram. I dare you.\n\n\nBonus gift: Grab a copy of Brian Suda\u2019s Designing with Data, in print or PDF if Santa didn\u2019t put one in your stocking, and make that awesome something with some expert guidance.\n\nOn the eleventh day of Contentmas, Relly gave to me:\n\n11. Pixels pushing\n\nOh, go on then. Make a gorgeous bespoke velvet-lined container for all that lovely content. It\u2019s proper informed design now, not just decoration. Mr. Zeldman says so.\n\nBonus gift: I made you a movie! If books were designed like websites.\n\nOn the twelfth day of Contentmas, Relly gave to me:\n\n12. Delighters delighting\n\nThe Epiphany is upon us; your site is now well on its way to being a beautiful, sustainable hub of content and you have a date in your calendar to help you keep that resolution of blogging more. What now?\n\n\n\tKeep on top of your inventory. One day it will save your butt, I promise.\n\tKeep making a little bit of time regularly to create something new: an article; an opinion piece; a small curation of related links; a photo diary; a new case study. That\u2019s easier than an annual content bootcamp for sure.\n\tAnd today\u2019s gift: look for ways to play with that content and make something a bit special. Stretch yourself a little. It\u2019ll be worth it.\n\n\nBonus gift: Paul Annett\u2019s presentation on Ooh, that\u2019s clever: Delighters in design from SxSW 09.\n\nAll my favourite designers and developers have their own unique styles and touches. It\u2019s what sets them apart. My very, very favourites have an eloquence and expression that they bring to their sites and to their projects. I absolutely love to explore a well-crafted, well-written site \u2013 don\u2019t we all? I know the time it takes. I appreciate the time it takes. But the end results are delicious. Do please share your spangly, refreshed sites with me in the comments.\n\nCatch me on Twitter, I\u2019m @RellyAB, and I\u2019ve been your host for these Twelve Days of Contentmas.", "year": "2010", "author": "Relly Annett-Baker", "author_slug": "rellyannettbaker", "published": "2010-12-21T00:00:00+00:00", "url": "https://24ways.org/2010/a-contentmas-epiphany/", "topic": "content"} {"rowid": 251, "title": "The System, the Search, and the Food Bank", "contents": "Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center. 
\nOn the belt are plastic bins filled with assortments of shelf-stable food\u2014one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like \u201cBottled Water\u201d or \u201cCereal\u201d or \u201cCandy.\u201d \nSuch was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins.\nI could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there\u2019s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into.\nLook, I can\u2019t quite explain it: I just know that I love sorting, organizing, and classifying things\u2014groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask?\nThe opportunity to create meaning from nothing is at the core of my excitement, which is why I\u2019ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. \u201cI can\u2019t believe they\u2019re letting me do this,\u201d I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey.\nThe jumble of donated items coming into the center need to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It\u2019s not just a nice-to-have that we spent our morning separating cookies from carrots\u2014it\u2019s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we\u2019re talking about donated groceries or\u2014there it is\u2014web content.\nThis happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.) \nIs X a Y? was the question at the heart of our food sorting\u2014but it\u2019s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves:\n\nWhat is the criteria that defines Y?\nDoes X meet that criteria?\n\nWe don\u2019t usually articulate it so concretely because it\u2019s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn\u2019t require much thought to place them in the Canned Soup box. Boxed broth, on the other hand, wasn\u2019t allowed, causing a small cognitive hiccup\u2014this X is NOT a Y\u2014that sometimes meant having to re-sort our boxes.\nOn the web, we\u2019re interested\u2014I would hope\u2014in reducing cognitive hiccups for our users. 
We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access. After all, most of the time, the process of using the internet is one of uniting a question with an answer\u2014Is this article from a trustworthy source? Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y?\nWe have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means\u2014well, this means a lot of things, and I\u2019ve got limited space here, so let\u2019s focus on these three lessons from the food bank:\n\n\nUse plain, familiar language. This advice seems to be given constantly, but that\u2019s because it\u2019s solid and it\u2019s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it\u2019s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I\u2019ve seen it), or other internally focused language when trying to provide wayfinding for users. Choose words that your audience knows\u2014not only will they be more likely to spot what they\u2019re looking for on your site or app, but you\u2019ll turn up more often in search results.\n\n\nCreate consistency in all things. Missteps in consistency look like my earlier chicken broth example\u2014changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts\u2014these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall.\n\nMake the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it.\n\nThis idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. Like any new visitor to a website, we came into the system not knowing the full picture. We didn\u2019t know every category label around the conveyer belt, nor what criteria each category warranted. 
\nThe system wasn\u2019t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We\u2019d ask staff members. We\u2019d ask more seasoned volunteers. We\u2019d ask each other. We\u2019d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn.\nThe more we learned, the easier the sorting became. That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it.\nThe same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away\u2014or, worse, misinform or hurt them. \nThis is why we must make choices that prioritize transparency, consistency, and familiarity. Our users want to know if X is a Y\u2014well-sorted content can give them the answer.", "year": "2018", "author": "Lisa Maria Martin", "author_slug": "lisamariamartin", "published": "2018-12-16T00:00:00+00:00", "url": "https://24ways.org/2018/the-system-the-search-and-the-food-bank/", "topic": "content"} {"rowid": 252, "title": "Turn Jekyll up to Eleventy", "contents": "Sometimes it pays not to over complicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper \u2014 and more secure \u2014 option. \nThe JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it\u2019s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.\nBy combining three approachable languages \u2014 Markdown for content, YAML for data and Liquid for templating \u2014 the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it\u2019s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself \u201cthis, but in Node\u201d. Thankfully, one of Santa\u2019s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.\nIntroducing Eleventy\nEleventy is a more flexible alternative Jekyll. Besides being written in Node, it\u2019s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).\nAs content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I\u2019ve discovered, there are a few gotchas. 
If you\u2019ve been considering making the switch, here are a few tips and tricks to help you on your way1.\nNote: Throughout this article, I\u2019ll be converting Matt Cone\u2019s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:\ngit clone https://github.com/mattcone/markdown-guide.git\ncd markdown-guide\nBefore you start\nIf you\u2019ve used tools like Grunt, Gulp or Webpack, you\u2019ll be familiar with Node.js but, if you\u2019ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now\u2019s the time to install Node.js and set up your project to work with its package manager, NPM:\n\nInstall Node.js:\n\nMac: If you haven\u2019t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.\nWindows: Download the Windows installer from the Node.js website and follow the instructions.\n\nInitiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems\u2019s Gemfile, this file contains a list of your project\u2019s third-party dependencies.\n\nIf you\u2019re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn\u2019t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.\nInstalling Eleventy\nWith Node.js installed and your project setup to work with NPM, we can now install Eleventy as a dependency:\nnpm install --save-dev @11ty/eleventy\nIf you open package.json you should see the following:\n\u2026\n\"devDependencies\": {\n \"@11ty/eleventy\": \"^0.6.0\"\n}\n\u2026\nWe can now run Eleventy from the command line using NPM\u2019s npx command. For example, to covert the README.md file to HTML, we can run the following:\nnpx eleventy --input=README.md --formats=md\nThis command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition.\nConfiguration\nWhereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we\u2019ll see later on.\nWe\u2019ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:\nmodule.exports = function(eleventyConfig) {\n return {\n dir: {\n input: \"./\", // Equivalent to Jekyll's source property\n output: \"./_site\" // Equivalent to Jekyll's destination property\n }\n };\n};\nA few other things to bear in mind:\n\n\nWhereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).\n\nBy default, Eleventy uses markdown-it to parse Markdown. 
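The next paragraph explains when that default needs overriding but, as a taste of what doing so looks like, here is a rough sketch (not part of the Markdown Guide conversion) that hands Eleventy a pre-configured parser via its setLibrary method. It assumes you have installed markdown-it and, purely as an example, the markdown-it-footnote plugin:\nconst markdownIt = require('markdown-it');\nconst markdownItFootnote = require('markdown-it-footnote');\n\nmodule.exports = function(eleventyConfig) {\n // Build a markdown-it instance with the options and plugins we need\n const markdownLibrary = markdownIt({ html: true }).use(markdownItFootnote);\n\n // Ask Eleventy to use this instance whenever it converts Markdown\n eleventyConfig.setLibrary('md', markdownLibrary);\n\n // ...the dir settings and return value shown earlier stay as they were\n};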
If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you\u2019ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins.\n\nLayouts\nOne area Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).\nWanting to keep our layouts together, we\u2019ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy\u2019s config:\nmodule.exports = function(eleventyConfig) {\n // Aliases are in relation to the _includes folder\n eleventyConfig.addLayoutAlias('about', 'layouts/about.html');\n eleventyConfig.addLayoutAlias('book', 'layouts/book.html');\n eleventyConfig.addLayoutAlias('default', 'layouts/default.html');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n }\n };\n}\nDetermining which template language to use\nEleventy will transform Markdown (.md) files using Liquid by default, but we\u2019ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.\nVariables\nOn the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.\nSite variables\nAlongside build settings, Jekyll let\u2019s you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:\ntitle: \"Markdown Guide\"\nurl: https://www.markdownguide.org\nbaseurl: \"\"\nrepo: http://github.com/mattcone/markdown-guide\ncomments: false\nauthor:\n name: \"Matt Cone\"\nog_locale: \"en_US\"\nEleventy\u2019s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.\n{\n \"title\": \"Markdown Guide\",\n \"url\": \"https://www.markdownguide.org\",\n \"baseurl\": \"\",\n \"repo\": \"http://github.com/mattcone/markdown-guide\",\n \"comments\": false,\n \"author\": {\n \"name\": \"Matt Cone\"\n },\n \"og_locale\": \"en_US\"\n}\nPage variables\nThe table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) 
get prefixed with the page.* namespace:\n\n\n\nJekyll\nEleventy\n\n\n\n\npage.url\npage.url\n\n\npage.date\npage.date\n\n\npage.path\npage.inputPath\n\n\npage.id\npage.outputPath\n\n\npage.name\npage.fileSlug\n\n\npage.content\ncontent\n\n\npage.title\ntitle\n\n\npage.foobar\nfoobar\n\n\n\nWhen iterating through pages, frontmatter values are available via the data object while content is available via templateContent:\n\n\n\nJekyll\nEleventy\n\n\n\n\nitem.url\nitem.url\n\n\nitem.date\nitem.date\n\n\nitem.path\nitem.inputPath\n\n\nitem.name\nitem.fileSlug\n\n\nitem.id\nitem.outputPath\n\n\nitem.content\nitem.templateContent\n\n\nitem.title\nitem.data.title\n\n\nitem.foobar\nitem.data.foobar\n\n\n\nIdeally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.\nPagination variables\nWhereas Jekyll\u2019s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:\n\n\n\nJekyll\nEleventy\n\n\n\n\npaginator.page\npagination.pageNumber\n\n\npaginator.per_page\npagination.size\n\n\npaginator.posts\npagination.items\n\n\npaginator.previous_page_path\npagination.previousPageHref\n\n\npaginator.next_page_path\npagination.nextPageHref\n\n\n\nFilters\nAlthough Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few \u2014 more than can be covered by this article \u2014 but you can replicate them by using Eleventy\u2019s addFilter configuration option. Let\u2019s convert two used by our Markdown Guide: jsonify and where.\nThe jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:\n// {{ variable | jsonify }}\neleventyConfig.addFilter('jsonify', function (variable) {\n return JSON.stringify(variable);\n});\nJekyll\u2019s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:\n{{ site.members | where: \"graduation_year\",\"2014\" }}\nTo account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:\n// {{ array | where: key,value }}\neleventyConfig.addFilter('where', function (array, key, value) {\n return array.filter(item => {\n const keys = key.split('.');\n const reducedKey = keys.reduce((object, key) => {\n return object[key];\n }, item);\n\n return (reducedKey === value ? item : false);\n });\n});\nThere\u2019s quite a bit going on within this filter, but I\u2019ll try to explain. Essentially we\u2019re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it\u2019s removed. Phew!\nIncludes\nAs with filters, Jekyll provides a set of tags that aren\u2019t strictly part of Liquid either. This includes one of the most useful, the include tag. 
LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you\u2019re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:\n\n{% include include.html value=\"key\" %}\n\n\n{{ include.value }}\nin Eleventy, you would do this:\n\n{% include \"include.html\", value: \"key\" %}\n\n\n{{ value }}\nA downside of Shopify\u2019s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.\nTweaking Liquid\nYou may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS\u2019s dynamicPartials option to false. Additionally, Eleventy doesn\u2019t support the include_relative tag, meaning you can\u2019t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option. \nThankfully, Eleventy allows us to pass options to LiquidJS:\neleventyConfig.setLiquidOptions({\n dynamicPartials: false,\n root: [\n '_includes',\n '.'\n ]\n});\nCollections\nJekyll\u2019s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.\nCollections in Jekyll\nIn Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. Our Markdown Guide has two collections:\ncollections:\n - basic-syntax\n - extended-syntax\nThese correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:\n{% for syntax in site.extended-syntax %}\n {{ syntax.title }}\n{% endfor %}\nCollections in Eleventy\nThere are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:\n---\ntitle: Strikethrough\nsyntax-id: strikethrough\nsyntax-summary: \"~~The world is flat.~~\"\ntag: extended-syntax\n---\nWe can then iterate over tagged content like this:\n{% for syntax in collections.extended-syntax %}\n {{ syntax.data.title }}\n{% endfor %}\nEleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):\neleventyConfig.addCollection('basic-syntax', collection => {\n return collection.getFilteredByGlob('_basic-syntax/*.md');\n});\n\neleventyConfig.addCollection('extended-syntax', collection => {\n return collection.getFilteredByGlob('_extended-syntax/*.md');\n});\nWe can extend this further. For example, say we wanted to sort a collection by the display_order property in our document\u2019s frontmatter. 
We could take the results of collection.getFilteredByGlob and then use JavaScript\u2019s sort method to sort the result:\neleventyConfig.addCollection('example', collection => {\n return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {\n return a.data.display_order - b.data.display_order;\n });\n});\nHopefully, this gives you just a hint of what\u2019s possible using this approach.\nUsing directory data to manage defaults\nBy default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:\n---\ntitle: Lists\nsyntax-id: lists\napi: \"no\"\npermalink: /basic-syntax/lists.html\n---\nAgain, this is probably not something we want to manage on a file-by-file basis but again, Eleventy has features that can help: directory data and permalink variables.\nFor example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:\n{\n \"layout\": \"syntax\",\n \"tag\": \"basic-syntax\",\n \"permalink\": \"basic-syntax/{{ title | slug }}.html\"\n}\nHowever, Markdown Guide doesn\u2019t publish syntax examples at individual permanent URLs, it merely uses content files to store data. So let\u2019s change things around a little. No longer tied to Jekyll\u2019s rules about where collection folders should be saved and how they should be labelled, we\u2019ll move them into a folder called _content:\nmarkdown-guide\n\u2514\u2500\u2500 _content\n \u251c\u2500\u2500 basic-syntax\n \u251c\u2500\u2500 extended-syntax\n \u251c\u2500\u2500 getting-started\n \u2514\u2500\u2500 _content.json\nWe will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:\n{\n \"permalink\": false\n}\nStatic files\nEleventy only transforms files whose template language it\u2019s familiar with. But often we may have static assets that don\u2019t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:\nmodule.exports = function(eleventyConfig) {\n \u2026\n\n // Copy the `assets` directory to the compiled site folder\n eleventyConfig.addPassthroughCopy('assets');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n },\n passthroughFileCopy: true\n };\n}\nFinal considerations\nAssets\nUnlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts \u2014 we have plenty of choices in that department already. If you\u2019ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into Javascript, you will need to research alternative options, options which are beyond the scope of this article, sadly.\nPublishing to GitHub Pages\nOne of the benefits of Jekyll is its deep integration with GitHub Pages. 
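The next paragraph surveys the options, but as one concrete (and entirely assumed) example, the third-party gh-pages package on npm can push a folder to a gh-pages branch for you. After installing it with npm install --save-dev gh-pages, the scripts section of package.json might look something like this:\n{\n \"scripts\": {\n \"build\": \"eleventy\",\n \"deploy\": \"eleventy && gh-pages -d _site\"\n }\n}\nRunning npm run deploy would then generate the site and publish the contents of _site; treat this as a sketch to adapt rather than a recommendation.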
To publish an Eleventy generated site \u2014 or any site not built with Jekyll \u2014 to GitHub Pages can be quite involved, but typically involves copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It\u2019s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember; you can publish a static site almost anywhere!\n\nGoing one louder\nIf you\u2019ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder why it can be prudent to avoid jumping aboard bandwagons. \nWhile it\u2019s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy\u2019s appeal, it\u2019s only a year old so has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it.\nI moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that\u2019s perfectly fine! \nBut these go to 11.\n\n\n\n\nInformation provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5\u00a0\u21a9", "year": "2018", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2018-12-11T00:00:00+00:00", "url": "https://24ways.org/2018/turn-jekyll-up-to-eleventy/", "topic": "content"} {"rowid": 275, "title": "Context First: Web Strategy in Four Handy Ws", "contents": "Many, many years ago, before web design became my proper job, I trained and worked as a journalist. I studied publishing in London and spent three fun years learning how to take a few little nuggets of information and turn them into a story. I learned a bunch of stuff that has all been a huge help to my design career. Flatplanning, layout, typographic theory. All of these disciplines have since translated really well to web design, but without doubt the most useful thing I learned was how to ask difficult questions.\n\nPretty much from day one of journalism school they hammer into you the importance of the Five Ws. Five disarmingly simple lines of enquiry that eloquently manage to provide the meat of any decent story. And with alliteration thrown in too. For a young journo, it\u2019s almost too good to be true.\n\nWho? What? Where? When? Why? It seems so obvious to almost be trite but, fundamentally, any story that manages to answer those questions for the reader is doing a pretty good job. You\u2019ll probably have noticed feeling underwhelmed by certain news pieces in the past \u2013 disappointed, like something was missing. Some irritating oversight that really lets the story down. No doubt it was one of the Ws \u2013 those innocuous little suckers are generally only noticeable by their absence, but they sure get missed when they\u2019re not there. \n\nQuestion everything\n\nI\u2019ve always been curious. 
An inveterate tinkerer with things and asker of dopey questions, often to the point of abject annoyance for anyone unfortunate enough to have ended up in my line of fire. So, naturally, the Five Ws started drifting into other areas of my life. I\u2019d scrutinize everything, trying to justify or explain my rationale using these Ws, but I\u2019d also find myself ripping apart the stuff that clearly couldn\u2019t justify itself against the same criteria.\n\nSo when I started working as a designer I applied the same logic and, sure enough, the Ws pretty much mapped to the exact same needs we had for gathering requirements at the start of a project. It seemed so obvious, such a simple way to establish the purpose of a product. What was it for? Why we were making it? And, of course, who were we making it for? It forced clients to stop and think, when really what they wanted was to get going and see something shiny. Sometimes that was a tricky conversation to have, but it\u2019s no coincidence that those who got it also understood the value of strategy and went on to have good solid products, while those that didn\u2019t often ended up with arrogantly insular and very shiny but ultimately unsatisfying and expendable products. Empty vessels make the most noise and all that\u2026\n\nContent first\n\nI was both surprised and pleased when the whole content first idea started to rear its head a couple of years back. Pleased, because without doubt it\u2019s absolutely the right way to work. And surprised, because personally it\u2019s always been the way I\u2019ve done it \u2013 I wasn\u2019t aware there was even an alternative way. Content in some form or another is the whole reason we were making the things we were making. I can\u2019t even imagine how you\u2019d start figuring out what a site needs to do, how it should be structured, or how it should look without a really good idea of what that content might be. It baffles me still that this was somehow news to a lot of people. What on earth were they doing? Design without purpose is just folly, surely?\n\nIt\u2019s great to see the idea gaining momentum but, having watched it unfold, it occurred to me recently that although it\u2019s fantastic to see a tangible shift in thinking \u2013 away from those bleak times, where making things up was somehow deemed an appropriate way to do things \u2013 there\u2019s now a new bad guy in town.\n\nWith any buzzword solution of the moment, there\u2019s always a catch, and it seems like some have taken the content first approach a little too literally. By which I mean, it\u2019s literally the first thing they do. The project starts, there\u2019s a very cursory nod towards gathering requirements, and off they go, cranking content. Writing copy, making video, commissioning illustrations.\n\nAll that\u2019s happened is that the \u2018making stuff up\u2019 part has shifted along the line, away from layout and UI, back to the content. \n\nStarting is too easy\n\nI can\u2019t remember where I first heard that phrase, but it\u2019s a great sentiment which applies to so much of what we do on the web. The medium is so accessible and to an extent disposable; throwing things together quickly carries far less burden than in any other industry. We\u2019re used to tweaking as we go, changing bits, iterating things into shape. The ubiquitous beta tag has become the ultimate caveat, and has made the unfinished and unpolished acceptable. Of course, that can work brilliantly in some circumstances. 
Occasionally, a product offers such a paradigm shift it\u2019s beyond the level of deep planning and prelaunch finessing we\u2019d ideally like. But, in the main, for most client sites we work on, there really is no excuse not to do things properly. To ask the tricky questions, to challenge preconceptions and really understand the Ws behind the products we\u2019re making before we even start. \n\nThe four Ws\n\nFor product definition, only four of the five Ws really apply, although there\u2019s a lot of discussion around the idea of when being an influencing factor. For example, the context of a user\u2019s engagement with your product is something you can make a call on depending on the specifics of the project.\n\nSo, here\u2019s my take on the four essential Ws. I\u2019ll point out here that, of course, these are not intended to be autocratic dictums. Your needs may differ, your clients\u2019 needs may differ, but these four starting points will get you pretty close to where you need to be.\n\nWho \n\nIt\u2019s surprising just how many projects start without a real understanding of the intended audience. Many clients think they have an idea, but without really knowing \u2013 it\u2019s presumptive at best, and we all know what presumption is the mother of, right? Of course, we can\u2019t know our audiences in the same way a small shop owner might know their customers. But we can at least strive to find out what type of people are likely to be using the product. I\u2019m not talking about deep user research. That should come later.\n\nThese are the absolute basics. What\u2019s the context for their visit? How informed are they? What\u2019s their level of comprehension? Are they able to self-identify and relate to categories you have created? I could go on, and it changes on a per-project basis. You\u2019ll only find this out by speaking to them, if not in person, then indirectly through surveys, questionnaires or polls. The mechanism is less important than actually reaching out and engaging with them, because without that understanding it\u2019s impossible to start to design with any empathy.\n\nWhat\n\nOnce you become deeply involved directly with a product or service, it\u2019s notoriously difficult to see things as an outsider would. You learn the thing inside and out, you develop shortcuts and internal phraseology. Colloquialisms creep in. You become too close. So it\u2019s no surprise when clients sometimes struggle to explain what it is their product actually does in a way that others can understand.\n\nOften products are complex but, really, the core reasons behind someone wanting to use that product are very simple. There\u2019s a value proposition for the customer and, if they choose to engage with it, there\u2019s a value exchange. If that proposition or exchange isn\u2019t transparent, then people become confused and will likely go elsewhere. Make sure both your client and you really understand what that proposition is and, in turn, what the expected exchange should be. In a nutshell: what is the intended outcome of that engagement? Often the best way to do this is strip everything back to nothing. Verbosity is rife on the web. Just because it\u2019s easy to create content, that shouldn\u2019t be a reason to do so. Figure out what the value proposition is and then reintroduce content elements that genuinely help explain or present that to a level that is appropriate for the audience. \n\nWhy \n\nIn advertising, they talk about the truths behind a product or service. 
Truths can be both tangible or abstract, but the most important part is the resonance those truths hit with a customer. In a digital product or service those truths are often exposed as benefits. Why is this what I need? Why will it work for me? Why should I trust you? The why is one of the more fluffy Ws, yet it\u2019s such an important one to nail. Clients can get prickly when you ask them to justify the why behind their product, but it\u2019s a fantastic way to make sure the value proposition is clear, realistic and meets with the expectations of both client and customer.\n\nIt\u2019s our job as designers to question things: we\u2019re not just a pair of hands for clients. Just recently I spoke to a potential client about a site for his business. I asked him why people would use his product and also why his product seemed so fractured in its direction. He couldnt answer that question so, instead of ploughing on regardless, he went back to his directors and is now re-evaluating that business. It was awkward but he thanked me and hopefully he\u2019ll have a better product as a result.\n\nWhere\n\nIn this instance, where is not so much a geographical thing, although in some cases that level of context may indeed become a influencing factor\u2026 The where we\u2019re talking about here is the position of the product in relation to others around it. By looking at competitors or similar services around the one you are designing, you can start to get a sense for many of the things that are otherwise hard to pin down or have yet to be defined. For example, in a collection of sites all selling cars, where does yours fit most closely? Where are the overlaps? How are they communicating to their customers? How is the product range presented or categorized?\n\nIt\u2019s good to look around and see how others are doing it. Not in a quest for homogeneity but more to reference or to avoid certain patterns that may or may not make sense for your own particular product. Clients often strive to be different for the sake of it. They feel they need to provide distinction by going against the flow a bit. We know different. We know users love convention. They embrace familiar mental models. They\u2019re comfortable with things that they\u2019ve experienced elsewhere. By showing your client that position is a vital part of their strategy, you can help shape their product into something great. \n\nTo conclude\n\nSo there we have it \u2013 the four Ws. Each part tells a different and vital part of the story you need to be able to make a really good product. It might sound like a lot of work, particularly when the client is breathing down your neck expecting to see things, but without those pieces in place, the story you\u2019re building your product on, and the content that you\u2019re creating to form that product can only ever fit into one genre. Fiction.", "year": "2011", "author": "Alex Morris", "author_slug": "alexmorris", "published": "2011-12-10T00:00:00+00:00", "url": "https://24ways.org/2011/context-first/", "topic": "content"} {"rowid": 287, "title": "Extracting the Content", "contents": "As we throw away our canvas in approaches and yearn for a content-out process, there remains a pain point: the Content. It is spoken of in the hushed tones usually reserved for Lord Voldemort. 
The-thing-that-someone-else-is-responsible-for-that-must-not-be-named.\n\nDesigners and developers have been burned before by not knowing what the Content is, how long it is, what style it is and when the hell it\u2019s actually going to be delivered, in internet eons past. Warily, they ask clients for it. But clients don\u2019t know what to make, or what is good, because no one taught them this in business school. Designers struggle to describe what they need and when, so the conversation gets put off until it\u2019s almost too late, and then everyone is relieved that they can take the cop-out of putting up a blog and maybe some product descriptions from the brochure.\n\nThe Content in content out.\n\nI\u2019m guessing, as a smart, sophisticated, and, may I say, nicely-scented reader of the honourable and venerable tradition of 24 ways, that you sense something better is out there. Bunches of boxes to fill in just don\u2019t cut it any more in a responsive web design world. The first question is, how are you going to design something to ensure users have the easiest access to the best Content, if you haven\u2019t defined at the beginning what that Content is? Of course, it\u2019s more than possible that your clients have done lots of user research before approaching you to start this project, and have a plethora of finely tuned Content for you to design with.\n\nHave you finished laughing yet? Alright then. Let\u2019s just assume that, for whatever reason of gross oversight, this hasn\u2019t happened. What next?\n\nBringing up Content for the first time with a client is like discussing contraception when you\u2019re in a new relationship. It might be awkward and either party would probably rather be doing something else, but it needs to be broached before any action happens (that, and it\u2019s disastrous to assume the other party has the matter in hand). If we can\u2019t talk about it, how can we expect people to be doing it right and not making stupid mistakes? That being the case, how do we talk about Content? Let\u2019s start by finding a way to talk about it without blushing and scuffing our shoes. And there\u2019s a reason I\u2019ve been treating Content as a Proper Noun. \n\nThe first step, and I mean really-first-step-way-back-at-the-beginning-of-the-project-while-you-are-still-scoping-out-what-the-hell-you-might-do-for-each-other-and-it\u2019s-still-all-a-bit-awkward-like-a-first-date, is for you to explain to the client how important it is that you, together, work out what is important to your users as part of the user experience design, so that your users get the best user experience. The trouble is that, in most cases, this would lead to blank stares, possibly followed by a light cough and a query about using Comic Sans because it seems friendly.\n\nLet\u2019s start by ensuring your clients understand the task ahead. You see, all the time we talk about the Content we do our clients a big disservice. Content is poorly defined. It looms over a project completion point like an unscalable (in the sense of a dozen stacked Kilimanjaros), seething, massive, singular entity. The Content.\n\nDefining the problem. 
\n\nWe should really be thinking of the Content as \u2018contents\u2019; as many parts that come together to form a mighty experience, like hit 90s kids\u2019 TV show Mighty Morphin Power Rangers*.\n\n*For those of you who might have missed the Power Rangers, they were five teenagers with attitude, each given crazy mad individual skillz and a coloured lycra suit from an alien overlord. In return, they had to fight a new monster of the week using their abilities and weaponry in sync (even if the audio was not) and then, finally, in thrilling combination as a Humongous Mechanoid Machine of Awesome. They literally joined their individual selves, accessories and vehicles into a big robot. It was a toy manufacturer\u2019s wet dream.\n\nSo, why do I say Content is like the Power Rangers? Because Content is not just a humongous mecha. It is a combination of well-crafted pieces of contents that come together to form a well-crafted humongous mecha. Of Content.\n\nThe Red Power Ranger was always the leader. You can imagine your text contents, found on about pages, product descriptions, blog articles, and so on, as being your Red Power Ranger.\n\nMaybe your pictures are your Yellow Power Ranger; video is Blue (not used as much as the others, but really impressive when given a good storyline); maybe Pink is your infographics (it\u2019s wrong to find it sexier than the other equally important Rangers, but you kind of do anyway). And so on. \n\nThese bits of content \u2013 Red Text Ranger, Yellow Picture Ranger and others \u2013 often join together on a page, like they are teaming up to fight the bad guy in an action scene, and when they all come together (your standard workaday huge mecha) in a launched site, that\u2019s when Content becomes an entity.\n\nWhile you might have a vision for the whole site, Content rarely works that way. Of course, you keep your eye on the bigger prize, the completion of your mega robot, but to get there you need to assemble your working parts, the cogs and springs of contents that will mesh together to finally create your Humongous Mecha of Content. You create parts and join them to form a whole. (It\u2019s rarely seamless; often we need to adjust as we go, but we can create our Mecha\u2019s blueprint by making sure we have all the requisite parts.)\n\nThe point here is the order these parts were created. No alien overlord plans a Humongous Mechanoid and then thinks, \u201cGee, how can I split this into smaller fighting units powered by teenagers in snazzy shiny suits?\u201d No toy manufacturer goes into production of a mega robot, made up of model mecha vehicles with detachable arsenal, without thinking how they will easily fit back together to form the \u2018Buy all five now to create the mega robot\u2019 set. No good contents are created as a singular entity and chunked up to be slotted in to place any which way, into the body of a site.\n\nThink contents, not the Content. Think of contents as smaller units, or as a plural. The Content is what you have at the end. The contents are what you are creating and they are easy to break down. You are no longer scaling the unscalable. You can draw the map and plot the path, page by page, section by section.\n\nThe page table is your friend\n\nTo do this, I use a page table. A page table is a simple table template you can create in the word processor of your choice, that you use to tell you everything about the contents of a page \u2013 everything except the contents itself. 
\n\nHere\u2019s a page table I created for an employee\u2019s guide to redundancy in the alpha.gov.uk website:\n\nGuide to redundancy for employees\n\n\tPage objective: Provide specific information for employees who are facing redundancy about the process, their options and next steps.\n\tSource content: directgov page on Redundancy.\n\tScope: In scope\n\n\tPage title: An employee\u2019s guide to redundancy\n\tPriority content: Message: You have rights as an employee facing redundancy. Method: A guide written in plain English, with links to appropriate additional content. A video guide (out of scope). Covers the stages of redundancy and rights for those in trade unions and not in trade unions. Glossary of unfamiliar terms. Call to action: Read full guide, act to explore redundancy actions, benefits or new employment. Assets: link to redundancy calculator.\n\tSecondary: Related items, or popular additional links. Additional tools, such as search and suggestions. Location set v not set states; microcopy encouraging location set where location may make a difference to the content \u2013 ie, Scotland/Northern Ireland.\n\tTertiary: Footer and standard links.\n\n\tContent creation: Content exists but was created within the constraints of the previous CMS. Review, correct and edit where necessary.\n\tMaintenance: should be flagged for review upon advice from Department of Work and Pensions, and annually.\n\tTechnology/Publishing/Policy implications: Should be reviewed once the glossary styles have been decided. No video guide in scope at this time, so languages should be simple and screen reader friendly.\n\tReliance on third parties: None, all content and source exists in house.\n\tOutstanding questions: None.\n\nDownload a copy of this page table\n\nThis particular page table template owes a lot to Brain Traffic\u2019s version found in Kristina Halvorson\u2019s book Content Strategy for the Web. With smaller clients than, say, the government, I might use something a bit more casual. With clients who like timescales and deadlines, I might turn it into a covering sheet, with signatures and agreements from two departments who have to work together to get the piece done on time.\n\nI use page tables, and the process of working through them, to reassure clients that I understand the task they face and that I can help them break it down section by section, page stack to page, down to product descriptions and interaction copy. About 80% of my clients break into relieved smiles. Most clients want to work with you to produce something good, they just don\u2019t understand how, and they want you to show them the mountain path on the map. With page tables, clients can understand that with baby steps they can break down their content requirements and commission content they need in time for the designers to work with it (as opposed to around it). If I was Santa, these clients would be on my nice list for sure.\n\nMy own special brand of Voldemort-content-evilness comes in how I wield my page tables for the other 20%. Page tables are not always thrilling, I\u2019ll admit. Sometimes they get ignored in favour of other things, yet they are crucial to the continual growth and maintenance of a truly content-led site. For these naughty list clients who, even when given the gift of the page table, continually say \u201cOoh, yes. Content. Right\u201d, I have a special gift.
I have a stack of recycled paper under my desk and a cheap black and white laser printer. And I print a blank page table for every conceivable page I can find on the planned redesign. If I\u2019m feeling extra nice, I hole punch them and put them in a fat binder. \n\nThere is nothing like saying, \u201cThis is all the contents you need to have in hand for launch\u201d, and the satisfying thud the binder makes as it hits the table top, to galvanize even the naughtiest clients to start working with you to create the content you need to really create in a content-out way.", "year": "2011", "author": "Relly Annett-Baker", "author_slug": "rellyannettbaker", "published": "2011-12-15T00:00:00+00:00", "url": "https://24ways.org/2011/extracting-the-content/", "topic": "content"} {"rowid": 291, "title": "Information Literacy Is a Design Problem", "contents": "Information literacy, wrote Dr. Carol Kulthau in her 1987 paper \u201cInformation Skills for an Information Society,\u201d is \u201cthe ability to read and to use information essential for everyday life\u201d\u2014that is, to effectively navigate a world built on \u201ccomplex masses of information generated by computers and mass media.\u201d\nNearly thirty years later, those \u201ccomplex masses of information\u201d have only grown wilder, thornier, and more constant. We call the internet a firehose, yet we\u2019re loathe to turn it off (or even down). The amount of information we consume daily is staggering\u2014and yet our ability to fully understand it all remains frustratingly insufficient. \nThis should hit a very particular chord for those of us working on the web. We may be developers, designers, or strategists\u2014we may not always be responsible for the words themselves\u2014but we all know that communication is much more than just words. From fonts to form fields, every design decision that we make changes the way information is perceived\u2014for better or for worse.\nWhat\u2019s more, the design decisions that we make feed into larger patterns. They don\u2019t just affect the perception of a single piece of information on a single site; they start to shape reader expectations of information anywhere. Users develop cumulative mental models of how websites should be: where to find a search bar, where to look at contact information, how to filter a product list. \nAnd yet: our models fail us. Fundamentally, we\u2019re not good at parsing information, and that\u2019s troubling. Our experience of an \u201cinformation society\u201d may have evolved, but the skills Dr. Kuhlthau spoke of are even more critical now: our lives depend on information literacy.\nPatterns from words\nLet\u2019s start at the beginning: with the words. Our choice of words can drastically alter a message, from its emotional resonance to its context to its literal meaning. Sometimes we can use word choice for good, to reinvigorate old, forgotten, or unfairly besmirched ideas.\nOne time at a wedding bbq we labeled the coleslaw BRASSICA MIXTA so people wouldn\u2019t skip it based on false hatred.\u2014 Eileen Webb (@webmeadow) November 27, 2016\n\nWe can also use clever word choice to build euphemisms, to name sensitive or intimate concepts without conjuring their full details. This trick gifts us with language like \u201cthe beast with two backs\u201d (thanks, Shakespeare!) 
and \u201csurfing the crimson wave\u201d (thanks, Cher Horowitz!).\nBut when we grapple with more serious concepts\u2014war, death, human rights\u2014this habit of declawing our language gets dangerous. Using more discrete wording serves to nullify the concepts themselves, euphemizing them out of sight and out of mind.\nThe result? Politicians never lie, they just \u201cmisspeak.\u201d Nobody\u2019s racist, but plenty of people are \u201ceconomically anxious.\u201d Nazis have rebranded as \u201calt-right.\u201d \nI\u2019m not an asshole, I\u2019m just alt-nice.\u2014 Andi Zeisler (@andizeisler) November 22, 2016\n\nThe problem with euphemisms like these is that they quickly infect everyday language. We use the words we hear around us. The more often we see \u201calt-right\u201d instead of \u201cNazi,\u201d the more likely we are to use that phrase ourselves\u2014normalizing the term as well as the terrible ideas behind it.\nPatterns from sentences\nThat process of normalization gets a boost from the media, our main vector of information about the world outside ourselves. Headlines control how we interpret the news that follows\u2014even if the story contradicts it in the end. We hear the framing more clearly than the content itself, coloring our interpretation of the news over time.\nEven worse, headlines are often written to encourage clicks, not to convey critical information. When headline-writing is driven by sensationalism, it\u2019s much, much easier to build a pattern of misinformation. Take this CBS News headline: \u201cDonald Trump: \u2018Millions\u2019 voted illegally for Hillary Clinton.\u201d The headline makes no indication that this an objectively false statement; instead, this word choice subtly suggestions that millions did, in fact, illegally vote for Hillary Clinton.\nHeadlines like this are what make lying a worthwhile political strategy. https://t.co/DRjGeYVKmW\u2014 Binyamin Appelbaum (@BCAppelbaum) November 27, 2016\n\nThis is a deeply dangerous choice of words when headlines are the primary way that news is conveyed\u2014especially on social media, where it\u2019s much faster to share than to actually read the article. In fact, according to a study from the Media Insight Project, \u201croughly six in 10 people acknowledge that they have done nothing more than read news headlines in the past week.\u201d \nIf a powerful person asserts X there are 2 responsible ways to cover:1. \u201cX is true\u201d2. \u201cPerson incorrectly thinks X\u201dNever \u201cPerson says X\u201d\u2014 Helen Rosner (@hels) November 27, 2016\n\nEven if we do, in fact, read the whole article, there\u2019s no guarantee that we\u2019re thinking critically about it. A study conducted by Stanford found that \u201c82 percent of students could not distinguish between a sponsored post and an actual news article on the same website. Nearly 70 percent of middle schoolers thought they had no reason to distrust a sponsored finance article written by the CEO of a bank, and many students evaluate the trustworthiness of tweets based on their level of detail and the size of attached photos.\u201d \nFriends: our information literacy is not very good. Luckily, we\u2014workers of the web\u2014are in a position to improve it.\nSentences into design\nConsider the presentation of those all-important headlines in social media cards, as on Facebook. 
The display is a combination of both the card\u2019s design and the article\u2019s source code, and looks something like this:\n\nA large image, a large headline; perhaps a brief description; and, at the bottom, in pale gray, a source and an author\u2019s name. \nThose choices convey certain values: specifically, they suggest that the headline and the picture are the entire point. The source is so deemphasized that it\u2019s easy to see how fake news gains a foothold: daily exposure to this kind of hierarchy has taught us that sources aren\u2019t important. \nAnd that\u2019s the message from the best-case scenario. Not every article shows every piece of data. Take this headline from the BBC: \u201cWisconsin receives request for vote recount.\u201d \n\nWith no image, no description, and no author, there\u2019s little opportunity to signal trust or provide nuance. There\u2019s also no date\u2014ever\u2014which presents potentially misleading complications, especially in the context of \u201cbreaking news.\u201d \nAnd lest you think dates don\u2019t matter in the light-speed era of social media, take the headline, \u201cMaryland sidesteps electoral college.\u201d Shared into my feed two days after the US presidential election, that\u2019s some serious news with major historical implications. But since there\u2019s no date on this card, there\u2019s no way for readers to know that the \u201cTuesday\u201d it refers to was in 2007. Again, a design choice has made misinformation far too contagious.\n\nMore recently, I posted my personal reaction to the death of Fidel Castro via a series of twenty tweets. Wanting to share my thoughts with friends and family who don\u2019t use Twitter, I then posted the first tweet to Facebook. The card it generated was less than ideal:\n\nThe information hierarchy created by this approach prioritizes the name of the Twitter user (not even the handle), along with the avatar. Not only does that create an awkward \u201cheadline\u201d (at least when you include a full stop in your name), but it also minimizes the content of the tweet itself\u2014which was the whole point. \nThe arbitrary elevation of some pieces of content over others\u2014like huge headlines juxtaposed with minimized sources\u2014teaches readers that these values are inherent to the content itself: that the headline is the news, that the source is irrelevant. We train readers to stop looking for the information we don\u2019t put in front of them. \nThese aren\u2019t life-or-death scenarios; they are just cases where design decisions noticeably dictate the perception of information. Not every design decision makes so obvious an impact, but the impact is there. Every single action adds to the pattern.\nDesign with intention\nWe can\u2019t necessarily teach people to read critically or vet their sources or stop believing conspiracy theories (or start believing facts). Our reach is limited to our roles: we make websites and products for companies and colleges and startups.\nBut we have more reach there than we might realize. Every decision we make influences how information is presented in the world. Every presentation adds to the pattern. No matter how innocuous our organization, how lowly our title, how small our user base\u2014every single one of us contributes, a little bit, to the way information is perceived.\nAre we changing it for the better?\nWhile it\u2019s always been crucial to act ethically in the building of the web, our cultural climate now requires dedicated, individual conscientiousness. 
It\u2019s not enough to think ourselves neutral, to dismiss our work as meaningless or apolitical. Everything is political. Every action, and every inaction, has an impact.\nAs Chappell Ellison put it much more eloquently than I can:\nEvery single action and decision a designer commits is a political act. The question is, are you a conscious actor?\u2014 Chappell Ellison\ud83e\udd14 (@ChappellTracker) November 28, 2016\n\nAs shapers of information, we have a responsibility: to create clarity, to further understanding, to advance truth. Every single one of us must choose to treat information\u2014and the society it builds\u2014with integrity.", "year": "2016", "author": "Lisa Maria Martin", "author_slug": "lisamariamartin", "published": "2016-12-14T00:00:00+00:00", "url": "https://24ways.org/2016/information-literacy-is-a-design-problem/", "topic": "content"} {"rowid": 8, "title": "Coding Towards Accessibility", "contents": "\u201cCan we make it AAA-compliant?\u201d \u2013 does this question strike fear into your heart? Maybe for no other reason than because you will soon have to wade through the impenetrable WCAG documentation once again, to find out exactly what AAA-compliant means?\n\nI\u2019m not here to talk about that.\n\nThe Web Content Accessibility Guidelines are a comprehensive and peer-reviewed resource which we\u2019re lucky to have at our fingertips. But they are also a pig to read, and they may have contributed to the sense of mystery and dread with which some developers associate the word accessibility.\n\nThis Christmas, I want to share with you some thoughts and some practical tips for building accessible interfaces which you can start using today, without having to do a ton of reading or changing your tools and workflow.\n\nBut first, let\u2019s clear up a couple of misconceptions.\n\nDreary, flat experiences\n\nI recently built a front-end framework for the Post Office. This was a great gig for a developer, but when I found out about my client\u2019s stringent accessibility requirements I was concerned that I\u2019d have to scale back what was quite a complex set of visual designs.\n\nSites like Jakob Neilsen\u2019s old workhorse useit.com and even the pioneering GOV.UK may have to shoulder some of the blame for this. They put a premium on usability and accessibility over visual flourish. (Although, in fairness to Mr Neilsen, his new site nngroup.com is really quite a snazzy affair, comparatively.)\n\nOf course, there are other reasons for these sites\u2019 aesthetics \u2014 and it\u2019s not because of the limitations of the form. You can make an accessible site look as glossy or as plain as you want it to look. It\u2019s always our own ingenuity and attention to detail that are going to be the limiting factors.\n\nSynecdoche\n\nWe must always guard against the tendency to assume that catering to screen readers means we have the whole accessibility ballgame covered. \n\nThere\u2019s so much more to accessibility than assistive technology, as you know. And within the field of assistive technology there are plenty of other devices for us to consider.\n\nPlanning to accommodate all these users and devices can be daunting. When I first started working in this field I thought that the breadth of technology was prohibitive. I didn\u2019t even know what a screen reader looked like. (I assumed they were big and heavy, perhaps like an old typewriter, and certainly they would be expensive and difficult to fathom.) This is nonsense, of course. 
Screen reader emulators are readily available as browser extensions and can be activated in seconds. Chromevox and Fangs are both excellent and you should download one or the other right now.\n\nBut the really good news is that you can emulate many other types of assistive technology without downloading a byte. And this is where we move from misconceptions into some (hopefully) useful advice.\n\nThe mouse trap\n\nThe simplest and most effective way to improve your abilities as a developer of accessible interfaces is to unplug your mouse.\n\nKeyboard operation has its own WCAG chapter, because most users of assistive technology are navigating the web using only their keyboards. You can go some way towards putting yourself into their shoes so easily \u2014 just by ditching a peripheral.\n\nLearning this was a lightbulb moment for me. When I build interfaces I am constantly flicking between code and the browser, testing or viewing the changes I have made. Now, instead of checking a new element once, I check it twice: once with my mouse and then again without.\n\nDon\u2019t just :hover\n\nThe reality is that when you first start doing this you can find your site becomes unusable straightaway. It\u2019s easy to lose track of which element is in focus as you hit the tab key repeatedly.\n\nOne of the easiest changes you can make to your coding practice is to add :focus and :active pseudo-classes to every hover state that you write. I\u2019m still amazed at how many sites fail to provide a decent focus state for links (and despite previous 24 ways authors in 2007 and 2009 writing on this same issue!).\n\nYou may find that in some cases it makes sense to have something other than, or in addition to, the hover state on focus, but start with the hover state that your designer has taken the time to provide you with. It\u2019s a tiny change and there is no downside. So instead of this:\n\n.my-cool-link:hover {\n\tbackground-color: MistyRose ;\t\n}\n\n\u2026try writing this:\n\n.my-cool-link:hover,\n.my-cool-link:focus,\n.my-cool-link:active {\n\tbackground-color: MistyRose ;\t\n}\n\nI\u2019ve toyed with the idea of making a Sass mixin to take care of this for me, but I haven\u2019t yet. I worry that people reading my code won\u2019t see that I\u2019m explicitly defining my focus and active states so I take the hit and write my hover rules out longhand.\n\nJavaScript can play, too\n\nThis was another revelation for me. Keyboard-only navigation doesn\u2019t necessitate a JavaScript-free experience, and up-to-date screen readers can execute JavaScript. So we\u2019re able to create complex JavaScript-driven interfaces which all users can interact with.\n\nSome of the hard work has already been done for us. First, there are already conventions around keyboard-driven interfaces. Think about the last time you viewed a photo album on Facebook. You can use the arrow keys to switch between photos, and the escape key closes whichever lightbox-y UI thing Facebook is showing its photos in this week. Arrow keys (up/down as well as left/right) for progression through content; Escape to back out of something; Enter or space bar to indicate a positive intention \u2014 these are established keyboard conventions which we can apply to our interfaces to improve their accessiblity. \n\nOf course, by doing so we are improving our interfaces in general, giving all users the option to switch between keyboard and mouse actions as and when it suits them.\n\nSecond, this guy wants to help you out. 
Hans Hillen is a developer who has done a great deal of work around accessibility and JavaScript-powered interfaces. Along with The Paciello Group he has created a version of the jQuery UI library which has been fully optimised for keyboard navigation and screen reader use. It\u2019s a fantastic reference which I revisit all the time \n\nI\u2019m not a huge fan of the jQuery UI library. It\u2019s a pain to style and the code is a bit bloated. So I\u2019ve not used this demo as a code resource to copy wholesale. I use it by playing with the various components and seeing how they react to keyboard controls. Each component is also fully marked up with the relevant ARIA roles to improve screen reader announcement where possible (more on this below).\n\nCoding for accessibility promotes good habits\n\nThis is a another observation around accessibility and JavaScript. I noticed an improvement in the structure and abstraction of my code when I started adding keyboard controls to my interface elements. \n\nYour code has to become more modular and event-driven, because any number of events could trigger the same interaction. A mouse-click, the Enter key and the space bar could all conceivably trigger the same open function on a collapsed accordion element. (And you want to keep things DRY, don\u2019t you?) \n\nIf you aren\u2019t already in the habit of separating out your interface functionality into discrete functions, you will be soon.\n\nvar doSomethingCool = function(){\n\t// Do something cool here.\n}\n\n// Bind function to a button click - pretty vanilla\n$('.myCoolButton').on('click', function(){\n\tdoSomethingCool();\n\treturn false;\n});\n\n// Bind the same function to a range of keypresses\n$(document).keyup(function(e){\n\tswitch(e.keyCode) {\n\t\tcase 13: // enter\n\t\tcase 32: // spacebar\n\t\t\tdoSomethingCool();\n\t\t\tbreak;\n\t\tcase 27: // escape\n\t\t\tdoSomethingElse();\n\t\t\tbreak;\n\t}\n});\n\nTo be honest, if you\u2019re doing complex UI stuff with JavaScript these days, or if you\u2019ve been building any responsive interfaces which rely on JavaScript, then you are most likely working with an application framework such as Backbone, Angular or Ember, so an abstraced and event-driven application structure will be familar to you. It should be super easy for you to start helping out your keyboard-only users if you aren\u2019t already \u2014 just add a few more event bindings into your UI layer!\n\nManipulating the tab order\n\nSo, you\u2019ve adjusted your mindset and now you test every change to your codebase using a keyboard as well as a mouse. You\u2019ve applied all your hover states to :focus and :active so you can see where you\u2019re tabbing on the page, and your interactive components react seamlessly to a mixture of mouse and keyboard commands. Feels good, right?\n\nThere\u2019s another level of optimisation to consider: manipulating the tab order. Certain DOM elements are naturally part of the tab order, and others are excluded. Links and input elements are the main elements included in the tab order, and static elements like paragraphs and headings are excluded. What if you want to make a static element \u2018tabbable\u2019? \n\nA good example would be in an expandable accordion component. Each section of the accordion should be separated by a heading, and there\u2019s no reason to make that heading into a link simply because it\u2019s interactive.\n\n
<div class=\"accordion-widget\">\n\t<h3>Tyrannosaurus</h3>\n\t<p>Tyrannosaurus; meaning \"tyrant lizard\"...</p>\n\n\t<h3>Utahraptor</h3>\n\t<p>Utahraptor is a genus of theropod dinosaurs...</p>\n\n\t<h3>Dromiceiomimus</h3>\n\t<p>Ornithomimus is a genus of ornithomimid dinosaurs...</p>\n</div>
\n\nAdding the heading elements to the tab order is trivial. We just set their tabindex attribute to zero. You could do this on the server or the client. I prefer to do it with JavaScript as part of the accordion setup and initialisation process.\n\n$('.accordion-widget h3').attr('tabindex', '0');\n\nYou can apply this trick in reverse and take elements out of the tab order by setting their tabindex attribute to \u22121, or change the tab order completely by using other integers. This should be done with great care, if at all. You have to be sure that the markup you remove from the tab order comes out because it genuinely improves the keyboard interaction experience. This is hard to validate without user testing. The danger is that developers will try to sweep complicated parts of the UI under the carpet by taking them out of the tab order. This would be considered a dark pattern \u2014 at least on my team!\n\nA farewell ARIA\n\nThis is where things can get complex, and I\u2019m no expert on the ARIA specification: I feel like I\u2019ve only dipped my toe into this aspect of coding for accessibility. But, as with WCAG, I\u2019d like to demystify things a little bit to encourage you to look into this area further yourself.\n\nARIA roles are of most benefit to screen reader users, because they modify and augment screen reader announcements. \n\nLet\u2019s take our dinosaur accordion from the previous section. The markup is semantic, so a screen reader that can\u2019t handle JavaScript will announce all the content within the accordion, no problem.\n\nBut modern screen readers can deal with JavaScript, and this means that all the lovely dino information beneath each heading has probably been hidden on document.ready, when the accordion initialised. It might have been hidden using display:none, which prevents a screen reader from announcing content. If that\u2019s as far as you have gone, then you\u2019ve committed an accessibility sin by hiding content from screen readers. Your user will hear a set of headings being announced, with no content in between. It would sound something like this if you were using Chromevox:\n\n> Tyrannosaurus. Heading Three.\n> Utahraptor. Heading Three.\n> Dromiceiomimus. Heading Three.\n\nWe can add some ARIA magic to the markup to improve this, using the tablist role. Start by adding a role of tablist to the widget, and roles of tab and tabpanel to the headings and paragraphs respectively. Set boolean values for aria-selected, aria-hidden and aria-expanded. The markup could end up looking something like this.\n\n
<div class=\"accordion-widget\" role=\"tablist\">\n\t\t<!-- Tyrannosaurus tab and panel -->\n\t<h3 role=\"tab\" tabindex=\"0\" aria-selected=\"false\" aria-expanded=\"false\">Utahraptor</h3>\n\t<p role=\"tabpanel\" aria-hidden=\"true\">Utahraptor is a genus of theropod dinosaurs...</p>\n\t\t<!-- Dromiceiomimus tab and panel -->\n</div>
\n\nNow, if a screen reader user encounters this markup they will hear the following:\n\n> Tyrannosaurus. Tab not selected; one of three.\n> Utahraptor. Tab not selected; two of three.\n> Dromiceiomimus. Tab not selected; three of three.\n\nYou could add arrow key events to help the user browse up and down the tab list items until they find one they like. \n\nYour accordion open() function should update the ARIA boolean values as well as adding whatever classes and animations you have built in as standard. Your users know that unselected tabs are meant to be interacted with, so if a user triggers the open function (say, by hitting Enter or the space bar on the second item) they will hear this:\n\n> Utahraptor. Selected; two of three.\n\nThe paragraph element for the expanded item will not be hidden by your CSS, which means it will be announced as normal by the screen reader.\n\nThis kind of thing makes so much more sense when you have a working example to play with. Again, I refer you to the fantastic resource that Hans Hillen has put together: this is his take on an accessible accordion, on which much of my example is based.\n\nConclusion\n\nGetting complex interfaces right for all of your users can be difficult \u2014 there\u2019s no point pretending otherwise. And there\u2019s no substitute for user testing with real users who navigate the web using assistive technology every day. This kind of testing can be time-consuming to recruit for and to conduct. On top of this, we now have accessibility on mobile devices to contend with. That\u2019s a huge area in itself, and it\u2019s one which I have not yet had a chance to research properly.\n\nSo, there\u2019s lots to learn, and there\u2019s lots to do to get it right. But don\u2019t be disheartened. If you have read this far then I\u2019ll leave you with one final piece of advice: don\u2019t wait.\n\nDon\u2019t wait until you\u2019re building a site which mandates AAA-compliance to try this stuff out. Don\u2019t wait for a client with the will or the budget to conduct the full spectrum of user testing to come along. Unplug your mouse, and start playing with your interfaces in a new way. You\u2019ll be surprised at the things that you learn and the issues you uncover. \n\nAnd the next time a true accessibility project comes along, you will be way ahead of the game.", "year": "2013", "author": "Charlie Perrins", "author_slug": "charlieperrins", "published": "2013-12-03T00:00:00+00:00", "url": "https://24ways.org/2013/coding-towards-accessibility/", "topic": "code"} {"rowid": 11, "title": "JavaScript: Taking Off the Training Wheels", "contents": "JavaScript is the third pillar of front-end web development. Of those pillars, it is both the most powerful and the most complex, so it\u2019s understandable that when 24 ways asked, \u201cWhat one thing do you wish you had more time to learn about?\u201d, a number of you answered \u201cJavaScript!\u201d\n\nThis article aims to help you feel happy writing JavaScript, and maybe even without libraries like jQuery. I can\u2019t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources.\n\nWhy learn JavaScript?\n\nSo what\u2019s in it for you? Why take the next step and learn the fundamentals?\n\nConfidence with jQuery\n\nIf nothing else, learning JavaScript will improve your jQuery code; you\u2019ll be comfortable writing jQuery from scratch and feel happy bending others\u2019 code to your own purposes.
Writing efficient, fast and bug-free jQuery is also made much easier when you have a good appreciation of JavaScript, because you can look at what jQuery is really doing. Understanding how JavaScript works lets you write better jQuery because you know what it\u2019s doing behind the scenes. When you need to leave the beaten track, you can do so with confidence.\n\nIn fact, you could say that jQuery\u2019s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That\u2019s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn\u2019t needed.\n\nAn example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here\u2019s a comparison of some jQuery code and the equivalent without.\n\n$('.counter').each(function (index) {\n $(this).text(index + 1);\n});\n\nvar counters = document.querySelectorAll('.counter');\n[].slice.call(counters).forEach(function (elem, index) {\n elem.textContent = index + 1;\n});\n\nSolving problems no one else has!\n\nWhen you have to go to the internet to solve a problem, you\u2019re forever stuck reusing code other people wrote to solve a slightly different problem to your own. Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has.\n\nNode.js\n\nNode.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don\u2019t worry: the Node community is thriving, very friendly and willing to help.\n\nI think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting!\n\nHere\u2019s an example web server written with Node. http is a module that allows you to create servers and, like jQuery\u2019s $.ajax, make requests. It\u2019s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it\u2019s certainly not out of your reach.\n\nvar http = require('http');\nhttp.createServer(function (req, res) {\n res.writeHead(200, {'Content-Type': 'text/plain'});\n res.end('Hello World');\n}).listen(1337);\nconsole.log('Server running at http://localhost:1337/');\n\nGrunt and other website tools\n\nNode has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. Both of these rely heavily on Node, and I\u2019ll talk a little bit about Grunt here.\n\nGrunt is a task runner, and many people use it for compiling Sass or compressing their site\u2019s JavaScript and images. It\u2019s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer.\n\nWays to improve your skills\n\nSo you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas.\n\nRebuild a jQuery app\n\nConverting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. 
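For instance, a jQuery request for some data and a rough plain-JavaScript equivalent might look something like this (the /photos.json endpoint is made up, and XMLHttpRequest is used here rather than anything newer):\n\n// jQuery\n$.ajax({\n  url: '/photos.json',\n  success: function (data) {\n    console.log(data);\n  }\n});\n\n// Plain JavaScript\nvar xhr = new XMLHttpRequest();\nxhr.open('GET', '/photos.json');\nxhr.onload = function () {\n  console.log(JSON.parse(xhr.responseText));\n};\nxhr.send();\n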
My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you\u2019re feeling adventurous. There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way\u2019s jQuery to JavaScript article.\n\nFind a mentor\n\nIf you think you\u2019d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. The JavaScript community is very friendly and many people will be more than happy to give you their time. I\u2019d look out for someone who\u2019s active and friendly on Twitter, and does the kind of work you\u2019d like to do. Introduce yourself over Twitter or send them an email. I wouldn\u2019t expect a full tutoring course (although that is another option!) but they\u2019ll be very glad to answer a question and any follow-ups every now and then.\n\nGo to a workshop\n\nMany conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there\u2019s one in your area. Workshops are great because you can ask direct questions, and you\u2019re in an environment where others are learning just like you are \u2014 no need to learn alone!\n\nSet yourself challenges\n\nThis is one way I like to learn new things. I have a new thing that I\u2019m not very good at, so I pick something that I think is just out of my reach and I try to build it. It\u2019s learning by doing and, even if you fail, it can be enormously valuable.\n\nWhere to start?\n\nIf you\u2019ve decided learning JavaScript is an important step for you, your next question may well be where to go from here.\n\nI\u2019ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want.\n\nBeginner\n\nIf you\u2019re just getting started with JavaScript, I\u2019d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They\u2019re all reputable sources (although, I\u2019ve included something I wrote \u2014 you can decide about that one!) and will not lead you astray.\n\n\n\tjQuery\u2019s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro.\n\tCodecademy\u2019s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you.\n\tHTMLDog\u2019s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. [Disclaimer: I wrote this stuff, so it comes with a hazard warning!]\n\tThe tuts+ jQuery to JavaScript mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript.\n\n\nGetting in-depth\n\nFor more comprehensive documentation and help I\u2019d recommend adding these places to your list of go-tos.\n\n\n\tMDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it\u2019s a great place to just go and browse.\n\tAxel Rauschmayer\u2019s 2ality is a stunning collection of articles that will take you deep into JavaScript. 
It\u2019s certainly worth looking at.\n\tAddy Osmani\u2019s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications.\n\n\nAnd finally\u2026\n\nI think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you\u2019ll get there. I bet you\u2019ll learn a whole lot along the way. Good luck!\n\nMany thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag.", "year": "2013", "author": "Tom Ashworth", "author_slug": "tomashworth", "published": "2013-12-05T00:00:00+00:00", "url": "https://24ways.org/2013/javascript-taking-off-the-training-wheels/", "topic": "code"} {"rowid": 15, "title": "Git for Grown-ups", "contents": "You are a clever and talented person. You create beautiful designs, or perhaps you have architected a system that even my cat could use. Your peers adore you. Your clients love you. But, until now, you haven\u2019t *&^#^! been able to make Git work. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^! command to upload your work.\n\nIt\u2019s not you. It\u2019s Git. Promise.\n\nYes, this is an article about the popular version control system, Git. But unlike just about every other article written about Git, I\u2019m not going to give you the top five commands that you need to memorize; and I\u2019m not going to tell you all your problems would be solved if only you were using this GUI wrapper or that particular workflow. You see, I\u2019ve come to a grand realization: when we teach Git, we\u2019re doing it wrong.\n\nLet me back up for a second and tell you a little bit about the field of adult education. (Bear with me, it gets good and will leave you feeling both empowered and righteous.) Andragogy, unlike pedagogy, is a learner-driven educational experience. There are six main tenets to adult education: \n\n\n\tAdults prefer to know why they are learning something.\n\tThe foundation of the learning activities should include experience.\n\tAdults prefer to be able to plan and evaluate their own instruction.\n\tAdults are more interested in learning things which directly impact their daily activities.\n\tAdults prefer learning to be oriented not towards content, but towards problems.\n\tAdults relate more to their own motivators than to external ones.\n\n\nNowhere in this list does it include \u201cmemorize the five most popular Git commands\u201d. And yet this is how we teach version control: init, add, commit, branch, push. You\u2019re an expert! Sound familiar? In the hierarchy of learning, memorizing commands is the lowest, or most basic, form of learning. At the peak of learning you are able to not just analyze and evaluate a problem space, but create your own understanding in relation to your existing body of knowledge.\n\n\u201cFine,\u201d I can hear you saying to yourself. \u201cBut I\u2019m here to learn about version control.\u201d Right you are! So how can we use this knowledge to master Git? First of all: I give you permission to use Git as a tool. A tool which you control and which you assign tasks to. A tool like a hammer, or a saw. Yes, your mastery of your tools will shape the kinds of interactions you have with your work, and your peers. But it\u2019s yours to control. 
Git was written by kernel developers for kernel development. The web world has adopted Git, but it is not a tool designed for us and by us. It\u2019s no Sass, y\u2019know? Git wasn\u2019t developed out of our frustration with managing CSS files in an increasingly complex ecosystem of components and atomic design. So, as you work through the next part of this article, give yourself a bit of a break. We\u2019re in this together, and it\u2019s going to be OK.\n\nWe\u2019re going to do a little activity. We\u2019re going to create your perfect Git cheatsheet.\n\nI want you to start by writing down a list of all the people on your code team. This list may include:\n\n\n\tdevelopers\n\tdesigners\n\tproject managers\n\tclients\n\n\nNext, I want you to write down a list of all the ways you interact with your team. Maybe you\u2019re a solo developer and you do all the tasks. Maybe you only do a few things. But I want you to write down a list of all the tasks you\u2019re actually responsible for. For example, my list looks like this:\n\n\n\twriting code\n\treviewing code\n\tpublishing tested code to your server(s)\n\ttroubleshooting broken code\n\n\nThe next list will end up being a series of boxes in a diagram. But to start, I want you to write down a list of your tools and constraints. This list potentially has a lot of noun-like items and verb-like items:\n\n\n\tcode hosting system (Bitbucket? GitHub? Unfuddle? self-hosted?)\n\tserver ecosystem (dev/staging/live)\n\tautomated testing systems or review gates\n\tautomated build systems (that Jenkins dude people keep referring to)\n\n\nBrilliant! Now you\u2019ve got your actors and your actions, it\u2019s time to shuffle them into a diagram. There are many popular workflow patterns. None are inherently right or wrong; rather, some are more or less appropriate for what you are trying to accomplish.\n\nCentralized workflow\n\nEveryone saves to a single place. This workflow may mean no version control, or a very rudimentary version control system which only ever has a single copy of the work available to the team at any point in time.\n\n \n\nBranching workflow\n\nEveryone works from a copy of the same place, merging their changes into the main copy as their work is completed. Think of the branches as a motorcycle sidecar: they\u2019re along for the ride and probably cannot exist in isolation from the main project for long without serious danger coming to either the driver or the sidecar passenger. Branches are a fundamental concept in version control \u2014 they allow you to work on new features, bug fixes, and experimental changes within a single repository, but without forcing the changes onto others working from the same branch.\n\n \n\nForking workflow\n\nEveryone works from their own, independent repository. A fork is an exact duplicate of a repository that a developer can make their own changes to. It can be kept up to date with additional changes made in other repositories, but it cannot force its changes onto another\u2019s repository. A fork is a complete repository which can use its own workflow strategies. If developers wish to merge their work with the main project, they must make a request of some kind (submit a patch, or a pull request) which the project collaborators may choose to adopt or reject. 
This workflow is popular for open source projects as it enforces a review process.\n\n \n\nGitflow workflow\n\nA specific workflow convention which includes five streams of parallel coding efforts: master, development, feature branches, release branches, and hot fixes. This workflow is often simplified down to a few elements by web teams, but may be used wholesale by software product teams. The original article describing this workflow was written by Vincent Driessen back in January 2010.\n\n \n\nBut these workflows aren\u2019t about you yet, are they? So let\u2019s make the connections.\n\nFrom the list of people on your team you identified earlier, draw a little circle. Give each of these circles some eyes and a smile. Now I want you to draw arrows between each of these people in the direction that code (ideally) flows. Does your designer create responsive prototypes which are pushed to the developer? Draw an arrow to represent this.\n\nChances are high that you don\u2019t just have people on your team, but you also have some kind of infrastructure. Hopefully you wrote about it earlier. For each of the servers and code repositories in your infrastructure, draw a square. Now, add to your diagram the relationships between the people and each of the machines in the infrastructure. Who can deploy code to the live server? How does it really get there? I bet it goes through some kind of code hosting system, such as GitHub. Draw in those arrows.\n\nBut wait!\n\nThe code that\u2019s on your development machine isn\u2019t the same as the live code. This is where we introduce the concept of a branch in version control. In Git, a repository contains all of the code (sort of). A branch is a fragment of the code that has been worked on in isolation from the other branches within a repository. Often branches will have elements in common. When we compare two (or more) branches, we are asking about the difference (or diff) between these two slivers. Often the master branch is used on production, and the development branch is used on our dev server. The difference between these two branches is the untested code that is not yet deployed.\n\nOn your diagram, see if you can colour-code according to the branch names at each of the locations within your infrastructure. You might find it useful to make a few different copies of the diagram to isolate each of the tasks you need to perform. For example: our team has a peer review process that each branch must go through before it is merged into the shared development branch.\n\nFinally, we are ready to add the Git commands necessary to make sense of the arrows in our diagram. If we are bringing code to our own workstation we will issue one of the following commands: clone (the first time we bring code to our workstation) or pull. Remembering that a repository contains all branches, we will issue the command checkout to switch from one branch to another within our own workstation. If we want to share a particular branch with one of our team mates, we will push this branch back to the place we retrieved it from (the origin). Along each of the arrows in your diagram, write the name of the command you are going to use when you perform that particular task.\n\n \n\nFrom here, it\u2019s up to you to be selfish. Before asking Git what command it would like you to use, sketch the diagram of what you want. Git is your tool, you are not Git\u2019s tool. Draw the diagram. Communicate your tasks with your team as explicitly as you can. 
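\n\nOnce the diagram settles down, the commands you wrote along the arrows become your personal cheatsheet. As a sketch only (the branch and remote names here are examples, not prescriptions), a cheatsheet for a simple branching workflow might read:\n\ngit clone git@github.com:team/project.git # first time the code comes to your workstation\ngit checkout development # switch to the shared development branch\ngit pull # bring down your team mates\u2019 latest work\ngit checkout -b my-feature # start a branch for the task you own\ngit push -u origin my-feature # share that branch back to the origin for review\n\n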
Insist on being a selfish adult learner \u2014 demand that others explain to you, in ways that are relevant to you, how to do the things you need to do today.", "year": "2013", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2013-12-04T00:00:00+00:00", "url": "https://24ways.org/2013/git-for-grownups/", "topic": "code"} {"rowid": 16, "title": "URL Rewriting for the Fearful", "contents": "I think it was Marilyn Monroe who said, \u201cIf you can\u2019t handle me at my worst, please just fix these rewrite rules, I\u2019m getting an internal server error.\u201d Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from.\n\nThe majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable \u2014 I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we\u2019re going to go back to basics to try to make the whole rigmarole more understandable.\n\nWhen we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that\u2019s the most common case, that\u2019s what I\u2019ll be sticking to here. If you work with a different server, there\u2019s often documentation specifically for translating from Apache\u2019s mod_rewrite rules. I even found an automatic converter for nginx.\n\nThis isn\u2019t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you\u2019re doing. If you\u2019ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don\u2019t worry. As Michael Jackson once insipidly whined, you are not alone.\n\nThe basics\n\nRewrite rules form part of the Apache web server\u2019s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is:\n\nRewriteEngine on\n\nThe general formula for a rewrite rule is:\n\nRewriteRule URL/to/match URL/to/use/if/it/matches [options]\n\nWhen we talk about URL rewriting, we\u2019re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We\u2019ll look at those in turn.\n\nRedirects\n\nRedirects match an incoming URL, and then redirect the user\u2019s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work. \n\nIn 1998, Sir Tim Berners-Lee wrote that Cool URIs don\u2019t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it\u2019s fine to move things around \u2014 especially to correct bad URL design choices of the past \u2014 provided that you can do so while keeping those old URLs working. 
That\u2019s where redirects can help.\n\nA redirect might look like this\n\nRewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L]\n\nRewriting\n\nBy default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file.\n\nA rewrite rule changes that process by breaking the direct relationship between the URL and the file system. \u201cWhen there\u2019s a request for /about/history.html\u201d a rewrite rule might say, \u201cuse the file /about_section.php instead.\u201d\n\nThis opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page.\n\nRewriteRule ^for/this/url/$ /use/this/file.php [L] \n\nMatching patterns\n\nBy now you\u2019ll have noted the weird ^ and $ characters wrapped around the URL we\u2019re trying to match. That\u2019s because what we\u2019re actually using here is a pattern. Technically, it is what\u2019s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We\u2019ll call it a pattern because we\u2019re not animals.\n\nWhat are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you\u2019d wonder what I wanted your credit card details for, but you\u2019d know that I wanted a two-digit month, a slash, and a two-digit year. That\u2019s not a regular expression, but it\u2019s the same idea: using some placeholder characters to define the pattern of the input you\u2019re trying to match.\n\nWe\u2019ve already met two regexp characters.\n\n\n\t^\n\tMatches the beginning of a string\n\t$\n\tMatches the end of a string\n\n\nWhen a pattern starts with ^ and ends with $ it\u2019s to make sure we match the complete URL start to finish, not just part of it. There are lots of other ways to match, too:\n\n\n\t[0-9]\n\tMatches a number, 0\u20139. [2-4] would match numbers 2 to 4 inclusive.\n\t[a-z]\n\tMatches lowercase letters a\u2013z\n\t[A-Z]\n\tMatches uppercase letters A\u2013Z\n\t[a-z0-9]\n\tCombining some of these, this matches letters a\u2013z and numbers 0\u20139\n\n\nThese are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you\u2019re looking for within the brackets, as well as the ranges shown above. \n\nHowever, all these just match one single character. [0-9] would match 8 but not 84 \u2014 to match 84 we\u2019d need to use [0-9] twice.\n\n[0-9][0-9]\n\nSo, if we wanted to match 1984 we could to do this:\n\n[0-9][0-9][0-9][0-9] \n\n\u2026but that\u2019s getting silly. Instead, we can do this:\n\n[0-9]{4}\n\nThat means any character between 0 and 9, four times. If we wanted to match a number, but didn\u2019t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more.\n\n[0-9]+\n\nThis now matches 1, 123 and 1234567.\n\nPutting it into practice\n\nLet\u2019s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. 
The articles all have URLs like this:\n\n2013/article-title/\n\nThey start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We\u2019d match it like this:\n\n^[0-9]{4}/[a-z0-9-]+/$\n\nIf that looks frightening, don\u2019t worry. Breaking it down, from the start of the URL (^) we\u2019re looking for four numbers ([0-9]{4}). Then a slash \u2014 that\u2019s just literal \u2014 and then anything lowercase a\u2013z or 0\u20139 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$).\n\nPutting that into a rewrite rule, we end up with this:\n\nRewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php\n\nWe\u2019re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it\u2019s supposed to display.\n\nCapturing groups, and replacements\n\nWhen rewriting URLs you\u2019ll often want to take important parts of the URL you\u2019re matching and pass them along to the script that handles the request. That\u2019s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we\u2019re looking for. That means we need to call it like this:\n\n/article.php?year=2013&slug=article-title\n\nTo do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what\u2019s called a capturing group. To capture an important part of the source URL to use in the destination, surround it in parentheses.\n\nOur pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes:\n\n^([0-9]{4})/([a-z0-9-]+)/$ \n\nTo use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.)\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 \n\nThe value of the year capturing group gets used as $1 and the article title slug is $2. Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern.\n\nOptions\n\nSeveral brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are:\n\n\n\tR=301\n\tPerform an HTTP 301 redirect to send the user\u2019s browser to the new URL. A status of 301 means a resource has moved permanently and so it\u2019s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes.\n\tL\n\tLast. If this rule matches, don\u2019t bother processing the following rules.\n\n\nOptions are set in square brackets at the end of the rule. You can set multiple options by separating them with commas:\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L]\n\nor\n\nRewriteRule ^about/([a-z0-9-]+).jsp/$ /about/$1/ [R=301,L] \n\nCommon pitfalls\n\nOnce you\u2019ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. 
One common reason for this is hidden behind that [L] flag. \n\nL for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does \u2014 the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes. \n\nOne way to avoid this problem is to keep your \u2018real\u2019 pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules.\n\nUseful snippets\n\nI find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to.\n\nExcluding a directory\n\nAs mentioned above, if you\u2019re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won\u2019t be applied.\n\nRewriteRule ^_source - [L]\n\nThis is also useful for excluding things like CMS folders from your website\u2019s rewrite rules\n\nRewriteRule ^perch - [L] \n\nAdding or removing www from the domain\n\nSome folk like to use a www and others don\u2019t. Usually, it\u2019s best to pick one and go with it, and redirect the one you don\u2019t want. On this site, we don\u2019t use www.24ways.org so we redirect those requests to 24ways.org.\n\nThis uses a RewriteCond which is like an if for a rewrite rule: \u201cIf this condition matches, then apply the following rule.\u201d In this case, it\u2019s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything:\n\nRewriteCond %{HTTP_HOST} ^www.24ways.org$ [NC]\nRewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L]\n\nThe [NC] flag means \u2018no case\u2019 \u2014 the match is case-insensitive. The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance.\n\nRemoving file extensions\n\nSometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.\n\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteCond %{REQUEST_FILENAME} !-d\nRewriteCond %{REQUEST_FILENAME}.php -f\nRewriteRule ^(.+)$ $1.php [L,QSA]\n\nThis says if the file being asked for isn\u2019t a file (!-f) and if it isn\u2019t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means \u2018query string append\u2019: append the existing query string onto the rewritten URL.\n\nIt\u2019s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL.\n\nLogging for when it all goes wrong\n\nAlthough not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. 
This can be useful to track down where a rule is going wrong, if it\u2019s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance.\n\nRewriteEngine On\nRewriteLog \"/full/system/path/to/rewrite.log\"\nRewriteLogLevel 5\n\nTo be doubly clear: this will not work from an .htaccess file \u2014 it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.)\n\nThe white screen of death\n\nOne of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn\u2019t an error message, of course. It\u2019s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log.\n\nIf you have access to your server logs, check the Apache error log and you\u2019ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.)\n\nIn conclusion\n\nRewriting URLs can be a bear, but the advantages are clear. Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future.\n\nIf you\u2019re redesigning a site, remember that cool URIs don\u2019t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working.\n\nFurther reading\n\nTo find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources.\n\n\n\tFrom the horse\u2019s mouth, the Apache mod_rewrite documentation\n\tParticularly useful with that documentation is the RewriteRule Flags listing\n\tYou may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial\n\tFriend of 24 ways, Neil Crosby has a mod_rewrite Beginner\u2019s Guide which I\u2019ve found handy over the years.\n\n\nAs noted at the start, this isn\u2019t a fully comprehensive guide, but I hope it\u2019s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects? Feel free to share them in the comments.", "year": "2013", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2013-12-01T00:00:00+00:00", "url": "https://24ways.org/2013/url-rewriting-for-the-fearful/", "topic": "code"} {"rowid": 18, "title": "Grunt for People Who Think Things Like Grunt are Weird and Hard", "contents": "Front-end developers are often told to do certain things:\n\n\n\tWork in as small chunks of CSS and JavaScript as makes sense to you, then concatenate them together for the production website.\n\tCompress your CSS and minify your JavaScript to make their file sizes as small as possible for your production website.\n\tOptimize your images to reduce their file size without affecting quality.\n\tUse Sass for CSS authoring because of all the useful abstraction it allows.\n\n\nThat\u2019s not a comprehensive list of course, but those are the kind of things we need to do. You might call them tasks.\n\nI bet you\u2019ve heard of Grunt. Well, Grunt is a task runner. 
Grunt can do all of those things for you. Once you\u2019ve got it set up, which isn\u2019t particularly difficult, those things can happen automatically without you having to think about them again.\n\nBut let\u2019s face it: Grunt is one of those fancy newfangled things that all the cool kids seem to be using but at first glance feels strange and intimidating. I hear you. This article is for you.\n\nLet\u2019s nip some misconceptions in the bud right away\n\nPerhaps you\u2019ve heard of Grunt, but haven\u2019t done anything with it. I\u2019m sure that applies to many of you. Maybe one of the following hang-ups applies to you.\n\nI don\u2019t need the things Grunt does\n\nYou probably do, actually. Check out that list up top. Those things aren\u2019t nice-to-haves. They are pretty vital parts of website development these days. If you already do all of them, that\u2019s awesome. Perhaps you use a variety of different tools to accomplish them. Grunt can help bring them under one roof, so to speak. If you don\u2019t already do all of them, you probably should and Grunt can help. Then, once you are doing those, you can keep using Grunt to do more for you, which will basically make you better at doing your job.\n\nGrunt runs on Node.js \u2014 I don\u2019t know Node\n\nYou don\u2019t have to know Node. Just like you don\u2019t have to know Ruby to use Sass. Or PHP to use WordPress. Or C++ to use Microsoft Word.\n\nI have other ways to do the things Grunt could do for me\n\nAre they all organized in one place, configured to run automatically when needed, and shared among every single person working on that project? Unlikely, I\u2019d venture.\n\nGrunt is a command line tool \u2014 I\u2019m just a designer\n\nI\u2019m a designer too. I prefer native apps with graphical interfaces when I can get them. But I don\u2019t think that\u2019s going to happen with Grunt1.\n\nThe extent to which you need to use the command line is:\n\n\n\tNavigate to your project\u2019s directory.\n\tType grunt and press Return.\n\n\nAfter set-up, that is, which again isn\u2019t particularly difficult.\n\nOK. Let\u2019s get Grunt installed\n\nNode is indeed a prerequisite for Grunt. If you don\u2019t have Node installed, don\u2019t worry, it\u2019s very easy. You literally download an installer and run it. Click the big Install button on the Node website.\n\nYou install Grunt on a per-project basis. Go to your project\u2019s folder. It needs a file there named package.json at the root level. You can just create one and put it there.\n\n package.json at root\n\nThe contents of that file should be this:\n\n{\n \"name\": \"example-project\",\n \"version\": \"0.1.0\",\n \"devDependencies\": {\n \"grunt\": \"~0.4.1\"\n }\n}\n\nFeel free to change the name of the project and the version, but the devDependencies thing needs to be in there just like that.\n\nThis is how Node does dependencies. Node has a package manager called NPM (Node packaged modules) for managing Node dependencies (like a gem for Ruby if you\u2019re familiar with that). You could even think of it a bit like a plug-in for WordPress.\n\nOnce that package.json file is in place, go to the terminal and navigate to your folder. 
Terminal rubes like me do it like this:\n\n Terminal rube changing directories\n\nThen run the command:\n\nnpm install\n\nAfter you\u2019ve run that command, a new folder called node_modules will show up in your project.\n\n Example of node_modules folder\n\nThe other files you see there, README.md and LICENSE, are there because I\u2019m going to put this project on GitHub and that\u2019s just standard fare there.\n\nThe last installation step is to install the Grunt CLI (command line interface). That\u2019s what makes the grunt command in the terminal work. Without it, typing grunt will net you a \u201cCommand Not Found\u201d-style error. It is a separate installation for efficiency reasons. Otherwise, if you had ten projects you\u2019d have ten copies of Grunt CLI.\n\nThis is a one-liner again. Just run this command in the terminal:\n\nnpm install -g grunt-cli\n\nYou should close and reopen the terminal as well. That\u2019s a generic good practice to make sure things are working right. Kinda like restarting your computer after you install a new application, like we did in the olden days.\n\nLet\u2019s make Grunt concatenate some files\n\nPerhaps in our project there are three separate JavaScript files:\n\n\n\tjquery.js \u2013 The library we are using.\n\tcarousel.js \u2013 A jQuery plug-in we are using.\n\tglobal.js \u2013 Our authored JavaScript file where we configure and call the plug-in.\n\n\nIn production, we would concatenate all those files together for performance reasons (one request is better than three). We need to tell Grunt to do this for us.\n\nBut wait. Grunt actually doesn\u2019t do anything all by itself. Remember Grunt is a task runner. The tasks themselves we will need to add. We actually haven\u2019t set up Grunt to do anything yet, so let\u2019s do that.\n\nThe official Grunt plug-in for concatenating files is grunt-contrib-concat. You can read about it on GitHub if you want, but all you have to do to use it on your project is to run this command from the terminal (it will henceforth go without saying that you need to run the given commands from your project\u2019s root folder):\n\nnpm install grunt-contrib-concat --save-dev\n\nA neat thing about doing it this way: your package.json file will automatically be updated to include this new dependency. Open it up and check it out. You\u2019ll see a new line:\n\n\"grunt-contrib-concat\": \"~0.3.0\"\n\nNow we\u2019re ready to use it. To use it we need to start configuring Grunt and telling it what to do.\n\nYou tell Grunt what to do via a configuration file named Gruntfile.js2\n\nJust like our package.json file, our Gruntfile.js has a very special format that must be just right. I wouldn\u2019t worry about what every word of this means. Just check out the format:\n\nmodule.exports = function(grunt) {\n\n // 1. All configuration goes here \n grunt.initConfig({\n pkg: grunt.file.readJSON('package.json'),\n\n concat: {\n // 2. Configuration for concatenating files goes here.\n }\n\n });\n\n // 3. Where we tell Grunt we plan to use this plug-in.\n grunt.loadNpmTasks('grunt-contrib-concat');\n\n // 4. Where we tell Grunt what to do when we type \"grunt\" into the terminal.\n grunt.registerTask('default', ['concat']);\n\n};\n\nNow we need to create that configuration. The documentation can be overwhelming. Let\u2019s focus just on the very simple usage example.\n\nRemember, we have three JavaScript files we\u2019re trying to concatenate. 
We\u2019ll list file paths to them under src in an array of file paths (as quoted strings) and then we\u2019ll list a destination file as dest. The destination file doesn\u2019t have to exist yet. It will be created when this task runs and squishes all the files together.\n\nBoth our jquery.js and carousel.js files are libraries. We most likely won\u2019t be touching them. So, for organization, we\u2019ll keep them in a /js/libs/ folder. Our global.js file is where we write our own code, so that will be right in the /js/ folder. Now let\u2019s tell Grunt to find all those files and squish them together into a single file named production.js, named that way to indicate it is for use on our real live website.\n\nconcat: { \n dist: {\n src: [\n 'js/libs/*.js', // All JS in the libs folder\n 'js/global.js' // This specific file\n ],\n dest: 'js/build/production.js',\n }\n}\n\nNote: throughout this article there will be little chunks of configuration code like above. The intention is to focus in on the important bits, but it can be confusing at first to see how a particular chunk fits into the larger file. If you ever get confused and need more context, refer to the complete file.\n\nWith that concat configuration in place, head over to the terminal, run the command:\n\ngrunt\n\nand watch it happen! production.js will be created and will be a perfect concatenation of our three files. This was a big aha! moment for me. Feel the power course through your veins. Let\u2019s do more things!\n\nLet\u2019s make Grunt minify that JavaScript\n\nWe have so much prep work done now, adding new tasks for Grunt to run is relatively easy. We just need to:\n\n\n\tFind a Grunt plug-in to do what we want\n\tLearn the configuration style of that plug-in\n\tWrite that configuration to work with our project\n\n\nThe official plug-in for minifying code is grunt-contrib-uglify. Just like we did last time, we just run an NPM command to install it:\n\nnpm install grunt-contrib-uglify --save-dev\n\nThen we alter our Gruntfile.js to load the plug-in:\n\ngrunt.loadNpmTasks('grunt-contrib-uglify');\n\nThen we configure it:\n\nuglify: {\n build: {\n src: 'js/build/production.js',\n dest: 'js/build/production.min.js'\n }\n}\n\nLet\u2019s update that default task to also run minification:\n\ngrunt.registerTask('default', ['concat', 'uglify']);\n\nSuper-similar to the concatenation set-up, right?\n\nRun grunt at the terminal and you\u2019ll get some deliciously minified JavaScript:\n\n Minified JavaScript\n\nThat production.min.js file is what we would load up for use in our index.html file.\n\nLet\u2019s make Grunt optimize our images\n\nWe\u2019ve got this down pat now. Let\u2019s just go through the motions. The official image minification plug-in for Grunt is grunt-contrib-imagemin. Install it:\n\nnpm install grunt-contrib-imagemin --save-dev\n\nRegister it in the Gruntfile.js:\n\ngrunt.loadNpmTasks('grunt-contrib-imagemin');\n\nConfigure it:\n\nimagemin: {\n dynamic: {\n files: [{\n expand: true,\n cwd: 'images/',\n src: ['**/*.{png,jpg,gif}'],\n dest: 'images/build/'\n }]\n }\n}\n\nMake sure it runs:\n\ngrunt.registerTask('default', ['concat', 'uglify', 'imagemin']);\n\nRun grunt and watch that gorgeous squishification happen:\n\n Squished images\n\nGotta love performance increases for nearly zero effort.\n\nLet\u2019s get a little bit smarter and automate\n\nWhat we\u2019ve done so far is awesome and incredibly useful. 
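\n\nAs a quick aside (a sketch, not part of this article\u2019s demo project): registerTask isn\u2019t limited to default. You can register extra named tasks for the times you only want part of the pipeline to run:\n\n// runs when you type grunt into the terminal\ngrunt.registerTask('default', ['concat', 'uglify', 'imagemin']);\n// runs when you type grunt scripts, skipping the image work\ngrunt.registerTask('scripts', ['concat', 'uglify']);\n\n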
But there are a couple of things we can get smarter on and make things easier on ourselves, as well as Grunt:\n\n\n\tRun these tasks automatically when they should\n\tRun only the tasks needed at the time\n\n\nFor instance:\n\n\n\tConcatenate and minify JavaScript when JavaScript changes\n\tOptimize images when a new image is added or an existing one changes\n\n\nWe can do this by watching files. We can tell Grunt to keep an eye out for changes to specific places and, when changes happen in those places, run specific tasks. Watching happens through the official grunt-contrib-watch plug-in.\n\nI\u2019ll let you install it. It is exactly the same process as the last few plug-ins we installed. We configure it by giving watch specific files (or folders, or both) to watch. By watch, I mean monitor for file changes, file deletions or file additions. Then we tell it what tasks we want to run when it detects a change.\n\nWe want to run our concatenation and minification when anything in the /js/ folder changes. When it does, we should run the JavaScript-related tasks. And when things happen elsewhere, we should not run the JavaScript-related tasks, because that would be irrelevant. So:\n\nwatch: {\n scripts: {\n files: ['js/*.js'],\n tasks: ['concat', 'uglify'],\n options: {\n spawn: false,\n },\n } \n}\n\nFeels pretty comfortable at this point, hey? The only weird bit there is the spawn thing. And you know what? I don\u2019t even really know what that does. From what I understand from the documentation it is the smart default. That\u2019s real-world development. Just leave it alone if it\u2019s working and if it\u2019s not, learn more.\n\nNote: Isn\u2019t it frustrating when something that looks so easy in a tutorial doesn\u2019t seem to work for you? If you can\u2019t get Grunt to run after making a change, it\u2019s very likely to be a syntax error in your Gruntfile.js. That might look like this in the terminal:\n\n Errors running Grunt\n\nUsually Grunt is pretty good about letting you know what happened, so be sure to read the error message. In this case, a syntax error in the form of a missing comma foiled me. Adding the comma allowed it to run.\n\nLet\u2019s make Grunt do our preprocessing\n\nThe last thing on our list from the top of the article is using Sass \u2014 yet another task Grunt is well-suited to run for us. But wait? Isn\u2019t Sass technically in Ruby? Indeed it is. There is a version of Sass that will run in Node and thus not add an additional dependency to our project, but it\u2019s not quite up-to-snuff with the main Ruby project. So, we\u2019ll use the official grunt-contrib-sass plug-in which just assumes you have Sass installed on your machine. If you don\u2019t, follow the command line instructions.\n\nWhat\u2019s neat about Sass is that it can do concatenation and minification all by itself. So for our little project we can just have it compile our main global.scss file:\n\nsass: {\n dist: {\n options: {\n style: 'compressed'\n },\n files: {\n 'css/build/global.css': 'css/global.scss'\n }\n } \n}\n\nWe wouldn\u2019t want to manually run this task. We already have the watch plug-in installed, so let\u2019s use it! Within the watch configuration, we\u2019ll add another subtask:\n\ncss: {\n files: ['css/*.scss'],\n tasks: ['sass'],\n options: {\n spawn: false,\n }\n}\n\nThat\u2019ll do it. Now, every time we change any of our Sass files, the CSS will automatically be updated.\n\nLet\u2019s take this one step further (it\u2019s absolutely worth it) and add LiveReload. 
With LiveReload, you won\u2019t have to go back to your browser and refresh the page. Page refreshes happen automatically and in the case of CSS, new styles are injected without a page refresh (handy for heavily state-based websites).\n\nIt\u2019s very easy to set up, since the LiveReload ability is built into the watch plug-in. We just need to:\n\n\nInstall the browser plug-in\nAdd to the top of the watch configuration:\n. watch: {\n options: {\n livereload: true,\n },\n scripts: { \n /* etc */\n\nRestart the browser and click the LiveReload icon to activate it.\nUpdate some Sass and watch it change the page automatically.\n\n\n Live reloading browser\n\nYum.\n\nPrefer a video?\n\nIf you\u2019re the type that likes to learn by watching, I\u2019ve made a screencast to accompany this article that I\u2019ve published over on CSS-Tricks: First Moments with Grunt\n\nLeveling up\n\nAs you might imagine, there is a lot of leveling up you can do with your build process. It surely could be a full time job in some organizations.\n\nSome hardcore devops nerds might scoff at the simplistic setup we have going here. But I\u2019d advise them to slow their roll. Even what we have done so far is tremendously valuable. And don\u2019t forget this is all free and open source, which is amazing.\n\nYou might level up by adding more useful tasks:\n\n\n\tRunning your CSS through Autoprefixer (A+ Would recommend) instead of a preprocessor add-ons.\n\tWriting and running JavaScript unit tests (example: Jasmine).\n\tBuild your image sprites and SVG icons automatically (example: Grunticon).\n\tStart a server, so you can link to assets with proper file paths and use services that require a real URL like TypeKit and such, as well as remove the need for other tools that do this, like MAMP.\n\tCheck for code problems with HTML-Inspector, CSS Lint, or JS Hint.\n\tHave new CSS be automatically injected into the browser when it ever changes.\n\tHelp you commit or push to a version control repository like GitHub.\n\tAdd version numbers to your assets (cache busting).\n\tHelp you deploy to a staging or production environment (example: DPLOY).\n\n\nYou might level up by simply understanding more about Grunt itself:\n\n\n\tRead Grunt Boilerplate by Mark McDonnell.\n\tRead Grunt Tips and Tricks by Nicolas Bevacqua.\n\tOrganize your Gruntfile.js by splitting it up into smaller files.\n\tCheck out other people\u2019s and projects\u2019 Gruntfile.js.\n\tLearn more about Grunt by digging into its source and learning about its API.\n\n\nLet\u2019s share\n\nI think some group sharing would be a nice way to wrap this up. If you are installing Grunt for the first time (or remember doing that), be especially mindful of little frustrating things you experience(d) but work(ed) through. Those are the things we should share in the comments here. That way we have this safe place and useful resource for working through those confusing moments without the embarrassment. We\u2019re all in this thing together!\n\n \n\n1 Maybe someday someone will make a beautiful Grunt app for your operating system of choice. But I\u2019m not sure that day will come. The configuration of the plug-ins is the important part of using Grunt. Each plug-in is a bit different, depending on what it does. That means a uniquely considered UI for every single plug-in, which is a long shot.\n\nPerhaps a decent middleground is this Grunt DevTools Chrome add-on.\n\n2 Gruntfile.js is often referred to as Gruntfile in documentation and examples. 
Don\u2019t literally name it Gruntfile \u2014 it won\u2019t work.", "year": "2013", "author": "Chris Coyier", "author_slug": "chriscoyier", "published": "2013-12-11T00:00:00+00:00", "url": "https://24ways.org/2013/grunt-is-not-weird-and-hard/", "topic": "code"} {"rowid": 20, "title": "Make Your Browser Dance", "contents": "It was a crisp winter\u2019s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue.\n\nThis was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these\u2026 and working out ways to mix them as the music played.\n\nThis was it. This was me VJing.\n\nThis was all a lifetime (well a decade!) ago.\n\nWhen I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframe directive, as well as new APIs such as getUserMedia, indexedDB, the Web Audio API\n\nBut wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser?\n\nWell, there\u2019s only one thing to do: let\u2019s try it!\n\nLet\u2019s take to the dance floor \n\nOver the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let\u2019s take the same approach here.\n\nThe main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right?\n\nSo, let\u2019s start at the beginning: creating a simple visual. For this I\u2019m going to create a CSS animation. It\u2019s just a funky i element with the opacity being changed to make it flash.\n\n See the Pen Creating a light by Rumyra (@Rumyra) on CodePen\n\nA note about prefixes: I\u2019ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. I find a great resource to find out if you do is caniuse.com. You can also check out all the code for the examples in this article\n\nStart the music\n\nWell, that\u2019s pretty easy so far. Next up: loading in some sound. For this we\u2019ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device\u2019s speakers; and any number of processing nodes in between. 
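\n\nIf the node idea feels a little abstract, here\u2019s a throwaway sketch (not part of the VJ demo) wiring up the simplest possible graph: one source connected straight to the speakers.\n\nvar ctx = new AudioContext();\nvar source = ctx.createOscillator(); // a source node: a simple generated tone\nsource.connect(ctx.destination); // the destination node: your speakers\nsource.start(); // warning: a constant beep until you call source.stop()\n\n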
All this processing that goes on with the audio is sandboxed within the AudioContext.\n\nSo, let\u2019s start by initialising our audio context.\n\nvar contextClass = window.AudioContext;\nif (contextClass) {\n //web audio api available.\n var audioContext = new contextClass();\n} else {\n //web audio api unavailable\n //warn user to upgrade/change browser\n}\n\nNow let\u2019s load our sound file into the new context we created with an XMLHttpRequest.\n\nfunction loadSound() {\n\t//set audio file url\n\tvar audioFileUrl = '/octave.ogg';\n\t//create new request\n\tvar request = new XMLHttpRequest();\n\trequest.open(\"GET\", audioFileUrl, true);\n\trequest.responseType = \"arraybuffer\";\n\n\trequest.onload = function() {\n\t\t//take from http request and decode into buffer\n\t\tcontext.decodeAudioData(request.response, function(buffer) {\n\t \taudioBuffer = buffer;\n\t });\n\t\t}\n\trequest.send();\n}\n\nPhew! Now we\u2019ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase volume; add filters; spatialisation. If you want to dig deeper, the O\u2019Reilly Web Audio API book by Boris Smus is available to read online free.\n\nAll we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have.\n\n Learning the steps\n\nLet\u2019s take a minute to step back and remember our school days and science class. I\u2019m sure if I drew a picture of a sound wave, we would all start nodding our heads.\n\n \n\nThe sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically strength of pressure. A simple example of change of amplitude is when you increase the volume on your stereo and the output wave increases in size.\n\nThis is great when everything is analogue, but the waveform varies continuously and it\u2019s not suitable for digital processing: there\u2019s an infinite set of values. For digital processing, we need discrete numbers.\n\nWe have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we\u2019re doing in the code above is piping that data in the audio context. All we need to do now is access it.\n\nWe can do this with the Web Audio API\u2019s analysing functionality. Just pop in an analysing node before we connect the source to its destination node.\n\nfunction createAnalyser(source) {\n\t//create analyser node\n\tanalyser = audioContext.createAnalyser();\n\t//connect to source\n\tsource.connect(analyzer);\n\t//pipe to speakers\n\tanalyser.connect(audioContext.destination);\n}\n\nThe data I\u2019m really interested in here is frequency. Later we could look into amplitude or time, but for now I\u2019m going to stick with frequency.\n\nThe analyser node gives us frequency data via the getFrequencyByteData method.\n\n Don\u2019t forget to count!\n\nTo collect the data from the getFrequencyByteData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it?\n\nThis is really up to us and how high the resolution of frequencies we want to analyse is. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context\u2019s sampleRate attribute. 
This is good to bear in mind when you\u2019re thinking about your resolution of frequencies.\n\nvar sampleRate = audioContext.sampleRate;\n\nLet\u2019s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we\u2019re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0\u201320,000Hz.\n\nSo, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform), which in this case is 2,048 (the default). You can set it via the fftSize property.\n\n//set our FFT size\nanalyser.fftSize = 2048;\n//create an empty array with 1024 items\nvar frequencyData = new Uint8Array(1024);\n\nSo, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 \u00f7 1,024 = 23.44Hz apart.\n\nThe thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame.\n\nfunction update() {\n \t//constantly getting feedback from data\n \trequestAnimationFrame(update);\n \tanalyser.getByteFrequencyData(frequencyData);\n}\n\n Putting it all together\n\nSweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier.\n\nWe can easily pause and run our CSS animation from JavaScript:\n\nelement.style.webkitAnimationPlayState = \"paused\";\nelement.style.webkitAnimationPlayState = \"running\";\n\nUnfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle.\n\nThere is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms.\n\nThis seems a bit much for our proof of concept, so let\u2019s backtrack a little. We know by the animation we\u2019ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript.\n\nelement.style.opacity = \"1\";\nelement.style.opacity = \"0.2\";\n\nSo let\u2019s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I\u2019ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold.\n\n//get light elements\nvar lights = document.getElementsByTagName('i');\nvar totalLights = lights.length;\n\nfor (var i=0; i < totalLights; i++) {\n if (frequencyData[i] > 160){\n //start animation on element\n lights[i].style.opacity = \"1\";\n } else {\n lights[i].style.opacity = \"0.2\";\n }\n}\n\nSee all the code in action here. I suggest viewing in a modern browser :)\n\nAwesome! It is true \u2014 we can VJ in our browser!\n\nLet\u2019s dance!\n\nSo, let\u2019s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. 
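\n\nLots of lights needn\u2019t mean lots of hand-written markup; a little loop can generate them (a sketch: the .stage container is an assumption, and the count is up to you):\n\nvar stage = document.querySelector('.stage'); // assumes an element to hold the lights\nfor (var i = 0; i < 100; i++) {\n stage.appendChild(document.createElement('i'));\n}\n\n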
Also, maybe we should try a sound file more suited to gigs or clubs.\n\nCheck it out!\n\nI don\u2019t know about you, but I\u2019m pretty excited \u2014 that\u2019s just a bit of HTML, CSS and JavaScript!\n\nThe other thing to think about, of course, is the sound that you would get at a venue. We don\u2019t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I\u2019ve found, is to capture what my laptop\u2019s mic is picking up and piping that back into the audio context. We can do this by using getUserMedia.\n\nLet\u2019s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash.\n\n And relax :)\n\nThere you have it. Sit back, play some music and enjoy the Winamp like experience in front of you.\n\nSo, where do we go from here? I already have a wealth of ideas. We haven\u2019t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it\u2019s questionable whether the browser is the best environment for this. For one, I\u2019m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it\u2019s fun, and it looks cool and sometimes I think it\u2019s OK to just dance.", "year": "2013", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2013-12-02T00:00:00+00:00", "url": "https://24ways.org/2013/make-your-browser-dance/", "topic": "code"} {"rowid": 21, "title": "Keeping Parts of Your Codebase Private on GitHub", "contents": "Open source is brilliant, there\u2019s no denying that, and GitHub has been instrumental in open source\u2019s recent success. I\u2019m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. To this end, GitHub created private repositories which act like any other Git repository, only, well, private!\n\nA slightly less common issue, and one I\u2019ve come up against myself, is the desire to only keep certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes.\n\nBefore we begin, it\u2019s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and expose parts of it as either an open source project, or a built GitHub Pages site.\n\nN.B. This post requires some basic Git knowledge.\n\nAdding your public remote\n\nLet\u2019s assume you\u2019re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the \u201cAdding your private remote\u201d section.)\n\nSo, we have a clean slate: nothing has been set up yet, we\u2019re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com. 
Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account).\n\nOn your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff \u2014 whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this:\n\n$ git remote add origin git@github.com:[user]/site.com.git\n\nHere we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you\u2019ve explicitly changed it) to that remote:\n\n$ git push -u origin master\n\nHere we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like.\n\nThis is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. Now to set up our private remote.\n\nAdding your private remote\n\nWe\u2019ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. (Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. Two GitHub repos, and one local one.\n\nIn your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don\u2019t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository.\n\n$ git checkout -b dev\n\nWe have now made a new branch called dev off the branch we were on last (master, unless you renamed it).\n\nNow we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote:\n\n$ git remote add private git@github.com:[user]/private.site.com.git\n\nLike before, we are just telling Git to add a new remote to this repo, only this time we\u2019ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it.\n\nNow we need to tell our dev branch to push to our private remote:\n\n$ git push -u private dev\n\nHere, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we\u2019ve set up upstream tracking. 
This means that, by default, the dev branch will only push and pull to and from the private remote (unless you ever explicitly state otherwise).\n\nNow you have two branches (master and dev respectively) that push to two remotes (origin and private respectively) which are public and private respectively.\n\nAny work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote.\n\nAdding more branches\n\nSo far we\u2019ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you\u2019d like. Of course, you\u2019d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let\u2019s imagine we want to create a branch to try something out real quickly:\n\n$ git checkout -b test\n\nNow, when we come to push this branch, we can choose which remote we send it to:\n\n$ git push -u private test\n\nThis pushes the new test branch to our private remote (again, setting the persistent tracking with -u).\n\nYou can have as many or as few remotes or branches as you like.\n\nCombining the two\n\nLet\u2019s say you\u2019ve been working on a new feature in private for a few days, and you\u2019ve kept that on the private remote. You\u2019ve now finalised the addition and want to move it into your public repo. This is just a simple merge. Check out your master branch:\n\n$ git checkout master\n\nThen merge in the branch that contained the feature:\n\n$ git merge dev\n\nNow master contains the commits that were made on dev and, once you\u2019ve pushed master to its remote, those commits will be viewable publicly on GitHub:\n\n$ git push\n\nNote that we can just run $ git push on the master branch as we\u2019d previously set up our upstream tracking (-u).\n\nMultiple machines\n\nSo far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let\u2019s say you\u2019ve got yourself a new Mac (yay!) and you want to clone an existing project:\n\n$ git clone git@github.com:[user]/site.com.git\n\nThis will not clone any information about the remotes you had set up on the previous machine. Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above.\n\nDone!\n\nIf you\u2019d like to see me blitz through all that in one go, check the showterm recording.\n\nThe beauty of this is that we can still share our code, but we don\u2019t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it\u2019s ready for merge. Have a blog post in a Jekyll site that you\u2019re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it\u2019s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain.\n\nAll this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. 
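If you ever want to double-check that wiring, git remote -v lists each remote along with its fetch and push URLs. With the example names used in this article (your own names will differ), the output should look something like this:\n\n
$ git remote -v\n
origin git@github.com:[user]/site.com.git (fetch)\n
origin git@github.com:[user]/site.com.git (push)\n
private git@github.com:[user]/private.site.com.git (fetch)\n
private git@github.com:[user]/private.site.com.git (push)\n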
What you do with them is entirely up to you!", "year": "2013", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2013-12-09T00:00:00+00:00", "url": "https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/", "topic": "code"} {"rowid": 30, "title": "Making Sites More Responsive, Responsibly", "contents": "With digital projects we\u2019re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations.\u00a0A product\u2019s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities.\n\nThe popularisation of responsive web design is a great example of how we are able to shape the web\u2019s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn\u2019t know or care about the difference in these approaches, but\u00a0we\u00a0did. Responsive design was similar in this respect \u2013 momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.\u00a0\n\nWe tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text.\n\n The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens.\n\nWhen we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. We can make our experiences more\u00a0responsive to people\u2019s needs, enhancing their usability and accessibility along the way.\n\nBeing responsibly responsive\n\nOf course, when we think about being more responsive, there\u2019s a fine line between creating useful functionality and becoming intrusive and overly complex. 
In the excellent Responsible Responsive Design, Scott Jehl states that:\n\n\nA responsible responsive design equally considers the following throughout a project:\n\nUsability: The way a website\u2019s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions.\nAccess: The ability for users of all devices, browsers, and assistive technologies to access and understand a site\u2019s features and content.\nSustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future.\nPerformance: The speed at which a site\u2019s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface.\n\n\n\nScott\u2019s book covers these ideas in a lot more detail than I\u2019ll be able to here (put it on your Christmas list if it\u2019s not there already), but for now let\u2019s think a bit more about our roles as digital creators\u00a0and the power this gives us.\n\nOur choices around technology and the decisions we have to make can be extremely wide-ranging. Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies.\n\nThe power of the web\n\nWe all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions.\n\nYou\u2019ll have seen articles with flashy titles such as \u201cTop 5 JavaScript APIs You\u2019ve Never Heard Of!\u201d, which you\u2019ll probably read, think \u201cThat\u2019s quite cool\u201d, yet never use in any real work.\n\nThere is great potential for technologies like these\u00a0to be misused, but there are also great prospects for them to be used well to enhance experiences. Let\u2019s have a look at a few\u00a0examples you may not have considered.\n\nOffline first\n\nWhen we make websites, many of us follow a process which involves user stories \u2013 standardised snippets of context explaining who needs what, and why.\n\n\u201cAs a student I want to pay online for my course so I don\u2019t have to visit the college in person.\u201d\n\n\u201cAs a retailer I want to generate unique product codes so I can manage my stock.\u201d\n\nWe very often focus heavily on what\u00a0needs doing, but may not consider carefully how it will be done. As in Scott\u2019s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense \u2013 including under different conditions.\n\nOffline first is yet another \u2018first\u2019 methodology (my personal favourite being \u2018tea first\u2019), which encourages us to develop so that connectivity\u00a0itself is an enhancement \u2013 letting\u00a0users continue with tasks even when they\u2019re offline. 
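Even before settling on a caching strategy, we can start treating connectivity as something to respond to rather than something to assume. The snippet below is only a sketch (the is-offline class name is an invented hook, not part of any specification), using the online and offline events alongside navigator.onLine:\n\n
function updateConnectivity() {\n
  //navigator.onLine is a blunt instrument, but fine for a simple hint\n
  if (navigator.onLine) {\n
    document.documentElement.classList.remove('is-offline');\n
  } else {\n
    document.documentElement.classList.add('is-offline');\n
  }\n
}\n\n
window.addEventListener('online', updateConnectivity);\n
window.addEventListener('offline', updateConnectivity);\n
updateConnectivity();\n\n
Our styles and scripts can then adapt to that state, while the heavier lifting of actually caching content is covered by the approaches below.\n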
Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK\u2019s signal-barren East Anglian wilderness as I do), then you\u2019ll realise that connectivity isn\u2019t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I\u2019m sure we\u2019re all familiar with \u2013 the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, then everyone\u2019s likely to be offline after all. Wouldn\u2019t it be better if we could do something like this instead?\n\n\n\tSomeone visits our conference website.\n\tOn this initial run, some assets may be cached for future use: the conference schedule, the site\u2019s CSS, photos of the speakers.\n\tWhen the attendee revisits the site on the day, the page shell loads up from the cache.\n\tIf we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use.\n\tIf we don\u2019t have something cached already, then we can try\u00a0grabbing it online.\n\tIf for any reason our requests for new content fail (we\u2019re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is\u00a0cached.\n\n\nThere are a number of ways we can make something like this, including using the application cache (AppCache) if you\u2019re that way inclined. However, you may want to look into service workers\u00a0instead. There are also some great resources on Offline First!\u00a0if you\u2019d like to find out more about this.\n\nBuilding in offline functionality isn\u2019t necessarily about starting offline first, and it\u2019s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we\u2019d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. Thinking in this way can enhance each point in Scott\u2019s criteria.\n\nAs I mentioned, this isn\u2019t necessarily the kind of development choice that our clients will ask us for, but it\u2019s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation.\n\nEven more accessible\n\nWe\u2019ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? 
Can we make our sites even more usable and accessible through our development choices?\n\nAgain using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen.\n\nInput\n\nAmbient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. It\u2019s not new \u2013 many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels.\n\nIf our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in units using ambient light events, in JavaScript. We may then change our presentation based on different bandings, perhaps like this:\n\nwindow.addEventListener('devicelight', function(e) {\n var lux = e.value;\n\n if (lux < 50) {\n //Change things for dim light\n }\n if (lux >= 50 && lux <= 10000) {\n //Change things for normal light\n }\n if (lux > 10000) {\n //Change things for bright light\n }\n});\n\nLive demo\u00a0(requires light sensor and supported browser).\n\nSoon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it\u2019ll probably look something like this:\n\n@media (light-level: dim) {\n /*Change things for dim light*/\n}\n\n@media (light-level: normal) {\n /*Change things for normal light*/\n}\n\n@media (light-level: washed) {\n /*Change things for bright light*/\n}\n\nWhile we may be quick to dismiss this kind of detection as being a gimmick, it\u2019s important to consider that apps such as Light Detector, listed on Apple\u2019s accessibility page, provide important context around exactly this functionality.\n\n\n\t\u201cIf you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is. You can find out whether the shades are drawn by moving the device up and down.\u201d\n\n\teverywaretechnologies.com/apps/lightdetector\n\n\nInput can be about so much more than what we enter through keyboards. Both an ever increasing amount of available sensors and more APIs being supported by the major browsers will allow us to cater for more scenarios and respond to them accordingly. This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the web speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft.\n\nOutput\n\nWeb technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users.\n\nWhen we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. 
These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we\u2019re seeing and hearing.\n\nHaptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here\u2019s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed by pauses in milliseconds.\n\nnavigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]);\n\nLive demo\u00a0(requires supported browser)\n\nWith great power\u2026\n\nWhat you\u2019ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. This idea isn\u2019t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kind of approaches within our projects \u2013 if we don\u2019t root for them, they probably won\u2019t happen. This is where our professional responsibility comes in.\n\nWe won\u2019t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we\u2019ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let\u2019s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations.\n\nWhen you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible.", "year": "2014", "author": "Sally Jenkinson", "author_slug": "sallyjenkinson", "published": "2014-12-10T00:00:00+00:00", "url": "https://24ways.org/2014/making-sites-more-responsive-responsibly/", "topic": "code"} {"rowid": 31, "title": "Dealing with Emergencies in Git", "contents": "The stockings were hung by the chimney with care,\nIn hopes that version control soon would be there.\n\nThis summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I\u2019ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information.\n\nLet\u2019s start by painting an updated version of Clement Clarke Moore\u2019s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. 
A string of coloured lights winds its way up an evergreen.\n\nPerhaps a few of these images are familiar, or maybe they\u2019re just settings you\u2019ve seen in a movie. It doesn\u2019t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: \u2018What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?\u2019 It\u2019s okay if the map isn\u2019t perfect. As you refine your understanding of a new topic, you\u2019ll outgrow the initial metaphors, analogies, and comparisons.\n\nWith apologies to Mr. Moore, let\u2019s give it a try.\n\nGetting Interrupted in Git\n\nWhen on the roof there arose such a clatter!\n\nYou\u2019re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you\u2019ve been working on is going to need to wait while you investigate the commotion.\n\nIf you\u2019ve got even a little bit of experience working with Git, you know that you cannot simply change what you\u2019re working on in times of emergency. If you\u2019ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state.\n\nUp to this point, you\u2019ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of \u2018switching branches for a sec\u2019. This isn\u2019t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren\u2019t finished with what you were working on.\n\nYou don\u2019t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren\u2019t putting your book away though, you\u2019re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down.\n\nLet\u2019s say you\u2019ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof:\n\n$ git stash\n\nAfter running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work.\n\nWith the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it\u2019s not reindeer after all, but just your boss who thought they\u2019d help out by writing some code on the project you\u2019ve been working on. Bless. 
Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time.\n\nYou fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy:\n\n$ git fetch\n$ git branch -r\n$ git checkout -b helpful-boss-branch origin/helpful-boss-branch\n\nYou are now in a local copy of the branch where you are free to look around, and figure out exactly what\u2019s going on.\n\nYou sigh audibly and say, \u2018Okay. Tell me what was happening when you first realised you\u2019d gotten into a mess\u2019 as you look through the log messages for the branch.\n\n$ git log --oneline\n$ git log\n\nBy using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof.\n\nYou may also want to compare the work your boss has done to the main branch for your project. For this article, we\u2019ll assume the main branch is named master.\n\n$ git diff master\n\nLooking through the commits, you may be able to see that things started out okay but then took a turn for the worse.\n\nChecking out a single commit\n\nUsing commands you\u2019re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch.\n\nUsing the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let\u2019s say the unique identifier you want to checkout is 25f6d7f.\n\n$ git checkout 25f6d7f\n\nNote: checking out '25f6d7f'.\n\nYou are in 'detached HEAD' state. You can look around,\nmake experimental changes and commit them, and you can\ndiscard any commits you make in this state without\nimpacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may do so (now or later) by using @-b@ with the checkout command again. Example:\n\n$ git checkout -b new_branch_name\n\nHEAD is now at 25f6d7f... Removed first paragraph.\n\nThis is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic.\n\nTake a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you\u2019ve temporarily disconnected from a known chain of events. In other words, you\u2019re currently looking at the middle of a story (or branch) about what happened \u2013 and you\u2019re not at the endpoint for this particular story.\n\nGit allows you to view the history of your repository as a timeline (technically it\u2019s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. 
If you make commits while you\u2019re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work.\n\n$ git checkout master\n\nWarning: you are leaving 1 commit behind, not connected to\nany of your branches:\n\n 7a85788 Your witty holiday commit message.\n\nIf you want to keep them by creating a new branch, this may be a good time to do so with:\n\n$ git branch new_branch_name 7a85788\n\nSwitched to branch 'master'\nYour branch is up-to-date with 'origin/master'.\n\nSo, if you want to save the commits you\u2019ve made while in a detached HEAD state, you simply need to put them on a new branch.\n\n$ git branch saved-headless-commits 7a85788\n\nWith this trick under your belt, you can jingle around in history as much as you\u2019d like. It\u2019s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you\u2019ll need to move back to the branch tip by checking out the branch again.\n\n$ git checkout helpful-boss-branch\n\nYou\u2019re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you\u2019ve followed the instructions Git gave you. That wasn\u2019t so scary after all, now, was it?\n\nBack to our reindeer problem.\n\nIf your boss is anything like the bosses I\u2019ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you\u2019ll want to capture the good work using one of several different methods.\n\nBack in the living room, we\u2019ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work.\n\nSaving just one commit\n\nAbout that bowl of nuts. If you\u2019re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we\u2019re just going to pick out one nut from the bowl. In Git terms, we\u2019re going to cherry-pick a commit and save it to another branch.\n\nFirst, checkout the main branch for your development work. From this branch, create a new branch where you can copy the changes into.\n\n$ git checkout master\n$ git checkout -b rescue-the-boss\n\nFrom your boss\u2019s branch, helpful-boss-branch locate the commit you want to keep.\n\n$ git log --oneline helpful-boss-branch\n\nLet\u2019s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch.\n\n$ git cherry-pick e08740b\n\nIf you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss\u2019s branch.\n\nAt this point you might need to make a few additional fixes to help your boss out. (You\u2019re angling for a bonus out of all this. Go the extra mile.) Once you\u2019ve made your additional changes, you\u2019ll need to add that work to the branch as well.\n\n$ git add [filename(s)]\n$ git commit -m \"Building on boss's work to improve feature X.\"\n\nGo ahead and test everything, and make sure it\u2019s perfect. 
You don\u2019t want to introduce your own mistakes during the rescue mission!\n\nUploading the fixed branch\n\nThe next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping you fix their branch.\n\n$ git push -u origin rescue-the-boss\n\nCleaning up and getting back to work\n\nWith your boss rescued, and your bonus secured, you can now delete the local temporary branches.\n\n$ git branch --delete rescue-the-boss\n$ git branch --delete helpful-boss-branch\n\nAnd settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat.\n\n$ git checkout waiting-for-st-nicholas\n$ git stash pop\n\nYour working directory has been returned to exactly the same state you were in at the beginning of the article.\n\nHaving fun with analogies\n\nI\u2019ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it\u2019s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it\u2019s like trying to find that one broken light on the string of Christmas lights). It doesn\u2019t matter if the analogy isn\u2019t perfect. It\u2019s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner\u2019s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works.\n\nOr, if you\u2019re like me, you can choose to never grow old and just keep mucking about in the analogies. I\u2019d argue it\u2019s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.", "year": "2014", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2014-12-02T00:00:00+00:00", "url": "https://24ways.org/2014/dealing-with-emergencies-in-git/", "topic": "code"} {"rowid": 36, "title": "Naming Things", "contents": "There are only two hard things in computer science: cache invalidation and naming things.\nPhil Karlton\n\n\nBeing a professional web developer means taking responsibility for the code you write and ensuring it is comprehensible to others. Having a documented code style is one means of achieving this, although the size and type of project you\u2019re working on will dictate the conventions used and how rigorously they are enforced.\n\nWorking in-house may mean working with multiple developers, perhaps in distributed teams, who are all committing changes \u2013 possibly to a significant codebase \u2013 at the same time. Left unchecked, this codebase can become unwieldy. Coding conventions ensure everyone can contribute, and help build a product that works as a coherent whole.\n\nEven on smaller projects, perhaps working within an agency or by yourself, at some point the resulting product will need to be handed over to a third party. It\u2019s sensible, therefore, to ensure that your code can be understood by those who\u2019ll eventually take ownership of it.\n\nPut simply, code is read more often than it is written or changed. 
A consistent and predictable naming scheme can make code easier for other developers to understand, improve and maintain, presumably leaving them free to worry about cache invalidation.\n\nLet\u2019s talk about semantics\n\nNames not only allow us to identify objects, but they can also help us describe the objects being identified.\n\nSemantics (the meaning or interpretation of words) is the cornerstone of standards-based web development. Using appropriate HTML elements allows us to create documents and applications that have implicit structural meaning. Thanks to HTML5, the vocabulary we can choose from has grown even larger.\n\nHTML elements provide one level of meaning: a widely accepted description of a document\u2019s underlying structure. It\u2019s only with the mutual agreement of browser vendors and developers that
<p>
indicates a paragraph.\n\nYet (with the exception of widely accepted microdata and microformat schemas) only HTML elements convey any meaning that can be parsed consistently by user agents. While using semantic values for class names is a noble endeavour, they provide no additional information to the visitor of a website; take them away and a document will have exactly the same semantic value.\n\nI didn\u2019t always think this was the case, but the real world has a habit of changing your opinion. Much of my thinking around semantics has been informed by the writing of my peers. In \u201cAbout HTML semantics and front-end architecture\u201d, Nicholas Gallagher wrote:\n\n\n\tThe important thing for class name semantics in non-trivial applications is that they be driven by pragmatism and best serve their primary purpose \u2013 providing meaningful, flexible, and reusable presentational/behavioural hooks for developers to use.\n\n\nThese thoughts are echoed by Harry Roberts in his CSS Guidelines:\n\n\n\tThe debate surrounding semantics has raged for years, but it is important that we adopt a more pragmatic, sensible approach to naming things in order to work more efficiently and effectively. Instead of focussing on \u2018semantics\u2019, look more closely at sensibility and longevity \u2013 choose names based on ease of maintenance, not for their perceived meaning.\n\n\nNaming methodologies\n\nFront-end development has undergone a revolution in recent years. As the projects we\u2019ve worked on have grown larger and more important, our development practices have matured. The pros and cons of object-orientated approaches to CSS can be endlessly debated, yet their introduction has highlighted the usefulness of having documented naming schemes.\n\nJonathan Snook\u2019s SMACSS (Scalable and Modular Architecture for CSS) collects style rules into five categories: base, layout, module, state and theme. This grouping makes it clear what each rule does, and is aided by a naming convention:\n\n\n\tBy separating rules into the five categories, naming convention is beneficial for immediately understanding which category a particular style belongs to and its role within the overall scope of the page. On large projects, it is more likely to have styles broken up across multiple files. In these cases, naming convention also makes it easier to find which file a style belongs to.\n\n\tI like to use a prefix to differentiate between layout, state and module rules. For layout, I use l- but layout- would work just as well. Using prefixes like grid- also provide enough clarity to separate layout styles from other styles. For state rules, I like is- as in is-hidden or is-collapsed. This helps describe things in a very readable way.\n\n\nSMACSS is more a set of suggestions than a rigid framework, so its ideas can be incorporated into your own practice. Nicholas Gallagher\u2019s SUIT CSS project is far more strict in its naming conventions:\n\n\n\tSUIT CSS relies on structured class names and meaningful hyphens (i.e., not using hyphens merely to separate words). This helps to work around the current limits of applying CSS to the DOM (i.e., the lack of style encapsulation), and to better communicate the relationships between classes.\n\n\nOver the last year, I\u2019ve favoured a BEM-inspired approach to CSS. BEM stands for block, element, modifier, which describes the three types of rule that contribute to the style of a single component. This means that, given the following markup:\n\n

    \n
<ul class=\"sleigh\">\n
  <li class=\"sleigh__reindeer sleigh__reindeer--famous\">Rudolph</li>\n
  <li class=\"sleigh__reindeer\">Dasher</li>\n
  <li class=\"sleigh__reindeer\">Dancer</li>\n
  <li class=\"sleigh__reindeer\">Prancer</li>\n
  <li class=\"sleigh__reindeer\">Vixen</li>\n
  <li class=\"sleigh__reindeer\">Comet</li>\n
  <li class=\"sleigh__reindeer\">Cupid</li>\n
  <li class=\"sleigh__reindeer\">Dunder</li>\n
  <li class=\"sleigh__reindeer\">Blixem</li>\n
</ul>
\n\nI know that:\n\n\n\t.sleigh is a containing block or component.\n\t.sleigh__reindeer is used only as a descendent element of .sleigh.\n\t.sleigh__reindeer\u2013\u2013famous is used only as a modifier of .sleigh__reindeer.\n\n\nWith this naming scheme in place, I know which styles relate to a particular component, and which are shared. Beyond reducing specificity-related head-scratching, this approach has given me a framework within which I can consistently label items, and has sped up my workflow considerably.\n\nEach of these methodologies shows that any robust CSS naming convention will have clear rules around case (lowercase, camelCase, PascalCase) and the use of special (allowed) characters like hyphens and underscores.\n\nWhat makes for a good name?\n\nRegardless of higher-level conventions, there\u2019s no getting away from the fact that, at some point, we\u2019re still going to have to name things. Recognising that classes should be named with other developers in mind, what makes for a good name?\n\nUnderstandable\n\nThe most important aspect is for a name to be understandable. Words used in your project may come from a variety of sources: some may be widely understood, and others only be recognised by people working within a particular environment.\n\n\n\tCulture\nMost words you\u2019ll choose will have common currency outside the world of web development, although they may have a particular interpretation among developers (think menu, list, input). However, words may have a narrower cultural significance; for example, in Germany and other German-speaking countries, impressum is the term used for legally mandated statements of ownership.\n\tIndustry\nIndustries often use specific terms to describe common business practices and concepts. Publishing has a number of these (headline, standfirst, masthead, colophon\u2026) all have well understood meanings \u2013 and not all of them are relevant to online usage.\n\tOrganisation\nCompanies may have internal names (or nicknames) for their products and services. The Guardian is rife with such names: bisons (and buffalos), pixies (and super-pixies), bentos (and mini-bentos)\u2026 all of which mean something very different outside the organisation. Although such names can be useful inside smaller teams, in larger organisations they can become a barrier to entry, a sort of secret code used among employees who have been around long enough to know what they mean.\n\tProduct\nYour team will undoubtedly have created names for specific features or interface components used in your product. For example, at Clearleft we coined the term gravigation for a navigation bar that was pinned to the bottom of the viewport. Elements of a visual design language may have names, too. Transport for London\u2019s bar and circle logo is known internally as the roundel, while Nike\u2019s logo is called the swoosh. Branding agencies often christen colours within a brand palette, too, either to evoke aspects of the identity or to indicate intended usage.\n\n\nOnce you recognise the origin of the words you use, you\u2019ll be better able to judge their appropriateness. Using Latin words for class names may satisfy a need to use semantic-sounding terms but, unless you work in a company whose employees have a basic grasp of Latin, a degree of translation will be required. 
Military ranks might be a clever way of declaring sizes without implying actual values, but I\u2019d venture most people outside the armed forces don\u2019t know how they\u2019re ordered.\n\nObvious\n\nQuite often, the first name that comes into your head will be the best option. Names that obliquely reference the function of a class (e.g. receptacle instead of container, kevlar instead of no-bullets) only serve to add an additional layer of abstraction. Don\u2019t overthink it!\n\nOne way of knowing if the names you use are well understood is to look at what similar concepts are called in existing vocabularies. schema.org, Dublin Core and the BBC\u2019s ontologies are all useful sources for object names.\n\nFunctional\n\nWhile we\u2019ve learned to avoid using presentational classes, there remains a tension between naming things based on their content, and naming them for their intended presentation or behaviour (which may change at different breakpoints). Rather than think about a component\u2019s appearance or behaviour, instead look to its function, its purpose. To clarify, ask what a component\u2019s function is, and not how the component functions.\n\nFor example, the Guardian\u2019s internal content system uses the following names for different types of image placement: supporting, showcase and thumbnail, with inline being the default. These options make no promise of the resulting position on a webpage (or smartphone app, or television screen\u2026), but do suggest intended use, and therefore imply the likely presentation.\n\nConsistent\n\nBeing consistent in your approach to names will allow for easier naming of successive components, and extending the vocabulary when necessary. For example, a predictably named hierarchy might use names like primary and secondary. Should another level need to be added, tertiary is clearly be preferred over third.\n\nAppropriate\n\nYour project will feature a mix of style rules. Some will perform utility functions (clearing floats, removing bullets from a list, reseting margins), while others will perform specific functions used only once or twice in a project. Names should reflect this. For commonly used classes, be generic; for unique components be more specific.\n\nIt\u2019s also worth remembering that you can use multiple classes on an element, so combining both generic and specific can give you a powerful modular design system:\n\n\n\tGeneric: list\n\tSpecific: naughty-children\n\tCombined: naughty-children list\n\n\nIf following the BEM methodology, you might use the following classes:\n\n\n\tGeneric: list\n\tSpecific: list\u2013\u2013nice-children\n\tCombined: list list\u2013\u2013nice-children\n\n\nExtensible\n\nGood naming schemes can be extended. One way of achieving this is to use namespaces, which are basically a way of grouping related names under a higher-level term.\n\nMicroformats are a good example of a well-designed naming scheme, with many of its vocabularies taking property names from existing and related specifications (e.g. hCard is a 1:1 representation of vCard). Microformats 2 goes one step further by grouping properties under several namespaces:\n\n\n\th-* for root class names (e.g. h-card)\n\tp-* for simple (text) properties (e.g. p-name)\n\tu-* for URL properties (e.g. u-photo)\n\tdt-* for date/time properties (e.g. dt-bday)\n\te-* for embedded markup properties (e.g. 
e-note)\n\n\nThe inclusion of namespaces is a massive improvement over the earlier specification, but the downside is that microformats now occupy five separate namespaces. This might be problematic if you are using u-* for your utility classes. While nothing will break, your naming system won\u2019t be as robust, so plan accordingly.\n\n(Note: Microformats perform a very specific function, separate from any presentational concerns. It\u2019s therefore considered best practice to not use microformat classes as styling hooks, but instead use additional classes that relate to the function of the component and adhere to your own naming conventions.)\n\nShort\n\nNames should be as long as required, but no longer. When looking for words to describe a particular function, I try to look for single words where possible. Avoid abbreviations unless they are understood within the contexts described above. rrp is fine if labelling a recommended retail price in an online shop, but not very helpful if used to mean ragged-right paragraph, for example.\n\nFun!\n\nFinally, names can be an opportunity to have some fun! Names can give character to a project, be it by providing an outlet for in-jokes or adding little easter eggs for those inclined to look.\n\nThe copyright statement on Apple\u2019s website has long been named sosumi, a word that has a nice little history inside Apple. Until recently, the hamburger menu icon on the Guardian website was labelled honest-burger, after the developer\u2019s favourite burger restaurant.\n\nA few thoughts on preprocessors\n\nCSS preprocessors have solved a lot of problems, but they have an unfortunate downside: they require you to name yet more things! Whereas we needed to worry only about style rules, now we need names for variables, mixins, functions\u2026 oh my!\n\nA second article could be written about naming these, so for now I\u2019ll offer just a few thoughts. The first is to note that preprocessors make it easier to change things, as they allow for DRYer code. So while the names of variables are important (and the advice in this article still very much applies), you can afford to relax a little.\n\nLooking to name colour variables? If possible, find out if colours have been assigned names in a brand palette. If not, use obvious names (based on appearance or function, depending on your preference) and adapt as the palette grows. If it becomes difficult to name colours that are too similar, I\u2019d venture that the problem lies with the design rather than the naming scheme.\n\nThe same is true for responsive breakpoints. Preprocessors allow you to move awkward naming conventions out of the markup and into the CSS. Although terms like mobile, tablet and desktop are not desirable given the need to think about device-agnostic design, if these terms are widely understood within a product team and among stakeholders, using them will ensure everyone is using the same language (they can always be changed later).\n\nIt still feels like we\u2019re at the very beginning of understanding how preprocessors fit into a development workflow, if at all! I suspect over the next few years, best practices will emerge for all of these considerations. In the meantime, use your brain!\n\n\n\nEven with sensible rules and conventions in place, naming things can remain difficult, but hopefully I\u2019ve made this exercise a little less painful. 
Christmas is a time of giving, so to the developer reading your code in a year\u2019s time, why not make your gift one of clearer class names.", "year": "2014", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2014-12-21T00:00:00+00:00", "url": "https://24ways.org/2014/naming-things/", "topic": "code"} {"rowid": 37, "title": "JavaScript Modules the ES6 Way", "contents": "JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. \n\nThis is something nearly all other languages come with out of the box, whether it be Ruby\u2019s require, Python\u2019s import, or any other language you\u2019re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let\u2019s be clear: it doesn\u2019t mean the new module system in the upcoming version of JavaScript won\u2019t be useful to you if you\u2019re building smaller websites rather than the next Instagram.\n\nThankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won\u2019t be landing in browsers for a while yet \u2013 but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I\u2019d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It\u2019s much simpler than you might think.\n\nWhat is ES6?\n\nECMAScript is a scripting language that is standardised by a company called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully comfirmed and complete by the end of 2014, with a target initial release date of June 2015. It\u2019s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. You shouldn\u2019t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet.\n\nThe ES6 module spec\n\nThe ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I\u2019ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it\u2019s what I\u2019ll focus on more.\n\nModule syntax\n\nThe key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. 
For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we\u2019ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it.\n\nAnother important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There\u2019s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous.\n\nNamed exports\n\nModules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword:\n\nexport function double(x) {\n return x + x;\n};\n\n\nYou can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces.\n\nvar double = function(x) {\n return x + x;\n}\n\nexport { double };\n\nA module can then import the double function like so:\n\nimport { double } from 'mymodule';\ndouble(2); // 4\n\nAgain, curly braces are required around the variable you would like to import. It\u2019s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension.\n\nThe reason for those extra braces is that this syntax lets you export multiple variables:\n\nvar double = function(x) {\n return x + x;\n}\n\nvar square = function(x) {\n return x * x;\n}\n\nexport { double, square }\n\nI personally prefer this syntax over the export function \u2026, but only because it makes it much clearer to me what the module exports. Typically I will have my export {\u2026} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting.\n\nA file importing both double and square can do so in just the way you\u2019d expect:\n\nimport { double, square } from 'mymodule';\ndouble(2); // 4\nsquare(3); // 9\n\nWith this approach you can\u2019t easily import an entire module and all its methods. This is by design \u2013 it\u2019s much better and you\u2019re encouraged to import just the functions you need to use.\n\nDefault exports\n\nAlong with named exports, the system also lets a module have a default export. This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so:\n\nexport default function(x) {\n return x + x;\n}\n\nAnd that can be imported:\n\nimport double from 'mymodule';\ndouble(2); // 4\n\n\nThis time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you\u2019d like. Default exports are not named, so you can import them as anything you like:\n\nimport christmas from 'mymodule';\nchristmas(2); // 4\n\nThe above is entirely valid.\n\nAlthough it\u2019s not something that is used too often, a module can have both named exports and a default export, if you wish.\n\nOne of the design goals of the ES6 modules spec was to favour default exports. 
There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that\u2019s fine, and you shouldn\u2019t change that to meet the preferences of those designing the spec.\n\nProgrammatic API\n\nAlong with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It\u2019s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user\u2019s browser didn\u2019t support a feature your app relied on. An example of doing this is:\n\nif(someFeatureNotSupported) {\n System.import('my-polyfill').then(function(myPolyFill) {\n // use the module from here\n });\n}\n\nSystem.import will return a promise, which, if you\u2019re not familiar, you can read about in this excellent article on HTMl5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import), is complete.\n\nThis programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task.\n\nHow to use it today\n\nIt\u2019s all well and good having this new syntax, but right now it won\u2019t work in any browser \u2013 and it\u2019s not likely to for a long time. Maybe in next year\u2019s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we\u2019re stuck with a bit of extra work.\n\nES6 module transpiler\n\nOne solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it \u2013 not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). There are also plugins for all the popular build tools, including Grunt and Gulp.\n\nThe advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should already have that set up already. If you don\u2019t have any system in place for handling modules in the browser, using the transpiler doesn\u2019t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn\u2019t do anything to help you get that code running in the browser, but if you have that part sorted it\u2019s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler.\n\nSystemJS\n\nAnother solution is SystemJS. 
It\u2019s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser).\n\nTo load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill; and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn\u2019t have an easy to find downloadable version. The ES6 module load polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 module loader. It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you.\n\nAll you need to do to get SystemJS running is to add a \n\nWhen you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn\u2019t make a lot of requests on the live site. In development though, it makes for a really nice workflow.\n\nWhen working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application\u2019s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file.\n\nSystemJS also provides a workflow for bundling your application together into one file.\n\nConclusion\n\nES6 modules may be at least six months to a year away (if not more) but that doesn\u2019t mean they can\u2019t be used today. Although there is an overhead to using them now \u2013 with the work required to set up SystemJS, the module transpiler, or another solution \u2013 that doesn\u2019t mean it\u2019s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You\u2019ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution.\n\nIf you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. 
Investing time in learning it now will pay off hugely further down the road.\n\nFurther reading\n\nIf you\u2019d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources:\n\n\n\tECMAScript 6 modules: the final syntax by Axel Rauschmayer\n\tPractical Workflows for ES6 Modules by Guy Bedford\n\tECMAScript 6 resources for the curious JavaScripter by Addy Osmani\n\tTracking ES6 support by Addy Osmani\n\tES6 Tools List by Addy Osmani\n\tUsing Grunt and the ES6 Module Transpiler by Thomas Boyt\n\tJavaScript Modules and Dependencies with jspm by myself\n\tUsing ES6 Modules Today by Guy Bedford", "year": "2014", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2014-12-03T00:00:00+00:00", "url": "https://24ways.org/2014/javascript-modules-the-es6-way/", "topic": "code"} {"rowid": 38, "title": "Websites of Christmas Past, Present and Future", "contents": "The websites of Christmas past\n\nThe first website was created at CERN. It was launched on 20 December 1990 (just in time for Christmas!), and it still works today, after twenty-four years. Isn\u2019t that incredible?!\n\nWhy does this website still work after all this time? I can think of a few reasons.\n\nFirst, the authors of this document chose HTML. Of course they couldn\u2019t have known back then the extent to which we would be creating documents in HTML, but HTML always had a lot going for it. It\u2019s built on top of plain text, which means it can be opened in any text editor, and it\u2019s pretty readable, even without any parsing.\n\nDespite the fact that HTML has changed quite a lot over the past twenty-four years, extensions to the specification have always been implemented in a backwards-compatible manner. Reading through the 1992 W3C document HTML Tags, you\u2019ll see just how it has evolved. We still have h1 \u2013 h6 elements, but I\u2019d not heard of the element before. Despite being deprecated since HTML2, it still works in several browsers. You can see it in action on my website.\n\nAs well as being written in HTML, there is no run-time compilation of code; the first website simply consists of HTML files transmitted over the web. Due to its lack of complexity, it stood a good chance of surviving in the turbulent World Wide Web.\n\nThat\u2019s all well and good for a simple, static website. But websites created today are increasingly interactive. Many require a login and provide experiences that are tailored to the individual user. This type of dynamic website requires code to be executed somewhere.\n\nTraditionally, dynamic websites would execute such code on the server, and transmit a simple HTML file to the user. As far as the browser was concerned, this wasn\u2019t much different from the first website, as the additional complexity all happened before the document was sent to the browser.\n\nDoing it all in the browser\n\nIn 2003, the first single page interface was created at slashdotslash.com. A single page interface or single page app is a website where the page is created in the browser via JavaScript. The benefit of this technique is that, after the initial page load, subsequent interactions can happen instantly, or very quickly, as they all happen in the browser.\n\nWhen software runs on the client rather than the server, it is often referred to as a fat client. 
This means that the bulk of the processing happens on the client rather than the server (which can now be thin).\n\nA fat client is preferred over a thin client because:\n\n\n\tIt takes some processing requirements away from the server, thereby reducing the cost of servers (a thin server requires cheaper, or fewer servers).\n\tThey can often continue working offline, provided no server communication is required to complete tasks after initial load.\n\tThe latency of internet communications is bypassed after initial load, as interactions can appear near instantaneous when compared to waiting for a response from the server.\n\n\nBut there are also some big downsides, and these are often overlooked:\n\n\n\tThey can\u2019t work without JavaScript. Obviously JavaScript is a requirement for any client-side code execution. And as the UK Government Digital Service discovered, 1.1% of their visitors did not receive JavaScript enhancements. Of that 1.1%, 81% had JavaScript enabled, but their browsers failed to execute it (possibly due to dropping the internet connection). If you care about 1.1% of your visitors, you should care about the non-JavaScript experience for your website.\n\tThe browser needs to do all the processing. This means that the hardware it runs on needs to be fast. It also means that we require all clients to have largely the same capabilities and browser APIs.\n\tThe initial payload is often much larger, and nothing will be rendered for the user until this payload has been fully downloaded and executed. If the connection drops at any point, or the code fails to execute owing to a bug, we\u2019re left with the non-JavaScript experience.\n\tThey are not easily indexed as every crawler now needs to run JavaScript just to receive the content of the website.\n\n\nThese are not merely edge case issues to shirk off. The first three issues will affect some of your visitors; the fourth affects everyone, including you.\n\nWhat problem are we trying to solve?\n\nSo what can be done to address these issues? Whereas fat clients solve some inherent issues with the web, they seem to create as many problems. When attempting to resolve any issue, it\u2019s always good to try to uncover the original problem and work forwards from there. One of the best ways to frame a problem is as a user story. A user story considers the who, what and why of a need. Here\u2019s a template:\n\n\n\tAs a {who} I want {what} so that {why}\n\n\nI haven\u2019t got a specific project in mind, so let\u2019s refer to the who as user. Here\u2019s one that could explain the use of thick clients.\n\n\n\tAs a user I want the site to respond to my actions quickly so that I get immediate feedback when I do something.\n\n\nThis user story could probably apply to a great number of websites, but so could this:\n\n\n\tAs a user I want to get to the content quickly, so that I don\u2019t have to wait too long to find out what the site is all about or get the content I need.\n\n\nA better solution\n\nHow can we balance both these user needs? How can we have a website that loads fast, and also reacts fast? The solution is to have a thick server, that serves the complete document, and then a thick client, that manages subsequent actions and replaces parts of the page. What we\u2019re talking about here is simply progressive enhancement, but from the user\u2019s perspective.\n\nThe initial payload contains the entire document. At this point, all interactions would happen in a traditional way using links or form elements. 
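The enhanced layer that eventually takes over those links does not have to be anything elaborate. Here\u2019s a rough sketch, leaving feature tests aside for a moment, and with a made-up .content element standing in for whichever part of the page gets replaced: clicks on links to our own site are intercepted, the new page is fetched, and only the changed content is swapped in, with the href always there as the fallback.\n\ndocument.addEventListener('click', function (event) {\n var link = event.target;\n // only enhance plain links to our own site; everything else falls back\n if (link.tagName !== 'A' || link.host !== window.location.host) {\n return;\n }\n event.preventDefault();\n var request = new XMLHttpRequest();\n request.open('GET', link.href);\n request.onload = function () {\n var temporary = document.createElement('div');\n temporary.innerHTML = request.responseText;\n var incoming = temporary.querySelector('.content');\n if (!incoming) {\n // nothing to swap in, so fall back to a full page load\n window.location.href = link.href;\n return;\n }\n // replace only the part of the page that changed\n document.querySelector('.content').innerHTML = incoming.innerHTML;\n history.pushState({}, '', link.href);\n };\n request.send();\n});\n\nNone of this is wired up until the script arrives. 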
Then, once we\u2019ve downloaded the JavaScript (asynchronously, after load) we can enhance the experience with JavaScript interactions. If for whatever reason our JavaScript fails to download or execute, it\u2019s no biggie \u2013 we\u2019ve already got a fully functioning website. If an API that we need isn\u2019t available in this browser, it\u2019s not a problem. We just fall back to the basic experience.\n\nThis second point, of having some minimum requirement for an enhanced experience, is often referred to as cutting the mustard, first used in this sense by the BBC News team. Essentially it\u2019s an if statement like this:\n\nif('querySelector' in document\n && 'localStorage' in window\n && 'addEventListener' in window) {\n // bootstrap the JavaScript application\n }\n\nThis code states that the browser must support the following methods before downloading and executing the JavaScript:\n\n\n\tdocument.querySelector (can it find elements by CSS selectors)\n\twindow.localStorage (can it store strings)\n\twindow.addEventListener (can it bind to events in a standards-compliant way)\n\n\nThese three properties are what the BBC News team decided to test for, as they are present in their website\u2019s JavaScript. Each website will have its own requirements. The last method, window.addEventListener is in interesting one. Although it\u2019s simple to bind to events on IE8 and earlier, these browsers have very inconsistent support for standards. Making any JavaScript-heavy website work on IE8 and earlier is a painful exercise, and comes at a cost to all users on other browsers, as they\u2019ll download unnecessary code to patch support for IE.\n\n JavaScript API support by browser.\n\nI discovered that IE8 supports 12% of the current JavaScript APIs, while IE9 supports 16%, and IE10 51%. It seems, then, that IE10 could be the earliest version of IE that I\u2019d like to develop JavaScript for. That doesn\u2019t mean that users on browsers earlier than 10 can\u2019t use the website. On the contrary, they get the core experience, and because it\u2019s just HTML and CSS, it\u2019s much more likely to be bug-free, and could even provide a better experience than trying to run JavaScript in their browser. They receive the thin client experience.\n\nBy reducing the number of platforms that our enhanced JavaScript version supports, we can better focus our efforts on those platforms and offer an even greater experience to those users. But we can only do that if we use progressive enhancement. Otherwise our website would be completely broken for all other users.\n\nSo what we have is a thick server, capable of serving the entire website to our users, complete with all core functionality needed for our users to complete their tasks; and we have a thick client on supported browsers, which can bring an even greater experience to those users.\n\nThis is all transparent to users. They may notice that the website seems snappier on the new iPhone they received for Christmas than on the Windows 7 machine they got five years ago, but then they probably expected it to be faster on their iPhone anyway.\n\nIsn\u2019t this just more work?\n\nIt\u2019s true that making a thick server and a thick client is more work than just making one or the other. 
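Part of that extra work is the plumbing itself: once a browser cuts the mustard, you still have to fetch and boot the client-side code. A rough sketch of that step \u2013 the file name enhancements.js is made up \u2013 is nothing more than a dynamically injected script:\n\nif ('querySelector' in document\n && 'localStorage' in window\n && 'addEventListener' in window) {\n var script = document.createElement('script');\n script.src = '/js/enhancements.js';\n script.async = true;\n document.head.appendChild(script);\n}\n\nSo yes, there is a little more to build and to test. 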
But there are some big advantages:\n\n\n\tThe website works for everyone.\n\tYou can decide when users get the enhanced experience.\n\tYou can enhance features in an iterative (or agile) manner.\n\tWhen the website breaks, it doesn\u2019t break down.\n\tThe more you practise this approach, the quicker you will become.\n\n\nThe websites of Christmas present\n\nThe best way to discover websites using this technique of progressive enhancement is to disable JavaScript and see if the website breaks. I use the Web Developer extension, which is available for Chrome and Firefox. It lets me quickly disable JavaScript.\n\n Web Developer extension.\n\n24 ways works with and without JavaScript. Try using the menu icon to view the navigation. Without JavaScript, it\u2019s a jump link to the bottom of the page, but with JavaScript, the menu slides in from the right.\n\n 24 ways navigation with JavaScript disabled.\n\n 24 ways navigation with working JavaScript.\n\nGoogle search will also work without JavaScript. You won\u2019t get instant search results or any prerendering, because those are enhancements.\n\nFor a more app-like example, try using Twitter. Without JavaScript, it still works, and looks nearly identical. But when you load JavaScript, links open in modal windows and all pages are navigated much quicker, as only the content that has changed is loaded. You can read about how they achieved this in Twitter\u2019s blog posts Improving performance on twitter.com and Implementing pushState for twitter.com.\n\nUnfortunately Facebook doesn\u2019t use progressive enhancement, which not only means that the website doesn\u2019t work without JavaScript, but it takes longer to load. I tested it on WebPagetest and if you compare the load times of Twitter and Facebook, you\u2019ll notice that, despite putting similar content on the page, Facebook takes two and a half times longer to render the core content on the page.\n\n Facebook takes two and a half times longer to load than Twitter.\n\nWebsites of Christmas yet to come\n\nEvery project is different, and making a website that enjoys a long life, or serves a larger number of users may or may not be a high priority. But I hope I\u2019ve convinced you that it certainly is possible to look to the past and future simultaneously, and that there can be significant advantages to doing so.", "year": "2014", "author": "Josh Emerson", "author_slug": "joshemerson", "published": "2014-12-08T00:00:00+00:00", "url": "https://24ways.org/2014/websites-of-christmas-past-present-and-future/", "topic": "code"} {"rowid": 42, "title": "An Overview of SVG Sprite Creation Techniques", "contents": "SVG can be used as an icon system to replace icon fonts. The reasons why SVG makes for a superior icon system are numerous, but we won\u2019t be going over them in this article. 
If you don\u2019t use SVG icons and are interested in knowing why you may want to use them, I recommend you check out \u201cInline SVG vs Icon Fonts\u201d by Chris Coyier \u2013 it covers the most important aspects of both systems and compares them with each other to help you make a better decision about which system to choose.\n\nOnce you\u2019ve made the decision to use SVG instead of icon fonts, you\u2019ll need to think of the best way to optimise the delivery of your icons, and ways to make the creation and use of icons faster.\n\nJust like bitmaps, we can create image sprites with SVG \u2013 they don\u2019t look or work exactly alike, but the basic concept is pretty much the same.\n\nThere are several ways to create SVG sprites, and this article will give you an overview of three of them. While we\u2019re at it, we\u2019re going to take a look at some of the available tools used to automate sprite creation and fallback for us.\n\nPrerequisites\n\nThe content of this article assumes you are familiar with SVG. If you\u2019ve never worked with SVG before, you may want to look at some of the introductory tutorials covering SVG syntax, structure and embedding techniques. I recommend the following:\n\n\n\tSVG basics: Using SVG.\n\tStructure: Structuring, Grouping, and Referencing in SVG \u2014 The <g>, <use>, <defs> and <symbol> Elements. We\u2019ll mention <use> and <symbol> quite a bit in this article.\n\tEmbedding techniques: Styling and Animating SVGs with CSS. The article covers several topics, but the section linked focuses on embedding techniques.\n\tA compendium of SVG resources compiled by Chris Coyier \u2014 contains resources to almost every aspect of SVG you might be interested in.\n\n\nAnd if you\u2019re completely new to the concept of spriting, Chris Coyier\u2019s CSS Sprites explains all about them.\n\nAnother important SVG feature is the viewBox attribute. For some of the techniques, knowing your way around this attribute is not required, but it\u2019s definitely more useful if you understand \u2013 even if just vaguely \u2013 how it works. The last technique mentioned in the article requires that you do know the attribute\u2019s syntax and how to use it. To learn all about viewBox, you can refer to my blog post about SVG coordinate systems.\n\nWith the prerequisites in place, let\u2019s move on to spriting SVGs!\n\nBefore you sprite\u2026\n\nIn order to create an SVG sprite with your icons, you\u2019ll of course need to have these icons ready for use.\n\nSome spriting tools require that you place your icons in a folder to which a certain spriting process is to be applied. As such, for all of the upcoming sections we\u2019ll work on the assumption that our SVG icons are placed in a folder named SVG.\n\nEach icon is an individual .svg file.\n\nYou\u2019ll need to make sure each icon is well-prepared and optimised for use \u2013 make sure you\u2019ve cleaned up the code by running it through one of the optimisation tools or processes available (or doing it manually if it\u2019s not tedious).\n\nAfter prepping the icon files and placing them in a folder, we\u2019re ready to create our SVG sprite.\n\nHTML inline SVG sprites\n\nSince SVG is XML code, it can be embedded inline in an HTML document as a code island using the <svg> element. Chris Coyier wrote about this technique first on CSS-Tricks.\n\nThe embedded SVG will serve as a container for our icons and is going to be the actual sprite we\u2019re going to use. 
So we\u2019ll start by including the SVG in our document.\n\n<!DOCTYPE html>\n<!-- HTML document stuff -->\n\n<svg style=\"display:none;\">\n <!-- icons here -->\n</svg>\n\n<!-- other document stuff -->\n</html>\n\nNext, we\u2019re going to place the icons inside the <svg>. Each icon will be wrapped in a <symbol> element we can then reference and use elsewhere in the page using the SVG <use> element. The <symbol> element has many benefits, and we\u2019re using it because it allows us to define a symbol (which is a convenient markup for an icon) without rendering that symbol on the screen. The elements defined inside <symbol> will only be rendered when they are referenced \u2013 or called \u2013 by the <use> element.\n\nMoreover, <symbol> can have its own viewBox attribute, which makes it possible to control the positioning of its content inside its container at any time.\n\nBefore we move on, I\u2019d like to shed some light on the style=\"display:none;\" part of the snippet above. Without setting the display of the SVG to none, and even though its contents are not rendered on the page, the SVG will still take up space in the page, resulting in a big empty area. In order to avoid that, we\u2019re hiding the SVG entirely with CSS.\n\nNow, suppose we have a Twitter icon in the icons folder. twitter.svg might look something like this:\n\n<!-- twitter.svg -->\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\" \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<svg version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"32\" height=\"32\" viewBox=\"0 0 32 32\">\n<path d=\"M32 6.076c-1.177 0.522-2.443 0.875-3.771 1.034 1.355-0.813 2.396-2.099 2.887-3.632-1.269 0.752-2.674 1.299-4.169 1.593-1.198-1.276-2.904-2.073-4.792-2.073-3.626 0-6.565 2.939-6.565 6.565 0 0.515 0.058 1.016 0.17 1.496-5.456-0.274-10.294-2.888-13.532-6.86-0.565 0.97-0.889 2.097-0.889 3.301 0 2.278 1.159 4.287 2.921 5.465-1.076-0.034-2.088-0.329-2.974-0.821-0.001 0.027-0.001 0.055-0.001 0.083 0 3.181 2.263 5.834 5.266 6.437-0.551 0.15-1.131 0.23-1.73 0.23-0.423 0-0.834-0.041-1.235-0.118 0.835 2.608 3.26 4.506 6.133 4.559-2.247 1.761-5.078 2.81-8.154 2.81-0.53 0-1.052-0.031-1.566-0.092 2.905 1.863 6.356 2.95 10.064 2.95 12.076 0 18.679-10.004 18.679-18.68 0-0.285-0.006-0.568-0.019-0.849 1.283-0.926 2.396-2.082 3.276-3.398z\" fill=\"#000000\"></path>\n</svg>\n\nWe don\u2019t need the root svg element, so we\u2019ll strip the code and only keep the parts that make up the Twitter icon\u2019s shape, which in this example is just the <path> element.Let\u2019s drop that into the sprite container like so:\n\n<svg style=\"display:none;\">\n <symbol id=\"twitter-icon\" viewBox=\"0 0 32 32\">\n <path d=\"M32 6.076c-1.177 \u2026\" fill=\"#000000\"></path>\n </symbol>\n\n <!-- remaining icons here -->\n <symbol id=\"instagram-icon\" viewBox=\"0 0 32 32\">\n <!-- icon contents -->\n </symbol>\n\n <!-- etc. -->\n</svg>\n\nRepeat for the other icons.\n\nThe value of the <symbol> element\u2019s viewBox attribute depends on the size of the SVG. You don\u2019t need to know how the viewBox works to use it in this case. Its value is made up of four parts: the first two will almost always be \u201c0 0\u201d; the second two will be equal to the size of the icon. 
For example, our Twitter icon is 32px by 32px (see twitter.svg above), so the viewBox value is \u201c0 0 32 32\u201d.\n\nThat said, it is certainly useful to understand how the viewBox works \u2013 it can help you troubleshoot SVG sometimes and gives you better control over it, allowing you to scale, position and even crop SVGs manually without having to resort to an editor. My blog post explains all about the viewBox attribute and its related attributes.\n\nOnce you have your SVG sprite ready, you can display the icons anywhere on the page by referencing them using the SVG <use> element:\n\n<svg class=\"twitter-icon\">\n <use xlink:href=\"#twitter-icon\"></use>\n<svg>\n\nAnd that\u2019s all there is to it!\n\nHTML-inline SVG sprites are simple to create and use, but when you have a lot of icons (and the more icon sets you create) it can easily become daunting if you have to manually transfer the icons into the <svg>. Fortunately, you don\u2019t have to do that. Fabrice Weinberg created a Grunt plugin called grunt-svgstore which takes the icons in your SVG folder and generates the SVG sprites for you; all you have to do is just drop the sprites into your page and use the icons like we did earlier.\n\nThis technique works in all browsers supporting SVG. There seems to be a bug in Safari on iOS which causes the icons not to show up when the SVG sprite is defined at the bottom of the document after the <use> references to the icons, so it\u2019s safest to include the sprite before you use the icons until this bug is fixed.\n\nThis technique has one disadvantage: the SVG sprite cannot be cached. We\u2019re saving an extra HTTP request here but the browser cannot cache the image, so we aren\u2019t speeding up any subsequent page loads by inlining the SVG. There must be a better way \u2013 and there is.\n\nStyling the icons is possible, but getting deep into the styles becomes a bit harder owing to the nature of the contents of the <use> element \u2013 these contents are cloned into a shadow DOM, and hence selecting elements in CSS the traditional way is not possible. However, some techniques to work around that do exist, and give us slightly more styling flexibility. Animations work as expected.\n\nReferencing an external SVG sprite in HTML\n\nInstead of including the SVG inline in the document, you can reference the sprite and the icons inside it externally, taking advantage of fragment identifiers to select individual icons in the sprite.\n\nFor example, the above reference to the Twitter icon would look something like this instead:\n\n<svg class=\"twitter-icon\">\n <use xlink:href=\"path/to/icons.svg#twitter-icon\"></use>\n<svg>\n\n\nicons.svg is the name of the SVG file that contains all of our icons as symbols, and the fragment identifier #twitter-icon is the reference to the <symbol> wrapping the Twitter icon\u2019s contents. Very convenient, isn\u2019t it? The browser will request the sprite and then cache it, speeding up subsequent page loads. Win!\n\nThis technique also works in all browsers supporting SVG except Internet Explorer \u2013 not even IE9+ with SVG support permits this technique. No version of IE supports referencing an external SVG in <use>.\n\nFortunately (again), Jonathan Neil has created a plugin called svg4everybody which fills this gap in IE; you can reference an external sprite in <use> and also provide fallback for browsers that do not support SVG. However, it requires you to have the fallback images (PNG or JPEG, for example) available to do so. 
For details, refer to the plugin\u2019s Github repository\u2019s readme file.\n\nCSS inline SVG sprites\n\nAnother way to create an SVG sprite is by inlining the SVG icons in a style sheet using data URIs, and providing fallback for non-supporting browsers \u2013 also within the CSS.\n\nUsing this approach, we\u2019re turning the style sheet into the sprite that includes our icons. The style sheet is normally cached by the browser, so we have that concern out of the way.\n\nThis technique is put into practice in Filament Group\u2019s icon system approach, which uses their Grunticon plugin \u2013 or its sister Grumpicon web app \u2013 for generating the necessary CSS for the sprite. As such, we\u2019re going to cover this technique by following a workflow that uses one of these tools.\n\nAgain, we start with our icon SVG files. To focus on the actual spriting method and not on the tooling, I\u2019ll go over the process of sprite creation using the Grumpicon web app, instead of the Grunticon plugin. Both tools generate the same resources that we\u2019re going to use for the icon system. Whether you choose the web app or the Grunt set-up, after processing your SVG folder you\u2019re going to end up with the same set of resources that we\u2019ll be using throughout this section.\n\nThe first step is to drop your icons into the Grumpicon web app.\n\n Grumpicon homepage screenshot.\n\nThe application will then show you a preview of your icons, and a download button will allow you to download the generated files. These files will contain everything you need for your icon system \u2013 all that\u2019s left is for you to drop the generated files and code into your project as recommended and you\u2019ll have your sprite and icons ready to use anywhere you want in your page.\n\nGrumpicon generates five files and one folder in the downloaded package: a png folder containing PNG versions of your icons; three style sheets (that we\u2019ll go over briefly); a loader script file; and preview.html which is a live example showing you the other files in action.\n\nThe script in the loader goes into the <head> of your page. This script handles browser and feature detection, and requests the necessary style sheet depending on browser support for SVG and base64 data URIs. If you view the source code of the preview page, you can see exactly how the script is added.\n\nicons.data.svg.css is the style sheet that contains your icons \u2013 the sprite. The icons are embedded inline inside the style sheet using data URIs, and applied to elements of your choice as background images, using class names. For example:\n\n.twitter-icon{\n background-image: url('data:image/svg+xml;\u2026'); /* the ellipsis is where the icon\u2019s data would go */\n background-repeat: no-repeat;\n background-position: 50% 50%;\n height: 2em;\n width: 2em;\n /* etc. */\n}\n\nThen, you only have to apply the twitter-icon class name to an element in your HTML to apply the icon as a background to it:\n\n<span class=\"twitter-icon\"></span>\n\nAnd that\u2019s all you need to do to get an icon on the page.\n\nicons.data.svg.css, along with the other two style sheets and the png folder should be added to your CSS folder.\n\nicons.data.png.css is the style sheet the script will load in browsers that don\u2019t support SVG, such as IE8. Fallback for the inline SVG is provided as a base64-encoded PNG. 
For instance, the fallback for the Twitter icon from our example would look like so:\n\n.twitter-icon{\n background-image: url('data:image/png;base64;\u2026\u2019);\n /* etc. */\n}\n\nicons.fallback.css is the style sheet required for browsers that don\u2019t support base64-encoded PNGs \u2013 the PNG images are loaded as usual using the image\u2019s URL. The script will load this style sheet for IE6 and IE7, for example.\n\n.twitter-icon{\n background-image: url(png/twitter-icon.png);\n /* etc. */\n}\n\nThis technique is very different from the previous one. The sprite in this case is literally the style sheet, not an SVG container, and the icon usage is very similar to that of a CSS sprite \u2013 the icons are provided as background images.\n\nThis technique has advantages and disadvantages. For the sake of brevity, I won\u2019t go into further details, but the main limitations worth mentioning are that SVGs embedded as background images cannot be styled with CSS; and animations are restricted to those defined inside the <svg> for each icon. CSS interactions (such as hover effects) don\u2019t work either. Thus, to apply an effect for an icon that changes its color on hover, for example, you\u2019ll need to export a set of SVGs for each colour in order for Grumpicon to create matching fallback PNG images that can then be used for the animation.\n\nFor more details about the Grumpicon workflow, I recommend you check out \u201cA Designer\u2019s Guide to Grumpicon\u201d on Filament Group\u2019s website.\n\nUsing SVG fragment identifiers and views\n\nThis spriting technique is, again, different from the previous ones, and it is my personal favourite.\n\nSVG comes with a standard way of cropping to a specific area in a particular SVG image. If you\u2019ve ever worked with CSS sprites before then this definitely sounds familiar: it\u2019s almost exactly what we do with CSS sprites \u2013 the image containing all of the icons is cropped, so to speak, to show only the one icon that we want in the background positioning area of the element, using background size and positioning properties.\n\nInstead of using background properties, we\u2019ll be using SVG\u2019s viewBox attribute to crop our SVG to the specific icon we want.\n\nWhat I like about this technique is that it is more visual than the previous ones. Using this technique, the SVG sprite is treated like an actual image containing other images (the icons), instead of treating it as a piece of code containing other code.\n\nAgain, our SVG icons are placed inside a main SVG container that is going to be our SVG sprite. If you\u2019re working in a graphics editor, position or arrange your icons inside the canvas any way you want them to be, and then export the graphic as is. Of course, the less empty space there is in your SVG, the better.\n\nIn our example, the sprite contains three icons as shown in the following image. The sprite is open in Sketch. Notice how the SVG is just big enough to fit the icons inside it. It doesn\u2019t have to be like this, but it\u2019s cleaner this way.\n\n Screenshot showing the SVG sprite containing our icons.\n\nNow, suppose you want to display only the Instagram icon. Using the SVG viewBox attribute, we can crop the SVG to the icon. The Instagram icon is positioned at 64px along the positive x-axis, and zero pixels along the y-axis. 
It is also 32px by 32px in size.\n\n Screenshot showing the position (offset) of the Instagram icon inside the SVG sprite, and its size.\n\nUsing this information, we can specify the value of the viewBox as: 64 0 32 32. This area of the view box contains only the Instagram icon. 64 0 specifies the top-left corner of the view box area, and 32 32 specify its dimensions.\n\nNow, if we were to change the viewBox value on the SVG sprite to this value, only the Instagram icon will be visible inside the SVG viewport. Great. But how do we use this information to display the icon in our page using our sprite?\n\nSVG comes with a native way to link to portions or areas of an image using fragment identifiers. Fragment identifiers are used to link into a particular view area of an SVG document. Thus, using a fragment identifier and the boundaries of the area that we want (from the viewBox), we can link to that area and display it.\n\nFor example, if you want to display the icon from the sprite using an <img> tag, you can reference the icon in the sprite like so:\n\n<img src='uiIcons.svg#svgView(viewBox(64, 0, 32, 32))' alt=\"Settings icon\"/>\n\nThe fragment identifier in the snippet above (#svgView(viewBox(64, 0, 32, 32))) is the important part. This will result in only the Instagram icon\u2019s area of the sprite being displayed.\n\nThere is also another way to do this, using the SVG <view> element. The <view> element can be used to define a view area and then reference that area somewhere else. For example, to define the view box containing the Instagram icon, we can do the following:\n\n<view id='instagram-icon' viewBox='64 0 32 32' />\n\nThen, we can reference this view in our <img> element like this:\n\n<img src='sprite.svg#instagram-icon' alt=\"Instagram icon\" />\n\nThe best part about this technique \u2013 besides the ability to reference an external SVG and hence make use of browser caching \u2013 is that it allows us to use practically any SVG embedding technique and does not restrict us to specific tags.\n\nIt goes without saying that this feature can be used for more than just icon systems, owing to viewBox\u2019s power in controlling an SVG\u2019s viewable area.\n\nSVG fragment identifiers have decent browser support, but the technique is buggy in Safari: there is a bug that causes problems when loading a server SVG file and then using fragment identifiers with it. Bear Travis has documented the issue and a workaround.\n\nWhere to go from here\n\nPick the technique that works best for your project. Each technique has its own pros and cons, relating to convenience and maintainability, performance, and styling and scripting. Each technique also requires its own fallback mechanism.\n\nThe spriting techniques mentioned here are not the only techniques available. Other methods exist, such as SVG stacks, and others may surface in future, but these are the three main ones today.\n\nThe third technique using SVG\u2019s built-in viewBox features is my favourite, and with better browser support and fewer (ideally, no) bugs, I believe it is more likely to become the standard way to create and use SVG sprites. Fallback techniques can be created, of course, in one of many possible ways.\n\nDo you use SVG for your icon system? If so, which is your favourite technique? 
Do you know or have worked with other ways for creating SVG sprites?", "year": "2014", "author": "Sara Soueidan", "author_slug": "sarasoueidan", "published": "2014-12-16T00:00:00+00:00", "url": "https://24ways.org/2014/an-overview-of-svg-sprite-creation-techniques/", "topic": "code"} {"rowid": 46, "title": "Responsive Enhancement", "contents": "24 ways has been going strong for ten years. That\u2019s an aeon in internet timescales. Just think of all the changes we\u2019ve seen in that time: the rise of Ajax, the explosion of mobile devices, the unrecognisably changed landscape of front-end tooling.\n\nTools and technologies come and go, but one thing has remained constant for me over the past decade: progressive enhancement.\n\nProgressive enhancement isn\u2019t a technology. It\u2019s more like a way of thinking. Instead of thinking about the specifics of how a finished website might look, progressive enhancement encourages you to think about the fundamental meaning of what the website is providing. So instead of thinking of a website in terms of its ideal state in a modern browser on a nice widescreen device, progressive enhancement allows you to think about the core functionality in a more abstract way.\n\nOnce you\u2019ve figured out what the core functionality is \u2013 adding an item to a shopping cart, posting a message, sharing a photo \u2013 then you can enable that functionality in the simplest possible way. That usually means starting with good old-fashioned HTML. Links and forms are often all you need. Then, once you have the core functionality working in a basic way, you can start to enhance to make a progressively better experience for more modern browsers.\n\nThe advantage of working this way isn\u2019t just that your site will work in older browsers (albeit in a rudimentary way). It also ensures that if anything goes wrong in a modern browser, it won\u2019t be catastrophic.\n\nThere\u2019s a common misconception that progressive enhancement means that you\u2019ll spend your time dealing with older browsers, but in fact the opposite is true. Putting the basic functionality into place doesn\u2019t take very long at all. And once you\u2019ve done that, you\u2019re free to spend all your time experimenting with the latest and greatest browser technologies, secure in the knowledge that even if they aren\u2019t universally supported yet, that\u2019s OK: you\u2019ve already got your fallback in place.\n\nThe key to thinking about web development this way is realising that there isn\u2019t one final interface \u2013 there could be many, slightly different interfaces depending on the properties and capabilities of any particular user agent at any particular moment. And that\u2019s OK. Websites do not need to look the same in every browser.\n\nOnce you truly accept that, it\u2019s an immensely liberating idea. Instead of spending your time trying to make websites look the same in wildly varying browsers, you can spend your time making sure that the core functionality of what you build works everywhere, while providing the best possible experience for more capable browsers.\n\nAllow me to demonstrate with a simple example: navigation.\n\nStep one: core functionality\n\nLet\u2019s say we have a straightforward website about the twelve days of Christmas, with a page for each day. 
The core functionality is pretty clear:\n\n\n\tTo read about any particular day.\n\tTo browse from day to day.\n\n\nThe first is easily satisfied by marking up the text with headings, paragraphs and all the usual structural HTML elements. The second is satisfied by providing a list of good ol\u2019 hyperlinks.\n\nNow where\u2019s the best place to position this navigation list? Personally, I\u2019m a big fan of the jump-to-footer pattern. This puts the content first and the navigation second. At the top of the page there\u2019s a link with an href attribute pointing to the fragment identifier for the navigation.\n\n<body>\n <main role=\"main\" id=\"top\">\n <a href=\"#menu\" class=\"control\">Menu</a>\n ...\n </main>\n <nav role=\"navigation\" id=\"menu\">\n ...\n <a href=\"#top\" class=\"control\">Dismiss</a>\n </nav>\n</body>\n\nSee the footer-anchor pattern in action.\n\nBecause it\u2019s nothing more than a hyperlink, this works in just about every browser since the dawn of the web. Following hyperlinks is what web browsers were made to do (hence the name).\n\nStep two: layout as an enhancement\n\nThe footer-anchor pattern is a particularly neat solution on small-screen devices, like mobile phones. Once more screen real estate is available, I can use the magic of CSS to reposition the navigation above the content. I could use position: absolute, flexbox or, in this case, display: table.\n\n@media all and (min-width: 35em) {\n .control {\n display: none;\n }\n body {\n display: table;\n }\n [role=\"navigation\"] {\n display: table-caption;\n columns: 6 15em;\n }\n}\n\nSee the styles for wider screens in action\n\nStep three: enhance!\n\nRight. At this point I\u2019m providing core functionality to everyone, and I\u2019ve got nice responsive styles for wider screens. I could stop here, but the real advantage of progressive enhancement is that I don\u2019t have to. From here on, I can go crazy adding all sorts of fancy enhancements for modern browsers, without having to worry about providing a fallback for older browsers \u2013 the fallback is already in place.\n\nWhat I\u2019d really like is to provide a swish off-canvas pattern for small-screen devices. 
Here\u2019s my plan:\n\n\n\tPosition the navigation under the main content.\n\tListen out for the .control links being activated and intercept that action.\n\tWhen those links are activated, toggle a class of .active on the body.\n\tIf the .active class exists, slide the content out to reveal the navigation.\n\n\nHere\u2019s the CSS for positioning the content and navigation:\n\n@media all and (max-width: 35em) {\n [role=\"main\"] {\n transition: all .25s;\n width: 100%;\n position: absolute;\n z-index: 2;\n top: 0;\n right: 0;\n }\n [role=\"navigation\"] {\n width: 75%;\n position: absolute;\n z-index: 1;\n top: 0;\n right: 0;\n }\n .active [role=\"main\"] {\n transform: translateX(-75%);\n }\n}\n\nIn my JavaScript, I\u2019m going to listen out for any clicks on the .control links and toggle the .active class on the body accordingly:\n\n(function (win, doc) {\n 'use strict';\n var linkclass = 'control',\n activeclass = 'active',\n toggleClassName = function (element, toggleClass) {\n var reg = new RegExp('(s|^)' + toggleClass + '(s|$)');\n if (!element.className.match(reg)) {\n element.className += ' ' + toggleClass;\n } else {\n element.className = element.className.replace(reg, '');\n }\n },\n navListener = function (ev) {\n ev = ev || win.event;\n var target = ev.target || ev.srcElement;\n if (target.className.indexOf(linkclass) !== -1) {\n ev.preventDefault();\n toggleClassName(doc.body, activeclass);\n }\n };\n doc.addEventListener('click', navListener, false);\n}(this, this.document));\n\nI\u2019m all set, right? Not so fast!\n\nCutting the mustard\n\nI\u2019ve made the assumption that addEventListener will be available in my JavaScript. That isn\u2019t a safe assumption. That\u2019s because JavaScript \u2013 unlike HTML or CSS \u2013 isn\u2019t fault-tolerant. If you use an HTML element or attribute that a browser doesn\u2019t understand, or if you use a CSS selector, property or value that a browser doesn\u2019t understand, it\u2019s no big deal. The browser will just ignore what it doesn\u2019t understand: it won\u2019t throw an error, and it won\u2019t stop parsing the file.\n\nJavaScript is different. If you make an error in your JavaScript, or use a JavaScript method or property that a browser doesn\u2019t recognise, that browser will throw an error, and it will stop parsing the file. That\u2019s why it\u2019s important to test for features before using them in JavaScript. That\u2019s also why it isn\u2019t safe to rely on JavaScript for core functionality.\n\nIn my case, I need to test for the existence of addEventListener:\n\n(function (win, doc) {\n if (!win.addEventListener) {\n return;\n }\n ...\n}(this, this.document));\n\nThe good folk over at the BBC call this kind of feature test cutting the mustard. If a browser passes the test, it cuts the mustard, and so it gets the enhancements. If a browser doesn\u2019t cut the mustard, it doesn\u2019t get the enhancements. And that\u2019s fine because, remember, websites don\u2019t need to look the same in every browser.\n\nI want to make sure that my off-canvas styles are only going to apply to mustard-cutting browsers. 
I\u2019m going to use JavaScript to add a class of .cutsthemustard to the document:\n\n(function (win, doc) {\n if (!win.addEventListener) {\n return;\n }\n ...\n var enhanceclass = 'cutsthemustard';\n doc.documentElement.className += ' ' + enhanceclass;\n}(this, this.document));\n\nNow I can use the existence of that class name to adjust my CSS:\n\n@media all and (max-width: 35em) {\n .cutsthemustard [role=\"main\"] {\n transition: all .25s;\n width: 100%;\n position: absolute;\n z-index: 2;\n top: 0;\n right: 0;\n }\n .cutsthemustard [role=\"navigation\"] {\n width: 75%;\n position: absolute;\n z-index: 1;\n top: 0;\n right: 0;\n }\n .cutsthemustard .active [role=\"main\"] {\n transform: translateX(-75%);\n }\n}\n\nSee the enhanced mustard-cutting off-canvas navigation. Remember, this only applies to small screens so you might have to squish your browser window.\n\nEnhance all the things!\n\nThis was a relatively simple example, but it illustrates the thinking behind progressive enhancement: once you\u2019re providing the core functionality to everyone, you\u2019re free to go crazy with all the latest enhancements for modern browsers.\n\nProgressive enhancement doesn\u2019t mean you have to provide all the same functionality to everyone \u2013 quite the opposite. That\u2019s why it\u2019s key to figure out early on what the core functionality is, and make sure that it can be provided with the most basic technology. But from that point on, you\u2019re free to add many more features that aren\u2019t mission-critical. You should reward more capable browsers by giving them more of those features, such as animation in CSS, geolocation in JavaScript, and new input types in HTML.\n\nLike I said, progressive enhancement isn\u2019t a technology. It\u2019s a way of thinking. Once you start thinking this way, you\u2019ll be prepared for whatever the next ten years throws at us.", "year": "2014", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2014-12-09T00:00:00+00:00", "url": "https://24ways.org/2014/responsive-enhancement/", "topic": "code"} {"rowid": 49, "title": "Universal React", "contents": "One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications.\nMore generally we\u2019ve seen an even greater rise in the number of applications built primarily on the client side with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google\u2019s search bots). Additionally, we gain a performance improvement by being able to render from the server rather than having to wait for all the JavaScript to be loaded and executed.\nThe good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. 
The way in which these apps work is as follows:\n\nThe user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page.\nIn the background, the client-side JavaScript is executed and takes over the duty of rendering the page.\nThe next time a user clicks, rather than being sent to the server, the client-side app is in control.\nIf the user doesn\u2019t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again.\n\nThis means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that\u2019s clever enough handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem.\nIt\u2019s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you\u2019d like a refresher, the ReactJS docs are a good place to start.\n\u00a0Getting started\nWe\u2019re going to create a tiny ReactJS application that will work on the server and the client. First we\u2019ll need to create a new project and install some dependencies. In a new, blank directory, run:\nnpm init -y\nnpm install --save ejs express react react-router react-dom\nThat will create a new project and install our dependencies:\n\nejs is a templating engine that we\u2019ll use to render our HTML on the server.\nexpress is a small web framework we\u2019ll run our server on.\nreact-router is a popular routing solution for React so our app can fully support and respect URLs.\nreact-dom is a small React library used for rendering React components.\n\nWe\u2019re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too.\nnpm install --save-dev babel-cli babel-preset-es2015 babel-preset-react\nThen, create a .babelrc file that contains the following:\n{\n \"presets\": [\"es2015\", \"react\"]\n}\nWhat we\u2019ve done here is install Babel\u2019s command line interface (CLI) tool and configured it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We\u2019ll need the React transforms when we start writing JSX when working with React.\nCreating a server\nFor now, our ExpressJS server is pretty straightforward. All we\u2019ll do is render a view that says \u2018Hello World\u2019. Here\u2019s our server code:\nimport express from 'express';\nimport http from 'http';\n\nconst app = express();\n\napp.use(express.static('public'));\n\napp.set('view engine', 'ejs');\n\napp.get('*', (req, res) => {\n res.render('index');\n});\n\nconst server = http.createServer(app);\n\nserver.listen(3003);\nserver.on('listening', () => {\n console.log('Listening on 3003');\n});\nHere we\u2019re using ES6 modules, which I wrote about on 24 ways last year, if you\u2019d like a reminder. We tell the app to render the index view on any GET request (that\u2019s what app.get('*') means, the wildcard matches any route).\nWe now need to create the index view file, which Express expects to be defined in views/index.ejs:\n<!DOCTYPE html>\n<html>\n <head>\n <title>My App</title>\n </head>\n\n <body>\n Hello World\n </body>\n</html>\nFinally, we\u2019re ready to run the server. 
Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command:\n./node_modules/.bin/babel-node server.js\nAnd you should now be able to visit http://localhost:3003 and see \u2018Hello World\u2019 right there:\n\nBuilding the React app\nNow we\u2019ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs.\nFirstly, let\u2019s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view. We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so:\n<body>\n <%- markup %>\n</body>\nNext, we\u2019ll define the routes we want our app to have using React Router. For now we\u2019ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it\u2019s clearer to define them as an object. Here\u2019s what we\u2019re starting with:\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\nThese are just placed at the top of server.js, after the import statements. Later we\u2019ll move these into a separate file, but for now they are fine where they are.\nNotice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let\u2019s quickly define components/app.js and components/index.js. app.js looks like so:\nimport React from 'react';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n <div>\n <h2>Welcome to my App</h2>\n { this.props.children }\n </div>\n );\n }\n}\nWhen a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland:\nimport React from 'react';\n\nexport default class IndexComponent extends React.Component {\n render() {\n return (\n <div>\n <p>This is the index page</p>\n </div>\n );\n }\n}\nServer-side routing with React Router\nHead back into server.js, and firstly we\u2019ll need to add some new imports:\nimport React from 'react';\nimport { renderToString } from 'react-dom/server';\nimport { match, RoutingContext } from 'react-router';\n\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nThe ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It\u2019s this method that we\u2019ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we\u2019ll need to render. 
This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don\u2019t need to concern yourself about how this component works, so don\u2019t worry too much.\nNow for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes:\napp.get('*', (req, res) => {\n // routes is our object of React routes defined above\n match({ routes, location: req.url }, (err, redirectLocation, props) => {\n if (err) {\n // something went badly wrong, so 500 with a message\n res.status(500).send(err.message);\n } else if (redirectLocation) {\n // we matched a ReactRouter redirect, so redirect from the server\n res.redirect(302, redirectLocation.pathname + redirectLocation.search);\n } else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString(<RoutingContext {...props} />);\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n\n } else {\n // no route match, so 404. In a real app you might render a custom\n // 404 view here\n res.sendStatus(404);\n }\n });\n});\nWe call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occuring or a redirect (React Router has built in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we\u2019ve made it this far it means we found a matching component to render and we can use this code to render it:\n...\n} else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString(<RoutingContext {...props} />);\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n} else {\n ...\n}\nThe renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components.\nNote the {...props}, which is a neat bit of JSX syntax that spreads out our object into key value properties. To see this better, note the two pieces of JSX code below, both of which are equivalent:\n<MyComponent a=\"foo\" b=\"bar\" />\n\n// OR:\n\nconst props = { a: \"foo\", b: \"bar\" };\n<MyComponent {...props} />\nRunning the server again\nI know that felt like a lot of work, but the good news is that once you\u2019ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. To check, restart the server and head to http://localhost:3003 once more. You should see it all working!\n\nRefactoring and one more route\nBefore we move on to getting this code running on the client, let\u2019s add one more route and do some tidying up. First, move our routes object out into routes.js:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\n\nexport { routes };\nAnd then update server.js. 
You can remove the two component imports and replace them with:\nimport { routes } from './routes';\nFinally, let\u2019s add one more route for /about and links between them. Create components/about.js:\nimport React from 'react';\n\nexport default class AboutComponent extends React.Component {\n render() {\n return (\n <div>\n <p>A little bit about me.</p>\n </div>\n );\n }\n}\nAnd then you can add it to routes.js too:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nimport AboutComponent from './components/about';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n },\n {\n path: '/about',\n component: AboutComponent\n }\n ]\n}\n\nexport { routes };\nIf you now restart the server and head to http://localhost:3003/about you\u2019ll see the about page!\n\nFor the finishing touch we\u2019ll use the React Router link component to add some links between the pages. Edit components/app.js to look like so:\nimport React from 'react';\nimport { Link } from 'react-router';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n <div>\n <h2>Welcome to my App</h2>\n <ul>\n <li><Link to='/'>Home</Link></li>\n <li><Link to='/about'>About</Link></li>\n </ul>\n { this.props.children }\n </div>\n );\n }\n}\nYou can now click between the pages to navigate. However, every time we do so the requests hit the server. Now we\u2019re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience.\nClient-side rendering\nFirst, we\u2019re going to make a small change to views/index.ejs. React doesn\u2019t like rendering directly into the body and will give a warning when you do so. To prevent this we\u2019ll wrap our app in a div:\n<body>\n <div id=\"app\"><%- markup %></div>\n <script src=\"build.js\"></script>\n</body>\nI\u2019ve also added in a script tag to build.js, which is the file we\u2019ll generate containing all our client-side code.\nNext, create client-render.js. This is going to be the only bit of JavaScript that\u2019s exclusive to the client side. In it we need to pull in our routes and render them to the DOM.\nimport React from 'react';\nimport ReactDOM from 'react-dom';\nimport { Router } from 'react-router';\n\nimport { routes } from './routes';\n\nimport createBrowserHistory from 'history/lib/createBrowserHistory';\n\nReactDOM.render(\n <Router routes={routes} history={createBrowserHistory()} />,\n document.getElementById('app')\n)\nThe first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser\u2019s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we\u2019ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation.\nFinally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also tell ReactDOM where to render, the #app element.\nGenerating build.js\nWe\u2019re actually almost there! 
The final thing we need to do is generate our client side bundle. For this we\u2019re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We\u2019ll install it and babel-loader, a webpack plugin for transforming code through Babel.\nnpm install --save-dev webpack babel-loader\nTo run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code:\nvar path = require('path');\nmodule.exports = {\n entry: path.join(process.cwd(), 'client-render.js'),\n output: {\n path: './public/',\n filename: 'build.js'\n },\n module: {\n loaders: [\n {\n test: /.js$/,\n loader: 'babel'\n }\n ]\n }\n}\nNote first that this file can\u2019t be written in ES6 as it doesn\u2019t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location \u2013 if we just gave it the string \u2018client-render.js\u2019, webpack wouldn\u2019t be able to find it.\nNext, we tell webpack where to output our file, and here I\u2019m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first.\nNow we\u2019re ready to generate the bundle!\n./node_modules/.bin/webpack\nThis will take a fair few seconds to run (on my machine it\u2019s about seven or eight), but once it has it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you\u2019ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect!\nThe first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you\u2019re developing.\nConclusions\nFirst, if you\u2019d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions.\nNext, I want to stress that you shouldn\u2019t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a static site like the one we built today is worth its complexity, and you\u2019d be right. I used it as it\u2019s an easy example to work with but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it\u2019s a suitable infrastructure for you.\nWith that, all that\u2019s left for me to do is wish you a very merry Christmas and best of luck with your React applications!", "year": "2015", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2015-12-05T00:00:00+00:00", "url": "https://24ways.org/2015/universal-react/", "topic": "code"} {"rowid": 52, "title": "Git Rebasing: An Elfin Workshop Workflow", "contents": "This year Santa\u2019s helpers have been tasked with making a garland. It\u2019s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. 
Each elf has a specific sequence they\u2019re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn\u2019t. It\u2019s worse; it\u2019s about Git.)\nFor the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they\u2019re able to work independently, but towards the common goal of making a beautiful garland.\nAt first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they\u2019re going to need a system to change the beads out when mistakes are made in the chain.\nThe first common mistake is not looking to see what the latest chain is that\u2019s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It\u2019s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It\u2019s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. They can request a new copy by running the command\ngit pull --rebase=preserve\n(They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they\u2019ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available.\nThe next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I\u2019ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it\u2019s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands:\ngit checkout master\ngit checkout -b 1234_pattern-name\n1234 represents the work order number and pattern-name describes the pattern they\u2019re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland.\nTo combine their work with the main garland, the elves need to make a few decisions. If they\u2019re making a single strand, they issue the following commands:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\nTo share their work they publish the new version of the main garland to the workshop spool with the command git push origin master.\nSometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. 
This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order. \nTo update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command:\ngit reset --merge ORIG_HEAD\nThis works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf\u2019s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally.\nWith these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf\u2019s beads will be restrung on the tip of the main workshop garland.\nSometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf\u2019s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they\u2019re ready to close out their work order.\nOnce their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\ngit push origin master\nGenerally this process works well for the elves. Sometimes, though, when they\u2019re tired or bored or a little drunk on festive cheer, they realise there\u2019s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options.\nLet\u2019s pretend the elf has a sequence of five beads she\u2019s been working on. The work order says the pattern should be red-blue-red-blue-red.\n\nIf the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5.\n\nIf she\u2019s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5.\n\nIf one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence.\n\nUsing a tool that\u2019s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. 
A visualisation of this process is available.\nStart the garland surgery process with the command:\ngit rebase --interactive HEAD~4\nA new screen comes up with the following information (the oldest bead is on top):\npick c2e4877 Red bead\npick 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nThe elf adjusts the list, changing \u201cpick\u201d to \u201cedit\u201d next to the first blue bead:\npick c2e4877 Red bead\nedit 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nShe then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead.\n\nShe needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes.\nOnce she\u2019s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message \u201cRed bead \u2013 added\u201d so she can easily find it.\n\nThe garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue.\nThe new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. She opens up a new program to help resolve the conflict by running git mergetool.\n\nShe knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads.\n\nWith the conflict resolved, the elf saves her changes and quits the mergetool.\nBack at the command line, the elf checks the status of her work using the command git status.\nrebase in progress; onto 4a9cb9d\nYou are currently rebasing branch '2_RBRBR' on '4a9cb9d'.\n (all conflicts fixed: run \"git rebase --continue\")\n\nChanges to be committed:\n (use \"git reset HEAD <file>...\" to unstage)\n\n modified: beads.txt\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\n beads.txt.orig\nShe removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands:\ngit add beads.txt\ngit commit --message \"Blue bead -- resolved conflict\"\n\nWith the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files.\n\nShe incorporates the changes from the left and right column to ensure her bead sequence is correct.\n\nOnce the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland:\ngit add beads.txt\ngit commit --message \"Red bead -- resolved conflict\"\nand then runs the final verification steps in the rebase process (git rebase --continue).\n\nThe verification process runs through to the end, and the elf checks her work using the command git log --oneline.\n9269914 Red bead -- resolved conflict\n4916353 Blue bead -- resolved conflict\naef0d5c Red bead -- added\n9b5555e Blue bead\nc2e4877 Red bead\nShe knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). 
Reviewing the list she sees that the sequence is now correct.\nSometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It\u2019s made her more confident at restringing beads when she\u2019s found real mistakes. And she doesn\u2019t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don\u2019t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked on.\n\nOur lessons from the workshop:\n\nBy using rebase to update your branches, you avoid merge commits and keep a clean commit history.\nIf you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter --hard.\nIf you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch.\nIf you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. You will need to include how many commits back in time you want to review.", "year": "2015", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2015-12-07T00:00:00+00:00", "url": "https://24ways.org/2015/git-rebasing/", "topic": "code"} {"rowid": 54, "title": "Putting My Patterns through Their Paces", "contents": "Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.\nThe thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.\nMeet the Teaser\nHere\u2019s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)\nIn its simplest form, a teaser contains a headline, which links to an article:\n\nFairly straightforward, sure. But it\u2019s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types \u2013 some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. 
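In bare-bones markup, that might look something like this (a sketch, not the site\u2019s actual code \u2013 though it borrows the class names that appear in the markup later in this article):\n<div class=\"teaser\">\n <h1 class=\"article-title\"><a href=\"#\">Article Title</a></h1>\n</div>\n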
But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.\n\n Nearly every element visible on this page is built out of our generic \u201cteaser\u201d pattern.\n \nBut the teaser variation I\u2019d like to call out is the one that appears on The Toast\u2019s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline.\n\n The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.\n \nAnd this is, as it happens, the teaser variation that gave me pause. Back in the old days \u2013 you know, like six months ago \u2013 I probably would\u2019ve marked this module up to match the design. In other words, I would\u2019ve looked at the module\u2019s visual hierarchy (metadata up top, headline and content below) and written the following HTML:\n<div class=\"teaser\">\n <p class=\"article-byline\">By <a href=\"#\">Author Name</a></p>\n <a class=\"comment-count\" href=\"#\">126 <i>comments</i></a>\n <h1 class=\"article-title\"><a href=\"#\">Article Title</a></h1>\n <p class=\"teaser-excerpt\">Lorem ipsum dolor sit amet, consectetur\u2026</p>\n</div>\nBut then I caught myself, and realized this wasn\u2019t the best approach.\nMoving Beyond Layout\nSince I\u2019ve started working responsively, there\u2019s a question I work into every step of my design process. Whether I\u2019m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:\n\nWhat if someone doesn\u2019t browse the web like I do?\n\n\u2026Okay, that doesn\u2019t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it\u2019s been invaluable to so many aspects of my practice. If I\u2019m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I\u2019m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It\u2019s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.\nAnd that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it\u2019d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they\u2019re associated.\nThat\u2019s why I find it\u2019s helpful to begin designing a pattern\u2019s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I\u2019m trying to support. In other words, if someone\u2019s encountering my design without the CSS I\u2019ve written, what should their experience be?\nSo I took a step back, and came up with a different approach:\n<div class=\"teaser\">\n <h1 class=\"article-title\"><a href=\"#\">Article Title</a></h1>\n <h2 class=\"article-byline\">By <a href=\"#\">Author Name</a></h2>\n <p class=\"teaser-excerpt\">\n Lorem ipsum dolor sit amet, consectetur\u2026\n <a class=\"comment-count\" href=\"#\">126 <i>comments</i></a>\n </p>\n</div>\nMuch, much better. 
This felt like a better match for the content I was designing: the headline \u2013 easily most important element \u2013 was at the top, followed by the author\u2019s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that\u2019s why it\u2019s at the very end of the excerpt, the last element within our teaser. And with some light styling, we\u2019ve got a respectable-looking hierarchy in place:\n\nYeah, you\u2019re right \u2013 it\u2019s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we\u2019ll bolster the markup with an extra element around our title and byline:\n<div class=\"teaser\">\n <div class=\"teaser-hed\">\n <h1 class=\"article-title\"><a href=\"#\">Article Title</a></h1>\n <h2 class=\"article-byline\">By <a href=\"#\">Author Name</a></h2>\n </div>\n \u2026\n</div>\nWith that in place, we can use flexbox to tweak our layout, like so:\n.teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\nflex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.\n\nGetting closer! But as great as flexbox is, it doesn\u2019t do anything for elements outside our container, like our little comment count, which is, as you\u2019ve probably noticed, still stranded at the very bottom of our teaser.\nFlexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser\u2019s using a sufficiently modern version of flexbox. Here\u2019s the one we used:\nvar doc = document.body || document.documentElement;\nvar style = doc.style;\n\nif ( style.webkitFlexWrap == '' ||\n style.msFlexWrap == '' ||\n style.flexWrap == '' ) {\n doc.className += \" supports-flex\";\n}\nEagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser\u2019s design:\n.supports-flex .teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\n.supports-flex .teaser .comment-count {\n position: absolute;\n right: 0;\n top: 1.1em;\n}\nIf the supports-flex class is present, we can apply our flexbox layout to the title area, sure \u2013 but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don\u2019t meet our threshold for our advanced styles are left with an attractive design that matches our HTML\u2019s content hierarchy; but the ones that pass our test receive the finished, final design.\n\nAnd with that, our teaser\u2019s complete.\nDiving Into Device-Agnostic Design\nThis is, admittedly, a pretty modest application of flexbox. 
(For some truly next-level work, I\u2019d recommend Heydon Pickering\u2019s \u201cFlexbox Grid Finesse\u201d, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you\u2019d be right! In fact, it\u2019s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I\u2019ve found that thinking about my design as existing in broad experience tiers \u2013 in layers \u2013 is one of the best ways of designing for the modern web. And what\u2019s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.\nOpen video\n \n Even a simple search form can be conditionally enhanced, given a little layered thinking.\n \nThis more layered approach to interface design isn\u2019t a new one, mind you: it\u2019s been championed by everyone from Filament Group to the BBC. And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I\u2019ve found to practice responsive design. As Trent Walton once wrote,\n\nLike cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web\u2019s inherent variability.\n\nWe have a weird job, working on the web. We\u2019re designing for the latest mobile devices, sure, but we\u2019re increasingly aware that our definition of \u201csmartphone\u201d is much too narrow. Browsers have started appearing on our wrists and in our cars\u2019 dashboards, but much of the world\u2019s mobile data flows over sub-3G networks. After all, the web\u2019s evolution has never been charted along a straight line: it\u2019s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don\u2019t yet know about, a more device-agnostic, more layered design process can better prepare our patterns \u2013 and ourselves \u2013 for the future.\n(It won\u2019t help you get enough to eat at holiday parties, though.)", "year": "2015", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2015-12-10T00:00:00+00:00", "url": "https://24ways.org/2015/putting-my-patterns-through-their-paces/", "topic": "code"} {"rowid": 55, "title": "How Tabs Should Work", "contents": "Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They\u2019ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented.\nSo this post is my definition of how a tabbing system should work, and one approach of implementing that.\nBut\u2026 tabs are easy, right?\nI\u2019ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system:\nvar tabs = $('.tab').click(function () {\n tabs.hide().filter(this.hash).show();\n}).map(function () {\n return $(this.hash)[0];\n});\n\n$('.tab:first').click();\nSimple, right? Nearly fits in a tweet (ignoring the whole jQuery library\u2026). 
Still, it\u2019s riddled with problems that make it a far from perfect solution.\nRequirements: what makes the perfect tab?\n\nAll content is navigable and available without JavaScript (crawler-compatible and low JS-compatible).\nARIA roles.\nThe tabs are anchor links that:\n\nare clickable\nhave block layout\nhave their href pointing to the id of the panel element\nuse the correct cursor (i.e. cursor: pointer).\n\nSince tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open.\nRight-clicking (and Shift-clicking) doesn\u2019t cause the tab to be selected.\nNative browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place).\n\nThe first three points are all to do with the semantics of the markup and how the markup has been styled. I think it\u2019s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work.\nThe last three points are JavaScript problems. Let\u2019s investigate that.\nThe shitmus test\nLike a litmus test, here\u2019s a couple of quick ways you can tell if a tabbing system is poorly implemented:\n\nChange tab, then use the Back button (or keyboard shortcut) and it breaks\nThe tab isn\u2019t a link, so you can\u2019t open it in a new tab\n\nThese two basic things are, to me, the bare minimum that a tabbing system should have.\nWhy is this important?\nThe people who push their so-called native apps on users can\u2019t have more reasons why the web sucks. If something as basic as a tab doesn\u2019t work, obviously there\u2019s more ammo to push a closed native app or platform on your users.\nIf you\u2019re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn\u2019t mean don\u2019t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. </rant> :breath:\nURI fragment, absolute URL or query string?\nA URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel. A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content.\nThis decision really depends on the context of your tabbing system. For something like GitHub\u2019s tabs to view a pull request, it makes sense that the full URL changes.\nFor our problem though, I want to solve the issue when the page doesn\u2019t do a full URL update; that is, your regular run-of-the-mill tabbing system.\nI used to be from the school of using the hash to show the correct tab, but I\u2019ve recently been exploring whether the query string can be used. 
The biggest reason is that multiple hashes don\u2019t work, and comma-separated hash fragments don\u2019t make any sense to control multiple tabs (since it doesn\u2019t actually link to anything).\nFor this article, I\u2019ll keep focused on using a single tabbing system and a hash on the URL to control the tabs.\nMarkup\nI\u2019m going to assume subcontent, so my markup would look like this (yes, this is a cat demo\u2026):\n<ul class=\"tabs\">\n <li><a class=\"tab\" href=\"#dizzy\">Dizzy</a></li>\n <li><a class=\"tab\" href=\"#ninja\">Ninja</a></li>\n <li><a class=\"tab\" href=\"#missy\">Missy</a></li>\n</ul>\n\n<div id=\"dizzy\">\n <!-- panel content -->\n</div>\n<div id=\"ninja\">\n <!-- panel content -->\n</div>\n<div id=\"missy\">\n <!-- panel content -->\n</div>\nIt\u2019s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we\u2019ll see once we\u2019re on to writing the code.\nURL-driven tabbing systems\nInstead of making the code responsive to the user\u2019s input, we\u2019re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free.\nWith that in mind, let\u2019s start building up our code. I\u2019ll assume we have the jQuery library, but I\u2019ve also provided the full code working without a library (vanilla, if you will), but it depends on relatively new (polyfillable) tech like classList and dataset (which generally have IE10 and all other browser support).\nNote that I\u2019ll start with the simplest solution, and I\u2019ll refactor the code as I go along, like in places where I keep calling jQuery selectors.\nfunction show(id) {\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n $('.tab').removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n $('.panel').hide().filter(id).show();\n}\n\n$(window).on('hashchange', function () {\n show(location.hash);\n});\n\n// initialise by showing the first panel\nshow('#dizzy');\nThis works pretty well for such little code. Notice that we don\u2019t have any click handlers for the user and the Back button works right out of the box.\nHowever, there\u2019s a number of problems we need to fix:\n\nThe initialised tab is hard-coded to the first panel, rather than what\u2019s on the URL.\nIf there\u2019s no hash on the URL, all the panels are hidden (and thus broken).\nIf you scroll to the bottom of the example, you\u2019ll find a \u201ctop\u201d link; clicking that will break our tabbing system.\nI\u2019ve purposely made the page long, so that when you click on a tab, you\u2019ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying.\n\nFrom our criteria at the start of this post, we\u2019ve already solved items 4 and 5. Not a terrible start. 
Let\u2019s solve items 1 through 3 next.\nUsing the URL to initialise correctly and protect from breakage\nInstead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it\u2019s available.\nThe problem is: what if the hash on the URL isn\u2019t actually for a tab?\nThe solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won\u2019t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. The result of $('.tab') should be cached.\nSo now the code will collect all the tabs, then find the related panels from those tabs, and we\u2019ll use that list to double-check the values we give the show function (during initialisation, for instance).\n// collect all the tabs\nvar tabs = $('.tab');\n\n// get an array of the panel ids (from the anchor hash)\nvar targets = tabs.map(function () {\n return this.hash;\n}).get();\n\n// use those ids to get a jQuery collection of panels\nvar panels = $(targets.join(','));\n\nfunction show(id) {\n // if no value was given, let's take the first panel\n if (!id) {\n id = targets[0];\n }\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n tabs.removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n panels.hide().filter(id).show();\n}\n\n$(window).on('hashchange', function () {\n var hash = location.hash;\n if (targets.indexOf(hash) !== -1) {\n show(hash);\n }\n});\n\n// initialise\nshow(targets.indexOf(location.hash) !== -1 ? location.hash : '');\nThe core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab.\nThe second breakage we saw in the original demo was that clicking the \u201ctop\u201d link would break our tabs. This was due to the hashchange event firing and the code not validating the hash that was passed. Now when this happens, the panels don\u2019t break.\nSo far we\u2019ve got a tabbing system that:\n\nWorks without JavaScript.\nSupports right-click and Shift-click (and doesn\u2019t select in these cases).\nLoads the correct panel if you start with a hash.\nSupports native browser navigation.\nSupports the keyboard.\n\nThe only annoying problem we have now is that the page jumps when a tab is selected. That\u2019s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it\u2019s all for a good cause.\nRemoving the jump to tab\nYou\u2019d be forgiven for thinking you just need to hook a click handler and return false. It\u2019s what I started with. Only that\u2019s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support.\nThere may be another way to solve this, but what follows is the way I found \u2013 and it works. It\u2019s just a bit\u2026 hairy, as I said.\nWe\u2019re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. 
This change will mean the browser has nowhere to navigate to for that moment, and won\u2019t jump the page.\nThe change involves the following:\n\nAdd a click handler that removes the id from the target panel, and cache this in a target variable that we\u2019ll use later in hashchange (see point 4).\nIn the same click handler, set the location.hash to the current link\u2019s hash. This is important because it forces a hashchange event regardless of whether the URL actually changed, which prevents the tabs breaking (try it yourself by removing this line).\nFor each panel, put a backup copy of the id attribute in a data property (I\u2019ve called it old-id).\nWhen the hashchange event fires, if we have a target value, let\u2019s put the id back on the panel.\n\nThese changes result in this final code:\n/*global $*/\n\n// a temp value to cache *what* we're about to show\nvar target = null;\n\n// collect all the tabs\nvar tabs = $('.tab').on('click', function () {\n target = $(this.hash).removeAttr('id');\n\n // if the URL isn't going to change, then hashchange\n // event doesn't fire, so we trigger the update manually\n if (location.hash === this.hash) {\n // but this has to happen after the DOM update has\n // completed, so we wrap it in a setTimeout 0\n setTimeout(update, 0);\n }\n});\n\n// get an array of the panel ids (from the anchor hash)\nvar targets = tabs.map(function () {\n return this.hash;\n}).get();\n\n// use those ids to get a jQuery collection of panels\nvar panels = $(targets.join(',')).each(function () {\n // keep a copy of what the original el.id was\n $(this).data('old-id', this.id);\n});\n\nfunction update() {\n if (target) {\n target.attr('id', target.data('old-id'));\n target = null;\n }\n\n var hash = window.location.hash;\n if (targets.indexOf(hash) !== -1) {\n show(hash);\n }\n}\n\nfunction show(id) {\n // if no value was given, let's take the first panel\n if (!id) {\n id = targets[0];\n }\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n tabs.removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n panels.hide().filter(id).show();\n}\n\n$(window).on('hashchange', update);\n\n// initialise\nif (targets.indexOf(window.location.hash) !== -1) {\n update();\n} else {\n show();\n}\nThis version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add.\nARIA roles\nThis article on ARIA tabs made it very easy to get the tabbing system working as I wanted.\nThe tasks were simple:\n\nAdd role set to tab for the tabs, and tabpanel for the panels.\nSet aria-controls on the tabs to point to their related panel (by id).\nI use JavaScript to add tabindex=0 to all the tab elements.\nWhen I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false.\nWhen I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false.\n\nAnd that\u2019s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible.\nCheck out the final version (and the non-jQuery version as promised).\nIn conclusion\nThere\u2019s a lot of tab implementations out there, but there\u2019s an equal amount that break the browsing paradigm and the simple linkability of content. 
Clearly there\u2019s a special hell for those tab systems that don\u2019t even use links, but I think it\u2019s clear that even in something that\u2019s relatively simple, it\u2019s the small details that make or break the user experience.\nObviously there are corners I\u2019ve not explored, like when there\u2019s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that\u2019s for another year!", "year": "2015", "author": "Remy Sharp", "author_slug": "remysharp", "published": "2015-12-22T00:00:00+00:00", "url": "https://24ways.org/2015/how-tabs-should-work/", "topic": "code"} {"rowid": 63, "title": "Be Fluid with Your Design Skills: Build Your Own Sites", "contents": "Just five years ago in 2010, when we were all busy trying to surprise and delight, learning CSS3 and trying to get whole websites onto one page, we had a poster on our studio wall. It was entitled \u2018Designers Vs Developers\u2019, an infographic that showed us the differences between the men(!) who created websites. \nDesigners wore skinny jeans and used Macs and developers wore cargo pants and brought their own keyboards to work. We began to learn that designers and developers were not only doing completely different jobs but were completely different people in every way. This opinion was backed up by hundreds of memes, millions of tweets and pages of articles which used words like void and battle and versus.\nThankfully, things move quickly in this industry; the wide world of web design has moved on in the last five years. There are new devices, technologies, tools \u2013 and even a few women. Designers have been helped along by great apps, software, open source projects, conferences, and a community of people who, to my unending pride, love to share their knowledge and their work.\nSo the world has moved on, and if Miley Cyrus, Ruby Rose and Eliot Sumner are identifying as gender fluid (an identity which refers to a gender which varies over time or is a combination of identities), then I would like to come out as discipline fluid! \nOK, I will probably never identify as a developer, but I will identify as fluid! How can we be anything else in an industry that moves so quickly? That\u2019s how we should think of our skills, our interests and even our job titles. After all, Steve Jobs told us that \u201cDesign is not just what it looks like and feels like. Design is how it works.\u201d Sorry skinny-jean-wearing designers \u2013 this means we\u2019re all designing something together. And it\u2019s not just about knowing the right words to use: you have to know how it feels. How it feels when you make something work, when you fix that bug, when you make it work on IE.\nLike anything in life, things run smoothly when you make the effort to share experiences, empathise and deeply understand the needs of others. How can designers do that if they\u2019ve never built their own site? I\u2019m not talking the big stuff, I\u2019m talking about your portfolio site, your mate\u2019s business website, a website for that great idea you\u2019ve had. I\u2019m talking about doing it yourself to get an unique insight into how it feels.\nWe all know that designers and developers alike love an <ol>, so here it is.\nTen reasons designers should be fluid with their skills and build their own sites\n1. 
It\u2019s never been easier\nNow here\u2019s where the definition of \u2018build\u2019 is going to get a bit loose and people are going to get angry, but when I say it\u2019s never been easier I mean because of the existence of apps and software like WordPress, Squarespace, Tumblr, et al. It\u2019s easy to make something and get it out there into the world, and these are all gateway drugs to hard coding!\n2. You\u2019ll understand how it feels\nHow it feels to be so proud that something actually works that you momentarily don\u2019t notice if the kerning is off or the padding is inconsistent. How it feels to see your site appear when you\u2019ve redirected a URL. How it feels when you just can\u2019t work out where that one extra space is in a line of PHP that has killed your whole site.\n3. It makes you a designer\nNot a better designer, it makes you a designer when you are designing how things look and how they work. \n4. You learn about movement\nPhotoshop and Sketch just don\u2019t cut it yet. Until you see your site in a browser or your app on a phone, it\u2019s hard to imagine how it moves. Building your own sites shows you that it\u2019s not just about how the content looks on the screen, but how it moves, interacts and feels.\n5. You make techie friends\nAll the tutorials and forums in the world can\u2019t beat your network of techie friends. Since I started working in web design I have worked with, sat next to, and co-created with some of the greatest developers. Developers who\u2019ve shared their knowledge, encouraged me to build things, patiently explained HTML, CSS, servers, divs, web fonts, iOS development. There has been no void, no versus, very few battles; just people who share an interest and love of making things. \n6. You will own domain names\nWhen something is paid for, online and searchable then it\u2019s real and you\u2019ve got to put the work in. Buying domains has taught me how to stop procrastinating, but also about DNS, FTP, email, and how servers work.\n7. People will ask you to do things\u2028\nLearning about code and development opens a whole new world of design. When you put your own personal websites and projects out there people ask you to do more things. OK, so sometimes those things are \u201cMake me a website for free\u201d, but more often it\u2019s cool things like \u201cCome and speak at my conference\u201d, \u201cWrite an article for my magazine\u201d and \u201cCollaborate with me.\u201d\n8. The young people are coming!\nThey love typography, they love print, they love layout, but they\u2019ve known how to put a website together since they started their first blog aged five and they show me clever apps they\u2019ve knocked together over the weekend! They\u2019re new, they\u2019re fluid, and they\u2019re better than us!\n9. Your portfolio is your portfolio\nOK, it\u2019s an obvious one, but as designers our work is our CV, our legacy! We need to show our skill, our attention to detail and our creativity in the way we showcase our work. Building your portfolio is the best way to start building your own websites. (And please be that designer who\u2019s bothered to work out how to change the Squarespace favicon!) \n10. It keeps you fluid!\nBuilding your own websites is tough. You\u2019ll never be happy with it, you\u2019ll constantly be updating it to keep up with technology and fashion, and by the time you\u2019ve finished it you\u2019ll want to start all over again. 
Perfect for forcing you to stay up-to-date with what\u2019s going on in the industry.\n</ol>", "year": "2015", "author": "Ros Horner", "author_slug": "roshorner", "published": "2015-12-12T00:00:00+00:00", "url": "https://24ways.org/2015/be-fluid-with-your-design-skills-build-your-own-sites/", "topic": "code"} {"rowid": 64, "title": "Being Responsive to the Small Things", "contents": "It\u2019s that time of the year again to trim the tree with decorations. Or maybe a DOM tree?\nAny web page is made of HTML elements that lay themselves out in a tree structure. We start at the top and then have multiple branches with branches that branch out from there. \n\nTo decorate our tree, we use CSS to specify which branches should receive the tinsel we wish to adorn upon it. It\u2019s all so lovely.\nIn years past, this was rather straightforward. But these days, our trees need to be versatile. They need to be responsive!\nResponsive web design is pretty wonderful, isn\u2019t it? Based on our viewport, we can decide how elements on the page should change their appearance to accommodate various constraints using media queries.\nClearleft have a delightfully clean and responsive site\nAlas, it\u2019s not all sunshine, lollipops, and rainbows. \nWith complex layouts, we may have design chunks \u2014 let\u2019s call them components \u2014 that appear in different contexts. Each context may end up providing its own constraints on the design, both in its default state and in its possibly various responsive states.\n\nMedia queries, however, limit us to the context of the entire viewport, not individual containers on the page. For every container our component lives in, we need to specify how to rearrange things in that context. The more complex the system, the more contexts we need to write code for.\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\nEach new component and each new breakpoint just makes the entire system that much more difficult to maintain. \n@media (min-width: 600px) {\n .features > .component { }\n .grid > .component {}\n}\n\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\n\n@media (min-width: 1024px) {\n .features > .component { }\n}\nEnter container queries\nContainer queries, also known as element queries, allow you to specify conditional CSS based on the width (or maybe height) of the container that an element lives in. In doing so, you no longer have to consider the entire page and the interplay of all the elements within. \nWith container queries, you\u2019ll be able to consider the breakpoints of just the component you\u2019re designing. As a result, you end up specifying less code and the components you develop have fewer dependencies on the things around them. (I guess that makes your components more independent.)\nAwesome, right?\nThere\u2019s only one catch.\nBrowsers can\u2019t do container queries. There\u2019s not even an official specification for them yet. The Responsive Issues (n\u00e9e Images) Community Group is looking into solving how such a thing would actually work. \nSee, container queries are tricky from an implementation perspective. The contents of a container can affect the size of the container. Because of this, you end up with troublesome circular references. 
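To make that concrete, imagine a rule written in an invented container query syntax (purely illustrative \u2013 nothing like this shipped in any browser at the time):\n.child {\n width: 600px;\n}\n.container:media(min-width: 500px) > .child {\n width: 400px;\n}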
\nFor example, if the width of the container is under 500px then the width of the child element should be 600px, and if the width of the container is over 500px then the width of the child element should be 400px. \nCan you see the dilemma? When the container is under 500px, the child element resizes to 600px and suddenly the container is 600px. If the container is 600px, then the child element is 400px! And so on, forever. This is bad.\nI guess we should all just go home and sulk about how we just got a pile of socks when we really wanted the Millennium Falcon. \nOur saviour this Christmas: JavaScript\nThe three wise men \u2014 Tim Berners-Lee, H\u00e5kon Wium Lie, and Brendan Eich \u2014 brought us the gifts of HTML, CSS, and JavaScript. \nTo date, there are a handful of open source solutions to fill the gap until a browser implementation sees the light of day.\n\nElementary by Scott Jehl\nElementQuery by Tyson Matanich\nEQ.js by Sam Richards\nCSS Element Queries from Marcj\n\nUsing any of these can sometimes feel like your toy broke within ten minutes of unwrapping it.\nEach takes its own approach to specifying the query conditions. For example, Elementary, the smallest of the group, only supports min-width declarations made in a :before selector.\n.mod-foo:before {\n content: \"300 410 500\";\n}\nThe script loops through all the elements that you specify, reading the content property and then setting an attribute value on the HTML element, allowing you to use CSS to style that condition. \n.mod-foo[data-minwidth~=\"300\"] {\n background: blue;\n}\nTo get the script to run, you\u2019ll need to set up event handlers for when the page loads and for when it resizes. \nwindow.addEventListener( \"load\", window.elementary, false );\nwindow.addEventListener( \"resize\", window.elementary, false );\nThis works okay for static sites but breaks down on pages where elements can expand or contract, or where new content is dynamically inserted.\nIn the case of EQ.js, the implementation requires the creation of the breakpoints in the HTML. That means that you have implementation details in HTML, JavaScript, and CSS. (Although, with the JavaScript, once it\u2019s in the build system, it shouldn\u2019t ever be much of a concern unless you\u2019re tracking down a bug.)\nAnother problem you may run into is the use of content delivery networks (CDNs) or cross-origin security issues. The ElementQuery and CSS Element Queries libraries need to be able to read the CSS file. If you are unable to set up proper cross-origin resource sharing (CORS) headers, these libraries won\u2019t help.\nAt Shopify, for example, we had all of these problems. The admin that store owners use is very dynamic and the CSS and JavaScript were being loaded from a CDN that prevented the JavaScript from reading the CSS. \nTo go responsive, the team built their own solution \u2014 one similar to the other scripts above, in that it loops through elements and adds or removes classes (instead of data attributes) based on minimum or maximum width.\nThe caveat to this particular approach is that the declaration of breakpoints had to be done in JavaScript. 
\n elements = [\n { \u2018module\u2019: \u201c.carousel\u201d, \u201cclassName\u201d:\u2019alpha\u2019, minWidth: 768, maxWidth: 1024 },\n { \u2018module\u2019: \u201c.button\u201d, \u201cclassName\u201d:\u2019beta\u2019, minWidth: 768, maxWidth: 1024 } ,\n { \u2018module\u2019: \u201c.grid\u201d, \u201cclassName\u201d:\u2019cappa\u2019, minWidth: 768, maxWidth: 1024 }\n ]\nWith that done, the script then had to be set to run during various events such as inserting new content via Ajax calls. This sometimes reveals itself in flashes of unstyled breakpoints (FOUB). An unfortunate side effect but one largely imperceptible.\nUsing this approach, however, allowed the Shopify team to make the admin responsive really quickly. Each member of the team was able to tackle the responsive story for a particular component without much concern for how all the other components would react. \n\nEach element responds to its own breakpoint that would amount to dozens of breakpoints using traditional breakpoints. This approach allows for a truly fluid and adaptive interface for all screens.\nChristmas is over\nI wish I were the bearer of greater tidings and cheer. It\u2019s not all bad, though. We may one day see browsers implement container queries natively. At which point, we shall all rejoice!", "year": "2015", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2015-12-19T00:00:00+00:00", "url": "https://24ways.org/2015/being-responsive-to-the-small-things/", "topic": "code"} {"rowid": 65, "title": "The Accessibility Mindset", "contents": "Accessibility is often characterized as additional work, hard to learn and only affecting a small number of people. Those myths have no logical foundation and often stem from outdated information or misconceptions.\nIndeed, it is an additional skill set to acquire, quite like learning new JavaScript frameworks, CSS layout techniques or new HTML elements. But it isn\u2019t particularly harder to learn than those other skills.\nA World Health Organization (WHO) report on disabilities states that,\n\n[i]ncluding children, over a billion people (or about 15% of the world\u2019s population) were estimated to be living with disability.\n\nBeing disabled is not as unusual as one might think. Due to chronic health conditions and older people having a higher risk of disability, we are also currently paving the cowpath to an internet that we can still use in the future.\nAccessibility has a very close relationship with usability, and advancements in accessibility often yield improvements in the usability of a website. Websites are also more adaptable to users\u2019 needs when they are built in an accessible fashion.\nBeyond the bare minimum\nIn the time of table layouts, web developers could create code that passed validation rules but didn\u2019t adhere to the underlying semantic HTML model. We later developed best practices, like using lists for navigation, and with HTML5 we started to wrap those lists in nav elements. Working with accessibility standards is similar. The Web Content Accessibility Guidelines (WCAG) 2.0 can inform your decision to make websites accessible and can be used to test that you met the success criteria. What it can\u2019t do is measure how well you met them. 
\nW3C developed a long list of techniques that can be used to make your website accessible, but you might find yourself in a situation where you need to adapt those techniques to be the most usable solution for your particular problem.\nThe checkbox below is implemented in an accessible way: The input element has an id and the label associated with the checkbox refers to the input using the for attribute. The hover area is shown with a yellow background and a black dotted border:\nOpen video\n \nThe label is clickable and the checkbox has an accessible description. Job done, right? Not really. Take a look at the space between the label and the checkbox:\nOpen video\n \nThe gutter is created using a right margin which pushes the label to the right. Users would certainly expect this space to be clickable as well. The simple solution is to wrap the label around the checkbox and the text:\nOpen video\n \nYou can also set the label to display:block; to further increase the clickable area:\nOpen video\n \nAnd while we\u2019re at it, users might expect the whole box to be clickable anyway. Let\u2019s apply the CSS that was on a wrapping div element to the label directly:\nOpen video\n \nThe result enhances the usability of your form element tremendously for people with lower dexterity, using a voice mouse, or using touch interfaces. And we only used basic HTML and CSS techniques; no JavaScript was added and not one extra line of CSS.\n<form action=\"#\"> \n <label for=\"uniquecheckboxid\">\n <input type=\"checkbox\" name=\"checkbox\" id=\"uniquecheckboxid\" />\n Checkbox 4\n </label>\n</form> \nButton Example\nThe button below looks like a typical edit button: a pencil icon on a real button element. But if you are using a screen reader or a braille keyboard, the button is just read as \u201cbutton\u201d without any indication of what this button is for.\nOpen video\n A screen reader announcing a button. Contains audio.\nThe code snippet shows why the button is not properly announced:\n<button>\n <span class=\"icon icon-pencil\"></span>\n</button>\nAn icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users:\nOpen video\n A screen reader announcing a button with a title.\nHowever, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn\u2019t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts:\nThe mobile GitHub view with and without external fonts.\nEven if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn\u2019t load.\n<button>\n <img src=\"icon-pencil.svg\" alt=\"Edit\">\n</button>\nProviding always visible text is an alternative that can work well if you have the space. 
It also helps users understand the meaning of the icons.\n<button>\n <span class=\"icon icon-pencil\"></span> Edit\n</button>\nThis also reads just fine in screen readers:\nOpen video\n A screen reader announcing the revised button.\nClever usability enhancements don\u2019t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the \u201ccaptioned videos\u201d or \u201caudio description\u201d categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don\u2019t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study.\nMore information\nW3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out \u201cHow People with Disabilities Use the Web\u201d, there are \u201cTips for Getting Started\u201d for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way.\nConclusion\nYou can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don\u2019t notice when those fundamental aspects of good website design and development are done right. But they\u2019ll always know when they are implemented poorly.\nIf you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn\u2019t know they\u2019d need it:\nOpen video\n \nIn this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks \u201cWhat did she say?\u201d the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.", "year": "2015", "author": "Eric Eggert", "author_slug": "ericeggert", "published": "2015-12-17T00:00:00+00:00", "url": "https://24ways.org/2015/the-accessibility-mindset/", "topic": "code"} {"rowid": 68, "title": "Grid, Flexbox, Box Alignment: Our New System for Layout", "contents": "Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. However, in 2016 we should be seeing it in a new improved form ready for our use in browsers.\nGrid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work.\nAnother new layout method has emerged over the past few years in a more public and perhaps more painful way. 
Shipped prefixed in browsers, The flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported. \nOwing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks. \nIf there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years time we\u2019ll be easily able to date a website to circa 2015 by seeing <div class=\"row\"> or <div class=\"col-md-3\"> in the markup.\nIn this article, I\u2019m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them.\nTo see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here. \nRelationship\nItems only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It\u2019s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout.\nAt a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox:\nSee the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen.\n\nAnd in grid layout (requires a CSS grid-supporting browser):\nSee the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen.\n\nAlignment\nFull-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you\u2019d like to have under your tree this Christmas, then this is the box you\u2019ll want to unwrap first.\nThe box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods. That includes grid layout, but also other layout methods. Once implemented in browsers, this specification will give us true vertical centring of all the things.\nOur examples above achieved full-height columns because the default value of align-items is stretch. 
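\nAs a minimal sketch of the idea (the class names are illustrative rather than taken from the pens):\n.columns {\n display: flex;\n}\n.columns > div {\n flex: 1;\n /* no height is set: the default align-items: stretch on the flex container\n pulls every column up to the height of the tallest sibling */\n}\n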
The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property.\nThe examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. The other three images are aligned using different values of align-self.\nTake a look at an example in flexbox:\nSee the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen.\n\nAnd also in grid layout (requires a CSS grid-supporting browser):\nSee the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen.\n\nThe alignment properties used with CSS grid layout.\nFluid grids\nA cornerstone of responsive design is the concept of fluid grids.\n\n\u201c[\u2026]every aspect of the grid\u2014and the elements laid upon it\u2014can be expressed as a proportion relative to its container.\u201d\n\u2014Ethan Marcotte, \u201cFluid Grids\u201d\n\nThe method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element.\nh1 {\n margin-left: 14.575%; /*\u00a0144px / 988px = 0.14575\u00a0*/\n width: 70.85%; /*\u00a0700px / 988px = 0.7085\u00a0*/\n}\nIn more recent years, we\u2019ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple.\nThe most basic of flexbox demos shows this fluidity in action. The justify-content property \u2013 another property defined in the box alignment module \u2013 can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion.\nIn this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between.\nSee the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen.\n\nFor true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion.\nThe flexbox flex property is a shorthand for:\n\nflex-grow\nflex-shrink\nflex-basis\n\nThe flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value.\n\nflex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis.\nflex: 0 0 200px: a box that will be 200px and cannot grow or shrink.\nflex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller.\n\nIn this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis.\nSee the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.\n\nWhat I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. 
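\nSketched out, the change might look something like this (the selectors are illustrative, not taken from the demo):\n/* starting point: every box grows and shrinks equally from a 100px basis */\n.boxes > li { flex: 1 1 100px; }\n/* the adjustment: three shares of spare space for most boxes, one share for the portrait box */\n.boxes > li { flex: 3 1 100px; }\n.boxes > .portrait { flex: 1 1 100px; }\n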
By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out.\nIf I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. You can see what happens in this demo:\nSee the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.\n\nOnce you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes.\nIn this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr.\ngrid-template-columns: 1fr 2fr 2fr 2fr;\nSee the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen.\n\nThe four-track grid.\nSeparation of concerns\nMy younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup.\nBrowsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another.\nHere is a snippet from the Bootstrap grid examples \u2013 two columns with two nested columns:\n<div class=\"row\">\n <div class=\"col-md-8\">\n .col-md-8\n <div class=\"row\">\n <div class=\"col-md-6\">\n .col-md-6\n </div>\n <div class=\"col-md-6\">\n .col-md-6\n </div>\n </div>\n </div>\n <div class=\"col-md-4\">\n .col-md-4\n </div>\n</div>\nNot a million miles away from something I might have written in 1999.\n<table>\n <tr>\n <td class=\"col-md-8\">\n .col-md-8\n <table>\n <tr>\n <td class=\"col-md-6\">\n .col-md-6\n </td>\n <td class=\"col-md-6\">\n .col-md-6\n </td>\n </tr>\n </table>\n </td>\n <td class=\"col-md-4\">\n .col-md-4\n </td>\n </tr>\n</table>\nGrid and flexbox layouts do not need to be described in markup. The layout description happens entirely in the CSS, meaning that elements can be moved around from within the presentation layer.\nFlexbox gives us the ability to reverse the flow of elements, but also to set the order of elements with the order property. This is demonstrated here, where Widget the cat is in position 1 in the source, but I have used the order property to display him after the things that are currently unimpressive to him.\nSee the Pen Flexbox: order by rachelandrew (@rachelandrew) on CodePen.\n\nGrid layout takes this a step further. Where flexbox lets us set the order of items in a single dimension, grid layout gives us the ability to position things in two dimensions: both rows and columns. 
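\nA rough sketch of that two-dimensional, line-based placement (the track numbers and class name are illustrative):\n.widget {\n grid-column: 3 / 5; /* start at column line 3, end at column line 5 */\n grid-row: 1 / 2;\n}\n@media (min-width: 800px) {\n .widget {\n grid-column: 1 / 2; /* repositioned with no change to the markup */\n grid-row: 2 / 4;\n }\n}\n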
Defined in the CSS, this positioning can be changed at any breakpoint without needing additional markup. Compare the source order with the display order in this example (requires a CSS grid-supporting browser):\nSee the Pen Grid positioning in two dimensions by rachelandrew (@rachelandrew) on CodePen.\n\nLaying out our items in two dimensions using grid layout.\nAs these demos show, a straightforward way to decide if you should use grid layout or flexbox is whether you want to position items in one dimension or two. If two, you want grid layout.\nA note on accessibility and reordering\nThe issues arising from this powerful ability to change the way items are ordered visually from how they appear in the source have been the subject of much discussion. The current flexbox editor\u2019s draft states\n\n\u201cAuthors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.\u201d\n\u2014CSS Flexible Box Layout Module Level 1, Editor\u2019s Draft (3 December 2015)\n\nThis is to ensure that non-visual user agents (a screen reader, for example) can rely on the document source order as being correct. Take care when reordering that you do so from the basis of a sound document that makes sense in terms of source order. Avoid using visual order to convey meaning.\nAutomatic content placement with rules\nHaving control over the order of items, or placing items on a predefined grid, is nice. However, we can often do that already with one method or another and we have frameworks and tools to help us. Tools such as Susy mean we can even get away from stuffing our markup full of grid classes. However, our new layout methods give us some interesting new possibilities.\nSomething that is useful to be able to do when dealing with content coming out of a CMS or being pulled from some other source, is to define a bunch of rules and then say, \u201cDisplay this content, using these rules.\u201d\nAs an example of this, I will leave you with a Christmas poem displayed in a document alongside Widget the cat and some of the decorations that are bringing him no Christmas cheer whatsoever.\nThe poem is displayed first in the source as a set of paragraphs. I\u2019ve added a class identifying each of the four paragraphs but they are displayed in the source as one text. Below that are all my images, some landscape and some portrait; I\u2019ve added a class of landscape to the landscape ones.\nThe mobile-first grid is a single column and I use line-based placement to explicitly position my poem paragraphs. The grid layout auto-placement rules then take over and place the images into the empty cells left in the grid.\nAt wider screen widths, I declare a four-track grid, and position my poem around the grid, keeping it in a readable order.\nI also add rules to my landscape class, stating that these items should span two tracks. Once again the grid layout auto-placement rules position the rest of my images without my needing to position them. You will see that grid layout takes items out of source order to fill gaps in the grid. It does this because I have set the property grid-auto-flow to dense. 
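\nThe wide-screen rules boil down to something like this sketch (the selector names are invented; the real demo is linked below):\n@media (min-width: 800px) {\n .poem-grid {\n display: grid;\n grid-template-columns: 1fr 1fr 1fr 1fr;\n grid-auto-flow: dense; /* allow auto-placed images to backfill gaps */\n }\n .landscape {\n grid-column: span 2;\n }\n}\n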
The default is sparse meaning that grid will not attempt this backfilling behaviour.\nTake a look and play around with the full demo (requires a CSS grid layout-supporting browser):\nSee the Pen Grid auto-flow with rules by rachelandrew (@rachelandrew) on CodePen.\n\nThe final automatic placement example.\nMy wish for 2016\nI really hope that in 2016, we will see CSS grid layout finally emerge from behind browser flags, so that we can start to use these features in production \u2014 that we can start to move away from using the wrong tools for the job.\nHowever, I also hope that we\u2019ll see developers fully embracing these tools as the new system that they are. I want to see people exploring the possibilities they give us, rather than trying to get them to behave like the grid systems of 2015. As you discover these new modules, treat them as the new paradigm that they are, get creative with them. And, as you find the edges of possibility with them, take that feedback to the CSS Working Group. Help improve the layout systems that will shape the look of the future web.\nSome further reading\n\nI maintain a site of grid layout examples and resources at Grid by Example.\nThe three CSS specifications I\u2019ve discussed can be found as editor\u2019s drafts: CSS grid, flexbox, box alignment.\nI wrote about the last three years of my interest in CSS grid layout, which gives something of a history of the specification.\nMore examples of box alignment and grid layout.\nMy presentation at Fronteers earlier this year, in which I explain more about these concepts.", "year": "2015", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2015-12-15T00:00:00+00:00", "url": "https://24ways.org/2015/grid-flexbox-box-alignment-our-new-system-for-layout/", "topic": "code"} {"rowid": 70, "title": "Bringing Your Code to the Streets", "contents": "\u2014 or How to Be a Street VJ\nOur amazing world of web code is escaping out of the browser at an alarming rate and appearing in every aspect of the environment around us. Over the past few years we\u2019ve already seen JavaScript used server-side, hardware coded with JavaScript, a rise of native style and desktop apps created with HTML, CSS and JavaScript, and even virtual reality (VR) is getting its fair share of front-end goodness.\nYou can go ahead and play with JavaScript-powered hardware such as the Tessel or the Espruino to name a couple. Just check out the Tessel project page to see JavaScript in the world of coffee roasting or sleep tracking your pet. With the rise of the internet of things, JavaScript can be seen collecting information on flooding among other things. And if that\u2019s not enough \u2018outside the browser\u2019 implementations, Node.js servers can even be found in aircraft!\nI previously mentioned VR and with three.js\u2019s extra StereoEffect.js module it\u2019s relatively simple to get browser 3D goodness to be Google Cardboard-ready, and thus set the stage for all things JavaScript and VR. It\u2019s been pretty popular in the art world too, with interactive works such as Seb Lee-Delisle\u2019s Lunar Trails installation, featuring the old arcade game Lunar Lander, which you can now play in your browser while others watch (it is the web after all). The Science Museum in London held Chrome Web Lab, an interactive exhibition featuring five experiments, showcasing the magic of the web. 
And it\u2019s not even the connectivity of the web that\u2019s being showcased; we can even take things offline and use web code for amazing things, such as fighting Ebola.\nOne thing is for sure, JavaScript is awesome. Hell, if you believe those telly programs (as we all do), JavaScript can even take down the stock market, purely through the witchcraft of canvas! Go JavaScript!\nNow it\u2019s our turn\nSo I wanted to create a little project influenced by this theme, and as it\u2019s Christmas, take it to the streets for a little bit of party fun! Something that could take code anywhere. Here\u2019s how I made a portable visual projection pack, a piece of video mixing software and created some web-coded street art.\nStep one: The equipment\nYou will need:\n\nOne laptop: with HDMI output and a modern browser installed, such as Google Chrome.\nOne battery-powered mini projector: I\u2019ve used a Texas Instruments DLP; for its 120 lumens it was the best cost-to-lumens ratio I could find.\nOne MIDI controller (optional): mine is an ICON iDJ as it suits mixing visuals. However, there is more affordable hardware on the market such as an Akai LPD8 or a Korg nanoPAD2. As you\u2019ll see in the article, this is optional as it can be emulated within the software.\nA case to carry it all around in.\n\n\nStep two: The software\nThe projected visuals, I imagined, could be anything you can create within a browser, whether that be simple HTML and CSS, images, videos, SVG or canvas. The only requirement I have is that they move or change with sound and that I can mix any one visual into another.\nYou may remember a couple of years ago I created a demo on this very site, allowing audio-triggered visuals from the ambient sounds your device mic was picking up. That was a great starting point \u2013 I used that exact method to pick up the audio and thus the first requirement was complete. If you want to see some more examples of visuals I\u2019ve put together for this, there\u2019s a showcase on CodePen.\nThe second requirement took a little more thought. I needed two screens, which could at any point show any of the visuals I had coded, but could be mixed from one into the other and back again. So let\u2019s start with two divs, both absolutely positioned so they\u2019re on top of each other, but at the start the second screen\u2019s opacity is set to zero.\nNow all we need is a slider, which when moved from one side to the other slowly sets the second screen\u2019s opacity to 1, thereby fading it in.\nSee the Pen Mixing Screens (Software Version) by Rumyra (@Rumyra) on CodePen.\nMixing Screens (CodePen)\n\nAs you saw above, I have a MIDI controller and although the software method works great, I\u2019d quite like to make use of this nifty piece of kit. That\u2019s easily done with the Web MIDI API. 
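\nFor reference, the software-only crossfade in the pen above boils down to something like this sketch (the element IDs are invented here):\nvar fader = document.querySelector('#crossfader'); // an input[type=range] running from 0 to 1\nvar screenTwo = document.querySelector('#screen-two');\nfader.addEventListener('input', function () {\n // fade the second screen in over the first as the slider moves\n screenTwo.style.opacity = fader.value;\n});\nThe Web MIDI API lets us swap that on-screen slider for the hardware cross fader.\n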
All I need to do is call it, and when I move one of the sliders on the controller (I\u2019ve allocated the big cross fader in the middle for this), pick up on the change of value and use that to control the opacity instead.\nvar midi, data;\n// start talking to MIDI controller\nif (navigator.requestMIDIAccess) {\n navigator.requestMIDIAccess({\n sysex: false\n }).then(onMIDISuccess, onMIDIFailure);\n} else {\n alert(\u201cNo MIDI support in your browser.\u201d);\n}\n\n// on success\nfunction onMIDISuccess(midiData) {\n // this is all our MIDI data\n midi = midiData;\n\n var allInputs = midi.allInputs.values();\n // loop over all available inputs and listen for any MIDI input\n for (var input = allInputs.next(); input && !input.done; input = allInputs.next()) {\n // when a MIDI value is received call the onMIDIMessage function\n input.value.onmidimessage = onMIDIMessage;\n }\n}\n\nfunction onMIDIMessage(message) {\n // data comes in the form [command/channel, note, velocity]\n data = message.data;\n\n // Opacity change for screen. The cross fader values are [176, 8, {0-127}]\n if ( (data[0] === 176) && (data[1] === 8) ) {\n // this value will change as the fader is moved\n var opacity = data[2]/127;\n screenTwo.style.opacity = opacity;\n }\n}\n\nThe final code was slightly more complicated than this, as I decided to switch the two screens based on the frequencies of the sound that was playing, and use the cross fader to depict the frequency threshold value. This meant they flickered in and out of each other, rather than just faded. There\u2019s a very rough-and-ready first version of the software on GitHub.\nPhew, Great! Now we need to get all this to the streets!\nStep three: Portable kit\nDid you notice how I mentioned a case to carry it all around in? I wanted the case to be morphable, so I could use the equipment from it too, a sort of bag-to-usherette-tray-type affair. Well, I had an unused laptop bag\u2026\n\nI strengthened it with some MDF, so when I opened the bag it would hold like a tray where the laptop and MIDI controller would sit. The projector was Velcroed to the external pocket of the bag, so when it was a tray it would project from underneath. I added two durable straps, one for my shoulders and one round my waist, both attached to the bag itself. There was a lot of cutting and trimming. As it was a laptop bag it was pretty thick to start and sewing was tricky. However, I only broke one sewing machine needle; I\u2019ve been known to break more working with leather, so I figured I was doing well. By the way, you can actually buy usherette trays, but I just couldn\u2019t resist hacking my own :)\nStep four: Take to the streets\nFirst, make sure everything is charged \u2013 everything \u2013 a lot! The laptop has to power both the MIDI controller and the projector, and although I have a mobile phone battery booster pack, that\u2019ll only charge the projector should it run out. I estimated I could get a good hour of visual artistry before I needed to worry, though.\nI had a couple of ideas about time of day and location. Here in the UK at this time of year, it gets dark around half past four, so I could easily head out in a city around 5pm and it would be dark enough for the projections to be seen pretty well. I chose Bristol, around the waterfront, as there were some interesting locations to try it out in. 
The best was Millennium Square: busy but not crowded and plenty of surfaces to try projecting on to.\nMy first time out with the portable audio/visual pack (PAVP as it will now be named) was brilliant. I played music and projected visuals, like a one-woman band of A/V!\n\n\nYou might be thinking what the point of this was, besides, of course, it being a bit of fun. Well, this project got me to look at canvas and SVG more closely. The Web MIDI API was really interesting; MIDI as a data format has some great practical uses. I think without our side projects we may not have all these wonderful uses for our everyday code. Not only do they remind us coding can, and should, be fun, they also help us learn and grow as makers.\nMy favourite part? When I was projecting into a water feature in Millennium Square. For those who are familiar, you\u2019ll know it\u2019s like a wall of water so it produced a superb effect. I drew quite a crowd and a kid came to stand next to me and all I could hear him say with enthusiasm was, \u2018Oh wow! That\u2019s so cool!\u2019\nYes\u2026 yes, kid, it was cool. Making things with code is cool.\nMassive thanks to the lovely Drew McLellan for his incredibly well-directed photography, and also Simon Johnson who took a great hand in perfecting the kit while it was attached.", "year": "2015", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2015-12-06T00:00:00+00:00", "url": "https://24ways.org/2015/bringing-your-code-to-the-streets/", "topic": "code"} {"rowid": 71, "title": "Upping Your Web Security Game", "contents": "When I started working in web security fifteen years ago, web development looked very different. The few non-static web applications were built using a waterfall process and shipped quarterly at best, making it possible to add security audits before every release; applications were deployed exclusively on in-house servers, allowing Info Sec to inspect their configuration and setup; and the few third-party components used came from a small set of well-known and trusted providers. And yet, even with these favourable conditions, security teams were quickly overwhelmed and called for developers to build security in.\nIf the web security game was hard to win before, it\u2019s doomed to fail now. In today\u2019s web development, every other page is an application, accepting inputs and private data from users; software is built continuously, designed to eliminate manual gates, including security gates; infrastructure is code, with servers spawned with little effort and even less security scrutiny; and most of the code in a typical application is third-party code, pulled in through open source repositories with rarely a glance at who provided them.\nSecurity teams, when they exist at all, cannot solve this problem. They are vastly outnumbered by developers, and cannot keep up with the application\u2019s pace of change. For us to have a shot at making the web secure, we must bring security into the core. We need to give it no less attention than that we give browser compatibility, mobile design or web page load times. More broadly, we should see security as an aspect of quality, expecting both ourselves and our peers to address it, and taking pride when we do it well.\nWhere To Start?\nEmbracing security isn\u2019t something you do overnight.\nA good place to start is by reviewing things you\u2019re already doing \u2013 and trying to make them more secure. 
Here are three concrete steps you can take to get going.\nHTTPS\nThreats begin when your system interacts with the outside world, which often means HTTP. As is, HTTP is painfully insecure, allowing attackers to easily steal and manipulate data going to or from the server. HTTPS adds a layer of crypto that ensures the parties know who they\u2019re talking to, and that the information exchanged can be neither modified nor sniffed.\nHTTPS is relevant to any site. If your non-HTTPS site holds opinions, reading it may get your users in trouble with employers or governments. If your users believe what you say, attackers can modify your non-HTTPS to take advantage of and abuse that trust. If you want to use new browser technologies like HTTP2 and service workers, your site will need to be HTTPS. And if you want to be discovered on the web, using HTTPS can help your Google ranking. For more details on why I think you should make the switch to HTTPS, check out this post, these slides and this video.\nUsing HTTPS is becoming easier and cheaper. Here are a few free tools that can help:\n\nGet free and easy HTTPS delivery from Cloudflare (be sure to use \u201cFull SSL\u201d!)\nGet a free and automation-friendly certificate from Let\u2019s Encrypt (now in open beta).\nTest how well your HTTPS is set up using SSLTest.\n\nOther vendors and platforms are rapidly simplifying and reducing the cost of their HTTPS offering, as demand and importance grows.\nTwo-Factor Authentication\nThe most sensitive data is usually stored behind a login, and the authentication process is the primary gate in front of this data. Making this process secure has many aspects, including using HTTPS when accepting credentials, having a strong password policy, never storing the password, and more.\nAll of these are important, but the best single step to boost your authentication security is to introduce two-factor authentication (2FA). Adding 2FA usually means prompting users for an additional one-time code when logging in, which they get via SMS or a mobile app (e.g. Google Authenticator). This code is short-lived and is extremely hard for a remote attacker to guess, thus vastly reducing the risk a leaked or easily guessed password presents.\nThe typical algorithm for 2FA is based on an IETF standard called the time-based one-time password (TOTP) algorithm, and it isn\u2019t that hard to implement. Joel Franusic wrote a great post on implementing 2FA; modules like speakeasy make it even easier; and you can swap SMS with Google Authenticator or your own app if you prefer. If you don\u2019t want to build 2FA support yourself, you can purchase two/multi-factor authentication services from vendors such as DuoSecurity, Auth0, Clef, Hypr and others.\nIf implementing 2FA still feels like too much work, you can also choose to offload your entire authentication process to an OAuth-based federated login. Many companies offer this today, including Facebook, Google, Twitter, GitHub and others. These bigger players tend to do authentication well and support 2FA, but you should consider what data you\u2019re sharing with them in the process.\nTracking Known Vulnerabilities\nMost of the code in a modern application was actually written by third parties, and pulled into your app as frameworks, modules and libraries. While using these components makes us much more productive, along with their functionality we also adopt their security flaws. 
To make things worse, some of these flaws are well-known vulnerabilities, making it easy for hackers to take advantage of them in an attack.\nThis is a real problem and happens on pretty much every platform. Do you develop in Java? In 2014, over 6% of Java modules downloaded from Maven had a known severe security issue, the typical Java applications containing 24 flaws. Are you coding in Node.js? Roughly 14% of npm packages carry a known vulnerability, and over 60% of dev shops find vulnerabilities in their code. 30% of Docker Hub containers include a high priority known security hole, and 60% of the top 100,000 websites use client-side libraries with known security gaps.\nTo find known security issues, take stock of your dependencies and match them against language-specific lists such as Snyk\u2019s vulnerability DB for Node.js, rubysec for Ruby, victims-db for Python and OWASP\u2019s Dependency Check for Java. Once found, you can fix most issues by upgrading the component in question, though that may be tricky for indirect dependencies. \nThis process is still way too painful, which means most teams don\u2019t do it. The Snyk team and I are hoping to change that by making it as easy as possible to find, fix and monitor known vulnerabilities in your dependencies. Snyk\u2019s wizard will help you find and fix these issues through guided upgrades and patches, and adding Snyk\u2019s test to your continuous integration and deployment (CI/CD) will help you stay secure as your code evolves.\nNote that newly disclosed vulnerabilities usually impact old code \u2013 the one you\u2019re running in production. This means you have to stay alert when new vulnerabilities are disclosed, so you can fix them before attackers can exploit them. You can do so by subscribing to vulnerability lists like US-CERT, OSVDB and NVD. Snyk\u2019s monitor will proactively let you know about new disclosures relevant to your code, but only for Node.js for now \u2013 you can register to get updated when we expand.\nSecuring Yourself\nIn addition to making your application secure, you should make the contributors to that application secure \u2013 including you. Earlier this year we\u2019ve seen attackers target mobile app developers with a malicious Xcode. The real target, however, wasn\u2019t these developers, but rather the users of the apps they create. That you create. Securing your own work environment is a key part of keeping your apps secure, and your users from being compromised.\nThere\u2019s no single step that will make you fully secure, but here are a few steps that can make a big impact:\n\nUse 2FA on all the services related to the application, notably source control (e.g. GitHub), cloud platform (e.g. AWS), CI/CD, CDN, DNS provider and domain registrar. If an attacker compromises any one of those, they could modify or replace your entire application. I\u2019d recommend using 2FA on all your personal services too.\nUse a password manager (e.g. 1Password, LastPass) to ensure you have a separate and complex password for each service. Some of these services will get hacked, and passwords will leak. When that happens, don\u2019t let the attackers access your other systems too.\nSecure your workstation. Be careful what you download, lock your screen when you walk away, change default passwords on services you install, run antivirus software, etc. Malware on your machine can translate to malware in your applications.\nBe very wary of phishing. 
Smart attackers use \u2018spear phishing\u2019 techniques to gain access to specific systems, and can trick even security savvy users. There are even phishing scams targeting users with 2FA. Be alert to phishy emails.\nDon\u2019t install things through curl <somewhere-on-the-web> | sudo bash, especially if the URL is on GitHub, meaning someone else controls it. Don\u2019t do it on your machines, and definitely don\u2019t do it in your CI/CD systems. Seriously.\n\nStaying secure should be important to you personally, but it\u2019s doubly important when you have privileged access to an application. Such access makes you a way to reach many more users, and therefore a more compelling target for bad actors.\nA Culture of Security\nUsing HTTPS, enabling two-factor authentication and fixing known vulnerabilities are significant steps in building security at your core. As you implement them, remember that these are just a few steps in a longer journey.\nThe end goal is to embrace security as an aspect of quality, and accept we all share the responsibility of keeping ourselves \u2013 and our users \u2013 safe.", "year": "2015", "author": "Guy Podjarny", "author_slug": "guypodjarny", "published": "2015-12-11T00:00:00+00:00", "url": "https://24ways.org/2015/upping-your-web-security-game/", "topic": "code"} {"rowid": 75, "title": "A Harder-Working Class", "contents": "Class is only becoming more important. Focusing on its original definition as an attribute for grouping (or classifying) as well as linking HTML to CSS, recent front-end development practices are emphasizing class as a vessel for structured, modularized style packages. These patterns reduce the need for repetitive declarations that can seriously bloat file sizes, and instil human-readable understanding of how the interface, layout, and aesthetics are constructed.\n\nIn the next handful of paragraphs, we will look at how these emerging practices \u2013 such as object-oriented CSS and SMACSS \u2013 are pushing the relevance of class. We will also explore how HTML and CSS architecture can be further simplified, performance can be boosted, and CSS utility sharpened by combining class with the attribute selector.\n\nA primer on attribute selectors\n\nWhile attribute selectors were introduced in the CSS 2 spec, they are still considered rather exotic. These well-established and well-supported features give us vastly improved flexibility in targeting elements in CSS, and offer us opportunities for smarter markup. With an attribute selector, you can directly style an element based on any of its unique \u2013 or uniquely shared \u2013 attributes, without the need for an ID or extra classes. Unlike pseudo-classes, pseudo-elements, and other exciting features of CSS3, attribute selectors do not require any browser-specific syntax or prefix, and are even supported in Internet Explorer 7. \n\nFor example, say we want to target all anchor tags on a page that link to our homepage. Where otherwise we might need to manually identify and add classes to the HTML for these specific links, we could simply write:\n\n[href=index.html] { }\n\nThis selector reads: target every element that has an href attribute of \u201cindex.html\u201d. \n\nAttribute selectors are more faceted, though, as they also give us some very simple regular expression-like logic that helps further narrow (or widen) a selector\u2019s scope. In our previous example, what if we wanted to also give indicative styles to any anchor tag linking to an external site? 
With no way to know what the exact href value would be for every external link, we need to use an expression to match a common aspect of those links. In this case, we know that all external links need to start with \u201chttp\u201d, so we can use that as a hook:\n\n[href^=http] { }\n\nThe selector here reads: target every element that has an href attribute that begins with \u201chttp\u201d (which will also include \u201chttps\u201d). The ^= means \u201cstarts with\u201d. There are a few other simple expressions that give us a lot of flexibility in targeting elements, and I have found that a deep understanding of these and other selector types to be very useful.\n\nThe class-attribute selector\n\nBy matching classes with the attribute selector, CSS can be pushed to accomplish some exciting new feats. What I call a class-attribute selector combines the advantages of classes with attribute selectors by targeting the class attribute, rather than a specific class. Instead of selecting .urgent, you could select [class*=urgent]. The latter may seem like a more verbose way of accomplishing the former, but each would actually match two subtly different groups of elements.\n\nEric Meyer first explored the possibility of using classes with attribute selectors over a decade ago. While his interest in this technique mostly explored the different facets of the syntax, I have found that using class-attribute selectors can have distinct advantages over either using an attribute selector or a straightforward class selector.\n\nFirst, let\u2019s explore some of the subtleties of why we would target class before other attributes:\n\n\n\tClasses are ubiquitous. They have been supported since the HTML 4 spec was released in 1999. Newer attributes, such as the custom data attribute, have only recently begun to be adopted by browsers.\n\tClasses have multiple ways of being targeted. You can use the class selector or attribute selector (.classname or [class=classname]), allowing more flexible specificity than resorting to an ID or !important.\n\tClasses are already widely used, so adding more classes will usually require less markup than adding more attributes.\n\tClasses were designed to abstractly group and specify elements, making them the most appropriate attribute for styling using object-oriented methods (as we will learn in a moment).\n\n\nAlso, as Meyer pointed out, we can use the class-attribute selector to be more strict about class declarations. Of these two elements:\n\n<h2 class=\"very urgent\">\n\n<h2 class=\"urgent\">\n\n\u2026only the second h2 would be selected by [class=urgent], while .urgent would select both. The use of = matches any element with the exact class value of \u201curgent\u201d. Eric explores these nuances further in his series on attribute selectors, but perhaps more dramatic is the added power that class-attribute selectors can bring to our CSS.\n\nMore object-oriented, more scalable and modular\n\nNicole Sullivan has been pushing abstracted, object-oriented thinking in CSS development for years now. She has shared stacks of knowledge on how behemoth sites have seen impressive gains in maintenance overhead and CSS file sizes by leaning heavier on classes derived from common patterns. Jonathan Snook also speaks, writes and is genuinely passionate about improving our markup by using more stratified and modular class name conventions. With SMACSS, he shows this to be highly useful across sites \u2013 both complex and simple \u2013 that exhibit repeated design patterns. 
Sullivan and Snook both push the use of class for styling over other attributes, and many front-end developers are fast advocating such thinking as best practice.\n\nWith class-attribute selectors, we can further abstract our CSS, pushing its scalability. In his chapter on modules, Snook gives the example of a .pod class that might represent a certain set of styles. A .pod style set might be used in varying contexts, leading to CSS that might normally look like this:\n\n.pod { }\nform .pod { }\naside .pod { }\n\nAccording to Snook, we can make these styles more portable by targeting more verbose classes, rather than context:\n\n.pod { }\n.pod-form { }\n.pod-sidebar { }\n\n\u2026resulting in the following HTML:\n\n<div class=\"pod\">\n<div class=\"pod pod-form\">\n<div class=\"pod pod-sidebar\">\n\nThis divorces the <div>\u2019s styles from its context, making it applicable to any situation in which it is needed. The markup is clean and portable, and the classes are imbued with meaning as to what module they belong to. \n\nUsing class-attribute selectors, we can simplify this further:\n\n[class*=pod] { }\n.pod-form { }\n.pod-sidebar { }\n\nThe *= tells the browser to look for any element with a class attribute containing \u201cpod\u201d, so it matches \u201cpod\u201d, \u201cpod-form\u201d, \u201cpod-sidebar\u201d, etc. This allows only one class per element, resulting in simpler HTML:\n\n<div class=\"pod\">\n<div class=\"pod-form\">\n<div class=\"pod-sidebar\">\n\nWe could further abstract the concept of \u201cform\u201d and \u201csidebar\u201d adjustments if we knew that each of those alterations would always need the same treatment.\n\n/* Modules */\n[class*=pod] { }\n[class*=btn] { }\n\n/* Alterations */\n[class*=-form] { }\n[class*=-sidebar] { }\n\nIn this case, all elements with classes appended \u201c-form\u201d or \u201c-sidebar\u201d would be altered in the same manner, allowing the markup to stay simple:\n\n<form>\n <h2 class=\"pod-form\">\n <a class=\"btn-form\" href=\"#\">\n\n<aside>\n <h2 class=\"pod-sidebar\">\n <a class=\"btn-sidebar\" href=\"#\">\n\n50+ shades of specificity\n\nClasses are just powerful enough to override element selectors and default styling, but still leave room to be trumped by IDs and !important styles. This makes them more suitable for object-oriented patterns and helps avoid messy specificity issues that can not only be a pain for developers to maintain, but can also affect a site\u2019s performance. As Sullivan notes, \u201cIn almost every case, classes work well and have fewer unintended consequences than either IDs or element selectors\u201d. Proper use of specificity and cascade is crucial in building straightforward, efficient CSS.\n\nOne interesting aspect of attribute selectors is that they can be compounded for increasing levels of specificity. Attribute selectors are assigned a specificity level of ten, the same as class selectors, but both class and attribute selectors can be chained together, giving them more and more specificity with each link. 
Some examples:\n\n.box { } \n/* Specificity of 10 */\n\n.box.promo { } \n/* Specificity of 20 */\n\n[class*=box] { } \n/* Specificity of 10 */\n\n[class*=box][class*=promo] { } \n/* Specificity of 20 */\n\nYou can chain both types together, too:\n\n.box[class*=promo] { } \n/* Specificity of 20 */\n\nI was amused to find, though, that you can chain the exact same class and attribute selectors for infinite levels of specificity\n\n.box { } \n/* Specificity of 10 */\n\n.box.box { } \n/* Specificity of 20 */\n\n.box.box.box { } \n/* Specificity of 30 */\n\n[class*=box] { }\n/* Specificity of 10 */\n\n[class*=box][class*=box] { }\n/* Specificity of 20 */\n\n[class*=box][class*=box][class*=box] { }\n/* Specificity of 30 */\n\n.box[class*=box].box[class*=box] { } \n/* Specificity of 40 */\n\nTo override .box styles for promo, we wouldn\u2019t need to add an ID, change the order of .promo and .box in the CSS, or resort to an !important style. Granted, any issue that might need this fine level of specificity tweaking could probably be better solved with clever cascades, but having options never hurts.\n\nSmarter CSS\n\nOne of the most powerful aspects of the class-attribute selector is its ability to expand the simple logic found in CSS. When developing Gridset (an online tool for building grids and outputting them as CSS), I realized that with the right class name conventions, class-attribute selectors would allow the CSS to be smart enough to automatically adjust for column offsets without the need for extra classes. This imbued the CSS output with logic that other frameworks lacked, and makes a developer\u2019s job much easier. \n\nSay you need an element that spans column five (c5) to column six (c6) on your grid, and is preceded by an element spanning column one (c1) to column three (c3). The CSS can anticipate such a scenario:\n\n.c1-c3 + .c5-c6 {\n margin-left: 25%; /* \u2026or the width of column four plus two gutter widths */\n}\n\n\u2026but to accommodate all of the margin offsets that could span that same gap, we would need to write a rather protracted list for just a six column grid:\n\n.c1-c3 + .c5-c6,\n.c1-c3 + .c5,\n.c2-c3 + .c5-c6,\n.c2-c3 + .c5,\n.c3 + .c5-c6,\n.c3 + .c5 {\n margin-left: 25%; \n}\n\nNow imagine how the verbosity compounds when we repeat this type of declaration for every possible margin in a grid. The more columns added to the grid, the longer this selector list would get, too, making the CSS harder for the developer to maintain and slowing the load time. Using class-attribute selectors, though, this can be much simpler:\n\n[class*=c3] + [class*=c5] {\n margin-left: 25%;\n}\n\nI\u2019ve detailed how we extract as much logic as possible from as little CSS as needed on the Gridset blog.\n\nMore flexible selectors\n\nIn a recent project, I was working with Drupal-generated classes to change styles for certain special pages on a site. Without being able to change the code base, I was left trying to find some specific aspect of the generated HTML to target. I noticed that every special page was given a prefixed class, unique to the page, resulting in CSS like this:\n\n.specialpage-about,\n.specialpage-contact,\n.specialpage-info,\n\u2026\n\n\u2026and the list kept growing with each new special page. Such bloat would lead to problems down the line, and add development overhead to editorial decisions, which was a situation we were trying to avoid. 
I was easily able to fix this, though, with a concise class-attribute selector:\n\n[class*=specialpage-]\n\nThe CSS was now flexible enough to accommodate both the editorial needs of the client, and the development restrictions of the CMS.\n\nSelector performance\n\nAs Snook tells us in his chapter on Selector Performance, selectors are read by the browser from right to left, matching every element that adheres to each rule (or part of the selector). The more specific we can make the right-most rules \u2013 and every other part of your selectors \u2013 the more performant your CSS will be. So this selector:\n\n.home-page .promo .main-header\n\n\u2026would be more performant than:\n\n.home-page div header\n\n\u2026because there are likely many more header and div elements on the page, but not so many elements with those specific classes.\n\nNow, the class-attribute selector could be more general than a class selector, but not by much. I ran numerous tests based on the work of Steve Souders (and a few others) to test a class-attribute selector against a normal class selector. Given that Javascript will freeze during style rendering, I created a script that will add, then remove, a stylesheet on a page 5000 times, and measure only the time that elapses during the rendering freeze. The script runs four tests, essentially: one where a class selector and class-attribute Selector match a single element, and one they match multiple elements on the page.\n\nAfter running the test over 100 times and averaging the results, I have not seen a significant difference in rendering times. (As of this writing, the class-attribute selector has been 0.398% slower on average.) View the results here.\n\nGiven the sheer amount of bytes potentially saved by reducing selector lists, though, I am confident class-attribute selectors could shorten load times on larger sites and, at the very least, save precious development time.\n\nConclusion\n\nWith its flexibility and broad remit, class has at times been derided as too lenient, allowing CMSes and lazy developers to fill its values with presentational hacks or verbose gibberish. There have even been calls for an early retirement. Class continues, though, to be one of our most crucial tools.\n\nFront-end developers are rightfully eager to expand production abilities through innovations such as Sass or LESS, but this should not preclude us from honing the tools we already know as well. Every technique demonstrated in this article was achievable over a decade ago and most of the same thinking could be applied to IDs, rels, or any other attribute (though the reasons listed above give class an edge). The recent advent of methods such as object-oriented CSS and SMACSS shows there is still much room left to expand what simple HTML and CSS can accomplish. Progress may not always be found in the innovation of our tools, but through sharpening our understanding of them.", "year": "2012", "author": "Nathan Ford", "author_slug": "nathanford", "published": "2012-12-15T00:00:00+00:00", "url": "https://24ways.org/2012/a-harder-working-class/", "topic": "code"} {"rowid": 76, "title": "Giving CSS Animations and Transitions Their Place", "contents": "CSS animations and transitions may not sit squarely in the realm of the behaviour layer, but they\u2019re stepping up into this area that used to be pure JavaScript territory. Heck, CSS might even perform better than its JavaScript equivalents in some cases. That\u2019s pretty serious! 
With CSS\u2019s new tricks blurring the lines between presentation and behaviour, it can start to feel bloated and messy in our CSS files. It\u2019s an uncomfortable feeling.\n\nHere are a pair of methods I\u2019ve found to be pretty helpful in keeping the potential bloat and wire-crossing under control when CSS has its hands in both presentation and behaviour.\n\nSame eggs, more baskets\n\nStructuring your CSS to have separate files for layout, typography, grids, and so on is a fairly common approach these days. But which one do you put your transitions and animations in? The initial answer, as always, is \u201cit depends\u201d.\n\nSmall effects here and there will likely sit just fine with your other styles. When you move into more involved effects that require multiple animations and some logic support from JavaScript, it\u2019s probably time to choose none of the above, and create a separate CSS file just for them.\n\nPutting all your animations in one file is a huge help for code organization. Even if you opt for a name less literal than animations.css, you\u2019ll know exactly where to go for anything CSS animation related. That saves time and effort when it comes to editing and maintenance. Keeping track of which animations are still currently used is easier when they\u2019re all grouped together as well. And as an added bonus, you won\u2019t have to look at all those horribly unattractive and repetitive prefixed @-keyframe rules unless you actually need to.\n\nAn animations.css file might look something like the snippet below. It defines each animation\u2019s keyframes and defines a class for each variation of that animation you\u2019ll be using. Depending on the situation, you may also want to include transitions here in a similar way. (I\u2019ve found defining transitions as their own class, or mixin, to be a huge help in past projects for me.)\n\n// defining the animation\n@keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n@-webkit-keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n@-moz-keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n@-ms-keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n\n\u2026\n\n// class that assigns the animation\n\n.catsBackground {\n height: 100%;\n background: transparent url(../endlessKittens.png) 0 0 repeat-y;\n animation: catFall 1s linear infinite;\n -webkit-animation: catFall 1s linear infinite;\n -moz-animation: catFall 1s linear infinite;\n -ms-animation: catFall 1s linear infinite;\n}\n\nIf we don\u2019t need it, why load it?\n\nHaving all those CSS animations and transitions in one file gives us the added flexibility to load them only when we want to. Loading a whole lot of things that will never be used might seem like a bit of a waste.\n\nWhile CSS has us impressed with its motion chops, it falls flat when it comes to the logic and fine-grained control. JavaScript, on the other hand, is pretty good at both those things. Chances are the content of your animations.css file isn\u2019t acting alone. You\u2019ll likely be adding and removing classes via JavaScript to manage your CSS animations at the very least. 
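\n\nFor what it\u2019s worth, a minimal sketch of that class juggling might look something like the following \u2013 the #stage and #play elements here are my own stand-ins rather than anything from the demo above, and classList needs a reasonably modern browser:\n\n<script>\n// grab a hypothetical container and a hypothetical trigger button\nvar stage = document.getElementById('stage');\ndocument.getElementById('play').addEventListener('click', function () {\n\t// adding the class kicks off the catFall animation defined in animations.css;\n\t// removing it stops the animation and drops those background styles with it\n\tstage.classList.toggle('catsBackground');\n});\n</script>\n\n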
If your CSS animations are so entwined with JavaScript, why not let them hang out with the rest of the behaviour layer and only come out to play when JavaScript is supported?\n\nDynamically linking your animations.css file like this means it will be completely ignored if JavaScript is off or not supported. No JavaScript? No additional behaviour, not even the parts handled by CSS.\n\n<script>\ndocument.write('<link rel=\"stylesheet\" type=\"text/css\" href=\"animations.css\">');\n</script>\n\nThis technique comes up in progressive enhancement techniques as well, but it can help here to keep your presentation and behaviour nicely separated when more than one language is involved. The aim in both cases is to avoid loading files we won\u2019t be using.\n\nIf you happen to be doing something a bit fancier \u2013 like 3-D transforms or critical animations that require more nuanced fallbacks \u2013 you might need something like modernizr to step in to determine support more specifically. But the general idea is the same.\n\nSumming it all up\n\nUsing a couple of simple techniques like these, we get to pick where to best draw the line between behaviour and presentation based on the situation at hand, not just on what language we\u2019re using. The power of when to separate and how to reassemble the individual pieces can be even greater if you use preprocessors as part of your process. We\u2019ve got a lot of options! The important part is to make forward-thinking choices to save your future self, and even your current self, unnecessary headaches.", "year": "2012", "author": "Val Head", "author_slug": "valhead", "published": "2012-12-08T00:00:00+00:00", "url": "https://24ways.org/2012/giving-css-animations-and-transitions-their-place/", "topic": "code"} {"rowid": 79, "title": "Responsive Images: What We Thought We Needed", "contents": "If you were to read a web designer\u2019s Christmas wish list, it would likely include a solution for displaying images responsively. For those concerned about users downloading unnecessary image data, or serving images that look blurry on high resolution displays, finding a solution has become a frustrating quest.\n\nHaving experimented with complex and sometimes devilish hacks, consensus is forming around defining new standards that could solve this problem. Two approaches have emerged.\n\nThe <picture> element markup pattern was proposed by Mat Marquis and is now being developed by the Responsive Images Community Group. By providing a means of declaring multiple sources, authors could use media queries to control which version of an image is displayed and under what conditions:\n\n<picture width=\"500\" height=\"500\">\n <source media=\"(min-width: 45em)\" src=\"large.jpg\">\n <source media=\"(min-width: 18em)\" src=\"med.jpg\">\n <source src=\"small.jpg\">\n <img src=\"small.jpg\" alt=\"\">\n <p>Accessible text</p>\n</picture>\n\nA second proposal put forward by Apple, the srcset attribute, uses a more concise syntax intended for use with the <img> element, although it could be compatible with the <picture> element too. 
This would allow authors to provide a set of images, but with the decision on which to use left to the browser:\n\n<img src=\"fallback.jpg\" alt=\"\" srcset=\"small.jpg 640w 1x, small-hd.jpg 640w 2x, med.jpg 1x, med-hd.jpg 2x \">\n\nEnter Scrooge\n\n\n\tMen\u2019s courses will foreshadow certain ends, to which, if persevered in, they must lead.\nEbenezer Scrooge\n\n\nGiven the complexity of this issue, there\u2019s a heated debate about which is the best option. Yet code belies a certain truth. That both feature verbose and opaque syntax, I\u2019m not sure either should find its way into the browser \u2013 especially as alternative approaches have yet to be fully explored.\n\nSo, as if to dampen the festive cheer, here are five reasons why I believe both proposals are largely redundant.\n\n1. We need better formats, not more markup\n\nAs we move away from designs defined with fixed pixel values, bitmap images look increasingly unsuitable. While simple images and iconography can use scalable vector formats like SVG, for detailed photographic imagery, raster formats like GIF, PNG and JPEG remain the only suitable option.\n\nThere is scope within current formats to account for varying bandwidth but this requires cooperation from browser vendors. Newer formats like JPEG2000 and WebP generate higher quality images with smaller file sizes, but aren\u2019t widely supported.\n\nWhile it\u2019s tempting to try to solve this issue by inventing new markup, the crux of it remains at the file level.\n\nDaan Jobsis\u2019s experimentation with image compression strengthens this argument. He discovered that by increasing the dimensions of a JPEG image while simultaneously reducing its quality, a smaller files could be produced, with the resulting image looking just as good on both standard and high-resolution displays.\n\nThis may be a hack in lieu of a more permanent solution, but it\u2019s applied in the right place. Easy to accomplish with existing tools and without compatibility issues, it has few downsides. Further experimentation in this area should be encouraged, with standardisation efforts more helpful if focused on developing new image formats or, preferably, extending existing ones.\n\n2. Art direction doesn\u2019t belong in markup\n\nA desired benefit of the <picture> markup pattern is to allow for greater art direction. For example, rather than scaling down images on smaller displays to the point that their content is hard to discern, we could present closer crops instead:\n\n\n\nThis can be achieved with CSS of course, although with a download penalty for those parts of an image not shown. This point may be negligible, however, since in the context of adaptable layouts, these hidden areas may end up being revealed anyway.\n\nArt direction concerns design, not content. If we wish to maintain a separation of concerns, including presentation within our markup seems misguided.\n\n3. The size of a display has little relation to the size of an image\n\nBy using media queries, the <picture> element allows authors to choose which characteristics of the screen or viewport to query for different images to be displayed.\n\nIn developing sites at Clearleft, we have noticed that the viewport is essentially arbitrary, with the size of an image\u2019s containing element more important. For example, look at how this grid of images may adapt at different viewport widths:\n\n\n\nAs we build more modular systems, components need to be adaptable in and of themselves. 
There is a case to be made for developing more contextual methods of querying, rather than those based on attributes of the display.\n\n4. We haven\u2019t lived with the problem long enough\n\nA key strength of the web is that the underlying platform can be continually iterated. This can also be problematic if snap judgements are made about what constitutes an improvement.\n\nThe early history of the web is littered with such examples, be it the perceived need for blinking text or inline typographic styling. To build a platform for the future, additions to it should be carefully considered. And if we want more consistent support across browsers, burdening vendors with an ever increasing list of features seems counterproductive.\n\nOnly once the need for a new feature is sufficiently proven, should we look to standardise it. Before we could declare hover effects, rounded corners and typographic styling in CSS, we used JavaScript as a polyfill. Sure, doing so was painful, but use cases were fully explored, and the CSS specification better reflected the needs of authors.\n\n5. Images and the web aesthetic\n\nThe srcset proposal has emerged from a company that markets its phones as being able to browse the real \u2013 yet squashed down, tapped and zoomable \u2013 web. Perhaps Apple should make its own website responsive before suggesting how the rest of us should do so.\n\nConversely, while the <picture> proposal has the backing of a few respected developers and designers, it was born out of the work Mat Marquis and Filament Group did for the Boston Globe. As the first large-scale responsive design, this was a landmark project that ignited the responsive web design movement and proved its worth. But it was the first.\n\nIts design shares a vernacular with that of contemporary newspaper websites, with a columnar, image-laden and densely packed layout. Compared to more recent examples \u2013 Quartz, The Next Web and the New York Times Skimmer \u2013 it feels out of step with the future direction of news sites. In seeking out a truer aesthetic for the web in which software interfaces have greater influence, we might discover that the need for responsive images isn\u2019t as great as originally thought.\n\n\n\nBuilding for the future\n\nWith responsive design, we\u2019ve accepted the idea that a fully fluid layout, rather than a set of fixed layouts, is best suited to the web\u2019s unpredictable nature. Current responsive image proposals are antithetical to this approach.\n\nWe need solutions that lack complexity, are device-agnostic and work within existing workflows. Any proposal that requires different versions of the same image to be created is likely to have to acquiesce under the pressure of reality.\n\nWhile it\u2019s easy to get distracted by the size and quality of an image, and how we might choose to serve it, often the simplest solution is not to include it at all. After years of gluttonous design practice, in which fast connections and expansive display sizes were an accepted norm, we have got used to filling pages with needless images and countless items of page furniture.\n\nTo design more adaptable experiences, the presence of every element needs to be questioned, for its existence requires additional data to be downloaded or further complexity within a design system. Conditional loading techniques mean that the inclusion of images is no longer a binary choice, but can instead appear in a progressively enhanced manner.\n\nSo here is my proposal. 
Instead of spending the next year worrying about responsive images, let\u2019s embrace the constraints of the medium, and seek out new solutions that can work within them.", "year": "2012", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2012-12-11T00:00:00+00:00", "url": "https://24ways.org/2012/responsive-images-what-we-thought-we-needed/", "topic": "code"} {"rowid": 80, "title": "HTML5 Video Bumpers", "contents": "Video is a bigger part of the web experience than ever before. With native browser support for HTML5 video elements freeing us from the tyranny of plugins, and the availability of faster internet connections to the workplace, home and mobile networks, it\u2019s now pretty straightforward to publish video in a way that can be consumed in all sorts of ways on all sorts of different web devices.\n\nI recently worked on a project where the client had shot some dedicated video shorts to publish on their site. They also had some five-second motion graphics produced to top and tail the videos with context and branding. This pretty common requirement is a great idea on the web, where a user might land at your video having followed a link and be viewing a page without much context.\n\nKnown as bumpers, these short introduction clips help brand a video and make it look a lot more professional.\n\n\n\nAdding bumpers to a video\n\nThe simplest way to add bumpers to a video would be to edit them on to the start and end of the video file itself. Cooking the bumpers into the video file is easy, but should you ever want to update them it can become a real headache. If the branding needs updating, for example, you\u2019d need to re-edit and re-encode all your videos. Not a fun task.\n\nWhat if the bumpers could be added dynamically? That would enable you to use the same bumper for multiple videos (decreasing download time for users who might watch more than one) and to update the bumpers whenever you wanted. You could change them seasonally, update them for special promotions, run different advertising slots, perform multivariate testing, or even target different bumpers to different users.\n\nThe trade-off, of course, is that if you dynamically add your bumpers, there\u2019s a chance that a user in a given circumstance might not see the bumper. For example, if the main video feature was uploaded to YouTube, you\u2019d have no way to control the playback. As always, you need to weigh up the pros and cons and make your choice.\n\nHTML5 bumpers\n\nIf you wanted to dynamically add bumpers to your HTML5 video, how would you go about it? That was the question I found myself needing to answer for this particular client project.\n\nMy initial thought was to treat it just like an image slideshow. If I were building a slideshow that moved between images, I\u2019d use CSS absolute positioning with z-index to stack the images up on top of each other in a pile, with the first image on top. To transition to the second image, I\u2019d use JavaScript to fade the top image out, revealing the second image beneath it.\n\n\n\nNow that video is just a native object in the DOM, just like an image, why not do the same? Stack the videos up with the opening bumper on top, listen for the video\u2019s onended event, and fade it out to reveal the main feature behind. Good idea, right?\n\nWrong\n\nRemember that this is the web. It\u2019s never going to be that easy. The problem here is that many non-desktop devices use native, dedicated video players. 
Think about watching a video on a mobile phone \u2013 when you play the video, the phone often goes full-screen in its native player, leaving the web page behind. There\u2019s no opportunity to fade or switch z-index, as the video isn\u2019t being viewed in the page. Your page is left powerless. Powerless!\n\n\n\nSo what can we do? What can we control?\n\nThose of us with particularly long memories might recall a time before CSS, when we\u2019d have to use JavaScript to perform image rollovers. As CSS background images weren\u2019t a practical reality, we would use lots of <img> elements, and perform a rollover by modifying the src attribute of the image. \n\nTurns out, this old trick of modifying the source can help us out with video, too. In most cases, modifying the src attribute of a <video> element, or perhaps more likely the src attribute of a source element, will swap from one video to another.\n\nSwappin\u2019 it\n\nLet\u2019s take a deliberately simple example of a super-basic video tag:\n\n<video src=\"mycat.webm\" controls>no fallback coz i is lame, innit.</video>\n\nWe could very simply write a script to find all video tags and give them a new src to show our bumper.\n\n<script>\n\tvar videos, i, l;\n\tvideos = document.getElementsByTagName('video');\n\tfor(i=0, l=videos.length; i<l; i++) {\n\t\tvideos[i].setAttribute('src', 'bumper-in.webm');\n\t}\n</script>\n\nView the example in a browser with WebM support. You\u2019ll see that the video is swapped out for the opening bumper. Great!\n\nBeefing it up\n\nOf course, we can\u2019t just publish video in one format. In practical use, you need a <video> element with multiple <source> elements containing your different source formats.\n\n<video controls>\n <source src=\"mycat.mp4\" type=\"video/mp4\" />\n <source src=\"mycat.webm\" type=\"video/webm\" />\n <source src=\"mycat.ogv\" type=\"video/ogg\" />\n</video>\n\nThis time, our script needs to loop through the sources, not the videos. We\u2019ll use a regular expression replacement to swap out the file name while maintaining the correct file extension.\n\n<script>\n var sources, i, l, orig;\n sources = document.getElementsByTagName('source');\n for(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('src');\n sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2'));\n // reload the video\n sources[i].parentNode.load();\n }\n</script>\n\nThe difference this time is that when changing the src of a <source> we need to call the .load() method on the video to get it to acknowledge the change.\n\nSee the code in action, this time in a wider range of browsers.\n\nBut, my video!\n\nI guess we should get the original video playing again. Keeping the same markup, we need to modify the script to do two things:\n\n\n\tStore the original src in a data- attribute so we can access it later\n\tAdd an event listener so we can detect the end of the bumper playing, and load the original video back in\n\n\nAs we need to loop through the videos this time to add the event listener, I\u2019ve moved the .load() call into that loop. 
It\u2019s a bit more efficient to call it only once after modifying all the video\u2019s sources.\n\n<script>\nvar videos, sources, i, l, orig;\nsources = document.getElementsByTagName('source');\nfor(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('src');\n sources[i].setAttribute('data-orig', orig);\n sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2'));\n}\nvideos = document.getElementsByTagName('video');\nfor(i=0, l=videos.length; i<l; i++) {\n videos[i].load();\n videos[i].addEventListener('ended', function(){\n sources = this.getElementsByTagName('source');\n for(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('data-orig');\n if (orig) {\n sources[i].setAttribute('src', orig);\n }\n sources[i].setAttribute('data-orig','');\n }\n this.load();\n this.play();\n });\n}\n</script>\n\nAgain, view the example to see the bumper play, followed by our spectacular main feature. (That\u2019s my cat, Widget. His interests include sleeping and internet marketing.)\n\nTidying things up\n\nThe final thing to do is add our closing bumper after the main video has played. This involves the following changes:\n\n\n\tWe need to keep track of whether the src has been changed, so we only play the video if it\u2019s changed. I\u2019ve added the modified variable to track this, and it stops us getting into a situation where the video just loops forever.\n\tAdd an else to the event listener, for when the orig is false (so the main feature has been playing) to load in the end bumper. We also check that we\u2019re not already playing the end bumper. Because looping.\n\n\n<script>\nvar videos, sources, i, l, orig, current, modified;\nsources = document.getElementsByTagName('source');\nfor(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('src');\n sources[i].setAttribute('data-orig', orig);\n sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2'));\n}\nvideos = document.getElementsByTagName('video');\nfor(i=0, l=videos.length; i<l; i++) {\n videos[i].load();\n modified = false;\n videos[i].addEventListener('ended', function(){\n sources = this.getElementsByTagName('source');\n for(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('data-orig');\n if (orig) {\n sources[i].setAttribute('src', orig);\n modified = true;\n }else{\n current = sources[i].getAttribute('src');\n if (current.indexOf('bumper-out')==-1) {\n sources[i].setAttribute('src', current.replace(/([\w]+)\.(\w+)/, 'bumper-out.$2'));\n modified = true;\n }else{\n this.pause();\n modified = false;\n }\n }\n sources[i].setAttribute('data-orig','');\n }\n if (modified) {\n this.load();\n this.play();\n }\n });\n}\n</script>\n\nYo ho ho, that\u2019s a lot of JavaScript. See it in action \u2013 you should get a bumper, the cat video, and an end bumper.\n\nOf course, this code works fine for demonstrating the principle, but it\u2019s very procedural. Nothing wrong with that, but to do something similar in production, you\u2019d probably want to make the code more modular to ease maintainability. Besides, you may want to use a framework, rather than basic JavaScript. \n\nThe end credits\n\nOne really important principle here is that of progressive enhancement. If the browser doesn\u2019t support JavaScript, the user won\u2019t see your bumper, but they will get the main video. 
If the browser supports JavaScript but doesn\u2019t allow you to modify the src (as was the case with older versions of iOS), the user won\u2019t see your bumper, but they will get the main video.\n\nIf a search engine or social media bot grabs your page and looks for content, they won\u2019t see your bumper, but they will get the main video \u2013 which is absolutely what you want.\n\nThis means that if the bumper is absolutely crucial, you may still need to cook it into the video. However, for many applications, running it dynamically can work quite well.\n\nAs always, it comes down to three things:\n\n\n\tMeasure your audience: know how people access your site\n\tTest the solution: make sure it works for your audience\n\tPlan for failure: it\u2019s the web and that\u2019s how things work \u2018round these parts\n\n\nBut most of all play around with it, have fun and build something awesome.", "year": "2012", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2012-12-01T00:00:00+00:00", "url": "https://24ways.org/2012/html5-video-bumpers/", "topic": "code"} {"rowid": 83, "title": "Cut Copy Paste", "contents": "Long before I got into this design thing, I was heavily into making my own music inspired by the likes of Coldcut and Steinski. I would scour local second-hand record shops in search of obscure beats, loops and bits of dialogue in the hope of finding that killer sample I could then splice together with other things to make a huge hit that everyone would love. While it did eventually lead to a record contract and getting to release a few 12\u2033 singles, ultimately I knew I\u2019d have to look for something else to pay the bills.\n\nI may not make my own records any more, but the approach I took back then \u2013 finding (even stealing) things, cutting and pasting them into interesting combinations \u2013 is still at the centre of how I work, only these days it\u2019s pretty much bits of code rather than bits of vinyl. Over the years I\u2019ve stored these little bits of code (some I\u2019ve found, some I\u2019ve created myself) in Evernote, ready to be dialled up whenever I need them. \n\nSo when Drew got in touch and asked if I\u2019d like to do something for this year\u2019s 24 ways I thought it might be kind of cool to share with you a few of these snippets that I find really useful. Think of these as a kind of coding mix tape; but remember \u2013 don\u2019t just copy as is: play around, combine and remix them into other wonderful things. \n\nSome of this stuff is dirty; some of it will make hardcore programmers feel ill. For those people, remember this \u2013 while you were complaining about the syntax, I made something.\n\nCreate unique colours\n\nLet\u2019s start right away with something I stole. Well, actually it was given away at the time by Matt Biddulph who was then at Dopplr before Nokia destroyed it. Imagine you have thousands of words and you want to assign each one a unique colour. Well, Matt came up with a crazily simple but effective way to do that using an MD5 hash. Just encode said word using an MD5 hash, then take the first six characters of the string you get back to create a hexadecimal colour representation. \n\nI can\u2019t guarantee that it will be a harmonious colour palette, but it\u2019s still really useful. The thing I love the most about this technique is the left-field thinking of using an encryption system to create colours! 
Here\u2019s an example using JavaScript:\n\n// requires the MD5 library available at http://pajhome.org.uk/crypt/md5\n\n function MD5Hex(str){\n result = MD5.hex(str).substring(0, 6);\n return result;\n }\n\nMake something breathe using a sine wave\n\nI never paid attention in school, especially during double maths. As a matter of fact, the only time I received corporal punishment \u2013 several strokes of the ruler \u2013 was in maths class. Anyway, if they had shown me then how beautiful mathematics actually is, I might have paid more attention. Here\u2019s a little example of how a sine wave can be used to make something appear to breathe. \n\nI recently used this on an Arduino project where an LED ring surrounding a button would gently breathe. Because of that it felt much more inviting. I love mathematics.\n\nfor(int i = 0; i<360; i++){ \n float rad = DEG_TO_RAD * i;\n int sinOut = constrain((sin(rad) * 128) + 128, 0, 255);\n analogWrite(LED, sinOut);\n delay(10); \n}\n\nSnap position to grid\n\nThis is so elegant I love it, and it was shown to me by Gary Burgess, or Boom Boom as myself and others like to call him. It snaps a position, in this case the X-position, to a grid. Just define your grid size (say, twenty pixels) and you\u2019re good.\n\nsnappedXpos = floor( xPos / gridSize) * gridSize;\n\nCalculate the distance between two objects\n\nFor me, interaction design is about the relationship between two objects: you and another object; you and another person; or simply one object to another. How close these two things are to each other can be a handy thing to know, allowing you to react to that information within your design. Here\u2019s how to calculate the distance between two objects in a 2-D plane:\n\ndeltaX = round(p2.x-p1.x);\ndeltaY = round(p2.y-p1.y);\ndiff = round(sqrt((deltaX*deltaX)+(deltaY*deltaY)));\n\nFind the X- and Y-position between two objects\n\nWhat if you have two objects and you want to place something in-between them? A little bit of interruption and disruption can be a good thing. This small piece of code will allow you to place an object in-between two other objects:\n\n// set the position: 0.5 = half-way\t\n\nfloat position = 0.5;\nfloat x = x1 + (x2 - x1) *position; \nfloat y = y1 + (y2 - y1) *position; \n\nDistribute objects equally around a circle \t\n\nMore fun with maths, this time adding cosine to our friend sine. Let\u2019s say you want to create a circular navigation of arbitrary elements (yeah, Jakob, you heard), or you want to place images around a circle. Well, this piece of code will do just that. You can adjust the size of the circle by changing the distance variable and alter the number of objects with the numberOfObjects variable. Example below is for use in Processing.\n\n// Example for Processing available for free download at processing.org\n\nvoid setup() {\n\n size(800,800);\n int numberOfObjects = 12;\n int distance = 100;\n float inc = (TWO_PI)/numberOfObjects;\n float x,y;\n float a = 0;\n\n for (int i=0; i < numberOfObjects; i++) {\n x = (width/2) + sin(a)*distance;\n y = (height/2) + cos(a)*distance;\n ellipse(x,y,10,10);\n a += inc;\n\n }\n}\n\nUse modulus to make a grid\n\nThe modulus operator, represented by %, returns the remainder of a division. Fallen into a coma yet? Hold on a minute \u2013 this seemingly simple function is very powerful in lots of ways. 
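\n\nIf it\u2019s been a while since maths class, a couple of quick checks make the idea concrete \u2013 this is just the operator at work in plain JavaScript, nothing specific to the snippets that follow:\n\nconsole.log(10 % 3); // 1, because 10 = 3 * 3 + 1\nconsole.log(12 % 4); // 0, because 12 divides evenly by 4\nconsole.log(7 % 2); // 1, which is how you can spot an odd number\n\n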
At a simple level, you can use it to determine if a number is odd or even, great for creating alternate row colours in a table for instance:\n\nboolean checkForEven(int numberToCheck) {\n if (numberToCheck % 2 == 0) {\n return true;\n } else {\n return false; \n }\n}\n\nThat\u2019s all well and good, but here\u2019s a use of modulus that might very well blow your mind. Construct a grid with only a few lines of code. Again the example is in Processing but can easily be ported to any other language.\n\nvoid setup() {\n\nsize(600,600);\nint numItems = 120;\nint numOfColumns = 12;\nint xSpacing = 40;\nint ySpacing = 40;\nint totalWidth = xSpacing*numOfColumns;\n\nfor (int i=0; i < numItems; i++) {\n\nellipse(floor((i*xSpacing)%totalWidth),floor((i*xSpacing)/totalWidth)*ySpacing,10,10);\n\n}\n}\n\nNot all the bits of code I keep around are for actual graphical output. I also have things that are very utilitarian, but which I still consider part of the design process. Here\u2019s a couple of things that I\u2019ve found really handy lately in my design workflow. They may be a little specific, but I hope they demonstrate that it\u2019s not about working harder, it\u2019s about working smarter. \n\nMerge CSV files into one file\n\nRecently, I\u2019ve had to work with huge \u2013 about 1GB \u2013 CSV text files that I then needed to combine into one master CSV file so I could then process the data. Opening up each text file and then copying and pasting just seemed really dumb, not to mention slow, so I thought there must be a better way. After some Googling I found this command line script that would combine .txt files into one file and add a new line after each: \n\nawk 1 *.txt > finalfile.txt\n\nBut that wasn\u2019t what I was ideally after. I wanted to merge the CSV files, keeping the first row of the first file (the column headings) and then ignore the first row of subsequent files. Sure enough I found the answer after some Googling and it worked like a charm. Apologies to the original author but I can\u2019t remember where I found it, but you, sir or madam, are awesome. Save this as a shell script:\n\nFIRST=\n\nfor FILE in *.csv\n do\n exec 5<\"$FILE\" # Open file\n read LINE <&5 # Read first line\n [ -z \"$FIRST\" ] && echo \"$LINE\" # Print it only from first file\n FIRST=\"no\"\n\n cat <&5 # Print the rest directly to standard output\n exec 5<&- # Close file\n # Redirect stdout for this section into file.out \n\ndone > file.out\n\nCreate a symbolic link to another file or folder\n\nOftentimes, I\u2019ll find myself hunting through a load of directories to load a file to be processed, like a CSV file. Use a symbolic link (in the Terminal) to place a link on your desktop or wherever is most convenient and it\u2019ll save you loads of time. Especially great if you\u2019re going through a Java file dialogue box in Processing or something that doesn\u2019t allow the normal Mac dialog box or aliases.\n\ncd /DirectoryYouWantShortcutToLiveIn\nln -s /Directory/You/Want/ShortcutTo/ TheShortcut\n\nYou can do it, in the mix\n\nI hope you\u2019ve found some of the above useful and that they\u2019ve inspired a few ideas here and there. Feel free to tell me better ways of doing things or offer up any other handy pieces of code. 
Most of all though, collect, remix and combine the things you discover to make lovely new things.", "year": "2012", "author": "Brendan Dawes", "author_slug": "brendandawes", "published": "2012-12-17T00:00:00+00:00", "url": "https://24ways.org/2012/cut-copy-paste/", "topic": "code"} {"rowid": 86, "title": "Flashless Animation", "contents": "Animation in a Flashless world\n\nWhen I splashed down in web design four years ago, the first thing I wanted to do was animate a cartoon in the browser. I\u2019d been drawing comics for years, and I\u2019ve wanted to see them come to life for nearly as long. Flash animation was still riding high, but I didn\u2019t want to learn Flash. I wanted to learn JavaScript!\n\nSadly, animating with JavaScript was limiting and resource-intensive. My initial foray into an infinitely looping background did more to burn a hole in my CPU than amaze my friends (although it still looks pretty cool). And there was still no simple way to incorporate audio. The browser technology just wasn\u2019t there.\n\nThings are different now. CSS3 transitions and animations can do most of the heavy lifting and HTML5 audio can serve up the music and audio clips. You can do a lot without leaning on JavaScript at all, and when you lean on JavaScript, you can do so much more!\n\nIn this project, I\u2019m going to show you how to animate a simple walk cycle with looping audio. I hope this will inspire you to do something really cool and impress your friends. I\u2019d love to see what you come up with, so please send your creations my way at rachelnabors.com!\n\nNote: Because every browser wants to use its own prefixes with CSS3 animations, and I have neither the time nor the space to write all of them out, I will use the W3C standard syntaxes; that is, going prefix-less. You can implement them out of the box with something like Prefixfree, or you can add prefixes on your own. If you take the latter route, I recommend using Sass and Compass so you can focus on your animations, not copying and pasting.\n\nThe walk cycle\n\nWalk cycles are the \u201cHello world\u201d of animation. One of the first projects of animation students is to spend hours drawing dozens of frames to complete a simple loopable animation of a character walking.\n\nMost animators don\u2019t have to draw every frame themselves, though. They draw a few key frames and send those on to production animators to work on the between frames (or tween frames). This is meticulous, grueling work requiring an eye for detail and natural movement. This is also why so much production animation gets shipped overseas where labor is cheaper.\n\nLuckily, we don\u2019t have to worry about our frame count because we can set our own frames-per-second rate on the fly in CSS3. Since we\u2019re trying to impress friends, not animation directors, the inconsistency shouldn\u2019t be a problem. (Unless your friend is an animation director.)\n\nThis is a simple walk cycle I made of my comic character Tuna for my CSS animation talk at CSS Dev Conference this year:\n\n\n\nThe magic lies here:\n\nanimation: walk-cycle 1s steps(12) infinite;\n\nBreaking those properties down:\n\nanimation: <name> <duration> <timing-function> <iteration-count>;\n\nwalk-cycle is a simple @keyframes block that moves the background sprite on .tuna around:\n\n@keyframes walk-cycle { \n 0% {background-position: 0 0; }\n 100% {background-position: 0 -2391px;}\n}\n\nThe background sprite has exactly twelve images of Tuna that complete a full walk cycle. 
We\u2019re setting it to cycle through the entire sprite every second, infinitely. So why isn\u2019t the background image scrolling down the .tuna container? It\u2019s all down to the timing function steps(). Using steps() let us tell the CSS to make jumps instead of the smooth transitions you\u2019d get from something like linear. Chris Mills at dev.opera wrote in his excellent intro to CSS3 animation :\n\n\n\tInstead of giving a smooth animation throughout, [steps()] causes the animation to jump between a set number of steps placed equally along the duration. For example, steps(10) would make the animation jump along in ten equal steps. There\u2019s also an optional second parameter that takes a value of start or end. steps(10, start) would specify that the change in property value should happen at the start of each step, while steps(10, end) means the change would come at the end.\n\n\n(Seriously, go read his full article. I\u2019m not going to touch on half the stuff he does because I cannot improve on the basics any more than he already has.)\n\nThe background\n\nA cat walking in a void is hardly an impressive animation and certainly your buddy one cube over could do it if he chopped up some of those cat GIFs he keeps using in group chat. So let\u2019s add a parallax background! Yes, yes, all web designers signed a peace treaty to not abuse parallax anymore, but this is its true calling\u2014treaty be damned.\n\n\n\nAnd to think we used to need JavaScript to do this! It\u2019s still pretty CPU intensive but much less complicated. We start by splitting up the page into different layers, .foreground, .midground, and .background. We put .tuna in the .midground.\n\n.background has multiple background images, all set to repeat horizontally:\n\nbackground-image:\n url(background_mountain5.png),\n url(background_mountain4.png),\n url(background_mountain3.png),\n url(background_mountain2.png),\n url(background_mountain1.png);\nbackground-repeat: repeat-x;\n\nWith parallax, things in the foreground move faster than those in the background. Next time you\u2019re driving, notice how the things closer to you move out of your field of vision faster than something in the distance, like a mountain or a large building. We can imitate that here by making the background images on top (in the foreground, closer to us) wider than those on the bottom of the stack (in the distance).\n\nThe different lengths let us use one animation to move all the background images at different rates in the same interval of time: \n\nanimation: parallax_bg linear 40s infinite;\n\nThe shorter images have less distance to cover in the same amount of time as the longer images, so they move slower.\n\n\n\nLet\u2019s have a look at the background\u2019s animation:\n\n@keyframes parallax_bg { \n 0% {\n background-position: -2400px 100%, -2000px 100%, -1800px 100%, -1600px 100%, -1200px 100%;\n }\n 100% {\n background-position: 0 100%, 0 100%, 0 100%, 0 100%, 0 100%;\n }\n}\n\nAt 0%, all the background images are positioned at the negative value of their own widths. Then they start moving toward background-position: 0 100%. If we wanted to move them in the reverse direction, we\u2019d remove the negative values at 0% (so they would start at 2400px 100%, 2000px 100%, etc.). 
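\n\nSpelled out as a sketch, that reversed version might look like this \u2013 the parallax_bg_reverse name is purely illustrative, and the five widths are the same ones used above:\n\n@keyframes parallax_bg_reverse { \n 0% {\n background-position: 2400px 100%, 2000px 100%, 1800px 100%, 1600px 100%, 1200px 100%;\n }\n 100% {\n background-position: 0 100%, 0 100%, 0 100%, 0 100%, 0 100%;\n }\n}\n\n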
Try changing the values in the codepen above or changing background-repeat to none to see how the images play together.\n\n.foreground and .midground operate on the same principles, only they use single background images.\n\nThe music\n\nAfter finishing the first draft of my original walk cycle, I made a GIF with it and posted it on YTMND with some music from the movie Paprika, specifically the track \u201cThe Girl in Byakkoya.\u201d After showing it to some colleagues in my community, it became clear that this was a winning combination sure to drive away dresscode blues. So let\u2019s use HTML5 to get a clip of that music looping in there!\n\nWarning, there is sound. Please adjust your volume or apply headphones as needed.\n\n\n\nWe\u2019re using HTML5 audio\u2019s loop and autoplay abilities to automatically play and loop a sound file on page load:\n\n<audio loop autoplay>\n <source src=\"http://music.com/clip.mp3\" />\n</audio>\n\nUnfortunately, you may notice there is a small pause between loops. HTML5 audio, thou art half-baked still. Let\u2019s hope one day the Web Audio API will be able to help us out, but until things improve, we\u2019ll have to hack our way around these shortcomings.\n\nTurns out there\u2019s a handy little script called seamlessLoop.js which we can use to patch this. Mind you, if we were really getting crazy with the Cheese Whiz, we\u2019d want to get out big guns like sound.js. But that\u2019d be overkill for a mere loop (and explaining the Web Audio API might bore, rather than impress your friends)!\n\nInstalling seamlessLoop.js will get rid of the pause, and now our walk cycle is complete.\n\n(I\u2019ve done some very rough sniffing to see if the browser can play MP3 files. If not, we fall back to using .ogg formatted clips (Opera and Firefox users, you\u2019re welcome).)\n\nReally impress your friends by adding a run cycle\n\nSo we have music, we have a walk cycle, we have parallax. It will be a snap to bring them all together and have a simple, endless animation. But let\u2019s go one step further and knock the socks off our viewers by adding a run cycle.\n\nThe run cycle\n\nTacking a run cycle on to our walk cycle will require a third animation sequence: a transitional animation of Tuna switching from walking to running. I have added all these to the sprite:\n\n\n\nLet\u2019s work on getting that transition down. We\u2019re going to use multiple animations on the same .tuna div, but we\u2019re going to kick them off at different intervals using animation-delay\u2014no JavaScript required! Isn\u2019t that magical?\n\n\n\nIt requires a wee bit of math (not much, it doesn\u2019t hurt) to line them up. We want to:\n\n\n\tLoop the walk animation twice\n\tPlay the transitional cycle once (it has a finite beginning and end perfectly drawn to pick up between the last frame of the walk cycle and the first frame of the run cycle\u2014no looping this baby)\n\tRUN FOREVER.\n\n\nUsing the pattern animation: <name> <duration> <timing-function> <delay> <iteration-count>, here\u2019s what that looks like:\n\nanimation:\n walk-cycle 1s steps(12) 2,\n walk-to-run .75s steps(12) 2s 1,\n run-cycle .75s steps(13) 2.75s infinite;\n\nI played with the times to get make the movement more realistic. You may notice that the running animation looks smoother than the walking animation. That\u2019s because it has 13 keyframes running over .75 second instead of 12 running in one second. Remember, professional animation studios use super-high frame counts. 
This little animation isn\u2019t even up to PBS\u2019s standards!\n\nThe music: extended play with HTML5 audio sprites\n\nMy favorite part in the The Girl in Byakkoya is when the calm opening builds and transitions into a bouncy motif. I want to start with Tuna walking during the opening, and then loop the running and bounciness together for infinity.\n\n\n\tThe intro lasts for 24 seconds, so we set our 1 second walk cycle to run for 24 repetitions: \nwalk-cycle 1s steps(12) 24\n\tWe delay walk-to-run by 24 seconds so it runs for .75 seconds before\u2026\n\tWe play run-cycle at 24.75 seconds and loop it infinitely\n\n\nFor the music, we need to think of it as two parts: the intro and the bouncy loop. We can do this quite nicely with audio sprites: using one HTML5 audio element and using JavaScript to change the play head location, like skipping tracks with a CD player. Although this technique will result in a small gap in music shifts, I think it\u2019s worth using here to give you some ideas.\n\n// Get the audio element\nvar byakkoya = document.querySelector('audio');\n// create function to play and loop audio\nfunction song(a){\n //start playing at 0\n a.currentTime = 0;\n a.play();\n //when we hit 64 seconds...\n setTimeout(function(){\n // skip back to 24.5 seconds and keep playing...\n a.currentTime = 24.55;\n // then loop back when we hit 64 again, or every 59.5 seconds.\n setInterval(function(){\n a.currentTime = 24.55;\n },39450);\n },64000);\n}\n\nThe load screen\n\nI\u2019ve put it off as long as I can, but now that the music and the CSS are both running on their own separate clocks, it\u2019s imperative that both images and music be fully downloaded and ready to run when we kick this thing off. So we need a load screen (also, it\u2019s nice to give people a heads-up that you\u2019re about to blast them with music, no matter how wonderful that music may be).\n\nSince the two timers are so closely linked, we\u2019d best not run the animations until we run the music:\n\n* { animation-play-state: paused; }\n\nanimation-play-state can be set to paused or running, and it\u2019s the most useful thing you will learn today.\n\nFirst we use an event listener to see when the browser thinks we can play through from the beginning to end of the music without pause for buffering:\n\nbyakkoya.addEventListener(\"canplaythrough\", function () { });\n\n(More on HTML5 audio\u2019s media events at HTML5doctor.com)\n\nInside our event listener, I use a bit of jQuery to add class of .playable to the body when we\u2019re ready to enable the play button:\n\n$(\"body\").addClass(\"playable\");\n $(\"#play-me\").html(\"Play me.\").click(function(){\n song(byakkoya);\n $(\"body\").addClass(\"playing\");\n });\n\nThat .playing class is special because it turns on the animations at the same time we start playing the song:\n\n.playing * { animation-play-state: running; }\n\nThe background\n\nWe\u2019re almost done here! When we add the background, it needs to speed up at the same time that Tuna starts running. The music picks up speed around 24.75 seconds in, and so we\u2019re going to use animation-delay on those backgrounds, too.\n\nThis will require some math. If you try to simply shorten the animation\u2019s duration at the 24.75s mark, the backgrounds will, mid-scroll, jump back to their initial background positions to start the new animation! Argh! 
So let\u2019s make a new @keyframe and calculate where the background position would be just before we speed up the animation.\n\nHere\u2019s the formula:\n\nnew 0% value = delay \u00f7 old duration \u00d7 length of image\n\nnew 100% value = new 0% value + length of image\n\nHere\u2019s the formula put to work on a smaller scale:\n\n\n\nVoil\u00e0! The finished animation!\n\n\n\nI\u2019ve always wanted to bring my illustrations to life. Then I woke up one morning and realized that I had all the tools to do so in my browser and in my head. Now I have fallen in love with Flashless animation.\n\nI\u2019m sure there will be detractors who say HTML wasn\u2019t meant for this and it\u2019s a gross abuse of the DOM! But I say that these explorations help us expand what we expect from devices and software and challenge us in good ways as artists and programmers. The browser might not be the most appropriate place for animation, but is certainly a fun place to start.\n\nThere is so much you can do with the spec implemented today, and so much of the territory is still unexplored. I have not yet begun to show you everything. In eight months I expect this demo will represent the norm, not the bleeding edge. I look forward to seeing the wonderful things you create.\n\n(Also, someone, please, do something about that gappy HTML5 audio looping. It\u2019s a crying shame!)", "year": "2012", "author": "Rachel Nabors", "author_slug": "rachelnabors", "published": "2012-12-06T00:00:00+00:00", "url": "https://24ways.org/2012/flashless-animation/", "topic": "code"} {"rowid": 89, "title": "Direction, Distance and Destinations", "contents": "With all these new smartphones in the hands of lost and confused owners, we need a better way to represent distances and directions to destinations. The immediate examples that jump to mind are augmented reality apps which let you see another world through your phone\u2019s camera. While this is interesting, there is a simpler way: letting people know how far away they are and if they are getting warmer or colder. \n\nIn the app world, you can easily tap into the phone\u2019s array of sensors such as the GPS and compass, but what people rarely know is that you can do the same with HTML. The native versus web app debate will never subside, but at least we can show you how to replicate some of the functionality progressively in HTML and JavaScript.\n\nIn this tutorial, we\u2019ll walk through how to create a simple webpage listing distances and directions of a few popular locations around the world. We\u2019ll use JavaScript to access the device\u2019s geolocation API and also attempt to access the compass to get a heading. Both of these APIs are documented, to be included in the W3C geolocation API specification, and can be used on both desktop and mobile devices today.\n\nTo get started, we need a list of a few locations around the world. I have chosen the highest mountain peak on each continent so you can see a diverse set of distances and directions. 
\n\nMountain | \u00b0Latitude | \u00b0Longitude\nKilimanjaro | -3.075833 | 37.353333\nVinson Massif | -78.525483 | -85.617147\nPuncak Jaya | -4.078889 | 137.158333\nEverest | 27.988056 | 86.925278\nElbrus | 43.355 | 42.439167\nMount McKinley | 63.0695 | -151.0074\nAconcagua | -32.653431 | -70.011083\n\nSource: Wikipedia \n\nWe can put those into an HTML list to be styled and accessed by JavaScript to create some distance and directions calculations.\n\nThe next thing we need to do is check to see if the browser and operating system have geolocation support. To do this we test to see if the function is available or not using a single JavaScript if statement.\n\n<script>\n// If this is true, then the method is supported and we can try to access the location\nif (navigator.geolocation) {\n\tnavigator.geolocation.getCurrentPosition(geo_success, geo_error);\n}\n</script>\n\nThe if statement will be false if geolocation support is not present, and then it is up to you to do something else instead as a fallback. For this example, we\u2019ll do nothing since our page should work as is and only get progressively better if more functionality is available. \n\nThe if statement will be true if there is support and therefore will continue inside the curly brackets to try to get the location. This should prompt the reader to accept or deny the request to get their location. If they say no, the second function callback is processed, in this case a function called geo_error; whereas if the location is available, it fires the geo_success function callback.\n\nThe function geo_error(){ } isn\u2019t that exciting. You can handle this in any way you see fit. The success function is more interesting. We get a position object passed into the function which contains a series of exciting attributes, namely the latitude and longitude of the device\u2019s current location.\n\nfunction geo_success(position){\n\tgLat = position.coords.latitude;\n\tgLon = position.coords.longitude;\n}\n\nNow, in the variables gLat and gLon we have the user\u2019s approximate geographical position. We can use this information to start to calculate some distances between where they are and all the destinations.\n\nAt the time of writing, you can also get position.coords.heading, but on Windows and iOS devices this returned NULL. 
In the future, if and when this is supported, this is also where you can easily grab the compass information.\n\nInside the geo_success function, we want to loop through the HTML to get all of the mountain peaks\u2019 latitudes and longitudes and compute the distance.\n\n...\n$('.geo').each(function(){\n\t// Get the lat/lon from the HTML\n\ttLat = $(this).find('.lat').html()\n\ttLon = $(this).find('.lon').html()\n\n\t// compute the distances between the current location and this points location\n\tdist = distance(tLat,tLon,gLat,gLon);\n\n\t// set the return values into something useful\n\td = parseInt(dist[0]*10)/10;\n\ta = parseFloat(dist[1]);\n\n\t// display the value in the HTML and style the arrow\n\t$(this).find('.distance').html(d+' km away');\n\t$(this).find('.direction').css('-webkit-transform','rotate(-' + a + 'deg)');\n\n\t// store the arc for later use if compass is available\n\t$(this).attr('data-arc',a);\n}\n\nIn the variable d we have the distance between the current location and the location of the mountain peak based on the Haversine Formula. The variable a is the arc, which has a value from 0 to 359.99. This will be useful later if we have compass support. Given these two values we have a distance and a heading to style the HTML.\n\nThe next thing we want to do is check to see if the device has a compass and then get access to the the current heading. As we\u2019ll see, there are several ways to do this, some of which work on certain devices but not others. The W3C geolocation spec says that, along with the coordinates, there are several other attributes: accuracy; altitude; and heading. Heading is the direction to true north, which is different than magnetic north! WebKit and Windows return NULL for the heading value, but WebKit has an experimental method to fetch the heading. If you get into accessing these sensors, you\u2019ll have to try to catch a few of these methods to finally get a value. Assuming you do, we can move on to the more interesting display opportunities.\n\nIn an ideal world, this would succeed and set a variable we\u2019ll call compassHeading to get a value between 0 and 359.99 degrees. Now we know which direction north is, we also know the direction relative to north of the path to our destination, so we can can subtract the two values to get an arrow to display on the screen. But we\u2019re not finished yet: we also need to get the device\u2019s orientation (landscape or portrait) and subtract the correct amount from the angle for the arrow. Once we have a value, we can use CSS to rotate the arrow the correct number of degrees.\n\n-webkit-transform: rotate(-180deg)\n\nNot all devices support a standard way to access compass information, so in the meantime we need to use a work around. On iOS, you can use the experimental event method e.webkitCompassHeading. 
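\n\nBefore we get to that event listener, one aside: the distance() helper the earlier loop leans on isn\u2019t shown in this article. Assuming it returns an array of kilometres and an initial bearing in degrees from north \u2013 which is what the d and a values above expect \u2013 a Haversine-based sketch might look like this:\n\n// hypothetical sketch of distance(): Haversine distance plus initial bearing,\n// using the same argument order as the call above (target first, current location second)\nfunction distance(lat1, lon1, lat2, lon2) {\n\tvar R = 6371, // mean radius of the Earth in km\n\t\ttoRad = Math.PI / 180,\n\t\tdLat = (lat2 - lat1) * toRad,\n\t\tdLon = (lon2 - lon1) * toRad,\n\t\th = Math.sin(dLat / 2) * Math.sin(dLat / 2) +\n\t\t\tMath.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *\n\t\t\tMath.sin(dLon / 2) * Math.sin(dLon / 2),\n\t\td = 2 * R * Math.atan2(Math.sqrt(h), Math.sqrt(1 - h)),\n\t\ty = Math.sin(dLon) * Math.cos(lat2 * toRad),\n\t\tx = Math.cos(lat1 * toRad) * Math.sin(lat2 * toRad) -\n\t\t\tMath.sin(lat1 * toRad) * Math.cos(lat2 * toRad) * Math.cos(dLon),\n\t\tbearing = (Math.atan2(y, x) / toRad + 360) % 360;\n\treturn [d, bearing];\n}\n\n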
We want the compass to update in real time as the device is moved around, so we\u2019ll put this inside an event listener.\n\nwindow.addEventListener('deviceorientation', function(e) {\n\t// Loop through all the locations on the page\n\t$('.geo').each(function(){\n\t\t// get the arc value from north we computed and stored earlier\n\t\tdestination_arc = parseInt($(this).attr('data-arc'))\n\t\tcompassHeading = e.webkitCompassHeading + window.orientation + destination_arc;\n\t\t// find the arrow element and rotate it accordingly\n\t\t$(this).find('.direction').css('-webkit-transform','rotate(-' + compassHeading + 'deg)');\t\t\n\t}\n}\n\nAs the device is rotated, the compass arrow will constantly be updated. If you want to see an example, you can have a look at this page which shows the distances to all the peaks on each continent.\n\nWith progressive enhancement, we slowly layer on additional functionality as we go. The reader will first see the list of locations with a latitude and longitude. If the device is capable and permissions allow, it will then compute the distance. If a compass is available, with the correct permissions it will then add the final layer which is direction.\n\nYou should consider this code a stub for your projects. If you are making a hyperlocal webpage with restaurant locations, for example, then consider adding these features. Knowing not only how far away a place is, but also the direction can be hugely important, and since the compass is always active, it acts as a guide to the location. \n\nFuture developments\n\nImprovements to this could include setting a timer and recalling the navigator.geolocation.getCurrentPosition() function and updating the distances. I chose very distant mountains so kilometres made sense, but you can divide again by 1,000 to convert to metres if you are dealing with much nearer places. Walking or driving would change the distances so the ability to refresh would be important. \n\nIt is outside the scope of this article, but if you manage to get this HTML to work offline, then you can make a nice web app which sits on your devices\u2019 homescreens and works even without an internet connection. This could be ideal for travellers in an unknown city looking for your destination. Just with offline storage, base64 encoding and data URIs, it is possible to embed plenty of design and functionality into a small offline webpage.\n\nNow you know how to use JavaScript to look up a destination\u2019s location and figure out the distance and direction \u2013 never get lost again.", "year": "2012", "author": "Brian Suda", "author_slug": "briansuda", "published": "2012-12-19T00:00:00+00:00", "url": "https://24ways.org/2012/direction-distance-and-destinations/", "topic": "code"} {"rowid": 91, "title": "Infinite Canvas: Moving Beyond the Page", "contents": "Remember Web 2.0? I do. In fact, that phrase neatly bifurcates my life on the internet. Pre-2.0, I was occupied by chatting on AOL and eventually by learning HTML so I could build sites on Geocities. Around 2002, however, I saw a WYSIWYG demo in Dreamweaver. The instructor was dragging boxes and images around a canvas. With a few clicks he was able to build a dynamic, single-page interface. Coming from the world of tables and inline HTML styles, I was stunned.\n\nAs I entered college the next year, the web was blossoming: broadband, Wi-Fi, mobile (proud PDA owner, right here), CSS, Ajax, Bloglines, Gmail and, soon, Google Maps. I was a technology fanatic and a hobbyist web developer. 
For me, the web had long been informational. It was now rapidly becoming something else, something more: sophisticated, presentational, actionable.\n\nIn 2003 we watched as the internet changed. The predominant theme of those early Web 2.0 years was the withering of Internet Explorer 6 and the triumph of web standards. Upon cresting that mountain, we looked around and collectively breathed the rarefied air of pristine HMTL and CSS, uncontaminated by toxic hacks and forks \u2013 only to immediately begin hurtling down the other side at what is, frankly, terrifying speed.\n\nTen years later, we are still riding that rocket. Our days (and nights) are spent cramming for exams on CSS3 and RWD and Sass and RESS. We are the proud, frazzled owners of tiny pocket computers that annihilate the best laptops we could have imagined, and the architects of websites that are no longer restricted to big screens nor even segregated by device. We dragoon our sites into working any time, anywhere. At this point, we can hardly ask the spec developers to slow down to allow us to catch our breath, nor should we. It is, without a doubt, a most wonderful time to be a web developer.\n\nBut despite the newfound luxury of rounded corners, gradients, embeddable fonts, low-level graphics APIs, and, glory be, shadows, the canyon between HTML and native appears to be as wide as ever. The improvements in HTML and CSS have, for the most part, been conveniences rather than fundamental shifts. What I\u2019d like to do now, if you\u2019ll allow me, is outline just a few of the remaining gaps that continue to separate web sites and applications from their native companions.\n\nWhat I\u2019d like for Christmas\n\nThere is one irritant which is the grandfather of them all, the one from which all others flow and have their being, and it is, simply, the page refresh. That\u2019s right, the foundational principle of the web is our single greatest foe. To paraphrase a patron saint of designers everywhere, if you see a page refresh, we blew it.\n\nThe page refresh brings with it, of course, many noble and lovely benefits: addressability, for one; and pagination, for another. (See also caching, resource loading, and probably half a dozen others.) Still, those concerns can be answered (and arguably answered more compellingly) by replacing the weary page with the young and hearty document. Flash may be dead, but it has many lessons yet to bequeath.\n\nPreparing a single document when the site loads allows us to engage the visitor in a smooth and engrossing experience. We have long known this, of course. Twitter was not the first to attempt, via JavaScript, to envelop the user in a single-page application, nor the first to abandon it. Our shared task is to move those technologies down the stack, to make them more primitive, so that the next Twitter can be built with the most basic combination of HTML and CSS rather than relying on complicated, slow, and unreliable scripted solutions.\n\nSo, let\u2019s take a look at what we can do, right now, that we might have a better idea of where our current tools fall short.\n\nA print magazine in HTML clothing\n\nLike many others, I suspect, one of my earliest experiences with publishing was laying out newsletters and newspapers on a computer for print. If you\u2019ve ever used InDesign or Quark or even Microsoft Publisher, you\u2019ll remember reflowing content from page to page. The advent of the internet signaled, in many ways, the abandonment of that model. 
Articles were no longer constrained by the physical limitations of paper. In shedding our chains, however, it is arguable that we\u2019ve lost something useful. We had a self-contained and complete package, a closed loop. It was a thing that could be handled and finished, and doing so provided a sense of accomplishment that our modern, infinitely scrolling, ever-fractal web of content has stolen.\n\nFor our purposes today, we will treat 24 ways as the online equivalent of that newspaper or magazine. A single year\u2019s worth of articles could easily be considered an issue. Right now, navigating between articles means clicking on the article you\u2019d like to view and being taken to that specific address via a page reload. If Drew wanted to, it wouldn\u2019t be difficult to update the page in place (via JavaScript) and change the address (again via JavaScript with the History API) to reflect the new content found at the new location. But what if Drew wanted to do that without JavaScript? And what if he wanted the site to not merely load the content but actually whisk you along the page in a compelling and delightful way, \u00e0 la the Mag+ demo we all saw a few years ago when the iPad was first introduced? Uh, no.\n\nWe\u2019re all familiar with websites that have attempted to go beyond the page by weaving many chunks of content together into a large document and for good reason. There is tremendous appeal in opening and exploring the canvas beyond the edges of our screens.\n\nIn one rather straightforward example from last year, Mozilla contacted Full Stop to build a website promoting Aza Raskin\u2019s proposal for a set of Creative Commons-style privacy icons. Like a lot of the sites we build (including our own), the amount of information we were presenting was minimal. In these instances, we encourage our clients to consider including everything on a single page. The result was a horizontally driven site that was, if not whimsical, at least clever and attractive to the intended audience. An experience that is taken for granted when using device-native technology is utterly, maddeningly impossible to replicate on the web without jumping through JavaScript hoops.\n\nIn another, more complex example, we again had the pleasure of working with Aza earlier this year, this time on a redesign of the Massive Health website. Our assignment was to design and build a site that communicated Massive\u2019s commitment to modern personal health. The site had to be visually and interactively stunning while maintaining a usable and clear interface for the casual visitor. Our solution was to extend the infinite company logo into a ribbon that carried the visitor through the site narrative. It also meant we\u2019d be asking the browser to accommodate something it was never designed to handle: a non-linear design. (Be sure to play around. There\u2019s a lot going on under the hood. We were also this close to a ZUI, if WebKit didn\u2019t freak out when pages were scaled beyond 10\u00d7.) Despite the apparent and deliberate design simplicity, the techniques necessary to implement it are anything but. From updating the URL to moving the visitor from section to section, we\u2019re firmly in JavaScript territory. And that\u2019s a shame.\n\nWhat can we do?\n\nWe might not be able to specify these layouts in HTML and CSS just yet, but that doesn\u2019t mean we can\u2019t learn a few new tricks while we wait. 
Let\u2019s see how close we can come to recreating the privacy icons design, the Massive design, or the Mag+ design without resorting to JavaScript.\n\nA horizontally paginated site\n\nThe first thing we\u2019re going to need is the concept of a page within our HTML document. Using plain old HTML and CSS, we can stack a series of <div>s sideways (with a little assist from our new friend, the viewport-width unit, not that he was strictly necessary). All we need to know is how many pages we have. (And, boy, wouldn\u2019t it be nice to be able to know that without having to predetermine it or use JavaScript?)\n\n.window {\noverflow: hidden;\n width: 100%;\n}\n.pages {\n width: 200vw;\n}\n.page {\n float: left;\n overflow: hidden;\n width: 100vw;\n}\n\nIf you look carefully, you\u2019ll see that the conceit we\u2019ll use in the rest of the demos is in place. Despite the document containing multiple pages, only one is visible at any given time. This allows us to keep the user focused on the task (or content) at hand.\n\nBy the way, you\u2019ll need to use a modern, WebKit-based browser for these demos. I recommend downloading the WebKit nightly builds, Chrome Canary, or being comfortable with setting flags in Chrome.\n\nA horizontally paginated site, with transitions\n\nAh, here\u2019s the rub. We have functional navigation, but precious few cues for the user. It\u2019s not much good shoving the visitor around various parts of the document if they don\u2019t get the pleasant whooshing experience of the journey. You might be thinking, what about that new CSS selector, target-something\u2026? Well, my friend, you\u2019re on the right track. Let\u2019s test it. We\u2019re going to need to use a bit of sleight of hand. While we\u2019d like to simply offset the containing element by the number of pages we\u2019re moving (like we did on Massive), CSS alone can\u2019t give us that information, and that means we\u2019re going to need to fake it by expanding and collapsing pages as you navigate. Here are the bits we\u2019re going to need:\n\n.page {\n -webkit-transition: width 1s; // Naturally you're going to want to include all the relevant prefixes here\n float: left;\n left: 0;\n overflow: hidden;\n position: relative;\n width: 100vw;\n}\n.page:not(:target) {\n width: 0;\n}\n\nAh, but we\u2019re not fooling anyone with that trick. As soon as you move beyond a single page, the visitor\u2019s disbelief comes tumbling down when the linear page transitions are unaffected by the distance the pages are allegedly traveling. And you may have already noticed an even more fatal flaw: I secretly linked you to the first page rather than the unadorned URL. If you visit the same page with no URL fragment, you get a blank screen. Sure, we could force a redirect with some server-side trickery, but that feels like cheating. Perhaps if we had the CSS4 subject selector we could apply styles to the parent based on the child being targeted by the URL. We might also need a few more abilities, like determining the total number of pages and having relative sibling selectors (e.g. nth-sibling), but we\u2019d sure be a lot closer.\n\nA horizontally paginated site, with transitions \u2013 no cheating\n\nWell, what other cards can we play? How about the checkbox hack? Sure, it\u2019s a garish trick, but it might be the best we can do today. Check it out. 
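The markup that trick assumes pairs each page with a hidden radio button, plus labels to switch between them. It looks roughly like this sketch (the names are illustrative rather than lifted from the demo):

<label for="page-1">One</label>
<label for="page-2">Two</label>

<input type="radio" name="page" id="page-1" checked>
<div class="page">First page content</div>

<input type="radio" name="page" id="page-2">
<div class="page">Second page content</div>

And the styles that drive it: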
\n\nlabel {\n cursor: pointer;\n}\ninput {\n display: none;\n}\ninput:not(:checked) + .page {\n max-height: 100vh;\n width: 0;\n}\n\nFinally, we can see the first page thanks to the state we are able to set on the appropriate radio button. Of course, now we don\u2019t have URLs, so maybe this isn\u2019t a winning plan after all. While our HTML and CSS toolkit may feel primitive at the moment, we certainly don\u2019t want to sacrifice the addressability of the web. If there\u2019s one bedrock principle, that\u2019s it.\n\nA horizontally paginated site, with transitions \u2013 no cheating and a gorgeous homepage\n\nGorgeous may not be the right word, but our little magazine is finally shaping up. Thanks to the CSS regions spec, we\u2019ve got an exciting new power, the ability to begin an article in one place and bend it to our will. (Remember, your everyday browser isn\u2019t going to work for these demos. Try the WebKit nightly build to see what we\u2019re talking about.) As with the rest of the examples, we\u2019re clearly abusing these features. Off-canvas layouts (you can thank Luke Wroblewski for the name) are simply not considered to be normal patterns\u2026 yet.\n\nHere\u2019s a quick look at what\u2019s going on:\n\n.excerpt-container {\n float: left;\n padding: 2em;\n position: relative;\n width: 100%;\n}\n.excerpt {\n height: 16em;\n}\n.excerpt_name_article-1,\n.page-1 .article-flow-region {\n -webkit-flow-from: article-1;\n}\n.article-content_for_article-1 {\n -webkit-flow-into: article-1;\n}\n\nThe regions pattern is comprised of at least three components: a beginning; an ending; and a source. Using CSS, we\u2019re able to define specific elements that should be available for the content to flow through. If magazine-style layouts are something you\u2019re interested in learning more about (and you should be), be sure to check out the great work Adobe has been doing.\n\nLooking forward, and backward\n\nAs designers, builders, and consumers of the web, we share a desire to see the usability and enjoyability of websites continue to rise. We are incredibly lucky to be working in a time when a three-month-old website can be laughably outdated. Our goal ought to be to improve upon both the weaknesses and the strengths of the web platform. We seek not only smoother transitions and larger canvases, but fine-grained addressability. Our URLs should point directly and unambiguously to specific content elements, be they pages, sections, paragraphs or words. Moreover, off-screen design patterns are essential to accommodating and empowering the multitude of devices we use to access the web. We should express the desire that interpage links take advantage of the CSS transitions which have been put to such good effect in every other aspect of our designs. Transitions aren\u2019t just nice to have, they\u2019re table stakes in the highly competitive world of native applications. \n\nThe tools and technologies we have right now allow us to create smart, beautiful, useful webpages. With a little help, we can begin removing the seams and sutures that bind the web to an earlier, less sophisticated generation.", "year": "2012", "author": "Nathan Peretic", "author_slug": "nathanperetic", "published": "2012-12-21T00:00:00+00:00", "url": "https://24ways.org/2012/infinite-canvas-moving-beyond-the-page/", "topic": "code"} {"rowid": 92, "title": "Redesigning the Media Query", "contents": "Responsive web design is showing us that designing content is more important than designing containers. 
But if you\u2019ve given RWD a serious try, you know that shifting your focus from the container is surprisingly hard to do. There are many factors and\ninstincts working against you, and one culprit is a perpetrator you\u2019d least suspect.\n\nThe media query is the ringmaster of responsive design. It lets us establish the rules of the game and gives us what we need most: control. However, like some kind of evil double agent, the media query is actually working against you.\n\nIts very nature diverts your attention away from content and forces you to focus on the container.\n\nThe very act of choosing a media query value means choosing a screen size.\n\nLook at the history of the media query\u2014it\u2019s always been about the container. Values like screen, print, handheld and tv don\u2019t have anything to do with content. The modern media query lets us choose screen dimensions, which is great because it makes RWD possible. But it\u2019s still the act of choosing something that is completely unpredictable.\n\nContent should dictate our breakpoints, not the container. In order to get our focus back to the only thing that matters, we need a reengineered media query\u2014one that frees us from thinking about screen dimensions. A media query that works for your content, not the window. Fortunately, Sass 3.2 is ready and willing to take on this challenge.\n\nThinking in Columns\n\nFluid grids never clicked for me. I feel so disoriented and confused by their squishiness. Responsive design demands their use though, right?\n\nI was ready to surrender until I found a grid that turned my world upright again. The Frameless Grid by Joni Korpi demonstrates that column and gutter sizes can stay fixed. As the screen size changes, you simply add or remove columns to accommodate. This made sense to me and armed with this concept I was able to give Sass the first component it needs to rewrite the media query: fixed column and gutter size variables.\n\n$grid-column: 60px;\n$grid-gutter: 20px;\n\nWe\u2019re going to want some resolution independence too, so let\u2019s create a function that converts those nasty pixel values into ems.\n\n@function em($px, $base: $base-font-size) {\n\t@return ($px / $base) * 1em;\n}\n\nWe now have the components needed to figure out the width of multiple columns in ems. Let\u2019s put them together in a function that will take any number of columns and return the fixed width value of their size.\n\n@function fixed($col) {\n\t@return $col * em($grid-column + $grid-gutter)\n}\n\nWith the math in place we can now write a mixin that takes a column count as a parameter, then generates the perfect media query necessary to fit that number of columns on the screen. We can also build in some left and right margin for our layout by adding an additional gutter value (remembering that we already have one gutter built into our fixed function).\n\n@mixin breakpoint($min) {\n\t@media (min-width: fixed($min) + em($grid-gutter)) {\n\t\t@content\n\t}\n}\n\nAnd, just like that, we\u2019ve rewritten the media query. Instead of picking a minimum screen size for our layout, we can simply determine the number of columns needed. 
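To make the arithmetic concrete, assuming a 16px $base-font-size (which is what the compiled output later in this article implies): one column plus one gutter is 80px, or 5em, so asking for six columns gives us this,

@include breakpoint(6) { /* styles */ }

which compiles to a media query of six times 5em plus the extra 1.25em outer gutter:

@media (min-width: 31.25em) { /* styles */ }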
Let\u2019s add a wrapper class so that we can center our content on the screen.\n\n@mixin breakpoint($min) {\n @media (min-width: fixed($min) + em($grid-gutter)) {\n\t.wrapper {\n\t\twidth: fixed($min) - em($grid-gutter);\n\t\tmargin-left: auto; margin-right: auto;\n\t}\n\t@content\n }\n}\n\nDesigning content with a column count gives us nice, easy, whole numbers to work with. Sizing content, sidebars or widgets is now as simple as specifying a single-digit number.\n\n@include breakpoint(8) {\n\t.main { width: fixed(5); }\n\t.sidebar { width: fixed(3); }\n}\n\nThose four lines of Sass just created a responsive layout for us. When the screen is big enough to fit eight columns, it will trigger a fixed width layout. And give widths to our main content and sidebar. The following is the outputted CSS\u2026\n\n@media (min-width: 41.25em) {\n .wrapper {\n width: 38.75em;\n margin-left: auto; margin-right: auto;\n }\n .main { width: 25em; }\n .sidebar { width: 15em; }\n}\n\nDemo\n\nI\u2019ve created a Codepen demo that demonstrates what we\u2019ve covered so far. I\u2019ve added to the demo some grid classes based on Griddle by Nicolas Gallagher to create a floatless layout. I\u2019ve also added a CSS gradient overlay to help you visualize columns. Try changing the column variable sizes or the breakpoint includes to see how the layout reacts to different screen sizes.\n\nResponsive Images\n\nResponsive images are a serious problem, but I\u2019m excited to see the community talk so passionately about a solution. Now, there are some excellent stopgaps while we wait for something official, but these solutions require you to mirror your breakpoints in JavaScript or HTML. This poses a serious problem for my Sass-generated media queries, because I have no idea what the real values of my breakpoints are anymore. For responsive images to work, JavaScript needs to recognize which media query is active so that proper images can be loaded for that layout.\n\nWhat I need is a way to label my breakpoints. Fortunately, people much smarter than I have figured this out. Jeremy Keith devised a labeling method by using CSS-generated content as the storage method for breakpoint labels. We can use this technique in our breakpoint mixin by passing a label as another argument.\n\n@include breakpoint(8, 'desktop') { /* styles */ }\n\nSass can take that label and use it when writing the corresponding media query. We just need to slightly modify our breakpoint mixin.\n\n@mixin breakpoint($min, $label) {\n @media (min-width: fixed($min) + em($grid-gutter)) {\n\n // label our mq with CSS generated content\n\tbody::before { content: $label; display: none; }\n\n\t.wrapper {\n\t\twidth: fixed($min) - em($grid-gutter);\n\t\tmargin-left: auto; margin-right: auto;\n\t}\n\t@content\n }\n}\n\nThis allows us to label our breakpoints with a user-friendly string. Now that our media queries are defined and labeled, we just need JavaScript to step in and read which label is active.\n\n// get css generated label for active media query\nvar label = getComputedStyle(document.body, '::before')['content'];\n\nJavaScript now knows which layout is active by reading the label in the current media query\u2014we just need to match that label to an image. 
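One small wrinkle: the computed content value usually comes back with its quote marks still attached, so you get the string "desktop" complete with quotes rather than a bare keyword. Stripping them before using the label as a lookup key keeps things predictable (a small sketch):

// get css generated label for active media query, minus the quotes
var label = getComputedStyle(document.body, '::before')['content'];
label = label.replace(/['"]/g, '');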
I prefer to store references to different image sizes as data attributes on my image tag.\n\n<img class=\"responsive-image\" data-mobile=\"mobile.jpg\" data-desktop=\"desktop.jpg\" />\n<noscript><img src=\"desktop.jpg\" /></noscript>\n\nThese data attributes have names that match the labels set in my CSS. So while there is some duplication going on, setting a keyword like \u2018tablet\u2019 in two places is much easier than hardcoding media query values. With matching labels in CSS and HTML our script can marry the two and load the right sized image for our layout.\n\n// get css generated label for active media query\nvar label = getComputedStyle(document.body, '::before')['content'];\n\n// select image\nvar $image = $('.responsive-image');\n\n// create source from data attribute\n$image.attr('src', $image.data(label));\n\nDemo\n\nWith some slight additions to our previous Codepen demo you can see this responsive image technique in action. While the above JavaScript will work it is not nearly robust enough for production so the demo uses a jQuery plugin that can accomodate multiple images, reloading on screen resize and fallbacks if something doesn\u2019t match up.\n\nCreating a Framework\n\nThis media query mixin and responsive image JavaScript are the center piece of a front end framework I use to develop websites. It\u2019s a fluid, mobile first foundation that uses the breakpoint mixin to structure fixed width layouts for tablet and desktop. Significant effort was focused on making this framework completely cross-browser. For example, one of the problems with using media queries is that essential desktop structure code ends up being hidden from legacy Internet Explorer. Respond.js is an excellent polyfill, but if you\u2019re comfortable serving a single desktop layout to older IE, we don\u2019t need JavaScript. We simply need to capture layout code outside of a media query and sandbox it under an IE only class name.\n\n// set IE fallback layout to 8 columns\n$ie-support = 8;\n\n// inside of our breakpoint mixin (but outside the media query)\n@if ($ie-support and $min <= $ie-support) {\n\t.lt-ie9 { @content; }\n}\n\nPerspective Regained\n\nThinking in columns means you are thinking about content layout. How big of a screen do you need for 12 columns? Who cares? Having Sass write media queries means you can use intuitive numbers for content layout. A fixed grid means more layout control and less edge cases to test than a fluid grid. Using CSS labels for activating responsive images means you don\u2019t have to duplicate breakpoints across separations of concern. \n\nIt\u2019s a harmonious blend of approaches that gives us something we need\u2014responsive design that feels intuitive. And design that, from the very outset, focuses on what matters most. Just like our kindergarten teachers taught us: It\u2019s what\u2019s inside that counts.", "year": "2012", "author": "Les James", "author_slug": "lesjames", "published": "2012-12-13T00:00:00+00:00", "url": "https://24ways.org/2012/redesigning-the-media-query/", "topic": "code"} {"rowid": 95, "title": "Giving Content Priority with CSS3 Grid Layout", "contents": "Browser support for many of the modules that are part of CSS3 have enabled us to use CSS for many of the things we used to have to use images for. The rise of mobile browsers and the concept of responsive web design has given us a whole new way of looking at design for the web. However, when it comes to layout, we haven\u2019t moved very far at all. 
We have talked for years about separating our content and source order from the presentation of that content, yet most of us have had to make decisions on source order in order to get a certain visual layout. \n\nOwing to some interesting specifications making their way through the W3C process at the moment, though, there is hope of change on the horizon. In this article I\u2019m going to look at one CSS module, the CSS3 grid layout module, that enables us to define a grid and place elements on to it. This article comprises a practical demonstration of the basics of grid layout, and also a discussion of one way in which we can start thinking of content in a more adaptive way.\n\nBefore we get started, it is important to note that, at the time of writing, these examples work only in Internet Explorer 10. CSS3 grid layout is a module created by Microsoft, and implemented using the -ms prefix in IE10. My examples will all use the -ms prefix, and not include other prefixes simply because this is such an early stage specification, and by the time there are implementations in other browsers there may be inconsistencies. The implementation I describe today may well change, but is also there for your feedback.\n\nIf you don\u2019t have access to IE10, then one way to view and test these examples is by signing up for an account with Browserstack \u2013 the free trial would give you time to have a look. I have also included screenshots of all relevant stages in creating the examples.\n\nWhat is CSS3 grid layout?\n\nCSS3 grid layout aims to let developers divide up a design into a grid and place content on to that grid. Rather than trying to fabricate a grid from floats, you can declare an actual grid on a container element and then use that to position the elements inside. Most importantly, the source order of those elements does not matter. \n\nDeclaring a grid\n\nWe declare a grid using a new value for the display property: display: grid. As we are using the IE10 implementation here, we need to prefix that value: display: -ms-grid;.\n\nOnce we have declared our grid, we set up the columns and rows using the grid-columns and grid-rows properties.\n\n.wrapper {\n display: -ms-grid;\n -ms-grid-columns: 200px 20px auto 20px 200px;\n -ms-grid-rows: auto 1fr;\n}\n\nIn the above example, I have declared a grid on the .wrapper element. I have used the grid-columns property to create a grid with a 200 pixel-wide column, a 20 pixel gutter, a flexible width auto column that will stretch to fill the available space, another 20 pixel-wide gutter and a final 200 pixel sidebar: a flexible width layout with two fixed width sidebars. Using the grid-rows property I have created two rows: the first is set to auto so it will stretch to fill whatever I put into it; the second row is set to 1fr, a new value used in grids that means one fraction unit. In this case, one fraction unit of the available space, effectively whatever space is left.\n\nPositioning items on the grid\n\nNow I have a simple grid, I can pop items on to it. If I have a <div> with a class of .main that I want to place into the second row, and the flexible column set to auto I would use the following CSS:\n\n.content {\n -ms-grid-column: 3;\n -ms-grid-row: 2;\n -ms-grid-row-span: 1;\n}\n\nIf you are old-school, you may already have realised that we are essentially creating an HTML table-like layout structure using CSS. 
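To finish that simple layout off, the 200 pixel sidebar on the right would be placed in the same way, into the fifth column of the second row. A sketch, where the class name is illustrative rather than part of the original example:

.sidebar {
	-ms-grid-column: 5;
	-ms-grid-row: 2;
	-ms-grid-row-span: 1;
}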
I found the concept of a table the most helpful way to think about the grid layout module when trying to work out how to place elements.\n\nCreating grid systems\n\nAs soon as I started to play with CSS3 grid layout, I wanted to see if I could use it to replicate a flexible grid system like this fluid 16-column 960 grid system.\n\nI started out by defining a grid on my wrapper element, using fractions to make this grid fluid.\n\n.wrapper {\t \n width: 90%;\n margin: 0 auto 0 auto;\n display: -ms-grid;\n -ms-grid-columns: 1fr (4.25fr 1fr)[16];\n -ms-grid-rows: (auto 20px)[24];\n}\n\nLike the 960 grid system I was using as an example, my grid starts with a gutter, followed by the first actual column, plus another gutter repeated sixteen times. What this means is that if I want to span two columns, as far as the grid layout module is concerned that is actually three columns: two wide columns, plus one gutter. So this needs to be accounted for when positioning items.\n\nI created a CSS class for each positioning option: column position; rows position; and column span. For example:\n\n.grid1 {-ms-grid-column: 2;} /* applying this class positions an item in the first column (the gutter is column 1) */\n.grid2 {-ms-grid-column: 4;} /* 2nd column - gutter|column 1|gutter */\n.grid3 {-ms-grid-column: 6;} /* 3rd column - gutter|column 1|gutter|column2|gutter */\n\n.row1 {-ms-grid-row:1;}\n.row2 {-ms-grid-row:3;}\n.row3 {-ms-grid-row:5;}\n\n.colspan1 {-ms-grid-column-span:1;}\n.colspan2 {-ms-grid-column-span:3;}\n.colspan3 {-ms-grid-column-span:5;}\n\nI could then add multiple classes to each element to set the position on on the grid.\n\n\n\nThis then gives me a replica of the fluid grid using CSS3 grid layout. To see this working fire up IE10 and view Example 1.\n\nThis works, but\u2026\n\nThis worked, but isn\u2019t ideal. I considered not showing this stage of my experiment \u2013 however, I think it clearly shows how the grid layout module works and is a useful starting point. That said, it\u2019s not an approach I would take in production. First, we have to add classes to our markup that tie an element to a position on the grid. This might not be too much of a problem if we are always going to maintain the sixteen-column grid, though, as I will show you that the real power of the grid layout module appears once you start to redefine the grid, using different grids based on media queries. If you drop to a six-column layout for small screens, positioning items into column 16 makes no sense any more.\n\nCalculating grid position using LESS\n\nAs we\u2019ve seen, if you want to use a grid with main columns and gutters, you have to take into account the spacing between columns as well as the actual columns. This means we have to do some calculating every time we place an item on the grid. In my example above I got around this by creating a CSS class for each position, allowing me to think in sixteen rather than thirty-two columns. But by using a CSS preprocessor, I can avoid using all the classes yet still think in main columns.\n\nI\u2019m using LESS for my example. My simple grid framework consists of one simple mixin.\n\n.position(@column,@row,@colspan,@rowspan) {\n -ms-grid-column: @column*2;\n -ms-grid-row: @row*2-1;\n -ms-grid-column-span: @colspan*2-1;\n -ms-grid-row-span: @rowspan*2-1;\n}\n\nMy mixin takes four parameters: column; row; colspan; and rowspan. 
So if I wanted to place an item on column four, row three, spanning two columns and one row, I would write the following CSS:\n\n.box {\n .position(4,3,2,1);\n}\n\nThe mixin would return:\n\n.box {\n -ms-grid-column: 8;\n -ms-grid-row: 5;\n -ms-grid-column-span: 3;\n -ms-grid-row-span: 1;\n}\n\nThis saves me some typing and some maths. I could also add other prefixed values into my mixin as other browsers started to add support.\n\nWe can see this in action creating a new grid. Instead of adding multiple classes to each element, I can add one class; that class uses the mixin to create the position. I have also played around with row spans using my mixin and you can see we end up with a quite complicated arrangement of boxes. Have a look at example two in IE10. I\u2019ve used the JavaScript LESS parser so that you can view the actual LESS that I use. Note that I have needed to escape the -ms prefixed properties with ~\"\" to get LESS to accept them.\n\n\n\nThis is looking better. I don\u2019t have direct positioning information on each element in the markup, just a class name \u2013 I\u2019ve used grid(x), but it could be something far more semantic. We can now take the example a step further and redefine the grid based on screen width.\n\nMedia queries and the grid\n\nThis example uses exactly the same markup as the previous example. However, we are now using media queries to detect screen width and redefine the grid using a different number of columns depending on that width.\n\nI start out with a six-column grid, defining that on .wrapper, then setting where the different items sit on this grid:\n\n.wrapper {\t \n width: 90%;\n margin: 0 auto 0 auto;\n display: ~\"-ms-grid\"; /* escaped for the LESS parser */\n -ms-grid-columns: ~\"1fr (4.25fr 1fr)[6]\"; /* escaped for the LESS parser */\n -ms-grid-rows: ~\"(auto 20px)[40]\"; /* escaped for the LESS parser */\n}\n.grid1 { .position(1,1,1,1); } \n.grid2 { .position(2,1,1,1); } \n/* ... see example for all declarations ... */\n\n\n\nUsing media queries, I redefine the grid to nine columns when we hit a minimum width of 700 pixels.\n\n@media only screen and (min-width: 700px) {\n.wrapper {\n -ms-grid-columns: ~\"1fr (4.25fr 1fr)[9]\";\n -ms-grid-rows: ~\"(auto 20px)[50]\";\n}\n.grid1 { .position(1,1,1,1); } \n.grid2 { .position(2,1,1,1); } \n/* ... */\n}\n\n\n\nFinally, we redefine the grid for 960 pixels, back to the sixteen-column grid we started out with.\n\n@media only screen and (min-width: 940px) {\n.wrapper {\t \n -ms-grid-columns:~\" 1fr (4.25fr 1fr)[16]\";\n -ms-grid-rows:~\" (auto 20px)[24]\";\n}\n.grid1 { .position(1,1,1,1); } \n.grid2 { .position(2,1,1,1); } \n/* ... */\n}\n\nIf you view example three in Internet Explorer 10 you can see how the items reflow to fit the window size. You can also see, looking at the final set of blocks, that source order doesn\u2019t matter. You can pick up a block from anywhere and place it in any position on the grid.\n\nLaying out a simple website\n\nSo far, like a toddler on Christmas Day, we\u2019ve been playing with boxes rather than thinking about what might be in them. So let\u2019s take a quick look at a more realistic layout, in order to see why the CSS3 grid layout module can be really useful. At this time of year, I am very excited to get out of storage my collection of odd nativity sets, prompting my family to suggest I might want to open a museum. 
Should I ever do so, I\u2019ll need a website, and here is an example layout.\n\n\n\nAs I am using CSS3 grid layout, I can order my source in a logical manner. In this example my document is as follows, though these elements could be in any order I please:\n\n<div class=\"wrapper\">\n <div class=\"welcome\">\n ...\n </div>\n <article class=\"main\">\n ...\n </article>\n <div class=\"info\">\n ...\n </div>\n <div class=\"ads\">\n ...\n </div>\n</div>\n\nFor wide viewports I can use grid layout to create a sidebar, with the important information about opening times on the top righ,t with the ads displayed below it. This creates the layout shown in the screenshot above.\n\n@media only screen and (min-width: 940px) {\n .wrapper {\t \n -ms-grid-columns:~\" 1fr (4.25fr 1fr)[16]\";\n -ms-grid-rows:~\" (auto 20px)[24]\";\n }\n .welcome {\n .position(1,1,12,1);\n padding: 0 5% 0 0;\n }\n .info {\n .position(13,1,4,1);\n border: 0;\n padding:0;\n }\n .main {\n .position(1,2,12,1);\n padding: 0 5% 0 0;\n } \n .ads {\n .position(13,2,4,1);\n display: block;\n margin-left: 0;\n }\n}\n\nIn a floated layout, a sidebar like this often ends up being placed under the main content at smaller screen widths. For my situation this is less than ideal. I want the important information about opening times to end up above the main article, and to push the ads below it. With grid layout I can easily achieve this at the smallest width .info ends up in row two and .ads in row five with the article between.\n\n.wrapper {\t \n display: ~\"-ms-grid\";\n -ms-grid-columns: ~\"1fr (4.25fr 1fr)[4]\";\n -ms-grid-rows: ~\"(auto 20px)[40]\";\n}\n.welcome {\n .position(1,1,4,1);\n}\n.info {\n .position(1,2,4,1);\n border: 4px solid #fff;\n padding: 10px;\n}\n.content {\n .position(1,3,4,5);\n}\n.main {\n .position(1,3,4,1);\n}\n.ads {\n .position(1,4,4,1);\n}\n\n\n\nFinally, as an extra tweak I add in a breakpoint at 600 pixels and nest a second grid on the ads area, arranging those three images into a row when they sit below the article at a screen width wider than the very narrow mobile width but still too narrow to support a sidebar. \n\n@media only screen and (min-width: 600px) {\n .ads {\n display: ~\"-ms-grid\";\n -ms-grid-columns: ~\"20px 1fr 20px 1fr 20px 1fr\";\n -ms-grid-rows: ~\"1fr\";\n margin-left: -20px;\n }\n .ad:nth-child(1) {\n .position(1,1,1,1);\n }\n .ad:nth-child(2) {\n .position(2,1,1,1);\n }\n .ad:nth-child(3) {\n .position(3,1,1,1);\n }\n}\n\nView example four in Internet Explorer 10.\n\n\n\nThis is a very simple example to show how we can use CSS grid layout without needing to add a lot of classes to our document. It also demonstrates how we can mainpulate the content depending on the context in which the user is viewing it.\n\nLayout, source order and the idea of content priority\n\nCSS3 grid layout isn\u2019t the only module that starts to move us away from the issue of visual layout being linked to source order. However, with good support in Internet Explorer 10, it is a nice way to start looking at how this might work. If you look at the grid layout module as something to be used in conjunction with the flexible box layout module and the very interesting CSS regions and exclusions specifications, we have, tantalizingly on the horizon, a powerful set of tools for layout.\n\nI am particularly keen on the potential separation of source order from layout as it dovetails rather neatly into something I spend a lot of time thinking about. 
As a CMS developer, working on larger scale projects as well as our CMS product Perch, I am interested in how we better enable content editors to create content for the web. In particular, I search for better ways to help them create adaptive content; content that will work in a variety of contexts rather than being tied to one representation of that content.\n\nIf the concept of adaptive content is new to you, then Karen McGrane\u2019s presentation Adapting Ourselves to Adaptive Content is the place to start. Karen talks about needing to think of content as chunks, that might be used in many different places, displayed differently depending on context.\n\nI absolutely agree with Karen\u2019s approach to content. We have always attempted to move content editors away from thinking about creating a page and previewing it on the desktop. However at some point content does need to be published as a page, or a collection of content if you prefer, and bits of that content have priority. Particularly in a small screen context, content gets linearized, we can only show so much at a time, and we need to make sure important content rises to the top. In the case of my example, I wanted to ensure that the address information was clearly visible without scrolling around too much. Dropping it with the entire sidebar to the bottom of the page would not have been so helpful, though neither would moving the whole sidebar to the top of the screen so a visitor had to scroll past advertising to get to the article.\n\nIf our layout is linked to our source order, then enabling the content editor to make decisions about priority is really hard. Only a system that can do some regeneration of the source order on the server-side \u2013 perhaps by way of multiple templates \u2013 can allow those kinds of decisions to be made. For larger systems this might be a possibility; for smaller ones, or when using an off-the-shelf CMS, it is less likely to be. Fortunately, any system that allows some form of custom field type can be used to pop a class on to an element, and with CSS grid layout that is all that is needed to be able to target that element and drop it into the right place when the content is viewed, be that on a desktop or a mobile device.\n\nThis approach can move us away from forcing editors to think visually. At the moment, I might have to explain to an editor that if a certain piece of content needs to come first when viewed on a mobile device, it needs to be placed in the sidebar area, tying it to a particular layout and design. I have to do this because we have to enforce fairly strict rules around source order to make the mechanics of the responsive design work. If I can instead advise an editor to flag important content as high priority in the CMS, then I can make decisions elsewhere as to how that is displayed, and we can maintain the visual hierarchy across all the different ways content might be rendered.\n\nWhy frustrate ourselves with specifications we can\u2019t yet use in production?\n\nThe CSS3 grid layout specification is listed under the Exploring section of the list of current work of the CSS Working Group. While discussing a module at this stage might seem a bit pointless if we can\u2019t use it in production work, there is a very real reason for doing so. If those of us who will ultimately be developing sites with these tools find out about them early enough, then we can start to give our feedback to the people responsible for the specification. 
There is information on the same page about how to get involved with the disussions.\n\nSo, if you have a bit of time this holiday season, why not have a play with the CSS3 grid layout module? I have outlined here some of my thoughts on how grid layout and other modules that separate layout from source order can be used in the work that I do. Likewise, wherever in the stack you work, playing with and thinking about new specifications means you can think about how you would use them to enhance your work. Spot a problem? Think that a change to the specification would improve things for a specific use case? Then you have something you could post to www-style to add to the discussion around this module.\n\nAll the examples are on CodePen so feel free to play around and fork them.", "year": "2012", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2012-12-18T00:00:00+00:00", "url": "https://24ways.org/2012/css3-grid-layout/", "topic": "code"} {"rowid": 98, "title": "Absolute Columns", "contents": "CSS layouts have come quite a long way since the dark ages of web publishing, with all sorts of creative applications of floats, negative margins, and even background images employed in order to give us that most basic building block, the column. As the title implies, we are indeed going to be discussing columns today\u2014more to the point, a handy little application of absolute positioning that may be exactly what you\u2019ve been looking for\u2026\n\nCare for a nightcap?\n\nIf you\u2019ve been developing for the web for long enough, you may be familiar with this little children\u2019s fable, passed down from wizened Shaolin monks sitting atop the great Mt. Geocities: \u201cOnce upon a time, multiple columns of the same height could be easily created using TABLES.\u201d Now, though we\u2019re all comfortably seated on the standards train (and let\u2019s be honest: even if you like to think you\u2019ve fallen off, if you\u2019ve given up using tables for layout, rest assured your sleeper car is still reserved), this particular\u2014and as page layout goes, quite basic\u2014trick is still a thorn in our CSSides compared to the ease of achieving the same effect using said Tables of Evil\u2122.\n\nSee, the orange juice masks the flavor\u2026\n\nCreative solutions such as Dan Cederholm\u2019s Faux Columns do a good job of making it appear as though adjacent columns maintain equal height as content expands, using a background image to fill the space that the columns cannot.\n\nNow, the Holy Grail of CSS columns behaving exactly how they would as table cells\u2014or more to the point, as columns\u2014still eludes us (cough CSS3 Multi-column layout module cough), but sometimes you just need, for example, a secondary column (say, a sidebar) to match the height of a primary column, without involving the creation of images. This is where a little absolute positioning can save you time, while possibly giving your layout a little more flexibility.\n\nShaken, not stirred\n\nYou\u2019re probably familiar by now with the concept of Making the Absolute, Relative as set forth long ago by Doug Bowman, but let\u2019s quickly review just in case: an element set to position:absolute will position itself relative to its nearest ancestor set to position:relative, rather than the browser window (see Figure 1).\n\n Figure 1.\n\nHowever, what you may not know is that we can anchor more than two sides of an absolutely positioned element. 
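In code, the familiar version of that trick only anchors a side or two. A sketch, with illustrative class names:

.parent {
	position: relative;
}
.child {
	position: absolute;
	top: 10px;
	right: 10px;
}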
Yes, that\u2019s right, all four sides (top, right, bottom, left) can be set, though in this example we\u2019re only going to require the services of three sides (see Figure 2 for the end result).\n\n Figure 2.\n\nTrust me, this will make you feel better\n\nOur requirements are essentially the same as the standard \u201cabsolute-relative\u201d trick\u2014a container <div> set to position:relative, and our sidebar <div> set to position:absolute \u2014 plus another <div> that will serve as our main content column. We\u2019ll also add a few other common layout elements (wrapper, header, and footer) so our example markup looks more like a real layout and less like a test case:\n\n<div id=\"wrapper\">\n\t<div id=\"header\">\n\t\t<h2>#header</h2>\n\t</div>\n\t<div id=\"container\">\n\t\t<div id=\"column-left\">\n\t\t\t<h2>#left</h2>\n\t\t\t<p>Lorem ipsum dolor sit amet\u2026</p>\n\t\t</div>\n\t\t<div id=\"column-right\">\n\t\t\t<h2>#right</h2>\n\t\t</div>\n\t</div>\n\t<div id=\"footer\">\n\t\t<h2>#footer</h2>\n\t</div>\n</div>\n\nIn this example, our main column (#column-left) is only being given a width to fit within the context of the layout, and is otherwise untouched (though we\u2019re using pixels here, this trick will of course work with fluid layouts as well), and our right keeping our styles nice and minimal:\n\n#container {\n\tposition: relative;\n}\n#column-left {\n\twidth: 480px;\n}\n#column-right {\n\tposition: absolute;\n\ttop: 10px;\n\tright: 10px;\n\tbottom: 10px;\n\twidth: 250px;\n}\n\nThe trick is a simple one: the #container <div> will expand vertically to fit the content within #column-left. By telling our sidebar <div> (#column-right) to attach itself not only to the top and right edges of #container, but also to the bottom, it too will expand and contract to match the height of the left column (duplicate the \u201clorem ipsum\u201d paragraph a few times to see it in action).\n\n Figure 3.\n\nOn the rocks\n\n\u201cBut wait!\u201d I hear you exclaim, \u201cwhen the right column has more content than the left column, it doesn\u2019t expand! My text runneth over!\u201d Sure enough, that\u2019s exactly what happens, and what\u2019s more, it\u2019s supposed to: Absolutely positioned elements do exactly what you tell them to do, and unfortunately aren\u2019t very good at thinking outside the box (get it? sigh\u2026). \n\nHowever, this needn\u2019t get your spirits down, because there\u2019s an easy way to address the issue: by adding overflow:auto to #column-right, a scrollbar will automatically appear if and when needed:\n\n#column-right {\n\tposition: absolute;\n\ttop: 10px;\n\tright: 10px;\n\tbottom: 10px;\n\twidth: 250px;\n\toverflow: auto;\n}\n\nWhile this may limit the trick\u2019s usefulness to situations where the primary column will almost always have more content than the secondary column\u2014or where the secondary column\u2019s content can scroll with wild abandon\u2014a little prior planning will make it easy to incorporate into your designs.\n\nDriving us to drink\n\nIt just wouldn\u2019t be right to have a friendly, festive holiday tutorial without inviting IE6, though in this particular instance there will be no shaming that old browser into admitting it has a problem, nor an intervention and subsequent 12-step program. 
That\u2019s right my friends, this tutorial has abstained from IE6-abuse now for 30 days, thanks to the wizard Dean Edwards and his amazingly talented IE7 Javascript library.\n\nSimply drop the Conditional Comment and <script> element into the <head> of your document, along with one tiny CSS hack that only IE6 (and below) will ever see, and that browser will be back on the straight and narrow:\n\n<!--[if lt IE 7]>\n<script src=\"http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js\" type=\"text/javascript\"></script>\n<style type=\"text/css\" media=\"screen\">\n\t#container {\n\t\tzoom:1; /* helps fix IE6 by initiating hasLayout */\n\t}\n</style>\n<![endif]-->\n\nEggnog is supposed to be spiked, right?\n\nOf course, this is one simple example of what can be a much more powerful technique, depending on your needs and creativity. Just don\u2019t go coding up your wildest fantasies until you\u2019ve had a chance to sleep off the Christmas turkey and whatever tasty liquids you happen to imbibe along the way\u2026", "year": "2008", "author": "Dan Rubin", "author_slug": "danrubin", "published": "2008-12-22T00:00:00+00:00", "url": "https://24ways.org/2008/absolute-columns/", "topic": "code"} {"rowid": 99, "title": "A Christmas hCard From Me To You", "contents": "So apparently Christmas is coming. And what is Christmas all about? Well, cleaning out your address book, of course! What better time to go through your contacts, making sure everyone\u2019s details are up date and that you\u2019ve deleted all those nasty clients who never paid on time?\n\nIt\u2019s also a good time to make sure your current clients and colleagues have your most up-to-date details, so instead of filling up their inboxes with e-cards, why not send them something useful? Something like a\u2026 vCard! (See what I did there?)\n\nJust in case you\u2019ve been working in a magical toy factory in the upper reaches of Scandinavia for the last few years, I\u2019m going to tell you that now would also be the perfect time to get into microformats. Using the hCard format, we\u2019ll build a very simple web page and markup our contact details in such a way that they\u2019ll be understood by microformats plugins, like Operator or Tails for Firefox, or the cross-browser Microformats Bookmarklet.\n\nOh, and because Christmas is all about dressing up and being silly, we\u2019ll make the whole thing look nice and have a bit of fun with some CSS3 progressive enhancement. \n\nIf you can\u2019t wait to see what we end up with, you can preview it here.\n\n\n\nStep 1: Contact Details\n\nFirst, let\u2019s decide what details we want to put on the page. I\u2019d put my full name, my email address, my phone number, and my postal address, but I\u2019d rather not get surprise visits from strangers when I\u2019m fannying about with my baubles, so I\u2019m going to use Father Christmas instead (that\u2019s Santa to you Yanks).\n\nFather Christmas\nfatherchristmas@elliotjaystocks.com\n25 Laughingallthe Way\nSnow Falls\nLapland\nFinland\n010 60 58 000\n\nStep 2: hCard Creator\n\nNow I\u2019m not sure about you, but I rather like getting the magical robot pixies to do the work for me, so head on over to the hCard Creator and put those pixies to work! 
Pop in your details and they\u2019ll give you some nice microformatted HTML in turn.\n\n\n\n<div id=\"hcard-Father-Christmas\" class=\"vcard\">\n\t<a class=\"url fn\" href=\"http://elliotjaystocks.com/fatherchristmas\">Father Christmas</a>\n\t<a class=\"email\" href=\"mailto:fatherchristmas@elliotjaystocks.com\"> fatherchristmas@elliotjaystocks.com</a>\n\t<div class=\"adr\">\n\t<div class=\"street-address\">25 Laughingallthe Way</div>\n\t<span class=\"locality\">Snow Falls</span>\n\t, \n\t<span class=\"region\">Lapland</span>\n\t, \n\t<span class=\"postal-code\">FI-00101</span>\n\t<span class=\"country-name\">Finland</span>\n</div>\n<div class=\"tel\">010 60 58 000</div>\n\t<p style=\"font-size:smaller;\">This <a href=\"http://microformats.org/wiki/hcard\">hCard</a> created with the <a href=\"http://microformats.org/code/hcard/creator\">hCard creator</a>.</p>\n</div>\n\nStep 3: Editing The Code\n\nOne of the great things about microformats is that you can use pretty much whichever HTML tags you want, so just because the hCard Creator Fairies say something should be wrapped in a <span> doesn\u2019t mean you can\u2019t change it to a <blink>. Actually, no, don\u2019t do that. That\u2019s not even excusable at Christmas.\n\nI personally have a penchant for marking up each line of an address inside a <li> tag, where the parent url retains the class of adr. As long as you keep the class names the same, you\u2019ll be fine.\n\n<div id=\"hcard-Father-Christmas\" class=\"vcard\">\n\t<h1><a class=\"url fn\" href=\"http://elliotjaystocks.com/fatherchristmas\">Father Christmas </a></h1>\n\t<a class=\"email\" href=\"mailto:fatherchristmas@elliotjaystocks.com?subject=Here, have some Christmas cheer!\">fatherchristmas@elliotjaystocks.com</a>\n\t<ul class=\"adr\">\n\t\t<li class=\"street-address\">25 Laughingallthe Way</li>\n\t\t<li class=\"locality\">Snow Falls</li>\n\t\t<li class=\"region\">Lapland</li>\n\t\t<li class=\"postal-code\">FI-00101</li>\n\t\t<li class=\"country-name\">Finland</li>\n\t</ul>\n\t<span class=\"tel\">010 60 58 000</span>\n</div>\n\nStep 4: Testing The Microformats\n\nWith our microformats in place, now would be a good time to test that they\u2019re working before we start making things look pretty. If you\u2019re on Firefox, you can install the Operator or Tails extensions, but if you\u2019re on another browser, just add the Microformats Bookmarklet. Regardless of your choice, the results is the same: if you\u2019ve code microformatted content on a web page, one of these bad boys should pick it up for you and allow you to export the contact info. Give it a try and you should see father Christmas appearing in your address book of choice. Now you\u2019ll never forget where to send those Christmas lists!\n\n\n\nStep 5: Some Extra Markup\n\nOne of the first things we\u2019re going to do is put a photo of Father Christmas on the hCard. We\u2019ll be using CSS to apply a background image to a div, so we\u2019ll be needing an extra div with a class name of \u201cphoto\u201d. In turn, we\u2019ll wrap the text-based elements of our hCard inside a div cunningly called \u201ctext\u201d. Unfortunately, because of the float technique we\u2019ll be using, we\u2019ll have to use one of those nasty float-clearing techniques. 
I shall call this \u201cchristmas-cheer\u201d, since that is what its presence will inevitably bring, of course.\n\nOh, and let\u2019s add a bit of text to give the page context, too:\n\n<p>Send your Christmas lists my way...</p>\n<div id=\"hcard-Father-Christmas\" class=\"vcard\">\n\t<div class=\"text\">\n\t\t<h1><a class=\"url fn\" href=\"http://elliotjaystocks.com/fatherchristmas\">Father Christmas </a></h1>\n\t\t<a class=\"email\" href=\"mailto:fatherchristmas@elliotjaystocks.com?subject=Here, have some Christmas cheer!\">fatherchristmas@elliotjaystocks.com</a>\n\t\t<ul class=\"adr\">\n\t\t\t<li class=\"street-address\">25 Laughingallthe Way</li>\n\t\t\t<li class=\"locality\">Snow Falls</li>\n\t\t\t<li class=\"region\">Lapland</li>\n\t\t\t<li class=\"postal-code\">FI-00101</li>\n\t\t\t<li class=\"country-name\">Finland</li>\n\t\t</ul>\n\t\t<span class=\"tel\">010 60 58 000</span>\n\t</div>\n\t<div class=\"photo\"></div>\n\t<br class=\"christmas-cheer\" />\n</div>\n<div class=\"credits\">\n\t<p>A tutorial by <a href=\"http://elliotjaystocks.com\">Elliot Jay Stocks</a> for <a href=\"http://24ways.org/\">24 Ways</a></p>\n\t<p>Background: <a href=\"http://sxc.hu/photo/1108741\">stock.xchng</a> | Father Christmas: <a href=\"http://istockphoto.com/file_closeup/people/4575943-active-santa.php?id=4575943\">iStockPhoto</a></p>\n</div>\n\nStep 6: Some Christmas Sparkle\n\nSo far, our hCard-housing web page is slightly less than inspiring, isn\u2019t it? It\u2019s time to add a bit of CSS. There\u2019s nothing particularly radical going on here; just a simple layout, some basic typographic treatment, and the placement of the Father Christmas photo. I\u2019d usually use a more thorough CSS reset like the one found in the YUI or Eric Meyer\u2019s, but for this basic page, the simple * solution will do.\n\nCheck out the step 6 demo to see our basic styles in place.\n\nFrom this\u2026\n\n\n\n\u2026 to this:\n\n\n\nStep 7: Fun With imagery\n\nNow it\u2019s time to introduce a repeating background image to the <body> element. This will seamlessly repeat for as wide as the browser window becomes.\n\nBut that\u2019s fairly straightforward. How about having some fun with the Father Christmas image? If you look at the image file itself, you\u2019ll see that it\u2019s twice as wide as the area we can see and contains a \u2018hidden\u2019 photo of our rather camp St. Nick.\n\n\n\nAs a light-hearted visual\u2026 er\u2026 \u2018treat\u2019 for users who move their mouse over the image, we move the position of the background image on the \u201cphoto\u201d div. Check out the step 7 demo to see it working.\n\nStep 8: Progressive Enhancement\n\nFinally, this fun little project is a great opportunity for us to mess around with some advanced CSS features (some from the CSS3 spec) that we rarely get to use on client projects. (Don\u2019t forget: no Christmas pressies for clients who want you to support IE6!)\n\nHere are the rules we\u2019re using to give some browsers a superior viewing experience:\n\n\n\t@font-face allows us to use Jos Buivenga\u2019s free font \u2018Fertigo Pro\u2019 on all text;\n\ttext-shadow adds a little emphasis on the opening paragraph;\n\tbody > p:first-child causes only the first paragraph to receive this treatment;\n\tborder-radius created rounded corners on our main div and the links within it;\n\tand webkit-transition allows us to gently fade in between the default and hover states of those links.\n\n\nAnd with that, we\u2019re done! You can see the results here. 
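If you'd like a rough feel for what those rules look like in the stylesheet, here's a hedged sketch (the selectors, font file name and values below are illustrative guesses, not the exact ones used in the demo):

@font-face {
	font-family: 'Fertigo Pro';
	src: url('fertigo-pro.otf') format('opentype'); /* assumed file name */
}
body {
	font-family: 'Fertigo Pro', Georgia, serif;
}
body > p:first-child {
	text-shadow: 1px 1px 1px #999; /* a little emphasis on the opening paragraph */
}
#hcard-Father-Christmas,
#hcard-Father-Christmas a {
	-webkit-border-radius: 10px;
	-moz-border-radius: 10px;
	border-radius: 10px;
}
#hcard-Father-Christmas a {
	-webkit-transition: color 0.3s ease-in; /* gently fade the link hover state */
}
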
It\u2019s time to customise the page to your liking, upload it to your site, and send out the URL. And do it quickly, because I\u2019m sure you\u2019ve got some last-minute Christmas shopping to finish off!", "year": "2008", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2008-12-10T00:00:00+00:00", "url": "https://24ways.org/2008/a-christmas-hcard-from-me-to-you/", "topic": "code"} {"rowid": 100, "title": "Moo'y Christmas", "contents": "A note from the editors: Moo has changed their API since this article was written.\n \n \n \n As the web matures, it is less and less just about the virtual world. It is becoming entangled with our world and it is harder to tell what is virtual and what is real. There are several companies who are blurring this line and make the virtual just an extension of the physical. Moo is one such company. \n\nMoo offers simple print on demand services. You can print business cards, moo mini cards, stickers, postcards and more. They give you the ability to upload your images, customize them, then have them sent to your door. Many companies allow this sort of digital to physical interaction, but Moo has taken it one step further and has built an API. \n\nPrintable stocking stuffers \n\nThe Moo API consists of a simple XML file that is sent to their servers. It describes all the information needed to dynamically assemble and print your object. This is very helpful, not just for when you want to print your own stickers, but when you want to offer them to your customers, friends, organization or community with no hassle. Moo handles the check-out and shipping, all you need to do is what you do best, create! \n\nNow using an API sounds complicated, but it is actually very easy. I am going to walk you through the options so you can easily be printing in no time. \n\nBefore you can begin sending data to the Moo API, you need to register and get an API key. This is important, because it allows Moo to track usage and to credit you. To register, visit http://www.moo.com/api/ and click \u201cRequest an API key\u201d. \n\nIn the following examples, I will use {YOUR API KEY HERE} as a place holder, replace that with your API key and everything will work fine. \n\nFirst thing you need to do is to create an XML file to describe the check-out basket. Open any text-editor and start with some XML basics. Don\u2019t worry, this is pretty simple and Moo gives you a few tools to check your XML for errors before you order. \n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?> \n<moo xsi:noNamespaceSchemaLocation=\"http://www.moo.com/xsd/api_0.7.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> \n\t <request>\n\t\t <version>0.7</version>\n\t\t <api_key>{YOUR API KEY HERE}</api_key>\n\t\t <call>build</call>\n\t\t <return_to>http://www.example.com/return.html</return_to>\n\t\t <fail_to>http://www.example.com/fail.html</fail_to>\n\t </request>\n\t <payload>\n\t ...\n\t </payload>\n</moo>\n\nMuch like HTML\u2019s <head> and <body>, Moo has created <request> and <payload> elements all wrapped in a <moo> element. \n\nThe <request> element contains a few pieces of information that is the same across all the API calls. The <version> element describes which version of the API is being used. This is more important for Moo than for you, so just stick with \u201c0.7\u201d for now. \n\nThe <api_key> allows Moo to track sales, referrers and credit your account. \n\nThe <call> element can only take \u201cbuild\u201d so that is pretty straight forward. 
The <return_to> and <fail_to> elements are URLs. These are optional and are the URLs the customer is redirected to if there is an error, or when the check out process is complete. This allows for some basic branding and a custom \u201cthank you\u201d page which is under your control. That\u2019s it for the <request> element, pretty easy so far! \n\nNext up is the <payload> element. What goes inside here describes what is to be printed. There are two possible elements, we can put <chooser> or we can put <products> directly inside <payload>. They work in a similar ways, but they drop the customer into different parts of the Moo checkout process. \n\nIf you specify <products> then you send the customer straight to the Moo payment process. If you specify <chooser> then you send the customer one-step earlier where they are allowed to pick and choose some images, remove the ones they don\u2019t like, adjust the crop, etc. The example here will use <chooser> but with a little bit of homework you can easily adjust to <products> if you desire. \n\n... \n<chooser> \n\t <product_type>sticker</product_type> \n\t <images> \n\t\t <url>http://example.com/images/christmas1.jpg</url> \n\t </images> \n</chooser> \n...\n\nInside the <chooser> element, we can see there are two basic piece of information. The type of product we want to print, and the images that are to be printed. The <product_type> element can take one of five options and is required! The possibilities are: minicard, notecard, sticker, postcard or greetingcard. We\u2019ll now look at two of these more closely. \n\nMoo Stickers \n\nIn the Moo sticker books you get 90 small squarish stickers in a small little booklet. \n\n\n\nThe simplest XML you could send would be something like the following payload:\n\n...\n<payload>\n\t<chooser>\n\t\t<product_type>sticker</product_type>\n\t\t<images>\n\t\t\t<url>http://example.com/image1.jpg</url>\n\t\t</images>\n\t\t<images>\n\t\t\t<url>http://example.com/image2.jpg</url>\n\t\t</images>\n\t\t<images>\n\t\t\t<url>http://example.com/image3.jpg</url>\n\t\t</images>\n\t</chooser>\n</payload>\n...\n\nThis creates a sticker book with only 3 unique images, but 30 copies of each image. The Sticker books always print 90 stickers in multiples of the images you uploaded. That example only has 3 <images> elements, but you can easily duplicate the XML and send up to 90. 
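If you're generating the XML with a script rather than copying and pasting by hand, a few lines of JavaScript can handle the duplication for you. This is only a rough sketch, and the imageUrls array and buildImagesXml function are made-up names for illustration, not part of the Moo API:

var imageUrls = [
	'http://example.com/image1.jpg',
	'http://example.com/image2.jpg',
	'http://example.com/image3.jpg'
];
// Build one <images> block per URL, ready to drop inside <chooser>
function buildImagesXml(urls) {
	var xml = '';
	for (var i = 0; i < urls.length; i++) {
		xml += '<images>\n\t<url>' + urls[i] + '</url>\n</images>\n';
	}
	return xml;
}
var imagesXml = buildImagesXml(imageUrls);
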
The <url> should be the full path to your image and the image needs to be a minimum of 300 pixels by 300 pixels.\n\nYou can add more XML to describe cropping, but the simplest option is to either, let your customers choose or to pre-crop all your images square so there are no issues.\n\nThe full XML you would post to the Moo API to print sticker books would look like this:\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?> \n<moo xsi:noNamespaceSchemaLocation=\"http://www.moo.com/xsd/api_0.7.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> \n\t<request>\n\t\t<version>0.7</version>\n\t\t<api_key>{YOUR API KEY HERE}</api_key>\n\t\t<call>build</call>\n\t\t<return_to>http://www.example.com/return.html</return_to>\n\t\t<fail_to>http://www.example.com/fail.html</fail_to>\n\t</request>\n\t<payload>\n\t\t<chooser>\n\t\t\t<product_type>sticker</product_type>\n\t\t\t<images>\n\t\t\t\t<url>http://example.com/image1.jpg</url>\n\t\t\t</images>\n\t\t\t<images>\n\t\t\t\t<url>http://example.com/image2.jpg</url>\n\t\t\t</images>\n\t\t\t<images>\n\t\t\t\t<url>http://example.com/image3.jpg</url>\n\t\t\t</images>\n\t\t</chooser>\n\t</payload> \n</moo>\n\nMini-cards \n\nThe mini-cards are the small cute business cards in 14\u00d735 dimensions and come in packs of 100. \n\n\n\nSince the mini-cards are print on demand, this allows you to have 100 unique images on the back of the cards.\n\nJust like the stickers example, we need the same XML setup. The <moo> element and <request> elements will be the same as before. The part you will focus on is the <payload> section. \n\nSince you are sending along specific information, we can\u2019t use the <chooser> option any more. Switch this to <products> which has a child of <product>, which in turn has a <product_type> and <designs>. This might seem like a lot of work, but once you have it set up you won\u2019t need to change it.\n\n...\n<payload>\n\t<products>\n\t\t<product>\n\t\t\t<product_type>minicard</product_type>\n\t\t\t<designs>\n\t\t\t\t...\n\t\t\t</designs>\n\t\t</product>\n\t</products>\n</payload>\n...\n\nSo now that we have the basic framework, we can talk about the information specific to minicards. Inside the <designs> element, you will have one <design> for each card. Much like before, this contains a way to describe the image. Note that this time the element is called <image>, not images plural. \n\nInside the <image> element you have a <url> which points to where the image lives and a <type>. The <type> should just be set to \u2018variable\u2019. You can pass crop information here instead, but we\u2019re going to keep it simple for this tutorial. If you are interested in how that works, you should refer to the official API documentation.\n\n...\n<design>\n\t<image>\n\t\t<url>http://example.com/image1.jpg</url>\n\t\t<type>variable</type>\n\t</image>\n</design>\n...\n\nSo far, we have managed to build a pack of 100 Moo mini-cards with the same image on the front. If you wanted 100 different images, you just need to replicate this snippit, 99 more times.\n\nThat describes the front design, but the flip-side of your mini-cards can contain 6 lines of text, which is customizable in a variety of colors, fonts and styles.\n\nThe API allows you to create different text on the back of each mini-card, something the web interface doesn\u2019t implement. To describe the text on the mini-card we need to add a <text_collection> element inside the <design> element. 
If you skip this element, the back of your mini-card will just be blank, but that\u2019s not very festive!\n\nInside the <text_collection> element, we need to describe the type of text we want to format, so we add a <minicard> element, which in turn contains all the lines of text. Each of Moo\u2019s printed products take different numbers of lines of text, so if you are not planning on making mini-cards, be sure to consult the documentation.\n\nFor mini-cards, we can have 6 distinct lines, each with their own style and layout. Each line is represented by an element <text_line> which has several optional children. The <id> tells which line of the 6 to print the text one. The <string> is the text you want to print and it must be shorter than 38 characters. The <bold> element is false by default, but if you want your text bolded, then add this and set it to true. \n\nThe <align> element is also optional. By default it is set to align left. You can also set this to right or center if you desirer. The <font> element takes one of 3 types, modern, traditional or typewriter. The default is modern. Finally, you can set the <colour>, yes that\u2019s color with a \u2018u\u2019, Moo is a British company, so they get to make the rules. When you start a print on demand company, you can spell it however you want. The <colour> element takes a 6 character hex value with a leading #.\n\n<design>\n\t...\n\t<text_collection>\n\t\t<minicard>\n\t\t\t<text_line>\n\t\t\t\t<id>(1-6)</id>\n\t\t\t\t<string>String, I must be less than 38 chars!</string>\n\t\t\t\t<bold>true</bold>\n\t\t\t\t<align>left</align>\n\t\t\t\t<font>modern</font>\n\t\t\t\t<colour>#ff0000</colour> \n\t\t\t</text_line>\n\t\t</minicard>\n\t</text_collection>\n</design>\n\nIf you combine all of this into a mini-card request you\u2019d get this example:\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?> \n<moo xsi:noNamespaceSchemaLocation=\"http://www.moo.com/xsd/api_0.7.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> \n\t<request>\n\t\t<version>0.7</version>\n\t\t<api_key>{YOUR API KEY HERE}</api_key>\n\t\t<call>build</call>\n\t\t<return_to>http://www.example.com/return.html</return_to>\n\t\t<fail_to>http://www.example.com/fail.html</fail_to>\n\t</request>\n\t<payload>\n\t\t<products>\n\t\t\t<product>\n\t\t\t\t<product_type>minicard</product_type>\n\t\t\t\t<designs>\n\t\t\t\t\t<design>\n\t\t\t\t\t\t<image>\n\t\t\t\t\t\t\t<url>http://example.com/image1.jpg</url>\n\t\t\t\t\t\t\t<type>variable</type>\n\t\t\t\t\t\t</image>\n\t\t\t\t\t\t<text_collection>\n\t\t\t\t\t\t\t<minicard>\n\t\t\t\t\t\t\t\t<text_line>\n\t\t\t\t\t\t\t\t<id>1</id>\n\t\t\t\t\t\t\t\t<string>String, I must be less than 38 chars!</string>\n\t\t\t\t\t\t\t\t<bold>true</bold>\n\t\t\t\t\t\t\t\t<align>left</align>\n\t\t\t\t\t\t\t\t<font>modern</font>\n\t\t\t\t\t\t\t\t<colour>#ff0000</colour> \n\t\t\t\t\t\t\t\t</text_line>\n\t\t\t\t\t\t\t</minicard>\n\t\t\t\t\t\t</text_collection>\n\t\t\t\t\t</design>\n\t\t\t\t</designs>\n\t\t\t</product>\n\t\t</products>\n\t</payload> \n</moo>\n\nNow you know how to construct the XML that describes what to print. Next, you need to know how to send it to Moo to make it happen!\n\nPosting to the API\n\nSo your XML is file ready to go. First thing we need to do is check it to make sure it\u2019s valid. Moo has created a simple validator where you paste in your XML, and it alerts you to problems.\n\nWhen you have a fully valid XML file, you\u2019ll want to send that to the Moo API. 
There are a few ways to do this, but the simplest is with an HTML form. \n\nThis is the sample code for an HTML form with a big \u201cBuy My Stickers\u201d button. Once you know that it is working, you can use all your existing HTML knowledge to style it up any way you like.\n\n<form method=\"POST\" action=\"http://www.moo.com/api/api.php\">\n\t<input type=\"hidden\" name=\"xml\" value=\"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <moo xsi:noNamespaceSchemaLocation=\"http://www.moo.com/xsd/api_0.7.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <request>....</request> <payload>...</payload> </moo> \"> \n\t<input type=\"submit\" name=\"submit\" value=\"Buy My Stickers\"/>\n</form>\n\nThis is just a basic <form> element that submits to the Moo API, http://www.moo.com/api/api.php, when someone clicks the button. There is a hidden input called \u201cxml\u201d which contains the value the XML file we created previously.\n\nFor those of you who need to \u201cview source\u201d to fully understand what\u2019s happening can see a working version and peek under the hood.\n\nUsing the API has advantages over uploading the images directly yourself. The images and text that you send via the API can be dynamic. Some companies, like Dopplr, have taken user profiles and dynamic data that changes every minute to generate customer stickers of places that you\u2019ve travelled to or mini-cards with a world map of all the cities you have visited. Every single customer has different travel plans and therefore different sets of stickers and mini-card maps. The API allows for the utmost current information to be printed, on demand, in real-time.\n\nGo forth and Moo\u2019ltiply\n\nSee, making an API call wasn\u2019t that hard was it? You are now 90% of the way to creating anything with the Moo API. With a bit of reading, you can learn that extra 10% and print any Moo product. Be on the lookout in 2009 for the official release of the 1.0 API with improvements and some extras that were not available when this article was written.\n\nThis article is released under the creative-commons attribution share-a-like license. That means you are free to re-distribute it, mash it up, translate it and otherwise re-using it ways the author never considered, in return he only asks you mention his name.\n\n\nThis work by Brian Suda is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License.", "year": "2008", "author": "Brian Suda", "author_slug": "briansuda", "published": "2008-12-19T00:00:00+00:00", "url": "https://24ways.org/2008/mooy-christmas/", "topic": "code"} {"rowid": 104, "title": "Sitewide Search On A Shoe String", "contents": "One of the questions I got a lot when I was building web sites for smaller businesses was if I could create a search engine for their site. Visitors should be able to search only this site and find things without the maintainer having to put \u201crelated articles\u201d or \u201cfeatured content\u201d links on every page by hand. \n\nBack when this was all fields this wasn\u2019t easy as you either had to write your own scraping tool, use ht://dig or a paid service from providers like Yahoo, Altavista or later on Google. In the former case you had to swallow the bitter pill of computing and indexing all your content and storing it in a database for quick access and in the latter it hurt your wallet.\n\nTimes have moved on and nowadays you can have the same functionality for free using Yahoo\u2019s \u201cBuild your own search service\u201d \u2013 BOSS. 
The cool thing about BOSS is that it allows for a massive amount of hits a day and you can mash up the returned data in any format you want. Another good feature of it is that it comes with JSON-P as an output format which makes it possible to use it without any server-side component!\n\nStarting with a working HTML form\n\nIn order to add a search to your site, you start with a simple HTML form which you can use without JavaScript. Most search engines will allow you to filter results by domain. In this case we will search \u201cbbc.co.uk\u201d. If you use Yahoo as your standard search, this could be: \n\n<form id=\"customsearch\" action=\"http://search.yahoo.com/search\">\n\t<div>\n\t\t<label for=\"p\">Search this site:</label>\n\t\t<input type=\"text\" name=\"p\" id=\"term\">\n\t\t<input type=\"hidden\" name=\"vs\" id=\"site\" value=\"bbc.co.uk\">\n\t\t<input type=\"submit\" value=\"go\">\n\t</div>\n</form>\n\nThe Google equivalent is:\n\n<form id=\"customsearch\" action=\"http://www.google.co.uk/search\">\n\t<div>\n\t\t<label for=\"p\">Search this site:</label>\n\t\t<input type=\"text\" name=\"as_q\" id=\"term\">\n\t\t<input type=\"hidden\" name=\"as_sitesearch\" id=\"site\" value=\"bbc.co.uk\">\n\t\t<input type=\"submit\" value=\"go\">\n\t</div>\n</form>\n\nIn any case make sure to use the ID term for the search term and site for the site, as this is what we are going to use for the script. To make things easier, also have an ID called customsearch on the form.\n\nTo use BOSS, you should get your own developer API for BOSS and replace the one in the demo code. There is click tracking on the search results to see how successful your app is, so you should make it your own.\n\nAdding the BOSS magic\n\nBOSS is a REST API, meaning you can use it in any HTTP request or in a browser by simply adding the right parameters to a URL. Say for example you want to search \u201cbbc.co.uk\u201d for \u201cchristmas\u201d all you need to do is open the following URL:\n\nhttp://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&format=xml&appid=YOUR-APPLICATION-ID\n\nTry it out and click it to see the results in XML. We don\u2019t want XML though, which is why we get rid of the format=xml parameter which gives us the same information in JSON:\n\nhttp://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&appid=YOUR-APPLICATION-ID\n\nJSON makes most sense when you can send the output to a function and immediately use it. For this to happen all you need is to add a callback parameter and the JSON will be wrapped in a function call. Say for example we want to call SITESEARCH.found() when the data was retrieved we can do it this way:\n\nhttp://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=YOUR-APPLICATION-ID\n\nYou can use this immediately in a script node if you want to. 
The following code would display the total amount of search results for the term christmas on bbc.co.uk as an alert:\n\n<script type=\"text/javascript\">\n\tvar SITESEARCH = {};\n\tSITESEARCH.found = function(o){\n\t\talert(o.ysearchresponse.totalhits);\n\t}\n</script>\n<script type=\"text/javascript\" src=\"http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=Kzv_lcHV34HIybw0GjVkQNnw4AEXeyJ9Rb1gCZSGxSRNrcif_HdMT9qTE1y9LdI-\">\n</script>\n\nHowever, for our example, we need to be a bit more clever with this.\n\nEnhancing the search form\n\n\n\n\nHere\u2019s the script that enhances a search form to show results below it.\n\nSITESEARCH = function(){\n\tvar config = {\n\t\tIDs:{\n\t\t\tsearchForm:'customsearch',\n\t\t\tterm:'term',\n\t\t\tsite:'site'\n\t\t},\n\t\tloading:'Loading results...',\n\t\tnoresults:'No results found.',\n\t\tappID:'YOUR-APP-ID',\n\t\tresults:20\n\t};\n\tvar form;\n\tvar out;\n\tfunction init(){\n\t\tif(config.appID === 'YOUR-APP-ID'){\n\t\t\talert('Please get a real application ID!');\n\t\t} else {\n\t\t\tform = document.getElementById(config.IDs.searchForm);\n\t\t\tif(form){\n\t\t\t\tform.onsubmit = function(){\n\t\t\t\t\tvar site = document.getElementById(config.IDs.site).value;\n\t\t\t\t\tvar term = document.getElementById(config.IDs.term).value;\n\t\t\t\t\tif(typeof site === 'string' && typeof term === 'string'){\n\t\t\t\t\t\tif(typeof out !== 'undefined'){\n\t\t\t\t\t\t\tout.parentNode.removeChild(out);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tout = document.createElement('p');\n\t\t\t\t\t\tout.appendChild(document.createTextNode(config.loading));\n\t\t\t\t\t\tform.appendChild(out);\n\t\t\t\t\t\tvar APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + \n\t\t\t\t\t\t\t\t\t\t\t\t\tterm + '?callback=SITESEARCH.found&sites=' + \n\t\t\t\t\t\t\t\t\t\t\t\t\tsite + '&count=' + config.results + \n\t\t\t\t\t\t\t\t\t\t\t\t\t'&appid=' + config.appID;\n\t\t\t\t\t\tvar s = document.createElement('script');\n\t\t\t\t\t\ts.setAttribute('src',APIurl);\n\t\t\t\t\t\ts.setAttribute('type','text/javascript');\n\t\t\t\t\t\tdocument.getElementsByTagName('head')[0].appendChild(s);\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t};\n\t\t\t}\n\t\t}\n\t};\n\tfunction found(o){\n\t\tvar list = document.createElement('ul');\n\t\tvar results = o.ysearchresponse.resultset_web;\n\t\tif(results){\n\t\t\tvar item,link,description;\n\t\t\tfor(var i=0,j=results.length;i<j;i++){\n\t\t\t\titem = document.createElement('li');\n\t\t\t\tlink = document.createElement('a');\n\t\t\t\tlink.setAttribute('href',results[i].clickurl);\n\t\t\t\tlink.innerHTML = results[i].title;\n\t\t\t\titem.appendChild(link);\n\t\t\t\tdescription = document.createElement('p');\n\t\t\t\tdescription.innerHTML = results[i]['abstract'];\n\t\t\t\titem.appendChild(description);\n\t\t\t\tlist.appendChild(item);\n\t\t\t}\n\t\t} else {\n\t\t\tlist = document.createElement('p');\n\t\t\tlist.appendChild(document.createTextNode(config.noresults));\n\t\t}\n\t\tform.replaceChild(list,out);\n\t\tout = list;\n\t};\n\treturn{\n\t\tconfig:config,\n\t\tinit:init,\n\t\tfound:found\n\t};\n}();\n\nOooohhhh scary code! 
Let\u2019s go through this one bit at a time:\n\nWe start by creating a module called SITESEARCH and give it an configuration object: \n\nSITESEARCH = function(){\n\tvar config = {\n\t\tIDs:{\n\t\t\tsearchForm:'customsearch',\n\t\t\tterm:'term',\n\t\t\tsite:'site'\n\t\t},\n\t\tloading:'Loading results...',\n\t\tappID:'YOUR-APP-ID',\n\t\tresults:20\n\t}\n\nConfiguration objects are a great idea to make your code easy to change and also to override. In this case you can define different IDs than the one agreed upon earlier, define a message to show when the results are loading, when there aren\u2019t any results, the application ID and the number of results that should be displayed.\n\nNote: you need to replace \u201cYOUR-APP-ID\u201d with the real ID you retrieved from BOSS, otherwise the script will complain! \n\nvar form;\nvar out;\nfunction init(){\n\tif(config.appID === 'YOUR-APP-ID'){\n\t\talert('Please get a real application ID!');\n\t} else {\n\nWe define form and out as variables to make sure that all the methods in the module have access to them. We then check if there was a real application ID defined. If there wasn\u2019t, the script complains and that\u2019s that. \n\nform = document.getElementById(config.IDs.searchForm);\nif(form){\n\tform.onsubmit = function(){\n\t\tvar site = document.getElementById(config.IDs.site).value;\n\t\tvar term = document.getElementById(config.IDs.term).value;\n\t\tif(typeof site === 'string' && typeof term === 'string'){ \n\nIf the application ID was a winner, we check if the form with the provided ID exists and apply an onsubmit event handler. The first thing we get is the values of the site we want to search in and the term that was entered and check that those are strings.\n\nif(typeof out !== 'undefined'){\n\tout.parentNode.removeChild(out);\n}\nout = document.createElement('p');\nout.appendChild(document.createTextNode(config.loading));\nform.appendChild(out); \n\nIf both are strings we check of out is undefined. We will create a loading message and subsequently the list of search results later on and store them in this variable. So if out is defined, it\u2019ll be an old version of a search (as users will re-submit the form over and over again) and we need to remove that old version.\n\nWe then create a paragraph with the loading message and append it to the form.\n\nvar APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + \n\t\t\t\t\t\t\t\t\t\t\t\tterm + '?callback=SITESEARCH.found&sites=' + \n\t\t\t\t\t\t\t\t\t\t\t\tsite + '&count=' + config.results + \n\t\t\t\t\t\t\t\t\t\t\t\t'&appid=' + config.appID;\n\t\t\t\t\tvar s = document.createElement('script');\n\t\t\t\t\ts.setAttribute('src',APIurl);\n\t\t\t\t\ts.setAttribute('type','text/javascript');\n\t\t\t\t\tdocument.getElementsByTagName('head')[0].appendChild(s);\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\t\t\t};\n\t\t}\n\t}\n};\n\nNow it is time to call the BOSS API by assembling a correct REST URL, create a script node and apply it to the head of the document. We return false to ensure the form does not get submitted as we want to stay on the page.\n\nNotice that we are using SITESEARCH.found as the callback method, which means that we need to define this one to deal with the data returned by the API.\n\nfunction found(o){\n\tvar list = document.createElement('ul');\n\tvar results = o.ysearchresponse.resultset_web;\n\tif(results){\n\t\tvar item,link,description;\n\nWe create a new list and then get the resultset_web array from the data returned from the API. 
If there aren\u2019t any results returned, this array will not exist which is why we need to check for it. Once we done that we can define three variables to repeatedly store the item title we want to display, the link to point to and the description of the link.\n\nfor(var i=0,j=results.length;i<j;i++){\n\titem = document.createElement('li');\n\tlink = document.createElement('a');\n\tlink.setAttribute('href',results[i].clickurl);\n\tlink.innerHTML = results[i].title;\n\titem.appendChild(link);\n\tdescription = document.createElement('p');\n\tdescription.innerHTML = results[i]['abstract'];\n\titem.appendChild(description);\n\tlist.appendChild(item);\n}\n\nWe then loop over the results array and assemble a list of results with the titles in links and paragraphs with the abstract of the site. Notice the bracket notation for abstract as abstract is a reserved word in JavaScript2 :).\n\n} else {\n\t\tlist = document.createElement('p');\n\t\tlist.appendChild(document.createTextNode(config.noresults));\n\t}\n\tform.replaceChild(list,out);\n\tout = list;\n}; \n\nIf there aren\u2019t any results, we define a paragraph with the no results message as list. In any case we replace the old out (the loading message) with the list and re-define out as the list. \n\nreturn{\n\t\tconfig:config,\n\t\tinit:init,\n\t\tfound:found\n\t};\n}();\n\nAll that is left to do is return the properties and methods we want to make public. In this case found needs to be public as it is accessed by the API return. We return init to make it accessible and config to allow implementers to override any of the properties.\n\nUsing the script\n\n\nIn order to use this script, all you need to do is to add it after the form in the document, override the API key with your own and call init():\n\n<form id=\"customsearch\" action=\"http://search.yahoo.com/search\">\n\t<div>\n\t\t<label for=\"p\">Search this site:</label>\n\t\t<input type=\"text\" name=\"p\" id=\"term\">\n\t\t<input type=\"hidden\" name=\"vs\" id=\"site\" value=\"bbc.co.uk\">\n\t\t<input type=\"submit\" value=\"go\">\n\t</div>\n</form>\n<script type=\"text/javascript\" src=\"boss-site-search.js\"></script>\n<script type=\"text/javascript\">\n\tSITESEARCH.config.appID = 'copy-the-id-you-know-to-get-where';\n\tSITESEARCH.init();\n</script>\n\nWhere to go from here\n\nThis is just a very simple example of what you can do with BOSS. You can define languages and regions, retrieve and display images and news and mix the results with other data sources before displaying them. One very cool feature is that by adding a view=keyterms parameter to the URL you can get the keywords of each of the results to drill deeper into the search. An example for this written in PHP is available on the YDN blog. For JavaScript solutions there is a handy wrapper called yboss available to help you go nuts.", "year": "2008", "author": "Christian Heilmann", "author_slug": "chrisheilmann", "published": "2008-12-04T00:00:00+00:00", "url": "https://24ways.org/2008/sitewide-search-on-a-shoestring/", "topic": "code"} {"rowid": 109, "title": "Geotag Everywhere with Fire Eagle", "contents": "A note from the editors: Since this article was written Yahoo! has retired the Fire Eagle service.\n \n \n \n Location, they say, is everywhere. Everyone has one, all of the time. But on the web, it\u2019s taken until this year to see the emergence of location in the applications we use and build.\n\nThe possibilities are broad. 
Increasingly, mobile phones provide SDKs to approximate your location wherever you are, browser extensions such as Loki and Mozilla\u2019s Geode provide browser-level APIs to establish your location from the proximity of wireless networks to your laptop. Yahoo\u2019s Brickhouse group launched Fire Eagle, an ambitious location broker enabling people to take their location from any of these devices or sources, and provide it to a plethora of web services. It enables you to take the location information that only your iPhone knows about and use it anywhere on the web.\n\nThat said, this is still a time of location as an emerging technology. Fire Eagle stores your location on the web (protected by application-specific access controls), but to try and give an idea of how useful and powerful your location can be \u2014 regardless of the services you use now \u2014 today\u2019s 24ways is going to build a bookmarklet to call up your location on demand, in any web application.\n\nLocation Support on the Web\n\nOver the past year, the number of applications implementing location features has increased dramatically. Plazes and Brightkite are both full featured social networks based around where you are, whilst Pownce rolled in Fire Eagle support to allow geotagging of all the content you post to their microblogging service. Dipity\u2019s beautiful timeline shows for you moving from place to place and Six Apart\u2019s activity stream for Movable Type started exposing your movements.\n\nThe number of services that hook into Fire Eagle will increase as location awareness spreads through the developer community, but you can use your location on other sites indirectly too.\n\nConsider Flickr. Now world renowned for their incredible mapping and places features, geotagging on Flickr started out as a grassroots extension of regular tagging. That same technique can be used to start rolling geotagging in any publishing platform you come across, for any kind of content. Machine-tags (geo:lat= and geo:lon=) and the adr and geo microformats can be used to enhance anything you write with location information.\n\nA crash course in avian inflammability\n\nFire Eagle is a location store. A broker between services and devices which provide location and those which consume it. It\u2019s a switchboard that controls which pieces of your location different applications can see and use, and keeps hidden anything you want kept private. A blog widget that displays your current location in public can be restricted to display just your current city, whilst a service that provides you with a list of the nearest ATMs will operate better with a precise street address. \n\nEven if your iPhone tells Fire Eagle exactly where you are, consuming applications only see what you want them to see. That\u2019s important for users to realise that they\u2019re in control, but also important for application developers to remember that you cannot rely on having super-accurate information available all the time. You need to build location aware applications which degrade gracefully, because users will provide fuzzier information \u2014 either through choice, or through less accurate sources.\n\nApplication specific permissions are controlled through an OAuth API. Each application has a unique key, used to request a second, user-specific key that permits access to that user\u2019s information. You store that user key and it remains valid until such a time as the user revokes your application\u2019s access. 
Unlike with passwords, these keys are unique per application, so revoking the access rights of one application doesn\u2019t break all the others.\n\nBuilding your first Fire Eagle app; Geomarklet\n\nFire Eagle\u2019s developer documentation can take you through examples of writing simple applications using server side technologies (PHP, Python). Here, we\u2019re going to write a client-side bookmarklet to make your location available in every site you use. It\u2019s designed to fast-track the experience of having location available everywhere on web, and show you how that can be really handy. Hopefully, this will set you thinking about how location can enhance the new applications you build in 2009.\n\nAn oddity of bookmarklets\n\nBookmarklets (or \u2018favlets\u2019, for those of an MSIE persuasion) are a strange environment to program in. Critically, you have no persistent storage available. As such, using token-auth APIs in a static environment requires you to build you application in a slightly strange way; authing yourself in advance and then hardcoding the keys into your script.\n\nGet started\n\nBefore you do anything else, go to http://fireeagle.com and log in, get set up if you need to and by all means take a look around. Take a look at the mobile updaters section of the application gallery and perhaps pick out an app that will update Fire Eagle from your phone or laptop.\n\nOnce that\u2019s done, you need to register for an application key in the developer section. Head straight to /developer/create and complete the form. Since you\u2019re building a standalone application, choose \u2018Auth for desktop applications\u2019 (rather than web applications), and select that you\u2019ll be \u2018accessing location\u2019, not updating.\n\nAt the end of this process, you\u2019ll have two application keys, a \u2018Consumer Key\u2019 and a \u2018Consumer Secret\u2019, which look like these:\n\n \n Consumer Key\n luKrM9U1pMnu\n Consumer Secret\n ZZl9YXXoJX5KLiKyVrMZffNEaBnxnd6M\n \n\nThese keys combined allow your application to make requests to Fire Eagle.\n\nNext up, you need to auth yourself; granting your new application permission to use your location. Because bookmarklets don\u2019t have local storage, you can\u2019t integrate the auth process into the bookmarklet itself \u2014 it would have no way of storing the returned key. Instead, I\u2019ve put together a simple web frontend through which you can auth with your application.\n\nHead to Auth me, Amadeus!, enter the application keys you just generated and hit \u2018Authorize with Fire Eagle\u2019. You\u2019ll be taken to the Fire Eagle website, just as in regular Fire Eagle applications, and after granting access to your app, be redirected back to Amadeus which will provide you your user tokens. These tokens are used in subsequent requests to read your location.\n\nAnd, skip to the end\u2026\n\nThe process of building the bookmarklet, making requests to Fire Eagle, rendering it to the page and so forth follows, but if you\u2019re the impatient type, you might like to try this out right now. Take your four API keys from above, and drag the following link to your Bookmarks Toolbar; it contains all the code described below. Before you can use it, you need to edit in your own API keys. 
Open your browser\u2019s bookmark editor and where you find text like \u2018YOUR_CONSUMER_KEY_HERE\u2019, swap in the corresponding key you just generated.\n\nGet Location\n\nBookmarklet Basics\n\nTo start on the bookmarklet code, set out a basic JavaScript module-pattern structure:\n\nvar Geomarklet = function() {\n\treturn ({\n\t\tcallback: function(json) {},\n\t\trun: function() {}\n\t});\n};\nGeomarklet.run();\n\nNext we\u2019ll add the keys obtained in the setup step, and also some basic Fire Eagle support objects:\n\nvar Geomarklet = function() {\n\tvar Keys = {\n\t\t\tconsumer_key: 'IuKrJUHU1pMnu',\n\t\t\tconsumer_secret: 'ZZl9YXXoJX5KLiKyVEERTfNEaBnxnd6M',\n\t\t\tuser_token: 'xxxxxxxxxxxx',\n\t\t\tuser_secret: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\n\t};\n\tvar LocationDetail = {\n\t\t\tEXACT: 0,\n\t\t\tPOSTAL: 1,\n\t\t\tNEIGHBORHOOD: 2,\n\t\t\tCITY: 3,\n\t\t\tREGION: 4,\n\t\t\tSTATE: 5,\n\t\t\tCOUNTRY: 6\n\t};\n\tvar index_offset;\n\treturn ({\n\t\tcallback: function(json) {},\n\t\trun: function() {}\n\t});\n};\nGeomarklet.run();\n\nThe Location Hierarchy\n\nA successful Fire Eagle query returns an object called the \u2018location hierarchy\u2019. Depending on the level of detail shared, the index of a particular piece of information in the array will vary. The LocationDetail object maps the array indices of each level in the hierarchy to something comprehensible, whilst the index_offset variable is an adjustment based on the detail of the result returned.\n\nThe location hierarchy object looks like this, providing a granular breakdown of a location, in human consumable and machine-friendly forms.\n\n\"user\": {\n\t\t\"location_hierarchy\": [{\n\t\t\t\"level\": 0,\n\t\t\t\"level_name\": \"exact\",\n\t\t\t\"name\": \"707 19th St, San Francisco, CA\",\n\t\t\t\"normal_name\": \"94123\",\n\t\t\t\"geometry\": {\n\t\t\t\t\t\"type\": \"Point\",\n\t\t\t\t\t\"coordinates\": [ - 0.2347530752, 67.232323]\n\t\t\t},\n\t\t\t\"label\": null,\n\t\t\t\"best_guess\": true,\n\t\t\t\"id\": ,\n\t\t\t\"located_at\": \"2008-12-18T00:49:58-08:00\",\n\t\t\t\"query\": \"q=707%2019th%20Street,%20Sf\"\n\t\t},\n\t\t{\n\t\t\t\t\"level\": 1,\n\t\t\t\t\"level_name\": \"postal\",\n\t\t\t\t\"name\": \"San Francisco, CA 94114\",\n\t\t\t\t\"normal_name\": \"12345\",\n\t\t\t\t\"woeid\": ,\n\t\t\t\t\"place_id\": \"\",\n\t\t\t\t\"geometry\": {\n\t\t\t\t\t\t\"type\": \"Polygon\",\n\t\t\t\t\t\t\"coordinates\": [],\n\t\t\t\t\t\t\"bbox\": []\n\t\t\t\t},\n\t\t\t\t\"label\": null,\n\t\t\t\t\"best_guess\": false,\n\t\t\t\t\"id\": 59358791,\n\t\t\t\t\"located_at\": \"2008-12-18T00:49:58-08:00\"\n\t\t},\n\t\t{\n\t\t\t\t\"level\": 2,\n\t\t\t\t\"level_name\": \"neighborhood\",\n\t\t\t\t\"name\": \"The Mission, San Francisco, CA\",\n\t\t\t\t\"normal_name\": \"The Mission\",\n\t\t\t\t\"woeid\": 23512048,\n\t\t\t\t\"place_id\": \"Y12JWsKbApmnSQpbQg\",\n\t\t\t\t\"geometry\": {\n\t\t\t\t\t\t\"type\": \"Polygon\",\n\t\t\t\t\t\t\"coordinates\": [],\n\t\t\t\t\t\t\"bbox\": []\n\t\t\t\t},\n\t\t\t\t\"label\": null,\n\t\t\t\t\"best_guess\": false,\n\t\t\t\t\"id\": 59358801,\n\t\t\t\t\"located_at\": \"2008-12-18T00:49:58-08:00\"\n\t\t\t},\n\t\t}\n\nIn this case the first object has a level of 0, so the index_offset is also 0.\n\nPrerequisites\n\nTo query Fire Eagle we call in some existing libraries to handle the OAuth layer and the Fire Eagle API call. 
Your bookmarklet will need to add the following scripts into the page:\n\n\n\tThe SHA1 encryption algorithm\n\tThe OAuth wrapper\n\tAn extension for the OAuth wrapper\n\tThe Fire Eagle wrapper itself\n\n\nWhen the bookmarklet is first run, we\u2019ll insert these scripts into the document. We\u2019re also inserting a stylesheet to dress up the UI that will be generated.\n\nIf you want to follow along any of the more mundane parts of the bookmarklet, you can download the full source code.\n\nRendering\n\nThis bookmarklet can be extended to support any formatting of your location you like, but for sake of example I\u2019m going to build three common formatters that you\u2019ll find useful for common location scenarios: Sites which already ask for your location; and in publishing systems that accept tags or HTML mark-up.\n\nAll the rendering functions are items in a renderers object, so they can be iterated through easily, making it trivial to add new formatting functions as your find new use cases (just add another function to the object).\n\nvar renderers = {\ngeotag: function(user) {\n\tif(LocationDetail.EXACT !== index_offset) {\n\t\t\treturn false;\n\t}\n\telse {\n\t\tvar coords =\n\t\t\tuser.location_hierarchy[LocationDetail.EXACT].geometry.coordinates;\n\t\treturn \"geo:lat=\" + coords[0] + \", geo:lon=\" + coords[1];\n\t}\n},\ncity: function(user) {\n\tif(LocationDetail.CITY < index_offset) {\n\t\treturn false;\n\t}\n\telse {\n\t\treturn user.location_hierarchy[LocationDetail.CITY - index_offset].name; \n\t}\t\t\t\t\t\t \n}\n\nYou should always fail gracefully, and in line with catering to users who choose not to share their location precisely, always check that the location has been returned at the level you require. Geotags are expected to be precise, so if an exact location is unavailable, returning false will tell the rendering aspect of the bookmarklet to ignore the function altogether.\n\nThese first two are quite simple, geotag returns geo:lat=-0.2347530752, geo:lon=67.232323 and city returns San Francisco, CA.\n\nThis final renderer creates a chunk of HTML using the adr and geo microformats, using all available aspects of the location hierarchy, and can be used to geotag any content you write on your blog or in comments:\n\nhtml: function(user) {\n\tvar geostring = '';\n\tvar adrstring = '';\n\tvar adr = [];\t\t \n\tadr.push('<p class=\"adr\">');\n\t// city\n\tif(LocationDetail.CITY >= index_offset) {\n\t\tadr.push(\n\t\t\t'\\n\t\t <span class=\"locality\">'\n\t\t+ user.location_hierarchy[LocationDetail.CITY-index_offset].normal_name\n\t\t+ '</span>,'\n\t\t);\n\t}\n\t// county\n\tif(LocationDetail.REGION >= index_offset) {\n\t\tadr.push(\n\t\t\t'\\n\t\t <span class=\"region\">' \n\t\t+ user.location_hierarchy[LocationDetail.REGION-index_offset].normal_name\n\t\t+ '</span>,'\n\t\t\t);\n\t}\n\t// locality\n\tif(LocationDetail.STATE >= index_offset) {\n\t\tadr.push(\n\t\t\t'\\n\t\t <span class=\"region\">'\n\t\t+ user.location_hierarchy[LocationDetail.STATE-index_offset].normal_name\n\t\t+ '</span>,'\n\t\t);\n\t}\n\t// country\n\tif(LocationDetail.COUNTRY >= index_offset) {\n\t\tadr.push(\n\t\t\t'\\n\t\t <span class=\"country-name\">'\n\t\t+ user.location_hierarchy[LocationDetail.COUNTRY-index_offset].normal_name\n\t\t+ '</span>'\n\t\t);\n\t}\n\t// postal\n\tif(LocationDetail.POSTAL >= index_offset) {\n\t\tadr.push(\n\t\t\t'\\n\t\t <span class=\"postal-code\">'\n\t\t+ user.location_hierarchy[LocationDetail.POSTAL-index_offset].normal_name\n\t\t+ 
'</span>,'\n\t\t);\n\t}\n\tadr.push('\\n</p>\\n');\n\tadrstring = adr.join('');\n\tif(LocationDetail.EXACT === index_offset) {\n\t\tvar coords = \n\t\t\tuser.location_hierarchy[LocationDetail.EXACT].geometry.coordinates;\n\t\tgeostring = '<p class=\"geo\">'\n\t\t\t+'\\n\t\t<span class=\"latitude\">'\n\t\t\t+ coords[0]\n\t\t\t+ '</span>;'\n\t\t\t+ '\\n\t\t <span class=\"longitude\">'\n\t\t\t+ coords[1]\n\t\t\t+ '</span>\\n</p>\\n';\n\t}\n\treturn (adrstring + geostring);\n}\n\nHere we check the availability of every level of location and build it into the adr and geo patterns as appropriate. Just as for the geotag function, if there\u2019s no exact location the geo markup won\u2019t be returned.\n\nFinally, there\u2019s a rendering method which creates a container for all this data, renders all the applicable location formats and then displays them in the page for a user to copy and paste. You can throw this together with DOM methods and some simple styling, or roll in some components from YUI or JQuery to handle drawing full featured overlays.\n\nYou can see this simple implementation for rendering in the full source code.\n\nMake the call\n\nWith a framework in place to render Fire Eagle\u2019s location hierarchy, the only thing that remains is to actually request your location. Having already authed through Amadeus earlier, that\u2019s as simple as instantiating the Fire Eagle JavaScript wrapper and making a single function call. It\u2019s a big deal that whilst a lot of new technologies like OAuth add some complexity and require new knowledge to work with, APIs like Fire Eagle are really very simple indeed.\n\nreturn {\n\trun: function() {\n\t\tinsert_prerequisites();\n\t\tsetTimeout(\n\t\t\tfunction() {\n\t\t\t\tvar fe = new FireEagle(\n\t\t\t\t\tKeys.consumer_key,\n\t\t\t\t\tKeys.consumer_secret,\n\t\t\t\t\tKeys.user_token,\n\t\t\t\t\tKeys.user_secret\n\t\t\t\t);\n\t\t\t\tvar script = document.createElement('script');\n\t\t\t\tscript.type = 'text/javascript';\n\t\t\t\tscript.src = fe.getUserUrl(\n\t\t\t\t\tFireEagle.RESPONSE_FORMAT.json,\n\t\t\t\t\t'Geomarklet.callback'\n\t\t\t\t);\n\t\t\t\tdocument.body.appendChild(script);\n\t\t\t},\n\t\t\t2000\n\t\t);\n\t},\n\tcallback: function(json) {\n\t\tif(json.rsp && 'fail' == json.rsp.stat) {\n\t\t\talert('Error ' + json.rsp.code + \": \" + json.rsp.message);\n\t\t}\n\t\telse {\n\t\t\tindex_offset = json.user.location_hierarchy[0].level;\t\t\t\t\t\t \n\t\t\tdraw_selector(json);\n\t\t}\n\t}\n};\n\nWe first insert the prerequisite scripts required for the Fire Eagle request to function, and to prevent trying to instantiate the FireEagle object before it\u2019s been loaded over the wire, the remaining instantiation and request is wrapped inside a setTimeout delay.\n\nWe then create the request URL, referencing the Geomarklet.callback callback function and then append the script to the document body \u2014 allowing a cross-domain request.\n\nThe callback itself is quite simple. Check for the presence and value of rsp.status to test for errors, and display them as required. If the request is successful set the index_offset \u2014 to adjust for the granularity of the location hierarchy \u2014 and then pass the object to the renderer.\n\nThe result? When Geomarklet.run() is called, your location from Fire Eagle is read, and each renderer displayed on the page in an easily copy and pasteable form, ready to be used however you need.\n\nDeploy\n\nThe final step is to convert this code into a long string for use as a bookmarklet. 
Easiest for Mac users is the JavaScript bundle in TextMate \u2014 choose Bundles: JavaScript: Copy as Bookmarklet to Clipboard. Then create a new \u2018Get Location\u2019 bookmark in your browser of choice and paste in.\n\nThose without TextMate can shrink their code down into a single line by first running their code through the JSLint tool (to ensure the code is free from errors and has all the required semi-colons) and then use a find-and-replace tool to remove line breaks from your code (or even run your code through JSMin to shrink it down).\n\nWith the bookmarklet created and added to your bookmarks bar, you can now call up your location on any page at all. Get a feel for a web where your location is just another reliable part of the browsing experience.\n\nWhere next?\n\nSo, the Geomarklet you\u2019ve been guided through is a pretty simple premise and pretty simple output. But from this base you can start to extend: Add code that will insert each of the location renderings directly into form fields, perhaps, or how about site-specific handlers to add your location tags into the correct form field in Wordpress or Tumblr? Paste in your current location to Google Maps? Or Flickr?\n\nGeomarklet gives you a base to start experimenting with location on your own pages and the sites you browse daily.\n\nThe introduction of consumer accessible geo to the web is an adventure of discovery; not so much discovering new locations, but discovering location itself.", "year": "2008", "author": "Ben Ward", "author_slug": "benward", "published": "2008-12-21T00:00:00+00:00", "url": "https://24ways.org/2008/geotag-everywhere-with-fire-eagle/", "topic": "code"} {"rowid": 110, "title": "Shiny Happy Buttons", "contents": "Since Mac OS X burst onto our screens, glossy, glassy, shiny buttons have been almost de rigeur, and have essentially, along with reflections and rounded corners, become a clich\u00e9 of Web 2.0 \u201cdesign\u201d. But if you can\u2019t beat \u2018em you\u2019d better join \u2018em. So, in this little contribution to our advent calendar, we\u2019re going to take a plain old boring HTML button, and 2.0 it up the wazoo. \n\nBut, here\u2019s the catch. We\u2019ll use no images, either in our HTML or our CSS. No sliding doors, no image replacement techniques. Just straight up, CSS, CSS3 and a bit of experimental CSS. And, it will be compatible with pretty much any browser (though with some progressive enhancement for those who keep up with the latest browsers).\n\nThe HTML\n\nWe\u2019ll start with our HTML.\n\n<button type=\"submit\">This is a shiny button</button>\n\nOK, so it\u2019s not shiny yet \u2013 but boy will it ever be.\n\nBefore styling, that\u2019s going to look like this.\n\nIronically, depending on the operating system and browser you are using, it may well be a shiny button already, but that\u2019s not the point. We want to make it shiny 2.0. Our mission is to make it look something like this\n\n\n\nIf you want to follow along at home keep in mind that depending on which browser you are using you may see fewer of the CSS effects we\u2019ve added to create the button. 
As of writing, only in Safari are all the effects we\u2019ll apply supported.\n\nTaking a look at our finished product, here\u2019s what we\u2019ve done to it:\n\n\n\tWe\u2019ve given the button some padding and a width.\n\tWe\u2019ve changed the text color, and given the text a drop shadow.\n\tWe\u2019ve given the button a border.\n\tWe\u2019ve given the button some rounded corners.\n\tWe\u2019ve given the button a drop shadow.\n\tWe\u2019ve given the button a gradient background.\n\n\nand remember, all without using any images.\n\nStyling the button\n\nSo, let\u2019s get to work.\n\nFirst, we\u2019ll add given the element some padding and a width:\n\nbutton {\n\tpadding: .5em;\n\twidth: 15em;\n}\n\nNext, we\u2019ll add the text color, and the drop shadow:\n\ncolor: #ffffff;\ntext-shadow: 1px 1px 1px #000;\n\nA note on text-shadow\n\nIf you\u2019ve not seen text-shadows before well, here\u2019s the quick back-story. Text shadow was introduced in CSS2, but only supported in Safari (version 1!) some years later. It was removed from CSS2.1, but returned in CSS3 (in the text module). It\u2019s now supported in Safari, Opera and Firefox (3.1). Internet Explorer has a shadow filter, but the syntax is completely different.\n\nSo, how do text-shadows work? The three length values specify respectively a horizontal offset, a vertical offset and a blur (the greater the number the more blurred the shadow will be), and finally a color value for the shadow.\n\nRounding the corners\n\nNow we\u2019ll add a border, and round the corners of the element:\n\nborder: solid thin #882d13;\n-webkit-border-radius: .7em;\n-moz-border-radius: .7em;\nborder-radius: .7em;\n\nHere, we\u2019ve used the same property in three slightly different forms. We add the browser specific prefix for Webkit and Mozilla browsers, because right now, both of these browsers only support border radius as an experimental property. We also add the standard property name, for browsers that do support the property fully in the future. \n\nThe benefit of the browser specific prefix is that if a browser only partly supports a given property, we can easily avoid using the property with that browser simply by not adding the browser specific prefix. At present, as you might guess, border-radius is supported in Safari and Firefox, but in each the relevant prefix is required.\n\nborder-radius takes a length value, such as pixels. (It can also take two length values, but that\u2019s for another Christmas.) In this case, as with padding, I\u2019ve used ems, which means that as the user scales the size of text up and down, the radius will scale as well. You can test the difference by making the radius have a value of say 5px, and then zooming up and down the text size. \n\nWe\u2019re well and truly on the way now. All we need to do is add a shadow to the button, and then a gradient background.\n\nIn CSS3 there\u2019s the box-shadow property, currently only supported in Safari 3. It\u2019s very similar to text-shadow \u2013 you specify a horizontal and vertical offset, a blur value and a color.\n\n-webkit-box-shadow: 2px 2px 3px #999; \nbox-shadow: 2px 2px 2px #bbb;\n\nOnce more, we require the \u201cexperimental\u201d -webkit- prefix, as Safari\u2019s support for this property is still considered by its developers to be less than perfect.\n\nGradient Background\n\nSo, all we have left now is to add our shiny gradient effect. Now of course, people have been doing this kind of thing with images for a long time. But if we can avoid them all the better. 
Smaller pages, faster downloads, and more scalable designs that adapt better to the user\u2019s font size preference. But how can we add a gradient background without an image?\n\nHere we\u2019ll look at the only property that is not as yet part of the CSS standard \u2013 Apple\u2019s gradient function for use anywhere you can use images with CSS (in this case backgrounds). In essence, this takes SVG gradients, and makes them available via CSS syntax.\n\nHere\u2019s what the property and its value looks like:\n\nbackground-image: -webkit-gradient(linear, left top, left bottom, from(#e9ede8), to(#ce401c),color-stop(0.4, #8c1b0b));\n\nZooming in on the gradient function, it has this basic form:\n\n-webkit-gradient(type, point, point, from(color), to(color),color-stop(where, color));\n\nWhich might look complicated, but is less so than at first glance.\n\nThe name of the function is gradient (and in this case, because it is an experimental property, we use the -webkit- prefix).\n\nYou might not have seen CSS functions before, but there are others, including the attr() function, used with generated content. A function returns a value that can be used as a property value \u2013 here we are using it as a background image.\n\nNext we specify the type of the gradient. Here we have a linear gradient, and there are also radial gradients. \n\nAfter that, we specify the start and end points of the gradient \u2013 in our case the top and bottom of the element, in a vertical line. \n\nWe then specify the start and end colors \u2013 and finally one stop color, located at 40% of the way down the element. Together, this creates a gradient that smoothly transitions from the start color in the top, vertically to the stop color, then smoothly transitions to the end color.\n\nThere\u2019s one last thing. What color will the background of our button be if the browser doesn\u2019t support gradients? It will be white (or possibly some default color for buttons). Which may make the text difficult or impossible to read. 
So, we\u2019ll add a background color as well (see why the validator is always warning you when a color but not a background color is specified for an element?).\n\nIf we put it all together, here\u2019s what we have:\n\nbutton {\n\twidth: 15em;\n\tpadding: .5em;\n\tcolor: #ffffff;\n\ttext-shadow: 1px 1px 1px #000;\n\tborder: solid thin #882d13;\n\t-webkit-border-radius: .7em;\n\t-moz-border-radius: .7em;\n\tborder-radius: .7em;\n\t-webkit-box-shadow: 2px 2px 3px #999; \n\tbox-shadow: 2px 2px 2px #bbb;\n\tbackground-color: #ce401c;\n\tbackground-image: -webkit-gradient(linear, left top, left bottom, from(#e9ede8), to(#ce401c),color-stop(0.4, #8c1b0b));\n}\n\nWhich looks like this in various browsers:\n\nIn Safari (3)\n\n\n\nIn Firefox 3.1 (3.0 supports border-radius but not text-shadow)\n\n\n\nIn Opera 10\n\n\n\nand of course in Internet Explorer (version 8 shown here)\n\n\n\nBut it looks different in different browsers\n\nYes, it does look different in different browsers, but we all know the answer to the question \u201cdo web sites need to look the same in every browser?\u201c.\n\nEven if you really think sites should look the same in every browser, hopefully this little tutorial has whet your appetite for what CSS3 and experimental CSS that\u2019s already supported in widely used browsers (and we haven\u2019t even touched on animations and similar effects!).\n\nI hope you\u2019ve enjoyed out little CSSMas present, and look forward to seeing your shiny buttons everywhere on the web.\n\nOh, and there\u2019s just a bit of homework \u2013 your job is to use the :hover selector, and make a gradient in the hover state.", "year": "2008", "author": "John Allsopp", "author_slug": "johnallsopp", "published": "2008-12-18T00:00:00+00:00", "url": "https://24ways.org/2008/shiny-happy-buttons/", "topic": "code"} {"rowid": 116, "title": "The IE6 Equation", "contents": "It is the destiny of one browser to serve as the nemesis of web developers everywhere. At the birth of the Web Standards movement, that role was played by Netscape Navigator 4; an outdated browser that refused to die. Its tenacious existence hampered the adoption of modern standards. Today that role is played by Internet Explorer 6.\n\nThere\u2019s a sensation that I\u2019m sure you\u2019re familiar with. It\u2019s a horrible mixture of dread and nervousness. It\u2019s the feeling you get when\u2014after working on a design for a while in a standards-compliant browser like Firefox, Safari or Opera\u2014you decide that you can no longer put off the inevitable moment when you must check the site in IE6. Fingers are crossed, prayers are muttered, but alas, to no avail. The nemesis browser invariably screws something up.\n\nWhat do you do next? If the differences in IE6 are minor, you could just leave it be. After all, websites don\u2019t need to look exactly the same in all browsers. But if there are major layout issues and a significant portion of your audience is still using IE6, you\u2019ll probably need to roll up your sleeves and start fixing the problems.\n\nA common approach is to quarantine IE6-specific CSS in a separate stylesheet. This stylesheet can then be referenced from the HTML document using conditional comments like this:\n\n<!--[if lt IE 7]>\n<link rel=\"stylesheet\" href=\"ie6.css\" type=\"text/css\" media=\"screen\" />\n<![endif]-->\n\nThat stylesheet will only be served up to Internet Explorer where the version number is less than 7.\n\nYou can put anything inside a conditional comment. 
You could put a script element in there. So as well as serving up browser-specific CSS, it\u2019s possible to serve up browser-specific JavaScript.\n\nA few years back, before Microsoft released Internet Explorer 7, JavaScript genius Dean Edwards wrote a script called IE7. This amazing piece of code uses JavaScript to make Internet Explorer 5 and 6 behave like a standards-compliant browser. Dean used JavaScript to bootstrap IE\u2019s CSS support.\n\nBecause the script is specifically targeted at Internet Explorer, there\u2019s no point in serving it up to other browsers. Conditional comments to the rescue:\n\n<!--[if lt IE 7]>\n<script src=\"http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js\" type=\"text/javascript\"></script>\n<![endif]-->\n\nStandards-compliant browsers won\u2019t fetch the script. Users of IE6, on the other hand, will pay a kind of bad browser tax by having to download the JavaScript file.\n\nSo when should you develop an IE6-specific stylesheet and when should you just use Dean\u2019s JavaScript code? This is the question that my co-worker Natalie Downe and I set out to answer one morning at Clearleft. We realised that in order to answer that question you need to first answer two other questions: how much time does it take to develop for IE6, and how much of your audience is using IE6?\n\nLet\u2019s say that t represents the total development time. Let t6 represent the portion of that time you spend developing for IE6. If your total audience is a, then a6 is the portion of your audience using IE6. With some algebraic help from our mathematically minded co-worker Cennydd Bowles, Natalie and I came up with the following equation to calculate the percentage likelihood that you should be using Dean\u2019s IE7 script:\n\n\n\np = 50 [ log ( at6 / ta6 ) + 1 ]\n\nTry plugging in your own numbers. If you spend a lot of time developing for IE6 and only a small portion of your audience is using that browser, you\u2019ll get a very high number out of the equation; you should probably use the IE7 script. But if you only spend a little time developing for IE6 and a significant portion of your audience is still using that browser, you\u2019ll get a very small value for p; you might as well write an IE6-specific stylesheet.\n\nOf course this equation is somewhat disingenuous. While it\u2019s entirely possible to research the percentage of your audience still using IE6, it\u2019s not so easy to figure out how much of your development time will be spent developing for that one browser. You can\u2019t really know until you\u2019ve already done the development, by which time the equation is irrelevant.\n\nInstead of using the equation, you could try imposing a limit on how long you will spend developing for IE6. Get your site working in standards-compliant browsers first, then give yourself a time limit to get it working in IE6. If you can\u2019t solve all the issues in that time limit, switch over to using Dean\u2019s script. You could even make the time limit directly proportional to the percentage of your audience using IE6. If 20% of your audience is still using IE6 and you\u2019ve just spent five days getting the site working in standards-compliant browsers, give yourself one day to get it working in IE6. But if 50% of your audience is still using IE6, be prepared to spend 2.5 days wrestling with your nemesis.\n\nAll of these different methods for dealing with IE6 demonstrate that there\u2019s no one single answer that works for everyone. 
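As an aside, if you want to play with the equation itself it only takes a few lines of JavaScript \u2013 a rough sketch, assuming a base-ten log and with sample figures that are pure invention:\n\n// p = 50 [ log ( at6 / ta6 ) + 1 ]\n// t = total development time, t6 = time spent on IE6\n// a = total audience, a6 = the IE6 portion of that audience\nfunction ie7ScriptLikelihood(t, t6, a, a6) {\n\treturn 50 * (Math.log((a * t6) / (t * a6)) / Math.LN10 + 1);\n}\nie7ScriptLikelihood(10, 2, 100, 5); // two days out of ten, five users in a hundred: roughly 80, so reach for the script\n\nNone of that changes the conclusion, though: the equation, the hard time limit and the sliding time budget are all just different ways of coping with the same browser.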
They also highlight a problem with the current debate around dealing with IE6. There\u2019s no shortage of blog posts, articles and even entire websites discussing when to drop support for IE6. But very few of them take the time to define what they mean by \u201csupport.\u201d This isn\u2019t a binary issue. There is no Boolean answer. Instead, there\u2019s a sliding scale of support:\n\n\n\tBlock IE6 users from your site.\n\tDevelop with web standards and don\u2019t spend any development time testing in IE6.\n\tUse the Dean Edwards IE7 script to bootstrap CSS support in IE6.\n\tWrite an IE6 stylesheet to address layout issues.\n\tMake your site look exactly the same in IE6 as in any other browser.\n\n\nEach end of that scale is extreme. I don\u2019t think that anybody should be actively blocking any browser but neither do I think that users of an outdated browser should get exactly the same experience as users of a more modern browser. The real meanings of \u201csupporting\u201d or \u201cnot supporting\u201d IE6 lie somewhere in-between those extremes.\n\nJust as I think that semantics are important in markup, they are equally important in our discussion of web development. So let\u2019s try to come up with some better terms than using the catch-all verb \u201csupport.\u201d If you say in your client contract that you \u201csupport\u201d IE6, define exactly what that means. If you find yourself in a discussion about \u201cdropping support\u201d for IE6, take the time to explain what you think that entails.\n\nThe web developers at Yahoo! are on the right track with their concept of graded browser support. I\u2019m interested in hearing more ideas of how to frame this discussion. If we can all agree to use clear and precise language, we stand a better chance of defeating our nemesis.", "year": "2008", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2008-12-08T00:00:00+00:00", "url": "https://24ways.org/2008/the-ie6-equation/", "topic": "code"} {"rowid": 117, "title": "The First Tool You Reach For", "contents": "Microsoft recently announced that Internet Explorer 8 will be released in the first half of 2009. Compared to the standards support of other major browsers, IE8 will not be especially great, but it will finally catch up with the state of the art in one specific area: support for CSS tables. This milestone has the potential to trigger an important change in the way you approach web design.\n\nTo show you just how big a difference CSS tables can make, think about how you might code a fluid, three-column layout from scratch. Just to make your life more difficult, give it one fixed-width column, with a background colour that differs from the rest of the page. Ready? Go!\n\nOkay, since you\u2019re the sort of discerning web designer who reads 24ways, I\u2019m going to assume you at least considered doing this without using HTML tables for the layout. If you\u2019re especially hardcore, I imagine you began thinking of CSS floats, negative margins, and faux columns. If you did, colour me impressed!\n\nNow admit it: you probably also gave an inward sigh about the time it would take to figure out the math on the negative margin overlaps, check for dropped floats in Internet Explorer and generally wrestle each of the major browsers into giving you what you want. If after all that you simply gave up and used HTML tables, I can\u2019t say I blame you.\n\nThere are plenty of professional web designers out there who still choose to use HTML tables as their main layout tool. 
Sure, they may know that users with screen readers get confused by inappropriate use of tables, but they have a job to do, and they want tools that will make that job easy, not difficult.\n\nNow let me show you how to do it with CSS tables. First, we have a div element for each of our columns, and we wrap them all in another two divs:\n\n<div class=\"container\">\n\t<div>\n\t\t<div id=\"menu\">\n\t\t\u22ee\n\t\t</div>\n\t\t<div id=\"content\">\n\t\t\u22ee\n\t\t</div>\n\t\t<div id=\"sidebar\">\n\t\t\u22ee\n\t\t</div>\n\t</div>\n</div>\n\nDon\u2019t sweat the \u201cdiv clutter\u201d in this code. Unlike tables, divs have no semantic meaning, and can therefore be used liberally (within reason) to provide hooks for the styles you want to apply to your page.\n\nUsing CSS, we can set the outer div to display as a table with collapsed borders (i.e. adjacent cells share a border) and a fixed layout (i.e. cell widths unaffected by their contents):\n\n.container {\n\tdisplay: table;\n\tborder-collapse: collapse;\n\ttable-layout: fixed;\n}\n\nWith another two rules, we set the middle div to display as a table row, and each of the inner divs to display as table cells:\n\n.container > div {\n\tdisplay: table-row;\n}\n.container > div > div {\n\tdisplay: table-cell;\n}\n\nFinally, we can set the widths of the cells (and of the table itself) directly:\n\n.container {\n\twidth: 100%;\n}\n#menu {\n\twidth: 200px;\n}\n#content {\n\twidth: auto;\n}\n#sidebar {\n\twidth: 25%;\n}\n\nAnd, just like that, we have a rock solid three-column layout, ready to be styled to your own taste, like in this example:\n\n\n\nThis example will render perfectly in reasonably up-to-date versions of Firefox, Safari and Opera, as well as the current beta release of Internet Explorer 8.\n\nCSS tables aren\u2019t only useful for multi-column page layout; they can come in handy in most any situation that calls for elements to be displayed side-by-side on the page. Consider this simple login form layout:\n\n\n\nThe incantation required to achieve this layout using CSS floats may be old hat to you by now, but try to teach it to a beginner, and watch his eyes widen in horror at the hoops you have to jump through (not to mention the assumptions you have to build into your design about the length of the form labels).\n\nHere\u2019s how to do it with CSS tables:\n\n<form action=\"/login\" method=\"post\">\n\t<div>\n\t\t<div>\n\t\t\t<label for=\"username\">Username:</label>\n\t\t\t<span class=\"input\"><input type=\"text\" name=\"username\" id=\"username\"/></span>\n\t\t</div>\n\t\t<div>\n\t\t\t<label for=\"userpass\">Password:</label>\n\t\t\t<span class=\"input\"><input type=\"password\" name=\"userpass\" id=\"userpass\"/></span>\n\t\t</div>\n\t\t<div class=\"submit\">\n\t\t\t<label for=\"login\"></label>\n\t\t\t<span class=\"input\"><input type=\"submit\" name=\"login\" id=\"login\" value=\"Login\"/></span>\n\t\t</div>\n\t</div>\n</form>\n\nThis time, we\u2019re using a mixture of divs and spans as semantically transparent styling hooks. Let\u2019s look at the CSS code.\n\nFirst, we set up the outer div to display as a table, the inner divs to display as table rows, and the labels and spans as table cells (with right-aligned text):\n\nform > div {\n\tdisplay: table;\n}\nform > div > div {\n\tdisplay: table-row;\n}\nform label,\nform span {\n\tdisplay: table-cell;\n\ttext-align: right;\n}\n\nWe want the first column of the table to be wide enough to accommodate our labels, but no wider. 
With CSS float techniques, we had to guess at what that width was likely to be, and adjust it whenever we changed our form labels. With CSS tables, we can simply set the width of the first column to something very small (1em), and then use the white-space property to force the column to the required width:\n\nform label {\n\twhite-space: nowrap;\n\twidth: 1em;\n}\n\nTo polish off the layout, we\u2019ll make our text and password fields occupy the full width of the table cells that contain them:\n\ninput[type=text],\ninput[type=password] {\n\twidth: 100%;\n}\n\nThe rest is margins, padding and borders to get the desired look. Check out the finished example.\n\nAs the first tool you reach for when approaching any layout task, CSS tables make a lot more sense to your average designer than the cryptic incantations called for by CSS floats. When IE8 is released and all major browsers support CSS tables, we can begin to gradually deploy CSS table-based layouts on sites that are more and more mainstream.\n\nIn our new book, Everything You Know About CSS Is Wrong!, Rachel Andrew and I explore in much greater detail how CSS tables work as a page layout tool in the real world. CSS tables have their quirks just like floats do, but they don\u2019t tend to affect common layout tasks, and the workarounds tend to be less fiddly too. Check it out, and get ready for the next big step forward in web design with CSS.", "year": "2008", "author": "Kevin Yank", "author_slug": "kevinyank", "published": "2008-12-13T00:00:00+00:00", "url": "https://24ways.org/2008/the-first-tool-you-reach-for/", "topic": "code"} {"rowid": 121, "title": "Hide And Seek in The Head", "contents": "If you want your JavaScript-enhanced pages to remain accessible and understandable to scripted and noscript users alike, you have to think before you code. Which functionalities are required (ie. should work without JavaScript)? Which ones are merely nice-to-have (ie. can be scripted)? You should only start creating the site when you\u2019ve taken these decisions.\n\nSpecial HTML elements\n\nOnce you have a clear idea of what will work with and without JavaScript, you\u2019ll likely find that you need a few HTML elements for the noscript version only.\n\nTake this example: A form has a nifty bit of Ajax that automatically and silently sends a request once the user enters something in a form field. However, in order to preserve accessibility, the user should also be able to submit the form normally. So the form should have a submit button in noscript browsers, but not when the browser supports sufficient JavaScript.\n\nSince the button is meant for noscript browsers, it must be hard-coded in the HTML:\n\n<input type=\"submit\" value=\"Submit form\" id=\"noScriptButton\" />\n\nWhen JavaScript is supported, it should be removed:\n\nvar checkJS = [check JavaScript support];\nwindow.onload = function () {\n\tif (!checkJS) return;\n\tdocument.getElementById('noScriptButton').style.display = 'none';\n}\n\nProblem: the load event\n\nAlthough this will likely work fine in your testing environment, it\u2019s not completely correct. What if a user with a modern, JavaScript-capable browser visits your page, but has to wait for a huge graphic to load? The load event fires only after all assets, including images, have been loaded. So this user will first see a submit button, but then all of a sudden it\u2019s removed. 
That\u2019s potentially confusing.\n\nFortunately there\u2019s a simple solution: play a bit of hide and seek in the <head>:\n\nvar checkJS = [check JavaScript support];\nif (checkJS) {\n\tdocument.write('<style>#noScriptButton{display: none}</style>');\n}\n\nFirst, check if the browser supports enough JavaScript. If it does, document.write an extra <style> element that hides the button.\n\nThe difference with the previous technique is that the document.write command is outside any function, and is therefore executed while the JavaScript is being parsed. Thus, the #noScriptButton{display: none} rule is written into the document before the actual HTML is received.\n\nThat\u2019s exactly what we want. If the rule is already present at the moment the HTML for the submit button is received and parsed, the button is hidden immediately. Even if the user (and the load event) have to wait for a huge image, the button is already hidden, and both scripted and noscript users see the interface they need, without any potentially confusing flashes of useless content.\n\nIn general, if you want to hide content that\u2019s not relevant to scripted users, give the hide command in CSS, and make sure it\u2019s given before the HTML element is loaded and parsed.\n\nAlternative\n\nSome people won\u2019t like to use document.write. They could also add an empty <link /> element to the <head> and give it an href attribute once the browser\u2019s JavaScript capabilities have been evaluated. The <link /> element is made to refer to a style sheet that contains the crucial #noScriptButton{display: none}, and everything works fine.\n\nImportant note: The script needs access to the <link />, and the only way to ensure that access is to include the empty <link /> element before your <script> tag.", "year": "2006", "author": "Peter-Paul Koch", "author_slug": "ppk", "published": "2006-12-06T00:00:00+00:00", "url": "https://24ways.org/2006/hide-and-seek-in-the-head/", "topic": "code"} {"rowid": 124, "title": "Writing Responsible JavaScript", "contents": "Without a doubt, JavaScript has been making something of a comeback in the last year. If you\u2019re involved in client-side development in any way at all, chances are that you\u2019re finding yourself writing more JavaScript now than you have in a long time.\n\nIf you learned most of your JavaScript back when DHTML was all the rage and before DOM Scripting was in vogue, there have been some big shifts in the way scripts are written. Most of these are in the way event handlers are assigned and functions declared. Both of these changes are driven by the desire to write scripts that are responsible page citizens, both in not tying behaviour to content and in taking care not to conflict with other scripts. I thought it may be useful to look at some of these more responsible approaches to learn how to best write scripts that are independent of the page content and are safely portable between different applications.\n\nEvent Handling\n\nBack in the heady days of Web 1.0, if you wanted to have an object on the page react to something like a click, you would simply go ahead and attach an onclick attribute. This was easy and understandable, but much like the font tag or the style attribute, it has the downside of mixing behaviour or presentation in with our content. As we\u2019re learned with CSS, there are big benefits in keeping those layers separate. 
Hey, if it works for CSS, it should work for JavaScript too.\n\nJust like with CSS, instead of adding an attribute to our element within the document, the more responsible way to do that is to look for the item from your script (like CSS does with a selector) and then assign the behaviour to it. To give an example, take this oldskool onclick use case:\n\n<a id=\"anim-link\" href=\"#\" onclick=\"playAnimation()\">Play the animation</a>\n\nThis could be rewritten by removing the onclick attribute, and instead doing the following from within your JavaScript.\n\ndocument.getElementById('anim-link').onclick = playAnimation;\n\nIt\u2019s all in the timing\n\nOf course, it\u2019s never quite that easy. To be able to attach that onclick, the element you\u2019re targeting has to exist in the page, and the page has to have finished loading for the DOM to be available. This is where the onload event is handy, as it fires once everything has finished loading. Common practise is to have a function called something like init() (short for initialise) that sets up all these event handlers as soon as the page is ready.\n\nBack in the day we would have used the onload attibute on the <body> element to do this, but of course what we really want is:\n\nwindow.onload = init;\n\nAs an interesting side note, we\u2019re using init here rather than init() so that the function is assigned to the event. If we used the parentheses, the init function would have been run at that moment, and the result of running the function (rather than the function itself) would be assigned to the event. Subtle, but important.\n\nAs is becoming apparent, nothing is ever simple, and we can\u2019t just go around assigning our initialisation function to window.onload. What if we\u2019re using other scripts in the page that might also want to listen out for that event? Whichever script got there last would overwrite everything that came before it. To manage this, we need a script that checks for any existing event handlers, and adds the new handler to it. Most of the JavaScript libraries have their own systems for doing this. If you\u2019re not using a library, Simon Willison has a good stand-alone example\n\nfunction addLoadEvent(func) {\n\tvar oldonload = window.onload;\n\tif (typeof window.onload != 'function') {\n\t\twindow.onload = func;\n\t} else {\n\t\twindow.onload = function() {\n\t\t\tif (oldonload) {\n\t\t\t\toldonload();\n\t\t\t}\n\t\t\tfunc();\n\t\t}\n\t}\n}\n\nObviously this is just a toe in the events model\u2019s complex waters. Some good further reading is PPK\u2019s Introduction to Events.\n\nCarving out your own space\n\nAnother problem that rears its ugly head when combining multiple scripts on a single page is that of making sure that the scripts don\u2019t conflict. One big part of that is ensuring that no two scripts are trying to create functions or variables with the same names. Reusing a name in JavaScript just over-writes whatever was there before it.\n\nWhen you create a function in JavaScript, you\u2019ll be familiar with doing something like this.\n\nfunction foo() {\n\t... goodness ...\n}\n\nThis is actually just creating a variable called foo and assigning a function to it. It\u2019s essentially the same as the following.\n\nvar foo = function() {\n\t... goodness ...\n}\n\nThis name foo is by default created in what\u2019s known as the \u2018global namespace\u2019 \u2013 the general pool of variables within the page. 
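(A quick aside before we worry about that pool of names: to actually use Simon\u2019s addLoadEvent helper from above, you just pass it your function \u2013 a tiny usage sketch, where init is the setup function mentioned earlier and initSomethingElse is a made-up name standing in for any other script\u2019s setup:\n\naddLoadEvent(init);\naddLoadEvent(initSomethingElse);\n// both run once the page has loaded, and neither overwrites the other\n\nNow, back to that global namespace and our function called foo.)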
You can quickly see that if two scripts use foo as a name, they will conflict because they\u2019re both creating those variables in the global namespace.\n\nA good solution to this problem is to add just one name into the global namespace, make that one item either a function or an object, and then add everything else you need inside that. This takes advantage of JavaScript\u2019s variable scoping to contain you mess and stop it interfering with anyone else.\n\nCreating An Object\n\nSay I was wanting to write a bunch of functions specifically for using on a site called \u2018Foo Online\u2019. I\u2019d want to create my own object with a name I think is likely to be unique to me.\n\nvar FOOONLINE = {};\n\nWe can then start assigning functions are variables to it like so:\n\nFOOONLINE.message = 'Merry Christmas!';\nFOOONLINE.showMessage = function() {\n\talert(this.message);\n};\n\nCalling FOOONLINE.showMessage() in this example would alert out our seasonal greeting. The exact same thing could also be expressed in the following way, using the object literal syntax.\n\nvar FOOONLINE = {\n\tmessage: 'Merry Christmas!',\n\tshowMessage: function() {\n\t\talert(this.message);\n\t}\n};\n\nCreating A Function to Create An Object\n\nWe can extend this idea bit further by using a function that we run in place to return an object. The end result is the same, but this time we can use closures to give us something like private methods and properties of our object.\n\nvar FOOONLINE = function(){\n\tvar message = 'Merry Christmas!';\n\treturn {\n\t\tshowMessage: function(){\n\t\t\talert(message);\n\t\t}\n\t}\n}();\n\nThere are two important things to note here. The first is the parentheses at the end of line 10. Just as we saw earlier, this runs the function in place and causes its result to be assigned. In this case the result of our function is the object that is returned at line 4.\n\nThe second important thing to note is the use of the var keyword on line 2. This ensures that the message variable is created inside the scope of the function and not in the global namespace. Because of the way closure works (which if you\u2019re not familiar with, just suspend your disbelief for a moment) that message variable is visible to everything inside the function but not outside. Trying to read FOOONLINE.message from the page would return undefined.\n\nThis is useful for simulating the concept of private class methods and properties that exist in other programming languages. I like to take the approach of making everything private unless I know it\u2019s going to be needed from outside, as it makes the interface into your code a lot clearer for someone else to read. \n\nAll Change, Please\n\nSo that was just a whistle-stop tour of a couple of the bigger changes that can help to make your scripts better page citizens. I hope it makes useful Sunday reading, but obviously this is only the tip of the iceberg when it comes to designing modular, reusable code.\n\nFor some, this is all familiar ground already. If that\u2019s the case, I encourage you to perhaps submit a comment with any useful resources you\u2019ve found that might help others get up to speed. 
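Oh, and one last tiny sketch to make the closure point above concrete \u2013 this is what you could expect to see if you poked at the FOOONLINE object from elsewhere in the page (assuming the function-that-returns-an-object version above):\n\nFOOONLINE.showMessage(); // alerts 'Merry Christmas!'\nalert(FOOONLINE.message); // alerts 'undefined' - message is private to the closure\nalert(typeof FOOONLINE.showMessage); // 'function' - only what was returned is public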
Ultimately it\u2019s in all of our interests to make sure that all our JavaScript interoperates well \u2013 share your tips.", "year": "2006", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2006-12-10T00:00:00+00:00", "url": "https://24ways.org/2006/writing-responsible-javascript/", "topic": "code"} {"rowid": 126, "title": "Intricate Fluid Layouts in Three Easy Steps", "contents": "The Year of the Script may have drawn attention away from CSS but building fluid, multi-column, cross-browser CSS layouts can still be as unpleasant as a lump of coal. Read on for a worry-free approach in three quick steps.\n\nThe layout system I developed, YUI Grids CSS, has three components. They can be used together as we\u2019ll see, or independently.\n\nThe Three Easy Steps\n\n\n\tChoose fluid or fixed layout, and choose the width (in percents or pixels) of the page.\n\tChoose the size, orientation, and source-order of the main and secondary blocks of content.\n\tChoose the number of columns and how they distribute (for example 50%-50% or 25%-75%), using stackable and nestable grid structures.\n\n\nThe Setup\n\nThere are two prerequisites: We need to normalize the size of an em and opt into the browser rendering engine\u2019s Strict Mode.\n\nEms are a superior unit of measure for our case because they represent the current font size and grow as the user increases their font size setting. This flexibility\u2014the container growing with the user\u2019s wishes\u2014means larger text doesn\u2019t get crammed into an unresponsive container. We\u2019ll use YUI Fonts CSS to set the base size because it provides consistent-yet-adaptive font-sizes while preserving user control.\n\nThe second prerequisite is to opt into Strict Mode (more info on rendering modes) by declaring a Doctype complete with URI. You can choose XHTML or HTML, and Transitional or Strict. I prefer HTML 4.01 Strict, which looks like this:\n\n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/html4/strict.dtd\">\n\nIncluding the CSS\n\nA single small CSS file powers a nearly-infinite number of layouts thanks to a recursive system and the interplay between the three distinct components. You could prune to a particular layout\u2019s specific needs, but why bother when the complete file weighs scarcely 1.8kb uncompressed? Compressed, YUI Fonts and YUI Grids combine for a miniscule 0.9kb over the wire.\n\nYou could save an HTTP request by concatenating the two CSS files, or by adding their contents to your own CSS, but I\u2019ll keep them separate for now:\n\n<link href=\"fonts.css\" rel=\"stylesheet\" type=\"text/css\">\n<link href=\"grids.css\" rel=\"stylesheet\" type=\"text/css\">\n\nExample: The Setup\n\nNow we\u2019re ready to build some layouts.\n\nStep 1: Choose Fluid or Fixed Layout\n\nChoose between preset widths of 750px, 950px, and 100% by giving a document-wrapping div an ID of doc, doc2, or doc3. These options cover most use cases, but it\u2019s easy to define a custom fixed width.\n\nThe fluid 100% grid (doc3) is what I\u2019ve been using almost exclusively since it was introduced in the last YUI released.\n\n<body>\n\t<div id=\"doc3\"></div>\n</body>\n\nAll pages are centered within the viewport, and grow with font size. The 100% width page (doc3) preserves 10px of breathing room via left and right margins. 
If you prefer your content flush to the viewport, just add doc3 {margin:auto} to your CSS.\n\nRegardless of what you choose in the other two steps, you can always toggle between these widths and behaviors by simply swapping the ID value. It\u2019s really that simple.\n\nExample: 100% fluid layout\n\nStep 2: Choose a Template Preset\n\nThis is perhaps the most frequently omitted step (they\u2019re all optional), but I use it nearly every time. In a source-order-independent way (good for accessibility and SEO), \u201cTemplate Presets\u201d provide commonly used template widths compatible with ad-unit dimension standards defined by the Interactive Advertising Bureau, an industry association.\n\nChoose between the six Template Presets (.yui-t1 through .yui-t6) by setting the class value on the document-wrapping div established in Step 1. Most frequently I use yui-t3, which puts the narrow secondary block on the left and makes it 300px wide. \n\n<body>\n\t<div id=\"doc3\" class=\"yui-t3\"></div>\n</body>\n\nThe Template Presets control two \u201cblocks\u201d of content, which are defined by two divs, each with yui-b (\u201cb\u201d for \u201cblock\u201d) class values. Template Presets describe the width and orientation of the secondary block; the main block will take up the rest of the space.\n\n<body>\n\t<div id=\"doc3\" class=\"yui-t3\">\n\t <div class=\"yui-b\"></div>\n\t <div class=\"yui-b\"></div>\n\t</div>\n</body>\n\nUse a wrapping div with an ID of yui-main to structurally indicate which block is the main block. This wrapper\u2014not the source order\u2014identifies the main block.\n\n<body>\n\t<div id=\"doc3\" class=\"yui-t3\">\n\t <div id=\"yui-main\">\n\t <div class=\"yui-b\"></div>\n\t </div>\n\t <div class=\"yui-b\"></div>\n\t</div>\n</body>\n\nExample: Main and secondary blocks sized and oriented with .yui-t3 Template Preset\n\nAgain, regardless of what values you choose in the other steps, you can always toggle between these Template Presets by toggling the class value of your document-wrapping div. It\u2019s really that simple.\n\nStep 3: Nest and Stack Grid Structures.\n\nThe bulk of the power of the system is in this third step. The key is that columns are built by parents telling children how to behave. By default, two children each consume half of their parent\u2019s area. Put two units inside a grid structure, and they will sit side-by-side, and they will each take up half the space. Nest this structure and two columns become four. Stack them for rows of columns.\n\nAn Even Number of Columns\n\nThe default behavior creates two evenly-distributed columns. It\u2019s easy. Define one parent grid with .yui-g (\u201cg\u201d for grid) and two child units with .yui-u (\u201cu\u201d for unit). The code looks like this:\n\n<div class=\"yui-g\">\n\t<div class=\"yui-u first\"></div>\n\t<div class=\"yui-u\"></div>\n</div>\n\nBe sure to indicate the \u201cfirst\u201c unit because the :first-child pseudo-class selector isn\u2019t supported across all A-grade browsers. It\u2019s unfortunate we need to add this, but luckily it\u2019s not out of place in the markup layer since it is structural information.\n\nExample: Two evenly-distributed columns in the main content block\n\nAn Odd Number of Columns\n\nThe default system does not work for an odd number of columns without using the included \u201cSpecial Grids\u201d classes. 
To create three evenly distributed columns, use the \u201cyui-gb\u201c Special Grid:\n\n<div class=\"yui-gb\">\n\t<div class=\"yui-u first\"></div>\n\t<div class=\"yui-u\"></div>\n\t<div class=\"yui-u\"></div>\n</div>\n\nExample: Three evenly distributed columns in the main content block\n\nUneven Column Distribution\n\nSpecial Grids are also used for unevenly distributed column widths. For example, .yui-ge tells the first unit (column) to take up 75% of the parent\u2019s space and the other unit to take just 25%.\n\n<div class=\"yui-ge\">\n\t<div class=\"yui-u first\"></div>\n\t<div class=\"yui-u\"></div>\n</div>\n\nExample: Two columns in the main content block split 75%-25%\n\nPutting It All Together\n\nStart with a full-width fluid page (div#doc3). Make the secondary block 180px wide on the right (div.yui-t4). Create three rows of columns: Three evenly distributed columns in the first row (div.yui-gb), two uneven columns (66%-33%) in the second row (div.yui-gc), and two evenly distributed columns in the thrid row.\n\n<body>\n\t<!-- choose fluid page and Template Preset -->\n\t<div id=\"doc3\" class=\"yui-t4\">\n\t\t<!-- main content block -->\n\t\t<div id=\"yui-main\">\n\t\t\t<div class=\"yui-b\">\n\t\t\t\t<!-- stacked grid structure, Special Grid \"b\" -->\n\t\t\t\t<div class=\"yui-gb\">\n\t\t\t\t\t<div class=\"yui-u first\"></div>\n\t\t\t\t\t<div class=\"yui-u\"></div>\n\t\t\t\t\t<div class=\"yui-u\"></div>\n\t\t\t\t</div>\n\t\t\t\t<!-- stacked grid structure, Special Grid \"c\" -->\n\t\t\t\t<div class=\"yui-gc\">\n\t\t\t\t\t<div class=\"yui-u first\"></div>\n\t\t\t\t\t<div class=\"yui-u\"></div>\n\t\t\t\t</div>\n\t\t\t\t<!-- stacked grid structure -->\n\t\t\t\t<div class=\"yui-g\">\n\t\t\t\t\t<div class=\"yui-u first\"></div>\n\t\t\t\t\t<div class=\"yui-u\"></div>\n\t\t\t\t</div>\n\t\t\t</div>\n\t\t</div>\n\t\t<!-- secondary content block -->\n\t\t<div class=\"yui-b\"></div>\n\t</div>\n</body>\n\nExample: A complex layout.\n\nWasn\u2019t that easy? Now that you know the three \u201clevers\u201d of YUI Grids CSS, you\u2019ll be creating headache-free fluid layouts faster than you can say \u201cPeace on Earth\u201d.", "year": "2006", "author": "Nate Koechley", "author_slug": "natekoechley", "published": "2006-12-20T00:00:00+00:00", "url": "https://24ways.org/2006/intricate-fluid-layouts/", "topic": "code"} {"rowid": 128, "title": "Boost Your Hyperlink Power", "contents": "There are HTML elements and attributes that we use every day. Headings, paragraphs, lists and images are the mainstay of every Web developer\u2019s toolbox. Perhaps the most common tool of all is the anchor. The humble a element is what joins documents together to create the gloriously chaotic collection we call the World Wide Web.\n\nAnatomy of an Anchor\n\nThe power of the anchor element lies in the href attribute, short for hypertext reference. This creates a one-way link to another resource, usually another page on the Web:\n\n<a href=\"http://allinthehead.com/\">\n\nThe href attribute sits in the opening a tag and some descriptive text sits between the opening and closing tags:\n\n<a href=\"http://allinthehead.com/\">Drew McLellan</a>\n\n\u201cWhoop-dee-freakin\u2019-doo,\u201d I hear you say, \u201cthis is pretty basic stuff\u201d \u2013 and you\u2019re quite right. But there\u2019s more to the anchor element than just the href attribute.\n\nThe Theory of relativity\n\nYou might be familiar with the rel attribute from the link element. 
I bet you\u2019ve got something like this in the head of your documents:\n\n<link rel=\"stylesheet\" type=\"text/css\" media=\"screen\" href=\"styles.css\" />\n\nThe rel attribute describes the relationship between the linked document and the current document. In this case, the value of rel is \u201cstylesheet\u201d. This means that the linked document is the stylesheet for the current document: that\u2019s its relationship.\n\nHere\u2019s another common use of rel:\n\n<link rel=\"alternate\" type=\"application/rss+xml\" title=\"my RSS feed\" href=\"index.xml\" />\n\nThis describes the relationship of the linked file \u2013 an RSS feed \u2013 as \u201calternate\u201d: an alternate view of the current document.\n\nBoth of those examples use the link element but you are free to use the rel attribute in regular hyperlinks. Suppose you\u2019re linking to your RSS feed in the body of your page:\n\nSubscribe to <a href=\"index.xml\">my RSS feed</a>.\n\nYou can add extra information to this anchor using the rel attribute:\n\nSubscribe to <a href=\"index.xml\" rel=\"alternate\" type=\"application/rss+xml\">my RSS feed</a>.\n\nThere\u2019s no prescribed list of values for the rel attribute so you can use whatever you decide is semantically meaningful. Let\u2019s say you\u2019ve got a complex e-commerce application that includes a link to a help file. You can explicitly declare the relationship of the linked file as being \u201chelp\u201d:\n\n<a href=\"help.html\" rel=\"help\">need help?</a>\n\nElemental Microformats\n\nAlthough it\u2019s completely up to you what values you use for the rel attribute, some consensus is emerging in the form of microformats. Some of the simplest microformats make good use of rel. For example, if you are linking to a license that covers the current document, use the rel-license microformat:\n\nLicensed under a <a href=\"http://creativecommons.org/licenses/by/2.0/\" rel=\"license\">Creative Commons attribution license</a>\n\nThat describes the relationship of the linked document as \u201clicense.\u201d\n\nThe rel-tag microformat goes a little further. It uses rel to describe the final part of the URL of the linked file as a \u201ctag\u201d for the current document:\n\nLearn more about <a href=\"http://en.wikipedia.org/wiki/Microformats\" rel=\"tag\">semantic markup</a>\n\nThis states that the current document is being tagged with the value \u201cMicroformats.\u201d\n\nXFN, which stands for XHTML Friends Network, is a way of describing relationships between people:\n\n<a href=\"http://allinthehead.com/\" rel=\"friend\">Drew McLellan</a>\n\nThis microformat makes use of a very powerful property of the rel attribute. Like the class attribute, rel can take multiple values, separated by spaces:\n\n<a href=\"http://allinthehead.com/\" rel=\"friend met colleague\">Drew McLellan</a>\n\nHere I\u2019m describing Drew as being a friend, someone I\u2019ve met, and a colleague (because we\u2019re both Web monkies).\n\nYou Say You Want a revolution\n\nWhile rel describes the relationship of the linked resource to the current document, the rev attribute describes the reverse relationship: it describes the relationship of the current document to the linked resource. 
Here\u2019s an example of a link that might appear on help.html:\n\n<a href=\"shoppingcart.html\" rev=\"help\">continue shopping</a>\n\nThe rev attribute declares that the current document is \u201chelp\u201d for the linked file.\n\nThe vote-links microformat makes use of the rev attribute to allow you to qualify your links. By using the value \u201cvote-for\u201d you can describe your document as being an endorsement of the linked resource:\n\nI agree with <a href=\"http://richarddawkins.net/home\" rev=\"vote-for\">Richard Dawkins</a>.\n\nThere\u2019s a corresponding vote-against value. This means that you can link to a document but explicitly state that you don\u2019t agree with it.\n\nI agree with <a href=\"http://richarddawkins.net/home\" rev=\"vote-for\">Richard Dawkins</a>\nabout those <a href=\"http://www.icr.org/\" rev=\"vote-against\">creationists</a>. \n\nOf course there\u2019s nothing to stop you using both rel and rev on the same hyperlink:\n\n<a href=\"http://richarddawkins.net/home\" rev=\"vote-for\" rel=\"muse\">Richard Dawkins</a>\n\nThe Wisdom of Crowds\n\nThe simplicity of rel and rev belies their power. They allow you to easily add extra semantic richness to your hyperlinks. This creates a bounty that can be harvested by search engines, aggregators and browsers. Make it your New Year\u2019s resolution to make friends with these attributes and extend the power of hypertext.", "year": "2006", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2006-12-18T00:00:00+00:00", "url": "https://24ways.org/2006/boost-your-hyperlink-power/", "topic": "code"} {"rowid": 129, "title": "Knockout Type - Thin Is Always In", "contents": "OS X has gorgeous native anti-aliasing (although I will admit to missing 10px aliased Geneva \u2014 *sigh*). This is especially true for dark text on a light background. However, things can go awry when you start using light text on a dark background. Strokes thicken. Counters constrict. Letterforms fill out like seasonal snackers.\n\n \n\nSo how do we combat the fat? In Safari and other Webkit-based browsers we can use the CSS \u2018text-shadow\u2019 property. While trying to add a touch more contrast to the navigation on haveamint.com I noticed an interesting side-effect on the weight of the type. \n\n\n\nThe second line in the example image above has the following style applied to it:\n\n \n\nThis creates an invisible drop-shadow. (Why is it invisible? The shadow is positioned directly behind the type (the first two zeros) and has no spread (the third zero). So the color, black, is completely eclipsed by the type it is supposed to be shadowing.)\n\n \n\nWhy applying an invisible drop-shadow effectively lightens the weight of the type is unclear. What is clear is that our light-on-dark text is now of a comparable weight to its dark-on-light counterpart.\n\n \n\nYou can see this trick in effect all over ShaunInman.com and in the navigation on haveamint.com and Subtraction.com. The HTML and CSS source code used to create the example images used in this article can be found here.", "year": "2006", "author": "Shaun Inman", "author_slug": "shauninman", "published": "2006-12-17T00:00:00+00:00", "url": "https://24ways.org/2006/knockout-type/", "topic": "code"} {"rowid": 132, "title": "Tasty Text Trimmer", "contents": "In most cases, when designing a user interface it\u2019s best to make a decision about how data is best displayed and stick with it. 
Failing to make a decision ultimately leads to too many user options, which in turn can be taxing on the poor old user. \n\nUnder some circumstances, however, it\u2019s good to give the user freedom in customising their workspace. One good example of this is the \u2018Article Length\u2019 tool in Apple\u2019s Safari RSS reader. Sliding a slider left of right dynamically changes the length of each article shown. It\u2019s that kind of awesomey magic stuff that\u2019s enough to keep you from sleeping. Let\u2019s build one.\n\nThe Setup\n\nLet\u2019s take a page that has lots of long text items, a bit like a news page or like Safari\u2019s RSS items view. If we were to attach a class name to each element we wanted to resize, that would give us something to hook onto from the JavaScript. \n\nExample 1: The basic page.\n\nAs you can see, I\u2019ve wrapped my items in a DIV and added a class name of chunk to them. It\u2019s these chunks that we\u2019ll be finding with the JavaScript. Speaking of which \u2026\n\nOur Core Functions\n\nThere are two main tasks that need performing in our script. The first is to find the chunks we\u2019re going to be resizing and store their original contents away somewhere safe. We\u2019ll need this so that if we trim the text down we\u2019ll know what it was if the user decides they want it back again. We\u2019ll call this loadChunks. \n\nvar loadChunks = function(){\n\tvar everything = document.getElementsByTagName('*');\n\tvar i, l;\n\tchunks\t= [];\n\tfor (i=0, l=everything.length; i<l; i++){\n\t\tif (everything[i].className.indexOf(chunkClass) > -1){\n\t\t\tchunks.push({\n\t\t\t\tref: everything[i],\n\t\t\t\toriginal: everything[i].innerHTML\n\t\t\t});\n\t\t}\n\t}\n};\n\nThe variable chunks is stored outside of this function so that we can access it from our next core function, which is doTrim.\n\nvar doTrim = function(interval) {\n\tif (!chunks) loadChunks();\n\tvar i, l;\n\tfor (i=0, l=chunks.length; i<l; i++){\n\t\tvar a = chunks[i].original.split(' ');\n\t\ta = a.slice(0, interval);\n\t\tchunks[i].ref.innerHTML\t= a.join(' ');\n\t}\n};\n\nThe first thing that needs to be done is to call loadChunks if the chunks variable isn\u2019t set. This should only happen the first time doTrim is called, as from that point the chunks will be loaded.\n\nThen all we do is loop through the chunks and trim them. The trimming itself (lines 6-8) is very simple. We split the text into an array of words (line 6), then select only a portion from the beginning of the array up until the number we want (line 7). Finally the words are glued back together (line 8).\n\nIn essense, that\u2019s it, but it leaves us needing to know how to get the number into this function in the first place, and how that number is generated by the user. Let\u2019s look at the latter case first.\n\nThe YUI Slider Widget\n\nThere are lots of JavaScript libraries available at the moment. A fair few of those are really good. I use the Yahoo! User Interface Library professionally, but have only recently played with their pre-build slider widget. Turns out, it\u2019s pretty good and perfect for this task. \n\nOnce you have the library files linked in (check the docs linked above) it\u2019s fairly straightforward to create yourself a slider.\n\nslider = YAHOO.widget.Slider.getHorizSlider(\"sliderbg\", \"sliderthumb\", 0, 100, 5); \nslider.setValue(50);\nslider.subscribe(\"change\", doTrim);\n\nAll that\u2019s needed then is some CSS to make the slider look like a slider, and of course a few bits of HTML. 
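(A quick note on those three numbers, since they can look a little magic. As far as I can tell from the YUI documentation, they are the number of pixels the thumb may travel to the left and to the right of its starting point, and the size of each tick \u2013 so the same two lines could be annotated like this, as a sketch rather than gospel:\n\nslider = YAHOO.widget.Slider.getHorizSlider(\"sliderbg\", \"sliderthumb\", 0, 100, 5);\n// background element, thumb element, 0px travel left, 100px travel right, 5px ticks\nslider.setValue(50); // start the thumb half way along the track\n\nThat covers the numbers; the CSS and the bits of HTML are another matter.)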
We\u2019ll see those later.\n\nSee It Working!\n\nRather than spell out all the nuts and bolts of implementing this fairly simple script, let\u2019s just look at in it action and then pick on some interesting bits I\u2019ve added.\n\nExample 2: Try the Tasty Text Trimmer.\n\nAt the top of the JavaScript file I\u2019ve added a small number of settings.\n\nvar chunkClass\t= 'chunk';\nvar minValue\t= 10;\nvar maxValue\t= 100;\nvar multiplier\t= 5;\n\nObvious candidates for configuration are the class name used to find the chunks, and also the some minimum and maximum values. The minValue is the fewest number of words we wish to display when the slider is all the way down. The maxValue is the length of the slider, in this case 100.\n\nMoving the slider makes a call to our doTrim function with the current value of the slider. For a slider 100 pixels long, this is going to be in the range of 0-100. That might be okay for some things, but for longer items of text you\u2019ll want to allow for displaying more than 100 words. I\u2019ve accounted for this by adding in a multiplier \u2013 in my code I\u2019m multiplying the value by 5, so a slider value of 50 shows 250 words. You\u2019ll probably want to tailor the multiplier to the type of content you\u2019re using.\n\nKeeping it Accessible \n\nThis effect isn\u2019t something we can really achieve without JavaScript, but even so we must make sure that this functionality has no adverse impact on the page when JavaScript isn\u2019t available. This is achieved by adding the slider markup to the page from within the insertSliderHTML function.\n\nvar insertSliderHTML = function(){\n\tvar s = '<a id=\"slider-less\" href=\"#less\"><img src=\"icon_min.gif\" width=\"10\" height=\"10\" alt=\"Less text\" class=\"first\" /></a>';\n\ts +=' <div id=\"sliderbg\"><div id=\"sliderthumb\"><img src=\"sliderthumbimg.gif\" /></div></div>';\n\ts +=' <a id=\"slider-more\" href=\"#more\"><img src=\"icon_max.gif\" width=\"10\" height=\"10\" alt=\"More text\" /></a>';\n\tdocument.getElementById('slider').innerHTML = s;\n}\n\nThe other important factor to consider is that a slider can be tricky to use unless you have good eyesight and pretty well controlled motor skills. Therefore we should provide a method of changing the value by the keyboard.\n\nI\u2019ve done this by making the icons at either end of the slider links so they can be tabbed to. Clicking on either icon fires the appropriate JavaScript function to show more or less of the text.\n\nIn Conclusion\n\nThe upshot of all this is that without JavaScript the page just shows all the text as it normally would. With JavaScript we have a slider for trimming the text excepts that can be controlled with the mouse or with a keyboard.\n\nIf you\u2019re like me and have just scrolled to the bottom to find the working demo, here it is again:\n\nTry the Tasty Text Trimmer\n\nTrimmer for Christmas? 
Don\u2019t say I never give you anything!", "year": "2006", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2006-12-01T00:00:00+00:00", "url": "https://24ways.org/2006/tasty-text-trimmer/", "topic": "code"} {"rowid": 135, "title": "A Scripting Carol", "contents": "We all know the stories of the Ghost of Scripting Past \u2013 a time when the web was young and littered with nefarious scripting, designed to bestow ultimate control upon the developer, to pollute markup with event handler after event handler, and to entrench advertising in the minds of all that gazed upon her.\n\nAnd so it came to be that JavaScript became a dirty word, thrown out of solutions by many a Scrooge without regard to the enhancements that JavaScript could bring to any web page. JavaScript, as it was, was dead as a door-nail.\n\nWith the arrival of our core philosophy that all standardistas hold to be true: \u201cseparate your concerns \u2013 content, presentation and behaviour,\u201d we are in a new era of responsible development the Web Standards Way\u2122. Or are we? Have we learned from the Ghosts of Scripting Past? Or are we now faced with new problems that come with new ways of implementing our solutions?\n\nThe Ghost of Scripting Past\n\nIf the Ghost of Scripting Past were with us it would probably say: \n\n\n\tYou must remember your roots and where you came from, and realize the misguided nature of your early attempts for control. That person you see down there, is real and they are the reason you exist in the first place\u2026 without them, you are nothing.\n\n\nIn many ways we\u2019ve moved beyond the era of control and we do take into account the user, or at least much more so than we used to. Sadly \u2013 there is one advantage that old school inline event handlers had where we assigned and reassigned CSS style property values on the fly \u2013 we knew that if JavaScript wasn\u2019t supported, the styles wouldn\u2019t be added because we ended up doing them at the same time.\n\nIf anything, we need to have learned from the past that just because it works for us doesn\u2019t mean it is going to work for anyone else \u2013 we need to test more scenarios than ever to observe the multitude of browsing arrangements we\u2019ll observe: CSS on with JavaScript off, CSS off/overridden with JavaScript on, both on, both off/not supported. It is a situation that is ripe for conflict.\n\nThis may shock some of you, but there was a time when testing was actually easier: back in the day when Netscape 4 was king. Yes, that\u2019s right. I actually kind of enjoyed Netscape 4 (hear me out, please). With NS4\u2019s CSS implementation known as JavaScript Style Sheets, you knew that if JavaScript was off the styles were off too.\n\nThe Ghost of Scripting Present\n\nWith current best practice \u2013 we keep our CSS and JavaScript separate from each other. 
So what happens when some of our fancy, unobtrusive DOM Scripting doesn\u2019t play nicely with our wonderfully defined style rules?\n\nLets look at one example of a collapsing and expanding menu to illustrate where we are now:\n\nSimple Collapsing/Expanding Menu Example\n\nWe\u2019re using some simple JavaScript (I\u2019m using jquery in this case) to toggle between a CSS state for expanded and not expanded:\n\nJavaScript\n\n$(document).ready(function(){\n\tTWOFOURWAYS.enableTree();\n});\nvar TWOFOURWAYS = new Object();\nTWOFOURWAYS.enableTree = function ()\n{\n\t$(\"ul li a\").toggle(function(){\n\t\t$(this.parentNode).addClass(\"expanded\");\n\t}, function() {\n\t\t$(this.parentNode).removeClass(\"expanded\");\n\t});\n\treturn false;\t\n}\n\nCSS\n\nul li ul {\n\tdisplay: none;\n}\nul li.expanded ul {\n\tdisplay: block;\n}\n\nAt this point we\u2019ve separated our presentation from our content and behaviour, and all is well, right?\n\nNot quite.\n\nHere\u2019s where I typically see failures in the assessment work that I do on web sites and applications (Yes, call me Scrooge \u2013 I don\u2019t care!). We know our page needs to work with or without scripting, and we know it needs to work with or without CSS. All too often the testing scenarios don\u2019t take into account combinations.\n\nTesting it out\n\nSo what happens when we test this? Make sure you test with:\n\n\n\tCSS off\n\tJavaScript off\n\n\nUse the simple example again.\n\nWith CSS off, we revert to a simple nested list of links and all functionality is maintained. With JavaScript off, however, we run into a problem \u2013 we have now removed the ability to expand the menu using the JavaScript triggered CSS change.\n\nHopefully you see the problem \u2013 we have a JavaScript and CSS dependency that is critical to the functionality of the page. Unobtrusive scripting and binary on/off tests aren\u2019t enough. We need more.\n\nThis Ghost of Scripting Present sighting is seen all too often.\n\nLets examine the JavaScript off scenario a bit further \u2013 if we require JavaScript to expand/show the branch of the tree we should use JavaScript to hide them in the first place. That way we guarantee functionality in all scenarios, and have achieved our baseline level of interoperability. \n\nTo revise this then, we\u2019ll start with the sub items expanded, use JavaScript to collapse them, and then use the same JavaScript to expand them.\n\nHTML\n\n<ul>\n\t<li><a href=\"#\">Main Item</a>\n\t\t<ul class=\"collapseme\">\n\t\t\t<li><a href=\"#\">Sub item 1</a></li>\n\t\t\t<li><a href=\"#\">Sub item 2</a></li>\n\t\t\t<li><a href=\"#\">Sub item 3</a></li>\n\t\t</ul>\n\t</li>\n</ul>\n\nCSS\n\n/* initial style is expanded */\nul li ul.collapseme {\n\tdisplay: block;\n}\n\nJavaScript\n\n// remove the class collapseme after the page loads\n$(\"ul ul.collapseme\").removeClass(\"collapseme\");\n\nAnd there you have it \u2013 a revised example with better interoperability.\n\nThis isn\u2019t rocket surgery by any means. It is a simple solution to a ghostly problem that is too easily overlooked (and often is).\n\nThe Ghost of Scripting Future\n\nWell, I\u2019m not so sure about this one, but I\u2019m guessing that in a few years\u2019 time, we\u2019ll all have seen a few more apparitions and have a few more tales to tell. 
And hopefully we\u2019ll be able to share them on 24 ways.\n\nThanks to Drew for the invitation to contribute and thanks to everyone else out there for making this a great (and haunting) year on the web!", "year": "2006", "author": "Derek Featherstone", "author_slug": "derekfeatherstone", "published": "2006-12-21T00:00:00+00:00", "url": "https://24ways.org/2006/a-scripting-carol/", "topic": "code"} {"rowid": 136, "title": "Making XML Beautiful Again: Introducing Client-Side XSL", "contents": "Remember that first time you saw XML and got it? When you really understood what was possible and the deep meaning each element could carry? Now when you see XML, it looks ugly, especially when you navigate to a page of XML in a browser. Well, with every modern browser now supporting XSL 1.0, I\u2019m going to show you how you can turn something as simple as an ATOM feed into a customised page using a browser, Notepad and some XSL.\n\nWhat on earth is this XSL?\n\nXSL is a family of recommendations for defining XML document transformation and presentation. It consists of three parts:\n\n\n\tXSLT 1.0 \u2013 Extensible Stylesheet Language Transformation, a language for transforming XML\n\tXPath 1.0 \u2013 XML Path Language, an expression language used by XSLT to access or refer to parts of an XML document. (XPath is also used by the XML Linking specification)\n\tXSL-FO 1.0 \u2013 Extensible Stylesheet Language Formatting Objects, an XML vocabulary for specifying formatting semantics\n\n\nXSL transformations are usually a one-to-one transformation, but with newer versions (XSL 1.1 and XSL 2.0) its possible to create many-to-many transformations too. So now you have an overview of XSL, on with the show\u2026\n\nSo what do I need?\n\nSo to get going you need a browser an supports client-side XSL transformations such as Firefox, Safari, Opera or Internet Explorer. Second, you need a source XML file \u2013 for this we\u2019re going to use an ATOM feed from Flickr.com. And lastly, you need an editor of some kind. I find Notepad++ quick for short XSLs, while I tend to use XMLSpy or Oxygen for complex XSL work. \n\nBecause we\u2019re doing a client-side transformation, we need to modify the XML file to tell it where to find our yet-to-be-written XSL file. Take a look at the source XML file, which originates from my Flickr photos tagged sky, in ATOM format.\n\nThe top of the ATOM file now has an additional <?xml-stylesheet /> instruction, as can been seen on Line 2 below. This instructs the browser to use the XSL file to transform the document.\n\n<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"yes\"?>\n<?xml-stylesheet type=\"text/xsl\" href=\"flickr_transform.xsl\"?>\n<feed xmlns=\"http://www.w3.org/2005/Atom\"\n\txmlns:dc=\"http://purl.org/dc/elements/1.1/\">\n\nYour first transformation\n\nYour first XSL will look something like this:\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<xsl:stylesheet version=\"1.0\"\n\txmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n\txmlns:atom=\"http://www.w3.org/2005/Atom\"\n\txmlns:dc=\"http://purl.org/dc/elements/1.1/\">\n\t<xsl:output method=\"html\" encoding=\"utf-8\"/>\n</xsl:stylesheet>\n\nThis is pretty much the starting point for most XSL files. You will notice the standard XML processing instruction at the top of the file (line 1). We then switch into XSL mode using the XSL namespace on all XSL elements (line 2). In this case, we have added namespaces for ATOM (line 4) and Dublin Core (line 5). 
This means the XSL can now read and understand those elements from the source XML. \n\nAfter we define all the namespaces, we then move onto the xsl:output element (line 6). This enables you to define the final method of output. Here we\u2019re specifying html, but you could equally use XML or Text, for example. The encoding attributes on each element do what they say on the tin. As with all XML, of course, we close every element including the root.\n\nThe next stage is to add a template, in this case an <xsl:template /> as can be seen below:\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<xsl:stylesheet version=\"1.0\"\n\txmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n\txmlns:atom=\"http://www.w3.org/2005/Atom\"\n\txmlns:dc=\"http://purl.org/dc/elements/1.1/\">\n\t<xsl:output method=\"html\" encoding=\"utf-8\"/>\n\t<xsl:template match=\"/\">\n\t\t<html>\n\t\t\t<head>\n\t\t\t\t<title>Making XML beautiful again : Transforming ATOM</title>\n\t\t\t</head>\n\t\t\t<body>\n\t\t\t\t<xsl:apply-templates select=\"/atom:feed\"/>\n\t\t\t</body>\n\t\t</html>\n\t</xsl:template>\n</xsl:stylesheet>\n\nThe beautiful thing about XSL is its English syntax, if you say it out loud it tends to make sense. \n\nThe / value for the match attribute on line 8 is our first example of XPath syntax. The expression / matches any element \u2013 so this <xsl:template/> will match against any element in the document. As the first element in any XML document is the root element, this will be the one matched and processed first.\n\nOnce we get past our standard start of a HTML document, the only instruction remaining in this <xsl:template/> is to look for and match all <atom:feed/> elements using the <xsl:apply-templates/> in line 14, above.\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<xsl:stylesheet version=\"1.0\"\n\txmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n\txmlns:atom=\"http://www.w3.org/2005/Atom\"\n\txmlns:dc=\"http://purl.org/dc/elements/1.1/\">\n\t<xsl:output method=\"html\" encoding=\"utf-8\"/>\n\t<xsl:template match=\"/\">\n\t\t<xsl:apply-templates select=\"/atom:feed\"/>\n\t</xsl:template>\n\t<xsl:template match=\"/atom:feed\">\n\t\t<div id=\"content\">\n\t\t\t<h1>\n\t\t\t\t<xsl:value-of select=\"atom:title\"/>\n\t\t\t</h1>\n\t\t\t<p>\n\t\t\t\t<xsl:value-of select=\"atom:subtitle\"/>\n\t\t\t</p>\n\t\t\t<ul id=\"entries\">\n\t\t\t\t<xsl:apply-templates select=\"atom:entry\"/>\n\t\t\t</ul>\n\t\t</div>\n\t</xsl:template>\n</xsl:stylesheet>\n\nThis new template (line 12, above) matches <feed/> and starts to write the new HTML elements out to the output stream. The <xsl:value-of/> does exactly what you\u2019d expect \u2013 it finds the value of the item specifed in its select attribute. With XPath you can select any element or attribute from the source XML. \n\nThe last part is a repeat of the now familiar <xsl:apply-templates/> from before, but this time we\u2019re using it inside of a called template. 
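At this point the transformed output is only the shell of the page \u2013 roughly the sketch below, where the title and subtitle text are made up rather than taken from a real feed:\n\n<div id=\"content\">\n\t<h1>Uploads tagged sky</h1>\n\t<p>Everyone\u2019s photos tagged with sky</p>\n\t<ul id=\"entries\">\n\t\t<!-- one list item per entry, filled in by the next template -->\n\t</ul>\n</div>\n\nEach <xsl:apply-templates/> simply hands control to whichever template matches next, so templates call templates, which call further templates.\n\n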
Yep, XSL is full of recursion\u2026\n\n<xsl:template match=\"atom:entry\">\n\t<li class=\"entry\">\n\t\t<h2>\n\t\t\t<a href=\"{atom:link/@href}\">\n\t\t\t\t<xsl:value-of select=\"atom:title\"/>\n\t\t\t</a>\n\t\t</h2>\n\t\t<p class=\"date\">\n\t\t\t(<xsl:value-of select=\"substring-before(atom:updated,'T')\"/>)\n\t\t</p>\n\t\t<p class=\"content\">\n\t\t\t<xsl:value-of select=\"atom:content\" disable-output-escaping=\"yes\"/>\n\t\t</p>\n\t\t<xsl:apply-templates select=\"atom:category\"/>\n\t</li>\n</xsl:template>\n\nThe <xsl:template/> which matches atom:entry (line 1) occurs every time there is a <entry/> element in the source XML file. So in total that is 20 times, this is naturally why XSLT is full of recursion. This <xsl:template/> has been matched and therefore called higher up in the document, so we can start writing list elements directly to the output stream. The first part is simply a <h2/> with a link wrapped within it (lines 3-7). We can select attributes using XPath using @. \n\nThe second part of this template selects the date, but performs a XPath string function on it. This means that we only get the date and not the time from the string (line 9). This is achieved by getting only the part of the string that exists before the T. \n\nRegular Expressions are not part of the XPath 1.0 string functions, although XPath 2.0 does include them. Because of this, in XSL we tend to rely heavily on the available XML output. \n\nThe third part of the template (line 12) is a <xsl:value-of/> again, but this time we use an attribute of <xsl:value-of/> called disable output escaping to turn escaped characters back into XML. \n\nThe very last section is another <xsl:apply-template/> call, taking us three templates deep. Do not worry, it is not uncommon to write XSL which go 20 or more templates deep!\n\n<xsl:template match=\"atom:category\">\n\t<xsl:for-each select=\".\">\n\t\t<xsl:element name=\"a\">\n\t\t\t<xsl:attribute name=\"rel\">\n\t\t\t\t<xsl:text>tag</xsl:text>\n\t\t\t</xsl:attribute>\n\t\t\t<xsl:attribute name=\"href\">\n\t\t\t\t<xsl:value-of select=\"concat(@scheme, @term)\"/>\n\t\t\t</xsl:attribute>\n\t\t\t<xsl:value-of select=\"@term\"/>\n\t\t</xsl:element>\n\t\t<xsl:text> </xsl:text>\n\t</xsl:for-each>\n</xsl:template>\n\nIn our final <xsl:template/>, we see a combination of what we have done before with a couple of twists. Once we match atom:category we then count how many elements there are at that same level (line 2). The XPath . means \u2018self\u2019, so we count how many category elements are within the <entry/> element. \n\nFollowing that, we start to output a link with a rel attribute of the predefined text, tag (lines 4-6). In XSL you can just type text, but results can end up with strange whitespace if you do (although there are ways to simply remove all whitespace). \n\nThe only new XPath function in this example is concat(), which simply combines what XPaths or text there might be in the brackets. We end the output for this tag with an actual tag name (line 10) and we add a space afterwards (line 12) so it won\u2019t touch the next tag. (There are better ways to do this in XSL using the last() XPath function). \n\nAfter that, we go back to the <xsl:for-each/> element again if there is another category element, otherwise we end the <xsl:for-each/> loop and end this <xsl:template/>.\n\nA touch of style\n\nBecause we\u2019re using recursion through our templates, you will find this is the end of the templates and the rest of the XML will be ignored by the parser. 
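If everything is wired up correctly, each <entry/> should come out looking something like the sketch below \u2013 a hypothetical rendering, so the title, link, date and tag are invented rather than copied from a real feed:\n\n<li class=\"entry\">\n\t<h2>\n\t\t<a href=\"http://flickr.com/photos/example/123456/\">Sky over the bay</a>\n\t</h2>\n\t<p class=\"date\">(2006-12-07)</p>\n\t<p class=\"content\">A short description of the photo, with any markup from the feed unescaped back into real tags.</p>\n\t<a rel=\"tag\" href=\"http://flickr.com/photos/tags/sky\">sky</a>\n</li>\n\n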
Finally, we can add our CSS to finish up. (I have created one for Flickr and another for News feeds)\n\n<style type=\"text/css\" media=\"screen\">@import \"flickr_overview.css?v=001\";</style>\n\nSo we end up with a nice simple to understand but also quick to write XSL which can be used on ATOM Flickr feeds and ATOM News feeds. With a little playing around with XSL, you can make XML beautiful again.\n\nAll the files can be found in the zip file (14k)", "year": "2006", "author": "Ian Forrester", "author_slug": "ianforrester", "published": "2006-12-07T00:00:00+00:00", "url": "https://24ways.org/2006/beautiful-xml-with-xsl/", "topic": "code"} {"rowid": 138, "title": "Rounded Corner Boxes the CSS3 Way", "contents": "If you\u2019ve been doing CSS for a while you\u2019ll know that there are approximately 3,762 ways to create a rounded corner box. The simplest techniques rely on the addition of extra mark-up directly to your page, while the more complicated ones add the mark-up though DOM manipulation. While these techniques are all very interesting, they do seem somewhat of a kludge. The goal of CSS is to separate structure from presentation, yet here we are adding superfluous mark-up to our code in order to create a visual effect. The reason we are doing this is simple. CSS2.1 only allows a single background image per element.\n\nThankfully this looks set to change with the addition of multiple background images into the CSS3 specification. With CSS3 you\u2019ll be able to add not one, not four, but eight background images to a single element. This means you\u2019ll be able to create all kinds of interesting effects without the need of those additional elements.\n\nWhile the CSS working group still seem to be arguing over the exact syntax, Dave Hyatt went ahead and implemented the currently suggested mechanism into Safari. The technique is fiendishly simple, and I think we\u2019ll all be a lot better off once the W3C stop arguing over the details and allow browser vendors to get on and provide the tools we need to build better websites.\n\nTo create a CSS3 rounded corner box, simply start with your box element and apply your 4 corner images, separated by commas.\n\n.box {\n\tbackground-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif);\n}\n\nWe don\u2019t want these background images to repeat, which is the normal behaviour, so lets set all their background-repeat properties to no-repeat.\n\n.box {\n\tbackground-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif);\n\tbackground-repeat: no-repeat, no-repeat, no-repeat, no-repeat;\n}\n\nLastly, we need to define the positioning of each corner image.\n\n.box {\n\tbackground-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif);\n\tbackground-repeat: no-repeat, no-repeat, no-repeat, no-repeat;\n\tbackground-position: top left, top right, bottom left, bottom right;\n}\n\nAnd there we have it, a simple rounded corner box with no additional mark-up.\n\nAs well as using multiple background images, CSS3 also has the ability to create rounded corners without the need of any images at all. You can do this by setting the border-radius property to your desired value as seen in the next example.\n\n.box {\n\tborder-radius: 1.6em;\n}\n\nThis technique currently works in Firefox/Camino and creates a nice, if somewhat jagged rounded corner. 
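Depending on which build you\u2019re testing in, you may find you need the vendor-prefixed forms of the property instead of (or alongside) the bare one. Treat the exact property names below as an assumption about current implementations rather than gospel:\n\n.box {\n\t-moz-border-radius: 1.6em;\n\t-webkit-border-radius: 1.6em;\n\tborder-radius: 1.6em;\n}\n\n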
If you want to create a box that works in both Mozilla and WebKit based browsers, why not combine both techniques and see what happens.", "year": "2006", "author": "Andy Budd", "author_slug": "andybudd", "published": "2006-12-04T00:00:00+00:00", "url": "https://24ways.org/2006/rounded-corner-boxes-the-css3-way/", "topic": "code"} {"rowid": 139, "title": "Flickr Photos On Demand with getFlickr", "contents": "In case you don\u2019t know it yet, Flickr is great. It is a lot of fun to upload, tag and caption photos and it is really handy to get a vast network of contacts through it. \n\nUsing Flickr photos outside of it is a bit of a problem though. There is a Flickr API, and you can get almost every page as an RSS feed, but in general it is a bit tricky to use Flickr photos inside your blog posts or web sites. You might not want to get into the whole API game or use a server side proxy script as you cannot retrieve RSS with Ajax because of the cross-domain security settings.\n\nHowever, Flickr also provides an undocumented JSON output, that can be used to hack your own solutions in JavaScript without having to use a server side script.\n\n\n\tIf you enter the URL http://flickr.com/photos/tags/panda you get to the flickr page with photos tagged \u201cpanda\u201d.\n\tIf you enter the URL http://api.flickr.com/services/feeds/photos_public.gne?tags=panda&format=rss_200 you get the same page as an RSS feed.\n\tIf you enter the URL http://api.flickr.com/services/feeds/photos_public.gne?tags=panda&format=json you get a JavaScript function called jsonFlickrFeed with a parameter that contains the same data in JSON format\n\n\nYou can use this to easily hack together your own output by just providing a function with the same name. I wanted to make it easier for you, which is why I created the helper getFlickr for you to download and use.\n\ngetFlickr for Non-Scripters\n\nSimply include the javascript file getflickr.js and the style getflickr.css in the head of your document:\n\n<script type=\"text/javascript\" src=\"getflickr.js\"></script>\n<link rel=\"stylesheet\" href=\"getflickr.css\" type=\"text/css\">\n\nOnce this is done you can add links to Flickr pages anywhere in your document, and when you give them the CSS class getflickrphotos they get turned into gallery links. When a visitor clicks these links they turn into loading messages and show a \u201cpopup\u201d gallery with the connected photos once they were loaded. As the JSON returned is very small it won\u2019t take long. You can close the gallery, or click any of the thumbnails to view a photo. Clicking the photo makes it disappear and go back to the thumbnails.\n\nCheck out the example page and click the different gallery links to see the results.\n\nNotice that getFlickr works with Unobtrusive JavaScript as when scripting is disabled the links still get to the photos on Flickr.\n\ngetFlickr for JavaScript Hackers\n\nIf you want to use getFlickr with your own JavaScripts you can use its main method leech():\n\ngetFlickr.leech(sTag, sCallback);\n\n \n\tsTag\n\tthe tag you are looking for\n\tsCallback\n\tan optional function to call when the data was retrieved.\n \n\nAfter you called the leech() method you have two strings to use:\n\n \n\tgetFlickr.html[sTag]\n\tcontains an HTML list (without the outer UL element) of all the images linked to the correct pages at flickr. 
The images are the medium size, you can easily change that by replacing _m.jpg with _s.jpg for thumbnails.\n\tgetFlickr.tags[sTag]\n\tcontains a string of all the other tags flickr users added with the tag you searched for(space separated)\n \n\nYou can call getFlickr.leech() several times when the page has loaded to cache several result feeds before the page gets loaded. This\u2019ll make the photos quicker for the end user to show up. If you want to offer a form for people to search for flickr photos and display them immediately you can use the following HTML:\n\n<form onsubmit=\"getFlickr.leech(document.getElementById('tag').value, 'populate');return false\">\n <label for=\"tag\">Enter Tag</label>\n <input type=\"text\" id=\"tag\" name=\"tag\" />\n <input type=\"submit\" value=\"energize\" />\n <h3>Tags:</h3><div id=\"tags\"></div>\n <h3>Photos:</h3><ul id=\"photos\"></ul>\n</form>\n\nAll the JavaScript you\u2019ll need (for a basic display) is this:\n\nfunction populate(){\n var tag = document.getElementById('tag').value;\n document.getElementById('photos').innerHTML = getFlickr.html[tag].replace(/_m\\.jpg/g,'_s.jpg');\n document.getElementById('tags').innerHTML = getFlickr.tags[tag];\n return false;\n}\n\nEasy as pie, enjoy!\n\nCheck out the example page and try the form to see the results.", "year": "2006", "author": "Christian Heilmann", "author_slug": "chrisheilmann", "published": "2006-12-03T00:00:00+00:00", "url": "https://24ways.org/2006/flickr-photos-on-demand/", "topic": "code"} {"rowid": 143, "title": "Marking Up a Tag Cloud", "contents": "Everyone\u2019s doing it. \n\nThe problem is, everyone\u2019s doing it wrong. \n\nHarsh words, you might think. But the crimes against decent markup are legion in this area. You see, I\u2019m something of a markup and semantics junkie. So I\u2019m going to analyse some of the more well-known tag clouds on the internet, explain what\u2019s wrong, and then show you one way to do it better. \n\ndel.icio.us \n\nI think the first ever tag cloud I saw was on del.icio.us. Here\u2019s how they mark it up. \n\n<div class=\"alphacloud\">\n\t<a href=\"/tag/.net\" class=\"lb s2\">.net</a>\n\t<a href=\"/tag/advertising\" class=\" s3\">advertising</a>\n\t<a href=\"/tag/ajax\" class=\" s5\">ajax</a>\n\t...\n</div>\n\nUnfortunately, that is one of the worst examples of tag cloud markup I have ever seen. The page states that a tag cloud is a list of tags where size reflects popularity. However, despite describing it in this way to the human readers, the page\u2019s author hasn\u2019t described it that way in the markup. It isn\u2019t a list of tags, just a bunch of anchors in a <div>. This is also inaccessible because a screenreader will not pause between adjacent links, and in some configurations will not announce the individual links, but rather all of the tags will be read as just one link containing a whole bunch of words. Markup crime number one. \n\nFlickr \n\nAh, Flickr. The darling photo sharing site of the internet, and the biggest blind spot in every standardista\u2019s vision. Forgive it for having atrocious markup and sometimes confusing UI because it\u2019s just so much damn fun to use. Let\u2019s see what they do. 
\n\n<p id=\"TagCloud\">\n\t\u00a0<a href=\"/photos/tags/06/\" style=\"font-size: 14px;\">06</a>\u00a0\n\t\u00a0<a href=\"/photos/tags/africa/\" style=\"font-size: 12px;\">africa</a>\u00a0\n\t\u00a0<a href=\"/photos/tags/amsterdam/\" style=\"font-size: 14px;\">amsterdam</a>\u00a0\n\t...\n</p>\n\nAgain we have a simple collection of anchors like del.icio.us, only this time in a paragraph. But rather than using a class to represent the size of the tag they use an inline style. An inline style using a pixel-based font size. That\u2019s so far away from the goal of separating style from content, they might as well use a <font> tag. You could theoretically parse that to extract the information, but you have more work to guess what the pixel sizes represent. Markup crime number two (and extra jail time for using non-breaking spaces purely for visual spacing purposes.)\n\nTechnorati \n\nAh, now. Here, you\u2019d expect something decent. After all, the Overlord of microformats and King of Semantics Tantek \u00c7elik works there. Surely we\u2019ll see something decent here? \n\n<ol class=\"heatmap\">\n\t<li><em><em><em><em><a href=\"/tag/Britney+Spears\">Britney Spears</a></em></em></em></em></li>\n\t<li><em><em><em><em><em><em><em><em><em><a href=\"/tag/Bush\">Bush</a></em></em></em></em></em></em></em></em></em></li>\n\t<li><em><em><em><em><em><em><em><em><em><em><em><em><em><a href=\"/tag/Christmas\">Christmas</a></em></em></em></em></em></em></em></em></em></em></em></em></em></li>\n\t...\n\t<li><em><em><em><em><em><em><a href=\"/tag/SEO\">SEO</a></em></em></em></em></em></em></li>\n\t<li><em><em><em><em><em><em><em><em><em><em><em><em><em><em><em><a href=\"/tag/Shopping\">Shopping</a></em></em></em></em></em></em></em></em></em></em></em></em></em></em></em></li>\n\t...\n</ol>\n\nUnfortunately it turns out not to be that decent, and stop calling me Shirley. It\u2019s not exactly terrible code. It does recognise that a tag cloud is a list of links. And, since they\u2019re in alphabetical order, that it\u2019s an ordered list of links. That\u2019s nice. However \u2026 fifteen nested <em> tags? FIFTEEN? That\u2019s emphasis for you. Yes, it is parse-able, but it\u2019s also something of a strange way of looking at emphasis. The HTML spec states that <em> is emphasis, and <strong> is for stronger emphasis. Nesting <em> tags seems counter to the idea that different tags are used for different levels of emphasis. Plus, if you had a screen reader that stressed the voice for emphasis, what would it do? Shout at you? Markup crime number three. \n\nSo what should it be? \n\nAs del.icio.us tells us, a tag cloud is a list of tags where the size that they are rendered at contains extra information. However, by hiding the extra context purely within the CSS or the HTML tags used, you are denying that context to some users. The basic assumption being made is that all users will be able to see the difference between font sizes, and this is demonstrably false. \n\nA better way to code a tag cloud is to put the context of the cloud within the content, not the markup or CSS alone. As an example, I\u2019m going to take some of my favourite flickr tags and put them into a cloud which communicates the relative frequency of each tag. \n\nTo start with a tag cloud in its most basic form is just a list of links. I am going to present them in alphabetical order, so I\u2019ll use an ordered list. Into each list item I add the number of photos I have with that particular tag. 
The tag itself is linked to the page on flickr which contains those photos. So we end up with this first example. To display this as a traditional tag cloud, we need to alter it in a few ways: \n\n\n\tThe items need to be displayed next to each other, rather than one-per-line\n\tThe context information should be hidden from display (but not from screen readers)\n\tThe tag should link to the page of items with that tag\n\n\nDisplaying the items next to each other simply means setting the display of the list elements to inline. The context can be hidden by wrapping it in a <span> and then using the off-left method to hide it. And the link just means adding an anchor (with rel=\"tag\" for some extra microformats bonus points). So, now we have a simple collection of links in our second example. \n\nThe last stage is to add the sizes. Since we already have context in our content, the size is purely for visual rendering, so we can just use classes to define the different sizes. For my example, I\u2019ll use a range of class names from not-popular through ultra-popular, in order of smallest to largest, and then use CSS to define different font sizes. If you preferred, you could always use less verbose class names such as size1 through size6. Anyway, adding some classes and CSS gives us our final example, a semantic and more accessible tag cloud.", "year": "2006", "author": "Mark Norman Francis", "author_slug": "marknormanfrancis", "published": "2006-12-09T00:00:00+00:00", "url": "https://24ways.org/2006/marking-up-a-tag-cloud/", "topic": "code"} {"rowid": 145, "title": "The Neverending (Background Image) Story", "contents": "Everyone likes candy for Christmas, and there\u2019s none better than eye candy. Well, that, and just more of the stuff. Today we\u2019re going to combine both of those good points and look at how to create a beautiful background image that goes on and on\u2026 forever!\n\nOf course, each background image is different, so instead of agonising over each and every pixel, I\u2019m going to concentrate on five key steps that you can apply to any of your own repeating background images. In this example, we\u2019ll look at the Miami Beach background image used on the new FOWA site, which I\u2019m afraid is about as un-festive as you can get.\n\n1. Choose your image wisely\n\nI find there are three main criteria when judging photos you\u2019re considering for repetition manipulation (or \u2018repetulation\u2019, as I like to say)\u2026\n\n\n\tsimplicity (beware of complex patterns)\n\tangle and perspective (watch out for shadows and obvious vanishing points)\n\tconsistent elements (for easy cloning)\n\n\nYou might want to check out this annotated version of the image, where I\u2019ve highlighted elements of the photo that led me to choose it as the right one.\n\nThe original image purchased from iStockPhoto.\n\nThe Photoshopped version used on the FOWA site.\n\n2. The power of horizontal lines\n\nWith the image chosen and your cursor poised for some Photoshop magic, the most useful thing you can do is drag out the edge pixels from one side of the image to create a kind of rough colour \u2018template\u2019 on which to work over. It doesn\u2019t matter which side you choose, although you might find it beneficial to use the one with the simplest spread of colour and complex elements.\n\nClick and hold on the marquee tool in the toolbar and select the \u2018single column marquee tool\u2019, which will span the full height of your document but will only be one pixel wide. 
Make the selection right at the edge of your document, press ctrl-c / cmd-c to copy the selection you made, create a new layer, and hit ctrl-v / cmd-v to paste the selection onto your new layer. using free transform (ctrl-t / cmd-t), drag out your selection so that it becomes as wide as your entire canvas. \n\nA one-pixel-wide selection stretched out to the entire width of the canvas.\n\n3. Cloning\n\nIt goes without saying that the trusty clone tool is one of the most important in the process of creating a seamlessly repeating background image, but I think it\u2019s important to be fairly loose with it. Always clone on to a new layer so that you\u2019ve got the freedom to move it around, but above all else, use the eraser tool to tweak your cloned areas: let that handle the precision stuff and you won\u2019t have to worry about getting your clones right first time.\n\nIn the example below, you can see how I overcame the problem of the far-left tree shadow being chopped off by cloning the shadow from the tree on its right. \n\nThe edge of the shadow is cut off and needs to be \u2018made\u2019 from a pre-existing element.\n\nThe successful clone completes the missing shadow.\n\nThe two elements are obviously very similar but it doesn\u2019t look like a clone because the majority of the shape is \u2018genuine\u2019 and only a small part is a duplicate. Also, after cloning I transformed the duplicate, erased parts of it, used gradients, and \u2014 ooh, did someone mention gradients?\n\n4. Never underestimate a gradient\n\nFor this image, I used gradients in a similar way to a brush: covering large parts of the canvas with a colour that faded out to a desired point, before erasing certain parts for accuracy.\n\nSeveral of the gradients and brushes that make up the \u2018customised\u2019 part of the image, visible when the main photograph layer is hidden.\n\nThe full composite.\n\nGradients are also a bit of an easy fix: you can use a gradient on one side of the image, flip it horizontally, and then use it again on the opposite side to make a more seamless join.\n\nSpeaking of which\u2026\n\n5. Sewing the seams\n\nNo matter what kind of magic Photoshop dust you sprinkle over your image, there will still always be the area where the two edges meet: that scary \u2018loop\u2019 point. Fret ye not, however, for there\u2019s help at hand in the form of a nice little cheat. Even though the loop point might still be apparent, we can help hide it by doing something to throw viewers off the scent.\n\nThe seam is usually easy to spot because it\u2019s a blank area with not much detail or colour variation, so in order to disguise it, go against the rule: put something across it!\n\nThis isn\u2019t quite as challenging as it may sound, because if we intentionally make our own \u2018object\u2019 to span the join, we can accurately measure the exact halfway point where we need to split it across the two sides of the image. This is exactly what I did with the FOWA background image: I made some clouds!\n\nA sky with no clouds in an unhappy one.\n\nA simple soft white brush creates a cloud-like formation in the sky.\n\nAfter taking the cloud\u2019s opacity down to 20%, I used free transform to highlight the boundaries of the layer. 
I then moved it over to the right, so that the middle of the layer perfectly aligned with the right side of the canvas.\n\nFinally, I duplicated the layer and did the same in reverse: dragging the layer over to the left and making sure that the middle of the duplicate layer perfectly aligned with the left side of the canvas.\n\nAnd there you have it! Boom! Ta-da! Et voil\u00e0! To see the repeating background image in action, visit futureofwebapps.com on a large widescreen monitor or see a simulation of the effect.\n\nThanks for reading, folks. Have a great Christmas!", "year": "2007", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2007-12-03T00:00:00+00:00", "url": "https://24ways.org/2007/the-neverending-background-image-story/", "topic": "code"}