rowid,title,contents,year,author,author_slug,published,url,topic 8,Coding Towards Accessibility,"“Can we make it AAA-compliant?” – does this question strike fear into your heart? Maybe for no other reason than because you will soon have to wade through the impenetrable WCAG documentation once again, to find out exactly what AAA-compliant means? I’m not here to talk about that. The Web Content Accessibility Guidelines are a comprehensive and peer-reviewed resource which we’re lucky to have at our fingertips. But they are also a pig to read, and they may have contributed to the sense of mystery and dread with which some developers associate the word accessibility. This Christmas, I want to share with you some thoughts and some practical tips for building accessible interfaces which you can start using today, without having to do a ton of reading or changing your tools and workflow. But first, let’s clear up a couple of misconceptions. Dreary, flat experiences I recently built a front-end framework for the Post Office. This was a great gig for a developer, but when I found out about my client’s stringent accessibility requirements I was concerned that I’d have to scale back what was quite a complex set of visual designs. Sites like Jakob Nielsen’s old workhorse useit.com and even the pioneering GOV.UK may have to shoulder some of the blame for this. They put a premium on usability and accessibility over visual flourish. (Although, in fairness to Mr Nielsen, his new site nngroup.com is really quite a snazzy affair, comparatively.) Of course, there are other reasons for these sites’ aesthetics — and it’s not because of the limitations of the form. You can make an accessible site look as glossy or as plain as you want it to look. It’s always our own ingenuity and attention to detail that are going to be the limiting factors. Synecdoche We must always guard against the tendency to assume that catering to screen readers means we have the whole accessibility ballgame covered. There’s so much more to accessibility than assistive technology, as you know. And within the field of assistive technology there are plenty of other devices for us to consider. Planning to accommodate all these users and devices can be daunting. When I first started working in this field I thought that the breadth of technology was prohibitive. I didn’t even know what a screen reader looked like. (I assumed they were big and heavy, perhaps like an old typewriter, and certainly they would be expensive and difficult to fathom.) This is nonsense, of course. Screen reader emulators are readily available as browser extensions and can be activated in seconds. ChromeVox and Fangs are both excellent and you should download one or the other right now. But the really good news is that you can emulate many other types of assistive technology without downloading a byte. And this is where we move from misconceptions into some (hopefully) useful advice. The mouse trap The simplest and most effective way to improve your abilities as a developer of accessible interfaces is to unplug your mouse. Keyboard operation has its own WCAG chapter, because most users of assistive technology are navigating the web using only their keyboards. You can go some way towards putting yourself into their shoes so easily — just by ditching a peripheral. Learning this was a lightbulb moment for me. When I build interfaces I am constantly flicking between code and the browser, testing or viewing the changes I have made.
Now, instead of checking a new element once, I check it twice: once with my mouse and then again without. Don’t just :hover The reality is that when you first start doing this you can find your site becomes unusable straightaway. It’s easy to lose track of which element is in focus as you hit the tab key repeatedly. One of the easiest changes you can make to your coding practice is to add :focus and :active pseudo-classes to every hover state that you write. I’m still amazed at how many sites fail to provide a decent focus state for links (and despite previous 24 ways authors writing on this same issue in 2007 and 2009!). You may find that in some cases it makes sense to have something other than, or in addition to, the hover state on focus, but start with the hover state that your designer has taken the time to provide you with. It’s a tiny change and there is no downside. So instead of this: .my-cool-link:hover { background-color: MistyRose; } …try writing this: .my-cool-link:hover, .my-cool-link:focus, .my-cool-link:active { background-color: MistyRose; } I’ve toyed with the idea of making a Sass mixin to take care of this for me, but I haven’t yet. I worry that people reading my code won’t see that I’m explicitly defining my focus and active states so I take the hit and write my hover rules out longhand. JavaScript can play, too This was another revelation for me. Keyboard-only navigation doesn’t necessitate a JavaScript-free experience, and up-to-date screen readers can execute JavaScript. So we’re able to create complex JavaScript-driven interfaces which all users can interact with. Some of the hard work has already been done for us. First, there are already conventions around keyboard-driven interfaces. Think about the last time you viewed a photo album on Facebook. You can use the arrow keys to switch between photos, and the escape key closes whichever lightbox-y UI thing Facebook is showing its photos in this week. Arrow keys (up/down as well as left/right) for progression through content; Escape to back out of something; Enter or space bar to indicate a positive intention — these are established keyboard conventions which we can apply to our interfaces to improve their accessibility. Of course, by doing so we are improving our interfaces in general, giving all users the option to switch between keyboard and mouse actions as and when it suits them. Second, this guy wants to help you out. Hans Hillen is a developer who has done a great deal of work around accessibility and JavaScript-powered interfaces. Along with The Paciello Group he has created a version of the jQuery UI library which has been fully optimised for keyboard navigation and screen reader use. It’s a fantastic reference which I revisit all the time. I’m not a huge fan of the jQuery UI library. It’s a pain to style and the code is a bit bloated. So I’ve not used this demo as a code resource to copy wholesale. I use it by playing with the various components and seeing how they react to keyboard controls. Each component is also fully marked up with the relevant ARIA roles to improve screen reader announcement where possible (more on this below). Coding for accessibility promotes good habits This is another observation around accessibility and JavaScript. I noticed an improvement in the structure and abstraction of my code when I started adding keyboard controls to my interface elements. Your code has to become more modular and event-driven, because any number of events could trigger the same interaction.
A mouse-click, the Enter key and the space bar could all conceivably trigger the same open function on a collapsed accordion element. (And you want to keep things DRY, don’t you?) If you aren’t already in the habit of separating out your interface functionality into discrete functions, you will be soon.

var doSomethingCool = function() {
    // Do something cool here.
};

// Bind function to a button click - pretty vanilla
$('.myCoolButton').on('click', function() {
    doSomethingCool();
    return false;
});

// Bind the same function to a range of keypresses
$(document).keyup(function(e) {
    switch (e.keyCode) {
        case 13: // enter
        case 32: // spacebar
            doSomethingCool();
            break;
        case 27: // escape
            doSomethingElse();
            break;
    }
});

To be honest, if you’re doing complex UI stuff with JavaScript these days, or if you’ve been building any responsive interfaces which rely on JavaScript, then you are most likely working with an application framework such as Backbone, Angular or Ember, so an abstracted and event-driven application structure will be familiar to you. It should be super easy for you to start helping out your keyboard-only users if you aren’t already — just add a few more event bindings into your UI layer! Manipulating the tab order So, you’ve adjusted your mindset and now you test every change to your codebase using a keyboard as well as a mouse. You’ve applied all your hover states to :focus and :active so you can see where you’re tabbing on the page, and your interactive components react seamlessly to a mixture of mouse and keyboard commands. Feels good, right? There’s another level of optimisation to consider: manipulating the tab order. Certain DOM elements are naturally part of the tab order, and others are excluded. Links and input elements are the main elements included in the tab order, and static elements like paragraphs and headings are excluded. What if you want to make a static element ‘tabbable’? A good example would be in an expandable accordion component. Each section of the accordion should be separated by a heading, and there’s no reason to make that heading into a link simply because it’s interactive.

<div class=""accordion-widget"">
    <h3>Tyrannosaurus</h3>
    <p>Tyrannosaurus; meaning ""tyrant lizard""...</p>
    <h3>Utahraptor</h3>
    <p>Utahraptor is a genus of theropod dinosaurs...</p>
    <h3>Dromiceiomimus</h3>
    <p>Ornithomimus is a genus of ornithomimid dinosaurs...</p>
</div>

Adding the heading elements to the tab order is trivial. We just set their tabindex attribute to zero. You could do this on the server or the client. I prefer to do it with JavaScript as part of the accordion setup and initialisation process. $('.accordion-widget h3').attr('tabindex', '0'); You can apply this trick in reverse and take elements out of the tab order by setting their tabindex attribute to −1, or change the tab order completely by using other integers. This should be done with great care, if at all. You have to be sure that the markup you remove from the tab order comes out because it genuinely improves the keyboard interaction experience. This is hard to validate without user testing. The danger is that developers will try to sweep complicated parts of the UI under the carpet by taking them out of the tab order. This would be considered a dark pattern — at least on my team! A farewell ARIA This is where things can get complex, and I’m no expert on the ARIA specification: I feel like I’ve only dipped my toe into this aspect of coding for accessibility. But, as with WCAG, I’d like to demystify things a little bit to encourage you to look into this area further yourself. ARIA roles are of most benefit to screen reader users, because they modify and augment screen reader announcements. Let’s take our dinosaur accordion from the previous section. The markup is semantic, so a screen reader that can’t handle JavaScript will announce all the content within the accordion, no problem. But modern screen readers can deal with JavaScript, and this means that all the lovely dino information beneath each heading has probably been hidden on document.ready, when the accordion initialised. It might have been hidden using display:none, which prevents a screen reader from announcing content. If that’s as far as you have gone, then you’ve committed an accessibility sin by hiding content from screen readers. Your user will hear a set of headings being announced, with no content in between. It would sound something like this if you were using ChromeVox: > Tyrannosaurus. Heading Three. > Utahraptor. Heading Three. > Dromiceiomimus. Heading Three. We can add some ARIA magic to the markup to improve this, using the tablist role. Start by adding a role of tablist to the widget, and roles of tab and tabpanel to the headings and paragraphs respectively. Set boolean values for aria-selected, aria-hidden and aria-expanded. The markup could end up looking something like this.

<div class=""accordion-widget"" role=""tablist"">
    <h3 role=""tab"" tabindex=""0"" aria-selected=""false"" aria-expanded=""false"">Utahraptor</h3>
    <p role=""tabpanel"" aria-hidden=""true"">Utahraptor is a genus of theropod dinosaurs...</p>
</div>
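Before we hear what the screen reader makes of this, it’s worth sketching the behaviour side too. Here’s a rough sketch of the kind of open() function I mention below, showing the ARIA attributes being kept in sync. The selectors, structure and show/hide approach are my own assumptions rather than a definitive implementation:

var open = function($tab) {
    var $panel = $tab.next('[role=tabpanel]');
    // Collapse everything first: deselect all the tabs and hide all the panels
    $('.accordion-widget [role=tab]').attr({ 'aria-selected': 'false', 'aria-expanded': 'false' });
    $('.accordion-widget [role=tabpanel]').attr('aria-hidden', 'true').hide();
    // Then expand the chosen tab, updating the ARIA state as we go
    $tab.attr({ 'aria-selected': 'true', 'aria-expanded': 'true' });
    $panel.attr('aria-hidden', 'false').show();
};

You would call this from both your click handler and your keyup handler, exactly as in the doSomethingCool() example from earlier.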

Now, if a screen reader user encounters the markup above they will hear the following: > Tyrannosaurus. Tab not selected; one of three. > Utahraptor. Tab not selected; two of three. > Dromiceiomimus. Tab not selected; three of three. You could add arrow key events to help the user browse up and down the tab list items until they find one they like. Your accordion open() function should update the ARIA boolean values as well as adding whatever classes and animations you have built in as standard. Your users know that unselected tabs are meant to be interacted with, so if a user triggers the open function (say, by hitting Enter or the space bar on the second item) they will hear this: > Utahraptor. Selected; two of three. The paragraph element for the expanded item will not be hidden by your CSS, which means it will be announced as normal by the screen reader. This kind of thing makes so much more sense when you have a working example to play with. Again, I refer you to the fantastic resource that Hans Hillen has put together: this is his take on an accessible accordion, on which much of my example is based. Conclusion Getting complex interfaces right for all of your users can be difficult — there’s no point pretending otherwise. And there’s no substitute for user testing with real users who navigate the web using assistive technology every day. This kind of testing can be time-consuming to recruit for and to conduct. On top of this, we now have accessibility on mobile devices to contend with. That’s a huge area in itself, and it’s one which I have not yet had a chance to research properly. So, there’s lots to learn, and there’s lots to do to get it right. But don’t be disheartened. If you have read this far then I’ll leave you with one final piece of advice: don’t wait. Don’t wait until you’re building a site which mandates AAA-compliance to try this stuff out. Don’t wait for a client with the will or the budget to conduct the full spectrum of user testing to come along. Unplug your mouse, and start playing with your interfaces in a new way. You’ll be surprised at the things that you learn and the issues you uncover. And the next time a true accessibility project comes along, you will be way ahead of the game.",2013,Charlie Perrins,charlieperrins,2013-12-03T00:00:00+00:00,https://24ways.org/2013/coding-towards-accessibility/,code 11,JavaScript: Taking Off the Training Wheels,"JavaScript is the third pillar of front-end web development. Of those pillars, it is both the most powerful and the most complex, so it’s understandable that when 24 ways asked, “What one thing do you wish you had more time to learn about?”, a number of you answered “JavaScript!” This article aims to help you feel happy writing JavaScript, maybe even without libraries like jQuery. I can’t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources. Why learn JavaScript? So what’s in it for you? Why take the next step and learn the fundamentals? Confidence with jQuery If nothing else, learning JavaScript will improve your jQuery code; you’ll be comfortable writing jQuery from scratch and feel happy bending others’ code to your own purposes. Writing efficient, fast and bug-free jQuery is also made much easier when you have a good appreciation of JavaScript, because you can look at what jQuery is really doing. Understanding how JavaScript works lets you write better jQuery because you know what it’s doing behind the scenes.
When you need to leave the beaten track, you can do so with confidence. In fact, you could say that jQuery’s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That’s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn’t needed. An example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here’s a comparison of some jQuery code and the equivalent without. $('.counter').each(function (index) { $(this).text(index + 1); }); var counters = document.querySelectorAll('.counter'); [].slice.call(counters).forEach(function (elem, index) { elem.textContent = index + 1; }); Solving problems no one else has! When you have to go to the internet to solve a problem, you’re forever stuck reusing code other people wrote to solve a slightly different problem to your own. Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has. Node.js Node.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don’t worry: the Node community is thriving, very friendly and willing to help. I think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting! Here’s an example web server written with Node. http is a module that allows you to create servers and, like jQuery’s $.ajax, make requests. It’s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it’s certainly not out of your reach. var http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World'); }).listen(1337); console.log('Server running at http://localhost:1337/'); Grunt and other website tools Node has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. Both of these rely heavily on Node, and I’ll talk a little bit about Grunt here. Grunt is a task runner, and many people use it for compiling Sass or compressing their site’s JavaScript and images. It’s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer. Ways to improve your skills So you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas. Rebuild a jQuery app Converting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you’re feeling adventurous. There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way’s jQuery to JavaScript article. Find a mentor If you think you’d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. 
The JavaScript community is very friendly and many people will be more than happy to give you their time. I’d look out for someone who’s active and friendly on Twitter, and does the kind of work you’d like to do. Introduce yourself over Twitter or send them an email. I wouldn’t expect a full tutoring course (although that is another option!) but they’ll be very glad to answer a question and any follow-ups every now and then. Go to a workshop Many conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there’s one in your area. Workshops are great because you can ask direct questions, and you’re in an environment where others are learning just like you are — no need to learn alone! Set yourself challenges This is one way I like to learn new things: when there’s a new thing that I’m not very good at, I pick something that I think is just out of my reach and I try to build it. It’s learning by doing and, even if you fail, it can be enormously valuable. Where to start? If you’ve decided learning JavaScript is an important step for you, your next question may well be where to go from here. I’ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want. Beginner If you’re just getting started with JavaScript, I’d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They’re all reputable sources (although I’ve included something I wrote — you can decide about that one!) and will not lead you astray. jQuery’s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro. Codecademy’s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you. HTMLDog’s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. [Disclaimer: I wrote this stuff, so it comes with a hazard warning!] The tuts+ jQuery to JavaScript article mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript. Getting in-depth For more comprehensive documentation and help I’d recommend adding these places to your list of go-tos. MDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it’s a great place to just go and browse. Axel Rauschmayer’s 2ality is a stunning collection of articles that will take you deep into JavaScript. It’s certainly worth looking at. Addy Osmani’s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications. And finally… I think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you’ll get there. I bet you’ll learn a whole lot along the way. Good luck! Many thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag.",2013,Tom Ashworth,tomashworth,2013-12-05T00:00:00+00:00,https://24ways.org/2013/javascript-taking-off-the-training-wheels/,code 15,Git for Grown-ups,"You are a clever and talented person.
You create beautiful designs, or perhaps you have architected a system that even my cat could use. Your peers adore you. Your clients love you. But, until now, you haven’t *&^#^! been able to make Git work. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^! command to upload your work. It’s not you. It’s Git. Promise. Yes, this is an article about the popular version control system, Git. But unlike just about every other article written about Git, I’m not going to give you the top five commands that you need to memorize; and I’m not going to tell you all your problems would be solved if only you were using this GUI wrapper or that particular workflow. You see, I’ve come to a grand realization: when we teach Git, we’re doing it wrong. Let me back up for a second and tell you a little bit about the field of adult education. (Bear with me, it gets good and will leave you feeling both empowered and righteous.) Andragogy, unlike pedagogy, is a learner-driven educational experience. There are six main tenets to adult education: Adults prefer to know why they are learning something. The foundation of the learning activities should include experience. Adults prefer to be able to plan and evaluate their own instruction. Adults are more interested in learning things which directly impact their daily activities. Adults prefer learning to be oriented not towards content, but towards problems. Adults relate more to their own motivators than to external ones. Nowhere in this list does it include “memorize the five most popular Git commands”. And yet this is how we teach version control: init, add, commit, branch, push. You’re an expert! Sound familiar? In the hierarchy of learning, memorizing commands is the lowest, or most basic, form of learning. At the peak of learning you are able to not just analyze and evaluate a problem space, but create your own understanding in relation to your existing body of knowledge. “Fine,” I can hear you saying to yourself. “But I’m here to learn about version control.” Right you are! So how can we use this knowledge to master Git? First of all: I give you permission to use Git as a tool. A tool which you control and which you assign tasks to. A tool like a hammer, or a saw. Yes, your mastery of your tools will shape the kinds of interactions you have with your work, and your peers. But it’s yours to control. Git was written by kernel developers for kernel development. The web world has adopted Git, but it is not a tool designed for us and by us. It’s no Sass, y’know? Git wasn’t developed out of our frustration with managing CSS files in an increasingly complex ecosystem of components and atomic design. So, as you work through the next part of this article, give yourself a bit of a break. We’re in this together, and it’s going to be OK. We’re going to do a little activity. We’re going to create your perfect Git cheatsheet. I want you to start by writing down a list of all the people on your code team. This list may include: developers designers project managers clients Next, I want you to write down a list of all the ways you interact with your team. Maybe you’re a solo developer and you do all the tasks. Maybe you only do a few things. But I want you to write down a list of all the tasks you’re actually responsible for. For example, my list looks like this: writing code reviewing code publishing tested code to your server(s) troubleshooting broken code The next list will end up being a series of boxes in a diagram. 
But to start, I want you to write down a list of your tools and constraints. This list potentially has a lot of noun-like items and verb-like items: code hosting system (Bitbucket? GitHub? Unfuddle? self-hosted?) server ecosystem (dev/staging/live) automated testing systems or review gates automated build systems (that Jenkins dude people keep referring to) Brilliant! Now you’ve got your actors and your actions, it’s time to shuffle them into a diagram. There are many popular workflow patterns. None are inherently right or wrong; rather, some are more or less appropriate for what you are trying to accomplish. Centralized workflow Everyone saves to a single place. This workflow may mean no version control, or a very rudimentary version control system which only ever has a single copy of the work available to the team at any point in time. Branching workflow Everyone works from a copy of the same place, merging their changes into the main copy as their work is completed. Think of the branches as a motorcycle sidecar: they’re along for the ride and probably cannot exist in isolation of the main project for long without serious danger coming to either the driver or the sidecar passenger. Branches are a fundamental concept in version control — they allow you to work on new features, bug fixes, and experimental changes within a single repository, but without forcing the changes onto others working from the same branch. Forking workflow Everyone works from their own, independent repository. A fork is an exact duplicate of a repository that a developer can make their own changes to. It can be kept up to date with additional changes made in other repositories, but it cannot force its changes onto another’s repository. A fork is a complete repository which can use its own workflow strategies. If developers wish to merge their work with the main project, they must make a request of some kind (submit a patch, or a pull request) which the project collaborators may choose to adopt or reject. This workflow is popular for open source projects as it enforces a review process. Gitflow workflow A specific workflow convention which includes five streams of parallel coding efforts: master, development, feature branches, release branches, and hot fixes. This workflow is often simplified down to a few elements by web teams, but may be used wholesale by software product teams. The original article describing this workflow was written by Vincent Driessen back in January 2010. But these workflows aren’t about you yet, are they? So let’s make the connections. For each of the people on your team you identified earlier, draw a little circle. Give each of these circles some eyes and a smile. Now I want you to draw arrows between each of these people in the direction that code (ideally) flows. Does your designer create responsive prototypes which are pushed to the developer? Draw an arrow to represent this. Chances are high that you don’t just have people on your team, but you also have some kind of infrastructure. Hopefully you wrote about it earlier. For each of the servers and code repositories in your infrastructure, draw a square. Now, add to your diagram the relationships between the people and each of the machines in the infrastructure. Who can deploy code to the live server? How does it really get there? I bet it goes through some kind of code hosting system, such as GitHub. Draw in those arrows. But wait! The code that’s on your development machine isn’t the same as the live code.
This is where we introduce the concept of a branch in version control. In Git, a repository contains all of the code (sort of). A branch is a fragment of the code that has been worked on in isolation from the other branches within a repository. Often branches will have elements in common. When we compare two (or more) branches, we are asking about the difference (or diff) between these two slivers. Often the master branch is used on production, and the development branch is used on our dev server. The difference between these two branches is the untested code that is not yet deployed. On your diagram, see if you can colour-code according to the branch names at each of the locations within your infrastructure. You might find it useful to make a few different copies of the diagram to isolate each of the tasks you need to perform. For example: our team has a peer review process that each branch must go through before it is merged into the shared development branch. Finally, we are ready to add the Git commands necessary to make sense of the arrows in our diagram. If we are bringing code to our own workstation we will issue one of the following commands: clone (the first time we bring code to our workstation) or pull. Remembering that a repository contains all branches, we will issue the command checkout to switch from one branch to another within our own workstation. If we want to share a particular branch with one of our team mates, we will push this branch back to the place we retrieved it from (the origin). Along each of the arrows in your diagram, write the name of the command you are going to use when you perform that particular task. From here, it’s up to you to be selfish. Before asking Git what command it would like you to use, sketch the diagram of what you want. Git is your tool, you are not Git’s tool. Draw the diagram. Communicate your tasks with your team as explicitly as you can. Insist on being a selfish adult learner — demand that others explain to you, in ways that are relevant to you, how to do the things you need to do today.",2013,Emma Jane Westby,emmajanewestby,2013-12-04T00:00:00+00:00,https://24ways.org/2013/git-for-grownups/,code 16,URL Rewriting for the Fearful,"I think it was Marilyn Monroe who said, “If you can’t handle me at my worst, please just fix these rewrite rules, I’m getting an internal server error.” Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from. The majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable — I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we’re going to go back to basics to try to make the whole rigmarole more understandable. When we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that’s the most common case, that’s what I’ll be sticking to here. If you work with a different server, there’s often documentation specifically for translating from Apache’s mod_rewrite rules. I even found an automatic converter for nginx. This isn’t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you’re doing.
If you’ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don’t worry. As Michael Jackson once insipidly whined, you are not alone. The basics Rewrite rules form part of the Apache web server’s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is: RewriteEngine on The general formula for a rewrite rule is: RewriteRule URL/to/match URL/to/use/if/it/matches [options] When we talk about URL rewriting, we’re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We’ll look at those in turn. Redirects Redirects match an incoming URL, and then redirect the user’s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work. In 1998, Sir Tim Berners-Lee wrote that Cool URIs don’t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it’s fine to move things around — especially to correct bad URL design choices of the past — provided that you can do so while keeping those old URLs working. That’s where redirects can help. A redirect might look like this RewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L] Rewriting By default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file. A rewrite rule changes that process by breaking the direct relationship between the URL and the file system. “When there’s a request for /about/history.html” a rewrite rule might say, “use the file /about_section.php instead.” This opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page. RewriteRule ^for/this/url/$ /use/this/file.php [L] Matching patterns By now you’ll have noted the weird ^ and $ characters wrapped around the URL we’re trying to match. That’s because what we’re actually using here is a pattern. Technically, it is what’s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We’ll call it a pattern because we’re not animals. What are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you’d wonder what I wanted your credit card details for, but you’d know that I wanted a two-digit month, a slash, and a two-digit year. That’s not a regular expression, but it’s the same idea: using some placeholder characters to define the pattern of the input you’re trying to match. We’ve already met two regexp characters. ^ Matches the beginning of a string $ Matches the end of a string When a pattern starts with ^ and ends with $ it’s to make sure we match the complete URL start to finish, not just part of it. 
There are lots of other ways to match, too: [0-9] Matches a number, 0–9. [2-4] would match numbers 2 to 4 inclusive. [a-z] Matches lowercase letters a–z [A-Z] Matches uppercase letters A–Z [a-z0-9] Combining some of these, this matches letters a–z and numbers 0–9 These are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you’re looking for within the brackets, as well as the ranges shown above. However, all these just match one single character. [0-9] would match 8 but not 84 — to match 84 we’d need to use [0-9] twice. [0-9][0-9] So, if we wanted to match 1984 we could to do this: [0-9][0-9][0-9][0-9] …but that’s getting silly. Instead, we can do this: [0-9]{4} That means any character between 0 and 9, four times. If we wanted to match a number, but didn’t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more. [0-9]+ This now matches 1, 123 and 1234567. Putting it into practice Let’s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. The articles all have URLs like this: 2013/article-title/ They start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We’d match it like this: ^[0-9]{4}/[a-z0-9-]+/$ If that looks frightening, don’t worry. Breaking it down, from the start of the URL (^) we’re looking for four numbers ([0-9]{4}). Then a slash — that’s just literal — and then anything lowercase a–z or 0–9 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$). Putting that into a rewrite rule, we end up with this: RewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php We’re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it’s supposed to display. Capturing groups, and replacements When rewriting URLs you’ll often want to take important parts of the URL you’re matching and pass them along to the script that handles the request. That’s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we’re looking for. That means we need to call it like this: /article.php?year=2013&slug=article-title To do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what’s called a capturing group. To capture an important part of the source URL to use in the destination, surround it in parentheses. Our pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes: ^([0-9]{4})/([a-z0-9-]+)/$ To use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.) RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 The value of the year capturing group gets used as $1 and the article title slug is $2. Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern. 
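As an aside, you don’t have to keep uploading an .htaccess file to experiment with a pattern. For everything we’ve used here, JavaScript’s regular expressions behave the same way as Apache’s, so you can try a pattern out in your browser’s console first. A quick sketch, with a made-up URL:

var pattern = /^([0-9]{4})\/([a-z0-9-]+)\/$/;
var match = '2013/url-rewriting-for-the-fearful/'.match(pattern);
console.log(match[1]); // '2013' (our $1)
console.log(match[2]); // 'url-rewriting-for-the-fearful' (our $2)

If match comes back null, the URL wouldn’t have matched your rewrite rule either.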
Options Several brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are: R=301 Perform an HTTP 301 redirect to send the user’s browser to the new URL. A status of 301 means a resource has moved permanently and so it’s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes. L Last. If this rule matches, don’t bother processing the following rules. Options are set in square brackets at the end of the rule. You can set multiple options by separating them with commas: RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L] or RewriteRule ^about/([a-z0-9-]+).jsp/$ /about/$1/ [R=301,L] Common pitfalls Once you’ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. One common reason for this is hidden behind that [L] flag. L for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does — the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes. One way to avoid this problem is to keep your ‘real’ pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules. Useful snippets I find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to. Excluding a directory As mentioned above, if you’re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won’t be applied. RewriteRule ^_source - [L] This is also useful for excluding things like CMS folders from your website’s rewrite rules RewriteRule ^perch - [L] Adding or removing www from the domain Some folk like to use a www and others don’t. Usually, it’s best to pick one and go with it, and redirect the one you don’t want. On this site, we don’t use www.24ways.org so we redirect those requests to 24ways.org. This uses a RewriteCond which is like an if for a rewrite rule: “If this condition matches, then apply the following rule.” In this case, it’s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything: RewriteCond %{HTTP_HOST} ^www\.24ways\.org$ [NC] RewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L] The [NC] flag means ‘no case’ — the match is case-insensitive. The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance. Removing file extensions Sometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.
RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}.php -f RewriteRule ^(.+)$ $1.php [L,QSA] This says if the file being asked for isn’t a file (!-f) and if it isn’t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means ‘query string append’: append the existing query string onto the rewritten URL. It’s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL. Logging for when it all goes wrong Although not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. This can be useful to track down where a rule is going wrong, if it’s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance. RewriteEngine On RewriteLog ""/full/system/path/to/rewrite.log"" RewriteLogLevel 5 To be doubly clear: this will not work from an .htaccess file — it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.) The white screen of death One of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn’t an error message, of course. It’s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log. If you have access to your server logs, check the Apache error log and you’ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.) In conclusion Rewriting URLs can be a bear, but the advantages are clear. Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future. If you’re redesigning a site, remember that cool URIs don’t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working. Further reading To find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources. From the horse’s mouth, the Apache mod_rewrite documentation Particularly useful with that documentation is the RewriteRule Flags listing You may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial Friend of 24 ways, Neil Crosby has a mod_rewrite Beginner’s Guide which I’ve found handy over the years. As noted at the start, this isn’t a fully comprehensive guide, but I hope it’s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects? 
Feel free to share them in the comments.",2013,Drew McLellan,drewmclellan,2013-12-01T00:00:00+00:00,https://24ways.org/2013/url-rewriting-for-the-fearful/,code 18,Grunt for People Who Think Things Like Grunt are Weird and Hard,"Front-end developers are often told to do certain things: Work in as small chunks of CSS and JavaScript as makes sense to you, then concatenate them together for the production website. Compress your CSS and minify your JavaScript to make their file sizes as small as possible for your production website. Optimize your images to reduce their file size without affecting quality. Use Sass for CSS authoring because of all the useful abstraction it allows. That’s not a comprehensive list of course, but those are the kind of things we need to do. You might call them tasks. I bet you’ve heard of Grunt. Well, Grunt is a task runner. Grunt can do all of those things for you. Once you’ve got it set up, which isn’t particularly difficult, those things can happen automatically without you having to think about them again. But let’s face it: Grunt is one of those fancy newfangled things that all the cool kids seem to be using but at first glance feels strange and intimidating. I hear you. This article is for you. Let’s nip some misconceptions in the bud right away Perhaps you’ve heard of Grunt, but haven’t done anything with it. I’m sure that applies to many of you. Maybe one of the following hang-ups applies to you. I don’t need the things Grunt does You probably do, actually. Check out that list up top. Those things aren’t nice-to-haves. They are pretty vital parts of website development these days. If you already do all of them, that’s awesome. Perhaps you use a variety of different tools to accomplish them. Grunt can help bring them under one roof, so to speak. If you don’t already do all of them, you probably should and Grunt can help. Then, once you are doing those, you can keep using Grunt to do more for you, which will basically make you better at doing your job. Grunt runs on Node.js — I don’t know Node You don’t have to know Node. Just like you don’t have to know Ruby to use Sass. Or PHP to use WordPress. Or C++ to use Microsoft Word. I have other ways to do the things Grunt could do for me Are they all organized in one place, configured to run automatically when needed, and shared among every single person working on that project? Unlikely, I’d venture. Grunt is a command line tool — I’m just a designer I’m a designer too. I prefer native apps with graphical interfaces when I can get them. But I don’t think that’s going to happen with Grunt [1]. The extent to which you need to use the command line is: Navigate to your project’s directory. Type grunt and press Return. After set-up, that is, which again isn’t particularly difficult. OK. Let’s get Grunt installed Node is indeed a prerequisite for Grunt. If you don’t have Node installed, don’t worry, it’s very easy. You literally download an installer and run it. Click the big Install button on the Node website. You install Grunt on a per-project basis. Go to your project’s folder. It needs a file there named package.json at the root level. You can just create one and put it there. The contents of that file should be this:

{
    ""name"": ""example-project"",
    ""version"": ""0.1.0"",
    ""devDependencies"": {
        ""grunt"": ""~0.4.1""
    }
}

Feel free to change the name of the project and the version, but the devDependencies thing needs to be in there just like that. This is how Node does dependencies.
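A quick aside about that squiggly tilde in the version number, because it can look a bit magical: it’s npm’s shorthand for a range of acceptable versions. A sketch of what it means:

// ""grunt"": ""~0.4.1"" tells npm that any patch release of the 0.4 line is fine:
// anything >= 0.4.1 and < 0.5.0.
// So 0.4.2 would be installed, but 0.5.0 would not.

If you’d rather pin an exact version, leave the tilde off.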
Node has a package manager called NPM (Node packaged modules) for managing Node dependencies (like a gem for Ruby if you’re familiar with that). You could even think of it a bit like a plug-in for WordPress. Once that package.json file is in place, go to the terminal and navigate to your folder. Terminal rubes like me do it like this: (screenshot: terminal rube changing directories) Then run the command: npm install After you’ve run that command, a new folder called node_modules will show up in your project. The other files you see there, README.md and LICENSE are there because I’m going to put this project on GitHub and that’s just standard fare there. The last installation step is to install the Grunt CLI (command line interface). That’s what makes the grunt command in the terminal work. Without it, typing grunt will net you a “Command Not Found”-style error. It is a separate installation for efficiency reasons. Otherwise, if you had ten projects you’d have ten copies of Grunt CLI. This is a one-liner again. Just run this command in the terminal: npm install -g grunt-cli You should close and reopen the terminal as well. That’s a generic good practice to make sure things are working right. Kinda like how restarting your computer after installing a new application was standard practice in the olden days. Let’s make Grunt concatenate some files Perhaps in our project there are three separate JavaScript files: jquery.js – The library we are using. carousel.js – A jQuery plug-in we are using. global.js – Our authored JavaScript file where we configure and call the plug-in. In production, we would concatenate all those files together for performance reasons (one request is better than three). We need to tell Grunt to do this for us. But wait. Grunt actually doesn’t do anything all by itself. Remember Grunt is a task runner. The tasks themselves we will need to add. We actually haven’t set up Grunt to do anything yet, so let’s do that. The official Grunt plug-in for concatenating files is grunt-contrib-concat. You can read about it on GitHub if you want, but all you have to do to use it on your project is to run this command from the terminal (it will henceforth go without saying that you need to run the given commands from your project’s root folder): npm install grunt-contrib-concat --save-dev A neat thing about doing it this way: your package.json file will automatically be updated to include this new dependency. Open it up and check it out. You’ll see a new line: ""grunt-contrib-concat"": ""~0.3.0"" Now we’re ready to use it. To use it we need to start configuring Grunt and telling it what to do. You tell Grunt what to do via a configuration file named Gruntfile.js [2]. Just like our package.json file, our Gruntfile.js has a very special format that must be just right. I wouldn’t worry about what every word of this means. Just check out the format:

module.exports = function(grunt) {

    // 1. All configuration goes here
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),

        concat: {
            // 2. Configuration for concatenating files goes here.
        }

    });

    // 3. Where we tell Grunt we plan to use this plug-in.
    grunt.loadNpmTasks('grunt-contrib-concat');

    // 4. Where we tell Grunt what to do when we type ""grunt"" into the terminal.
    grunt.registerTask('default', ['concat']);

};

Now we need to create that configuration. The documentation can be overwhelming. Let’s focus just on the very simple usage example. Remember, we have three JavaScript files we’re trying to concatenate.
We’ll list file paths to them under src in an array of file paths (as quoted strings) and then we’ll list a destination file as dest. The destination file doesn’t have to exist yet. It will be created when this task runs and squishes all the files together. Both our jquery.js and carousel.js files are libraries. We most likely won’t be touching them. So, for organization, we’ll keep them in a /js/libs/ folder. Our global.js file is where we write our own code, so that will be right in the /js/ folder. Now let’s tell Grunt to find all those files and squish them together into a single file named production.js, named that way to indicate it is for use on our real live website.

concat: {
    dist: {
        src: [
            'js/libs/*.js', // All JS in the libs folder
            'js/global.js'  // This specific file
        ],
        dest: 'js/build/production.js',
    }
}

Note: throughout this article there will be little chunks of configuration code like above. The intention is to focus in on the important bits, but it can be confusing at first to see how a particular chunk fits into the larger file. If you ever get confused and need more context, refer to the complete file. With that concat configuration in place, head over to the terminal, run the command: grunt and watch it happen! production.js will be created and will be a perfect concatenation of our three files. This was a big aha! moment for me. Feel the power course through your veins. Let’s do more things! Let’s make Grunt minify that JavaScript We have so much prep work done now, adding new tasks for Grunt to run is relatively easy. We just need to: Find a Grunt plug-in to do what we want Learn the configuration style of that plug-in Write that configuration to work with our project The official plug-in for minifying code is grunt-contrib-uglify. Just like we did last time, we just run an NPM command to install it: npm install grunt-contrib-uglify --save-dev Then we alter our Gruntfile.js to load the plug-in: grunt.loadNpmTasks('grunt-contrib-uglify'); Then we configure it:

uglify: {
    build: {
        src: 'js/build/production.js',
        dest: 'js/build/production.min.js'
    }
}

Let’s update that default task to also run minification: grunt.registerTask('default', ['concat', 'uglify']); Super-similar to the concatenation set-up, right? Run grunt at the terminal and you’ll get some deliciously minified JavaScript. That production.min.js file is what we would load up for use in our index.html file. Let’s make Grunt optimize our images We’ve got this down pat now. Let’s just go through the motions. The official image minification plug-in for Grunt is grunt-contrib-imagemin. Install it: npm install grunt-contrib-imagemin --save-dev Register it in the Gruntfile.js: grunt.loadNpmTasks('grunt-contrib-imagemin'); Configure it:

imagemin: {
    dynamic: {
        files: [{
            expand: true,
            cwd: 'images/',
            src: ['**/*.{png,jpg,gif}'],
            dest: 'images/build/'
        }]
    }
}

Make sure it runs: grunt.registerTask('default', ['concat', 'uglify', 'imagemin']); Run grunt and watch that gorgeous squishification happen. Gotta love performance increases for nearly zero effort. Let’s get a little bit smarter and automate What we’ve done so far is awesome and incredibly useful.
But there are a couple of things we can get smarter on and make things easier on ourselves, as well as Grunt: Run these tasks automatically when they should Run only the tasks needed at the time For instance: Concatenate and minify JavaScript when JavaScript changes Optimize images when a new image is added or an existing one changes We can do this by watching files. We can tell Grunt to keep an eye out for changes to specific places and, when changes happen in those places, run specific tasks. Watching happens through the official grunt-contrib-watch plug-in. I’ll let you install it. It is exactly the same process as the last few plug-ins we installed. We configure it by giving watch specific files (or folders, or both) to watch. By watch, I mean monitor for file changes, file deletions or file additions. Then we tell it what tasks we want to run when it detects a change. We want to run our concatenation and minification when anything in the /js/ folder changes. When it does, we should run the JavaScript-related tasks. And when things happen elsewhere, we should not run the JavaScript-related tasks, because that would be irrelevant. So: watch: { scripts: { files: ['js/*.js'], tasks: ['concat', 'uglify'], options: { spawn: false, }, } } Feels pretty comfortable at this point, hey? The only weird bit there is the spawn thing. And you know what? I don’t even really know what that does. From what I understand from the documentation, it is the smart default. That’s real-world development. Just leave it alone if it’s working and if it’s not, learn more. Note: Isn’t it frustrating when something that looks so easy in a tutorial doesn’t seem to work for you? If you can’t get Grunt to run after making a change, it’s very likely to be a syntax error in your Gruntfile.js, and Grunt will print an error in the terminal when that happens. Usually Grunt is pretty good about letting you know what happened, so be sure to read the error message. When this happened to me, a syntax error in the form of a missing comma foiled me. Adding the comma allowed it to run. Let’s make Grunt do our preprocessing The last thing on our list from the top of the article is using Sass — yet another task Grunt is well-suited to run for us. But wait: isn’t Sass written in Ruby? Indeed it is. There is a version of Sass that will run in Node and thus not add an additional dependency to our project, but it’s not quite up to snuff with the main Ruby project. So, we’ll use the official grunt-contrib-sass plug-in, which just assumes you have Sass installed on your machine. If you don’t, follow the command line instructions. What’s neat about Sass is that it can do concatenation and minification all by itself. So for our little project we can just have it compile our main global.scss file: sass: { dist: { options: { style: 'compressed' }, files: { 'css/build/global.css': 'css/global.scss' } } } We wouldn’t want to manually run this task. We already have the watch plug-in installed, so let’s use it! Within the watch configuration, we’ll add another subtask: css: { files: ['css/*.scss'], tasks: ['sass'], options: { spawn: false, } } That’ll do it. Now, every time we change any of our Sass files, the CSS will automatically be updated. Let’s take this one step further (it’s absolutely worth it) and add LiveReload. With LiveReload, you won’t have to go back to your browser and refresh the page. Page refreshes happen automatically and in the case of CSS, new styles are injected without a page refresh (handy for heavily state-based websites).
It’s very easy to set up, since the LiveReload ability is built into the watch plug-in. We just need to: Install the browser plug-in Add to the top of the watch configuration: watch: { options: { livereload: true, }, scripts: { /* etc */ Restart the browser and click the LiveReload icon to activate it. Update some Sass and watch it change the page automatically. Yum. Prefer a video? If you’re the type that likes to learn by watching, I’ve made a screencast to accompany this article that I’ve published over on CSS-Tricks: First Moments with Grunt Leveling up As you might imagine, there is a lot of leveling up you can do with your build process. It surely could be a full-time job in some organizations. Some hardcore devops nerds might scoff at the simplistic setup we have going here. But I’d advise them to slow their roll. Even what we have done so far is tremendously valuable. And don’t forget this is all free and open source, which is amazing. You might level up by adding more useful tasks: Running your CSS through Autoprefixer (A+, would recommend) instead of preprocessor add-ons. Writing and running JavaScript unit tests (example: Jasmine). Building your image sprites and SVG icons automatically (example: Grunticon). Starting a server, so you can link to assets with proper file paths and use services that require a real URL like Typekit and such, as well as removing the need for other tools that do this, like MAMP. Checking for code problems with HTML-Inspector, CSS Lint, or JS Hint. Having new CSS automatically injected into the browser whenever it changes. Helping you commit or push to a version control repository like GitHub. Adding version numbers to your assets (cache busting). Helping you deploy to a staging or production environment (example: DPLOY). You might level up by simply understanding more about Grunt itself: Read Grunt Boilerplate by Mark McDonnell. Read Grunt Tips and Tricks by Nicolas Bevacqua. Organize your Gruntfile.js by splitting it up into smaller files. Check out other people’s and projects’ Gruntfile.js. Learn more about Grunt by digging into its source and learning about its API. Let’s share I think some group sharing would be a nice way to wrap this up. If you are installing Grunt for the first time (or remember doing that), be especially mindful of little frustrating things you experience(d) but work(ed) through. Those are the things we should share in the comments here. That way we have this safe place and useful resource for working through those confusing moments without the embarrassment. We’re all in this thing together! 1 Maybe someday someone will make a beautiful Grunt app for your operating system of choice. But I’m not sure that day will come. The configuration of the plug-ins is the important part of using Grunt. Each plug-in is a bit different, depending on what it does. That means a uniquely considered UI for every single plug-in, which is a long shot. Perhaps a decent middle ground is this Grunt DevTools Chrome add-on. 2 Gruntfile.js is often referred to as Gruntfile in documentation and examples. Don’t literally name it Gruntfile — it won’t work.",2013,Chris Coyier,chriscoyier,2013-12-11T00:00:00+00:00,https://24ways.org/2013/grunt-is-not-weird-and-hard/,code 20,Make Your Browser Dance,"It was a crisp winter’s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case.
I slammed the boot shut, locked the car and started walking towards the venue. This was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these… and working out ways to mix them as the music played. This was it. This was me VJing. This was all a lifetime (well, a decade!) ago. When I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframes directive, as well as new APIs such as getUserMedia, indexedDB and the Web Audio API. But wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser? Well, there’s only one thing to do: let’s try it! Let’s take to the dance floor Over the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let’s take the same approach here. The main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right? So, let’s start at the beginning: creating a simple visual. For this I’m going to create a CSS animation. It’s just a funky i element with the opacity being changed to make it flash. See the Pen Creating a light by Rumyra (@Rumyra) on CodePen A note about prefixes: I’ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. I find a great resource to find out if you do is caniuse.com. You can also check out all the code for the examples in this article. Start the music Well, that’s pretty easy so far. Next up: loading in some sound. For this we’ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device’s speakers; and any number of processing nodes in between. All this processing that goes on with the audio is sandboxed within the AudioContext. So, let’s start by initialising our audio context. var contextClass = window.AudioContext; if (contextClass) { //web audio api available. var audioContext = new contextClass(); } else { //web audio api unavailable //warn user to upgrade/change browser } Now let’s load our sound file into the new context we created with an XMLHttpRequest. function loadSound() { //set audio file url var audioFileUrl = '/octave.ogg'; //create new request var request = new XMLHttpRequest(); request.open(""GET"", audioFileUrl, true); request.responseType = ""arraybuffer""; request.onload = function() { //take from http request and decode into buffer audioContext.decodeAudioData(request.response, function(buffer) { audioBuffer = buffer; }); } request.send(); } Phew! Now we’ve loaded in some sound!
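If you’d like to actually hear it at this point, a minimal sketch (using the audioContext and audioBuffer variables we created above) might look like this:

function playSound() {
  //create a source node from our decoded buffer
  var source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  //pipe it straight to the speakers
  source.connect(audioContext.destination);
  //play immediately
  source.start(0);
}

Older WebKit builds used noteOn(0) instead of start(0), which is another reason to keep an eye on caniuse.com.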
There are plenty of things we can do with the Web Audio API: increase the volume; add filters; apply spatialisation. If you want to dig deeper, the O’Reilly Web Audio API book by Boris Smus is available to read online free. All we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have. Learning the steps Let’s take a minute to step back and remember our school days and science class. I’m sure if I drew a picture of a sound wave, we would all start nodding our heads. The sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically the strength of pressure. A simple example of change of amplitude is when you increase the volume on your stereo and the output wave increases in size. This is great when everything is analogue, but the waveform varies continuously and it’s not suitable for digital processing: there’s an infinite set of values. For digital processing, we need discrete numbers. We have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we’re doing in the code above is piping that data into the audio context. All we need to do now is access it. We can do this with the Web Audio API’s analysing functionality. Just pop in an analysing node before we connect the source to its destination node. function createAnalyser(source) { //create analyser node analyser = audioContext.createAnalyser(); //connect to source source.connect(analyser); //pipe to speakers analyser.connect(audioContext.destination); } The data I’m really interested in here is frequency. Later we could look into amplitude or time, but for now I’m going to stick with frequency. The analyser node gives us frequency data via the getByteFrequencyData method. Don’t forget to count! To collect the data from the getByteFrequencyData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it? This is really up to us and how high the resolution of frequencies we want to analyse is. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context’s sampleRate attribute. This is good to bear in mind when you’re thinking about your resolution of frequencies. var sampleRate = audioContext.sampleRate; Let’s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we’re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0–20,000Hz. So, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform), which in this case is 2,048, the default. You can set it via the fftSize property. //set our FFT size analyser.fftSize = 2048; //create an empty array with 1024 items var frequencyData = new Uint8Array(1024); So, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 ÷ 1,024 = 23.44Hz apart. The thing is, we also want that array to be updated constantly.
We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame. function update() { //constantly getting feedback from data requestAnimationFrame(update); analyser.getByteFrequencyData(frequencyData); } Putting it all together Sweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier. We can easily pause and run our CSS animation from JavaScript: element.style.webkitAnimationPlayState = ""paused""; element.style.webkitAnimationPlayState = ""running""; Unfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle. There is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms. This seems a bit much for our proof of concept, so let’s backtrack a little. We know by the animation we’ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript. element.style.opacity = ""1""; element.style.opacity = ""0.2""; So let’s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I’ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold. //get light elements var lights = document.getElementsByTagName('i'); var totalLights = lights.length; for (var i = 0; i < totalLights; i++) { if (frequencyData[i] > 160) { //start animation on element lights[i].style.opacity = ""1""; } else { lights[i].style.opacity = ""0.2""; } } See all the code in action here. I suggest viewing in a modern browser :) Awesome! It is true — we can VJ in our browser! Let’s dance! So, let’s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. Also, maybe we should try a sound file more suited to gigs or clubs. Check it out! I don’t know about you, but I’m pretty excited — that’s just a bit of HTML, CSS and JavaScript! The other thing to think about, of course, is the sound that you would get at a venue. We don’t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I’ve found, is to capture what my laptop’s mic is picking up and pipe that back into the audio context. We can do this by using getUserMedia. Let’s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash. And relax :) There you have it. Sit back, play some music and enjoy the Winamp-like experience in front of you. So, where do we go from here? I already have a wealth of ideas. We haven’t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it’s questionable whether the browser is the best environment for this. For one, I’m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future).
But hey, it’s fun, and it looks cool and sometimes I think it’s OK to just dance.",2013,Ruth John,ruthjohn,2013-12-02T00:00:00+00:00,https://24ways.org/2013/make-your-browser-dance/,code 21,Keeping Parts of Your Codebase Private on GitHub,"Open source is brilliant, there’s no denying that, and GitHub has been instrumental in open source’s recent success. I’m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. To this end, GitHub created private repositories which act like any other Git repository, only, well, private! A slightly less common issue, and one I’ve come up against myself, is the desire to only keep certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes. Before we begin, it’s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and expose parts of it as either an open source project, or a built GitHub Pages site. N.B. This post requires some basic Git knowledge. Adding your public remote Let’s assume you’re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the “Adding your private remote” section.) So, we have a clean slate: nothing has been set up yet, we’re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com. Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account). On your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff — whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this: $ git remote add origin git@github.com:[user]/site.com.git Here we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you’ve explicitly changed it) to that remote: $ git push -u origin master Here we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like. This is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. Now to set up our private remote. Adding your private remote We’ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. 
(Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. Two GitHub repos, and one local one. In your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don’t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository. $ git checkout -b dev We have now made a new branch called dev off the branch we were on last (master, unless you renamed it). Now we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote: $ git remote add private git@github.com:[user]/private.site.com.git Like before, we are just telling Git to add a new remote to this repo, only this time we’ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it. Now we need to tell our dev branch to push to our private remote: $ git push -u private dev Here, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we’ve set up upstream tracking. This means that, by default, the dev branch will only push and pull to and from the private remote (unless you ever explicitly state otherwise). Now you have two branches (master and dev respectively) that push to two remotes (origin and private respectively) which are public and private respectively. Any work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote. Adding more branches So far we’ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you’d like. Of course, you’d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let’s imagine we want to create a branch to try something out real quickly: $ git checkout -b test Now, when we come to push this branch, we can choose which remote we send it to: $ git push -u private test This pushes the new test branch to our private remote (again, setting the persistent tracking with -u). You can have as many or as few remotes or branches as you like. Combining the two Let’s say you’ve been working on a new feature in private for a few days, and you’ve kept that on the private remote. You’ve now finalised the addition and want to move it into your public repo. This is just a simple merge. Check out your master branch: $ git checkout master Then merge in the branch that contained the feature: $ git merge dev Now master contains the commits that were made on dev and, once you’ve pushed master to its remote, those commits will be viewable publicly on GitHub: $ git push Note that we can just run $ git push on the master branch as we’d previously set up our upstream tracking (-u). Multiple machines So far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let’s say you’ve got yourself a new Mac (yay!) 
and you want to clone an existing project: $ git clone git@github.com:[user]/site.com.git This will not clone any information about the remotes you had set up on the previous machine. Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above. Done! If you’d like to see me blitz through all that in one go, check the showterm recording. The beauty of this is that we can still share our code, but we don’t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it’s ready for merge. Have a blog post in a Jekyll site that you’re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it’s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain. All this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. What you do with them is entirely up to you!",2013,Harry Roberts,harryroberts,2013-12-09T00:00:00+00:00,https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/,code 30,"Making Sites More Responsive, Responsibly","With digital projects we’re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations. A product’s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities. The popularisation of responsive web design is a great example of how we are able to shape the web’s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn’t know or care about the difference in these approaches, but we did. Responsive design was similar in this respect – momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.  We tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text. The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens. When we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. We can make our experiences more responsive to people’s needs, enhancing their usability and accessibility along the way. Being responsibly responsive Of course, when we think about being more responsive, there’s a fine line between creating useful functionality and becoming intrusive and overly complex. In the excellent Responsible Responsive Design, Scott Jehl states that: A responsible responsive design equally considers the following throughout a project: Usability: The way a website’s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions. 
Access: The ability for users of all devices, browsers, and assistive technologies to access and understand a site’s features and content. Sustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future. Performance: The speed at which a site’s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface. Scott’s book covers these ideas in a lot more detail than I’ll be able to here (put it on your Christmas list if it’s not there already), but for now let’s think a bit more about our roles as digital creators and the power this gives us. Our choices around technology and the decisions we have to make can be extremely wide-ranging. Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies. The power of the web We all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions. You’ll have seen articles with flashy titles such as “Top 5 JavaScript APIs You’ve Never Heard Of!”, which you’ll probably read, think “That’s quite cool”, yet never use in any real work. There is great potential for technologies like these to be misused, but there are also great prospects for them to be used well to enhance experiences. Let’s have a look at a few examples you may not have considered. Offline first When we make websites, many of us follow a process which involves user stories – standardised snippets of context explaining who needs what, and why. “As a student I want to pay online for my course so I don’t have to visit the college in person.” “As a retailer I want to generate unique product codes so I can manage my stock.” We very often focus heavily on what needs doing, but may not consider carefully how it will be done. As in Scott’s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense – including under different conditions. Offline first is yet another ‘first’ methodology (my personal favourite being ‘tea first’), which encourages us to develop so that connectivity itself is an enhancement – letting users continue with tasks even when they’re offline. Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK’s signal-barren East Anglian wilderness as I do), then you’ll realise that connectivity isn’t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I’m sure we’re all familiar with – the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, then everyone’s likely to be offline after all. Wouldn’t it be better if we could do something like this instead? Someone visits our conference website. 
On this initial run, some assets may be cached for future use: the conference schedule, the site’s CSS, photos of the speakers. When the attendee revisits the site on the day, the page shell loads up from the cache. If we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use. If we don’t have something cached already, then we can try grabbing it online. If for any reason our requests for new content fail (we’re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is cached. There are a number of ways we can make something like this, including using the application cache (AppCache) if you’re that way inclined. However, you may want to look into service workers instead. There are also some great resources on Offline First! if you’d like to find out more about this. Building in offline functionality isn’t necessarily about starting offline first, and it’s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we’d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. Thinking in this way can enhance each point in Scott’s criteria. As I mentioned, this isn’t necessarily the kind of development choice that our clients will ask us for, but it’s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation. Even more accessible We’ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? Can we make our sites even more usable and accessible through our development choices? Again using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen. Input Ambient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. It’s not new – many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels. If our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in units using ambient light events, in JavaScript. 
We may then change our presentation based on different bandings, perhaps like this: window.addEventListener('devicelight', function(e) { var lux = e.value; if (lux < 50) { //Change things for dim light } if (lux >= 50 && lux <= 10000) { //Change things for normal light } if (lux > 10000) { //Change things for bright light } }); Live demo (requires light sensor and supported browser). Soon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it’ll probably look something like this: @media (light-level: dim) { /*Change things for dim light*/ } @media (light-level: normal) { /*Change things for normal light*/ } @media (light-level: washed) { /*Change things for bright light*/ } While we may be quick to dismiss this kind of detection as being a gimmick, it’s important to consider that apps such as Light Detector, listed on Apple’s accessibility page, provide important context around exactly this functionality. “If you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is. You can find out whether the shades are drawn by moving the device up and down.” everywaretechnologies.com/apps/lightdetector Input can be about so much more than what we enter through keyboards. Both an ever increasing amount of available sensors and more APIs being supported by the major browsers will allow us to cater for more scenarios and respond to them accordingly. This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the web speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft. Output Web technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users. When we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we’re seeing and hearing. Haptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here’s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed by pauses in milliseconds. navigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]); Live demo (requires supported browser) With great power… What you’ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. 
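One way to treat these capabilities as enhancements is simply to feature-detect before use. A minimal sketch for the vibration example above:

//only attempt haptic feedback where the API exists
if ('vibrate' in navigator) {
  //the SOS pattern from earlier
  navigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]);
} else {
  //fall back to a visual or audible cue instead
}

Browsers without support skip the block entirely, and nobody gets a broken experience.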
This idea isn’t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kind of approaches within our projects – if we don’t root for them, they probably won’t happen. This is where our professional responsibility comes in. We won’t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we’ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let’s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations. When you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible.",2014,Sally Jenkinson,sallyjenkinson,2014-12-10T00:00:00+00:00,https://24ways.org/2014/making-sites-more-responsive-responsibly/,code 31,Dealing with Emergencies in Git,"The stockings were hung by the chimney with care, In hopes that version control soon would be there. This summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I’ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information. Let’s start by painting an updated version of Clement Clarke Moore’s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen. Perhaps a few of these images are familiar, or maybe they’re just settings you’ve seen in a movie. It doesn’t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: ‘What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?’ It’s okay if the map isn’t perfect. As you refine your understanding of a new topic, you’ll outgrow the initial metaphors, analogies, and comparisons. With apologies to Mr. Moore, let’s give it a try. Getting Interrupted in Git When on the roof there arose such a clatter! You’re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you’ve been working on is going to need to wait while you investigate the commotion. If you’ve got even a little bit of experience working with Git, you know that you cannot simply change what you’re working on in times of emergency. If you’ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state. 
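If you’ve ever tried, you’ll have seen Git stop you in your tracks with a message along these lines (the exact wording varies between versions, and the branch and file names here are just stand-ins):

$ git checkout another-branch
error: Your local changes to the following files would be overwritten by checkout:
	mince-pies.txt
Please commit your changes or stash them before you switch branches.
Aborting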
Up to this point, you’ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of ‘switching branches for a sec’. This isn’t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren’t finished with what you were working on. You don’t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren’t putting your book away though, you’re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down. Let’s say you’ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof: $ git stash After running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work. With the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it’s not reindeer after all, but just your boss who thought they’d help out by writing some code on the project you’ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time. You fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy: $ git fetch $ git branch -r $ git checkout -b helpful-boss-branch origin/helpful-boss-branch You are now in a local copy of the branch where you are free to look around, and figure out exactly what’s going on. You sigh audibly and say, ‘Okay. Tell me what was happening when you first realised you’d gotten into a mess’ as you look through the log messages for the branch. $ git log --oneline $ git log By using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof. You may also want to compare the work your boss has done to the main branch for your project. For this article, we’ll assume the main branch is named master. $ git diff master Looking through the commits, you may be able to see that things started out okay but then took a turn for the worse. Checking out a single commit Using commands you’re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch. Using the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let’s say the unique identifier you want to checkout is 25f6d7f. $ git checkout 25f6d7f Note: checking out '25f6d7f'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again.
Example: $ git checkout -b new_branch_name HEAD is now at 25f6d7f... Removed first paragraph. This is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic. Take a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you’ve temporarily disconnected from a known chain of events. In other words, you’re currently looking at the middle of a story (or branch) about what happened – and you’re not at the endpoint for this particular story. Git allows you to view the history of your repository as a timeline (technically it’s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. If you make commits while you’re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work. $ git checkout master Warning: you are leaving 1 commit behind, not connected to any of your branches: 7a85788 Your witty holiday commit message. If you want to keep them by creating a new branch, this may be a good time to do so with: $ git branch new_branch_name 7a85788 Switched to branch 'master' Your branch is up-to-date with 'origin/master'. So, if you want to save the commits you’ve made while in a detached HEAD state, you simply need to put them on a new branch. $ git branch saved-headless-commits 7a85788 With this trick under your belt, you can jingle around in history as much as you’d like. It’s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you’ll need to move back to the branch tip by checking out the branch again. $ git checkout helpful-boss-branch You’re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you’ve followed the instructions Git gave you. That wasn’t so scary after all, now, was it? Back to our reindeer problem. If your boss is anything like the bosses I’ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you’ll want to capture the good work using one of several different methods. Back in the living room, we’ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work. Saving just one commit About that bowl of nuts. If you’re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we’re just going to pick out one nut from the bowl. In Git terms, we’re going to cherry-pick a commit and save it to another branch. First, checkout the main branch for your development work. From this branch, create a new branch where you can copy the changes into. $ git checkout master $ git checkout -b rescue-the-boss From your boss’s branch, helpful-boss-branch locate the commit you want to keep. $ git log --oneline helpful-boss-branch Let’s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch. 
$ git cherry-pick e08740b If you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss’s branch. At this point you might need to make a few additional fixes to help your boss out. (You’re angling for a bonus out of all this. Go the extra mile.) Once you’ve made your additional changes, you’ll need to add that work to the branch as well. $ git add [filename(s)] $ git commit -m ""Building on boss's work to improve feature X."" Go ahead and test everything, and make sure it’s perfect. You don’t want to introduce your own mistakes during the rescue mission! Uploading the fixed branch The next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping you fix their branch. $ git push -u origin rescue-the-boss Cleaning up and getting back to work With your boss rescued, and your bonus secured, you can now delete the local temporary branches. $ git branch --delete rescue-the-boss $ git branch --delete helpful-boss-branch And settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat. $ git checkout waiting-for-st-nicholas $ git stash pop Your working directory has been returned to exactly the same state you were in at the beginning of the article. Having fun with analogies I’ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it’s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it’s like trying to find that one broken light on the string of Christmas lights). It doesn’t matter if the analogy isn’t perfect. It’s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner’s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works. Or, if you’re like me, you can choose to never grow old and just keep mucking about in the analogies. I’d argue it’s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.",2014,Emma Jane Westby,emmajanewestby,2014-12-02T00:00:00+00:00,https://24ways.org/2014/dealing-with-emergencies-in-git/,code 36,Naming Things,"There are only two hard things in computer science: cache invalidation and naming things. Phil Karlton Being a professional web developer means taking responsibility for the code you write and ensuring it is comprehensible to others. Having a documented code style is one means of achieving this, although the size and type of project you’re working on will dictate the conventions used and how rigorously they are enforced. Working in-house may mean working with multiple developers, perhaps in distributed teams, who are all committing changes – possibly to a significant codebase – at the same time. Left unchecked, this codebase can become unwieldy. Coding conventions ensure everyone can contribute, and help build a product that works as a coherent whole. Even on smaller projects, perhaps working within an agency or by yourself, at some point the resulting product will need to be handed over to a third party. It’s sensible, therefore, to ensure that your code can be understood by those who’ll eventually take ownership of it. 
Put simply, code is read more often than it is written or changed. A consistent and predictable naming scheme can make code easier for other developers to understand, improve and maintain, presumably leaving them free to worry about cache invalidation. Let’s talk about semantics Names not only allow us to identify objects, but they can also help us describe the objects being identified. Semantics (the meaning or interpretation of words) is the cornerstone of standards-based web development. Using appropriate HTML elements allows us to create documents and applications that have implicit structural meaning. Thanks to HTML5, the vocabulary we can choose from has grown even larger. HTML elements provide one level of meaning: a widely accepted description of a document’s underlying structure. It’s only with the mutual agreement of browser vendors and developers that
<p>
indicates a paragraph. Yet (with the exception of widely accepted microdata and microformat schemas) only HTML elements convey any meaning that can be parsed consistently by user agents. While using semantic values for class names is a noble endeavour, they provide no additional information to the visitor of a website; take them away and a document will have exactly the same semantic value. I didn’t always think this was the case, but the real world has a habit of changing your opinion. Much of my thinking around semantics has been informed by the writing of my peers. In “About HTML semantics and front-end architecture”, Nicholas Gallagher wrote: The important thing for class name semantics in non-trivial applications is that they be driven by pragmatism and best serve their primary purpose – providing meaningful, flexible, and reusable presentational/behavioural hooks for developers to use. These thoughts are echoed by Harry Roberts in his CSS Guidelines: The debate surrounding semantics has raged for years, but it is important that we adopt a more pragmatic, sensible approach to naming things in order to work more efficiently and effectively. Instead of focussing on ‘semantics’, look more closely at sensibility and longevity – choose names based on ease of maintenance, not for their perceived meaning. Naming methodologies Front-end development has undergone a revolution in recent years. As the projects we’ve worked on have grown larger and more important, our development practices have matured. The pros and cons of object-orientated approaches to CSS can be endlessly debated, yet their introduction has highlighted the usefulness of having documented naming schemes. Jonathan Snook’s SMACSS (Scalable and Modular Architecture for CSS) collects style rules into five categories: base, layout, module, state and theme. This grouping makes it clear what each rule does, and is aided by a naming convention: By separating rules into the five categories, naming convention is beneficial for immediately understanding which category a particular style belongs to and its role within the overall scope of the page. On large projects, it is more likely to have styles broken up across multiple files. In these cases, naming convention also makes it easier to find which file a style belongs to. I like to use a prefix to differentiate between layout, state and module rules. For layout, I use l- but layout- would work just as well. Using prefixes like grid- also provide enough clarity to separate layout styles from other styles. For state rules, I like is- as in is-hidden or is-collapsed. This helps describe things in a very readable way. SMACSS is more a set of suggestions than a rigid framework, so its ideas can be incorporated into your own practice. Nicholas Gallagher’s SUIT CSS project is far more strict in its naming conventions: SUIT CSS relies on structured class names and meaningful hyphens (i.e., not using hyphens merely to separate words). This helps to work around the current limits of applying CSS to the DOM (i.e., the lack of style encapsulation), and to better communicate the relationships between classes. Over the last year, I’ve favoured a BEM-inspired approach to CSS. BEM stands for block, element, modifier, which describes the three types of rule that contribute to the style of a single component. This means that, given the following markup:
<div class=""sleigh""> <p class=""sleigh__reindeer sleigh__reindeer--famous"">…</p> </div>
I know that: .sleigh is a containing block or component. .sleigh__reindeer is used only as a descendant element of .sleigh. .sleigh__reindeer--famous is used only as a modifier of .sleigh__reindeer. With this naming scheme in place, I know which styles relate to a particular component, and which are shared. Beyond reducing specificity-related head-scratching, this approach has given me a framework within which I can consistently label items, and has sped up my workflow considerably. Each of these methodologies shows that any robust CSS naming convention will have clear rules around case (lowercase, camelCase, PascalCase) and the use of special (allowed) characters like hyphens and underscores. What makes for a good name? Regardless of higher-level conventions, there’s no getting away from the fact that, at some point, we’re still going to have to name things. Recognising that classes should be named with other developers in mind, what makes for a good name? Understandable The most important aspect is for a name to be understandable. Words used in your project may come from a variety of sources: some may be widely understood, and others only be recognised by people working within a particular environment. Culture Most words you’ll choose will have common currency outside the world of web development, although they may have a particular interpretation among developers (think menu, list, input). However, words may have a narrower cultural significance; for example, in Germany and other German-speaking countries, impressum is the term used for legally mandated statements of ownership. Industry Industries often use specific terms to describe common business practices and concepts. Publishing has a number of these (headline, standfirst, masthead, colophon…), all of which have well understood meanings – and not all of them are relevant to online usage. Organisation Companies may have internal names (or nicknames) for their products and services. The Guardian is rife with such names: bisons (and buffalos), pixies (and super-pixies), bentos (and mini-bentos)… all of which mean something very different outside the organisation. Although such names can be useful inside smaller teams, in larger organisations they can become a barrier to entry, a sort of secret code used among employees who have been around long enough to know what they mean. Product Your team will undoubtedly have created names for specific features or interface components used in your product. For example, at Clearleft we coined the term gravigation for a navigation bar that was pinned to the bottom of the viewport. Elements of a visual design language may have names, too. Transport for London’s bar and circle logo is known internally as the roundel, while Nike’s logo is called the swoosh. Branding agencies often christen colours within a brand palette, too, either to evoke aspects of the identity or to indicate intended usage. Once you recognise the origin of the words you use, you’ll be better able to judge their appropriateness. Using Latin words for class names may satisfy a need to use semantic-sounding terms but, unless you work in a company whose employees have a basic grasp of Latin, a degree of translation will be required. Military ranks might be a clever way of declaring sizes without implying actual values, but I’d venture most people outside the armed forces don’t know how they’re ordered. Obvious Quite often, the first name that comes into your head will be the best option.
Names that obliquely reference the function of a class (e.g. receptacle instead of container, kevlar instead of no-bullets) only serve to add an additional layer of abstraction. Don’t overthink it! One way of knowing if the names you use are well understood is to look at what similar concepts are called in existing vocabularies. schema.org, Dublin Core and the BBC’s ontologies are all useful sources for object names. Functional While we’ve learned to avoid using presentational classes, there remains a tension between naming things based on their content, and naming them for their intended presentation or behaviour (which may change at different breakpoints). Rather than think about a component’s appearance or behaviour, instead look to its function, its purpose. To clarify, ask what a component’s function is, and not how the component functions. For example, the Guardian’s internal content system uses the following names for different types of image placement: supporting, showcase and thumbnail, with inline being the default. These options make no promise of the resulting position on a webpage (or smartphone app, or television screen…), but do suggest intended use, and therefore imply the likely presentation. Consistent Being consistent in your approach to names will allow for easier naming of successive components, and extending the vocabulary when necessary. For example, a predictably named hierarchy might use names like primary and secondary. Should another level need to be added, tertiary is clearly preferred over third. Appropriate Your project will feature a mix of style rules. Some will perform utility functions (clearing floats, removing bullets from a list, resetting margins), while others will perform specific functions used only once or twice in a project. Names should reflect this. For commonly used classes, be generic; for unique components be more specific. It’s also worth remembering that you can use multiple classes on an element, so combining both generic and specific can give you a powerful modular design system: Generic: list Specific: naughty-children Combined: naughty-children list If following the BEM methodology, you might use the following classes: Generic: list Specific: list--nice-children Combined: list list--nice-children Extensible Good naming schemes can be extended. One way of achieving this is to use namespaces, which are basically a way of grouping related names under a higher-level term. Microformats are a good example of a well-designed naming scheme, with many of their vocabularies taking property names from existing and related specifications (e.g. hCard is a 1:1 representation of vCard). Microformats 2 goes one step further by grouping properties under several namespaces: h-* for root class names (e.g. h-card) p-* for simple (text) properties (e.g. p-name) u-* for URL properties (e.g. u-photo) dt-* for date/time properties (e.g. dt-bday) e-* for embedded markup properties (e.g. e-note) The inclusion of namespaces is a massive improvement over the earlier specification, but the downside is that microformats now occupy five separate namespaces. This might be problematic if you are using u-* for your utility classes. While nothing will break, your naming system won’t be as robust, so plan accordingly. (Note: Microformats perform a very specific function, separate from any presentational concerns.
It’s therefore considered best practice to not use microformat classes as styling hooks, but instead use additional classes that relate to the function of the component and adhere to your own naming conventions.) Short Names should be as long as required, but no longer. When looking for words to describe a particular function, I try to look for single words where possible. Avoid abbreviations unless they are understood within the contexts described above. rrp is fine if labelling a recommended retail price in an online shop, but not very helpful if used to mean ragged-right paragraph, for example. Fun! Finally, names can be an opportunity to have some fun! Names can give character to a project, be it by providing an outlet for in-jokes or adding little easter eggs for those inclined to look. The copyright statement on Apple’s website has long been named sosumi, a word that has a nice little history inside Apple. Until recently, the hamburger menu icon on the Guardian website was labelled honest-burger, after the developer’s favourite burger restaurant. A few thoughts on preprocessors CSS preprocessors have solved a lot of problems, but they have an unfortunate downside: they require you to name yet more things! Whereas we needed to worry only about style rules, now we need names for variables, mixins, functions… oh my! A second article could be written about naming these, so for now I’ll offer just a few thoughts. The first is to note that preprocessors make it easier to change things, as they allow for DRYer code. So while the names of variables are important (and the advice in this article still very much applies), you can afford to relax a little. Looking to name colour variables? If possible, find out if colours have been assigned names in a brand palette. If not, use obvious names (based on appearance or function, depending on your preference) and adapt as the palette grows. If it becomes difficult to name colours that are too similar, I’d venture that the problem lies with the design rather than the naming scheme. The same is true for responsive breakpoints. Preprocessors allow you to move awkward naming conventions out of the markup and into the CSS. Although terms like mobile, tablet and desktop are not desirable given the need to think about device-agnostic design, if these terms are widely understood within a product team and among stakeholders, using them will ensure everyone is using the same language (they can always be changed later). It still feels like we’re at the very beginning of understanding how preprocessors fit into a development workflow, if at all! I suspect over the next few years, best practices will emerge for all of these considerations. In the meantime, use your brain! Even with sensible rules and conventions in place, naming things can remain difficult, but hopefully I’ve made this exercise a little less painful. Christmas is a time of giving, so to the developer reading your code in a year’s time, why not make your gift one of clearer class names.",2014,Paul Lloyd,paulrobertlloyd,2014-12-21T00:00:00+00:00,https://24ways.org/2014/naming-things/,code 37,JavaScript Modules the ES6 Way,"JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. 
This is something nearly all other languages come with out of the box, whether it be Ruby’s require, Python’s import, or any other language you’re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let’s be clear: that doesn’t mean the new module system in the upcoming version of JavaScript won’t be useful to you if you’re building smaller websites rather than the next Instagram. Thankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won’t be landing in browsers for a while yet – but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I’d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It’s much simpler than you might think. What is ES6? ECMAScript is a scripting language that is standardised by a company called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It’s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. You shouldn’t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet. The ES6 module spec The ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I’ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it’s what I’ll focus on more. Module syntax The key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we’ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it. Another important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There’s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous.
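To make those two ideas concrete before digging into the syntax (which the next sections explain in full), here is a minimal sketch of such a carousel module. The file name and function are hypothetical, and it assumes jQuery can be imported as a module:
// carousel.js
// jQuery is a dependency: we import it at the top of the module.
import $ from 'jquery';
// createCarousel is this module's export: anything that imports
// this file gets access to it.
export function createCarousel(selector) {
  return $(selector).addClass('carousel');
}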
Named exports Modules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword: export function double(x) { return x + x; }; You can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces. var double = function(x) { return x + x; } export { double }; A module can then import the double function like so: import { double } from 'mymodule'; double(2); // 4 Again, curly braces are required around the variable you would like to import. It’s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension. The reason for those extra braces is that this syntax lets you export multiple variables: var double = function(x) { return x + x; } var square = function(x) { return x * x; } export { double, square } I personally prefer this syntax over the export function …, but only because it makes it much clearer to me what the module exports. Typically I will have my export {…} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting. A file importing both double and square can do so in just the way you’d expect: import { double, square } from 'mymodule'; double(2); // 4 square(3); // 9 With this approach you can’t easily import an entire module and all its methods. This is by design – it’s much better and you’re encouraged to import just the functions you need to use. Default exports Along with named exports, the system also lets a module have a default export. This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so: export default function(x) { return x + x; } And that can be imported: import double from 'mymodule'; double(2); // 4 This time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you’d like. Default exports are not named, so you can import them as anything you like: import christmas from 'mymodule'; christmas(2); // 4 The above is entirely valid. Although it’s not something that is used too often, a module can have both named exports and a default export, if you wish. One of the design goals of the ES6 modules spec was to favour default exports. There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that’s fine, and you shouldn’t change that to meet the preferences of those designing the spec. Programmatic API Along with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It’s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user’s browser didn’t support a feature your app relied on. An example of doing this is: if(someFeatureNotSupported) { System.import('my-polyfill').then(function(myPolyFill) { // use the module from here }); } System.import will return a promise, which, if you’re not familiar, you can read about in this excellent article on HTML5 Rocks by Jake Archibald.
A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import) is complete. This programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task. How to use it today It’s all well and good having this new syntax, but right now it won’t work in any browser – and it’s not likely to for a long time. Maybe in next year’s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we’re stuck with a bit of extra work. ES6 module transpiler One solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it – not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). There are also plugins for all the popular build tools, including Grunt and Gulp. The advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should have that set up already. If you don’t have any system in place for handling modules in the browser, using the transpiler doesn’t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn’t do anything to help you get that code running in the browser, but if you have that part sorted it’s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler. SystemJS Another solution is SystemJS. It’s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser). To load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill; and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn’t have an easy-to-find downloadable version. The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 compiler. It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated.
Additionally, if you use SystemJS, the Traceur compilation is done automatically for you. All you need to do to get SystemJS running is to add a script element to load SystemJS (plus the module loader polyfill, if it is a separate file in your set-up), followed by a short inline script that imports your main file, along these lines (the exact paths will depend on where you’ve put the libraries): <script src='system.js'></script> <script> System.import('./app'); </script> When you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn’t make a lot of requests on the live site. In development though, it makes for a really nice workflow. When working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application’s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file. SystemJS also provides a workflow for bundling your application together into one file. Conclusion ES6 modules may be at least six months to a year away (if not more) but that doesn’t mean they can’t be used today. Although there is an overhead to using them now – with the work required to set up SystemJS, the module transpiler, or another solution – that doesn’t mean it’s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You’ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution. If you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. Investing time in learning it now will pay off hugely further down the road. Further reading If you’d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources: ECMAScript 6 modules: the final syntax by Axel Rauschmayer Practical Workflows for ES6 Modules by Guy Bedford ECMAScript 6 resources for the curious JavaScripter by Addy Osmani Tracking ES6 support by Addy Osmani ES6 Tools List by Addy Osmani Using Grunt and the ES6 Module Transpiler by Thomas Boyt JavaScript Modules and Dependencies with jspm by myself Using ES6 Modules Today by Guy Bedford",2014,Jack Franklin,jackfranklin,2014-12-03T00:00:00+00:00,https://24ways.org/2014/javascript-modules-the-es6-way/,code 38,"Websites of Christmas Past, Present and Future","The websites of Christmas past The first website was created at CERN. It was launched on 20 December 1990 (just in time for Christmas!), and it still works today, after twenty-four years. Isn’t that incredible?! Why does this website still work after all this time? I can think of a few reasons. First, the authors of this document chose HTML. Of course they couldn’t have known back then the extent to which we would be creating documents in HTML, but HTML always had a lot going for it. It’s built on top of plain text, which means it can be opened in any text editor, and it’s pretty readable, even without any parsing.
Despite the fact that HTML has changed quite a lot over the past twenty-four years, extensions to the specification have always been implemented in a backwards-compatible manner. Reading through the 1992 W3C document HTML Tags, you’ll see just how it has evolved. We still have h1 – h6 elements, but I’d not heard of the element before. Despite being deprecated since HTML2, it still works in several browsers. You can see it in action on my website. As well as being written in HTML, there is no run-time compilation of code; the first website simply consists of HTML files transmitted over the web. Due to its lack of complexity, it stood a good chance of surviving in the turbulent World Wide Web. That’s all well and good for a simple, static website. But websites created today are increasingly interactive. Many require a login and provide experiences that are tailored to the individual user. This type of dynamic website requires code to be executed somewhere. Traditionally, dynamic websites would execute such code on the server, and transmit a simple HTML file to the user. As far as the browser was concerned, this wasn’t much different from the first website, as the additional complexity all happened before the document was sent to the browser. Doing it all in the browser In 2003, the first single page interface was created at slashdotslash.com. A single page interface or single page app is a website where the page is created in the browser via JavaScript. The benefit of this technique is that, after the initial page load, subsequent interactions can happen instantly, or very quickly, as they all happen in the browser. When software runs on the client rather than the server, it is often referred to as a fat client. This means that the bulk of the processing happens on the client rather than the server (which can now be thin). A fat client is preferred over a thin client because: It takes some processing requirements away from the server, thereby reducing the cost of servers (a thin server requires cheaper, or fewer servers). They can often continue working offline, provided no server communication is required to complete tasks after initial load. The latency of internet communications is bypassed after initial load, as interactions can appear near instantaneous when compared to waiting for a response from the server. But there are also some big downsides, and these are often overlooked: They can’t work without JavaScript. Obviously JavaScript is a requirement for any client-side code execution. And as the UK Government Digital Service discovered, 1.1% of their visitors did not receive JavaScript enhancements. Of that 1.1%, 81% had JavaScript enabled, but their browsers failed to execute it (possibly due to dropping the internet connection). If you care about 1.1% of your visitors, you should care about the non-JavaScript experience for your website. The browser needs to do all the processing. This means that the hardware it runs on needs to be fast. It also means that we require all clients to have largely the same capabilities and browser APIs. The initial payload is often much larger, and nothing will be rendered for the user until this payload has been fully downloaded and executed. If the connection drops at any point, or the code fails to execute owing to a bug, we’re left with the non-JavaScript experience. They are not easily indexed as every crawler now needs to run JavaScript just to receive the content of the website. These are not merely edge case issues to shirk off. 
The first three issues will affect some of your visitors; the fourth affects everyone, including you. What problem are we trying to solve? So what can be done to address these issues? Whereas fat clients solve some inherent issues with the web, they seem to create as many problems. When attempting to resolve any issue, it’s always good to try to uncover the original problem and work forwards from there. One of the best ways to frame a problem is as a user story. A user story considers the who, what and why of a need. Here’s a template: As a {who} I want {what} so that {why} I haven’t got a specific project in mind, so let’s refer to the who as user. Here’s one that could explain the use of thick clients. As a user I want the site to respond to my actions quickly so that I get immediate feedback when I do something. This user story could probably apply to a great number of websites, but so could this: As a user I want to get to the content quickly, so that I don’t have to wait too long to find out what the site is all about or get the content I need. A better solution How can we balance both these user needs? How can we have a website that loads fast, and also reacts fast? The solution is to have a thick server, that serves the complete document, and then a thick client, that manages subsequent actions and replaces parts of the page. What we’re talking about here is simply progressive enhancement, but from the user’s perspective. The initial payload contains the entire document. At this point, all interactions would happen in a traditional way using links or form elements. Then, once we’ve downloaded the JavaScript (asynchronously, after load) we can enhance the experience with JavaScript interactions. If for whatever reason our JavaScript fails to download or execute, it’s no biggie – we’ve already got a fully functioning website. If an API that we need isn’t available in this browser, it’s not a problem. We just fall back to the basic experience. This second point, of having some minimum requirement for an enhanced experience, is often referred to as cutting the mustard, first used in this sense by the BBC News team. Essentially it’s an if statement like this: if('querySelector' in document && 'localStorage' in window && 'addEventListener' in window) { // bootstrap the JavaScript application } This code states that the browser must support the following methods before downloading and executing the JavaScript: document.querySelector (can it find elements by CSS selectors) window.localStorage (can it store strings) window.addEventListener (can it bind to events in a standards-compliant way) These three properties are what the BBC News team decided to test for, as they are present in their website’s JavaScript. Each website will have its own requirements. The last method, window.addEventListener, is an interesting one. Although it’s simple to bind to events on IE8 and earlier, these browsers have very inconsistent support for standards. Making any JavaScript-heavy website work on IE8 and earlier is a painful exercise, and comes at a cost to all users on other browsers, as they’ll download unnecessary code to patch support for IE. JavaScript API support by browser. I discovered that IE8 supports 12% of the current JavaScript APIs, while IE9 supports 16%, and IE10 51%. It seems, then, that IE10 could be the earliest version of IE that I’d like to develop JavaScript for. That doesn’t mean that users on browsers earlier than 10 can’t use the website.
On the contrary, they get the core experience, and because it’s just HTML and CSS, it’s much more likely to be bug-free, and could even provide a better experience than trying to run JavaScript in their browser. They receive the thin client experience. By reducing the number of platforms that our enhanced JavaScript version supports, we can better focus our efforts on those platforms and offer an even greater experience to those users. But we can only do that if we use progressive enhancement. Otherwise our website would be completely broken for all other users. So what we have is a thick server, capable of serving the entire website to our users, complete with all core functionality needed for our users to complete their tasks; and we have a thick client on supported browsers, which can bring an even greater experience to those users. This is all transparent to users. They may notice that the website seems snappier on the new iPhone they received for Christmas than on the Windows 7 machine they got five years ago, but then they probably expected it to be faster on their iPhone anyway. Isn’t this just more work? It’s true that making a thick server and a thick client is more work than just making one or the other. But there are some big advantages: The website works for everyone. You can decide when users get the enhanced experience. You can enhance features in an iterative (or agile) manner. When the website breaks, it doesn’t break down. The more you practise this approach, the quicker you will become. The websites of Christmas present The best way to discover websites using this technique of progressive enhancement is to disable JavaScript and see if the website breaks. I use the Web Developer extension, which is available for Chrome and Firefox. It lets me quickly disable JavaScript. Web Developer extension. 24 ways works with and without JavaScript. Try using the menu icon to view the navigation. Without JavaScript, it’s a jump link to the bottom of the page, but with JavaScript, the menu slides in from the right. 24 ways navigation with JavaScript disabled. 24 ways navigation with working JavaScript. Google search will also work without JavaScript. You won’t get instant search results or any prerendering, because those are enhancements. For a more app-like example, try using Twitter. Without JavaScript, it still works, and looks nearly identical. But when you load JavaScript, links open in modal windows and all pages are navigated much quicker, as only the content that has changed is loaded. You can read about how they achieved this in Twitter’s blog posts Improving performance on twitter.com and Implementing pushState for twitter.com. Unfortunately Facebook doesn’t use progressive enhancement, which not only means that the website doesn’t work without JavaScript, but it takes longer to load. I tested it on WebPagetest and if you compare the load times of Twitter and Facebook, you’ll notice that, despite putting similar content on the page, Facebook takes two and a half times longer to render the core content on the page. Facebook takes two and a half times longer to load than Twitter. Websites of Christmas yet to come Every project is different, and making a website that enjoys a long life, or serves a larger number of users may or may not be a high priority. 
But I hope I’ve convinced you that it certainly is possible to look to the past and future simultaneously, and that there can be significant advantages to doing so.",2014,Josh Emerson,joshemerson,2014-12-08T00:00:00+00:00,https://24ways.org/2014/websites-of-christmas-past-present-and-future/,code 42,An Overview of SVG Sprite Creation Techniques,"SVG can be used as an icon system to replace icon fonts. The reasons why SVG makes for a superior icon system are numerous, but we won’t be going over them in this article. If you don’t use SVG icons and are interested in knowing why you may want to use them, I recommend you check out “Inline SVG vs Icon Fonts” by Chris Coyier – it covers the most important aspects of both systems and compares them with each other to help you make a better decision about which system to choose. Once you’ve made the decision to use SVG instead of icon fonts, you’ll need to think of the best way to optimise the delivery of your icons, and ways to make the creation and use of icons faster. Just like bitmaps, we can create image sprites with SVG – they don’t look or work exactly alike, but the basic concept is pretty much the same. There are several ways to create SVG sprites, and this article will give you an overview of three of them. While we’re at it, we’re going to take a look at some of the available tools used to automate sprite creation and fallback for us. Prerequisites The content of this article assumes you are familiar with SVG. If you’ve never worked with SVG before, you may want to look at some of the introductory tutorials covering SVG syntax, structure and embedding techniques. I recommend the following: SVG basics: Using SVG. Structure: Structuring, Grouping, and Referencing in SVG — The <g>, <use>, <defs> and <symbol> Elements. We’ll mention <use> and <symbol> quite a bit in this article. Embedding techniques: Styling and Animating SVGs with CSS. The article covers several topics, but the section linked focuses on embedding techniques. A compendium of SVG resources compiled by Chris Coyier — contains resources to almost every aspect of SVG you might be interested in. And if you’re completely new to the concept of spriting, Chris Coyier’s CSS Sprites explains all about them. Another important SVG feature is the viewBox attribute. For some of the techniques, knowing your way around this attribute is not required, but it’s definitely more useful if you understand – even if just vaguely – how it works. The last technique mentioned in the article requires that you do know the attribute’s syntax and how to use it. To learn all about viewBox, you can refer to my blog post about SVG coordinate systems. With the prerequisites in place, let’s move on to spriting SVGs! Before you sprite… In order to create an SVG sprite with your icons, you’ll of course need to have these icons ready for use. Some spriting tools require that you place your icons in a folder to which a certain spriting process is to be applied. As such, for all of the upcoming sections we’ll work on the assumption that our SVG icons are placed in a folder named SVG. Each icon is an individual .svg file. You’ll need to make sure each icon is well-prepared and optimised for use – make sure you’ve cleaned up the code by running it through one of the optimisation tools or processes available (or doing it manually if it’s not tedious). After prepping the icon files and placing them in a folder, we’re ready to create our SVG sprite. 
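For example, if you use SVGO, one popular optimisation tool, the whole folder can be cleaned up in one go. This is just a sketch, assuming the svgo command line tool has been installed from npm:
svgo -f SVG
Each .svg file in the SVG folder is optimised in place, ready for spriting.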
HTML inline SVG sprites Since SVG is XML code, it can be embedded inline in an HTML document as a code island using the <svg> element. Chris Coyier wrote about this technique first on CSS-Tricks. The embedded SVG will serve as a container for our icons and is going to be the actual sprite we’re going to use. So we’ll start by including the SVG in our document. <!DOCTYPE html> <!-- HTML document stuff --> <svg style=""display:none;""> <!-- icons here --> </svg> <!-- other document stuff --> </html> Next, we’re going to place the icons inside the <svg>. Each icon will be wrapped in a <symbol> element we can then reference and use elsewhere in the page using the SVG <use> element. The <symbol> element has many benefits, and we’re using it because it allows us to define a symbol (which is a convenient markup for an icon) without rendering that symbol on the screen. The elements defined inside <symbol> will only be rendered when they are referenced – or called – by the <use> element. Moreover, <symbol> can have its own viewBox attribute, which makes it possible to control the positioning of its content inside its container at any time. Before we move on, I’d like to shed some light on the style=""display:none;"" part of the snippet above. Without setting the display of the SVG to none, and even though its contents are not rendered on the page, the SVG will still take up space in the page, resulting in a big empty area. In order to avoid that, we’re hiding the SVG entirely with CSS. Now, suppose we have a Twitter icon in the icons folder. twitter.svg might look something like this: <!-- twitter.svg --> <?xml version=""1.0"" encoding=""utf-8""?> <!DOCTYPE svg PUBLIC ""-//W3C//DTD SVG 1.1//EN"" ""http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd""> <svg version=""1.1"" xmlns=""http://www.w3.org/2000/svg"" xmlns:xlink=""http://www.w3.org/1999/xlink"" width=""32"" height=""32"" viewBox=""0 0 32 32""> <path d=""M32 6.076c-1.177 0.522-2.443 0.875-3.771 1.034 1.355-0.813 2.396-2.099 2.887-3.632-1.269 0.752-2.674 1.299-4.169 1.593-1.198-1.276-2.904-2.073-4.792-2.073-3.626 0-6.565 2.939-6.565 6.565 0 0.515 0.058 1.016 0.17 1.496-5.456-0.274-10.294-2.888-13.532-6.86-0.565 0.97-0.889 2.097-0.889 3.301 0 2.278 1.159 4.287 2.921 5.465-1.076-0.034-2.088-0.329-2.974-0.821-0.001 0.027-0.001 0.055-0.001 0.083 0 3.181 2.263 5.834 5.266 6.437-0.551 0.15-1.131 0.23-1.73 0.23-0.423 0-0.834-0.041-1.235-0.118 0.835 2.608 3.26 4.506 6.133 4.559-2.247 1.761-5.078 2.81-8.154 2.81-0.53 0-1.052-0.031-1.566-0.092 2.905 1.863 6.356 2.95 10.064 2.95 12.076 0 18.679-10.004 18.679-18.68 0-0.285-0.006-0.568-0.019-0.849 1.283-0.926 2.396-2.082 3.276-3.398z"" fill=""#000000""></path> </svg> We don’t need the root svg element, so we’ll strip the code and only keep the parts that make up the Twitter icon’s shape, which in this example is just the <path> element. Let’s drop that into the sprite container like so: <svg style=""display:none;""> <symbol id=""twitter-icon"" viewBox=""0 0 32 32""> <path d=""M32 6.076c-1.177 …"" fill=""#000000""></path> </symbol> <!-- remaining icons here --> <symbol id=""instagram-icon"" viewBox=""0 0 32 32""> <!-- icon contents --> </symbol> <!-- etc. --> </svg> Repeat for the other icons. The value of the <symbol> element’s viewBox attribute depends on the size of the SVG. You don’t need to know how the viewBox works to use it in this case. Its value is made up of four parts: the first two will almost always be “0 0”; the second two will be equal to the size of the icon.
For example, our Twitter icon is 32px by 32px (see twitter.svg above), so the viewBox value is “0 0 32 32”. That said, it is certainly useful to understand how the viewBox works – it can help you troubleshoot SVG sometimes and gives you better control over it, allowing you to scale, position and even crop SVGs manually without having to resort to an editor. My blog post explains all about the viewBox attribute and its related attributes. Once you have your SVG sprite ready, you can display the icons anywhere on the page by referencing them using the SVG <use> element: <svg class=""twitter-icon""> <use xlink:href=""#twitter-icon""></use> </svg> And that’s all there is to it! HTML-inline SVG sprites are simple to create and use, but when you have a lot of icons (and the more icon sets you create) it can easily become daunting if you have to manually transfer the icons into the <svg>. Fortunately, you don’t have to do that. Fabrice Weinberg created a Grunt plugin called grunt-svgstore which takes the icons in your SVG folder and generates the SVG sprites for you; all you have to do is just drop the sprites into your page and use the icons like we did earlier. This technique works in all browsers supporting SVG. There seems to be a bug in Safari on iOS which causes the icons not to show up when the SVG sprite is defined at the bottom of the document after the <use> references to the icons, so it’s safest to include the sprite before you use the icons until this bug is fixed. This technique has one disadvantage: the SVG sprite cannot be cached. We’re saving an extra HTTP request here but the browser cannot cache the image, so we aren’t speeding up any subsequent page loads by inlining the SVG. There must be a better way – and there is. Styling the icons is possible, but getting deep into the styles becomes a bit harder owing to the nature of the contents of the <use> element – these contents are cloned into a shadow DOM, and hence selecting elements in CSS the traditional way is not possible. However, some techniques to work around that do exist, and give us slightly more styling flexibility. Animations work as expected. Referencing an external SVG sprite in HTML Instead of including the SVG inline in the document, you can reference the sprite and the icons inside it externally, taking advantage of fragment identifiers to select individual icons in the sprite. For example, the above reference to the Twitter icon would look something like this instead: <svg class=""twitter-icon""> <use xlink:href=""path/to/icons.svg#twitter-icon""></use> </svg> icons.svg is the name of the SVG file that contains all of our icons as symbols, and the fragment identifier #twitter-icon is the reference to the <symbol> wrapping the Twitter icon’s contents. Very convenient, isn’t it? The browser will request the sprite and then cache it, speeding up subsequent page loads. Win! This technique also works in all browsers supporting SVG except Internet Explorer – not even IE9+ with SVG support permits this technique. No version of IE supports referencing an external SVG in <use>. Fortunately (again), Jonathan Neal has created a plugin called svg4everybody which fills this gap in IE; you can reference an external sprite in <use> and also provide fallback for browsers that do not support SVG. However, it requires you to have the fallback images (PNG or JPEG, for example) available to do so. For details, refer to the plugin’s Github repository’s readme file.
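As a rough sketch, using the plugin amounts to including the script and invoking it once. The file path and the invocation below are assumptions (they may differ between versions), so treat the plugin’s readme as the source of truth:
<script src='/js/svg4everybody.js'></script>
<script> svg4everybody(); </script>
With that in place, the external <use> references shown above work in IE, and the plugin swaps in your fallback images where SVG isn’t supported.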
CSS inline SVG sprites Another way to create an SVG sprite is by inlining the SVG icons in a style sheet using data URIs, and providing fallback for non-supporting browsers – also within the CSS. Using this approach, we’re turning the style sheet into the sprite that includes our icons. The style sheet is normally cached by the browser, so we have that concern out of the way. This technique is put into practice in Filament Group’s icon system approach, which uses their Grunticon plugin – or its sister Grumpicon web app – for generating the necessary CSS for the sprite. As such, we’re going to cover this technique by following a workflow that uses one of these tools. Again, we start with our icon SVG files. To focus on the actual spriting method and not on the tooling, I’ll go over the process of sprite creation using the Grumpicon web app, instead of the Grunticon plugin. Both tools generate the same resources that we’re going to use for the icon system. Whether you choose the web app or the Grunt set-up, after processing your SVG folder you’re going to end up with the same set of resources that we’ll be using throughout this section. The first step is to drop your icons into the Grumpicon web app. Grumpicon homepage screenshot. The application will then show you a preview of your icons, and a download button will allow you to download the generated files. These files will contain everything you need for your icon system – all that’s left is for you to drop the generated files and code into your project as recommended and you’ll have your sprite and icons ready to use anywhere you want in your page. Grumpicon generates five files and one folder in the downloaded package: a png folder containing PNG versions of your icons; three style sheets (that we’ll go over briefly); a loader script file; and preview.html which is a live example showing you the other files in action. The script in the loader goes into the <head> of your page. This script handles browser and feature detection, and requests the necessary style sheet depending on browser support for SVG and base64 data URIs. If you view the source code of the preview page, you can see exactly how the script is added. icons.data.svg.css is the style sheet that contains your icons – the sprite. The icons are embedded inline inside the style sheet using data URIs, and applied to elements of your choice as background images, using class names. For example: .twitter-icon{ background-image: url('data:image/svg+xml;…'); /* the ellipsis is where the icon’s data would go */ background-repeat: no-repeat; background-position: 50% 50%; height: 2em; width: 2em; /* etc. */ } Then, you only have to apply the twitter-icon class name to an element in your HTML to apply the icon as a background to it: <span class=""twitter-icon""></span> And that’s all you need to do to get an icon on the page. icons.data.svg.css, along with the other two style sheets and the png folder should be added to your CSS folder. icons.data.png.css is the style sheet the script will load in browsers that don’t support SVG, such as IE8. Fallback for the inline SVG is provided as a base64-encoded PNG. For instance, the fallback for the Twitter icon from our example would look like so: .twitter-icon{ background-image: url('data:image/png;base64,…'); /* etc. */ } icons.fallback.css is the style sheet required for browsers that don’t support base64-encoded PNGs – the PNG images are loaded as usual using the image’s URL. The script will load this style sheet for IE6 and IE7, for example.
.twitter-icon{ background-image: url(png/twitter-icon.png); /* etc. */ } This technique is very different from the previous one. The sprite in this case is literally the style sheet, not an SVG container, and the icon usage is very similar to that of a CSS sprite – the icons are provided as background images. This technique has advantages and disadvantages. For the sake of brevity, I won’t go into further details, but the main limitations worth mentioning are that SVGs embedded as background images cannot be styled with CSS; and animations are restricted to those defined inside the <svg> for each icon. CSS interactions (such as hover effects) don’t work either. Thus, to apply an effect for an icon that changes its color on hover, for example, you’ll need to export a set of SVGs for each colour in order for Grumpicon to create matching fallback PNG images that can then be used for the animation. For more details about the Grumpicon workflow, I recommend you check out “A Designer’s Guide to Grumpicon” on Filament Group’s website. Using SVG fragment identifiers and views This spriting technique is, again, different from the previous ones, and it is my personal favourite. SVG comes with a standard way of cropping to a specific area in a particular SVG image. If you’ve ever worked with CSS sprites before then this definitely sounds familiar: it’s almost exactly what we do with CSS sprites – the image containing all of the icons is cropped, so to speak, to show only the one icon that we want in the background positioning area of the element, using background size and positioning properties. Instead of using background properties, we’ll be using SVG’s viewBox attribute to crop our SVG to the specific icon we want. What I like about this technique is that it is more visual than the previous ones. Using this technique, the SVG sprite is treated like an actual image containing other images (the icons), instead of treating it as a piece of code containing other code. Again, our SVG icons are placed inside a main SVG container that is going to be our SVG sprite. If you’re working in a graphics editor, position or arrange your icons inside the canvas any way you want them to be, and then export the graphic as is. Of course, the less empty space there is in your SVG, the better. In our example, the sprite contains three icons as shown in the following image. The sprite is open in Sketch. Notice how the SVG is just big enough to fit the icons inside it. It doesn’t have to be like this, but it’s cleaner this way. Screenshot showing the SVG sprite containing our icons. Now, suppose you want to display only the Instagram icon. Using the SVG viewBox attribute, we can crop the SVG to the icon. The Instagram icon is positioned at 64px along the positive x-axis, and zero pixels along the y-axis. It is also 32px by 32px in size. Screenshot showing the position (offset) of the Instagram icon inside the SVG sprite, and its size. Using this information, we can specify the value of the viewBox as: 64 0 32 32. This area of the view box contains only the Instagram icon. 64 0 specifies the top-left corner of the view box area, and 32 32 specify its dimensions. Now, if we were to change the viewBox value on the SVG sprite to this value, only the Instagram icon will be visible inside the SVG viewport. Great. But how do we use this information to display the icon in our page using our sprite? SVG comes with a native way to link to portions or areas of an image using fragment identifiers. 
Fragment identifiers are used to link into a particular view area of an SVG document. Thus, using a fragment identifier and the boundaries of the area that we want (from the viewBox), we can link to that area and display it. For example, if you want to display the icon from the sprite using an <img> tag, you can reference the icon in the sprite like so: <img src='uiIcons.svg#svgView(viewBox(64, 0, 32, 32))' alt=""Instagram icon""/> The fragment identifier in the snippet above (#svgView(viewBox(64, 0, 32, 32))) is the important part. This will result in only the Instagram icon’s area of the sprite being displayed. There is also another way to do this, using the SVG <view> element. The <view> element can be used to define a view area and then reference that area somewhere else. For example, to define the view box containing the Instagram icon, we can do the following: <view id='instagram-icon' viewBox='64 0 32 32' /> Then, we can reference this view in our <img> element like this: <img src='sprite.svg#instagram-icon' alt=""Instagram icon"" /> The best part about this technique – besides the ability to reference an external SVG and hence make use of browser caching – is that it allows us to use practically any SVG embedding technique and does not restrict us to specific tags. It goes without saying that this feature can be used for more than just icon systems, owing to viewBox’s power in controlling an SVG’s viewable area. SVG fragment identifiers have decent browser support, but the technique is buggy in Safari: there is a bug that causes problems when loading a server SVG file and then using fragment identifiers with it. Bear Travis has documented the issue and a workaround. Where to go from here Pick the technique that works best for your project. Each technique has its own pros and cons, relating to convenience and maintainability, performance, and styling and scripting. Each technique also requires its own fallback mechanism. The spriting techniques mentioned here are not the only techniques available. Other methods exist, such as SVG stacks, and others may surface in future, but these are the three main ones today. The third technique using SVG’s built-in viewBox features is my favourite, and with better browser support and fewer (ideally, no) bugs, I believe it is more likely to become the standard way to create and use SVG sprites. Fallback techniques can be created, of course, in one of many possible ways. Do you use SVG for your icon system? If so, which is your favourite technique? Do you know of, or have you worked with, other ways for creating SVG sprites?",2014,Sara Soueidan,sarasoueidan,2014-12-16T00:00:00+00:00,https://24ways.org/2014/an-overview-of-svg-sprite-creation-techniques/,code 46,Responsive Enhancement,"24 ways has been going strong for ten years. That’s an aeon in internet timescales. Just think of all the changes we’ve seen in that time: the rise of Ajax, the explosion of mobile devices, the unrecognisably changed landscape of front-end tooling. Tools and technologies come and go, but one thing has remained constant for me over the past decade: progressive enhancement. Progressive enhancement isn’t a technology. It’s more like a way of thinking. Instead of thinking about the specifics of how a finished website might look, progressive enhancement encourages you to think about the fundamental meaning of what the website is providing.
So instead of thinking of a website in terms of its ideal state in a modern browser on a nice widescreen device, progressive enhancement allows you to think about the core functionality in a more abstract way. Once you’ve figured out what the core functionality is – adding an item to a shopping cart, posting a message, sharing a photo – then you can enable that functionality in the simplest possible way. That usually means starting with good old-fashioned HTML. Links and forms are often all you need. Then, once you have the core functionality working in a basic way, you can start to enhance to make a progressively better experience for more modern browsers. The advantage of working this way isn’t just that your site will work in older browsers (albeit in a rudimentary way). It also ensures that if anything goes wrong in a modern browser, it won’t be catastrophic. There’s a common misconception that progressive enhancement means that you’ll spend your time dealing with older browsers, but in fact the opposite is true. Putting the basic functionality into place doesn’t take very long at all. And once you’ve done that, you’re free to spend all your time experimenting with the latest and greatest browser technologies, secure in the knowledge that even if they aren’t universally supported yet, that’s OK: you’ve already got your fallback in place. The key to thinking about web development this way is realising that there isn’t one final interface – there could be many, slightly different interfaces depending on the properties and capabilities of any particular user agent at any particular moment. And that’s OK. Websites do not need to look the same in every browser. Once you truly accept that, it’s an immensely liberating idea. Instead of spending your time trying to make websites look the same in wildly varying browsers, you can spend your time making sure that the core functionality of what you build works everywhere, while providing the best possible experience for more capable browsers. Allow me to demonstrate with a simple example: navigation. Step one: core functionality Let’s say we have a straightforward website about the twelve days of Christmas, with a page for each day. The core functionality is pretty clear: To read about any particular day. To browse from day to day. The first is easily satisfied by marking up the text with headings, paragraphs and all the usual structural HTML elements. The second is satisfied by providing a list of good ol’ hyperlinks. Now where’s the best place to position this navigation list? Personally, I’m a big fan of the jump-to-footer pattern. This puts the content first and the navigation second. At the top of the page there’s a link with an href attribute pointing to the fragment identifier for the navigation. <body> <main role=""main"" id=""top""> <a href=""#menu"" class=""control"">Menu</a> ... </main> <nav role=""navigation"" id=""menu""> ... <a href=""#top"" class=""control"">Dismiss</a> </nav> </body> See the footer-anchor pattern in action. Because it’s nothing more than a hyperlink, this works in just about every browser since the dawn of the web. Following hyperlinks is what web browsers were made to do (hence the name). Step two: layout as an enhancement The footer-anchor pattern is a particularly neat solution on small-screen devices, like mobile phones. Once more screen real estate is available, I can use the magic of CSS to reposition the navigation above the content. I could use position: absolute, flexbox or, in this case, display: table. 
@media all and (min-width: 35em) { .control { display: none; } body { display: table; } [role=""navigation""] { display: table-caption; columns: 6 15em; } } See the styles for wider screens in action Step three: enhance! Right. At this point I’m providing core functionality to everyone, and I’ve got nice responsive styles for wider screens. I could stop here, but the real advantage of progressive enhancement is that I don’t have to. From here on, I can go crazy adding all sorts of fancy enhancements for modern browsers, without having to worry about providing a fallback for older browsers – the fallback is already in place. What I’d really like is to provide a swish off-canvas pattern for small-screen devices. Here’s my plan: Position the navigation under the main content. Listen out for the .control links being activated and intercept that action. When those links are activated, toggle a class of .active on the body. If the .active class exists, slide the content out to reveal the navigation. Here’s the CSS for positioning the content and navigation: @media all and (max-width: 35em) { [role=""main""] { transition: all .25s; width: 100%; position: absolute; z-index: 2; top: 0; right: 0; } [role=""navigation""] { width: 75%; position: absolute; z-index: 1; top: 0; right: 0; } .active [role=""main""] { transform: translateX(-75%); } } In my JavaScript, I’m going to listen out for any clicks on the .control links and toggle the .active class on the body accordingly: (function (win, doc) { 'use strict'; var linkclass = 'control', activeclass = 'active', toggleClassName = function (element, toggleClass) { var reg = new RegExp('(\\s|^)' + toggleClass + '(\\s|$)'); if (!element.className.match(reg)) { element.className += ' ' + toggleClass; } else { element.className = element.className.replace(reg, ''); } }, navListener = function (ev) { ev = ev || win.event; var target = ev.target || ev.srcElement; if (target.className.indexOf(linkclass) !== -1) { ev.preventDefault(); toggleClassName(doc.body, activeclass); } }; doc.addEventListener('click', navListener, false); }(this, this.document)); I’m all set, right? Not so fast! Cutting the mustard I’ve made the assumption that addEventListener will be available in my JavaScript. That isn’t a safe assumption. That’s because JavaScript – unlike HTML or CSS – isn’t fault-tolerant. If you use an HTML element or attribute that a browser doesn’t understand, or if you use a CSS selector, property or value that a browser doesn’t understand, it’s no big deal. The browser will just ignore what it doesn’t understand: it won’t throw an error, and it won’t stop parsing the file. JavaScript is different. If you make an error in your JavaScript, or use a JavaScript method or property that a browser doesn’t recognise, that browser will throw an error, and it will stop parsing the file. That’s why it’s important to test for features before using them in JavaScript. That’s also why it isn’t safe to rely on JavaScript for core functionality. In my case, I need to test for the existence of addEventListener: (function (win, doc) { if (!win.addEventListener) { return; } ... }(this, this.document)); The good folk over at the BBC call this kind of feature test cutting the mustard. If a browser passes the test, it cuts the mustard, and so it gets the enhancements. If a browser doesn’t cut the mustard, it doesn’t get the enhancements. And that’s fine because, remember, websites don’t need to look the same in every browser.
I want to make sure that my off-canvas styles are only going to apply to mustard-cutting browsers. I’m going to use JavaScript to add a class of .cutsthemustard to the document: (function (win, doc) { if (!win.addEventListener) { return; } ... var enhanceclass = 'cutsthemustard'; doc.documentElement.className += ' ' + enhanceclass; }(this, this.document)); Now I can use the existence of that class name to adjust my CSS: @media all and (max-width: 35em) { .cutsthemustard [role=""main""] { transition: all .25s; width: 100%; position: absolute; z-index: 2; top: 0; right: 0; } .cutsthemustard [role=""navigation""] { width: 75%; position: absolute; z-index: 1; top: 0; right: 0; } .cutsthemustard .active [role=""main""] { transform: translateX(-75%); } } See the enhanced mustard-cutting off-canvas navigation. Remember, this only applies to small screens so you might have to squish your browser window. Enhance all the things! This was a relatively simple example, but it illustrates the thinking behind progressive enhancement: once you’re providing the core functionality to everyone, you’re free to go crazy with all the latest enhancements for modern browsers. Progressive enhancement doesn’t mean you have to provide all the same functionality to everyone – quite the opposite. That’s why it’s key to figure out early on what the core functionality is, and make sure that it can be provided with the most basic technology. But from that point on, you’re free to add many more features that aren’t mission-critical. You should reward more capable browsers by giving them more of those features, such as animation in CSS, geolocation in JavaScript, and new input types in HTML. Like I said, progressive enhancement isn’t a technology. It’s a way of thinking. Once you start thinking this way, you’ll be prepared for whatever the next ten years throws at us.",2014,Jeremy Keith,jeremykeith,2014-12-09T00:00:00+00:00,https://24ways.org/2014/responsive-enhancement/,code 49,Universal React,"One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications. More generally we’ve seen an even greater rise in the number of applications built primarily on the client side with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google’s search bots). Additionally, we gain a performance improvement by being able to render from the server rather than having to wait for all the JavaScript to be loaded and executed. The good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. The way in which these apps work is as follows: The user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page. In the background, the client-side JavaScript is executed and takes over the duty of rendering the page. The next time a user clicks, rather than being sent to the server, the client-side app is in control. If the user doesn’t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again. This means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. 
We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that’s clever enough to handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem. It’s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you’d like a refresher, the ReactJS docs are a good place to start. Getting started We’re going to create a tiny ReactJS application that will work on the server and the client. First we’ll need to create a new project and install some dependencies. In a new, blank directory, run: npm init -y npm install --save ejs express react react-router react-dom That will create a new project and install our dependencies: ejs is a templating engine that we’ll use to render our HTML on the server. express is a small web framework we’ll run our server on. react-router is a popular routing solution for React so our app can fully support and respect URLs. react-dom is a small React library used for rendering React components. We’re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too. npm install --save-dev babel-cli babel-preset-es2015 babel-preset-react Then, create a .babelrc file that contains the following: { ""presets"": [""es2015"", ""react""] } What we’ve done here is install Babel’s command line interface (CLI) tool and configure it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We’ll need the React transforms once we start writing JSX for our React components. Creating a server For now, our ExpressJS server is pretty straightforward. All we’ll do is render a view that says ‘Hello World’. Here’s our server code: import express from 'express'; import http from 'http'; const app = express(); app.use(express.static('public')); app.set('view engine', 'ejs'); app.get('*', (req, res) => { res.render('index'); }); const server = http.createServer(app); server.listen(3003); server.on('listening', () => { console.log('Listening on 3003'); }); Here we’re using ES6 modules, which I wrote about on 24 ways last year, if you’d like a reminder. We tell the app to render the index view on any GET request (that’s what app.get('*') means: the wildcard matches any route). We now need to create the index view file, which Express expects to be defined in views/index.ejs: <!DOCTYPE html> <html> <head> <title>My App</title> </head> <body> Hello World </body> </html> Finally, we’re ready to run the server. Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command: ./node_modules/.bin/babel-node server.js And you should now be able to visit http://localhost:3003 and see ‘Hello World’ right there: Building the React app Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about, which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs. Firstly, let’s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view.
We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so: <body> <%- markup %> </body> Next, we’ll define the routes we want our app to have using React Router. For now we’ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it’s clearer to define them as an object. Here’s what we’re starting with: const routes = { path: '', component: AppComponent, childRoutes: [ { path: '/', component: IndexComponent } ] } These are just placed at the top of server.js, after the import statements. Later we’ll move these into a separate file, but for now they are fine where they are. Notice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let’s quickly define components/app.js and components/index.js. app.js looks like so: import React from 'react'; export default class AppComponent extends React.Component { render() { return ( <div> <h2>Welcome to my App</h2> { this.props.children } </div> ); } } When a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland: import React from 'react'; export default class IndexComponent extends React.Component { render() { return ( <div> <p>This is the index page</p> </div> ); } } Server-side routing with React Router Head back into server.js, and firstly we’ll need to add some new imports: import React from 'react'; import { renderToString } from 'react-dom/server'; import { match, RoutingContext } from 'react-router'; import AppComponent from './components/app'; import IndexComponent from './components/index'; The ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It’s this method that we’ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we’ll need to render. This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don’t need to concern yourself about how this component works, so don’t worry too much. 
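If you’d like a feel for what renderToString produces before wiring everything together, you can try it on a bare element in a scratch file run through babel-node. A quick sketch, separate from the app itself:

// scratch.js: a standalone experiment, not part of the tutorial's app
import React from 'react';
import { renderToString } from 'react-dom/server';

// renderToString takes a React element and returns its HTML as a string
const html = renderToString(<h1>Hello from the server</h1>);
console.log(html);
// Logs the h1 markup, annotated with React's internal attributes
// (data-reactid and friends in this version of React)

Run it with ./node_modules/.bin/babel-node scratch.js and you’ll see plain HTML on stdout; a string like that is exactly what we’re about to hand to our EJS template.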
Now for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes: app.get('*', (req, res) => { // routes is our object of React routes defined above match({ routes, location: req.url }, (err, redirectLocation, props) => { if (err) { // something went badly wrong, so 500 with a message res.status(500).send(err.message); } else if (redirectLocation) { // we matched a ReactRouter redirect, so redirect from the server res.redirect(302, redirectLocation.pathname + redirectLocation.search); } else if (props) { // if we got props, that means we found a valid component to render // for the given route const markup = renderToString(<RoutingContext {...props} />); // render `index.ejs`, but pass in the markup we want it to display res.render('index', { markup }) } else { // no route match, so 404. In a real app you might render a custom // 404 view here res.sendStatus(404); } }); }); We call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occurring or a redirect (React Router has built-in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we’ve made it this far, it means we found a matching component to render and we can use this code to render it: ... } else if (props) { // if we got props, that means we found a valid component to render // for the given route const markup = renderToString(<RoutingContext {...props} />); // render `index.ejs`, but pass in the markup we want it to display res.render('index', { markup }) } else { ... } The renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components. Note the {...props}, which is a neat bit of JSX syntax that spreads out our object into key-value properties. To see this better, note the two pieces of JSX code below, both of which are equivalent: <MyComponent a=""foo"" b=""bar"" /> // OR: const props = { a: ""foo"", b: ""bar"" }; <MyComponent {...props} /> Running the server again I know that felt like a lot of work, but the good news is that once you’ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. To check, restart the server and head to http://localhost:3003 once more. You should see it all working! Refactoring and one more route Before we move on to getting this code running on the client, let’s add one more route and do some tidying up. First, move our routes object out into routes.js: import AppComponent from './components/app'; import IndexComponent from './components/index'; const routes = { path: '', component: AppComponent, childRoutes: [ { path: '/', component: IndexComponent } ] } export { routes }; And then update server.js. You can remove the two component imports and replace them with: import { routes } from './routes'; Finally, let’s add one more route for /about and links between them.
Create components/about.js: import React from 'react'; export default class AboutComponent extends React.Component { render() { return ( <div> <p>A little bit about me.</p> </div> ); } } And then you can add it to routes.js too: import AppComponent from './components/app'; import IndexComponent from './components/index'; import AboutComponent from './components/about'; const routes = { path: '', component: AppComponent, childRoutes: [ { path: '/', component: IndexComponent }, { path: '/about', component: AboutComponent } ] } export { routes }; If you now restart the server and head to http://localhost:3003/about, you’ll see the about page! For the finishing touch we’ll use the React Router Link component to add some links between the pages. Edit components/app.js to look like so: import React from 'react'; import { Link } from 'react-router'; export default class AppComponent extends React.Component { render() { return ( <div> <h2>Welcome to my App</h2> <ul> <li><Link to='/'>Home</Link></li> <li><Link to='/about'>About</Link></li> </ul> { this.props.children } </div> ); } } You can now click between the pages to navigate. However, every time we do so, the requests hit the server. Now we’re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience. Client-side rendering First, we’re going to make a small change to views/index.ejs. React doesn’t like rendering directly into the body and will give a warning when you do so. To prevent this we’ll wrap our app in a div: <body> <div id=""app""><%- markup %></div> <script src=""build.js""></script> </body> I’ve also added in a script tag to build.js, which is the file we’ll generate containing all our client-side code. Next, create client-render.js. This is going to be the only bit of JavaScript that’s exclusive to the client side. In it we need to pull in our routes and render them to the DOM. import React from 'react'; import ReactDOM from 'react-dom'; import { Router } from 'react-router'; import { routes } from './routes'; import createBrowserHistory from 'history/lib/createBrowserHistory'; ReactDOM.render( <Router routes={routes} history={createBrowserHistory()} />, document.getElementById('app') ) The first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser’s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we’ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation. Finally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also telling ReactDOM where to render: the #app element. Generating build.js We’re actually almost there! The final thing we need to do is generate our client side bundle. For this we’re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We’ll install it and babel-loader, a webpack plugin for transforming code through Babel.
npm install --save-dev webpack babel-loader To run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code: var path = require('path'); module.exports = { entry: path.join(process.cwd(), 'client-render.js'), output: { path: './public/', filename: 'build.js' }, module: { loaders: [ { test: /\.js$/, loader: 'babel' } ] } } Note first that this file can’t be written in ES6 as it doesn’t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location – if we just gave it the string ‘client-render.js’, webpack wouldn’t be able to find it. Next, we tell webpack where to output our file, and here I’m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first. Now we’re ready to generate the bundle! ./node_modules/.bin/webpack This will take a fair few seconds to run (on my machine it’s about seven or eight), but once it has, it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you’ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect! The first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you’re developing. Conclusions First, if you’d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions. Next, I want to stress that you shouldn’t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a static site like the one we built today is worth its complexity, and you’d be right to wonder. I used it as it’s an easy example to work with, but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it’s a suitable infrastructure for you. With that, all that’s left for me to do is wish you a very merry Christmas and best of luck with your React applications!",2015,Jack Franklin,jackfranklin,2015-12-05T00:00:00+00:00,https://24ways.org/2015/universal-react/,code 52,Git Rebasing: An Elfin Workshop Workflow,"This year Santa’s helpers have been tasked with making a garland. It’s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they’re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn’t. It’s worse; it’s about Git.) For the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they’re able to work independently, but towards the common goal of making a beautiful garland.
At first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they’re going to need a system to change the beads out when mistakes are made in the chain. The first common mistake is not looking to see what the latest chain is that’s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It’s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It’s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. They can request a new copy by running the command git pull --rebase=preserve (They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they’ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available. The next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I’ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it’s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands: git checkout master git checkout -b 1234_pattern-name 1234 represents the work order number and pattern-name describes the pattern they’re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland. To combine their work with the main garland, the elves need to make a few decisions. If they’re making a single strand, they issue the following commands: git checkout master git merge --ff-only 1234_pattern-name To share their work they publish the new version of the main garland to the workshop spool with the command git push origin master. Sometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order. To update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command: git reset --merge ORIG_HEAD This works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. 
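For the curious: outside the workshop, that invisible bookmark is what Git calls ORIG_HEAD, and a longer trail of breadcrumbs lives in the reflog. Both can be inspected with everyday commands, none of which change the garland:

# Peek at the commit the bookmark points to
git log -1 ORIG_HEAD

# Review the full trail of recent branch movements
git reflog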
The garland no longer has the elf’s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally. With these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf’s beads will be restrung on the tip of the main workshop garland. Sometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf’s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they’re ready to close out their work order. Once their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order: git checkout master git merge --ff-only 1234_pattern-name git push origin master Generally this process works well for the elves. Sometimes, though, when they’re tired or bored or a little drunk on festive cheer, they realise there’s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options. Let’s pretend the elf has a sequence of five beads she’s been working on. The work order says the pattern should be red-blue-red-blue-red. If the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5. If she’s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5. If one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence. Using a tool that’s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available. Start the garland surgery process with the command: git rebase --interactive HEAD~4 A new screen comes up with the following information (the oldest bead is on top): pick c2e4877 Red bead pick 9b5555e Blue bead pick 7afd66b Blue bead pick e1f2537 Red bead The elf adjusts the list, changing “pick” to “edit” next to the first blue bead: pick c2e4877 Red bead edit 9b5555e Blue bead pick 7afd66b Blue bead pick e1f2537 Red bead She then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead. She needs to manually fix up her garland to add the new red bead. 
If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes. Once she’s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message “Red bead – added” so she can easily find it. The garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue. The new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. She opens up a new program to help resolve the conflict by running git mergetool. She knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads. With the conflict resolved, the elf saves her changes and quits the mergetool. Back at the command line, the elf checks the status of her work using the command git status. rebase in progress; onto 4a9cb9d You are currently rebasing branch '2_RBRBR' on '4a9cb9d'. (all conflicts fixed: run ""git rebase --continue"") Changes to be committed: (use ""git reset HEAD <file>..."" to unstage) modified: beads.txt Untracked files: (use ""git add <file>..."" to include in what will be committed) beads.txt.orig She removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands: git add beads.txt git commit --message ""Blue bead -- resolved conflict"" With the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files. She incorporates the changes from the left and right column to ensure her bead sequence is correct. Once the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland: git add beads.txt git commit --message ""Red bead -- resolved conflict"" and then runs the final verification steps in the rebase process (git rebase --continue). The verification process runs through to the end, and the elf checks her work using the command git log --oneline. 9269914 Red bead -- resolved conflict 4916353 Blue bead -- resolved conflict aef0d5c Red bead -- added 9b5555e Blue bead c2e4877 Red bead She knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct. Sometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It’s made her more confident at restringing beads when she’s found real mistakes. And she doesn’t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don’t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked on. Our lessons from the workshop: By using rebase to update your branches, you avoid merge commits and keep a clean commit history. If you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft.
If you want to completely discard the work, use the parameter --hard. If you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch. If you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. You will need to include how many commits back in time you want to review.",2015,Emma Jane Westby,emmajanewestby,2015-12-07T00:00:00+00:00,https://24ways.org/2015/git-rebasing/,code 54,Putting My Patterns through Their Paces,"Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work. The thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me. Meet the Teaser Here’s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.) In its simplest form, a teaser contains a headline, which links to an article: Fairly straightforward, sure. But it’s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types – some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile. Nearly every element visible on this page is built out of our generic “teaser” pattern. But the teaser variation I’d like to call out is the one that appears on The Toast’s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline. The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts. And this is, as it happens, the teaser variation that gave me pause. Back in the old days – you know, like six months ago – I probably would’ve marked this module up to match the design.
In other words, I would’ve looked at the module’s visual hierarchy (metadata up top, headline and content below) and written the following HTML: <div class=""teaser""> <p class=""article-byline"">By <a href=""#"">Author Name</a></p> <a class=""comment-count"" href=""#"">126 <i>comments</i></a> <h1 class=""article-title""><a href=""#"">Article Title</a></h1> <p class=""teaser-excerpt"">Lorem ipsum dolor sit amet, consectetur…</p> </div> But then I caught myself, and realized this wasn’t the best approach. Moving Beyond Layout Since I’ve started working responsively, there’s a question I work into every step of my design process. Whether I’m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself: What if someone doesn’t browse the web like I do? …Okay, that doesn’t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it’s been invaluable to so many aspects of my practice. If I’m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I’m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It’s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web. And that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it’d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they’re associated. That’s why I find it’s helpful to begin designing a pattern’s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I’m trying to support. In other words, if someone’s encountering my design without the CSS I’ve written, what should their experience be? So I took a step back, and came up with a different approach: <div class=""teaser""> <h1 class=""article-title""><a href=""#"">Article Title</a></h1> <h2 class=""article-byline"">By <a href=""#"">Author Name</a></h2> <p class=""teaser-excerpt""> Lorem ipsum dolor sit amet, consectetur… <a class=""comment-count"" href=""#"">126 <i>comments</i></a> </p> </div> Much, much better. This felt like a better match for the content I was designing: the headline – easily most important element – was at the top, followed by the author’s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that’s why it’s at the very end of the excerpt, the last element within our teaser. And with some light styling, we’ve got a respectable-looking hierarchy in place: Yeah, you’re right – it’s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. 
First, we’ll bolster the markup with an extra element around our title and byline: <div class=""teaser""> <div class=""teaser-hed""> <h1 class=""article-title""><a href=""#"">Article Title</a></h1> <h2 class=""article-byline"">By <a href=""#"">Author Name</a></h2> </div> … </div> With that in place, we can use flexbox to tweak our layout, like so: .teaser-hed { display: flex; flex-direction: column-reverse; } flex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children. Getting closer! But as great as flexbox is, it doesn’t do anything for elements outside our container, like our little comment count, which is, as you’ve probably noticed, still stranded at the very bottom of our teaser. Flexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser’s using a sufficiently modern version of flexbox. Here’s the one we used: var doc = document.body || document.documentElement; var style = doc.style; if ( style.webkitFlexWrap == '' || style.msFlexWrap == '' || style.flexWrap == '' ) { doc.className += "" supports-flex""; } Eagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser’s design: .supports-flex .teaser-hed { display: flex; flex-direction: column-reverse; } .supports-flex .teaser .comment-count { position: absolute; right: 0; top: 1.1em; } If the supports-flex class is present, we can apply our flexbox layout to the title area, sure – but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don’t meet our threshold for our advanced styles are left with an attractive design that matches our HTML’s content hierarchy; but the ones that pass our test receive the finished, final design. And with that, our teaser’s complete. Diving Into Device-Agnostic Design This is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I’d recommend Heydon Pickering’s “Flexbox Grid Finesse”, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you’d be right! In fact, it’s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web. And what’s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well. Open video Even a simple search form can be conditionally enhanced, given a little layered thinking. 
This more layered approach to interface design isn’t a new one, mind you: it’s been championed by everyone from Filament Group to the BBC. And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I’ve found to practice responsive design. As Trent Walton once wrote, Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability. We have a weird job, working on the web. We’re designing for the latest mobile devices, sure, but we’re increasingly aware that our definition of “smartphone” is much too narrow. Browsers have started appearing on our wrists and in our cars’ dashboards, but much of the world’s mobile data flows over sub-3G networks. After all, the web’s evolution has never been charted along a straight line: it’s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don’t yet know about, a more device-agnostic, more layered design process can better prepare our patterns – and ourselves – for the future. (It won’t help you get enough to eat at holiday parties, though.)",2015,Ethan Marcotte,ethanmarcotte,2015-12-10T00:00:00+00:00,https://24ways.org/2015/putting-my-patterns-through-their-paces/,code 55,How Tabs Should Work,"Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They’ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented. So this post is my definition of how a tabbing system should work, and one approach of implementing that. But… tabs are easy, right? I’ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system: var tabs = $('.tab').click(function () { tabs.hide().filter(this.hash).show(); }).map(function () { return $(this.hash)[0]; }); $('.tab:first').click(); Simple, right? Nearly fits in a tweet (ignoring the whole jQuery library…). Still, it’s riddled with problems that make it a far from perfect solution. Requirements: what makes the perfect tab? All content is navigable and available without JavaScript (crawler-compatible and low JS-compatible). ARIA roles. The tabs are anchor links that: are clickable have block layout have their href pointing to the id of the panel element use the correct cursor (i.e. cursor: pointer). Since tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open. Right-clicking (and Shift-clicking) doesn’t cause the tab to be selected. Native browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place). The first three points are all to do with the semantics of the markup and how the markup has been styled. I think it’s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work. The last three points are JavaScript problems. Let’s investigate that. 
The shitmus test Like a litmus test, here’s a couple of quick ways you can tell if a tabbing system is poorly implemented: Change tab, then use the Back button (or keyboard shortcut) and it breaks The tab isn’t a link, so you can’t open it in a new tab These two basic things are, to me, the bare minimum that a tabbing system should have. Why is this important? The people who push their so-called native apps on users can’t have more reasons why the web sucks. If something as basic as a tab doesn’t work, obviously there’s more ammo to push a closed native app or platform on your users. If you’re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn’t mean don’t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. </rant> :breath: URI fragment, absolute URL or query string? A URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel. A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content. This decision really depends on the context of your tabbing system. For something like GitHub’s tabs to view a pull request, it makes sense that the full URL changes. For our problem though, I want to solve the issue when the page doesn’t do a full URL update; that is, your regular run-of-the-mill tabbing system. I used to be from the school of using the hash to show the correct tab, but I’ve recently been exploring whether the query string can be used. The biggest reason is that multiple hashes don’t work, and comma-separated hash fragments don’t make any sense to control multiple tabs (since it doesn’t actually link to anything). For this article, I’ll keep focused on using a single tabbing system and a hash on the URL to control the tabs. Markup I’m going to assume subcontent, so my markup would look like this (yes, this is a cat demo…): <ul class=""tabs""> <li><a class=""tab"" href=""#dizzy"">Dizzy</a></li> <li><a class=""tab"" href=""#ninja"">Ninja</a></li> <li><a class=""tab"" href=""#missy"">Missy</a></li> </ul> <div id=""dizzy""> <!-- panel content --> </div> <div id=""ninja""> <!-- panel content --> </div> <div id=""missy""> <!-- panel content --> </div> It’s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we’ll see once we’re on to writing the code. URL-driven tabbing systems Instead of making the code responsive to the user’s input, we’re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free. With that in mind, let’s start building up our code. I’ll assume we have the jQuery library, but I’ve also provided the full code working without a library (vanilla, if you will), but it depends on relatively new (polyfillable) tech like classList and dataset (which generally have IE10 and all other browser support). Note that I’ll start with the simplest solution, and I’ll refactor the code as I go along, like in places where I keep calling jQuery selectors. 
function show(id) { // remove the selected class from the tabs, // and add it back to the one the user selected $('.tab').removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it $('.panel').hide().filter(id).show(); } $(window).on('hashchange', function () { show(location.hash); }); // initialise by showing the first panel show('#dizzy'); This works pretty well for such little code. Notice that we don’t have any click handlers for the user and the Back button works right out of the box. However, there are a number of problems we need to fix: The initialised tab is hard-coded to the first panel, rather than what’s on the URL. If there’s no hash on the URL, all the panels are hidden (and thus broken). If you scroll to the bottom of the example, you’ll find a “top” link; clicking that will break our tabbing system. I’ve purposely made the page long, so that when you click on a tab, you’ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying. From our criteria at the start of this post, we’ve already solved items 4 and 5. Not a terrible start. Let’s solve items 1 through 3 next. Using the URL to initialise correctly and protect from breakage Instead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it’s available. The problem is: what if the hash on the URL isn’t actually for a tab? The solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won’t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. The result of $('.tab') should be cached. So now the code will collect all the tabs, then find the related panels from those tabs, and we’ll use that list to double-check the values we give the show function (during initialisation, for instance). // collect all the tabs var tabs = $('.tab'); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')); function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', function () { var hash = location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } }); // initialise show(targets.indexOf(location.hash) !== -1 ? location.hash : ''); The core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab. The second breakage we saw in the original demo was that clicking the “top” link would break our tabs. This was due to the hashchange event firing and the code not validating the hash that was passed. Now that this validation happens, the panels don’t break. So far we’ve got a tabbing system that: Works without JavaScript. Supports right-click and Shift-click (and doesn’t select in these cases). Loads the correct panel if you start with a hash. Supports native browser navigation. Supports the keyboard.
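None of this logic is jQuery-specific, by the way. The same show function translates almost line for line into plain DOM code. Here’s a rough sketch of the no-library route (my own approximation, not the finished vanilla version linked at the end):

// A sketch only: assumes querySelectorAll and classList support
var tabs = [].slice.call(document.querySelectorAll('.tab'));
var targets = tabs.map(function (tab) { return tab.hash; });
var panels = targets.map(function (hash) {
  return document.querySelector(hash);
});

function show(id) {
  // if no value was given, take the first panel
  if (!id) { id = targets[0]; }
  tabs.forEach(function (tab) {
    if (tab.hash === id) {
      tab.classList.add('selected');
    } else {
      tab.classList.remove('selected');
    }
  });
  // hide every panel except the one we're interested in
  panels.forEach(function (panel) {
    panel.style.display = ('#' + panel.id === id) ? 'block' : 'none';
  });
}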
The only annoying problem we have now is that the page jumps when a tab is selected. That’s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it’s all for a good cause. Removing the jump to tab You’d be forgiven for thinking you just need to hook a click handler and return false. It’s what I started with. Only that’s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support. There may be another way to solve this, but what follows is the way I found – and it works. It’s just a bit… hairy, as I said. We’re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won’t jump the page. The change involves the following: Add a click handler that removes the id from the target panel, and cache this in a target variable that we’ll use later in hashchange (see point 4). In the same click handler, set the location.hash to the current link’s hash. This is important because it forces a hashchange event regardless of whether the URL actually changed, which prevents the tabs breaking (try it yourself by removing this line). For each panel, put a backup copy of the id attribute in a data property (I’ve called it old-id). When the hashchange event fires, if we have a target value, let’s put the id back on the panel. These changes result in this final code: /*global $*/ // a temp value to cache *what* we're about to show var target = null; // collect all the tabs var tabs = $('.tab').on('click', function () { target = $(this.hash).removeAttr('id'); // if the URL isn't going to change, then hashchange // event doesn't fire, so we trigger the update manually if (location.hash === this.hash) { // but this has to happen after the DOM update has // completed, so we wrap it in a setTimeout 0 setTimeout(update, 0); } }); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')).each(function () { // keep a copy of what the original el.id was $(this).data('old-id', this.id); }); function update() { if (target) { target.attr('id', target.data('old-id')); target = null; } var hash = window.location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } } function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', update); // initialise if (targets.indexOf(window.location.hash) !== -1) { update(); } else { show(); } This version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add. ARIA roles This article on ARIA tabs made it very easy to get the tabbing system working as I wanted. The tasks were simple: Set the role attribute to tab for the tabs, and tabpanel for the panels. Set aria-controls on the tabs to point to their related panel (by id).
I use JavaScript to add tabindex=0 to all the tab elements. When I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false. When I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false. And that’s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible. Check out the final version (and the non-jQuery version as promised). In conclusion There’s a lot of tab implementations out there, but there’s an equal amount that break the browsing paradigm and the simple linkability of content. Clearly there’s a special hell for those tab systems that don’t even use links, but I think it’s clear that even in something that’s relatively simple, it’s the small details that make or break the user experience. Obviously there are corners I’ve not explored, like when there’s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that’s for another year!",2015,Remy Sharp,remysharp,2015-12-22T00:00:00+00:00,https://24ways.org/2015/how-tabs-should-work/,code 63,Be Fluid with Your Design Skills: Build Your Own Sites,"Just five years ago in 2010, when we were all busy trying to surprise and delight, learning CSS3 and trying to get whole websites onto one page, we had a poster on our studio wall. It was entitled ‘Designers Vs Developers’, an infographic that showed us the differences between the men(!) who created websites. Designers wore skinny jeans and used Macs and developers wore cargo pants and brought their own keyboards to work. We began to learn that designers and developers were not only doing completely different jobs but were completely different people in every way. This opinion was backed up by hundreds of memes, millions of tweets and pages of articles which used words like void and battle and versus. Thankfully, things move quickly in this industry; the wide world of web design has moved on in the last five years. There are new devices, technologies, tools – and even a few women. Designers have been helped along by great apps, software, open source projects, conferences, and a community of people who, to my unending pride, love to share their knowledge and their work. So the world has moved on, and if Miley Cyrus, Ruby Rose and Eliot Sumner are identifying as gender fluid (an identity which refers to a gender which varies over time or is a combination of identities), then I would like to come out as discipline fluid! OK, I will probably never identify as a developer, but I will identify as fluid! How can we be anything else in an industry that moves so quickly? That’s how we should think of our skills, our interests and even our job titles. After all, Steve Jobs told us that “Design is not just what it looks like and feels like. Design is how it works.” Sorry skinny-jean-wearing designers – this means we’re all designing something together. And it’s not just about knowing the right words to use: you have to know how it feels. How it feels when you make something work, when you fix that bug, when you make it work on IE. Like anything in life, things run smoothly when you make the effort to share experiences, empathise and deeply understand the needs of others. How can designers do that if they’ve never built their own site? 
I’m not talking the big stuff, I’m talking about your portfolio site, your mate’s business website, a website for that great idea you’ve had. I’m talking about doing it yourself to get an unique insight into how it feels. We all know that designers and developers alike love an <ol>, so here it is. Ten reasons designers should be fluid with their skills and build their own sites 1. It’s never been easier Now here’s where the definition of ‘build’ is going to get a bit loose and people are going to get angry, but when I say it’s never been easier I mean because of the existence of apps and software like WordPress, Squarespace, Tumblr, et al. It’s easy to make something and get it out there into the world, and these are all gateway drugs to hard coding! 2. You’ll understand how it feels How it feels to be so proud that something actually works that you momentarily don’t notice if the kerning is off or the padding is inconsistent. How it feels to see your site appear when you’ve redirected a URL. How it feels when you just can’t work out where that one extra space is in a line of PHP that has killed your whole site. 3. It makes you a designer Not a better designer, it makes you a designer when you are designing how things look and how they work. 4. You learn about movement Photoshop and Sketch just don’t cut it yet. Until you see your site in a browser or your app on a phone, it’s hard to imagine how it moves. Building your own sites shows you that it’s not just about how the content looks on the screen, but how it moves, interacts and feels. 5. You make techie friends All the tutorials and forums in the world can’t beat your network of techie friends. Since I started working in web design I have worked with, sat next to, and co-created with some of the greatest developers. Developers who’ve shared their knowledge, encouraged me to build things, patiently explained HTML, CSS, servers, divs, web fonts, iOS development. There has been no void, no versus, very few battles; just people who share an interest and love of making things. 6. You will own domain names When something is paid for, online and searchable then it’s real and you’ve got to put the work in. Buying domains has taught me how to stop procrastinating, but also about DNS, FTP, email, and how servers work. 7. People will ask you to do things
 Learning about code and development opens a whole new world of design. When you put your own personal websites and projects out there people ask you to do more things. OK, so sometimes those things are “Make me a website for free”, but more often it’s cool things like “Come and speak at my conference”, “Write an article for my magazine” and “Collaborate with me.” 8. The young people are coming! They love typography, they love print, they love layout, but they’ve known how to put a website together since they started their first blog aged five and they show me clever apps they’ve knocked together over the weekend! They’re new, they’re fluid, and they’re better than us! 9. Your portfolio is your portfolio OK, it’s an obvious one, but as designers our work is our CV, our legacy! We need to show our skill, our attention to detail and our creativity in the way we showcase our work. Building your portfolio is the best way to start building your own websites. (And please be that designer who’s bothered to work out how to change the Squarespace favicon!) 10. It keeps you fluid! Building your own websites is tough. You’ll never be happy with it, you’ll constantly be updating it to keep up with technology and fashion, and by the time you’ve finished it you’ll want to start all over again. Perfect for forcing you to stay up-to-date with what’s going on in the industry. </ol>",2015,Ros Horner,roshorner,2015-12-12T00:00:00+00:00,https://24ways.org/2015/be-fluid-with-your-design-skills-build-your-own-sites/,code 64,Being Responsive to the Small Things,"It’s that time of the year again to trim the tree with decorations. Or maybe a DOM tree? Any web page is made of HTML elements that lay themselves out in a tree structure. We start at the top and then have multiple branches with branches that branch out from there. To decorate our tree, we use CSS to specify which branches should receive the tinsel we wish to adorn upon it. It’s all so lovely. In years past, this was rather straightforward. But these days, our trees need to be versatile. They need to be responsive! Responsive web design is pretty wonderful, isn’t it? Based on our viewport, we can decide how elements on the page should change their appearance to accommodate various constraints using media queries. Clearleft have a delightfully clean and responsive site Alas, it’s not all sunshine, lollipops, and rainbows. With complex layouts, we may have design chunks — let’s call them components — that appear in different contexts. Each context may end up providing its own constraints on the design, both in its default state and in its possibly various responsive states. Media queries, however, limit us to the context of the entire viewport, not individual containers on the page. For every container our component lives in, we need to specify how to rearrange things in that context. The more complex the system, the more contexts we need to write code for. @media (min-width: 800px) { .features > .component { } .sidebar > .component {} .grid > .component {} } Each new component and each new breakpoint just makes the entire system that much more difficult to maintain. 
@media (min-width: 600px) { .features > .component { } .grid > .component {} } @media (min-width: 800px) { .features > .component { } .sidebar > .component {} .grid > .component {} } @media (min-width: 1024px) { .features > .component { } } Enter container queries Container queries, also known as element queries, allow you to specify conditional CSS based on the width (or maybe height) of the container that an element lives in. In doing so, you no longer have to consider the entire page and the interplay of all the elements within. With container queries, you’ll be able to consider the breakpoints of just the component you’re designing. As a result, you end up specifying less code and the components you develop have fewer dependencies on the things around them. (I guess that makes your components more independent.) Awesome, right? There’s only one catch. Browsers can’t do container queries. There’s not even an official specification for them yet. The Responsive Issues (née Images) Community Group is looking into solving how such a thing would actually work. See, container queries are tricky from an implementation perspective. The contents of a container can affect the size of the container. Because of this, you end up with troublesome circular references. For example, if the width of the container is under 500px then the width of the child element should be 600px, and if the width of the container is over 500px then the width of the child element should be 400px. Can you see the dilemma? When the container is under 500px, the child element resizes to 600px and suddenly the container is 600px. If the container is 600px, then the child element is 400px! And so on, forever. This is bad. I guess we should all just go home and sulk about how we just got a pile of socks when we really wanted the Millennium Falcon. Our saviour this Christmas: JavaScript The three wise men — Tim Berners-Lee, Håkon Wium Lie, and Brendan Eich — brought us the gifts of HTML, CSS, and JavaScript. To date, there are a handful of open source solutions to fill the gap until a browser implementation sees the light of day. Elementary by Scott Jehl ElementQuery by Tyson Matanich EQ.js by Sam Richards CSS Element Queries from Marcj Using any of these can sometimes feel like your toy broke within ten minutes of unwrapping it. Each takes its own approach to specifying the query conditions. For example, Elementary, the smallest of the group, only supports min-width declarations made in a :before selector. .mod-foo:before { content: ""300 410 500""; } The script loops through all the elements that you specify, reading the content property and then setting an attribute value on the HTML element, allowing you to use CSS to style that condition. .mod-foo[data-minwidth~=""300""] { background: blue; } To get the script to run, you’ll need to set up event handlers for when the page loads and for when it resizes. window.addEventListener( ""load"", window.elementary, false ); window.addEventListener( ""resize"", window.elementary, false ); This works okay for static sites but breaks down on pages where elements can expand or contract, or where new content is dynamically inserted. In the case of EQ.js, the implementation requires the creation of the breakpoints in the HTML. That means that you have implementation details in HTML, JavaScript, and CSS. (Although, with the JavaScript, once it’s in the build system, it shouldn’t ever be much of a concern unless you’re tracking down a bug.) 
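Under the hood, these scripts all share roughly the same pattern: measure each element, then expose the result for CSS to hook into. Here is a minimal sketch of that pattern (my own illustration, not any of these libraries’ actual code, and the data-breakpoints attribute is a convention invented for this example):

// For each element that declares breakpoints, record which of those
// breakpoints its current width has passed, so CSS can target the
// result with an attribute selector.
function elementQueries() {
  var els = document.querySelectorAll('[data-breakpoints]');
  for (var i = 0; i < els.length; i++) {
    var breakpoints = els[i].getAttribute('data-breakpoints').split(' ');
    var passed = [];
    for (var j = 0; j < breakpoints.length; j++) {
      if (els[i].offsetWidth >= parseInt(breakpoints[j], 10)) {
        passed.push(breakpoints[j]);
      }
    }
    els[i].setAttribute('data-minwidth', passed.join(' '));
  }
}
window.addEventListener('load', elementQueries, false);
window.addEventListener('resize', elementQueries, false);

Given <div data-breakpoints=""300 410 500""> on an element currently 450 pixels wide, this would set data-minwidth=""300 410"", ready for the same [data-minwidth~=""300""] styling shown above.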
Another problem you may run into is the use of content delivery networks (CDNs) or cross-origin security issues. The ElementQuery and CSS Element Queries libraries need to be able to read the CSS file. If you are unable to set up proper cross-origin resource sharing (CORS) headers, these libraries won’t help. At Shopify, for example, we had all of these problems. The admin that store owners use is very dynamic and the CSS and JavaScript were being loaded from a CDN that prevented the JavaScript from reading the CSS. To go responsive, the team built their own solution — one similar to the other scripts above, in that it loops through elements and adds or removes classes (instead of data attributes) based on minimum or maximum width. The caveat to this particular approach is that the declaration of breakpoints had to be done in JavaScript. var elements = [ { 'module': '.carousel', 'className': 'alpha', minWidth: 768, maxWidth: 1024 }, { 'module': '.button', 'className': 'beta', minWidth: 768, maxWidth: 1024 }, { 'module': '.grid', 'className': 'cappa', minWidth: 768, maxWidth: 1024 } ]; With that done, the script then had to be set to run during various events such as inserting new content via Ajax calls. This sometimes reveals itself in flashes of unstyled breakpoints (FOUB). An unfortunate side effect, but one that is largely imperceptible. Using this approach, however, allowed the Shopify team to make the admin responsive really quickly. Each member of the team was able to tackle the responsive story for a particular component without much concern for how all the other components would react. Each element responds to its own breakpoints, which would amount to dozens of breakpoints if written as traditional viewport media queries. This approach allows for a truly fluid and adaptive interface for all screens. Christmas is over I wish I were the bearer of greater tidings and cheer. It’s not all bad, though. We may one day see browsers implement container queries natively. At which point, we shall all rejoice!",2015,Jonathan Snook,jonathansnook,2015-12-19T00:00:00+00:00,https://24ways.org/2015/being-responsive-to-the-small-things/,code 65,The Accessibility Mindset,"Accessibility is often characterized as additional work, hard to learn and only affecting a small number of people. Those myths have no logical foundation and often stem from outdated information or misconceptions. Indeed, it is an additional skill set to acquire, quite like learning new JavaScript frameworks, CSS layout techniques or new HTML elements. But it isn’t particularly harder to learn than those other skills. A World Health Organization (WHO) report on disabilities states that, [i]ncluding children, over a billion people (or about 15% of the world’s population) were estimated to be living with disability. Being disabled is not as unusual as one might think. Due to chronic health conditions and older people having a higher risk of disability, we are also currently paving the cowpath to an internet that we can still use in the future. Accessibility has a very close relationship with usability, and advancements in accessibility often yield improvements in the usability of a website. Websites are also more adaptable to users’ needs when they are built in an accessible fashion. Beyond the bare minimum In the time of table layouts, web developers could create code that passed validation rules but didn’t adhere to the underlying semantic HTML model. 
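For instance, navigation built from table cells could validate perfectly well while telling assistive technology nothing about its purpose. A constructed illustration (not taken from any particular site):

<!-- validates, but carries no meaning -->
<table>
  <tr>
    <td><a href=""/"">Home</a></td>
    <td><a href=""/about"">About</a></td>
  </tr>
</table>

<!-- the same links as a semantic navigation list -->
<nav>
  <ul>
    <li><a href=""/"">Home</a></li>
    <li><a href=""/about"">About</a></li>
  </ul>
</nav>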
We later developed best practices, like using lists for navigation, and with HTML5 we started to wrap those lists in nav elements. Working with accessibility standards is similar. The Web Content Accessibility Guidelines (WCAG) 2.0 can inform your decision to make websites accessible and can be used to test that you met the success criteria. What it can’t do is measure how well you met them. W3C developed a long list of techniques that can be used to make your website accessible, but you might find yourself in a situation where you need to adapt those techniques to be the most usable solution for your particular problem. The checkbox below is implemented in an accessible way: The input element has an id and the label associated with the checkbox refers to the input using the for attribute. The hover area is shown with a yellow background and a black dotted border: Open video The label is clickable and the checkbox has an accessible description. Job done, right? Not really. Take a look at the space between the label and the checkbox: Open video The gutter is created using a right margin which pushes the label to the right. Users would certainly expect this space to be clickable as well. The simple solution is to wrap the label around the checkbox and the text: Open video You can also set the label to display:block; to further increase the clickable area: Open video And while we’re at it, users might expect the whole box to be clickable anyway. Let’s apply the CSS that was on a wrapping div element to the label directly: Open video The result enhances the usability of your form element tremendously for people with lower dexterity, using a voice mouse, or using touch interfaces. And we only used basic HTML and CSS techniques; no JavaScript was added and not one extra line of CSS. <form action=""#""> <label for=""uniquecheckboxid""> <input type=""checkbox"" name=""checkbox"" id=""uniquecheckboxid"" /> Checkbox 4 </label> </form> Button Example The button below looks like a typical edit button: a pencil icon on a real button element. But if you are using a screen reader or a braille keyboard, the button is just read as “button” without any indication of what this button is for. Open video A screen reader announcing a button. Contains audio. The code snippet shows why the button is not properly announced: <button> <span class=""icon icon-pencil""></span> </button> An icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users: Open video A screen reader announcing a button with a title. However, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn’t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts: The mobile GitHub view with and without external fonts. Even if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. 
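For reference, the title/aria-label approach mentioned above might be marked up like this (a sketch; the article demonstrates the result in its videos rather than in code):

<button aria-label=""Edit"">
  <span class=""icon icon-pencil""></span>
</button>

A screen reader now announces “Edit”, but there is still nothing visible if the icon font fails to load.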
One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn’t load. <button> <img src=""icon-pencil.svg"" alt=""Edit""> </button> Providing always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons. <button> <span class=""icon icon-pencil""></span> Edit </button> This also reads just fine in screen readers: Open video A screen reader announcing the revised button. Clever usability enhancements don’t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the “captioned videos” or “audio description” categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don’t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study. More information W3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out “How People with Disabilities Use the Web”, there are “Tips for Getting Started” for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way. Conclusion You can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don’t notice when those fundamental aspects of good website design and development are done right. But they’ll always know when they are implemented poorly. If you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn’t know they’d need it: Open video In this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks “What did she say?” the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.",2015,Eric Eggert,ericeggert,2015-12-17T00:00:00+00:00,https://24ways.org/2015/the-accessibility-mindset/,code 68,"Grid, Flexbox, Box Alignment: Our New System for Layout","Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. However, in 2016 we should be seeing it in a new improved form ready for our use in browsers. Grid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work. 
Another new layout method has emerged over the past few years in a more public and perhaps more painful way. Shipped prefixed in browsers, the flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported. Owing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks. If there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years’ time we’ll be easily able to date a website to circa 2015 by seeing <div class=""row""> or <div class=""col-md-3""> in the markup. In this article, I’m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them. To see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here. Relationship Items only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It’s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout. At a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox: See the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen. And in grid layout (requires a CSS grid-supporting browser): See the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen. Alignment Full-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you’d like to have under your tree this Christmas, then this is the box you’ll want to unwrap first. The box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods, grid layout included. Once implemented in browsers, this specification will give us true vertical centring of all the things. Our examples above achieved full-height columns because the default value of align-items is stretch. 
The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property. The examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. The other three images are aligned using different values of align-self. Take a look at an example in flexbox: See the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen. And also in grid layout (requires a CSS grid-supporting browser): See the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen. The alignment properties used with CSS grid layout. Fluid grids A cornerstone of responsive design is the concept of fluid grids. “[…]every aspect of the grid—and the elements laid upon it—can be expressed as a proportion relative to its container.” —Ethan Marcotte, “Fluid Grids” The method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element. h1 { margin-left: 14.575%; /* 144px / 988px = 0.14575 */ width: 70.85%; /* 700px / 988px = 0.7085 */ } In more recent years, we’ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple. The most basic of flexbox demos shows this fluidity in action. The justify-content property – another property defined in the box alignment module – can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion. In this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between. See the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen. For true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion. The flexbox flex property is a shorthand for: flex-grow flex-shrink flex-basis The flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value. flex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis. flex: 0 0 200px: a box that will be 200px and cannot grow or shrink. flex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller. In this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis. See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen. What I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out. If I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. 
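Expressed in CSS, that might look something like the following (the class names are illustrative, not taken from the demo):

.box { flex: 3 1 100px; } /* grows with three shares of the spare space */
.box-portrait { flex: 1 1 100px; } /* grows with only one share */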
You can see what happens in this demo: See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen. Once you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes. In this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr. grid-template-columns: 1fr 2fr 2fr 2fr; See the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen. The four-track grid. Separation of concerns My younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup. Browsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another. Here is a snippet from the Bootstrap grid examples – two columns with two nested columns: <div class=""row""> <div class=""col-md-8""> .col-md-8 <div class=""row""> <div class=""col-md-6""> .col-md-6 </div> <div class=""col-md-6""> .col-md-6 </div> </div> </div> <div class=""col-md-4""> .col-md-4 </div> </div> Not a million miles away from something I might have written in 1999. <table> <tr> <td class=""col-md-8""> .col-md-8 <table> <tr> <td class=""col-md-6""> .col-md-6 </td> <td class=""col-md-6""> .col-md-6 </td> </tr> </table> </td> <td class=""col-md-4""> .col-md-4 </td> </tr> </table> Grid and flexbox layouts do not need to be described in markup. The layout description happens entirely in the CSS, meaning that elements can be moved around from within the presentation layer. Flexbox gives us the ability to reverse the flow of elements, but also to set the order of elements with the order property. This is demonstrated here, where Widget the cat is in position 1 in the source, but I have used the order property to display him after the things that are currently unimpressive to him. See the Pen Flexbox: order by rachelandrew (@rachelandrew) on CodePen. Grid layout takes this a step further. Where flexbox lets us set the order of items in a single dimension, grid layout gives us the ability to position things in two dimensions: both rows and columns. Defined in the CSS, this positioning can be changed at any breakpoint without needing additional markup. Compare the source order with the display order in this example (requires a CSS grid-supporting browser): See the Pen Grid positioning in two dimensions by rachelandrew (@rachelandrew) on CodePen. Laying out our items in two dimensions using grid layout. 
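As a rough sketch of that two-dimensional placement in code (my own class names, not those used in the demo):

.wrapper {
  display: grid;
  grid-template-columns: 1fr 2fr 2fr 2fr;
}
.featured {
  grid-column: 2 / 4; /* spans the second and third column tracks */
  grid-row: 1; /* sits on the first row, wherever it appears in the source */
}

Because the placement lives entirely in the CSS, a media query can move .featured elsewhere at another breakpoint without touching the markup.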
As these demos show, a straightforward way to decide if you should use grid layout or flexbox is whether you want to position items in one dimension or two. If two, you want grid layout. A note on accessibility and reordering The issues arising from this powerful ability to change the way items are ordered visually from how they appear in the source have been the subject of much discussion. The current flexbox editor’s draft states “Authors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.” —CSS Flexible Box Layout Module Level 1, Editor’s Draft (3 December 2015) This is to ensure that non-visual user agents (a screen reader, for example) can rely on the document source order as being correct. Take care when reordering that you do so from the basis of a sound document that makes sense in terms of source order. Avoid using visual order to convey meaning. Automatic content placement with rules Having control over the order of items, or placing items on a predefined grid, is nice. However, we can often do that already with one method or another and we have frameworks and tools to help us. Tools such as Susy mean we can even get away from stuffing our markup full of grid classes. However, our new layout methods give us some interesting new possibilities. Something that is useful to be able to do when dealing with content coming out of a CMS or being pulled from some other source, is to define a bunch of rules and then say, “Display this content, using these rules.” As an example of this, I will leave you with a Christmas poem displayed in a document alongside Widget the cat and some of the decorations that are bringing him no Christmas cheer whatsoever. The poem is displayed first in the source as a set of paragraphs. I’ve added a class identifying each of the four paragraphs but they are displayed in the source as one text. Below that are all my images, some landscape and some portrait; I’ve added a class of landscape to the landscape ones. The mobile-first grid is a single column and I use line-based placement to explicitly position my poem paragraphs. The grid layout auto-placement rules then take over and place the images into the empty cells left in the grid. At wider screen widths, I declare a four-track grid, and position my poem around the grid, keeping it in a readable order. I also add rules to my landscape class, stating that these items should span two tracks. Once again the grid layout auto-placement rules position the rest of my images without my needing to position them. You will see that grid layout takes items out of source order to fill gaps in the grid. It does this because I have set the property grid-auto-flow to dense. The default is sparse meaning that grid will not attempt this backfilling behaviour. Take a look and play around with the full demo (requires a CSS grid layout-supporting browser): See the Pen Grid auto-flow with rules by rachelandrew (@rachelandrew) on CodePen. The final automatic placement example. My wish for 2016 I really hope that in 2016, we will see CSS grid layout finally emerge from behind browser flags, so that we can start to use these features in production — that we can start to move away from using the wrong tools for the job. However, I also hope that we’ll see developers fully embracing these tools as the new system that they are. 
I want to see people exploring the possibilities they give us, rather than trying to get them to behave like the grid systems of 2015. As you discover these new modules, treat them as the new paradigm that they are, get creative with them. And, as you find the edges of possibility with them, take that feedback to the CSS Working Group. Help improve the layout systems that will shape the look of the future web. Some further reading I maintain a site of grid layout examples and resources at Grid by Example. The three CSS specifications I’ve discussed can be found as editor’s drafts: CSS grid, flexbox, box alignment. I wrote about the last three years of my interest in CSS grid layout, which gives something of a history of the specification. More examples of box alignment and grid layout. My presentation at Fronteers earlier this year, in which I explain more about these concepts.",2015,Rachel Andrew,rachelandrew,2015-12-15T00:00:00+00:00,https://24ways.org/2015/grid-flexbox-box-alignment-our-new-system-for-layout/,code 70,Bringing Your Code to the Streets,"— or How to Be a Street VJ Our amazing world of web code is escaping out of the browser at an alarming rate and appearing in every aspect of the environment around us. Over the past few years we’ve already seen JavaScript used server-side, hardware coded with JavaScript, a rise of native style and desktop apps created with HTML, CSS and JavaScript, and even virtual reality (VR) is getting its fair share of front-end goodness. You can go ahead and play with JavaScript-powered hardware such as the Tessel or the Espruino to name a couple. Just check out the Tessel project page to see JavaScript in the world of coffee roasting or sleep tracking your pet. With the rise of the internet of things, JavaScript can be seen collecting information on flooding among other things. And if that’s not enough ‘outside the browser’ implementations, Node.js servers can even be found in aircraft! I previously mentioned VR and with three.js’s extra StereoEffect.js module it’s relatively simple to get browser 3D goodness to be Google Cardboard-ready, and thus set the stage for all things JavaScript and VR. It’s been pretty popular in the art world too, with interactive works such as Seb Lee-Delisle’s Lunar Trails installation, featuring the old arcade game Lunar Lander, which you can now play in your browser while others watch (it is the web after all). The Science Museum in London held Chrome Web Lab, an interactive exhibition featuring five experiments, showcasing the magic of the web. And it’s not even the connectivity of the web that’s being showcased; we can even take things offline and use web code for amazing things, such as fighting Ebola. One thing is for sure, JavaScript is awesome. Hell, if you believe those telly programs (as we all do), JavaScript can even take down the stock market, purely through the witchcraft of canvas! Go JavaScript! Now it’s our turn So I wanted to create a little project influenced by this theme, and as it’s Christmas, take it to the streets for a little bit of party fun! Something that could take code anywhere. Here’s how I made a portable visual projection pack, a piece of video mixing software and created some web-coded street art. Step one: The equipment You will need: One laptop: with HDMI output and a modern browser installed, such as Google Chrome. One battery-powered mini projector: I’ve used a Texas Instruments DLP; for its 120 lumens it was the best cost-to-lumens ratio I could find. 
One MIDI controller (optional): mine is an ICON iDJ as it suits mixing visuals. However, there is more affordable hardware on the market such as an Akai LPD8 or a Korg nanoPAD2. As you’ll see in the article, this is optional as it can be emulated within the software. A case to carry it all around in. Step two: The software The projected visuals, I imagined, could be anything you can create within a browser, whether that be simple HTML and CSS, images, videos, SVG or canvas. The only requirement I have is that they move or change with sound and that I can mix any one visual into another. You may remember a couple of years ago I created a demo on this very site, allowing audio-triggered visuals from the ambient sounds your device mic was picking up. That was a great starting point – I used that exact method to pick up the audio and thus the first requirement was complete. If you want to see some more examples of visuals I’ve put together for this, there’s a showcase on CodePen. The second requirement took a little more thought. I needed two screens, which could at any point show any of the visuals I had coded, but could be mixed from one into the other and back again. So let’s start with two divs, both absolutely positioned so they’re on top of each other, but at the start the second screen’s opacity is set to zero. Now all we need is a slider, which when moved from one side to the other slowly sets the second screen’s opacity to 1, thereby fading it in. See the Pen Mixing Screens (Software Version) by Rumyra (@Rumyra) on CodePen. Mixing Screens (CodePen) As you saw above, I have a MIDI controller and although the software method works great, I’d quite like to make use of this nifty piece of kit. That’s easily done with the Web MIDI API. All I need to do is call it, and when I move one of the sliders on the controller (I’ve allocated the big cross fader in the middle for this), pick up on the change of value and use that to control the opacity instead. var midi, data; // start talking to MIDI controller if (navigator.requestMIDIAccess) { navigator.requestMIDIAccess({ sysex: false }).then(onMIDISuccess, onMIDIFailure); } else { alert('No MIDI support in your browser.'); } // on success function onMIDISuccess(midiData) { // this is all our MIDI data midi = midiData; // the MIDIAccess object exposes available inputs as an iterable map var allInputs = midi.inputs.values(); // loop over all available inputs and listen for any MIDI input for (var input = allInputs.next(); input && !input.done; input = allInputs.next()) { // when a MIDI value is received call the onMIDIMessage function input.value.onmidimessage = onMIDIMessage; } } // on failure function onMIDIFailure() { console.warn('Could not access your MIDI devices.'); } function onMIDIMessage(message) { // data comes in the form [command/channel, note, velocity] data = message.data; // Opacity change for screen. The cross fader values are [176, 8, {0-127}] if ( (data[0] === 176) && (data[1] === 8) ) { // this value will change as the fader is moved var opacity = data[2]/127; screenTwo.style.opacity = opacity; } } The final code was slightly more complicated than this, as I decided to switch the two screens based on the frequencies of the sound that was playing, and use the cross fader to depict the frequency threshold value. This meant they flickered in and out of each other, rather than just faded. There’s a very rough-and-ready first version of the software on GitHub. Phew, great! Now we need to get all this to the streets! Step three: Portable kit Did you notice how I mentioned a case to carry it all around in? 
I wanted the case to be morphable, so I could use the equipment from it too, a sort of bag-to-usherette-tray-type affair. Well, I had an unused laptop bag… I strengthened it with some MDF, so when I opened the bag it would hold like a tray where the laptop and MIDI controller would sit. The projector was Velcroed to the external pocket of the bag, so when it was a tray it would project from underneath. I added two durable straps, one for my shoulders and one round my waist, both attached to the bag itself. There was a lot of cutting and trimming. As it was a laptop bag it was pretty thick to start and sewing was tricky. However, I only broke one sewing machine needle; I’ve been known to break more working with leather, so I figured I was doing well. By the way, you can actually buy usherette trays, but I just couldn’t resist hacking my own :) Step four: Take to the streets First, make sure everything is charged – everything – a lot! The laptop has to power both the MIDI controller and the projector, and although I have a mobile phone battery booster pack, that’ll only charge the projector should it run out. I estimated I could get a good hour of visual artistry before I needed to worry, though. I had a couple of ideas about time of day and location. Here in the UK at this time of year, it gets dark around half past four, so I could easily head out in a city around 5pm and it would be dark enough for the projections to be seen pretty well. I chose Bristol, around the waterfront, as there were some interesting locations to try it out in. The best was Millennium Square: busy but not crowded and plenty of surfaces to try projecting on to. My first time out with the portable audio/visual pack (PAVP as it will now be named) was brilliant. I played music and projected visuals, like a one-woman band of A/V! You might be thinking what the point of this was, besides, of course, it being a bit of fun. Well, this project got me to look at canvas and SVG more closely. The Web MIDI API was really interesting; MIDI as a data format has some great practical uses. I think without our side projects we may not have all these wonderful uses for our everyday code. Not only do they remind us coding can, and should, be fun, they also help us learn and grow as makers. My favourite part? When I was projecting into a water feature in Millennium Square. For those who are familiar, you’ll know it’s like a wall of water so it produced a superb effect. I drew quite a crowd and a kid came to stand next to me and all I could hear him say with enthusiasm was, ‘Oh wow! That’s so cool!’ Yes… yes, kid, it was cool. Making things with code is cool. Massive thanks to the lovely Drew McLellan for his incredibly well-directed photography, and also Simon Johnson who took a great hand in perfecting the kit while it was attached.",2015,Ruth John,ruthjohn,2015-12-06T00:00:00+00:00,https://24ways.org/2015/bringing-your-code-to-the-streets/,code 71,Upping Your Web Security Game,"When I started working in web security fifteen years ago, web development looked very different. The few non-static web applications were built using a waterfall process and shipped quarterly at best, making it possible to add security audits before every release; applications were deployed exclusively on in-house servers, allowing Info Sec to inspect their configuration and setup; and the few third-party components used came from a small set of well-known and trusted providers. 
And yet, even with these favourable conditions, security teams were quickly overwhelmed and called for developers to build security in. If the web security game was hard to win before, it’s doomed to fail now. In today’s web development, every other page is an application, accepting inputs and private data from users; software is built continuously, designed to eliminate manual gates, including security gates; infrastructure is code, with servers spawned with little effort and even less security scrutiny; and most of the code in a typical application is third-party code, pulled in through open source repositories with rarely a glance at who provided them. Security teams, when they exist at all, cannot solve this problem. They are vastly outnumbered by developers, and cannot keep up with the application’s pace of change. For us to have a shot at making the web secure, we must bring security into the core. We need to give it no less attention than that we give browser compatibility, mobile design or web page load times. More broadly, we should see security as an aspect of quality, expecting both ourselves and our peers to address it, and taking pride when we do it well. Where To Start? Embracing security isn’t something you do overnight. A good place to start is by reviewing things you’re already doing – and trying to make them more secure. Here are three concrete steps you can take to get going. HTTPS Threats begin when your system interacts with the outside world, which often means HTTP. As is, HTTP is painfully insecure, allowing attackers to easily steal and manipulate data going to or from the server. HTTPS adds a layer of crypto that ensures the parties know who they’re talking to, and that the information exchanged can be neither modified nor sniffed. HTTPS is relevant to any site. If your non-HTTPS site holds opinions, reading it may get your users in trouble with employers or governments. If your users believe what you say, attackers can modify your non-HTTPS to take advantage of and abuse that trust. If you want to use new browser technologies like HTTP2 and service workers, your site will need to be HTTPS. And if you want to be discovered on the web, using HTTPS can help your Google ranking. For more details on why I think you should make the switch to HTTPS, check out this post, these slides and this video. Using HTTPS is becoming easier and cheaper. Here are a few free tools that can help: Get free and easy HTTPS delivery from Cloudflare (be sure to use “Full SSL”!) Get a free and automation-friendly certificate from Let’s Encrypt (now in open beta). Test how well your HTTPS is set up using SSLTest. Other vendors and platforms are rapidly simplifying and reducing the cost of their HTTPS offering, as demand and importance grows. Two-Factor Authentication The most sensitive data is usually stored behind a login, and the authentication process is the primary gate in front of this data. Making this process secure has many aspects, including using HTTPS when accepting credentials, having a strong password policy, never storing the password, and more. All of these are important, but the best single step to boost your authentication security is to introduce two-factor authentication (2FA). Adding 2FA usually means prompting users for an additional one-time code when logging in, which they get via SMS or a mobile app (e.g. Google Authenticator). 
This code is short-lived and is extremely hard for a remote attacker to guess, thus vastly reducing the risk a leaked or easily guessed password presents. The typical algorithm for 2FA is based on an IETF standard called the time-based one-time password (TOTP) algorithm, and it isn’t that hard to implement. Joel Franusic wrote a great post on implementing 2FA; modules like speakeasy make it even easier; and you can swap SMS with Google Authenticator or your own app if you prefer. If you don’t want to build 2FA support yourself, you can purchase two/multi-factor authentication services from vendors such as DuoSecurity, Auth0, Clef, Hypr and others. If implementing 2FA still feels like too much work, you can also choose to offload your entire authentication process to an OAuth-based federated login. Many companies offer this today, including Facebook, Google, Twitter, GitHub and others. These bigger players tend to do authentication well and support 2FA, but you should consider what data you’re sharing with them in the process. Tracking Known Vulnerabilities Most of the code in a modern application was actually written by third parties, and pulled into your app as frameworks, modules and libraries. While using these components makes us much more productive, along with their functionality we also adopt their security flaws. To make things worse, some of these flaws are well-known vulnerabilities, making it easy for hackers to take advantage of them in an attack. This is a real problem and happens on pretty much every platform. Do you develop in Java? In 2014, over 6% of Java modules downloaded from Maven had a known severe security issue, the typical Java applications containing 24 flaws. Are you coding in Node.js? Roughly 14% of npm packages carry a known vulnerability, and over 60% of dev shops find vulnerabilities in their code. 30% of Docker Hub containers include a high priority known security hole, and 60% of the top 100,000 websites use client-side libraries with known security gaps. To find known security issues, take stock of your dependencies and match them against language-specific lists such as Snyk’s vulnerability DB for Node.js, rubysec for Ruby, victims-db for Python and OWASP’s Dependency Check for Java. Once found, you can fix most issues by upgrading the component in question, though that may be tricky for indirect dependencies. This process is still way too painful, which means most teams don’t do it. The Snyk team and I are hoping to change that by making it as easy as possible to find, fix and monitor known vulnerabilities in your dependencies. Snyk’s wizard will help you find and fix these issues through guided upgrades and patches, and adding Snyk’s test to your continuous integration and deployment (CI/CD) will help you stay secure as your code evolves. Note that newly disclosed vulnerabilities usually impact old code – the one you’re running in production. This means you have to stay alert when new vulnerabilities are disclosed, so you can fix them before attackers can exploit them. You can do so by subscribing to vulnerability lists like US-CERT, OSVDB and NVD. Snyk’s monitor will proactively let you know about new disclosures relevant to your code, but only for Node.js for now – you can register to get updated when we expand. Securing Yourself In addition to making your application secure, you should make the contributors to that application secure – including you. Earlier this year we’ve seen attackers target mobile app developers with a malicious Xcode. 
The real target, however, wasn’t these developers, but rather the users of the apps they create. That you create. Securing your own work environment is a key part of keeping your apps secure, and your users from being compromised. There’s no single step that will make you fully secure, but here are a few steps that can make a big impact: Use 2FA on all the services related to the application, notably source control (e.g. GitHub), cloud platform (e.g. AWS), CI/CD, CDN, DNS provider and domain registrar. If an attacker compromises any one of those, they could modify or replace your entire application. I’d recommend using 2FA on all your personal services too. Use a password manager (e.g. 1Password, LastPass) to ensure you have a separate and complex password for each service. Some of these services will get hacked, and passwords will leak. When that happens, don’t let the attackers access your other systems too. Secure your workstation. Be careful what you download, lock your screen when you walk away, change default passwords on services you install, run antivirus software, etc. Malware on your machine can translate to malware in your applications. Be very wary of phishing. Smart attackers use ‘spear phishing’ techniques to gain access to specific systems, and can trick even security savvy users. There are even phishing scams targeting users with 2FA. Be alert to phishy emails. Don’t install things through curl <somewhere-on-the-web> | sudo bash, especially if the URL is on GitHub, meaning someone else controls it. Don’t do it on your machines, and definitely don’t do it in your CI/CD systems. Seriously. Staying secure should be important to you personally, but it’s doubly important when you have privileged access to an application. Such access makes you a way to reach many more users, and therefore a more compelling target for bad actors. A Culture of Security Using HTTPS, enabling two-factor authentication and fixing known vulnerabilities are significant steps in building security at your core. As you implement them, remember that these are just a few steps in a longer journey. The end goal is to embrace security as an aspect of quality, and accept we all share the responsibility of keeping ourselves – and our users – safe.",2015,Guy Podjarny,guypodjarny,2015-12-11T00:00:00+00:00,https://24ways.org/2015/upping-your-web-security-game/,code 75,A Harder-Working Class,"Class is only becoming more important. Focusing on its original definition as an attribute for grouping (or classifying) as well as linking HTML to CSS, recent front-end development practices are emphasizing class as a vessel for structured, modularized style packages. These patterns reduce the need for repetitive declarations that can seriously bloat file sizes, and instil human-readable understanding of how the interface, layout, and aesthetics are constructed. In the next handful of paragraphs, we will look at how these emerging practices – such as object-oriented CSS and SMACSS – are pushing the relevance of class. We will also explore how HTML and CSS architecture can be further simplified, performance can be boosted, and CSS utility sharpened by combining class with the attribute selector. A primer on attribute selectors While attribute selectors were introduced in the CSS 2 spec, they are still considered rather exotic. These well-established and well-supported features give us vastly improved flexibility in targeting elements in CSS, and offer us opportunities for smarter markup. 
With an attribute selector, you can directly style an element based on any of its unique – or uniquely shared – attributes, without the need for an ID or extra classes. Unlike pseudo-classes, pseudo-elements, and other exciting features of CSS3, attribute selectors do not require any browser-specific syntax or prefix, and are even supported in Internet Explorer 7. For example, say we want to target all anchor tags on a page that link to our homepage. Where otherwise we might need to manually identify and add classes to the HTML for these specific links, we could simply write: [href=""index.html""] { } (Note the quotes: an unquoted attribute value must be a valid CSS identifier, and the full stop in index.html rules that out; simpler values like http below can go unquoted.) This selector reads: target every element that has an href attribute of “index.html”. Attribute selectors are more faceted, though, as they also give us some very simple regular expression-like logic that helps further narrow (or widen) a selector’s scope. In our previous example, what if we wanted to also give indicative styles to any anchor tag linking to an external site? With no way to know what the exact href value would be for every external link, we need to use an expression to match a common aspect of those links. In this case, we know that all external links need to start with “http”, so we can use that as a hook: [href^=http] { } The selector here reads: target every element that has an href attribute that begins with “http” (which will also include “https”). The ^= means “starts with”. There are a few other simple expressions that give us a lot of flexibility in targeting elements, and I have found a deep understanding of these and other selector types to be very useful. The class-attribute selector By matching classes with the attribute selector, CSS can be pushed to accomplish some exciting new feats. What I call a class-attribute selector combines the advantages of classes with attribute selectors by targeting the class attribute, rather than a specific class. Instead of selecting .urgent, you could select [class*=urgent]. The latter may seem like a more verbose way of accomplishing the former, but each would actually match two subtly different groups of elements. Eric Meyer first explored the possibility of using classes with attribute selectors over a decade ago. While his interest in this technique mostly explored the different facets of the syntax, I have found that using class-attribute selectors can have distinct advantages over either using an attribute selector or a straightforward class selector. First, let’s explore some of the subtleties of why we would target class before other attributes: Classes are ubiquitous. They have been supported since the HTML 4 spec was released in 1999. Newer attributes, such as the custom data attribute, have only recently begun to be adopted by browsers. Classes have multiple ways of being targeted. You can use the class selector or attribute selector (.classname or [class=classname]), allowing more flexible specificity than resorting to an ID or !important. Classes are already widely used, so adding more classes will usually require less markup than adding more attributes. Classes were designed to abstractly group and specify elements, making them the most appropriate attribute for styling using object-oriented methods (as we will learn in a moment). Also, as Meyer pointed out, we can use the class-attribute selector to be more strict about class declarations. Of these two elements: <h2 class=""very urgent""> <h2 class=""urgent""> …only the second h2 would be selected by [class=urgent], while .urgent would select both. 
The use of = matches any element with the exact class value of “urgent”. Eric explores these nuances further in his series on attribute selectors, but perhaps more dramatic is the added power that class-attribute selectors can bring to our CSS. More object-oriented, more scalable and modular Nicole Sullivan has been pushing abstracted, object-oriented thinking in CSS development for years now. She has shared stacks of knowledge on how behemoth sites have seen impressive reductions in maintenance overhead and CSS file sizes by leaning heavier on classes derived from common patterns. Jonathan Snook also speaks, writes and is genuinely passionate about improving our markup by using more stratified and modular class name conventions. With SMACSS, he shows this to be highly useful across sites – both complex and simple – that exhibit repeated design patterns. Sullivan and Snook both push the use of class for styling over other attributes, and many front-end developers are fast advocating such thinking as best practice. With class-attribute selectors, we can further abstract our CSS, pushing its scalability. In his chapter on modules, Snook gives the example of a .pod class that might represent a certain set of styles. A .pod style set might be used in varying contexts, leading to CSS that might normally look like this: .pod { } form .pod { } aside .pod { } According to Snook, we can make these styles more portable by targeting more verbose classes, rather than context: .pod { } .pod-form { } .pod-sidebar { } …resulting in the following HTML: <div class=""pod""> <div class=""pod pod-form""> <div class=""pod pod-sidebar""> This divorces the <div>’s styles from its context, making it applicable to any situation in which it is needed. The markup is clean and portable, and the classes are imbued with meaning as to what module they belong to. Using class-attribute selectors, we can simplify this further: [class*=pod] { } .pod-form { } .pod-sidebar { } The *= tells the browser to look for any element with a class attribute containing “pod”, so it matches “pod”, “pod-form”, “pod-sidebar”, etc. This means only one class is needed per element, resulting in simpler HTML: <div class=""pod""> <div class=""pod-form""> <div class=""pod-sidebar""> We could further abstract the concept of “form” and “sidebar” adjustments if we knew that each of those alterations would always need the same treatment. /* Modules */ [class*=pod] { } [class*=btn] { } /* Alterations */ [class*=-form] { } [class*=-sidebar] { } In this case, all elements with classes appended “-form” or “-sidebar” would be altered in the same manner, allowing the markup to stay simple: <form> <h2 class=""pod-form""> <a class=""btn-form"" href=""#""> <aside> <h2 class=""pod-sidebar""> <a class=""btn-sidebar"" href=""#""> 50+ shades of specificity Classes are just powerful enough to override element selectors and default styling, but still leave room to be trumped by IDs and !important styles. This makes them more suitable for object-oriented patterns and helps avoid messy specificity issues that can not only be a pain for developers to maintain, but can also affect a site’s performance. As Sullivan notes, “In almost every case, classes work well and have fewer unintended consequences than either IDs or element selectors”. Proper use of specificity and cascade is crucial in building straightforward, efficient CSS. One interesting aspect of attribute selectors is that they can be compounded for increasing levels of specificity.
Attribute selectors are assigned a specificity level of ten, the same as class selectors, but both class and attribute selectors can be chained together, giving them more and more specificity with each link. Some examples: .box { } /* Specificity of 10 */ .box.promo { } /* Specificity of 20 */ [class*=box] { } /* Specificity of 10 */ [class*=box][class*=promo] { } /* Specificity of 20 */ You can chain both types together, too: .box[class*=promo] { } /* Specificity of 20 */ I was amused to find, though, that you can chain the exact same class and attribute selectors for infinite levels of specificity: .box { } /* Specificity of 10 */ .box.box { } /* Specificity of 20 */ .box.box.box { } /* Specificity of 30 */ [class*=box] { } /* Specificity of 10 */ [class*=box][class*=box] { } /* Specificity of 20 */ [class*=box][class*=box][class*=box] { } /* Specificity of 30 */ .box[class*=box].box[class*=box] { } /* Specificity of 40 */ To override .box styles for promo, we wouldn’t need to add an ID, change the order of .promo and .box in the CSS, or resort to an !important style. Granted, any issue that might need this fine level of specificity tweaking could probably be better solved with clever cascades, but having options never hurts. Smarter CSS One of the most powerful aspects of the class-attribute selector is its ability to expand the simple logic found in CSS. When developing Gridset (an online tool for building grids and outputting them as CSS), I realized that with the right class name conventions, class-attribute selectors would allow the CSS to be smart enough to automatically adjust for column offsets without the need for extra classes. This imbued the CSS output with logic that other frameworks lacked, and made a developer’s job much easier. Say you need an element that spans column five (c5) to column six (c6) on your grid, and is preceded by an element spanning column one (c1) to column three (c3). The CSS can anticipate such a scenario: .c1-c3 + .c5-c6 { margin-left: 25%; /* …or the width of column four plus two gutter widths */ } …but to accommodate all of the margin offsets that could span that same gap, we would need to write a rather protracted list for just a six column grid: .c1-c3 + .c5-c6, .c1-c3 + .c5, .c2-c3 + .c5-c6, .c2-c3 + .c5, .c3 + .c5-c6, .c3 + .c5 { margin-left: 25%; } Now imagine how the verbosity compounds when we repeat this type of declaration for every possible margin in a grid. The more columns added to the grid, the longer this selector list would get, too, making the CSS harder for the developer to maintain and slowing the load time. Using class-attribute selectors, though, this can be much simpler: [class*=c3] + [class*=c5] { margin-left: 25%; } I’ve detailed how we extract as much logic as possible from as little CSS as needed on the Gridset blog. More flexible selectors In a recent project, I was working with Drupal-generated classes to change styles for certain special pages on a site. Without being able to change the code base, I was left trying to find some specific aspect of the generated HTML to target. I noticed that every special page was given a prefixed class, unique to the page, resulting in CSS like this: .specialpage-about, .specialpage-contact, .specialpage-info, … …and the list kept growing with each new special page. Such bloat would lead to problems down the line, and add development overhead to editorial decisions, which was a situation we were trying to avoid.
I was easily able to fix this, though, with a concise class-attribute selector: [class*=specialpage-] The CSS was now flexible enough to accommodate both the editorial needs of the client, and the development restrictions of the CMS. Selector performance As Snook tells us in his chapter on Selector Performance, selectors are read by the browser from right to left, matching every element that adheres to each rule (or part of the selector). The more specific we can make the right-most rules – and every other part of our selectors – the more performant our CSS will be. So this selector: .home-page .promo .main-header …would be more performant than: .home-page div header …because there are likely many more header and div elements on the page, but not so many elements with those specific classes. Now, the class-attribute selector could be more general than a class selector, but not by much. I ran numerous tests based on the work of Steve Souders (and a few others) to test a class-attribute selector against a normal class selector. Given that JavaScript will freeze during style rendering, I created a script that will add, then remove, a stylesheet on a page 5000 times, and measure only the time that elapses during the rendering freeze. The script runs four tests, essentially: one where a class selector and a class-attribute selector each match a single element, and one where they match multiple elements on the page. After running the test over 100 times and averaging the results, I have not seen a significant difference in rendering times. (As of this writing, the class-attribute selector has been 0.398% slower on average.) View the results here. Given the sheer number of bytes potentially saved by reducing selector lists, though, I am confident class-attribute selectors could shorten load times on larger sites and, at the very least, save precious development time. Conclusion With its flexibility and broad remit, class has at times been derided as too lenient, allowing CMSes and lazy developers to fill its values with presentational hacks or verbose gibberish. There have even been calls for an early retirement. Class continues, though, to be one of our most crucial tools. Front-end developers are rightfully eager to expand production abilities through innovations such as Sass or LESS, but this should not preclude us from honing the tools we already know as well. Every technique demonstrated in this article was achievable over a decade ago and most of the same thinking could be applied to IDs, rels, or any other attribute (though the reasons listed above give class an edge). The recent advent of methods such as object-oriented CSS and SMACSS shows there is still much room left to expand what simple HTML and CSS can accomplish. Progress may not always be found in the innovation of our tools, but through sharpening our understanding of them.",2012,Nathan Ford,nathanford,2012-12-15T00:00:00+00:00,https://24ways.org/2012/a-harder-working-class/,code 76,Giving CSS Animations and Transitions Their Place,"CSS animations and transitions may not sit squarely in the realm of the behaviour layer, but they’re stepping up into this area that used to be pure JavaScript territory. Heck, CSS might even perform better than its JavaScript equivalents in some cases. That’s pretty serious! With CSS’s new tricks blurring the lines between presentation and behaviour, it can start to feel bloated and messy in our CSS files. It’s an uncomfortable feeling.
Here are a pair of methods I’ve found to be pretty helpful in keeping the potential bloat and wire-crossing under control when CSS has its hands in both presentation and behaviour. Same eggs, more baskets Structuring your CSS to have separate files for layout, typography, grids, and so on is a fairly common approach these days. But which one do you put your transitions and animations in? The initial answer, as always, is “it depends”. Small effects here and there will likely sit just fine with your other styles. When you move into more involved effects that require multiple animations and some logic support from JavaScript, it’s probably time to choose none of the above, and create a separate CSS file just for them. Putting all your animations in one file is a huge help for code organization. Even if you opt for a name less literal than animations.css, you’ll know exactly where to go for anything CSS animation related. That saves time and effort when it comes to editing and maintenance. Keeping track of which animations are still currently used is easier when they’re all grouped together as well. And as an added bonus, you won’t have to look at all those horribly unattractive and repetitive prefixed @-keyframe rules unless you actually need to. An animations.css file might look something like the snippet below. It defines each animation’s keyframes and defines a class for each variation of that animation you’ll be using. Depending on the situation, you may also want to include transitions here in a similar way. (I’ve found defining transitions as their own class, or mixin, to be a huge help in past projects.) /* defining the animation */ @keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } @-webkit-keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } @-moz-keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } @-ms-keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } … /* class that assigns the animation */ .catsBackground { height: 100%; background: transparent url(../endlessKittens.png) 0 0 repeat-y; animation: catFall 1s linear infinite; -webkit-animation: catFall 1s linear infinite; -moz-animation: catFall 1s linear infinite; -ms-animation: catFall 1s linear infinite; } If we don’t need it, why load it? Having all those CSS animations and transitions in one file gives us the added flexibility to load them only when we want to. Loading a whole lot of things that will never be used might seem like a bit of a waste. While CSS has us impressed with its motion chops, it falls flat when it comes to the logic and fine-grained control. JavaScript, on the other hand, is pretty good at both those things. Chances are the content of your animations.css file isn’t acting alone. You’ll likely be adding and removing classes via JavaScript to manage your CSS animations at the very least. If your CSS animations are so entwined with JavaScript, why not let them hang out with the rest of the behaviour layer and only come out to play when JavaScript is supported? Dynamically linking your animations.css file like this means it will be completely ignored if JavaScript is off or not supported. No JavaScript? No additional behaviour, not even the parts handled by CSS.
<script> document.write('<link rel=""stylesheet"" type=""text/css"" href=""animations.css"">'); </script> This technique comes up in progressive enhancement work as well, but it can help here to keep your presentation and behaviour nicely separated when more than one language is involved. The aim in both cases is to avoid loading files we won’t be using. If you happen to be doing something a bit fancier – like 3-D transforms or critical animations that require more nuanced fallbacks – you might need something like Modernizr to step in to determine support more specifically. But the general idea is the same. Summing it all up Using a couple of simple techniques like these, we get to pick where to best draw the line between behaviour and presentation based on the situation at hand, not just on what language we’re using. The power of when to separate and how to reassemble the individual pieces can be even greater if you use preprocessors as part of your process. We’ve got a lot of options! The important part is to make forward-thinking choices to save your future self, and even your current self, unnecessary headaches.",2012,Val Head,valhead,2012-12-08T00:00:00+00:00,https://24ways.org/2012/giving-css-animations-and-transitions-their-place/,code 79,Responsive Images: What We Thought We Needed,"If you were to read a web designer’s Christmas wish list, it would likely include a solution for displaying images responsively. For those concerned about users downloading unnecessary image data, or serving images that look blurry on high resolution displays, finding a solution has become a frustrating quest. After much experimentation with complex and sometimes devilish hacks, consensus is forming around defining new standards that could solve this problem. Two approaches have emerged. The <picture> element markup pattern was proposed by Mat Marquis and is now being developed by the Responsive Images Community Group. By providing a means of declaring multiple sources, authors could use media queries to control which version of an image is displayed and under what conditions: <picture width=""500"" height=""500""> <source media=""(min-width: 45em)"" src=""large.jpg""> <source media=""(min-width: 18em)"" src=""med.jpg""> <source src=""small.jpg""> <img src=""small.jpg"" alt=""""> <p>Accessible text</p> </picture> A second proposal put forward by Apple, the srcset attribute, uses a more concise syntax intended for use with the <img> element, although it could be compatible with the <picture> element too. This would allow authors to provide a set of images, but with the decision on which to use left to the browser: <img src=""fallback.jpg"" alt="""" srcset=""small.jpg 640w 1x, small-hd.jpg 640w 2x, med.jpg 1x, med-hd.jpg 2x ""> Enter Scrooge Men’s courses will foreshadow certain ends, to which, if persevered in, they must lead. Ebenezer Scrooge Given the complexity of this issue, there’s a heated debate about which is the best option. Yet code belies a certain truth. Given that both feature verbose and opaque syntax, I’m not sure either should find its way into the browser – especially as alternative approaches have yet to be fully explored. So, as if to dampen the festive cheer, here are five reasons why I believe both proposals are largely redundant. 1. We need better formats, not more markup As we move away from designs defined with fixed pixel values, bitmap images look increasingly unsuitable.
While simple images and iconography can use scalable vector formats like SVG, for detailed photographic imagery, raster formats like GIF, PNG and JPEG remain the only suitable option. There is scope within current formats to account for varying bandwidth but this requires cooperation from browser vendors. Newer formats like JPEG2000 and WebP generate higher quality images with smaller file sizes, but aren’t widely supported. While it’s tempting to try to solve this issue by inventing new markup, the crux of it remains at the file level. Daan Jobsis’s experimentation with image compression strengthens this argument. He discovered that by increasing the dimensions of a JPEG image while simultaneously reducing its quality, smaller files could be produced, with the resulting image looking just as good on both standard and high-resolution displays. This may be a hack in lieu of a more permanent solution, but it’s applied in the right place. Easy to accomplish with existing tools and without compatibility issues, it has few downsides. Further experimentation in this area should be encouraged, with standardisation efforts more helpful if focused on developing new image formats or, preferably, extending existing ones. 2. Art direction doesn’t belong in markup A desired benefit of the <picture> markup pattern is to allow for greater art direction. For example, rather than scaling down images on smaller displays to the point that their content is hard to discern, we could present closer crops instead: This can be achieved with CSS of course, although with a download penalty for those parts of an image not shown. This point may be negligible, however, since in the context of adaptable layouts, these hidden areas may end up being revealed anyway. Art direction concerns design, not content. If we wish to maintain a separation of concerns, including presentation within our markup seems misguided. 3. The size of a display has little relation to the size of an image By using media queries, the <picture> element allows authors to choose which characteristics of the screen or viewport to query for different images to be displayed. In developing sites at Clearleft, we have noticed that the viewport is essentially arbitrary, with the size of an image’s containing element more important. For example, look at how this grid of images may adapt at different viewport widths: As we build more modular systems, components need to be adaptable in and of themselves. There is a case to be made for developing more contextual methods of querying, rather than those based on attributes of the display. 4. We haven’t lived with the problem long enough A key strength of the web is that the underlying platform can be continually iterated. This can also be problematic if snap judgements are made about what constitutes an improvement. The early history of the web is littered with such examples, be it the perceived need for blinking text or inline typographic styling. To build a platform for the future, additions to it should be carefully considered. And if we want more consistent support across browsers, burdening vendors with an ever-increasing list of features seems counterproductive. Only once the need for a new feature is sufficiently proven should we look to standardise it. Before we could declare hover effects, rounded corners and typographic styling in CSS, we used JavaScript as a polyfill. Sure, doing so was painful, but use cases were fully explored, and the CSS specification better reflected the needs of authors. 5.
Images and the web aesthetic The srcset proposal has emerged from a company that markets its phones as being able to browse the real – yet squashed down, tapped and zoomable – web. Perhaps Apple should make its own website responsive before suggesting how the rest of us should do so. Conversely, while the <picture> proposal has the backing of a few respected developers and designers, it was born out of the work Mat Marquis and Filament Group did for the Boston Globe. As the first large-scale responsive design, this was a landmark project that ignited the responsive web design movement and proved its worth. But it was the first. Its design shares a vernacular with that of contemporary newspaper websites, with a columnar, image-laden and densely packed layout. Compared to more recent examples – Quartz, The Next Web and the New York Times Skimmer – it feels out of step with the future direction of news sites. In seeking out a truer aesthetic for the web in which software interfaces have greater influence, we might discover that the need for responsive images isn’t as great as originally thought. Building for the future With responsive design, we’ve accepted the idea that a fully fluid layout, rather than a set of fixed layouts, is best suited to the web’s unpredictable nature. Current responsive image proposals are antithetical to this approach. We need solutions that lack complexity, are device-agnostic and work within existing workflows. Any proposal that requires different versions of the same image to be created is likely to have to acquiesce under the pressure of reality. While it’s easy to get distracted by the size and quality of an image, and how we might choose to serve it, often the simplest solution is not to include it at all. After years of gluttonous design practice, in which fast connections and expansive display sizes were an accepted norm, we have got used to filling pages with needless images and countless items of page furniture. To design more adaptable experiences, the presence of every element needs to be questioned, for its existence requires additional data to be downloaded or further complexity within a design system. Conditional loading techniques mean that the inclusion of images is no longer a binary choice, but can instead appear in a progressively enhanced manner. So here is my proposal. Instead of spending the next year worrying about responsive images, let’s embrace the constraints of the medium, and seek out new solutions that can work within them.",2012,Paul Lloyd,paulrobertlloyd,2012-12-11T00:00:00+00:00,https://24ways.org/2012/responsive-images-what-we-thought-we-needed/,code 80,HTML5 Video Bumpers,"Video is a bigger part of the web experience than ever before. With native browser support for HTML5 video elements freeing us from the tyranny of plugins, and the availability of faster internet connections to the workplace, home and mobile networks, it’s now pretty straightforward to publish video in a way that can be consumed in all sorts of ways on all sorts of different web devices. I recently worked on a project where the client had shot some dedicated video shorts to publish on their site. They also had some five-second motion graphics produced to top and tail the videos with context and branding. This pretty common requirement is a great idea on the web, where a user might land at your video having followed a link and be viewing a page without much context.
Known as bumpers, these short introduction clips help brand a video and make it look a lot more professional. Adding bumpers to a video The simplest way to add bumpers to a video would be to edit them on to the start and end of the video file itself. Cooking the bumpers into the video file is easy, but should you ever want to update them it can become a real headache. If the branding needs updating, for example, you’d need to re-edit and re-encode all your videos. Not a fun task. What if the bumpers could be added dynamically? That would enable you to use the same bumper for multiple videos (decreasing download time for users who might watch more than one) and to update the bumpers whenever you wanted. You could change them seasonally, update them for special promotions, run different advertising slots, perform multivariate testing, or even target different bumpers to different users. The trade-off, of course, is that if you dynamically add your bumpers, there’s a chance that a user in a given circumstance might not see the bumper. For example, if the main video feature was uploaded to YouTube, you’d have no way to control the playback. As always, you need to weigh up the pros and cons and make your choice. HTML5 bumpers If you wanted to dynamically add bumpers to your HTML5 video, how would you go about it? That was the question I found myself needing to answer for this particular client project. My initial thought was to treat it just like an image slideshow. If I were building a slideshow that moved between images, I’d use CSS absolute positioning with z-index to stack the images up on top of each other in a pile, with the first image on top. To transition to the second image, I’d use JavaScript to fade the top image out, revealing the second image beneath it. Now that video is just a native object in the DOM, just like an image, why not do the same? Stack the videos up with the opening bumper on top, listen for the video’s onended event, and fade it out to reveal the main feature behind. Good idea, right? Wrong Remember that this is the web. It’s never going to be that easy. The problem here is that many non-desktop devices use native, dedicated video players. Think about watching a video on a mobile phone – when you play the video, the phone often goes full-screen in its native player, leaving the web page behind. There’s no opportunity to fade or switch z-index, as the video isn’t being viewed in the page. Your page is left powerless. Powerless! So what can we do? What can we control? Those of us with particularly long memories might recall a time before CSS, when we’d have to use JavaScript to perform image rollovers. As CSS background images weren’t a practical reality, we would use lots of <img> elements, and perform a rollover by modifying the src attribute of the image. Turns out, this old trick of modifying the source can help us out with video, too. In most cases, modifying the src attribute of a <video> element, or perhaps more likely the src attribute of a source element, will swap from one video to another. Swappin’ it Let’s take a deliberately simple example of a super-basic video tag: <video src=""mycat.webm"" controls>no fallback coz i is lame, innit.</video> We could very simply write a script to find all video tags and give them a new src to show our bumper. 
<script> var videos, i, l; videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].setAttribute('src', 'bumper-in.webm'); } </script> View the example in a browser with WebM support. You’ll see that the video is swapped out for the opening bumper. Great! Beefing it up Of course, we can’t just publish video in one format. In practical use, you need a <video> element with multiple <source> elements containing your different source formats. <video controls> <source src=""mycat.mp4"" type=""video/mp4"" /> <source src=""mycat.webm"" type=""video/webm"" /> <source src=""mycat.ogv"" type=""video/ogg"" /> </video> This time, our script needs to loop through the sources, not the videos. We’ll use a regular expression replacement to swap out the file name while maintaining the correct file extension. <script> var sources, i, l, orig; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); // reload the video sources[i].parentNode.load(); } </script> The difference this time is that when changing the src of a <source> we need to call the .load() method on the video to get it to acknowledge the change. See the code in action, this time in a wider range of browsers. But, my video! I guess we should get the original video playing again. Keeping the same markup, we need to modify the script to do two things: Store the original src in a data- attribute so we can access it later Add an event listener so we can detect the end of the bumper playing, and load the original video back in As we need to loop through the videos this time to add the event listener, I’ve moved the .load() call into that loop. It’s a bit more efficient to call it only once after modifying all the video’s sources. <script> var videos, sources, i, l, orig; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('data-orig', orig); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); } videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].load(); videos[i].addEventListener('ended', function(){ sources = this.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('data-orig'); if (orig) { sources[i].setAttribute('src', orig); } sources[i].setAttribute('data-orig',''); } this.load(); this.play(); }); } </script> Again, view the example to see the bumper play, followed by our spectacular main feature. (That’s my cat, Widget. His interests include sleeping and internet marketing.) Tidying things up The final thing to do is add our closing bumper after the main video has played. This involves the following changes: We need to keep track of whether the src has been changed, so we only play the video if it’s changed. I’ve added the modified variable to track this, and it stops us getting into a situation where the video just loops forever. Add an else to the event listener, for when the orig is false (so the main feature has been playing) to load in the end bumper. We also check that we’re not already playing the end bumper. Because looping.
<script> var videos, sources, i, l, orig, current, modified; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('data-orig', orig); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); } videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].load(); modified = false; videos[i].addEventListener('ended', function(){ sources = this.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('data-orig'); if (orig) { sources[i].setAttribute('src', orig); modified = true; }else{ current = sources[i].getAttribute('src'); if (current.indexOf('bumper-out')==-1) { sources[i].setAttribute('src', current.replace(/([\w]+)\.(\w+)/, 'bumper-out.$2')); modified = true; }else{ this.pause(); modified = false; } } sources[i].setAttribute('data-orig',''); } if (modified) { this.load(); this.play(); } }); } </script> Yo ho ho, that’s a lot of JavaScript. See it in action – you should get a bumper, the cat video, and an end bumper. Of course, this code works fine for demonstrating the principle, but it’s very procedural. Nothing wrong with that, but to do something similar in production, you’d probably want to make the code more modular to ease maintainability. Besides, you may want to use a framework, rather than basic JavaScript. The end credits One really important principle here is that of progressive enhancement. If the browser doesn’t support JavaScript, the user won’t see your bumper, but they will get the main video. If the browser supports JavaScript but doesn’t allow you to modify the src (as was the case with older versions of iOS), the user won’t see your bumper, but they will get the main video. If a search engine or social media bot grabs your page and looks for content, they won’t see your bumper, but they will get the main video – which is absolutely what you want. This means that if the bumper is absolutely crucial, you may still need to cook it into the video. However, for many applications, running it dynamically can work quite well. As always, it comes down to three things: Measure your audience: know how people access your site Test the solution: make sure it works for your audience Plan for failure: it’s the web and that’s how things work ‘round these parts But most of all play around with it, have fun and build something awesome.",2012,Drew McLellan,drewmclellan,2012-12-01T00:00:00+00:00,https://24ways.org/2012/html5-video-bumpers/,code 83,Cut Copy Paste,"Long before I got into this design thing, I was heavily into making my own music inspired by the likes of Coldcut and Steinski. I would scour local second-hand record shops in search of obscure beats, loops and bits of dialogue in the hope of finding that killer sample I could then splice together with other things to make a huge hit that everyone would love. While it did eventually lead to a record contract and getting to release a few 12″ singles, ultimately I knew I’d have to look for something else to pay the bills. I may not make my own records any more, but the approach I took back then – finding (even stealing) things, cutting and pasting them into interesting combinations – is still at the centre of how I work, only these days it’s pretty much bits of code rather than bits of vinyl.
Over the years I’ve stored these little bits of code (some I’ve found, some I’ve created myself) in Evernote, ready to be dialled up whenever I need them. So when Drew got in touch and asked if I’d like to do something for this year’s 24 ways I thought it might be kind of cool to share with you a few of these snippets that I find really useful. Think of these as a kind of coding mix tape; but remember – don’t just copy as is: play around, combine and remix them into other wonderful things. Some of this stuff is dirty; some of it will make hardcore programmers feel ill. For those people, remember this – while you were complaining about the syntax, I made something. Create unique colours Let’s start right away with something I stole. Well, actually it was given away at the time by Matt Biddulph who was then at Dopplr before Nokia destroyed it. Imagine you have thousands of words and you want to assign each one a unique colour. Well, Matt came up with a crazily simple but effective way to do that using an MD5 hash. Just encode said word using an MD5 hash, then take the first six characters of the string you get back to create a hexadecimal colour representation. I can’t guarantee that it will be a harmonious colour palette, but it’s still really useful. The thing I love the most about this technique is the left-field thinking of using an encryption system to create colours! Here’s an example using JavaScript: // requires the MD5 library available at http://pajhome.org.uk/crypt/md5 function MD5Hex(str){ result = MD5.hex(str).substring(0, 6); return result; } Make something breathe using a sine wave I never paid attention in school, especially during double maths. As a matter of fact, the only time I received corporal punishment – several strokes of the ruler – was in maths class. Anyway, if they had shown me then how beautiful mathematics actually is, I might have paid more attention. Here’s a little example of how a sine wave can be used to make something appear to breathe. I recently used this on an Arduino project where an LED ring surrounding a button would gently breathe. Because of that it felt much more inviting. I love mathematics. for(int i = 0; i<360; i++){ float rad = DEG_TO_RAD * i; int sinOut = constrain((sin(rad) * 128) + 128, 0, 255); analogWrite(LED, sinOut); delay(10); } Snap position to grid This is so elegant I love it, and it was shown to me by Gary Burgess, or Boom Boom as myself and others like to call him. It snaps a position, in this case the X-position, to a grid. Just define your grid size (say, twenty pixels) and you’re good. snappedXpos = floor( xPos / gridSize) * gridSize; Calculate the distance between two objects For me, interaction design is about the relationship between two objects: you and another object; you and another person; or simply one object to another. How close these two things are to each other can be a handy thing to know, allowing you to react to that information within your design. Here’s how to calculate the distance between two objects in a 2-D plane: deltaX = round(p2.x-p1.x); deltaY = round(p2.y-p1.y); diff = round(sqrt((deltaX*deltaX)+(deltaY*deltaY))); Find the X- and Y-position between two objects What if you have two objects and you want to place something in-between them? A little bit of interruption and disruption can be a good thing. 
This small piece of code will allow you to place an object in-between two other objects: // set the position: 0.5 = half-way float position = 0.5; float x = x1 + (x2 - x1) * position; float y = y1 + (y2 - y1) * position; Distribute objects equally around a circle More fun with maths, this time adding cosine to our friend sine. Let’s say you want to create a circular navigation of arbitrary elements (yeah, Jakob, you heard), or you want to place images around a circle. Well, this piece of code will do just that. You can adjust the size of the circle by changing the distance variable and alter the number of objects with the numberOfObjects variable. The example below is for use in Processing. // Example for Processing available for free download at processing.org void setup() { size(800,800); int numberOfObjects = 12; int distance = 100; float inc = (TWO_PI)/numberOfObjects; float x,y; float a = 0; for (int i=0; i < numberOfObjects; i++) { x = (width/2) + sin(a)*distance; y = (height/2) + cos(a)*distance; ellipse(x,y,10,10); a += inc; } } Use modulus to make a grid The modulus operator, represented by %, returns the remainder of a division. Fallen into a coma yet? Hold on a minute – this seemingly simple function is very powerful in lots of ways. At a simple level, you can use it to determine if a number is odd or even, great for creating alternate row colours in a table for instance: boolean checkForEven(int numberToCheck) { if (numberToCheck % 2 == 0) { return true; } else { return false; } } That’s all well and good, but here’s a use of modulus that might very well blow your mind. Construct a grid with only a few lines of code. Again the example is in Processing but can easily be ported to any other language. void setup() { size(600,600); int numItems = 120; int numOfColumns = 12; int xSpacing = 40; int ySpacing = 40; int totalWidth = xSpacing*numOfColumns; for (int i=0; i < numItems; i++) { ellipse(floor((i*xSpacing)%totalWidth),floor((i*xSpacing)/totalWidth)*ySpacing,10,10); } } Not all the bits of code I keep around are for actual graphical output. I also have things that are very utilitarian, but which I still consider part of the design process. Here are a couple of things that I’ve found really handy lately in my design workflow. They may be a little specific, but I hope they demonstrate that it’s not about working harder, it’s about working smarter. Merge CSV files into one file Recently, I’ve had to work with huge – about 1GB – CSV text files that I then needed to combine into one master CSV file so I could then process the data. Opening up each text file and then copying and pasting just seemed really dumb, not to mention slow, so I thought there must be a better way. After some Googling I found this command line script that would combine .txt files into one file and add a new line after each: awk 1 *.txt > finalfile.txt But that wasn’t what I was ideally after. I wanted to merge the CSV files, keeping the first row of the first file (the column headings) and then ignore the first row of subsequent files. Sure enough I found the answer after some Googling and it worked like a charm. Apologies to the original author but I can’t remember where I found it, but you, sir or madam, are awesome.
Save this as a shell script: FIRST= for FILE in *.csv do exec 5<""$FILE"" # Open file read LINE <&5 # Read first line [ -z ""$FIRST"" ] && echo ""$LINE"" # Print it only from first file FIRST=""no"" cat <&5 # Print the rest directly to standard output exec 5<&- # Close file # Redirect stdout for this section into file.out done > file.out Create a symbolic link to another file or folder Oftentimes, I’ll find myself hunting through a load of directories to load a file to be processed, like a CSV file. Use a symbolic link (in the Terminal) to place a link on your desktop or wherever is most convenient and it’ll save you loads of time. Especially great if you’re going through a Java file dialogue box in Processing or something that doesn’t allow the normal Mac dialog box or aliases. cd /DirectoryYouWantShortcutToLiveIn ln -s /Directory/You/Want/ShortcutTo/ TheShortcut You can do it, in the mix I hope you’ve found some of the above useful and that they’ve inspired a few ideas here and there. Feel free to tell me better ways of doing things or offer up any other handy pieces of code. Most of all though, collect, remix and combine the things you discover to make lovely new things.",2012,Brendan Dawes,brendandawes,2012-12-17T00:00:00+00:00,https://24ways.org/2012/cut-copy-paste/,code 86,Flashless Animation,"Animation in a Flashless world When I splashed down in web design four years ago, the first thing I wanted to do was animate a cartoon in the browser. I’d been drawing comics for years, and I’ve wanted to see them come to life for nearly as long. Flash animation was still riding high, but I didn’t want to learn Flash. I wanted to learn JavaScript! Sadly, animating with JavaScript was limiting and resource-intensive. My initial foray into an infinitely looping background did more to burn a hole in my CPU than amaze my friends (although it still looks pretty cool). And there was still no simple way to incorporate audio. The browser technology just wasn’t there. Things are different now. CSS3 transitions and animations can do most of the heavy lifting and HTML5 audio can serve up the music and audio clips. You can do a lot without leaning on JavaScript at all, and when you lean on JavaScript, you can do so much more! In this project, I’m going to show you how to animate a simple walk cycle with looping audio. I hope this will inspire you to do something really cool and impress your friends. I’d love to see what you come up with, so please send your creations my way at rachelnabors.com! Note: Because every browser wants to use its own prefixes with CSS3 animations, and I have neither the time nor the space to write all of them out, I will use the W3C standard syntaxes; that is, going prefix-less. You can implement them out of the box with something like Prefixfree, or you can add prefixes on your own. If you take the latter route, I recommend using Sass and Compass so you can focus on your animations, not copying and pasting. The walk cycle Walk cycles are the “Hello world” of animation. One of the first projects of animation students is to spend hours drawing dozens of frames to complete a simple loopable animation of a character walking. Most animators don’t have to draw every frame themselves, though. They draw a few key frames and send those on to production animators to work on the between frames (or tween frames). This is meticulous, grueling work requiring an eye for detail and natural movement. This is also why so much production animation gets shipped overseas where labor is cheaper. 
Luckily, we don’t have to worry about our frame count because we can set our own frames-per-second rate on the fly in CSS3. Since we’re trying to impress friends, not animation directors, the inconsistency shouldn’t be a problem. (Unless your friend is an animation director.) This is a simple walk cycle I made of my comic character Tuna for my CSS animation talk at CSS Dev Conference this year: The magic lies here: animation: walk-cycle 1s steps(12) infinite; Breaking those properties down: animation: <name> <duration> <timing-function> <iteration-count>; walk-cycle is a simple @keyframes block that moves the background sprite on .tuna around: @keyframes walk-cycle { 0% {background-position: 0 0; } 100% {background-position: 0 -2391px;} } The background sprite has exactly twelve images of Tuna that complete a full walk cycle. We’re setting it to cycle through the entire sprite every second, infinitely. So why isn’t the background image scrolling down the .tuna container? It’s all down to the timing function steps(). Using steps() lets us tell the CSS to make jumps instead of the smooth transitions you’d get from something like linear. Chris Mills at dev.opera wrote in his excellent intro to CSS3 animation: Instead of giving a smooth animation throughout, [steps()] causes the animation to jump between a set number of steps placed equally along the duration. For example, steps(10) would make the animation jump along in ten equal steps. There’s also an optional second parameter that takes a value of start or end. steps(10, start) would specify that the change in property value should happen at the start of each step, while steps(10, end) means the change would come at the end. (Seriously, go read his full article. I’m not going to touch on half the stuff he does because I cannot improve on the basics any more than he already has.) The background A cat walking in a void is hardly an impressive animation and certainly your buddy one cube over could do it if he chopped up some of those cat GIFs he keeps using in group chat. So let’s add a parallax background! Yes, yes, all web designers signed a peace treaty to not abuse parallax anymore, but this is its true calling—treaty be damned. And to think we used to need JavaScript to do this! It’s still pretty CPU intensive but much less complicated. We start by splitting up the page into different layers, .foreground, .midground, and .background. We put .tuna in the .midground. .background has multiple background images, all set to repeat horizontally: background-image: url(background_mountain5.png), url(background_mountain4.png), url(background_mountain3.png), url(background_mountain2.png), url(background_mountain1.png); background-repeat: repeat-x; With parallax, things in the foreground move faster than those in the background. Next time you’re driving, notice how the things closer to you move out of your field of vision faster than something in the distance, like a mountain or a large building. We can imitate that here by making the background images on top (in the foreground, closer to us) wider than those on the bottom of the stack (in the distance). The different lengths let us use one animation to move all the background images at different rates in the same interval of time: animation: parallax_bg linear 40s infinite; The shorter images have less distance to cover in the same amount of time as the longer images, so they move slower.
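To put rough numbers on the speed difference, using the offsets from the keyframes below: the widest layer starts 2,400 pixels out and the narrowest only 1,200 pixels out, and both cover their full distance over the same forty-second loop. That works out to 2400 ÷ 40 = 60px per second for the widest layer versus 1200 ÷ 40 = 30px per second for the narrowest – half the speed, which is exactly the depth cue parallax relies on.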
Let’s have a look at the background’s animation: @keyframes parallax_bg { 0% { background-position: -2400px 100%, -2000px 100%, -1800px 100%, -1600px 100%, -1200px 100%; } 100% { background-position: 0 100%, 0 100%, 0 100%, 0 100%, 0 100%; } } At 0%, all the background images are positioned at the negative value of their own widths. Then they start moving toward background-position: 0 100%. If we wanted to move them in the reverse direction, we’d remove the negative values at 0% (so they would start at 2400px 100%, 2000px 100%, etc.). Try changing the values in the codepen above or changing background-repeat to none to see how the images play together. .foreground and .midground operate on the same principles, only they use single background images. The music After finishing the first draft of my original walk cycle, I made a GIF with it and posted it on YTMND with some music from the movie Paprika, specifically the track “The Girl in Byakkoya.” After showing it to some colleagues in my community, it became clear that this was a winning combination sure to drive away dresscode blues. So let’s use HTML5 to get a clip of that music looping in there! Warning, there is sound. Please adjust your volume or apply headphones as needed. We’re using HTML5 audio’s loop and autoplay abilities to automatically play and loop a sound file on page load: <audio loop autoplay> <source src=""http://music.com/clip.mp3"" /> </audio> Unfortunately, you may notice there is a small pause between loops. HTML5 audio, thou art half-baked still. Let’s hope one day the Web Audio API will be able to help us out, but until things improve, we’ll have to hack our way around these shortcomings. Turns out there’s a handy little script called seamlessLoop.js which we can use to patch this. Mind you, if we were really getting crazy with the Cheese Whiz, we’d want to get out big guns like sound.js. But that’d be overkill for a mere loop (and explaining the Web Audio API might bore, rather than impress your friends)! Installing seamlessLoop.js will get rid of the pause, and now our walk cycle is complete. (I’ve done some very rough sniffing to see if the browser can play MP3 files. If not, we fall back to using .ogg formatted clips (Opera and Firefox users, you’re welcome).) Really impress your friends by adding a run cycle So we have music, we have a walk cycle, we have parallax. It will be a snap to bring them all together and have a simple, endless animation. But let’s go one step further and knock the socks off our viewers by adding a run cycle. The run cycle Tacking a run cycle on to our walk cycle will require a third animation sequence: a transitional animation of Tuna switching from walking to running. I have added all these to the sprite: Let’s work on getting that transition down. We’re going to use multiple animations on the same .tuna div, but we’re going to kick them off at different intervals using animation-delay—no JavaScript required! Isn’t that magical? It requires a wee bit of math (not much, it doesn’t hurt) to line them up. We want to: Loop the walk animation twice Play the transitional cycle once (it has a finite beginning and end perfectly drawn to pick up between the last frame of the walk cycle and the first frame of the run cycle—no looping this baby) RUN FOREVER. 
Using the pattern animation: <name> <duration> <timing-function> <delay> <iteration-count>, here’s what that looks like: animation: walk-cycle 1s steps(12) 2, walk-to-run .75s steps(12) 2s 1, run-cycle .75s steps(13) 2.75s infinite; I played with the times to make the movement more realistic. You may notice that the running animation looks smoother than the walking animation. That’s because it has 13 keyframes running over .75 second instead of 12 running in one second. Remember, professional animation studios use super-high frame counts. This little animation isn’t even up to PBS’s standards! The music: extended play with HTML5 audio sprites My favorite part of “The Girl in Byakkoya” is when the calm opening builds and transitions into a bouncy motif. I want to start with Tuna walking during the opening, and then loop the running and bounciness together for infinity. The intro lasts for 24 seconds, so we set our 1 second walk cycle to run for 24 repetitions: walk-cycle 1s steps(12) 24 We delay walk-to-run by 24 seconds so it runs for .75 seconds before… We play run-cycle at 24.75 seconds and loop it infinitely For the music, we need to think of it as two parts: the intro and the bouncy loop. We can do this quite nicely with audio sprites: using one HTML5 audio element and using JavaScript to change the play head location, like skipping tracks with a CD player. Although this technique will result in a small gap in music shifts, I think it’s worth using here to give you some ideas. // Get the audio element var byakkoya = document.querySelector('audio'); // create function to play and loop audio function song(a){ //start playing at 0 a.currentTime = 0; a.play(); //when we hit 64 seconds... setTimeout(function(){ // skip back to 24.55 seconds and keep playing... a.currentTime = 24.55; // then loop back when we hit 64 again, or every 39.45 seconds. setInterval(function(){ a.currentTime = 24.55; },39450); },64000); } The load screen I’ve put it off as long as I can, but now that the music and the CSS are both running on their own separate clocks, it’s imperative that both images and music be fully downloaded and ready to run when we kick this thing off. So we need a load screen (also, it’s nice to give people a heads-up that you’re about to blast them with music, no matter how wonderful that music may be). Since the two timers are so closely linked, we’d best not run the animations until we run the music: * { animation-play-state: paused; } animation-play-state can be set to paused or running, and it’s the most useful thing you will learn today. First we use an event listener to see when the browser thinks we can play through from the beginning to end of the music without pause for buffering: byakkoya.addEventListener(""canplaythrough"", function () { }); (More on HTML5 audio’s media events at HTML5doctor.com) Inside our event listener, I use a bit of jQuery to add a class of .playable to the body when we’re ready to enable the play button: $(""body"").addClass(""playable""); $(""#play-me"").html(""Play me."").click(function(){ song(byakkoya); $(""body"").addClass(""playing""); }); That .playing class is special because it turns on the animations at the same time we start playing the song: .playing * { animation-play-state: running; } The background We’re almost done here! When we add the background, it needs to speed up at the same time that Tuna starts running. The music picks up speed around 24.75 seconds in, and so we’re going to use animation-delay on those backgrounds, too.
This will require some math. If you try to simply shorten the animation’s duration at the 24.75s mark, the backgrounds will, mid-scroll, jump back to their initial background positions to start the new animation! Argh! So let’s make a new @keyframe and calculate where the background position would be just before we speed up the animation. Here’s the formula:
new 0% value = delay ÷ old duration × length of image
new 100% value = new 0% value + length of image
Here’s the formula put to work on a smaller scale: Voilà! The finished animation! I’ve always wanted to bring my illustrations to life. Then I woke up one morning and realized that I had all the tools to do so in my browser and in my head. Now I have fallen in love with Flashless animation. I’m sure there will be detractors who say HTML wasn’t meant for this and it’s a gross abuse of the DOM! But I say that these explorations help us expand what we expect from devices and software and challenge us in good ways as artists and programmers. The browser might not be the most appropriate place for animation, but is certainly a fun place to start. There is so much you can do with the spec implemented today, and so much of the territory is still unexplored. I have not yet begun to show you everything. In eight months I expect this demo will represent the norm, not the bleeding edge. I look forward to seeing the wonderful things you create. (Also, someone, please, do something about that gappy HTML5 audio looping. It’s a crying shame!)",2012,Rachel Nabors,rachelnabors,2012-12-06T00:00:00+00:00,https://24ways.org/2012/flashless-animation/,code 89,"Direction, Distance and Destinations","With all these new smartphones in the hands of lost and confused owners, we need a better way to represent distances and directions to destinations. The immediate examples that jump to mind are augmented reality apps which let you see another world through your phone’s camera. While this is interesting, there is a simpler way: letting people know how far away they are and if they are getting warmer or colder. In the app world, you can easily tap into the phone’s array of sensors such as the GPS and compass, but what people rarely know is that you can do the same with HTML. The native versus web app debate will never subside, but at least we can show you how to replicate some of the functionality progressively in HTML and JavaScript. In this tutorial, we’ll walk through how to create a simple webpage listing distances and directions of a few popular locations around the world. We’ll use JavaScript to access the device’s geolocation API and also attempt to access the compass to get a heading. Both of these APIs are documented, to be included in the W3C geolocation API specification, and can be used on both desktop and mobile devices today. To get started, we need a list of a few locations around the world. I have chosen the highest mountain peak on each continent so you can see a diverse set of distances and directions.
Mountain °Latitude °Longitude
Kilimanjaro -3.075833 37.353333
Vinson Massif -78.525483 -85.617147
Puncak Jaya -4.078889 137.158333
Everest 27.988056 86.925278
Elbrus 43.355 42.439167
Mount McKinley 63.0695 -151.0074
Aconcagua -32.653431 -70.011083
Source: Wikipedia
We can put those into an HTML list to be styled and accessed by JavaScript to create some distance and direction calculations. The next thing we need to do is check to see if the browser and operating system have geolocation support.
To do this we test to see if the function is available or not using a single JavaScript if statement. <script> // If this is true, then the method is supported and we can try to access the location if (navigator.geolocation) { navigator.geolocation.getCurrentPosition(geo_success, geo_error); } </script> The if statement will be false if geolocation support is not present, and then it is up to you to do something else instead as a fallback. For this example, we’ll do nothing since our page should work as is and only get progressively better if more functionality is available. The if statement will be true if there is support and therefore will continue inside the curly brackets to try to get the location. This should prompt the reader to accept or deny the request to get their location. If they say no, the second function callback is processed, in this case a function called geo_error; whereas if the location is available, it fires the geo_success function callback. The function geo_error(){ } isn’t that exciting. You can handle this in any way you see fit. The success function is more interesting. We get a position object passed into the function which contains a series of exciting attributes, namely the latitude and longitude of the device’s current location. function geo_success(position){ gLat = position.coords.latitude; gLon = position.coords.longitude; } Now, in the variables gLat and gLon we have the user’s approximate geographical position. We can use this information to start to calculate some distances between where they are and all the destinations. At the time of writing, you can also get position.coords.heading, but on Windows and iOS devices this returned NULL. In the future, if and when this is supported, this is also where you can easily grab the compass information. Inside the geo_success function, we want to loop through the HTML to get all of the mountain peaks’ latitudes and longitudes and compute the distance. ... $('.geo').each(function(){ // Get the lat/lon from the HTML tLat = $(this).find('.lat').html(); tLon = $(this).find('.lon').html(); // compute the distances between the current location and this point's location dist = distance(tLat,tLon,gLat,gLon); // set the return values into something useful d = parseInt(dist[0]*10)/10; a = parseFloat(dist[1]); // display the value in the HTML and style the arrow $(this).find('.distance').html(d+' km away'); $(this).find('.direction').css('-webkit-transform','rotate(-' + a + 'deg)'); // store the arc for later use if compass is available $(this).attr('data-arc',a); }); In the variable d we have the distance between the current location and the location of the mountain peak based on the Haversine Formula. The variable a is the arc, which has a value from 0 to 359.99. This will be useful later if we have compass support. Given these two values we have a distance and a heading to style the HTML. The next thing we want to do is check to see if the device has a compass and then get access to the current heading. As we’ll see, there are several ways to do this, some of which work on certain devices but not others. The W3C geolocation spec says that, along with the coordinates, there are several other attributes: accuracy; altitude; and heading. Heading is the direction to true north, which is different from magnetic north! WebKit and Windows return NULL for the heading value, but WebKit has an experimental method to fetch the heading.
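(A quick step back before we tackle the compass: the distance() function used in that loop is never shown in this article, so here is one plausible implementation. It is a hedged sketch: the name and the [distance, arc] return shape come from the usage above, while the body is my assumption, based on the standard Haversine and initial-bearing formulas.)

// a sketch of the distance() helper: great-circle distance via the
// Haversine formula, plus the initial bearing from the current
// location (gLat, gLon) to the target (tLat, tLon)
function distance(tLat, tLon, gLat, gLon) {
    var rad = Math.PI / 180,
        R = 6371, // mean radius of the Earth in kilometres
        lat1 = gLat * rad,
        lat2 = tLat * rad,
        dLat = (tLat - gLat) * rad,
        dLon = (tLon - gLon) * rad;
    // Haversine formula for the distance between the two points
    var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(lat1) * Math.cos(lat2) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    var km = 2 * R * Math.atan2(Math.sqrt(h), Math.sqrt(1 - h));
    // initial bearing from north, normalised to 0-359.99 degrees
    var y = Math.sin(dLon) * Math.cos(lat2);
    var x = Math.cos(lat1) * Math.sin(lat2) -
            Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
    var arc = (Math.atan2(y, x) / rad + 360) % 360;
    return [km, arc];
}

Now, back to the compass.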
If you get into accessing these sensors, you’ll have to try to catch a few of these methods to finally get a value. Assuming you do, we can move on to the more interesting display opportunities. In an ideal world, this would succeed and set a variable we’ll call compassHeading to get a value between 0 and 359.99 degrees. Now we know which direction north is, we also know the direction relative to north of the path to our destination, so we can subtract the two values to get an arrow to display on the screen. But we’re not finished yet: we also need to get the device’s orientation (landscape or portrait) and subtract the correct amount from the angle for the arrow. Once we have a value, we can use CSS to rotate the arrow the correct number of degrees. -webkit-transform: rotate(-180deg) Not all devices support a standard way to access compass information, so in the meantime we need to use a workaround. On iOS, you can use the experimental event method e.webkitCompassHeading. We want the compass to update in real time as the device is moved around, so we’ll put this inside an event listener. window.addEventListener('deviceorientation', function(e) { // Loop through all the locations on the page $('.geo').each(function(){ // get the arc value from north we computed and stored earlier destination_arc = parseInt($(this).attr('data-arc')); compassHeading = e.webkitCompassHeading + window.orientation + destination_arc; // find the arrow element and rotate it accordingly $(this).find('.direction').css('-webkit-transform','rotate(-' + compassHeading + 'deg)'); }); }); As the device is rotated, the compass arrow will constantly be updated. If you want to see an example, you can have a look at this page which shows the distances to all the peaks on each continent. With progressive enhancement, we slowly layer on additional functionality as we go. The reader will first see the list of locations with a latitude and longitude. If the device is capable and permissions allow, it will then compute the distance. If a compass is available, with the correct permissions it will then add the final layer which is direction. You should consider this code a stub for your projects. If you are making a hyperlocal webpage with restaurant locations, for example, then consider adding these features. Knowing not only how far away a place is, but also the direction can be hugely important, and since the compass is always active, it acts as a guide to the location. Future developments Improvements to this could include setting a timer, calling the navigator.geolocation.getCurrentPosition() function again and updating the distances (a sketch of this follows below). I chose very distant mountains so kilometres made sense, but you can multiply by 1,000 to convert to metres if you are dealing with much nearer places. Walking or driving would change the distances so the ability to refresh would be important. It is outside the scope of this article, but if you manage to get this HTML to work offline, then you can make a nice web app which sits on your device’s home screen and works even without an internet connection. This could be ideal for travellers in an unknown city looking for your destination. Just with offline storage, base64 encoding and data URIs, it is possible to embed plenty of design and functionality into a small offline webpage.
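That refresh could be as simple as the following sketch, where the 30-second interval is an entirely arbitrary choice and geo_success() is the same callback we wrote earlier:

// re-request the position periodically and let geo_success()
// recompute and redraw the distances (interval length is arbitrary)
if (navigator.geolocation) {
    setInterval(function () {
        navigator.geolocation.getCurrentPosition(geo_success, geo_error);
    }, 30000);
}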
Now you know how to use JavaScript to look up a destination’s location and figure out the distance and direction – never get lost again.",2012,Brian Suda,briansuda,2012-12-19T00:00:00+00:00,https://24ways.org/2012/direction-distance-and-destinations/,code 91,Infinite Canvas: Moving Beyond the Page,"Remember Web 2.0? I do. In fact, that phrase neatly bifurcates my life on the internet. Pre-2.0, I was occupied by chatting on AOL and eventually by learning HTML so I could build sites on Geocities. Around 2002, however, I saw a WYSIWYG demo in Dreamweaver. The instructor was dragging boxes and images around a canvas. With a few clicks he was able to build a dynamic, single-page interface. Coming from the world of tables and inline HTML styles, I was stunned. As I entered college the next year, the web was blossoming: broadband, Wi-Fi, mobile (proud PDA owner, right here), CSS, Ajax, Bloglines, Gmail and, soon, Google Maps. I was a technology fanatic and a hobbyist web developer. For me, the web had long been informational. It was now rapidly becoming something else, something more: sophisticated, presentational, actionable. In 2003 we watched as the internet changed. The predominant theme of those early Web 2.0 years was the withering of Internet Explorer 6 and the triumph of web standards. Upon cresting that mountain, we looked around and collectively breathed the rarefied air of pristine HTML and CSS, uncontaminated by toxic hacks and forks – only to immediately begin hurtling down the other side at what is, frankly, terrifying speed. Ten years later, we are still riding that rocket. Our days (and nights) are spent cramming for exams on CSS3 and RWD and Sass and RESS. We are the proud, frazzled owners of tiny pocket computers that annihilate the best laptops we could have imagined, and the architects of websites that are no longer restricted to big screens nor even segregated by device. We dragoon our sites into working any time, anywhere. At this point, we can hardly ask the spec developers to slow down to allow us to catch our breath, nor should we. It is, without a doubt, a most wonderful time to be a web developer. But despite the newfound luxury of rounded corners, gradients, embeddable fonts, low-level graphics APIs, and, glory be, shadows, the canyon between HTML and native appears to be as wide as ever. The improvements in HTML and CSS have, for the most part, been conveniences rather than fundamental shifts. What I’d like to do now, if you’ll allow me, is outline just a few of the remaining gaps that continue to separate web sites and applications from their native companions. What I’d like for Christmas There is one irritant which is the grandfather of them all, the one from which all others flow and have their being, and it is, simply, the page refresh. That’s right, the foundational principle of the web is our single greatest foe. To paraphrase a patron saint of designers everywhere, if you see a page refresh, we blew it. The page refresh brings with it, of course, many noble and lovely benefits: addressability, for one; and pagination, for another. (See also caching, resource loading, and probably half a dozen others.) Still, those concerns can be answered (and arguably answered more compellingly) by replacing the weary page with the young and hearty document. Flash may be dead, but it has many lessons yet to bequeath. Preparing a single document when the site loads allows us to engage the visitor in a smooth and engrossing experience. We have long known this, of course.
Twitter was not the first to attempt, via JavaScript, to envelop the user in a single-page application, nor the first to abandon it. Our shared task is to move those technologies down the stack, to make them more primitive, so that the next Twitter can be built with the most basic combination of HTML and CSS rather than relying on complicated, slow, and unreliable scripted solutions. So, let’s take a look at what we can do, right now, that we might have a better idea of where our current tools fall short. A print magazine in HTML clothing Like many others, I suspect, one of my earliest experiences with publishing was laying out newsletters and newspapers on a computer for print. If you’ve ever used InDesign or Quark or even Microsoft Publisher, you’ll remember reflowing content from page to page. The advent of the internet signaled, in many ways, the abandonment of that model. Articles were no longer constrained by the physical limitations of paper. In shedding our chains, however, it is arguable that we’ve lost something useful. We had a self-contained and complete package, a closed loop. It was a thing that could be handled and finished, and doing so provided a sense of accomplishment that our modern, infinitely scrolling, ever-fractal web of content has stolen. For our purposes today, we will treat 24 ways as the online equivalent of that newspaper or magazine. A single year’s worth of articles could easily be considered an issue. Right now, navigating between articles means clicking on the article you’d like to view and being taken to that specific address via a page reload. If Drew wanted to, it wouldn’t be difficult to update the page in place (via JavaScript) and change the address (again via JavaScript with the History API) to reflect the new content found at the new location. But what if Drew wanted to do that without JavaScript? And what if he wanted the site to not merely load the content but actually whisk you along the page in a compelling and delightful way, à la the Mag+ demo we all saw a few years ago when the iPad was first introduced? Uh, no. We’re all familiar with websites that have attempted to go beyond the page by weaving many chunks of content together into a large document and for good reason. There is tremendous appeal in opening and exploring the canvas beyond the edges of our screens. In one rather straightforward example from last year, Mozilla contacted Full Stop to build a website promoting Aza Raskin’s proposal for a set of Creative Commons-style privacy icons. Like a lot of the sites we build (including our own), the amount of information we were presenting was minimal. In these instances, we encourage our clients to consider including everything on a single page. The result was a horizontally driven site that was, if not whimsical, at least clever and attractive to the intended audience. An experience that is taken for granted when using device-native technology is utterly, maddeningly impossible to replicate on the web without jumping through JavaScript hoops. In another, more complex example, we again had the pleasure of working with Aza earlier this year, this time on a redesign of the Massive Health website. Our assignment was to design and build a site that communicated Massive’s commitment to modern personal health. The site had to be visually and interactively stunning while maintaining a usable and clear interface for the casual visitor. 
Our solution was to extend the infinite company logo into a ribbon that carried the visitor through the site narrative. It also meant we’d be asking the browser to accommodate something it was never designed to handle: a non-linear design. (Be sure to play around. There’s a lot going on under the hood. We were also this close to a ZUI, if WebKit didn’t freak out when pages were scaled beyond 10×.) Despite the apparent and deliberate design simplicity, the techniques necessary to implement it are anything but. From updating the URL to moving the visitor from section to section, we’re firmly in JavaScript territory. And that’s a shame. What can we do? We might not be able to specify these layouts in HTML and CSS just yet, but that doesn’t mean we can’t learn a few new tricks while we wait. Let’s see how close we can come to recreating the privacy icons design, the Massive design, or the Mag+ design without resorting to JavaScript. A horizontally paginated site The first thing we’re going to need is the concept of a page within our HTML document. Using plain old HTML and CSS, we can stack a series of <div>s sideways (with a little assist from our new friend, the viewport-width unit, not that he was strictly necessary). All we need to know is how many pages we have. (And, boy, wouldn’t it be nice to be able to know that without having to predetermine it or use JavaScript?) .window { overflow: hidden; width: 100%; } .pages { width: 200vw; } .page { float: left; overflow: hidden; width: 100vw; } If you look carefully, you’ll see that the conceit we’ll use in the rest of the demos is in place. Despite the document containing multiple pages, only one is visible at any given time. This allows us to keep the user focused on the task (or content) at hand. By the way, you’ll need to use a modern, WebKit-based browser for these demos. I recommend downloading the WebKit nightly builds, Chrome Canary, or being comfortable with setting flags in Chrome. A horizontally paginated site, with transitions Ah, here’s the rub. We have functional navigation, but precious few cues for the user. It’s not much good shoving the visitor around various parts of the document if they don’t get the pleasant whooshing experience of the journey. You might be thinking, what about that new CSS selector, target-something…? Well, my friend, you’re on the right track. Let’s test it. We’re going to need to use a bit of sleight of hand. While we’d like to simply offset the containing element by the number of pages we’re moving (like we did on Massive), CSS alone can’t give us that information, and that means we’re going to need to fake it by expanding and collapsing pages as you navigate. Here are the bits we’re going to need: .page { -webkit-transition: width 1s; // Naturally you're going to want to include all the relevant prefixes here float: left; left: 0; overflow: hidden; position: relative; width: 100vw; } .page:not(:target) { width: 0; } Ah, but we’re not fooling anyone with that trick. As soon as you move beyond a single page, the visitor’s disbelief comes tumbling down when the linear page transitions are unaffected by the distance the pages are allegedly traveling. And you may have already noticed an even more fatal flaw: I secretly linked you to the first page rather than the unadorned URL. If you visit the same page with no URL fragment, you get a blank screen. Sure, we could force a redirect with some server-side trickery, but that feels like cheating. 
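(For reference, the markup driving these demos might be as simple as the following sketch; the IDs and link text are hypothetical, but the .window/.pages/.page structure matches the CSS above.)

<div class=""window"">
  <div class=""pages"">
    <div class=""page"" id=""page-1"">
      <a href=""#page-2"">On to page two</a>
    </div>
    <div class=""page"" id=""page-2"">
      <a href=""#page-1"">Back to page one</a>
    </div>
  </div>
</div>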
Perhaps if we had the CSS4 subject selector we could apply styles to the parent based on the child being targeted by the URL. We might also need a few more abilities, like determining the total number of pages and having relative sibling selectors (e.g. nth-sibling), but we’d sure be a lot closer. A horizontally paginated site, with transitions – no cheating Well, what other cards can we play? How about the checkbox hack? Sure, it’s a garish trick, but it might be the best we can do today. Check it out. label { cursor: pointer; } input { display: none; } input:not(:checked) + .page { max-height: 100vh; width: 0; } Finally, we can see the first page thanks to the state we are able to set on the appropriate radio button. Of course, now we don’t have URLs, so maybe this isn’t a winning plan after all. While our HTML and CSS toolkit may feel primitive at the moment, we certainly don’t want to sacrifice the addressability of the web. If there’s one bedrock principle, that’s it. A horizontally paginated site, with transitions – no cheating and a gorgeous homepage Gorgeous may not be the right word, but our little magazine is finally shaping up. Thanks to the CSS regions spec, we’ve got an exciting new power, the ability to begin an article in one place and bend it to our will. (Remember, your everyday browser isn’t going to work for these demos. Try the WebKit nightly build to see what we’re talking about.) As with the rest of the examples, we’re clearly abusing these features. Off-canvas layouts (you can thank Luke Wroblewski for the name) are simply not considered to be normal patterns… yet. Here’s a quick look at what’s going on: .excerpt-container { float: left; padding: 2em; position: relative; width: 100%; } .excerpt { height: 16em; } .excerpt_name_article-1, .page-1 .article-flow-region { -webkit-flow-from: article-1; } .article-content_for_article-1 { -webkit-flow-into: article-1; } The regions pattern is comprised of at least three components: a beginning; an ending; and a source. Using CSS, we’re able to define specific elements that should be available for the content to flow through. If magazine-style layouts are something you’re interested in learning more about (and you should be), be sure to check out the great work Adobe has been doing. Looking forward, and backward As designers, builders, and consumers of the web, we share a desire to see the usability and enjoyability of websites continue to rise. We are incredibly lucky to be working in a time when a three-month-old website can be laughably outdated. Our goal ought to be to improve upon both the weaknesses and the strengths of the web platform. We seek not only smoother transitions and larger canvases, but fine-grained addressability. Our URLs should point directly and unambiguously to specific content elements, be they pages, sections, paragraphs or words. Moreover, off-screen design patterns are essential to accommodating and empowering the multitude of devices we use to access the web. We should express the desire that interpage links take advantage of the CSS transitions which have been put to such good effect in every other aspect of our designs. Transitions aren’t just nice to have, they’re table stakes in the highly competitive world of native applications. The tools and technologies we have right now allow us to create smart, beautiful, useful webpages. 
With a little help, we can begin removing the seams and sutures that bind the web to an earlier, less sophisticated generation.",2012,Nathan Peretic,nathanperetic,2012-12-21T00:00:00+00:00,https://24ways.org/2012/infinite-canvas-moving-beyond-the-page/,code 92,Redesigning the Media Query,"Responsive web design is showing us that designing content is more important than designing containers. But if you’ve given RWD a serious try, you know that shifting your focus from the container is surprisingly hard to do. There are many factors and instincts working against you, and one culprit is a perpetrator you’d least suspect. The media query is the ringmaster of responsive design. It lets us establish the rules of the game and gives us what we need most: control. However, like some kind of evil double agent, the media query is actually working against you. Its very nature diverts your attention away from content and forces you to focus on the container. The very act of choosing a media query value means choosing a screen size. Look at the history of the media query—it’s always been about the container. Values like screen, print, handheld and tv don’t have anything to do with content. The modern media query lets us choose screen dimensions, which is great because it makes RWD possible. But it’s still the act of choosing something that is completely unpredictable. Content should dictate our breakpoints, not the container. In order to get our focus back to the only thing that matters, we need a reengineered media query—one that frees us from thinking about screen dimensions. A media query that works for your content, not the window. Fortunately, Sass 3.2 is ready and willing to take on this challenge. Thinking in Columns Fluid grids never clicked for me. I feel so disoriented and confused by their squishiness. Responsive design demands their use though, right? I was ready to surrender until I found a grid that turned my world upright again. The Frameless Grid by Joni Korpi demonstrates that column and gutter sizes can stay fixed. As the screen size changes, you simply add or remove columns to accommodate. This made sense to me and armed with this concept I was able to give Sass the first component it needs to rewrite the media query: fixed column and gutter size variables. $grid-column: 60px; $grid-gutter: 20px; We’re going to want some resolution independence too, so let’s create a function that converts those nasty pixel values into ems. @function em($px, $base: $base-font-size) { @return ($px / $base) * 1em; } We now have the components needed to figure out the width of multiple columns in ems. Let’s put them together in a function that will take any number of columns and return the fixed width value of their size. @function fixed($col) { @return $col * em($grid-column + $grid-gutter) } With the math in place we can now write a mixin that takes a column count as a parameter, then generates the perfect media query necessary to fit that number of columns on the screen. We can also build in some left and right margin for our layout by adding an additional gutter value (remembering that we already have one gutter built into our fixed function). @mixin breakpoint($min) { @media (min-width: fixed($min) + em($grid-gutter)) { @content } } And, just like that, we’ve rewritten the media query. Instead of picking a minimum screen size for our layout, we can simply determine the number of columns needed. Let’s add a wrapper class so that we can center our content on the screen. 
@mixin breakpoint($min) { @media (min-width: fixed($min) + em($grid-gutter)) { .wrapper { width: fixed($min) - em($grid-gutter); margin-left: auto; margin-right: auto; } @content } } Designing content with a column count gives us nice, easy, whole numbers to work with. Sizing content, sidebars or widgets is now as simple as specifying a single-digit number. @include breakpoint(8) { .main { width: fixed(5); } .sidebar { width: fixed(3); } } Those four lines of Sass just created a responsive layout for us. When the screen is big enough to fit eight columns, it will trigger a fixed width layout. And give widths to our main content and sidebar. The following is the outputted CSS… @media (min-width: 41.25em) { .wrapper { width: 38.75em; margin-left: auto; margin-right: auto; } .main { width: 25em; } .sidebar { width: 15em; } } Demo I’ve created a Codepen demo that demonstrates what we’ve covered so far. I’ve added to the demo some grid classes based on Griddle by Nicolas Gallagher to create a floatless layout. I’ve also added a CSS gradient overlay to help you visualize columns. Try changing the column variable sizes or the breakpoint includes to see how the layout reacts to different screen sizes. Responsive Images Responsive images are a serious problem, but I’m excited to see the community talk so passionately about a solution. Now, there are some excellent stopgaps while we wait for something official, but these solutions require you to mirror your breakpoints in JavaScript or HTML. This poses a serious problem for my Sass-generated media queries, because I have no idea what the real values of my breakpoints are anymore. For responsive images to work, JavaScript needs to recognize which media query is active so that proper images can be loaded for that layout. What I need is a way to label my breakpoints. Fortunately, people much smarter than I have figured this out. Jeremy Keith devised a labeling method by using CSS-generated content as the storage method for breakpoint labels. We can use this technique in our breakpoint mixin by passing a label as another argument. @include breakpoint(8, 'desktop') { /* styles */ } Sass can take that label and use it when writing the corresponding media query. We just need to slightly modify our breakpoint mixin. @mixin breakpoint($min, $label) { @media (min-width: fixed($min) + em($grid-gutter)) { // label our mq with CSS generated content body::before { content: $label; display: none; } .wrapper { width: fixed($min) - em($grid-gutter); margin-left: auto; margin-right: auto; } @content } } This allows us to label our breakpoints with a user-friendly string. Now that our media queries are defined and labeled, we just need JavaScript to step in and read which label is active. // get css generated label for active media query var label = getComputedStyle(document.body, '::before')['content']; JavaScript now knows which layout is active by reading the label in the current media query—we just need to match that label to an image. I prefer to store references to different image sizes as data attributes on my image tag. <img class=""responsive-image"" data-mobile=""mobile.jpg"" data-desktop=""desktop.jpg"" /> <noscript><img src=""desktop.jpg"" /></noscript> These data attributes have names that match the labels set in my CSS. So while there is some duplication going on, setting a keyword like ‘tablet’ in two places is much easier than hardcoding media query values. 
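Putting the pieces together, a site might declare its labeled layouts like this (an illustrative sketch using the breakpoint mixin from above; the column counts are hypothetical, and the label strings need to match the data-* attribute names on the image):

@include breakpoint(5, 'tablet') { /* five-column tablet layout */ }
@include breakpoint(8, 'desktop') { /* eight-column desktop layout */ }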
With matching labels in CSS and HTML our script can marry the two and load the right sized image for our layout. // get css generated label for active media query var label = getComputedStyle(document.body, '::before')['content']; // some browsers wrap the generated content in quotes, so strip them before using the label as a key label = label.replace(/[""']/g, ''); // select image var $image = $('.responsive-image'); // create source from data attribute $image.attr('src', $image.data(label)); Demo With some slight additions to our previous Codepen demo you can see this responsive image technique in action. While the above JavaScript will work it is not nearly robust enough for production so the demo uses a jQuery plugin that can accommodate multiple images, reloading on screen resize and fallbacks if something doesn’t match up. Creating a Framework This media query mixin and responsive image JavaScript are the centerpiece of a front-end framework I use to develop websites. It’s a fluid, mobile-first foundation that uses the breakpoint mixin to structure fixed width layouts for tablet and desktop. Significant effort was focused on making this framework completely cross-browser. For example, one of the problems with using media queries is that essential desktop structure code ends up being hidden from legacy Internet Explorer. Respond.js is an excellent polyfill, but if you’re comfortable serving a single desktop layout to older IE, we don’t need JavaScript. We simply need to capture layout code outside of a media query and sandbox it under an IE only class name. // set IE fallback layout to 8 columns $ie-support: 8; // inside of our breakpoint mixin (but outside the media query) @if ($ie-support and $min <= $ie-support) { .lt-ie9 { @content; } } Perspective Regained Thinking in columns means you are thinking about content layout. How big of a screen do you need for 12 columns? Who cares? Having Sass write media queries means you can use intuitive numbers for content layout. A fixed grid means more layout control and fewer edge cases to test than a fluid grid. Using CSS labels for activating responsive images means you don’t have to duplicate breakpoints across separations of concern. It’s a harmonious blend of approaches that gives us something we need—responsive design that feels intuitive. And design that, from the very outset, focuses on what matters most. Just like our kindergarten teachers taught us: It’s what’s inside that counts.",2012,Les James,lesjames,2012-12-13T00:00:00+00:00,https://24ways.org/2012/redesigning-the-media-query/,code 95,Giving Content Priority with CSS3 Grid Layout,"Browser support for many of the modules that are part of CSS3 has enabled us to use CSS for many of the things we used to have to use images for. The rise of mobile browsers and the concept of responsive web design has given us a whole new way of looking at design for the web. However, when it comes to layout, we haven’t moved very far at all. We have talked for years about separating our content and source order from the presentation of that content, yet most of us have had to make decisions on source order in order to get a certain visual layout. Owing to some interesting specifications making their way through the W3C process at the moment, though, there is hope of change on the horizon. In this article I’m going to look at one CSS module, the CSS3 grid layout module, that enables us to define a grid and place elements on to it. This article comprises a practical demonstration of the basics of grid layout, and also a discussion of one way in which we can start thinking of content in a more adaptive way.
Before we get started, it is important to note that, at the time of writing, these examples work only in Internet Explorer 10. CSS3 grid layout is a module created by Microsoft, and implemented using the -ms prefix in IE10. My examples will all use the -ms prefix, and not include other prefixes simply because this is such an early stage specification, and by the time there are implementations in other browsers there may be inconsistencies. The implementation I describe today may well change, but is also there for your feedback. If you don’t have access to IE10, then one way to view and test these examples is by signing up for an account with Browserstack – the free trial would give you time to have a look. I have also included screenshots of all relevant stages in creating the examples. What is CSS3 grid layout? CSS3 grid layout aims to let developers divide up a design into a grid and place content on to that grid. Rather than trying to fabricate a grid from floats, you can declare an actual grid on a container element and then use that to position the elements inside. Most importantly, the source order of those elements does not matter. Declaring a grid We declare a grid using a new value for the display property: display: grid. As we are using the IE10 implementation here, we need to prefix that value: display: -ms-grid;. Once we have declared our grid, we set up the columns and rows using the grid-columns and grid-rows properties. .wrapper { display: -ms-grid; -ms-grid-columns: 200px 20px auto 20px 200px; -ms-grid-rows: auto 1fr; } In the above example, I have declared a grid on the .wrapper element. I have used the grid-columns property to create a grid with a 200 pixel-wide column, a 20 pixel gutter, a flexible width auto column that will stretch to fill the available space, another 20 pixel-wide gutter and a final 200 pixel sidebar: a flexible width layout with two fixed width sidebars. Using the grid-rows property I have created two rows: the first is set to auto so it will stretch to fill whatever I put into it; the second row is set to 1fr, a new value used in grids that means one fraction unit. In this case, one fraction unit of the available space, effectively whatever space is left. Positioning items on the grid Now I have a simple grid, I can pop items on to it. If I have a <div> with a class of .main that I want to place into the second row, in the flexible column set to auto, I would use the following CSS: .main { -ms-grid-column: 3; -ms-grid-row: 2; -ms-grid-row-span: 1; } If you are old-school, you may already have realised that we are essentially creating an HTML table-like layout structure using CSS. I found the concept of a table the most helpful way to think about the grid layout module when trying to work out how to place elements. Creating grid systems As soon as I started to play with CSS3 grid layout, I wanted to see if I could use it to replicate a flexible grid system like this fluid 16-column 960 grid system. I started out by defining a grid on my wrapper element, using fractions to make this grid fluid. .wrapper { width: 90%; margin: 0 auto 0 auto; display: -ms-grid; -ms-grid-columns: 1fr (4.25fr 1fr)[16]; -ms-grid-rows: (auto 20px)[24]; } Like the 960 grid system I was using as an example, my grid starts with a gutter, followed by the first actual column, plus another gutter repeated sixteen times.
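To see what that repeat syntax is doing, here is a two-repeat version written out longhand (purely illustrative):

-ms-grid-columns: 1fr (4.25fr 1fr)[2];
/* ...is equivalent to: */
-ms-grid-columns: 1fr 4.25fr 1fr 4.25fr 1fr;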
What this means is that if I want to span two columns, as far as the grid layout module is concerned that is actually three columns: two wide columns, plus one gutter. So this needs to be accounted for when positioning items. I created a CSS class for each positioning option: column position; row position; and column span. For example: .grid1 {-ms-grid-column: 2;} /* applying this class positions an item in the first column (the gutter is column 1) */ .grid2 {-ms-grid-column: 4;} /* 2nd column - gutter|column 1|gutter */ .grid3 {-ms-grid-column: 6;} /* 3rd column - gutter|column 1|gutter|column 2|gutter */ .row1 {-ms-grid-row:1;} .row2 {-ms-grid-row:3;} .row3 {-ms-grid-row:5;} .colspan1 {-ms-grid-column-span:1;} .colspan2 {-ms-grid-column-span:3;} .colspan3 {-ms-grid-column-span:5;} I could then add multiple classes to each element to set the position on the grid. This then gives me a replica of the fluid grid using CSS3 grid layout. To see this working, fire up IE10 and view Example 1. This works, but… This worked, but isn’t ideal. I considered not showing this stage of my experiment – however, I think it clearly shows how the grid layout module works and is a useful starting point. That said, it’s not an approach I would take in production. First, we have to add classes to our markup that tie an element to a position on the grid. This might not be too much of a problem if we are always going to maintain the sixteen-column grid, though, as I will show you that the real power of the grid layout module appears once you start to redefine the grid, using different grids based on media queries. If you drop to a six-column layout for small screens, positioning items into column 16 makes no sense any more. Calculating grid position using LESS As we’ve seen, if you want to use a grid with main columns and gutters, you have to take into account the spacing between columns as well as the actual columns. This means we have to do some calculating every time we place an item on the grid. In my example above I got around this by creating a CSS class for each position, allowing me to think in sixteen rather than thirty-two columns. But by using a CSS preprocessor, I can avoid using all the classes yet still think in main columns. I’m using LESS for my example. My simple grid framework consists of one mixin. .position(@column,@row,@colspan,@rowspan) { -ms-grid-column: @column*2; -ms-grid-row: @row*2-1; -ms-grid-column-span: @colspan*2-1; -ms-grid-row-span: @rowspan*2-1; } My mixin takes four parameters: column; row; colspan; and rowspan. So if I wanted to place an item on column four, row three, spanning two columns and one row, I would write the following CSS: .box { .position(4,3,2,1); } The mixin would return: .box { -ms-grid-column: 8; -ms-grid-row: 5; -ms-grid-column-span: 3; -ms-grid-row-span: 1; } This saves me some typing and some maths. I could also add other prefixed values into my mixin as other browsers started to add support. We can see this in action creating a new grid. Instead of adding multiple classes to each element, I can add one class; that class uses the mixin to create the position. I have also played around with row spans using my mixin and you can see we end up with a quite complicated arrangement of boxes. Have a look at example two in IE10. I’ve used the JavaScript LESS parser so that you can view the actual LESS that I use. Note that I have needed to escape the -ms prefixed properties with ~"""" to get LESS to accept them. This is looking better.
I don’t have direct positioning information on each element in the markup, just a class name – I’ve used grid(x), but it could be something far more semantic. We can now take the example a step further and redefine the grid based on screen width. Media queries and the grid This example uses exactly the same markup as the previous example. However, we are now using media queries to detect screen width and redefine the grid using a different number of columns depending on that width. I start out with a six-column grid, defining that on .wrapper, then setting where the different items sit on this grid: .wrapper { width: 90%; margin: 0 auto 0 auto; display: ~""-ms-grid""; /* escaped for the LESS parser */ -ms-grid-columns: ~""1fr (4.25fr 1fr)[6]""; /* escaped for the LESS parser */ -ms-grid-rows: ~""(auto 20px)[40]""; /* escaped for the LESS parser */ } .grid1 { .position(1,1,1,1); } .grid2 { .position(2,1,1,1); } /* ... see example for all declarations ... */ Using media queries, I redefine the grid to nine columns when we hit a minimum width of 700 pixels. @media only screen and (min-width: 700px) { .wrapper { -ms-grid-columns: ~""1fr (4.25fr 1fr)[9]""; -ms-grid-rows: ~""(auto 20px)[50]""; } .grid1 { .position(1,1,1,1); } .grid2 { .position(2,1,1,1); } /* ... */ } Finally, at a minimum width of 940 pixels, we redefine the grid back to the sixteen-column version we started out with. @media only screen and (min-width: 940px) { .wrapper { -ms-grid-columns:~"" 1fr (4.25fr 1fr)[16]""; -ms-grid-rows:~"" (auto 20px)[24]""; } .grid1 { .position(1,1,1,1); } .grid2 { .position(2,1,1,1); } /* ... */ } If you view example three in Internet Explorer 10 you can see how the items reflow to fit the window size. You can also see, looking at the final set of blocks, that source order doesn’t matter. You can pick up a block from anywhere and place it in any position on the grid. Laying out a simple website So far, like a toddler on Christmas Day, we’ve been playing with boxes rather than thinking about what might be in them. So let’s take a quick look at a more realistic layout, in order to see why the CSS3 grid layout module can be really useful. At this time of year, I am very excited to get out of storage my collection of odd nativity sets, prompting my family to suggest I might want to open a museum. Should I ever do so, I’ll need a website, and here is an example layout. As I am using CSS3 grid layout, I can order my source in a logical manner. In this example my document is as follows, though these elements could be in any order I please: <div class=""wrapper""> <div class=""welcome""> ... </div> <article class=""main""> ... </article> <div class=""info""> ... </div> <div class=""ads""> ... </div> </div> For wide viewports I can use grid layout to create a sidebar, with the important information about opening times on the top right, with the ads displayed below it. This creates the layout shown in the screenshot above. @media only screen and (min-width: 940px) { .wrapper { -ms-grid-columns:~"" 1fr (4.25fr 1fr)[16]""; -ms-grid-rows:~"" (auto 20px)[24]""; } .welcome { .position(1,1,12,1); padding: 0 5% 0 0; } .info { .position(13,1,4,1); border: 0; padding:0; } .main { .position(1,2,12,1); padding: 0 5% 0 0; } .ads { .position(13,2,4,1); display: block; margin-left: 0; } } In a floated layout, a sidebar like this often ends up being placed under the main content at smaller screen widths. For my situation this is less than ideal.
I want the important information about opening times to end up above the main article, and to push the ads below it. With grid layout I can easily achieve this: at the smallest width, .info ends up in row two and .ads in row five, with the article between. .wrapper { display: ~""-ms-grid""; -ms-grid-columns: ~""1fr (4.25fr 1fr)[4]""; -ms-grid-rows: ~""(auto 20px)[40]""; } .welcome { .position(1,1,4,1); } .info { .position(1,2,4,1); border: 4px solid #fff; padding: 10px; } .content { .position(1,3,4,5); } .main { .position(1,3,4,1); } .ads { .position(1,4,4,1); } Finally, as an extra tweak I add in a breakpoint at 600 pixels and nest a second grid on the ads area, arranging those three images into a row when they sit below the article at a screen width wider than the very narrow mobile width but still too narrow to support a sidebar. @media only screen and (min-width: 600px) { .ads { display: ~""-ms-grid""; -ms-grid-columns: ~""20px 1fr 20px 1fr 20px 1fr""; -ms-grid-rows: ~""1fr""; margin-left: -20px; } .ad:nth-child(1) { .position(1,1,1,1); } .ad:nth-child(2) { .position(2,1,1,1); } .ad:nth-child(3) { .position(3,1,1,1); } } View example four in Internet Explorer 10. This is a very simple example to show how we can use CSS grid layout without needing to add a lot of classes to our document. It also demonstrates how we can manipulate the content depending on the context in which the user is viewing it. Layout, source order and the idea of content priority CSS3 grid layout isn’t the only module that starts to move us away from the issue of visual layout being linked to source order. However, with good support in Internet Explorer 10, it is a nice way to start looking at how this might work. If you look at the grid layout module as something to be used in conjunction with the flexible box layout module and the very interesting CSS regions and exclusions specifications, we have, tantalizingly on the horizon, a powerful set of tools for layout. I am particularly keen on the potential separation of source order from layout as it dovetails rather neatly into something I spend a lot of time thinking about. As a CMS developer, working on larger scale projects as well as our CMS product Perch, I am interested in how we better enable content editors to create content for the web. In particular, I search for better ways to help them create adaptive content; content that will work in a variety of contexts rather than being tied to one representation of that content. If the concept of adaptive content is new to you, then Karen McGrane’s presentation Adapting Ourselves to Adaptive Content is the place to start. Karen talks about needing to think of content as chunks that might be used in many different places, displayed differently depending on context. I absolutely agree with Karen’s approach to content. We have always attempted to move content editors away from thinking about creating a page and previewing it on the desktop. However, at some point content does need to be published as a page, or a collection of content if you prefer, and bits of that content have priority. Particularly in a small screen context, content gets linearized; we can only show so much at a time, and we need to make sure important content rises to the top. In the case of my example, I wanted to ensure that the address information was clearly visible without scrolling around too much.
Dropping it with the entire sidebar to the bottom of the page would not have been so helpful, though neither would moving the whole sidebar to the top of the screen so a visitor had to scroll past advertising to get to the article. If our layout is linked to our source order, then enabling the content editor to make decisions about priority is really hard. Only a system that can do some regeneration of the source order on the server-side – perhaps by way of multiple templates – can allow those kinds of decisions to be made. For larger systems this might be a possibility; for smaller ones, or when using an off-the-shelf CMS, it is less likely to be. Fortunately, any system that allows some form of custom field type can be used to pop a class on to an element, and with CSS grid layout that is all that is needed to be able to target that element and drop it into the right place when the content is viewed, be that on a desktop or a mobile device. This approach can move us away from forcing editors to think visually. At the moment, I might have to explain to an editor that if a certain piece of content needs to come first when viewed on a mobile device, it needs to be placed in the sidebar area, tying it to a particular layout and design. I have to do this because we have to enforce fairly strict rules around source order to make the mechanics of the responsive design work. If I can instead advise an editor to flag important content as high priority in the CMS, then I can make decisions elsewhere as to how that is displayed, and we can maintain the visual hierarchy across all the different ways content might be rendered. Why frustrate ourselves with specifications we can’t yet use in production? The CSS3 grid layout specification is listed under the Exploring section of the list of current work of the CSS Working Group. While discussing a module at this stage might seem a bit pointless if we can’t use it in production work, there is a very real reason for doing so. If those of us who will ultimately be developing sites with these tools find out about them early enough, then we can start to give our feedback to the people responsible for the specification. There is information on the same page about how to get involved with the discussions. So, if you have a bit of time this holiday season, why not have a play with the CSS3 grid layout module? I have outlined here some of my thoughts on how grid layout and other modules that separate layout from source order can be used in the work that I do. Likewise, wherever in the stack you work, playing with and thinking about new specifications means you can think about how you would use them to enhance your work. Spot a problem? Think that a change to the specification would improve things for a specific use case? Then you have something you could post to www-style to add to the discussion around this module. All the examples are on CodePen so feel free to play around and fork them.",2012,Rachel Andrew,rachelandrew,2012-12-18T00:00:00+00:00,https://24ways.org/2012/css3-grid-layout/,code 98,Absolute Columns,"CSS layouts have come quite a long way since the dark ages of web publishing, with all sorts of creative applications of floats, negative margins, and even background images employed in order to give us that most basic building block, the column.
As the title implies, we are indeed going to be discussing columns today—more to the point, a handy little application of absolute positioning that may be exactly what you’ve been looking for… Care for a nightcap? If you’ve been developing for the web for long enough, you may be familiar with this little children’s fable, passed down from wizened Shaolin monks sitting atop the great Mt. Geocities: “Once upon a time, multiple columns of the same height could be easily created using TABLES.” Now, though we’re all comfortably seated on the standards train (and let’s be honest: even if you like to think you’ve fallen off, if you’ve given up using tables for layout, rest assured your sleeper car is still reserved), this particular—and as page layout goes, quite basic—trick is still a thorn in our CSSides compared to the ease of achieving the same effect using said Tables of Evil™. See, the orange juice masks the flavor… Creative solutions such as Dan Cederholm’s Faux Columns do a good job of making it appear as though adjacent columns maintain equal height as content expands, using a background image to fill the space that the columns cannot. Now, the Holy Grail of CSS columns behaving exactly how they would as table cells—or more to the point, as columns—still eludes us (cough CSS3 Multi-column layout module cough), but sometimes you just need, for example, a secondary column (say, a sidebar) to match the height of a primary column, without involving the creation of images. This is where a little absolute positioning can save you time, while possibly giving your layout a little more flexibility. Shaken, not stirred You’re probably familiar by now with the concept of Making the Absolute, Relative as set forth long ago by Doug Bowman, but let’s quickly review just in case: an element set to position:absolute will position itself relative to its nearest ancestor set to position:relative, rather than the browser window (see Figure 1). Figure 1. However, what you may not know is that we can anchor more than two sides of an absolutely positioned element. Yes, that’s right, all four sides (top, right, bottom, left) can be set, though in this example we’re only going to require the services of three sides (see Figure 2 for the end result). Figure 2. Trust me, this will make you feel better Our requirements are essentially the same as the standard “absolute-relative” trick—a container <div> set to position:relative, and our sidebar <div> set to position:absolute — plus another <div> that will serve as our main content column. We’ll also add a few other common layout elements (wrapper, header, and footer) so our example markup looks more like a real layout and less like a test case: <div id=""wrapper""> <div id=""header""> <h2>#header</h2> </div> <div id=""container""> <div id=""column-left""> <h2>#left</h2> <p>Lorem ipsum dolor sit amet…</p> </div> <div id=""column-right""> <h2>#right</h2> </div> </div> <div id=""footer""> <h2>#footer</h2> </div> </div> In this example, our main column (#column-left) is only being given a width to fit within the context of the layout, and is otherwise untouched (though we’re using pixels here, this trick will of course work with fluid layouts as well), and our right column (#column-right) is where the absolute positioning comes into play, keeping our styles nice and minimal: #container { position: relative; } #column-left { width: 480px; } #column-right { position: absolute; top: 10px; right: 10px; bottom: 10px; width: 250px; } The trick is a simple one: the #container <div> will expand vertically to fit the content within #column-left.
By telling our sidebar <div> (#column-right) to attach itself not only to the top and right edges of #container, but also to the bottom, it too will expand and contract to match the height of the left column (duplicate the “lorem ipsum” paragraph a few times to see it in action). Figure 3. On the rocks “But wait!” I hear you exclaim, “when the right column has more content than the left column, it doesn’t expand! My text runneth over!” Sure enough, that’s exactly what happens, and what’s more, it’s supposed to: Absolutely positioned elements do exactly what you tell them to do, and unfortunately aren’t very good at thinking outside the box (get it? sigh…). However, this needn’t get your spirits down, because there’s an easy way to address the issue: by adding overflow:auto to #column-right, a scrollbar will automatically appear if and when needed: #column-right { position: absolute; top: 10px; right: 10px; bottom: 10px; width: 250px; overflow: auto; } While this may limit the trick’s usefulness to situations where the primary column will almost always have more content than the secondary column—or where the secondary column’s content can scroll with wild abandon—a little prior planning will make it easy to incorporate into your designs. Driving us to drink It just wouldn’t be right to have a friendly, festive holiday tutorial without inviting IE6, though in this particular instance there will be no shaming that old browser into admitting it has a problem, nor an intervention and subsequent 12-step program. That’s right, my friends, this tutorial has abstained from IE6-abuse now for 30 days, thanks to the wizard Dean Edwards and his amazingly talented IE7 JavaScript library. Simply drop the Conditional Comment and <script> element into the <head> of your document, along with one tiny CSS hack that only IE6 (and below) will ever see, and that browser will be back on the straight and narrow: <!--[if lt IE 7]> <script src=""http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js"" type=""text/javascript""></script> <style type=""text/css"" media=""screen""> #container { zoom:1; /* helps fix IE6 by initiating hasLayout */ } </style> <![endif]--> Eggnog is supposed to be spiked, right? Of course, this is one simple example of what can be a much more powerful technique, depending on your needs and creativity. Just don’t go coding up your wildest fantasies until you’ve had a chance to sleep off the Christmas turkey and whatever tasty liquids you happen to imbibe along the way…",2008,Dan Rubin,danrubin,2008-12-22T00:00:00+00:00,https://24ways.org/2008/absolute-columns/,code 99,A Christmas hCard From Me To You,"So apparently Christmas is coming. And what is Christmas all about? Well, cleaning out your address book, of course! What better time to go through your contacts, making sure everyone’s details are up to date and that you’ve deleted all those nasty clients who never paid on time? It’s also a good time to make sure your current clients and colleagues have your most up-to-date details, so instead of filling up their inboxes with e-cards, why not send them something useful? Something like a… vCard! (See what I did there?) Just in case you’ve been working in a magical toy factory in the upper reaches of Scandinavia for the last few years, I’m going to tell you that now would also be the perfect time to get into microformats.
Using the hCard format, we’ll build a very simple web page and mark up our contact details in such a way that they’ll be understood by microformats plugins, like Operator or Tails for Firefox, or the cross-browser Microformats Bookmarklet. Oh, and because Christmas is all about dressing up and being silly, we’ll make the whole thing look nice and have a bit of fun with some CSS3 progressive enhancement. If you can’t wait to see what we end up with, you can preview it here. Step 1: Contact Details First, let’s decide what details we want to put on the page. I’d put my full name, my email address, my phone number, and my postal address, but I’d rather not get surprise visits from strangers when I’m fannying about with my baubles, so I’m going to use Father Christmas instead (that’s Santa to you Yanks). Father Christmas fatherchristmas@elliotjaystocks.com 25 Laughingallthe Way Snow Falls Lapland Finland 010 60 58 000 Step 2: hCard Creator Now I’m not sure about you, but I rather like getting the magical robot pixies to do the work for me, so head on over to the hCard Creator and put those pixies to work! Pop in your details and they’ll give you some nice microformatted HTML in turn. <div id=""hcard-Father-Christmas"" class=""vcard""> <a class=""url fn"" href=""http://elliotjaystocks.com/fatherchristmas"">Father Christmas</a> <a class=""email"" href=""mailto:fatherchristmas@elliotjaystocks.com""> fatherchristmas@elliotjaystocks.com</a> <div class=""adr""> <div class=""street-address"">25 Laughingallthe Way</div> <span class=""locality"">Snow Falls</span> , <span class=""region"">Lapland</span> , <span class=""postal-code"">FI-00101</span> <span class=""country-name"">Finland</span> </div> <div class=""tel"">010 60 58 000</div> <p style=""font-size:smaller;"">This <a href=""http://microformats.org/wiki/hcard"">hCard</a> created with the <a href=""http://microformats.org/code/hcard/creator"">hCard creator</a>.</p> </div> Step 3: Editing The Code One of the great things about microformats is that you can use pretty much whichever HTML tags you want, so just because the hCard Creator Fairies say something should be wrapped in a <span> doesn’t mean you can’t change it to a <blink>. Actually, no, don’t do that. That’s not even excusable at Christmas. I personally have a penchant for marking up each line of an address inside a <li> tag, where the parent <ul> retains the class of adr. As long as you keep the class names the same, you’ll be fine. <div id=""hcard-Father-Christmas"" class=""vcard""> <h1><a class=""url fn"" href=""http://elliotjaystocks.com/fatherchristmas"">Father Christmas </a></h1> <a class=""email"" href=""mailto:fatherchristmas@elliotjaystocks.com?subject=Here, have some Christmas cheer!"">fatherchristmas@elliotjaystocks.com</a> <ul class=""adr""> <li class=""street-address"">25 Laughingallthe Way</li> <li class=""locality"">Snow Falls</li> <li class=""region"">Lapland</li> <li class=""postal-code"">FI-00101</li> <li class=""country-name"">Finland</li> </ul> <span class=""tel"">010 60 58 000</span> </div> Step 4: Testing The Microformats With our microformats in place, now would be a good time to test that they’re working before we start making things look pretty. If you’re on Firefox, you can install the Operator or Tails extensions, but if you’re on another browser, just add the Microformats Bookmarklet.
Regardless of your choice, the result is the same: if you’ve coded microformatted content on a web page, one of these bad boys should pick it up for you and allow you to export the contact info. Give it a try and you should see Father Christmas appearing in your address book of choice. Now you’ll never forget where to send those Christmas lists! Step 5: Some Extra Markup One of the first things we’re going to do is put a photo of Father Christmas on the hCard. We’ll be using CSS to apply a background image to a div, so we’ll be needing an extra div with a class name of “photo”. In turn, we’ll wrap the text-based elements of our hCard inside a div cunningly called “text”. Unfortunately, because of the float technique we’ll be using, we’ll have to use one of those nasty float-clearing techniques. I shall call this “christmas-cheer”, since that is what its presence will inevitably bring, of course. Oh, and let’s add a bit of text to give the page context, too: <p>Send your Christmas lists my way...</p> <div id=""hcard-Father-Christmas"" class=""vcard""> <div class=""text""> <h1><a class=""url fn"" href=""http://elliotjaystocks.com/fatherchristmas"">Father Christmas </a></h1> <a class=""email"" href=""mailto:fatherchristmas@elliotjaystocks.com?subject=Here, have some Christmas cheer!"">fatherchristmas@elliotjaystocks.com</a> <ul class=""adr""> <li class=""street-address"">25 Laughingallthe Way</li> <li class=""locality"">Snow Falls</li> <li class=""region"">Lapland</li> <li class=""postal-code"">FI-00101</li> <li class=""country-name"">Finland</li> </ul> <span class=""tel"">010 60 58 000</span> </div> <div class=""photo""></div> <br class=""christmas-cheer"" /> </div> <div class=""credits""> <p>A tutorial by <a href=""http://elliotjaystocks.com"">Elliot Jay Stocks</a> for <a href=""http://24ways.org/"">24 Ways</a></p> <p>Background: <a href=""http://sxc.hu/photo/1108741"">stock.xchng</a> | Father Christmas: <a href=""http://istockphoto.com/file_closeup/people/4575943-active-santa.php?id=4575943"">iStockPhoto</a></p> </div> Step 6: Some Christmas Sparkle So far, our hCard-housing web page is slightly less than inspiring, isn’t it? It’s time to add a bit of CSS. There’s nothing particularly radical going on here; just a simple layout, some basic typographic treatment, and the placement of the Father Christmas photo. I’d usually use a more thorough CSS reset like the one found in the YUI or Eric Meyer’s, but for this basic page, the simple * solution will do. Check out the step 6 demo to see our basic styles in place. From this… … to this: Step 7: Fun With Imagery Now it’s time to introduce a repeating background image to the <body> element. This will seamlessly repeat for as wide as the browser window becomes. But that’s fairly straightforward. How about having some fun with the Father Christmas image? If you look at the image file itself, you’ll see that it’s twice as wide as the area we can see and contains a ‘hidden’ photo of our rather camp St. Nick. As a light-hearted visual… er… ‘treat’ for users who move their mouse over the image, we move the position of the background image on the “photo” div. Check out the step 7 demo to see it working. Step 8: Progressive Enhancement Finally, this fun little project is a great opportunity for us to mess around with some advanced CSS features (some from the CSS3 spec) that we rarely get to use on client projects. (Don’t forget: no Christmas pressies for clients who want you to support IE6!)
Here are the rules we’re using to give some browsers a superior viewing experience: @font-face allows us to use Jos Buivenga’s free font ‘Fertigo Pro’ on all text; text-shadow adds a little emphasis on the opening paragraph; body > p:first-child causes only the first paragraph to receive this treatment; border-radius creates rounded corners on our main div and the links within it; and -webkit-transition allows us to gently fade in between the default and hover states of those links. And with that, we’re done! You can see the results here. It’s time to customise the page to your liking, upload it to your site, and send out the URL. And do it quickly, because I’m sure you’ve got some last-minute Christmas shopping to finish off!",2008,Elliot Jay Stocks,elliotjaystocks,2008-12-10T00:00:00+00:00,https://24ways.org/2008/a-christmas-hcard-from-me-to-you/,code 100,Moo'y Christmas,"A note from the editors: Moo has changed their API since this article was written. As the web matures, it is less and less just about the virtual world. It is becoming entangled with our world and it is harder to tell what is virtual and what is real. There are several companies who are blurring this line and making the virtual just an extension of the physical. Moo is one such company. Moo offers simple print on demand services. You can print business cards, moo mini cards, stickers, postcards and more. They give you the ability to upload your images, customize them, then have them sent to your door. Many companies allow this sort of digital to physical interaction, but Moo has taken it one step further and has built an API. Printable stocking stuffers The Moo API consists of a simple XML file that is sent to their servers. It describes all the information needed to dynamically assemble and print your object. This is very helpful, not just for when you want to print your own stickers, but when you want to offer them to your customers, friends, organization or community with no hassle. Moo handles the check-out and shipping; all you need to do is what you do best: create! Now using an API sounds complicated, but it is actually very easy. I am going to walk you through the options so you can easily be printing in no time. Before you can begin sending data to the Moo API, you need to register and get an API key. This is important, because it allows Moo to track usage and to credit you. To register, visit http://www.moo.com/api/ and click “Request an API key”. In the following examples, I will use {YOUR API KEY HERE} as a placeholder; replace that with your API key and everything will work fine. First thing you need to do is to create an XML file to describe the check-out basket. Open any text-editor and start with some XML basics. Don’t worry, this is pretty simple and Moo gives you a few tools to check your XML for errors before you order. <?xml version=""1.0"" encoding=""UTF-8""?> <moo xsi:noNamespaceSchemaLocation=""http://www.moo.com/xsd/api_0.7.xsd"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance""> <request> <version>0.7</version> <api_key>{YOUR API KEY HERE}</api_key> <call>build</call> <return_to>http://www.example.com/return.html</return_to> <fail_to>http://www.example.com/fail.html</fail_to> </request> <payload> ... </payload> </moo> Much like HTML’s <head> and <body>, Moo has created <request> and <payload> elements all wrapped in a <moo> element. The <request> element contains a few pieces of information that are the same across all the API calls.
The <version> element describes which version of the API is being used. This is more important for Moo than for you, so just stick with “0.7” for now. The <api_key> allows Moo to track sales, referrers and credit your account. The <call> element can only take “build” so that is pretty straightforward. The <return_to> and <fail_to> elements are URLs. These are optional and are the URLs the customer is redirected to if there is an error, or when the check-out process is complete. This allows for some basic branding and a custom “thank you” page which is under your control. That’s it for the <request> element, pretty easy so far! Next up is the <payload> element. What goes inside here describes what is to be printed. There are two possible elements: we can put <chooser> or we can put <products> directly inside <payload>. They work in similar ways, but they drop the customer into different parts of the Moo checkout process. If you specify <products> then you send the customer straight to the Moo payment process. If you specify <chooser> then you send the customer one step earlier, where they are allowed to pick and choose some images, remove the ones they don’t like, adjust the crop, etc. The example here will use <chooser> but with a little bit of homework you can easily adjust to <products> if you desire. ... <chooser> <product_type>sticker</product_type> <images> <url>http://example.com/images/christmas1.jpg</url> </images> </chooser> ... Inside the <chooser> element, we can see there are two basic pieces of information: the type of product we want to print, and the images that are to be printed. The <product_type> element can take one of five options and is required! The possibilities are: minicard, notecard, sticker, postcard or greetingcard. We’ll now look at two of these more closely. Moo Stickers In the Moo sticker books you get 90 small squarish stickers in a little booklet. The simplest XML you could send would be something like the following payload: ... <payload> <chooser> <product_type>sticker</product_type> <images> <url>http://example.com/image1.jpg</url> </images> <images> <url>http://example.com/image2.jpg</url> </images> <images> <url>http://example.com/image3.jpg</url> </images> </chooser> </payload> ... This creates a sticker book with only 3 unique images, but 30 copies of each image. The sticker books always print 90 stickers in multiples of the images you uploaded. That example only has 3 <images> elements, but you can easily duplicate the XML and send up to 90. The <url> should be the full path to your image and the image needs to be a minimum of 300 pixels by 300 pixels. You can add more XML to describe cropping, but the simplest option is to either let your customers choose, or to pre-crop all your images square so there are no issues.
The full XML you would post to the Moo API to print sticker books would look like this: <?xml version=""1.0"" encoding=""UTF-8""?> <moo xsi:noNamespaceSchemaLocation=""http://www.moo.com/xsd/api_0.7.xsd"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance""> <request> <version>0.7</version> <api_key>{YOUR API KEY HERE}</api_key> <call>build</call> <return_to>http://www.example.com/return.html</return_to> <fail_to>http://www.example.com/fail.html</fail_to> </request> <payload> <chooser> <product_type>sticker</product_type> <images> <url>http://example.com/image1.jpg</url> </images> <images> <url>http://example.com/image2.jpg</url> </images> <images> <url>http://example.com/image3.jpg</url> </images> </chooser> </payload> </moo> Mini-cards The mini-cards are the small cute business cards in 14×35 dimensions and come in packs of 100. Since the mini-cards are print on demand, this allows you to have 100 unique images in a single pack of cards. Just like the stickers example, we need the same XML setup. The <moo> element and <request> elements will be the same as before. The part you will focus on is the <payload> section. Since you are sending along specific information, we can’t use the <chooser> option any more. Switch this to <products> which has a child of <product>, which in turn has a <product_type> and <designs>. This might seem like a lot of work, but once you have it set up you won’t need to change it. ... <payload> <products> <product> <product_type>minicard</product_type> <designs> ... </designs> </product> </products> </payload> ... So now that we have the basic framework, we can talk about the information specific to minicards. Inside the <designs> element, you will have one <design> for each card. Much like before, this contains a way to describe the image. Note that this time the element is called <image>, not images plural. Inside the <image> element you have a <url> which points to where the image lives and a <type>. The <type> should just be set to ‘variable’. You can pass crop information here instead, but we’re going to keep it simple for this tutorial. If you are interested in how that works, you should refer to the official API documentation. ... <design> <image> <url>http://example.com/image1.jpg</url> <type>variable</type> </image> </design> ... So far, we have managed to build a pack of 100 Moo mini-cards with the same image on the front. If you wanted 100 different images, you just need to replicate this snippet 99 more times. That describes the front design, but the flip-side of your mini-cards can contain 6 lines of text, which is customizable in a variety of colors, fonts and styles. The API allows you to create different text on the back of each mini-card, something the web interface doesn’t implement. To describe the text on the mini-card we need to add a <text_collection> element inside the <design> element. If you skip this element, the back of your mini-card will just be blank, but that’s not very festive! Inside the <text_collection> element, we need to describe the type of text we want to format, so we add a <minicard> element, which in turn contains all the lines of text. Each of Moo’s printed products takes a different number of lines of text, so if you are not planning on making mini-cards, be sure to consult the documentation. For mini-cards, we can have 6 distinct lines, each with its own style and layout. Each line is represented by an element <text_line> which has several optional children. The <id> tells which line of the 6 to print the text on.
The <string> is the text you want to print and it must be shorter than 38 characters. The <bold> element is false by default, but if you want your text bolded, then add this and set it to true. The <align> element is also optional. By default it is set to align left. You can also set this to right or center if you desire. The <font> element takes one of 3 types: modern, traditional or typewriter. The default is modern. Finally, you can set the <colour>. Yes, that’s color with a ‘u’: Moo is a British company, so they get to make the rules. When you start a print on demand company, you can spell it however you want. The <colour> element takes a 6 character hex value with a leading #. <design> ... <text_collection> <minicard> <text_line> <id>(1-6)</id> <string>String, I must be less than 38 chars!</string> <bold>true</bold> <align>left</align> <font>modern</font> <colour>#ff0000</colour> </text_line> </minicard> </text_collection> </design> If you combine all of this into a mini-card request you’d get this example: <?xml version=""1.0"" encoding=""UTF-8""?> <moo xsi:noNamespaceSchemaLocation=""http://www.moo.com/xsd/api_0.7.xsd"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance""> <request> <version>0.7</version> <api_key>{YOUR API KEY HERE}</api_key> <call>build</call> <return_to>http://www.example.com/return.html</return_to> <fail_to>http://www.example.com/fail.html</fail_to> </request> <payload> <products> <product> <product_type>minicard</product_type> <designs> <design> <image> <url>http://example.com/image1.jpg</url> <type>variable</type> </image> <text_collection> <minicard> <text_line> <id>1</id> <string>String, I must be less than 38 chars!</string> <bold>true</bold> <align>left</align> <font>modern</font> <colour>#ff0000</colour> </text_line> </minicard> </text_collection> </design> </designs> </product> </products> </payload> </moo> Now you know how to construct the XML that describes what to print. Next, you need to know how to send it to Moo to make it happen! Posting to the API So your XML file is ready to go. First thing we need to do is check it to make sure it’s valid. Moo has created a simple validator where you paste in your XML, and it alerts you to problems. When you have a fully valid XML file, you’ll want to send that to the Moo API. There are a few ways to do this, but the simplest is with an HTML form. This is the sample code for an HTML form with a big “Buy My Stickers” button. Once you know that it is working, you can use all your existing HTML knowledge to style it up any way you like. <form method=""POST"" action=""http://www.moo.com/api/api.php""> <input type=""hidden"" name=""xml"" value=""<?xml version=""1.0"" encoding=""UTF-8""?> <moo xsi:noNamespaceSchemaLocation=""http://www.moo.com/xsd/api_0.7.xsd"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance""> <request>....</request> <payload>...</payload> </moo> ""> <input type=""submit"" name=""submit"" value=""Buy My Stickers""/> </form> This is just a basic <form> element that submits to the Moo API, http://www.moo.com/api/api.php, when someone clicks the button. There is a hidden input called “xml” which contains the value of the XML file we created previously. Those of you who need to “view source” to fully understand what’s happening can see a working version and peek under the hood. Using the API has advantages over uploading the images directly yourself. The images and text that you send via the API can be dynamic.
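To give you a flavour of what “dynamic” can mean, here is a rough, hypothetical sketch of generating that hidden xml value in the browser. The buildStickerXml function and the example image list are my own inventions for illustration, not part of the Moo API or its documentation:

// Hypothetical helper: assemble a sticker payload for a list of image URLs.
// The XML structure mirrors the examples above; only the function name and
// the example data are made up.
function buildStickerXml(apiKey, urls) {
 var images = '';
 for (var i = 0; i < urls.length; i++) {
  images += '<images><url>' + urls[i] + '</url></images>';
 }
 return '<?xml version=""1.0"" encoding=""UTF-8""?>' +
  '<moo xsi:noNamespaceSchemaLocation=""http://www.moo.com/xsd/api_0.7.xsd"" ' +
  'xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"">' +
  '<request><version>0.7</version><api_key>' + apiKey + '</api_key>' +
  '<call>build</call></request>' +
  '<payload><chooser><product_type>sticker</product_type>' + images +
  '</chooser></payload></moo>';
}
// Drop the generated XML into the hidden input before the form is submitted.
document.getElementsByName('xml')[0].value = buildStickerXml('{YOUR API KEY HERE}', [
 'http://example.com/image1.jpg',
 'http://example.com/image2.jpg',
 'http://example.com/image3.jpg'
]);

In practice you would more likely assemble the XML server-side from whatever data source you have, but the principle is the same.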
Some companies, like Dopplr, have taken user profiles and dynamic data that changes every minute to generate custom stickers of places that you’ve travelled to or mini-cards with a world map of all the cities you have visited. Every single customer has different travel plans and therefore different sets of stickers and mini-card maps. The API allows the most current information to be printed, on demand, in real time. Go forth and Moo’ltiply See, making an API call wasn’t that hard, was it? You are now 90% of the way to creating anything with the Moo API. With a bit of reading, you can learn that extra 10% and print any Moo product. Be on the lookout in 2009 for the official release of the 1.0 API with improvements and some extras that were not available when this article was written. This article is released under the creative-commons attribution share-a-like license. That means you are free to re-distribute it, mash it up, translate it and otherwise re-use it in ways the author never considered; in return he only asks that you mention his name. This work by Brian Suda is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License.",2008,Brian Suda,briansuda,2008-12-19T00:00:00+00:00,https://24ways.org/2008/mooy-christmas/,code 104,Sitewide Search On A Shoe String,"One of the questions I got a lot when I was building web sites for smaller businesses was whether I could create a search engine for their site. Visitors should be able to search only this site and find things without the maintainer having to put “related articles” or “featured content” links on every page by hand. Back when this was all fields, this wasn’t easy, as you either had to write your own scraping tool, use ht://dig or a paid service from providers like Yahoo, Altavista or, later on, Google. In the former case you had to swallow the bitter pill of computing and indexing all your content and storing it in a database for quick access; in the latter, it hurt your wallet. Times have moved on and nowadays you can have the same functionality for free using Yahoo’s “Build your own search service” – BOSS. The cool thing about BOSS is that it allows for a massive number of hits a day and you can mash up the returned data in any format you want. Another good feature of it is that it comes with JSON-P as an output format which makes it possible to use it without any server-side component! Starting with a working HTML form In order to add a search to your site, you start with a simple HTML form which you can use without JavaScript. Most search engines will allow you to filter results by domain. In this case we will search “bbc.co.uk”. If you use Yahoo as your standard search, this could be: <form id=""customsearch"" action=""http://search.yahoo.com/search""> <div> <label for=""p"">Search this site:</label> <input type=""text"" name=""p"" id=""term""> <input type=""hidden"" name=""vs"" id=""site"" value=""bbc.co.uk""> <input type=""submit"" value=""go""> </div> </form> The Google equivalent is: <form id=""customsearch"" action=""http://www.google.co.uk/search""> <div> <label for=""p"">Search this site:</label> <input type=""text"" name=""as_q"" id=""term""> <input type=""hidden"" name=""as_sitesearch"" id=""site"" value=""bbc.co.uk""> <input type=""submit"" value=""go""> </div> </form> In any case make sure to use the ID term for the search term and site for the site, as this is what we are going to use for the script. To make things easier, also have an ID called customsearch on the form.
To use BOSS, you should get your own developer API key for BOSS and replace the one in the demo code. There is click tracking on the search results to see how successful your app is, so you should make it your own. Adding the BOSS magic BOSS is a REST API, meaning you can use it in any HTTP request or in a browser by simply adding the right parameters to a URL. Say, for example, you want to search “bbc.co.uk” for “christmas”: all you need to do is open the following URL: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&format=xml&appid=YOUR-APPLICATION-ID Try it out and click it to see the results in XML. We don’t want XML though, which is why we get rid of the format=xml parameter, which gives us the same information in JSON: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&appid=YOUR-APPLICATION-ID JSON makes most sense when you can send the output to a function and immediately use it. For this to happen all you need is to add a callback parameter and the JSON will be wrapped in a function call. Say, for example, we want to call SITESEARCH.found() when the data has been retrieved; we can do it this way: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=YOUR-APPLICATION-ID You can use this immediately in a script node if you want to. The following code would display the total number of search results for the term christmas on bbc.co.uk as an alert: <script type=""text/javascript""> var SITESEARCH = {}; SITESEARCH.found = function(o){ alert(o.ysearchresponse.totalhits); } </script> <script type=""text/javascript"" src=""http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=Kzv_lcHV34HIybw0GjVkQNnw4AEXeyJ9Rb1gCZSGxSRNrcif_HdMT9qTE1y9LdI-""> </script> However, for our example, we need to be a bit more clever with this. Enhancing the search form Here’s the script that enhances a search form to show results below it.
SITESEARCH = function(){ var config = { IDs:{ searchForm:'customsearch', term:'term', site:'site' }, loading:'Loading results...', noresults:'No results found.', appID:'YOUR-APP-ID', results:20 }; var form; var out; function init(){ if(config.appID === 'YOUR-APP-ID'){ alert('Please get a real application ID!'); } else { form = document.getElementById(config.IDs.searchForm); if(form){ form.onsubmit = function(){ var site = document.getElementById(config.IDs.site).value; var term = document.getElementById(config.IDs.term).value; if(typeof site === 'string' && typeof term === 'string'){ if(typeof out !== 'undefined'){ out.parentNode.removeChild(out); } out = document.createElement('p'); out.appendChild(document.createTextNode(config.loading)); form.appendChild(out); var APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + term + '?callback=SITESEARCH.found&sites=' + site + '&count=' + config.results + '&appid=' + config.appID; var s = document.createElement('script'); s.setAttribute('src',APIurl); s.setAttribute('type','text/javascript'); document.getElementsByTagName('head')[0].appendChild(s); return false; } }; } } }; function found(o){ var list = document.createElement('ul'); var results = o.ysearchresponse.resultset_web; if(results){ var item,link,description; for(var i=0,j=results.length;i<j;i++){ item = document.createElement('li'); link = document.createElement('a'); link.setAttribute('href',results[i].clickurl); link.innerHTML = results[i].title; item.appendChild(link); description = document.createElement('p'); description.innerHTML = results[i]['abstract']; item.appendChild(description); list.appendChild(item); } } else { list = document.createElement('p'); list.appendChild(document.createTextNode(config.noresults)); } form.replaceChild(list,out); out = list; }; return{ config:config, init:init, found:found }; }(); Oooohhhh scary code! Let’s go through this one bit at a time: We start by creating a module called SITESEARCH and give it a configuration object: SITESEARCH = function(){ var config = { IDs:{ searchForm:'customsearch', term:'term', site:'site' }, loading:'Loading results...', noresults:'No results found.', appID:'YOUR-APP-ID', results:20 } Configuration objects are a great idea to make your code easy to change and also to override. In this case you can define different IDs than the ones agreed upon earlier, define a message to show when the results are loading, when there aren’t any results, the application ID and the number of results that should be displayed. Note: you need to replace “YOUR-APP-ID” with the real ID you retrieved from BOSS, otherwise the script will complain! var form; var out; function init(){ if(config.appID === 'YOUR-APP-ID'){ alert('Please get a real application ID!'); } else { We define form and out as variables to make sure that all the methods in the module have access to them. We then check if there was a real application ID defined. If there wasn’t, the script complains and that’s that. form = document.getElementById(config.IDs.searchForm); if(form){ form.onsubmit = function(){ var site = document.getElementById(config.IDs.site).value; var term = document.getElementById(config.IDs.term).value; if(typeof site === 'string' && typeof term === 'string'){ If the application ID was a winner, we check if the form with the provided ID exists and apply an onsubmit event handler. The first things we get are the values of the site we want to search in and the term that was entered, and we check that both are strings.
if(typeof out !== 'undefined'){ out.parentNode.removeChild(out); } out = document.createElement('p'); out.appendChild(document.createTextNode(config.loading)); form.appendChild(out); If both are strings, we check if out is undefined. We will store the loading message, and later on the list of search results, in this variable. So if out is defined, it’ll be an old version of a search (as users will re-submit the form over and over again) and we need to remove that old version. We then create a paragraph with the loading message and append it to the form. var APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + term + '?callback=SITESEARCH.found&sites=' + site + '&count=' + config.results + '&appid=' + config.appID; var s = document.createElement('script'); s.setAttribute('src',APIurl); s.setAttribute('type','text/javascript'); document.getElementsByTagName('head')[0].appendChild(s); return false; } }; } } }; Now it is time to call the BOSS API by assembling a correct REST URL, creating a script node and appending it to the head of the document. We return false to ensure the form does not get submitted as we want to stay on the page. Notice that we are using SITESEARCH.found as the callback method, which means that we need to define this one to deal with the data returned by the API. function found(o){ var list = document.createElement('ul'); var results = o.ysearchresponse.resultset_web; if(results){ var item,link,description; We create a new list and then get the resultset_web array from the data returned from the API. If there aren’t any results returned, this array will not exist which is why we need to check for it. Once we’ve done that we can define three variables to repeatedly store the item title we want to display, the link to point to and the description of the link. for(var i=0,j=results.length;i<j;i++){ item = document.createElement('li'); link = document.createElement('a'); link.setAttribute('href',results[i].clickurl); link.innerHTML = results[i].title; item.appendChild(link); description = document.createElement('p'); description.innerHTML = results[i]['abstract']; item.appendChild(description); list.appendChild(item); } We then loop over the results array and assemble a list of results with the titles in links and paragraphs with the abstract of the site. Notice the bracket notation for abstract as abstract is a reserved word in JavaScript 2 :). } else { list = document.createElement('p'); list.appendChild(document.createTextNode(config.noresults)); } form.replaceChild(list,out); out = list; }; If there aren’t any results, we define a paragraph with the no results message as list. In any case we replace the old out (the loading message) with the list and re-define out as the list. return{ config:config, init:init, found:found }; }(); All that is left to do is return the properties and methods we want to make public. In this case found needs to be public as it is called by the API’s response. We return init to make it accessible and config to allow implementers to override any of the properties.
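For example, because config is exposed, an implementer could tweak the defaults before calling init(). These particular values are purely hypothetical, not defaults the script ships with:

// Hypothetical overrides: any property of SITESEARCH.config can be
// changed before init() runs.
SITESEARCH.config.results = 10;
SITESEARCH.config.loading = 'Fetching festive results...';
SITESEARCH.config.noresults = 'Nothing found, sorry!';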
Using the script In order to use this script, all you need to do is add it after the form in the document, override the API key with your own and call init(): <form id=""customsearch"" action=""http://search.yahoo.com/search""> <div> <label for=""p"">Search this site:</label> <input type=""text"" name=""p"" id=""term""> <input type=""hidden"" name=""vs"" id=""site"" value=""bbc.co.uk""> <input type=""submit"" value=""go""> </div> </form> <script type=""text/javascript"" src=""boss-site-search.js""></script> <script type=""text/javascript""> SITESEARCH.config.appID = 'copy-the-id-you-know-to-get-where'; SITESEARCH.init(); </script> Where to go from here This is just a very simple example of what you can do with BOSS. You can define languages and regions, retrieve and display images and news and mix the results with other data sources before displaying them. One very cool feature is that by adding a view=keyterms parameter to the URL you can get the keywords of each of the results to drill deeper into the search. An example for this written in PHP is available on the YDN blog. For JavaScript solutions there is a handy wrapper called yboss available to help you go nuts.",2008,Christian Heilmann,chrisheilmann,2008-12-04T00:00:00+00:00,https://24ways.org/2008/sitewide-search-on-a-shoestring/,code 109,Geotag Everywhere with Fire Eagle,"A note from the editors: Since this article was written Yahoo! has retired the Fire Eagle service. Location, they say, is everywhere. Everyone has one, all of the time. But on the web, it’s taken until this year to see the emergence of location in the applications we use and build. The possibilities are broad. Increasingly, mobile phones provide SDKs to approximate your location wherever you are, and browser extensions such as Loki and Mozilla’s Geode provide browser-level APIs to establish your location from the proximity of wireless networks to your laptop. Yahoo’s Brickhouse group launched Fire Eagle, an ambitious location broker enabling people to take their location from any of these devices or sources, and provide it to a plethora of web services. It enables you to take the location information that only your iPhone knows about and use it anywhere on the web. That said, this is still a time of location as an emerging technology. Fire Eagle stores your location on the web (protected by application-specific access controls), but to try and give an idea of how useful and powerful your location can be — regardless of the services you use now — today’s 24ways is going to build a bookmarklet to call up your location on demand, in any web application. Location Support on the Web Over the past year, the number of applications implementing location features has increased dramatically. Plazes and Brightkite are both full-featured social networks based around where you are, whilst Pownce rolled in Fire Eagle support to allow geotagging of all the content you post to their microblogging service. Dipity’s beautiful timeline shows you moving from place to place and Six Apart’s activity stream for Movable Type started exposing your movements. The number of services that hook into Fire Eagle will increase as location awareness spreads through the developer community, but you can use your location on other sites indirectly too. Consider Flickr. Now world renowned for their incredible mapping and places features, geotagging on Flickr started out as a grassroots extension of regular tagging.
That same technique can be used to start rolling geotagging into any publishing platform you come across, for any kind of content. Machine-tags (geo:lat= and geo:lon=) and the adr and geo microformats can be used to enhance anything you write with location information. A crash course in avian inflammability Fire Eagle is a location store. A broker between services and devices which provide location and those which consume it. It’s a switchboard that controls which pieces of your location different applications can see and use, and keeps hidden anything you want kept private. A blog widget that displays your current location in public can be restricted to display just your current city, whilst a service that provides you with a list of the nearest ATMs will operate better with a precise street address. Even if your iPhone tells Fire Eagle exactly where you are, consuming applications only see what you want them to see. It’s important for users to realise that they’re in control, but also important for application developers to remember that you cannot rely on having super-accurate information available all the time. You need to build location aware applications which degrade gracefully, because users will provide fuzzier information — either through choice, or through less accurate sources. Application-specific permissions are controlled through an OAuth API. Each application has a unique key, used to request a second, user-specific key that permits access to that user’s information. You store that user key and it remains valid until such a time as the user revokes your application’s access. Unlike with passwords, these keys are unique per application, so revoking the access rights of one application doesn’t break all the others. Building your first Fire Eagle app; Geomarklet Fire Eagle’s developer documentation can take you through examples of writing simple applications using server side technologies (PHP, Python). Here, we’re going to write a client-side bookmarklet to make your location available in every site you use. It’s designed to fast-track the experience of having location available everywhere on the web, and show you how that can be really handy. Hopefully, this will set you thinking about how location can enhance the new applications you build in 2009. An oddity of bookmarklets Bookmarklets (or ‘favlets’, for those of an MSIE persuasion) are a strange environment to program in. Critically, you have no persistent storage available. As such, using token-auth APIs in a static environment requires you to build your application in a slightly strange way; authing yourself in advance and then hardcoding the keys into your script. Get started Before you do anything else, go to http://fireeagle.com and log in, get set up if you need to and by all means take a look around. Take a look at the mobile updaters section of the application gallery and perhaps pick out an app that will update Fire Eagle from your phone or laptop. Once that’s done, you need to register for an application key in the developer section. Head straight to /developer/create and complete the form. Since you’re building a standalone application, choose ‘Auth for desktop applications’ (rather than web applications), and select that you’ll be ‘accessing location’, not updating.
At the end of this process, you’ll have two application keys, a ‘Consumer Key’ and a ‘Consumer Secret’, which look like these: Consumer Key luKrM9U1pMnu Consumer Secret ZZl9YXXoJX5KLiKyVrMZffNEaBnxnd6M These keys combined allow your application to make requests to Fire Eagle. Next up, you need to auth yourself, granting your new application permission to use your location. Because bookmarklets don’t have local storage, you can’t integrate the auth process into the bookmarklet itself — it would have no way of storing the returned key. Instead, I’ve put together a simple web frontend through which you can auth with your application. Head to Auth me, Amadeus!, enter the application keys you just generated and hit ‘Authorize with Fire Eagle’. You’ll be taken to the Fire Eagle website, just as in regular Fire Eagle applications, and after granting access to your app, be redirected back to Amadeus which will provide you with your user tokens. These tokens are used in subsequent requests to read your location. And, skip to the end… The process of building the bookmarklet, making requests to Fire Eagle, rendering it to the page and so forth follows, but if you’re the impatient type, you might like to try this out right now. Take your four API keys from above, and drag the following link to your Bookmarks Toolbar; it contains all the code described below. Before you can use it, you need to edit in your own API keys. Open your browser’s bookmark editor and where you find text like ‘YOUR_CONSUMER_KEY_HERE’, swap in the corresponding key you just generated. Get Location Bookmarklet Basics To start on the bookmarklet code, set out a basic JavaScript module-pattern structure: var Geomarklet = function() { return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); Next we’ll add the keys obtained in the setup step, and also some basic Fire Eagle support objects: var Geomarklet = function() { var Keys = { consumer_key: 'IuKrJUHU1pMnu', consumer_secret: 'ZZl9YXXoJX5KLiKyVEERTfNEaBnxnd6M', user_token: 'xxxxxxxxxxxx', user_secret: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' }; var LocationDetail = { EXACT: 0, POSTAL: 1, NEIGHBORHOOD: 2, CITY: 3, REGION: 4, STATE: 5, COUNTRY: 6 }; var index_offset; return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); The Location Hierarchy A successful Fire Eagle query returns an object called the ‘location hierarchy’. Depending on the level of detail shared, the index of a particular piece of information in the array will vary. The LocationDetail object maps the array indices of each level in the hierarchy to something comprehensible, whilst the index_offset variable is an adjustment based on the detail of the result returned. The location hierarchy object looks like this, providing a granular breakdown of a location, in human consumable and machine-friendly forms.
""user"": { ""location_hierarchy"": [{ ""level"": 0, ""level_name"": ""exact"", ""name"": ""707 19th St, San Francisco, CA"", ""normal_name"": ""94123"", ""geometry"": { ""type"": ""Point"", ""coordinates"": [ - 0.2347530752, 67.232323] }, ""label"": null, ""best_guess"": true, ""id"": , ""located_at"": ""2008-12-18T00:49:58-08:00"", ""query"": ""q=707%2019th%20Street,%20Sf"" }, { ""level"": 1, ""level_name"": ""postal"", ""name"": ""San Francisco, CA 94114"", ""normal_name"": ""12345"", ""woeid"": , ""place_id"": """", ""geometry"": { ""type"": ""Polygon"", ""coordinates"": [], ""bbox"": [] }, ""label"": null, ""best_guess"": false, ""id"": 59358791, ""located_at"": ""2008-12-18T00:49:58-08:00"" }, { ""level"": 2, ""level_name"": ""neighborhood"", ""name"": ""The Mission, San Francisco, CA"", ""normal_name"": ""The Mission"", ""woeid"": 23512048, ""place_id"": ""Y12JWsKbApmnSQpbQg"", ""geometry"": { ""type"": ""Polygon"", ""coordinates"": [], ""bbox"": [] }, ""label"": null, ""best_guess"": false, ""id"": 59358801, ""located_at"": ""2008-12-18T00:49:58-08:00"" }, } In this case the first object has a level of 0, so the index_offset is also 0. Prerequisites To query Fire Eagle we call in some existing libraries to handle the OAuth layer and the Fire Eagle API call. Your bookmarklet will need to add the following scripts into the page: The SHA1 encryption algorithm The OAuth wrapper An extension for the OAuth wrapper The Fire Eagle wrapper itself When the bookmarklet is first run, we’ll insert these scripts into the document. We’re also inserting a stylesheet to dress up the UI that will be generated. If you want to follow along any of the more mundane parts of the bookmarklet, you can download the full source code. Rendering This bookmarklet can be extended to support any formatting of your location you like, but for sake of example I’m going to build three common formatters that you’ll find useful for common location scenarios: Sites which already ask for your location; and in publishing systems that accept tags or HTML mark-up. All the rendering functions are items in a renderers object, so they can be iterated through easily, making it trivial to add new formatting functions as your find new use cases (just add another function to the object). var renderers = { geotag: function(user) { if(LocationDetail.EXACT !== index_offset) { return false; } else { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; return ""geo:lat="" + coords[0] + "", geo:lon="" + coords[1]; } }, city: function(user) { if(LocationDetail.CITY < index_offset) { return false; } else { return user.location_hierarchy[LocationDetail.CITY - index_offset].name; } } You should always fail gracefully, and in line with catering to users who choose not to share their location precisely, always check that the location has been returned at the level you require. Geotags are expected to be precise, so if an exact location is unavailable, returning false will tell the rendering aspect of the bookmarklet to ignore the function altogether. These first two are quite simple, geotag returns geo:lat=-0.2347530752, geo:lon=67.232323 and city returns San Francisco, CA. 
This final renderer creates a chunk of HTML with the adr and geo microformats, using all available aspects of the location hierarchy, and can be used to geotag any content you write on your blog or in comments: html: function(user) { var geostring = ''; var adrstring = ''; var adr = []; adr.push('<p class=""adr"">'); // city if(LocationDetail.CITY >= index_offset) { adr.push( '\n <span class=""locality"">' + user.location_hierarchy[LocationDetail.CITY-index_offset].normal_name + '</span>,' ); } // region if(LocationDetail.REGION >= index_offset) { adr.push( '\n <span class=""region"">' + user.location_hierarchy[LocationDetail.REGION-index_offset].normal_name + '</span>,' ); } // state if(LocationDetail.STATE >= index_offset) { adr.push( '\n <span class=""region"">' + user.location_hierarchy[LocationDetail.STATE-index_offset].normal_name + '</span>,' ); } // postal if(LocationDetail.POSTAL >= index_offset) { adr.push( '\n <span class=""postal-code"">' + user.location_hierarchy[LocationDetail.POSTAL-index_offset].normal_name + '</span>,' ); } // country if(LocationDetail.COUNTRY >= index_offset) { adr.push( '\n <span class=""country-name"">' + user.location_hierarchy[LocationDetail.COUNTRY-index_offset].normal_name + '</span>' ); } adr.push('\n</p>\n'); adrstring = adr.join(''); if(LocationDetail.EXACT === index_offset) { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; geostring = '<p class=""geo"">' +'\n <span class=""latitude"">' + coords[0] + '</span>;' + '\n <span class=""longitude"">' + coords[1] + '</span>\n</p>\n'; } return (adrstring + geostring); } (Note that the postal code is output before the country name, so that the country, as the last item, isn’t followed by a dangling comma.) Here we check the availability of every level of location and build it into the adr and geo patterns as appropriate. Just as for the geotag function, if there’s no exact location the geo markup won’t be returned. Finally, there’s a rendering method which creates a container for all this data, renders all the applicable location formats and then displays them in the page for a user to copy and paste. You can throw this together with DOM methods and some simple styling, or roll in some components from YUI or jQuery to handle drawing full-featured overlays. You can see this simple implementation for rendering in the full source code. Make the call With a framework in place to render Fire Eagle’s location hierarchy, the only thing that remains is to actually request your location. Since we already authed through Amadeus earlier, that’s as simple as instantiating the Fire Eagle JavaScript wrapper and making a single function call. It’s a big deal that, whilst a lot of new technologies like OAuth add some complexity and require new knowledge to work with, APIs like Fire Eagle are really very simple indeed.
return { run: function() { insert_prerequisites(); setTimeout( function() { var fe = new FireEagle( Keys.consumer_key, Keys.consumer_secret, Keys.user_token, Keys.user_secret ); var script = document.createElement('script'); script.type = 'text/javascript'; script.src = fe.getUserUrl( FireEagle.RESPONSE_FORMAT.json, 'Geomarklet.callback' ); document.body.appendChild(script); }, 2000 ); }, callback: function(json) { if(json.rsp && 'fail' == json.rsp.stat) { alert('Error ' + json.rsp.code + "": "" + json.rsp.message); } else { index_offset = json.user.location_hierarchy[0].level; draw_selector(json); } } }; We first insert the prerequisite scripts required for the Fire Eagle request to function, and to prevent trying to instantiate the FireEagle object before it’s been loaded over the wire, the remaining instantiation and request is wrapped inside a setTimeout delay. We then create the request URL, referencing the Geomarklet.callback callback function and then append the script to the document body — allowing a cross-domain request. The callback itself is quite simple. Check for the presence and value of rsp.stat to test for errors, and display them as required. If the request is successful, set the index_offset — to adjust for the granularity of the location hierarchy — and then pass the object to the renderer. The result? When Geomarklet.run() is called, your location from Fire Eagle is read, and each rendering is displayed on the page in an easily copy-and-pasteable form, ready to be used however you need. Deploy The final step is to convert this code into a long string for use as a bookmarklet. Easiest for Mac users is the JavaScript bundle in TextMate — choose Bundles: JavaScript: Copy as Bookmarklet to Clipboard. Then create a new ‘Get Location’ bookmark in your browser of choice and paste in. Those without TextMate can shrink their code down into a single line by first running their code through the JSLint tool (to ensure the code is free from errors and has all the required semi-colons) and then use a find-and-replace tool to remove line breaks from your code (or even run your code through JSMin to shrink it down). With the bookmarklet created and added to your bookmarks bar, you can now call up your location on any page at all. Get a feel for a web where your location is just another reliable part of the browsing experience. Where next? So, the Geomarklet you’ve been guided through is a pretty simple premise and pretty simple output. But from this base you can start to extend: Add code that will insert each of the location renderings directly into form fields, perhaps, or how about site-specific handlers to add your location tags into the correct form field in WordPress or Tumblr? Paste in your current location to Google Maps? Or Flickr? Geomarklet gives you a base to start experimenting with location on your own pages and the sites you browse daily. The introduction of consumer-accessible geo to the web is an adventure of discovery; not so much discovering new locations, but discovering location itself.",2008,Ben Ward,benward,2008-12-21T00:00:00+00:00,https://24ways.org/2008/geotag-everywhere-with-fire-eagle/,code 110,Shiny Happy Buttons,"Since Mac OS X burst onto our screens, glossy, glassy, shiny buttons have been almost de rigueur, and have essentially, along with reflections and rounded corners, become a cliché of Web 2.0 “design”. But if you can’t beat ‘em you’d better join ‘em.
So, in this little contribution to our advent calendar, we’re going to take a plain old boring HTML button, and 2.0 it up the wazoo. But, here’s the catch. We’ll use no images, either in our HTML or our CSS. No sliding doors, no image replacement techniques. Just straight up, CSS, CSS3 and a bit of experimental CSS. And, it will be compatible with pretty much any browser (though with some progressive enhancement for those who keep up with the latest browsers). The HTML We’ll start with our HTML. <button type=""submit"">This is a shiny button</button> OK, so it’s not shiny yet – but boy will it ever be. Before styling, that’s going to look like this. Ironically, depending on the operating system and browser you are using, it may well be a shiny button already, but that’s not the point. We want to make it shiny 2.0. Our mission is to make it look something like this. If you want to follow along at home, keep in mind that depending on which browser you are using you may see fewer of the CSS effects we’ve added to create the button. As of writing, only in Safari are all the effects we’ll apply supported. Taking a look at our finished product, here’s what we’ve done to it: We’ve given the button some padding and a width. We’ve changed the text color, and given the text a drop shadow. We’ve given the button a border. We’ve given the button some rounded corners. We’ve given the button a drop shadow. We’ve given the button a gradient background. and remember, all without using any images. Styling the button So, let’s get to work. First, we’ll give the element some padding and a width: button { padding: .5em; width: 15em; } Next, we’ll add the text color, and the drop shadow: color: #ffffff; text-shadow: 1px 1px 1px #000; A note on text-shadow If you’ve not seen text-shadows before, well, here’s the quick back-story. Text shadow was introduced in CSS2, but only supported in Safari (version 1!) some years later. It was removed from CSS2.1, but returned in CSS3 (in the text module). It’s now supported in Safari, Opera and Firefox (3.1). Internet Explorer has a shadow filter, but the syntax is completely different. So, how do text-shadows work? The three length values specify respectively a horizontal offset, a vertical offset and a blur (the greater the number the more blurred the shadow will be), and finally a color value for the shadow. Rounding the corners Now we’ll add a border, and round the corners of the element: border: solid thin #882d13; -webkit-border-radius: .7em; -moz-border-radius: .7em; border-radius: .7em; Here, we’ve used the same property in three slightly different forms. We add the browser specific prefix for Webkit and Mozilla browsers, because right now, both of these browsers only support border radius as an experimental property. We also add the standard property name, for browsers that do support the property fully in the future. The benefit of the browser specific prefix is that if a browser only partly supports a given property, we can easily avoid using the property with that browser simply by not adding the browser specific prefix. At present, as you might guess, border-radius is supported in Safari and Firefox, but in each the relevant prefix is required. border-radius takes a length value, such as pixels. (It can also take two length values, but that’s for another Christmas.) In this case, as with padding, I’ve used ems, which means that as the user scales the size of text up and down, the radius will scale as well.
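For comparison, and purely as an illustrative variant rather than anything we’ll use in the final button, a fixed pixel radius would look like this:

button {
 -webkit-border-radius: 5px;
 -moz-border-radius: 5px;
 border-radius: 5px;
}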
You can test the difference by trying that pixel version and then zooming the text size up and down. We’re well and truly on the way now. All we need to do is add a shadow to the button, and then a gradient background. In CSS3 there’s the box-shadow property, currently only supported in Safari 3. It’s very similar to text-shadow – you specify a horizontal and vertical offset, a blur value and a color. -webkit-box-shadow: 2px 2px 3px #999; box-shadow: 2px 2px 2px #bbb; Once more, we require the “experimental” -webkit- prefix, as Safari’s support for this property is still considered by its developers to be less than perfect. Gradient Background So, all we have left now is to add our shiny gradient effect. Now of course, people have been doing this kind of thing with images for a long time. But if we can avoid them all the better. Smaller pages, faster downloads, and more scalable designs that adapt better to the user’s font size preference. But how can we add a gradient background without an image? Here we’ll look at the only property that is not as yet part of the CSS standard – Apple’s gradient function for use anywhere you can use images with CSS (in this case backgrounds). In essence, this takes SVG gradients, and makes them available via CSS syntax. Here’s what the property and its value looks like: background-image: -webkit-gradient(linear, left top, left bottom, from(#e9ede8), to(#ce401c),color-stop(0.4, #8c1b0b)); Zooming in on the gradient function, it has this basic form: -webkit-gradient(type, point, point, from(color), to(color),color-stop(where, color)); Which might look complicated, but is less so than at first glance. The name of the function is gradient (and in this case, because it is experimental, we use the -webkit- prefix). You might not have seen CSS functions before, but there are others, including the attr() function, used with generated content. A function returns a value that can be used as a property value – here we are using it as a background image. Next we specify the type of the gradient. Here we have a linear gradient, and there are also radial gradients. After that, we specify the start and end points of the gradient – in our case the top and bottom of the element, in a vertical line. We then specify the start and end colors – and finally one stop color, located at 40% of the way down the element. Together, this creates a gradient that smoothly transitions from the start color at the top, vertically to the stop color, then smoothly transitions to the end color. There’s one last thing. What color will the background of our button be if the browser doesn’t support gradients? It will be white (or possibly some default color for buttons), which may make the text difficult or impossible to read. So, we’ll add a background color as well (see why the validator is always warning you when a color but not a background color is specified for an element?).
If we put it all together, here’s what we have: button { width: 15em; padding: .5em; color: #ffffff; text-shadow: 1px 1px 1px #000; border: solid thin #882d13; -webkit-border-radius: .7em; -moz-border-radius: .7em; border-radius: .7em; -webkit-box-shadow: 2px 2px 3px #999; box-shadow: 2px 2px 2px #bbb; background-color: #ce401c; background-image: -webkit-gradient(linear, left top, left bottom, from(#e9ede8), to(#ce401c),color-stop(0.4, #8c1b0b)); } Which looks like this in various browsers: In Safari (3) In Firefox 3.1 (3.0 supports border-radius but not text-shadow) In Opera 10 and of course in Internet Explorer (version 8 shown here) But it looks different in different browsers Yes, it does look different in different browsers, but we all know the answer to the question “do web sites need to look the same in every browser?“. Even if you really think sites should look the same in every browser, hopefully this little tutorial has whetted your appetite for what CSS3 and experimental CSS that’s already supported in widely used browsers (and we haven’t even touched on animations and similar effects!). I hope you’ve enjoyed our little CSSMas present, and look forward to seeing your shiny buttons everywhere on the web. Oh, and there’s just a bit of homework – your job is to use the :hover selector, and make a gradient in the hover state.",2008,John Allsopp,johnallsopp,2008-12-18T00:00:00+00:00,https://24ways.org/2008/shiny-happy-buttons/,code 116,The IE6 Equation,"It is the destiny of one browser to serve as the nemesis of web developers everywhere. At the birth of the Web Standards movement, that role was played by Netscape Navigator 4; an outdated browser that refused to die. Its tenacious existence hampered the adoption of modern standards. Today that role is played by Internet Explorer 6. There’s a sensation that I’m sure you’re familiar with. It’s a horrible mixture of dread and nervousness. It’s the feeling you get when—after working on a design for a while in a standards-compliant browser like Firefox, Safari or Opera—you decide that you can no longer put off the inevitable moment when you must check the site in IE6. Fingers are crossed, prayers are muttered, but alas, to no avail. The nemesis browser invariably screws something up. What do you do next? If the differences in IE6 are minor, you could just leave it be. After all, websites don’t need to look exactly the same in all browsers. But if there are major layout issues and a significant portion of your audience is still using IE6, you’ll probably need to roll up your sleeves and start fixing the problems. A common approach is to quarantine IE6-specific CSS in a separate stylesheet. This stylesheet can then be referenced from the HTML document using conditional comments like this: <!--[if lt IE 7]> <link rel=""stylesheet"" href=""ie6.css"" type=""text/css"" media=""screen"" /> <![endif]--> That stylesheet will only be served up to Internet Explorer where the version number is less than 7. You can put anything inside a conditional comment. You could put a script element in there. So as well as serving up browser-specific CSS, it’s possible to serve up browser-specific JavaScript. A few years back, before Microsoft released Internet Explorer 7, JavaScript genius Dean Edwards wrote a script called IE7. This amazing piece of code uses JavaScript to make Internet Explorer 5 and 6 behave like a standards-compliant browser. Dean used JavaScript to bootstrap IE’s CSS support.
Because the script is specifically targeted at Internet Explorer, there’s no point in serving it up to other browsers. Conditional comments to the rescue: <!--[if lt IE 7]> <script src=""http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js"" type=""text/javascript""></script> <![endif]--> Standards-compliant browsers won’t fetch the script. Users of IE6, on the other hand, will pay a kind of bad browser tax by having to download the JavaScript file. So when should you develop an IE6-specific stylesheet and when should you just use Dean’s JavaScript code? This is the question that my co-worker Natalie Downe and I set out to answer one morning at Clearleft. We realised that in order to answer that question you first need to answer two other questions: how much time does it take to develop for IE6? And how much of your audience is using IE6? Let’s say that t represents the total development time. Let t6 represent the portion of that time you spend developing for IE6. If your total audience is a, then a6 is the portion of your audience using IE6. With some algebraic help from our mathematically minded co-worker Cennydd Bowles, Natalie and I came up with the following equation to calculate the percentage likelihood that you should be using Dean’s IE7 script: p = 50 [ log ( (a × t6) / (t × a6) ) + 1 ] Try plugging in your own numbers. For example, if developing for IE6 eats up 20% of your time (t6/t = 0.2) while only 5% of your audience uses that browser (a6/a = 0.05), then p = 50 [ log(0.2/0.05) + 1 ] ≈ 80 – a strong vote for the script (taking the logarithm as base ten). If you spend a lot of time developing for IE6 and only a small portion of your audience is using that browser, you’ll get a very high number out of the equation; you should probably use the IE7 script. But if you only spend a little time developing for IE6 and a significant portion of your audience is still using that browser, you’ll get a very small value for p; you might as well write an IE6-specific stylesheet. Of course this equation is somewhat disingenuous. While it’s entirely possible to research the percentage of your audience still using IE6, it’s not so easy to figure out how much of your development time will be spent developing for that one browser. You can’t really know until you’ve already done the development, by which time the equation is irrelevant. Instead of using the equation, you could try imposing a limit on how long you will spend developing for IE6. Get your site working in standards-compliant browsers first, then give yourself a time limit to get it working in IE6. If you can’t solve all the issues in that time limit, switch over to using Dean’s script. You could even make the time limit directly proportional to the percentage of your audience using IE6. If 20% of your audience is still using IE6 and you’ve just spent five days getting the site working in standards-compliant browsers, give yourself one day to get it working in IE6. But if 50% of your audience is still using IE6, be prepared to spend 2.5 days wrestling with your nemesis. All of these different methods for dealing with IE6 demonstrate that there’s no one single answer that works for everyone. They also highlight a problem with the current debate around dealing with IE6. There’s no shortage of blog posts, articles and even entire websites discussing when to drop support for IE6. But very few of them take the time to define what they mean by “support.” This isn’t a binary issue. There is no Boolean answer. Instead, there’s a sliding scale of support: Block IE6 users from your site. Develop with web standards and don’t spend any development time testing in IE6. Use the Dean Edwards IE7 script to bootstrap CSS support in IE6. Write an IE6 stylesheet to address layout issues. 
Make your site look exactly the same in IE6 as in any other browser. Each end of that scale is extreme. I don’t think that anybody should be actively blocking any browser but neither do I think that users of an outdated browser should get exactly the same experience as users of a more modern browser. The real meanings of “supporting” or “not supporting” IE6 lie somewhere in-between those extremes. Just as I think that semantics are important in markup, they are equally important in our discussion of web development. So let’s try to come up with some better terms than using the catch-all verb “support.” If you say in your client contract that you “support” IE6, define exactly what that means. If you find yourself in a discussion about “dropping support” for IE6, take the time to explain what you think that entails. The web developers at Yahoo! are on the right track with their concept of graded browser support. I’m interested in hearing more ideas of how to frame this discussion. If we can all agree to use clear and precise language, we stand a better chance of defeating our nemesis.",2008,Jeremy Keith,jeremykeith,2008-12-08T00:00:00+00:00,https://24ways.org/2008/the-ie6-equation/,code 117,The First Tool You Reach For,"Microsoft recently announced that Internet Explorer 8 will be released in the first half of 2009. Compared to the standards support of other major browsers, IE8 will not be especially great, but it will finally catch up with the state of the art in one specific area: support for CSS tables. This milestone has the potential to trigger an important change in the way you approach web design. To show you just how big a difference CSS tables can make, think about how you might code a fluid, three-column layout from scratch. Just to make your life more difficult, give it one fixed-width column, with a background colour that differs from the rest of the page. Ready? Go! Okay, since you’re the sort of discerning web designer who reads 24ways, I’m going to assume you at least considered doing this without using HTML tables for the layout. If you’re especially hardcore, I imagine you began thinking of CSS floats, negative margins, and faux columns. If you did, colour me impressed! Now admit it: you probably also gave an inward sigh about the time it would take to figure out the math on the negative margin overlaps, check for dropped floats in Internet Explorer and generally wrestle each of the major browsers into giving you what you want. If after all that you simply gave up and used HTML tables, I can’t say I blame you. There are plenty of professional web designers out there who still choose to use HTML tables as their main layout tool. Sure, they may know that users with screen readers get confused by inappropriate use of tables, but they have a job to do, and they want tools that will make that job easy, not difficult. Now let me show you how to do it with CSS tables. First, we have a div element for each of our columns, and we wrap them all in another two divs: <div class=""container""> <div> <div id=""menu""> ⋮ </div> <div id=""content""> ⋮ </div> <div id=""sidebar""> ⋮ </div> </div> </div> Don’t sweat the “div clutter” in this code. Unlike tables, divs have no semantic meaning, and can therefore be used liberally (within reason) to provide hooks for the styles you want to apply to your page. Using CSS, we can set the outer div to display as a table with collapsed borders (i.e. adjacent cells share a border) and a fixed layout (i.e. 
cell widths unaffected by their contents): .container { display: table; border-collapse: collapse; table-layout: fixed; } With another two rules, we set the middle div to display as a table row, and each of the inner divs to display as table cells: .container > div { display: table-row; } .container > div > div { display: table-cell; } Finally, we can set the widths of the cells (and of the table itself) directly: .container { width: 100%; } #menu { width: 200px; } #content { width: auto; } #sidebar { width: 25%; } And, just like that, we have a rock solid three-column layout, ready to be styled to your own taste, like in this example: This example will render perfectly in reasonably up-to-date versions of Firefox, Safari and Opera, as well as the current beta release of Internet Explorer 8. CSS tables aren’t only useful for multi-column page layout; they can come in handy in most any situation that calls for elements to be displayed side-by-side on the page. Consider this simple login form layout: The incantation required to achieve this layout using CSS floats may be old hat to you by now, but try to teach it to a beginner, and watch his eyes widen in horror at the hoops you have to jump through (not to mention the assumptions you have to build into your design about the length of the form labels). Here’s how to do it with CSS tables: <form action=""/login"" method=""post""> <div> <div> <label for=""username"">Username:</label> <span class=""input""><input type=""text"" name=""username"" id=""username""/></span> </div> <div> <label for=""userpass"">Password:</label> <span class=""input""><input type=""password"" name=""userpass"" id=""userpass""/></span> </div> <div class=""submit""> <label for=""login""></label> <span class=""input""><input type=""submit"" name=""login"" id=""login"" value=""Login""/></span> </div> </div> </form> This time, we’re using a mixture of divs and spans as semantically transparent styling hooks. Let’s look at the CSS code. First, we set up the outer div to display as a table, the inner divs to display as table rows, and the labels and spans as table cells (with right-aligned text): form > div { display: table; } form > div > div { display: table-row; } form label, form span { display: table-cell; text-align: right; } We want the first column of the table to be wide enough to accommodate our labels, but no wider. With CSS float techniques, we had to guess at what that width was likely to be, and adjust it whenever we changed our form labels. With CSS tables, we can simply set the width of the first column to something very small (1em), and then use the white-space property to force the column to the required width: form label { white-space: nowrap; width: 1em; } To polish off the layout, we’ll make our text and password fields occupy the full width of the table cells that contain them: input[type=text], input[type=password] { width: 100%; } The rest is margins, padding and borders to get the desired look. Check out the finished example. As the first tool you reach for when approaching any layout task, CSS tables make a lot more sense to your average designer than the cryptic incantations called for by CSS floats. When IE8 is released and all major browsers support CSS tables, we can begin to gradually deploy CSS table-based layouts on sites that are more and more mainstream. In our new book, Everything You Know About CSS Is Wrong!, Rachel Andrew and I explore in much greater detail how CSS tables work as a page layout tool in the real world. 
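To recap the login form technique before moving on, its layout-critical rules gather up into one place like this (the decorative margins, padding and borders are omitted, as above): form > div { display: table; } form > div > div { display: table-row; } form label, form span { display: table-cell; text-align: right; } form label { white-space: nowrap; width: 1em; } input[type=text], input[type=password] { width: 100%; } 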
CSS tables have their quirks just like floats do, but they don’t tend to affect common layout tasks, and the workarounds tend to be less fiddly too. Check it out, and get ready for the next big step forward in web design with CSS.",2008,Kevin Yank,kevinyank,2008-12-13T00:00:00+00:00,https://24ways.org/2008/the-first-tool-you-reach-for/,code 121,Hide And Seek in The Head,"If you want your JavaScript-enhanced pages to remain accessible and understandable to scripted and noscript users alike, you have to think before you code. Which functionalities are required (ie. should work without JavaScript)? Which ones are merely nice-to-have (ie. can be scripted)? You should only start creating the site when you’ve taken these decisions. Special HTML elements Once you have a clear idea of what will work with and without JavaScript, you’ll likely find that you need a few HTML elements for the noscript version only. Take this example: A form has a nifty bit of Ajax that automatically and silently sends a request once the user enters something in a form field. However, in order to preserve accessibility, the user should also be able to submit the form normally. So the form should have a submit button in noscript browsers, but not when the browser supports sufficient JavaScript. Since the button is meant for noscript browsers, it must be hard-coded in the HTML: <input type=""submit"" value=""Submit form"" id=""noScriptButton"" /> When JavaScript is supported, it should be removed: var checkJS = [check JavaScript support]; window.onload = function () { if (!checkJS) return; document.getElementById('noScriptButton').style.display = 'none'; } Problem: the load event Although this will likely work fine in your testing environment, it’s not completely correct. What if a user with a modern, JavaScript-capable browser visits your page, but has to wait for a huge graphic to load? The load event fires only after all assets, including images, have been loaded. So this user will first see a submit button, but then all of a sudden it’s removed. That’s potentially confusing. Fortunately there’s a simple solution: play a bit of hide and seek in the <head>: var checkJS = [check JavaScript support]; if (checkJS) { document.write('<style>#noScriptButton{display: none}</style>'); } First, check if the browser supports enough JavaScript. If it does, document.write an extra <style> element that hides the button. The difference with the previous technique is that the document.write command is outside any function, and is therefore executed while the JavaScript is being parsed. Thus, the #noScriptButton{display: none} rule is written into the document before the actual HTML is received. That’s exactly what we want. If the rule is already present at the moment the HTML for the submit button is received and parsed, the button is hidden immediately. Even if the user (and the load event) have to wait for a huge image, the button is already hidden, and both scripted and noscript users see the interface they need, without any potentially confusing flashes of useless content. In general, if you want to hide content that’s not relevant to scripted users, give the hide command in CSS, and make sure it’s given before the HTML element is loaded and parsed. Alternative Some people won’t like to use document.write. They could also add an empty <link /> element to the <head> and give it an href attribute once the browser’s JavaScript capabilities have been evaluated. 
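A minimal sketch of that alternative might look like the following – note that the id value, the file name and the simple object-detection test here are illustrative assumptions, not prescriptions: <link id=""jsStyles"" rel=""stylesheet"" type=""text/css"" href="""" /> <script type=""text/javascript""> // a crude capability check - substitute whatever test suits your script var checkJS = (document.getElementById && document.styleSheets); if (checkJS) { document.getElementById('jsStyles').href = 'js-only.css'; } </script> 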
The <link /> element is made to refer to a style sheet that contains the crucial #noScriptButton{display: none}, and everything works fine. Important note: The script needs access to the <link />, and the only way to ensure that access is to include the empty <link /> element before your <script> tag.",2006,Peter-Paul Koch,ppk,2006-12-06T00:00:00+00:00,https://24ways.org/2006/hide-and-seek-in-the-head/,code 124,Writing Responsible JavaScript,"Without a doubt, JavaScript has been making something of a comeback in the last year. If you’re involved in client-side development in any way at all, chances are that you’re finding yourself writing more JavaScript now than you have in a long time. If you learned most of your JavaScript back when DHTML was all the rage and before DOM Scripting was in vogue, there have been some big shifts in the way scripts are written. Most of these are in the way event handlers are assigned and functions declared. Both of these changes are driven by the desire to write scripts that are responsible page citizens, both in not tying behaviour to content and in taking care not to conflict with other scripts. I thought it may be useful to look at some of these more responsible approaches to learn how to best write scripts that are independent of the page content and are safely portable between different applications. Event Handling Back in the heady days of Web 1.0, if you wanted to have an object on the page react to something like a click, you would simply go ahead and attach an onclick attribute. This was easy and understandable, but much like the font tag or the style attribute, it has the downside of mixing behaviour or presentation in with our content. As we’ve learned with CSS, there are big benefits in keeping those layers separate. Hey, if it works for CSS, it should work for JavaScript too. Just like with CSS, instead of adding an attribute to our element within the document, the more responsible way to do that is to look for the item from your script (like CSS does with a selector) and then assign the behaviour to it. To give an example, take this oldskool onclick use case: <a id=""anim-link"" href=""#"" onclick=""playAnimation()"">Play the animation</a> This could be rewritten by removing the onclick attribute, and instead doing the following from within your JavaScript. document.getElementById('anim-link').onclick = playAnimation; It’s all in the timing Of course, it’s never quite that easy. To be able to attach that onclick, the element you’re targeting has to exist in the page, and the page has to have finished loading for the DOM to be available. This is where the onload event is handy, as it fires once everything has finished loading. Common practise is to have a function called something like init() (short for initialise) that sets up all these event handlers as soon as the page is ready. Back in the day we would have used the onload attribute on the <body> element to do this, but of course what we really want is: window.onload = init; As an interesting side note, we’re using init here rather than init() so that the function is assigned to the event. If we used the parentheses, the init function would have been run at that moment, and the result of running the function (rather than the function itself) would be assigned to the event. Subtle, but important. As is becoming apparent, nothing is ever simple, and we can’t just go around assigning our initialisation function to window.onload. 
What if we’re using other scripts in the page that might also want to listen out for that event? Whichever script got there last would overwrite everything that came before it. To manage this, we need a script that checks for any existing event handlers, and adds the new handler to them. Most of the JavaScript libraries have their own systems for doing this. If you’re not using a library, Simon Willison has a good stand-alone example: function addLoadEvent(func) { var oldonload = window.onload; if (typeof window.onload != 'function') { window.onload = func; } else { window.onload = function() { if (oldonload) { oldonload(); } func(); } } } Obviously this is just a toe in the events model’s complex waters. Some good further reading is PPK’s Introduction to Events. Carving out your own space Another problem that rears its ugly head when combining multiple scripts on a single page is that of making sure that the scripts don’t conflict. One big part of that is ensuring that no two scripts are trying to create functions or variables with the same names. Reusing a name in JavaScript just overwrites whatever was there before it. When you create a function in JavaScript, you’ll be familiar with doing something like this. function foo() { ... goodness ... } This is actually just creating a variable called foo and assigning a function to it. It’s essentially the same as the following. var foo = function() { ... goodness ... } This name foo is by default created in what’s known as the ‘global namespace’ – the general pool of variables within the page. You can quickly see that if two scripts use foo as a name, they will conflict because they’re both creating those variables in the global namespace. A good solution to this problem is to add just one name into the global namespace, make that one item either a function or an object, and then add everything else you need inside that. This takes advantage of JavaScript’s variable scoping to contain your mess and stop it interfering with anyone else. Creating An Object Say I wanted to write a bunch of functions specifically for use on a site called ‘Foo Online’. I’d want to create my own object with a name I think is likely to be unique to me. var FOOONLINE = {}; We can then start assigning functions and variables to it like so: FOOONLINE.message = 'Merry Christmas!'; FOOONLINE.showMessage = function() { alert(this.message); }; Calling FOOONLINE.showMessage() in this example would alert out our seasonal greeting. The exact same thing could also be expressed in the following way, using the object literal syntax. var FOOONLINE = { message: 'Merry Christmas!', showMessage: function() { alert(this.message); } }; Creating A Function to Create An Object We can extend this idea a bit further by using a function that we run in place to return an object. The end result is the same, but this time we can use closures to give us something like private methods and properties of our object. var FOOONLINE = function(){ var message = 'Merry Christmas!'; return { showMessage: function(){ alert(message); } } }(); There are two important things to note here. The first is the parentheses at the end of line 10. Just as we saw earlier, this runs the function in place and causes its result to be assigned. In this case the result of our function is the object that is returned at line 4. The second important thing to note is the use of the var keyword on line 2. This ensures that the message variable is created inside the scope of the function and not in the global namespace. 
Because of the way closure works (which if you’re not familiar with, just suspend your disbelief for a moment) that message variable is visible to everything inside the function but not outside. Trying to read FOOONLINE.message from the page would return undefined. This is useful for simulating the concept of private class methods and properties that exist in other programming languages. I like to take the approach of making everything private unless I know it’s going to be needed from outside, as it makes the interface into your code a lot clearer for someone else to read. All Change, Please So that was just a whistle-stop tour of a couple of the bigger changes that can help to make your scripts better page citizens. I hope it makes useful Sunday reading, but obviously this is only the tip of the iceberg when it comes to designing modular, reusable code. For some, this is all familiar ground already. If that’s the case, I encourage you to perhaps submit a comment with any useful resources you’ve found that might help others get up to speed. Ultimately it’s in all of our interests to make sure that all our JavaScript interoperates well – share your tips.",2006,Drew McLellan,drewmclellan,2006-12-10T00:00:00+00:00,https://24ways.org/2006/writing-responsible-javascript/,code 126,Intricate Fluid Layouts in Three Easy Steps,"The Year of the Script may have drawn attention away from CSS but building fluid, multi-column, cross-browser CSS layouts can still be as unpleasant as a lump of coal. Read on for a worry-free approach in three quick steps. The layout system I developed, YUI Grids CSS, has three components. They can be used together as we’ll see, or independently. The Three Easy Steps Choose fluid or fixed layout, and choose the width (in percents or pixels) of the page. Choose the size, orientation, and source-order of the main and secondary blocks of content. Choose the number of columns and how they distribute (for example 50%-50% or 25%-75%), using stackable and nestable grid structures. The Setup There are two prerequisites: We need to normalize the size of an em and opt into the browser rendering engine’s Strict Mode. Ems are a superior unit of measure for our case because they represent the current font size and grow as the user increases their font size setting. This flexibility—the container growing with the user’s wishes—means larger text doesn’t get crammed into an unresponsive container. We’ll use YUI Fonts CSS to set the base size because it provides consistent-yet-adaptive font-sizes while preserving user control. The second prerequisite is to opt into Strict Mode (more info on rendering modes) by declaring a Doctype complete with URI. You can choose XHTML or HTML, and Transitional or Strict. I prefer HTML 4.01 Strict, which looks like this: <!DOCTYPE HTML PUBLIC ""-//W3C//DTD HTML 4.01//EN"" ""http://www.w3.org/TR/html4/strict.dtd""> Including the CSS A single small CSS file powers a nearly-infinite number of layouts thanks to a recursive system and the interplay between the three distinct components. You could prune to a particular layout’s specific needs, but why bother when the complete file weighs scarcely 1.8kb uncompressed? Compressed, YUI Fonts and YUI Grids combine for a miniscule 0.9kb over the wire. 
You could save an HTTP request by concatenating the two CSS files, or by adding their contents to your own CSS, but I’ll keep them separate for now: <link href=""fonts.css"" rel=""stylesheet"" type=""text/css""> <link href=""grids.css"" rel=""stylesheet"" type=""text/css""> Example: The Setup Now we’re ready to build some layouts. Step 1: Choose Fluid or Fixed Layout Choose between preset widths of 750px, 950px, and 100% by giving a document-wrapping div an ID of doc, doc2, or doc3. These options cover most use cases, but it’s easy to define a custom fixed width. The fluid 100% grid (doc3) is what I’ve been using almost exclusively since it was introduced in the last YUI release. <body> <div id=""doc3""></div> </body> All pages are centered within the viewport, and grow with font size. The 100% width page (doc3) preserves 10px of breathing room via left and right margins. If you prefer your content flush to the viewport, just add #doc3 {margin:auto} to your CSS. Regardless of what you choose in the other two steps, you can always toggle between these widths and behaviors by simply swapping the ID value. It’s really that simple. Example: 100% fluid layout Step 2: Choose a Template Preset This is perhaps the most frequently omitted step (they’re all optional), but I use it nearly every time. In a source-order-independent way (good for accessibility and SEO), “Template Presets” provide commonly used template widths compatible with ad-unit dimension standards defined by the Interactive Advertising Bureau, an industry association. Choose between the six Template Presets (.yui-t1 through .yui-t6) by setting the class value on the document-wrapping div established in Step 1. Most frequently I use yui-t3, which puts the narrow secondary block on the left and makes it 300px wide. <body> <div id=""doc3"" class=""yui-t3""></div> </body> The Template Presets control two “blocks” of content, which are defined by two divs, each with yui-b (“b” for “block”) class values. Template Presets describe the width and orientation of the secondary block; the main block will take up the rest of the space. <body> <div id=""doc3"" class=""yui-t3""> <div class=""yui-b""></div> <div class=""yui-b""></div> </div> </body> Use a wrapping div with an ID of yui-main to structurally indicate which block is the main block. This wrapper—not the source order—identifies the main block. <body> <div id=""doc3"" class=""yui-t3""> <div id=""yui-main""> <div class=""yui-b""></div> </div> <div class=""yui-b""></div> </div> </body> Example: Main and secondary blocks sized and oriented with .yui-t3 Template Preset Again, regardless of what values you choose in the other steps, you can always toggle between these Template Presets by toggling the class value of your document-wrapping div. It’s really that simple. Step 3: Nest and Stack Grid Structures. The bulk of the power of the system is in this third step. The key is that columns are built by parents telling children how to behave. By default, two children each consume half of their parent’s area. Put two units inside a grid structure, and they will sit side-by-side, and they will each take up half the space. Nest this structure and two columns become four. Stack them for rows of columns. An Even Number of Columns The default behavior creates two evenly-distributed columns. It’s easy. Define one parent grid with .yui-g (“g” for grid) and two child units with .yui-u (“u” for unit). 
The code looks like this: <div class=""yui-g""> <div class=""yui-u first""></div> <div class=""yui-u""></div> </div> Be sure to indicate the “first” unit because the :first-child pseudo-class selector isn’t supported across all A-grade browsers. It’s unfortunate we need to add this, but luckily it’s not out of place in the markup layer since it is structural information. Example: Two evenly-distributed columns in the main content block An Odd Number of Columns The default system does not work for an odd number of columns without using the included “Special Grids” classes. To create three evenly distributed columns, use the “yui-gb” Special Grid: <div class=""yui-gb""> <div class=""yui-u first""></div> <div class=""yui-u""></div> <div class=""yui-u""></div> </div> Example: Three evenly distributed columns in the main content block Uneven Column Distribution Special Grids are also used for unevenly distributed column widths. For example, .yui-ge tells the first unit (column) to take up 75% of the parent’s space and the other unit to take just 25%. <div class=""yui-ge""> <div class=""yui-u first""></div> <div class=""yui-u""></div> </div> Example: Two columns in the main content block split 75%-25% Putting It All Together Start with a full-width fluid page (div#doc3). Make the secondary block 180px wide on the right (div.yui-t4). Create three rows of columns: Three evenly distributed columns in the first row (div.yui-gb), two uneven columns (66%-33%) in the second row (div.yui-gc), and two evenly distributed columns in the third row. <body> <!-- choose fluid page and Template Preset --> <div id=""doc3"" class=""yui-t4""> <!-- main content block --> <div id=""yui-main""> <div class=""yui-b""> <!-- stacked grid structure, Special Grid ""b"" --> <div class=""yui-gb""> <div class=""yui-u first""></div> <div class=""yui-u""></div> <div class=""yui-u""></div> </div> <!-- stacked grid structure, Special Grid ""c"" --> <div class=""yui-gc""> <div class=""yui-u first""></div> <div class=""yui-u""></div> </div> <!-- stacked grid structure --> <div class=""yui-g""> <div class=""yui-u first""></div> <div class=""yui-u""></div> </div> </div> </div> <!-- secondary content block --> <div class=""yui-b""></div> </div> </body> Example: A complex layout. Wasn’t that easy? Now that you know the three “levers” of YUI Grids CSS, you’ll be creating headache-free fluid layouts faster than you can say “Peace on Earth”.",2006,Nate Koechley,natekoechley,2006-12-20T00:00:00+00:00,https://24ways.org/2006/intricate-fluid-layouts/,code 128,Boost Your Hyperlink Power,"There are HTML elements and attributes that we use every day. Headings, paragraphs, lists and images are the mainstay of every Web developer’s toolbox. Perhaps the most common tool of all is the anchor. The humble a element is what joins documents together to create the gloriously chaotic collection we call the World Wide Web. Anatomy of an Anchor The power of the anchor element lies in the href attribute, short for hypertext reference. This creates a one-way link to another resource, usually another page on the Web: <a href=""http://allinthehead.com/""> The href attribute sits in the opening a tag and some descriptive text sits between the opening and closing tags: <a href=""http://allinthehead.com/"">Drew McLellan</a> “Whoop-dee-freakin’-doo,” I hear you say, “this is pretty basic stuff” – and you’re quite right. But there’s more to the anchor element than just the href attribute. 
The Theory of relativity You might be familiar with the rel attribute from the link element. I bet you’ve got something like this in the head of your documents: <link rel=""stylesheet"" type=""text/css"" media=""screen"" href=""styles.css"" /> The rel attribute describes the relationship between the linked document and the current document. In this case, the value of rel is “stylesheet”. This means that the linked document is the stylesheet for the current document: that’s its relationship. Here’s another common use of rel: <link rel=""alternate"" type=""application/rss+xml"" title=""my RSS feed"" href=""index.xml"" /> This describes the relationship of the linked file – an RSS feed – as “alternate”: an alternate view of the current document. Both of those examples use the link element but you are free to use the rel attribute in regular hyperlinks. Suppose you’re linking to your RSS feed in the body of your page: Subscribe to <a href=""index.xml"">my RSS feed</a>. You can add extra information to this anchor using the rel attribute: Subscribe to <a href=""index.xml"" rel=""alternate"" type=""application/rss+xml"">my RSS feed</a>. There’s no prescribed list of values for the rel attribute so you can use whatever you decide is semantically meaningful. Let’s say you’ve got a complex e-commerce application that includes a link to a help file. You can explicitly declare the relationship of the linked file as being “help”: <a href=""help.html"" rel=""help"">need help?</a> Elemental Microformats Although it’s completely up to you what values you use for the rel attribute, some consensus is emerging in the form of microformats. Some of the simplest microformats make good use of rel. For example, if you are linking to a license that covers the current document, use the rel-license microformat: Licensed under a <a href=""http://creativecommons.org/licenses/by/2.0/"" rel=""license"">Creative Commons attribution license</a> That describes the relationship of the linked document as “license.” The rel-tag microformat goes a little further. It uses rel to describe the final part of the URL of the linked file as a “tag” for the current document: Learn more about <a href=""http://en.wikipedia.org/wiki/Microformats"" rel=""tag"">semantic markup</a> This states that the current document is being tagged with the value “Microformats.” XFN, which stands for XHTML Friends Network, is a way of describing relationships between people: <a href=""http://allinthehead.com/"" rel=""friend"">Drew McLellan</a> This microformat makes use of a very powerful property of the rel attribute. Like the class attribute, rel can take multiple values, separated by spaces: <a href=""http://allinthehead.com/"" rel=""friend met colleague"">Drew McLellan</a> Here I’m describing Drew as being a friend, someone I’ve met, and a colleague (because we’re both Web monkies). You Say You Want a revolution While rel describes the relationship of the linked resource to the current document, the rev attribute describes the reverse relationship: it describes the relationship of the current document to the linked resource. Here’s an example of a link that might appear on help.html: <a href=""shoppingcart.html"" rev=""help"">continue shopping</a> The rev attribute declares that the current document is “help” for the linked file. The vote-links microformat makes use of the rev attribute to allow you to qualify your links. 
By using the value “vote-for” you can describe your document as being an endorsement of the linked resource: I agree with <a href=""http://richarddawkins.net/home"" rev=""vote-for"">Richard Dawkins</a>. There’s a corresponding vote-against value. This means that you can link to a document but explicitly state that you don’t agree with it. I agree with <a href=""http://richarddawkins.net/home"" rev=""vote-for"">Richard Dawkins</a> about those <a href=""http://www.icr.org/"" rev=""vote-against"">creationists</a>. Of course there’s nothing to stop you using both rel and rev on the same hyperlink: <a href=""http://richarddawkins.net/home"" rev=""vote-for"" rel=""muse"">Richard Dawkins</a> The Wisdom of Crowds The simplicity of rel and rev belies their power. They allow you to easily add extra semantic richness to your hyperlinks. This creates a bounty that can be harvested by search engines, aggregators and browsers. Make it your New Year’s resolution to make friends with these attributes and extend the power of hypertext.",2006,Jeremy Keith,jeremykeith,2006-12-18T00:00:00+00:00,https://24ways.org/2006/boost-your-hyperlink-power/,code 129,Knockout Type - Thin Is Always In,"OS X has gorgeous native anti-aliasing (although I will admit to missing 10px aliased Geneva — *sigh*). This is especially true for dark text on a light background. However, things can go awry when you start using light text on a dark background. Strokes thicken. Counters constrict. Letterforms fill out like seasonal snackers. So how do we combat the fat? In Safari and other Webkit-based browsers we can use the CSS ‘text-shadow’ property. While trying to add a touch more contrast to the navigation on haveamint.com I noticed an interesting side-effect on the weight of the type. The second line in the example image above has the following style applied to it: text-shadow: 0 0 0 #000; This creates an invisible drop-shadow. (Why is it invisible? The shadow is positioned directly behind the type (the first two zeros) and has no spread (the third zero). So the color, black, is completely eclipsed by the type it is supposed to be shadowing.) Why applying an invisible drop-shadow effectively lightens the weight of the type is unclear. What is clear is that our light-on-dark text is now of a comparable weight to its dark-on-light counterpart. You can see this trick in effect all over ShaunInman.com and in the navigation on haveamint.com and Subtraction.com. The HTML and CSS source code used to create the example images used in this article can be found here.",2006,Shaun Inman,shauninman,2006-12-17T00:00:00+00:00,https://24ways.org/2006/knockout-type/,code 132,Tasty Text Trimmer,"In most cases, when designing a user interface it’s best to make a decision about how data is best displayed and stick with it. Failing to make a decision ultimately leads to too many user options, which in turn can be taxing on the poor old user. Under some circumstances, however, it’s good to give the user freedom in customising their workspace. One good example of this is the ‘Article Length’ tool in Apple’s Safari RSS reader. Sliding a slider left or right dynamically changes the length of each article shown. It’s that kind of awesomey magic stuff that’s enough to keep you from sleeping. Let’s build one. The Setup Let’s take a page that has lots of long text items, a bit like a news page or like Safari’s RSS items view. If we were to attach a class name to each element we wanted to resize, that would give us something to hook onto from the JavaScript. Example 1: The basic page. 
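Since Example 1 is linked rather than shown, here’s a rough sketch of the kind of markup it uses – the headings and copy are just placeholders: <div id=""items""> <h2>First item</h2> <p class=""chunk"">A nice long paragraph of text that we want to be able to trim...</p> <h2>Second item</h2> <p class=""chunk"">Another nice long paragraph of text for trimming...</p> </div> 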
As you can see, I’ve wrapped my items in a DIV and added a class name of chunk to them. It’s these chunks that we’ll be finding with the JavaScript. Speaking of which … Our Core Functions There are two main tasks that need performing in our script. The first is to find the chunks we’re going to be resizing and store their original contents away somewhere safe. We’ll need this so that if we trim the text down we’ll know what it was if the user decides they want it back again. We’ll call this loadChunks. var loadChunks = function(){ var everything = document.getElementsByTagName('*'); var i, l; chunks = []; for (i=0, l=everything.length; i<l; i++){ if (everything[i].className.indexOf(chunkClass) > -1){ chunks.push({ ref: everything[i], original: everything[i].innerHTML }); } } }; The variable chunks is stored outside of this function so that we can access it from our next core function, which is doTrim. var doTrim = function(interval) { if (!chunks) loadChunks(); var i, l; for (i=0, l=chunks.length; i<l; i++){ var a = chunks[i].original.split(' '); a = a.slice(0, interval); chunks[i].ref.innerHTML = a.join(' '); } }; The first thing that needs to be done is to call loadChunks if the chunks variable isn’t set. This should only happen the first time doTrim is called, as from that point the chunks will be loaded. Then all we do is loop through the chunks and trim them. The trimming itself (lines 6-8) is very simple. We split the text into an array of words (line 6), then select only a portion from the beginning of the array up until the number we want (line 7). Finally the words are glued back together (line 8). In essence, that’s it, but it leaves us needing to know how to get the number into this function in the first place, and how that number is generated by the user. Let’s look at the latter case first. The YUI Slider Widget There are lots of JavaScript libraries available at the moment. A fair few of those are really good. I use the Yahoo! User Interface Library professionally, but have only recently played with their pre-built slider widget. Turns out, it’s pretty good and perfect for this task. Once you have the library files linked in (check the docs linked above) it’s fairly straightforward to create yourself a slider. slider = YAHOO.widget.Slider.getHorizSlider(""sliderbg"", ""sliderthumb"", 0, 100, 5); slider.setValue(50); slider.subscribe(""change"", doTrim); All that’s needed then is some CSS to make the slider look like a slider, and of course a few bits of HTML. We’ll see those later. See It Working! Rather than spell out all the nuts and bolts of implementing this fairly simple script, let’s just look at it in action and then pick on some interesting bits I’ve added. Example 2: Try the Tasty Text Trimmer. At the top of the JavaScript file I’ve added a small number of settings. var chunkClass = 'chunk'; var minValue = 10; var maxValue = 100; var multiplier = 5; Obvious candidates for configuration are the class name used to find the chunks, and also some minimum and maximum values. The minValue is the fewest number of words we wish to display when the slider is all the way down. The maxValue is the length of the slider, in this case 100. Moving the slider makes a call to our doTrim function with the current value of the slider. For a slider 100 pixels long, this is going to be in the range of 0-100. That might be okay for some things, but for longer items of text you’ll want to allow for displaying more than 100 words. 
I’ve accounted for this by adding in a multiplier – in my code I’m multiplying the value by 5, so a slider value of 50 shows 250 words. You’ll probably want to tailor the multiplier to the type of content you’re using. Keeping it Accessible This effect isn’t something we can really achieve without JavaScript, but even so we must make sure that this functionality has no adverse impact on the page when JavaScript isn’t available. This is achieved by adding the slider markup to the page from within the insertSliderHTML function. var insertSliderHTML = function(){ var s = '<a id=""slider-less"" href=""#less""><img src=""icon_min.gif"" width=""10"" height=""10"" alt=""Less text"" class=""first"" /></a>'; s +=' <div id=""sliderbg""><div id=""sliderthumb""><img src=""sliderthumbimg.gif"" /></div></div>'; s +=' <a id=""slider-more"" href=""#more""><img src=""icon_max.gif"" width=""10"" height=""10"" alt=""More text"" /></a>'; document.getElementById('slider').innerHTML = s; } The other important factor to consider is that a slider can be tricky to use unless you have good eyesight and pretty well controlled motor skills. Therefore we should provide a method of changing the value via the keyboard. I’ve done this by making the icons at either end of the slider links so they can be tabbed to. Clicking on either icon fires the appropriate JavaScript function to show more or less of the text. In Conclusion The upshot of all this is that without JavaScript the page just shows all the text as it normally would. With JavaScript we have a slider for trimming the text excerpts that can be controlled with the mouse or with a keyboard. If you’re like me and have just scrolled to the bottom to find the working demo, here it is again: Try the Tasty Text Trimmer Trimmer for Christmas? Don’t say I never give you anything!",2006,Drew McLellan,drewmclellan,2006-12-01T00:00:00+00:00,https://24ways.org/2006/tasty-text-trimmer/,code 135,A Scripting Carol,"We all know the stories of the Ghost of Scripting Past – a time when the web was young and littered with nefarious scripting, designed to bestow ultimate control upon the developer, to pollute markup with event handler after event handler, and to entrench advertising in the minds of all that gazed upon her. And so it came to be that JavaScript became a dirty word, thrown out of solutions by many a Scrooge without regard to the enhancements that JavaScript could bring to any web page. JavaScript, as it was, was dead as a door-nail. With the arrival of our core philosophy that all standardistas hold to be true: “separate your concerns – content, presentation and behaviour,” we are in a new era of responsible development the Web Standards Way™. Or are we? Have we learned from the Ghosts of Scripting Past? Or are we now faced with new problems that come with new ways of implementing our solutions? The Ghost of Scripting Past If the Ghost of Scripting Past were with us it would probably say: You must remember your roots and where you came from, and realize the misguided nature of your early attempts for control. That person you see down there is real and they are the reason you exist in the first place… without them, you are nothing. In many ways we’ve moved beyond the era of control and we do take into account the user, or at least much more so than we used to. 
Sadly – there is one advantage that old school inline event handlers had where we assigned and reassigned CSS style property values on the fly – we knew that if JavaScript wasn’t supported, the styles wouldn’t be added because we ended up doing them at the same time. If anything, we need to have learned from the past that just because it works for us doesn’t mean it is going to work for anyone else – we need to test more scenarios than ever to cover the multitude of browsing arrangements we’ll encounter: CSS on with JavaScript off, CSS off/overridden with JavaScript on, both on, both off/not supported. It is a situation that is ripe for conflict. This may shock some of you, but there was a time when testing was actually easier: back in the day when Netscape 4 was king. Yes, that’s right. I actually kind of enjoyed Netscape 4 (hear me out, please). With NS4’s CSS implementation known as JavaScript Style Sheets, you knew that if JavaScript was off the styles were off too. The Ghost of Scripting Present With current best practice – we keep our CSS and JavaScript separate from each other. So what happens when some of our fancy, unobtrusive DOM Scripting doesn’t play nicely with our wonderfully defined style rules? Let’s look at one example of a collapsing and expanding menu to illustrate where we are now: Simple Collapsing/Expanding Menu Example We’re using some simple JavaScript (I’m using jQuery in this case) to toggle between a CSS state for expanded and not expanded: JavaScript $(document).ready(function(){ TWOFOURWAYS.enableTree(); }); var TWOFOURWAYS = new Object(); TWOFOURWAYS.enableTree = function () { $(""ul li a"").toggle(function(){ $(this.parentNode).addClass(""expanded""); }, function() { $(this.parentNode).removeClass(""expanded""); }); return false; } CSS ul li ul { display: none; } ul li.expanded ul { display: block; } At this point we’ve separated our presentation from our content and behaviour, and all is well, right? Not quite. Here’s where I typically see failures in the assessment work that I do on web sites and applications (Yes, call me Scrooge – I don’t care!). We know our page needs to work with or without scripting, and we know it needs to work with or without CSS. All too often the testing scenarios don’t take into account combinations. Testing it out So what happens when we test this? Make sure you test with: CSS off JavaScript off Use the simple example again. With CSS off, we revert to a simple nested list of links and all functionality is maintained. With JavaScript off, however, we run into a problem – we have now removed the ability to expand the menu using the JavaScript-triggered CSS change. Hopefully you see the problem – we have a JavaScript and CSS dependency that is critical to the functionality of the page. Unobtrusive scripting and binary on/off tests aren’t enough. We need more. This Ghost of Scripting Present sighting is seen all too often. Let’s examine the JavaScript off scenario a bit further – if we require JavaScript to expand/show the branch of the tree we should use JavaScript to hide them in the first place. That way we guarantee functionality in all scenarios, and have achieved our baseline level of interoperability. To revise this then, we’ll start with the sub items expanded, use JavaScript to collapse them, and then use the same JavaScript to expand them. 
HTML <ul> <li><a href=""#"">Main Item</a> <ul class=""collapseme""> <li><a href=""#"">Sub item 1</a></li> <li><a href=""#"">Sub item 2</a></li> <li><a href=""#"">Sub item 3</a></li> </ul> </li> </ul> CSS /* initial style is expanded */ ul li ul.collapseme { display: block; } JavaScript // remove the class collapseme after the page loads $(""ul ul.collapseme"").removeClass(""collapseme""); And there you have it – a revised example with better interoperability. This isn’t rocket surgery by any means. It is a simple solution to a ghostly problem that is too easily overlooked (and often is). The Ghost of Scripting Future Well, I’m not so sure about this one, but I’m guessing that in a few years’ time, we’ll all have seen a few more apparitions and have a few more tales to tell. And hopefully we’ll be able to share them on 24 ways. Thanks to Drew for the invitation to contribute and thanks to everyone else out there for making this a great (and haunting) year on the web!",2006,Derek Featherstone,derekfeatherstone,2006-12-21T00:00:00+00:00,https://24ways.org/2006/a-scripting-carol/,code 136,Making XML Beautiful Again: Introducing Client-Side XSL,"Remember that first time you saw XML and got it? When you really understood what was possible and the deep meaning each element could carry? Now when you see XML, it looks ugly, especially when you navigate to a page of XML in a browser. Well, with every modern browser now supporting XSL 1.0, I’m going to show you how you can turn something as simple as an ATOM feed into a customised page using a browser, Notepad and some XSL. What on earth is this XSL? XSL is a family of recommendations for defining XML document transformation and presentation. It consists of three parts: XSLT 1.0 – Extensible Stylesheet Language Transformation, a language for transforming XML XPath 1.0 – XML Path Language, an expression language used by XSLT to access or refer to parts of an XML document. (XPath is also used by the XML Linking specification) XSL-FO 1.0 – Extensible Stylesheet Language Formatting Objects, an XML vocabulary for specifying formatting semantics XSL transformations are usually a one-to-one transformation, but with newer versions (XSL 1.1 and XSL 2.0) it’s possible to create many-to-many transformations too. So now you have an overview of XSL, on with the show… So what do I need? So to get going you need a browser that supports client-side XSL transformations such as Firefox, Safari, Opera or Internet Explorer. Second, you need a source XML file – for this we’re going to use an ATOM feed from Flickr.com. And lastly, you need an editor of some kind. I find Notepad++ quick for short XSLs, while I tend to use XMLSpy or Oxygen for complex XSL work. Because we’re doing a client-side transformation, we need to modify the XML file to tell it where to find our yet-to-be-written XSL file. Take a look at the source XML file, which originates from my Flickr photos tagged sky, in ATOM format. The top of the ATOM file now has an additional <?xml-stylesheet /> instruction, as can be seen on Line 2 below. This instructs the browser to use the XSL file to transform the document. 
<?xml version=""1.0"" encoding=""utf-8"" standalone=""yes""?> <?xml-stylesheet type=""text/xsl"" href=""flickr_transform.xsl""?> <feed xmlns=""http://www.w3.org/2005/Atom"" xmlns:dc=""http://purl.org/dc/elements/1.1/""> Your first transformation Your first XSL will look something like this: <?xml version=""1.0"" encoding=""utf-8""?> <xsl:stylesheet version=""1.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform"" xmlns:atom=""http://www.w3.org/2005/Atom"" xmlns:dc=""http://purl.org/dc/elements/1.1/""> <xsl:output method=""html"" encoding=""utf-8""/> </xsl:stylesheet> This is pretty much the starting point for most XSL files. You will notice the standard XML processing instruction at the top of the file (line 1). We then switch into XSL mode using the XSL namespace on all XSL elements (line 2). In this case, we have added namespaces for ATOM (line 4) and Dublin Core (line 5). This means the XSL can now read and understand those elements from the source XML. After we define all the namespaces, we then move onto the xsl:output element (line 6). This enables you to define the final method of output. Here we’re specifying html, but you could equally use XML or Text, for example. The encoding attributes on each element do what they say on the tin. As with all XML, of course, we close every element including the root. The next stage is to add a template, in this case an <xsl:template /> as can be seen below: <?xml version=""1.0"" encoding=""utf-8""?> <xsl:stylesheet version=""1.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform"" xmlns:atom=""http://www.w3.org/2005/Atom"" xmlns:dc=""http://purl.org/dc/elements/1.1/""> <xsl:output method=""html"" encoding=""utf-8""/> <xsl:template match=""/""> <html> <head> <title>Making XML beautiful again : Transforming ATOM</title> </head> <body> <xsl:apply-templates select=""/atom:feed""/> </body> </html> </xsl:template> </xsl:stylesheet> The beautiful thing about XSL is its English-like syntax: if you say it out loud, it tends to make sense. The / value for the match attribute on line 8 is our first example of XPath syntax. The expression / matches the document root, so this <xsl:template/> is the entry point for the whole transformation: it is matched and processed before anything else in the document. Once we get past our standard start of an HTML document, the only instruction remaining in this <xsl:template/> is to look for and match all <atom:feed/> elements using the <xsl:apply-templates/> in line 14, above. <?xml version=""1.0"" encoding=""utf-8""?> <xsl:stylesheet version=""1.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform"" xmlns:atom=""http://www.w3.org/2005/Atom"" xmlns:dc=""http://purl.org/dc/elements/1.1/""> <xsl:output method=""html"" encoding=""utf-8""/> <xsl:template match=""/""> <xsl:apply-templates select=""/atom:feed""/> </xsl:template> <xsl:template match=""/atom:feed""> <div id=""content""> <h1> <xsl:value-of select=""atom:title""/> </h1> <p> <xsl:value-of select=""atom:subtitle""/> </p> <ul id=""entries""> <xsl:apply-templates select=""atom:entry""/> </ul> </div> </xsl:template> </xsl:stylesheet> This new template (line 12, above) matches <feed/> and starts to write the new HTML elements out to the output stream. The <xsl:value-of/> does exactly what you’d expect – it finds the value of the item specified in its select attribute. With XPath you can select any element or attribute from the source XML. 
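As a quick aside, here are a few illustrative selections against the feed above (these are examples for orientation, not lines from the finished stylesheet): <xsl:value-of select=""atom:title""/> <!-- a child element --> <xsl:value-of select=""atom:link/@href""/> <!-- an attribute, via @ --> <xsl:value-of select=""count(atom:entry)""/> <!-- an XPath function over a node-set --> 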
The last part is a repeat of the now familiar <xsl:apply-templates/> from before, but this time we’re using it inside of a called template. Yep, XSL is full of recursion… <xsl:template match=""atom:entry""> <li class=""entry""> <h2> <a href=""{atom:link/@href}""> <xsl:value-of select=""atom:title""/> </a> </h2> <p class=""date""> (<xsl:value-of select=""substring-before(atom:updated,'T')""/>) </p> <p class=""content""> <xsl:value-of select=""atom:content"" disable-output-escaping=""yes""/> </p> <xsl:apply-templates select=""atom:category""/> </li> </xsl:template> The <xsl:template/> which matches atom:entry (line 1) occurs every time there is a <entry/> element in the source XML file. So in total that is 20 times, this is naturally why XSLT is full of recursion. This <xsl:template/> has been matched and therefore called higher up in the document, so we can start writing list elements directly to the output stream. The first part is simply a <h2/> with a link wrapped within it (lines 3-7). We can select attributes using XPath using @. The second part of this template selects the date, but performs a XPath string function on it. This means that we only get the date and not the time from the string (line 9). This is achieved by getting only the part of the string that exists before the T. Regular Expressions are not part of the XPath 1.0 string functions, although XPath 2.0 does include them. Because of this, in XSL we tend to rely heavily on the available XML output. The third part of the template (line 12) is a <xsl:value-of/> again, but this time we use an attribute of <xsl:value-of/> called disable output escaping to turn escaped characters back into XML. The very last section is another <xsl:apply-template/> call, taking us three templates deep. Do not worry, it is not uncommon to write XSL which go 20 or more templates deep! <xsl:template match=""atom:category""> <xsl:for-each select="".""> <xsl:element name=""a""> <xsl:attribute name=""rel""> <xsl:text>tag</xsl:text> </xsl:attribute> <xsl:attribute name=""href""> <xsl:value-of select=""concat(@scheme, @term)""/> </xsl:attribute> <xsl:value-of select=""@term""/> </xsl:element> <xsl:text> </xsl:text> </xsl:for-each> </xsl:template> In our final <xsl:template/>, we see a combination of what we have done before with a couple of twists. Once we match atom:category we then count how many elements there are at that same level (line 2). The XPath . means ‘self’, so we count how many category elements are within the <entry/> element. Following that, we start to output a link with a rel attribute of the predefined text, tag (lines 4-6). In XSL you can just type text, but results can end up with strange whitespace if you do (although there are ways to simply remove all whitespace). The only new XPath function in this example is concat(), which simply combines what XPaths or text there might be in the brackets. We end the output for this tag with an actual tag name (line 10) and we add a space afterwards (line 12) so it won’t touch the next tag. (There are better ways to do this in XSL using the last() XPath function). After that, we go back to the <xsl:for-each/> element again if there is another category element, otherwise we end the <xsl:for-each/> loop and end this <xsl:template/>. A touch of style Because we’re using recursion through our templates, you will find this is the end of the templates and the rest of the XML will be ignored by the parser. Finally, we can add our CSS to finish up. 
(I have created one for Flickr and another for News feeds) <style type=""text/css"" media=""screen"">@import ""flickr_overview.css?v=001"";</style> So we end up with a nice, simple-to-understand but also quick-to-write XSL which can be used on ATOM Flickr feeds and ATOM News feeds. With a little playing around with XSL, you can make XML beautiful again. All the files can be found in the zip file (14k)",2006,Ian Forrester,ianforrester,2006-12-07T00:00:00+00:00,https://24ways.org/2006/beautiful-xml-with-xsl/,code 138,Rounded Corner Boxes the CSS3 Way,"If you’ve been doing CSS for a while you’ll know that there are approximately 3,762 ways to create a rounded corner box. The simplest techniques rely on the addition of extra mark-up directly to your page, while the more complicated ones add the mark-up through DOM manipulation. While these techniques are all very interesting, they do seem somewhat of a kludge. The goal of CSS is to separate structure from presentation, yet here we are adding superfluous mark-up to our code in order to create a visual effect. The reason we are doing this is simple. CSS2.1 only allows a single background image per element. Thankfully this looks set to change with the addition of multiple background images into the CSS3 specification. With CSS3 you’ll be able to add not one, not four, but eight background images to a single element. This means you’ll be able to create all kinds of interesting effects without the need for those additional elements. While the CSS working group still seem to be arguing over the exact syntax, Dave Hyatt went ahead and implemented the currently suggested mechanism into Safari. The technique is fiendishly simple, and I think we’ll all be a lot better off once the W3C stop arguing over the details and allow browser vendors to get on and provide the tools we need to build better websites. To create a CSS3 rounded corner box, simply start with your box element and apply your 4 corner images, separated by commas. .box { background-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif); } We don’t want these background images to repeat, which is the normal behaviour, so let’s set all their background-repeat properties to no-repeat. .box { background-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif); background-repeat: no-repeat, no-repeat, no-repeat, no-repeat; } Lastly, we need to define the positioning of each corner image. .box { background-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif); background-repeat: no-repeat, no-repeat, no-repeat, no-repeat; background-position: top left, top right, bottom left, bottom right; } And there we have it, a simple rounded corner box with no additional mark-up. As well as using multiple background images, CSS3 also has the ability to create rounded corners without the need of any images at all. You can do this by setting the border-radius property to your desired value as seen in the next example. .box { border-radius: 1.6em; } This technique currently works in Firefox/Camino and creates a nice, if somewhat jagged, rounded corner. If you want to create a box that works in both Mozilla and WebKit-based browsers, why not combine both techniques and see what happens.",2006,Andy Budd,andybudd,2006-12-04T00:00:00+00:00,https://24ways.org/2006/rounded-corner-boxes-the-css3-way/,code 139,Flickr Photos On Demand with getFlickr,"In case you don’t know it yet, Flickr is great.
It is a lot of fun to upload, tag and caption photos and it is really handy to get a vast network of contacts through it. Using Flickr photos outside of it is a bit of a problem though. There is a Flickr API, and you can get almost every page as an RSS feed, but in general it is a bit tricky to use Flickr photos inside your blog posts or web sites. You might not want to get into the whole API game or use a server side proxy script, as you cannot retrieve RSS with Ajax because of the cross-domain security settings. However, Flickr also provides an undocumented JSON output that can be used to hack your own solutions in JavaScript without having to use a server side script. If you enter the URL http://flickr.com/photos/tags/panda you get to the flickr page with photos tagged “panda”. If you enter the URL http://api.flickr.com/services/feeds/photos_public.gne?tags=panda&format=rss_200 you get the same page as an RSS feed. If you enter the URL http://api.flickr.com/services/feeds/photos_public.gne?tags=panda&format=json you get a JavaScript function called jsonFlickrFeed with a parameter that contains the same data in JSON format. You can use this to easily hack together your own output by just providing a function with the same name. I wanted to make it easier for you, which is why I created the helper getFlickr for you to download and use. getFlickr for Non-Scripters Simply include the JavaScript file getflickr.js and the style getflickr.css in the head of your document: <script type=""text/javascript"" src=""getflickr.js""></script> <link rel=""stylesheet"" href=""getflickr.css"" type=""text/css""> Once this is done you can add links to Flickr pages anywhere in your document, and when you give them the CSS class getflickrphotos they get turned into gallery links. When a visitor clicks these links they turn into loading messages and show a “popup” gallery with the connected photos once they have loaded. As the JSON returned is very small it won’t take long. You can close the gallery, or click any of the thumbnails to view a photo. Clicking the photo makes it disappear and go back to the thumbnails. Check out the example page and click the different gallery links to see the results. Notice that getFlickr follows the principles of unobtrusive JavaScript: when scripting is disabled, the links still lead to the photos on Flickr. getFlickr for JavaScript Hackers If you want to use getFlickr with your own JavaScripts you can use its main method leech(): getFlickr.leech(sTag, sCallback); sTag the tag you are looking for sCallback an optional function to call when the data has been retrieved. After you called the leech() method you have two strings to use: getFlickr.html[sTag] contains an HTML list (without the outer UL element) of all the images linked to the correct pages at flickr. The images are the medium size; you can easily change that by replacing _m.jpg with _s.jpg for thumbnails. getFlickr.tags[sTag] contains a string of all the other tags flickr users added with the tag you searched for (space separated) You can call getFlickr.leech() several times once the page has loaded, to cache several result feeds before the user requests them. This’ll make the photos show up more quickly for the end user.
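Here’s a minimal sketch of leech() in use outside the prefabricated gallery links – the tag and the target element ID are invented for this example, and the callback is passed by name, just as it is in the form example that follows:

<ul id=""pandaphotos""></ul>
<script type=""text/javascript"">
function showpandas(){
    // swap the medium images for small thumbnails, as described above
    document.getElementById('pandaphotos').innerHTML = getFlickr.html['panda'].replace(/_m\.jpg/g,'_s.jpg');
}
getFlickr.leech('panda', 'showpandas');
</script>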
If you want to offer a form for people to search for flickr photos and display them immediately you can use the following HTML: <form onsubmit=""getFlickr.leech(document.getElementById('tag').value, 'populate');return false""> <label for=""tag"">Enter Tag</label> <input type=""text"" id=""tag"" name=""tag"" /> <input type=""submit"" value=""energize"" /> <h3>Tags:</h3><div id=""tags""></div> <h3>Photos:</h3><ul id=""photos""></ul> </form> All the JavaScript you’ll need (for a basic display) is this: function populate(){ var tag = document.getElementById('tag').value; document.getElementById('photos').innerHTML = getFlickr.html[tag].replace(/_m\.jpg/g,'_s.jpg'); document.getElementById('tags').innerHTML = getFlickr.tags[tag]; return false; } Easy as pie, enjoy! Check out the example page and try the form to see the results.",2006,Christian Heilmann,chrisheilmann,2006-12-03T00:00:00+00:00,https://24ways.org/2006/flickr-photos-on-demand/,code 143,Marking Up a Tag Cloud,"Everyone’s doing it. The problem is, everyone’s doing it wrong. Harsh words, you might think. But the crimes against decent markup are legion in this area. You see, I’m something of a markup and semantics junkie. So I’m going to analyse some of the more well-known tag clouds on the internet, explain what’s wrong, and then show you one way to do it better. del.icio.us I think the first ever tag cloud I saw was on del.icio.us. Here’s how they mark it up. <div class=""alphacloud""> <a href=""/tag/.net"" class=""lb s2"">.net</a> <a href=""/tag/advertising"" class="" s3"">advertising</a> <a href=""/tag/ajax"" class="" s5"">ajax</a> ... </div> Unfortunately, that is one of the worst examples of tag cloud markup I have ever seen. The page states that a tag cloud is a list of tags where size reflects popularity. However, despite describing it in this way to the human readers, the page’s author hasn’t described it that way in the markup. It isn’t a list of tags, just a bunch of anchors in a <div>. This is also inaccessible because a screenreader will not pause between adjacent links, and in some configurations will not announce the individual links, but rather all of the tags will be read as just one link containing a whole bunch of words. Markup crime number one. Flickr Ah, Flickr. The darling photo sharing site of the internet, and the biggest blind spot in every standardista’s vision. Forgive it for having atrocious markup and sometimes confusing UI because it’s just so much damn fun to use. Let’s see what they do. <p id=""TagCloud"">  <a href=""/photos/tags/06/"" style=""font-size: 14px;"">06</a>   <a href=""/photos/tags/africa/"" style=""font-size: 12px;"">africa</a>   <a href=""/photos/tags/amsterdam/"" style=""font-size: 14px;"">amsterdam</a>  ... </p> Again we have a simple collection of anchors like del.icio.us, only this time in a paragraph. But rather than using a class to represent the size of the tag they use an inline style. An inline style using a pixel-based font size. That’s so far away from the goal of separating style from content, they might as well use a <font> tag. You could theoretically parse that to extract the information, but you have more work to guess what the pixel sizes represent. Markup crime number two (and extra jail time for using non-breaking spaces purely for visual spacing purposes.) Technorati Ah, now. Here, you’d expect something decent. After all, the Overlord of microformats and King of Semantics Tantek Çelik works there. Surely we’ll see something decent here? 
<ol class=""heatmap""> <li><em><em><em><em><a href=""/tag/Britney+Spears"">Britney Spears</a></em></em></em></em></li> <li><em><em><em><em><em><em><em><em><em><a href=""/tag/Bush"">Bush</a></em></em></em></em></em></em></em></em></em></li> <li><em><em><em><em><em><em><em><em><em><em><em><em><em><a href=""/tag/Christmas"">Christmas</a></em></em></em></em></em></em></em></em></em></em></em></em></em></li> ... <li><em><em><em><em><em><em><a href=""/tag/SEO"">SEO</a></em></em></em></em></em></em></li> <li><em><em><em><em><em><em><em><em><em><em><em><em><em><em><em><a href=""/tag/Shopping"">Shopping</a></em></em></em></em></em></em></em></em></em></em></em></em></em></em></em></li> ... </ol> Unfortunately it turns out not to be that decent, and stop calling me Shirley. It’s not exactly terrible code. It does recognise that a tag cloud is a list of links. And, since they’re in alphabetical order, that it’s an ordered list of links. That’s nice. However … fifteen nested <em> tags? FIFTEEN? That’s emphasis for you. Yes, it is parse-able, but it’s also something of a strange way of looking at emphasis. The HTML spec states that <em> is emphasis, and <strong> is for stronger emphasis. Nesting <em> tags seems counter to the idea that different tags are used for different levels of emphasis. Plus, if you had a screen reader that stressed the voice for emphasis, what would it do? Shout at you? Markup crime number three. So what should it be? As del.icio.us tells us, a tag cloud is a list of tags where the size that they are rendered at contains extra information. However, by hiding the extra context purely within the CSS or the HTML tags used, you are denying that context to some users. The basic assumption being made is that all users will be able to see the difference between font sizes, and this is demonstrably false. A better way to code a tag cloud is to put the context of the cloud within the content, not the markup or CSS alone. As an example, I’m going to take some of my favourite flickr tags and put them into a cloud which communicates the relative frequency of each tag. To start with a tag cloud in its most basic form is just a list of links. I am going to present them in alphabetical order, so I’ll use an ordered list. Into each list item I add the number of photos I have with that particular tag. The tag itself is linked to the page on flickr which contains those photos. So we end up with this first example. To display this as a traditional tag cloud, we need to alter it in a few ways: The items need to be displayed next to each other, rather than one-per-line The context information should be hidden from display (but not from screen readers) The tag should link to the page of items with that tag Displaying the items next to each other simply means setting the display of the list elements to inline. The context can be hidden by wrapping it in a <span> and then using the off-left method to hide it. And the link just means adding an anchor (with rel=""tag"" for some extra microformats bonus points). So, now we have a simple collection of links in our second example. The last stage is to add the sizes. Since we already have context in our content, the size is purely for visual rendering, so we can just use classes to define the different sizes. For my example, I’ll use a range of class names from not-popular through ultra-popular, in order of smallest to largest, and then use CSS to define different font sizes. 
If you preferred, you could always use less verbose class names such as size1 through size6. Anyway, adding some classes and CSS gives us our final example, a semantic and more accessible tag cloud.",2006,Mark Norman Francis,marknormanfrancis,2006-12-09T00:00:00+00:00,https://24ways.org/2006/marking-up-a-tag-cloud/,code 145,The Neverending (Background Image) Story,"Everyone likes candy for Christmas, and there’s none better than eye candy. Well, that, and just more of the stuff. Today we’re going to combine both of those good points and look at how to create a beautiful background image that goes on and on… forever! Of course, each background image is different, so instead of agonising over each and every pixel, I’m going to concentrate on five key steps that you can apply to any of your own repeating background images. In this example, we’ll look at the Miami Beach background image used on the new FOWA site, which I’m afraid is about as un-festive as you can get. 1. Choose your image wisely I find there are three main criteria when judging photos you’re considering for repetition manipulation (or ‘repetulation’, as I like to say)… simplicity (beware of complex patterns) angle and perspective (watch out for shadows and obvious vanishing points) consistent elements (for easy cloning) You might want to check out this annotated version of the image, where I’ve highlighted elements of the photo that led me to choose it as the right one. The original image purchased from iStockPhoto. The Photoshopped version used on the FOWA site. 2. The power of horizontal lines With the image chosen and your cursor poised for some Photoshop magic, the most useful thing you can do is drag out the edge pixels from one side of the image to create a kind of rough colour ‘template’ on which to work. It doesn’t matter which side you choose, although you might find it beneficial to use the one with the simplest spread of colour and the fewest complex elements. Click and hold on the marquee tool in the toolbar and select the ‘single column marquee tool’, which will span the full height of your document but will only be one pixel wide. Make the selection right at the edge of your document, press ctrl-c / cmd-c to copy the selection you made, create a new layer, and hit ctrl-v / cmd-v to paste the selection onto your new layer. Then, using free transform (ctrl-t / cmd-t), drag out your selection so that it becomes as wide as your entire canvas. A one-pixel-wide selection stretched out to the entire width of the canvas. 3. Cloning It goes without saying that the trusty clone tool is one of the most important in the process of creating a seamlessly repeating background image, but I think it’s important to be fairly loose with it. Always clone onto a new layer so that you’ve got the freedom to move it around, but above all else, use the eraser tool to tweak your cloned areas: let that handle the precision stuff and you won’t have to worry about getting your clones right first time. In the example below, you can see how I overcame the problem of the far-left tree shadow being chopped off by cloning the shadow from the tree on its right. The edge of the shadow is cut off and needs to be ‘made’ from a pre-existing element. The successful clone completes the missing shadow. The two elements are obviously very similar but it doesn’t look like a clone because the majority of the shape is ‘genuine’ and only a small part is a duplicate.
Also, after cloning I transformed the duplicate, erased parts of it, used gradients, and — ooh, did someone mention gradients? 4. Never underestimate a gradient For this image, I used gradients in a similar way to a brush: covering large parts of the canvas with a colour that faded out to a desired point, before erasing certain parts for accuracy. Several of the gradients and brushes that make up the ‘customised’ part of the image, visible when the main photograph layer is hidden. The full composite. Gradients are also a bit of an easy fix: you can use a gradient on one side of the image, flip it horizontally, and then use it again on the opposite side to make a more seamless join. Speaking of which… 5. Sewing the seams No matter what kind of magic Photoshop dust you sprinkle over your image, there will still always be the area where the two edges meet: that scary ‘loop’ point. Fret ye not, however, for there’s help at hand in the form of a nice little cheat. Even though the loop point might still be apparent, we can help hide it by doing something to throw viewers off the scent. The seam is usually easy to spot because it’s a blank area with not much detail or colour variation, so in order to disguise it, go against the rule: put something across it! This isn’t quite as challenging as it may sound, because if we intentionally make our own ‘object’ to span the join, we can accurately measure the exact halfway point where we need to split it across the two sides of the image. This is exactly what I did with the FOWA background image: I made some clouds! A sky with no clouds is an unhappy one. A simple soft white brush creates a cloud-like formation in the sky. After taking the cloud’s opacity down to 20%, I used free transform to highlight the boundaries of the layer. I then moved it over to the right, so that the middle of the layer perfectly aligned with the right side of the canvas. Finally, I duplicated the layer and did the same in reverse: dragging the layer over to the left and making sure that the middle of the duplicate layer perfectly aligned with the left side of the canvas. And there you have it! Boom! Ta-da! Et voilà! To see the repeating background image in action, visit futureofwebapps.com on a large widescreen monitor or see a simulation of the effect. Thanks for reading, folks. Have a great Christmas!",2007,Elliot Jay Stocks,elliotjaystocks,2007-12-03T00:00:00+00:00,https://24ways.org/2007/the-neverending-background-image-story/,code 147,Christmas Is In The AIR,"That’s right, Christmas is coming up fast and there’s plenty of things to do. Get the tree and lights up, get the turkey, buy presents, and who knows what else. And what about Santa? He’s got a list. I’m pretty sure he’s checking it twice. Sure, we could use an existing list-making web site or even a desktop widget. But we’re geeks! What’s the fun in that? Let’s build our own to-do list application and do it with Adobe AIR! What’s Adobe AIR? Adobe AIR, formerly codenamed Apollo, is a runtime environment that runs on both Windows and OS X (with Linux support to follow). This runtime environment lets you build desktop applications using Adobe technologies like Flash and Flex. Oh, and HTML. That’s right, you web standards lovin’ maniac. You can build desktop applications that can run cross-platform using the trio of technologies, HTML, CSS and JavaScript.
If you’ve tried developing with AIR before, you’ll need to get re-familiarized with the latest beta release, as many things have changed since the last one (such as the API and restrictions within the sandbox). To get started To get started in building an AIR application, you’ll need two basic things: The AIR runtime. The runtime is needed to run any AIR-based application. The SDK. The software development kit gives you all the pieces to test your application. Unzip the SDK into any folder you wish. You’ll also want to get your hands on the JavaScript API documentation which you’ll no doubt find yourself getting into before too long. (You can download it, too.) Also of interest, some development environments have support for AIR built right in. Aptana doesn’t have support for beta 3 yet but I suspect it’ll be available shortly. Within the SDK, there are two main tools that we’ll use: one to test the application (ADL) and another to build a distributable package of our application (ADT). I’ll get into this some more when we get to that stage of development. Building our To-do list application The first step to building an application within AIR is to create an XML file that defines our default application settings. I call mine application.xml, mostly because Aptana does that by default when creating a new AIR project. It makes sense though and I’ve stuck with it. Included in the templates folder of the SDK is an example XML file that you can use. The first key part to this, after specifying things like the application ID, version, and filename, is to specify what the default content should be within the content tags. Enter in the name of the HTML file you wish to load. Within this HTML file will be our application. <content>ui.html</content> Create a new HTML document, name it ui.html, and place it in the same directory as the application.xml file. The first thing you’ll want to do is copy over the AIRAliases.js file from the frameworks folder of the SDK and add a link to it within your HTML document. <script type=""text/javascript"" src=""AIRAliases.js""></script> The aliases create shorthand links to all of the Flash-based APIs. Now is probably a good time to explain how to debug your application. Debugging our application So, with our XML file created and HTML file started, let’s try testing our ‘application’. We’ll need the ADL application located in the bin folder of the SDK and tell it to run the application.xml file. /path/to/adl /path/to/application.xml You can also just drag the XML file onto ADL and it’ll accomplish the same thing. If you just did that and noticed that your blank application didn’t load, you’d be correct. It’s running but isn’t visible. Which at this point means you’ll have to shut down the ADL process. Sorry about that! Changing the visibility You have two ways to make your application visible. You can do it automatically by placing true in the visible tag within the application.xml file. <visible>true</visible> The other way is to do it programmatically from within your application. You’d want to do it this way if you had other startup tasks to perform before showing the interface. To turn the UI on programmatically, simply set the visible property of nativeWindow to true. <script type=""text/javascript""> nativeWindow.visible = true; </script> Sandbox Security Now that we have an application that we can see when we start it, it’s time to build the to-do list application.
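Before we do, here’s the complete ui.html skeleton as it stands, assembling the pieces above – a sketch only, assuming you’re using the programmatic approach to visibility:

<html>
<head>
    <script type=""text/javascript"" src=""AIRAliases.js""></script>
    <script type=""text/javascript"">
        // reveal the window once any startup work is done
        nativeWindow.visible = true;
    </script>
</head>
<body>
</body>
</html>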
In doing so, you’d probably think that using a JavaScript library is a really good idea — and it can be, but there are some limitations within AIR that have to be considered. An HTML document, by default, runs within the application sandbox. You have full access to the AIR APIs but once the onload event of the window has fired, you’ll have a limited ability to make use of eval and other dynamic script injection approaches. This limits the ability of external sources to gain access to everything the AIR API offers, such as database and local file system access. You’ll still be able to make use of eval for evaluating JSON responses, which is probably the most important use if you wish to consume JSON-based services. If you wish to create a greater wall of security between AIR and your HTML document loading in external resources, you can create a child sandbox. We won’t need to worry about it for our application, so I won’t go any further into it, but definitely keep this in mind. Finally, our application Getting tired of all this preamble? Let’s actually build our to-do list application. I’ll use jQuery because it’s small and should suit our needs nicely. Let’s begin with some structure: <body> <input type=""text"" id=""text"" value=""""> <input type=""button"" id=""add"" value=""Add""> <ul id=""list""></ul> </body> Now we need to wire up that button to actually add a new item to our to-do list. <script type=""text/javascript""> $(document).ready(function(){ // make sure the application is visible nativeWindow.visible = true; $('#add').click(function(){ var t = $('#text').val(); if(t) { // use DOM methods to create the new list item var li = document.createElement('li'); // the extra space at the end creates a buffer between the text // and the delete link we're about to add li.appendChild(document.createTextNode(t + ' ')); // create the delete link var del = document.createElement('a'); // this makes it a true link. I feel dirty doing this. del.setAttribute('href', '#'); del.addEventListener('click', function(evt){ this.parentNode.parentNode.removeChild(this.parentNode); }); del.appendChild(document.createTextNode('[del]')); li.appendChild(del); // append everything to the list $('#list').append(li); //reset the text box $('#text').val(''); } }) }); </script> And just like that, we’ve got a to-do list! That’s it! Just never close your application and you’ll remember everything. Okay, that’s not very practical. You need to have some way of storing your to-do items until the next time you open up the application. Storing Data You’ve essentially got 4 different ways that you can store data: Using the local database. AIR comes with SQLite built in. That means you can create tables and insert, update and select data from that database just like on a web server. Using the file system. You can also create files on the local machine. You have access to a few folders on the local system such as the documents folder and the desktop. Using EncryptedLocalStore. I like using the EncryptedLocalStore because it allows you to easily save key/value pairs and have that information encrypted. All this within just a couple of lines of code. Sending the data to a remote API. Our to-do list could sync up with Remember the Milk, for example. To demonstrate some persistence, we’ll use the file system to store our files. In addition, we’ll let the user specify where the file should be saved. This way, we can create multiple to-do lists, keeping them separate and organized.
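For comparison, the EncryptedLocalStore route really is only a couple of lines. A sketch – assuming the beta 3 aliases expose it as air.EncryptedLocalStore, with a key name of our own choosing:

// save the current list under a key of our choosing
var bytes = new air.ByteArray();
bytes.writeUTFBytes('feed the reindeer\r\nwrap the presents');
air.EncryptedLocalStore.setItem('todolist', bytes);
// ...and read it back out later, checking we stored something first
var stored = air.EncryptedLocalStore.getItem('todolist');
if(stored) {
    var items = stored.readUTFBytes(stored.length).split('\r\n');
}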
The application is now broken down into 4 basic tasks: Load data from the file system. Perform any interface bindings. Manage creating and deleting items from the list. Save any changes to the list back to the file system. Loading in data from the file system When the application starts up, we’ll prompt the user to select a file or specify a new to-do list. Within AIR, there are 3 main file objects: File, FileMode, and FileStream. File handles file and path names; FileMode is used as a parameter for the FileStream to specify whether the file should be read-only or for write access. The FileStream object handles all the read/write activity. The File object has a number of shortcuts to default paths like the documents folder, the desktop, or even the application store. In this case, we’ll specify the documents folder as the default location and then use the browseForSave method to prompt the user to specify a new or existing file. If the user specifies an existing file, they’ll be asked whether they want to overwrite it. var store = air.File.documentsDirectory; var fileStream = new air.FileStream(); store.browseForSave(""Choose To-do List""); Then we add an event listener for when the user has selected a file. When the file is selected, we check to see if the file exists and, if it does, read in the contents, splitting the file on new lines and creating our list items within the interface. store.addEventListener(air.Event.SELECT, fileSelected); function fileSelected() { air.trace(store.nativePath); // load in any stored data var byteData = new air.ByteArray(); if(store.exists) { fileStream.open(store, air.FileMode.READ); fileStream.readBytes(byteData, 0, store.size); fileStream.close(); if(byteData.length > 0) { var s = byteData.readUTFBytes(byteData.length); oldlist = s.split('\r\n'); // create todolist items for(var i=0; i < oldlist.length; i++) { createItem(oldlist[i], (new Date()).getTime() + i ); } } } } Perform Interface Bindings This is similar to before where we set the click event on the Add button but we’ve moved the code to save the list into a separate function. $('#add').click(function(){ var t = $('#text').val(); if(t){ // create an ID using the time createItem(t, (new Date()).getTime() ); } }) Manage creating and deleting items from the list The list management is now in its own function, similar to before but with some extra information to identify list items and with calls to save our list after each change. function createItem(t, id) { if(t.length == 0) return; // add it to the todo list todolist[id] = t; // use DOM methods to create the new list item var li = document.createElement('li'); // the extra space at the end creates a buffer between the text // and the delete link we're about to add li.appendChild(document.createTextNode(t + ' ')); // create the delete link var del = document.createElement('a'); // this makes it a true link. I feel dirty doing this. del.setAttribute('href', '#'); del.addEventListener('click', function(evt){ var id = this.id.substr(1); delete todolist[id]; // remove the item from the list this.parentNode.parentNode.removeChild(this.parentNode); saveList(); }); del.appendChild(document.createTextNode('[del]')); del.id = 'd' + id; li.appendChild(del); // append everything to the list $('#list').append(li); //reset the text box $('#text').val(''); saveList(); } Save changes to the file system Any time a change is made to the list, we update the file.
The file will always reflect the current state of the list and we’ll never have to click a save button. It just iterates through the list, adding a new line to each one. function saveList(){ if(store.isDirectory) return; var packet = ''; for(var i in todolist) { packet += todolist[i] + '\r\n'; } var bytes = new air.ByteArray(); bytes.writeUTFBytes(packet); fileStream.open(store, air.FileMode.WRITE); fileStream.writeBytes(bytes, 0, bytes.length); fileStream.close(); } One important thing to mention here is that we check if the store is a directory first. The reason we do this goes back to our browseForSave call. If the user cancels the dialog without selecting a file first, then the store points to the documentsDirectory that we set it to initially. Since we haven’t specified a file, there’s no place to save the list. Hopefully by this point, you’ve been thinking of some cool ways to pimp out your list. Now we need to package this up so that we can let other people use it, too. Creating a Package Now that we’ve created our application, we need to package it up so that we can distribute it. This is a two step process. The first step is to create a code signing certificate (or you can pay for one from Thawte which will help authenticate you as an AIR application developer). To create a self-signed certificate, run the following command. This will create a PFX file that you’ll use to sign your application. adt -certificate -cn todo24ways 1024-RSA todo24ways.pfx mypassword After you’ve done that, you’ll need to create the package with the certificate adt -package -storetype pkcs12 -keystore todo24ways.pfx todo24ways.air application.xml . The important part to mention here is the period at the end of the command. We’re telling it to package up all files in the current directory. After that, just run the AIR file, which will install your application and run it. Important things to remember about AIR When developing an HTML application, the rendering engine is Webkit. You’ll thank your lucky stars that you aren’t struggling with cross-browser issues. (My personal favourites are multiple backgrounds and border radius!) Be mindful of memory leaks. Things like Ajax calls and event binding can cause applications to slowly leak memory over time. Web pages are normally short lived but desktop applications are often open for hours, if not days, and you may find your little desktop application taking up more memory than anything else on your machine! The WebKit runtime itself can also be a memory hog, usually taking about 15MB just for itself. If you create multiple HTML windows, it’ll add another 15MB to your memory footprint. Our little to-do list application shouldn’t be much of a concern, though. The other important thing to remember is that you’re still essentially running within a Flash environment. While you probably won’t notice this working in small applications, the moment you need to move to multiple windows or need to accomplish stuff beyond what HTML and JavaScript can give you, the need to understand some of the Flash-based elements will become more important. Lastly, the other thing to remember is that HTML links will load within the AIR application. If you want a link to open in the users web browser, you’ll need to capture that event and handle it on your own. The following code takes the HREF from a clicked link and opens it in the default web browser. 
air.navigateToURL(new air.URLRequest(this.href)); Only the beginning Of course, this is only the beginning of what you can do with Adobe AIR. You don’t have the same level of control as you would building a native desktop application, such as being able to launch other applications, but you do have more control than you could have within a web application. Check out the Adobe AIR Developer Center for HTML and Ajax for tutorials and other resources. Now, go forth and create your desktop applications, and hopefully you finish all your shopping before Christmas! Download the example files.",2007,Jonathan Snook,jonathansnook,2007-12-19T00:00:00+00:00,https://24ways.org/2007/christmas-is-in-the-air/,code 153,JavaScript Internationalisation,"or: Why Rudolph Is More Than Just a Shiny Nose Dunder sat, glumly staring at the computer screen. “What’s up, Dunder?” asked Rudolph, entering the stable and shaking off the snow from his antlers. “Well,” Dunder replied, “I’ve just finished coding the new reindeer intranet Santa Claus asked me to do. You know how he likes to appear to be at the cutting edge, talking incessantly about Web 2.0, AJAX, rounded corners; he even spooked Comet recently by talking about him as if he were some pushy web server. “I’ve managed to keep him happy, whilst also keeping it usable, accessible, and gleaming — and I’m still on the back row of the sleigh! But anyway, given the elves will be the ones using the site, and they come from all over the world, the site is in multiple languages. Which is great, except when it comes to the preview JavaScript I’ve written for the reindeer order form. Here, have a look…” As he said that, he brought up the order form in French (/examples/javascript-internationalisation/initial.fr.html) on the screen. (Same in English). “Looks good,” said Rudolph. “But if I add some items,” said Dunder, “the preview appears in English, as it’s hard-coded in the JavaScript. I don’t want separate code for each language, as that’s just silly — I thought about just having if statements, but that doesn’t scale at all…” “And there’s more: you aren’t displaying large numbers in French properly, either,” added Rudolph, who had been playing and looking at part of the source code: function update_text() { var hay = getValue('hay'); var carrots = getValue('carrots'); var bells = getValue('bells'); var total = 50 * bells + 30 * hay + 10 * carrots; var out = 'You are ordering ' + pretty_num(hay) + ' bushel' + pluralise(hay) + ' of hay, ' + pretty_num(carrots) + ' carrot' + pluralise(carrots) + ', and ' + pretty_num(bells) + ' shiny bell' + pluralise(bells) + ', at a total cost of <strong>' + pretty_num(total) + '</strong> gold pieces. Thank you.'; document.getElementById('preview').innerHTML = out; } function pretty_num(n) { n += ''; var o = ''; for (var i=n.length; i>3; i-=3) { o = ',' + n.slice(i-3, i) + o; } o = n.slice(0, i) + o; return o; } function pluralise(n) { if (n!=1) return 's'; return ''; } “Oh, botheration!” cried Dunder. “This is just so complicated.”
The first variable we’ll need will be a thousands separator, and then we can change the pretty_num function to use that instead: function pretty_num(n) { n += ''; var o = ''; for (var i=n.length; i>3; i-=3) { o = i18n.thousands_sep + n.slice(i-3, i) + o; } o = n.slice(0, i) + o; return o; } “The i18n object will also contain our translations, which we will access through a function called _() — that’s just an underscore. Other languages have a function of the same name doing the same thing. It’s very simple: function _(s) { if (typeof(i18n)!='undefined' && i18n[s]) { return i18n[s]; } return s; } “So if a translation is available and provided, we’ll use that; otherwise we’ll default to the string provided — which is helpful if the translation begins to lag behind the site’s text at all, as at least something will be output.” “Got it,” said Dunder. “ _('Hello Dunder') will print the translation of that string, if one exists, ‘Hello Dunder’ if not.” “Exactly. Moving on, your plural function breaks even in English if we have a word where the plural doesn’t add an s — like ‘children’.” “You’re right,” said Dunder. “How did I miss that?” “No harm done. Better to provide both singular and plural words to the function and let it decide which to use, performing any translation as well: function pluralise(s, p, n) { if (n != 1) return _(p); return _(s); } “We’d have to provide different functions for different languages as we employed more elves and got more complicated — for example, in Polish, the word ‘file’ pluralises like this: 1 plik, 2-4 pliki, 5-21 plików, 22-24 pliki, 25-31 plików, and so on.” (More information on plural forms) “Gosh!” “Next, as different languages have different word orders, we must stop using concatenation to construct sentences, as it would be impossible for other languages to fit in; we have to keep coherent strings together. Let’s rewrite your update function, and then go through it: function update_text() { var hay = getValue('hay'); var carrots = getValue('carrots'); var bells = getValue('bells'); var total = 50 * bells + 30 * hay + 10 * carrots; hay = sprintf(pluralise('%s bushel of hay', '%s bushels of hay', hay), pretty_num(hay)); carrots = sprintf(pluralise('%s carrot', '%s carrots', carrots), pretty_num(carrots)); bells = sprintf(pluralise('%s shiny bell', '%s shiny bells', bells), pretty_num(bells)); var list = sprintf(_('%s, %s, and %s'), hay, carrots, bells); var out = sprintf(_('You are ordering %s, at a total cost of <strong>%s</strong> gold pieces.'), list, pretty_num(total)); out += ' '; out += _('Thank you.'); document.getElementById('preview').innerHTML = out; } “ sprintf is a function in many other languages that, given a format string and some variables, slots the variables into place within the string. JavaScript doesn’t have such a function, so we’ll write our own. Again, keep it simple for now, only integers and strings; I’m sure more complete ones can be found on the internet. function sprintf(s) { var bits = s.split('%'); var out = bits[0]; var re = /^([ds])(.*)$/; for (var i=1; i<bits.length; i++) { var p = re.exec(bits[i]); if (!p || arguments[i]==null) continue; if (p[1] == 'd') { out += parseInt(arguments[i], 10); } else if (p[1] == 's') { out += arguments[i]; } out += p[2]; } return out; } “Lastly, we need to create one file for each language, containing our i18n object, and then include that from the relevant HTML.
Here’s what a blank translation file would look like for your order form: var i18n = { thousands_sep: ',', ""%s bushel of hay"": '', ""%s bushels of hay"": '', ""%s carrot"": '', ""%s carrots"": '', ""%s shiny bell"": '', ""%s shiny bells"": '', ""%s, %s, and %s"": '', ""You are ordering %s, at a total cost of <strong>%s</strong> gold pieces."": '', ""Thank you."": '' }; “If you implement this across the intranet, you’ll want to investigate the xgettext program, which can automatically extract all strings that need translating from all sorts of code files into a standard .po file (I think Python mode works best for JavaScript). You can then use a different program to take the translated .po file and automatically create the language-specific JavaScript files for us.” (e.g. German .po file for PledgeBank, mySociety’s .po-.js script, example output) With a flourish, Rudolph finished editing. “And there we go, localised JavaScript in English, French, or German, all using the same main code.” “Thanks so much, Rudolph!” said Dunder. “I’m not just a pretty nose!” Rudolph quipped. “Oh, and one last thing — please comment liberally explaining the context of strings you use. Your translator will thank you, probably at the same time as they point out the four hundred places you’ve done something in code that only works in your language and no-one else’s…” Thanks to Tim Morley and Edmund Grimley Evans for the French and German translations respectively.",2007,Matthew Somerville,matthewsomerville,2007-12-08T00:00:00+00:00,https://24ways.org/2007/javascript-internationalisation/,code 157,Capturing Caps Lock,"One of the more annoying aspects of having to remember passwords (along with having to remember loads of them) is that if you’ve got Caps Lock turned on accidentally when you type one in, it won’t work, and you won’t know why. Most desktop computers alert you in some way if you’re trying to enter your password to log on and you’ve enabled Caps Lock; there’s no reason why the web can’t do the same. What we want is a warning – maybe the user wants Caps Lock on, because maybe their password is in capitals – rather than something that interrupts what they’re doing. Something subtle. But that doesn’t answer the question of how to do it. Sadly, there’s no way of actually detecting whether Caps Lock is on directly. However, there’s a simple work-around: if the user presses a key, and it’s a capital letter, and they don’t have the Shift key depressed, why then they must have Caps Lock on! Simple. DOM scripting allows your code to be notified when a key is pressed in an element; when the key is pressed, you get the ASCII code for that key. Capital letters, A to Z, have ASCII codes 65 to 90. So, the code would look something like: on a key press if the ASCII code for the key is between 65 and 90 *and* shift is not pressed warn the user that they have Caps Lock on, but let them carry on end if end keypress The actual JavaScript for this is more complicated, because both event handling and keypress information differ across browsers. Your event handling functions are passed an event object, except in Internet Explorer where you use the global event object; the event object has a which parameter containing the ASCII code for the key pressed, except in Internet Explorer where the event object has a keyCode parameter; some browsers store whether the shift key is pressed in a shiftKey parameter and some in a modifiers parameter.
All this boils down to code that looks something like this: keypress: function(e) { var ev = e ? e : window.event; if (!ev) { return; } var targ = ev.target ? ev.target : ev.srcElement; // get key pressed var which = -1; if (ev.which) { which = ev.which; } else if (ev.keyCode) { which = ev.keyCode; } // get shift status var shift_status = false; if (ev.shiftKey) { shift_status = ev.shiftKey; } else if (ev.modifiers) { shift_status = !!(ev.modifiers & 4); } // At this point, you have the ASCII code in 'which', // and shift_status is true if the shift key is pressed } Then it’s just a check to see if the ASCII code is between 65 and 90 and the shift key is not pressed. (You also need to do the same work if the ASCII code is between 97 (a) and 122 (z) and the shift key is pressed, because shifted letters are lower-case if Caps Lock is on.) if (((which >= 65 && which <= 90) && !shift_status) || ((which >= 97 && which <= 122) && shift_status)) { // uppercase, no shift key /* SHOW THE WARNING HERE */ } else { /* HIDE THE WARNING HERE */ } The warning can be implemented in many different ways: highlight the password field that the user is typing into, show a tooltip, display text next to the field. For simplicity, this code shows the warning as a previously created image, with appropriate alt text. Showing the warning means creating a new <img> tag with DOM scripting, dropping it into the page, and positioning it so that it’s next to the appropriate field. The image looks like this: You know the position of the field the user is typing into (from its offsetTop and offsetLeft properties) and how wide it is (from its offsetWidth property), so use createElement to make the new img element, and then absolutely position it with style properties so that it appears in the appropriate place (near to the text field). The image is a transparent PNG with an alpha channel, so that the drop shadow appears nicely over whatever else is on the page. Because Internet Explorer version 6 and below doesn’t handle transparent PNGs correctly, you need to use the AlphaImageLoader technique to make the image appear correctly. newimage = document.createElement('img'); newimage.src = ""http://farm3.static.flickr.com/2145/2067574980_3ddd405905_o_d.png""; newimage.style.position = ""absolute""; newimage.style.top = (targ.offsetTop - 73) + ""px""; newimage.style.left = (targ.offsetLeft + targ.offsetWidth - 5) + ""px""; newimage.style.zIndex = ""999""; newimage.setAttribute(""alt"", ""Warning: Caps Lock is on""); if (newimage.runtimeStyle) { // PNG transparency for IE newimage.runtimeStyle.filter += ""progid:DXImageTransform.Microsoft.AlphaImageLoader(src='http://farm3.static.flickr.com/2145/2067574980_3ddd405905_o_d.png',sizingMethod='scale')""; } document.body.appendChild(newimage); Note that the alt text on the image is also correctly set. Next, all these parts need to be pulled together. On page load, identify all the password fields on the page, and attach a keypress handler to each. (This only needs to be done for password fields because the user can see if Caps Lock is on in ordinary text fields.) var inps = document.getElementsByTagName(""input""); for (var i=0, l=inps.length; i<l; i++) { if (inps[i].type == ""password"") { inps[i].onkeypress = function(e) { capslock.keypress(e); }; } } The “create an image” code from above should only be run if the image is not already showing, so instead of creating a newimage object, create the image and attach it to the password field so that it can be checked for later (and not shown if it’s already showing).
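One way to wire up that check – the warning_img property name here is my own invention for this sketch; any expando property on the field would do:

// inside keypress, once Caps Lock appears to be on:
if (!targ.warning_img) {
    // ... run the createElement and positioning code from above ...
    targ.warning_img = newimage;
}
// and hiding the warning later becomes:
if (targ.warning_img) {
    targ.warning_img.parentNode.removeChild(targ.warning_img);
    targ.warning_img = null;
}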
For safety, all the code should be wrapped up in its own object, so that its functions don’t collide with anyone else’s functions. So, create a single object called capslock and make all the functions named methods of the object: var capslock = { ... keypress: function(e) { } ... } Also, the “create an image” code is saved into its own named function, show_warning(), and the converse “remove the image” code into hide_warning(). This has the advantage that developers can include the JavaScript library that has been written here, but override what actually happens with their own code, using something like: <script src=""jscapslock.js"" type=""text/javascript""></script> <script type=""text/javascript""> capslock.show_warning = function(target) { // do something different here to warn the user }; capslock.hide_warning = function(target) { // hide the warning that we created in show_warning() above }; </script> And that’s all. Simply include the JavaScript library in your pages, override what happens on a warning if that’s more appropriate for what you’re doing, and that’s all you need. See the script in action.",2007,Stuart Langridge,stuartlangridge,2007-12-04T00:00:00+00:00,https://24ways.org/2007/capturing-caps-lock/,code 161,Keeping JavaScript Dependencies At Bay,"As we are writing more and more complex JavaScript applications we run into problems that have hitherto (god I love that word) not been an issue. The first decision we have to make is what to do when planning our app: one big massive JS file or a lot of smaller, specialised files separated by task. Personally, I tend to favour the latter, mainly because it allows you to work on components in parallel with other developers without lots of clashes in your version control. It also means that your application will be more lightweight as you only include components on demand. Starting with a global object This is why it is a good plan to start your app with one single object that also becomes the namespace for the whole application, say for example myAwesomeApp: var myAwesomeApp = {}; You can nest any necessary components into this one and also make sure that you check for dependencies like DOM support right up front. Adding the components The other thing to add to this main object is a components object, which defines all the components that are there and their file names. var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } } }; Technically you can also omit the loaded properties, but it is cleaner this way. The next thing to add is an addComponent function that can load your components on demand by adding new SCRIPT elements to the head of the document when they are needed.
var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } } }; This allows you to add new components on the fly when they have not yet been loaded: if(!myAwesomeApp.components.gallery.loaded){ myAwesomeApp.addComponent('gallery'); }; Verifying that components have been loaded However, this is not safe, as the file might not be available. To make the dynamic adding of components safer, each of the components should have a callback at the end that notifies the main object that it has indeed been loaded: var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } }, componentAvailable:function(component){ this.components[component].loaded = true; } } For example, the gallery.js file should call this notification as its last line: myAwesomeApp.gallery = function(){ // [... other code ...] }(); myAwesomeApp.componentAvailable('gallery'); Telling the implementers when components are available The last thing to add (actually as a courtesy measure for debugging and implementers) is to offer a listener function that gets notified when the component has been loaded: var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } }, componentAvailable:function(component){ this.components[component].loaded = true; if(this.listener){ this.listener(component); }; } }; This allows you to write a main listener function that acts when certain components have been loaded, for example: myAwesomeApp.listener = function(component){ if(component === 'gallery'){ showGallery(); } }; Extending with other components As the main object is public, other developers can extend the components object with their own components and use the listener function to load dependent components.
Say you have a bespoke component with data and labels in extra files: myAwesomeApp.listener = function(component){ if(component === 'bespokecomponent'){ myAwesomeApp.addComponent('bespokelabels'); }; if(component === 'bespokelabels'){ myAwesomeApp.addComponent('bespokedata'); }; if(component === 'bespokedata'){ myAwesomeApp.bespokecomponent.init(); }; }; myAwesomeApp.components.bespokecomponent = { url:'bespoke.js', loaded:false }; myAwesomeApp.components.bespokelabels = { url:'bespokelabels.js', loaded:false }; myAwesomeApp.components.bespokedata = { url:'bespokedata.js', loaded:false }; myAwesomeApp.addComponent('bespokecomponent'); Following this practice you can write pretty complex apps and still have full control over what is available when. You can also extend this to allow for CSS files to be added on demand. Influences If you like this idea and wonder if someone already uses it, take a look at the Yahoo! User Interface library, and especially at the YAHOO_config option of the global YAHOO.js object.",2007,Christian Heilmann,chrisheilmann,2007-12-18T00:00:00+00:00,https://24ways.org/2007/keeping-javascript-dependencies-at-bay/,code 162,Conditional Love,"“Browser.” The four-letter word of web design. I mean, let’s face it: on the good days, when things just work in your target browsers, it’s marvelous. The air smells sweeter, birds’ songs sound more melodious, and both your design and your code are looking sharp. But on the less-than-good days (which is, frankly, most of them), you’re compelled to tie up all your browsers in a sack, heave them into the nearest river, and start designing all-imagemap websites. We all play favorites, after all: some will swear by Firefox, Opera fans are allegedly legion, and others still will frown upon anything less than the latest WebKit nightly. Thankfully, we do have an out for those little inconsistencies that crop up when dealing with cross-browser testing: CSS patches. Spare the Rod, Hack the Browser Before committing browsercide over some rendering bug, a designer will typically reach for a snippet of CSS to fix the faulty browser. Historically, these snippets have been referred to as “hacks,” but I prefer Dan Cederholm’s more client-friendly alternative, “patches”. But whatever you call them, CSS patches all work along the same principle: supply the proper property value to the good browsers, while giving other, higher-maintenance browsers an incorrect value that their frustrating idiosyncratic rendering engine can understand. Traditionally, this has been done either by exploiting incomplete CSS support: #content { height: 1%; /* Let's force hasLayout for old versions of IE. */ line-height: 1.6; padding: 1em; } html>body #content { height: auto; /* Modern browsers get a proper height value. */ } or by exploiting bugs in their rendering engine to deliver alternate style rules: #content p { font-size: .8em; /* Hide from Mac IE5 \*/ font-size: .9em; /* End hiding from Mac IE5 */ } We’ve even used these exploits to serve up whole stylesheets altogether: @import url(""core.css""); @media tty { i{content:""\"";/*"" ""*/}} @import 'windows-ie5.css'; /*"";} }/* */ The list goes on, and on, and on. For every browser, for every bug, there’s a patch available. But after some time working with standards-based layouts, I’ve found that CSS patches, as we’ve traditionally used them, become increasingly difficult to maintain.
As stylesheets are modified over the course of a site’s lifetime, inline fixes we’ve written may become obsolete, making them difficult to find, update, or prune out of our CSS. A good patch requires a constant gardener to ensure that it adds more than just bloat to a stylesheet, and inline patches can be very hard to weed out of a decently sized CSS file. Giving the Kids Separate Rooms Since I joined Airbag Industries earlier this year, every project we’ve worked on has this in the head of its templates: <link rel=""stylesheet"" href=""-/css/screen/main.css"" type=""text/css"" media=""screen, projection"" /> <!--[if lt IE 7]> <link rel=""stylesheet"" href=""-/css/screen/patches/win-ie-old.css"" type=""text/css"" media=""screen, projection"" /> <![endif]--> <!--[if gte IE 7]> <link rel=""stylesheet"" href=""-/css/screen/patches/win-ie7-up.css"" type=""text/css"" media=""screen, projection"" /> <![endif]--> The first element is, simply enough, a link element that points to the project’s main CSS file. No patches, no hacks: just pure, modern browser-friendly style rules. Which, nine times out of ten, will net you a design that looks like spilled eggnog in various versions of Internet Explorer. But don’t reach for the mulled wine quite yet. Immediately after, we’ve got a brace of conditional comments wrapped around two other link elements. These odd-looking comments allow us to selectively serve up additional stylesheets just to the version of IE that needs them. We’ve got one for IE 6 and below: <!--[if lt IE 7]> <link rel=""stylesheet"" href=""-/css/screen/patches/win-ie-old.css"" type=""text/css"" media=""screen, projection"" /> <![endif]--> And another for IE7 and above: <!--[if gte IE 7]> <link rel=""stylesheet"" href=""-/css/screen/patches/win-ie7-up.css"" type=""text/css"" media=""screen, projection"" /> <![endif]--> Microsoft’s conditional comments aren’t exactly new, but they can be a valuable alternative to cooking CSS patches directly into a master stylesheet. And though they’re not a W3C-approved markup structure, I think they’re just brilliant because they innovate within the spec: non-IE devices will assume that the comments are just that, and ignore the markup altogether. This does, of course, mean that there’s a little extra markup in the head of our documents. But this approach can seriously cut down on the unnecessary patches served up to the browsers that don’t need them. Namely, we no longer have to write rules like this in our main stylesheet: #content { height: 1%; /* Let's force hasLayout for old versions of IE. */ line-height: 1.6; padding: 1em; } html>body #content { height: auto; /* Modern browsers get a proper height value. */ } Rather, we can simply write an un-patched rule in our core stylesheet: #content { line-height: 1.6; padding: 1em; } And now, our patch for older versions of IE goes in—you guessed it—the stylesheet for older versions of IE: #content { height: 1%; } The hasLayout patch is applied, our design’s repaired, and—most importantly—the patch is only seen by the browser that needs it. The “good” browsers don’t have to incur any added stylesheet weight from our IE patches, and Internet Explorer gets the conditional love it deserves. Most importantly, this “compartmentalized” approach to CSS patching makes it much easier for me to patch and maintain the fixes applied to a particular browser.
If I need to track down a bug for IE7, I don’t need to scroll through dozens or hundreds of rules in my core stylesheet: instead, I just open the considerably slimmer IE7-specific patch file, make my edits, and move right along. Even Good Children Misbehave While IE may occupy the bulk of our debugging time, there’s no denying that other popular, modern browsers will occasionally disagree on how certain bits of CSS should be rendered. But without something as, well, pimp as conditional comments at our disposal, how do we bring the so-called “good browsers” back in line with our design? Assuming you’re loving the “one patch file per browser” model as much as I do, there’s just one alternative: JavaScript. function isSaf() { var isSaf = (document.childNodes && !document.all && !navigator.taintEnabled && !navigator.accentColorName) ? true : false; return isSaf; } function isOp() { var isOp = (window.opera) ? true : false; return isOp; } Instead of relying on dotcom-era tactics of parsing the browser’s user-agent string, we’re testing here for support for various DOM objects, whose presence or absence we can use to reasonably infer the browser we’re looking at. So running the isOp() function, for example, will test for Opera’s proprietary window.opera object, and thereby accurately tell you if your user’s running Norway’s finest browser. With scripts such as isOp() and isSaf() in place, you can then reasonably test which browser’s viewing your content, and insert additional link elements as needed. function loadPatches(dir) { if (document.getElementsByTagName && document.createElement) { var head = document.getElementsByTagName(""head"")[0]; if (head) { var css = new Array(); if (isSaf()) { css.push(""saf.css""); } else if (isOp()) { css.push(""opera.css""); } if (css.length) { var link = document.createElement(""link""); link.setAttribute(""rel"", ""stylesheet""); link.setAttribute(""type"", ""text/css""); link.setAttribute(""media"", ""screen, projection""); for (var i = 0; i < css.length; i++) { var tag = link.cloneNode(true); tag.setAttribute(""href"", dir + css[i]); head.appendChild(tag); } } } } } Here, we’re testing the results of isSaf() and isOp(), one after the other. For each function that returns true, the name of a new stylesheet is added to the oh-so-cleverly named css array. Then, for each entry in css, we create a new link element, point it at our patch file, and insert it into the head of our template. Fire it up using your favorite onload or DOMContentLoaded function, and you’re good to go. Scripteat Emptor At this point, some of the audience’s more conscientious ‘scripters may be preparing to lob figgy pudding at this author’s head. And that’s perfectly understandable; relying on JavaScript to patch CSS chafes a bit against the normally clean separation we have between our pages’ content, presentation, and behavior layers. And beyond the philosophical concerns, this approach comes with a few technical caveats attached: Browser detection? So un-133t. Browser detection is not something I’d typically recommend. Whenever possible, a proper DOM script should check for the support of a given object or method, rather than the device with which your users view your content. It’s JavaScript, so don’t count on it being available. According to one site, roughly four percent of Internet users don’t have JavaScript enabled.
Your site’s stats might be higher or lower than this number, but still: don’t expect that every member of your audience will see these additional stylesheets, and ensure that your content’s still accessible with JS turned off. Be a constant gardener. The sample isSaf() and isOp() functions I’ve written will tell you if the user’s browser is Safari or Opera, but not which version. As a result, stylesheets written to patch issues in an old browser may break when later releases repair the relevant CSS bugs. You can, of course, add logic to these simple little scripts to serve up version-specific stylesheets, but that way madness may lie. In any event, test your work vigorously, and keep testing it when new versions of the targeted browsers come out. Make sure that a patch written today doesn’t become a bug tomorrow. Patching Firefox, Opera, and Safari isn’t something I’ve had to do frequently: still, there have been occasions where the above script’s come in handy. Between conditional comments, careful CSS auditing, and some judicious JavaScript, browser-based bugs can be handled with near-surgical precision. So pass the ‘nog. It’s patchin’ time.",2007,Ethan Marcotte,ethanmarcotte,2007-12-15T00:00:00+00:00,https://24ways.org/2007/conditional-love/,code 163,Get To Grips with Slippy Maps,"Online mapping has definitely hit mainstream. Google Maps made ‘slippy maps’ popular and made it easy for any developer to quickly add a dynamic map to his or her website. You can now find maps for store locations, friends nearby, upcoming events, and embedded in blogs. In this tutorial we’ll show you how to easily add a map to your site using the Mapstraction mapping library. There are many map providers available to choose from, each with slightly different functionality, design, and terms of service. Mapstraction makes deciding which provider to use easy by allowing you to write your mapping code once, and then easily switch providers. Assemble the pieces Utilizing any of the mapping libraries typically consists of similar overall steps: Create an HTML div to hold the map Include the Javascript libraries Create the Javascript Map element Set the initial map center and zoom level Add markers, lines, overlays and more Create the Map Div The HTML div is where the map will actually show up on your page. It needs to have a unique id, because we’ll refer to that later to actually put the map here. This also lets you have multiple maps on a page, by creating individual divs and Javascript map elements. The size of the div also sets the height and width of the map. You set the size using CSS, either inline with the element, or via a CSS reference to the element id or class. For this example, we’ll use inline styling. <div id=""map"" style=""width: 400px; height: 400px;""></div> Include Javascript libraries A mapping library is like any Javascript library. You need to include the library in your page before you use the methods of that library. For our tutorial, we’ll need to include at least two libraries: Mapstraction, and the mapping API(s) we want to display. For our first example we’ll use the ubiquitous Google Maps library. However, you can just as easily include Yahoo, MapQuest, or any of the other supported libraries. Another important aspect of the mapping libraries is that many of them require an API key. You will need to agree to the terms of service, and get an API key for these.
<script src=""http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY"" type=""text/javascript""></script> <script type=""text/javascript"" src=""http://mapstraction.com/src/mapstraction.js""></script> Create the Map Great, we’ve now put in all the pieces we need to start actually creating our map. This is as simple as creating a new Mapstraction object with the id of the HTML div we created earlier, and the name of the mapping provider we want to use for this map. With several of the mapping libraries you will need to set the map center and zoom level before the map will appear. The map centering actually triggers the initialization of the map. var mapstraction = new Mapstraction('map','google'); var myPoint = new LatLonPoint(37.404,-122.008); mapstraction.setCenterAndZoom(myPoint, 10); A note about zoom levels. The setCenterAndZoom function takes two parameters, the center as a LatLonPoint, and a zoom level that has been defined by mapping libraries. The current usage is for zoom level 1 to be “zoomed out”, or view the entire earth – and increasing the zoom level as you zoom in. Typically 17 is the maximum zoom, which is about the size of a house. Different mapping providers have different quality of zoomed in maps over different parts of the world. This is a perfect reason why using a library like Mapstraction is very useful, because you can quickly change mapping providers to accommodate users in areas that have bad coverage with some maps. To switch providers, you just need to include the Javascript library, and then change the second parameter in the Mapstraction creation. Or, you can call the switch method to dynamically switch the provider. So for Yahoo Maps (demo): var mapstraction = new Mapstraction('map','yahoo'); or Microsoft Maps (demo): var mapstraction = new Mapstraction('map','microsoft'); want a 3D globe in your browser? try FreeEarth (demo): var mapstraction = new Mapstraction('map','freeearth'); or even OpenStreetMap (free your data!) (demo): var mapstraction = new Mapstraction('map','openstreetmap'); Visit the Mapstraction multiple map demo page for an example of how easy it is to have many maps on your page, each with a different provider. Adding Markers While adding your first map is fun, and you can probably spend hours just sliding around, the point of adding a map to your site is usually to show the location of something. So now you want to add some markers. There are a couple of ways to add to your map. The simplest is directly creating markers. You could either hard code this into a rather static page, or dynamically generate these using whatever tools your site is built on. var marker = new Marker( new LatLonPoint(37.404,-122.008) ); marker.setInfoBubble(""It's easy to add maps to your site""); mapstraction.addMarker( marker ); There is a lot more you can do with markers, including changing the icon, adding timestamps, automatically opening the bubble, or making them draggable. While it is straight-forward to create markers one by one, there is a much easier way to create a large set of markers. And chances are, you can make it very easy by extending some data you already are sharing: RSS. Specifically, using GeoRSS you can easily add a large set of markers directly to a map. GeoRSS is a community built standard (like Microformats) that added geographic markup to RSS and Atom entries. It’s as simple as adding <georss:point>42 -83</georss:point> to your feeds to share items via GeoRSS. 
Once you’ve done that, you can add that feed as an ‘overlay’ to your map using the function: mapstraction.addOverlay(""http://api.flickr.com/services/feeds/groups_pool.gne?id=322338@N20&format=rss_200&georss=1""); Mapstraction also supports KML for many of the mapping providers. So it’s easy to add various data sources together with your own data. Check out Mapufacture for a growing index of available GeoRSS feeds and KML documents. Play with your new toys Mapstraction offers plenty more functionality you can utilize for presenting geographic data on your website. It also includes geocoding and routing abstraction layers for making sure your users know where to go. You can see more on the Mapstraction website: http://mapstraction.com.",2007,Andrew Turner,andrewturner,2007-12-02T00:00:00+00:00,https://24ways.org/2007/get-to-grips-with-slippy-maps/,code 164,My Other Christmas Present Is a Definition List,"A note from the editors: readers should note that the HTML5 redefinition of definition lists has come to pass and is now à la mode. Last year, I looked at how the markup for tag clouds was generally terrible. I thought this year I would look not at a method of marking up a common module, but instead just at a simple part of HTML and how it generally gets abused. No, not tables. Definition lists. Ah, definition lists. Often used but rarely understood. Examining the definition of definitions To start with, let’s see what the HTML spec has to say about them. Definition lists vary only slightly from other types of lists in that list items consist of two parts: a term and a description. The canonical example of a definition list is a dictionary. Words can have multiple descriptions (even the word definition has at least five). Also, many terms can share a single definition (for example, the word colour can also be spelt color, but they have the same definition). Excellent, we can all grasp that. But it very quickly starts to fall apart. Even in the HTML specification the definition list is mis-used. Another application of DL, for example, is for marking up dialogues, with each DT naming a speaker, and each DD containing his or her words. Wrong. Completely and utterly wrong. This is the biggest flaw in the HTML spec, along with dropping support for the start attribute on ordered lists. “Why?”, you may ask. Let me give you an example from Romeo and Juliet, act 2, scene 2. <dt>Juliet</dt> <dd>Romeo!</dd> <dt>Romeo</dt> <dd>My niesse?</dd> <dt>Juliet</dt> <dd>At what o'clock tomorrow shall I send to thee?</dd> <dt>Romeo</dt> <dd>At the hour of nine.</dd> Now, the problem here is that a given definition can have multiple descriptions (the DD). Really the dialog “descriptions” should be rolled up under the terms, like so: <dt>Juliet</dt> <dd>Romeo!</dd> <dd>At what o'clock tomorrow shall I send to thee?</dd> <dt>Romeo</dt> <dd>My niesse?</dd> <dd>At the hour of nine.</dd> Suddenly the play won’t make anywhere near as much sense. (If it’s anything, the correct markup for a play is an ordered list of CITE and BLOCKQUOTE elements.) This is the first part of the problem. That simple example has turned definition lists in everyone’s mind from pure definitions to more along the lines of a list with pre-configured heading(s) and text(s). Screen reader, enter stage left. In many screen readers, a simple definition list would be read out as “definition term equals definition description”. So in our play excerpt, Juliet equals Romeo! That’s not right, either.
But this also leads a lot of people astray, to believing that definition lists are useful for key/value pairs. Behaviour and convention The WHAT-WG have noticed the common mis-use of the DL, and have codified it into the new spec. In the HTML5 draft, a definition list is no longer a definition list. The dl element introduces an unordered association list consisting of zero or more name-value groups (a description list). Each group must consist of one or more names (dt elements) followed by one or more values (dd elements). They also note that the “dl element is inappropriate for marking up dialogue, since dialogue is ordered”. So for that example they have created a DIALOG (sic) element. Strange, then, that they keep DL as-is but instead refer to it as an “association list”. They have not created a new AL element and kept DL for its original purpose. They have chosen not to correct the usage or to create a new opportunity for increased specificity in our HTML, but to “pave the cowpath” of convention. How to use a definition list Given that everyone else is using a DL incorrectly, should we? Well, if they all jumped off a bridge, would you too? No, of course you wouldn’t. We don’t have HTML5 yet, so we’re stuck with the existing semantics of HTML4 and XHTML1. Which means that: Listing dialogue is not defining anything. Listing the attributes of a piece of hardware (resolution = 1600×1200) is illustrating sample values, not defining anything (however, stating what ‘resolution’ actually means in this context would be a definition). Listing the cast and crew of a given movie is not defining the people involved in making movies. (Stuart Gordon may have been the director of Space Truckers, but that by no means makes him the true definition of a director.) A menu of navigation items is simply a nested ordered or unordered list of links, not a definition list. Applying styling handles to form labels and elements is not a good use for a definition list. And so on. Living by the specification, a definition list should be used for term definitions – glossaries, lexicons and dictionaries – only. Anything else is a crime against markup.",2007,Mark Norman Francis,marknormanfrancis,2007-12-05T00:00:00+00:00,https://24ways.org/2007/my-other-christmas-present-is-a-definition-list/,code 165,Transparent PNGs in Internet Explorer 6,"Newer breeds of browser such as Firefox and Safari have offered support for PNG images with full alpha channel transparency for a few years. With the use of hacks, support has been available in Internet Explorer 5.5 and 6, but the hacks are non-ideal and have been tricky to use. With IE7 winning masses of users from earlier versions over the last year, full PNG alpha-channel transparency is becoming more of a reality for day-to-day use. However, there are still significant numbers of IE6 users out there who we can’t leave out in the cold this Christmas, so in this article I’m going to look at what we can do to support IE6 users whilst taking full advantage of transparency for the majority of a site’s visitors. So what’s alpha channel transparency? Cast your minds back to the Ghost of Christmas Past, the humble GIF. Images in GIF format offer transparency, but that transparency is either on or off for any given pixel. Each pixel’s either fully transparent, or a solid colour. In GIF, transparency is effectively just a special colour you can choose for a pixel. The PNG format tackles the problem rather differently.
As well as having any colour you choose, each pixel also carries a separate channel of information detailing how transparent it is. This alpha channel enables a pixel to be fully transparent, fully opaque, or critically, any step in between. This enables designers to produce images that can have, for example, soft edges without any of the ‘halo effect’ traditionally associated with GIF transparency. If you’ve ever worked on a site that has different colour schemes and therefore requires multiple versions of each graphic against a different colour, you’ll immediately see the benefit. What’s perhaps more interesting than that, however, is the extra creative freedom this gives designers in creating beautiful sites that can remain web-like in their ability to adjust, scale and reflow. The Internet Explorer problem Up until IE7, there was no fully native support for PNG alpha channel transparency in Internet Explorer. However, since IE5.5 there has been some support in the form of a proprietary filter called the AlphaImageLoader. Internet Explorer filters can be applied directly in your CSS (for both inline and background images), or by setting the same CSS property with JavaScript. CSS: img { filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(...); } JavaScript: img.style.filter = ""progid:DXImageTransform.Microsoft.AlphaImageLoader(...)""; That may sound like a problem solved, but all is not as it may appear. Firstly, as you may realise, there’s no CSS property called filter in the W3C CSS spec. It’s a proprietary extension added by Microsoft that could potentially cause other browsers to reject your entire CSS rule. Secondly, AlphaImageLoader does not magically add full PNG transparency support so that a PNG in the page will just start working. Instead, when applied to an element in the page, it draws a new rendering surface in the same space that element occupies and loads a PNG into it. If that sounds weird, it’s because that’s precisely what it is. However, by and large the result is that PNGs with an alpha channel can be accommodated. The pitfalls So, whilst support for PNG transparency in IE5.5 and 6 is possible, it’s not without its problems. Background images cannot be positioned or repeated The AlphaImageLoader does work for background images, but only for the simplest of cases. If your design requires the image to be tiled (background-repeat) or positioned (background-position) you’re out of luck. The AlphaImageLoader allows you to set a sizingMethod to either crop the image (if necessary) or to scale it to fit. Not massively useful, but something at least. Delayed loading and resource use The AlphaImageLoader can be quite slow to load, and appears to consume more resources than a standard image when applied. Typically, you’d need to add thousands of GIFs or JPEGs to a page before you saw any noticeable impact on the browser, but with the AlphaImageLoader filter applied Internet Explorer can become sluggish after just a handful of alpha channel PNGs. The other noticeable effect is that as more instances of the AlphaImageLoader are applied, the longer it takes to render the PNGs with their transparency. The user sees the PNG load in its original non-supported state (with black or grey areas where transparency should be) before one by one the filter kicks in and makes them properly transparent. Both the issue of sluggish behaviour and delayed load only really manifest themselves with the volume and size of images.
Use just a couple of instances and it’s fine, but be careful adding more than five or six. As ever, test, test, test. Links become unclickable, forms unfocusable This is a big one. There’s a bug/weirdness with AlphaImageLoader that sometimes prevents interaction with links and forms when a PNG background image is used. This is sometimes reported as a z-index issue, but I don’t believe it is. Rather, it’s an artefact of the weird way the filter gets applied to the document almost outside of the normal render process. Often this can be solved by giving the links or form elements hasLayout using position: relative; where possible. However, this doesn’t always work and the non-interaction problem cannot always be solved. You may find yourself having to go back to the drawing board. Sidestepping the danger zones Frankly, it’s pretty bad news if you design a site, have that design signed off by your client, build it and then find out only at the end (because you don’t know what might trigger a problem) that your search field can’t be focused in IE6. That’s an absolute nightmare, and whilst it’s not likely to happen, it’s possible that it might. It’s happened to me. So what can you do? The best approach I’ve found to this scenario is: Isolate the PNG or PNGs that are causing the problem. Step through the PNGs in your page, commenting them out one by one and retesting. Typically it’ll be the nearest PNG to the problem, so try there first. Keep going until you can click your links or focus your form fields. This is where you really need luck on your side, because you’re going to have to fake it. This will depend on the design of the site, but some way or other create a replacement GIF or JPEG image that will give you an acceptable result. Then use conditional comments to serve that image to only users of IE older than version 7. A hack, you say? Well, you started it, chum. Applying AlphaImageLoader Because the filter property is invalid CSS, the safest pragmatic approach is to apply it selectively with JavaScript for only Internet Explorer versions 5.5 and 6. This helps ensure that by default you’re serving standard CSS to browsers that support both the CSS and PNG standards correctly, and then selectively patching up only the browsers that need it. Several years ago, Aaron Boodman wrote and released a script called sleight for doing just that. However, sleight dealt only with images in the page, and not background images applied with CSS. Building on top of Aaron’s work, I hacked sleight and came up with bgsleight for applying the filter to background images instead. That was in 2003, and over the years I’ve made a couple of improvements here and there to keep it ticking over and to resolve conflicts between sleight and bgsleight when used together. However, with alpha channel PNGs becoming much more widespread, it’s time for a new version. Introducing SuperSleight SuperSleight adds a number of new and useful features that have come from the day-to-day needs of working with PNGs. Works with both inline and background images, replacing the need for both sleight and bgsleight Will automatically apply position: relative to links and form fields if they don’t already have position set. (Can be disabled.) Can be run on the entire document, or just a selected part where you know the PNGs are. This is better for performance. Detects background images set to no-repeat and sets the scaleMode to crop rather than scale.
Can be re-applied by any other JavaScript in the page – useful if new content has been loaded by an Ajax request. Download SuperSleight Implementation Getting SuperSleight running on a page is quite straightforward: you just need to link the supplied JavaScript file (or the minified version if you prefer) into your document inside conditional comments so that it is delivered to only Internet Explorer 6 or older. <!--[if lte IE 6]> <script type=""text/javascript"" src=""supersleight-min.js""></script> <![endif]--> Supplied with the JavaScript is a simple transparent GIF file. The script replaces the existing PNG with this before re-layering the PNG over the top using AlphaImageLoader. You can change the name or path of the image in the top of the JavaScript file, where you’ll also find the option to turn off the adding of position: relative to links and fields if you don’t want that. The script is kicked off with a call to supersleight.init() at the bottom. The scope of the script can be limited to just one part of the page by passing an ID of an element to supersleight.limitTo(). And that’s all there is to it. Update March 2008: a version of this script as a jQuery plugin is also now available.",2007,Drew McLellan,drewmclellan,2007-12-01T00:00:00+00:00,https://24ways.org/2007/supersleight-transparent-png-in-ie6/,code 168,Unobtrusively Mapping Microformats with jQuery,"Microformats are everywhere. You can’t shake an electronic stick these days without accidentally poking a microformat-enabled site, and many developers use microformats as a matter of course. And why not? After all, why invent your own class names when you can re-use pre-defined ones that give your site extra functionality for free? Nevertheless, while it’s good to know that users of tools such as Tails and Operator will derive added value from your shiny semantics, it’s nice to be able to reuse that effort in your own code. We’re going to build a map of some of my favourite restaurants in Brighton. Fitting with the principles of unobtrusive JavaScript, we’ll start with a semantically marked up list of restaurants, then use JavaScript to add the map, look up the restaurant locations and plot them as markers. We’ll be using a couple of powerful tools. The first is jQuery, a JavaScript library that is ideally suited for unobtrusive scripting. jQuery allows us to manipulate elements on the page based on their CSS selector, which makes it easy to extract information from microformats. The second is Mapstraction, introduced here by Andrew Turner a few days ago. We’ll be using Google Maps in the background, but Mapstraction makes it easy to change to a different provider if we want to later. Getting Started We’ll start off with a simple collection of microformatted restaurant details, representing my seven favourite restaurants in Brighton. The full, unstyled list can be seen in restaurants-plain.html.
Each restaurant listing looks like this: <li class=""vcard""> <h3><a class=""fn org url"" href=""http://www.riddleandfinns.co.uk/"">Riddle & Finns</a></h3> <div class=""adr""> <p class=""street-address"">12b Meeting House Lane</p> <p><span class=""locality"">Brighton</span>, <abbr class=""country-name"" title=""United Kingdom"">UK</abbr></p> <p class=""postal-code"">BN1 1HB</p> </div> <p>Telephone: <span class=""tel"">+44 (0)1273 323 008</span></p> <p>E-mail: <a href=""mailto:info@riddleandfinns.co.uk"" class=""email"">info@riddleandfinns.co.uk</a></p> </li> Since we’re dealing with a list of restaurants, each hCard is marked up inside a list item. Each restaurant is an organisation; we signify this by placing the classes fn and org on the element surrounding the restaurant’s name (according to the hCard spec, setting both fn and org to the same value signifies that the hCard represents an organisation rather than a person). The address information itself is contained within a div of class adr. Note that the HTML <address> element is not suitable here for two reasons: firstly, it is intended to mark up contact details for the current document rather than generic addresses; secondly, address is an inline element and as such cannot contain the paragraph elements used here for the address information. A nice thing about microformats is that they provide us with automatic hooks for our styling. For the moment we’ll just tidy up the whitespace a bit; for more advanced style tips consult John Allsop’s guide from 24 ways 2006. .vcard p { margin: 0; } .adr { margin-bottom: 0.5em; } To plot the restaurants on a map we’ll need latitude and longitude for each one. We can find this out from their address using geocoding. Most mapping APIs include support for geocoding, which means we can pass the API an address and get back a latitude/longitude point. Mapstraction provides an abstraction layer around these APIs which can be included using the following script tag: <script type=""text/javascript"" src=""http://mapstraction.com/src/mapstraction-geocode.js""></script> While we’re at it, let’s pull in the other external scripts we’ll be using: <script type=""text/javascript"" src=""jquery-1.2.1.js""></script> <script src=""http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY"" type=""text/javascript""></script> <script type=""text/javascript"" src=""http://mapstraction.com/src/mapstraction.js""></script> <script type=""text/javascript"" src=""http://mapstraction.com/src/mapstraction-geocode.js""></script> That’s everything set up: let’s write some JavaScript! In jQuery, almost every operation starts with a call to the jQuery function. The function simulates method overloading to behave in different ways depending on the arguments passed to it. When writing unobtrusive JavaScript it’s important to set up code to execute when the page has loaded to the point that the DOM is available to be manipulated. To do this with jQuery, pass a callback function to the jQuery function itself: jQuery(function() { // This code will be executed when the DOM is ready }); Initialising the map The first thing we need to do is initialise our map. Mapstraction needs a div with an explicit width, height and ID to show it where to put the map.
Our document doesn’t currently include this markup, but we can insert it with a single line of jQuery code: jQuery(function() { // First create a div to host the map var themap = jQuery('<div id=""themap""></div>').css({ 'width': '90%', 'height': '400px' }).insertBefore('ul.restaurants'); }); While this is technically just a single line of JavaScript (with line-breaks added for readability) it’s actually doing quite a lot of work. Let’s break it down in to steps: var themap = jQuery('<div id=""themap""></div>') Here’s jQuery’s method overloading in action: if you pass it a string that starts with a < it assumes that you wish to create a new HTML element. This provides us with a handy shortcut for the more verbose DOM equivalent: var themap = document.createElement('div'); themap.id = 'themap'; Next we want to apply some CSS rules to the element. jQuery supports chaining, which means we can continue to call methods on the object returned by jQuery or any of its methods: var themap = jQuery('<div id=""themap""></div>').css({ 'width': '90%', 'height': '400px' }) Finally, we need to insert our new HTML element in to the page. jQuery provides a number of methods for element insertion, but in this case we want to position it directly before the <ul> we are using to contain our restaurants. jQuery’s insertBefore() method takes a CSS selector indicating an element already on the page and places the current jQuery selection directly before that element in the DOM. var themap = jQuery('<div id=""themap""></div>').css({ 'width': '90%', 'height': '400px' }).insertBefore('ul.restaurants'); Finally, we need to initialise the map itself using Mapstraction. The Mapstraction constructor takes two arguments: the first is the ID of the element used to position the map; the second is the mapping provider to use (in this case google ): // Initialise the map var mapstraction = new Mapstraction('themap','google'); We want the map to appear centred on Brighton, so we’ll need to know the correct co-ordinates. We can use www.getlatlon.com to find both the co-ordinates and the initial map zoom level. // Show map centred on Brighton mapstraction.setCenterAndZoom( new LatLonPoint(50.82423734980143, -0.14007568359375), 15 // Zoom level appropriate for Brighton city centre ); We also want controls on the map to allow the user to zoom in and out and toggle between map and satellite view. mapstraction.addControls({ zoom: 'large', map_type: true }); Adding the markers It’s finally time to parse some microformats. Since we’re using hCard, the information we want is wrapped in elements with the class vcard. We can use jQuery’s CSS selector support to find them: var vcards = jQuery('.vcard'); Now that we’ve found them, we need to create a marker for each one in turn. Rather than using a regular JavaScript for loop, we can instead use jQuery’s each() method to execute a function against each of the hCards. jQuery('.vcard').each(function() { // Do something with the hCard }); Within the callback function, this is set to the current DOM element (in our case, the list item). If we want to call the magic jQuery methods on it we’ll need to wrap it in another call to jQuery: jQuery('.vcard').each(function() { var hcard = jQuery(this); }); The Google maps geocoder seems to work best if you pass it the street address and a postcode. 
We can extract these using CSS selectors: this time, we’ll use jQuery’s find() method which searches within the current jQuery selection: var streetaddress = hcard.find('.street-address').text(); var postcode = hcard.find('.postal-code').text(); The text() method extracts the text contents of the selected node, minus any HTML markup. We’ve got the address; now we need to geocode it. Mapstraction’s geocoding API requires us to first construct a MapstractionGeocoder, then use the geocode() method to pass it an address. Here’s the code outline: var geocoder = new MapstractionGeocoder(onComplete, 'google'); geocoder.geocode({'address': 'the address goes here'}); The onComplete function is executed when the geocoding operation has been completed, and will be passed an object with the resulting point on the map. We just want to create a marker for the point: var geocoder = new MapstractionGeocoder(function(result) { var marker = new Marker(result.point); mapstraction.addMarker(marker); }, 'google'); For our purposes, joining the street address and postcode with a comma to create the address should suffice: geocoder.geocode({'address': streetaddress + ', ' + postcode}); There’s one last step: when the marker is clicked, we want to display details of the restaurant. We can do this with an info bubble, which can be configured by passing in a string of HTML. We’ll construct that HTML using jQuery’s html() method on our hcard object, which extracts the HTML contained within that DOM node as a string. var marker = new Marker(result.point); marker.setInfoBubble( '<div class=""bubble"">' + hcard.html() + '</div>' ); mapstraction.addMarker(marker); We’ve wrapped the bubble in a div with class bubble to make it easier to style. Google Maps can behave strangely if you don’t provide an explicit width for your info bubbles, so we’ll add that to our CSS now: .bubble { width: 300px; } That’s everything we need: let’s combine our code together: jQuery(function() { // First create a div to host the map var themap = jQuery('<div id=""themap""></div>').css({ 'width': '90%', 'height': '400px' }).insertBefore('ul.restaurants'); // Now initialise the map var mapstraction = new Mapstraction('themap','google'); mapstraction.addControls({ zoom: 'large', map_type: true }); // Show map centred on Brighton mapstraction.setCenterAndZoom( new LatLonPoint(50.82423734980143, -0.14007568359375), 15 // Zoom level appropriate for Brighton city centre ); // Geocode each hcard and add a marker jQuery('.vcard').each(function() { var hcard = jQuery(this); var streetaddress = hcard.find('.street-address').text(); var postcode = hcard.find('.postal-code').text(); var geocoder = new MapstractionGeocoder(function(result) { var marker = new Marker(result.point); marker.setInfoBubble( '<div class=""bubble"">' + hcard.html() + '</div>' ); mapstraction.addMarker(marker); }, 'google'); geocoder.geocode({'address': streetaddress + ', ' + postcode}); }); }); Here’s the finished code. There’s one last shortcut we can add: jQuery provides the $ symbol as an alias for jQuery. We could just go through our code and replace every call to jQuery() with a call to $(), but this would cause incompatibilities if we ever attempted to use our script on a page that also includes the Prototype library. A more robust approach is to start our code with the following: jQuery(function($) { // Within this function, $ now refers to jQuery // ...
}); jQuery cleverly passes itself as the first argument to any function registered to the DOM ready event, which means we can assign a local $ variable shortcut without affecting the $ symbol in the global scope. This makes it easy to use jQuery with other libraries. Limitations of Geocoding You may have noticed a discrepancy creep in to the last example: whereas my original list included seven restaurants, the geocoding example only shows five. This is because the Google Maps geocoder incorporates a rate limit: more than five lookups in a second and it starts returning error messages instead of regular results. In addition to this problem, geocoding itself is an inexact science: while UK postcodes generally get you down to the correct street, figuring out the exact point on the street from the provided address usually isn’t too accurate (although Google do a pretty good job). Finally, there’s the performance overhead. We’re making five geocoding requests to Google for every page served, even though the restaurants themselves aren’t likely to change location any time soon. Surely there’s a better way of doing this? Microformats to the rescue (again)! The geo microformat suggests simple classes for including latitude and longitude information in a page. We can add specific points for each restaurant using the following markup: <li class=""vcard""> <h3 class=""fn org"">E-Kagen</h3> <div class=""adr""> <p class=""street-address"">22-23 Sydney Street</p> <p><span class=""locality"">Brighton</span>, <abbr class=""country-name"" title=""United Kingdom"">UK</abbr></p> <p class=""postal-code"">BN1 4EN</p> </div> <p>Telephone: <span class=""tel"">+44 (0)1273 687 068</span></p> <p class=""geo"">Lat/Lon: <span class=""latitude"">50.827917</span>, <span class=""longitude"">-0.137764</span> </p> </li> As before, I used www.getlatlon.com to find the exact locations – I find satellite view is particularly useful for locating individual buildings. Latitudes and longitudes are great for machines but not so useful for human beings. We could hide them entirely with display: none, but I prefer to merely de-emphasise them (someone might want them for their GPS unit): .vcard .geo { margin-top: 0.5em; font-size: 0.85em; color: #ccc; } It’s probably a good idea to hide them completely when they’re displayed inside an info bubble: .bubble .geo { display: none; } We can extract the co-ordinates in the same way we extracted the address. Since we’re no longer geocoding anything our code becomes a lot simpler: $('.vcard').each(function() { var hcard = $(this); var latitude = hcard.find('.geo .latitude').text(); var longitude = hcard.find('.geo .longitude').text(); var marker = new Marker(new LatLonPoint(latitude, longitude)); marker.setInfoBubble( '<div class=""bubble"">' + hcard.html() + '</div>' ); mapstraction.addMarker(marker); }); And here’s the finished geo example. Further reading We’ve only scratched the surface of what’s possible with microformats, jQuery (or just regular JavaScript) and a bit of imagination. If this example has piqued your interest, the following links should give you some more food for thought. The hCard specification Notes on parsing hCards jQuery for JavaScript programmers – my extended tutorial on jQuery. Dann Webb’s Sumo – a full JavaScript library for parsing microformats, based around some clever metaprogramming techniques. Jeremy Keith’s Adactio Austin – the first place I saw using microformats to unobtrusively plot locations on a map. 
Makes clever use of hEvent as well.",2007,Simon Willison,simonwillison,2007-12-12T00:00:00+00:00,https://24ways.org/2007/unobtrusively-mapping-microformats-with-jquery/,code 169,Incite A Riot,"Given its relatively limited scope, HTML can be remarkably expressive. With a bit of lateral thinking, we can mark up content such as tag clouds and progress meters, even when we don’t have explicit HTML elements for those patterns. Suppose we want to mark up a short conversation: Alice: I think Eve is watching. Bob: This isn’t a cryptography tutorial …we’re in the wrong example! A note in the HTML 4.01 spec says it’s okay to use a definition list: Another application of DL, for example, is for marking up dialogues, with each DT naming a speaker, and each DD containing his or her words. That would give us: <dl> <dt>Alice</dt>: <dd>I think Eve is watching.</dd> <dt>Bob</dt>: <dd>This isn't a cryptography tutorial ...we're in the wrong example!</dd> </dl> This usage of a definition list is proof that writing W3C specifications and smoking crack are not mutually exclusive activities. “I think Eve is watching” is not a definition of “Alice.” If you (ab)use a definition list in this way, Norm will hunt you down. The conversation problem was revisited in HTML5. What if dt and dd didn’t always mean “definition title” and “definition description”? A new element was forged: dialog. Now the “d” in dt and dd doesn’t stand for “definition”, it stands for “dialog” (or “dialogue” if you can spell): <dialog> <dt>Alice</dt>: <dd>I think Eve is watching.</dd> <dt>Bob</dt>: <dd>This isn't a cryptography tutorial ...we're in the wrong example!</dd> </dialog> Problem solved …except that dialog is no longer in the HTML5 spec. Hixie further expanded the meaning of dt and dd so that they could be used inside details (which makes sense—it starts with a “d”) and figure (…um). At the same time as the content models of details and figure were being updated, the completely-unrelated dialog element was dropped. Back to the drawing board, or in this case, the HTML 4.01 specification. The spec defines the cite element thusly: Contains a citation or a reference to other sources. Perfect! There’s even an example showing how this can be applied when attributing quotes to people: As <CITE>Harry S. Truman</CITE> said, <Q lang=""en-us"">The buck stops here.</Q> For longer quotes, the blockquote element might be more appropriate. In a conversation, where the order matters, I think an ordered list would make a good containing element for this pattern: <ol> <li><cite>Alice</cite>: <q>I think Eve is watching.</q></li> <li><cite>Bob</cite>: <q>This isn't a cryptography tutorial ...we're in the wrong example!</q></li> </ol> Problem solved …except that the cite element has been redefined in the HTML5 spec: The cite element represents the title of a work … A person’s name is not the title of a work … and the element must therefore not be used to mark up people’s names. HTML5 is supposed to be backwards compatible with previous versions of HTML, yet here we have a semantic pattern already defined in HTML 4.01 that is now non-conforming in HTML5. The entire justification for the change boils down to this line of reasoning: Given that: titles of works are often italicised and given that: people’s names are not often italicised and given that: most browsers italicise the contents of the cite element, therefore: the cite element should not be used to mark up people’s names.
In other words, the default browser styling is now dictating semantic meaning. The tail is wagging the dog. Not to worry, the HTML5 spec tells us how we can mark up names in conversations without using the cite element: In some cases, the b element might be appropriate for names I believe the colloquial response to this is a combination of the letters W, T and F, followed by a question mark. The non-normative note continues: In other cases, if an element is really needed, the span element can be used. This is not a joke. We are seriously being told to use semantically meaningless elements to mark up content that is semantically meaningful. We don’t have to take it. Firstly, any conformance checker—that’s the new politically correct term for “validator”—cannot possibly check every instance of the cite element to see if it’s really the title of a work and not the name of a person. So we can disobey the specification without fear of invalidating our documents. Secondly, Hixie has repeatedly stated that browser makers have a powerful voice in deciding what goes into the HTML5 spec; if a browser maker refuses to implement a feature, then that feature should come out of the spec because otherwise, the spec is fiction. Well, one of the design principles of HTML5 is the Priority of Constituencies: In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. That places us—authors—above browser makers. If we resolutely refuse to implement part of the HTML5 spec, then the spec becomes fiction. Join me in a campaign of civil disobedience against the unnecessarily restrictive, backwards-incompatible change to the cite element. Start using HTML5 but start using it sensibly. Let’s ensure that bad advice remains fictitious. Tantek has set up a page on the WHATWG wiki to document usage of the cite element for conversations. Please contribute to it.",2009,Jeremy Keith,jeremykeith,2009-12-11T00:00:00+00:00,https://24ways.org/2009/incite-a-riot/,code 171,Rock Solid HTML Emails,"At some stage in your career, it’s likely you’ll be asked by a client to design a HTML email. Before you rush to explain that all the cool kids are using social media, keep in mind that when done correctly, email is still one of the best ways to promote you and your clients online. In fact, a recent survey showed that every dollar spent on email marketing this year generated more than $40 in return. That’s more than any other marketing channel, including the cool ones. There are a whole host of ingredients that contribute to a good email marketing campaign. Permission, relevance, timeliness and engaging content are all important. Even so, the biggest challenge for designers still remains building an email that renders well across all the popular email clients. Same same, but different Before getting into the details, there are some uncomfortable facts that those new to HTML email should be aware of. Building an email is not like building for the web. While web browsers continue their onward march towards standards, many email clients have stubbornly stayed put. Some have even gone backwards. In 2007, Microsoft switched the Outlook rendering engine from Internet Explorer to Word. Yes, as in the word processor. Add to this the quirks of the major web-based email clients like Gmail and Hotmail, sprinkle in a little Lotus Notes and you’ll soon realize how different the email game is. While it’s not without its challenges, rest assured it can be done. 
In my experience the key is to focus on three things. First, you should keep it simple. The more complex your email design, the more likely it is to choke on one of the popular clients with poor standards support. Second, you need to take your coding skills back a good decade. That often means nesting tables, bringing CSS inline and following the coding guidelines I’ll outline below. Finally, you need to test your designs regularly. Just because a template looks nice in Hotmail now, doesn’t mean it will next week. Setting your lowest common denominator To maintain your sanity, it’s a good idea to decide exactly which email clients you plan on supporting when building a HTML email. While general research is helpful, the email clients your subscribers are using can vary significantly from list to list. If you have the time, there are a number of tools that can tell you specifically which email clients your subscribers are using. Trust me, if the testing shows almost none of them are using a client like Lotus Notes, save yourself some frustration and ignore it altogether. Knowing which email clients you’re targeting not only makes the building process easier, it can save you lots of time in the testing phase too. For the purpose of this article, I’ll be sharing techniques that give the best results across all of the popular clients, including the notorious ones like Gmail, Lotus Notes 6 and Outlook 2007. Just remember that pixel perfection in all email clients is a pipe dream. Let’s get started. Use tables for layout Because clients like Gmail and Outlook 2007 have poor support for float, margin and padding, you’ll need to use tables as the framework of your email. While nested tables are widely supported, consistent treatment of width, margin and padding within table cells is not. For the best results, keep the following in mind when coding your table structure. Set the width in each cell, not the table When you combine table widths, td widths, td padding and CSS padding into an email, the final result is different in almost every email client. The most reliable way to set the width of your table is to set a width for each cell, not for the table itself. <table cellspacing=""0"" cellpadding=""10"" border=""0""> <tr> <td width=""80""></td> <td width=""280""></td> </tr> </table> Never assume that if you don’t specify a cell width the email client will figure it out. It won’t. Also avoid using percentage based widths. Clients like Outlook 2007 don’t respect them, especially for nested tables. Stick to pixels. If you want to add padding to each cell, use either the cellpadding attribute of the table or CSS padding for each cell, but never combine the two. Err toward nesting Table nesting is far more reliable than setting left and right margins or padding for table cells. If you can achieve the same effect by table nesting, that will always give you the best result across the buggier email clients. Use a container table for body background colors Many email clients ignore background colors specified in your CSS or the <body> tag. To work around this, wrap your entire email with a 100% width table and give that a background color. <table cellspacing=""0"" cellpadding=""0"" border=""0"" width=""100%""> <tr> <td bgcolor=""#000000""> Your email code goes here. </td> </tr> </table> You can use the same approach for background images too. Just remember that some email clients don’t support them, so always provide a fallback color.
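For instance, a sketch of what that might look like (the image URL is a placeholder, and the bgcolor shows through wherever the image is blocked or unsupported): <table cellspacing=""0"" cellpadding=""0"" border=""0"" width=""100%""> <tr> <td bgcolor=""#003366"" background=""http://example.com/images/email-bg.jpg""> Your email code goes here. </td> </tr> </table> The old-school background attribute is used here because it tends to survive email clients better than the CSS background-image property, but as noted above, don’t rely on the image being shown.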
Avoid unnecessary whitespace in table cells Where possible, avoid whitespace between your <td> tags. Some email clients (ahem, Yahoo! and Hotmail) can add additional padding above or below the cell contents in some scenarios, breaking your design for no apparent reason. CSS and general font formatting While some email designers do their best to avoid CSS altogether and rely on the dreaded <font> tag, the truth is many CSS properties are well supported by most email clients. See this comprehensive list of CSS support across the major clients for a good idea of the safe properties and those that should be avoided. Always move your CSS inline Gmail is the culprit for this one. By stripping the CSS from the <head> and <body> of any email, we’re left with no choice but to move all CSS inline. The good news is this is something you can almost completely automate. Free services like Premailer will move all CSS inline with the click of a button. I recommend leaving this step to the end of your build process so you can utilize all the benefits of CSS. Avoid shorthand for fonts and hex notation A number of email clients reject CSS shorthand for the font property. For example, never set your font styles like this. p { font:bold 1em/1.2em georgia,times,serif; } Instead, declare the properties individually like this. p { font-weight: bold; font-size: 1em; line-height: 1.2em; font-family: georgia,times,serif; } While we’re on the topic of fonts, I recently tested every conceivable variation of @font-face across the major email clients. The results were dismal, so unfortunately it’s web-safe fonts in email for the foreseeable future. When declaring the color property in your CSS, some email clients don’t support shorthand hexadecimal colors like color:#f60; instead of color:#ff6600;. Stick to the longhand approach for the best results. Paragraphs Just like table cell spacing, paragraph spacing can be tricky to get a consistent result across the board. I’ve seen many designers revert to using double <br /> or DIVs with inline CSS margins to work around these shortfalls, but recent testing showed that paragraph support is now reliable enough to use in most cases (there was a time when Yahoo! didn’t support the paragraph tag at all). The best approach is to set the margin inline via CSS for every paragraph in your email, like so: p { margin: 0 0 1.6em 0; } Again, do this via CSS in the head when building your email, then use Premailer to bring it inline for each paragraph later. If part of your design is height-sensitive and calls for pixel perfection, I recommend avoiding paragraphs altogether and setting the text formatting inline in the table cell. You might need to use table nesting or cellpadding / CSS to get the desired result. Here’s an example: <td width=""200"" style=""font-weight:bold; font-size:1em; line-height:1.2em; font-family:georgia,'times',serif;"">your height sensitive text</td> Links Some email clients will overwrite your link colors with their defaults, and you can avoid this by taking two steps. First, set a default color for each link inline like so: <a href=""http://somesite.com/"" style=""color:#ff00ff"">this is a link</a> Next, add a redundant span inside the a tag. <a href=""http://somesite.com/"" style=""color:#ff00ff""><span style=""color:#ff00ff"">this is a link</span></a> To some this may be overkill, but if link color is important to your design then a superfluous span is the best way to achieve consistency. 
Images in HTML emails The most important thing to remember about images in email is that they won’t be visible by default for many subscribers. If you start your design with that assumption, it forces you to keep things simple and ensure no important content is suppressed by image blocking. With this in mind, here are the essentials to remember when using images in HTML email: Avoid spacer images While the combination of spacer images and nested tables was popular on the web ten years ago, image blocking in many email clients has ruled it out as a reliable technique today. Most clients replace images with an empty placeholder in the same dimensions, while others strip the image altogether. Given that image blocking is on by default in most email clients, this can lead to a poor first impression for many of your subscribers. Stick to fixed cell widths to keep your formatting in place with or without images. Always include the dimensions of your image If you forget to set the dimensions for each image, a number of clients will invent their own sizes when images are blocked and break your layout. Also, ensure that any images are correctly sized before adding them to your email. Some email clients will ignore the dimensions specified in code and rely on the true dimensions of your image. Avoid PNGs Lotus Notes 6 and 7 don’t support 8-bit or 24-bit PNG images, so stick with the GIF or JPG formats for all images, even if it means some additional file size. Provide fallback colors for background images Outlook 2007 has no support for background images (aside from this hack to get full page background images working). If you want to use a background image in your design, always provide a background color the email client can fall back on. This solves both the image blocking and Outlook 2007 problems simultaneously. Don’t forget alt text Lack of standards support means email clients have long destroyed the chances of a semantic and accessible HTML email. Even so, providing alt text is important from an image blocking perspective. Even with images suppressed by default, many email clients will display the provided alt text instead. Just remember that some email clients, like Outlook 2007, Hotmail and Apple Mail, don’t support alt text at all when images are blocked. Use the display hack for Hotmail For some inexplicable reason, Windows Live Hotmail adds a few pixels of additional padding below images. A workaround is to set the display property like so. img {display:block;} This removes the padding in Hotmail and still gives you the predictable result in other email clients. Don’t use floats Both Outlook 2007 and earlier versions of Notes offer no support for the float property. Instead, use the align attribute of the img tag to float images in your email. <img src=""image.jpg"" align=""right""> If you’re seeing strange image behavior in Yahoo! Mail, adding align=""top"" to your images can often solve this problem. Video in email With no support for JavaScript or the object tag, video in email (if you can call it that) has long been limited to animated gifs. However, some recent research I did into the HTML5 video tag in email showed some promising results. It turns out HTML5 video does work in many email clients right now, including Apple Mail, Entourage 2008, MobileMe and the iPhone. The real benefit of this approach is that if the video isn’t supported, you can provide reliable fallback content such as an animated GIF or a clickable image linking to the video in the browser.
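As a rough sketch of how that fallback can be structured (the file names and URLs here are mine, so treat this as a pattern rather than production code):

<video src=""http://somesite.com/video.mp4"" width=""320"" height=""240"">
<a href=""http://somesite.com/video"">
<img src=""video-still.gif"" width=""320"" height=""240"" alt=""Watch the video in your browser"">
</a>
</video>

Clients that understand the video element play the file; everything else ignores the tag and renders the clickable image inside it.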
Of course, the question of whether you should add video to email is another issue altogether. If you lean toward the “yes” side check out the technique with code samples. What about mobile email? The mobile email landscape was a huge mess until recently. With the advent of the iPhone, Android and big improvements from Palm and RIM, it’s becoming less important to think of mobile as a different email platform altogether. That said, there are a few key pointers to keep in mind when coding your emails to get a decent result for your more mobile subscribers. Keep the width less than 600 pixels Because of email client preview panes, this rule was important long before mobile email clients came of age. In truth, the iPhone and Pre have a viewport of 320 pixels, the Droid 480 pixels and the Blackberry models hover around 360 pixels. Sticking to a maximum of 600 pixels wide ensures your design should still be readable when scaled down for each device. This width also gives good results in desktop and web-based preview panes. Be aware of automatic text resizing In what is almost always a good feature, email clients using webkit (such as the iPhone, Pre and Android) can automatically adjust font sizes to increase readability. If testing shows this feature is doing more harm than good to your design, you can always disable it with the following CSS rule: -webkit-text-size-adjust: none; Don’t forget to test While standards support in email clients hasn’t made much progress in the last few years, there has been continual change (for better or worse) in some email clients. Web-based providers like Yahoo!, Hotmail and Gmail are notorious for this. On countless occasions I’ve seen a proven design suddenly stop working without explanation. For this reason alone it’s important to retest your email designs on a regular basis. I find a quick test every month or so does the trick, especially in the web-based clients. The good news is that after designing and testing a few HTML email campaigns, you will find that order will emerge from the chaos. Many of these pitfalls will become quite predictable and your inbox-friendly designs will take shape with them in mind. Looking ahead Designing HTML email can be a tough pill for new designers and standardistas to swallow, especially given the fickle and retrospective nature of email clients today. With HTML5 just around the corner we are entering a new, uncertain phase. Will email client developers take the opportunity to repent on past mistakes and bring email clients into the present? The aim of groups such as the Email Standards Project is to make much of the above advice as redundant as the long-forgotten <blink> and <marquee> tags, however, only time will tell if this is to become a reality. Although not the most compliant (or fashionable) medium, the results speak for themselves – email is, and will continue to be one of the most successful and targeted marketing channels available to you. As a designer with HTML email design skills in your arsenal, you have the opportunity to not only broaden your service offering, but gain a unique appreciation of how vital standards are. Next steps Ready to get started? There are a number of HTML email design galleries to provide ideas and inspiration for your own designs. 
http://www.campaignmonitor.com/gallery/ http://htmlemailgallery.com/ http://inboxaward.com/ Enjoy!",2009,David Greiner,davidgreiner,2009-12-13T00:00:00+00:00,https://24ways.org/2009/rock-solid-html-emails/,code 175,Front-End Code Reusability with CSS and JavaScript,"Most web standards-based developers are more than familiar with creating their sites with semantic HTML with lots and lots of CSS. With each new page in a design, the CSS tends to grow and grow and more elements and styles are added. But CSS can be used to better effect. The idea of object-oriented CSS isn’t new. Nicole Sullivan has written a presentation on the subject and outlines two main concepts: separate structure and visual design; and separate container and content. Jeff Croft talks about Applying OOP Concepts to CSS: I can make a class of .box that defines some basic layout structure, and another class of .rounded that provides rounded corners, and classes of .wide and .narrow that define some widths, and then easily create boxes of varying widths and styles by assigning multiple classes to an element, without having to duplicate code in my CSS. This concept helps reduce CSS file size, allows for great flexibility, rapid building of similar content areas and means greater consistency throughout the entire design. You can also take this concept one step further and apply it to site behaviour with JavaScript. Build a versatile slideshow I will show you how to build multiple slideshows using jQuery, allowing varying levels of functionality which you may find on one site design. The code will be flexible enough to allow you to add previous/next links, image pagination and the ability to change the animation type. More importantly, it will allow you to apply any combination of these features. Image galleries are simply a list of images, so the obvious choice of marking the content up is to use a <ul>. Many designs, however, do not cater to non-JavaScript versions of the website, and thus don’t take in to account large multiple images. You could also simply hide all the other images in the list, apart from the first image. This method can waste bandwidth because the other images might be downloaded when they are never going to be seen. Taking this second concept — only showing one image — the only code you need to start your slideshow is an <img> tag. The other images can be loaded dynamically via either a per-page JavaScript array or via AJAX. The slideshow concept is built upon the very versatile Cycle jQuery Plugin and is structured in to another reusable jQuery plugin. Below is the HTML and JavaScript snippet needed to run every different type of slideshow I have mentioned above. <img src=""path/to/image.jpg"" alt=""About the image"" title="""" height=""250"" width=""400"" class=""slideshow""> <script type=""text/javascript""> jQuery().ready(function($) { $('img.slideshow').slideShow({ images: ['1.jpg', '2.jpg', '3.jpg'] }); }); </script> Slideshow plugin If you’re not familiar with jQuery or how to write and author your own plugin there are plenty of articles to help you out. jQuery has a chainable interface and this is something your plugin must implement. This is easy to achieve, so your plugin simply returns the collection it is using: return this.each( function () {} }; Local Variables To keep the JavaScript clean and avoid any conflicts, you must set up any variables which are local to the plugin and should be used on each collection item. 
Defining all your variables at the top under one statement makes adding more and finding which variables are used easier. For other tips, conventions and improvements check out JSLint, the “JavaScript Code Quality Tool”. var $$, $div, $images, $arrows, $pager, id, selector, path, o, options, height, width, list = [], li = 0, parts = [], pi = 0, arrows = ['Previous', 'Next']; Cache jQuery Objects It is good practice to cache any calls made to jQuery. This reduces wasted DOM calls, can improve the speed of your JavaScript code and makes code more reusable. The following code snippet caches the current selected DOM element as a jQuery object using the variable name $$. Secondly, the plugin makes its settings available to the Metadata plugin‡ which is best practice within jQuery plugins. For each slideshow the plugin generates a <div> with a class of slideshow and a unique id. This is used to wrap the slideshow images, pagination and controls. The base path which is used for all the images in the slideshow is calculated based on the existing image which appears on the page. For example, if the path to the image on the page was /img/flowers/1.jpg the plugin would use the path /img/flowers/ to load the other images. $$ = $(this); o = $.metadata ? $.extend({}, settings, $$.metadata()) : settings; id = 'slideshow-' + (i++ + 1); $div = $('<div />').addClass('slideshow').attr('id', id); selector = '#' + id + ' '; path = $$.attr('src').replace(/[0-9]\.jpg/g, ''); options = {}; height = $$.height(); width = $$.width(); Note: the plugin uses conventions such as folder structure and numeric filenames. These conventions help with the reusable aspect of plugins and best practices. Build the Images The cycle plugin uses a list of images to create the slideshow. Because we chose to start with one image we must now build the list programmatically. This is a case of looping through the images which were added via the plugin options, building the appropriate HTML and appending the resulting <ul> to the DOM. $.each(o.images, function () { list[li++] = '<li>'; list[li++] = '<img src=""' + path + this + '"" height=""' + height + '"" width=""' + width + '"">'; list[li++] = '</li>'; }); $images = $('<ul />').addClass('cycle-images'); $images.append(list.join('')).appendTo($div); Although jQuery provides the append method it is much faster to create one really long string and append it to the DOM at the end. Update the Options Here are some of the options we’re making available by simply adding classes to the <img>. You can change the slideshow effect from the default fade to the sliding effect. By adding the class of stopped the slideshow will not auto-play and must be controlled via pagination or previous and next links. // different effect if ($$.is('.slide')) { options.fx = 'scrollHorz'; } // don't move by default if ($$.is('.stopped')) { options.timeout = 0; } If you are using the same set of images throughout a website you may wish to start on a different image on each page or section. This can be easily achieved by simply adding the appropriate starting class to the <img>. // based on the class name on the image if ($$.is('[class*=start-]')) { options.startingSlide = parseInt($$.attr('class').replace(/.*start-([0-9]+).*/g, ""$1""), 10) - 1; } For example: <img src=""/img/slideshow/3.jpg"" alt=""About the image"" title="""" height=""250"" width=""400"" class=""slideshow start-3""> By default, and without JavaScript, the third image in this slideshow is shown. 
When the JavaScript is applied to the page the slideshow must know to start from the correct place; this is why the start class is required. You could capture the default image name and parse it to get the position, but only the default image needs to be numeric to work with this plugin (and could easily be changed in future). Therefore, this extra, specifically defined option makes the plugin more tolerant. Previous/Next Links A common feature of slideshows is previous and next links enabling the user to manually progress the images. The Cycle plugin supports this functionality, but you must generate the markup yourself. Most people add these directly in the HTML but normally only support their behaviour when JavaScript is enabled. This goes against progressive enhancement. In keeping with this best practice, the previous/next links should be generated with JavaScript. The following snippet checks whether the slideshow requires the previous/next links, via the arrows class. It restricts the Cycle plugin to the specific slideshow using the selector we created at the top of the plugin. This means multiple slideshows can run on one page without conflicting with each other. The code creates a <ul> using the arrows array we defined at the top of the plugin. It also adds a class to the slideshow container, meaning you can style different combinations of options in your CSS. // create the arrows if ($$.is('.arrows') && list.length > 1) { options.next = selector + '.next'; options.prev = selector + '.previous'; $arrows = $('<ul />').addClass('cycle-arrows'); $.each(arrows, function (i, val) { parts[pi++] = '<li class=""' + val.toLowerCase() + '"">'; parts[pi++] = '<a href=""#' + val.toLowerCase() + '"">'; parts[pi++] = '<span>' + val + '</span>'; parts[pi++] = '</a>'; parts[pi++] = '</li>'; }); $arrows.append(parts.join('')).appendTo($div); $div.addClass('has-cycle-arrows'); } The arrow array could be placed inside the plugin settings to allow for localisation. Pagination The Cycle plugin creates its own HTML for the pagination of the slideshow. All our plugin needs to do is create the list and selector to use. This snippet creates the pagination container and appends it to our specific slideshow container. It sets the Cycle plugin pager option, restricting it to the specific slideshow using the selector we created at the top of the plugin. Like the previous/next links, a class is added to the slideshow container allowing you to style the slideshow itself differently. // create the clickable pagination if ($$.is('.pagination') && list.length > 1) { options.pager = selector + '.cycle-pagination'; $pager = $('<ul />').addClass('cycle-pagination'); $pager.appendTo($div); $div.addClass('has-cycle-pagination'); } Note: the Cycle plugin creates a <ul> with anchors listed directly inside without the surrounding <li>. Unfortunately this is invalid markup but the code still works. Demos Well, that describes all the ins-and-outs of the plugin, but demos make it easier to understand! Viewing the source on the demo page shows some of the combinations you can create with a simple <img>, a few classes and some thought-out JavaScript. View the demos → Decide on defaults The slideshow plugin uses the exact same settings as the Cycle plugin, but some are explicitly set within the slideshow plugin when using the classes you have set. When deciding on what functionality is going to be controlled via this class method, be careful to choose your defaults wisely.
If all slideshows should auto-play, don’t make this an option — make the option to stop the auto-play. Similarly, if every slideshow should have previous/next functionality, make this the default and expose the ability to remove it with a class such as “no-pagination”. In the examples presented in this article I have used a class on each <img>. You can easily change this to anything you want and simply apply the plugin based on the jQuery selector required. Grab your images If you are using AJAX to load in your images, you can speed up development by deciding on and keeping to a folder structure and naming convention. There are two methods: basing the image path on the current URL; or basing it on the src of the image. The first allows a different slideshow on each page, but in many instances a site will have a couple of sets of images and therefore the second method is probably preferred. Metadata ‡ A method already exists which allows you to directly modify settings in certain plugins, and which also uses the classes from your HTML. This is a jQuery plugin called Metadata. This method allows for finer control over the plugin settings themselves. Some people, however, may dislike the syntax and prefer using normal classes, like those above, which, when sprinkled with a bit more JavaScript, allow you to control what you need to control. The takeaway Hopefully you have understood not only what goes into a basic jQuery plugin but also learnt a new and powerful idea which you can apply to other areas of your website. The idea can also be applied to other common interfaces such as lightboxes or mapping services such as Google Maps — for example, creating markers based on a list of places, each with different pin icons based on the anchor class.",2009,Trevor Morris,trevormorris,2009-12-06T00:00:00+00:00,https://24ways.org/2009/front-end-code-reusability-with-css-and-javascript/,code 177,"HTML5: Tool of Satan, or Yule of Santa?","It would lead to unseasonal arguments to discuss the title of this piece here, and the arguments are as indigestible as the fourth turkey curry of the season, so we’ll restrict our article to the practical rather than the philosophical: what HTML5 can you reasonably expect to be able to use reliably cross-browser in the early months of 2010? The answer is that you can use more than you might think, due to the seasonal tinsel of feature-detection and using the sparkly pixie-dust of IE-only VML (but used in a way that won’t damage your Elf). Canvas canvas is a 2D drawing API that defines a blank area of the screen of arbitrary size, and allows you to draw on it using JavaScript. The pictures can be animated, such as in this canvas mashup of Wolfenstein 3D and Flickr. (The difference between canvas and SVG is that SVG uses vector graphics, so is infinitely scalable. It also keeps a DOM, whereas canvas is just pixels, so you have to do all the book-keeping yourself in JavaScript if you want to know where aliens are on screen, or do collision detection.) Previously, you needed to do this using Adobe Flash or Java applets, requiring plugins and potentially compromising keyboard accessibility. Canvas drawing is supported now in Opera, Safari, Chrome and Firefox. The reindeer in the corner is, of course, Internet Explorer, which currently has zero support for canvas (or SVG, come to that). Now, don’t pull a face like all you’ve found in your Yuletide stocking is a mouldy satsuma and a couple of nuts—that’s not the end of the story.
Canvas was originally an Apple proprietary technology, and Internet Explorer had a similar one called Vector Markup Language which was submitted to the W3C for standardisation in 1998 but which, unlike canvas, was not blessed with retrospective standardisation. What you need, then, is some way for Internet Explorer to translate canvas to VML on-the-fly, while leaving the other, more standards-compliant browsers to use the HTML5. And such a way exists—it’s a JavaScript library called excanvas. It’s downloadable from http://code.google.com/p/explorercanvas/ and it’s simple to include it via a conditional comment in the head for IE: <!--[if IE]> <script src=""excanvas.js""></script> <![endif]--> Simply include this, and your canvas will be natively supported in the modern browsers (and the library won’t even be downloaded) whereas IE will suddenly render your canvas using its own VML engine. Be sure, however, to check it carefully, as the IE JavaScript engine isn’t so fast and you’ll need to be sure that performance isn’t too degraded to use. Forms Since the beginning of the Web, developers have been coding forms, and then writing JavaScript to check whether an input is a correctly formed email address, URL, credit card number or conforms to some other pattern. The cumulative labour of the world’s developers over the last 15 years makes whizzing round in a sleigh and delivering presents seem like popping to the corner shop in comparison. With HTML5, that’s all about to change. As Yaili began to explore on Day 3, a host of new attributes to the input element provide built-in validation for email address formats (input type=email), URLs (input type=url), any pattern that can be expressed with a JavaScript-syntax regex (pattern=""[0-9][A-Z]{3}"") and the like. New attributes such as required, autofocus, input type=number min=3 max=50 remove much of the tedious JavaScript from form validation. Other, really exciting input types are available (see all input types). The datalist is reminiscent of a select box, but allows the user to enter their own text if they don’t want to choose one of the pre-defined options. input type=range is rendered as a slider, while input type=date pops up a date picker, all natively in the browser with no JavaScript required at all. Currently, support is most complete in an experimental implementation in Opera and a number of the new attributes in Webkit-based browsers. But don’t let that stop you! The clever thing about the specification of the new Web Forms is that all the new input types are attributes (rather than elements). input defaults to input type=text, so if a browser doesn’t understand a new HTML5 type, it gracefully degrades to a plain text input. So where does that leave validation in those browsers that don’t support Web Forms? The answer is that you don’t retire your pre-existing JavaScript validation just yet, but you leave it as a fallback after doing some feature detection. To detect whether (say) input type=email is supported, you make a new input type=email with JavaScript but don’t add it to the page. Then, you interrogate your new element to find out what its type attribute is. If it’s reported back as “email”, then the browser supports the new feature, so let it do its work and don’t bring in any JavaScript validation. If it’s reported back as “text”, it’s fallen back to the default, indicating that it’s not supported, so your code should branch to your old validation routines. 
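Here’s a minimal sketch of that detection for input type=email (the function name is my own invention):

function supportsInputType(type) {
  var input = document.createElement('input');
  input.setAttribute('type', type);
  return input.type === type; // unsupported types report back as 'text'
}

if (!supportsInputType('email')) {
  // branch to your existing JavaScript email validation here
}

The same test works for url, number and the other new types, since they all fall back to text in the same way.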
Alternatively, use the small (7K) Modernizr library which will do this work for you and give you JavaScript booleans like Modernizr.inputtypes.email set to true or false. So what does this buy you? Well, first and foremost, you’re future-proofing your code for that time when all browsers support these hugely useful additions to forms. Secondly, you buy a usability and accessibility win. Although it’s tempting to style the stuffing out of your form fields (which can, incidentally, lead to madness), whatever your branding people say, it’s better to leave forms as close to the browser defaults as possible. A browser’s slider and date pickers will be the same across different sites, making them much more comprehensible to users. And, by using native controls rather than faking sliders and date pickers with JavaScript, your forms are much more likely to be accessible to users of assistive technology. HTML5 DOCTYPE You can use the new DOCTYPE !doctype html now and – hey presto – you’re writing HTML5, as it’s pretty much a superset of HTML4. There are some useful advantages to doing this. The first is that the HTML5 validator (I use http://html5.validator.nu) also validates ARIA information, whereas the HTML4 validator doesn’t, as ARIA is a new spec developed after HTML4. (Actually, it’s more accurate to say that it doesn’t validate your ARIA attributes, but it doesn’t automatically report them as an error.) Another advantage is that HTML5 allows tabindex as a global attribute (that is, on any element). Although tabindex was originally designed as an accessibility bolt-on, I ordinarily advise you don’t use it; a well-structured page should provide a logical tab order through links and form fields already. However, tabindex=""-1"" is a legal value in HTML5 as it allows for the element to be programmatically focussable by JavaScript. It’s also very useful for correcting a bug in Internet Explorer when used with a keyboard; in-page links go nowhere if the destination doesn’t have a proprietary property called hasLayout set or a tabindex of -1. So, whether it is the tool of Satan or yule of Santa, HTML5 is just around the corner. Some of it you can use now, and by the end of 2010 I predict you’ll be able to use a whole lot more as new browser versions are released.",2009,Bruce Lawson,brucelawson,2009-12-05T00:00:00+00:00,https://24ways.org/2009/html5-tool-of-satan-or-yule-of-santa/,code 179,Have a Field Day with HTML5 Forms,"Forms are usually seen as that obnoxious thing we have to mark up and style. I respectfully disagree: forms (on a par with tables) are the most exciting thing we have to work with. Here we’re going to take a look at how to style a beautiful HTML5 form using some advanced CSS and the latest CSS3 techniques. I promise you will want to style your own forms after you’ve read this article. Here’s what we’ll be creating: The form. (Icons from Chalkwork Payments) Meaningful markup We’re going to style a simple payment form.
There are three main sections on this form: The person’s details The address details The credit card details We are also going to use some of HTML5’s new input types and attributes to create more meaningful fields and use less unnecessary classes and ids: email, for the email field tel, for the telephone field number, for the credit card number and security code required, for required fields placeholder, for the hints within some of the fields autofocus, to put focus on the first input field when the page loads There are a million more new input types and form attributes on HTML5, and you should definitely take a look at what’s new on the W3C website. Hopefully this will give you a good idea of how much more fun form markup can be. A good foundation Each section of the form will be contained within its own fieldset. In the case of the radio buttons for choosing the card type, we will enclose those options in another nested fieldset. We will also be using an ordered list to group each label / input pair. This will provide us with a (kind of) semantic styling hook and it will also make the form easier to read when viewing with no CSS applied: The unstyled form So here’s the markup we are going to be working with: <form id=payment> <fieldset> <legend>Your details</legend> <ol> <li> <label for=name>Name</label> <input id=name name=name type=text placeholder=""First and last name"" required autofocus> </li> <li> <label for=email>Email</label> <input id=email name=email type=email placeholder=""example@domain.com"" required> </li> <li> <label for=phone>Phone</label> <input id=phone name=phone type=tel placeholder=""Eg. +447500000000"" required> </li> </ol> </fieldset> <fieldset> <legend>Delivery address</legend> <ol> <li> <label for=address>Address</label> <textarea id=address name=address rows=5 required></textarea> </li> <li> <label for=postcode>Post code</label> <input id=postcode name=postcode type=text required> </li> <li> <label for=country>Country</label> <input id=country name=country type=text required> </li> </ol> </fieldset> <fieldset> <legend>Card details</legend> <ol> <li> <fieldset> <legend>Card type</legend> <ol> <li> <input id=visa name=cardtype type=radio> <label for=visa>VISA</label> </li> <li> <input id=amex name=cardtype type=radio> <label for=amex>AmEx</label> </li> <li> <input id=mastercard name=cardtype type=radio> <label for=mastercard>Mastercard</label> </li> </ol> </fieldset> </li> <li> <label for=cardnumber>Card number</label> <input id=cardnumber name=cardnumber type=number required> </li> <li> <label for=secure>Security code</label> <input id=secure name=secure type=number required> </li> <li> <label for=namecard>Name on card</label> <input id=namecard name=namecard type=text placeholder=""Exact name as on the card"" required> </li> </ol> </fieldset> <fieldset> <button type=submit>Buy it!</button> </fieldset> </form> Making things look nice First things first, so let’s start by adding some defaults to our form by resetting the margins and paddings of the elements and adding a default font to the page: html, body, h1, form, fieldset, legend, ol, li { margin: 0; padding: 0; } body { background: #ffffff; color: #111111; font-family: Georgia, ""Times New Roman"", Times, serif; padding: 20px; } Next we are going to style the form element that is wrapping our fields: form#payment { background: #9cbc2c; -moz-border-radius: 5px; -webkit-border-radius: 5px; border-radius: 5px; padding: 20px; width: 400px; } We will also remove the border from the fieldset and apply some 
bottom margin to it. Using the :last-of-type pseudo-class, we remove the bottom margin of the last fieldset — there is no need for it: form#payment fieldset { border: none; margin-bottom: 10px; } form#payment fieldset:last-of-type { margin-bottom: 0; } Next we’ll make the legends big and bold, and we will also apply a light-green text-shadow, to add that little extra special detail: form#payment legend { color: #384313; font-size: 16px; font-weight: bold; padding-bottom: 10px; text-shadow: 0 1px 1px #c0d576; } Our legends are looking great, but how about adding a clear indication of how many steps our form has? Instead of adding that manually to every legend, we can use automatically generated counters. To add a counter to an element, we have to use either the :before or :after pseudo-elements to add content via CSS. We will follow these steps: create a counter using the counter-reset property on the form element; call the counter with the content property (using the same name we created before); and, with the counter-increment property, indicate that for each element that matches our selector the counter will be increased by 1. form#payment > fieldset > legend:before { content: ""Step "" counter(fieldsets) "": ""; counter-increment: fieldsets; } Finally, we need to change the style of the legend that is part of the radio buttons group, to make it look like a label: form#payment fieldset fieldset legend { color: #111111; font-size: 13px; font-weight: normal; padding-bottom: 0; } Styling the lists For our list elements, we’ll just add some nice rounded corners and a semi-transparent border and background. Because we are using RGBa colors, we should provide a fallback for browsers that don’t support them (that comes before the RGBa color). For the nested lists, we will remove these properties because they would be overlapping: form#payment ol li { background: #b9cf6a; background: rgba(255,255,255,.3); border-color: #e3ebc3; border-color: rgba(255,255,255,.6); border-style: solid; border-width: 2px; -moz-border-radius: 5px; -webkit-border-radius: 5px; border-radius: 5px; line-height: 30px; list-style: none; padding: 5px 10px; margin-bottom: 2px; } form#payment ol ol li { background: none; border: none; float: left; } Form controls Now we only need to style our labels, inputs and the button element. All our labels will look the same, with the exception of the one for the radio elements. We will float them to the left and give them a width. For the credit card type labels, we will add an icon as the background, and override some of the properties that aren’t necessary. We will be using the attribute selector to specify the background image for each label — in this case, we use the for attribute of each label. To add an extra user-friendly detail, we’ll add a cursor: pointer to the radio button labels on the :hover state, so the user knows that they can simply click them to select that option. form#payment label { float: left; font-size: 13px; width: 110px; } form#payment fieldset fieldset label { background:none no-repeat left 50%; line-height: 20px; padding: 0 0 0 30px; width: auto; } form#payment label[for=visa] { background-image: url(visa.gif); } form#payment label[for=amex] { background-image: url(amex.gif); } form#payment label[for=mastercard] { background-image: url(mastercard.gif); } form#payment fieldset fieldset label:hover { cursor: pointer; } Almost there! Now onto the input elements. Here we want to match all inputs, except for the radio ones, and the textarea.
For that we will use the negation pseudo-class (:not()). With it we can target all input elements except for the ones with type of radio. We will also make sure to add some :focus styles and add the appropriate styling for the radio inputs: form#payment input:not([type=radio]), form#payment textarea { background: #ffffff; border: none; -moz-border-radius: 3px; -webkit-border-radius: 3px; -khtml-border-radius: 3px; border-radius: 3px; font: italic 13px Georgia, ""Times New Roman"", Times, serif; outline: none; padding: 5px; width: 200px; } form#payment input:not([type=submit]):focus, form#payment textarea:focus { background: #eaeaea; } form#payment input[type=radio] { float: left; margin-right: 5px; } And finally we come to our submit button. To it, we will just add some nice typography and text-shadow, align it to the center of the form and give it some background colors for its different states: form#payment button { background: #384313; border: none; -moz-border-radius: 20px; -webkit-border-radius: 20px; -khtml-border-radius: 20px; border-radius: 20px; color: #ffffff; display: block; font: 18px Georgia, ""Times New Roman"", Times, serif; letter-spacing: 1px; margin: auto; padding: 7px 25px; text-shadow: 0 1px 1px #000000; text-transform: uppercase; } form#payment button:hover { background: #1e2506; cursor: pointer; } And that’s it! See the completed form. This form will not look the same on every browser. Internet Explorer and Opera don’t support border-radius (at least not for now); the new input types are rendered as just normal inputs on some browsers; and some of the most advanced CSS, like the counter, :last-of-type or text-shadow are not supported on some browsers. But that doesn’t mean you can’t use them right now, and simplify your development process. My gift to you!",2009,Inayaili de León Persson,inayailideleon,2009-12-03T00:00:00+00:00,https://24ways.org/2009/have-a-field-day-with-html5-forms/,code 180,Going Nuts with CSS Transitions,"I’m going to show you how CSS 3 transforms and WebKit transitions can add zing to the way you present images on your site. Laying the foundations First we are going to make our images look like mini polaroids with captions. Here’s the markup: <div class=""polaroid pull-right""> <img src=""../img/seal.jpg"" alt=""""> <p class=""caption"">Found this little cutie on a walk in New Zealand!</p> </div> You’ll notice we’re using a somewhat presentational class of pull-right here. This means the logic is kept separate from the code that applies the polaroid effect. The polaroid class has no positioning, which allows it to be used generically anywhere that the effect is required. The pull classes set a float and add appropriate margins—they can be used for things like blockquotes as well. .polaroid { width: 150px; padding: 10px 10px 20px 10px; border: 1px solid #BFBFBF; background-color: white; -webkit-box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4); -moz-box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4); box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4); } The actual polaroid effect itself is simply applied using padding, a border and a background colour. We also apply a nice subtle box shadow, using a property that is supported by modern WebKit browsers and Firefox 3.5+. We include the box-shadow property last to ensure that future browsers that support the eventual CSS3 specified version natively will use that implementation over the legacy browser specific version. The box-shadow property takes four values: three lengths and a colour. 
The first is the horizontal offset of the shadow—positive values place the shadow on the right, while negative values place it to the left. The second is the vertical offset, positive meaning below. If both of these are set to 0, the shadow is positioned equally on all four sides. The last length value sets the blur radius—the larger the number, the blurrier the shadow (therefore the darker you need to make the colour to have an effect). The colour value can be given in any format recognised by CSS. Here, we’re using rgba as explained by Drew behind the first door of this year’s calendar. Rotation For browsers that understand it (currently our old favourites WebKit and FF3.5+) we can add some visual flair by rotating the image, using the transform CSS 3 property. -webkit-transform: rotate(9deg); -moz-transform: rotate(9deg); transform: rotate(9deg); Rotations can be specified in degrees, radians (rads) or grads. WebKit also supports turns; unfortunately, Firefox doesn’t just yet. For our example, we want any polaroid images on the left-hand side to be rotated in the opposite direction, using a negative degree value: .pull-left.polaroid { -webkit-transform: rotate(-9deg); -moz-transform: rotate(-9deg); transform: rotate(-9deg); } Multiple class selectors don’t work in IE6 but as luck would have it, the transform property doesn’t work in any current IE version either. The above code is a good example of progressive enrichment: browsers that don’t support box-shadow or transform will still see the image and basic polaroid effect. Animation WebKit is unique amongst browser rendering engines in that it allows animation to be specified in pure CSS. Although this may never actually make it into the CSS 3 specification, it degrades nicely and more importantly is an awful lot of fun! Let’s go nuts. In the next demo, the image is contained within a link and mousing over that link causes the polaroid to animate from being angled to being straight. Here’s our new markup: <a href=""http://www.flickr.com/photos/nataliedowne/2340993237/"" class=""polaroid""> <img src=""../img/raft.jpg"" alt=""""> White water rafting in Queenstown </a> And here are the relevant lines of CSS: a.polaroid { /* ... */ -webkit-transform: rotate(10deg); -webkit-transition: -webkit-transform 0.5s ease-in; } a.polaroid:hover, a.polaroid:focus, a.polaroid:active { /* ... */ -webkit-transform: rotate(0deg); } The -webkit-transition property is the magic wand that sets up the animation. It takes three values: the property to be animated, the duration of the animation and a ‘timing function’ (which affects the animation’s acceleration, for a smoother effect). -webkit-transition only takes effect when the specified property changes. In pure CSS, this is done using dynamic pseudo-classes. You can also change the properties using JavaScript, but that’s a story for another time. Throwing polaroids at a table Imagine there are lots of differently sized polaroid photos scattered on a table. That’s the effect we are aiming for with our next demo. As an aside: we are using absolute positioning to arrange the images inside a flexible width container (with a minimum and maximum width specified in pixels). As some are positioned from the left and some from the right, when you resize the browser they shuffle underneath each other. This is an effect used on the UX London site. This demo uses a darker colour shadow with more transparency than before. The grey shadow in the previous example worked fine, but it was against a solid background.
Since the images are now overlapping each other, the more opaque shadow looked fake. -webkit-box-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); -moz-box-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); box-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); On hover, as well as our previous trick of animating the image rotation back to straight, we are also making the shadow darker and setting the z-index to be higher than the other images so that it appears on top. And Finally… Finally, for a bit more fun, we’re going to simulate the images coming towards you and lifting off the page. We’ll achieve this by making them grow larger and by offsetting the shadow and making it longer. Screenshot 1 shows the default state, while 2 shows our previous hover effect. Screenshot 3 is the effect we are aiming for, illustrated by demo 4. a.polaroid { /* ... */ z-index: 2; -webkit-box-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); -moz-box-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); box-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); -webkit-transform: rotate(10deg); -moz-transform: rotate(10deg); transform: rotate(10deg); -webkit-transition: all 0.5s ease-in; } a.polaroid:hover, a.polaroid:focus, a.polaroid:active { z-index: 999; border-color: #6A6A6A; -webkit-box-shadow: 15px 15px 20px rgba(0, 0, 0, 0.4); -moz-box-shadow: 15px 15px 20px rgba(0, 0, 0, 0.4); box-shadow: 15px 15px 20px rgba(0, 0, 0, 0.4); -webkit-transform: rotate(0deg) scale(1.05); -moz-transform: rotate(0deg) scale(1.05); transform: rotate(0deg) scale(1.05); } You’ll notice we are now giving the transform property another transform function: scale, which increases the size by the specified factor. Other things you can do with transform include skewing and translating, or you can go mad creating your own transforms with a matrix. The box-shadow has both its offset and blur radius increased dramatically, and is darkened using the alpha channel of the rgba colour. And because we want the effects to all animate smoothly, we pass a value of all to the -webkit-transition property, ensuring that any changed property on that link will be animated. Demo 5 is the finished example, bringing everything nicely together. CSS transitions and transforms are a great example of progressive enrichment, which means improving the experience for a portion of the audience without negatively affecting other users. They are also a lot of fun to play with! Further reading -moz-transform – the Mozilla Developer Center has a comprehensive explanation of transform that also applies to -webkit-transform and transform. CSS: Animation Using CSS Transforms – this is a good, more in-depth tutorial on animations. CSS Animation – the Safari blog explains the usage of -webkit-transform. Dinky pocketbooks with transform – another use for transforms: create your own printable pocketbook. A while back, Simon wrote a little bookmarklet to spin the entire page… warning: this will spin the entire page.",2009,Natalie Downe,nataliedowne,2009-12-14T00:00:00+00:00,https://24ways.org/2009/going-nuts-with-css-transitions/,code 181,Working With RGBA Colour,"When Tim and I were discussing the redesign of this site last year, one of the clear goals was to have a graphical style without making the pages heavy with a lot of images. When we launched, a lot of people were surprised that the design wasn’t built with PNGs. Instead we’d used RGBA colour values, which is part of the CSS3 specification. What is RGBA Colour?
We’re all familiar with specifying colours in CSS by defining the mix of red, green and blue light required to achieve our tone. This is fine and dandy, but whatever values we specify have one thing in common — the colours are all solid, flat, and well, a bit boring. Flat RGB colours CSS3 introduces a couple of new ways to specify colours, and one of those is RGBA. The A stands for Alpha, which refers to the level of opacity of the colour, or to put it another way, the amount of transparency. This means that we can set not only the red, green and blue values, but also control how much of what’s behind the colour shows through. Like with layers in Photoshop. Don’t We Have Opacity Already? The ability to set the opacity on a colour differs subtly from setting the opacity on an element using the CSS opacity property. Let’s look at an example. Here we have an H1 with foreground and background colours set against a page with a patterned background. Heading with no transparency applied h1 { color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); } By setting the CSS opacity property, we can adjust the transparency of the entire element and its contents: Heading with 50% opacity on the element h1 { color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); opacity: 0.5; } RGBA colour gives us something different – the ability to control the opacity of the individual colours rather than the entire element. So we can set the opacity on just the background: 50% opacity on just the background colour h1 { color: rgb(0, 0, 0); background-color: rgba(255, 255, 255, 0.5); } Or leave the background solid and change the opacity on just the text: 50% opacity on just the foreground colour h1 { color: rgba(0, 0, 0, 0.5); background-color: rgb(255, 255, 255); } The How-To You’ll notice that above I’ve been using the rgb() syntax for specifying colours. This is a bit less common than the usual hex codes (like #FFF) but it makes sense when starting to use RGBA. As there’s no way to specify opacity with hex codes, we use rgba() like so: color: rgba(255, 255, 255, 0.5); Just like rgb(), the first three values are red, green and blue. You can specify these as 0-255 or 0%-100%. The fourth value is the opacity level from 0 (completely transparent) to 1 (completely opaque). You can use this anywhere you’d normally set a colour in CSS — so it’s good for foregrounds and backgrounds, borders, outlines and so on. All the transparency effects on this site’s current design are achieved this way. Supporting All Browsers Like a lot of the features we’ll be looking at in this year’s 24 ways, RGBA colour is supported by a lot of the newest browsers, but not the rest. Firefox, Safari, Chrome and Opera browsers all support RGBA, but Internet Explorer does not. Fortunately, due to the robust design of CSS as a language, we can specify RGBA colours for browsers that support it and an alternative for browsers that do not. Falling back to solid colour The simplest technique is to allow the browser to fall back to using a solid colour when opacity isn’t available. The CSS parsing rules specify that any unrecognised value should be ignored. We can make use of this because a browser without RGBA support will treat a colour value specified with rgba() as unrecognised and discard it. So if we specify the colour first using rgb() for all browsers, we can then overwrite it with an rgba() colour for browsers that understand RGBA.
h1 { color: rgb(127, 127, 127); color: rgba(0, 0, 0, 0.5); } Falling back to a PNG In cases where you’re using transparency on a background-color (although not on borders or text) it’s possible to fall back to using a PNG with alpha channel to get the same effect. This is less flexible than using CSS as you’ll need to create a new PNG for each level of transparency required, but it can be a useful solution. Using the same principal as before, we can specify the background in a style that all browsers will understand, and then overwrite it in a way that browsers without RGBA support will ignore. h1 { background: transparent url(black50.png); background: rgba(0, 0, 0, 0.5) none; } It’s important to note that this works because we’re using the background shorthand property, enabling us to set both the background colour and background image in a single declaration. It’s this that enables us to rely on the browser ignoring the second declaration when it encounters the unknown rgba() value. Next Steps The really great thing about RGBA colour is that it gives us the ability to create far more graphically rich designs without the need to use images. Not only does that make for faster and lighter pages, but sites which are easier and quicker to build and maintain. CSS values can also be changed in response to user interaction or even manipulated with JavaScript in a way that’s just not so easy using images. Opacity can be changed on :hover or manipulated with JavaScript div { color: rgba(255, 255, 255, 0.8); background-color: rgba(142, 213, 87, 0.3); } div:hover { color: rgba(255, 255, 255, 1); background-color: rgba(142, 213, 87, 0.6); } Clever use of transparency in border colours can help ease the transition between overlay items and the page behind. Borders can receive the RGBA treatment, too div { color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); border: 10px solid rgba(255, 255, 255, 0.3); } In Conclusion That’s a brief insight into RGBA colour, what it’s good for and how it can be used whilst providing support for older browsers. With the current lack of support in Internet Explorer, it’s probably not a technique that commercial designs will want to heavily rely on right away – simply because of the overhead of needing to think about fallback all the time. It is, however, a useful tool to have for those smaller, less critical touches that can really help to finesse a design. As browser support becomes more mainstream, you’ll already be familiar and practised with RGBA and ready to go.",2009,Drew McLellan,drewmclellan,2009-12-01T00:00:00+00:00,https://24ways.org/2009/working-with-rgba-colour/,code 182,Breaking Out The Edges of The Browser,"HTML5 contains more than just the new entities for a more meaningful document, it also contains an arsenal of JavaScript APIs. So many in fact, that some APIs have outgrown the HTML5 spec’s backyard and have been sent away to grow up all on their own and been given the prestigious honour of being specs in their own right. So when I refer to (bendy finger quote) “HTML5”, I mean the HTML5 specification and a handful of other specifications that help us authors build web applications. Examples of those specs I would include in the umbrella term would be: geolocation, web storage, web databases, web sockets and web workers, to name a few. For all you guys and gals, on this special 2009 series of 24 ways, I’m just going to focus on data storage and offline applications: boldly taking your browser where no browser has gone before! 
Web Storage The Web Storage API is basically cookies on steroids, an unhealthy dosage of steroids. Cookies are always a pain to work with. First of all you have the problem of setting, changing and deleting them. Typically solved by Googling and blindly relying on PPK’s solution. If that wasn’t enough, there’s the 4Kb limit that some of you have hit when you really don’t want to. The Web Storage API gets around all of the hoops you have to jump through with cookies. Storage supports around 5Mb of data per domain (the spec’s recommendation, but it’s open to the browsers to implement anything they like) and splits into two types of storage objects: sessionStorage – available to all pages on that domain while the window remains open; localStorage – available on the domain until manually removed Support Ignoring beta browsers for our support list, below is a list of the major browsers and their support for the Web Storage API: Latest: Internet Explorer, Firefox, Safari (desktop & mobile/iPhone); Partial: Google Chrome (only supports localStorage); Not supported: Opera (as of 10.10) Usage Both sessionStorage and localStorage support the same interface for accessing their contents, so for these examples I’ll use localStorage. The storage interface includes the following methods: setItem(key, value) getItem(key) key(index) removeItem(key) clear() In the simple example below, we’ll use setItem and getItem to store and retrieve data: localStorage.setItem('name', 'Remy'); alert( localStorage.getItem('name') ); Using alert boxes can be a pretty lame way of debugging. Conveniently, Safari (and Chrome) include a database tab in their debugging tools (cmd+alt+i), so you can get a visual handle on the state of your data: Viewing localStorage As far as I know only Safari has this view on stored data natively in the browser. There may be a Firefox plugin (but I’ve not found it yet!) and IE… well that’s just IE. Even though we’ve used setItem and getItem, there are also a few other ways you can set and access the data. In the example below, we’re accessing the stored value directly using an expando and equally, you can also set values this way: localStorage.name = ""Remy""; alert( localStorage.name ); // shows ""Remy"" The Web Storage API also has a key method, which is zero-based, and returns the key under which data has been stored. This should also be in the same order that you set the keys, for example: alert( localStorage.getItem(localStorage.key(0)) ); // shows ""Remy"" I mention the key() method because it’s not an unlikely name for a stored value. This can cause serious problems though. When selecting the names for your keys, you need to be sure you don’t take one of the method names that are already on the storage object, like key, clear, etc. As there are no warnings when you try to overwrite the methods, it means when you come to access the key() method, the call breaks as key is a string value and not a function. You can try this yourself by creating a new stored value using localStorage.key = ""foo"" and you’ll see that the Safari debugger breaks because it relies on the key() method to enumerate each of the stored values. Usage Notes Currently all browsers only support storing strings. This also means if you store a number, it will get converted to a string: localStorage.setItem('count', 31); alert(typeof localStorage.getItem('count')); // shows ""string"" This also means you can’t store more complicated objects natively with the storage objects.
To get around this, you can use Douglas Crockford’s JSON parser, json2.js, to convert the object to a stringified JSON object (though Firefox 3.5 has JSON parsing support baked into the browser – yay!): var person = { name: 'Remy', height: 'short', location: 'Brighton, UK' }; localStorage.setItem('person', JSON.stringify(person)); alert( JSON.parse(localStorage.getItem('person')).name ); // shows ""Remy"" Alternatives There are a few libraries out there that detect the Web Storage API and, if it’s not available, fall back to different technologies (for instance, using a Flash object to store data). One comprehensive version of this is Dojo’s storage library. I’m personally more of a fan of libraries that plug missing functionality in under the same namespace, just as Crockford’s JSON parser does (above). For those interested in what that might look like, I’ve mocked together a simple implementation of sessionStorage. Note that it’s incomplete (because it’s missing the key method), and it could be refactored to not use JSON stringify (but you would need to ensure that the values were properly and safely encoded): // requires json2.js for all browsers other than Firefox 3.5 if (!window.sessionStorage && JSON) { window.sessionStorage = (function () { // window.top.name ensures top level, and supports around 2Mb var data = window.top.name ? JSON.parse(window.top.name) : {}; return { setItem: function (key, value) { data[key] = value+""""; // force to string window.top.name = JSON.stringify(data); }, removeItem: function (key) { delete data[key]; window.top.name = JSON.stringify(data); }, getItem: function (key) { return data[key] || null; }, clear: function () { data = {}; window.top.name = ''; } }; })(); } Now that we’ve cracked the cookie jar with our oversized Web Storage API, let’s have a look at how we take our applications offline entirely. Offline Applications Offline application support is (still) part of the HTML5 specification. It allows developers to build a web app and have it still function without an internet connection. The app is accessed via the same URL as it would be if the user were online, but the contents (or what the developer specifies) is served up to the browser from a local cache. From there it’s just an everyday stroll through open web technologies, i.e. you still have access to the Web Storage API and anything else you can do without a web connection. For this section, I’ll refer you to a prototype demo I wrote recently of a contrived Rubik’s cube (contrived because it doesn’t work and it only works in Safari because I’m using 3D transforms). Offline Rubik’s cube Support Support for offline applications is still fairly limited, but the possibilities of offline applications are pretty exciting, particularly as we’re seeing mobile support and support in applications such as Fluid (and I would expect other render-engine-wrapping apps). Support currently is as follows: Latest: Safari (desktop & mobile/iPhone); Sort of: Firefox‡; Not supported: Internet Explorer, Opera, Google Chrome ‡ Firefox 3.5 was released to include offline support, but in fact has bugs where it doesn’t work properly (certainly on the Mac); Minefield (the Firefox beta) has resolved the bug. Usage The status of the application’s cache can be tested from the window.applicationCache object. However, we’ll first look at how to enable your app for offline access.
You need to create a manifest file, which will tell the browser what to cache, and then point your web page at that manifest: <!DOCTYPE html> <html manifest=""remy.manifest""> <!-- continues ... --> For the manifest to be properly read by the browser, your server needs to serve .manifest files as text/cache-manifest, by adding the following to your mime.types: text/cache-manifest manifest Next we need to populate our manifest file so the browser can read it: CACHE MANIFEST /demo/rubiks/index.html /demo/rubiks/style.css /demo/rubiks/jquery.min.js /demo/rubiks/rubiks.js # version 15 The first line of the manifest must read CACHE MANIFEST. The subsequent lines tell the browser what to cache. The HTML5 spec recommends that you include the calling web page (in my case index.html), but it’s not required: even if I didn’t include index.html, the browser would still cache it as part of the offline resources. These resources are implicitly under the CACHE namespace (which you can specify any number of times if you want to). In addition, there are two further namespaces: NETWORK and FALLBACK. NETWORK is a whitelist namespace that tells the browser not to cache those resources and always to try to request them through the network. FALLBACK tells the browser that whilst in offline mode, if a resource isn’t available, it should return the fallback resource instead. Finally, in my example I’ve included a comment with a version number. This is because once you include a manifest, the only way you can tell the browser to reload the resources is if the manifest contents change. So I’ve included a version number in the manifest which I can change, forcing the browser to reload all of the assets. How it works If you’re building an app that makes use of the offline cache, I would strongly recommend that you add the manifest last. The browser implementations are very new, so things can sometimes get a bit tricky to debug, since once the resources are cached they really stick in the browser. These are the steps that happen during a request for an app with a manifest: Browser: sends request for your app.html Server: serves all associated resources with app.html – as normal Browser: notices that app.html has a manifest, and re-requests the assets in the manifest Server: serves the requested manifest assets (again) Browser: window.applicationCache has a status of UPDATEREADY Browser: reloads Browser: only requests the manifest file (which doesn’t show on the net requests panel) Server: responds with 304 Not Modified on the manifest file Browser: serves all the cached resources locally What might also add confusion to this process is the way browsers (currently) work: if there is a cache already in place, they will use it first, ahead of any updated resources. So if your manifest has changed, the browser will have already loaded the offline cache, and the user will only see the updated resources on the next reload. This may seem a bit convoluted, but you can trigger some of this manually through the applicationCache methods, which can ease some of the pain. If you bind to the online event you can manually try to update the offline cache. If the cache has then updated, swap the updated resources into the cache, and the next time the app loads it will be up to date. You could also prompt your user to reload the app (which is just a refresh) if there’s an update available.
For example (tellUserToRefresh is yours to define): applicationCache.addEventListener('updateready', function () { applicationCache.swapCache(); tellUserToRefresh(); }, false); window.addEventListener('online', function () { applicationCache.update(); }, false); Breaking out of the Browser So that’s two different technologies that you can use to break out of the traditional browser/web page model and get your apps working in a more application-like way. There’s loads more in the HTML5 and non-HTML5 APIs to play with, so take your Christmas break to check them out!",2009,Remy Sharp,remysharp,2009-12-02T00:00:00+00:00,https://24ways.org/2009/breaking-out-the-edges-of-the-browser/,code 184,Spruce It Up,"The landscape of web typography is changing quickly these days. We’ve gone from the wild west days of sIFR to Cufón to finally seeing font embedding gain widespread adoption by browser developers (and soon web designers) with @font-face. For those who’ve felt limited by the typographic possibilities before, this has been a good year. As Mark Boulton has so eloquently elucidated, @font-face embedding doesn’t come without its drawbacks. Font files can be quite large and FOUT—that nasty flash of unstyled text—can be a distraction for users. Data URIs We can battle FOUT by using Data URIs. A Data URI allows the font to be encoded right into the CSS file. When the font comes with the CSS, the flash of unstyled text is mitigated. No extra HTTP requests are required. Don’t be a grinch, though. Sending hundreds of kilobytes down the pipe still isn’t great. Sometimes, all we want to do is spruce up our site with a little typographic sugar. Be Selective Dan Cederholm’s SimpleBits is an attractive site. Take a look at the ampersand within the header of his site. It’s the lovely (and free) Goudy Bookletter 1911, available from The League of Movable Type. The OpenType file is a respectable 28KB. Nothing too crazy, but hold on here. Mr. Cederholm is only using the ampersand! Ouch. That’s a lot of bandwidth just for one character. Can we optimize a font like we can an image? Yes. Image optimization essentially works by removing unnecessary data—such as colour profiles or hidden comments—and by using compression algorithms. How do you remove unnecessary information from a font? Subsetting. If you’re the adventurous type, grab a copy of FontForge, which is an open source font editing tool. You can open the font, view and edit any of the glyphs and then re-generate the font. The interface is a little clunky but you’ll be able to select any characters you don’t want and then cut the glyphs. Re-generate your font and you’ve now got a smaller file. There are certainly more optimizations that can also be made, such as removing hinting and kerning information. Keep in mind that removing this information may affect how well the type renders. At this time of year, though, I’m sure you’re quite busy. Save yourself some time and head on over to the Font Squirrel Font Generator. The Font Generator is extremely handy and allows for a number of optimizations and cross-platform options to be generated instantly. Select the font from your local system—make sure that you are only using properly licensed fonts! In this particular case, we only want the ampersand. Click on Subset Fonts, which will open up a new menu. Unselect any preselected sets and enter the ampersand into the Single Characters text box. Generate your font and what are you left with? 3KB.
The Font Generator even generates a base64 encoded data URI stylesheet to be imported easily into your project. Check out the Demo page. (This demo won’t work in Internet Explorer as we’re only demonstrating the Data URI font embedding and not using the EOT file format that IE requires.) No Unnecessary Additives If you peeked under the hood of that demo, did you notice something interesting? There’s no <span> around the ampersand. The great thing about this is that we can take advantage of the font stack’s natural ability to switch to a fallback font when a character isn’t available. Just like that, we’ve managed to spruce up our page with a little typographic sugar without having to put on too much weight.",2009,Jonathan Snook,jonathansnook,2009-12-19T00:00:00+00:00,https://24ways.org/2009/spruce-it-up/,code 186,The Web Is Your CMS,"It is amazing what you can do these days with the services offered on the web. Flickr stores terabytes of photos for us and converts them automatically to all kinds of sizes, finds people in them and even allows us to edit them online. YouTube does almost the same complete job with videos, LinkedIn allows us to maintain our CV, Delicious our bookmarks and so on. We don’t have to do these tasks ourselves any more, as all of these systems also come with ways to use the data in the form of Application Programming Interfaces, or APIs for short. APIs give us raw data when we send requests telling the system what we want to get back. The problem is that every API has a different idea of what is a simple way of accessing this data and in which format to give it back. Making it easier to access APIs What we need is a way to abstract the pains of different data formats and authentication methods away from the developer — and this is the purpose of the Yahoo Query Language, or YQL for short. Libraries like jQuery and YUI make it easy and reliable to use JavaScript in browsers (yes, even IE6) and YQL allows us to access web services and even the data embedded in web documents in a simple fashion – SQL style. Select * from the web and filter it the way I want YQL is a web service that takes a few inputs itself: A query that tells it what to get, update or access An output format – XML, JSON, JSON-P or JSON-P-X A callback function (if you defined JSON-P or JSON-P-X) You can try it out yourself – run the following query in the console to get back Flickr photos for the search term ""santa"" in XML format: select * from flickr.photos.search where text=""santa"" The easiest way to take your first steps with YQL is to look at the console. There you get sample queries, access to all the data sources available to you, and you can easily put together complex queries. In this article, however, let’s use PHP to put together a web page that pulls in Flickr photos, blog posts, videos from YouTube and the latest bookmarks from Delicious. Check out the demo and get the source code on GitHub.
<?php /* YouTube RSS */ $query = 'select description from rss(5) where url=""http://gdata.youtube.com/feeds/base/users/chrisheilmann/uploads?alt=rss&v=2&orderby=published&client=ytapi-youtube-profile"";'; /* Flickr search by user id */ $query .= 'select farm,id,owner,secret,server,title from flickr.photos.search where user_id=""11414938@N00"";'; /* Delicious RSS */ $query .= 'select title,link from rss where url=""http://feeds.delicious.com/v2/rss/codepo8?count=10"";'; /* Blog RSS */ $query .= 'select title,link from rss where url=""http://feeds.feedburner.com/wait-till-i/gwZf""'; /* The YQL web service root with JSON as the output */ $root = 'http://query.yahooapis.com/v1/public/yql?format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys'; /* Assemble the query */ $query = ""select * from query.multi where queries='"".$query.""'""; $url = $root . '&q=' . urlencode($query); /* Do the curl call (access the data just like a browser would) */ $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); $output = curl_exec($ch); curl_close($ch); $data = json_decode($output); $results = $data->query->results->results; /* YouTube output */ $youtube = '<ul id=""youtube"">'; foreach($results[0]->item as $r){ $cleanHTML = undoYouTubeMarkupCrimes($r->description); $youtube .= '<li>'.$cleanHTML.'</li>'; } $youtube .= '</ul>'; /* Flickr output */ $flickr = '<ul id=""flickr"">'; foreach($results[1]->photo as $r){ $flickr .= '<li>'. '<a href=""http://www.flickr.com/photos/codepo8/'.$r->id.'/"">'. '<img src=""http://farm' .$r->farm . '.static.flickr.com/'. $r->server . '/' . $r->id . '_' . $r->secret . '_s.jpg"" alt=""'.$r->title.'""></a></li>'; } $flickr .= '</ul>'; /* Delicious output */ $delicious = '<ul id=""delicious"">'; foreach($results[2]->item as $r){ $delicious .= '<li><a href=""'.$r->link.'"">'.$r->title.'</a></li>'; } $delicious .= '</ul>'; /* Blog output */ $blog = '<ul id=""blog"">'; foreach($results[3]->item as $r){ $blog .= '<li><a href=""'.$r->link.'"">'.$r->title.'</a></li>'; } $blog .= '</ul>'; function undoYouTubeMarkupCrimes($str){ $cleaner = preg_replace('/555px/','100%',$str); $cleaner = preg_replace('/width=""[^""]+""/','',$cleaner); $cleaner = preg_replace('/<tbody>/','<colgroup><col width=""20%""><col width=""50%""><col width=""30%""></colgroup><tbody>',$cleaner); return $cleaner; } ?> What we are doing here is creating a few different YQL statements and queuing them together with the query.multi table. Each of these can be run inside YQL itself. Check out the YouTube, Flickr, Delicious and Blog examples in the console if you don’t believe me. The benefit of using this table is that we don’t make individual requests for each query; we get all the data in one single request – which means a much better performing solution, as the YQL server farm can reach these web services faster than our own servers can. We point the query to the YQL web service endpoint and get the resulting data using cURL. All that we need to do then is convert the returned data to HTML lists that can be printed out inside an HTML template. Mixing, matching and using HTML as a data source This was a simple example of what YQL can do for you. Where it gets really powerful, however, is in mixing and matching different APIs. YQL is also a good tool for getting information from HTML documents.
By using the html table you can load the content of an HTML document (which gets fixed automatically by HTMLTidy) and use XPath to filter down the results to what you need. Take the following example, which takes headlines from the news.bbc.co.uk homepage and runs the results through Yahoo’s Term Extractor API to give you a list of currently hot topics. select * from search.termextract where context in ( select content from html where url=""http://news.bbc.co.uk"" and xpath=""//table[@width=800]//a"" ) Try it out in the console or see the results here. In English, this means: Go to http://news.bbc.co.uk and get me the HTML. Run it through HTML Tidy to clean it up. Get me only the links inside the table with a width attribute set to 800. Get only the content of each link. For each of the links, take the content and send it as context to the Yahoo Term Extractor API. If we choose JSON-P as the output format we can use the outcome directly in JavaScript (see this demo or see its source): <ul id=""hottopics""></ul> <script type=""text/javascript""> function hottopics(o){ var res = o.query.results.Result, all = res.length, topics = {}, out = [], html = '', i=0; /* create hash from topics to prevent repetition */ for(i=0;i<all;i++){ topics[res[i]] = res[i]; }; for(i in topics){ out.push(i); }; html = '<li>' + out.join('</li><li>') + '</li>'; document.getElementById('hottopics').innerHTML = html; }; </script> <script type=""text/javascript"" src=""http://query.yahooapis.com/v1/public/yql?q=select%20content%20from%20search.termextract%20where%20context%20in%20(select%20content%20from%20html%20where%20url%3D%22http%3A%2F%2Fnews.bbc.co.uk%22%20and%20xpath%3D%22%2F%2Ftable%5B%40width%3D800%5D%2F%2Fa%22)&format=json&callback=hottopics""></script> Using JSON, we can also use PHP, which means the demo works for everybody – not only those with JavaScript enabled (see this demo or see its source): <ul id=""hottopics""><li> <?php $url = 'http://query.yahooapis.com/v1/public/yql?q=select%20content'. '%20from%20search.termextract%20where%20context%20in'. '%20(select%20content%20from%20html%20where%20url%3D%22'. 'http%3A%2F%2Fnews.bbc.co.uk%22%20and%20xpath%3D%22%2F%2F'. 'table%5B%40width%3D800%5D%2F%2Fa%22)&format=json'; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); $output = curl_exec($ch); curl_close($ch); $data = json_decode($output); $topics = array_unique($data->query->results->Result); echo join('</li><li>',$topics); ?> </li></ul> Summary This article could only scratch the surface of YQL. You not only have read access to the web, but you can also write to web services. For example you can update Twitter, post to your WordPress blog or shorten a URL with bit.ly. Using Open Tables you can add any web service to the YQL interface and you can even run server-side JavaScript, which is useful, for example, to return Flickr photos as HTML or to get the HTML content from a document that needs POST data. The web of data is already here, and using YQL you don’t have to be a web services expert to use it and be part of it.",2009,Christian Heilmann,chrisheilmann,2009-12-17T00:00:00+00:00,https://24ways.org/2009/the-web-is-your-cms/,code 188,Don't Lose Your :focus,"For many web designers, accessibility conjures up images of blind users with screenreaders, and the difficulties in making sites accessible to this particular audience.
Of course, accessibility covers a wide range of situations that go beyond the extreme example of screenreader users. And while it’s true that making a complex site accessible can often be a daunting prospect, there are also many small things that don’t take anything more than a bit of judicious planning, are very easy to test (without having to buy expensive assistive technology), and can make all the difference to certain user groups. In this short article we’ll focus on keyboard accessibility and how careless use of CSS can potentially make your sites completely unusable. Keyboard Access Users who for whatever reason can’t use a mouse will employ a keyboard (or keyboard-like custom interface) to navigate around web pages. By default, they will use TAB and SHIFT + TAB to move from one focusable element (links, form controls and area elements) of a page to the next. Note: in OS X, you’ll first need to turn on full keyboard access under System Preferences > Keyboard and Mouse > Keyboard Shortcuts. Safari under Windows needs to have the option Press Tab to highlight each item on a webpage in Preferences > Advanced enabled. Opera is the odd one out, as it has a variety of keyboard navigation options – the most relevant here being spatial navigation (via Shift+Down, Shift+Up, Shift+Left, and Shift+Right). But I Don’t Like Your Dotted Lines… To show users where they are within a page, browsers place an outline around the element that currently has focus. The ""problem"" with these default outlines is that some browsers (Internet Explorer and Firefox) also display them when a user clicks on a focusable element with the mouse. Particularly on sites that make extensive use of image replacement on links with ""off left"" techniques, this can create very unsightly outlines that stretch from the replaced element all the way to the left edge of the browser. Outline bleeding off to the left (image-replacement example from carsonified.com) There is a trivial workaround to prevent outlines from ""spilling over"" by adding a simple overflow:hidden, which keeps the outline in check around the clickable portion of the image-replaced element itself. Outline tamed with overflow:hidden But for many designers, even this is not enough. As a final solution, many actively suppress outlines altogether in their stylesheets. Controversially, even Eric Meyer’s popular reset.css – an otherwise excellent set of styles that levels the playing field of varying browser defaults – suppresses outlines. html, body, div, span, applet, object, iframe ... { ... outline: 0; ... } /* remember to define focus styles! */ :focus { outline: 0; } Yes, in his explanation (and in the CSS itself) Eric does remind designers to define relevant styles for :focus… but judging by the number of sites that seem to ignore this (and often remove the related comment from the stylesheet altogether), the message doesn’t seem to have sunk in. Anyway… hurrah! No more unsightly dotted lines on our lovely design. But what about keyboard users? Although technically they can still TAB from one element to the next, they now get no default cue as to where they are within the page (one notable exception here is Opera, where the outline is displayed regardless of stylesheets)… and if they’re Safari users, they won’t even get an indication of a link’s target in the status bar, like they would if they hovered over it with the mouse.
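If you really must suppress outlines for mouse users, one approach worth noting (a rough sketch of my own, not something from the original tests for this article) is to use a few lines of JavaScript to flag mouse use with a class on the html element, and scope the suppression to that class in your CSS – for example html.using-mouse a:focus { outline: none; }:

// rough sketch: uses addEventListener, so older IE would need attachEvent instead
var html = document.documentElement;
document.addEventListener('mousedown', function () {
  if (html.className.indexOf('using-mouse') === -1) {
    html.className += ' using-mouse';
  }
}, false);
document.addEventListener('keydown', function (e) {
  if (e.keyCode === 9) { // Tab means keyboard navigation, so bring the outlines back
    html.className = html.className.replace(/ ?using-mouse/, '');
  }
}, false);

But can the same trick be managed with CSS alone?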
Only Suppress outline For Mouse Users Is there a way to allow users navigating with the keyboard to retain the standard outline behaviour they’ve come to expect from their browser, while also ensuring that it doesn’t display for mouse users? Testing some convoluted style combinations After playing with various approaches (see Better CSS outline suppression for more details), the most elegant solution also seemed to be the simplest: don’t remove the outline on :focus, do it on :active instead – after all, :active is the dynamic pseudo-class that deals explicitly with the styles that should be applied when a focusable element is clicked or otherwise activated. a:active { outline: none; } The only minor issues with this method: if a user activates a link and then uses the browser’s back button, the outline becomes visible. Oh, and old versions of Internet Explorer notoriously get confused by the exact meaning of :focus, :hover and :active, so this method fails in IE6 and below. Personally, I can live with both of these. Note: at the last minute before submitting this article, I discovered a fatal flaw in my test. It appears that the outline still manages to appear in the time between activating a link and the link target loading (which in hindsight is logical – after activation, the link does indeed receive focus). As my test page only used in-page links, this issue never came up before. The slightly less elegant solution is to also suppress the outline on :hover. a:hover, a:active { outline: none; } In Conclusion Of course, many web designers may argue that they know what’s best, even for their keyboard-using audience. Maybe they’ve removed the default outline and are instead providing some carefully designed :focus styles. If they know for sure that these custom styles are indeed a reliable alternative for their users, more power to them… but, at the risk of sounding like Jakob ""blue underlined links"" Nielsen, I’d still argue that sometimes the default browser behaviours are best left alone. Complemented, yes (and if you’re already defining some fancy styles for :hover, by all means feel free to also make them display on :focus)… but not suppressed.",2009,Patrick Lauke,patricklauke,2009-12-09T00:00:00+00:00,https://24ways.org/2009/dont-lose-your-focus/,code 191,CSS Animations,"Friend: You should learn how to write CSS! Me: … Friend: CSS; Cascading Style Sheets. If you’re serious about web design, that’s the next thing you should learn. Me: What’s wrong with <font> tags? That was 8 years ago. Thanks to the hard work of Jeffrey, Andy, Andy, Cameron, Colly, Dan and many others, learning how to decently mark up a website and write lightweight stylesheets was surprisingly easy. They made it so easy even a complete idiot (OH HAI) was able to quickly master it. And then… nothing. For a long time, it seemed like nothing was happening in the land of CSS; time stood still. Once you knew the basics, there wasn’t anything new to keep up with. It looked like a great band had split, but people just kept re-releasing their music in various “Best Of!” or “Remastered!” albums. Fast forward a couple of years to late 2006. On the official WebKit blog Surfin’ Safari, there’s an article about something called CSS animations. Great new stuff to play with, but only supported by nightly builds (read: very, very beta) of WebKit. In the following months, they released other goodies, like CSS gradients, CSS reflections, CSS masks, and even more CSS animation sexiness.
Whoa, looks like the band got back together, found their second youth, and went into overdrive! The problem was that if you wanted to listen to their new albums, you had to own some kind of new high-tech player no one on earth (besides some early adopters) owned. Back in the time machine. It is now late 2009, close to Christmas. Things have changed. Browsers supporting these new toys are widely available left and right. Even non-techies are using these advanced browsers to surf the web on a daily basis! Epic win? Almost, but at least this gives us enough reason to start learning how we could use all this new CSS voodoo. On Monday, Natalie Downe showed you a good tutorial on Going Nuts with CSS Transitions. Today, I’m taking it one step further… Howto: A basic spinner No matter how fast internet tubes or servers are, we’ll always need spinners to indicate something’s happening behind the scenes. Up until now, people would go to some site, pick one of the available templates, customize their foreground and background colors, and download a beautiful GIF image. There are some downsides to this though: It’s only semi-transparent: if you change your mind and pick a slightly different background color, you need to go back to the site, set all the parameters again, and replace your current image. There isn’t even a way to pick an image or gradient as background. Limited number of frames, probably to keep the file size as small as possible (don’t forget this thing needs to be loaded before whatever process is finished in the background), and you don’t have that 24 frames per second smoothness. This is just too fucking easy. As a front-end code geek, there must be a “cooler” way to do this! What do we need to make a spinner with CSS animations? One image, and one element on our webpage we can hook on to. Yes, that’s it. I created a simple transparent PNG that looks like it might be a spinner, and for the element on the page, I wrote this piece of genius HTML: <p id=""spinner"">Please wait while we do what we do best.</p> Looks semantic enough to me! Here’s the basic CSS I’m using to position the element in the center of the screen, and make the text inside it disappear: #spinner { position: absolute; top: 50%; left: 50%; margin: -100px 0 0 -100px; height: 200px; width: 200px; text-indent: 250px; white-space: nowrap; overflow: hidden; } Cool, but now we don’t see anything. Let’s pull rabbit number one out of the hat: -webkit-mask-image (accompanied by the previously mentioned transparent PNG image): #spinner { ... -webkit-mask-image: url(../img/spinner.png); } By now you should be feeling like a magician already. Oh, wait, we still have a blank screen; looks like we left something in the hat (tip: not rabbit droppings): #spinner { ... -webkit-mask-image: url(../img/spinner.png); background-color: #000; } Nice! What we’ve done right here is tell the element to clip onto the PNG. It’s a lot like clipping layers in Photoshop. So, spinners, they move, right? Into the hat again, and look what we pull out this time: CSS animations! #spinner { ... -webkit-mask-image: url(../img/spinner.png); background-color: #000; -webkit-animation-name: spinnerRotate; -webkit-animation-duration: 2s; -webkit-animation-iteration-count: infinite; -webkit-animation-timing-function: linear; } Some explanation: -webkit-animation-name: Name of the animation we’ll be defining later. -webkit-animation-duration: The timespan of the animation. -webkit-animation-iteration-count: Repeat once, a defined number of times or infinitely?
-webkit-animation-timing-function: linear is the one you’ll be using mostly. Other options are ease-in, ease-out, ease-in-out… Let’s define spinnerRotate: @-webkit-keyframes spinnerRotate { from { -webkit-transform:rotate(0deg); } to { -webkit-transform:rotate(360deg); } } En Anglais: Rotate #spinner starting at 0 degrees, ending at 360 degrees, over a timespan of 2 seconds, at a constant speed, and keep repeating this animation forever. That’s it! See it in action on the demo page. Note: these examples only work when you’re using a WebKit-based browser like Safari, Mobile Safari or Google Chrome. I’m confident though that Mozilla and Opera will try their very best to catch up with all this new CSS goodness soon. When looking at this example, you see the possibilities are endless. Another advantage is that you can change the look of it entirely by only changing a couple of lines of CSS, instead of re-creating and re-downloading the image from some website smelling like web 2.0 gone bad. I made another demo that shows how great it is to be able to change background and foreground colors (even on the fly!). So there you have it: a smoothly animated, fully transparent and completely customizable spinner. Cool? I think so. (Ladies?) But you can do a lot more with CSS animations than just create pretty spinners. Since I was fooling around with it anyway, I decided to test how far you can push this – space is the final limit, right? Conclusion CSS has never been more exciting than it is right now. I’m even prepared to say CSS is “cool” again, both for more experienced front-end developers and for the new designers discovering CSS every day now. But… Remember when JavaScript became popular? Remember when Flash became popular? Every time we’ve been given new toys, some people aren’t ashamed to use them in a way you can barely call constructive. I’m thinking of Geocities websites, loaded with glowing blocks of text, moving images, bad color usage… In the wise words of Stan Lee: with great power there must also come great responsibility! A sprinkle of CSS animations is better than a bucket load. Apply with care.",2009,Tim Van Damme,timvandamme,2009-12-15T00:00:00+00:00,https://24ways.org/2009/css-animations/,code 192,Cleaner Code with CSS3 Selectors,"The parts of CSS3 that seem to grab the most column inches on blogs and in articles are the shiny bits. Rounded corners, text shadow and new ways to achieve CSS layouts are all exciting and bring with them all kinds of possibilities for web design. However, what really gets me excited, as a developer, is a bit more mundane. In this article I’m going to take a look at some of the ways our front- and back-end code will be simplified by CSS3, by looking at the ways we achieve certain visual effects now in comparison to how we will achieve them in a glorious, CSS3-supported future. I’m also going to demonstrate how we can use these selectors now with a little help from JavaScript – which can work out very useful if you find yourself in a situation where you can’t change markup that is being output by some server-side code. The wonder of nth-child So why does nth-child get me so excited? Here is a really common situation: the designer would like the tables in the application to look like this: Setting every other table row to a different colour is a common way to enhance the readability of long rows. The tried and tested way to implement this is by adding a class to every other row.
If you are writing the markup for your table by hand this is a bit of a nuisance, and if you stick a row in the middle you have to change the rows the class is applied to. If your markup is generated by your content management system then you need to get the server-side code to add that class – if you have access to that code. <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd""> <html xmlns=""http://www.w3.org/1999/xhtml""> <head> <title>Striping every other row - using classes</title> <style type=""text/css""> body { padding: 40px; margin: 0; font: 0.9em Arial, Helvetica, sans-serif; } table { border-collapse: collapse; border: 1px solid #124412; width: 600px; } th { border: 1px solid #124412; background-color: #334f33; color: #fff; padding: 0.4em; text-align: left; } td { padding: 0.4em; } tr.odd td { background-color: #86B486; } </style> </head> <body> <table> <tr> <th>Name</th> <th>Cards sent</th> <th>Cards received</th> <th>Cards written but not sent</th> </tr> <tr> <td>Ann</td> <td>40</td> <td>28</td> <td>4</td> </tr> <tr class=""odd""> <td>Joe</td> <td>2</td> <td>27</td> <td>29</td> </tr> <tr> <td>Paul</td> <td>5</td> <td>35</td> <td>2</td> </tr> <tr class=""odd""> <td>Louise</td> <td>65</td> <td>65</td> <td>0</td> </tr> </table> </body> </html> View Example 1 This situation is something I deal with on almost every project, and apart from being an extra thing to do, it just isn’t ideal having the server-side code squirt classes into the markup for purely presentational reasons. This is where the nth-child pseudo-class selector comes in. The server-side code creates a valid HTML table for the data, and the CSS then selects the odd rows with the following selector: tr:nth-child(odd) td { background-color: #86B486; } View Example 2 The odd and even keywords are very handy in this situation – however you can also use a multiplier here. 2n would be equivalent to the keyword even, 3n would select every third row, and so on. Browser support Sadly, nth-child has pretty poor browser support. It is not supported in Internet Explorer 8 and has somewhat buggy support in some other browsers. Firefox 3.5 does have support. In some situations, however, you might want to consider using JavaScript to add this support to browsers that don’t have it. This can be very useful if you are dealing with a Content Management System where you have no ability to change the server-side code to add classes into the markup. I’m going to use jQuery in these examples, as it is very simple to use the same selector used in the CSS to target elements with jQuery – however you could use any library or write your own function to do the same job. In the CSS I have added the original class selector to the nth-child selector: tr:nth-child(odd) td, tr.odd td { background-color: #86B486; } Then I am adding some jQuery to add a class to the markup once the document has loaded – using the very same nth-child selector that works for browsers that support it. <script src=""http://code.jquery.com/jquery-latest.js""></script> <script> $(document).ready(function(){ $(""tr:nth-child(odd)"").addClass(""odd""); }); </script> View Example 3 We could just add a background colour to the element using jQuery; however, I prefer not to mix that information into the JavaScript: if we change the colour of our table rows, I would need to remember to change it in both the CSS and the JavaScript.
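One refinement you could add here (my own sketch, not one of the numbered examples): feature-test the selector first, so that browsers with native nth-child support skip the scripted fallback entirely. querySelector throws an exception when handed a selector it can’t parse, which gives us a handy test – assuming, reasonably, that a browser which parses the selector also applies it in CSS:

function selectorSupported(selector) {
  if (!document.querySelector) { return false; } // no Selectors API at all (IE6 and IE7)
  try {
    document.querySelector(selector);
    return true;
  } catch (e) {
    return false; // the browser couldn't parse the selector
  }
}
$(document).ready(function(){
  if (!selectorSupported('tr:nth-child(odd)')) {
    $('tr:nth-child(odd)').addClass('odd');
  }
});

jQuery’s selector engine implements nth-child itself, so the fallback still works in browsers without native support.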
Doing something different with the last element So here’s another thing that we often deal with. You have a list of items all floated left, with a right-hand margin on each element, constrained within a fixed-width layout. If each element has the right margin applied, the margin on the final element will cause the set to become too wide, forcing that last item down to the next row – as shown in the example below, where I have used a grey border to indicate the fixed width. Currently we have two ways to deal with this. We can put a negative right margin on the list, the same width as the space between the elements. This means that the extra margin on the final element fills that space and the item doesn’t drop down. <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd""> <html xmlns=""http://www.w3.org/1999/xhtml""> <head> <title>The last item is different</title> <style type=""text/css""> body { padding: 40px; margin: 0; font: 0.9em Arial, Helvetica, sans-serif; } div#wrapper { width: 740px; float: left; border: 5px solid #ccc; } ul.gallery { margin: 0 -10px 0 0; padding: 0; list-style: none; } ul.gallery li { float: left; width: 240px; margin: 0 10px 10px 0; } </style> </head> <body> <div id=""wrapper""> <ul class=""gallery""> <li><img src=""xmas1.jpg"" alt=""baubles"" /></li> <li><img src=""xmas2.jpg"" alt=""star"" /></li> <li><img src=""xmas3.jpg"" alt=""wreath"" /></li> </ul> </div> </body> </html> View Example 4 The other solution is to put a class on the final element and, in the CSS, remove the margin for this class. ul.gallery li.last { margin-right: 0; } This second solution may not be easy if the content is generated from server-side code that you don’t have access to change. It could all be so different. In CSS3 we have marvellously common-sense selectors such as last-child, meaning that we can simply add rules for the last list item. ul.gallery li:last-child { margin-right: 0; } View Example 5 This removes the margin on the li which is the last-child of the ul with a class of gallery. No messing about sticking classes on the last item, or pushing the width of the item out with a negative margin. If this list of items repeated ad infinitum, then you could also use nth-child for this task, creating a rule that makes every third element margin-less. ul.gallery li:nth-child(3n) { margin-right: 0; } View Example 6 A similar example is where the designer has added borders to the bottom of each element – but the last item does not have a border or is in some other way different. Again, only a class added to the last element will save you here if you cannot rely on using the last-child selector. Browser support for last-child The situation for last-child is similar to that of nth-child, in that there is no support in Internet Explorer 8. However, once again it is very simple to replicate the functionality using jQuery, adding our .last class to the last list item. $(""ul.gallery li:last-child"").addClass(""last""); We could also use the nth-child selector to add the .last class to every third list item. $(""ul.gallery li:nth-child(3n)"").addClass(""last""); View Example 7 Fun with forms Styling forms can be a bit of a trial, made difficult by the fact that any CSS applied to the input element will affect text fields, submit buttons, checkboxes and radio buttons. As developers we are left adding classes to our form fields to differentiate them.
In most builds all of my text fields have a simple class of text, whereas I wouldn’t dream of adding a class of para to every paragraph element in a document. <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd""> <html xmlns=""http://www.w3.org/1999/xhtml""> <head> <title>Styling form fields</title> <style type=""text/css""> body { padding: 40px; margin: 0; font: 0.9em Arial, Helvetica, sans-serif; } form div { clear: left; padding: 0 0 0.8em 0; } form label { float: left; width: 120px; } form .text, form textarea { border:1px solid #333; padding: 0.2em; width: 400px; } form .button { border: 1px solid #333; background-color: #eee; color: #000; padding: 0.1em; } </style> </head> <body> <h1>Send your Christmas list to Santa</h1> <form method=""post"" action="""" id=""christmas-list""> <div><label for=""fName"">Name</label> <input type=""text"" name=""fName"" id=""fName"" class=""text"" /></div> <div><label for=""fEmail"">Email address</label> <input type=""text"" name=""fEmail"" id=""fEmail"" class=""text"" /></div> <div><label for=""fList"">Your list</label> <textarea name=""fList"" id=""fList"" rows=""10"" cols=""30""></textarea></div> <div><input type=""submit"" name=""btnSubmit"" id=""btnSubmit"" value=""Submit"" class=""button"" ></div> </form> </body> </html> View Example 8 Attribute selectors provide a way of targeting elements by looking at the attributes of those elements. Unlike the other examples in this article, which are CSS3 selectors, the attribute selector is actually a CSS2.1 selector – it just doesn’t get much use because of the lack of support in Internet Explorer 6. Using attribute selectors we can write rules for text inputs and form buttons without needing to add any classes to the markup. For example, after removing the text and button classes from my text and submit button input elements, I can use the following rules to target them: form input[type=""text""] { border: 1px solid #333; padding: 0.2em; width: 400px; } form input[type=""submit""]{ border: 1px solid #333; background-color: #eee; color: #000; padding: 0.1em; } View Example 9 Another problem that I encounter with forms is where I am using CSS to position my labels and form elements by floating the labels. This works fine as long as I want all of my labels to be floated; however, sometimes we get a set of radio buttons or a checkbox, and I don’t want the label element to be floated. As you can see in the example below, the label for the checkbox is squashed up into the space used for the other labels, yet it makes more sense for the checkbox to display after the text. I could use a class on this label element; however, an attribute selector lets me target the label directly by looking at the value of its for attribute. label[for=""fOptIn""] { float: none; width: auto; } Being able to precisely target attributes in this way is incredibly useful, and once IE6 is no longer an issue this will really help to clean up our markup and save us from having to create all kinds of special cases when generating this markup on the server side. Browser support The news for attribute selectors is actually pretty good, with Internet Explorer 7+, Firefox 2+ and all other modern browsers having support. As I have already mentioned, this is a CSS2.1 selector and so we really should expect to be able to use it as we head into 2010!
Internet Explorer 7 has slightly buggy support and will fail on the label example shown above; however, I discovered a workaround in the Sitepoint CSS reference comments. Adding the selector label[htmlFor=""fOptIn""] to the correct selector will create a match for IE7. IE6 does not support these selectors but, once again, you can use jQuery to plug the holes in IE6 support. The following jQuery will add the text and button classes to your fields and also add a checks class to the label for the checkbox, which you can use to remove the float and width for this element. $('form input[type=""submit""]').addClass(""button""); $('form input[type=""text""]').addClass(""text""); $('label[for=""fOptIn""]').addClass(""checks""); View Example 10 The selectors I’ve used in this article are easy to overlook, as we do have ways to achieve these things currently. As developers – especially when we have frameworks and existing code that cope with these situations – it is easy to carry on as we always have done. I think that the time has come to start to clean up our front- and back-end code and replace our reliance on classes with these more advanced selectors. With the help of a little JavaScript almost all users will still get the full effect and, where we are dealing with purely visual effects, there is definitely a case to be made for not worrying about the very small percentage of people with old browsers and no JavaScript. They will still receive a readable website; it may just be missing some of the finesse offered to the modern browsing experience.",2009,Rachel Andrew,rachelandrew,2009-12-20T00:00:00+00:00,https://24ways.org/2009/cleaner-code-with-css3-selectors/,code 193,Web Content Accessibility Guidelines—for People Who Haven't Read Them,"I’ve been a huge fan of the Web Content Accessibility Guidelines 2.0 since the World Wide Web Consortium (W3C) published them, nine years ago. I’ve found them practical and future-proof, and found that they can save a huge amount of time for designers and developers. You can apply them to anything that you can open in a browser. My favourite part is when I use the guidelines to make a website accessible, and then attend user-testing and see someone with a disability easily using that website. Today, the United Nations International Day of Persons with Disabilities, seems like a good time to re-read Laura Kalbag’s explanation of why we should bother with accessibility. That should motivate you to devour this article. If you haven’t read the Web Content Accessibility Guidelines 2.0, you might find them a bit off-putting at first. The editors needed to create a single standard that countries around the world could refer to in legislation, and so some of the language in the guidelines reads like legalese. The editors also needed to future-proof the guidelines, and so some terminology—such as “time-based media” and “programmatically determined”—can sound ambiguous. The guidelines can seem lengthy, too: printing the guidelines, the Understanding WCAG 2.0 document, and the Techniques for WCAG 2.0 document would take 1,200 printed pages. This festive season, let’s rip off that legalese and ambiguous terminology like wrapping paper, and see—in a single article—what gifts the Web Content Accessibility Guidelines 2.0 editors have bestowed upon us. Can your users perceive the information on your website?
The first guideline has criteria that help you prevent your users from asking “What the **** is this thing here supposed to be?” 1.1.1 Text is the most accessible format for information. Screen readers—such as the “VoiceOver” setting on your iPhone or the “TalkBack” app on your Android phone—understand text better than any other format. The same applies for other assistive technology, such as translation apps and Braille displays. So, if you have anything on your webpage that’s not text, you must add some text that gives your user the same information. You probably know how to do this already; for example: for images in webpages, put some alternative text in an alt attribute to tell your user what the image conveys to the user; for photos in tweets, add a description to make the images accessible; for Instagram posts, write a caption that conveys the photo’s information. The alternative text should allow the user to get the same information as someone who can see the image. For websites that have too many images for someone to add alternative text to, consider how machine learning and Dynamically Generated Alt Text might—might—be appropriate. You can probably think of a few exceptions where providing text to describe an image might not make sense. Remember I described these guidelines as “practical”? They cover all those exceptions: User interface controls such as buttons and text inputs must have names or labels to tell your user what they do. If your webpage has video or audio (more about these later on!), you must—at least—have text to tell the user what they are. Maybe your webpage has a test where your user has to answer a question about an image or some audio, and alternative text would give away the answer. In that case, just describe the test in text so your users know what it is. If your webpage features a work of art, tell your user the experience it evokes. If you have to include a Captcha on your webpage—and please avoid Captchas if at all possible, because some users cannot get past them—you must include text to tell your user what it is, and make sure that it doesn’t rely on only one sense, such as vision. If you’ve included something just as decoration, you must make sure that your user’s assistive technology can ignore it. Again, you probably know how to do this. For example, you could use CSS instead of HTML to include decorative images, or you could add an empty alt attribute to the img element. (Please avoid that recent trend where developers add empty alt attributes to all images in a webpage just to make the HTML validate. You’re better than that.) (Notice that the guidelines allow you to choose how to conform to them, with whatever technology you choose. To make your website conform to a guideline, you can either choose one of the techniques for WCAG 2.0 for that guideline or come up with your own. Choosing a tried-and-tested technique usually saves time!) 1.2.1 If your website includes a podcast episode, speech, lecture, or any other recorded audio without video, you must include a transcription or some other text to give your user the same information. In a lot of cases, you might find this easier than you expect: professional transcription services can prove relatively inexpensive and fast, and sometimes a speaker or lecturer can provide the speech or lecture notes that they read out word-for-word. Just make sure that all your users can get the same information and the same results, whether they can hear the audio or not. 
For example, David Smith and Marco Arment always publish episode transcripts for their Under the Radar podcast. Similarly, if your website includes recorded video without audio—such as an animation or a promotional video—you must either use text to detail what happens in the video or include an audio version. Again, this might work out easier than you perhaps fear: for example, you could check to see whether the animation started life as a list of instructions, or whether the promotional video conveys the same information as the “About Us” webpage. You want to make sure that all your users can get the same information and the same results, whether they can see that video or not. 1.2.2 If your website includes recorded videos with audio, you must add captions to those videos for users who can’t hear the audio. Professional transcription services can provide you with time-stamped text in caption formats that YouTube supports, such as .srt and .sbv. You can upload those to YouTube, so captions appear on your videos there. YouTube can auto-generate captions, but the quality varies from impressively accurate to comically inaccurate. If you have a text version of what the people in the video said—such as the speech that a politician read or the bedtime story that an actor read—you can create a transcript file in .txt format, without timestamps. YouTube then creates captions for your video by synchronising that text to the audio in the video. If you host your own videos, you can ask a professional transcription service to give you .vtt files that you can add to a video element’s track element—or you can handcraft your own. (A quick aside: if your website has more videos than you can caption in a reasonable amount of time, prioritise the most popular videos, the most important videos, and the videos most relevant to people with disabilities. Then make sure your users know how to ask you to caption other videos as they encounter them.) 1.2.3 If your website has recorded videos that have audio, you must add an “audio description” narration to the video to describe important visual details, or add text to the webpage to detail what happens in the video for users who cannot see the videos. (I like to add audio files from videos to my Huffduffer account so that I can listen to them while commuting.) Maybe your home page has a video where someone says, “I’d like to explain our new TPS reports” while “Bill Lumbergh, division Vice President of Initech” appears on the bottom of the screen. In that case, you should add an audio description to the video that announces “Bill Lumbergh, division Vice President of Initech”, just before Bill starts speaking. As always, you can make life easier for yourself by considering all of your users, before the event: in this example, you could ask the speaker to begin by saying, “I’m Bill Lumbergh, division Vice President of Initech, and I’d like to explain our new TPS reports”—so you won’t need to spend time adding an audio description afterwards. 1.2.4 If your website has live videos that have some audio, you should get a stenographer to provide real-time captions that you can include with the video. I’ll be honest: this can prove tricky nowadays. The Web Content Accessibility Guidelines 2.0 predate YouTube Live, Instagram Live Stories, Periscope, and other such services. If your organisation creates a lot of live videos, you might not have enough resources to provide real-time captions for each one.
In that case, if you know the contents of the audio beforehand, publish the contents during the live video—or failing that, publish a transcription as soon as possible. 1.2.5 Remember what I said about the recorded videos that have audio? If you can choose to either add an audio description or add text to the webpage to detail what happens in the video, you should go with the audio description. 1.2.6 If your website has recorded videos that include audio information, you could provide a sign language version of the audio information; some people understand sign language better than written language. (You don’t need to caption a video of a sign language version of audio information.) 1.2.7 If your website has recorded videos that have audio, and you need to add an audio description, but the audio doesn’t have enough pauses for you to add an “audio description” narration, you could provide a separate version of that video where you have added pauses to fit the audio description into. 1.2.8 Let’s go back to the recorded videos that have audio once more! You could add text to the webpage to detail what happens in the video, so that people who can neither read captions nor hear dialogue and audio description can use braille displays to understand your video. 1.2.9 If your website has live audio, you could get a stenographer to provide real-time captions. Again, if you know the contents of the audio beforehand, publish the contents during the live audio or publish a transcription as soon as possible. (Congratulations on making it this far! I know that seems like a lot to remember, but keep in mind that we’ve covered a complex area: helping your users to understand multimedia information that they can’t see and/or hear. Grab a mince pie to celebrate, and let’s keep going.) 1.3.1 You must mark up your website’s content so that your user’s browser, and any assistive technology they use, can understand the hierarchy of the information and how each piece of information relates to the rest. Once again, you probably know how to do this: use the most appropriate HTML element for each piece of information. Mark up headings, lists, buttons, radio buttons, checkboxes, and links with the most appropriate HTML element. If you’re looking for something to do to keep you busy this Christmas, scroll through the list of the elements of HTML. Do you notice any elements that you didn’t know, or that you’ve never used? Do you notice any elements that you could use on your current projects, to mark up the content more accurately? Also, revise HTML table advanced features and accessibility, how to structure an HTML form, and how to use the native form widgets—you might be surprised at how much you can do with just HTML! Once you’ve mastered those, you can make your website much more usable for all of your users. 1.3.2 If your webpage includes information that your user has to read in a certain order, you must make sure that their browser and assistive technology can present the information in that order. Don’t rely on CSS or whitespace to create that order visually. Check that the order of the information makes sense when CSS and whitespace aren’t formatting it. Also, try using the Tab key to move the focus through the links and form widgets on your webpage. Does the focus go where you expect it to? Keep this in mind when using order in CSS Grid or Flexbox. 1.3.3 You must not presume that your users can identify sensory characteristics of things on your webpage.
Some users can’t tell what you’ve positioned where on the screen. For example, instead of asking your users to “Choose one of the options on the left”, you could ask them to “Choose one of our new products” and link to that section of the webpage. 1.4.1 You must not rely on colour as the only way to convey something to your users. Some of your users can’t see, and some of your users can’t distinguish between colours. For example, if your webpage uses green to highlight the products that your shop has in stock, you could add some text to identify those products, or you could group them under a sub-heading. 1.4.2 If your webpage automatically plays a sound for more than 3 seconds, you must make sure your users can stop the sound or change its volume. Don’t rely on your user turning down the volume on their computer; some users need to hear the screen reader on their computer, and some users just want to keep listening to whatever they were listening to before your webpage interrupted them! 1.4.3 You should make sure that your text contrasts enough with its background, so that your users can read it. Bookmark Lea Verou’s Contrast Ratio calculator now. You can enter the text colour and background colour as named colours, or as RGB, RGBa, HSL, or HSLa values. You should make sure that: normal text set at 24px or larger has a ratio of at least 3:1; bold text set at 18.75px or larger has a ratio of at least 3:1; all other text has a ratio of at least 4½:1. You don’t have to do this for disabled form controls, decorative stuff, or logos—but you could! 1.4.4 You should make sure your users can resize the text on your website up to 200% without using their assistive technology—and still access all your content and functionality. You don’t have to do this for subtitles or images of text. 1.4.5 You should avoid using images of text and just use text instead. In 1998, Jeffrey Veen’s first Hot Design Tip said, “Text is text. Graphics are graphics. Don’t confuse them.” Now that you can apply powerful CSS text-styling properties, use CSS Grid to precisely position text, and choose from thousands of web fonts (Jeffrey co-founded Typekit to help with this), you pretty much never need to use images of text. The guidelines say you can use images of text if you let your users specify the font, size, colour, and background of the text in the image of text—but I’ve never seen that on a real website. Also, this doesn’t apply to logos. 1.4.6 Let’s go back to colour contrast for a second. You could make your text contrast even more with its background, so that even more of your users can read it. To do that, use Lea Verou’s Contrast Ratio calculator to make sure that: normal text that is 24px or larger has a ratio of at least 4½:1; bold text that is 18.75px or larger has a ratio of at least 4½:1; all other text has a ratio of at least 7:1. 1.4.7 If your website has recorded speech, you could make sure there are no background sounds, or that your users can turn off any background sounds. If that’s not possible, you could make sure that any background sounds that last longer than a couple of seconds are at least four times quieter than the speech. This doesn’t apply to audio Captchas, audio logos, singing, or rapping. (Yes, these guidelines mention rapping!) 1.4.8 You could make sure that your users can reformat blocks of text on your website so they can read them better.
To do this, make sure that your users can: specify the colours of the text and the background, and make the blocks of text less than 80 characters wide, and align text to the left (or right for right-to-left languages), and set the line height to 150%, and set the vertical distance between paragraphs to 1½ times the line height of the text, and resize the text (without using their assistive technology) up to 200% and still not have to scroll horizontally to read it. By the way, when you specify a colour for text, always specify a colour for its background too. Don’t rely on default background colours! 1.4.9 Let’s return to images of text for a second. You could make sure that you use them only for decoration and logos. Can users operate the controls and links on your website? The second guideline has criteria that help you prevent your users from asking, “How the **** does this thing work?” 2.1.1 You must make sure that your users can carry out all of your website’s activities with just their keyboard, without time limits for pressing keys. (This doesn’t apply to drawing or anything else that requires a pointing device such as a mouse.) Again, if you use the most appropriate HTML element for each piece of information and for each form element, this should prove easy. 2.1.2 You must make sure that when the user uses the keyboard to focus on some part of your website, they can then move the focus to some other part of your webpage without needing to use a mouse or touch the screen. If your website needs them to do something complex before they can move the focus elsewhere, explain that to your user. These “keyboard traps” have become rare, but beware of forms that move focus from one text box to another as soon as they receive the correct number of characters. 2.1.3 Let’s revisit making sure that your users can carry out all of your website’s activities with just their keyboard, without time limits for pressing keys. You could make sure that your user can do absolutely everything on your website with just the keyboard. 2.2.1 Sometimes people need more time than you might expect to complete a task on your website. If any part of your website imposes a time limit on a task, you must do at least one of these: let your users turn off the time limit before they encounter it; or let your users increase the time limit to at least 10 times the default time limit before they encounter it; or warn your users before the time limit expires and give them at least 20 seconds to extend it, and let them extend it at least 10 times. Remember: these guidelines are practical. They allow you to enforce time limits for real-time events such as auctions and ticket sales, where increasing or extending time limits wouldn’t make sense. Also, the guidelines allow you to enforce a maximum time limit of 20 hours. The editors chose 20 hours because people need to go to sleep at some stage. See? Practical! 2.2.2 In my experience, this criterion remains the least well-known—even though some users can only use websites that conform to it. If your website presents content alongside other content that can distract users by automatically moving, blinking, scrolling, or updating, you must make sure that your users can: pause, stop, or hide the other content if it’s not essential and lasts more than 5 seconds; and pause, stop, hide, or control the frequency of the other content if it automatically updates. It’s OK if your users miss live information such as stock price updates or football scores; you can’t do anything about that!
Also, this doesn’t apply to animations such as progress bars that you put on a website to let all users know that the webpage isn’t frozen. (If this one sounds complex, just add a pause button to anything that might distract your users.) 2.2.3 Let’s go back to time limits on tasks on your website. You could make your website even easier to use by removing all time limits except those on real-time events such as auctions and ticket sales. That would mean your user wouldn’t need to interact with a timer at all. 2.2.4 You could let your users turn off all interruptions—server updates, promotions, and so on—apart from any emergency information. 2.2.5 This is possibly my favourite of these criteria! After your website logs your user out, you could make sure that when they log in again, they can continue from where they were without having lost any information. Do that, and you’ll be on everyone’s Nice List this Christmas. 2.3.1 You must make sure that nothing flashes more than three times a second on your website, unless you can make sure that the flashes remain below the acceptable general flash and red flash thresholds… 2.3.2 …or you could just make sure that nothing flashes more than three times per second on your website. This is usually an easier goal. 2.4.1 You must make sure that your users can jump past any blocks of content, such as navigation menus, that are repeated throughout your website. You know the drill here: using HTML’s sectioning elements such as header, nav, main, aside, and footer allows users with assistive technology to go straight to the content they need, and adding “Skip Navigation” links allows everyone to get to your main content faster. 2.4.2 You must add a proper title to describe each webpage’s topic. Your webpage won’t even validate without a title element, so make it a useful one. 2.4.3 If your users can focus on links and native form widgets, you must make sure that they can focus on elements in an order that makes sense. 2.4.4 You must make sure that your users can understand the purpose of a link when they read: the text of the link; or the text of the paragraph, list item, table cell, or table header for the cell that contains the link; or the heading above the link. You don’t have to do that for games and quizzes. 2.4.5 You should give your users multiple ways to find any webpage within a set of webpages. Add site-wide search and a site map and you’re done! This doesn’t apply for a webpage that is part of a series of actions (like a shopping cart and checkout flow) or to a webpage that is a result of a series of actions (like a webpage confirming that the user has bought what was in the shopping cart). 2.4.6 You should help your users to understand your content by providing: headings that describe the topics of your content; labels that describe the purpose of the native form widgets on the webpage. 2.4.7 You should make sure that users can see which element they have focussed on. Next time you use your website, try hitting the Tab key repeatedly. Does it visually highlight each item as it moves focus to it? If it doesn’t, search your CSS to see whether you’ve applied outline: 0; to all elements—that’s usually the culprit. Use the :focus pseudo-class to define how elements should appear when they have focus. 2.4.8 You could help your user to understand where the current webpage is located within your website. Add “breadcrumb navigation” and/or a site map and you’re done.
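While we’re on 2.4.7, here’s a minimal sketch of a clearly visible focus style; the colour and offset are my choices rather than anything the guidelines mandate: a:focus, button:focus { outline: 3px solid DarkOrange; outline-offset: 2px; } Pairing outline with outline-offset keeps the highlight clear of the element’s own borders, and because the rule covers links and buttons alike, keyboard users always know where they are.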
2.4.9 You could make links even easier to understand, by making sure that your users can understand the purpose of a link when they read the text of the link. Again, you don’t have to do that for games and quizzes. 2.4.10 You could use headings to organise your content by topic. Can users understand your content? The third guideline has criteria that help you prevent your users from asking, “What the **** does this mean?” 3.1.1 Let’s start this section with the criterion that possibly takes the least time to implement; you must make sure that the user’s browser can identify the main language that your webpage’s content is written in. For a webpage that has mainly English content, use <html lang=""en"">. 3.1.2 You must specify when content in another language appears in your webpage, like so: <q>I wish you a <span lang=""fr"">Joyeux Noël</span>.</q>. You don’t have to do this for proper names, technical terms, or words that you can’t identify a language for. You also don’t have to do it for words from a different language that people consider part of the language around those words; for example, <q>Come to our Christmas rendezvous!</q> is OK. 3.1.3 You could make sure that your users can find out the meaning of any unusual words or phrases, including idioms like “stocking filler” or “Bah! Humbug!” and jargon such as “VoiceOver” and “TalkBack”. Provide a glossary or link to a dictionary. 3.1.4 You could make sure that your users can find out the meaning of any abbreviation. For example, VoiceOver pronounces “Xmas” as “Smas” instead of “Christmas”. Using the abbr element and linking to a glossary can help. (Interestingly, VoiceOver pronounces “abbr” as “abbreviation”!) 3.1.5 Do your users need to be able to read better than a typically educated nine-year-old, to read your content (apart from proper names and titles)? If so, you could provide a version that doesn’t require that level of reading ability, or you could provide images, videos, or audio to explain your content. (You don’t have to add captions or audio description to those videos.) 3.1.6 You could make sure that your users can access the pronunciation of any word in your content if that word’s meaning depends on its pronunciation. For example, the word “close” could have one of two meanings, depending on pronunciation, in a phrase such as, “Ready for Christmas? Close now!” 3.2.1 Some users need to focus on elements to access information about them. You must make sure that focusing on an element doesn’t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form. 3.2.2 Webpages are easier for users when the controls do what they’re supposed to do. Unless you have warned your users about it, you must make sure that changing the value of a control such as a text box, checkbox, or drop-down list doesn’t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form. 3.2.3 To help your users to find the content they want on each webpage, you should put your navigation elements in the same place on each webpage. (This doesn’t apply when your user has changed their preferences or when they use assistive technology to change how your content appears.) 3.2.4 When a set of webpages includes things that have the same functionality on different webpages, you should name those things consistently. For example, don’t use the word “Search” for the search box on one webpage and “Find” for the search box on another webpage within that set of webpages. 
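Stepping back to 3.1.4 for a moment, here’s the abbr element in action as a minimal sketch; pair it with a glossary link where you can: <abbr title=""Christmas"">Xmas</abbr> With the title attribute in place, assistive technology can offer your users the expansion instead of guessing at a pronunciation.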
3.2.5 Let’s go back to major changes, such as a new window opening, another element taking focus, or a form being submitted. You could make sure that they only happen when users deliberately make them happen, or when you have warned users about them first. For example, you could give the user a button for updating some content instead of automatically updating that content. Also, if a link will open in a new window, you could add the words “opens in new window” to the link text. 3.3.1 Users make mistakes when filling in forms. Your website must identify each mistake to your user, and must describe the mistake to your users in text so that the user can fix it. One way to identify mistakes reliably to your users is to set the aria-invalid attribute to true in the element that has a mistake. That makes sure that users with assistive technology will be alerted about the mistake. Of course, you can then use the [aria-invalid=""true""] attribute selector in your CSS to visually highlight any such mistakes. Also, look into how certain attributes of the input element such as required, type, and list can help prevent and highlight mistakes. 3.3.2 You must include labels or instructions (and possibly examples) in your website’s forms, to help your users to avoid making mistakes. 3.3.3 When your user makes a mistake when filling in a form, your webpage should suggest ways to fix that mistake, if possible. This doesn’t apply in scenarios where those suggestions could affect the security of the content. 3.3.4 Whenever your user submits information that: has legal or financial consequences; or affects information that they have previously saved in your website; or is part of a test …you should make sure that they can: undo it; or correct any mistakes, after your webpage checks their information; or review, confirm, and correct the information before they finally submit it. 3.3.5 You could help prevent your users from making mistakes by providing obvious, specific help, such as examples, animations, spell-checking, or extra instructions. 3.3.6 Whenever your user submits any information, you could make sure that they can: undo it; or correct any mistakes, after your webpage checks their information; or review, confirm, and correct the information before they finally submit it. Have you made your website robust enough to work on your users’ browsers and assistive technologies? The fourth and final guideline has criteria that help you prevent your users from asking, “Why the **** doesn’t this work on my device?” 4.1.1 You must make sure that your website works as well as possible with current and future browsers and assistive technology. Prioritise complying with web standards instead of relying on the capabilities of currently popular devices and browsers. Web developers didn’t expect their users to be unwrapping the Wii U Browser five years ago—who knows what browsers and assistive technologies our users will be unwrapping in five years’ time? Avoid hacks, and use the W3C Markup Validation Service to make sure that your HTML has no errors. 4.1.2 If you develop your own user interface components, you must make their name, role, state, properties, and values available to your user’s browsers and assistive technologies. That should make them almost as accessible as standard HTML elements such as links, buttons, and checkboxes. “…and a partridge in a pear tree!” …as that very long Christmas song goes. We’ve covered a lot in this article—because your users have a lot of different levels of ability. 
Hopefully this has demystified the Web Content Accessibility Guidelines 2.0 for you. Hopefully you spotted a few situations that could arise for users on your website, and you now know how to tackle them. To start applying what we’ve covered, you might like to look at Sarah Horton and Whitney Quesenbery’s personas for Accessible UX. Discuss the personas, get into their heads, and think about which aspects of your website might cause problems for them. See if you can apply what we’ve covered today, to help users like them to do what they need to do on your website. How to know when your website is perfectly accessible for everyone LOL! There will never be a time when your website becomes perfectly accessible for everyone. Don’t aim for that. Instead, aim for regularly testing and making your website more accessible. Web Content Accessibility Guidelines (WCAG) 2.1 The W3C hope to release the Web Content Accessibility Guidelines (WCAG) 2.1 as a “recommendation” (that’s what the W3C call something that we should start using) by the middle of next year. Ten years may seem like a long time to move from version 2.0 to version 2.1, but consider the scale of the task: the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible. Keep an eye out for 2.1! You’ll go down in history One last point: I’ve met a surprising number of web designers and developers who do great work to make their websites more accessible without ever telling their users about it. Some of your potential customers have possibly tried and failed to use your website in the past. They probably won’t try again unless you let them know that things have improved. A quick Twitter search for your website’s name alongside phrases like “assistive technology”, “doesn’t work”, or “#fail” can let you find frustrated users—so you can tell them about how you’re making your website more accessible. Start making your websites work better for everyone—and please, let everyone know.",2017,Alan Dalton,alandalton,2017-12-03T00:00:00+00:00,https://24ways.org/2017/wcag-for-people-who-havent-read-them/,code 195,Levelling Up for Junior Developers,"If you are a junior developer starting out in the web industry, things can often seem a little daunting. There are so many things to learn, and as soon as you’ve learnt one framework or tool, there seems to be something new out there. I am lucky enough to lead a team of developers building applications for the web. During a recent One to One meeting with one of our junior developers, he asked me about a learning path and the basic fundamentals that every developer should know. After a bit of digging around, I managed to come up with a (not so exhaustive) list of principles that was shared with him. In this article, I will share the list with you, and hopefully help you level up from junior developer and become a better developer all round. This list doesn’t focus on a particular programming language, but rather coding concepts as a whole. The idea behind this list is that whether you are a front-end developer, back-end developer, full stack developer or just a curious one, these principles apply to everyone that writes code. I have tried to be technology agnostic, so that you can use these tips to guide you, whatever your tech stack might be. Without any further ado and in no particular order, let’s get started.
Refactoring code like a boss The Boy Scouts have a rule that goes “always leave the campground cleaner than you found it.” This rule can be applied to code too and ensures that you leave code cleaner than you found it. As a junior developer, it’s almost certain that you will either create or come across older code that could be improved. The resources below are a guide that will help point you in the right direction. My favourite book on this subject has to be Clean Code by Robert C. Martin. It’s a must read for anyone writing code as it helps you identify bad code and shows you techniques that you can use to improve existing code. If you find that in your day to day work you deal with a lot of legacy code, Improving Existing Technology through Refactoring is another useful read. Design Patterns are general repeatable solutions to commonly occurring problems in software design. My friend and colleague Ranj Abass likes to refer to them as a “common language” that helps developers discuss the way that we write code as a pattern. My favourite book on this subject is Head First Design Patterns which goes right back to the basics. Another great read on this topic is Refactoring to Patterns. Working Effectively With Legacy Code is another one that I found really valuable. Improving your debugging skills A solid understanding of how to debug code is a must for any developer. Whether you write code for the web or purely back-end code, the ability to debug will save you time and help you really understand what is going on under the hood. If you write front-end code for the web, one of my favourite resources to help you understand how to debug code in Chrome can be found on the Chrome Dev Tools website. While some of the tips are specific to Chrome, these techniques apply to any modern browser of your choice. At Settled, we use Node.js for much of our server side code. Without a doubt, our most trusted IDE has to be Visual Studio Code and the built-in debuggers are amazing. Regardless of whether you use Node.js or not, there are a number of plugins and debuggers that you can use in the IDE. I recommend reading the website of your favourite IDE for more information. As a side note, it is worth mentioning that Chrome Developer Tools actually has functionality that allows you to debug Node.js code too. This makes it a seamless transition from front-end code to server-side code debugging. The Debugging Mindset is an informative online article by Devon H. O’Dell and discusses the psychology of learning strategies that lead to effective problem-solving skills. A good understanding of relational databases and NoSQL databases Almost all developers will need to persist data at some point in their career. Even if you don’t write SQL queries in your day to day job, a solid understanding of how they work will help you become a better developer. If you are a complete newbie when it comes to databases, I recommend checking out Code Academy. They offer a free online course that can help you get your head around how relational databases work. The course is quite basic, but is a useful hands-on approach to learning this topic. This article provides a great explainer for the difference between the SQL and NoSQL databases, and this Stackoverflow answer goes a little deeper into the subject of the two database types. If you’d like to learn more about NoSQL queries, I would recommend starting with this article on MongoDB queries.
Unfortunately, there isn’t one overall course as most NoSQL databases have their own syntax. You may also have noticed that I haven’t included other types of databases such as Graph or In-memory; it’s worth focussing on the basics before going any deeper. Performance on the web If you build for the web today, it is important to understand how the browser receives and renders the content that you send it. I am pretty passionate about Web Performance, and hope that everyone can learn how to make websites faster and more efficient. It can be fun at the same time! Steve Souders’ High Performance Websites is the godfather of web performance books. While it was created a few years ago and many of the techniques might have changed slightly, it is the original book on the subject and set up many of the ground rules that we know about web performance today. A free online resource on this topic is the Google Developers website. The site is an up to date guide on the best web performance techniques for your site. It is definitely worth a read. The network plays a key role in delivering data to your users, and it plays a big role in performance on the web. A fantastic book on this topic is Ilya Grigorik’s High Performance Browser Networking. It is also available to read online at hpbn.co. Understand the end to end architecture of your software project I find that one of the best ways to improve my knowledge is to learn about the architecture of the software at the company I work at. It gives you a good understanding as to why things are designed the way they are, why certain decisions were made, and gives you an understanding of how you might do things differently with hindsight. Try and find someone more senior, such as a Technical Lead or Software Architect, at your company and ask them to explain the overall architecture and draw a few high-level diagrams for you. Not to mention that they will be impressed with your willingness to learn. I recommend reading Clean Architecture: A Craftsman’s Guide to Software Structure and Design for more detail on this subject. Far too often, software projects can be over-engineered and over-architected; it is worth reading Just Enough Software Architecture. The book helps developers understand how the smallest of changes can affect the outcome of your software architecture. How are things deployed A big part of creating software is actually shipping it! How is the software at your company released into the wild? Does your company do Continuous Integration? Continuous Deployment? Even if you answered no to any of these questions, it is worth finding someone with the knowledge in your company to explain these things to you. If it is not already documented, perhaps you could start a wiki to document everything you’re learning about the system - this is a great way to level up and be appreciated and invaluable. A streamlined deployment process is a beautiful thing, and understanding how these processes work can help you grow your knowledge as a developer. Continuous Integration is a practical read on the ins and outs of implementing this deployment technique. Docker is another great tool to use when it comes to software deployment. It can be tricky at first to wrap your head around, but it is definitely worth learning about this great technology. The documentation on the website will teach and guide you on how to get started using Docker. Writing Tests Testing is an essential tool in the developer’s bag of skills.
They help you to make big refactoring changes to your code, and feel a lot more confident knowing that your changes haven’t broken anything. There are so many benefits to testing, which makes it so important for developers at every level to become acquainted with it. The book that started it all for me was Roy Osherove’s The Art of Unit Testing. The code in the book is written in C#, but the principles apply to every language. It’s a great, easy-to-understand read. Another great read is How Google Tests Software and covers exactly what it says on the tin. It covers many different testing techniques such as exploratory, black box, white box, and acceptance testing and really helps you understand how large organisations test their code. Soft skills Whilst reading through this article, you’ve probably noticed that a large chunk of it focusses on code and technical ability. Without a doubt, I’d say that it is even more important to be a good teammate. If you look up the definition of soft skills in the dictionary, it is defined as “personal attributes that enable someone to interact effectively and harmoniously with other people” and I think that it sums this up perfectly. Working on your “soft skills” is something that can truly help you level up in your career. You may be the world’s greatest coder, but if your colleagues can’t get along with you, your coding skills won’t matter! While you may not learn how to become the perfect co-worker overnight, I really try and live by the motto “don’t be an arsehole”. Think about how you like to be treated and then try and treat your co-workers with the same courtesy and respect. The next time you need to make a decision at work, ask yourself “is this something an arsehole would do”? If you answered yes to that question, you probably shouldn’t do it! Summary Levelling up as a junior developer doesn’t have to be scary. Focus on the fundamentals and they should hold you in good stead, regardless of the new things that come along. Software engineering is built on these great principles that have stood the test of time. Whilst researching for this article, I came across a useful Github repo that is worth mentioning. Things Every Programmer Should Know is packed with useful information. I have to admit, I didn’t know everything on there! I hope that you have found this list helpful. Some of the topics I have mentioned might not be relevant for you at this stage in your career, but should give a nudge in the right direction. After all, knowledge is power! If you are a junior developer reading this article, what would you add to it?",2017,Dean Hume,deanhume,2017-12-05T00:00:00+00:00,https://24ways.org/2017/levelling-up-for-junior-developers/,code 201,Lint the Web Forward With Sonarwhal,"Years ago, when I was a senior in college, many of my web development courses focused on two things: the basics like HTML and CSS (and boy, do I mean basic), and Adobe Flash. I spent many nights writing ActionScript 3.0 to build interactions for the websites that I would add to my portfolio. A few months after graduating, I built one website in Flash for a client, then never again. Flash was dying, and it became obsolete in my résumé and portfolio. That was my first lesson in the speed at which things change in technology, and what a daunting realization that was as a new graduate looking to enter the professional world.
Now, seven years later, I work on the Microsoft Edge team where I help design and build a tool that would have lessened my early career anxieties: sonarwhal. Sonarwhal is a linting tool, built by and for the web community. The code is open source and lives under the JS Foundation. It helps web developers and designers like me keep up with the constant change in technology while simultaneously teaching how to code better websites. Introducing sonarwhal’s mascot Nellie Good web development is hard. It is more than HTML, CSS, and JavaScript: developers are expected to have a grasp of accessibility, performance, security, emerging standards, and more, all while refreshing this knowledge every few months as the web evolves. It’s a lot to keep track of. Web development is hard Staying up-to-date on all this knowledge is one of the driving forces for developing this scanning tool. Whether you are just starting out, are a student, or you have over a decade of experience, the sonarwhal team wants to help you build better websites for all browsers. Currently sonarwhal checks for best practices in five categories: Accessibility, Interoperability, Performance, PWAs, and Security. Each check is called a “rule”. You can configure them and even create your own rules if you need to follow some specific guidelines for your project (e.g. validate analytics attributes, title format of pages, etc.). You can use sonarwhal in two ways: An online version, that provides a quick and easy way to scan any public website. A command line tool, if you want more control over the configuration, or want to integrate it into your development flow. The Online Scanner The online version offers a streamlined way to scan a website; just enter a URL and you will get a web page of scan results with a permalink that you can share and revisit at any time. The online version of sonarwhal When my team works on a new rule, we spend the bulk of our time carefully researching each subject, finding sources, and documenting it rather than writing the rule’s code. Not only is it important that we get you the right results, but we also want you to understand why something is failing. Next to each failing rule you’ll find a link to its detailed documentation, explaining why you should care about it, what exactly we are testing, examples that pass and examples that don’t, and useful links to even more in-depth documentation if you are interested in the subject. We hope that between reading the documentation and continued use of sonarwhal, developers can stay on top of best practices. As devs continue to build sites and identify recurring issues that appear in their results, they will hopefully start to automatically include those missing elements or fix those pieces of code that are producing errors. This also isn’t a one-way communication: the documentation is not only available on the sonarwhal site, but also on GitHub for editing so you can help us make it even better! A results report The current configuration for the online scanner is very strict, so it might hurt your feelings (it did when I first tested it on my personal website). But you can configure sonarwhal to any level of strictness as well as customize the command line tool to your needs! Sonarwhal’s CLI The CLI gives you full control of sonarwhal: what rules to use, tweaks to them, domains that are out of your control, and so on.
You will need the latest node LTS (v8) or Stable (v9) and your favorite package manager, such as npm: npm install -g sonarwhal You can now run sonarwhal from anywhere via: sonarwhal https://example.com Using the CLI The configuration is done via a .sonarwhalrc file. When analyzing a site, if no file is available, you will be prompted to answer a series of questions: What connector do you want to use? Connectors are what sonarwhal uses to access a website and gather all the information about the requests, resources, HTML, etc. Currently it supports jsdom, Microsoft Edge, and Google Chrome. What formatter? This is how you want to see the results: summary, stylish, etc. Make sure to look at the full list. Some are concise, perfect for a quick build assessment, while others are more verbose and informative. Do you want to use the recommended rules configuration? Rules are the things we are validating. Unless you’ve read the documentation and know what you are doing, first timers should probably use the recommended configuration. What browsers are you targeting? One of the best features of sonarwhal is that rules can adapt their feedback depending on your targeted browsers, suggesting to add or remove things. For example, the rule “Highest Document Mode” will tell you to add the “X-UA-Compatible” header if IE10 or lower is targeted or remove if the opposite is true. sonarwhal configuration generator questions Once you answer all these questions the scan will start and you will have a .sonarwhalrc file similar to the following: { ""connector"": { ""name"": ""jsdom"", ""options"": { ""waitFor"": 1000 } }, ""formatters"": ""stylish"", ""rulesTimeout"": 120000, ""rules"": { ""apple-touch-icons"": ""error"", ""axe"": ""error"", ""content-type"": ""error"", ""disown-opener"": ""error"", ""highest-available-document-mode"": ""error"", ""validate-set-cookie-header"": ""warning"", // ... } } You should see the scan initiate in the command line and within a few seconds the results should start to appear. Remember, the scan results will look different depending on which formatter you selected so try each one out to see which one you like best. sonarwhal results on my website and hurting my feelings 💔 Now that you have a list of errors, you can get to work improving the site! Note, though, that when you scan your website, it scans all the resources on that page and if you’ve added something like analytics or fonts hosted elsewhere, you are unable to change those files. You can configure the CLI to ignore files from certain domains so that you are only getting results for files you are in control of. The documentation should give enough guidance on how to fix the errors, but if it’s insufficient, please help us and suggest edits or contribute back to it. This is a community effort and chances are someone else will have the same question as you. When I scanned both my websites, sonarwhal alerted me to not having an Apple Touch Icon. If I search on the web as opposed to using the sonarwhal documentation, the top 3 results give me outdated information: I need to include many different icon sizes. I don’t need to include all the different size icons that target different devices. Declaring one icon sized 180px x 180px will provide a large enough icon for devices and it will scale down as appropriate for people on older devices. The information at the top of the search results isn’t always the correct answer to an issue and we don’t want you to have to search through outdated documentation.
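If sonarwhal flags the same error for you, a single link element in the head of your page along these lines should do the job; the filename here is just an example: <link rel=""apple-touch-icon"" href=""/apple-touch-icon.png""> Point it at one 180px x 180px PNG and let devices scale it down as appropriate.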
As sonarwhal’s capabilities expand, the goal is for it to be the one-stop shop for preflighting your website. The journey up until now and looking forward On the Microsoft Edge team, we’re passionate about empowering developers to build great websites. Every day we see so many sites come through our issue tracker. (Thanks for filing those bugs; they help us make Microsoft Edge better and better!) Some issues we see over and over are honest mistakes or outdated ‘best practices’ that could be avoided, so we built this tool to help everyone make the web a better place. When we decided to create sonarwhal, we wanted to create a tool that would help developers write better and more up-to-date code for their websites. We want sonarwhal to be useful to anyone so, early on, we defined three guiding principles we’ve used along the way: Community Driven. We build for the community’s best interests. The web belongs to everyone and this project should too. Not only is it open source, we’ve also donated it to the JS Foundation and have an inclusive governance model that welcomes the collaboration of anyone, individual or company. User Centric. We want to put the user at the center, making sonarwhal configurable for your needs and easy to use no matter what your skill level is. Collaborative. We didn’t want to reinvent the wheel, so we collaborated with existing tools and services that help developers build for the web. Some examples are aXe, snyk.io, Cloudinary, etc. This is just the beginning and we still have lots to do. We’re hard at work on a backlog of exciting features for future releases, such as: New rules for a variety of areas like performance, accessibility, security, progressive web apps, and more. A plug-in for Visual Studio Code: we want sonarwhal to help you write better websites, and what better moment than when you are in your editor. Configuration options for the online service: as we fine tune the infrastructure, the rule configuration for our scanner is locked, but we look forward to adding CLI customization options here in the near future. This is a tool for the web community by the web community, so if you are excited about sonarwhal, making a better web, and want to contribute, we have a few issues where you might be able to help. Also, don’t forget to check the rest of the sonarwhal GitHub organization. PRs are always welcome and appreciated! Let us know what you think about the scanner at @NarwhalNellie on Twitter and we hope you’ll help us lint the web forward!",2017,Stephanie Drescher,stephaniedrescher,2017-12-02T00:00:00+00:00,https://24ways.org/2017/lint-the-web-forward-with-sonarwhal/,code 202,Design Systems and CSS Grid,"Recently, my client has been looking at creating a few new marketing pages for their website. They currently have a design system in place but they’re looking to push this forward into 2018 with some small and possibly some larger changes. To start with we are creating a couple of new marketing pages. As well as making use of existing components within the design systems component library there are a couple of new components. Looking at the first couple of sketch files I felt this would be a great opportunity to use CSS Grid; to me, the newer components need to be laid out on that page and grid would help with this perfectly. As well as this layout of the new components and the text within it, imagery would be used that breaks out of the grid and pushes itself into the spaces where the text is aligned.
The existing grid system When the site was rebuilt in 2015 the team decided to make use of Sass and Susy, a “lightweight grid-layout engine using Sass”. It was built separating the grid system from the components that would be laid out on the page with a container, a row, an optional column, and a block. To make use of the grid system on a page for a component that would take the full width of the row you would have to write something like this: <div class=""grid-container""> <div class=""grid-row""> <div class=""grid-column-4""> <div class=""grid-block""> <!-- component code here --> </div> </div> </div> </div> Using a grid system similar to this can easily create quite the tag soup. It could fill the HTML full of divs that may become complex to understand and difficult to edit. Although there is this reliance on several <div>s to lay out the components on a page it does allow a tidy way to place the component code within that page. It abstracts the layout of the page to its own code, its own system, so the components can ‘fit’ where needed. The requirements of the new grid system Moving forward I set myself some goals for what I’d like to have achieved in this new grid system: It needs to behave like the existing grid systems We are not ripping up the existing grid system, it would be too much work, for now, to retrofit all of the existing components to work in a grid that has a different amount of columns, and spacing at various viewport widths. Allow full-width components Currently the grid system is a 14 column grid that becomes centred on the page when the viewport is wide enough. We have, in the past, written some CSS that would allow for a full-width component, but this had always felt like a hack. We want the option to have a full-width element as part of the new grid system, not something that needs CSS to fight against. Less of a tag soup Ideally we want to end up writing less HTML to lay out the page. Although the existing system can be quite clear as to what each element is doing, it can also become a little laborious in working out what each grid row or block is doing where. I would like to move the layout logic to CSS as much as is possible, potentially creating some utility classes or additional ‘layout classes’ for the components. Easier for people to use and author With many people using the existing design systems codebase we need to create a new grid system that is as easy or easier to use than the existing one. I think and hope this would be helped by removing as many <div>s as needed and would require new documentation and examples, and potentially some initial training. Separating layout from style There still needs to be a separation of layout from the styles for the component. To allow for the component itself to be placed wherever needed in the page we need to make sure that the CSS for the layout is a separate entity to the CSS for that styling. With these base requirements I took to CodePen and started working on some throwaway code to get started. Making the new grid(s) The Full-Width Grid To start with I created a grid that had three columns, one for the left, one for the middle, and one for the right. This would give the full-width option to components. Thankfully, one of Rachel Andrew’s many articles on Grid discussed this exact requirement of the new grid system to break out with Grid. I took some of the code in the examples and edited it to make the grid we needed.
.container { display: grid; grid-template-columns: [full-start] minmax(.75em, 1fr) [main-start] minmax(0, 1008px) [main-end] minmax(.75em, 1fr) [full-end]; } We are declaring a grid; we have four grid column lines which we name, and we define how the three columns they create react to the viewport width. We have a left and right column that have a minimum of 12px, and a central column with a maximum width of 1008px. Both left and right columns fill up any additional space if the viewport is wider than 1032px. We are also not declaring any gutters to this grid, the left and right columns would act as gutters at smaller viewports. At this point I noticed that older versions of Sass cannot parse the brackets in this code. To combat this I used Sass’ unquote method to wrap around the grid-template-columns value. .container { display: grid; grid-template-columns: unquote("" [full-start] minmax(.75em, 1fr) [main-start] minmax(0, 1008px) [main-end] minmax(.75em, 1fr) [full-end] ""); } The existing codebase makes use of Sass variables, mixins and functions so to remove that would be a problem, but luckily the version of Sass used is up-to-date (note: example CodePens will be using CSS). The initial full-width grid displays on a webpage as below: The 14 column grid I decided to work out the 14 column grid as a separate prototype before working out how it would fit within the full-width grid. This grid is very similar to the 12 column grids that have been used in web design. Here we need 14 columns with a gutter between each one. Along with the many other resources on Grid, Mozilla’s MDN site had a page on common layouts using CSS Grid. This gave me the perfect CSS I needed to create my grid and I edited it as required: .inner { display: grid; grid-template-columns: repeat(14, [col-start] 1fr); grid-gap: .75em; } We, again, are declaring a grid, and we are splitting up the available space by creating 14 columns with 1 fr-unit and giving each one a starting line named col-start. This grid would display on a webpage as below: Bringing the grids together Now that we have got the two grids we need to help fulfil our requirements we need to put them together so that they are actually what we need. The subgrid There is no subgrid in CSS, yet. To work around this for the new grid system we could nest the 14 column grid inside the full-width grid. In the HTML we nest the 14 column inner grid inside the full-width container. <div class=""container""> <div class=""inner""> </div> </div> So that the inner knows where to be laid out within the container we tell it what column to start and end with; with this code it would be the start and end of the main column. .inner { display: grid; grid-column: main-start / main-end; grid-template-columns: repeat(14, [col-start] 1fr); grid-gap: .75em; } The CSS for the container remains unchanged. This works, but we have added another div to our HTML. One of our requirements is to try and remove the potential for tag soup. The faux subgrid I wanted to see if it would be possible to place the CSS for the 14 column grid within the CSS for the full-width grid. I replaced the CSS for the main grid and added the grid-column-gap to the .container. .container { display: grid; grid-gap: .75em; grid-template-columns: [full-start] minmax(.75em, 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(.75em, 1fr) [full-end]; } What this gave me was a 16 column grid.
I was unable to find a way to tell the main grid, the grid betwixt main-start and main-end, to be a maximum of 1008px as required. I trawled the internet to find if it was possible to create our main requirement, a 14 column grid which also allows for full-width components. I found that we could not reverse minmax to minmax(1fr, 72px) as 1fr is not allowed as a minimum if there is a maximum. I tried working out if we could make the min larger than its max but in minmax it would be ignored. I was struggling; I was hoping for a cleaner version of the grid system we currently use but getting to the point where needing that extra <div> would be a necessity. At 3 in the morning, when I was failing to get to sleep, my mind happened upon a question: “Could you use calc?” At some point I drifted back to sleep so the next day I set upon seeing if this was possible. I knew that the maximum width of the central grid needed to be 1008px. The left and right columns needed to be however many pixels were left in the viewport divided by 2. In CSS it looked like I would need to use calc twice. The first time to take away 1008px from 100% of the viewport width and the second to divide that result by 2. calc(calc(100% - 1008px) / 2) The CSS above was part of the value that I would need to include in the declaration for the grid. .container { display: grid; grid-gap: .75em; grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; } We have created the grid required. A full-width grid, with a central 14 column grid, using fewer <div> elements. See the Pen Design Systems and CSS Grid, 6 by Stuart Robson (@sturobson) on CodePen. Success! Progressive enhancement Now that we have created the grid system required we need to back-track a little. Not all browsers support Grid; over the last 9 months or so this has gotten a lot better. However there will still be browsers that visit that potentially won’t have support. The effort required to make the grid system fall back for these browsers depends on your product or site’s browser support. To determine if we will be using Grid or not for a browser we will make use of feature queries. This would mean that any version of Internet Explorer will not get Grid, as well as some mobile browsers and older versions of other browsers. @supports (display: grid) { /* Styles for browsers that support Grid */ } If a browser does not pass the requirements for @supports we will fall back to using flexbox where possible, and if that is not supported we are happy for the page to be laid out in one column. A website doesn’t have to look the same in every browser after all. A responsive grid We started with the big picture, how the grid would be at a large viewport and the grid system we have created gets a little silly when the viewport gets smaller. At smaller viewports we have a single column layout where every item of content, every component stacks atop each other. We don’t start to define a grid before the viewport gets to 700px wide. At this point we have an 8 column grid and if the viewport gets to 1100px or wider we have our 14 column grid. /* * to start with there is no 'grid' just a single column */ .container { padding: 0 .75em; } /* * when we get to 700px we create an 8 column grid with * a left and right area to breakout of the grid.
*/ @media (min-width: 700px) { .container { display: grid; grid-gap: .75em; grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(8, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; padding: 0; } } /* * when we get to 1100px we create a 14 column grid with * a left and right area to breakout of the grid. */ @media (min-width: 1100px) { .container { grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; } } Being explicit in creating this there is some repetition that we could avoid; we will define the number of columns for the inner grid by using a Sass variable or CSS custom properties (more commonly termed CSS variables). Let’s use CSS custom properties. We need to declare the variable first by adding it to our stylesheet. :root { --inner-grid-columns: 8; } We then need to edit a few more lines. First make use of the variable for this line. repeat(8, [col-start] 1fr) /* replace with */ repeat(var(--inner-grid-columns), [col-start] 1fr) Then at the 1100px breakpoint we would only need to change the value of --inner-grid-columns. @media (min-width: 1100px) { .container { grid-template-columns: [full-start] minmax(calc(calc(100% - 1008px) / 2), 1fr) [main-start] repeat(14, [col-start] 1fr) [main-end] minmax(calc(calc(100% - 1008px) / 2), 1fr) [full-end]; } } /* replace with */ @media (min-width: 1100px) { .container { --inner-grid-columns: 14; } } See the Pen Design Systems and CSS Grid, 8 by Stuart Robson (@sturobson) on CodePen. The final grid system We have finally created our new grid for the design system. It stays true to the existing grid in place, adds the ability to break out of the grid, and removes a <div> that could have been needed for the nested 14 column grid. We can move on to the new component. Creating a new component Back to the new components we need to create. To me there are two components, one of which is a slight variant of the first. This component contains a title, subtitle, a paragraph (potentially paragraphs) of content, a list, and a call to action. To start with we should write the HTML for the component, something like this: <article class=""features""> <h3 class=""features__title""></h3> <p class=""features__subtitle""></p> <div class=""features__content""> <p class=""features__paragraph""></p> </div> <ul class=""features__list""> <li></li> <li></li> </ul> <a href="""" class=""features__button""></a> </article> To place the component on the existing grid is fine, but as child elements are not affected by the container grid we need to define another grid for the features component. As the grid doesn’t get invoked until 700px it is possible to negate the need for a media query. .features { grid-column: col-start 1 / span 6; } @supports (display: grid) { @media (min-width: 1100px) { .features { grid-column-end: 9; } } } We can also avoid duplication of declarations by making use of the grid-column shorthand and longhand. We need to write a little more CSS for the variant component, the one that will sit on the right side of the page too. .features:nth-of-type(even) { grid-column-start: 4; grid-row: 2; } @supports (display: grid) { @media (min-width: 1100px) { .features:nth-of-type(even) { grid-column-start: 9; grid-column-end: 16; } } } We cannot place the items within features on the container grid as they are not direct children.
To make this work we have to define a grid for the features component. We can do this by defining the grid at the first breakpoint of 700px making use of CSS custom properties again to define how many columns there will need to be. .features { grid-column: col-start 1 / span 6; --features-grid-columns: 5; } @supports (display: grid) { @media (min-width: 700px) { .features { display: grid; grid-gap: .75em; grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr); } } } @supports (display: grid) { @media (min-width: 1100px) { .features { grid-column-end: 9; --features-grid-columns: 7; } } } See the Pen Design Systems and CSS Grid, 10 by Stuart Robson (@sturobson) on CodePen. Laying out the parts Looking at the spec and reading several articles I feel there are two ways that I could lay out the text of this component on the grid. We could use the grid-column shorthand that incorporates grid-column-start and grid-column-end or we can make use of grid-template-areas. grid-template-areas allows for a nice visual way of representing how the parts of the component would be laid out. We can take the mock of the features on the grid and represent it in text in our CSS. Within the .features rule we can add the relevant grid-template-areas value to represent the above. .features { display: grid; grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr); grid-template-areas: "". title title title title title title"" "". subtitle subtitle subtitle subtitle subtitle . "" "". content content content content . . "" "". list list list . . . "" "". . . . link link link ""; } In order to make the variant of the component we would have to create the grid-template-areas for that component too. We then need to tell each element of the component in what grid-area it should be placed within the grid. .features__title { grid-area: title; } .features__subtitle { grid-area: subtitle; } .features__content { grid-area: content; } .features__list { grid-area: list; } .features__link { grid-area: link; } See the Pen Design Systems and CSS Grid, 12 by Stuart Robson (@sturobson) on CodePen. The other way would be to use the grid-column shorthand and the grid-column-start and grid-column-end we have used previously. .features .features__title { grid-column: col-start 2 / span 6; } .features .features__subtitle { grid-column: col-start 2 / span 5; } .features .features__content { grid-column: col-start 2 / span 4; } .features .features__list { grid-column: col-start 2 / span 4; } .features .features__link { grid-column: col-start 5 / span 3; } For the variant of the component we can use the grid-column-start property as it will inherit the span defined in the grid-column shorthand. .features:nth-of-type(even) .features__title { grid-column-start: col-start 1; } .features:nth-of-type(even) .features__subtitle { grid-column-start: col-start 1; } .features:nth-of-type(even) .features__content { grid-column-start: col-start 3; } .features:nth-of-type(even) .features__list { grid-column-start: col-start 3; } .features:nth-of-type(even) .features__link { grid-column-start: col-start 1; } See the Pen Design Systems and CSS Grid, 14 by Stuart Robson (@sturobson) on CodePen. I think, for now, we will go with using grid-column properties rather than grid-template-areas. The repetition needed for creating the variant feels like too much when we can change the grid-column-start instead, keeping the component elements’ layout properties tied a little closer to the elements rather than the grid.
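To show what that repetition looks like, here’s a sketch of the mirrored grid-template-areas the variant would need; the exact placement of the empty cells is my reading of the design rather than anything taken from the sketch files: .features:nth-of-type(even) { grid-template-areas: ""title title title title title title . "" ""subtitle subtitle subtitle subtitle subtitle . . "" "". . content content content content . "" "". . list list list . . "" ""link link link . . . . ""; } Every row has to be restated just to flip the empty cells to the other side, which is why changing grid-column-start feels like the lighter option.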
Some additional decisions The current component library has existing styles for titles, subtitles, lists, paragraphs of text and calls to action. These are name-spaced so that they shouldn’t clash with any other components. Looking forward there will be a chance that other products adopt the component library, but they may bring their own styles for titles, subtitles, etc. One way that we could write our code now for that near future possibility is to make sure our classes are working hard. Using class-attribute selectors we can target part of the class attributes that we know the elements in the component will have using *=. .features [class*=""title""] { grid-column: col-start 2 / span 6; } .features [class*=""subtitle""] { grid-column: col-start 2 / span 5; } .features [class*=""content""] { grid-column: col-start 2 / span 4; } .features [class*=""list""] { grid-column: col-start 2 / span 4; } .features [class*=""link""] { grid-column: col-start 5 / span 3; } See the Pen Design Systems and CSS Grid, 15 by Stuart Robson (@sturobson) on CodePen. Although the component we have created has a title, subtitle, paragraphs, a list, and a call to action there may be a time when one or more of these is not required or available. One thing I found out is that if the element doesn’t exist then grid will not create space for it. This may be obvious, but it can be really helpful in making a nice malleable component. We have only looked at columns; as existing components have their own spacing for the vertical rhythm of the page we don’t really want to have them take up equal space in the component, we want them to take up just the space they need. We can do this by adding grid-auto-rows: min-content; to our .features. This is useful if you also need your component to take up a height that is more than the component itself. The grid of the future From prototyping this new grid and components in CSS Grid, I’ve found it a fantastic way to reimagine how we can create a layout or grid system for our sites. It gives us options to create the same layouts in differing ways that could suit a project and its needs. It allows us to carry on – if we choose to – using a <div>-based grid but swapping out floats for CSS Grid or to tie it to our components so they have specific places to go depending on what component is being used. Or we could have several ‘grid components’ in our design system that we could use to lay out various components throughout a page. If you find yourself tasked with creating some new components for your design system try it. If you are starting from scratch I believe you really should start with CSS Grid for your layout. It really feels like the possibilities are endless in terms of layout for the web. Resources Here are just a few resources I have pored over these last few weeks whilst getting acquainted with CSS Grid. A collection of CodePens from this article Grid by Example from Rachel Andrew A Complete Guide to CSS Grid on Codrops from Hui Jing Chen Rachel Andrew’s Blog Archive tagged: cssgrid CSS Grid Layout Examples MDN’s CSS Grid Layout A Complete Guide to Grid from CSS-Tricks CSS Grid Layout Module Level 1 Specification",2017,Stuart Robson,stuartrobson,2017-12-12T00:00:00+00:00,https://24ways.org/2017/design-systems-and-css-grid/,code 204,Cascading Web Design with Feature Queries,"Feature queries, also known as the @supports rule, were introduced as an extension to CSS2 as part of the CSS Conditional Rules Module Level 3, which was first published as a working draft in 2011.
It is a conditional group rule that tests if the browser’s user agent supports CSS property:value pairs, and arbitrary conjunctions (and), disjunctions (or), and negations (not) of them. The motivation behind this feature was to allow authors to write styles using new features when they were supported but degrade gracefully in browsers where they are not. Even though the nature of CSS already allows for graceful degradation, for example, by ignoring unsupported properties or values without disrupting other styles in the stylesheet, sometimes we need a bit more than that. CSS is ultimately a holistic technology, in that, even though you can use properties in isolation, the full power of CSS shines through when used in combination. This is especially evident when it comes to building web layouts. Having native feature detection in CSS makes it much more convenient to build with cutting-edge CSS for the latest browsers while supporting older browsers at the same time. Browser support Opera first implemented feature queries in November 2012; Chrome and Firefox have both had them since May 2013. There have been several articles about feature queries written over the years, however, it seems that awareness of their broad support isn’t widespread. Much of the earlier coverage of feature queries was not written in English, and perhaps that was a limiting factor. @supports ― CSSのFeature Queries by Masataka Yakura, August 8 2012 Native CSS Feature Detection via the @supports Rule by Chris Mills, December 21 2012 CSS @supports by David Walsh, April 3 2013 Responsive typography with CSS Feature Queries by Aral Balkan, April 9 2013 How to use the @supports rule in your CSS by Lea Verou, January 31 2014 CSS Feature Queries by Amit Tal, June 2 2014 Coming Soon: CSS Feature Queries by Adobe Web Platform Team, August 21 2014 CSS feature queries mittels @supports by Daniel Erlinger, November 27 2014 As of December 2017, all current major browsers and their two previous versions support feature queries. Feature queries are also supported on Opera Mini, UC Browser and Samsung Internet. The only browsers that do not support feature queries are Internet Explorer and Blackberry Mobile, but that may be less of an issue than you might think. Can I Use css-featurequeries? Data on support for the css-featurequeries feature across the major browsers from caniuse.com. Granted, there is still a significant number of organisations that require support of Internet Explorer. Microsoft continues to support IE11 for the life-cycle of Windows 7, 8 and 10. They have, however, stopped supporting older versions since January 12, 2016. It is inevitable that there will be organisations that, for some reason or another, make it mandatory to support IE, but as time goes on, this number will continue to shrink. Jen Simmons wrote an extensive article called Using Feature Queries in CSS which discussed a matrix of potential situations when it comes to the usage of feature queries. The following image is a summary of the aforementioned matrix. The most tricky situation we have to deal with is the box in the top-left corner: browsers that don’t support feature queries, yet do support the feature in question. For cases like those, it really depends on the specific CSS feature you want to use and a subsequent evaluation of the pros and cons of not including that feature in spite of the fact that the browser (most likely Internet Explorer) supports it.
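In practice, the usual way to handle that top-left corner is to lean on the plain cascade for features those browsers do support, and reserve @supports for features they don’t. As a minimal sketch (the .gallery class and its values are mine, not from this article): .gallery { display: block; } /* ancient browsers */ .gallery { display: flex; flex-wrap: wrap; } /* left unguarded: IE11 supports flexbox but not @supports */ @supports (display: grid) { .gallery { display: grid; grid-template-columns: repeat(3, 1fr); } } IE11 never evaluates the @supports block, but it still gets the flexbox layout it is perfectly capable of rendering.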
The basics of feature queries As with any conditional, feature queries operate on boolean logic: if the query resolves to true, the CSS properties within the block are applied; otherwise the entire block is ignored. The syntax of a simple feature query is as follows: .selector { /* Styles that are supported in old browsers */ } @supports (property:value) { .selector { /* Styles for browsers that support the specified property */ } } Note that the parentheses around the property:value pair are mandatory and the rule is invalid without them. Styles that apply to older browsers, i.e. fallback styles, should come first, followed by the newer properties, which are contained within the @supports conditional. Because of the cascade, fallback styles will be overridden by the newer properties in the modern browsers that support them. main { background-color: red; } @supports (display:grid) { main { background-color: green; } } In this example, browsers that support CSS grid will have a main element with a green background colour because the conditional resolves to true, while browsers that do not support grid will have a main element with a red background colour. The implication of such behaviour is that we can layer on enhanced styles based on the features we want to use, and these styles will show up in browsers that support them. But those that do not will get a more basic look that still works anyway. And that will be our approach moving forward. Boolean operators for feature queries The and operator allows us to test for support of multiple properties within a single conditional. This would be useful for cases where the desired output requires multiple cutting-edge features to be supported at the same time. All the property:value pairs listed in the conditional must resolve to true for the styles within the rule to be applied. @supports (transform: rotate(45deg)) and (writing-mode: vertical-rl) { /* Some CSS styles */ } The or operator allows us to list multiple property:value pairs in the conditional, and as long as one of them resolves to true, the styles within the block will be applied. A relevant use-case would be for properties with vendor prefixes. @supports (background: -webkit-gradient(linear, left top, left bottom, from(white), to(black))) or (background: -o-linear-gradient(top, white, black)) or (background: linear-gradient(to bottom, white, black)) { /* Some CSS styles */ } The not operator negates the resolution of the property:value pair in the conditional, resolving to false when the property is supported and vice versa. This is useful when there are two distinct sets of styles to be applied depending on the support of a specific feature. However, we do need to keep in mind the case where a browser does not support feature queries, and handle the styles for those browsers accordingly. @supports not (shape-outside: polygon(100% 80%,20% 0,100% 0)) { /* Some CSS styles */ } To avoid confusion between and and or, these operators must be explicitly declared as opposed to using commas or spaces. To prevent confusion caused by precedence rules, and, or and not operators cannot be mixed without a layer of parentheses. The following rule is not valid and the styles within the block will be ignored.
@supports (transition-property: background-color) or (animation-name: fade) and (transform: scale(1.5)) { /* Some CSS styles */ } To make it work, parentheses must be added around the two properties adjacent to either the or or the and operator, like so: @supports ((transition-property: background-color) or (animation-name: fade)) and (transform: scale(1.5)) { /* Some CSS styles */ } @supports (transition-property: background-color) or ((animation-name: fade) and (transform: scale(1.5))) { /* Some CSS styles */ } The current specification states that whitespace is required after a not and on both sides of an and or or, but this may change in a future version of the specification. It is acceptable to add extra parentheses even when they are not needed, but omission of parentheses is considered invalid. Cascading web design I’d like to introduce the concept of cascading web design, an approach made possible with feature queries. Browser update cycles are much shorter these days, so new features and bug fixes are being pushed out a lot more frequently as compared to the early days of the web. With the maturation of web standards, browser behaviour is less unpredictable than before, but each browser will still have their respective quirks. Chances are, the latest features will not ship across all browsers at the same time. But you know what? That’s perfectly fine. If we accept this as a feature of the web, instead of a bug, we’ve just opened up a lot more web design possibilities. The following example is a basic, responsive grid layout of items laid out with flexbox, as viewed on IE11. We can add a block of styles within an @supports rule to apply CSS grid properties for browsers that support them to enhance this layout (a sketch of what that enhancement might look like follows at the end of this section). The web is not a static medium. It is dynamic and interactive and we manipulate this medium by writing code to tell the browser what we want it to do. Rather than micromanaging the pixels in our designs, maybe it’s time we cede control of our designs to the browsers that render them. This means being okay with your designs looking different across browsers and devices. As mentioned earlier, CSS works best when various properties are combined. It’s one of those things whose whole is greater than the sum of its parts. So feature queries, when combined with media queries, allow us to design layouts that are most effective in the environment they have to perform in. Such an approach requires interpolative thinking, on multiple levels. As web designers and developers, we don’t just think in one fixed dimension, we get to think about how our design will morph on a narrow screen, or on an older browser, in addition to how it will appear on a browser with the latest features. In the following example, the layout on the left is what IE11 users will see, the one in the middle is what Firefox users will see, because Firefox doesn’t support CSS shapes yet, but once it does, it will then look like the layout on the right, which is what Chrome users see now. With the release of CSS Grid this year, we’ve hit another milestone in the evolution of the web as a medium. The beauty of the web is its backwards compatibility and generous fault tolerance. Browser features are largely additive, holding onto the good parts and building on top of them, while deprecating the bits that didn’t work well.
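Here is a sketch of the flexbox-to-grid enhancement described above; the class names and measurements are my own assumptions rather than code from the original demo. .grid { display: flex; flex-wrap: wrap; } .grid > .item { flex: 1 1 200px; margin: 0.5em; } @supports (display: grid) { .grid { display: grid; grid-template-columns: repeat(auto-fill, minmax(200px, 1fr)); grid-gap: 1em; } .grid > .item { margin: 0; } } In a grid formatting context the flex property on the items is simply ignored, and the fallback margins are reset, so each browser renders the best layout it knows how to.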
Feature queries allow us to progressively enhance our CSS, establishing a basic level of user experience across the widest range of browsers, while building in more advanced functionality for browsers that can use them. And hopefully, this will allow more of us to create designs that truly embrace the nature of the web.",2017,Chen Hui Jing,chenhuijing,2017-12-01T00:00:00+00:00,https://24ways.org/2017/cascading-web-design/,code 206,Getting Hardboiled with CSS Custom Properties,"Custom Properties are a fabulous new feature of CSS and have widespread support in contemporary browsers. But how do we handle browsers without support for CSS Custom Properties? Do we wait until those browsers are lying dead in a ditch before we use them? Do we tool up and prop up our CSS using a post-processor? Or do we get tough? Do we get hardboiled? Previously only pre-processing tools like LESS and Sass enabled developers to use variables in their stylesheets, but now Custom Properties have brought variables natively to CSS. How do you write a custom property? It’s hardly a mystery. Simply add two dashes to the start of a style rule. Like this: --color-text-default : black; If you’re more the underscore type, try this: --color_text_default : black; Hyphens or underscores are allowed in property names, but don’t be a chump and try to use spaces. Custom property names are also case-sensitive, so --color-text-default and --Color_Text_Default are two distinct properties. To use a custom property in your style rules, var() tells a browser to retrieve the value of a property. In the next example, the browser retrieves the black colour from the color-text-default variable and applies it to the body element: body { color : var(--color-text-default); } Like variables in LESS or Sass, CSS Custom Properties mean you don’t have to be a dumb mug and repeat colour, font, or size values multiple times in your stylesheets. Unlike a preprocessor variable though, CSS Custom Properties use the cascade, can be modified by media queries and other state changes, and can also be manipulated by Javascript. (Serg Hospodarets wrote a fabulous primer on CSS Custom Properties where he dives deeper into the code and possible applications.) Browser support Now it’s about this time that people normally mention browser support. So what about support for CSS Custom Properties? Current versions of Chrome, Edge, Firefox, Opera, and Safari are all good. Internet Explorer 11 and before? Opera Mini? Nasty. Sound familiar? Can I Use css-variables? Data on support for the css-variables feature across the major browsers from caniuse.com. Not to worry, we can manually check for Custom Property support in a browser by using an @supports directive, like this: --color-text-default : black; body { color : black; } @supports ((--foo : bar)) { body { color : var(--color-text-default); } } In that example we first set body element text to black and then override that colour with a Custom Property value when the browser supports our fictitious foo bar variable. Substitutions If we reference a variable that hasn’t been defined, that won’t be a problem as browsers are smart enough to ignore the value altogether. If we need a cast iron alibi, use substitutions to specify a fall-back value. body { color : var(--color-text-default, black); } Substitutions are similar to font stacks in that they contain a comma-separated list of values. If there’s no value associated with a custom property, a browser will ignore it and move on to the next value in the list.
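Because the fall-back is itself a full CSS value, it can even be another var(), letting fall-backs chain. A small sketch, where the --color-text-theme name is my own invention and only --color-text-default comes from the examples above: body { color : var(--color-text-theme, var(--color-text-default, black)); } If --color-text-theme hasn’t been set, the browser tries --color-text-default, and failing that, falls back to plain old black.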
Post-processing Of course we could use a post-processor plugin to turn Custom Properties into plain CSS, but hang on one goddam minute kiddo. Haven’t we been down this road before? Didn’t we engineer elaborate workarounds to enable us to use ‘advanced’ CSS3 properties like border-radius, CSS columns, and Flexbox in the past? We did what we had to do to get the job done, but came away feeling dirty. I think there’s a better way, one that allows us to enjoy the benefits of CSS Custom Properties in browsers that support them, while providing an appropriate, but not identical experience, for people who use less capable browsers. Guess what, I’ve been down this road before too. 2Tone Stuff & Nonsense When Internet Explorer 6 was the big dumb browser everyone hated, I served two different designs on my website. For the modern browsers of the time, mod arrows and targets were everywhere in glorious red, white, and blue, and I implemented all of them using CSS attribute selectors which were considered advanced at the time: [class=""banner""] { background-color : red; } Internet Explorer 6 ignored any selectors it didn’t understand, so people using that browser saw a simpler black and white, 2Tone-based design that I’d implemented for them using class selectors: .banner { background-color : black; } [class=""banner""] { background-color : red; } You don’t have to be a detective to find out that most people thought I’d lost my wits, but Microsoft even used my website as a reference when they tested attribute selectors in Internet Explorer 7. They did, as I suggested, “Stomp to da betta browser.” Dumb browsers look the other way So how does this approach relate to tackling any lack of support for CSS Custom Properties? How can we take advantage of them without worrying about browsers with no support and having to implement complex workarounds, or spending hours specifying fallbacks that perfectly match our designs? Turns out, the answer is built into CSS, and always has been, because when browsers don’t know what they’re looking at, they look away. All we have to do is specify values for a simpler design first, and then follow that up with the values in our CSS Custom Properties: body { color : black; color : var(--color-text-default, black); } All browsers understand the first value (black), and if they’re smart enough to understand the second (var(--color-text-default)), they’ll use it and override the first. If they’re too damn stupid to understand the custom property value, they’ll ignore it. Nobody dies. Repeat this for every style that contains a variable, baking an alternative, perhaps simpler design into your stylesheets for people who use less capable browsers, just like I did with Stuff & Nonsense. Conclusion I doubt that anyone agrees with presenting a design that looks broken or unloved—and I’m not advocating for that—but websites need not look the same in every browser. We can use substitutions to present a simpler design to people using less capable browsers. The decision when to start using new CSS properties isn’t always a technical one. Sometimes a change in attitude about browser support is all that’s required. So get tough with dumb browsers and benefit from all the advantages that CSS Custom Properties offer. Get hardboiled.
Resources: It’s Time To Start Using CSS Custom Properties—Smashing Magazine Using CSS variables correctly—Mike Riethmuller Developing Inspired Guides with CSS Custom Properties (variables)—Andy Clarke",2017,Andy Clarke,andyclarke,2017-12-13T00:00:00+00:00,https://24ways.org/2017/getting-hardboiled-with-css-custom-properties/,code 209,Feeding the Audio Graph,"In 2004, I was given an iPod. I count this as one of the most intuitive pieces of technology I’ve ever owned. It wasn’t because of the snazzy (colour!) menus or circular touchpad. I loved how smoothly it fitted into my life. I could plug in my headphones and listen to music while I was walking around town. Then when I got home, I could plug it into an amplifier and carry on listening there. There was no faff. It didn’t matter if I couldn’t find my favourite mix tape, or if my WiFi was flakey - it was all just there. Nowadays, when I’m trying to pair my phone with some Bluetooth speakers, or can’t find my USB-to-headphone jack, or can’t even access any music because I don’t have cellular reception, I really miss this simplicity. The Web Audio API I think the Web Audio API feels kind of like my iPod did. It’s different from most browser APIs - rather than throwing around data, or updating DOM elements - you plug together a graph of audio nodes, which the browser uses to generate, process, and play sounds. The thing I like about it is that you can totally plug it into whatever you want, and it’ll mostly just work. So, let’s get started. First of all we want an audio source. <audio src=""night-owl.mp3"" controls /> (Song - Night Owl by Broke For Free) This totally works. However, it’s not using the Web Audio API, so we can’t access or modify the sound it makes. To hook this up to our audio graph, we can use a MediaElementAudioSourceNode. This captures the sound from the element, and lets us connect it to other nodes in a graph. const audioCtx = new AudioContext() const audio = document.querySelector('audio') const input = audioCtx.createMediaElementSource(audio) input.connect(audioCtx.destination) Great. We’ve made something that looks and sounds exactly the same as it did before. Go us. Gain Let’s plug in a GainNode - this allows you to alter the amplitude (volume) of an audio stream. We can hook this node up to an <input> element by setting the gain property of the node. (The syntax for this is kind of weird because it’s an AudioParam, which has options to set values at precise intervals.) const gain = audioCtx.createGain() const slider = document.querySelector('input') slider.oninput = () => gain.gain.value = parseFloat(slider.value) input.connect(gain) gain.connect(audioCtx.destination) You can now see a range input, which can be dragged to update the state of our graph. This input could be any kind of element, so now you’ll be free to build the volume control of your dreams. There are a number of nodes that let you modify/filter an audio stream in more interesting ways. Head over to the MDN Web Audio page for a list of them. Analysers Something else we can add to our graph is an AnalyserNode. This doesn’t modify the audio at all, but allows us to inspect the sounds that are flowing through it. We can put this into our graph between our MediaElementAudioSourceNode and the GainNode. const analyser = audioCtx.createAnalyser() input.connect(analyser) analyser.connect(gain) gain.connect(audioCtx.destination) And now we have an analyser. We can access it from elsewhere to drive any kind of visuals.
For instance, if we wanted to draw lines on a canvas we could totally do that: const waveform = new Uint8Array(analyser.fftSize) const frequencies = new Uint8Array(analyser.frequencyBinCount) const ctx = canvas.getContext('2d') const loop = () => { requestAnimationFrame(loop) analyser.getByteTimeDomainData(waveform) analyser.getByteFrequencyData(frequencies) ctx.beginPath() waveform.forEach((f, i) => ctx.lineTo(i, f)) ctx.lineTo(0,255) frequencies.forEach((f, i) => ctx.lineTo(i, 255-f)) ctx.stroke() } loop() You can see that we have two arrays of data available (I added colours for clarity): The waveform - the raw samples of the audio being played. The frequencies - a Fourier transform of the audio passing through the node. What’s cool about this is that you’re not tied to any specific functionality of the Web Audio API. If it’s possible for you to update something with an array of numbers, then you can just apply it to the output of the analyser node. For instance, if we wanted to, we could definitely animate a list of emoji in time with our music. spans.forEach( (s, i) => s.style.transform = `scale(${1 + (frequencies[i]/100)})` ) 🔈🎤🎤🎤🎺🎷📯🎶🔊🎸🎺🎤🎸🎼🎷🎺🎻🎸🎻🎺🎸🎶🥁🎶🎵🎵🎷📯🎸🎹🎤🎷🎻🎷🔈🔊📯🎼🎤🎵🎼📯🥁🎻🎻🎤🔉🎵🎹🎸🎷🔉🔈🔉🎷🎶🔈🎸🎸🎻🎤🥁🎼📯🎸🎸🎼🎸🥁🎼🎶🎶🥁🎤🔊🎷🔊🔈🎺🔊🎻🎵🎻🎸🎵🎺🎤🎷🎸🎶🎼📯🔈🎺🎤🎵🎸🎸🔊🎶🎤🥁🎵🎹🎸🔈🎻🔉🥁🔉🎺🔊🎹🥁🎷📯🎷🎷🎤🎸🔉🎹🎷🎸🎺🎼🎤🎼🎶🎷🎤🎷📯📯🎻🎤🎷📯🎹🔈🎵🎹🎼🔊🔉🔉🔈🎶🎸🥁🎺🔈🎷🎵🔉🥁🎷🎹🎷🔊🎤🎤🔊🎤🎤🎹🎸🎹🔉🎷 Generating Audio So far, we’ve been using the <audio> element as a source of sound. There are a few other sources of audio that we can use. We’ll look at the AudioBufferSourceNode - which allows you to manually generate a sound sample, and then connect it to our graph. First we have to create an AudioBuffer, which holds our raw data, then we pass that to an AudioBufferSourceNode which we can then treat just like our audio source node. This can get a bit boring, so we’ll use a helper method that makes it simpler to generate sounds. const generator = (audioCtx, target) => (seconds, fn) => { const { sampleRate } = audioCtx const buffer = audioCtx.createBuffer( 1, sampleRate * seconds, sampleRate ) const data = buffer.getChannelData(0) for (var i = 0; i < data.length; i++) { data[i] = fn(i / sampleRate, seconds) } return () => { const source = audioCtx.createBufferSource() source.buffer = buffer source.connect(target || audioCtx.destination) source.start() } } const sound = generator(audioCtx, gain) Our wrapper will let us provide a function that maps time (in seconds) to a sample (between 1 and -1). This generates a waveform, like we saw before with the analyser node. For example, the following will generate 0.75 seconds of white noise at 20% volume. const noise = sound(0.75, t => Math.random() * 0.2) button.onclick = noise Now we’ve got a noisy button! Handy. Rather than having a static set of audio nodes, each time we click the button, we add a new node to our graph. Although this feels inefficient, it’s not actually too bad - the browser can do a good job of cleaning up old nodes once they’ve played. An interesting property of defining sounds as functions is that we can combine multiple functions to generate new sounds. So if we wanted to fade our noise in and out, we could write a higher order function that does that. const ease = fn => (t, s) => fn(t) * Math.sin((t / s) * Math.PI) const noise = sound(0.75, ease(t => Math.random() * 0.2)) And we can do more than just white noise - if we use Math.sin, we can generate some nice pure tones.
// Math.sin with period of 0..1 const wave = v => Math.sin(Math.PI * 2 * v) const hz = f => t => wave(t * f) const _440hz = sound(0.75, ease(hz(440))) const _880hz = sound(0.75, ease(hz(880))) We can also make our functions more complex. Below we’re combining several frequencies to make a richer sounding tone. // weight the lower harmonics more heavily, then scale back into the -1..1 range const harmony = f => t => [4, 3, 2, 1].reduce( (v, h, i) => v + hz(f * h)(t) * (i + 1), 0 ) / 10 const a440 = sound(0.75, ease(harmony(440))) Cool. We’re still not using any audio-specific functionality, so we can repurpose anything that does an operation on data. For example, we can use d3.js - usually used for interactive data visualisations - to generate a triangular waveform. const triangle = d3.scaleLinear() .domain([0, .5, 1]) .range([-1, 1, -1]) const wave = t => triangle(t % 1) const a440 = sound(0.75, ease(harmony(440))) It’s pretty interesting to play around with different functions. I’ve plonked everything in jsbin if you want to have a play yourself. A departure from best practice We’ve been generating our audio from scratch, but most of what we’ve looked at can be implemented by a series of native Web Audio nodes. This would be way more performant (because it’s not happening on the main thread), and more flexible in some ways (because you can set timings dynamically whilst the note is playing). But we’re going to stay with this approach because it’s fun, and sometimes the fun thing to do might not technically be the best thing to do. Making a keyboard Having a button that makes a sound is totally great, but how about lots of buttons that make lots of sounds? Yup, totally greater-er. The first thing we need to know is the frequency of each note. I thought this would be awkward because pianos were invented more than 250 years before the Hz unit was defined, so surely there wouldn’t be a simple mapping between the two? const freq = note => 27.5 * Math.pow(2, (note - 21) / 12) This equation blows my mind; I’d never really figured how tightly music and maths fit together. When you see a chord or melody, you can directly map it back to a mathematical pattern. Our keyboard is actually an SVG picture of a keyboard, so we can traverse the elements of it and map each element to a sound generated by one of the functions that we came up with before. Array.from(svg.querySelectorAll('rect')) .sort((a, b) => a.x.baseVal.value - b.x.baseVal.value) .forEach((key, i) => key.addEventListener('touchstart', sound(0.75, ease(harmony(freq(i + 48)))) ) ) Et voilà. We have a keyboard. What I like about this is that it’s completely pure - there are no lookup tables or hardcoded attributes; we’ve just defined a mapping from SVG elements to the sound they should probably make. Doing better in the future As I mentioned before, this could be implemented more performantly with Web Audio nodes, or even better - use something like Tone.js to be performant for you. Web Audio has been around for a while, though we’re getting new challenges with immersive WebXR experiences, where spatial audio becomes really important. There are also always support and API improvements (if you like AudioBufferSourceNode, you’re going to love AudioWorklet). Conclusion And that’s about it. Web Audio isn’t some black box, you can easily link it with whatever framework, or UI that you’ve built (whether you should is an entirely different question). If anyone ever asks you ""could you turn this SVG into a musical instrument?"" you don’t have to stare blankly at them any more.
",2017,Ben Foxall,benfoxall,2017-12-17T00:00:00+00:00,https://24ways.org/2017/feeding-the-audio-graph/,code 211,Automating Your Accessibility Tests,"Accessibility is one of those things we all wish we were better at. It can lead to a bunch of questions like: how do we make our site better? How do we test what we have done? Should we spend time each day going through our site to check everything by hand? Or just hope that everyone on our team has remembered to check their changes are accessible? This is where automated accessibility tests can come in. We can set up automated tests and have them run whenever someone makes a pull request, and even alongside end-to-end tests, too. Automated tests can’t cover everything however; only 20 to 50% of accessibility issues can be detected automatically. For example, we can’t yet automate the comparison of an alt attribute with an image’s content, and there are some screen reader tests that need to be carried out by hand too. To ensure our site is as accessible as possible, we will still need to carry out manual tests, and I will cover these later. First, I’m going to explain how I implemented automated accessibility tests on Elsevier’s ecommerce pages, and share some of the lessons I learnt along the way. Picking the right tool One of the hardest, but most important parts of creating our automated accessibility tests was choosing the right tool. We began by investigating aXe CLI, but soon realised it wouldn’t fit our requirements. It couldn’t check pages that required a visitor to log in, so while we could test our product pages, we couldn’t test any customer account pages. Instead we moved over to Pa11y. Its beforeScript step meant we could log into the site and test pages such as the order history. The example below shows how the beforeScript step completes a login form and then waits for the login to complete before testing the page: beforeScript: function(page, options, next) { // An example function that can be used to make sure changes have been confirmed before continuing to run Pa11y function waitUntil(condition, retries, waitOver) { page.evaluate(condition, function(err, result) { if (result || retries < 1) { // Once the changes have taken place continue with Pa11y testing waitOver(); } else { retries -= 1; setTimeout(function() { waitUntil(condition, retries, waitOver); }, 200); } }); } // The script to manipulate the page must be run with page.evaluate to be run within the context of the page page.evaluate(function() { const user = document.querySelector('#login-form input[name=""email""]'); const password = document.querySelector('#login-form input[name=""password""]'); const submit = document.querySelector('#login-form input[name=""submit""]'); user.value = 'user@example.com'; password.value = 'password'; submit.click(); }, function() { // Use the waitUntil function to set the condition, number of retries and the callback waitUntil(function() { return window.location.href === 'https://example.com'; }, 20, next); }); } The waitUntil callback allows the test to be delayed until our test user is successfully logged in. Another thing to consider when picking a tool is the type of error messages it produces.
aXe groups all elements with the same error together, so the list of issues is a lot easier to read, and it’s easier to identify the most common problems. For example, here are some elements that have insufficient colour contrast: Violation of ""color-contrast"" with 8 occurrences! Ensures the contrast between foreground and background colors meets WCAG 2 AA contrast ratio thresholds. Correct invalid elements at: - #maincontent > .make_your_mark > div:nth-child(2) > p > span > span - #maincontent > .make_your_mark > div:nth-child(4) > p > span > span - #maincontent > .inform_your_decisions > div:nth-child(2) > p > span > span - #maincontent > .inform_your_decisions > div:nth-child(4) > p > span > span - #maincontent > .inform_your_decisions > div:nth-child(6) > p > span > span - #maincontent > .inform_your_decisions > div:nth-child(8) > p > span > span - #maincontent > .inform_your_decisions > div:nth-child(10) > p > span > span - #maincontent > .inform_your_decisions > div:nth-child(12) > p > span > span For details, see: https://dequeuniversity.com/rules/axe/2.5/color-contrast aXe also provides links to their site where they discuss the best way to fix the problem. In comparison, Pa11y lists each individual error, which can lead to a very verbose list. However, it does provide helpful suggestions of how to fix problems, such as suggesting an alternative shade of a colour to use: • Error: This element has insufficient contrast at this conformance level. Expected a contrast ratio of at least 4.5:1, but text in this element has a contrast ratio of 2.96:1. Recommendation: change text colour to #767676. ⎣ WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail ⎣ #maincontent > div:nth-child(10) > div:nth-child(8) > p > span > span ⎣ <span style=""color:#969696"">Featured products:</span> Integrating into our build pipeline We decided the perfect time to run our accessibility tests would be alongside our end-to-end tests. We have a Jenkins job that detects changes to our staging site and then triggers the end-to-end tests, and in turn our accessibility tests. Our Jenkins job retrieves the contents of a GitHub repository containing our Pa11y script file and npm package manifest. Once Jenkins has cloned the repository, it installs any dependencies and executes the tests via: npm install && npm test Bundling the URLs to be tested into our test script means we don’t have a command-line-style test where we list each URL we wish to test in the Jenkins CLI. That would be an effective method, but it can also be cluttered, and can obscure which URLs are being tested. In the middle of the office we have a monitor displaying a Jenkins dashboard and from this we can see if the accessibility tests are passing or failing. Everyone in the team has access to the Jenkins logs and when the build fails they can see why and fix the issue. Fixing the issues As mentioned earlier, Pa11y can generate a long list of areas for improvement which can be very verbose and quite overwhelming. I recommend going through the list to see which issues occur most frequently and fix those first. For example, we initially had a lot of errors around colour contrast, and one shade of grey in particular. By making this colour darker, the number of errors decreased, and we could focus on the remaining issues. Another thing I like to do is to tackle the quick fixes, such as adding alt text to images.
These are small things that allow us to make an impact instantly, giving us time to fix more detailed concerns such as addressing tabindex issues, or speaking to our designers about changing the contrast of elements on the site. Manual testing Adding automated tests to check our site for accessibility is great, but as I mentioned earlier, this can only cover 20-50% of potential issues. To improve on this, we need to test by hand too, either by ourselves or by asking others. One way we can test our site is to throw our mouse or trackpad away and interact with the site using only a keyboard. This allows us to check items such as tab order, and ensure menu items, buttons etc. can be used without a mouse. The commands may be different on different operating systems, but there are some great guides online for learning more about these. It’s tempting to add alt text and aria-labels to make errors go away, but if they don’t make any sense, what use are they really? Using a screen reader we can check that alt text accurately represents the image. This is also a great way to double check that our ARIA roles make sense, and that they correctly identify elements and how to interact with them. When testing our site with screen readers, it’s important to remember that not all screen readers are the same and some may interact with our site differently to others. Consider asking a range of people with different needs and abilities to test your site, too. People experience the web in numerous ways, and their needs may be permanent, temporary or even situational. They may interact with your site in ways you hadn’t even thought about, so this is a good way to broaden your knowledge and awareness. Tips and tricks One of our main issues with Pa11y is that it may find issues we don’t have the power to solve. A perfect example of this is the one pixel image Facebook injects into our site. So, we wrote a small function to go through such errors and ignore the ones that we cannot fix. const test = pa11y({ .... hideElements: '#ratings, #js-bigsearch', ... }); const ignoreErrors: string[] = [ '<img src=""https://books.google.com/intl/en/googlebooks/images/gbs_preview_button1.gif"" border=""0"" style=""cursor: pointer;"" class=""lightbox-is-image"">', '<script type=""text/javascript"" id="""">var USI_orderID=google_tag_mana...</script>', '<img height=""1"" width=""1"" style=""display:none"" src=""https://www.facebook.com/tr?id=123456789012345&ev=PageView&noscript=1"">' ]; const filterResult = result => { if (ignoreErrors.indexOf(result.context) > -1) { return false; } return true; }; Initially we wanted to focus on fixing the major problems, so we added a rule to ignore notices and warnings. This made the list of errors much smaller and allowed us to focus on fixing major issues such as colour contrast and missing alt text. The ignored notices and warnings can be added in later after these larger issues have been resolved. const test = pa11y({ ignore: [ 'notice', 'warning' ], ... }); Jenkins gotchas While using Jenkins we encountered a few problems. Sometimes Jenkins would indicate a build had passed when in reality it had failed. This was because Pa11y had timed out due to PhantomJS throwing an error, or the test didn’t go past the first URL. Pa11y has recently released a new beta version that uses headless Chrome instead of PhantomJS, so hopefully these issues will occur less often. We tried a few approaches to solve these issues.
First we added error handling, iterating over the array of test URLs so that if an unexpected error happened, we could catch it and exit the process with an error indicating that the job had failed (using process.exit(1)). for (const url of urls) { try { console.log(url); let urlResult = await run(url); urlResult = urlResult.filter(filterResult); urlResult.forEach(result => console.log(result)); } catch (e) { console.log('Error:', e); process.exit(1); } } We also had issues with unhandled rejections sometimes caused by a session disconnecting or similar errors. To avoid Jenkins indicating our site was passing with 100% accessibility, when in reality it had not executed any tests, we instructed Jenkins to fail the job when an unhandled rejection or uncaught exception occurred: process.on('unhandledRejection', (reason, p) => { console.log('Unhandled Rejection at:', p, 'reason:', reason); process.exit(1); }); process.on('uncaughtException', (err) => { console.log(`Caught exception: ${err}\n`); process.exit(1); }); Now it’s your turn That’s it! That’s how we automated accessibility testing for Elsevier ecommerce pages, allowing us to improve our site and make it more accessible for everyone. I hope our experience can help you automate accessibility tests on your own site, and bring the web a step closer to being accessible to all.",2017,Seren Davies,serendavies,2017-12-07T00:00:00+00:00,https://24ways.org/2017/automating-your-accessibility-tests/,code 212,Refactoring Your Way to a Design System,"I love refactoring code. Absolutely love it. There’s something about taking a piece of UI or a bit of code and reworking it in a way that is simpler, modular, and reusable that makes me incredibly happy. I also love design systems work. It gives hybrids like me a home. It seems like everyone is talking about design systems right now. Design systems teams are perfect for those who enjoy doing architectural work and who straddle the line between designer and developer. Una Kravets recently identified some of the reasons that design systems fail, and chief among them are lack of buy-in, underlying architecture, and communication. While it’s definitely easier to establish these before project work begins, that doesn’t mean it is the only path to success. It’s a privilege to work on a greenfield project, and one that is not afforded to many. Companies with complex and/or legacy codebases may not be able to support a full rewrite of their product. In addition, many people feel overwhelmed at the thought of creating a complete system and are at a loss of how or where to even begin the process. This is where refactoring comes into play. According to Martin Fowler, “refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.” It’s largely invisible work, and if you do it right, the end user will never know the difference. What it will do is provide a decent foundation to begin more systematic work. Build a solid foundation When I was first asked to create Pantsuit, the design system for Hillary for America, I was tasked with changing our codebase to be more modular and scalable, without changing the behavior or visual design of the UI. We needed a system in place that would allow for the rapid creation of new projects while maintaining a consistent visual language. In essence, I was asked to refactor our code into a design system.
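What does a behaviour-preserving change actually look like? As a tiny, hypothetical Sass sketch (the variable name and selectors are mine, not from Pantsuit): // Before: the same colour repeated in three places .header a { color: #b91c1c; } .footer a { color: #b91c1c; } .btn--link { color: #b91c1c; } // After: one source of truth; the compiled CSS is identical $color-action: #b91c1c; .header a, .footer a, .btn--link { color: $color-action; } The rendered pages stay exactly the same, but the stylesheet gains a single place to update, and a refactor like this, repeated at component scale, is the raw material a design system is built from.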
During that refactor, I focused the majority of my efforts on creating a scalable architecture based on the UI components in a single workflow. Since I needed to maintain 1:1 parity with production, the only changes I could make were under the hood. I started with writing coding standards and deciding on a CSS architecture that I would then use as I rewrote sections of the codebase. If you already have these in place, great! If not, then this is an excellent place to start. Even if your dream of a design system is never fully realized, having a coding philosophy and architecture in place will still have far-reaching benefits and implications. I want to note that if your refactor includes creating new coding standards or a CSS architecture, don’t try to switch everything over right away. Instead, focus on a single new feature and isolate/encapsulate your work from the rest of the codebase. Focus on the features The key principle to cleaning up a complex codebase is to always refactor in the service of a feature. — Max Kanat-Alexander Refactoring for the sake of refactoring can easily lead to accusations of misused time and lack of results. Another danger of refactoring is that it can turn into yak-shaving if you aren’t disciplined in your approach. To that end, tying your refactored components to feature work is a great way to limit scope and reduce the risk of unintended changes. For example, the initial work on Pantsuit focused only on components related to the donations flow. Every line of code I wrote was in service to improving the maintainability and modularity of that UI. Because we didn’t have any standards in place, I started with those. From there, I identified all the components present in every step of the donations flow, which included some type styles, buttons, form inputs and error states. Then came the refactor of each individual component. Finally, I reintegrated the newly refactored components into the existing donations flow and tested it against production, checking for visual and behavioral diffs. At the end of this process, I had the beginning of a design system that would grow to serve over 50 applications, and a case study to demonstrate its effectiveness. Ideally, you’ll want to get buy-in from your stakeholders and product owners before you begin any design systems work. However, in the absence of buy-in, linking your work to new feature development is a good way to both limit the scope of your refactor and jump-start component creation. In addition, if you’re still trying to convince your team of the benefits of a design system, starting small and using the newly refactored, feature-driven work as a case study is one way to showcase a design system’s value. By providing a concrete example of how working towards a design system contributed to the project’s success, you’re gathering the data necessary to secure buy-in for a larger-scale effort. It’s a great way to show value, rather than just talking about it. Show, don’t tell Perhaps the most important thing you can do for any design system is to document it. The key is to create a frictionless way to keep the documentation up-to-date, otherwise no one will contribute to it, and in turn, it will become obsolete and useless. There are lots of tools out there to help you get started documenting your new system. One of my favorites is KSS, which parses comments in the code and uses them to generate a style guide.
For Pantsuit, I used the node version of KSS, along with a template to quickly spin up a documentation site. I’ve listed just a few tools below; for even more, check out the tools sections of styleguides.io. Fractal Pattern Lab Drizzle Fabricator Astrum Catalog Regardless of what tool you settle on, it needs to integrate well with your current workflow. Conclusion: always be refactoring If you’re not lucky enough to be able to start a new design system from scratch, you can start small and work on a single feature or component. With each new project comes a new opportunity to flesh out a new part of the system, and another potential case study to secure buy-in and showcase its value. Make sure to carefully and thoroughly document each new portion of the system as it’s built. After a few projects, you’ll find yourself with a decent start to a design system. Good luck, and happy holidays! Further reading: Why Design Systems Fail CSS Architecture for Design Systems Refactoring: Improving the Design of Existing Code Refactoring CSS: The Three I’s Refactoring is About Features",2017,Mina Markham,minamarkham,2017-12-23T00:00:00+00:00,https://24ways.org/2017/refactoring-your-way-to-a-design-system/,code 213,Accessibility Through Semantic HTML,"Working on Better, a tracker blocker, I spend an awful lot of my time with my nose in other people’s page sources. I’m mostly there looking for harmful tracking scripts, but often notice the HTML on some of the world’s most popular sites is in a sad state of neglect. What does neglected HTML look like? Here’s an example of the markup I found on a news site just yesterday. There’s a bit of text, a few links, and a few images. But mostly it’s div elements. <div class=""block_wrapper""> <div class=""block_content""> <div class=""box""> <div id=""block1242235""> <div class=""column""> <div class=""column_content""> <a class=""close"" href=""#""><i class=""fa""></i></a> </div> <div class=""btn account_login""></div> Some text <span>more text</span> </div> </div> </div> </div> </div> divs and spans, why do we use them so much? While I find tracking scripts completely inexcusable, I do understand why people write HTML like the above. As developers, we like to use divs and spans as they’re generic elements. They come with no associated default browser styles or behaviour except that div displays as a block, and span displays inline. If we make our page up out of divs and spans, we know we’ll have absolute control over styles and behaviour cross-browser, and we won’t need a CSS reset. Absolute control may seem like an advantage, but there’s a greater benefit to less generic, more semantic elements. Browsers render semantic elements with their own distinct styles and behaviours. For example, button looks and behaves differently from a. And ul is different from ol. These defaults are shortcuts to a more usable and accessible web. They provide consistent and well-tested components for common interactions. Semantic elements aid usability A good example of how browser defaults can benefit the usability of an element is in the <select> option menu. In Safari on the desktop, the browser renders <select> as a popover-style menu. On a touchscreen, Safari overlays the same menu over the lower half of the screen as a “picker view.” Option menu in Safari on macOS. Option menu picker in Safari on iOS. The iOS picker is a much better experience than struggling to pick from a complicated interface inside the page. 
The menu is shown more clearly than in the confined space on the page, which makes the options easier to read. The required swipe and tap gestures are consistent with the rest of the operating system, making the expected interaction easier to understand. The whole menu is scaled up, meaning the gestures don’t need such fine motor control. Good usability is good accessibility. When we choose to use a div or span over a more semantic HTML element, we’re also doing hard work the browser could be doing for us. We don’t need to tie ourselves in knots making a custom div into a keyboard navigable option menu. Using select passes the bulk of the responsibility over to the browser.  Letting the browser do most of the work is also more future-friendly. More devices, with different expected interactions, will be released in the future. When that happens, the devices’ browsers can adapt our sites according to those interactions. Then we can spend our time doing something more fun than rewriting cross-browser JavaScript for each new device. HTML’s impact on accessibility Assistive technology also uses semantic HTML to understand how best to convey each element to its user. For screen readers Semantic HTML gives context to screen readers. Screen readers are a type of assistive technology that reads the content of the screen to the person using it. All sites have a linear page source. Sighted visitors can use visual cues on the page to navigate to their desired content in a non-linear fashion. As screen readers output audio (and sometimes braille), those visual cues aren’t usable in the same way. Screen readers provide alternative means of navigation, enabling people to jump between different types of content, such as links, forms, headings, lists, and paragraphs. If all our content is marked up using divs and spans, we’re not giving screen readers a chance to index the valuable content. For keyboard navigation Keyboard-only navigation is also aided by semantic HTML. Forms, option menus, navigation, video, and audio are particularly hard for people relying on a keyboard to access. For instance, option menus and navigation can be very fiddly if you need to use a mouse to hover a menu open and move to select the desired item at the same time.  Again, we can leave much of the interaction to the browser through semantic HTML. Semantic form elements can convey if a check box has been checked, or which label is associated with which input field. These default behaviours can make the difference between a person being able to use a form or leaving the site out of frustration. Did I convince you yet? I hope so. Let’s finish with some easy guidelines to follow. 1. Use the most semantic HTML element for the job When you reach for a div, first check if there’s a better element to do the job. What is the role of that element? How should a person be interacting with the element? Are you using class names like nav, header, or main? There are HTML5 elements for those sections! Using specific elements can also make writing CSS simpler, and ensure a consistent design with minimal effort. 2. Separate structure and style Don’t choose HTML elements based on how they’re styled in your CSS. Nowadays, common practice is to use class names rather than elements for CSS selectors. You’re unlikely to wrap all your page content in an <h1> element because you want all the text to be big and bold. Still, it can be easy to choose an HTML element because it will be the easiest to style. 
Focusing on content without style will help us choose the most semantic HTML element without that temptation. For example, you could add a class of .btn to a div to make it look like a button. But we all know that only a button will really behave like a button. 3. Use progressive enhancement for enhanced functionality Airbnb and Groupon recently proved we’re not past the laziness of “this site only works in X browser.” Baffling disregard for the open web aside, making complex interactive experiences work cross-browser and cross-device is not easy. We can use progressive enhancement to layer fancy or unsupported features on top of a baseline “it works” experience.  We should build the baseline experience on a foundation of accessible, semantic HTML. Then, if you really want to add a specific feature for a proprietary browser, you can layer that on top, without breaking the underlying experience. 4. Test your work Validators are always valuable for checking the browser will be able to correctly interpret your markup. Document outline checkers can be valuable for testing your structure, but be aware that the HTML5 document outline is not actually implemented in browsers. Once you’ve got something resembling a web page, test the experience! Ensure that semantic HTML element you chose looks and behaves in a predictable manner consistent with its use across the web. Test cross-browser, test cross-device, and test with assistive technology. Testing with assistive technology is not as expensive as it used to be, you can even use your smartphone for testing on iOS and Android. Your visitors will thank you! Further reading Accessibility For Everyone by Laura Kalbag HTML5 Doctor HTML5 Accessibility An overview of HTML5 Semantics HTML reference on MDN  Heydon Pickering’s Inclusive Design Checklist The Paciello Group’s Inclusive Design Principles",2017,Laura Kalbag,laurakalbag,2017-12-15T00:00:00+00:00,https://24ways.org/2017/accessibility-through-semantic-html/,code 214,Christmas Gifts for Your Future Self: Testing the Web Platform,"In the last year I became a CSS specification editor, on a mission to revitalise CSS Multi-column layout. This has involved learning about many things, one of which has been the Web Platform Tests project. In this article, I’m going to share what I’ve learned about testing the web platform. I’m also going to explain why I think you might want to get involved too. Why test? At one time or another it is likely that you have been frustrated by an issue where you wrote some valid CSS, and one browser did one thing with it and another something else entirely. Experiences like this make many web developers feel that browser vendors don’t work together, or they are actively doing things in a different way to one another to the detriment of those of us who use the platform. You’ll be glad to know that isn’t the case, and that the people who work on browsers want things to be consistent just as much as we do. It turns out however that interoperability, which is the official term for “works in all browsers”, is hard. Thanks to web-platform-tests, a test from another browser vendor just found genuine bug in our code before we shipped 😻— Brian Birtles (@brianskold) February 10, 2017 In order for W3C Specifications to move on to become W3C Recommendations we need to have interoperable implementations. 
6.2.4 Implementation Experience Implementation experience is required to show that a specification is sufficiently clear, complete, and relevant to market needs, to ensure that independent interoperable implementations of each feature of the specification will be realized. While no exhaustive list of requirements is provided here, when assessing that there is adequate implementation experience the Director will consider (though not be limited to): is each feature of the current specification implemented, and how is this demonstrated? are there independent interoperable implementations of the current specification? are there implementations created by people other than the authors of the specification? are implementations publicly deployed? is there implementation experience at all levels of the specification’s ecosystem (authoring, consuming, publishing…)? are there reports of difficulties or problems with implementation? https://www.w3.org/2017/Process-20170301/#transition-reqs We all want interoperability; achieving it is part of the standards process. The next question is, how do we make sure that we get it? Unimplemented vs uninteroperable implementations Before looking at how we can try to improve interoperability, I’d like to look at the reasons we don’t always meet that aim. There are a couple of reasons why browser X is not doing the same thing as browser Y. The first reason is that browser X has not implemented that feature yet. Relatively small teams of people work on browser engines, and their resources are spread as thinly as those of any company. Behind those browsers are business or organisational goals which may not match our desire for a shiny feature to be made available. There are ways in which we as the web community can help gently encourage implementations - by requesting the feature, by using it so it shows up in usage reports, or writing about it to show interest. However, there will always be some degree of lag based on priorities. A browser not supporting a feature at all is reasonably easy to deal with these days. We can test for support with Feature Queries, and create sensible fallbacks. What is harder to deal with is when a feature is implemented in different ways by different browsers. In that situation you use the feature, perhaps referring to the specification to ensure that you are writing your CSS correctly. It looks exactly as you expect in one browser and it’s all broken when you test in another. A frequent cause of this kind of issue is that the specification is not well defined in a particular area or that the specification has changed since one or other browser implemented it. CSS specifications are not developed in a darkened room, then presented to browser vendors to implement as a completed document. It would be nice if it worked like that, however the web platform is a gnarly thing. Before we can be sure that a specification is correct, it needs implementing in order that we can get the interoperable implementations I described earlier. A circular process has to happen. Specifications have to be written, browsers have to implement and find the problems, and then the specification has to be revised. Many people reading this will be familiar with how flexbox changed three times in browsers, leaving us with a mess of incompatibilities and the need to use at least two versions of the spec. This story was an example of this circular process; in this case the specification was flagged as experimental using vendor prefixes.
We had become used to using vendor prefixes in production, and early adopters of flexbox were bitten by this. Today, specifications are developed behind experimental flags, as we saw with CSS Grid Layout. Yet there has to come a time when implementations ship and remove those flags, and it may be that knowingly or unknowingly some interop issues slip through. You will know these interop issues as “browser bugs”; perhaps you have even reported one (thank you!). None of us want them, so how do we make the platform as robust as possible? How do we ensure we have interoperability? If you were working on a large web application, with several people committing code, it would be very easy for one person to make a change that inadvertently broke some part of the application. They might not realise the fact that their change would cause a problem, due to not having a complete understanding of the entire codebase. To prevent this from happening, it is accepted good practice to write tests as well as code. The tests can then be run before the application is deployed. Unless you start out from the beginning writing tests, and are very good at writing a test for every bit of code, it is likely that some issues do slip through from time to time. When this happens, a good approach is to not only fix the issue but also to write a test that would stop it ever happening again. That way the test suite improves over time and hopefully fewer issues happen. The web platform is essentially a giant, sprawling application, with a huge range of people working on it in different ways. There is therefore plenty of opportunity for issues to creep in, so it seems like having some way of writing tests and automating those tests on browsers would be a good thing. That is what the Web Platform Tests project has set out to achieve. Web Platform Tests Web Platform Tests is the test suite for the web platform. It is a set of tests for all parts of the web platform, which can be run in any browser and the results reported. This article mostly discusses CSS tests, because I work on CSS. You will find that there are tests covering the full platform, and you can look into whichever area you have the most interest and experience in. If we want to create a test suite for a CSS specification then we need to ensure that every feature of the specification has a related test. If a change is made to the spec, and a test committed that reflects that change, then it should be straightforward to run that test against each browser and see if it passes. Currently, at the CSS Working Group, specifications that are at Candidate Recommendation Status should commit a test with every normative change to the spec. To explain the terminology, a normative change is one that changes some of the normative text of a specification - text that contains instructions as to how a browser should render a certain thing. A Candidate Recommendation is the status at which the Working Group officially requests implementations of the spec; it is therefore reasonable to assume that any change may cause an interoperability issue. It is usually the case that representatives from all browsers will have discussed the change, so anyone who needs to change code will be aware. In this case the test allows them to check that their change passes and matches everyone else. Tests would also highlight the situation where a change to the spec caused an issue in a browser that otherwise wouldn’t be aware of it,
just as a test suite for your web application should alert a person committing code that their change will cause a problem elsewhere. Discovering the tests I’ve found that the more I have understood the effort that goes into interoperability, and the reasons why creating an interoperable web is so hard, running into browser issues has become less frustrating. I have somewhere to go, even if all I can do is log the bug. If you are even slightly interested in the subject, go have a poke around wpt.fyi. You can explore the various parts of the web platform and see how many tests have been committed. All the CSS tests are under the directory /css where you will find each specification. Find a specification you are interested in, and look at the tests. Each test has a link to run it in your own browser to see if it passes. This can be useful in itself: if you are battling with an issue and have reduced it down to something specific, you can go and look to see if there is a test covering that and whether it appears to pass or fail in the browser you are battling with. If it turns out that the test fails, it’s probably not you! A test on the wpt.fyi dashboard Note: In some tests you will come across mention of a font called Ahem. This is a font designed for testing which contains consistent glyphs. You can read about how to use the font and download it here. Contributing to Web Platform Tests You can also become involved with Web Platform Tests. People often ask me how they can become involved in CSS, and I can think of no better way than by writing tests. You need to really understand a feature to accurately come up with a method of testing if it works or not in the different engines. This is not glamorous work; it is, however, a very useful thing to be involved with. In addition to helping yourself, and developing the sort of deep knowledge of the platform that enables contribution, you will really help the progress of specifications. There are only a very few people writing specs. If those people also have to write and review all of the tests it slows down their work. If you want a better, more interoperable web, and to massively improve your ability to have nerdy conversations about highly specific things, testing is the place to start. A local testing setup Your first stop should be to visit the home of Web Platform Tests. There is some documentation here, which does tend to assume you know about the tests and what you are looking for - having read this article you know as much as I do. To be able to work on tests you will want to: Clone the WPT repo – this is where all the tests are stored Install some tools so you can run up a local copy of the tests The instructions on the Readme in the repo should get you up and running; you can then load your own version of the test suite in a browser at http://web-platform.test:8000, or whichever port you set up. Running tests locally Finding things to test It’s currently not straightforward to locate low-hanging fruit in order to start committing tests. There are some issues flagged up as a good first issue in the GitHub repo, if any of those match your interest and knowledge. Otherwise, a good place to start is where you know of existing interoperability issues. If you are aware of a browser bug, have a look and see if there is a test that addresses it. If not, then a test would highlight the interoperability issue, and if it is an issue that you are running into, you have a nice way to see if it has been fixed!
Talk to people There is an IRC channel at irc://irc.w3.org:6667/testing, where you will find people who are writing tests as well as people who are working on the test suite framework itself. They have always been very friendly, and are likely to welcome people with a real interest in creating tests. Gathering information First you need to read the spec. To be able to create a test you need to know and to understand what the specification says should be happening. As I mentioned, writing tests will improve your knowledge dramatically! In general I find that web developers assume their favourite browser has got it right; this isn’t about right or wrong, however, or good browsers versus bad ones. The browser with the incorrect implementation may have had a perfect, as-per-the-spec implementation until something changed. Do some investigation and work out what the spec says, and which – if any – browser is doing it correctly. Another good place to look when trying to create a test for an interop issue is the browser issue trackers. It is quite likely that someone has already logged the issue, and detailed what it is, and even which browsers are as per the spec. This is useful information, as you then have a clue as to which browsers should pass your test. Remember to check version numbers - an issue may well be fixed in a pre-release version of Chrome for example, but not in the public release. Edge Issue Tracker Mozilla Issue Tracker WebKit Issue Tracker Chromium Issue Tracker Writing the test If you’ve ever created a Reduced Test Case to isolate a browser issue, you already have some idea of what we are trying to do with a test. We want to test one thing, in isolation, and to be able to confirm “yes this works as per the spec” or “no, this does not”. The main two types of test are: testharness.js tests reftests The testharness.js tests use JavaScript to test an assertion. This framework is designed as a way to test Web APIs, and as this quickly gets fairly complicated - and I’m a complete beginner myself at writing these - I’ll refer you to the excellent tutorial Using testharness.js. Many CSS tests will be reftests. A reftest involves getting two pages to lay out in the same way, so that they are visually the same. For example, you can find a reftest for Grid Layout at: https://w3c-test.org/css/css-grid/alignment/grid-gutters-001.html or at http://web-platform.test:8000/css/css-grid/alignment/grid-gutters-001.html if you have run up your own copy of WPT. <!DOCTYPE html> <meta charset=""utf-8""> <title>CSS Grid Layout Test: Support for gap shorthand property of row-gap and column-gap</title> <link rel=""help"" href=""https://www.w3.org/TR/css-grid-1/#gutters""> <link rel=""help"" href=""https://www.w3.org/TR/css-align-3/#gap-shorthand""> <link rel=""match"" href=""../reference/grid-equal-gutters-ref.html""> <link rel=""author"" title=""Rachel Andrew"" href=""mailto:me@rachelandrew.co.uk""> <style> #grid { display: grid; width:200px; gap: 20px; grid-template-columns: 90px 90px; grid-template-rows: 90px 90px; background-color: green; } #grid > div { background-color: silver; } </style> <p>The test passes if it has the same visual effect as reference.</p> <div id=""grid""> <div></div> <div></div> <div></div> <div></div> </div> I am testing the new gap property (renamed from grid-gap).
The reference file can be found by looking for the line: <link rel=""match"" href=""../reference/grid-equal-gutters-ref.html""> In that file, I am using absolute positioning to mock up the way the file would look if gap is implemented correctly. <!DOCTYPE html> <meta charset=""utf-8""> <title>CSS Grid Layout Reference: a square with a green cross</title> <link rel=""author"" title=""Rachel Andrew"" href=""mailto:me@rachelandrew.co.uk"" /> <style> #grid { width:200px; height: 200px; background-color: green; position: relative; } #grid > div { background-color: silver; width: 90px; height: 90px; position: absolute; } #grid :nth-child(1) { top: 0; left: 0; } #grid :nth-child(2) { top: 0; left: 110px; } #grid :nth-child(3) { top: 110px; left: 0; } #grid :nth-child(4) { top: 110px; left: 110px; } </style> <div id=""grid""> <div></div> <div></div> <div></div> <div></div> </div> The tests are compared in an automated way by taking screenshots of the test and reference. These are relatively simple tests to write; you will find, however, that the work is not in writing the test. The work is really in doing the research, and making sure you understand what is supposed to happen so you can write the test. Which is why, if you really want to get your hands dirty in the web platform, this is a good place to start. Committing a test Once you have written a test you can run the lint tool to make sure that everything is tidy. This tool is run automatically after you submit your pull request, and reviewers won’t accept a test with lint errors, so do this locally first to catch anything obvious. Tests are added as pull requests: once you have your test ready to go, you can create a pull request to add it to the repository. Your test will be tested and it will then wait for a review. You may well then find yourself in a bit of a waiting game, as the test needs to be reviewed. How long that takes will depend on how active work is on that spec. People who are in the OWNERS file for that spec should be notified. You can always ask in IRC to see if someone is available who can look at and potentially merge your test. Usually the reviewer will have some comments as to how the test can be improved, in the same way as the owner of an open source project you submit a PR to might ask you to change some things. Work with them to make your test as good as it can be; the things you learn on the first test you submit will make future ones easier. You can then bask in the glow of knowing you have done something towards the aim of a more interoperable web for all of us. Christmas gifts for your future self I have been a web developer for over 20 years. I have no idea what the web platform will look like in 20 more years, but for as long as I’m working on it I’ll keep on trying to make it better. Making the web more interoperable makes it a better place to be a web developer, storing up some Christmas gifts for my future self, while learning new things as I do so. Resources I rounded up everything I could find on WPT while researching this article, as well as some other links that might be helpful for you. These links are below. Happy testing!
Web Platform Tests Using testharness.js IRC Channel irc://irc.w3.org:6667/testing Edge Issue Tracker Mozilla Issue Tracker WebKit Issue Tracker Chromium Issue Tracker Reducing an Issue - a guide to creating a reduced test case Effectively Using Web Platform Tests: Slides and Video An excellent walkthrough from Lyza Gardner on her work on tests for the HTML specification - Moving Targets: a case study on testing web standards. Improving interop with web-platform-tests: Slides and Video",2017,Rachel Andrew,rachelandrew,2017-12-10T00:00:00+00:00,https://24ways.org/2017/testing-the-web-platform/,code 215,Teach the CLI to Talk Back,"The CLI is a daunting tool. It’s quick and powerful, but it’s also incredibly easy to screw things up in – either with a mistyped command, or a correctly typed command used at the wrong moment. This puts a lot of people off using it, but it doesn’t have to be this way. If you’ve ever interacted with Slack’s Slackbot to set a reminder or ask a question, you’re basically using a command line interface, but it feels more like having a conversation. (My favourite Slack app is Lunch Train which helps with the thankless task of herding colleagues to a particular lunch venue on time.) The same goes for voice-operated assistants like Alexa, Siri and Google Home. There are even games, like Lifeline, where you interact with a stranded astronaut via pseudo SMS, and KOMRAD where you chat with a Soviet AI. I’m not aiming to build an AI here – my aspirations are a little more down to earth. What I’d like is to make the CLI a friendlier, more forgiving, and more intuitive tool for new or reluctant users. I want to teach it to talk back. Interactive command lines in the wild If you’ve used dev tools in the command line, you’ve probably already used an interactive prompt – something that asks you questions and responds based on your answers. Here are some examples: Yeoman If you have Yeoman globally installed, running yo will start a command prompt. The prompt asks you what you’d like to do, and gives you options for how to proceed. Seasoned users will run specific commands for these options rather than go through this prompt, but it’s a nice way to start someone off with using the tool. npm If you’re a Node.js developer, you’re probably familiar with typing npm init to initialise a project. This brings up prompts that will populate a package.json manifest file for that project. The alternative would be to expect the user to craft their own package.json, which is more error-prone since it’s in JSON format, so something as trivial as an extraneous comma can throw an error. Snyk Snyk is a dev tool that checks for known vulnerabilities in your dependencies. Running snyk wizard in the CLI brings up a list of all the known vulnerabilities, and gives you options on how to deal with each one – such as patching the issue, applying a fix by upgrading the problematic dependency, or ignoring the issue (you are then prompted for a reason). These decisions get mapped to the manifest and a .snyk file, and committed into the repo so that the settings are the same for everyone who uses that project. I work at Snyk, and running the wizard is what made me think about building my own personal assistant in the command line to help me with some boring, repetitive tasks. Writing your own Something I do a lot is add bookmarks to styleguides.io – I pull down the entire repo, copy and paste a template YAML file, and edit the contents. Sometimes I get it wrong and break the site.
So I’ve been putting together a tool to help me add bookmarks. It’s called bookmarkbot – it’s a personal assistant squirrel called Mark who will collect and bury your bookmarks for safekeeping.* *Fortunately, this metaphor also gives me a charming excuse for any situation where bookmarks sometimes get lost – it’s not my poorly-written code, honest, it’s just being realistic because sometimes squirrels forget where they buried things! When you run bookmarkbot, it will ask you for some information, and save that information as a Markdown file with YAML front matter. For this demo, I’m going to use a Node.js package called inquirer, which is a well-supported tool for creating command line prompts. I like it because it has a bunch of different question types: from input, which asks for some text back, to confirm, which expects a yes/no response, or list, which gives you a set of options to choose from. You can even nest questions, Choose Your Own Adventure style. Prerequisites Node.js npm RubyGems (Only if you want to go as far as serving a static site for your bookmarks, and you want to use Jekyll for it) Disclaimer Bear in mind that this is a really simplified walkthrough. It doesn’t have any error states, and it doesn’t handle the situation where we save a file with the same name. But it gets you in a good place to start building out your tool. Let’s go! Create a new folder wherever you keep your projects, and give it an awesome name (I’ve called mine bookmarks and put it in the Sites directory because I’m unimaginative). Now cd to that directory. cd Sites/bookmarks Let’s use that example I gave earlier, the trusty npm init. npm init Pop in the information you’d like to provide, or hit ENTER to skip through and save the defaults. Your directory should now have a package.json file in it. Now let’s install some of the dependencies we’ll need. npm install --save inquirer npm install --save slugify Next, add the following snippet to your package.json to tell it to run this file when you run npm start. ""scripts"": { … ""start"": ""node index.js"" } That index.js file doesn’t exist yet, so let’s create it in the root of our folder, and add the following: // Packages we need var fs = require('fs'); // Creates our file (part of Node.js so doesn't need installing) var inquirer = require('inquirer'); // The engine for our questions prompt var slugify = require('slugify'); // Will turn a string into a usable filename // The questions var questions = [ { type: 'input', name: 'name', message: 'What is your name?', }, ]; // The questions prompt function askQuestions() { // Ask questions inquirer.prompt(questions).then(answers => { // Things we'll need to generate the output var name = answers.name; // Finished asking questions, show the output console.log('Hello ' + name + '!'); }); } // Kick off the questions prompt askQuestions(); This is just a bare-bones setup where we’re including the inquirer package we installed earlier. I’ve stored the questions in a variable, and the askQuestions function will prompt the user for their name, and then print “Hello <your name>” in the console. Enough setup, let’s see some magic. Save the file, go back to the command line and run npm start. Extending what we’ve learnt At the moment, we’re just asking for a name and printing it back out, which isn’t really achieving our goal of saving bookmarks. We don’t want our tool to forget our information every time we talk to it – we need to save it somewhere. So I’m going to add a little function to write the output to a file.
Saving to a file Create a folder in your project’s directory called _bookmarks. This is where the bookmarks will be saved. I’ve replaced my questions array, and instead of asking for a name, I’ve extended out the questions, asking to be provided with a link and title (as a regular input type), a list of tags (using inquirer’s checkbox type), and finally a description, again using the input type. So this is how my code looks now: // Packages we need var fs = require('fs'); // Creates our file var inquirer = require('inquirer'); // The engine for our questions prompt var slugify = require('slugify'); // Will turn a string into a usable filename // The questions var questions = [ { type: 'input', name: 'link', message: 'What is the url?', }, { type: 'input', name: 'title', message: 'What is the title?', }, { type: 'checkbox', name: 'tags', message: 'Would you like me to add any tags?', choices: [ { name: 'frontend' }, { name: 'backend' }, { name: 'security' }, { name: 'design' }, { name: 'process' }, { name: 'business' }, ], }, { type: 'input', name: 'description', message: 'How about a description?', }, ]; // The questions prompt function askQuestions() { // Say hello console.log('🐿 Oh, hello! Found something you want me to bookmark?\n'); // Ask questions inquirer.prompt(questions).then((answers) => { // Things we'll need to generate the output var title = answers.title; var link = answers.link; var tags = answers.tags + ''; var description = answers.description; var output = '---\n' + 'title: ""' + title + '""\n' + 'link: ""' + link + '""\n' + 'tags: [' + tags + ']\n' + '---\n' + description + '\n'; // Finished asking questions, show the output console.log('\n🐿 All done! Here is what I\'ve written down:\n'); console.log(output); // Things we'll need to generate the filename var slug = slugify(title); var filename = '_bookmarks/' + slug + '.md'; // Write the file fs.writeFile(filename, output, function () { console.log('\n🐿 Great! I have saved your bookmark to ' + filename); }); }); } // Kick off the questions prompt askQuestions(); The output is formatted as YAML front matter in a Markdown file, which will allow us to turn it into a static HTML file using a build tool later. Run npm start again and have a look at the file it outputs. Getting confirmation Before the user makes critical changes, it’s good to verify those changes first. We’re going to add a confirmation step to our tool, before writing the file. More seasoned CLI users may favour speed over a “hey, can you wait a sec and just check this is all ok” step, but I always think it’s worth adding one so you can occasionally save someone’s butt. So, underneath our questions array, let’s add a confirmation array.
// Packages we need … // The questions … // Confirmation questions var confirm = [ { type: 'confirm', name: 'confirm', message: 'Does this look good?', }, ]; // The questions prompt … As we’re adding the confirm step before the file gets written, we’ll need to add the following inside the askQuestions function: // The questions prompt function askQuestions() { // Say hello … // Ask questions inquirer.prompt(questions).then((answers) => { … // Things we'll need to generate the output … // Finished asking questions, show the output … // Confirm output is correct inquirer.prompt(confirm).then(answers => { // Things we'll need to generate the filename var slug = slugify(title); var filename = '_bookmarks/' + slug + '.md'; if (answers.confirm) { // Save output into file fs.writeFile(filename, output, function () { console.log('\n🐿 Great! I have saved your bookmark to ' + filename); }); } else { // Ask the questions again console.log('\n🐿 Oops, let\'s try again!\n'); askQuestions(); } }); }); } // Kick off the questions prompt askQuestions(); Now run npm start and give it a go! Typing y will write the file, and n will take you back to the start. Ideally, I’d store the answers already given as defaults so the user doesn’t have to start from scratch, but I want to keep this demo simple. Serving the files Now that your bookmarking tool is successfully saving formatted Markdown files to a folder, the next step is to serve those files in a way that lets you share them online. The easiest way to do this is to use a static-site generator to convert your YAML files into HTML, and pop them all on one page. Now, you’ve got a few options here and I don’t want to force you down any particular path, as there are plenty out there – it’s just a case of using the one you’re most comfortable with. I personally favour Jekyll because of its tight integration with GitHub Pages – I don’t want to mess around with hosting and deployment, so it’s really handy to have my bookmarks publish themselves on my site as soon as I commit and push them using Git. I’ll give you a very brief run-through of how I’m doing this with bookmarkbot, but I recommend you read my Get Started With GitHub Pages (Plus Bonus Jekyll) guide if you’re unfamiliar with them, because I’ll be glossing over some bits that are already covered in there. Setting up a build tool If you haven’t already, install Jekyll and Bundler globally through RubyGems. Jekyll is our static-site generator, and Bundler is what we use to install Ruby dependencies. gem install jekyll bundler In my project folder, I’m going to run the following which will install the Jekyll files we’ll need to build our listing page. I’m using --force, otherwise it will complain that the directory isn’t empty. jekyll new . --force If you check your project folder, you’ll see a bunch of new files. Now run the following to start the server: bundle exec jekyll serve This will build a new directory called _site. This is where your static HTML files have been generated. Don’t touch anything in this folder because it will get overwritten the next time you build. Now that serve is running, go to http://127.0.0.1:4000/ and you’ll see the default Jekyll page and know that things are set up right. Now, instead, we want to see our list of bookmarks that are saved in the _bookmarks directory (make sure you’ve got a few saved). So let’s get that set up next. Open up the _config.yml file that Jekyll added earlier. In here, we’re going to tell it about our bookmarks. 
Replace everything in your _config.yml file with the following: title: My Bookmarks description: These are some of my favourite articles about the web. markdown: kramdown baseurl: /bookmarks # This needs to be the same name as whatever you call your repo on GitHub. collections: - bookmarks This will make Jekyll aware of our _bookmarks folder so that we can call it later. Next, create a new directory and file at _layouts/home.html and paste in the following. <!doctype html> <html lang=""en""> <head> <meta charset=""UTF-8"" /> <title>{{site.title}}</title> <meta name=""description"" content=""{{site.description}}""> </head> <body> <h1>{{site.title}}</h1> <p>{{site.description}}</p> <ul> {% for bookmark in site.bookmarks %} <li> <a href=""{{bookmark.link}}""> <h2>{{bookmark.title}}</h2> </a> {{bookmark.content}} {% if bookmark.tags %} <ul> {% for tags in bookmark.tags %}<li>{{tags}}</li>{% endfor %} </ul> {% endif %} </li> {% endfor %} </ul> </body> </html> Restart Jekyll for your config changes to kick in, and go to the url it provides you (probably http://127.0.0.1:4000/bookmarks, unless you gave something different as your baseurl). It’s a decent start – there’s a lot more we can do in this area but now we’ve got a nice list of all our bookmarks, let’s get it online! If you want to use GitHub Pages to host your files, your first step is to push your project to GitHub. Go to your repository and click “settings”. Scroll down to the section labelled “GitHub Pages”, and from here you can enable it. Select your master branch, and it will provide you with a url to view your published pages. What next? Now that you’ve got a framework in place for publishing bookmarks, you can really go to town on your listing page and make it your own. First thing you’ll probably want to do is add some CSS; then, when you’ve added a bunch of bookmarks, you may want to have some filtering in place for the tags, or perhaps to extend the types of questions that you ask to include an image (if you’re feeling extra-fancy, you could just ask for a url and pull in metadata from the site itself). Maybe you’ve got an idea that doesn’t involve bookmarks at all. You could use what you’ve learnt to build a place where you can share quotes, a list of your favourite restaurants, or even Christmas gift ideas. Here’s one I made earlier My demo, bookmarkbot, is on GitHub, and I’ve reused a lot of the code from styleguides.io. Feel free to grab bits of code from there, and do share what you end up making!",2017,Anna Debenham,annadebenham,2017-12-11T00:00:00+00:00,https://24ways.org/2017/teach-the-cli-to-talk-back/,code 216,Styling Components - Typed CSS With Stylable,"There’s been a lot of debate recently about how best to style components for web apps so that styles don’t accidentally ‘leak’ out of the component they’re meant for, or clash with other styles on the page. Elaborate CSS conventions have sprung up, such as OOCSS, SMACSS, BEM, ITCSS, and ECSS. These work well, but they are methodologies, and require everyone in the team to know them and follow them, which can be a difficult undertaking across large or distributed teams. Others just give up on CSS and put all their styles in JavaScript. Now, I’m not bashing JS, especially so close to its 22nd birthday, but CSS-in-JS has problems of its own. Browsers have 20 years’ experience in optimising their CSS engines, so JavaScript won’t be as fast as using real CSS, and in any case, this requires waiting for JS to download, parse and execute, then render the styles.
There’s another problem with CSS-in-JS, too. Since Responsive Web Design hit the streets, most designers no longer make comps in Photoshop or its equivalents; instead, they write CSS. Why hire an expensive design professional and require them to learn a new way of doing their job? A recent thread on Twitter asked “What’s your biggest gripe with CSS-in-JS?”, and the replies were illuminating: “Always having to remember to camelCase properties then spending 10min pulling hair out when you do forget”, “the cryptic domain-specific languages that each of the frameworks do just ever so slightly differently”, “When I test look and feel in browser, then I copy paste from inspector, only to have to re-write it as a JSON object”, “Lack of linting, autocomplete, and css plug-ins for colors/ incrementing/ etc”. If you’re a developer, and you’re still unconvinced, I challenge you to let designers change the font in your IDE to Zapf Chancery and choose a new colour scheme, simply because they like it better. Does that sound like fun? Will that boost your productivity? Thought not. Some chums at Wix Engineering and I wanted to see if we could square this circle. Wix-hosted sites have always used CSS-in-JS (the concept isn’t new; it was in Netscape 4!) but that was causing performance problems. Could we somehow devise a method of extending CSS (like SASS and LESS do) that gives us styles that are guaranteed not to leak or clash, that is compatible with code editors’ autocompletion, and which could be pre-processed at build time to valid, cross-browser, static CSS? A few months and a few proofs of concept later (drumroll), yes – we could! We call it Stylable. Introducing Stylable Stylable is a CSS pre-processor, like SASS or LESS. It uses CSS syntax so all your development tools will work. At build time, the Stylable CSS extensions are transpiled to flat, valid, cross-browser vanilla CSS for maximum performance. There’s quite a bit to it, and this is a short article, so let’s look at the basic concepts. Components all the way down Stylable is designed for component-based systems. Imagine you have a Gallery component. Within that, there is a Navigation component (for example, containing ‘next’, ‘previous’, ‘show all thumbnails’, and ‘show all albums’ controls), and within that there are NavButton components. Each component is discrete, used elsewhere in the system in different contexts, perhaps maintained by different team members or even different organisations — you can use Stylable to add a typed interface to non-Stylable component libraries, as well as using it to build an app from scratch. Firstly, Stylable will automatically namespace styles so they only apply inside that component, by rewriting them at build time with a unique (but human-readable) prefix. So, for example, <div className=""jingle bells"" /> might be re-written as <div class=""header183--jingle header183--bells""></div>. So far, so BEM-like (albeit without the headache of remembering a convention). But what else can it do? Custom pseudo-elements An important feature of Stylable is the ability to reach into a component and style it from the outside, without having to know about its internal structure. Let’s see the guts of a simple JSX button component in the file button.jsx: render () { return ( <button> <span className=""icon"" /> <span className=""label"">Submit</span> </button> ); } (Note: className is the JSX way of setting a class on an element; this example uses React, but Stylable itself is framework-agnostic.)
I style it using a Stylable stylesheet (the .st.css suffix tells the preprocessor to process this file): /* button.st.css */ /* note that the root class is automatically placed on the root HTML element by Stylable React integration */ .root { background: #b0e0e6; } .icon { display: block; height: 2em; background-image: url('./assets/btnIcon.svg'); } .label { font-size: 1.2em; color: rgba(81, 12, 68, 1.0); } Note that Stylable allows all the CSS that you know and love to be included. As Drew Powers wrote in his review: with Stylable, you get CSS, and every part of CSS. This seems like a “duh” observation, but this is significant if you’ve ever battled with a CSS-in-JS framework over a lost or “hacky” implementation of a basic CSS feature. I can import my Button component into another component - this time, panel.jsx: /* panel.jsx */ import * as React from 'react'; import {properties, stylable} from 'wix-react-tools'; import {Button} from '../button'; import style from './panel.st.css'; export const Panel = stylable(style)(() => ( <div> <Button className=""cancelBtn"" /> </div> )); In panel.st.css: /* panel.st.css */ :import { -st-from: './button.st.css'; -st-default: Button; } /* cancelBtn is of type Button */ .cancelBtn { -st-extends: Button; background: cornflowerblue; } /* targets the label of <Button className=""cancelBtn"" /> */ .cancelBtn::label { color: honeydew; font-weight: bold; } Here, we’re reaching into the Button component from the Panel component. Buttons that are not inside a Panel won’t be affected. We do this by extending the CSS concept of pseudo-elements. As MDN says, “A CSS pseudo-element is a keyword added to a selector that lets you style a specific part of the selected element(s)”. We don’t use a descendant selector because the label isn’t part of the Panel component, it’s part of the Button component. This syntax allows us three important features: Piercing the Shadow Boundary Because, like a Matryoshka doll of code, you can have components inside components inside components, you can chain pseudo-elements. In Stylable, Gallery::NavigationPanel::Button::Icon is a legitimate selector. We were worried by this (even though all Stylable CSS is transpiled to flat, valid CSS) because it’s not allowed in CSS, albeit with the note “A future version of this specification may allow multiple pseudo-elements per selector”. So I asked the CSS Working Group and was told “we intend to only allow specific combinations”, so we feel this extension to CSS is in the spirit of the language. While we’re on the subject of those pesky Web Standards, note that the proposed ::part and ::theme pseudo-elements are meant to fulfil the same function. However, those are coming in two years (YouTube link) and, when they do, Stylable will support them. Structure-agnostic The second totez-groovy™ feature of Stylable’s pseudo-element syntax is that you don’t have to care about the internal structure of the component whose boundary you’re piercing. Any element with a class attribute is exposed as a pseudo-element to any component that imports it. It acts as an interface on any component, whether written in-house or by a third party. Code completion When we started writing Stylable, our objective was to do for CSS what TypeScript does for JavaScript. Wikipedia says Challenges with dealing with complex JavaScript code led to demand for custom tooling to ease developing of components in the language.
TypeScript developers sought a solution that would not break compatibility with the standard and its cross-platform support … [with] static typing that enables static language analysis, which facilitates tooling and IDE support. Similarly, because Stylable knows about components, their stylable parts and states, and how they inter-relate, we can develop language services like code completion and validation. That means we can see our errors at build time or even while working in our IDE. Wave goodbye to silent run-time breakage misery, with the Stylable Intelligence VS Code extension! An action replay of Visual Studio Code offering code completion etc, filmed in super StyloVision. Pseudo-classes for state Stylable makes it easy to apply styles to custom states (as well as the usual :active, :checked, :visited etc) by extending the CSS pseudo-class syntax. We do this by declaring the possible custom states on the component: /* Gallery.st.css */ .root { -st-states: toggled, loading; } .root:toggled { color: red; } .root:loading { color: green; } .root:loading:toggled { color: blue; } The -st-states “property” is actually a directive for the transpiler, so Stylable knows about the possible pseudo-classes and can offer code completion etc. It looks like a vendor prefix by design, because that way it’s valid CSS syntax and IDEs won’t flag it as an error, but it is removed at build time. Remember, Stylable resolves to flat, valid, cross-browser CSS. As with plain CSS, it can’t set a state, but can only react to states set externally. In the case of custom pseudo-classes, your JavaScript logic is responsible for maintaining state — by default, by setting a data-* attribute. And there’s more! Hopefully, I’ve shown you how Stylable extends CSS to allow you to style components and sub-components without worrying that styles will leak, or knowing too much about internal structure. There isn’t time to tell you about mixins (CSS macros in JavaScript), variables or our theming capabilities, because I have wine to wrap and presents to mull. We made Stylable because we ♥ CSS. But there’s a practical reason, too. As James Kyle, a core team member of Yarn, Babel and TC39 (the JavaScript Standards Technical Committee), said of Stylable “pretty sure all the CSS-in-JS libraries just died for me”, explaining CSS could be perfectly static if given the right tools, that’s exactly what stylable does. It gives you the tools you need in CSS so that you don’t need to do a bunch of dynamic shit in JS. Making it static is a huge performance win. Wix is currently battle-testing Stylable in its back-office systems, before rolling it out to power Wix-hosted sites to make them more performant. There are 110 million Wix-hosted sites, so there will be a lot of Stylable on the web in a few months. And it’s open-sourced so you, dear Reader, can try it out and use it too. There’s a Stylable boilerplate based on create-react-app to get you started (more integrations are in the pipeline). Happy Hols ‘n’ Hugz from the Stylable team: Bruce, Arnon, Tom, Ido. Read more Stylable documentation centre Stylable on Twitter A nice picture of a hedgehog",2017,Bruce Lawson,brucelawson,2017-12-09T00:00:00+00:00,https://24ways.org/2017/styling-components-typed-css-with-stylable/,code 220,Finding Your Way with Static Maps,"Since the introduction of the Google Maps service in 2005, online maps have taken off in a way not really possible before the invention of slippy map interaction.
Although it was quickly followed by a plethora of similar services from both commercial and non-commercial parties, Google’s first-mover advantage and easy-to-use developer API saw Google Maps become pretty much the de facto mapping service. It’s now so easy to add a map to a web page that there’s no reason not to. Dropping an iframe map into your page is as simple as embedding a YouTube video. But there’s one crucial drawback to both the solution Google provides for you to drop into your page and the code developers typically implement themselves – they don’t work without JavaScript. A bit about JavaScript Back in October of this year, The Yahoo! Developer Network blog ran some tests to measure how many visitors to the Yahoo! home page didn’t have JavaScript available or enabled in their browser. It’s an interesting test when you consider that the audience for the Yahoo! home page (one of the most visited pages on the web) represents about as mainstream a sample as you’ll find. If there’s any such thing as an ‘average Web user’ then this is them. The results surprised me. It varied from region to region, but at most just two per cent of visitors didn’t have JavaScript running. To be honest, I was expecting it to be higher, but this quote from the article caught my attention: While the percentage of visitors with JavaScript disabled seems like a low number, keep in mind that small percentages of big numbers are also big numbers. That’s right, of course, and it got me thinking about what that two per cent means. For many sites, two per cent is the number of visitors using the Opera web browser, using IE6, or using Mobile Safari. So, although a small percentage of the total, users without JavaScript can’t just be forgotten about, and catering for them is at the very heart of how the web is supposed to work. Starting with content in HTML, we layer on presentation with CSS and then enhance interactivity with JavaScript. If anything fails along the way or the network craps out, or a browser just doesn’t support one of the technologies, the user still gets something they can work with. It’s progressive enhancement – also known as doing our jobs properly. Sorry, wasn’t this about maps? As I was saying, the default code Google provides, and the example code it gives to developers (which typically just gets followed ‘as is’), don’t account for users without JavaScript. No JavaScript, no content. When adding the ability to publish maps to our small content management system Perch, I didn’t want to provide a solution that only worked with JavaScript. I had to go looking for a way to provide maps without JavaScript, too. There’s a simple solution, fortunately, in the form of static map tiles. All the various slippy map services use a JavaScript interface on top of what are basically rendered map image tiles. Dragging the map loads in more image tiles in the direction you want to view. If you’ve used a slippy map on a slow connection, you’ll be familiar with seeing these tiles load in one by one. The Static Map API The good news is that these tiles (or tiles just like them) can be used as regular images on your site. Google has a Static Map API which not only gives you a handy interface to retrieve a tile for the exact area you need, but also allows you to place pins, and zoom and centre the tile so that the image looks just so.
This means that you can create a static, non-JavaScript version of your slippy map’s initial (or ideal) state to load into your page as a regular image, and then have the JavaScript map hijack the image and make it slippy. Clearly, that’s not going to be a perfect solution for every map’s requirements. It doesn’t allow for panning, zooming or interrogation without JavaScript. However, for the majority of straightforward map uses online, a static map makes a great alternative for those visitors without JavaScript. Here’s the how Retrieving a static map tile is staggeringly easy – it’s just a case of forming a URL with the correct arguments and then using that as the src of an image tag. <img src=""http://maps.google.com/maps/api/staticmap ?center=Bethlehem+Israel &zoom=5 &size=540x280 &maptype=satellite &markers=color:red|31.4211,35.1144 &sensor=false"" width=""540"" height=""280"" alt=""Map of Bethlehem, Israel"" /> As you can see, there are a few key options that we pass along to the base URL. All of these should be familiar to anyone who’s worked with the JavaScript API. center determines the point on which the map is centred. This can be latitude and longitude values, or simply an address which is then geocoded. zoom sets the zoom level. size is the pixel dimensions of the image you require. maptype can be roadmap, satellite, terrain or hybrid. markers sets one or more pin locations. Markers can be labelled, have different colours, and so on – there’s quite a lot of control available. sensor states whether you are using a sensor to determine the user’s location. When just embedding a map in a web page, set this to false. There are many options, including plotting paths and setting the image format, which can all be found in the straightforward documentation. Adding to your page If you’ve worked with the JavaScript API, you’ll know that it needs a container element which you inject the map into: <div id=""map""></div> All you need to do is put your static image inside that container: <div id=""map""> <img src=""http://maps.google.com/maps/api/staticmap[...]"" /> </div> And then, in your JavaScript, find the image and remove it. For example, with jQuery you’d simply use: $('#map img').remove(); Why not use a <noscript> element around the image? You could, and that would certainly work fine for browsers that do not support JavaScript. What that won’t cover, however, is the situation where the browser has JavaScript support but, for whatever reason, the JavaScript doesn’t run. This could be due to network issues, an aggressive corporate firewall, or even just a bug in your code. So for that reason, we put the image in for all browsers that show images, and then remove it when the JavaScript is successfully running. See an example in action About rate limits The Google Static Map API limits the requests per site viewer – currently at one thousand distinct maps per day per viewer. So, for most sites you really don’t need to worry about the rate limit. Requests for the same tile aren’t normally counted, as the tile has already been generated and is cached. You can embed the images direct from Google and let it worry about the distribution and caching. In conclusion As you can see, adding a static map alongside your dynamic map for those users without JavaScript is very easy indeed. There may not be a huge percentage of web visitors browsing without JavaScript but, as we’ve seen, a small percentage of a big number is still a big number. 
When it’s so easy to add a static map, can you really justify not doing it?",2010,Drew McLellan,drewmclellan,2010-12-01T00:00:00+00:00,https://24ways.org/2010/finding-your-way-with-static-maps/,code 221,"“Probably, Maybe, No”: The State of HTML5 Audio","With the hype around HTML5 and CSS3 exceeding levels not seen since 2005’s Ajax era, it’s worth noting that the excitement comes with good reason: the two specifications render many years of feature hacks redundant by replacing them with native features. For fun, consider how many CSS2-based rounded corners hacks you’ve probably glossed over, looking for a magic solution. These days, with CSS3, the magic is border-radius (and perhaps some vendor prefixes) followed by a coffee break. CSS3’s border-radius, box-shadow, text-shadow and gradients, and HTML5’s <canvas>, <audio> and <video> are some of the most anticipated features we’ll see put to creative (ab)use as adoption of the ‘new shiny’ grows. Developers jumping on the cutting edge are using subsets of these features to little detriment, in most cases. The more popular CSS features are design flourishes that can degrade nicely, but the current audio and video implementations in particular suffer from a number of annoyances. The new shiny: how we got here Sound involves one of the five senses, a key part of daily life for most – and yet it has been strangely absent from HTML and much of the web by default. From a simplistic perspective, it seems odd that HTML did not include support for the full multimedia experience earlier, despite the CD-ROM-based craze of the early 1990s. In truth, standards like HTML can take much longer to bake, but eventually deliver the promise of a lowered barrier to entry, consistent implementations and shiny new features now possible ‘for free’ just about everywhere. <img> was introduced early and naturally to HTML, despite having some opponents at the time. Perhaps <audio> and <video> were avoided, given the added technical complexity of decoding various multi-frame formats, plus the hardware and bandwidth limitations of the era. Perhaps there were quarrels about choosing a standard format or – more simply – maybe these elements just weren’t considered to be applicable to the HTML-based web at the time. In any event, browser plugins from programs like RealPlayer and QuickTime eventually helped to fill the in-page audio/video gap, handling <object> and <embed> markup which pointed to .wav, .avi, .rm or .mov files. Suffice it to say, the experience was inconsistent at best and, on the standards side of the fence right now, so is HTML5 in terms of audio and video. <audio>: the theory As far as HTML goes, the code for <audio> is simple and logical. Just as with <img>, a src attribute specifies the file to load. Pretty straightforward – sounds easy, right? <audio src=""mysong.ogg"" controls> <!-- alternate content for unsupported case --> Download <a href=""mysong.ogg"">mysong.ogg</a>; </audio> Ah, if only it were that simple. The first problem is that the OGG audio format, while ‘free’, is not supported by some browsers. Conversely, nor is MP3, despite being a de facto standard used in all kinds of desktop software (and hardware). In fact, as of November 2010, no single audio format is commonly supported across all major HTML5-enabled browsers. What you end up writing, then, is something like this: <audio controls> <source src=""mysong.mp3"" /> <source src=""mysong.ogg"" /> <!-- alternate content for unsupported case, maybe Flash, etc.
--> Download <a href=""mysong.ogg"">mysong.ogg</a> or <a href=""mysong.mp3"">mysong.mp3</a> </audio> Keep in mind, this is only a ‘first class’ experience for the HTML5 case; also, for non-supported browsers, you may want to look at another inline player (object/embed, or a JavaScript plus Flash API) to have inline audio. You can imagine the added code complexity in the case of supporting ‘first class’ experiences for older browsers, too. <audio>: the caveats With <img>, you typically don’t have to worry about format support – it just works – and that’s part of what makes a standard wonderful. JPEG, PNG, BMP, GIF, even TIFF images all render just fine if for no better reason, perhaps, than being implemented during the ‘wild west’ days of the web. The situation with <audio> today reflects a very different – read: business-aware – environment in 2010. (Further subtext: There’s a lot of [potential] money involved.) Regrettably, this is a collision of free and commercial interests, where the casualty is ultimately the user. Second up in the casualty list is you, the developer, who has to write additional code around this fragmented support. The HTML5 audio API as implemented in JavaScript has one of the most un-computer-like responses I’ve ever seen, and inspired the title of this post. Calling new Audio().canPlayType('audio/mp3'), which queries the system for format support according to a MIME type, is supposed to return one of “probably”, “maybe”, or “no”. Sometimes, you’ll just get a null or empty string, which is also fun. A “maybe” response does not guarantee that a format will be supported; sometimes audio/mp3 gives “maybe,” but then audio/mpeg; codecs=""mp3"" will give a more-solid “probably” response. This can vary by browser or platform, too, depending on native support – and finally, the user may also be able to install codecs, extending support to include other formats. (Are you excited yet?) Damn you, warring formats! New market and business opportunities go hand-in-hand with technology developments. What we have here is certainly not failure to communicate; rather, we have competing parties shouting loudly in public in attempts to influence mindshare towards a de facto standard for audio and video. Unfortunately, the current situation means that at least two formats are effectively required to serve the majority of users correctly. As it currently stands, we have the free and open source software camp of OGG Vorbis/WebM and its proponents (notably, Mozilla, Google and Opera in terms of browser makers), up against the non-free, proprietary and ‘closed’ camp of MP3 and MPEG4/HE-AAC/H.264 – which is where you’ll find commitments from Apple and Microsoft, among others. Apple is likely in with H.264 for the long haul, given its use of the format for its iTunes music store and video offerings. It is generally held that H.264 is a technically superior format in terms of file size versus quality, but it involves intellectual property and, in many use cases, requires licensing fees. To be fair, there is a business model with H.264 and much has been invested in its development, but this approach is not often the kind that wins over the web. On that front, OGG/WebM may eventually win for being a ‘free’ format that does not involve a licensing scheme. Closed software and tools ideologically clash with the open nature of the web, which exists largely thanks to free and open technology.
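As a practical aside, those fuzzy canPlayType() responses can still be put to work when deciding which file to hand to an Audio object. The following is a minimal sketch only – the helper name and file names are made up for illustration – which treats any non-empty response as a format worth trying:

// Minimal sketch: pick the first source the browser claims it may play.
// canPlayType() returns “probably”, “maybe” or an empty string (and
// sometimes null), so any non-empty response is treated as worth a try.
function pickPlayableSource(sources) {
  if (typeof Audio === 'undefined') {
    return null; // no HTML5 audio at all; fall back to Flash or a download link
  }
  var probe = new Audio();
  for (var i = 0; i < sources.length; i++) {
    if (probe.canPlayType(sources[i].type)) {
      return sources[i].url;
    }
  }
  return null; // no claimed support for any of the given formats
}

// Hypothetical usage with the two formats discussed above
var playable = pickPlayableSource([
  { type: 'audio/mpeg; codecs=""mp3""', url: 'mysong.mp3' },
  { type: 'audio/ogg; codecs=""vorbis""', url: 'mysong.ogg' }
]);
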
Because of philosophical and business reasons, support for audio and video is fragmented across browsers adopting HTML5 features. It does not help that a large amount of audio and video currently exists in non-free MP3 and MPEG-4 formats. Adoption of <audio> and <video> may be slowed, since it is more complex than <img> and may feel ‘broken’ to developers when edge cases are encountered. Furthermore, the HTML5 spec does not mandate a single required format. The end result is that, as a developer, you must currently provide at least both MP3 and OGG, for example, to serve most existing HTML5-based user agents. Transitioning to <audio> There will be some growing pains as developers start to pick up the new HTML5 shiny, while balancing the needs of current and older agents that don’t support either <audio> or the preferred format you may choose (for example, MP3). In either event, Flash or other plugins can be used as done traditionally within HTML4 documents to embed and play the relevant audio. The SoundManager 2 page player demo in action. Ideally, HTML5 audio should be used whenever possible with Flash as the backup option. A few JavaScript/Flash-based audio player projects exist which balance the two; in attempting to tackle this problem, I develop and maintain SoundManager 2, a JavaScript sound API which transparently uses HTML5 Audio() and, if needed, Flash for playing audio files. The internals can get somewhat ugly, but the transition between HTML4 and HTML5 is going to be just that – and even with HTML5, you will need some form of format fall-back in addition to graceful degradation. It may be safest to fall back to MP3/MP4 formats for inline playback at this time, given wide support via Flash, some HTML5-based browsers and mobile devices. Considering the amount of MP3/MP4 media currently available, it is wiser to try these before falling through to a traditional file download process. Early findings Here is a brief list of behavioural notes, annoyances, bugs, quirks and general weirdness I have found while playing with HTML5-based audio at time of writing (November 2010): Apple iPad/iPhone (iOS 4, iPad 3.2+) Only one sound can be played at a time. If a second sound starts, the first is stopped. No auto-play allowed. Sounds follow the pop-up window security model and can only be started from within a user event handler such as onclick/touch, and so on. Otherwise, playback attempts silently fail. Once started, a sequence of sounds can be created or played via the ‘finish’ event of the previous sound (for example, advancing through a playlist without interaction after first track starts). iPad, iOS 3.2: Occasional ‘infinite loop’ bug seen where audio does not complete and stop at a sound’s logical end – instead, it plays again from the beginning. Might be specific to example file format (HE-AAC) encoded from iTunes. Apple Safari, OS X Snow Leopard 10.6.5 Critical bug: Safari 4 and 5 intermittently fail to load or play HTML5 audio on Snow Leopard due to bug(s) in QuickTime X and/or other underlying frameworks. Known Apple ‘radar’ bug: bugs.webkit.org #32159 (see also, test case.) Amusing side note: Safari on Windows is fine. Apple Safari, Windows Food for thought: if you download “Safari” alone on Windows, you will not get HTML5 audio/video support (tested in WinXP). You need to download “Safari + QuickTime” to get HTML5 audio/video support within Safari. (As far as I’m aware, Chrome, Firefox and Opera either include decoders or use system libraries accordingly.
Presumably IE 9 will use OS-level APIs.) General Quirks Seeking and loading, ‘progress’ events, and calculating bytes loaded versus bytes total should not be expected to be linear, as users can arbitrarily seek within a sound. It appears that some support for HTTP ranges exists, which adds a bit of logic to UI code. Browsers seem to vary slightly in their current implementations of these features. The onload event of a sound may be of little relevance, if non-linear loading is involved (see above note re: seeking). Interestingly (perhaps I missed it), the current spec does not seem to specify a panning or left/right channel mix option. The preload attribute values may vary slightly between browsers at this time. Upcoming shiny: HTML5 Audio Data API With access to audio data, you can incorporate waveform and spectrum elements that make your designs react to music. The HTML5 audio spec does a good job covering the basics of playback, but did not initially get into manipulation or generation of audio on-the-fly, something Flash has had for a number of years now. What if JavaScript could create, monitor and change audio dynamically, like a sort of audio <canvas> element? With that kind of capability, many dynamic audio processing features become feasible and, when combined with other media, can make for some impressive demos. What started as a small idea among a small group of audio and programming enthusiasts grew to inspire a W3C audio incubator group, and continued to establish the Mozilla Audio Data API. Contributors wrote a patch for Firefox which was reviewed and revised, and is now slated to be in the public release of Firefox 4. Some background and demos are also detailed in an article from the BBC R&D blog. There are plenty of live demos to see, which give an impression of the new creative ideas this API enables. Many concepts are not new in themselves, but it is exciting to see this sort of thing happening within the native browser context. Mozilla is not alone in this effort; the WebKit folks are also working on a JavaScriptAudioNode interface, which implements similar audio buffering and sample elements. The future? It is my hope that we’ll see a common format emerge in terms of support across the major browsers for both audio and video; otherwise, support will continue to be fragmented and mildly frustrating to develop for, and that can impede growth of the feature. It’s a big call, but if <img> had lacked a common format back in the wild west era, I doubt the web would have grown to where it is today. Complaints and nitpicks aside, HTML5 brings excellent progress on the browser multimedia front, and the first signs of native support are a welcome improvement given all audio and video previously relied on plugins. There is good reason to be excited. While there is room for more, support could certainly be much worse – and as tends to happen with specifications, the implementations targeting them should improve over time. Note: Thanks to Nate Koechley, who suggested the Audio().canPlayType() response be part of the article title.",2010,Scott Schiller,scottschiller,2010-12-08T00:00:00+00:00,https://24ways.org/2010/the-state-of-html5-audio/,code 223,Calculating Color Contrast,"Some websites and services allow you to customize your profile by uploading pictures, changing the background color or other aspects of the design. As a customer, this personalization turns a web app into your little nest where you store your data. 
As a designer, letting your customers have free rein over the layout and design is a scary prospect. So what happens to all the stock text and images that are designed to work on nice white backgrounds? Even the Mac only lets you choose between two colors for the OS, blue or graphite! Opening up the ability to customize your site’s color scheme can be a recipe for disaster unless you are flexible and understand how to find maximum color contrasts. In this article I will walk you through two simple equations to determine if you should be using white or black text depending on the color of the background. The equations are both easy to implement and produce similar results. It isn’t a matter of which is better, but more the fact that you are using one at all! That way, even with the craziest of Geocities color schemes that your customers choose, at least your text will still be readable. Let’s have a look at a range of various possible colors. Maybe these are pre-made color schemes, corporate colors, or plucked from an image. Now that we have these potential background colors and their hex values, we need to find out whether the corresponding text should be in white or black, based on which has a higher contrast, therefore affording the best readability. This can be done at runtime with JavaScript or in the back-end before the HTML is served up. There are two functions I want to compare. The first, I call ’50%’. It takes the hex value and compares it to the value halfway between pure black and pure white. If the hex value is less than half, meaning it is on the darker side of the spectrum, it returns white as the text color. If the result is greater than half, it’s on the lighter side of the spectrum and returns black as the text value. In PHP: function getContrast50($hexcolor){ return (hexdec($hexcolor) > 0xffffff/2) ? 'black':'white'; } In JavaScript: function getContrast50(hexcolor){ return (parseInt(hexcolor, 16) > 0xffffff/2) ? 'black':'white'; } It doesn’t get much simpler than that! The function converts the six-character hex color into an integer and compares that to one half the integer value of pure white. The function is easy to remember, but is naive when it comes to understanding how we perceive parts of the spectrum. Different wavelengths have greater or lesser impact on the contrast. The second equation is called ‘YIQ’ because it converts the RGB color space into YIQ, which takes into account the different impacts of its constituent parts. Again, the equation returns white or black and it’s also very easy to implement. In PHP: function getContrastYIQ($hexcolor){ $r = hexdec(substr($hexcolor,0,2)); $g = hexdec(substr($hexcolor,2,2)); $b = hexdec(substr($hexcolor,4,2)); $yiq = (($r*299)+($g*587)+($b*114))/1000; return ($yiq >= 128) ? 'black' : 'white'; } In JavaScript: function getContrastYIQ(hexcolor){ var r = parseInt(hexcolor.substr(0,2),16); var g = parseInt(hexcolor.substr(2,2),16); var b = parseInt(hexcolor.substr(4,2),16); var yiq = ((r*299)+(g*587)+(b*114))/1000; return (yiq >= 128) ? 'black' : 'white'; } You’ll notice first that we have broken down the hex value into separate RGB values. This is important because each of these channels is scaled in accordance to its visual impact. Once everything is scaled and normalized, it will be in a range between zero and 255. Much like the previous ’50%’ function, we now need to check if the input is above or below halfway. Depending on where that value is, we’ll return the corresponding highest contrasting color. 
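As a quick usage sketch (the element ID here is hypothetical), either function can drive an inline style at runtime: function applyContrast(el, hexcolor) { /* hexcolor is the six-character value without the leading # */ el.style.backgroundColor = '#' + hexcolor; el.style.color = getContrastYIQ(hexcolor); /* or getContrast50(hexcolor), whichever you prefer */ } applyContrast(document.getElementById('profile-header'), '990000'); /* a dark red background gets white text */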
That’s it: two simple contrast equations which work really well to determine the best readability. If you are interested in learning more, the W3C has a few documents about color contrast and how to determine if there is enough contrast between any two colors. This is important for accessibility to make sure there is enough contrast between your text and link colors and the background. There is also a great article by Kevin Hale on Particletree about his experience with choosing light or dark themes. To round it out, Jonathan Snook created a color contrast picker which allows you to play with RGB sliders to get values for YIQ, contrast and others. That way you can quickly fiddle with the knobs to find the right balance. Comparing results Let’s revisit our color schemes and see which text color is recommended for maximum contrast based on these two equations. If we use the simple ’50%’ contrast function, we can see that it recommends black against all the colors except the dark green and purple on the second row. In general, the equation feels the colors are light and that black is a better choice for the text. The more complex ‘YIQ’ function, with its weighted colors, has slightly different suggestions. White text is still recommended for the very dark colors, but there are some surprises. The red and pink values show white text rather than black. This equation takes into account the weight of the red value and determines that the hue is dark enough for white text to show the most contrast. As you can see, the two contrast algorithms agree most of the time. There are some instances where they conflict, but overall you can use the equation that you prefer. I don’t think it is a major issue if some edge-case colors get one contrast over another; they are still very readable. Now let’s look at some common colors and then see how the two functions compare. You can quickly see that they do pretty well across the whole spectrum. In the first few shades of grey, the white and black contrasts make sense, but as we test other colors in the spectrum, we do get some unexpected deviation. Pure red #FF0000 has a flip-flop. This is due to how the ‘YIQ’ function weights the RGB parts. While you might have a personal preference for one style over another, both are justifiable. In this second round of colors, we go deeper into the spectrum, off the beaten track. Again, most of the time the contrasting algorithms are in sync, but every once in a while they disagree. You can select whichever you prefer; neither is unreadable. Conclusion Contrast in color is important, especially if you cede all control and take a hands-off approach to the design. It is important to select smart defaults by making the contrast between colors as high as possible. This makes it easier for your customers to read, increases accessibility and is generally just easier on the eyes. Sure, there are plenty of other equations out there to determine contrast; what is most important is that you pick one and implement it into your system. So, go ahead and experiment with color in your design.
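If you’d like to reproduce this comparison yourself, a small harness (using the two functions above on a hypothetical palette) prints both recommendations side by side and flags the flip-flops: var samples = ['ff0000', 'ffc0cb', '004400', '550055', '808080']; /* hypothetical palette */ for (var i = 0; i < samples.length; i++) { var fifty = getContrast50(samples[i]); var yiq = getContrastYIQ(samples[i]); /* flag the colors where the two equations disagree */ console.log('#' + samples[i] + ': 50% says ' + fifty + ', YIQ says ' + yiq + (fifty === yiq ? '' : ' – flip-flop')); }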
You now know how easy it is to guarantee that your text will be the most readable in any circumstance.",2010,Brian Suda,briansuda,2010-12-24T00:00:00+00:00,https://24ways.org/2010/calculating-color-contrast/,code 231,Designing for iOS: Life Beyond Media Queries,"Although not a new phenomenon, media queries seem to be getting a lot of attention online recently and for the right reasons too – it’s great to be able to adapt a design with just a few lines of CSS – but many people are relying only on them to create an iPhone-specific version of their website. I was pleased to hear at FOWD NYC a few weeks ago that both myself and Aral Balkan share the same views on why media queries aren’t always going to be the best solution for mobile. Both of us specialise in iPhone design ourselves and we opt for a different approach to media queries. The trouble is, regardless of what you have carefully selected to be display:none; in your CSS, the iPhone still loads everything in the background; all that large imagery for your full-scale website also takes up valuable mobile bandwidth and time. You can greatly increase the speed of your website by creating a specific site tailored to mobile users with just a few handy pointers – media queries, in some instances, might be perfectly suitable; in others, here’s what you can do. Redirect your iPhone/iPod Touch users To detect whether someone is viewing your site on an iPhone or iPod Touch, you can either use JavaScript or PHP. The JavaScript if((navigator.userAgent.match(/iPhone/i)) || (navigator.userAgent.match(/iPod/i))) { if (document.cookie.indexOf(""iphone_redirect=false"") == -1) window.location = ""http://mobile.yoursitehere.com""; } The PHP if(strstr($_SERVER['HTTP_USER_AGENT'],'iPhone') || strstr($_SERVER['HTTP_USER_AGENT'],'iPod')) { header('Location: http://mobile.yoursitehere.com'); exit(); } Both of these methods redirect the user to a site that you have made specifically for the iPhone. At this point, be sure to provide a link to the full version of the website, in case the user wishes to view this and not be thrown into an experience they didn’t want, with no way back. Tailoring your site So, now you’ve got 320 × 480 pixels of screen to play with – and to create a style sheet for, just as you would for any other site you build. There are a few other bits and pieces that you can add to your code to create a site that feels more like a fully immersive iPhone app than a website. Retina display When building your website specifically tailored to the iPhone, you might like to go one step further and create a specific style sheet for iPhone 4’s Retina display. Because there are four times as many pixels on the iPhone 4 (640 × 960 pixels), you’ll find specifics such as text shadows and borders will have to be increased. <link rel=""stylesheet"" media=""only screen and (-webkit-min-device-pixel-ratio: 2)"" type=""text/css"" href=""../iphone4.css"" /> (Credit to Thomas Maier) Prevent user scaling This declaration, added into the <head>, stops the user being able to pinch-zoom in and out of your design, which is perfect if you are designing to the exact pixel measurements of the iPhone screen. <meta name=""viewport"" content=""width=device-width; initial-scale=1.0; maximum-scale=1.0;""> Designing for orientation As iPhones aren’t static devices, you’ll also need to provide a style sheet for horizontal orientation.
We can do this by inserting some JavaScript into the <head> as follows: <script type=""text/javascript""> function orient() { switch(window.orientation) { case 0: document.getElementById(""orient_css"").href = ""css/iphone_portrait.css""; break; case -90: document.getElementById(""orient_css"").href = ""css/iphone_landscape.css""; break; case 90: document.getElementById(""orient_css"").href = ""css/iphone_landscape.css""; break; } } window.onload = orient; window.onorientationchange = orient; </script> You can also specify orientation styles using media queries. This is absolutely fine, as by this point you’ll already be working with mobile-specific graphics and have little need to set a lot of things to display:none; <link rel=""stylesheet"" media=""only screen and (max-device-width: 480px)"" href=""/iphone.css""> <link rel=""stylesheet"" media=""only screen and (orientation: portrait)"" href=""/portrait.css""> <link rel=""stylesheet"" media=""only screen and (orientation: landscape)"" href=""/landscape.css""> Remove the address and status bars, top and bottom To give you more room on-screen and to make your site feel more like an immersive web app, you can place the following declaration into the <head> of your document’s code to remove the address and status bars at the top and bottom of the screen. <meta name=""apple-mobile-web-app-capable"" content=""yes"" /> Making the most of inbuilt functions Similar to mailto: e-mail links, the iPhone also supports another two handy URI schemes which are great for enhancing contact details. When tapped, the following links will automatically bring up the appropriate call or text interface: <a href=""tel:01234567890"">Call us</a> <a href=""sms:01234567890"">Text us</a> iPhone-specific Web Clip icon Although I believe them to be fundamentally flawed, since they rely on the user bookmarking your site, iPhone Web Clip icons are still a nice touch. You need just two declarations, again in the <head> of your document: <link rel=""apple-touch-icon"" href=""icons/regular_icon.png"" /> <link rel=""apple-touch-icon"" sizes=""114x114"" href=""icons/retina_icon.png"" /> For iPhone 4 you’ll need to create a 114 × 114 pixel icon; for a non-Retina display, a 57 × 57 pixel icon will do the trick. Precomposed Apple adds its standard gloss ‘moon’ over the top of any icon. If you feel this might be too much for your particular icon and would prefer a matte finish, you can add precomposed to the end of the apple-touch-icon declaration to remove the standard gloss. <link rel=""apple-touch-icon-precomposed"" href=""/images/touch-icon.png"" /> Wrapping up Media queries definitely have their uses. They make it easy to build a custom experience for your visitor, regardless of their browser’s size. For more complex sites, however, or where you have lots of imagery and other content that isn’t necessary on the mobile version, you can now use these other methods to help you out. Remember, they are purely for presentation and not optimisation; for busy people on the go, optimisation and faster-running mobile experiences can only be a good thing. Have a wonderful Christmas, fellow Webbies!",2010,Sarah Parmenter,sarahparmenter,2010-12-17T00:00:00+00:00,https://24ways.org/2010/life-beyond-media-queries/,code 233,Wrapping Things Nicely with HTML5 Local Storage,"HTML5 is here to turn the web from a web of hacks into a web of applications – and we are well on the way to this goal. The coming year will be totally and utterly awesome if you are excited about web technologies.
This year the HTML5 revolution started and there is no stopping it. For the first time all the browser vendors are rallying together to make a technology work. The new browser war is fought over implementation of the HTML5 standard and not over random additions. We live in exciting times. Starting with a bang As with every revolution there is a lot of noise with bangs and explosions, and that’s the stage we’re at right now. HTML5 showcases are often CSS3 showcases, web font playgrounds, or video and canvas examples. This is great, as it gets people excited and it gives the media something to show. There is much more to HTML5, though. Let’s take a look at one of the less sexy, but amazingly useful features of HTML5 (it was in the HTML5 specs, but grew at such an alarming rate that it warranted its own spec): storing information on the client-side. Why store data on the client-side? Storing information in people’s browsers affords us a few options that every application should have: You can retain the state of an application – when the user comes back after closing the browser, everything will be as she left it. That’s how ‘real’ applications work and this is how the web ones should, too. You can cache data – if something doesn’t change then there is no point in loading it over the Internet if local access is so much faster You can store user preferences – without needing to keep that data on your server at all. In the past, storing local data wasn’t much fun. The pain of hacky browser solutions In the past, all we had were cookies. I don’t mean the yummy things you get with your coffee, endorsed by the blue, furry junkie in Sesame Street, but the other, digital ones. Cookies suck – it isn’t fun to have an unencrypted HTTP overhead on every server request for storing four kilobytes of data in a cryptic format. It was OK for 1994, but really neither an easy nor a beautiful solution for the task of storing data on the client. Then came a plethora of solutions by different vendors – from Microsoft’s userdata to Flash’s LSO, and from Silverlight isolated storage to Google’s Gears. If you want to know just how many crazy and convoluted ways there are to store a bit of information, check out Samy’s evercookie. Clearly, we needed an easier and standardised way of storing local data. Keeping it simple – local storage And, lo and behold, we have one. The local storage API (or session storage, with the only difference being that session data is lost when the window is closed) is ridiculously easy to use. All you do is call a few methods on the window.localStorage object – or even just set the properties directly using the square bracket notation: if('localStorage' in window && window['localStorage'] !== null){ var store = window.localStorage; // valid, API way store.setItem('cow','moo'); console.log( store.getItem('cow') ); // => 'moo' // shorthand, breaks at keys with spaces store.sheep = 'baa'; console.log( store.sheep ); // 'baa' // shorthand for all store['dog'] = 'bark'; console.log( store['dog'] ); // => 'bark' } Browser support is actually pretty good: Chrome 4+; Firefox 3.5+; IE8+; Opera 10.5+; Safari 4+; plus iPhone 2.0+; and Android 2.0+. That should cover most of your needs. Of course, you should check for support first (or use a wrapper library like YUI Storage Utility or YUI Storage Lite). The data is stored on a per domain basis and you can store up to five megabytes of data in localStorage for each domain. Strings attached By default, localStorage only supports strings as storage formats.
You can’t store results of JavaScript computations that are arrays or objects, and every number is stored as a string. This means that long, floating point numbers eat into the available memory much more quickly than if they were stored as numbers. var cowdesc = ""the cow is of the bovine ilk, ""+ ""one end is for the moo, the ""+ ""other for the milk""; var cowdef = { ilk: ""bovine"", legs: 4, udders: 4, purposes: { front: ""moo"", end: ""milk"" } }; window.localStorage.setItem('describecow',cowdesc); console.log( window.localStorage.getItem('describecow') ); // => the cow is of the bovine… window.localStorage.setItem('definecow',cowdef); console.log( window.localStorage.getItem('definecow') ); // => [object Object] = bad! This limits what you can store quite heavily, which is why it makes sense to use JSON to encode and decode the data you store: var cowdef = { ""ilk"":""bovine"", ""legs"":4, ""udders"":4, ""purposes"":{ ""front"":""moo"", ""end"":""milk"" } }; window.localStorage.setItem('describecow',JSON.stringify(cowdef)); console.log( JSON.parse( window.localStorage.getItem('describecow') ) ); // => Object { ilk=""bovine"", more…} You can also come up with your own formatting solutions like CSV, or pipe | or tilde ~ separated formats, but JSON is very terse and has native browser support. Some use case examples The simplest use of localStorage is, of course, storing some data: the current state of a game; how far through a multi-form sign-up process a user is; and other things we traditionally stored in cookies. Using JSON, though, we can do cooler things. Speeding up web service use and avoiding exceeding the quota A lot of web services only allow you a certain amount of hits per hour or day, and can be very slow. By using localStorage with a time stamp, you can cache results of web services locally and only access them after a certain time to refresh the data. I used this technique in my An Event Apart 10K entry, World Info, to only load the massive dataset of all the world information once, and allow for much faster subsequent visits to the site. The following screencast shows the difference: For use with YQL (remember last year’s 24 ways entry?), I’ve built a small script called YQL localcache that wraps localStorage around the YQL data call. An example would be the following: yqlcache.get({ yql: 'select * from flickr.photos.search where text=""santa""', id: 'myphotos', cacheage: ( 60*60*1000 ), callback: function(data) { console.log(data); } }); This loads photos of Santa from Flickr and stores them for an hour in the key myphotos of localStorage. If you call the function at various times, you receive an object back with the YQL results in a data property and a type property which defines where the data came from – live is live data, cached means it comes from cache, and freshcache indicates that it was called for the first time and a new cache was primed. The cache will work for an hour (60×60×1,000 milliseconds) and then be refreshed. So, instead of hitting the YQL endpoint over and over again, you hit it once per hour. Caching a full interface Another use case I found was to retain the state of a whole interface of an application by caching the innerHTML once it has been rendered.
I use this in the Yahoo Firehose search interface, and you can get the full story about local storage and how it is used in this screencast: The stripped down code is incredibly simple (JavaScript with PHP embed): // test for localStorage support if(('localStorage' in window) && window['localStorage'] !== null){ var f = document.getElementById('mainform'); // test with PHP if the form was sent (the submit button has the name ""sent"") // get the HTML of the form and cache it in the property ""state"" localStorage.setItem('state',f.innerHTML); // if the form hasn't been sent… // check if a state property exists and write back the HTML cache if('state' in localStorage){ f.innerHTML = localStorage.getItem('state'); } } Other ideas In essence, you can use local storage every time you need to speed up access. For example, you could store image sprites in base-64 encoded datasets instead of loading them from a server. Or you could store CSS and JavaScript libraries on the client. Anything goes – have a play. Issues with local and session storage Of course, not all is rainbows and unicorns with the localStorage API. There are a few niggles that need ironing out. As with anything, this needs people to use the technology and raise issues. Here are some of the problems: Inadequate information about storage quota – if you try to add more content to an already full store, you get a QUOTA_EXCEEDED_ERR and that’s it. There’s a great explanation and test suite for localStorage quota available. Lack of automatically expiring storage – a feature that cookies came with. Pamela Fox has a solution (also available as a demo and source code) Lack of encrypted storage – right now, everything is stored in readable strings in the browser. Bigger, better, faster, more! As cool as the local and session storage APIs are, they are not quite ready for extensive adoption – the storage limits might get in your way, and if you really want to go to town with accessing, filtering and sorting data, real databases are what you’ll need. And, as we live in a world of client-side development, people are moving from heavy server-side databases like MySQL to NoSQL environments. On the web, there is also a lot of work going on, with Ian Hickson of Google proposing the Web SQL database, and Nikunj Mehta, Jonas Sicking (Mozilla), Eliot Graff (Microsoft) and Andrei Popescu (Google) taking the idea beyond simply replicating MySQL and instead offering Indexed DB as an even faster alternative. On the mobile front, a really important feature is to be able to store data to use when you are offline (mobile coverage and roaming data plans anybody?) and you can use the Offline Webapps API for that. As I mentioned at the beginning, we have a very exciting time ahead – let’s make this web work faster and more reliably by using what browsers offer us. For more on local storage, check out the chapter on Dive into HTML5.",2010,Christian Heilmann,chrisheilmann,2010-12-06T00:00:00+00:00,https://24ways.org/2010/html5-local-storage/,code 234,An Introduction to CSS 3-D Transforms,"Ladies and gentlemen, it is the second decade of the third millennium and we are still kicking around the same 2-D interface we got three decades ago. Sure, Apple debuted a few apps for OS X 10.7 that have a couple more 3-D flourishes, and Microsoft has had that Flip 3D for a while. But c’mon – 2011 is right around the corner. That’s Twenty Eleven, folks. Where is our 3-D virtual reality? By now, we should be zipping around the Metaverse on super-sonic motorbikes.
Granted, the capability of rendering complex 3-D environments has been present for years. On the web, there are already several solutions: Flash; three.js in <canvas>; and, eventually, WebGL. Finally, we meagre front-end developers have our own three-dimensional jewel: CSS 3-D transforms! Rationale Like a beautiful jewel, 3-D transforms can be dazzling, a true spectacle to behold. But before we start tacking 3-D diamonds and rubies to our compositions like Liberace‘s tailor, we owe it to our users to ask how they can benefit from this awesome feature. An entire application should not take advantage of 3-D transforms. CSS was built to style documents, not generate explorable environments. I fail to find a benefit to completing a web form that can be accessed by swivelling my viewport to the Sign-Up Room (although there have been proposals to make the web just that). Nevertheless, there are plenty of opportunities to use 3-D transforms in between interactions with the interface, via transitions. Take, for instance, the Weather App on the iPhone. The application uses two views: a details view; and an options view. Switching between these two views is done with a 3-D flip transition. This informs the user that the interface has two – and only two – views, as they can exist only on either side of the same plane. Flipping from details view to options view via a 3-D transition Also, consider slide shows. When you’re looking at the last slide, what cues tip you off that advancing will restart the cycle at the first slide? A better paradigm might be achieved with a 3-D transform, placing the slides side-by-side in a circle (carousel) in three-dimensional space; in that arrangement, the last slide obviously comes before the first. 3-D transforms are more than just eye candy. We can also use them to solve dilemmas and make our applications more intuitive. Current support The CSS 3D Transforms module has been out in the wild for over a year now. Currently, only Safari supports the specification – which includes Safari on Mac OS X and Mobile Safari on iOS. The support roadmap for other browsers varies. The Mozilla team has taken some initial steps towards implementing the module. Mike Taylor tells me that the Opera team is keeping a close eye on CSS transforms, and is waiting until the specification is fleshed out. And our best friend Internet Explorer still needs to catch up to 2-D transforms before we can talk about the 3-D variety. To make matters more perplexing, Safari’s WebKit cousin Chrome currently accepts 3-D transform declarations, but renders them in 2-D space. Chrome team member Paul Irish, says that 3-D transforms are on the horizon, perhaps in one of the next 8.0 releases. This all adds up to a bit of a challenge for those of us excited by 3-D transforms. I’ll give it to you straight: missing the dimension of depth can make degradation a bit ungraceful. Unless the transform is relatively simple and holds up in non-3D-supporting browsers, you’ll most likely have to design another solution. But what’s another hurdle in a steeplechase? We web folk have had our mettle tested for years. We’re prepared to devise multiple solutions. Here’s the part of the article where I mention Modernizr, and you brush over it because you’ve read this part of an article hundreds of times before. But seriously, it’s the best way to test for CSS 3-D transform support. Use it. Even with these difficulties mounting up, trying out 3-D transforms today is the right move. 
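If you’re curious what such a Modernizr-style test looks like under the hood, here’s a simplified sketch of property detection (an approximation, not Modernizr’s actual code – and note that Chrome will pass a plain property check even though it currently flattens the result to 2-D, which is exactly why Modernizr’s real test goes further): function supports3dTransforms() { var el = document.createElement('div'); /* vendor-prefixed perspective properties; if the style object knows any of them, 3-D transforms are likely available */ var props = ['perspectiveProperty', 'WebkitPerspective', 'MozPerspective', 'OPerspective', 'msPerspective']; for (var i = 0; i < props.length; i++) { if (el.style[props[i]] !== undefined) { return true; } } return false; } if (!supports3dTransforms()) { /* serve the 2-D fallback discussed above */ }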
The CSS 3-D transforms module was developed by the same team at Apple that produced the CSS 2D Transforms and Animation modules. Both specifications have since been adopted by Mozilla and Opera. Transforming in three dimensions now will guarantee you’ll be ahead of the game when the other browsers catch up. The choice is yours. You can make excuses and pooh-pooh 3-D transforms because they’re too hard and only snobby Apple fans will see them today. Or, with a tip of the fedora to Mr Andy Clarke, you can get hard-boiled and start designing with the best features out there right this instant. So, I bid you, in the words of the eternal Optimus Prime… Transform and roll out. Let’s get coding. Perspective To activate 3-D space, an element needs perspective. This can be applied in two ways: using the transform property, with the perspective as a functional notation: -webkit-transform: perspective(600); or using the perspective property: -webkit-perspective: 600; See example: Perspective 1. The red element on the left uses transform: perspective() functional notation; the blue element on the right uses the perspective property These two formats both trigger a 3-D space, but there is a difference. The first, functional notation, is convenient for directly applying a 3-D transform on a single element (in the previous example, I use it in conjunction with a rotateY transform). But when used on multiple elements, the transformed elements don’t line up as expected. If you use the same transform across elements with different positions, each element will have its own vanishing point. To remedy this, use the perspective property on a parent element, so each child shares the same 3-D space. See Example: Perspective 2. Each red box on the left has its own vanishing point within the parent container; the blue boxes on the right share the vanishing point of the parent container The value of perspective determines the intensity of the 3-D effect. Think of it as a distance from the viewer to the object. The greater the value, the further the distance, so the less intense the visual effect. perspective: 2000; yields a subtle 3-D effect, as if we were viewing an object from far away. perspective: 100; produces a tremendous 3-D effect, like a tiny insect viewing a massive object. By default, the vanishing point for a 3-D space is positioned at its centre. You can change the position of the vanishing point with the perspective-origin property. -webkit-perspective-origin: 25% 75%; See Example: Perspective 3. 3-D transform functions As a web designer, you’re probably well acquainted with working in two dimensions, X and Y, positioning items horizontally and vertically. With a 3-D space initialised with perspective, we can now transform elements in all three glorious spatial dimensions, including the third Z dimension, depth. 3-D transforms use the same transform property used for 2-D transforms. If you’re familiar with 2-D transforms, you’ll find the basic 3-D transform functions fairly similar. rotateX(angle) rotateY(angle) rotateZ(angle) translateZ(tz) scaleZ(sz) Whereas translateX() positions an element along the horizontal X-axis, translateZ() positions it along the Z-axis, which runs front to back in 3-D space. Positive values position the element closer to the viewer, negative values further away. The rotate functions rotate the element around the corresponding axis. This is somewhat counter-intuitive at first, as you might imagine that rotateX will spin an object left to right.
Instead, using rotateX(45deg) rotates an element around the horizontal X-axis, so the top of the element angles back and away, and the bottom gets closer to the viewer. See Example: Transforms 1. 3-D rotate() and translate() functions around each axis There are also several shorthand transform functions that require values for all three dimensions: translate3d(tx,ty,tz) scale3d(sx,sy,sz) rotate3d(rx,ry,rz,angle) Pro-tip: These foo3d() transform functions also have the benefit of triggering hardware acceleration in Safari. Dean Jackson, CSS 3-D transform spec author and main WebKit dude, writes (to Thomas Fuchs): In essence, any transform that has a 3D operation as one of its functions will trigger hardware compositing, even when the actual transform is 2D, or not doing anything at all (such as translate3d(0,0,0)). Note this is just current behaviour, and could change in the future (which is why we don’t document or encourage it). But it is very helpful in some situations and can significantly improve redraw performance. For the sake of simplicity, my demos will use the basic transform functions, but if you’re writing production-ready CSS for iOS or Safari-only, make sure to use the foo3d() functions to get the best rendering performance. Card flip We now have all the tools to start making 3-D objects. Let’s get started with something simple: flipping a card. Here’s the basic markup we’ll need: <section class=""container""> <div id=""card""> <figure class=""front"">1</figure> <figure class=""back"">2</figure> </div> </section> The .container will house the 3-D space. The #card acts as a wrapper for the 3-D object. Each face of the card has a separate element: .front; and .back. Even for such a simple object, I recommend using this same pattern for any 3-D transform. Keeping the 3-D space element and the object element(s) separate establishes a pattern that is simple to understand and easier to style. We’re ready for some 3-D stylin’. First, apply the necessary perspective to the parent 3-D space, along with any size or positioning styles. .container { width: 200px; height: 260px; position: relative; -webkit-perspective: 800; } Now the #card element can be transformed in its parent’s 3-D space. We’re combining absolute and relative positioning so the 3-D object is removed from the flow of the document. We’ll also add width: 100%; and height: 100%;. This ensures the object’s transform-origin will occur in the centre of .container. More on transform-origin later. Let’s add a CSS3 transition so users can see the transform take effect. #card { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; -webkit-transition: -webkit-transform 1s; } The .container’s perspective only applies to direct descendant children, in this case #card. In order for subsequent children to inherit a parent’s perspective, and live in the same 3-D space, the parent can pass along its perspective with transform-style: preserve-3d. Without 3-D transform-style, the faces of the card would be flattened with its parents and the back face’s rotation would be nullified. To position the faces in 3-D space, we’ll need to reset their positions in 2-D with position: absolute. In order to hide the reverse sides of the faces when they are faced away from the viewer, we use backface-visibility: hidden. #card figure { display: block; position: absolute; width: 100%; height: 100%; -webkit-backface-visibility: hidden; } To flip the .back face, we add a basic 3-D transform of rotateY(180deg). 
#card .front { background: red; } #card .back { background: blue; -webkit-transform: rotateY(180deg); } With the faces in place, the #card requires a corresponding style for when it is flipped. #card.flipped { -webkit-transform: rotateY(180deg); } Now we have a working 3-D object. To flip the card, we can toggle the flipped class. When .flipped, the #card will rotate 180 degrees, thus exposing the .back face. See Example: Card 1. Flipping a card in three dimensions Slide-flip Take another look at the Weather App 3-D transition. You’ll notice that it’s not quite the same effect as our previous demo. If you follow the right edge of the card, you’ll find that its corners stay within the container. Instead of pivoting from the horizontal centre, it pivots on that right edge. But the transition is not just a rotation – the edge moves horizontally from right to left. We can reproduce this transition just by modifying a couple of lines of CSS from our original card flip demo. The pivot point for the rotation occurs at the right side of the card. By default, the transform-origin of an element is at its horizontal and vertical centre (50% 50% or center center). Let’s change it to the right side: #card { -webkit-transform-origin: right center; } That flip now needs some horizontal movement with translateX. We’ll set the rotation to -180deg so it flips right side out. #card.flipped { -webkit-transform: translateX(-100%) rotateY(-180deg); } See Example: Card 2. Creating a slide-flip from the right edge of the card Cube Creating 3-D card objects is a good way to get started with 3-D transforms. But once you’ve mastered them, you’ll be hungry to push it further and create some true 3-D objects: prisms. We’ll start out by making a cube. The markup for the cube is similar to the card. This time, however, we need six child elements for all six faces of the cube: <section class=""container""> <div id=""cube""> <figure class=""front"">1</figure> <figure class=""back"">2</figure> <figure class=""right"">3</figure> <figure class=""left"">4</figure> <figure class=""top"">5</figure> <figure class=""bottom"">6</figure> </div> </section> Basic position and size styles set the six faces on top of one another in the container. .container { width: 200px; height: 200px; position: relative; -webkit-perspective: 1000; } #cube { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; } #cube figure { width: 196px; height: 196px; display: block; position: absolute; border: 2px solid black; } With the card, we only had to rotate its back face. The cube, however, requires that five of the six faces be rotated. Faces 1 and 2 will be the front and back. Faces 3 and 4 will be the sides. Faces 5 and 6 will be the top and bottom. #cube .front { -webkit-transform: rotateY(0deg); } #cube .back { -webkit-transform: rotateX(180deg); } #cube .right { -webkit-transform: rotateY(90deg); } #cube .left { -webkit-transform: rotateY(-90deg); } #cube .top { -webkit-transform: rotateX(90deg); } #cube .bottom { -webkit-transform: rotateX(-90deg); } We could remove the first #cube .front style declaration, as this transform has no effect, but let’s leave it in to keep our code consistent. Now each face is rotated, and only the front face is visible. The four side faces are all perpendicular to the viewer, so they appear invisible. To push them out to their appropriate sides, they need to be translated out from the centre of their positions. Each side of the cube is 200 pixels wide.
From the cube’s centre they’ll need to be translated out half that distance, 100px. #cube .front { -webkit-transform: rotateY(0deg) translateZ(100px); } #cube .back { -webkit-transform: rotateX(180deg) translateZ(100px); } #cube .right { -webkit-transform: rotateY(90deg) translateZ(100px); } #cube .left { -webkit-transform: rotateY(-90deg) translateZ(100px); } #cube .top { -webkit-transform: rotateX(90deg) translateZ(100px); } #cube .bottom { -webkit-transform: rotateX(-90deg) translateZ(100px); } Note here that the translateZ function comes after the rotate. The order of transform functions is important. Take a moment and soak this up. Each face is first rotated towards its position, then translated outward in a separate vector. We have a working cube, but we’re not done yet. Returning to the Z-axis origin For the sake of our users, our 3-D transforms should not distort the interface when the active panel is at its resting position. But once we start pushing elements off their Z-axis origin, distortion is inevitable. In order to keep 3-D transforms snappy, Safari composites the element, then applies the transform. Consequently, anti-aliasing on text will remain whatever it was before the transform was applied. When transformed forward in 3-D space, significant pixelation can occur. See Example: Transforms 2. Looking back at the Perspective 3 demo, note that no matter how small the perspective value is, or wherever the transform-origin may be, panel number 1 always returns to its original position, as if all those funky 3-D transforms didn’t even matter. To resolve the distortion and restore pixel perfection to our #cube, we can push the 3-D object back, so that the front face will be positioned back to the Z-axis origin. #cube { -webkit-transform: translateZ(-100px); } See Example: Cube 1. Restoring the front face to the original position on the Z-axis Rotating the cube To expose any face of the cube, we’ll need a style that rotates the cube into the corresponding position. The transform values are the opposite of those for the corresponding face. We toggle the necessary class on the #cube to apply the appropriate transform. #cube.show-front { -webkit-transform: translateZ(-100px) rotateY(0deg); } #cube.show-back { -webkit-transform: translateZ(-100px) rotateX(-180deg); } #cube.show-right { -webkit-transform: translateZ(-100px) rotateY(-90deg); } #cube.show-left { -webkit-transform: translateZ(-100px) rotateY(90deg); } #cube.show-top { -webkit-transform: translateZ(-100px) rotateX(-90deg); } #cube.show-bottom { -webkit-transform: translateZ(-100px) rotateX(90deg); } Notice how the order of the transform functions has reversed. First, we push the object back with translateZ, then we rotate it. Finishing up, we can add a transition to animate the rotation between states. #cube { -webkit-transition: -webkit-transform 1s; } See Example: Cube 2. Rotating the cube with a CSS transition Rectangular prism Cubes are easy enough to generate, as we only have to worry about one measurement. But how would we handle a non-regular rectangular prism? Let’s try to make one that’s 300 pixels wide, 200 pixels high, and 100 pixels deep. The markup remains the same as the #cube, but we’ll switch the cube id for #box. The container styles remain mostly the same: .container { width: 300px; height: 200px; position: relative; -webkit-perspective: 1000; } #box { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; } Now to position the faces. Each set of faces will need their own sizes.
The smaller faces (left, right, top and bottom) need to be positioned in the centre of the container, where they can be easily rotated and then shifted outward. The thinner left and right faces get positioned left: 100px ((300 − 100) ÷ 2); the stouter top and bottom faces get positioned top: 50px ((200 − 100) ÷ 2). #box figure { display: block; position: absolute; border: 2px solid black; } #box .front, #box .back { width: 296px; height: 196px; } #box .right, #box .left { width: 96px; height: 196px; left: 100px; } #box .top, #box .bottom { width: 296px; height: 96px; top: 50px; } The rotate values can all remain the same as the cube example, but for this rectangular prism, the translate values do differ. The front and back faces are each shifted out 50 pixels since the #box is 100 pixels deep. The translate value for the left and right faces is 150 pixels for their 300 pixel width. Top and bottom panels take 100 pixels for their 200 pixel height: #box .front { -webkit-transform: rotateY(0deg) translateZ(50px); } #box .back { -webkit-transform: rotateX(180deg) translateZ(50px); } #box .right { -webkit-transform: rotateY(90deg) translateZ(150px); } #box .left { -webkit-transform: rotateY(-90deg) translateZ(150px); } #box .top { -webkit-transform: rotateX(90deg) translateZ(100px); } #box .bottom { -webkit-transform: rotateX(-90deg) translateZ(100px); } See Example: Box 1. Just like the cube example, to expose a face, the #box needs to have a style to reverse that face’s transform. Both the translateZ and rotate values are the opposites of the corresponding face. #box.show-front { -webkit-transform: translateZ(-50px) rotateY(0deg); } #box.show-back { -webkit-transform: translateZ(-50px) rotateX(-180deg); } #box.show-right { -webkit-transform: translateZ(-150px) rotateY(-90deg); } #box.show-left { -webkit-transform: translateZ(-150px) rotateY(90deg); } #box.show-top { -webkit-transform: translateZ(-100px) rotateX(-90deg); } #box.show-bottom { -webkit-transform: translateZ(-100px) rotateX(90deg); } See Example: Box 2. Rotating the rectangular box with a CSS transition Carousel Front-end developers have a myriad of choices when it comes to content carousels. Now that we have 3-D capabilities in our browsers, why not take a shot at creating an actual 3-D carousel? The markup for this demo takes the same form as the box, cube and card. Let’s make it interesting and have a carousel with nine panels. <div class=""container""> <div id=""carousel""> <figure>1</figure> <figure>2</figure> <figure>3</figure> <figure>4</figure> <figure>5</figure> <figure>6</figure> <figure>7</figure> <figure>8</figure> <figure>9</figure> </div> </div> Now, apply basic layout styles. Let’s give each panel of the #carousel 20 pixel gaps between one another, done here with left: 10px; and top: 10px;. The effective width of each panel is 210 pixels. .container { width: 210px; height: 140px; position: relative; -webkit-perspective: 1000; } #carousel { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; } #carousel figure { display: block; position: absolute; width: 186px; height: 116px; left: 10px; top: 10px; border: 2px solid black; } Next up: rotating the faces. This #carousel has nine panels. If each panel gets an equal distribution on the carousel, each panel would be rotated forty degrees from its neighbour (360 ÷ 9).
#carousel figure:nth-child(1) { -webkit-transform: rotateY(0deg); } #carousel figure:nth-child(2) { -webkit-transform: rotateY(40deg); } #carousel figure:nth-child(3) { -webkit-transform: rotateY(80deg); } #carousel figure:nth-child(4) { -webkit-transform: rotateY(120deg); } #carousel figure:nth-child(5) { -webkit-transform: rotateY(160deg); } #carousel figure:nth-child(6) { -webkit-transform: rotateY(200deg); } #carousel figure:nth-child(7) { -webkit-transform: rotateY(240deg); } #carousel figure:nth-child(8) { -webkit-transform: rotateY(280deg); } #carousel figure:nth-child(9) { -webkit-transform: rotateY(320deg); } Now, the outward shift. Back when we were creating the cube and box, the translate value was simple to calculate, as it was equal to one half the width, height or depth of the object. With this carousel, there is no size we can automatically use as a reference. We’ll have to calculate the distance of the shift by other means. Drawing a diagram of the carousel, we can see that we know only two things: the width of each panel is 210 pixels; and each panel is rotated forty degrees from the next. If we split one of these segments down its centre, we get a right-angled triangle, perfect for some trigonometry. We can determine the length of r in this diagram with a basic tangent equation: r = (210 ÷ 2) ÷ tan(40° ÷ 2), which works out to 105 ÷ tan(20°), or roughly 288. There you have it: the panels need to be translated 288 pixels in 3-D space. #carousel figure:nth-child(1) { -webkit-transform: rotateY(0deg) translateZ(288px); } #carousel figure:nth-child(2) { -webkit-transform: rotateY(40deg) translateZ(288px); } #carousel figure:nth-child(3) { -webkit-transform: rotateY(80deg) translateZ(288px); } #carousel figure:nth-child(4) { -webkit-transform: rotateY(120deg) translateZ(288px); } #carousel figure:nth-child(5) { -webkit-transform: rotateY(160deg) translateZ(288px); } #carousel figure:nth-child(6) { -webkit-transform: rotateY(200deg) translateZ(288px); } #carousel figure:nth-child(7) { -webkit-transform: rotateY(240deg) translateZ(288px); } #carousel figure:nth-child(8) { -webkit-transform: rotateY(280deg) translateZ(288px); } #carousel figure:nth-child(9) { -webkit-transform: rotateY(320deg) translateZ(288px); } If we decide to change the width of the panel or the number of panels, we only need to plug those two variables into our equation to get the appropriate translateZ value. In JavaScript terms, that equation would be: var tz = Math.round( ( panelSize / 2 ) / Math.tan( ( ( Math.PI * 2 ) / numberOfPanels ) / 2 ) ); // or simplified to var tz = Math.round( ( panelSize / 2 ) / Math.tan( Math.PI / numberOfPanels ) ); Just like our previous 3-D objects, to show any one panel we need only apply the reverse transform on the carousel. Here’s the style to show the fifth panel: -webkit-transform: translateZ(-288px) rotateY(-160deg); See Example: Carousel 1. By now, you probably have two thoughts: Rewriting transform styles for each panel looks tedious. Why bother doing high school maths? Aren’t robots supposed to be doing all this work for us? And you’re absolutely right. The repetitive nature of 3-D objects lends itself to scripting (there’s a rough sketch of this just below). We can offload all the monotonous transform styles to our dynamic script, which, if done correctly, will be more flexible than the hard-coded version. See Example: Carousel 2. Conclusion 3-D transforms change the way we think about the blank canvas of web design. Better yet, they change the canvas itself, trading in the flat surface for voluminous depth.
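Before the closing thoughts, here’s that promised carousel-scripting sketch – a rough, jQuery-free version that generates the nine transforms from the same tangent equation (it assumes the #carousel markup from the demo; the variable names are mine): var carousel = document.getElementById('carousel'); var panels = carousel.getElementsByTagName('figure'); var panelSize = 210; /* effective width of each panel, gaps included */ var theta = 360 / panels.length; /* forty degrees for nine panels */ var tz = Math.round( ( panelSize / 2 ) / Math.tan( Math.PI / panels.length ) ); /* 288 pixels */ for (var i = 0; i < panels.length; i++) { panels[i].style.webkitTransform = 'rotateY(' + ( i * theta ) + 'deg) translateZ(' + tz + 'px)'; }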
My hope is that you took at least one peek at a demo and were intrigued. We web designers, who have rejoiced for border-radius, box-shadow and background gradients, now have an incredible tool at our disposal in 3-D transforms. They deserve just the same enthusiasm, research and experimentation we have seen on other CSS3 features. Now is the perfect time to take the plunge and start thinking about how to use three dimensions to elevate our craft. I’m breathless waiting for what’s to come. See you on the flip side.",2010,David DeSandro,daviddesandro,2010-12-14T00:00:00+00:00,https://24ways.org/2010/intro-to-css-3d-transforms/,code 235,"Real Animation Using JavaScript, CSS3, and HTML5 Video","When I was in school to be a 3-D animator, I read a book called Timing for Animation. Though only 152 pages long, it’s essentially the bible for anyone looking to be a great animator. In fact, Pixar chief creative officer John Lasseter used the first edition as a reference when he was an animator at Walt Disney Studios in the early 1980s. In the book, authors John Halas and Harold Whitaker advise: Timing is the part of animation which gives meaning to movement. Movement can easily be achieved by drawing the same thing in two different positions and inserting a number of other drawings between the two. The result on the screen will be movement; but it will not be animation. But that’s exactly what we’re doing with CSS3 and JavaScript: we’re moving elements, not animating them. We’re constantly specifying beginning and end states and allowing the technology to interpolate between the two. And yet, it’s the nuances within those middle frames that create the sense of life we’re looking for. As bandwidth increases and browser rendering grows more consistent, we can create interactions in different ways than we’ve been able to before. We’re encountering motion more and more on sites we’d generally label ‘static.’ However, this motion is mostly just movement, not animation. It’s the manipulation of an element’s properties, most commonly width, height, x- and y-coordinates, and opacity. So how do we create real animation? The metaphor In my experience, animation is most believable when it simulates, exaggerates, or defies the real world. A bowling ball falls differently than a racquetball. They each have different weights and sizes, which affect the way they land, bounce, and impact other objects. This is a major reason that JavaScript animation frequently feels mechanical; it doesn’t complete a metaphor. Expanding and collapsing a <div> feels very different than opening a door or unfolding a piece of paper, but it often shouldn’t. The interaction itself should tie directly to the art direction of a page. Physics Understanding the physics of a situation is key to creating convincing animation, even if your animation seeks to defy conventional physics. Isaac Newton’s first law of motion states, “Every body remains in a state of rest or uniform motion (constant velocity) unless it is acted upon by an external unbalanced force.” Once a force acts upon an object, the object’s shape can change accordingly, depending on the strength of the force and the mass of the object. Another nugget of wisdom from Halas and Whitaker: All objects in nature have their own weight, construction, and degree of flexibility, and therefore each behaves in its own individual way when a force acts upon it. This behavior, a combination of position and timing, is the basis of animation.
The basic question which an animator is continually asking himself is this: “What will happen to this object when a force acts upon it?” And the success of his animation largely depends on how well he answers this question. In animating with CSS3 and JavaScript, keep physics in mind. How ‘heavy’ is the element you’re interacting with? What kind of force created the action? A gentle nudge? A forceful shove? These subtleties will add a sense of realism to your animations and make them much more believable to your users. Misdirection Magicians often use misdirection to get their audience to focus on one thing rather than another. They fool us into thinking something happened that actually didn’t. Animation is the same, especially on a screen. By changing the arrangement of pixels on screen at a fast enough rate, your eyes fool your mind into thinking an object is actually in motion. Another important component of misdirection in animation is the use of multiple objects. Try to recall a cartoon where a character vanishes. More often than not, the character makes some sort of exaggerated motion (this is called anticipation) then disappears, and a puff of smoke follows. That smoke is an extra element, but it goes a long way toward making you believe that character actually disappeared. Very rarely does a vanishing character’s opacity simply go from one hundred per cent to zero. That’s not believable. So why do we do it with <div>s? Armed with the ammunition of metaphors and misdirection, let’s code an example. Shake, rattle, and roll (These demos require at least a basic understanding of jQuery and CSS3. Run away if you’re afraid, or brush up on CSS animation and resources for learning jQuery. Also, these demos use WebKit-specific features and are best viewed in the latest version of Safari, so performance in other browsers may vary.) We often see the design pattern of clicking a link to reveal content. Our first demo (/examples/2010/real-animation/demo1/) shows us exactly that. It uses jQuery’s slideDown() method (http://api.jquery.com/slideDown/), as many instances do. But what force acted on the <div> that caused it to open? Did pressing the button unlatch some imaginary hook? Did it activate an unlocking sequence with some gears? Take 2 Our second demo is more explicit about what happens: the button fell on the <div> and shook its content loose. Here’s how it’s done. function clickHandler(){ $('#button').addClass('animate'); return false; } Clicking the link adds a class of animate to our button. That class has the following CSS associated with it: <style> .animate { -webkit-animation-name: ANIMATE; -webkit-animation-duration: 0.25s; -webkit-animation-iteration-count: 1; -webkit-animation-timing-function: ease-in; } @-webkit-keyframes ANIMATE { from { top: 72px; } to { top: 112px; } } </style> In our keyframe definition, we’ve specified from and to states. This is great, because we can be explicit about how an object starts and finishes moving. What’s also extra handy is that these CSS keyframes broadcast events that you can react to with JavaScript. In this example, we’re listening to the webkitAnimationEnd event and opening the <div> only when the sequence is complete. Here’s that code.
function attachAnimationEventHandlers(){ var wrap = document.getElementById('wrap'); wrap.addEventListener('webkitAnimationEnd', function($e) { switch($e.animationName){ case ""ANIMATE"" : openMain(); break; default: } }, false); } function openMain(){ $('#main .inner').slideDown('slow'); } (For more info on handling animation events, check out the documentation at the Safari Reference Library.) Take 3 The problem with the previous demo is that the subtleties of timing aren’t evident. It still feels a bit choppy. For our third demo, we’ll use percentages instead of keywords so that we can insert as many points as we need to communicate more realistic timing. The percentages allow us to add the keys to well-timed animation: anticipation, hold, release, and reaction. <style> @-webkit-keyframes ANIMATE { 0% { top: 72px; } 40% { /* anticipation */ top: 57px; } 70% { /* hold */ top: 56px; } 80% { /* release */ top: 112px; } 100% { /* return */ top: 72px; } } </style> Take 4 The button animation is starting to feel much better, but the reaction of the <div> opening seems a bit slow. This fourth demo uses jQuery’s delay() method to time the opening precisely when we want it. Since we know the button’s animation is one second long and its reaction starts at eighty per cent of that, that puts our delay at 800ms (eighty per cent of one second). However, here’s a little pro tip: let’s start the opening at 750ms instead. The extra fifty milliseconds makes it feel more like the opening is a reaction to the exact hit of the button. Instead of listening for the webkitAnimationEnd event, we can start the opening as soon as the button is clicked, and the movement plays on the specified delay. function clickHandler(){ $('#button').addClass('animate'); openMain(); return false; } function openMain(){ $('#main .inner').delay(750).slideDown('slow'); } Take 5 We can tweak the timing of that previous animation forever, but that’s probably as close as we’re going to get to realistic animation with CSS and JavaScript. However, for some extra sauce, we could relegate the whole animation in our final demo to a video sequence which includes more nuances and extra elements for misdirection. Here’s the basis of video replacement. Add a <video> element to the page and adjust its opacity to zero. Once the button is clicked, fade the button out and start playing the video. Once the video is finished playing, fade it out and bring the button back. function clickHandler(){ if($('#main .inner').is(':hidden')){ $('#button').fadeTo(100, 0); $('#clickVideo').fadeTo(100, 1, function(){ var clickVideo = document.getElementById('clickVideo'); clickVideo.play(); setTimeout(removeVideo, 2400); openMain(); }); } return false; } function removeVideo(){ $('#button').fadeTo(500, 1); $('#clickVideo').fadeOut('slow'); } function openMain(){ $('#main .inner').delay(1100).slideDown('slow'); } Wrapping up I’m no JavaScript expert by any stretch. I’m sure a lot of you scripting wizards out there could write much cleaner and more efficient code, but I hope this gives you an idea of the theory behind more realistic motion with the technology we’re using most. 
This is just one model of creating more convincing animation, but you can create countless variations of this, including… Exporting <video> animations in 3-D animation tools or 2-D animation tools like Flash or After Effects Using <canvas> or SVG instead of <video> Employing specific JavaScript animation frameworks Making use of all the powerful properties of CSS Transforms and CSS Animation Trying out emerging CSS3 animation tools like Sencha Animator If it wasn’t already apparent, these demos show an exaggerated example and probably aren’t practical in a lot of environments. However, there are a handful of great sites out there that honor animation techniques—metaphor, physics, and misdirection, among others—like Benjamin De Cock’s vCard, 20 Things I Learned About Browsers and the Web by Fantasy Interactive, and the Nike Snowboarding site by Ian Coyle and HEGA. They’re wonderful testaments to what you can do to aid interaction for users. My goal was to show you the ‘why’ and the ‘how.’ Your charge is to discern the ‘where’ and the ‘when.’ Happy animating!",2010,Dan Mall,danmall,2010-12-15T00:00:00+00:00,https://24ways.org/2010/real-animation-using-javascript-css3-and-html5-video/,code 238,Everything You Wanted To Know About Gradients (And a Few Things You Didn’t),"Hello. I am here to discuss CSS3 gradients. Because, let’s face it, what the web really needed was more gradients. Still, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer’s vocabulary. There’s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work. Of course, that whole ‘proper application’ thing is the tricky bit. But given their place in our toolkit and their prominence online, it really is heartening to see we can create gradients directly with CSS. They’re part of the draft images module, and implemented in two of the major rendering engines. Still, I’ve always found CSS gradients to be one of the more confusing aspects of CSS3. So if you’ll indulge me, let’s take a quick look at how to create CSS gradients—hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser. Gradient theory 101 (I hope that’s not really a thing) Right. So before we dive into the code, let’s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here’s a straightforward one: I spent seconds hours designing this gradient. I hope you like it. At either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different: (Don’t ever really do this. Please. I beg you.) It’s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way. Now, color stops alone do not a gradient make. Between each is a transition point, the fail-over point between the two stops. Now, the transition point doesn’t need to fall exactly between stops: it can be brought closer to one stop or the other, influencing the overall shape of the gradient. 
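To make that concrete, here is a minimal sketch (using the W3C/Mozilla-style syntax covered below; the class names are just for illustration) of how positioning a color stop pulls the transition point around:

.even-blend { background-image: -moz-linear-gradient(#FFF, #000); }
/* Moving the white stop to 75% drags the transition point toward the end */
.late-blend { background-image: -moz-linear-gradient(#FFF 75%, #000); }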
A tale of two syntaxes Armed with our new vocabulary, let’s look at a CSS gradient in the wild. Behold, the simple input button: There’s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here’s the CSS that makes it happen: input[type=submit] { background-color: #F47A20; background-image: -moz-linear-gradient( #FAA51A, #F47A20 ); background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(0, #FAA51A), color-stop(1, #F47A20) ); } I’ve borrowed David DeSandro’s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that’s perfectly understandable—heck, it sort of turned mine. But let’s step through the CSS slowly, and see if we can’t make it a little less terrifying. Verbose WebKit is verbose Here’s the syntax for our little gradient on WebKit: background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(0, #FAA51A), color-stop(1, #F47A20) ); Woof. Quite a mouthful, no? Well, here’s what we’re looking at: WebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients. The next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). Linear gradients are simply drawn along the path between those two points, which allows us to change the direction of our gradient simply by altering its start and end points. Afterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop’s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself. For a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us: background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#FAA51A), to(#F47A20) ); from(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#F47A20) is the same as color-stop(1, #F47A20) or color-stop(100%, #F47A20)—in both cases, we’re simply declaring the first and last color stops in our gradient. Terse Gecko is terse WebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3. Naturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented. Here’s how we get gradient-y in Gecko: background-image: -moz-linear-gradient( #FAA51A, #F47A20 ); Wait, what? Done already? That’s right. By default, -moz-linear-gradient assumes you’re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that’s the case, then you simply need to specify your color stops, delimited with a few commas. I know: that was almost… painless. But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them. 
We can specify an origin point for our gradient: background-image: -moz-linear-gradient(50% 100%, #FAA51A, #F47A20 ); As well as an angle, to give it a direction: background-image: -moz-linear-gradient(50% 100%, 45deg, #FAA51A, #F47A20 ); And we can specify multiple stops, simply by adding to our comma-delimited list: background-image: -moz-linear-gradient(50% 100%, 45deg, #FAA51A, #FCC, #F47A20 ); By adding a percentage after a given color value, we can determine its position along the gradient path: background-image: -moz-linear-gradient(50% 100%, 45deg, #FAA51A, #FCC 20%, #F47A20 ); So that’s some of the flexibility implicit in the W3C/Mozilla-style syntax. Now, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit’s more verbose approach to the, well, looseness behind the -moz syntax. À chacun son gradient syntax. Still, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same. Reusing color stops for fun and profit But CSS gradients aren’t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects. Tim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it’s the feature image that really catches the eye. You see, there’s nothing that says you can’t reuse color stops. And Tim’s exploited that perfectly. He’s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he’s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo. But then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss. And his final color stop ends at the same fully transparent white, completing the effect. Hot? I do believe so. Rocking the radials We’ve been looking at linear gradients pretty exclusively. But I’d be remiss if I didn’t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar: And here’s the relevant CSS: background: -moz-radial-gradient(50% 100%, farthest-side, rgb(204, 255, 255) 1%, rgb(85, 85, 85) 15%, rgba(85, 85, 85, 0) ); background: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15, from(rgb(204, 255, 255)), to(rgba(85, 85, 85, 0)) ); Now, the syntax builds on what we’ve already learned about linear gradients, so much of it might be familiar to you, picking out color stops and transition points, as well as the two syntaxes’ reliance on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, …)) to shift into circular mode. 
Mozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There’s also a size constant defined (farthest-side), which determines the reach and shape of our gradient. WebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each also accepts a radius in pixels, allowing you to control the skew and breadth of the circle. Again, this is a fairly modest little radial gradient. Time and article length (and, let’s be honest, your author’s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can’t recommend the references at Mozilla and Apple strongly enough. Leave no browser behind But no matter the kind of gradients you’re working with, there is a large swathe of browsers that simply don’t support gradients. Thankfully, it’s fairly easy to declare a sensible fallback—it just depends on the kind of fallback you’d like. Essentially, gradient-blind browsers will disregard any properties containing references to either -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties. For example: if you’d like to fall back to a flat color, simply declare a separate background-color: .nav { background-color: #000; background-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45)); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45))); } Or perhaps just create three separate background properties. .nav { background: #000; background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45)); background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45))); } We can even build on this to fall back to a non-gradient image: .nav { background: #000 url(""faux-gradient-lol.png"") repeat-x; background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45)); background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45))); } No matter the approach you feel most appropriate to your design, it’s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings. (If you’re feeling especially masochistic, there’s even a way to get simple linear gradients working in IE via Microsoft’s proprietary filters. Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I’d recommend avoiding those. And don’t tell Andy Clarke I told you, or he’ll probably unload his Derringer at me. Or something.) Go forth and, um, gradientify! It’s entirely possible your head’s spinning. Heck, mine is, but that might be the effects of the ’nog. But maybe you’re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine. Well, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they’re easily modifiable by tweaking a few CSS properties. Because, let’s face it, less time spent yelling at Photoshop is a very, very good thing. Of course, CSS-generated gradients are not without their drawbacks. 
The syntax can be confusing, and it’s still under development at the W3C. As we’ve seen, browser support is still very much in flux. And it’s possible that gradients themselves have some real performance drawbacks—so test thoroughly, and gradient carefully. But still, as syntaxes converge, and support improves, I think generated gradients can make a compelling tool in our collective belts. The tasteful design is, of course, entirely up to you. So have fun, and get gradientin’.",2010,Ethan Marcotte,ethanmarcotte,2010-12-22T00:00:00+00:00,https://24ways.org/2010/everything-you-wanted-to-know-about-gradients/,code 239,Using the WebFont Loader to Make Browsers Behave the Same,"Web fonts give us designers a whole new typographic palette with which to work. However, browsers handle the loading of web fonts in different ways, and this can lead to inconsistent user experiences. Safari, Chrome and Internet Explorer leave a blank space in place of the styled text while the web font is loading. Opera and Firefox show text with the default font which switches over when the web font has loaded, resulting in the so-called Flash of Unstyled Text (aka FOUT). Some people prefer Safari’s approach as it eliminates FOUT, others think the Firefox way is more appropriate as content can be read whilst fonts download. Whatever your preference, the WebFont Loader can make all browsers behave the same way. The WebFont Loader is a JavaScript library that gives you extra control over font loading. It was co-developed by Google and Typekit, and released as open source. The WebFont Loader works with most web font services as well as with self-hosted fonts. The WebFont Loader tells you when the following events happen as a browser downloads web fonts (or loads them from cache): when fonts start to download (‘loading’) when fonts finish loading (‘active’) if fonts fail to load (‘inactive’) If your web page requires more than one font, the WebFont Loader will trigger events for individual fonts, and for all the fonts as a whole. This means you can find out when any single font has loaded, and when all the fonts have loaded (or failed to do so). The WebFont Loader notifies you of these events in two ways: by applying special CSS classes when each event happens; and by firing JavaScript events. For our purposes, we’ll be using just the CSS classes. Implementing the WebFont Loader As stated above, the WebFont Loader works with most web font services as well as with self-hosted fonts. Self-hosted fonts To use the WebFont Loader when you are hosting the font files on your own server, paste the following code into your web page: <script type=""text/javascript""> WebFontConfig = { custom: { families: ['Font Family Name', 'Another Font Family'], urls: [ 'http://yourwebsite.com/styles.css' ] } }; (function() { var wf = document.createElement('script'); wf.src = ('https:' == document.location.protocol ? 'https' : 'http') + '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js'; wf.type = 'text/javascript'; wf.async = 'true'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(wf, s); })(); </script> Replace Font Family Name and Another Font Family with a comma-separated list of the font families you want to check against, and replace http://yourwebsite.com/styles.css with the URL of the style sheet where your @font-face rules reside. 
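For reference, that style sheet would contain your @font-face rules, something along these lines (the family name and file paths here are hypothetical):

@font-face {
  font-family: 'Font Family Name';
  src: url('/fonts/font-family-name.woff') format('woff'),
       url('/fonts/font-family-name.ttf') format('truetype');
  font-weight: normal;
  font-style: normal;
}

The family names you list in WebFontConfig need to match the names declared in these rules, otherwise the loader has nothing to watch for.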
Fontdeck Assuming you have added some fonts to a website project in Fontdeck, use the afore-mentioned code for self-hosted solutions and replace http://yourwebsite.com/styles.css with the URL of the <link> tag in your Fontdeck website settings page. It will look something like http://f.fontdeck.com/s/css/xxxx/domain/nnnn.css. Typekit Typekit’s JavaScript-based implementation incorporates the WebFont Loader events by default, so you won’t need to include any WebFont Loader code. Making all browsers behave like Safari To make Firefox and Opera work in the same way as WebKit browsers (Safari, Chrome, etc.) and Internet Explorer, and thus minimise FOUT, you need to hide the text while the fonts are loading. While fonts are loading, the WebFont Loader adds a class of wf-loading to the <html> element. Once the fonts have loaded, the wf-loading class is removed and replaced with a class of wf-active (or wf-inactive if all of the fonts failed to load). This means you can style elements on the page while the fonts are loading and then style them differently when the fonts have finished loading. So, let’s say the text you need to hide while fonts are loading is contained in all paragraphs and top-level headings. By writing the following style rule into your CSS, you can hide the text while the fonts are loading: .wf-loading h1, .wf-loading p { visibility:hidden; } Because the wf-loading class is removed once the fonts have loaded, the visibility:hidden rule will stop being applied, and the text revealed. You can see this in action on this simple example page. That works nicely across the board, but the situation is slightly more complicated. WebKit doesn’t wait for all fonts to load before displaying text: it displays text elements as soon as the relevant font is loaded. To emulate WebKit more accurately, we need to know when individual fonts have loaded, and apply styles accordingly. Fortunately, as mentioned earlier, the WebFont Loader has events for individual fonts too. When a specific font is loading, a class of the form wf-fontfamilyname-n4-loading is applied. Assuming headings and paragraphs are styled in different fonts, we can make our CSS more specific as follows: .wf-fontfamilyname-n4-loading h1, .wf-anotherfontfamily-n4-loading p { visibility:hidden; } Note that the font family name is transformed to lower case, with all spaces removed. The n4 is a shorthand for the weight and style of the font family. In most circumstances you’ll use n4 but refer to the WebFont Loader documentation for exceptions. You can see it in action on this Safari example page (you’ll probably need to disable your cache to see any change occur). Making all browsers behave like Firefox To make WebKit browsers and Internet Explorer work like Firefox and Opera, you need to explicitly show text while the fonts are loading. In order to make this happen, you need to specify a font family which is not a web font while the fonts load, like this: .wf-fontfamilyname-n4-loading h1 { font-family: 'arial narrow', sans-serif; } .wf-anotherfontfamily-n4-loading p { font-family: arial, sans-serif; } You can see this in action on the Firefox example page (again you’ll probably need to disable your cache to see any change occur). And there’s more That’s just the start of what can be done with the WebFont Loader. More areas to explore would be tweaking font sizes to reduce the impact of reflowing text and to better cater for very narrow fonts. 
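As a hypothetical sketch (the exact numbers would need tuning for each font pairing), you could compensate for a narrow fall-back font while it is on show:

.wf-fontfamilyname-n4-loading h1 {
  font-family: 'arial narrow', sans-serif;
  font-size: 105%;
  letter-spacing: 0.02em;
}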
By using the JavaScript events much more can be achieved too, such as fading in text as the fonts load.",2010,Richard Rutter,richardrutter,2010-12-02T00:00:00+00:00,https://24ways.org/2010/using-the-webfont-loader-to-make-browsers-behave-the-same/,code 240,My CSS Wish List,"I love Christmas. I love walking around the streets of London, looking at the beautifully decorated windows, seeing the shiny lights that hang above Oxford Street and listening to Christmas songs. I’m not going to lie though. Not only do I like buying presents, I love receiving them too. I remember making long lists that I would send to Father Christmas with all of the Lego sets I wanted to get. I knew I could only get one a year, but I would spend days writing the perfect list. The years have gone by, but I still enjoy making wish lists. And I’ll tell you a little secret: my mum still asks me to send her my Christmas list every year. This time I’ve made my CSS wish list. As before, I’d be happy with just one present. Before I begin… … this list includes: things that don’t exist in the CSS specification (if they do, please let me know in the comments – I may have missed them); others that are in the spec, but it’s incomplete or lacks use cases and examples (which usually means that properties haven’t been implemented by even the most recent browsers). Like with any other wish list, the further down I go, the more unrealistic my expectations – but that doesn’t mean I can’t wish. Some of the things we wouldn’t have thought possible a few years ago have been implemented and our wishes fulfilled (think multiple backgrounds, gradients and transformations, for example). The list Cross-browser implementation of font-size-adjust When one of the fall-back fonts from your font stack is used, rather than the preferred (first) one, you can retain the aspect ratio by using this very useful property. It is incredibly helpful when the fall-back fonts are smaller or larger than the initial one, which can make layouts look less polished. What font-size-adjust does is scale the font-size of the fall-back fonts according to the value you supply, which is the aspect ratio (x-height divided by font-size) of your preferred font. This preserves the x-height of the preferred font in the fall-back fonts. Here’s a simple example: p { font-family: Calibri, ""Lucida Sans"", Verdana, sans-serif; font-size-adjust: 0.47; } In this case, if the user doesn’t have Calibri installed, both Lucida Sans and Verdana will keep Calibri’s aspect ratio, based on the font’s x-height. This property is a personal favourite and one I keep pointing to. Firefox supported this property from version three. So far, it’s the only browser that does. Fontdeck provides the font-size-adjust value along with its fonts, and has a handy tool for calculating it. More control over overflowing text The text-overflow property lets you control text that overflows its container. The most common use for it is to show an ellipsis to indicate that there is more text than what is shown. To be able to use it, the container should have overflow set to something other than visible, and white-space: nowrap: div { white-space: nowrap; width: 100%; overflow: hidden; text-overflow: ellipsis; } This, however, only works for blocks of text on a single line. In the wish list of many CSS authors (and in mine) is a way of defining text-overflow: ellipsis on a block of multiple text lines. Opera has taken the first step and added support for the -o-ellipsis-lastline property, which can be used instead of ellipsis. 
This property is not part of the CSS3 spec, but we could certainly make good use of it if it were… WebKit has -webkit-line-clamp to specify how many lines to show before cutting with an ellipsis, but support is patchy at best and there is no control over where the ellipsis shows in the text. Many people have spent time wrangling JavaScript to do this for us, but the methods used are very processor intensive, and introduce a JavaScript dependency. Indentation and hanging punctuation properties You might notice a trend here: almost half of the items in this list relate to typography. The lack of fine-grained control over typographical detail is a general concern among designers and CSS authors. Indentation and hanging punctuation fall into this category. The CSS3 specification introduces two new possible values for the text-indent property: each-line; and hanging. each-line would indent the first line of the block container and each line after a forced line break; hanging would invert which lines are affected by the indentation. The proposed hanging-punctuation property would allow us to specify whether opening and closing brackets and quotes should hang outside the edge of the first and last lines. The specification is still incomplete, though, and asks for more examples and use cases. Text alignment and hyphenation properties Following the typographic trend of this list, I’d like to add better control over text alignment and hyphenation properties. The CSS3 module on Generated Content for Paged Media already specifies five new hyphenation-related properties (namely: hyphenate-dictionary; hyphenate-before and hyphenate-after; hyphenate-lines; and hyphenate-character), but it is still being developed and lacks examples. In the text alignment realm, the new text-align-last property allows you to define how the last line of a block (or a line just before a forced break) is aligned, if your text is set to justify. Its value can be: start; end; left; right; center; and justify. The text-justify property should also allow you to have more control over text set to text-align: justify but, for now, only Internet Explorer supports this. calc() This is probably my favourite item in the list: the calc() function. This function is part of the CSS3 Values and Units module, but it has only been implemented by Firefox (4.0). To take advantage of it now you need to use the Mozilla vendor prefix, -moz-calc(). Imagine you have a fluid two-column layout where the sidebar column has a fixed width of 240 pixels, and the main content area fills the rest of the width available. This is how you could create that using -moz-calc(): #main { width: -moz-calc(100% - 240px); } Can you imagine how many hacks and headaches we could avoid were this function available in more browsers? Transitions and animations are really nice and lovely but, for me, it’s the ability to do the things that calc() allows you to that deserves the spotlight and to be pushed for implementation. Selector grouping with -moz-any() The -moz-any() selector grouping has been introduced by Mozilla but it’s not part of any CSS specification (yet?); it’s currently only available on Firefox 4. This would be especially useful with the way HTML5 outlines documents, where we can have any number of variations of several levels of headings within numerous types of containers (think sections within articles within sections…). Here is a quick example (copied from the Mozilla blog post on the subject) of how -moz-any() works. 
Instead of writing: section section h1, section article h1, section aside h1, section nav h1, article section h1, article article h1, article aside h1, article nav h1, aside section h1, aside article h1, aside aside h1, aside nav h1, nav section h1, nav article h1, nav aside h1, nav nav h1 { font-size: 24px; } You could simply write: -moz-any(section, article, aside, nav) -moz-any(section, article, aside, nav) h1 { font-size: 24px; } Nice, huh? More control over styling form elements Some are of the opinion that form elements shouldn’t be styled at all, since a user might not recognise them as such if they don’t match the operating system’s controls. I partially agree: I’d rather put the choice in the hands of designers and expect them to be capable of deciding whether their particular design hampers or improves usability. I would say the same idea applies to font-face: while some fear designers might go crazy and litter their web pages with dozens of different fonts, most welcome the freedom to use something other than Arial or Verdana. There will always be someone who will take this freedom too far, but it would be useful if we could, for example, style the default Opera date picker: <input type=""date"" /> or Safari’s slider control (think star movie ratings, for example): <input type=""range"" min=""0"" max=""5"" step=""1"" value=""3"" /> Parent selector I don’t think there is one CSS author out there who has never come across a case where he or she wished there was a parent selector. There have been many suggestions as to how this could work, but a variation of the child selector is usually the most popular: article < h1 { … } One can dream… Flexible box layout The Flexible Box Layout Module sounds a bit like magic: it introduces a new box model to CSS, allowing you to distribute and order boxes inside other boxes, and determine how the available space is shared. Two of my favourite features of this new box model are: the ability to redistribute boxes in a different order from the markup the ability to create flexible layouts, where boxes shrink (or expand) to fill the available space Let’s take a quick look at the second case. Imagine you have a three-column layout, where the first column takes up twice as much horizontal space as the other two: <body> <section id=""main""> </section> <section id=""links""> </section> <aside> </aside> </body> With the flexible box model, you could specify it like this: body { display: box; box-orient: horizontal; } #main { box-flex: 2; } #links { box-flex: 1; } aside { box-flex: 1; } If you decide to add a fourth column to this layout, there is no need to recalculate units or percentages, it’s as easy as that. Browser support for this property is still in its early stages (Firefox and WebKit need their vendor prefixes), but we should start to see it being gradually introduced as more attention is drawn to it (I’m looking at you…). You can read a more comprehensive write-up about this property on the Mozilla developer blog. It’s easy to understand why it’s harder to start playing with this module than with things like animations or other more decorative properties, which don’t really break your layouts when users don’t see them. But it’s important that we do, even if only in very experimental projects. 
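If you would like to experiment today, a hedged sketch of that three-column example with the current vendor prefixes might look like this (the syntax may well change as the module matures):

body {
  display: -moz-box;
  display: -webkit-box;
  -moz-box-orient: horizontal;
  -webkit-box-orient: horizontal;
}
#main { -moz-box-flex: 2; -webkit-box-flex: 2; }
#links { -moz-box-flex: 1; -webkit-box-flex: 1; }
aside { -moz-box-flex: 1; -webkit-box-flex: 1; }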
Nested selectors Anyone who has never wished they could do something like the following in CSS, cast the first stone: article { h1 { font-size: 1.2em; } ul { margin-bottom: 1.2em; } } Even though it can easily turn into a specificity nightmare and promote redundancy in your style sheets (if you abuse it), it’s easy to see how nested selectors could be useful. CSS compilers such as Less or Sass let you do this already, but not everyone wants or can use these compilers in their projects. Every wish list has an item that could easily be dropped. In my case, I would say this is one that I would ditch first – it’s the least useful, and also the one that could cause more maintenance problems. But it could be nice. Implementation of the ::marker pseudo-element The CSS Lists module introduces the ::marker pseudo-element, which allows you to create custom list item markers. When an element’s display property is set to list-item, this pseudo-element is created. Using the ::marker pseudo-element you could create something like the following: Footnote 1: Both John Locke and his father, Anthony Cooper, are named after 17th- and 18th-century English philosophers; the real Anthony Cooper was educated as a boy by the real John Locke. Footnote 2: Parts of the plane were used as percussion instruments and can be heard in the soundtrack. where the footnote marker is generated by the following CSS: li::marker { content: ""Footnote "" counter(notes) "":""; text-align: left; width: 12em; } li { counter-increment: notes; } You can read more about how to use counters in CSS in my article from last year. Bear in mind that the CSS Lists module is still a Working Draft and is listed as “Low priority”. I did say this wish list would start to grow more unrealistic closer to the end… Variables The sight of the word ‘variables’ may make some web designers shy away, but when you think of them applied to things such as repeated colours in your stylesheets, it’s easy to see how having variables available in CSS could be useful. Think of a website where the main brand colour is applied to elements like the main text, headings, section backgrounds, borders, and so on. In a particularly large website, where the colour is repeated countless times in the CSS and where it’s important to keep the colour consistent, using variables would be ideal (some big websites are already doing this by using server-side technology). 
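To give a flavour of what that could look like, here is a minimal Less sketch, with a made-up brand colour declared once and reused throughout:

@brandColor: #b5121b;
h1 { color: @brandColor; }
blockquote { border-left: 4px solid @brandColor; }
footer { background-color: @brandColor; }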
If you are using Less, you could, for instance, set the font-family value in one variable, and simply call that variable later in the code, instead of repeating the complete font stack, like so: @fontFamily: Calibri, ""Lucida Grande"", ""Lucida Sans Unicode"", Helvetica, Arial, sans-serif; body { font-family: @fontFamily; } Other features of these CSS compilers might also be useful, like the ability to ‘call’ a property value from another selector (accessors): header { background: #000000; } footer { background: header['background']; } or the ability to define functions (with arguments), saving you from writing large blocks of code when you need to write something like, for example, a CSS gradient: .gradient (@start:"""", @end:"""") { background: -webkit-gradient(linear, left top, left bottom, from(@start), to(@end)); background: -moz-linear-gradient(-90deg,@start,@end); } button { .gradient(#D0D0D0,#9F9F9F); } Standardised comments Each CSS author has his or her own style for commenting their style sheets. While this isn’t a massive problem on smaller projects, where maybe only one person will edit the CSS, in larger scale projects, where dozens of hands touch the code, it would be nice to start seeing a more standardised way of commenting. One attempt at creating a standard for CSS comments is CSSDOC, an adaptation of Javadoc (a documentation generator that extracts comments from Java source code into HTML). CSSDOC uses ‘DocBlocks’, a term borrowed from the phpDocumentor Project. A DocBlock is a human- and machine-readable block of data which has the following structure: /** * Short description * * Long description (this can have multiple lines and contain <p> tags) * * @tags (optional) */ CSSDOC includes a standard for documenting bug fixes and hacks, colours, versioning and copyright information, amongst other important bits of data. I know this isn’t a CSS feature request per se; rather, it’s just me pointing you at something that is usually overlooked but that could contribute towards keeping style sheets easier to maintain and to hand over to new developers. Final notes I understand that if even some of these were implemented in browsers now, it would be a long time until all vendors were up to speed. But if we don’t talk about them and experiment with what’s available, then it will definitely never happen. Why haven’t I mentioned better browser support for existing CSS3 properties? Because that would be the same as adding chocolate to your Christmas wish list – you don’t need to ask, everyone knows you want it. The list could go on. There are dozens of other things I would love to see integrated in CSS or further developed. These are my personal favourites: some might be less useful than others, but I’ve wished for all of them at some point. Part of the research I did while writing this article was asking some friends what they would add to their lists; other than a couple of items I already had in mine, everything else was different. I’m sure your list would be different too. So tell me, what’s on your CSS wish list?",2010,Inayaili de León Persson,inayailideleon,2010-12-03T00:00:00+00:00,https://24ways.org/2010/my-css-wish-list/,code 241,Jank-Free Image Loads,"There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) 
of times a day, a hapless Web surfer will start reading some text on a page, and then — [video: an image pops in and shoves the text down the page] — oops! — an image pops in above it, pushing said text down the page, and our poor reader loses their place. By default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I’ll chart a path into a jank-free future – one in which it’s easy and natural to author <img> elements that load like this: [video: the image loads into space already reserved for it, and the text never moves]. Jank is a very old problem, and there is a very old solution to it: the width and height attributes on <img>. The idea is: if we stick an image’s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives. width Specifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network. —The HTML 3.2 Specification, published on January 14 1997 Unfortunately for us, when width and height were first spec’d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird: See the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen. width and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths. I have good news, bad news, and great news. The good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them. The bad news is that these techniques are all terrible, cumbersome hacks. They’re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways. So, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform. aspect-ratio in CSS The first proposed feature? An aspect-ratio property in CSS! This would allow us to write CSS like this: img { width: 100%; } .thumb { aspect-ratio: 1/1; } .hero { aspect-ratio: 16/9; } This’ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above. Alas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It’s images – possibly user generated images – that can have any aspect ratio. The really tricky problem is unknown-when-you’re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles: <img src=""image.jpg"" style=""aspect-ratio: 5/4"" /> And inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style="""" attribute. And you know what? The old man has a point! 
By sticking super-high-specificity inline styles in my content, I’m cutting off my, (or anyone else’s) ability to change those aspect ratios, for whatever reason, later. How might we specify aspect ratios at a lower level? How might we give browsers information about an image’s dimensions, without giving them explicit instructions about how to style it? I’ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio! A brief note on intrinsic and extrinsic sizing What do I mean by “intrinsic” and “extrinsic?” The intrinsic size of an image is, put simply, how big it’d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800×600 image has an intrinsic width of 800px. The extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800×600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn’t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px. It surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity). CSS aspect-ratio lets us avoid setting extrinsic heights and widths – and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it. The last tool I’m going to talk about gets us out of the extrinsic sizing game all together — which, I think, is only appropriate for a feature that we’re going to be using in HTML. intrinsicsize in HTML The proposed intrinsicsize attribute will let you do this: <img src=""image.jpg"" intrinsicsize=""800x600"" /> That tells the browser, “hey, this image.jpg that I’m using here – I know you haven’t loaded it yet but I’m just going to let you know right away that it’s going to have an intrinsic size of 800×600.” This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image’s intrinsic size. You may ask (I did!): wait, what if my <img> references multiple resources, which all have different intrinsic sizes? Well, if you’re using srcset, intrinsicsize is a bit of a misnomer – what the attribute will do then, is specify an intrinsic aspect ratio: <img srcset=""300x200.jpg 300w, 600x400.jpg 600w, 900x600.jpg 900w, 1200x800.jpg 1200w"" sizes=""75vw"" intrinsicsize=""3x2"" /> In the future (and behind the “Experimental Web Platform Features” flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 – it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2. Can’t wait I seem to have gotten myself into the weeds a bit. Sizing on the web is complicated! Don’t let all of these details bury the big takeaway here: sometime soon (🤞 2019‽ 🤞), we’ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. 
I can’t wait!",2018,Eric Portis,ericportis,2018-12-21T00:00:00+00:00,https://24ways.org/2018/jank-free-image-loads/,code 242,Creating My First Chrome Extension,"Writing a Chrome Extension isn’t as scary as it seems! Not too long ago, I used a Chrome extension called 20 Cubed. I’m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops. Unfortunately, the developer stopped updating the extension and removed it from Chrome’s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same? Fortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the “old-fashioned” way. Sky’s the limit! Setup But first, some things you’ll need to know about before getting started: Callbacks Timeouts Chrome Dev Tools Developing with Chrome extension methods requires a lot of callbacks. If you’ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I’d highly recommend brushing up on that subject before getting started. Timeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn’t consider the fact that I’d be juggling three timers. And I probably would’ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit. On the note of organization, abstraction is important! You might have any combination of the following: The Chrome extension options page The popup from the Chrome Menu The windows or tabs you create The background scripts And that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you’ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs. Alright, now that you know all that up front, let’s get going! Documentation TL;DR READ THE DOCS. A few things to get started: Read Google’s primer on browser extensions Have a look at their Getting started tutorial Check out their overview on Chrome Extensions This overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension. The manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here’s what my manifest.json looked like, for example: https://github.com/jennz0r/eye-rest/blob/master/manifest.json Because I’m a visual learner, I found the images that describe the extension’s architecture most helpful. To clarify this diagram, the background.js file is the extension’s event handler. It’s constantly listening for browser events, which you’ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle. 
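As a hypothetical sketch (not the actual Eye Rest code), a bare-bones background script that registers an alarm and reacts when it fires might look like this:

// Create a recurring alarm once, when the extension is installed
chrome.runtime.onInstalled.addListener(function() {
  chrome.alarms.create('eyeRestAlarm', { periodInMinutes: 20 });
});

// React each time the alarm fires
chrome.alarms.onAlarm.addListener(function(alarm) {
  if (alarm.name === 'eyeRestAlarm') {
    console.log('Time to rest your eyes!');
  }
});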
The Popup is the little window that appears when you click on an extension’s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { ""default_popup"": FILE_NAME_HERE }. The Options page is exactly as it says. This displays customizable options only visible to the user when they either right-click on the Chrome menu and choose “Options” under an extension. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE. Content scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension. Debugging A quick note: don’t forget the debugging tutorial! Just like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you’re likely to have several inspector windows open – one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with. For example, I kept seeing the error “This request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.” Well, it turns out there are limitations on how often you can sync stored information. Another error I kept seeing was “Alarm delay is less than minimum of 1 minutes. In released .crx, alarm “ALARM_NAME_HERE” will fire in approximately 1 minutes”. Well, it turns out there are minimum interval times for alarms. Chrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help! Me adding a ton of `console.log`s while trying to debug my alarms. Eye Rest Functionality Ok, so what is the extension I created? Again, it’s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following: If the extension is running AND If the user has not clicked Pause in the Popup HTML AND If the counter in the Popup HTML is down to 00:00 THEN Open a new window with Timer HTML AND Start a 20 sec countdown in Timer HTML AND Reset the Popup HTML counter to 20:00 If the Timer HTML is down to 0 sec THEN Close that window. Rinse. Repeat. Sounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I decided to create, I decided to make one that’s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn’t realize until I started coding. For visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it’s a pun.) API Now let’s discuss the APIs that I used to build this extension. Alarms What even are alarms? I didn’t know either. Alarms are basically Chrome’s setTimeout and setInterval. They exist because, as Google says… DOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant. For more information, check out this background migration doc. One interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn’t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. 
A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and to clear any other alarms without that unique name. Background Scripts For Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file. I wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json. I also wanted my Popup script to do the same things when the user has unpaused the extension’s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file. Other APIs Windows I use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script. One day, while coding late into the evening, I found it very confusing that the windows.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window’s HTML. Until then, it hadn’t dawned on me that the url could be relative. Duh. I was tired! I pass the timer.html as the url option, as well as type, size, position, and other visual options. Storage Maybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome’s storage is avoiding quotas and write operation maximums. I wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it’s quite complicated and requires lots of writes. That’s why I went with the user’s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused. Declarative Content The Declarative Content API allows you to show your extension’s page action based on several types of matches, without needing to take a host permission or inject a content script. So you’ll need this to get your extension to work in the browser! You can see me set this in my Background script. Because I want my extension’s popup to appear on every page one is browsing, I leave the page matchers empty. There are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available! The Extension Here’s what my original Popup looked like before I added styles. And here’s what it looks like with new styles. I guess I’m going for a Nickelodeon feel. And here’s the Timer window and Popup together! Publishing Publishing is a cinch. You just zip up your files, create a new or use an existing Google Developer account, upload the files, add some details, and pay a one-time $5 fee. That’s all! Then your extension will be available on the Chrome extension store! Neato :D My extension is now available for you to install. Conclusion I thought creating a time-based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! 
But it’s definitely achievable with some time, persistence, and good ole Google searches. Eventually, I’d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup! Either way, with Eye Rest’s framework built out, I’m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.",2018,Jennifer Wong,jenniferwong,2018-12-05T00:00:00+00:00,https://24ways.org/2018/my-first-chrome-extension/,code 243,Researching a Property in the CSS Specifications,"I frequently joke that I’m “reading the specs so you don’t have to”, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However waiting for someone like me to write an article about something is a pretty slow way to get the information you need. Sometimes people like me get things wrong, or specifications change after we write a tutorial. What if you could just look it up yourself? That’s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs. Where are the CSS Specifications? The easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4. Who are the specifications for? CSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected. Which version of a spec should I look at? There are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (which is always the newest version) or at a date, which will be the date of that publication. If I’m referring to a particular Working Draft in an article I’ll typically link to the dated version. That way if the information changes it is possible for someone to see where I got the information from at the time of writing. If you want the very latest additions and changes to the spec, then the Editor’s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor’s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor’s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed. 
If you are especially keen on seeing updates to specifications, keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated. How to approach a spec The first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. Therefore, if you are looking at a vast spec, know that you probably won’t need to read all the way to the bottom, or read every section in detail. The second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. There are some key specifications that many other specifications draw on, such as: Values and Units Intrinsic and Extrinsic Sizing Box Alignment When something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore. Researching your property As an example we will take a look at the property grid-auto-rows, which defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property. You might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties. Clicking on a property will take you to the TR version of the spec - the latest published draft - and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns. Value For value we can see that the property accepts a value <track-size>. The next thing to do is to find out what that actually means; clicking will take you to where it is defined in the Grid spec. The <track-size> value is defined as accepting various values: <track-breadth> minmax( <inflexible-breadth> , <track-breadth> ) fit-content( <length-percentage> ) We need to head down the rabbit hole to find out what each of these means. From here we essentially go down line by line until we have unpacked the value of track-size. <track-breadth> is defined just below <track-size> as: <length-percentage> <flex> min-content max-content auto So these are all things that would be valid to use as a value for grid-auto-rows. The first value of <length-percentage> is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a <length> in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values. grid-auto-rows: 100px; grid-auto-rows: 1em; grid-auto-rows: 30%; When using percentages, it is important to know what it is a percentage of, as a percentage has to resolve from something.
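To make that concrete before we get to the spec’s own wording, here is a quick sketch of my own (an illustrative example, not one from the spec): given .grid { display: grid; height: 500px; grid-auto-rows: 30%; } each implicit row would resolve to 150px - 30% of the height of the grid container - assuming the container has a definite height for the percentage to resolve against.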
There is text in the spec which explains how column and row percentages work. “<percentage> values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.” This means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid. The second value of <flex> is also defined here, as “A non-negative dimension with the unit fr specifying the track’s flex factor.” This is the fr unit, and the spec links to a fuller definition of fr as this unit is only used in Grid Layout so it is therefore defined in the grid spec. We now know that a valid value would be: grid-auto-rows: 1fr; There is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum. This means that 1fr is really minmax(auto, 1fr). This is why having a number of tracks all at 1fr does not mean that all are equal sized, as a larger item in any of the tracks would have a large auto size and therefore would be larger after spare space had been distributed. We then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out. For example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary. We can now add min-content and max-content to our available values. grid-auto-rows: min-content; grid-auto-rows: max-content; The final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track will grow to fit the content used. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, that means these tracks stretch by default. Tracks using other types of length will not behave like this. grid-auto-rows: auto; So, this was the list for <track-breadth>; the next possible value is minmax( <inflexible-breadth> , <track-breadth> ). So this is telling us that we can use minmax() as a value: the final (max) value will be <track-breadth> and we have already unpacked all of the allowable values there. The first value (min) is detailed as an <inflexible-breadth>. If we look at the values for this, we discover that they are the same as <track-breadth>, minus the <flex> value: <length-percentage> min-content max-content auto We already know what all of these do, so we can add possible minmax() values to our list of values for <track-size>.
grid-auto-rows: minmax(100px, 200px); grid-auto-rows: minmax(20%, 1fr); grid-auto-rows: minmax(1em, auto); grid-auto-rows: minmax(min-content, max-content); Finally we can use fit-content( <length-percentage> ). We can see that fit-content takes a value of <length-percentage> which we already know to be either a length unit or a percentage. The spec details how fit-content is worked out, and it essentially allows a track which acts as if you had used the max-content keyword, however the track stops growing when it hits the length passed to it. grid-auto-rows: fit-content(200px); grid-auto-rows: fit-content(20%); Those are all of our possible values, and to round things off, look again at the initial <track-size> value: you can see it has a little + sign next to it; click that and you will be taken to the CSS Values and Units module to find that, “A plus (+) indicates that the preceding type, word, or group occurs one or more times.” This means that we can pass a single track size to grid-auto-rows or multiple track sizes as a space separated list. Below the box is an explanation of what happens if you pass in more than one track size: “If multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.” Therefore with the following CSS, if five implicit rows were needed they would be as follows: 100px 1fr auto 100px 1fr .grid { display: grid; grid-auto-rows: 100px 1fr auto; } Initial We can now move to the next line in the box, and you’ll be glad to know that it isn’t going to require as much unpacking! This simply defines the initial value for grid-auto-rows. If you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that they will use if they are invoked as part of the usage of the specification they are in, but you do not set a value for them. In the case of grid-auto-rows it is used whenever rows are created in the implicit grid, so it needs to have a value to be used even if you do not set one. Applies to This line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers. Inherited This tells us if the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows it is not inherited. A property such as color is inherited, so you do not need to set it on each element. Percentages Are percentages allowed for this property, and if so how are they calculated? In this case we are referred to the definition for grid-template-columns and grid-template-rows where we discover that the percentage is from the corresponding dimension of the content area. Media This defines the media group that the property belongs to. In this case visual. Computed Value This details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute.
Canonical Order If you have a property–generally a shorthand property–which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property. In general you don’t need to worry about this value in the table. Animation Type This details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet! That’s (mostly) it! Sometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at as they will highlight usage of the property that the spec editor has felt could use an example. There may also be some additional text explaining anything specific to this property. In selecting grid-auto-rows I chose a fairly complex property in terms of the work we needed to do to unpack the value. Many properties are far simpler than this. However ultimately, even when you come across a complex value, it really is just a case of stepping through the definitions until you come to the bottom of the rabbit hole. Being able to work out what is valid for each property is incredibly useful. It means you don’t waste time trying to use a value that doesn’t work for that property. You also may find that there are values you weren’t aware of that solve problems for you. Further reading Specifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need. You may also find useful: Value Definition Syntax on MDN. The MDN Glossary defines many common terms. Understanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax. How to read W3C Specs - from 2001 but still relevant. I hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else can be very useful indeed.",2018,Rachel Andrew,rachelandrew,2018-12-14T00:00:00+00:00,https://24ways.org/2018/researching-a-property-in-the-css-specifications/,code 244,It’s Beginning to Look a Lot Like XSSmas,"I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional. It’s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it. With platforms like GitHub and npm, giving the gift of code is so easy it’s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project.
But like any gift-giving, it’s also risky. Vulnerabilities and Open Source Open source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps. Graph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017 In the first 24 ways article this year, Katie referenced a few different types of vulnerability: Cross-site Request Forgery (also known as CSRF) SQL Injection Cross-site Scripting (also known as XSS) There are many more types of vulnerability, and those that live under the same category share similarities. For example, my favourite – is it weird to have a favourite vulnerability? – is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks – share it generously with your colleagues. Most vulnerabilities like this are not added intentionally – they’re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library. Others, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer’s curiosity. Another tactic to get developers to unwittingly install malicious packages into their codebase is “typosquatting” – back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env). This is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas. Santa’s on his sleigh If you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn’t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too. Let’s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that’s just the tip of the iceberg – have a look at the full dependency tree which contains 109 dependencies – more dependencies than there are Christmas puns in this article. 
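If you want to poke at a dependency tree like this yourself, npm can do it from the command line. A rough sketch using standard npm commands: npm ls prints the nested tree of everything your project has pulled in (on newer versions of npm you may need npm ls --all to see the deep dependencies), and npm audit checks all of it - deep dependencies included - against a database of known vulnerabilities.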
Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies. Fixing vulnerabilities – the ultimate Christmas gift If you’re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy. When you find out about a new vulnerability, you have some options: Fix the vulnerability via an upgrade You can often fix a vulnerability by upgrading the library to the latest version. Make sure you’re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version. Patch the vulnerable code Sometimes, a fix for a vulnerable library isn’t possible. This is often the case when a library is no longer being maintained, or when the version of the library being used is so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won’t change anything else. Switch to a different library If the library you’re using has no fix or patch, you may be better off switching it out for another one, particularly if it looks like it’s no longer maintained. Responsibly disclosing vulnerabilities Knowing how to responsibly disclose vulnerabilities is something I’m ashamed to admit that I didn’t know about before I joined a security company. But it’s so important! On discovering a new vulnerability, a developer has a few options: A malicious developer will exploit that vulnerability for their own gain. A reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited. An ethical and aware developer will follow what’s known as a “responsible disclosure process”. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too. It’s important to understand this process if you’re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code. So what does responsible disclosure look like? I’ll take Node.js’s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There’s a separate process for bugs found in third-party npm packages). Once you’ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they’ll give you regular updates about the progress they’re making towards fixing it. As part of this, they’ll figure out which versions are affected, and prepare fixes for them. They’ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure.
On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org. Tim Kadlec published an in-depth article about responsible disclosures if you’re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed. Encourage responsible disclosure Add a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability – 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn’t published one. Stats from the State of Open Source Security 2017 Bug bounties Some companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, it also reduces the enticement of exploiting a vulnerability over reporting it and getting a quick cash reward. HackerOne is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. WordPress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program. If you don’t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets for a report about a critical vulnerability in exchange for money. On one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question. A gift worth giving It’s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool – I’m just a little biased towards Snyk, but as Katie mentioned, there’s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities. Stats from the State of Open Source Security 2017 Most importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too.",2018,Anna Debenham,annadebenham,2018-12-17T00:00:00+00:00,https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/,code 246,Designing Your Site Like It’s 1998,"It’s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary—and my fourteenth contribution to 24 ways—I’d like to explain how I would’ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films. My design for Planes, Trains and Automobiles is fixed at 800px wide. Developing a <frameset> framework I’ll start by using frames to set up the framework for this new website. Frames are individual pages—one for navigation, the other for my content—pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible.
I can include any number of frames inside a <frameset> element. I add two rows to my <frameset>; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don’t want frame borders or any space between my frames, I set frameborder and framespacing attributes to 0: <frameset frameborder=""0"" framespacing=""0"" rows=""50,*""> […] </frameset> Next I add the source of my two frame documents. I don’t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame: <frameset frameborder=""0"" framespacing=""0"" rows=""50,*""> <frame noresize scrolling=""no"" src=""nav.html""> <frame src=""content.html""> </frameset> I do want links from my navigation to open in the content frame, so I give each <frame> a name so I can specify where I want links to open: <frameset frameborder=""0"" framespacing=""0"" rows=""50,*""> <frame name=""navigation"" noresize scrolling=""no"" src=""nav.html""> <frame name=""content"" src=""content.html""> </frameset> The framework for this website is simple as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets—and as many individual documents—as I need: <frameset rows=""50,*""> <frame name=""navigation""> <frameset cols=""25%,*""> <frame name=""sidebar""> <frame name=""content""> </frameset> </frameset> Letterbox framesets were common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space. Handling older browsers Sadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content: <noframes> <body> <p>This page uses frames, but your browser doesn’t support them. Please upgrade your browser.</p> </body> </noframes> Forcing someone back into a frame Sometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a <frameset>, people could easily miss out on other parts of a design. This short script will prevent this happening and because it’s vanilla Javascript, it doesn’t require a library such as jQuery: <script type=""text/javascript""> if (top == self) { location = 'frameset.html'; } </script> Laying out my page Before starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website: <body background=""img/container.jpg"" bgcolor=""#fef7fb"" link=""#245eab"" alink=""#245eab"" vlink=""#3c146e"" text=""#000000""> I want absolute control over how people experience my design and don’t want to allow it to stretch, so I first need a <table> which limits the width of my layout to 800px. The align attribute will keep this <table> in the centre of someone’s screen: <table width=""800"" align=""center""> <tr> <td>[…]</td> </tr> </table> Although they were developed for displaying tabular information, the cells and rows which make up the <table> element make it ideal for the precise implementation of a design. I need several tables—often nested inside each other—to implement my design. 
These include tables for a banner and three rows of content: <table width=""800"" align=""center""> <table>[…]</table> <table> <table> <table>[…]</table> </table> </table> <table>[…]</table> <table>[…]</table> </table> The width of the first table—used for my banner—is fixed to match the logo it contains. As I don’t need borders, padding, or spacing between these cells, I use attributes to remove them: <table border=""0"" cellpadding=""0"" cellspacing=""0"" width=""587"" align=""center""> <tr> <td><img src=""logo.gif"" border=""0"" width=""587"" alt=""Logo""></td> </tr> </table> The next table—which contains the largest image, introduction, and a call-to-action—is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images: <tr> <td><img src=""spacer.gif"" width=""593"" height=""1""></td> <td><img src=""spacer.gif"" width=""207"" height=""1""></td> </tr> The height and width of these “shims” or “spacers” is only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development. For the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images: <tr> <td background=""slice-1.jpg"" height=""473""> </td> <td background=""slice-2.jpg"" height=""473"">[…]</td> </tr> <tr> <td background=""slice-3.jpg"" height=""388""> </td> </tr> I use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table—this time with a white background—inside it: <table border=""0"" cellpadding=""1"" cellspacing=""0""> <tr> <td> <table bgcolor=""#ffffff"" border=""0"" cellpadding=""0"" cellspacing=""0""> […] </table> </td> </tr> </table> Adding details Tables are fabulous tools for laying out a page, but they’re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my “Buy the DVD” call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button: <table border=""0"" cellpadding=""0"" cellspacing=""0""> <tr> <td background=""btn-1.jpg"" border=""0"" height=""48"" width=""30""><img src=""spacer.gif"" width=""30"" height=""1""></td> <td background=""btn-2.jpg"" border=""0"" height=""48""> <center> <a href="""" target=""_blank""><b>Buy the DVD</b></a> </center> </td> <td background=""btn-3.jpg"" border=""0"" height=""48"" width=""30""><img src=""spacer.gif"" width=""30"" height=""1""></td> </tr> </table> I use those same elements to add details to headlines and lists too. 
Adding a “bullet” to each item in a list needs only two additional table cells, a circular graphic, and a spacer: <table border=""0"" cellpadding=""0"" cellspacing=""0""> <tr> <td width=""10""><img src=""li.gif"" border=""0"" width=""8"" height=""8""> </td> <td><img src=""spacer.gif"" width=""10"" height=""1""> </td> <td>Directed by John Hughes</td> </tr> </table> Implementing a typographic hierarchy So far I’ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use <font> elements to change the typeface from the browser’s default to any font installed on someone’s device: <font face=""Arial"">Planes, Trains and Automobiles is a comedy film […]</font> To adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser’s default. I use a size of 4 for this headline and 2 for the text which follows: <font face=""Arial"" size=""4""><b>Steve Martin</b></font> <font face=""Arial"" size=""2"">An American actor, comedian, writer, producer, and musician.</font> When I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every element on all pages on my website. NB: I use as many <br> elements as needed to create space between headlines and paragraphs. View the final result (and especially the source.) My modern day design for Planes, Trains and Automobiles. I can imagine many people reading this and thinking “This is terrible advice because we don’t develop websites like this in 2018.” That’s true. We have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes: font-variant-caps: titling-caps; font-variant-ligatures: common-ligatures; font-variant-numeric: oldstyle-nums; Grid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS: body { display: grid; grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr; grid-template-rows: auto; grid-column-gap: 2vw; grid-row-gap: 1vh; } Flexbox has made it easy to develop flexible components such as navigation links: nav ul { display: flex; } nav li { flex: 1; } Just one line of CSS can create multiple columns of fluid type: main { column-width: 12em; } CSS Shapes enable text to flow around irregular shapes including polygons: [src*=""main-img""] { float: left; shape-outside: polygon(…); } Today, we wouldn’t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead: .btn { background: linear-gradient(#8B1212, #DD3A3C); border-radius: 1em; box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50); } CSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we’re certain we know how to design and develop products and websites better than we did at the end of 1998. Strange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. 
That’s why it’s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today—tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass—will be any more relevant in the future than <font> elements, frames, layout tables, and spacer images are today. I have no prediction for what the web will be like twenty years from now. However, I want to believe we’ll build on what we’ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won’t be repeated. Head over to my website if you’d like to read about how I’d implement my design for ‘Planes, Trains and Automobiles’ today.",2018,Andy Clarke,andyclarke,2018-12-23T00:00:00+00:00,https://24ways.org/2018/designing-your-site-like-its-1998/,code 247,Managing Flow and Rhythm with CSS Custom Properties,"An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better, but it can also improve scan-ability. Browsers ship with default CSS and these styles often create consistent rhythm for flow elements out of the box. The problem, though, is that we often remove these styles with a CSS reset. Elements such as <div> and <section> also have no default margin or padding associated with them. I’ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS and levelling the playing field for component-driven front-ends, with very little success. This experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in! The Flow utility With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy. The code .flow { --flow-space: 1em; } .flow > * + * { margin-top: var(--flow-space); } What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them. The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty of setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need it. Pretty cool, right? Let’s look at some examples. A basic example See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/LXqerj What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element.
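For a rough idea of the markup involved, the demo boils down to something like this (a sketch rather than the exact CodePen code): <article class=""flow""> <h2>A heading</h2> <p>Some text…</p> <p>Some more text…</p> </article> Every sibling after the first inside that article picks up margin-top from the lobotomised owl selector, so the heading and paragraphs space themselves without any extra classes.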
Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. Because our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a <h2> in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we’d affect everything because margin values would be calculated as 1.1X the font size. This example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though? See the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/qQgxaY I like lots of whitespace with my article layouts, so the 1em space isn’t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances: h2 { --flow-space: 3rem; } Notice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured. A more advanced example Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space. See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/ZmPGyL In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself: <article class=""media__content flow""> ... </article> Something to remember though: the custom properties cascade in the same way that other CSS values do, so you’ve got to keep that in mind. There’s a great example of that here: because we’ve got the flow utility on our .features component, which has a --flow-space override, the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element. “But what about old browsers?”, I hear you cry. We’re using CSS Custom Properties which, at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size: .flow { --flow-space: 1em; } .flow > * + * { margin-top: 1em; margin-top: var(--flow-space); } Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement!
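If you prefer feature queries to that double-declaration trick, a broadly similar approach (shown here as a sketch, and a matter of taste rather than a recommendation) would be: .flow > * + * { margin-top: 1em; } @supports (--css: variables) { .flow > * + * { margin-top: var(--flow-space, 1em); } } Either way, browsers without Custom Properties support still get a sensible 1em of spacing.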
Wrapping up This tiny little utility can bring great power for when you want to consistently space elements, vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS. If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on!",2018,Andy Bell,andybell,2018-12-07T00:00:00+00:00,https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/,code 249,Fast Autocomplete Search for Your Website,"Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive. I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface. Try it out here, then read on to see how I built it. First step: crawling the data The first step in building a search engine is to grab a copy of the data that you plan to make searchable. There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need. I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites. We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running. Then we’ll tell wget to recursively crawl the website, using the --recursive flag. We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2 The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X ""/*/*/comments"". Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this. Tie all of those options together and we get this command: wget --recursive --wait 2 --no-clobber -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 -X ""/*/*/comments"" https://24ways.org/archives/ If you leave this running for a few minutes, you’ll end up with a folder structure something like this: $ find 24ways.org 24ways.org 24ways.org/2013 24ways.org/2013/why-bother-with-accessibility 24ways.org/2013/why-bother-with-accessibility/index.html 24ways.org/2013/levelling-up 24ways.org/2013/levelling-up/index.html 24ways.org/2013/project-hubs 24ways.org/2013/project-hubs/index.html 24ways.org/2013/credits-and-recognition 24ways.org/2013/credits-and-recognition/index.html ... 
As a quick sanity check, let’s count the number of HTML pages we have retrieved: $ find 24ways.org | grep index.html | wc -l 328 There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead: wget --recursive --wait 2 --no-clobber -I /2018 -X ""/*/*/comments"" https://24ways.org/ Thanks to --no-clobber, this is safe to run every day in December to pick up any new content. We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index. Building a search index using SQLite There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL. I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite. SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch! SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications. This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year. It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension. I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday. If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages: $ python3 -m venv ./jupyter-venv $ ./jupyter-venv/bin/pip install jupyter # ... lots of installer output # Now lets install some extra packages we will need later $ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib # And start the notebook web application $ ./jupyter-venv/bin/jupyter-notebook # This will open your browser to Jupyter at http://localhost:8888/ You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook. A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account. ​ Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. 
Check out the notebook for a line-by-line explanation of what’s going on. from pathlib import Path from bs4 import BeautifulSoup as Soup base = Path(""/Users/simonw/Dropbox/Development/24ways-search"") articles = list(base.glob(""*/*/*/*.html"")) # articles is now a list of paths that look like this: # PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html') docs = [] for path in articles: year = str(path.relative_to(base)).split(""/"")[1] url = 'https://' + str(path.relative_to(base).parent) + '/' soup = Soup(path.open().read(), ""html5lib"") author = soup.select_one("".c-continue"")[""title""].split( ""More information about"" )[1].strip() author_slug = soup.select_one("".c-continue"")[""href""].split( ""/authors/"" )[1].split(""/"")[0] published = soup.select_one("".c-meta time"")[""datetime""] contents = soup.select_one("".e-content"").text.strip() title = soup.find(""title"").text.split("" ◆"")[0] try: topic = soup.select_one( '.c-meta a[href^=""/topics/""]' )[""href""].split(""/topics/"")[1].split(""/"")[0] except TypeError: topic = None docs.append({ ""title"": title, ""contents"": contents, ""year"": year, ""author"": author, ""author_slug"": author_slug, ""published"": published, ""url"": url, ""topic"": topic, }) After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this: [ { ""title"": ""Why Bother with Accessibility?"", ""contents"": ""Web accessibility (known in other fields as inclus..."", ""year"": ""2013"", ""author"": ""Laura Kalbag"", ""author_slug"": ""laurakalbag"", ""published"": ""2013-12-10T00:00:00+00:00"", ""url"": ""https://24ways.org/2013/why-bother-with-accessibility/"", ""topic"": ""design"" }, { ""title"": ""Levelling Up"", ""contents"": ""Hello, 24 ways. I’m Ashley and I sell property ins..."", ""year"": ""2013"", ""author"": ""Ashley Baxter"", ""author_slug"": ""ashleybaxter"", ""published"": ""2013-12-06T00:00:00+00:00"", ""url"": ""https://24ways.org/2013/levelling-up/"", ""topic"": ""business"" }, ... My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries. import sqlite_utils db = sqlite_utils.Database(""/tmp/24ways.db"") db[""articles""].insert_all(docs) That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table. (I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.) You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this: $ sqlite3 /tmp/24ways.db sqlite> .headers on sqlite> .mode column sqlite> select title, author, year from articles; title author year ------------------------------ ------------ ---------- Why Bother with Accessibility? Laura Kalbag 2013 Levelling Up Ashley Baxte 2013 Project Hubs: A Home Base for Brad Frost 2013 Credits and Recognition Geri Coady 2013 Managing a Mind Christopher 2013 Run Ragged Mark Boulton 2013 Get Started With GitHub Pages Anna Debenha 2013 Coding Towards Accessibility Charlie Perr 2013 ... <Ctrl+D to quit> There’s one last step to take in our notebook.
We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this: db[""articles""].enable_fts([""title"", ""author"", ""contents""]) Introducing Datasette Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet. We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that? If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article. If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter: ./jupyter-venv/bin/pip install datasette This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette. Now you can run Datasette against the 24ways.db file we created earlier like so: ./jupyter-venv/bin/datasette /tmp/24ways.db This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application. If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db Publishing the database to the internet One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish: $ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways -----> Python app detected -----> Installing requirements with pip -----> Running post-compile hook -----> Discovering process types Procfile declares types -> web -----> Compressing... Done: 47.1M -----> Launching... Released v8 https://search-24ways.herokuapp.com/ deployed to Heroku If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways. You can run this command as many times as you like to deploy updated versions of the underlying database. Searching and faceting Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action. ​ SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for access* which returns articles on accessibility: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A A neat feature of Datasette is the ability to calculate facets against your data. 
Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL: http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A Better search using custom SQL The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page. This exposes the underlying SQL query, which looks like this: select rowid, * from articles where rowid in ( select rowid from articles_fts where articles_fts match :search ) order by rowid limit 101 We can do better than this by constructing a custom SQL query. Here’s the query we will use instead: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, articles_fts.rank, articles.title, articles.url, articles.author, articles.year from articles join articles_fts on articles.rowid = articles_fts.rowid where articles_fts match :search || ""*"" order by rank limit 10; You can try this query out directly - since Datasette opens the underling SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute. There’s a lot going on here! Let’s break the SQL down line-by-line: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on. articles_fts.rank, articles.title, articles.url, articles.author, articles.year These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table. from articles join articles_fts on articles.rowid = articles_fts.rowid articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it. where articles_fts match :search || ""*"" order by rank limit 10; :search || ""*"" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results. How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects: JSON API call to search for articles matching SVG The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature. 
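If you’d like to poke at this API outside the browser before writing any front-end code, here’s a rough Python sketch (not part of the original notebook; note that the database name in the URL includes a content hash, so copy the current one from Datasette’s own links):

import json
import urllib.parse
import urllib.request

# The custom query from above, with '*' written as a plain SQL string literal
sql = '''
select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
  articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || '*' order by rank limit 10
'''

params = urllib.parse.urlencode({'sql': sql, 'search': 'svg', '_shape': 'array'})
url = 'https://search-24ways.herokuapp.com/24ways-866073b.json?' + params

with urllib.request.urlopen(url) as response:
    rows = json.load(response)

for row in rows:
    print(row['rank'], row['title'])

Each row comes back as a plain JSON object, which is exactly the shape we’ll want in the JavaScript below.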
A simple JavaScript autocomplete search interface I considered building this using React or Svelte or another of the myriad JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own. We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh: function debounce(func, wait, immediate) { let timeout; return function() { let context = this, args = arguments; let later = () => { timeout = null; if (!immediate) func.apply(context, args); }; let callNow = immediate && !timeout; clearTimeout(timeout); timeout = setTimeout(later, wait); if (callNow) func.apply(context, args); }; }; We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing. Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these: const htmlEscape = (s) => s.replace( /&/g, '&amp;' ).replace( />/g, '&gt;' ).replace( /</g, '&lt;' ).replace( /""/g, '&quot;' ).replace( /'/g, '&#039;' ); (The ampersand has to be escaped first, otherwise we would double-escape the entities produced by the later replacements.) We need some HTML for the search form, and a div in which to render the results: <h1>Autocomplete search</h1> <form> <p><input id=""searchbox"" type=""search"" placeholder=""Search 24ways"" style=""width: 60%""></p> </form> <div id=""results""></div> And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript: // Embed the SQL query in a multi-line backtick string: const sql = `select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, articles_fts.rank, articles.title, articles.url, articles.author, articles.year from articles join articles_fts on articles.rowid = articles_fts.rowid where articles_fts match :search || ""*"" order by rank limit 10`; // Grab a reference to the <input type=""search""> const searchbox = document.getElementById(""searchbox""); // Used to avoid race-conditions: let requestInFlight = null; searchbox.onkeyup = debounce(() => { const q = searchbox.value; // Construct the API URL, using encodeURIComponent() for the parameters const url = ( ""https://search-24ways.herokuapp.com/24ways-866073b.json?sql="" + encodeURIComponent(sql) + `&search=${encodeURIComponent(q)}&_shape=array` ); // Unique object used just for race-condition comparison let currentRequest = {}; requestInFlight = currentRequest; fetch(url).then(r => r.json()).then(d => { if (requestInFlight !== currentRequest) { // Avoid race conditions where a slow request returns // after a faster one. return; } let results = d.map(r => ` <div class=""result""> <h3><a href=""${r.url}"">${htmlEscape(r.title)}</a></h3> <p><small>${htmlEscape(r.author)} - ${r.year}</small></p> <p>${highlight(r.snippet)}</p> </div> `).join(""""); document.getElementById(""results"").innerHTML = results; }); }, 100); // debounce every 100ms There’s just one more utility function, used to help construct the HTML results: const highlight = (s) => htmlEscape(s).replace( /b4de2a49c8/g, '<b>' ).replace( /8c94a2ed4b/g, '</b>' ); This is what those unique strings passed to the snippet() function were for. Avoiding race conditions in autocomplete One trick in this code that you may not have seen before is the way race-conditions are handled.
Any time you build an autocomplete feature, you have to consider the following case: User types acces Browser sends request A - querying documents matching acces* User continues to type accessibility Browser sends request B - querying documents matching accessibility* Request B returns. It was fast, because there are fewer documents matching the full term The results interface updates with the documents from request B, matching accessibility* Request A returns results (this was the slower of the two requests) The results interface updates with the documents from request A - results matching access* This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on. Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null. Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well. When the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results. It’s not a lot of code, really And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much you like. How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with. A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite. More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now. For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.",2018,Simon Willison,simonwillison,2018-12-19T00:00:00+00:00,https://24ways.org/2018/fast-autocomplete-search-for-your-website/,code 253,Clip Paths Know No Bounds,"CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance. The basics of a clip path Before we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everywhere in the element that is not within the bounds of our shape will be visually removed. 
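Before we get to polygons, here’s a quick sketch of the simpler shape functions in use (the class names are made up for illustration):

/* The largest circle that fits the element's box */
.badge { clip-path: circle(50% at center); }

/* An ellipse sized relative to the box's width and height */
.banner { clip-path: ellipse(60% 40% at 50% 50%); }

/* inset() pulls the visible area in from each edge */
.card { clip-path: inset(1em 2em); }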
Using the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely’s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element’s dimensions. See the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen. So for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines. clip-path: polygon( 30% 0%, 70% 0%, 100% 30%, 100% 70%, 70% 100%, 30% 100%, 0% 70%, 0% 30% ); A shape with less vertices than the eye can see It’s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box — or more specifically when we think outside the range of 0% - 100%. Our element’s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element. See the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen. By going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons. Our earlier octagon can similarly be made with only four points. See the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen. Multiple shapes, one clip path We can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon(). See the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen. Depending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element’s box, we can draw extra lines to help us get where we need to go next as needed. It can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles. See the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen. Different shapes with fill rules A polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification — the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape. See the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen. As lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path’s lines, the point is considered outside, and if it crosses an odd number the point is inside. Order of operations It is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more. 
These compositing effects are applied in the order: CSS Filters (e.g. filter: blur(2px)) Clipping (e.g. what this article is about) Masking (Clipping’s cousin) Blend Modes (e.g. mix-blend-mode: multiply) Opacity This means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element’s box edges. See the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen. If we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally. Revealing content with animation CSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points need to be the same for each keyframe, as well as the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values. See the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen. Don’t keep CSS Shapes in a box Clip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible which makes this property fairly powerful.",2018,Dan Wilson,danwilson,2018-12-20T00:00:00+00:00,https://24ways.org/2018/clip-paths-know-no-bounds/,code 255,Inclusive Considerations When Restyling Form Controls,"I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility. I would like to begin by saying all these things. However, they’re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I’m thinking of next year. Returning to reality, styling form controls is more tricky and time consuming these days rather than flat out “hard”. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match. However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people the experiences they deserve. 
Quick level setting Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following: Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels. It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics. Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing. Visually resetting and restyling form controls Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors. As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation. See the Pen form control styling comparisons by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/gZOrZm/ Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated. If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browser respect without you even realizing. You ever try playing with the accessibility settings of your display on macOS, or similar Windows setting? It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements: Non-standard This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future. While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all. Can’t make it? Fake it. Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended. 
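To make that concrete, the usual appearance-based reset looks something like the following sketch (the values here are illustrative, not lifted from the article):

input[type=""checkbox""] {
  -webkit-appearance: none; /* Chrome and Safari */
  -moz-appearance: none; /* Firefox */
  appearance: none; /* the standard property */
  width: 1em;
  height: 1em;
  border: 0.125em solid currentColor;
}
input[type=""checkbox""]:checked {
  background-color: rebeccapurple;
}

/* IE11 ignores all three appearance declarations above,
   so it simply keeps drawing its native checkbox. */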
Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you’ll need to take a different approach in styling. Getting clever with markup and CSS The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox. See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/RqEayN/ Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen? As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others. Limitations to cleverness Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to. However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider? Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach. Using ARIA when appropriate Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we’re not leveraging a native HTML control as our foundation, it doesn’t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA). ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren’t native to the language. 
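As a flavour of what that looks like in markup, here’s the classic illustration (a hypothetical sketch, not code from the article) of a div being given checkbox semantics:

<div role=""checkbox"" aria-checked=""false"" tabindex=""0"">
  Subscribe to our newsletter
</div>

<!-- role tells assistive technology what this is, aria-checked carries
     its state, and tabindex puts it in the keyboard tab order. But
     toggling aria-checked on click and on the Space key still has to
     be wired up in JavaScript. -->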
While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA: If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so. By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible. Exceptions to the rule One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control. Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized. While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain which communicate a different way of interacting with the control than what you might expect from a native switch control. See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/XyvoeE/ By adding a role=""switch"" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, it’s inherent ability to be focused by Tab key, and Space key to toggle state. But while this is a valid approach to take in building a switch, how does this actually match up to reality? Does it pass the test(s)? Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we’ve restyled or rebuilt works the way people expect it to, if not better. What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example: Is the control implemented in all supported browsers? If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field? If so: is each browser’s implementation a good user experience? Is there room for improvement that can be tested against the native baseline? Test with multiple input devices. Where the control is implemented, what is the quality of the user experience when using different input devices, such as mouse, touchscreen, keyboard, speech recognition or switch device, to name a few. You’ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. 
If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well. How well does the control take to custom styles? If a control can be styled enough to not need to be rebuilt from scratch, that’s great! But make sure that there are no adverse affects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling. Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled. Do specifications match reality? If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations? For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don’t match people’s expectations, then maybe another solution is in order? Wrapping up While styling form controls is definitely easier than it’s ever been, that doesn’t mean that it’s at all simple, nor will it likely ever be. The level of difficulty you’re going to face is going to depend entirely on what it is you’re hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you’ll still be reliant on browsers and assistive technologies being able to fully understand the component they’ve been presented. Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option. 2018 didn’t end up being the year we got full customization of form controls sorted out. But that’s OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs. And hey. There’s always next year, right?",2018,Scott O'Hara,scottohara,2018-12-13T00:00:00+00:00,https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/,code 256,Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist,"We’re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month. *(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!) Looking for critters in rocky intertidal habitats One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. 
Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’ A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.) ​ They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons. My favourite place to go tidepooling here is Pillar Point in Half Moon bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones as well as being relatively accessible. ​ I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist. Using technology and the croudsourced species observations of the iNaturalist community we can shortcut our way to this superpower! Finding nearby animals with iNaturalist We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society. I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting. This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF. ​ iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the ‘Try it out’ button. ​ You’ll then get a URL that looks a bit like https://api.inaturalist.org/v1/observations?captive=false &geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584 &radius=5&order=desc&order_by=created_at which you can call and interrrogate using a programming language of your choice. 
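For instance, here’s a minimal sketch in browser JavaScript (not from the original article), using the same parameters as the URL above:

const params = new URLSearchParams({
  captive: 'false',
  geo: 'true',
  verifiable: 'true',
  taxon_id: '47113', // Order Nudibranchia
  lat: '37.495461',
  lng: '-122.499584',
  radius: '5',
  order: 'desc',
  order_by: 'created_at'
});

fetch('https://api.inaturalist.org/v1/observations?' + params)
  .then(response => response.json())
  .then(data => {
    console.log(`${data.total_results} observations found`);
    data.results.forEach(observation => {
      // taxon can be missing on unidentified observations, so guard for it
      console.log(observation.taxon && observation.taxon.name, observation.observed_on);
    });
  });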
If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season). Rapid development using Observable Notebooks We’re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn’t require any setup at all. You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example. Each ‘notebook’ consists of a page with a column of ‘cells’, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction one from Observable (and D3) creator Mike Bostock. Developing your Naturalist superpowers If you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you. For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in. We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com The tool lets you export the bounding box in several formats using the dropdown at the bottom left under the map. We are going to use the ‘DublinCore’ format as it’s closest to the format needed by the iNaturalist API. westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811 A quick map primer: The higher the latitude the more north it is The lower the latitude the more south it is Latitude 0 = the equator The higher the longitude the more east it is of Greenwich The lower the longitude the more west it is of Greenwich Longitude 0 = Greenwich In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California: nelat = highest latitude = north limit = 37.499811 nelng = highest longitude = east limit = -122.492738 swlat = smallest latitude = south limit = 37.492805 swlng = smallest longitude = west limit = -122.50542 As API parameters these look like this: ?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542 These parameters in this format can be used for most of the iNaturalist API methods. Nudibranch seasonality in Pillar Point We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.
In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalists internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent ‘Order Nudibranchia’. Another useful piece of naturalist knowledge is understanding the biological classification scheme of Taxanomic Rank - roughly, when a species has a Latin name of two words eg ‘Glaucus Atlanticus’ the first Latin word is the ‘Genus’ like a family name ‘Glaucus’, and the second word identifies that particular species, like a given name ‘Atlanticus’. The two Latin words together indicate a specific species, the term we use colloquially to refer to a type of animal often differs wildly region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus Atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, Scientists like using this Latin name format instead. The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box: pillar_point_counts_per_week = fetch( ""https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true"" ).then(response => { return response.json(); }) Our next step is to take this data and draw a graph! We’ll be using Vega-Lite for this, which is a fab JavaScript graphing libary that is also easy and fun to use with Observable Notebooks. (Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite) The iNaturalist API returns data that looks like this: { ""total_results"": 53, ""page"": 1, ""per_page"": 53, ""results"": { ""week_of_year"": { ""1"": 136, ""2"": 20, ""3"": 150, ""4"": 65, ""5"": 186, ""6"": 74, ""7"": 47, ""8"": 87, ""9"": 64, ""10"": 56, But for our Vega-Lite graph we need data that looks like this: [{ ""week"": ""01"", ""value"": 136 }, { ""week"": ""02"", ""value"": 20 }, ...] We can convert what we get back from the API to the second format using a loop that iterates over the object keys: objects_to_plot = { let objects = []; Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) { objects.push({ week: `Wk ${week_index.toString()}`, observations: pillar_point_counts_per_week.results.week_of_year[week_index] }); }) return objects; } We can then plug this into Vega-Lite to draw us a graph: vegalite({ data: {values: objects_to_plot}, mark: ""bar"", encoding: { x: {field: ""week"", type: ""nominal"", sort: null}, y: {field: ""observations"", type: ""quantitative""} }, width: width * 0.9 }) It’s worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Achademy of Sciences. So, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names. 
pillar_point_nudibranches = { let api_results = await fetch( ""https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true"" ).then(r => r.json()) let species_list = api_results.results.map(i => ({ value: i.taxon.id, label: `${i.taxon.preferred_common_name} (${i.taxon.name})` })); return species_list } We can create an interactive select box by importing code from Jeremy Ashkanas’ Observable Notebook: add import {select} from ""@jashkenas/inputs"" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn’t matter - if one cell is referenced by any other cell then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another! viewof select_species = select({ title: ""Which Nudibranch do you want to see seasonality for?"", options: [{value: """", label: ""All the Nudibranchs!""}, ...pillar_point_nudibranches], value: """" }) Then we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113. pillar_point_counts_per_month_per_species = fetch( `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true` ).then(r => r.json()) Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite: objects_to_plot_species_month = { let objects = []; Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) { objects.push({ month: (new Date(2018, (month_index - 1), 1)).toLocaleString(""en"", {month: ""long""}), observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index] }); }) return objects; } (Note that in the above code we are creating a date object with our specific month in, and using toLocalString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index -1 to get the correct month) And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update! vegalite({ data: {values: objects_to_plot_species_month}, mark: ""bar"", encoding: { x: {field: ""month"", type: ""nominal"", sort:null}, y: {field: ""observations"", type: ""quantitative""} }, width: width * 0.9 }) Now we can see when is the best time of year to plan to go tidepooling in Pillar Point if we want to find a specific species of Nudibranch. ​ This tool is great for planning when we to go rockpooling at Pillar Point, but what about if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs? Well… we can create ourselves a dynamic guide that you can with a list of the species, their photo, name and how many times they have been observed in that month of the year! Our select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month. 
viewof selected_month = select({ title: ""When do you want to see Nudibranchs?"", options: [ { label: ""Whenever"", value: """" }, { label: ""January"", value: ""1"" }, { label: ""February"", value: ""2"" }, { label: ""March"", value: ""3"" }, { label: ""April"", value: ""4"" }, { label: ""May"", value: ""5"" }, { label: ""June"", value: ""6"" }, { label: ""July"", value: ""7"" }, { label: ""August"", value: ""8"" }, { label: ""September"", value: ""9"" }, { label: ""October"", value: ""10"" }, { label: ""November"", value: ""11"" }, { label: ""December"", value: ""12"" }, ], value: """" }) We then can use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We’ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name. all_species_data = fetch( `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true` ).then(r => r.json()) You can render HTML directly in a notebook cell using Observable’s html tagged template literal: <style> .collection { margin-top: 2em } .collection .species { display: inline-block; width: 9em; margin-bottom: 2em; } .collection .species-name { font-size: 1em; margin: 0; padding: 0 } .collection .species-count { margin: 0 0 0.3em 0; padding: 0; font-size: 0.75em; color: #999; font-style: italic; } .collection img { display: block; width: 100% } .collection select { font-size: 1.5em; } </style> <h2>If you go to Pillar Point ${ {"""": """", ""1"":""in January"", ""2"":""in Febrary"", ""3"":""in March"", ""4"":""in April"", ""5"":""in May"", ""6"":""in June"", ""7"":""in July"", ""8"":""in August"", ""9"":""in September"", ""10"":""in October"", ""11"":""in November"", ""12"":""in December"", }[selected_month] } you might see…</h2> <div class=""collection""> ${all_species_data.results.map(s => `<div class=""species""><h3 class=""species-name"">${s.taxon.name}</h3> <p class=""species-count"">Seen ${s.count} times</p> <img src=""${s.taxon.default_photo.medium_url}""></div> `)} </div> These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month! ​ Play with it yourself in this Observable Notebook. Conclusion I hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower. Lastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there. Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.",2018,Natalie Downe,nataliedowne,2018-12-18T00:00:00+00:00,https://24ways.org/2018/observable-notebooks-and-inaturalist/,code 257,The (Switch)-Case for State Machines in User Interfaces,"You’re tasked with creating a login form. Email, password, submit button, done. “This will be easy,” you think to yourself. Login form by Selecto You’ve made similar forms many times in the past; it’s essentially muscle memory at this point. You’re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. 
Sure, you’ll have to translate the pixels to meaningful, responsive CSS values, but that’s the least of your problems. As you’re writing up the HTML structure and CSS layout and styles for this form, you realize that you don’t know what the successful “logged in” page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work. What if login fails? Where do those errors show up? Should we show errors differently if the user forgot to enter their email, or password, or both? Or should the submit button be disabled? Should we validate the email field? When should we show validation errors – as they’re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.) When should the errors disappear? What do we show during the login process? Some loading spinner? What if loading takes too long, or a server error occurs? Many more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases. Modeling Behavior Describing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story – they often leave out potential edge-cases or small yet important bits of information. However, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine. The general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states: start - not submitted yet loading - submitted and logging in success - successfully logged in error - login failed Additionally, we can describe an application as accepting a finite number of events – that is, all the possible events that can be “sent” to the application, either from the user or some other external entity: SUBMIT - pressing the submit button RESOLVE - the server responds, indicating that login is successful REJECT - the server responds, indicating that login failed Then, we can combine these states and events to describe the transitions between them. That is, when the application is in one state, an an event occurs, we can specify what the next state should be: From the start state, when the SUBMIT event occurs, the app should be in the loading state. From the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state. If login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state. From the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state. Otherwise, if any other event occurs, don’t do anything and stay in the same state. That’s a pretty thorough description, similar to a user story! It’s also a bit more symbolic than a user story (e.g., “when the SUBMIT event occurs” instead of “when the user presses the submit button”), and that’s for a reason. 
By representing states, events, and transitions symbolically, we can visualize what this state machine looks like: Every state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event. From Visuals to Code Drawing a state machine doesn’t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn’t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application. With the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements: function loginMachine(state, event) { switch (state) { case 'start': if (event === 'SUBMIT') { return 'loading'; } break; case 'loading': if (event === 'RESOLVE') { return 'success'; } else if (event === 'REJECT') { return 'error'; } break; case 'success': // Accept no further events break; case 'error': if (event === 'SUBMIT') { return 'loading'; } break; default: // This should never occur return undefined; } return state; // any other event: stay in the current state } console.log(loginMachine('start', 'SUBMIT')); // => 'loading' This is fine (I suppose) but personally, I find it much easier to use objects: const loginMachine = { initial: ""start"", states: { start: { on: { SUBMIT: 'loading' } }, loading: { on: { REJECT: 'error', RESOLVE: 'success' } }, error: { on: { SUBMIT: 'loading' } }, success: {} } }; function transition(state, event) { const stateConfig = loginMachine.states[state]; // Look up the state return (stateConfig.on || {})[event] // Look up the next state based on the event || state; // If not found, return the current state } console.log(transition('start', 'SUBMIT')); // => 'loading' As you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here: A Common Language Between Designers and Developers Although finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can: identify all possible states, and potentially missing states describe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?) eliminate impossible states and identify states that are “unreachable” (have no entry transition) or “sunken” (have no exit transition) add features with full confidence of knowing what other states they might affect simplify redundant states or complex user flows create test paths for almost every possible user flow, and easily identify edge cases collaborate better by understanding the entire application model equally. Not a New Idea I’m not the first to suggest that state machines can help bridge the gap between design and development.
Vince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines User flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them. In 1999, Ian Horrocks wrote a book titled “Constructing the User Interface with Statecharts”, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today. More than a decade earlier, David Harel published “Statecharts: A Visual Formalism for Complex Systems”, in which the statechart - an extended hierarchical state machine model - is born. State machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits: Visualized modeling Precise diagrams Automatic code generation Comprehensive test coverage Accommodation of late-breaking requirements changes Moving Forward It’s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources: The World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications The Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling I gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases. I have also been working on XState, which is a library for “state machines and statecharts for the modern web”. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages). I’m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.",2018,David Khourshid,davidkhourshid,2018-12-12T00:00:00+00:00,https://24ways.org/2018/state-machines-in-user-interfaces/,code 258,Mistletoe Offline,"It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season. I am, of course, talking about Murphy, and the golden rule he gave unto us: Anything that can go wrong will go wrong. So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit? 
But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong. I guess there’s nothing we can do about those particular situations, right? Wrong! A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade. If you’ve got a custom 404 page, why not make a custom offline page too? Get your server in order Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields. If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must. Make an offline page Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference. When you’re done, publish the offline page at a suitably imaginative URL, like, say /offline.html. Pre-cache your offline page Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener: addEventListener('install', installEvent => { // put your instructions here. }); // end addEventListener In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code: addEventListener('install', installEvent => { installEvent.waitUntil( caches.open('Johnny') .then( JohnnyCache => { return JohnnyCache.addAll([ '/offline.html' ]); // end addAll }) // end open.then ); // end waitUntil }); // end addEventListener I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point: addEventListener('install', installEvent => { installEvent.waitUntil( caches.open('Johnny') .then( JohnnyCache => { return JohnnyCache.addAll([ '/offline.html', '/path/to/stylesheet.css', '/path/to/javascript.js', '/path/to/image.jpg' ]); // end addAll }) // end open.then ); // end waitUntil }); // end addEventListener Make sure that the URLs are correct.
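If you mistype one of those URLs, the failure can be easy to miss, so it’s worth surfacing it while you’re developing. Here’s one way of doing that (a small sketch of my own, not part of the original walkthrough): any failure from addAll gets logged and then rethrown, so the installation still counts as failed:

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      return JohnnyCache.addAll([
        '/offline.html',
        '/path/to/stylesheet.css'
      ]); // end addAll
    }) // end open.then
    .catch( cacheError => {
      console.error('Pre-caching failed:', cacheError);
      throw cacheError; // keep the installation marked as a failure
    }) // end catch
  ); // end waitUntil
}); // end addEventListener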
If just one of the URLs in the list fails to resolve, none of the items in the list will be cached. Intercept requests The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time: addEventListener('fetch', fetchEvent => { // What happens next is up to you! }); // end addEventListener Let’s write a fairly conservative script with the following logic: Whenever a file is requested, First, try to fetch it from the network, But if that doesn’t work, try to find it in the cache, But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead. Here’s how that translates into JavaScript: // Whenever a file is requested addEventListener('fetch', fetchEvent => { const request = fetchEvent.request; fetchEvent.respondWith( // First, try to fetch it from the network fetch(request) .then( responseFromFetch => { return responseFromFetch; }) // end fetch.then // But if that doesn't work .catch( fetchError => { // try to find it in the cache return caches.match(request) .then( responseFromCache => { if (responseFromCache) { return responseFromCache; // But if that doesn't work } else { // and it's a request for a web page if (request.headers.get('Accept').includes('text/html')) { // show the custom offline page instead return caches.match('/offline.html'); } // end if } // end if/else }) // end match.then }) // end fetch.catch ); // end respondWith }); // end addEventListener I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas. Hook up your service worker script You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re loading on every page of your site, or add this in a script element at the end of every page’s HTML: if (navigator.serviceWorker) { navigator.serviceWorker.register('/serviceworker.js'); } That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend. You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted. Go further! Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe! This is just the beginning. You can do more with service workers.
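Here’s one example to whet your appetite before the ideas that follow. This isn’t from the walkthrough above, just a sketch of a common pattern: when you update your offline page, you can rename the cache (say, from Johnny to Johnny2) and then use the activate event, which fires when a new service worker takes over, to delete any out-of-date caches:

addEventListener('activate', activateEvent => {
  activateEvent.waitUntil(
    caches.keys()
    .then( cacheNames => {
      return Promise.all(
        cacheNames.map( cacheName => {
          if (cacheName !== 'Johnny2') {
            // delete any cache that isn't the current one
            return caches.delete(cacheName);
          } // end if
        }) // end map
      ); // end all
    }) // end keys.then
  ); // end waitUntil
}); // end addEventListener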
What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version. Or, what if instead of reaching out to the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images. So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript. Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action.",2018,Jeremy Keith,jeremykeith,2018-12-04T00:00:00+00:00,https://24ways.org/2018/mistletoe-offline/,code 260,The Art of Mathematics: A Mandala Maker Tutorial,"In front-end development, there’s often a great deal of focus on tools that aim to make our work more efficient. But what if you’re new to web development? When you’re just starting out, the amount of new material can be overwhelming, particularly if you don’t have a solid background in Computer Science. But the truth is, once you’ve learned a little bit of JavaScript, you can already make some pretty impressive things. A couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days: Mandala Maker user interface The coolest part about it is the fact that it’s a tool: anyone can use it to create something original and brand new. In this tutorial, we’ll build a smaller version of this app – a symmetrical drawing tool in ES5 JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we’re done, you’re on your own and can tweak it as you please. Be creative! Preparations: a blank canvas The first thing you’ll need for this project is a designated drawing space. We’ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like). Files Create 3 files: index.html, styles.css, main.js. Don’t forget to include your JS and CSS files in your HTML. <!DOCTYPE html> <html> <head> <meta charset=""utf-8""> <link rel=""stylesheet"" type=""text/css"" href=""styles.css""> <script src=""main.js""></script> </head> <body onload=""init()""> <canvas width=""600"" height=""600""> <p>Your browser doesn't support canvas.</p> </canvas> </body> </html> I’ll ask you to update your HTML file at a later point, but the CSS file we’ll start with will stay the same throughout the project. This is the full CSS we are going to use: body { background-color: #ccc; text-align: center; } canvas { touch-action: none; background-color: #fff; } button { font-size: 110%; } Next steps We are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts: Building a simple drawing app with one line and one color Adding a Clear button and a color picker Adding more functionality: 2 line drawing (add the first reflection) Adding more functionality: 8 line drawing (add 6 more reflections!)
Interactive demos This tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet. Part 1: A simple drawing app Let’s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables: var canvas, context, w, h, prevX = 0, currX = 0, prevY = 0, currY = 0, draw = false; The variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY) repeatedly while we move the pointer upon the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true. 1. init Responsible for canvas setup, this listens for pointer events and their coordinates, and sets everything in motion by calling other functions, which in turn handle touch and movement events. function init() { canvas = document.querySelector(""canvas""); context = canvas.getContext(""2d""); w = canvas.width; h = canvas.height; canvas.onpointermove = handlePointerMove; canvas.onpointerdown = handlePointerDown; canvas.onpointerup = stopDrawing; canvas.onpointerout = stopDrawing; } 2. drawLine This is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial. lineWidth and lineCap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY). function drawLine() { var a = prevX, b = prevY, c = currX, d = currY; context.lineWidth = 4; context.lineCap = ""round""; context.beginPath(); context.moveTo(a, b); context.lineTo(c, d); context.stroke(); context.closePath(); } 3. stopDrawing This is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout). function stopDrawing() { draw = false; } 4. recordPointerLocation This tracks the pointer’s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it.
function recordPointerLocation(e) { prevX = currX; prevY = currY; currX = e.clientX - canvas.offsetLeft; currY = e.clientY - canvas.offsetTop; } 5. handlePointerMove This is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it. function handlePointerMove(e) { if (draw) { recordPointerLocation(e); drawLine(); } } 6. handlePointerDown This is set by init to run when the pointer is down (a finger is on the touchscreen or the mouse is clicked). If it is, it calls recordPointerLocation to get the path and sets draw to true. That’s because we only want movement events from handlePointerMove to cause drawing if the pointer is down. function handlePointerDown(e) { recordPointerLocation(e); draw = true; } Finally, we have a working drawing app. But that’s just the beginning! See the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen. Part 2: Add a Clear button and a color picker Now we’ll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear. <body onload=""init()""> <canvas width=""600"" height=""600""> <p>Your browser doesn't support canvas.</p> </canvas> <div class=""menu""> <input type=""color"" class=""color"" /> <button type=""button"" class=""clear"">Clear</button> </div> </body> Color picker This is our new color picker function. It targets the input element by its class and gets its value. function getColor() { return document.querySelector("".color"").value; } Up until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We’ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor. function drawLine() { //...code... context.strokeStyle = getColor(); context.lineWidth = 4; context.lineCap = ""round""; //...code... } Clear button This is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing. function clearCanvas() { if (confirm(""Want to clear?"")) { context.clearRect(0, 0, w, h); } } The method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner. If we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up. Let’s add this event handler to init: function init() { //...code... canvas.onpointermove = handlePointerMove; canvas.onpointerdown = handlePointerDown; canvas.onpointerup = stopDrawing; canvas.onpointerout = stopDrawing; document.querySelector("".clear"").onclick = clearCanvas; } See the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen. Part 3: Draw with 2 lines It’s time to make a line appear where no pointer has gone before. A ghost line! For that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order to add the first reflection, we must first decide whether it’s going to go across the y-axis or the x-axis. Since this is an arbitrary decision, it doesn’t matter which one we choose. Let’s go with the x-axis.
Here is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!). Now, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal. A sketch illustrating the mathematics of reflecting a point. What happens to the x coordinates? The variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them “the x coordinates”. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c. What happens to the y coordinates? What about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height. This is the new code for the app’s variables and the two lines: the one that follows the pointer’s path and the one mirroring it across the x-axis. function drawLine() { var a = prevX, a_ = a, b = prevY, b_ = h-b, c = currX, c_ = c, d = currY, d_ = h-d; //... code ... // Draw line #1, at the pointer's location context.moveTo(a, b); context.lineTo(c, d); // Draw line #2, mirroring line #1 context.moveTo(a_, b_); context.lineTo(c_, d_); //... code ... } In case this was too abstract for you, let’s look at some actual numbers to see how this works. Let’s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3. So b' = 10 - 2 = 8 and d' = 10 - 3 = 7. We use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That’s it, really. This works for a single point, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that behave just like single points. If you are still confused, I don’t blame you. Here is the result. Draw something and see what happens. See the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen. Part 4: Draw with 8 lines I have made yet another confusing sketch, with points C and D, so you understand what we’re trying to do. Later on we’ll look at points E, F, G and H as well. The circled point is the one we’re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas. A sketch illustrating points C and D. This is the part where the math gets a bit mathier, as our drawLine function evolves further. We’ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let’s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different). function drawLine() { //... code ...
// Reassign values a_ = w-a; b_ = b; c_ = w-c; d_ = d; // Draw the 3rd line context.moveTo(a_, b_); context.lineTo(c_, d_); // Reassign values a_ = w-a; b_ = h-b; c_ = w-c; d_ = h-d; // Draw the 4th line context.moveTo(a_, b_); context.lineTo(c_, d_); //... code ... What is happening? You might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That’s because we want the symmetry to hold for a rectangular canvas as well, and this way it will. Also, you may have noticed that the values of a' and c' do not change when the fourth line is created. Why write their value assignments out twice, then? It’s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous). What happens to the x coordinates? As you recall, our x coordinates are a (prevX) and c (currX). For the third line we are adding, a' = w - a and c' = w - c, which means the x coordinates are mirrored across the canvas’s vertical center line. For the fourth line, the same thing happens to our x coordinates a and c. What happens to the y coordinates? As you recall, our y coordinates are b (prevY) and d (currY). For the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis. For the fourth line, b' = h - b and d' = h - d, which we’ve seen before: that’s a reflection across the x-axis. We have four more lines, or locations, to define. Note: the part of the code that’s responsible for drawing a micro-line between the newly calculated coordinates is always the same: context.moveTo(a_, b_); context.lineTo(c_, d_); We can leave it out of the next code snippets and just focus on the calculations, i.e., the reassignments. Once again, we need some concrete examples to see where we’re going, so here’s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code. A sketch illustrating points E and F. This is the code for E and F: // Reassign for 5 a_ = w/2+h/2-b; b_ = w/2+h/2-a; c_ = w/2+h/2-d; d_ = w/2+h/2-c; // Reassign for 6 a_ = w/2+h/2-b; b_ = h/2-w/2+a; c_ = w/2+h/2-d; d_ = h/2-w/2+c; Their x coordinates are identical and their y coordinates are the reverse of one another. This will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3). A sketch illustrating points G and H. This is the code: // Reassign for 7 a_ = w/2-h/2+b; b_ = w/2+h/2-a; c_ = w/2-h/2+d; d_ = w/2+h/2-c; // Reassign for 8 a_ = w/2-h/2+b; b_ = h/2-w/2+a; c_ = w/2-h/2+d; d_ = h/2-w/2+c; //...code... } Once again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won’t go into the full details, since this has been a long enough journey as it is, and I think we’ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them. I hope you had fun learning!
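Before we wrap up, here’s one way the finished drawLine could be refactored. This is a sketch of my own, not part of the tutorial: it collects the eight positions produced by the formulas above into a single helper, using the same w, h, prevX, prevY, currX, currY and context globals the tutorial already defines.

function reflections(x, y) {
  return [
    [x, y], // the original pointer position
    [x, h - y], // line 2: reflected across the x-axis
    [w - x, y], // line 3: reflected across the y-axis
    [w - x, h - y], // line 4: reflected across both
    [w/2 + h/2 - y, w/2 + h/2 - x], // line 5
    [w/2 + h/2 - y, h/2 - w/2 + x], // line 6
    [w/2 - h/2 + y, w/2 + h/2 - x], // line 7
    [w/2 - h/2 + y, h/2 - w/2 + x] // line 8
  ];
}

function drawLine() {
  var starts = reflections(prevX, prevY);
  var ends = reflections(currX, currY);
  var i;
  context.strokeStyle = getColor();
  context.lineWidth = 4;
  context.lineCap = 'round';
  context.beginPath();
  for (i = 0; i < starts.length; i++) {
    context.moveTo(starts[i][0], starts[i][1]);
    context.lineTo(ends[i][0], ends[i][1]);
  }
  context.stroke();
  context.closePath();
}

Commenting out a row of the array is then a quick way to see what your drawing looks like without that reflection.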
This is our final app: See the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.",2018,Hagar Shilo,hagarshilo,2018-12-02T00:00:00+00:00,https://24ways.org/2018/the-art-of-mathematics/,code 263,Securing Your Site like It’s 1999,"Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities. As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned. What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate. Bad input validation: Trusting anything the user sends you Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web. One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong. One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile. Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each other under the cover of complete anonymity. What happened? There aren’t any official rules for developing software on the web. But if there were, my golden rule would be: Never trust user input. Ever. Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe. Validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length that you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML. Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed access to. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner. Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure.
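To make that concrete, here’s a hypothetical sketch of how the forum could have validated that profile form on the server. The Express-style route and the updateProfile helper are inventions for illustration, not code from the real forum:

var express = require('express');
var app = express();
app.use(express.json()); // populate req.body from the request

app.post('/profile', function (req, res) {
  var name = req.body.name;
  // Check the input is the type and length we expect
  if (typeof name !== 'string' || name.length === 0 || name.length > 50) {
    return res.status(400).send('Invalid name');
  }
  // Never trust an ID sent by the browser; take it from the logged-in
  // session instead (assuming session middleware is in place)
  updateProfile(req.session.userId, { name: name }, function (error) {
    if (error) {
      return res.status(500).send('Something went wrong');
    }
    res.send('Profile updated');
  });
});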
Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world. SQL injection: Allowing the user to run their own database queries A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day. One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned. It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it? ' OR '1'='1 SQL is a computer language that is used to query databases. When you fill out a login form, your username and your password are usually inserted into an SQL query like this: SELECT COUNT(*) FROM USERS WHERE USERNAME='Alice' AND PASSWORD='hunter2' This query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead! SELECT COUNT(*) FROM USERS WHERE USERNAME='Admin' AND PASSWORD='' OR '1'='1' Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum! So how can you protect yourself from SQL injection? Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.js has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize. Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can! Cross site request forgery: Getting other users to do your dirty work for you Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge. Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky… Let’s just say there are older and fouler things than Rick Astley in the dark places of the web. What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it: <img src=""http://www.netflix.com/JSON/AddToQueue?movieid=70110672"" /> Did you notice the source URL of the image?
It’s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge! This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from. The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be programmatically set. Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies. You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers. Cross site scripting: Someone else’s code running on your website In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends. Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then… Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject <img /> and <div /> tags into his headline, but <script /> was filtered. MySpace wasn’t about to let someone else run their own code on MySpace. Intrigued, Samy set about finding out exactly what he could do with <img /> and <div /> tags. He found that you could add style properties to <div /> tags to style them with CSS. <div style=""background:url('javascript:alert(1)')""> This code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from <div />. <div style=""background:url('java script:alert(1)')""> Samy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace’s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code. alert(document.body.innerHTML) Samy wondered if he could inspect the page’s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too. alert(eval('document.body.inne' + 'rHTML')) This isn’t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onReadyStateChange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends’ pages. One final obstacle stood in his way. 
The same origin policy is a security mechanism that prevents scripts hosted on one domain interacting with sites hosted on another domain. if (location.hostname == 'profile.myspace.com') { document.location = 'http://www.myspace.com' + location.pathname + location.search } Samy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser’s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace’s API. Samy installed this code on his profile page, and he waited. Over the course of the next day, over a million people unwittingly installed Samy’s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy’s code and patch the security holes he exploited. Samy was raided by the United States secret service and sentenced to do 90 days of community service. This is the power of installing a little bit of JavaScript on someone else’s website. It is called cross site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people. So how can you help protect yourself from cross-site scripting? Always sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don’t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down. You can also use an auto-escaping templating language to make sure nobody else’s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use. You should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function. For content not under your control, consider setting up sub-resource integrity protection. This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing. npm audit: Protecting yourself from code you don’t own JavaScript and npm run the modern web. Together, they make it easy to take advantage of the world’s largest public registry of open source software. How do you protect yourself from code written by someone you’ve never met? Enter npm audit. npm audit reviews the security of your website’s dependency tree. You can start using it by upgrading to the latest version of npm: npm install npm -g npm audit When you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed. If your website has a known cross-site scripting vulnerability, npm audit will tell you about it. 
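Because npm audit exits with a non-zero status code when it finds known vulnerabilities, you can also wire it into your build so that a vulnerable dependency fails loudly. One possible setup in package.json (the test:unit script name here is just a placeholder for whatever your test command is):

{
  ""scripts"": {
    ""test"": ""npm audit && npm run test:unit""
  }
}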
What’s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you! Securing your site like it’s 2019 The truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes. However, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies. As we roll over into 2019, let’s take the opportunity to reflect on the security of the code we write and be grateful for everything we’ve learned in the last twenty years.",2018,Katie Fenn,katiefenn,2018-12-01T00:00:00+00:00,https://24ways.org/2018/securing-your-site-like-its-1999/,code 264,Dynamic Social Sharing Images,"Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you’d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days! With so many social channels and a proliferation of content and promotions flying past in everyone’s streams, it’s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader’s attention if you’re trying to get as many eyes as possible on that sweet, sweet social content. One of the best ways to grab attention with your posts or tweets is to include an image. There’s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures of anything from 35% to 150% improvement just from having an image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much. So without hard stats to quote, we’ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them. Adding sharing images The process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified. <meta property=""og:image"" content=""https://example.com/my_image.jpg""> There’s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing. This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don’t necessarily have an image? One approach is to use stock photography, but that’s not going to be right for every situation. This was something we faced with 24 ways in 2017.
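A quick aside while we’re talking about meta tags: Twitter has its own card tags as well. Twitter will fall back to Open Graph for the image itself, but it needs a twitter:card tag to know to use the large image format, so a belt-and-braces version looks something like this:

<meta property=""og:image"" content=""https://example.com/my_image.jpg"">
<meta name=""twitter:card"" content=""summary_large_image"">
<meta name=""twitter:image"" content=""https://example.com/my_image.jpg"">

With that housekeeping covered, back to our particular problem.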
We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don’t usually lend themselves directly to being the main ‘hero’ image for an article. Putting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article. One of the hand-made sharing images from 2017 Each day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour-intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this. Hatching a new plan One initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility. The more I thought about this, and about how much I wish graphic design tools worked just a little bit more like CSS, the more obvious the solution became. We should just build it with CSS! In fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn’t this just be another page on the site generated by the CMS? Breaking it down, I figured the steps needed would be something like: Create the CSS to lay out a component that could be turned into an image Add a new field to articles in the CMS to hold a handpicked quote Build a new article template in the CMS to output the author name and quote dynamically for any article … um … screenshot? I thought I’d get cracking and see if I could figure out the final steps later. Building the page The first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul. Paul’s code uses a fixed dimension container in the HTML, set to 600 × 315px. This is to make it the correct aspect ratio for Facebook’s recommended image size. It’s useful to remember here that it doesn’t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser. With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul’s markup. I also added the quote as a new field on the article in the CMS, so each ‘image’ could be quickly and easily customised in the editing process. I added a new field to the article template to capture the quote. With very little effort, we quickly had a page to dynamically generate our ‘image’ right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article. Our automatically generated layout direct from the CMS It soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one?
An obvious route is to screenshot the page by hand, but that’s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought… how could I automatically take a screenshot? Enter Puppeteer Puppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It’s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface. Headless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or… get this… rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node. Using Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it’s actually Puppeteer’s ‘hello world’ example. First you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don’t need to worry about installing that. At the command line: npm i puppeteer Then save a new file as example.js with this code: const puppeteer = require('puppeteer'); (async () => { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto('https://example.com'); await page.screenshot({path: 'example.png'}); await browser.close(); })(); and then run it using Node: node example.js This will output an image file example.png to disk, which contains a screenshot of, in this case, https://example.com. The logic of the code is reasonably easy to follow: Launch a browser Open up a new page Goto a URL Take a screenshot Close the browser The async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That’s useful with actions like loading a web page that might take some time to complete. They’re used with Promises, and the effect is to make asynchronous code behave as if it’s synchronous. You can read more about async and await at MDN if you’re interested. That’s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there? Our content is up in the corner of the image with lots of empty space. That’s not great. It’s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 × 600, so we need to find out how to adjust that. Fortunately, the docs aren’t the worst and I was able to find the page.setViewport() method pretty easily. const puppeteer = require('puppeteer'); (async () => { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing'); await page.setViewport({ width: 600, height: 315 }); await page.screenshot({path: 'example.png'}); await browser.close(); })(); This worked! The screenshot is now 600 × 315 as expected. That’s exactly what we asked for. Trouble is, that’s a bit low res and it is nearly 2019 after all.
While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels. await page.setViewport({ width: 600, height: 315, deviceScaleFactor: 2 }); Perfect! We now have a programmatic way of turning a URL into an image. Improving the script Rather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site’s build process to automate the generation of the image. Our goal is to call the script like this: node shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/ And I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it. // Get the URL and the slug segment from it const url = process.argv[2]; const segments = url.split('/'); // Get the second-to-last segment (the slug) const slug = segments[segments.length-2]; We can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page. (async () => { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto(url + 'sharing'); await page.setViewport({ width: 600, height: 315, deviceScaleFactor: 2 }); await page.screenshot({path: slug + '.png'}); await browser.close(); })(); Once you’re generating the image with Node, there’s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project. You can also run optimisations on the file. I’m using imagemin with pngquant to reduce the file size a little. const imagemin = require('imagemin'); const imageminPngquant = require('imagemin-pngquant'); await imagemin([slug + '.png'], 'build', { plugins: [ imageminPngquant({quality: '75-90'}) ] }); You can see the completed example as a gist. Integrating it with your CMS So we now have a command we can run to take a URL and generate a custom image for that URL. It’s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you’re using, but it’s likely not too hard as long as you can run a command as part of the process. For 24 ways this year, I’ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I’ll look to automate this one step further. We may also look at having a few slightly different layouts to choose from, so that each day isn’t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there’s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities! By using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we’ve been able to reliably produce dynamic social media sharing images for all of our articles. 
In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images. I hope you’ll give it a try!",2018,Drew McLellan,drewmclellan,2018-12-24T00:00:00+00:00,https://24ways.org/2018/dynamic-social-sharing-images/,code 271,Creating Custom Font Stacks with Unicode-Range,"Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We’ve all used it — either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts. If you’re like me, however, you’ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec. One such property — the unicode-range descriptor — sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways. Unicode-range The unicode-range descriptor is designed to help when using fonts that don’t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers. @font-face { font-family: BBCBengali; src: url(fonts/BBCBengali.ttf) format(""opentype""); unicode-range: U+00-FF; } In this example, the font is to be used for characters in the range of U+00 to U+FF, which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to ÿ at U+FF – the extent of the Basic Latin and Latin-1 Supplement character ranges. By adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts. When I say that it’s possible to specify the range of characters the font covers, that’s true, but what you’re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together. The best available ampersand A few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a <span> element with a class applied: <span class=""amp"">&</span> A CSS rule can then be written to select the <span> and apply a different font: span.amp { font-family: Baskerville, Palatino, ""Book Antiqua"", serif; } That’s a perfectly serviceable technique, but the drawbacks are clear — you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn’t always possible when working with a CMS. Perhaps we could do this with unicode-range.
A better best available ampersand The Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so: @font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } What we’ve done here is specify a new family called Ampersand and created a font stack for it with the user’s locally installed copies of Baskerville, Palatino or Book Antiqua. We’ve then limited it to a single character range — the ampersand. Of course, those don’t need to be local fonts — they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life. We can then use that new family in a regular font stack. h1 { font-family: Ampersand, Arial, sans-serif; } With this in place, any <h1> elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn’t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial. You didn’t think it was that easy, did you? Oh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all. If you’re familiar with how CSS works when it comes to unsupported properties, you’ll know that if a browser encounters a property it doesn’t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius — if the browser can’t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect. Less perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters — the whole range. If you’re using a fancy font for flamboyant ampersands, you probably don’t want that applied to all your text if unicode-range isn’t supported. That would be bad. Really bad. Ensuring good fallbacks As ever, the trick is to make sure that there’s a sensible fallback in place if a browser doesn’t have support for whatever technology you’re trying to use. This is where being a super nerd about understanding the spec you’re working with really pays off. We can make use of the rules of the CSS cascade to make sure that if unicode-range isn’t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren’t implemented. @font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } @font-face { font-family: 'Ampersand'; src: local('Arial'); } In theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don’t have support, the second rule should just supersede the first, setting the font to Arial. Unfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. 
That’s both unexpected and unfortunate. Bad Firefox. On your rug. If that doesn’t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That’s really what we’re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we’re never going to use in our page? Then it wouldn’t affect the outcome for browsers that do support ranges. @font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } @font-face { /* Ampersand fallback font */ font-family: 'Ampersand'; src: local('Arial'); unicode-range: U+270C; } By specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we’re not using in any of our pages — U+270C. So we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect! That obscure character, my friends, is what Unicode defines as the VICTORY HAND. ✌ So, how can we use this? Ampersands are a neat trick, and it works well in browsers that support ranges, but that’s not really the point of all this. Styling ampersands is fun, but they’re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting. How do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end. Of course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you’re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you’ll have to use your own best judgement based on your needs and audience. One thing to keep in mind is that if you’re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn’t be downloaded if none of the characters within the Unicode range are present in a given page. As ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.",2011,Drew McLellan,drewmclellan,2011-12-01T00:00:00+00:00,https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/,code 276,Your jQuery: Now With 67% Less Suck,"Fun fact: more websites are now using jQuery than Flash. jQuery is an amazing tool that’s made JavaScript accessible to developers and designers of all levels of experience. However, as Spiderman taught us, “with great power comes great responsibility.” The unfortunate downside to jQuery is that while it makes it easy to write JavaScript, it makes it easy to write really really f*&#ing bad JavaScript. 
Scripts that slow down page load, unresponsive user interfaces, and spaghetti code knotted so deep that it should come with a bottle of whiskey for the next sucker developer that has to work on it. This becomes more important for those of us who have yet to move into the magical fairy wonderland where none of our clients or users view our pages in Internet Explorer. The IE JavaScript engine moves at the speed of an advancing glacier compared to more modern browsers, so optimizing our code for performance takes on an even higher level of urgency. Thankfully, there are a few very simple things anyone can add into their jQuery workflow that can clear up a lot of basic problems. When undertaking code reviews, three of the areas where I consistently see the biggest problems are: inefficient selectors; poor event delegation; and clunky DOM manipulation. We’ll tackle all three of these and hopefully you’ll walk away with some new jQuery batarangs to toss around in your next project. Selector optimization Selector speed: fast or slow? Saying that the power behind jQuery comes from its ability to select DOM elements and act on them is like saying that Photoshop is a really good tool for selecting pixels on screen and making them change color – it’s a bit of a gross oversimplification, but the fact remains that jQuery gives us a ton of ways to choose which element or elements in a page we want to work with. However, a surprising number of web developers are unaware that all selectors are not created equal; in fact, it’s incredible just how drastic the performance difference can be between two selectors that, at first glance, appear nearly identical. For instance, consider these two ways of selecting all paragraph tags inside a <div> with an ID. $(""#id p""); $(""#id"").find(""p""); Would it surprise you to learn that the second way can be more than twice as fast as the first? Knowing which selectors outperform others (and why) is a pretty key building block in making sure your code runs well and doesn’t frustrate your users waiting for things to happen. There are many different ways to select elements using jQuery, but the most common ways can be basically broken down into five different methods. In order, roughly, from fastest to slowest, these are: $(""#id""); This is without a doubt the fastest selector jQuery provides because it maps directly to the native document.getElementById() JavaScript method. If possible, the selectors listed below should be prefaced with an ID selector in conjunction with jQuery’s .find() method to limit the scope of the page that has to be searched (as in the $(""#id"").find(""p"") example shown above). $(""p"");, $(""input"");, $(""form""); and so on Selecting elements by tag name is also fast, since it maps directly to the native document.getElementsByTagName() method. $("".class""); Selecting by class name is a little trickier. While still performing very well in modern browsers, it can cause some pretty significant slowdowns in IE8 and below. Why? IE9 was the first IE version to support the native document.getElementsByClassName() JavaScript method. Older browsers have to resort to using much slower DOM-scraping methods that can really impact performance. $(""[attribute=value]""); There is no native JavaScript method for this selector to use, so the only way that jQuery can perform the search is by crawling the entire DOM looking for matches.
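As with the ID-scoping tip above, you can at least limit the damage by narrowing the search space. A quick sketch – the table ID and data attribute here are invented purely for illustration: $('[data-state=active]'); /* has to crawl the whole document */ $('#userTable').find('[data-state=active]'); /* only crawls inside #userTable */ The attribute check still runs element by element, but against a much smaller pool of candidates.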
Modern browsers that support the querySelectorAll() method will perform better in certain cases (Opera, especially, runs these searches much faster than any other browser) but, generally speaking, this type of selector is Slowey McSlowersons. $("":hidden""); Like attribute selectors, there is no native JavaScript method for this one to use. Pseudo-selectors can be painfully slow since the selector has to be run against every element in your search space. Again, modern browsers with querySelectorAll() will perform slightly better here, but try to avoid these if at all possible. If you must use one, try to limit the search space to a specific portion of the page: $(""#list"").find("":hidden""); But, hey, proof is in the performance testing, right? It just so happens that said proof is sitting right here. Be sure to notice the class selector numbers beside IE7 and 8 compared to other browsers and then wonder how the people on the IE team at Microsoft manage to sleep at night. Yikes. Chaining Almost all jQuery methods return a jQuery object. This means that when a method is run, its results are returned and you can continue executing more methods on them. Rather than writing out the same selector multiple times over, just making a selection once allows multiple actions to be run on it. Without chaining $(""#object"").addClass(""active""); $(""#object"").css(""color"",""#f0f""); $(""#object"").height(300); With chaining $(""#object"").addClass(""active"").css(""color"", ""#f0f"").height(300); This has the dual effect of making your code shorter and faster. Chained methods will be slightly faster than multiple methods made on a cached selector, and both ways will be much faster than multiple methods made on non-cached selectors. Wait… “cached selector”? What is this new devilry? Caching Another easy way to speed up your code that seems to be a mystery to developers is the idea of caching your selectors. Think of how many times you end up writing the same selector over and over again in any project. Every $("".element"") selector has to search the entire DOM each time, regardless of whether or not that selector had been previously run. Running the selection once and then storing the results in a variable means that the DOM only has to be searched once. Once the results of a selector have been cached, you can do anything with them. First, run your search (here we’re selecting all of the <li> elements inside <ul id=""blocks"">): var blocks = $(""#blocks"").find(""li""); Now, you can use the blocks variable wherever you want without having to search the DOM every time. $(""#hideBlocks"").click(function() { blocks.fadeOut(); }); $(""#showBlocks"").click(function() { blocks.fadeIn(); }); My advice? Any selector that gets run more than once should be cached. This jsperf test shows just how much faster a cached selector runs compared to a non-cached one (and even throws some chaining love in to boot). Event delegation Event listeners cost memory. In complex websites and apps it’s not uncommon to have a lot of event listeners floating around, and thankfully jQuery provides some really easy methods for handling event listeners efficiently through delegation. In a bit of an extreme example, imagine a situation where a 10×10 cell table needs to have an event listener on each cell; let’s say that clicking on a cell adds or removes a class that defines the cell’s background color. 
A typical way that this might be written (and something I’ve often seen during code reviews) is like so: $('table').find('td').click(function() { $(this).toggleClass('active'); }); jQuery 1.7 has provided us with a new event listener method, .on(). It acts as a utility that wraps all of jQuery’s previous event listeners into one convenient method, and the way you write it determines how it behaves. To rewrite the above .click() example using .on(), we’d simply do the following: $('table').find('td').on('click',function() { $(this).toggleClass('active'); }); Simple enough, right? Sure, but the problem here is that we’re still binding one hundred event listeners to our page, one to each individual table cell. A far better way to do things is to create one event listener on the table itself that listens for events inside it. Since the majority of events bubble up the DOM tree, we can bind a single event listener to one element (in this case, the <table>) and wait for events to bubble up from its children. The way to do this using the .on() method requires only one change from our code above: $('table').on('click','td',function() { $(this).toggleClass('active'); }); All we’ve done is moved the td selector to an argument inside the .on() method. Providing a selector to .on() switches it into delegation mode, and the event is only fired for descendants of the bound element (table) that match the selector (td). With that one simple change, we’ve gone from having to bind one hundred event listeners to just one. You might think that the browser having to do one hundred times less work would be a good thing and you’d be completely right. The difference between the two examples above is staggering. (Note that if your site is using a version of jQuery earlier than 1.7, you can accomplish the very same thing using the .delegate() method. The syntax of how you write the function differs slightly; if you’ve never used it before, it’s worth checking the API docs for that page to see how it works.) DOM manipulation jQuery makes it very easy to manipulate the DOM. It’s trivial to create new nodes, insert them, remove other ones, move things around, and so on. While the code to do this is simple to write, every time the DOM is manipulated, the browser has to repaint and reflow content which can be extremely costly. This is no more evident than in a long loop, whether it be a standard for() loop, while() loop, or jQuery $.each() loop. In this case, let’s say we’ve just received an array full of image URLs from a database or Ajax call or wherever, and we want to put all of those images in an unordered list. Commonly, you’ll see code like this to pull this off: var arr = [reallyLongArrayOfImageURLs]; $.each(arr, function(count, item) { var newImg = '<li><img src=""'+item+'""></li>'; $('#imgList').append(newImg); }); There are a couple of problems with this. For one (which you should have already noticed if you’ve read the earlier part of this article), we’re making the $(""#imgList"") selection once for each iteration of our loop. The other problem here is that each time the loop iterates, it’s adding a new <li> to the DOM. Each of those insertions is going to be costly, and if our array is quite large then this could lead to a massive slowdown or even the dreaded ‘A script is causing this page to run slowly’ warning. 
var arr = [reallyLongArrayOfImageURLs], tmp = ''; $.each(arr, function(count, item) { tmp += '<li><img src=""'+item+'""></li>'; }); $('#imgList').append(tmp); All we’ve done here is create a tmp variable that each <li> is added to as it’s created. Once our loop has finished iterating, that tmp variable will contain all of our list items in memory, and can be appended to our <ul> all in one go. Browsers work much faster when working with objects in memory rather than on screen, so this is a much faster, more CPU-cycle-friendly method of building a list. Wrapping up These are far from being the only ways to make your jQuery code run better, but they are among the simplest ones to implement. Though each individual change may only make a few milliseconds of difference, it doesn’t take long for those milliseconds to add up. Studies have shown that the human eye can discern delays of as few as 100ms, so simply making a few changes sprinkled throughout your code can very easily have a noticeable effect on how well your website or app performs. Do you have other jQuery optimization tips to share? Leave them in the comments and help make us all better. Now go forth and make awesome!",2011,Scott Kosman,scottkosman,2011-12-13T00:00:00+00:00,https://24ways.org/2011/your-jquery-now-with-less-suck/,code 283,"CSS3 Patterns, Explained","Many of you have probably seen my CSS3 patterns gallery. It became very popular throughout the year and it showed many web developers how powerful CSS3 gradients really are. But how many really understand how these patterns are created? The biggest benefit of CSS-generated backgrounds is that they can be modified directly within the style sheet. This benefit is void if we are just copying and pasting CSS code we don’t understand. We may as well use a data URI instead. Important note In all the examples that follow, I’ll be using gradients without a vendor prefix, for readability and brevity. However, you should keep in mind that in reality you need to use all the vendor prefixes (-moz-, -ms-, -o-, -webkit-) as no browser currently implements them without a prefix. Alternatively, you could use -prefix-free and have the current vendor prefix prepended at runtime, only when needed. The syntax described here is the one that browsers currently implement. The specification has since changed, but no browser implements the changes yet. If you are interested in what is coming, I suggest you take a look at the dev version of the spec. If you are not yet familiar with CSS gradients, you can read these excellent tutorials by John Allsopp and return here later, as in the rest of the article I assume you already know the CSS gradient basics: CSS3 Linear Gradients CSS3 Radial Gradients The main idea I’m sure most of you can imagine the background this code generates: background: linear-gradient(left, white 20%, #8b0 80%); It’s a simple gradient from one color to another that looks like this: See this example live As you probably know, in this case the first 20% of the container’s width is solid white and the last 20% is solid green. The other 60% is a smooth gradient between these colors. 
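One quick aside before we go further: remember that, per the note at the start, these examples omit vendor prefixes. At the time of writing, that single line would really be written out once per engine, with the unprefixed version last so that it wins as browsers catch up – a sketch, not an exhaustive list: background: -moz-linear-gradient(left, white 20%, #8b0 80%); background: -ms-linear-gradient(left, white 20%, #8b0 80%); background: -o-linear-gradient(left, white 20%, #8b0 80%); background: -webkit-linear-gradient(left, white 20%, #8b0 80%); background: linear-gradient(left, white 20%, #8b0 80%);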
Let’s try moving these color stops closer to each other: background: linear-gradient(left, white 30%, #8b0 70%); See this example live background: linear-gradient(left, white 40%, #8b0 60%); See this example live background: linear-gradient(left, white 50%, #8b0 50%); See this example live Notice how the gradient keeps shrinking and the solid color areas expanding, until there is no gradient any more in the last example. We can even adjust the position of these two color stops to control where each color abruptly changes into another: background: linear-gradient(left, white 30%, #8b0 30%); See this example live background: linear-gradient(left, white 90%, #8b0 90%); See this example live What you need to take away from these examples is that when two color stops are at the same position, there is no gradient, only solid colors. Even without going any further, this trick is useful for a number of different use cases like faux columns or the effect I wanted to achieve in my homepage or the -prefix-free page where the background is only shown on one side and hidden on the other: Combining with background-size We can do wonders, however, if we combine this with the CSS3 background-size property: background: linear-gradient(left, white 50%, #8b0 50%); background-size: 100px 100px; See this example live And there it is. We just created the simplest of patterns: (vertical) stripes. We can remove the first parameter (left) or replace it with top and we’ll get horizontal stripes. However, let’s face it: Horizontal and vertical stripes are kinda boring. Most stripey backgrounds we see on the web are diagonal. So, let’s try doing that. Our first attempt would be to change the angle of the gradient to something like 45deg. However, this results in an ugly pattern like this: See this example live Before reading on, think for a second: why didn’t this produce the desired result? Can you figure it out? The reason is that the gradient angle rotates the gradient inside each tile, not the tiled background as a whole. However, didn’t we have the same problem the first time we tried to create diagonal stripes with an image? And then we learned that every stripe has to be included twice, like so: So, let’s try to create that effect with CSS gradients. It’s essentially what we tried before, but with more color stops: background: linear-gradient(45deg, white 25%, #8b0 25%, #8b0 50%, white 50%, white 75%, #8b0 75%); background-size:100px 100px; See this example live And there we have our stripes! An easy way to remember the order of the percentages and colors is that you always have two of the same in succession, except the first and last color. Note: Firefox for Mac also needs an additional 100% color stop at the end of any pattern with more than two stops, like so: ..., white 75%, #8b0 75%, #8b0). The bug was reported in February 2011 and you can vote for it and track its progress at Bugzilla. Unfortunately, this is essentially a hack and we will realize that if we try to change the gradient angle to 60deg: See this example live Not that maintainable after all, eh? Luckily, CSS3 offers us another way of declaring such backgrounds, which not only helps this case but also results in much more concise code: background: repeating-linear-gradient(60deg, white, white 35px, #8b0 35px, #8b0 70px); See this example live In this case, however, the size has to be declared in the color stop positions and not through background-size, since the gradient is supposed to cover the entire container.
You might notice that the declared size is different from the one specified the previous way. This is because the size of the stripes is measured differently: in the first example we specify the dimensions of the tile itself; in the second, the width of the stripes (35px), which is measured diagonally. Multiple backgrounds Using only one gradient you can create stripes and that’s about it. There are a few more patterns you can create with just one gradient (linear or radial) but they are more or less boring and ugly. Almost every pattern in my gallery contains a number of different backgrounds. For example, let’s create a polka dot pattern: background: radial-gradient(circle, white 10%, transparent 10%), radial-gradient(circle, white 10%, black 10%) 50px 50px; background-size:100px 100px; See this example live Notice that the two gradients are almost the same image, but positioned differently to create the polka dot effect. The only difference between them is that the first (topmost) gradient has transparent instead of black. If it didn’t have transparent regions, it would effectively be the same as having a single gradient, as the topmost gradient would obscure everything beneath it. There is an issue with this background. Can you spot it? This background will be fine for browsers that support CSS gradients but, for browsers that don’t, it will be transparent as the whole declaration is ignored. We have two ways to provide a fallback, each for different use cases. We have to either declare another background before the gradient, like so: background: black; background: radial-gradient(circle, white 10%, transparent 10%), radial-gradient(circle, white 10%, black 10%) 50px 50px; background-size:100px 100px; or declare each background property separately: background-color: black; background-image: radial-gradient(circle, white 10%, transparent 10%), radial-gradient(circle, white 10%, transparent 10%); background-size:100px 100px; background-position: 0 0, 50px 50px; The vigilant among you will have noticed another change we made to our code in the last example: we altered the second gradient to have transparent regions as well. This way background-color serves a dual purpose: it sets both the fallback color and the background color of the polka dot pattern, so that we can change it with just one edit. Always strive to make code that can be modified with the least number of edits. You might think that it will never be changed in that way but, almost always, given enough time, you’ll be proved wrong. We can apply the exact same technique with linear gradients, in order to create checkerboard patterns out of right triangles: background-color: white; background-image: linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%), linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%); background-size:100px 100px; background-position: 0 0, 50px 50px; See this example live Using the right units Don’t use pixels for the sizes without any thought. In some cases, ems make much more sense. For example, when you want to make a lined paper background, you want the lines to actually follow the text. If you use pixels, you have to change the size every time you change font-size. If you set the background-size in ems, it will naturally follow the text and you will only have to update it if you change line-height. Is it possible? 
The shapes that can be achieved with only one gradient are: stripes; right triangles; circles and ellipses; semicircles and other shapes formed from slicing ellipses horizontally or vertically. You can combine several of them to create squares and rectangles (two right triangles put together), rhombi and other parallelograms (four right triangles), curves formed from parts of ellipses, and other shapes. Just because you can doesn’t mean you should Technically, anything can be crafted with these techniques. However, not every pattern is suitable for it. The main advantages of this technique are: no extra HTTP requests; short code; human-readable code (unlike data URIs) that can be changed without even leaving the CSS file. Complex patterns that require a large number of gradients are probably better left to SVG or bitmap images, since they negate almost every advantage of this technique: they are not shorter; they are not really comprehensible – changing them requires much more effort than using an image editor. They still save an HTTP request, but so does a data URI. I have included some very complex patterns in my gallery, because even though I think they shouldn’t be used in production (except under very exceptional conditions), understanding how they work and coding them helps somebody understand the technology in much more depth. Another rule of thumb is that if your pattern needs shapes to obscure parts of other shapes, like in the star pattern or the yin yang pattern, then you probably shouldn’t use it. In these patterns, changing the background color requires you to also change the color of these shapes, making edits very tedious. If a certain pattern is not practicable with a reasonable amount of CSS, that doesn’t mean you should resort to bitmap images. SVG is a very good alternative and is supported by all modern browsers. Browser support CSS gradients are supported by Firefox 3.6+, Chrome 10+, Safari 5.1+ and Opera 11.60+ (linear gradients since Opera 11.10). Support is also coming in Internet Explorer when IE10 is released. You can get gradients in older WebKit versions (including most mobile browsers) by using the proprietary -webkit-gradient(), if you really need them. Epilogue I hope you find these techniques useful for your own designs. If you come up with a pattern that’s very different from the ones already included, especially if it demonstrates a cool new technique, feel free to send a pull request to the github repo of the patterns gallery. Also, I’m always fascinated to see my techniques put in practice, so if you made something cool and used CSS patterns, I’d love to know about it! Happy holidays!",2011,Lea Verou,leaverou,2011-12-16T00:00:00+00:00,https://24ways.org/2011/css3-patterns-explained/,code 288,Displaying Icons with Fonts and Data-Attributes,"Traditionally, bitmap formats such as PNG have been the standard way of delivering iconography on websites. They’re quick and easy, and they also ensure icons are as pixel crisp as possible. Bitmaps have two drawbacks, however: multiple HTTP requests, affecting the page’s loading performance; and a lack of scalability, noticeable when the page is zoomed or viewed on a screen with a high pixel density, such as the iPhone 4 and 4S. The requests problem is normally solved by using CSS sprites, combining the icon set into one (physically) large image file and showing the relevant portion via background-position. While this works well, it can get a bit fiddly to specify all the positions. In particular, scalability is still an issue.
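To make that fiddliness concrete, here’s a hedged sketch of typical sprite bookkeeping – icons.png and its 32-pixel grid are made up for illustration: .icon { background-image: url(icons.png); width: 32px; height: 32px; } .icon-basket { background-position: 0 0; } .icon-search { background-position: -32px 0; } .icon-user { background-position: -64px 0; } Every new icon adds another magic offset to keep track of, and none of those pixels scale when the page is zoomed.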
A vector-based format such as SVG sounds ideal to solve this, but browser support is still patchy. The rise and adoption of web fonts have given us another alternative. By their very nature, they’re not only scalable, but resolution-independent too. No need to specify higher resolution graphics for high resolution screens! That’s not all though: Browser support: Unlike a lot of new shiny techniques, they have been supported by Internet Explorer since version 4, and, of course, by all modern browsers. We do need several different formats, however! Design on the fly: The font contains the basic graphic, which can then be coloured easily with CSS – changing colours for themes or :hover and :focus styles is done with one line of CSS, rather than requiring a new graphic. You can also use CSS3 properties such as text-shadow to add further effects. Using -webkit-background-clip: text;, it’s possible to use gradient and inset shadow effects, although this creates a bitmap mask which spoils the scalability. Small file size: specially designed icon fonts, such as Drew Wilson’s Pictos font, can be as little as 12Kb for the .woff font. This is because they contain fewer characters than a fully fledged font. You can see Pictos being used in the wild on sites like Garrett Murray’s Maniacal Rage. As with all formats though, it’s not without its disadvantages: Icons can only be rendered in monochrome or with a gradient fill in browsers that are capable of rendering CSS3 gradients. Specific parts of the icon can’t be a different colour. It’s only appropriate when there is accompanying text to provide meaning. This can be alleviated by wrapping the text label in a tag (I like to use <b> rather than <span>, due to the fact that it’s smaller and isn’t being used elsewhere) and then hiding it from view with text-indent:-999em. Creating an icon font can be a complex and time-consuming process. While font editors can carry out hinting automatically, the best results are achieved manually. Unless you’re adept at creating your own fonts, you’re restricted to what is available in the font. However, fonts like Pictos will cover the most common needs, and icons are most effective when they’re using familiar conventions. The main complaint about using fonts for icons is that it can mean adding a meaningless character to our markup. The good news is that we can overcome this by using one of two methods – CSS generated content or the data-icon attribute – in combination with the :before and :after pseudo-selectors, to keep our markup minimal and meaningful. Our simple markup looks like this: <a href=""/basket"" class=""icon basket"">View Basket</a> Note the multiple class attribute values. Next, we’ll import the Pictos font using the @font-face web fonts property in CSS: @font-face { font-family: 'Pictos'; src: url('pictos-web.eot'); src: local('☺'), url('pictos-web.woff') format('woff'), url('pictos-web.ttf') format('truetype'), url('pictos-web.svg#webfontIyfZbseF') format('svg'); } This rather complicated looking set of rules is (at the time of writing) the most bulletproof way of ensuring as many browsers as possible load the font we want. We’ll now use the content property applied to the :before pseudo-element selector to generate our icon. Once again, we’ll use those multiple class attribute values to set common icon styles, then specific styles for .basket. This helps us avoid repeating styles: .icon { font-family: 'Pictos'; font-size: 22px; } .basket:before { content: ""$""; } What does the :before pseudo-element do?
It generates the dollar character in a browser, even when it’s not present in the markup. Using the generated content approach means our markup stays simple, but we’ll need a new line of CSS, defining what letter to apply to each class attribute for every icon we add. data-icon is a new alternative approach that uses the HTML5 data- attribute in combination with CSS attribute selectors. This new attribute lets us add our own metadata to elements, as long as it’s prefixed by data- and doesn’t contain any uppercase letters. In this case, we want to use it to provide the letter value for the icon. Look closely at this markup and you’ll see the data-icon attribute. <a href=""/basket"" class=""icon"" data-icon=""$"">View Basket</a> We could add others, in fact as many as we like. <a href=""/"" class=""icon"" data-icon=""k"">Favourites</a> <a href=""/"" class=""icon"" data-icon=""t"">History</a> <a href=""/"" class=""icon"" data-icon=""@"">Location</a> Then, we need just one CSS attribute selector to style all our icons in one go: .icon:before { content: attr(data-icon); /* Insert your fancy colours here */ } By placing our custom attribute data-icon in the selector in this way, we can enable CSS to read the value of that attribute and display it before the element (in this case, the anchor tag). It saves writing a lot of CSS rules. I can imagine that some may not like the extra attribute, but it does keep it out of the actual content – generated or not. This could be used for all manner of tasks, including a media player and large simple illustrations. See the demo for live examples. Go ahead and zoom the page, and the icons will be crisp, with the exception of the examples that use -webkit-background-clip: text as mentioned earlier. Finally, it’s worth pointing out that with both generated content and the data-icon method, the letter will be announced to people using screen readers. For example, with the shopping basket icon above, the reader will say “dollar sign view basket”. As accessibility issues go, it’s not exactly the worst, but could be confusing. You would need to decide whether this method is appropriate for the audience. Despite the disadvantages, icon fonts have huge potential.",2011,Jon Hicks,jonhicks,2011-12-12T00:00:00+00:00,https://24ways.org/2011/displaying-icons-with-fonts-and-data-attributes/,code 289,Front-End Developers Are Information Architects Too,"The theme of this year’s World IA Day was “Information Everywhere, Architects Everywhere”. This article isn’t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing number of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People’s ability to be successful online is unequivocally connected to the quality of the code that is written. Places made of information In information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated: People live and work in these structures, just as they live and work in their homes, offices, factories and malls.
These places are not virtual: they are as real as our own minds. 25 Theses We’re creating structures which people rely on for significant parts of their lives, so it’s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML. Semantics The word “semantic” has its origin in Greek words meaning “significant”, “signify”, and “sign”. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building’s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively, consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don’t need an “entrance” sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signifies their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building’s purpose happens, but the effect is similar: the building is signifying something about its purpose. HTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating: Elements, attributes, and attribute values in HTML are defined … to have certain meanings (semantics). For example, the <ol> element represents an ordered list, and the lang attribute represents the language of the content. HTML 5.1 Semantics, structure, and APIs of HTML documents HTML’s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they’re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It’s also what a front-end developer does, whether they realise it or not. A brief introduction to information architecture We’re going to start by looking at what an information architect is. There are many definitions, and I’m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is: the individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information. Of Patterns And Structures To me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people. Just as there are many definitions of what an information architect is, there are many for information architecture itself.
I’m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as: The structural design of shared information environments. The synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems. The art and science of shaping information products and experiences to support usability, findability, and understanding. Information Architecture For The World Wide Web, 4th Edition To me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015’s State Of The Browser conference, Edd Sowden talked about the accessibility of <table>s. He discovered that by simply not using the semantically-correct <th> element to mark up <table> headings, in some situations browsers will decide that a <table> is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by Léonie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page. Our definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we’re going to use. Dan defines these terms as: Ontology The definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate. What we mean when we say what we say. Taxonomy The arrangements of the parts. Developing systems and structures for what everything’s called, where everything’s sorted, and the relationships between labels and categories Choreography Rules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time. We now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states: … data is not information. Information is data endowed with relevance and purpose. Managing For The Future If we use the correct semantic element to mark up content then we’re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we’re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an <h2> element should be relevant to its parent, which should be the <h1>. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. 
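The difference between those two approaches is easy to miss, so here’s a hedged sketch of both – worth noting that browsers and assistive technology never actually implemented the outline algorithm, which is why the spec steers us towards heading rank: <!-- Outline algorithm style: every heading is an <h1>, nesting implied by <section> --> <section> <h1>…</h1> <section> <h1>…</h1> </section> </section> <!-- Heading rank: the numbers themselves carry the structure --> <h1>…</h1> <h2>…</h2>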
If you’ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways: by creating a list of headings so users can quickly scan the page for information by using a keyboard command to cycle through one heading at a time If we had a document for Christmas Day TV we might structure it something like this: <h1>Christmas Day TV schedule</h1> <h2>BBC1</h2> <h3>Morning</h3> <h3>Evening</h3> <h2>BBC2</h2> <h3>Morning</h3> <h3>Evening</h3> <h2>ITV</h2> <h3>Morning</h3> <h3>Evening</h3> <h2>Channel 4</h2> <h3>Morning</h3> <h3>Evening</h3> If I use VoiceOver to generate a list of headings, I get this: Once I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the <h2>s: If we hadn’t used headings, or if we’d nested them incorrectly, our users would be frustrated. Putting this together Let’s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a <button>. Every browser understands what a <button> is, how it works, and what keyboard shortcuts should be used with it. The HTML specification for the <button> element says: The <button> element represents a button labeled by its contents. The contents that a <button> can have include the type attribute, any relevant ARIA attributes, and the actual text label that the user sees. This information is more important than the visual design: it doesn’t matter how beautiful or obtuse the design is, if the underlying code is non-semantic and poorly labelled, people are going to struggle to use it. Here are three buttons, each created with the same HTML but with different designs: Regardless of what they look like, because we’ve used semantic HTML instead of a bunch of meaningless <div>s or <span>s, people who use assistive technology are going to benefit. Out of the box, without any extra development effort, a <button> is accessible and usable with a keyboard. We don’t have to write event handlers to listen for people pressing the Enter key or the space bar, which we would have to do if we’d faked a button with non-semantic elements. Our <button> can also be quickly findable: for example, in the same way it’s possible to create a list of headings with a screen reader, I can also create a list of form elements and then quickly jump to the one I want. Now we have our <button>, let’s add the panel we’re toggling the appearance of. Here’s our code: <button aria-controls=""panel"" aria-expanded=""false"" class=""settings"" id=""settings"" type=""button"">Settings</button> <div class=""panel hidden"" id=""panel""> <ul aria-labelledby=""settings""> <li><a href=""…"">Account</a></li> <li><a href=""…"">Privacy</a></li> <li><a href=""…"">Security</a></li> </ul> </div> There’s quite a bit going on here. We’re using the: aria-controls attribute to architect a connection between the <button> element and the panel whose appearance it controls. When some assistive technology, for example the JAWS screen reader, encounters an element with aria-controls it audibly tells a user about the controlled expanded element and gives them the ability to move focus to it. aria-expanded attribute to denote whether the panel is visible or not. We toggle this value using JavaScript to true when the panel is visible and false when it’s not.
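A minimal sketch of that toggle, assuming only the markup above and its hidden class, might look like this: document.getElementById('settings').addEventListener('click', function () { var expanded = this.getAttribute('aria-expanded') === 'true'; /* read the current state */ this.setAttribute('aria-expanded', String(!expanded)); /* flip it */ document.getElementById('panel').classList.toggle('hidden'); /* show or hide the panel */ });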
This important attribute tells people who use screen readers about the state of the elements they’re interacting with. For example, VoiceOver announces Settings expanded button when the panel is visible and Settings collapsed button when it’s hidden. aria-labelledby attribute to give the list a title of “Settings”. This can benefit some users of assistive technology. For example, screen readers can cycle through all the lists on a page, so being able to title them can improve findability. Being able to hear list Settings three items is, I’d argue, more useful than list three items. By doing this we’re supporting usability and findability. <ul> element to contain our list of links in our panel. Let’s look at the choice of <ul> to contain our settings choices. Firstly, our settings are related items, so they belong in a structure that semantically groups things. This is something that a list can do that other elements or patterns can’t. This pattern, for example, isn’t semantic and has no structure: <div><a href=""…"">Account</a></div> <div><a href=""…"">Privacy</a></div> <div><a href=""…"">Security</a></div> All we have there is three elements next to each other on the screen and in the DOM. That is not robust code that signifies anything. Why are we using an unordered list as opposed to an ordered list or a definition list? A quick look at the HTML specification tells us why: The <ul> element represents a list of items, where the order of the items is not important — that is, where changing the order would not materially change the meaning of the document. The HTML 5.1 specification’s description of the element Will the meaning of our document materially change if we moved the order of our links around? Nope. Therefore, I’d argue, we’ve used the correct element to structure our content. These coding decisions are information architecture I believe that what we’ve done here is pure information architecture. Going back to Dan Klyn’s model, we’ve practiced ontology by looking at the meaning of what we’re intending to communicate: we want to communicate there is an interactive element that toggles the appearance of an element on a page so we’ve used one, a <button>, with those semantics. programmatically we’ve used the type='button' attribute to signify that the button isn’t a menu, reset, or submit element. visually we’ve designed our <button> look like something that can be interacted with and, importantly, we haven’t removed the focus ring. we’ve labelled the <button> with the word “Settings” so that our users will hopefully understand what the button is for. we’ve used an <ul> element to structure and communicate our list of related items. We’ve also practiced taxonomy by developing systems and structures and creating relationships between our elements: by connecting the <button> to the panel using the aria-controls attribute we’ve programmatically created a relationship between two elements. we’ve developed a structure in our elements by labelling our <ul> with the same name as the <button> that controls its appearance. And finally we’ve practiced choreography by creating elements that foster movement and interaction. We’ve anticipated the way users and information want to flow: we’ve used a <button> element that is interactive and accessible out of the box. our aria-controls attribute can help some people who use screen readers move easily from the <button> to the panel it controls. 
by toggling the value of the aria-expanded attribute we’ve developed a system that tells assistive technology about the status of the relationship between our elements: the panel is visible or the panel is hidden. we’ve made sure our information is more usable and findable no matter how our users want or need to interact with it. Regardless of how someone “sees” our work they’re going to be able to use it because we’ve architected multiple ways to access our information. Information architecture, robust code, and accessibility The United Nations estimates that around 10% of the world’s population has some form of disability which, at the time of writing, is around 740,000,000 people. That’s a lot of people who rely on well-architected semantic code that can be interpreted by whatever assistive technology they may need to use. If everyone involved in the creation of our places made of information practiced information architecture it would make satisfying the WCAG 2.0 POUR principles so much easier. Our digital construction practices directly affect the quality of life of millions of people, and we have a responsibility to make technology available to them. In her book How To Make Sense Of Any Mess, Abby Covert states: If we’re going to be successful in this new world, we need to see information as a workable material and learn to architect it in a way that gets us to our goals. How To Make Sense Of Any Mess I believe that the world will be a better place if we start treating front-end development as information architecture.",2016,Francis Storr,francisstorr,2016-12-17T00:00:00+00:00,https://24ways.org/2016/front-end-developers-are-information-architects-too/,code 292,Watch Your Language!,"I’m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can’t be expressed in some languages, while other languages express these concepts with ease. It also helped me understand the way we label languages. English: business. French: romance. Here’s an example of how words, or the absence thereof, can affect the way we think: In French we love everything. There’s no straightforward way to say we like something, so we just end up loving everything. I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It’s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn’t grasped the difference. Needless to say, I’ve scared away plenty of first dates! Learning another language made me realize the limitations of my native language and revealed concepts I didn’t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas. When I lived in Montréal, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient. I’m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. In the past couple of years I have been lucky enough to write code in these languages at a massive scale. 
In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job. When I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn’t consider accessibility. All this made editing views a laborious process. Grep. Replace all. Test. Ship it. Repeat. This wasn’t sustainable at Shopify’s scale, so the newly-formed front-end team was given two missions: Make the app responsive (AKA Let’s Make This Thing Responsive ASAP) Make the view layer scalable and maintainable (AKA Let’s Build a Pattern Library… in Ruby) Let’s make this thing responsive ASAP The year was 2015. The Shopify admin wasn’t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries. It seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn’t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool. Writing the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple weeks to push this to production and make our app mostly responsive. But with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn’t performant at all. Components would jarringly jump around the page before settling in on first paint. It was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn’t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout. In other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly. Let’s build a pattern library… in Ruby In order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping. When I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. Because of this baggage, the only way for us to build a pattern library that would get buy-in from our developers was to use Rails view helpers. And for many reasons using Ruby was the right choice for us.
The time spent ramping developers up on the new UI Components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn’t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan. We put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front end patterns we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still do)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks. Since the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn’t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn’t going to cut it. I don’t know the end of this story, because we haven’t written it yet. We’ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven’t determined the winner yet. Only time (and data gathering) will tell. Ruby is great but it doesn’t speak the browser’s language efficiently. It was not the right language for the job. Learning a new spoken language has had an impact on how I write code. It has taught me that you don’t know what you don’t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. But if you still feel like you’re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language.",2016,Annie-Claude Côté,annieclaudecote,2016-12-10T00:00:00+00:00,https://24ways.org/2016/watch-your-language/,code 293,A Favor for Your Future Self,"We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens. We comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front just in case to give us a bit of safety later. I want to talk about a situation that Past Alicia and Team couldn’t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. It wasn’t a very pleasant experience, not only for the reason why we had to remove so much of what we had built, but also because it’s the ultimate “I really hope this doesn’t break something else” situation. It was a stressful and tedious effort of triple checking that the things we were removing weren’t dependencies elsewhere. 
To be honest, we wouldn’t have been able to do this with any amount of success or confidence without our test suite. Writing tests for code is one of those things that developers really, really don’t want to do. It’s one of the easiest things to cut in the development process, and there’s often a struggle to have developers start writing tests in the first place. One of the best lessons the web has taught us is that we can’t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren’t in a tough spot later on because we only thought of the best case scenarios. JavaScript Regardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you’re choosing to build a JavaScript heavy app, you absolutely should be writing some combination of unit and integration tests. Unit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. Great for reused functions and small, scoped areas, this is the closest you get to examining your code under the testing microscope. For example, if we were to build a calculator, the most minute piece we could test could be the basic operations. /* * This example uses a test framework called Jasmine */ describe(""Calculator Operations"", function () { it(""Should add two numbers"", function () { // Say we have a calculator Calculator.init(); // We can run the function that does our addition calculation... var result = Calculator.addNumbers(7,3); // ...and ensure we're getting the right output expect(result).toBe(10); }); }); Even though these teeny bits work in isolation, we should ensure that connecting the large pieces works, as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let’s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn’t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.). it(""Should remember the last calculation"", function () { // Run an operation Calculator.addNumbers(7,10); // Expect something else to have happened as a result expect(Calculator.updateCurrentValue).toHaveBeenCalled(); expect(Calculator.currentValue).toBe(17); }); Unit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although it still might happen, you’ll be able to catch problems far sooner than without a test suite, and hopefully never push those failures to your production environment. Interfaces Regardless of how you’re building something, it most definitely has some kind of interface. Whether you’re using a very barebones structure, or you’re leveraging a whole design system, these things can be tested as well. Acceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. These are not necessarily for simulating edge-case scenarios, but rather ensuring that our core offerings are stable. 
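Before we look at an acceptance test, a quick aside: the Calculator module itself never appears in the examples above. Here’s a rough sketch of what it might look like — every name in it (init, addNumbers, updateCurrentValue, currentValue) is simply inferred from the expectations in the tests, so treat it as an illustration rather than a definitive implementation.
/* A minimal sketch of the module under test, inferred from the tests above */
var Calculator = {
  init: function () {
    // Start each session with a clean slate
    this.currentValue = 0;
  },
  addNumbers: function (a, b) {
    var result = a + b;
    // Record the result so the integration test's expectations hold
    this.updateCurrentValue(result);
    return result;
  },
  updateCurrentValue: function (value) {
    this.currentValue = value;
  }
};
With the unit-level pieces pinned down, back to those user workflows.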
For example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress. /* * This example uses Jasmine along with an add-on called jasmine-integration */ describe(""Acceptance tests"", function () { // Go to our signup page var page = visit(""/signup""); // Fill our signup form with invalid information page.fill_in(""input[name='email']"", ""Not An Email""); page.fill_in(""input[name='name']"", ""Alicia""); page.click(""button[type=submit]""); // Check that we get an expected error message it(""Shouldn't allow signup with invalid information"", function () { expect(page.find(""#signupError"").hasClass(""hidden"")).toBeFalsy(); }); // Now, fill our signup form with valid information page.fill_in(""input[name='email']"", ""thisismyemail@gmail.com""); page.fill_in(""input[name='name']"", ""Gerry""); page.click(""button[type=submit]""); // Check that we get an expected success message and the error message is hidden it(""Should allow signup with valid information"", function () { expect(page.find(""#signupError"").hasClass(""hidden"")).toBeTruthy(); expect(page.find(""#thankYouMessage"").hasClass(""hidden"")).toBeFalsy(); }); }); In terms of visual design, we’re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga. Tools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images. The code would look something like this: /* * This example uses PhantomCSS */ casper.start(""/home"").then(function(){ // Initial state of form phantomcss.screenshot(""#signUpForm"", ""sign up form""); // Hit the sign up button (should trigger error) casper.click(""button#signUp""); // Take a screenshot of the UI component phantomcss.screenshot(""#signUpForm"", ""sign up form error""); // Fill in form by name attributes & submit casper.fill(""#signUpForm"", { name: ""Alicia Sedlock"", email: ""alicia@example.com"" }, true); // Take a second screenshot of success state phantomcss.screenshot(""#signUpForm"", ""sign up form success""); }); You run this code before starting any development, to create your baseline set of screen captures. After you’ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. Say you changed your margins on our form elements – your image diff would look something like this: This is a great tool for ensuring not just your site retains its expected styling, but it’s also great for ensuring nothing accidentally changes in the living style guide or modular components you may have developed. 
It’s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored. Conclusion The shape and size of what you’re testing for your site or app will vary. You may not need lots of unit or integration tests if you don’t write a lot of JavaScript. You may not need visual regression testing for a one page site. It’s important to assess your codebase to see which tests would provide the most benefit for you and your team. Writing tests isn’t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that’s broken breaks trust with our users, and it’s our responsibility as developers to make sure that trust isn’t broken. Testing shouldn’t be considered a “nice to have” - it should be an integral piece of our workflow and our day-to-day job.",2016,Alicia Sedlock,aliciasedlock,2016-12-03T00:00:00+00:00,https://24ways.org/2016/a-favor-for-your-future-self/,code 294,New Tricks for an Old Dog,"Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy. I’ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn’t know or exposes an area of uncertainty that you can go work on. In this article, I hope to share some experiences and techniques that will prove useful in your own situation and that will let you impress your friends in some new and exciting ways! How do you do, fellow kids? To start with I’d like to introduce you to our JavaScript framework, Flight. Right now it’s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React. Over time, as we used Flight for more complex interfaces, we found it wasn’t scaling with us. Composing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn’t come built-in to Flight, and it made reusing components a real challenge. There was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn’t predict how a component would be built until you opened it. Making matters worse, Flight relied on events to move data around the application. Unfortunately, events aren’t good for giving structure to complex logic. They jump around in a way that’s hard to understand and debug, and force you to search your code for a specific string — the event name — to figure out what’s going on. To find fixes for these problems, we looked around at other frameworks. We like React for its simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone. But when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you’ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort! 
Instead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we were already using. Boiled down, what we liked seemed quite simple: Component nesting & composition Easy, predictable state management Normal functions for data manipulation Making these ideas applicable to Flight took some time, but we’re in a much better place now. Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app. While the specifics of our situation and Flight aren’t really important, this experience taught me something: Distill good tech into great ideas. You can apply great ideas anywhere. You don’t have to use the cool kids’ latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already? Times, they are a changin’ Apart from stealing ideas from the new and shiny, how can we make the most of improved tooling and techniques? Times change and so should the way we write code. Going back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out on new language features, and without a more advanced build tool like Webpack, every module’s source was encased in AMD boilerplate. In fact, we found ourselves with a mix of both AMD syntaxes: define([""lodash""], function (_) { // . . . }); define(function (require) { var _ = require(""lodash""); // . . . }); This just wouldn’t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax: import _ from ""lodash""; These days we’re using Babel, Webpack, ES2015 modules and many new language features that make development just… better. But how did we get there? To explain, I want to introduce you to codemods and jscodeshift. A codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL(""..."") to new URL(""...""). jscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file’s syntax tree – a data-structure representation of the code — finding and modifying in place as it goes. Here’s an example that renames all instances of the variable foo to bar: module.exports = function (fileInfo, api) { return api .jscodeshift(fileInfo.source) .findVariableDeclarators('foo') .renameTo('bar') .toSource(); }; It’s a seriously powerful tool, and we’ve used it to write a series of codemods that: rename modules, unify our use of AMD to a single syntax, transition from one testing framework to another, and switch from AMD to CommonJS. These changes can be pretty huge and far-reaching. Here’s an example commit from when we switched to CommonJS: commit 8f75de8fd4c702115c7bf58febba1afa96ae52fc Date: Tue Jul 12 2016 Run AMD -> CommonJS codemod 418 files changed, 47550 insertions(+), 48468 deletions(-) Yep, that’s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone! From this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes: Find all the existing patterns Choose the two most similar Unify with a codemod Repeat. 
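To make that recipe a little less abstract, here’s a rough sketch of the ‘unify’ step — not our actual codemod, just an illustrative transform that rewrites the dependency-array AMD style into the require-style AMD syntax, and it assumes the happy path of one function parameter per dependency:
module.exports = function (fileInfo, api) {
  var j = api.jscodeshift;
  var root = j(fileInfo.source);

  root
    .find(j.CallExpression, { callee: { name: 'define' } })
    .filter(function (path) {
      var args = path.node.arguments;
      // Only match define([...deps], function (...params) { ... })
      return args.length === 2 &&
        args[0].type === 'ArrayExpression' &&
        args[1].type === 'FunctionExpression';
    })
    .forEach(function (path) {
      var deps = path.node.arguments[0].elements; // e.g. ["lodash"]
      var fn = path.node.arguments[1];
      var params = fn.params;                     // e.g. [_]

      // Build a `var <param> = require(<dep>);` for each pairing
      var requires = deps.map(function (dep, i) {
        return j.variableDeclaration('var', [
          j.variableDeclarator(
            params[i],
            j.callExpression(j.identifier('require'), [dep])
          )
        ]);
      });

      // Replace with define(function (require) { ...requires; ...body });
      fn.params = [j.identifier('require')];
      fn.body.body = requires.concat(fn.body.body);
      path.node.arguments = [fn];
    });

  return root.toSource();
};
Run with something like jscodeshift -t unify-amd.js src/ (the filename is hypothetical), it leaves require-style modules untouched and rewrites the rest — which is what makes the final AMD-to-CommonJS step so much simpler.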
For example: For module loading, we had 2 competing AMD patterns plus some use of CommonJS The two AMD syntaxes were the most similar We used a codemod to unify the AMD patterns Later we returned to AMD to convert it to CommonJS It’s worked for us, and if you’d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer. Welcome aboard! As TweetDeck has gotten older and larger, the number of things a new engineer has to learn about has exploded. The myriad microservices that manage our data, and the layers of authentication, security and business logic around them, make for an overwhelming amount of information to hand to a newbie. Inspired by Amy’s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design the onboarding that each of our new hires goes through, to help them make the most of their first few weeks. Joining a new company, team, or both, is stressful and uncomfortable. Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding! And as you build up an onboarding process, you’ll create things that are useful for more than just new hires; it’ll force you to write documentation, for example, in a way that’s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help. This is something that’s taken for granted in open source, but somehow I think we forget about it in big companies. After all, better documentation is just a good thing. You will forget things from time to time, and you’d be surprised how often the “beginner” docs help! For TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts: What are our dependencies? Where are the potential points of failure? Where does authentication live? Storage? Caching? Who owns “X”? Of course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track. To address this, we’ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review. In my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team’s work. But, if you’re not doing it, here’s my suggestion for getting started: Every pull request gets a +1 from someone else. That’s all — it’s very lightweight and easy. Just ask someone to have a quick look over your code before it goes into master. At Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things: Don’t review for more than an hour 1 Keep reviews smaller than ~400 lines 2 Code review your own code first 2 After an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone’s put code up for a review, that review is blocking them doing other work. It’s your job to unblock them. 
On TweetDeck, we actually try to keep reviews under 250 lines. It doesn’t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration. But the most important thing I’ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team’s: with fresh, critical eyes, after a break, using a dedicated code review tool. It’s amazing what you can spot when you put a new interface around code you’ve been staring at for hours! And yes, this list features science. The data backs up these conclusions, and if you’d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It’s ace. For more dedicated information sharing, we’ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team member shares or teaches something to everyone else, and next time it’s someone else’s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results. If you’d like to run a seminar, one thing you could try to get started: run a “point at the thing you least understand in our architecture” session — thanks to James for this idea. And guess what… your onboarding architecture diagrams will help (and benefit from) this! More, please! There are a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations. And finally, thanks to Amy for proofreading this and to Passy for feedback on the original talk. Dunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Proceedings of the 22nd ICSE 2000: 467-476. ↩ Cohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Beverly, MA: SmartBear Software. ↩ ↩",2016,Tom Ashworth,tomashworth,2016-12-18T00:00:00+00:00,https://24ways.org/2016/new-tricks-for-an-old-dog/,code 295,Internet of Stranger Things,"This year I’ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I’m going to show you how I made a very special Raspberry Pi-based, internet-connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension1. What you’ll see You can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact, why not try it now; check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it, hit the “Send a message” button and wait your turn.) It’s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I’m using WebSockets to communicate in real time between the browser, server and Raspberry Pi. 
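Before the shopping list, it might help to see how small the moving parts are. This isn’t the actual stranger-lights-server code (that’s on GitHub, as we’ll see later) — just a hypothetical sketch of the relay idea using Socket.io, with all the queueing left out:
var io = require('socket.io')(8080);

io.on('connection', function (socket) {
  // A browser clicked a letter: rebroadcast it to every client,
  // which includes the Raspberry Pi sitting behind the lights
  socket.on('letter', function (letter) {
    io.emit('letter', letter);
  });
});
The Pi subscribes to the same event and translates each letter into a lit LED; the real server layers the queue and the who’s-in-control logic on top.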
What you’ll need Raspberry Pi any of the following models: Zero (will need straight male header pins soldered2 and Micro USB OTG adaptor), A+, B+, 2, or 3 Micro SD card at least 4Gb Class 10 speed3 Micro USB power supply at least 2A USB Wifi dongle (unless you have a Pi 3 - that has wifi built in). Addressable fairy lights Logic level shifter (with pins soldered unless you want to do it!) Breadboard Jumper wires (3x male to male and 4x female to male) Optional but recommended Base board to hold the Pi and Breadboard (often comes with a breadboard!) Find links for where to buy all of these items that goes along with this tutorial. The total price should be around $1004. Setting up the Raspberry Pi You’ll need to install the SD card for the Raspberry Pi. You’ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher5. Next up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There’s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details6. network={ ssid=""mywifi"" psk=""mypassword"" } Save the file, eject the card from your computer and plug it into the Raspberry Pi. If you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle7 and power supply, but don’t plug it in yet! Wiring! Time to wire everything up! First of all, push the Logic Level Converter into the middle of the breadboard: Logic Level Converter The logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top. Raspberry Pi pins only output 3.3v but the lights need 5v. That’s why we need the logic level converter in there to boost up the signal. Connect the first two wires between the Raspberry Pi pins and the breadboard: Note that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don’t have to match but it’s easier to follow (and check) if you use the same ones as in the diagram. Then the next two: This is what you should have so far: Lights Now to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard. Make sure that you connect the right end of the lights, mine has a male connector at the wrong end so it’s impossible to do this, but double check. Also make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or ‘-’ or G) and DI (sometimes DIN for data in). You can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that’s what takes the data along to the chip in the next light. Make sure that you’re plugging into the data-in and not the data-out! That’s it! Everything’s plugged in and ready to go! 
But before you plug power into your Pi, double check all your wires and make sure they’re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check! The Moment of Truth! Plug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that’s your confirmation that it’s connected to my WebSocket server and ready to receive messages from the upside-down! However, if the first light in the string is pulsing red, it means that you’re not connected to the internet. So check the Troubleshooting section of the support document. If it’s pulsing green then you’re connected to the internet but can’t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it’ll come back up. Rig up the lights! Fix the lights up on the wall however you want, pins, nails, tape. I’ve used cable clips. Just be careful! I’m using a 50 light string so I’ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi. Check the photo here to see how the lights line up, note that there are spare unused lights in-between each row: Now visit lights.seb.ly and you’ll see this : If you’re the only one online you’ll have direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you’ll need to click the button to join the queue and wait your turn. How it works - the geeky details! Electronics: The pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them, LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer). Addressable LEDs or “Neopixels” We’re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string. The chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels8 and I used them on my Laser Light Synths project. Neopixels with the chip and the LED all in one - it’s the white square shaped component and the darker square inside is the chip. These are only 5mm wide! A Laser Light Synth! Covered with around 800 super bright neopixels! Logic Level Converter The logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip. Power Neopixels can often draw a lot of current so you need to be careful how you power them. I’ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you’ll need to use a separate 5V power supply. 
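One aside before we get to the software: the project’s code handles the strict WS2811 timing for you, but if you just want to see what talking to a GPIO pin from Node.js looks like, here’s a minimal sketch using the onoff module — my choice for illustration, not something this project uses, and far too slow for neopixel data (a plain LED is fine):
var Gpio = require('onoff').Gpio;

// GPIO17 as an output - imagine a single LED and resistor attached
var led = new Gpio(17, 'out');

// Blink once a second
var on = 0;
setInterval(function () {
  on = on ? 0 : 1;
  led.writeSync(on);
}, 1000);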
If you want to learn more, check out Adafruit’s Neopixel Uberguide. Node.js There are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they’re hosted on npm as stranger-lights and stranger-lights-server. The server side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time. WebSockets I’m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connects to my WebSocket server. When you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers9. The Raspberry Pi code The Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file: node /home/pi/strangerthings/client.js > /dev/null & Anything in the rc.local file gets executed when the Pi boots up and this line of code runs the Node.js app and routes its output to nowhere (ie /dev/null). The & means that it runs it in the background and doesn’t hold up the boot process. Working with the Raspberry Pi headless You might know that when a computer has no screen or keyboard, you would refer to it as “running headless”. So just like most web servers, you need to configure it over the network with ssh10. If you’re on a mac you can find your Pi on the network through the name raspberrypi.local11, otherwise you’ll need to find its IP address. There’s more on the guide to Remote Access instructions on the Raspberry Pi website. And if you’re very new to the terminal, I highly recommend this great online Linux command line tutorial. Improvements This is quite an early experiment and I’m sure I’ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I’ve just rigged up my lights with Post-it notes. It’d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style. Where next? Finding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that’s something you’d be interested in, or sign up to my mailing list at st4i.com. There are many many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you’ll be spoilt for choice! Also take a look at Arduino - it’s an incredibly popular platform for electronics and the internet is literally bursting with resources. I hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly. Not a particularly original idea, but I don’t think I’ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. 
Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables ↩︎ Video guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit ↩︎ Slower cards will work but performance may suffer ↩︎ Or £5,000 in UK money. Sorry, Brexit joke :) ↩︎ You will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. ↩︎ SSID and password should be all that you need but you can see all the config options on this wpa supplicant guide ↩︎ Raspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle ↩︎ Thanks to Adafruit who invented the term neopixels so we don’t have to refer to them as WS281x any more! ↩︎ So you can see other people sending messages in the browser ↩︎ ssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal. ↩︎ You can change this default hostname using raspi-config ↩︎",2016,Seb Lee-Delisle,sebleedelisle,2016-12-01T00:00:00+00:00,https://24ways.org/2016/internet-of-stranger-things/,code 296,Animation in Design Systems,"Our modern front-end workflow has matured over time to include design systems and component libraries that help us stay organized, improve workflows, and simplify maintenance. These systems, when executed well, ensure proper documentation of the code available and enable our systems to scale with reduced communication conflicts. But while most of these systems take a critical stance on fonts, colors, and general building blocks, their treatment of animation remains disorganized and ad-hoc. Let’s leverage existing structures and workflows to reduce friction when it comes to animation and create cohesive and performant user experiences. Understand the importance of animation Part of the reason we treat animation like a second-class citizen is that we don’t really consider its power. When users are scanning a website (or any environment or photo), they are attempting to build a spatial map of their surroundings. During this process, nothing quite commands attention like something in motion. We are biologically trained to notice motion: evolutionarily speaking, our survival depends on it. For this reason, animation when done well can guide your users. It can aid and reinforce these maps, and give us a sense that we understand the UX more deeply. We retrieve information and put it back where it came from instead of something popping in and out of place. “Where did that menu go? Oh it’s in there.” For a deeper dive into how animation can connect disparate states, I wrote about the Importance of Context-Shifting in UX Patterns for CSS-Tricks. An animation flow on mobile. Animation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn’t anything very fancy or crazy. Just by showing users that it cared about them, Viget kept people around, and the bounce rates dropped. 14-second generic loading screen. 22-second custom loading screen. This also works for form submission. Giving your personal information over to an online process like a static form can be a bit harrowing. It becomes more harrowing without animation used as a signal that something is happening, and that some process is completing. That same animation can also entertain users and make them feel as though the wait isn’t as long. 
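As a concrete sketch of that signal — hypothetical, with invented class names and endpoint — the JavaScript side of a reassuring form can be tiny:
var form = document.querySelector('.signup-form');

form.addEventListener('submit', function (e) {
  e.preventDefault();
  // The loading animation starts the moment the user commits
  form.classList.add('is-submitting');

  fetch('/signup', { method: 'POST', body: new FormData(form) })
    .then(function () {
      // Swap the loader for a success state
      form.classList.remove('is-submitting');
      form.classList.add('is-complete');
    });
});
All of the actual animation lives in CSS on those two classes; the script just narrates what’s happening to the user.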
Eli Fitch gave a talk at CSS Dev Conf called: “Perceived Performance: The Only Kind That Really Matters”, which is one of my favorite talk titles of all time. In it, he discussed how we tend to measure things like timelines and network requests because they are more quantifiable–and therefore easier to measure–but that measuring how a user feels when visiting the site is more important and worth the time and attention. In his talk, he states “Humans over-estimate passive waits by 36%, per Richard Larson of MIT”. This means that if you’re not using animation to make the wait of a form submission feel faster, users are perceiving it to be much slower than the dev tools timeline is recording. Rein it in Unlike fonts, colors, and so on, we tend to add animation in as a last step, which leads to disorganized implementations that lack overall cohesion. If you asked a designer or developer if they would create a mockup or build a UI without knowing the fonts they were working with, they would dislike the idea. Not knowing the building blocks they’re working with means that the design can fall apart or the development can break with something so fundamental left out at the start. Good animation works the same way. The first step in reining in your use of animation is to perform an animation audit. Look at all the places you are using animation on your site, or the places you aren’t using animation but probably should. (Hint: perceived performance of a loader on a form submission can dramatically change your bounce rates.) Not sure how to perform a good audit? Val Head has a great chapter on it in her book, Designing Interface Animations, which has buckets of research and great ideas. Even some beautiful component libraries that have animation in the docs make this mistake. You don’t need every kind of animation, just like you don’t need every kind of font. This bloats our code. Ask yourself questions like: do you really need a flip 180 degree animation? I can’t even conceive of a place on a typical UI where that would be useful, yet most component libraries that I’ve seen have a mixin that does just this. Which leads to… Have an opinion Many people are confused about Material Design. They think that Material Design is Motion Design, mostly because they’ve never seen anyone take a stance on animation before and document these opinions well. But every time you use Material Design as your motion design language, people look at your site and think GOOGLE. Now that’s good branding. By using Google’s motion design language and not your own, you’re losing out on a chance to be memorable on your own website. What does having an opinion on motion look like in practice? It could mean you’ve decided that you never flip things. It could mean that your eases are always going to glide. In that instance, you would put your efforts towards finding an ease that looks “gliding” and pulling out any transform: scaleX(-1) animation you find on your site. Across teams, everyone knows not to spend time mocking up flipping animation (even if they’re working on an entirely different codebase), and to instead work on something that feels like it glides. You save time and don’t have to communicate again and again to make things feel cohesive. Create good developer resources Sometimes people don’t incorporate animation into a design system because they aren’t sure how, beyond the base hover states. All animation properties can be broken into interchangeable pieces. 
This allows developers and designers alike to mix and match and iterate quickly, while still staying in the correct language. Here are some recommendations (with code and a demo to follow): Create timing units, similar to h1, h2, h3. In a system I worked on recently, I called these t1, t2, t3. T1 would be reserved for longer pieces, down to t5 which is a bit like h5 in that it’s the default (usually around .25 seconds or thereabouts). Keep animation easings for entrance, exit, entrance emphasis and exit emphasis that people can commonly refer to. This, and the animation-fill-mode, are likely to be the only two properties that can be reused for the entrance and exit of the animation. Use the animation-name property to define the keyframes for the animation itself. I would recommend starting with 5 or 6 before making a slew of them, and see if you need more. Writing 30 different animations might seem like a nice resource, but just like your color palette having too many can unnecessarily bulk up your codebase, and keep it from feeling cohesive. Think critically about what you need here. See the Pen Modularized Animation for Component Libraries by Sarah Drasner (@sdras) on CodePen. The example above is pared-down, but you can see how in a robust system, having pieces that are interchangeable cached across the whole system would save time for iterations and prototyping, not to mention make it easy to make adjustments for different feeling movement on the same animation easily. One low hanging fruit might be a loader that leads to a success dialog. On a big site, you might have that pattern many times, so writing up a component that does only that helps you move faster while also allowing you to really zoom in and focus on that pattern. You avoid throwing something together at the last minute, or using a GIF, which are really heavy and mushy on retina. You can make singular pieces that look really refined and are reusable. React and Vue Implementations are great for reusable components, as you can create a building block with a common animation pattern, and once created, it can be a resource for all. Remember to take advantage of things like props to allow for timing and easing adjustments like we have in the previous example! Responsive At the very least we should ensure that interaction also works well on mobile, but if we’d like to create interactions that take advantage of all of the gestures mobile has to offer, we can use libraries like zingtouch or hammer to work with swipe or multiple finger detection. With a bit of work, these can all be created through native detection as well. Responsive web pages can specify initial-scale=1.0 in the meta tag so that the device is not waiting the required 300ms on the secondary tap before calling action. Interaction for touch events must either start from a larger touch-target (40px × 40px or greater) or use @media(pointer:coarse) as support allows. Buy-in Sometimes people don’t create animation resources simply because it gets deprioritized. But design systems were also something we once had to fight for, too. This year at CSS Dev Conf, Rachel Nabors demonstrated how to plot out animation wants vs. needs on a graph (reproduced with her permission) to help prioritize them: This helps people you’re working with figure out the relative necessity and workload of the addition of these animations and think more critically about it. You’re also more likely to get something through if you’re proving that what you’re making is needed and can be reused. 
Good compromises can be made this way: “we’re not going to go all out and create an animated ‘About Us’ page like you wanted, but I suppose we can let our users know their contact email went through with a small progress and success notification.” Successfully pushing smaller projects through helps build trust with your team, and lets them see what this type of collaboration can look like. This builds up the type of relationship necessary to push through projects that are more involved. It can’t be overstressed that good communication is key. Get started! With these tools and good communication, we can make our codebases more efficient, performant, and feel better for our users. We can enhance the user experience on our sites, and create great resources for our teams to allow them to move more quickly while innovating beautifully.",2016,Sarah Drasner,sarahdrasner,2016-12-16T00:00:00+00:00,https://24ways.org/2016/animation-in-design-systems/,code 298,First Steps in VR,"The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not. I think of these two mindsets as production and prototyping, though of course there are lots of overlap and phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people. Every year we see articles saying it will be the “year of virtual reality”, that was especially prevalent this year. 2016 has been a year of progress, VR isn’t quite mainstream but with efforts like Playstation VR and Google Cardboard, we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web. WebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make it easier, there are plenty of resources such as Three.js, A-Frame and ReactVR that help to make the heavy lifting a bit easier. Getting Started with A-Frame I like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such, I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components, you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. 
That’s not to say you can’t build complex things, you have complete access to write JavaScript and shaders. I’m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There’s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It’s fun. For this project I wanted to keep things as simple as possible, so I can easily explain it without the classic “draw a circle then draw an owl”. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy: Must work on Google Cardboard at a minimum, because of price Therefore, it must not rely on having a controller Auto-moving around a maze would be a good example Move in direction you look Stretch goal: Scoring, time until you hit a wall or get stuck in maze Stretch goal: Levels, so the map doesn’t need to be random Stretch goal: Snow! I decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground. <!DOCTYPE html> <html> <head> <meta charset=""utf-8""> <meta name=""viewport"" content=""width=device-width""> <title>24 ways</title> <script src=""https://aframe.io/releases/0.3.2/aframe.js""></script> <script src=""//cdn.rawgit.com/donmccurdy/aframe-extras/v2.6.1/dist/aframe-extras.min.js""></script> </head> <body> <a-scene> <a-entity id=""player"" camera universal-controls kinematic-body position=""0 1.8 0""> </a-entity> <a-entity id=""walls""></a-entity> <a-grid id=""ground"" static-body></a-grid> <a-sky id=""sky"" color=""#AADDF0""></a-sky> <!-- Lighting --> <a-light type=""ambient"" color=""#ccc""></a-light> </a-scene> <script> document.querySelector('a-scene').addEventListener('render-target-loaded', function () { var MAP_SIZE = 10, PLATFORM_SIZE = 5, NUM_PLATFORMS = 50; var platformsEl = document.querySelector('#walls'); var v, box; for (var i = 0; i < NUM_PLATFORMS; i++) { // y: 0 is ground v = { x: (Math.floor(Math.random() * MAP_SIZE) - PLATFORM_SIZE) * PLATFORM_SIZE, y: PLATFORM_SIZE / 2, z: (Math.floor(Math.random() * MAP_SIZE) - PLATFORM_SIZE) * PLATFORM_SIZE }; box = document.createElement('a-box'); platformsEl.appendChild(box); box.setAttribute('color', '#39BB82'); box.setAttribute('width', PLATFORM_SIZE); box.setAttribute('height', PLATFORM_SIZE); box.setAttribute('depth', PLATFORM_SIZE); box.setAttribute('position', v.x + ' ' + v.y + ' ' + v.z); box.setAttribute('static-body', ''); } console.info('Platforms loaded.'); }); </script> </body> </html> As you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an <a-scene> which is the container for everything that is going to show up on the screen. There are a few <a-entity> which can be compared to <div> as they are essentially non-semantic containers, able to be used for any purpose. 
The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them. Styling Now we’ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around, you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave. Note, you will probably need a server running for images to work. You can do this by running python -m ""SimpleHTTPServer"" in the folder where the code is, then go to localhost:8000 in browser. Textures Unless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com, one image worked well for the walls and the other for the floor. <a-assets> <img id=""texture-floor"" src=""floor.jpg""> <img id=""texture-wall"" src=""wall.jpg""> </a-assets> The <a-assets> is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures. To apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image. Replace: <a-grid id=""ground"" static-body></a-grid> With: <a-grid id=""ground"" static-body material=""src: #texture-floor""></a-grid> This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined. box.setAttribute('material', 'src: #texture-wall'); That’s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object. Lighting The next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished. There are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the <a-light> entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit. To start with I wanted to light up the scene with a general light, type=""ambient"", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4, it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (<a-sky>), as it looked a bit odd with a white sky. <a-light type=""ambient"" color=""#92455E"" intensity=""0.4""></a-light> <a-sky id=""sky"" color=""#0000ff""></a-sky> I felt that the maze looked good with a red tinge but it was a bit flat, everything was the same colour and it was a bit dark. So I added a light within the #player entity, this could have been as an attribute but I set it as a child a-light instead. 
By using type=""point"" with a high intensity and low distance, it showed close walls as being lighter. It also added a sort-of object to the player, it isn’t a walking human or anything but by moving light where the player is it feels a bit more physical. <a-light color=""#fff"" distance=""5"" intensity=""0.7"" type=""point""></a-light> By this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the <a-scene> with type=exponential so it looks thicker the further away it is and a mid intensity, so you feel a bit lost but can still see. I was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers. Movement One of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you’re looking, but I wasn’t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working. AFRAME.registerComponent('automove-controls', { init: function () { this.speed = 0.1; this.isMoving = true; this.velocityDelta = new THREE.Vector3(); }, isVelocityActive: function () { return this.isMoving; }, getVelocityDelta: function () { this.velocityDelta.z = this.isMoving ? -speed : 0; return this.velocityDelta.clone(); } }); Replace: universal-controls With: universal-controls=""movementControls: automove, gamepad, keyboard"" This works by creating a component automove-controls that adds auto-move to the player without overriding movement completely. It doesn’t even touch direction, it just checks if isMoving is true then moves the player by the set speed. Components can be creating for adding all kinds of functionality with relative ease. It makes it very powerful for people of all difficulty levels. Building a map Currently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels. I made the maze in Tiled by finding a random tileset online (we don’t need to actually show the images), I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn’t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it’s easy to update in the future. var map = { ""data"":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], ""height"":10, ""width"":10 } As you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). 
I rewrote the random platforms code (from Don’s example) to instead loop over the map data and place walls where the data is 1 and position the player where it is 2. I set the position so that the origin of the map would be 0,1.5,0. The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2. document.querySelector('a-scene').addEventListener('render-target-loaded', function () { var WALL_SIZE = 5, WALL_HEIGHT = 3; var el = document.querySelector('#walls'); var wall; for (var x = 0; x < map.height; x++) { for (var y = 0; y < map.width; y++) { var i = y*map.width + x; var position = (x-map.width/2)*WALL_SIZE + ' ' + 1.5 + ' ' + (y-map.height/2)*WALL_SIZE; if (map.data[i] === 1) { // Create wall wall = document.createElement('a-box'); el.appendChild(wall); wall.setAttribute('color', '#fff'); wall.setAttribute('material', 'src: #texture-wall;'); wall.setAttribute('width', WALL_SIZE); wall.setAttribute('height', WALL_HEIGHT); wall.setAttribute('depth', WALL_SIZE); wall.setAttribute('position', position); wall.setAttribute('static-body', ''); } if (map.data[i] === 2) { // Set player position document.querySelector('#player').setAttribute('position', position); } } } console.info('Walls added.'); }); With this added, it’s nice and easy to change the map around as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There’s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web. It’s Not All Fun And Games A lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that’s just a tiny subset. There are all sorts of applications for VR, including storytelling, data visualisation and even meditation. There have been a number of cases where it has been shown that virtual reality can help as a tool for therapies: Oxford study finds virtual reality can help treat severe paranoia Virtual Reality Therapy for Phobias at the Duke Faculty Practice Bravemind: Virtual Reality Exposure Therapy at the University of Southern California 
We can use our skills as web folk to dabble; we don’t need to learn any new languages. If, in the 2026 edition of 24 ways, somebody references this article and looks at how far we have come… well, let’s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well, it’s fun… have a go anyway.",2016,Shane Hudson,shanehudson,2016-12-11T00:00:00+00:00,https://24ways.org/2016/first-steps-in-vr/,code
300,Taking Device Orientation for a Spin,"When The Police sang “Don’t Stand So Close To Me” they weren’t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we’re there, we’re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot. But if you can’t beat them, join them. A brave new world A couple of years back all sorts of specs for new HTML5 APIs sprang up, much to our collective glee. Emboldened, we ran a few tests and found they basically didn’t work in anything and went off disheartened into the corner for a bit of a sob. Turns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. Mostly literally half usable—we’re still talking about browsers, after all. Now, of course they’re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from GitHub that will fall out of fashion while we’re still trying to locate the documentation, right? Well, no! So what if we actually wanted to use one of these APIs, say, to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed)? How could we do something like that? Let’s find out. The Device Orientation API The phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn’t work, but also gives us three values that represent the orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of it as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat. The main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too. The API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away:

window.addEventListener('deviceorientation', function(e) {
  console.log(e.alpha);
  console.log(e.beta);
  console.log(e.gamma);
});

If you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended. One important note Like many of these newfangled APIs, Device Orientation is only available over HTTPS. We’re not allowed to have too much fun without protection, so make sure that you’re working on a secure line. I’ve found a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel.
ngrok http -host-header=rewrite mylocaldevsite.dev:80

ngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option. Right, where were we? Make something to look at It’s all well and good having a bunch of numbers, but they’re no use unless we do something with them. Something creative. Something to inspire the generations. Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we’re not trying to be too clever here). Yeah, let’s just build one of those. Our basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, with its CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the ‘window’ so that the part we want to show is visible. Here it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.) The details of the slider aren’t important (we’re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. This takes a percentage value (0-100) with 0 being far left and 100 being far right (or ‘alt-nazi’ or whatever).

var position_image = function(percent) {
  var pos = (img_W / 100) * percent;
  img.style.transform = 'translate(-' + pos + 'px)';
};

All this does is figure out what that percentage means in terms of the image width, and set the transform: translate(…); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.) OK. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we’re done. (We’re so not done.) The maths bit If we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you’ll note that the main value that is changing as we swing back and forth is the ‘alpha’ value. That’s the one we want to track. As our goal here is hey, these APIs are interesting and fun and not let’s build the world’s best panoramic image viewer, we’ll start by making a few assumptions and simplifications:

When the image loads, we’ll centre the image and take the current nose-forward orientation reading as the middle.
Moving left, we’ll track to the left of the image (lower percentage).
Moving right, we’ll track to the right (higher percentage).
If the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they’re on their own.

Nose-forward When the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn’t really matter to us, as we don’t know which direction the user might be facing in anyway — we just need to record that initial state and then use it to compare any new readings.
var initial_position = null;

window.addEventListener('deviceorientation', function(e) {
  if (initial_position === null) {
    initial_position = Math.floor(e.alpha);
  }
  var current_position = initial_position - Math.floor(e.alpha);
});

(I’m rounding down the values with Math.floor() to make debugging easier - we’ll take out the rounding later.) We get our initial position if it’s not yet been set, and then calculate the current position as a difference between the new value and the stored one. These values are weird One thing you need to know about these values is that they range from 0 to 360 but then you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero as you’d expect if you do a forward roll. What I’m interested in is working out my rotation. If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can’t see it.

var rotation = current_position;
if (current_position > 180) rotation = current_position - 360;

Which way up? Since we’re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait or landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. That’s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn’t well supported. The best I can come up with is:

var screen_portrait = false;
if (window.innerWidth < window.innerHeight) {
  screen_portrait = true;
}

It works. Then we can use screen_portrait to branch our code:

if (screen_portrait) {
  if (current_position > 180) rotation = current_position - 360;
} else {
  if (current_position < -180) rotation = 360 + current_position;
}

Here’s the code in action so you can see the values for yourself. If you change screen orientation you’ll need to refresh the page (it’s a demo!). Limiting rotation Now, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360° to view a photo. Instead, we need to limit the range of movement to something like 60°-from-nose in either direction and normalise our values to pan the entire image across that 120° range. -60 would be full-left (0%) and 60 would be full-right (100%). If we set max_rotation = 60, that code ends up looking like this:

if (rotation > max_rotation) rotation = max_rotation;
if (rotation < (0 - max_rotation)) rotation = 0 - max_rotation;

var percent = Math.floor(((rotation + max_rotation) / (max_rotation * 2)) * 100);

We should now be able to get a rotation from -60° to +60° expressed as a percentage. Try it for yourself. The big reveal All that’s left to do is pass that percentage to our image positioning function and, would you believe it, it might actually work.

position_image(percent);

You can see the final result and take it for a spin. Literally. So what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it. What we have made is progress. We’ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn’t have this morning. Something we probably didn’t even want this morning.
Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting. But more importantly, we’ve learned that our browsers are just a little bit more capable than we thought. The web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy things that are often dismissed as the preserve of native apps. Like some sort of app marmalade. Poppycock. The web is an amazing, exciting place to create things. All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let’s create things.",2016,Drew McLellan,drewmclellan,2016-12-24T00:00:00+00:00,https://24ways.org/2016/taking-device-orientation-for-a-spin/,code
303,We Need to Talk About Technical Debt,"In my work with clients, a lot of time is spent assessing old, legacy, sprawling systems and identifying good code, bad code, and technical debt. One thing that constantly strikes me is the frequency with which bad code and technical debt are conflated, so let me start by saying this: Not all technical debt is bad code, and not all bad code is technical debt. Sometimes your bad code is just that: bad code. Calling it technical debt often feels like a more forgiving and friendly way of referring to what may have just been a poor implementation or a substandard piece of work. It is an oft-misunderstood phrase, and when mistaken for meaning ‘anything legacy, old, hacky, nasty or bad’, technical debt is swept under the carpet along with all of the other parts of the codebase we’d rather not talk about, and therein lies the problem. We need to talk about technical debt. What We Talk About When We Talk About Technical Debt The thing that separates technical debt from the rest of the hacky code in our project is the fact that technical debt, by definition, is something that we knowingly and strategically entered into. Debt doesn’t happen by accident: debt happens when we choose to gain something otherwise-unattainable immediately in return for paying it back (with interest) later on. An Example You’re a front-end developer working on a SaaS product, and your sales team is courting a large customer – a customer so large that you can’t really afford to lose them. The customer tells you that as long as you can allow them to theme your SaaS application according to their branding, they are willing to sign on the dotted line… the problem being that your CSS architecture was never designed to incorporate theming at all, and there isn’t currently a nice, clean way to incorporate a theme into the codebase. You and the business make the decision that you will hack a theme into the product in two days. It’s going to be messy, it’s going to be ugly, but you can’t afford to lose a huge customer just because your CSS isn’t quite right, right now. This is technical debt. You deliver the theme, the customer signs up, and everyone is happy. Except you (and the business, because you are one and the same) have a decision to make: Do we go back and build theming into the CSS architecture as a first-class citizen, porting the hacked theme back into a codified and formal framework? Do we carry on as we are? Things are working okay, and the customer paid up, so is there any reason to invest time and effort into things after we (and the customer) got what we wanted? Option 1 is choosing to pay off your debts; Option 2 is ignoring your repayments.
With Option 1, you’re acknowledging that you did what you could given the constraints, but, free of constraints, you’d have done something different. Now, you are choosing to implement that something different. With Option 2, however, you are avoiding your responsibility to repay your debt, and you are letting interest accrue. The problem here is that… your SaaS product now offers theming to one of your customers; another potential customer might also demand the ability to theme their instance of your product; you can’t refuse them that request, nor can you quickly fulfil it; you hack in another theme, thus adding to the balance of your existing debt; and so on (plus interest) for every subsequent theme you need to implement. Here you have increased entropy whilst making little to no attempt to address what you already knew to be problems. Your second, third, fourth, fifth request for theming will be hacked on top of your hack, further accumulating debt whilst offering nothing by way of a repayment. After a long enough period, the code involved will get so unwieldy, so hard to work with, that you are forced to tear it all down and start again, and the most painful part of this is that you’re actually paying off even more than your debt repayments would have been in the first place. Two days of hacking plus, say, five days of subsequent refactoring, would still have been substantially less than the weeks you will now have to spend rewriting your CSS to fix and incorporate the themes properly. You’ve made a loss; your strategic debt ultimately became a loss-making exercise. The important thing to note here is that you didn’t necessarily write bad code. You knew there were two options: the quick way and the correct way. The decision to take the quick route was a definite choice, because you knew there was a better way. Implementing the better way is your repayment. Good Debt and Bad Debt Technical debt is acceptable as long as you have intentions to settle; it can be a valuable solution to a business problem, provided the right approach is taken afterwards. That doesn’t, however, mean that all debt is born equal. Just as in real life, there is good debt and there is bad debt. Good debt might be… a mortgage, a student loan, or a business loan. These are types of debt that will secure you the means of repaying them. These are well considered debts whose very reason for being will allow you to make the money to pay them off—they have real, tangible benefit. A business loan to secure some equipment and premises will allow you to start an enterprise whose revenue will allow you to pay that debt back; a student loan will allow you to secure the kind of job that has the ability to pay a student loan back. These kinds of debt involve a considered and well-balanced decision to acquire something in the short term in the knowledge that you will have the means, in the long term, to pay it back. Conversely, bad debt might be… borrowing $1,000 from a loan shark so you can go to Vegas, or taking out a payday loan in order to buy a new television. Both of these kinds of debt will leave you paying for things that didn’t provide you a way of earning your own capital. That is to say, the loans taken did not secure anything that would help pay off said loans. These are bad debts that will usually provide a net loss. You really are only gaining the short term in exchange for a long-term financial responsibility: i.e., was it worth it?
A good litmus test for debt is to compare the gains of its immediate benefit with the cost of its long term commitment. The earlier example of theming a site is a good debt, provided we are keeping up our repayments (all debt is bad debt if you don’t). A calculated decision to do something ‘wrong’ in the short term with the promise of better payoffs later on. Bad Technical Debt The majority of my work is with front-end development teams—CSS is what I do. To that end, the most succinct example of technical debt for that audience is simply: !important All front-end developers know the horrors and dangers associated with using !important, yet we continue to use it. Why? It’s not necessarily because we’re bad developers, but because we see a shortcut. !important is usually implemented as a quick way out of a sticky specificity situation. We could spend the rest of the day refactoring our CSS to fix the issue at its source, or we can spend mere seconds typing the word !important and patch over the symptoms. This is us making an explicit decision to do something less than ideal now in exchange for immediate benefit. After all, refactoring our CSS will take a lot more time, and will still only leave us with the same outcome that the vastly quicker !important solution will, so it seems to make better business sense. However, this is a bad debt. !important takes seconds to implement but weeks to refactor. The cost of refactoring this back out later will be an order of magnitude higher than it would be to have done things properly the first time. The first !important usually sets a precedent, and subsequent developers are likely to have to use it themselves in order to get around the one that you left. So many CSS projects deteriorate because of this one simple word, and rewrites become more and more imminent. That makes it possibly the most costly 10 bytes a CSS developer could ever write. Bad Code Now we’ve got a good idea of what constitutes technical debt, let’s take a look at what constitutes bad code. Something I hear time and time again in my client work goes a little like this: We’ve amassed a lot of technical debt and we’d like to get a strategy in place to begin dealing with it. Whilst I genuinely admire their willingness to identify and desire to fix problems in their code, sometimes they’re not looking at technical debt at all—sometimes they’re just looking at bad code, plain and simple. Where technical debt is knowing that there’s a better way, but the quicker way makes more sense right now, bad code is not caring if there’s a better way at all. Again, looking at a CSS-specific world, a lot of bad code is contributed by non-front-end developers with little training, appreciation, or even respect for the front-end landscape. Writing code with reckless abandon should not be described as technical debt, because to do so would imply that… the developers knew they were implementing a sub-par solution, but… the developers also knew that a better solution was out there, which… implies that it can be tidied up relatively simply. Developers writing bad code is a larger and more cultural problem that requires a lot more effort to fix. Hopefully—and usually—bad code is in the minority, but it helps to be objective in identifying and solving it. Bad code usually doesn’t happen for a good enough reason, and is therefore much harder to justify. Technical debt often represents ability in judgement, whereas bad code often represents a gap in skills. 
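Before the takeaway, it may help to see the earlier !important trade-off as code. This is only a minimal sketch — the selectors are hypothetical, invented for illustration. The first route buys an instant fix and leaves the debt in place; the second repays it by fixing the specificity at its source:

/* The debt: out-gun the offending selector right now */
.sidebar .promo a.btn { color: white; }
.btn--primary { color: red !important; }

/* The repayment: flatten specificity at the source, so no !important is needed */
.btn { color: white; }
.btn--primary { color: red; }

Both routes give the same rendered result today; only the second leaves the next developer free to override .btn--primary without reaching for another !important of their own.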
Takeaway Take time to familiarise yourself with the true concepts underlying technical debt and why it exists. Understand that technical debt can be good or bad. Admit that sometimes code is just of poor quality. Understanding these points will allow you to make better calls around what you might need to refactor and when, and what skills gaps you might have in your team. Sometimes it’s okay to cut corners if there is a tangible gain to be had in the immediate term. Technical debt is okay provided it is a sensible debt and you have intentions to pay it off. Technical debt is not necessarily synonymous with bad code, and bad code isn’t necessarily technical debt. Technical debt is code that was implemented given limited knowledge or resource, with the understanding that you would need to repay something in future. Technical debt is not inherently bad—failure to make repayments is. Periodically, it is justifiable—encouraged, even—to enter a debt in order to fulfil a more pressing matter. However, it is imperative that we begin making repayments as soon as we are capable, be that based on newly available time or knowledge. Bad code is worse than technical debt as it represents a lack of knowledge or quality control within a team. It needs a much more fundamental fix.",2016,Harry Roberts,harryroberts,2016-12-05T00:00:00+00:00,https://24ways.org/2016/we-need-to-talk-about-technical-debt/,code
305,CSS Writing Modes,"Since you may not have a lot of time, I’m going to start at the end, with the dessert. You can use a little-known, yet important and powerful CSS property to make text run vertically. Like this. Or instead of running text vertically, you can lay out a set of icons or interface buttons in this way. Or, of course, with anything on your page. The CSS I’ve applied makes the browser rethink the orientation of the world, and flow the layout of this element at a 90° angle to “normal”. Check out the live demo, highlight the headline, and see how the cursor is now sideways. See the Pen Writing Mode Demo — Headline by Jen Simmons (@jensimmons) on CodePen. The code for accomplishing this is pretty simple. h1 { writing-mode: vertical-rl; } That’s all it takes to switch the writing mode from the web’s default horizontal top-to-bottom mode to a vertical right-to-left mode. If you apply such code to the html element, the entire page is switched, affecting the scroll direction, too. In my example above, I’m telling the browser that only the h1 will be in this vertical-rl mode, while the rest of my page stays in the default of horizontal-tb. So now the dessert course is over. Let me serve up this whole meal, and explain the CSS Writing Modes Specification. Why learn about writing modes? There are three reasons I’m teaching writing modes to everyone—including western audiences—and explaining the whole system, instead of quickly showing you a simple trick. We live in a big, diverse world, and learning about other languages is fascinating. Many of you lay out pages in languages like Chinese, Japanese and Korean. Or you might be inspired to in the future. Using writing-mode to turn bits sideways is cool. This CSS can be used in all kinds of creative ways, even if you are working only in English. Most importantly, I’ve found understanding Writing Modes incredibly helpful when understanding Flexbox and CSS Grid. Before I learned Writing Mode, I felt like there was still a big hole in my knowledge, something I just didn’t get about why Grid and Flexbox work the way they do.
Once I wrapped my head around Writing Modes, Grid and Flexbox got a lot easier. Suddenly the Alignment properties, align-* and justify-*, made sense. Whether you know about it or not, the writing mode is the first building block of every layout we create. You can do what we’ve been doing for 25 years – and leave your page set to the default left-to-right direction, horizontal top-to-bottom writing mode. Or you can enter a world of new possibilities where content flows in other directions. CSS properties I’m going to focus on the CSS writing-mode property in this article. It has five possible options:

writing-mode: horizontal-tb;
writing-mode: vertical-rl;
writing-mode: vertical-lr;
writing-mode: sideways-rl;
writing-mode: sideways-lr;

The CSS Writing Modes Specification is designed to support a wide range of written languages in all our human and linguistic complexity. Which—spoiler alert—is pretty insanely complex. The global evolution of written languages has been anything but simple. So I’ve got to start with explaining some basic concepts of web page layout and writing systems. Then I can show you what these CSS properties do. Inline Direction, Block Direction, and Character Direction In the world of the web, there’s a concept of ‘block’ and ‘inline’ layout. If you’ve ever written display: block or display: inline, you’ve leaned on these concepts. In the default writing mode, blocks stack vertically starting at the top of the page and working their way down. Think of how a bunch of block-level elements stack—like a bunch of paragraphs—that’s the block direction. Inline is how each line of text flows. The default on the web is from left to right, in horizontal lines. Imagine this text that you are reading right now, being typed out one character at a time on a typewriter. That’s the inline direction. The character direction is which way the characters point. If you type a capital “A” for instance, on which side is the top of the letter? Different languages can point in different directions. Most languages have their characters pointing towards the top of the page, but not all. Put all three together, and you start to see how they work as a system. The default settings for the web work like this. Now that we know what block, inline, and character directions mean, let’s see how they are used in different writing systems from around the world. The four writing systems of CSS Writing Modes The CSS Writing Modes Specification handles all the use cases for four major writing systems: Latin, Arabic, Han and Mongolian. Latin-based systems One writing system dominates the world more than any other, reportedly covering about 70% of the world’s population. The text is horizontal, running from left to right, or LTR. The block direction runs from top to bottom. It’s called the Latin-based system because it includes all languages that use the Latin alphabet, including English, Spanish, German, French, and many others. But there are many non-Latin-alphabet languages that also use this system, including Greek, Cyrillic (Russian, Ukrainian, Bulgarian, Serbian, etc.), Brahmic scripts (Devanagari, Thai, Tibetan), and many more. You don’t need to do anything in your CSS to trigger this mode. This is the default. Best practices, however, dictate that you declare in your opening <html> element which language and which direction (LTR or RTL) you are using.
This website, for instance, uses <html lang='en-gb' dir='ltr'> to let the browser know this content is published in Great Britain’s version of English, in a left to right direction. Arabic-based systems Arabic, Hebrew and a few other languages run the inline direction from right to left. This is commonly known as RTL. Note that the inline direction still runs horizontally. The block direction runs from top to bottom. And the characters are upright. It’s not just the flow of text that runs from right to left, but everything about the layout of the website. The upper right-hand corner is the starting position. Important things are on the right. The eyes travel from right to left. So, typically RTL websites use layouts that are just like LTR websites, only flipped. On websites that support both LTR and RTL, like the United Nations’ site at un.org, the two layouts are mirror images of each other. For many web developers, our experiences with internationalization have focused solely on supporting Arabic and Hebrew script. CSS layout hacks for internationalization & RTL To prepare an LTR project to support RTL, developers have had to create all sorts of hacks. For example, the Drupal community started a convention of marking every margin-left and -right, every padding-left and -right, every float: left and float: right with the comment /* LTR */. Then later developers could search for each instance of that exact comment, and create stylesheets to override each left with right, and vice versa. It’s a tedious and error-prone way to work. CSS itself needed a better way to let web developers write their layout code once, and easily switch language directions with a single command. Our new CSS layout system does exactly that. Flexbox, Grid and Alignment use start and end instead of left and right. This lets us define everything in relationship to the writing system, and switch directions easily. By writing justify-content: flex-start, justify-items: end, and eventually margin-inline-start: 1rem, we have code that doesn’t need to be changed. This is a much better way to work. I know it can be confusing to think through start and end as replacements for left and right. But it’s better for any multilingual project, and it’s better for the web as a whole. Sadly, I’ve seen CSS preprocessor tools that claim to “fix” the new CSS layout system by getting rid of start and end and bringing back left and right. They want you to use their tool, write justify-content: left, and feel self-righteous. It seems some folks think the new way of working is broken and should be discarded. It was created, however, to fulfill real needs. And to reflect a global internet. As Bruce Lawson says, WWW stands for the World Wide Web, not the Wealthy Western Web. Please don’t try to convince the industry that there’s something wrong with no longer being biased towards western culture. Instead, spread the word about why this new system is here. Spend a bit of time drilling the concept of inline and block into your head, and getting used to start and end. It will be second nature soon enough. I’ve also seen CSS preprocessors that let us use this new way of thinking today, even as all the parts aren’t fully supported by browsers yet. Some tools let you write text-align: start instead of text-align: left, and let the preprocessor handle things for you. That is terrific, in my opinion. A great use of the power of a preprocessor to help us switch over now. But let’s get back to RTL.
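Before we do, here is a minimal sketch of the difference those logical properties make in practice (the class name is hypothetical, invented for illustration). With physical properties, every directional rule needs a mirrored override for RTL; with the logical properties described above, one rule follows the writing mode:

/* Physical properties: a second, flipped rule is required for RTL */
.media__body { margin-left: 1rem; text-align: left; }
[dir='rtl'] .media__body { margin-left: 0; margin-right: 1rem; text-align: right; }

/* Logical properties: one rule serves both directions */
.media__body { margin-inline-start: 1rem; text-align: start; }

As noted above, browser support for margin-inline-start was still arriving at the time of writing, so a preprocessor may need to do that translation for you for now.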
How to declare your direction You don’t want to use CSS to tell the browser to switch from an LTR language to RTL. You want to do this in your HTML. That way the browser has the information it needs to display the document even if the CSS doesn’t load. This is accomplished mainly on the html element. You should also declare your main language. As I mentioned above, the 24 ways website is using <html lang='en-gb' dir='ltr'> to declare the LTR direction and the use of British English. The UN Arabic website uses <html lang='ar' dir='rtl'> to declare the site as an Arabic site, using an RTL layout. Things get more complicated when you’ve got a page with a mix of languages. But I’m not going to get into all of that, since this article is focused on CSS and layouts, not explaining everything about internationalization. Let me just leave direction here by noting that much of the heavy work of laying out the characters which make up each word is handled by Unicode. If you are interested in learning more about LTR, RTL and bidirectional text, watch this video: Introduction to Bidirectional Text, a presentation by Elika Etemad. Meanwhile, let’s get back to CSS. The writing mode CSS for Latin-based and Arabic-based systems For both of these systems—Latin-based and Arabic-based, whether LTR or RTL—the same CSS property applies for specifying the writing mode: writing-mode: horizontal-tb. That’s because in both systems, the inline text flow is horizontal, while the block direction is top-to-bottom. This is expressed as horizontal-tb. horizontal-tb is the default writing mode for the web, so you don’t need to specify it unless you are overriding something else higher up in the cascade. You can just imagine that every site you’ve ever built came with: html { writing-mode: horizontal-tb; } Now let’s turn our attention to the vertical writing systems. Han-based systems This is where things start to get interesting. Han-based writing systems include CJK languages, Chinese, Japanese, Korean and others. There are two options for laying out a page, and sometimes both are used at the same time. Much of CJK text is laid out like Latin-based languages, with a horizontal top-to-bottom block direction, and a left-to-right inline direction. This is the more modern way of doing things, started in the 20th century in many places, and further pushed into domination by the computer and later the web. The CSS to do this bit of the layout is the same as above: section { writing-mode: horizontal-tb; } Or, you know, do nothing, and get that result as a default. Alternatively, Han-based languages can be laid out in a vertical writing mode, where the inline direction runs vertically, and the block direction goes from right to left. See both options in this diagram: Note that the horizontal text flows from left to right, while the vertical text flows from right to left. Wild, eh? This Japanese issue of Vogue magazine is using a mix of writing modes. The cover opens on the left spine, opposite of what an English magazine does. This page mixes English and Japanese, and typesets the Japanese text in both horizontal and vertical modes. Under the title “Richard Stark” in red, you can see a passage that’s horizontal-tb and LTR, while the longer passage of text at the bottom of the page is typeset vertical-rl. The red enlarged cap marks the beginning of that passage. The long headline above the vertical text is typeset LTR, horizontal-tb. The details of how to set the default of the whole page will depend on your use case.
But each element, each headline, each section, each article can be marked to flow the opposite of the default however you’d like. For example, perhaps you leave the default as horizontal-tb, and specify your vertical elements like this: div.articletext { writing-mode: vertical-rl; } Or alternatively you could change the default for the page to a vertical orientation, and then set specific elements to horizontal-tb, like this: html { writing-mode: vertical-rl; } h2, .photocaptions, section { writing-mode: horizontal-tb; } If your page has a sideways scroll, then the writing mode will determine whether the page loads with upper left corner as the starting point, and scroll to the right (horizontal-tb as we are used to), or if the page loads with the upper right corner as the starting point, scrolling to the left to display overflow. Here’s an example of that change in scrolling direction, in a CSS Writing Mode demo by Chen Hui Jing. Check out her demo — you can switch from horizontal to vertical writing modes with a checkbox and see the difference. Mongolian-based systems Now, hopefully so far all of this kind of makes sense. It might be a bit more complicated than expected, but it’s not so hard. Well, enter the Mongolian-based systems. Mongolian is also a vertical script language. Text runs vertically down the page. Just like Han-based systems. There are two major differences. First, the block direction runs the other way. In Mongolian, block-level elements stack from left to right. Here’s a drawing of how Wikipedia would look in Mongolian if it were laid out correctly. Perhaps the Mongolian version of Wikipedia will be redone with this layout. Now you might think, that doesn’t look so weird. Tilt your head to the left, and it’s very familiar. The block direction starts on the left side of the screen and goes to the right. The inline direction starts on the top of the page and moves to the bottom (similar to RTL text, just turned 90° counter-clockwise). But here comes the other huge difference. The character direction is “upside down”. The tops of the Mongolian characters are not pointing to the left, towards the start edge of the block direction. They point to the right. Like this: Now you might be tempted to ignore all this. Perhaps you don’t expect to be typesetting Mongolian content anytime soon. But here’s why this is important for everyone — the way Mongolian works defines the result of writing-mode: vertical-lr. And it means we cannot use vertical-lr for typesetting content in other languages in the way we might otherwise expect. If we took what we know about vertical-rl and guessed how vertical-lr works, we might imagine this: But that’s wrong. Here’s how they actually compare: See the unexpected situation? In both writing-mode: vertical-rl and writing-mode: vertical-lr, Latin text is rotated clockwise. Neither writing mode lets us rotate text counter-clockwise. If you are typesetting Mongolian content, apply this CSS in the same way you would apply writing-mode to Han-based writing systems. To the whole page on the html element, or to specific parts of the page like this: section { writing-mode: vertical-lr; } Now, if you are using writing-mode for a graphic design effect on a language that is otherwise typeset horizontally, I don’t think writing-mode: vertical-lr is useful. If the text wraps onto two lines, it stacks in a very unexpected way. So I’ve sort of obliterated it from my toolkit. I find myself using writing-mode: vertical-rl a lot. And never using -lr. Hm.
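If you want to see that unexpected stacking for yourself, here is a minimal sketch to drop into a test page (the class names and copy are placeholders, not from any demo above). Constrain the height so the headline wraps, then compare where the second line lands:

<h1 class='rl'>A headline just long enough to wrap</h1>
<h1 class='lr'>A headline just long enough to wrap</h1>

.rl, .lr { height: 40vh; } /* force a wrap */
.rl { writing-mode: vertical-rl; } /* second line stacks to the left, like a rotated paragraph */
.lr { writing-mode: vertical-lr; } /* second line stacks to the right, which reads strangely for Latin text */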
Writing modes for graphic design So how do we use writing-mode to turn English headlines sideways? We could rely on transform: rotate(). Here are two examples, one for each direction. (By the way, each of these demos uses CSS Grid for their overall layout, so be sure to test them in a browser that supports CSS Grid, like Firefox Nightly.) In this demo 4A, the text is rotated clockwise using this code: h1 { writing-mode: vertical-rl; } In this demo 4B, the text is rotated counter-clockwise using this code: h1 { writing-mode: vertical-rl; transform: rotate(180deg); text-align: right; } I use vertical-rl to rotate the text so that it takes up the proper amount of space in the overall flow of the layout. Then I rotate it 180° to spin it around to the other direction. And then I use text-align: right to get it to rise up to the top of its container. This feels like a hack, but it’s a hack that works. Now what I would like to do instead is use another CSS value that was designed for this use case — one of the two other options for writing mode. If I could, I would lay out example 4A with: h1 { writing-mode: sideways-rl; } And lay out example 4B with: h1 { writing-mode: sideways-lr; } The problem is that these two values are only supported in Firefox. None of the other browsers recognize sideways-*. Which means we can’t really use it yet. In general, the writing-mode property is very well supported across browsers. So I’ll use writing-mode: vertical-rl for now, with the transform: rotate(180deg); hack to fake the other direction. There’s much more to what we can do with the CSS designed to support multiple languages, but I’m going to stop with this intermediate introduction. If you do want a bit more of a taste, look at this example that adds text-orientation: upright; to the mix — turning the individual letters of the Latin font to be upright instead of sideways. It’s this demo 4C, with this CSS applied: h1 { writing-mode: vertical-rl; text-orientation: upright; text-transform: uppercase; letter-spacing: -25px; } You can check out all my Writing Modes demos at labs.jensimmons.com/#writing-modes. I’ll leave you with this last demo. One that applies a vertical writing mode to the subheadlines of a long article. I like how small details like this can really bring a fresh feeling to the content. See the Pen Writing Mode Demo — Article Subheadlines by Jen Simmons (@jensimmons) on CodePen.",2016,Jen Simmons,jensimmons,2016-12-23T00:00:00+00:00,https://24ways.org/2016/css-writing-modes/,code
306,What next for CSS Grid Layout?,"In 2012 I wrote an article for 24 ways detailing a new CSS Specification that had caught my eye, at the time with an implementation only in Internet Explorer. What I didn’t realise at the time was that CSS Grid Layout was to become a theme on which I would base the next four years of research, experimentation, writing and speaking. As I write this article in December 2016, we are looking forward to CSS Grid Layout being shipped in Chrome and Firefox. What will ship early next year in those browsers is expanded and improved from the early implementation I explored in 2012. Over the last four years the spec has been developed as part of the CSS Working Group process, and has had input from browser engineers, specification writers and web developers. Use cases have been discussed, and features added. The CSS Grid Layout specification is now a Candidate Recommendation. This status means the spec is, to all intents and purposes, finished.
The discussions now happening are on fine implementation details, and not new feature ideas. It makes sense to draw a line under a specification in order that browser vendors can ship complete, interoperable implementations. That approach is good for all of us; it makes development far easier if we know that a browser supports all of the features of a specification, rather than working out which bits are supported. However it doesn’t mean that work stops here, and that new use cases and features can’t be proposed for future levels of Grid Layout. Therefore, in this article I’m going to take a look at some of the things I think grid layout could do in the future. I would love for these thoughts to prompt you to think about how Grid - or any CSS specification - could better suit the use cases you have. Subgrid - the missing feature of Level 1 The implementation of CSS Grid Layout in Chrome, Firefox and Webkit is comparable and very feature complete. There is however one standout feature that has not been implemented in any browser as yet - subgrid. Once you set the value of the display property to grid, any direct children of that element become grid items. This is similar to the way that flexbox behaves: set display: flex and all direct children become flex items. The behaviour does not apply to children of those items. You can nest grids, just as you can nest flex containers, but the child grids have no relationship to the parent. Nesting Grids by Rachel Andrew (@rachelandrew) on CodePen. The subgrid behaviour would enable the grid defined on the parent to be used by the children. I feel this would be most useful when working with a multiple column flexible grid - for example a typical 12 column grid. I could define a grid on a wrapper, then position UI elements on that grid - from the major structural elements of my page down through the child elements to a form where I wanted the field to line up with items above. The specification contained an initial description of subgrid, with a value of subgrid for grid-template-columns and grid-template-rows; you can read about this in the August 2015 Working Draft. This version of the specification would have meant you could declare a subgrid in one dimension only, and create a different set of tracks in the other. In an attempt to get some implementation of subgrid, a revised specification was proposed earlier this year. This gives a single subgrid value of the display property. As we now cannot specify a subgrid on rows OR columns, this limits us to a subgrid that works in two dimensions. At this point neither version has been implemented by anyone, and subgrids are marked as “at risk” in the Level 1 Candidate Recommendation. With regard to ‘at-risk’ this is explained as follows: “‘At-risk’ is a W3C Process term-of-art, and does not necessarily imply that the feature is in danger of being dropped or delayed. It means that the WG believes the feature may have difficulty being interoperably implemented in a timely manner, and marking it as such allows the WG to drop the feature if necessary when transitioning to the Proposed Rec stage, without having to publish a new Candidate Rec without the feature first.” If we lose subgrid from Level 1, as it looks likely that we will, this does give us a chance to further discuss and iterate on that feature. My current thoughts are that I’m not completely happy about subgrids being tied to both dimensions and feel that a return to the earlier version, or something like it, would be preferable.
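For reference, here is a sketch of how the two proposed syntaxes would be written. Neither is implemented in any browser as I write this, so treat it as an illustration of the proposals above rather than usable code (the selectors are hypothetical):

/* August 2015 draft: opt in one dimension at a time */
.wrapper { display: grid; grid-template-columns: repeat(12, 1fr); }
.wrapper > form { grid-column: 3 / 11; display: grid; grid-template-columns: subgrid; }

/* 2016 revised proposal: a single value, tied to both dimensions */
.wrapper > form { grid-column: 3 / 11; display: subgrid; }

In both cases the intent is the same: fields inside the form would line up with the twelve column tracks defined on .wrapper, rather than with a new, unrelated grid.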
Further reading about subgrid
My post from 2015 detailing why I feel subgrid is important
My post based on the revised specification
Eric Meyer’s thoughts on subgrid
Write-up of a discussion from Igalia who work on the Blink and Webkit browser implementations

Styling cells, tracks and areas Having defined a grid with CSS Grid Layout you can place child elements into that grid; however, what you can’t do is style the grid tracks or cells. Grid doesn’t even go as far as multiple column layout, which has the column-rule properties. In order to set a background colour on a grid cell at the moment you would have to add an empty HTML element or insert some generated content as in the below example. I’m using a 1 pixel grid gap to fake lines between grid cells, and empty div elements, and some generated content to colour those cells. Faked backgrounds and borders by Rachel Andrew (@rachelandrew) on CodePen. I think it would be a nice addition to Grid Layout to be able to directly add backgrounds and borders to cells, tracks and areas. There is an Issue raised in the CSS WG Drafts repository for Decorative Grid Cell pseudo-elements, if you want to add thoughts to that. More control over auto placement If you haven’t explicitly placed the direct children of your grid element they will be laid out according to the grid auto placement rules. You can see in this example how we have created a grid and the items are placing themselves into cells on that grid. Items auto-place on a defined grid by Rachel Andrew (@rachelandrew) on CodePen. The auto-placement algorithm is very cool. We can position some items, leaving others to auto-place; we can set items to span more than one track; we can use the grid-auto-flow property with a value of dense to backfill gaps in our grid. Websafe colors meet CSS Grid (auto-placement demo) by Rachel Andrew (@rachelandrew) on CodePen. I think however this could be taken further. In this issue posted to my CSS Grid AMA on GitHub, the question is raised as to whether it would be possible to ask grid to place items on the next available line of a certain name. This would allow you to skip tracks in the grid when using auto-placement, an issue that has also been raised by Emil Björklund in this post to the www-style list prior to spec discussion moving to GitHub. I think there are probably similar issues; if you can think of one, add a comment here. Creating non-rectangular grid areas A grid area is a collection of grid cells, defined by setting the start and end lines for columns and rows or by creating the area in the value of the grid-template-areas property as shown below. Those areas, however, must be rectangular - you can’t create an L-shaped or otherwise non-regular shape. Grid Areas by Rachel Andrew (@rachelandrew) on CodePen. Perhaps in the future we could define an L-shape or other non-rectangular area into which content could flow, as in the below currently invalid code where a quote is embedded into an L-shaped content area.

.wrapper {
  display: grid;
  grid-template-areas:
    ""sidebar header header""
    ""sidebar content quote""
    ""sidebar content content"";
}

Flowing content through grid cells or areas Some use cases I have seen perhaps are not best solved by grid layout at all, but would involve grid working alongside other CSS specifications. As I detail in this post, there is a class of problems that I believe could be solved with the CSS Regions specification, or a revised version of that spec.
Being able to create a grid layout, then flow content through the areas could be very useful. Jen Simmons presented to the CSS Working Group at the Lisbon meeting a suggestion as to how this might work. In a post from earlier this year I looked at a collection of ideas from specifications that include Grid, Regions and Exclusions. These working notes from my own explorations might prompt ideas of your own. Solving the keyboard/layout disconnect One issue that grid, and flexbox to a lesser extent, raises is that it is very easy to end up with a layout that is disconnected from the underlying markup. This raises problems for people navigating using the keyboard, as when tabbing around the document you can find yourself jumping to unexpected places. The problem is explained by Léonie Watson with reference to flexbox in Flexbox and the keyboard navigation disconnect. The grid layout specification currently warns against creating such a disconnect; however, I think it will take careful work by web developers in order to prevent this. It’s also not always as straightforward as it seems. In some cases you want the logical order to follow the source, and in others it would make more sense to follow the visual. People are thinking about this issue, as you can read in this mailing list discussion. Bringing your ideas to the future of Grid Layout When I’m not getting excited about new CSS features, my day job involves working on a software product - the CMS that is serving this very website, Perch. When we launched Perch there were many use cases that we had never thought of, despite having a good idea of what might be needed in a CMS and thinking through lots of use cases. The additional use cases brought to our attention by our customers and potential customers informed the development of the product from launch. The same will be true for Grid Layout. As a “product” grid has been well thought through by many people. Yet however hard we try there will be use cases we just didn’t think of. You may well have one in mind right now. That’s ok, because as with any CSS specification, once Level One of grid is complete, work can begin on Level Two. The feature set of Level Two will be informed by the use cases that emerge as people get to grips with what we have now. This is where you get to contribute to the future of layout on the web. When you hit up against the things you cannot do, don’t just mutter about how the CSS Working Group don’t listen to regular developers and code around the problem. Instead, take a few minutes and write up your use case. Post it to your blog, to Medium, create a CodePen and go to the CSS Working Group GitHub specs repository and post an issue there. Write some pseudo-code, draw a picture, just make sure that the use case is described in enough detail that someone can see what problem you want grid to solve. It may be that - as with any software development - your use case can’t be solved in exactly the way you suggest. However once we have a use case, collected with other use cases, methods of addressing that class of problems can be investigated. I opened this article by explaining I’d written about grid layout four years ago, and how we’re only now at a point where we will have Grid Layout available in the majority of browsers. Specification development, and implementation into browsers, takes time. This is actually a good thing, as it’s impossible to take back CSS once it is out there and being used by production websites.
We want CSS in the wild to be well thought through and that takes time. So don’t feel that because you don’t see your use case added to a spec immediately it has been ignored. Do your future self a favour and write down your frustrations or thoughts, and we can all make sure that the web platform serves the use cases we’re dealing with now and in the future.",2016,Rachel Andrew,rachelandrew,2016-12-12T00:00:00+00:00,https://24ways.org/2016/what-next-for-css-grid-layout/,code
307,Get the Balance Right: Responsive Display Text,"Last year in 24 ways I urged you to Get Expressive with Your Typography. I made the case for grabbing your readers’ attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout. When setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader’s preferences. You can take the same approach with display text by choosing larger sizes from the same scale. However, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window. Introducing viewport units With CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference pixels[1]:

vw - viewport width
vh - viewport height
vmin - viewport height or width, whichever is smaller
vmax - viewport height or width, whichever is larger

In one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. The following makes the heading font size 13% of the viewport width: h1 { font-size: 13vw; } So for a selection of widths, the rendered font size would be:

Viewport width    Rendered font size (px) at 13vw
320               42
768               100
1024              133
1280              166
1920              250

A problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation. Landscape text is much bigger than portrait text when using vw units. The proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the consistency and considered design you would want when designing to make an impression. However if the text was the same size in both orientations, the visual effect would be much more consistent. This is where vmin comes into its own. Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering.
h1 { font-size: 13vmin; } Landscape text is consistent with portrait text when using vmin units. Comparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude (rendered font sizes in px):

Viewport        13vw    13vmin
320 × 480       42      42
414 × 736       54      54
768 × 1024      100     100
1024 × 768      133     100
1280 × 720      166     94
1366 × 768      178     100
1440 × 900      187     117
1680 × 1050     218     137
1920 × 1080     250     140
2560 × 1440     333     187

Hybrid font sizing Using viewport units to set text in direct proportion to screen dimensions works well when sizing display text. It can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading: h1 { font-size: 13vmin; } h2 { font-size: 5vmin; } Using vmin alone for supporting text can scale it too quickly The balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens. A solution to this is to use a hybrid method of sizing text[2]. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. For example: h2 { font-size: calc(0.5rem + 2.5vmin); } For a 320 px wide screen, the font size will be 16 px, calculated as follows: (0.5 × 16) + (320 × 0.025) = 8 + 8 = 16px For a 768 px wide screen, the font size will be 27 px: (0.5 × 16) + (768 × 0.025) = 8 + 19 = 27px This results in a more balanced subheading that doesn’t take emphasis away from the main heading: To give you an idea of the effect of using a hybrid approach, here’s a side-by-side comparison of hybrid and viewport text sizing; top: calc() hybrid method; bottom: vmin only:

Viewport width    320   480   768   960   1080   1280   1440
calc() hybrid     16    20    27    32    35     40     44
vmin only         16    24    38    48    54     64     72

Over this festive period, try experimenting with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting.

[1] A reference pixel is based on the logical resolution of a device, which takes into account double density screens such as Retina displays.
[2] For even more sophisticated uses of hybrid text sizing see the work of Mike Riethmuller.",2016,Richard Rutter,richardrutter,2016-12-09T00:00:00+00:00,https://24ways.org/2016/responsive-display-text/,code
308,How to Make a Chrome Extension to Delight (or Troll) Your Friends,"If you’re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends.
And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius. But today is different. You’re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack… right? Nope. If there’s one thing 2016 has taught me, it’s that we—the self-serious, world-changing tech movers and shakers of the universe—haven’t changed one bit from our younger, more delightable selves. How do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like “Stinky Dinosaur” or “Tiny Potato”. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there’s still plenty of room in our big, grown-up hearts for fun. Today, we’re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension. Here’s a sneak peek at my final result featuring my partner, my cat, and me in cheerfully weird accessories. Your result will look however you want it to. Along the way, we’ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful. What you’ll need Adobe Illustrator (or a similar illustration program to export PNG) Some images of your friends A text editor Note: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you’ll find ways to do it. Getting started Download a local copy of the boilerplate for today’s tutorial here, and open it in a text editor. Inside, you’ll find a simple web app that you can run in Chrome. Open index.html in Chrome. You should see a grey page that says “Noname”. Open template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template. Note: We’re using Google Chrome to build and preview this application because the end result is a Chrome extension. This means that the application isn’t totally cross-browser compatible, but that’s okay. Step 1: Gather your friends The first thing to do is choose who your muses are. Since the holidays are upon us, I’d suggest finding inspiration in your family. Create your artwork For each person, find an image where their face is pointed as forward as possible. Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. Here’s my crew: As you can see, my use of the word “family” extends to cats. There’s no judgement here. Notice that some of my photos don’t completely fill the artboard–that’s fine.
The images will be clipped into ovals when they’re rendered in the application. Now, export your images by following these steps: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your faces. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your images to config.js Open scripts/config.js. This is where you configure your extension. Add key-value pairs to the faces object. The key should be the person’s name, and the value should be the filepath to the image. faces: { leslie: 'images/face_leslie.png', kyle: 'images/face_kyle.png', beep: 'images/face_beep.png' } The application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads. The only thing that’s special about the faces object is that the person’s name will also be displayed when their face is chosen. Now, when you refresh the project in Chrome, you should see one of your friends along with their name, like this: Congrats, you’re off and running! Step 2: Add adjectives Now that you’ve loaded your friends into the application, it’s time to call them names. This step definitely yields the most laughs for the least amount of effort. Add a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering… prefixes: [ 'Loving', 'Drunk', 'Chatty', 'Merry', 'Creepy', 'Introspective', 'Cheerful', 'Awkward', 'Unrelatable', 'Hungry', ... ] When you refresh Chrome, you should see one of these words prefixed before your friend’s name. Voila! Step 3: Choose your color palette Real talk: I’m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you’ve been blessed with the gift of color aptitude, skip ahead. How to choose colors To create a color palette, I start by going to Coolors.co, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like. Copy these colors into your swatches in Adobe Illustrator. They’ll be the base for any illustrations you create later. Now you need a set of background colors. Here’s my trick to making these consistent with your illustration palette without completely blending in. Use the “Adjust Palette” tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors. Add your background colors to config.js Copy your hex codes into the bgColors array in config.js. bgColors: [ '#FFDD77', '#FF8E72', '#ED5E84', '#4CE0B3', '#9893DA', ... ] Now when you go back to Chrome and refresh the page, you’ll see your new palette! Step 4: Accessorize This is the fun part. We’re going to illustrate objects, accessories, lizards—whatever you want—and layer them on top of your friends. Your objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like “hats” or “glasses”. This will allow combinations of accessories to show at once, without showing two of the same type on the same person.
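If you're curious how that one-pick-per-group randomness might work under the hood, here's a minimal sketch. The pickRandom helper and the shortened config are hypothetical stand-ins for illustration, not the boilerplate's actual code:

function pickRandom(options) {
  // Math.random() returns a value in [0, 1), so the index is always in bounds
  return options[Math.floor(Math.random() * options.length)];
}

var config = {
  faces: { leslie: 'images/face_leslie.png', kyle: 'images/face_kyle.png' },
  prefixes: ['Loving', 'Drunk', 'Chatty'],
  bgColors: ['#FFDD77', '#FF8E72']
};

// one random pick per group, re-rolled each time a new tab loads
var name = pickRandom(Object.keys(config.faces)); // e.g. 'kyle'
var prefix = pickRandom(config.prefixes); // e.g. 'Chatty'
var bgColor = pickRandom(config.bgColors); // e.g. '#FF8E72'
console.log(prefix + ' ' + name + ' on ' + bgColor);
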
Create a group of accessories To get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don’t feel like illustrating, you can use cut-out images instead. Next, follow the same steps as you did when you exported the faces. Here they are again: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your hats. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your accessories to config.js In config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array: customProps: { hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ] } Refresh Chrome and behold, accessories! Create as many more accessories as you want Repeat the steps above to create as many groups of accessories as you want. I went on to make glasses and hairstyles, so my final Illustrator file looks like this: The last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses: customProps: { hair: [ 'images/hair_bowl.png', 'images/hair_bob.png' ], hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ], glasses: [ 'images/glasses_aviators.png', 'images/glasses_monacle.png' ] } And, there you have it! Randomly generated friends with random accessories. Feel free to go much crazier than I did. I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress… Step 5: Publish it It’s time to put this in your new tabs! You have two options: Publish it as a Chrome extension in the Chrome Web Store. Host it as a website and point to it with the New Tab Redirect extension. Today, we’re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I’d suggest sticking to Option #2. How to make a simple Chrome extension to replace the new tab page All you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement: { ""manifest_version"": 2, ""name"": ""Your extension name"", ""version"": ""1.0"", ""chrome_url_overrides"" : { ""newtab"": ""index.html"" } } To test your extension, you’ll need to run it in Developer Mode. Here’s how to do that: Go to the Extensions page in Chrome by navigating to chrome://extensions/. Tick the checkbox in the upper-right corner labelled “Developer Mode”. Click “Load unpacked extension…” and select this project. If everything is running smoothly, you should see your project when you open a new tab.
If there are any errors, they should appear in a yellow box on the Extensions page. Voila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you. Share the love Now that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they’ve become a part of everyday life–just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats. At the end of the day, let’s make something that makes us happy. Cheers!",2016,Leslie Zacharkow,lesliezacharkow,2016-12-08T00:00:00+00:00,https://24ways.org/2016/how-to-make-a-chrome-extension/,code 309,HTTP/2 Server Push and Service Workers: The Perfect Partnership,"Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2 which has a killer feature known as HTTP/2 Server Push. During this year’s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics that he covered immediately piqued my interest: the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination. If you’ve never heard of HTTP/2 Server Push before, fear not - it’s not as scary as it sounds. HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users. What is HTTP/2 Server Push? When a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make multiple HTTP requests for each resource that it needs and begin downloading them one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower. Before we go any further, let’s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this: As you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs. This is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and “pushes” them to the browser. This happens when the first HTTP request for the web page takes place and it eliminates an extra round trip, making your site faster. Using the same example above, let’s “push” the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like.
Whoa, that looks different - let’s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn’t need to make an extra HTTP request to the server, instead it receives the critical files it needs all at once. Much better! There are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I’ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js. In this post, I’m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings. HTTP/2 Server Push is awesome, but it isn’t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn’t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache “aware”. This means that the server isn’t aware of the state of your client. If you’ve visited a web page before, the server isn’t aware of this and will push the resource again anyway, regardless of whether or not you need it. HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory browsers can cancel HTTP/2 Server Push requests if they’ve already got something in cache but unfortunately no browsers currently support it. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs. HTTP/2 Server Push & Service Workers So where do Service Workers fit in? Believe it or not, when combined, HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you’ve not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as a middleman between the browser and the network and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand. Using Service Workers, you can easily cache assets on a user’s device. This means when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in cache on the user’s device. If it does, then it can simply return and serve it directly from the device instead of ever hitting the server. Let’s stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load! Let’s put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files.
<!DOCTYPE html> <html> <head> <meta charset=""UTF-8""> <title>HTTP2 Push Demo</title> </head> <body> <h1>HTTP2 Push</h1> <img src=""./images/beer-1.png"" width=""200"" height=""200"" /> <img src=""./images/beer-2.png"" width=""200"" height=""200"" /> <br> <br> <img src=""./images/beer-3.png"" width=""200"" height=""200"" /> <img src=""./images/beer-4.png"" width=""200"" height=""200"" /> <!-- Scripts --> <script async src=""./js/promise.min.js""></script> <script async src=""./js/fetch.js""></script> <script> // Register the service worker if ('serviceWorker' in navigator) { navigator.serviceWorker.register('./service-worker.js').then(function(registration) { // Registration was successful console.log('ServiceWorker registration successful with scope: ', registration.scope); }).catch(function(err) { // registration failed :( console.log('ServiceWorker registration failed: ', err); }); } </script> </body> </html> In the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker toolbox. It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push. The Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns. I’ve added the following code to the service-worker.js file. (global => { 'use strict'; // Load the sw-toolbox library. importScripts('/js/sw-toolbox/sw-toolbox.js'); // The route for any requests toolbox.router.get('/push', global.toolbox.fastest); toolbox.router.get('/images/(.*)', global.toolbox.fastest); toolbox.router.get('/js/(.*)', global.toolbox.fastest); // Ensure that our service worker takes control of the page as soon as possible. global.addEventListener('install', event => event.waitUntil(global.skipWaiting())); global.addEventListener('activate', event => event.waitUntil(global.clients.claim())); })(self); Let’s break this code down further. Around line 4, I am importing the Service Worker toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I’ve told the toolbox to listen to these routes too. The best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn’t exist in cache, the code above will add it into cache so that we can retrieve it when it’s needed again. You may also notice the code global.toolbox.fastest - this is important because it gives you the compromise of fulfilling from the cache immediately, while firing off an additional HTTP request updating the cache for the next visit. But what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to “push” everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the user’s device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing! Using this technique, the waterfall chart for a repeat visit should look like the image below.
If you look closely at the image above, you’ll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache. Whether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user’s browser doesn’t support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience. HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance. Summary When used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site’s load times. Together they mean super fast first load times and even faster repeat views to a web page. Whilst this technique is really effective, it’s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don’t just simply “push” everything; it could actually lead to having slower page load times. If you’d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information. All of the code in this example is available on my GitHub repo - if you have any questions, please submit an issue and I’ll get back to you as soon as possible. If you’d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone’s talk at this year’s Chrome Developer Summit. I’d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live!",2016,Dean Hume,deanhume,2016-12-15T00:00:00+00:00,https://24ways.org/2016/http2-server-push-and-service-workers/,code 310,Fairytale of new Promise,"There are only four good Christmas songs. I know, yeah, JavaScript or whatever. We’ll get to that in a minute, I promise. First—and I cannot stress this enough—there are four good Christmas songs. You’re free to disagree with me here, of course, but please try to understand that you will be wrong. They don’t all have the most safe-for-work titles; I can’t list all of them here, but if you choose to let your fingers do the walkin’ to your nearest search engine, I will say that one was released by the band FEAR way back in 1982 and one was on Run the Jewels’ self-titled debut album. The lyrics are a hell of a lot worse than the titles, so maybe wait until you get home from work before you queue them up. Wear headphones, if you’ve got thin walls. For my money, though, the two I can reference by name are the top of that small heap: Tom Waits’ Christmas Card from a Hooker in Minneapolis, and The Pogues’ Fairytale of New York. The former once held the honor of being the only good Christmas song—about which I was also unequivocally correct, right up until I changed my mind. It’s not the song up for discussion today, but feel free to familiarize yourself just the same—I’ll wait. Fairytale of New York—the top of the list—starts out by hinting at some pretty standard holiday fare; dreams and cheer and whatnot. Typical seasonal stuff, so long as you ignore that the story seems to be recounted as a drunken flashback in a jail cell.
You can probably make a few guesses at the underlying spirit of the song based on that framing: following a lucky break, our bright-eyed protagonists move to New York in search of fame and fortune, only to quickly descend into bad decisions, name-calling, and vaguely festive chaos. This song speaks to me on a couple of levels, not the least of which is as a retelling of my day-to-day interactions with JavaScript. Each day’s melody might vary a little bit, granted, but the lyrics almost always follow a pretty clear arc toward “PARENTAL ADVISORY: EXPLICIT CONTENT.” You might have heard a similar tune yourself; it goes a little somethin’ like setTimeout(function() { console.log( ""this should be happening last"" ); }, 1000);. Callbacks are calling callbacks calling callbacks and something is happening somewhere, as the JavaScript interpreter plods through our code start-to-finish, line-by-line, step-by-step. If we need to take actions based on the results of something that could take its sweet time resolving, well, we’d better fiddle with the order of things to make sure those actions don’t happen too soon. “But I can see a better time,” as the song says, “when all our dreams come true.” So, with that Pogues brand of holiday spirit squarely in mind—by which I mean that your humble narrator is almost certainly drunk, and may be incarcerated at the time of publication—gather ’round for a story of hope, of hardships, of semi-asynchronous JavaScript programming, and ultimately: of Promise unfulfilled. The Main Thread JavaScript is single-minded, in a manner of speaking. Anything we tell the JavaScript runtime to do goes into a single-file queue; you’ll see it referred to as the “main thread,” or “UI thread.” That thread can be shared by a number of critical browser processes, like rendering and re-rendering parts of the page, and user interactions ranging from the simple—say, highlighting text—to the more complex—interacting with form elements. If that sounds a little scary to you, well, that’s because it is. The more complex our scripts, the more we’re cramming into that single-file main thread, to be processed along with—say—some of our CSS animations. Too much JavaScript clogging up the main thread means a lot of user-facing performance jankiness. Getting away from that single thread is a big part of all the excitement around Web Workers, which allow us to offload entire scripts into their own dedicated background threads—though not without limitations of their own. Outside of Web Workers, that everything-thread is the only game in town: scripts executed one thing at a time, functions calling functions calling functions, taking numbers and crowding up the same deli counter as a user’s interactions—which, in this already strained metaphor, would be ham, I guess? Asynchronous JavaScript Now, those queued actions may include asynchronous things. For example: AJAX callbacks, setTimeout/setInterval, and addEventListener won’t block the main thread while we’re waiting for a request to come back, a timer to tick away, or an event to trigger. Once those things do kick in, though, the actions they’re meant to perform will get shuffled right back into that single-thread queue. There are a couple of places you might have written asynchronously-fired JavaScript, even if you’re not super familiar with the overarching concept: XMLHttpRequest—“AJAX,” if ya nasty—or just kicking off a function once a user triggers a click or mouseenter event.
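To make that shuffle concrete, here's a quick, contrived sketch (not from any real project) of a callback registered up front but only run, back on the main thread, once its event actually fires:

// a contrived example: the listener is registered right away, but its
// callback only runs when a click happens, queued back onto the main
// thread behind whatever else is already in line
document.querySelector( 'button' ).addEventListener( 'click', function() {
  console.log( 'this happens whenever the user clicks' );
});
console.log( 'this happens first, long before any click' );
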
Event-driven development is writ a little larger, with the overall flow of the script dictated by events, both internal and external. Writing event-driven JavaScript applications is a step in the right direction for sure—it won’t cure what ails the main thread, but it does work with the medium in a reasonable way. Event-driven development allows us to manage our use of the main thread in a way that makes sense. If any of this rings a bell for you, the motivation for Promises should feel familiar. For example, a custom init event might kick things off, and fire a create event that applies our classes and restructures our markup which, on completion, fires a bindEvents event to handle all the event listeners for user interaction. There might not seem like much difference between that and one big function that kicks off, manipulates the DOM, and binds our events line-by-line—but in a script of sufficient size and complexity we’re not only provided with a decoupled flow through the script, but obvious touchpoints for future updates and a predictable structure for ongoing maintenance. This pattern falls apart a little where we’re still creating, binding, and listening for events in the same top-to-bottom, one-item-at-a-time way—we have to set a listener on a given object before the event fires, or nothing happens: // Create the event: var event = document.createEvent( ""Event"" ); // Name the event: event.initEvent( ""doTheStuff"", true, true ); // Listen for the custom `doTheStuff` event on `window`: window.addEventListener( ""doTheStuff"", initializeEverything ); // Fire the custom event window.dispatchEvent( event ); This example is a little contrived, and this stuff is a lot more manageable for sure with the addition of a framework, but that’s the basic gist: create and name the event, add a listener for the event, and—after setting our listener—dispatch the event. Events and callbacks aren’t the only game in town for weaving our way in and out of the main thread, though—at least, not anymore. Promises A Promise is, at the risk of sounding sentimental, pure potential—an empty container into which a value eventually results. A Promise can exist in several states: “pending,” while the computation it contains is being performed, or “resolved” once that computation is complete. Once resolved, a Promise is “fulfilled” if it gave us back something we expect, or “rejected” if it didn’t. The Promise constructor accepts a callback with two arguments: resolve and reject. We perform an action—asynchronous or otherwise—within that callback. If everything in there has gone according to plan, we call resolve. If something has gone awry, we call reject—with an error, conventionally. To illustrate, let’s tack something together with a pretty decent chance of doing what we don’t want: a promise meant only to give us the number 1, but one that has a chance of giving us back a 2. No reasonable person would ever do this, of course, but I wouldn’t necessarily put it past me. var promisedOne = new Promise( function( resolve, reject ) { var coinToss = Math.floor( Math.random() * 2 ) + 1; if( coinToss === 1 ) { resolve( coinToss ); } else { reject( new Error( ""That ain’t a one."" ) ); } }); There’s nothing too surprising in there, after you boil it all down.
It’s a little return-y, with the exception that we’re flagging results as “as expected” or “something went wrong.” Tapping into that Promise uses another new keyword: then—and as someone who attempts to make sense of JavaScript by breaking it down to plain ol’ human-language, I’m a big fan of this syntax. then is tacked onto our Promise identifier, and does just what it says on the tin: once the Promise is resolved, then do one of two things, both supplied as callbacks: the first in the case of a fulfilled promise, and the second in the case of a rejected one. Those two callbacks will have, as arguments, the results we specified with resolve or reject, respectively. It sounds like a lot in prose, but in code it’s a pretty simple pattern: promisedOne.then( function( result ) { console.log( result ); }, function( error ) { console.error( error ); }); If you’ve spent any time working with AJAX—jQuery-wise, in particular—you’ve seen something like this pattern before: a success callback and an error callback. The state of a promise, once fulfilled or rejected, cannot be changed—any reference we make to promisedOne will have a single, fixed result. It may not look like too much the way I’m using it here, but it’s powerful stuff—a pattern for asynchronously resolving anything. I’ve recently used Promises alongside a script that emulates Font Load Events, to apply webfonts asynchronously and avoid a potential performance hit. Font Face Observer allows us to, as the name implies, determine when the files referenced by our @font-face rules have finished loading. var fontObserver = new FontFaceObserver( ""Fancy Font"" ); fontObserver.check().then(function() { document.documentElement.className += "" fonts-loaded""; }, function( error ) { console.error( error ); }); fontObserver.check() gives us back a Promise, allowing us to chain on a then containing our callbacks for success and failure. We use the fulfilled callback to bolt a class onto the page once the font file has been fully transferred. We don’t bother including an argument in the first function, since we don’t care about the result itself so much as we care that the promise resolved without error—we’re not doing anything with the resolved value, just adding a class to the page. We do include the error argument, since we’ll want to know what happened should something go wrong. Now, this isn’t the tidiest syntax around—at least to my eyes—with those two functions just kinda floating in a then. Luckily there’s a similar alternative syntax; one that I find a bit easier to parse at-a-glance: fontObserver.check() .then(function() { document.documentElement.className += "" fonts-loaded""; }) .catch(function( error ) { console.log( error ); }); The first callback inside then provides us with our success state, while the catch provides us with a single, explicit “something went wrong” callback. The two syntaxes aren’t completely identical in all situations, but for a simple case like this, I find it a little neater. The Common Thread I guess I still owe you an explanation, huh. Not about the JavaScript-whatever; I think I’ve explained that plenty. No, I mean Fairytale of New York, and why it’s perched up there at the top of the four (4) song heap. Fairytale is a sad song, ostensibly. If you follow the main thread—start to finish, line-by-line, step by step—Fairytale is a sad song.
And I can see you out there, visions of Die Hard dancing in your heads: “but is it a Christmas song?” Well, for my money, nothing says “holidays” quite like unreliable narration. Shane MacGowan, the song’s author, has placed the first verse about “Christmas Eve in the drunk tank” as happening right after the “lucky one, came in eighteen-to-one”—not at the chronological end of the story. That means the song might not be mostly drunken flashback, but all of it a single, overarching flashback including a Christmas Eve in protective custody. It could be that the man and woman are, together, recounting times long past—good times and bad times—maybe not even in chronological order. Hell, the “NYPD Choir” mentioned in the chorus? There’s no such thing. We’re not big Christmas folks, my family and I. But just the same, every year, the handful of us get together, and every year—like clockwork—there’s a lull in conversation, there’s a sharp exhale, and Ma says “we all made it.” Not to a house, not to a dinner, but through another year, to another Christmas. At this point, without fail, someone starts telling a story—and one begets another, and so on. Sometimes the stories are happy, sometimes they’re sad, more often than not they’re both. Some are about things we were lucky to walk away from, some are about a time when another one of us didn’t. Start-to-finish, line-by-line, step-by-step, the main thread through the year doesn’t change, and maybe there isn’t a whole lot we can do to change it. But by carefully weaving our way in and out of that thread—stories all out of sync and resolving one way or the other, with the results determined by questionably reliable narrators—we can change the way we interact with it and, little by little, we can start making sense of it.",2016,Mat Marquis,matmarquis,2016-12-19T00:00:00+00:00,https://24ways.org/2016/fairytale-of-new-promise/,code 313,Centered Tabs with CSS,"Doug Bowman’s Sliding Doors is pretty much the de facto way to build tabbed navigation with CSS, and rightfully so – it is, as they say, rockin’ like Dokken. But since it relies heavily on floats for the positioning of its tabs, we’re constrained to either left- or right-hand navigation. But what if we need a bit more flexibility? What if we need to place our navigation in the center? Styling the li as a floated block does give us a great deal of control over margin, padding, and other presentational styles. However, we should learn to love the inline box – with it, we can create a flexible, centered alternative to floated navigation lists. Humble Beginnings Do an extra shot of ‘nog, because you know what’s coming next. That’s right, a simple unordered list: <div id=""navigation""> <ul> <li><a href=""#""><span>Home</span></a></li> <li><a href=""#""><span>About</span></a></li> <li><a href=""#""><span>Our Work</span></a></li> <li><a href=""#""><span>Products</span></a></li> <li class=""last""><a href=""#""><span>Contact Us</span></a></li> </ul> </div> If we were wedded to using floats to style our list, we could easily fix the width of our ul, and trick it out with some margin: 0 auto; love to center it accordingly. But this wouldn’t net us much flexibility: if we ever changed the number of navigation items, or if the user increased her browser’s font size, our design could easily break. 
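For contrast, here's a rough sketch of the fixed-width float approach we're sidestepping. The 30em is an arbitrary stand-in, not a measurement from any real design:

#navigation ul {
  margin: 0 auto;
  width: 30em; /* a fixed width: breaks as soon as items are added or text is resized */
}
#navigation ul li {
  float: left; /* floats pin the tabs to one side of their container */
}
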
Instead of worrying about floats, let’s take the most basic approach possible: let’s turn our list items into inline elements, and simply use text-align to center them within the ul: #navigation ul, #navigation ul li { list-style: none; margin: 0; padding: 0; } #navigation ul { text-align: center; } #navigation ul li { display: inline; margin-right: .75em; } #navigation ul li.last { margin-right: 0; } Our first step is sexy, no? Well, okay, not really – but it gives us a good starting point. We’ve tamed our list by removing its default styles, set the list items to display: inline, and centered the lot. Adding a background color to the links shows us exactly how the different elements are positioned. Now the fun stuff. Inline Elements, Padding, and You So how do we give our links some dimensions? Well, as the CSS specification tells us, the height property isn’t an option for inline elements such as our anchors. However, what if we add some padding to them? #navigation li a { padding: 5px 1em; } I just love leading questions. Things are looking good, but something’s amiss: as you can see, the padded anchors seem to be escaping their containing list. Thankfully, it’s easy to get things back in line. Our anchors have 5 pixels of padding on their top and bottom edges, right? Well, by applying the same vertical padding to the list, our list will finally “contain” its child elements once again. ’Tis the Season for Tabbing Now, we’re finally able to follow the “Sliding Doors” model, and tack on some graphics: #navigation ul li a { background: url(""tab-right.gif"") no-repeat 100% 0; color: #06C; padding: 5px 0; text-decoration: none; } #navigation ul li a span { background: url(""tab-left.gif"") no-repeat; padding: 5px 1em; } #navigation ul li a:hover span { color: #69C; text-decoration: underline; } Finally, our navigation’s looking appropriately sexy. By placing an equal amount of padding on the top and bottom of the ul, our tabs are properly “contained”, and we can subsequently style the links within them. But what if we want them to bleed over the bottom-most border? Easy: we can simply decrease the bottom padding on the list by one pixel, like so. A Special Note for Special Browsers The Mac IE5 users in the audience are likely hopping up and down by now: as they’ve discovered, our centered navigation behaves rather annoyingly in their browser. As Philippe Wittenbergh has reported, Mac IE5 is known to create “phantom links” in a block-level element when text-align is set to any value but the default value of left. Thankfully, Philippe has documented a workaround that gets that [censored] venerable browser to behave. Simply place the following code into your CSS, and the links will be restored to their appropriate width: /**//*/ #navigation ul li a { display: inline-block; white-space: nowrap; width: 1px; } /**/ IE for Windows, however, displays an extra kind of crazy. The padding I’ve placed on my anchors is offsetting the spans that contain the left curve of my tabs; thankfully, these shenanigans are easily straightened out: /**/ * html #navigation ul li a { padding: 0; } /**/ And with that, we’re finally finished.
With your centered navigation in hand, you can finally enjoy those holiday toddies and uncomfortable conversations with your skeevy Uncle Eustace.",2005,Ethan Marcotte,ethanmarcotte,2005-12-08T00:00:00+00:00,https://24ways.org/2005/centered-tabs-with-css/,code 314,Easy Ajax with Prototype,"There’s little more impressive on the web today than an appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of a task into a pleasurable activity. But it’s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it’s just so much damn effort. Well, the good news is that – ta-da – it doesn’t have to be a headache. But man does it still look impressive. Here’s how to amaze your friends. Introducing prototype.js Prototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it’s a JavaScript file which you link into your page that then enables you to do cool stuff. There’s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it’s good to go. What a nice man that Mr Stephenson is – friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp. First step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree. Cutting to the chase Before I go on and set up an example of how to use this, let’s just get to the crux. Here’s how Prototype enables you to make a simple Ajax call and dump the results back to the page: var url = 'myscript.php'; var pars = 'foo=bar'; var target = 'output-div'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); This snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page. Knocking up a basic example So to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call to. So, to that basic HTML page for the user to interact with. Here’s one I found whilst out carol singing: <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd""> <html xmlns=""http://www.w3.org/1999/xhtml"" xml:lang=""en"" lang=""en""> <head> <meta http-equiv=""Content-Type"" content=""text/html; charset=utf-8""/> <title>Easy Ajax</title> <script type=""text/javascript"" src=""prototype.js""></script> <script type=""text/javascript"" src=""ajax.js""></script> </head> <body> <form method=""get"" action=""greeting.php"" id=""greeting-form""> <div> <label for=""greeting-name"">Enter your name:</label> <input id=""greeting-name"" type=""text"" /> <input id=""greeting-submit"" type=""submit"" value=""Greet me!"" /> </div> <div id=""greeting""></div> </form> </body> </html> As you can see, I’ve linked in prototype.js, and also a file called ajax.js, which is where we’ll be putting our glue. (Careful where you leave your glue, kids.)
Our basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There’s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. You’ll also notice that the form has a submit button – this is so that it can function as a regular form when no JavaScript is available. It’s important not to get carried away and forget the basics of accessibility. Meanwhile, back at the server So we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you’d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it. Here’s a quick PHP script – greeting.php – that Santa brought me early. <?php $the_name = htmlspecialchars($_GET['greeting-name']); echo ""<p>Season's Greetings, $the_name!</p>""; ?> You’ll perhaps want to do something a little more complex within your own projects. Just sayin’. Gluing it all together Inside our ajax.js file, we need to hook this all together. We’re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. Here’s how we attach an onload event to the window object and get it to call a function named init(): Event.observe(window, 'load', init, false); Now we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field. function init(){ $('greeting-submit').style.display = 'none'; Event.observe('greeting-name', 'keyup', greet, false); } As you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this: function greet(){ var url = 'greeting.php'; var pars = 'greeting-name='+escape($F('greeting-name')); var target = 'greeting'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); } The key points to note here are that any user input needs to be escaped before being put into the parameters so that it’s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call. That’s it No, seriously. That’s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.",2005,Drew McLellan,drewmclellan,2005-12-01T00:00:00+00:00,https://24ways.org/2005/easy-ajax-with-prototype/,code 315,Edit-in-Place with Ajax,"Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn’t go that far towards implementing something really practical. We dipped our toes in, but haven’t learned to swim yet. So here is swimming lesson number one. Anyone who’s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page. Prototype includes all sorts of useful methods to help reproduce something like this for our own projects.
As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we’ll need here) and a whole bunch of manipulations to the user interface – all through simple library calls. Here’s what we’re building, so let’s do it. Getting Started There are two major components to this process: the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you’ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here’s what Santa dropped down my chimney: <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd""> <html xmlns=""http://www.w3.org/1999/xhtml"" xml:lang=""en"" lang=""en""> <head> <meta http-equiv=""Content-Type"" content=""text/html; charset=utf-8""/> <title>Edit-in-Place with Ajax</title> <link href=""editinplace.css"" rel=""Stylesheet"" type=""text/css"" /> <script src=""prototype.js"" type=""text/javascript""></script> <script src=""editinplace.js"" type=""text/javascript""></script> </head> <body> <h1>Edit-in-place</h1> <p id=""desc"">Dashing through the snow on a one horse open sleigh.</p> </body> </html> So that’s our page. The editable item is going to be the <p> called desc. The process goes something like this: Highlight the area onMouseOver Clear the highlight onMouseOut If the user clicks, hide the area and replace with a <textarea> and buttons Remove all of the above if the user cancels the operation When the Save button is clicked, make an Ajax POST and show that something’s happening When the Ajax call comes back, update the page with the new content Events and Highlighting The first step is to offer feedback to the user that the item is editable. This is done by shading the background colour when the user mouses over. Of course, the CSS :hover pseudo class is a straightforward way to do this, but for three reasons, I’m using JavaScript to switch class names. :hover isn’t supported on many elements in Internet Explorer for Windows I want to keep control over when the highlight switches off after an update, regardless of mouse position If JavaScript isn’t available we don’t want to end up with the CSS suggesting it might be With this in mind, here’s how editinplace.js starts: Event.observe(window, 'load', init, false); function init(){ makeEditable('desc'); } function makeEditable(id){ Event.observe(id, 'click', function(){edit($(id))}, false); Event.observe(id, 'mouseover', function(){showAsEditable($(id))}, false); Event.observe(id, 'mouseout', function(){showAsEditable($(id), true)}, false); } function showAsEditable(obj, clear){ if (!clear){ Element.addClassName(obj, 'editable'); }else{ Element.removeClassName(obj, 'editable'); } } The first line attaches an onLoad event to the window, so that the function init() gets called once the page has loaded. In turn, init() sets up all the items on the page that we want to make editable. In this example I’ve just got one, but you can add as many as you like. The function makeEditable() attaches the mouseover, mouseout and click events to the item we’re making editable. All showAsEditable does is add and remove the class name editable from the object. This uses the particularly cunning methods Element.addClassName() and Element.removeClassName() which enable you to cleanly add and remove effects without affecting any styling the object may otherwise have.
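To see why those methods are worth reaching for, here's a contrived comparison (not part of editinplace.js) of naive class assignment versus Prototype's helpers:

var obj = $('desc');
// naive assignment clobbers every other class on the element:
obj.className = 'editable';
// Prototype's helpers touch only the one class name, leaving the rest intact:
Element.addClassName(obj, 'editable');
Element.removeClassName(obj, 'editable');
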
Oh, remember to add a rule for .editable to your style sheet: .editable{ color: #000; background-color: #ffffd3; } The Switch As you can see above, when the user clicks on an editable item, a call is made to the function edit(). This is where we switch out the static item for a nice editable textarea. Here’s how that function looks. function edit(obj){ Element.hide(obj); var textarea ='<div id=""' + obj.id + '_editor""> <textarea id=""' + obj.id + '_edit"" name=""' + obj.id + '"" rows=""4"" cols=""60"">' + obj.innerHTML + '</textarea>'; var button = '<input id=""' + obj.id + '_save"" type=""button"" value=""SAVE"" /> OR <input id=""' + obj.id + '_cancel"" type=""button"" value=""CANCEL"" /></div>'; new Insertion.After(obj, textarea+button); Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false); Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false); } The first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object. The last thing to do before we leave the user to edit is to attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel. In the event of a cancel, we can clean up behind ourselves like so: function cleanUp(obj, keepEditable){ Element.remove(obj.id+'_editor'); Element.show(obj); if (!keepEditable) showAsEditable(obj, true); } Saving the Changes This is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response. function saveChanges(obj){ var new_content = escape($F(obj.id+'_edit')); obj.innerHTML = ""Saving...""; cleanUp(obj, true); var success = function(t){editComplete(t, obj);} var failure = function(t){editFailed(t, obj);} var url = 'edit.php'; var pars = 'id=' + obj.id + '&content=' + new_content; var myAjax = new Ajax.Request(url, {method:'post', postBody:pars, onSuccess:success, onFailure:failure}); } function editComplete(t, obj){ obj.innerHTML = t.responseText; showAsEditable(obj, true); } function editFailed(t, obj){ obj.innerHTML = 'Sorry, the update failed.'; cleanUp(obj); } As you can see, we first grab the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to “Saving…” to show that an update is occurring, and make the Ajax POST. If the Ajax fails, editFailed() sets the contents of the object to “Sorry, the update failed.” Admittedly, that’s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to “Saving…”. No one likes a failure – especially a messy one. If the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up. Meanwhile, back at the server The missing piece of the puzzle is the server-side script for committing the changes to your database.
Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here’s what I have in PHP. <?php $id = $_POST['id']; $content = $_POST['content']; echo htmlspecialchars($content); ?> Not exactly rocket science, is it? I’m just catching the content item from the POST and echoing it back. For your application to be useful, however, you’ll need to know exactly which record you should be updating. I’m passing in the ID of my <div>, which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update. You should also check the user’s credentials to make sure they have permission to edit whatever it is they’re editing. Basically the same rules apply as with any script in your application. Limitations There are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I’ve already mentioned. The second is that from an idealistic standpoint, I’d rather not be using innerHTML. However, the reality is that it’s presently the most efficient way of making large changes to the document. If you’re serving as XML, remember that you’ll need to replace these with proper DOM nodes. It’s also important to note that it’s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don’t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all – it fails silently. It is for this reason that this shouldn’t be used as a complete replacement for a traditional, universally accessible edit form. It’s a great time-saver for those with the ability to use it, but it’s no replacement. See it in action I’ve put together an example page using the inert PHP script above. That is to say, your edits aren’t committed to a database, so the example is reset when the page is reloaded.",2005,Drew McLellan,drewmclellan,2005-12-23T00:00:00+00:00,https://24ways.org/2005/edit-in-place-with-ajax/,code 316,Have Your DOM and Script It Too,"When working with the XMLHttpRequest object it appears you can only go one of three ways: You can stay true to the colorful moniker du jour and stick strictly to the responseXML property You can play with proprietary – yet widely supported – fire and inject the value of the responseText property into the innerHTML of an element of your choosing Or you can be eval() and parse JSON or arbitrary JavaScript delivered via responseText But did you know that there’s a fourth option giving you the best of the latter two worlds? Mint uses this unmentioned approach to grab fresh HTML and run arbitrary JavaScript simultaneously. Without relying on eval(). “But wait-”, you might say, “when would I need to do this?” Besides the example below, this technique is handy for things like tab groups that need initialization onload but miss the main onload event handler by a mile thanks to asynchronous scripting. Consider the problem Originally Mint used option 2 to refresh or load new tabs into individual Pepper panes without requiring a full roundtrip to the server. This was all well and good until I introduced the new Client Mode which, when enabled, allows anyone to view a Mint installation without being logged in.
If voyeurs are afoot when Client Mode is disabled, the next time they refresh a pane the entire login page is inserted into the current document. That’s not very helpful, so I needed a way to redirect the current document to the login page. Enter the solution Wouldn’t it be cool if browsers interpreted the contents of script tags crammed into innerHTML? Sure, but unfortunately, that just wasn’t meant to be. However, like the body element, image elements have an onload event handler. When the image has fully loaded, the handler runs the code applied to it. See where I’m going with this? By tacking a tiny image (think single pixel, transparent spacer gif – shudder) onto the end of the HTML returned by our Ajax call, we can smuggle our arbitrary JavaScript into the existing document. The image is added to the DOM, and our stowaway can go to town. <p>This is the result of our Ajax call.</p> <img src=""../images/loaded.gif"" alt="""" onload=""alert('Now that I have your attention...');"" /> Please be neat So we’ve just jammed some meaningless cruft into our DOM. If our script does anything with images, this addition could have some unexpected side effects. (Remember The Fly?) So in order to save that poor, unsuspecting element whose innerHTML we just swapped out from sharing Jeff Goldblum’s terrible fate, we should tidy up after ourselves. And by using the removeChild method we do just that. <p>This is the result of our Ajax call.</p> <img src=""../images/loaded.gif"" alt="""" onload=""alert('Now that I have your attention...');this.parentNode.removeChild(this);"" />",2005,Shaun Inman,shauninman,2005-12-24T00:00:00+00:00,https://24ways.org/2005/have-your-dom-and-script-it-too/,code 318,Auto-Selecting Navigation,"In the article Centered Tabs with CSS Ethan laid out a tabbed navigation system which can be centred on the page. A frequent requirement for any tab-based navigation is to be able to visually represent the currently selected tab in some way. If you’re using a server-side language such as PHP, it’s quite easy to write something like class=""selected"" into your markup, but it can be even simpler than that. Let’s take the navigation div from Ethan’s article as an example. <div id=""navigation""> <ul> <li><a href=""#""><span>Home</span></a></li> <li><a href=""#""><span>About</span></a></li> <li><a href=""#""><span>Our Work</span></a></li> <li><a href=""#""><span>Products</span></a></li> <li class=""last""><a href=""#""><span>Contact Us</span></a></li> </ul> </div> As you can see we have a standard unordered list which is then styled with CSS to look like tabs. By giving each tab a class which describes its logical section of the site, if we were to then apply a class to the body tag of each page showing the same, we could write a clever CSS selector to highlight the correct tab on any given page. Sound complicated? Well, it’s not a trivial concept, but actually applying it is dead simple. Modifying the markup First thing is to place a class name on each li in the list: <div id=""navigation""> <ul> <li class=""home""><a href=""#""><span>Home</span></a></li> <li class=""about""><a href=""#""><span>About</span></a></li> <li class=""work""><a href=""#""><span>Our Work</span></a></li> <li class=""products""><a href=""#""><span>Products</span></a></li> <li class=""last contact""><a href=""#""><span>Contact Us</span></a></li> </ul> </div> Then, on each page of your site, apply a matching class to the body tag to indicate which section of the site that page is in.
For example, on your About page: <body class=""about"">...</body> Writing the CSS selector You can now write a single CSS rule to match the selected tab on any given page. The logic is that you want to match the ‘about’ tab on the ‘about’ page and the ‘products’ tab on the ‘products’ page, so the selector looks like this: body.home #navigation li.home, body.about #navigation li.about, body.work #navigation li.work, body.products #navigation li.products, body.contact #navigation li.contact{ ... whatever styles you need to show the tab selected ... } So all you need to do when you create a new page in your site is to apply a class to the body tag to say which section it’s in. The CSS will do the rest for you – without any server-side help.",2005,Drew McLellan,drewmclellan,2005-12-10T00:00:00+00:00,https://24ways.org/2005/auto-selecting-navigation/,code 319,Avoiding CSS Hacks for Internet Explorer,"Back in October, IEBlog issued a call to action, asking developers to clean up their CSS hacks for IE7 testing. Needless to say, a lot of hubbub ensued… both on IEBlog and elsewhere. My contribution to all of the noise was to suggest that developers review their code and use good CSS hacks. But what makes a good hack? Tantek Çelik, the Godfather of CSS hacks, gave us the answer by explaining how CSS hacks should be designed. In short, they should (1) be valid, (2) target only old/frozen/abandoned user-agents/browsers, and (3) be ugly. Tantek also went on to explain that using a feature of CSS is not a hack. Now, I’m not a frequent user of CSS hacks, but Tantek’s post made sense to me. In particular, I felt it gave developers direction on how we should be coding to accommodate that sometimes troublesome browser, Internet Explorer. But what I’ve found, through my work with other developers, is that there is still much confusion over the use of CSS hacks and IE. Using examples from the code I’ve seen recently, allow me to demonstrate how to clean up some IE-specific CSS hacks. The two hacks that I’ve found most often in the code I’ve seen and worked with are the star html bug and the underscore hack. We know these are both IE-specific by checking Kevin Smith’s CSS Filters chart. Let’s look at each of these hacks and see how we can replace them with the same CSS feature-based solution. The star html bug This hack violates Tantek’s second rule as it targets current (and future) UAs. I’ve seen this both as a stand-alone rule and as an override to some other rule in a style sheet. Here are some code samples: * html div#header {margin-top:-3px;} .promo h3 {min-height:21px;} * html .promo h3 {height:21px;} The underscore hack This hack violates Tantek’s first two rules: it’s invalid (according to the W3C CSS Validator) and it targets current UAs. Here’s an example: ol {padding:0; _padding-left:5px;} Using child selectors We can use the child selector to replace both the star html bug and the underscore hack. Here’s how: Write rules with selectors that would be successfully applied to all browsers. This may mean starting with no declarations in your rule! div#header {} .promo h3 {} ol {padding:0;} To these rules, add the IE-specific declarations. div#header {margin-top:-3px;} .promo h3 {height:21px;} ol {padding:0 0 0 5px;} After each rule, add a second rule. The selector of the second rule must use a child selector. In this new rule, correct any IE-specific declarations previously made.
div#header {margin-top:-3px;} body > div#header {margin-top:0;} .promo h3 {height:21px;} .promo > h3 {height:auto; min-height:21px;} ol {padding:0 0 0 5px;} html > body ol {padding:0;} Voilà – no more hacks! There are a few caveats to this that I won’t go into… but assuming you’re operating in strict mode and barring any really complicated stuff you’re doing in your code, your CSS will still render perfectly across browsers. And while this may make your CSS slightly heftier in size, it should future-proof it for IE7 (or so I hope). Happy holidays!",2005,Kimberly Blessing,kimberlyblessing,2005-12-17T00:00:00+00:00,https://24ways.org/2005/avoiding-css-hacks-for-internet-explorer/,code 320,DOM Scripting Your Way to Better Blockquotes,"Block quotes are great. I don’t mean they’re great for indenting content – that would be an abuse of the browser’s default styling. I mean they’re great for semantically marking up a chunk of text that is being quoted verbatim. They’re especially useful in blog posts. <blockquote> <p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p> </blockquote> Notice that you can’t just put the quoted text directly between the <blockquote> tags. In order for your markup to be valid, block quotes may only contain block-level elements such as paragraphs. There is an optional cite attribute that you can place in the opening <blockquote> tag. This should contain the URL of the original source of the text you are quoting: <blockquote cite=""http://en.wikipedia.org/wiki/Progressive_Enhancement""> <p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p> </blockquote> Great! Except… the default behavior in most browsers is to completely ignore the cite attribute. Even though it contains important and useful information, the URL in the cite attribute is hidden. You could simply duplicate the information with a hyperlink at the end of the quoted text: <blockquote cite=""http://en.wikipedia.org/wiki/Progressive_Enhancement""> <p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p> <p class=""attribution""> <a href=""http://en.wikipedia.org/wiki/Progressive_Enhancement"">source</a> </p> </blockquote> But somehow it feels wrong to have to write out the same URL twice every time you want to quote something. It could also get very tedious if you have a lot of quotes. Well, “tedious” is no problem to a programming language, so why not use a sprinkling of DOM Scripting? Here’s a plan for generating an attribution link for every block quote with a cite attribute: Write a function called prepareBlockquotes. Begin by making sure the browser understands the methods you will be using. Get all the blockquote elements in the document. Start looping through each one. Get the value of the cite attribute. If the value is empty, continue on to the next iteration of the loop. Create a paragraph. Create a link. Give the paragraph a class of “attribution”. Give the link an href attribute with the value from the cite attribute. Place the text “source” inside the link. Place the link inside the paragraph. Place the paragraph inside the block quote. Close the for loop. Close the function.
Here’s how that translates to JavaScript: function prepareBlockquotes() { if (!document.getElementsByTagName || !document.createElement || !document.appendChild) return; var quotes = document.getElementsByTagName(""blockquote""); for (var i=0; i<quotes.length; i++) { var source = quotes[i].getAttribute(""cite""); if (!source) continue; var para = document.createElement(""p""); var link = document.createElement(""a""); para.className = ""attribution""; link.setAttribute(""href"",source); link.appendChild(document.createTextNode(""source"")); para.appendChild(link); quotes[i].appendChild(para); } } Now all you need to do is trigger that function when the document has loaded: window.onload = prepareBlockquotes; Better yet, use Simon Willison’s handy addLoadEvent function to queue this function up with any others you might want to execute when the page loads. That’s it. All you need to do is save this function in a JavaScript file and reference that file from the head of your document using <script> tags. You can style the attribution link using CSS. It might look good aligned to the right with a smaller font size. If you’re looking for something to do to keep you busy this Christmas, I’m sure that this function could be greatly improved. Here are a few ideas to get you started: Should the text inside the generated link be the URL itself? If the block quote has a title attribute, how would you take its value and use it as the text inside the generated link? Should the attribution paragraph be placed outside the block quote? If so, how would you do that (remember, there is an insertBefore method but no insertAfter)? Can you think of other instances of useful information that’s locked away inside attributes? Access keys? Abbreviations?",2005,Jeremy Keith,jeremykeith,2005-12-05T00:00:00+00:00,https://24ways.org/2005/dom-scripting-your-way-to-better-blockquotes/,code 321,Tables with Style,"It might not seem like it, but styling tabular data can be a lot of fun. From a semantic point of view, there are plenty of elements to tie some style into. You have cells, rows, row groups and, of course, the table element itself. Adding CSS to a paragraph just isn’t as exciting. Where do I start? First, if you have some tabular data (you know, like a spreadsheet with rows and columns) that you’d like to spiffy up, pop it into a table — its rightful place! To add more semantics to your table — and coincidentally to add more hooks for CSS — break up your table into row groups. There are three types of row groups: the header (thead), the body (tbody) and the footer (tfoot). You can only have one header and one footer but you can have as many table bodies as is appropriate. Sample table example Inspiration Table Striping To improve scanning information within a table, a common technique is to style alternating rows. Also known as zebra tables. Whether you apply it using a class on every other row or turn to JavaScript to accomplish the task, a handy-dandy trick is to use a semi-transparent PNG as your background image. This is especially useful over patterned backgrounds. tbody tr.odd td { background:transparent url(background.png) repeat top left; } * html tbody tr.odd td { background:#C00; filter: progid:DXImageTransform.Microsoft.AlphaImageLoader( src='background.png', sizingMethod='scale'); } We turn off the default background and apply our PNG hack to have this work in Internet Explorer. Styling Columns Did you know you could style a column? That’s right.
You can add special column (col) or column group (colgroup) elements. With that you can add border or background styles to the column. <table> <col id=""ingredients""> <col id=""serve12""> <col id=""serve24""> ... Check out the example. Fun with Backgrounds Pop in a tiled background to give your table some character! Internet Explorer’s PNG hack unfortunately only works well when applied to a cell. To figure out which background will appear over another, just remember the hierarchy: (bottom) Table → Column → Row Group → Row → Cell (top) The Future is Bright Once browser-makers start implementing CSS3, we’ll have more power at our disposal. Just with :first-child and :last-child, you can pull off a scalable version of our previous table with rounded corners and all — unfortunately, only Firefox manages to pull this one off successfully. And the selector the masses are clamouring for, nth-child, will make zebra tables easy as eggnog.",2005,Jonathan Snook,jonathansnook,2005-12-19T00:00:00+00:00,https://24ways.org/2005/tables-with-style/,code 322,Introduction to Scriptaculous Effects,"Gather around kids, because this year, much like in that James Bond movie with Denise Richards, Christmas is coming early… in the shape of scrumptious, smooth, JavaScript-driven effects at your every whim. Now what I’m going to do is take things down a notch. Which is to say, you don’t need to know much beyond how to open a text file and edit it to follow this article. Personally, I for instance can’t code to save my life. Well, strictly speaking, that’s not entirely true. If my life was on the line, and the code needed was really simple and I wasn’t under any time constraints, then yeah, maybe I could hack my way out of it. But my point is this: I’m not a programmer in the traditional sense of the word. In fact, what I do best, is scrounge code off of other people, take it apart and then put it back together with duct tape, chewing gum and dumb blind luck. No, don’t run! That happens to be a good thing in this case. You see, we’re going to be implementing some really snazzy effects (which are considerably more relevant than most people are willing to admit) on your site, and we’re going to do it with the aid of Thomas Fuchs’ amazing Script.aculo.us library. And it will be like stealing candy from a child. What Are We Doing? I’m going to show you the very basics of implementing the Script.aculo.us JavaScript library’s Combination Effects. These allow you to fade elements on your site in or out, slide them up and down and so on. Why Use Effects at All? Before we get started, though, let me just take a moment to explain how I came to see smooth transitions as something more than smoke and mirror-like effects included with little more motive than to dazzle and make parents go ‘uuh, snazzy’. Earlier this year, I had the good fortune of meeting the kind, gentle and quite knowledgeable Matt Webb at a conference here in Copenhagen where we were both speaking (though I will be the first to admit my little talk on Open Source Design was vastly inferior to Matt’s talk). Matt held a talk called Fixing Broken Windows (based on the Broken Windows theory), which really made an impression on me, and which I have since then referred back to several times. You can listen to it yourself, as it’s available from Archive.org. Though since Matt’s session uses many visual examples, you’ll have to rely on your imagination for some of the examples he runs through during it.
Also, I think it loses audio for a few seconds every once in a while. Anyway, one of the things Matt talked a lot about, was how our eyes are wired to react to movement. The world doesn’t flicker. It doesn’t disappear or suddenly change and force us to look for the change. Things move smoothly in the real world. They do not pop up. How it Works Once the necessary files have been included, you trigger an effect by pointing it at the ID of an element. Simple as that. Implementing the Effects So now you know why I believe these effects have a place in your site, and that’s half the battle. Because you see, actually getting these effects up and running, is deceptively simple. First, go and download the latest version of the library (as of this writing, it’s version 1.5 rc5). Unzip it and open it up. Now we’re going to bypass the instructions in the readme file. Script.aculo.us can do a bunch of quite advanced things, but all we really want from it is its effects. And by sidestepping the rest of the features, we can shave off roughly 80KB of unnecessary JavaScript, which is well worth it if you ask me. As with Drew’s article on Easy Ajax with Prototype, script.aculo.us also uses the Prototype framework by Sam Stephenson. But contrary to Drew’s article, you don’t have to download Prototype, as a version comes bundled with script.aculo.us (though feel free to upgrade to the latest version if you so please). So in the unzipped folder, containing the script.aculo.us files and folder, go into ‘lib’ and grab the ‘prototype.js’ file. Move it to wherever you want to store the JavaScript files. Then fetch the ‘effects.js’ file from the ‘src’ folder and put it in the same place. To make things even easier for you to get this up and running, I have prepared a small JavaScript snippet which does some checking to see what you’re trying to do. The script.aculo.us effects are all either ‘turn this off’ or ‘turn this on’. What this snippet does is check to see what state the target currently has (is it on or off?) and then use the necessary effect. You can either skip to the end and download the example code, or copy and paste this code into a file manually (I’ll refer to that file as combo.js): Effect.OpenUp = function(element) { element = $(element); new Effect.BlindDown(element, arguments[1] || {}); } Effect.CloseDown = function(element) { element = $(element); new Effect.BlindUp(element, arguments[1] || {}); } Effect.Combo = function(element) { element = $(element); if(element.style.display == 'none') { new Effect.OpenUp(element, arguments[1] || {}); }else { new Effect.CloseDown(element, arguments[1] || {}); } } Currently, this code uses the BlindUp and BlindDown code, which I personally like, but there’s nothing wrong with you changing the effect-type into one of the other effects available. Now, include the three files in the header of your code, like so: <script src=""prototype.js"" type=""text/javascript""></script> <script src=""effects.js"" type=""text/javascript""></script> <script src=""combo.js"" type=""text/javascript""></script> Now insert the element you want to use the effect on, like so: <div id=""content"" style=""display: none;"">Lorem ipsum dolor sit amet.</div> The above element will start out invisible, and when triggered will be revealed. If you want it to start visible, simply remove the style parameter. And now for the trigger <a href=""javascript:Effect.Combo('content');"">Click Here</a> And that is pretty much it.
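If it helps to see all the pieces in one place, here’s a minimal page pulling the snippets above together – just a sketch, assuming the three script files sit alongside the document: <html> <head> <title>Effect.Combo example</title> <script src=""prototype.js"" type=""text/javascript""></script> <script src=""effects.js"" type=""text/javascript""></script> <script src=""combo.js"" type=""text/javascript""></script> </head> <body> <a href=""javascript:Effect.Combo('content');"">Click Here</a> <div id=""content"" style=""display: none;"">Lorem ipsum dolor sit amet.</div> </body> </html>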
Clicking the link should unfold the DIV targeted by the effect, in this case ‘content’. Effect Options Now, it gets a bit long-haired though. The documentation for script.aculo.us is next to non-existent, and because of that you’ll have to do some digging yourself to appreciate the full potential of these effects. First of all, what kind of effects are available? Well, you can go to the demo page and check them out, or you can open the ‘effects.js’ file and have a look around, something I recommend doing regardless, to gain an overview of what exactly you’re dealing with. If you dissect it for long enough, you can even distill some of the options available for the various effects. In the case of the BlindUp and BlindDown effect, which we’re using in our example (as triggered from combo.js), one of the things that would be interesting to play with would be the duration of the effect. If it’s too long, it will feel slow and unresponsive. Too fast and it will be imperceptible. You set the options like so: <a href=""javascript:Effect.Combo('content', {duration: .2});"">Click Here</a> The change from the previous link being the inclusion of , {duration: .2}. In this case, I have lowered the duration to 0.2 seconds, to really make it feel snappy. You can also go all-out and turn on all the bells and whistles of the Blind effect like so (slowed down to a duration of three seconds so you can see what’s going on): <a href=""javascript:Effect.Combo('content', {duration: 3, scaleX: true, scaleContent: true});"">Click Here</a> Conclusion And that’s pretty much it. The rest is a matter of getting to know the other effects and their options as well as finding out just when and where to use them. Remember the ancient Chinese saying: Less is more. Download Example I have prepared a very basic example, which you can download and use as a reference point.",2005,Michael Heilemann,michaelheilemann,2005-12-12T00:00:00+00:00,https://24ways.org/2005/introduction-to-scriptaculous-effects/,code 323,Introducing UDASSS!,"Okay. What’s that mean? Unobtrusive Degradable Ajax Style Sheet Switcher! Boy, are you in for a treat today ‘cause we’re gonna have a whole lotta Ajaxifida Unobtrucitosity CSS swappin’ Fun! Okay, are you really kidding? Nope. I’ve even impressed myself on this one. Unfortunately, I don’t have much time to tell you the ins and outs of what I actually did to get this to work. We’re talking JavaScript, CSS, PHP…Ajax. But don’t worry about that. I’ve always believed that a good A.P.I. is an invisible A.P.I… and this I felt I achieved. The only thing you need to know is how it works and what to do. A Quick Introduction Anyway… First of all, the idea is very simple. I wanted something just like what Paul Sowden put together in Alternative Style: Working With Alternate Style Sheets from Alistapart Magazine EXCEPT a few minor (not-so-minor actually) differences which I’ve listed briefly below: Allow users to switch styles without JavaScript enabled (degradable) Preventing the F.O.U.C. before the window ‘load’ when getting preferred styles Keep the JavaScript entirely off our markup (no onclick’s or onload’s) Make it very very easy to implement (ok, Paul did that too) What I did to achieve this was to use server-side cookies instead of JavaScript cookies. Hence, PHP. However, this isn’t a “PHP style switcher” – which is where Ajax comes in. For the extreme technical folks, no, there is no xml involved here, or even a callback response. I only say Ajax because everyone knows what ‘it’ means.
With that said, it’s the Ajax that sets the cookies ‘on the fly’. Got it? Awesome! What you need Luckily, I’ve done the work for you. It’s all packaged up in a nice zip file (at the end…keep reading for now) – so from here on out, just follow these instructions. As I’ve mentioned, one of the things we’ll be working with is PHP. So, first things first, open up a file called index and save it with a ‘.php’ extension. Next, place the following text at the top of your document (even above your DOCTYPE) <?php require_once('utils/style-switcher.php'); // style sheet path[, media, title, bool(set as alternate)] $styleSheet = new AlternateStyles(); $styleSheet->add('css/global.css','screen,projection'); // [Global Styles] $styleSheet->add('css/preferred.css','screen,projection','Bog Standard'); // [Preferred Styles] $styleSheet->add('css/alternate.css','screen,projection','Tiny Fonts',true); // [Alternate Styles] $styleSheet->add('css/alternate2.css','screen,projection','Big O Fonts',true); // [Alternate Styles] $styleSheet->getPreferredStyles(); ?> The way this works is REALLY EASY. Pay attention closely. Notice in the first line we’ve included our style-switcher.php file. Next we instantiate a PHP class called AlternateStyles() which will allow us to configure our style sheets. So for kicks, let’s just call our object $styleSheet. As part of the AlternateStyles object, there lies a public method called add. So naturally with our $styleSheet object, we can call it to (da – da-da-da!) Add Style Sheets! How the add() method works The add method takes up to four arguments; only the first is required. However, you’ll want to add some… since the whole point is working with alternate style sheets. $path can simply be a uri, absolute, or relative path to your style sheet. $media adds a media attribute to your style sheets. $title gives a name to your style sheets (via title attribute). $alternate (a boolean) simply tells us that these are the alternate style sheets. add() Tips For all global style sheets (meaning the ones that will always be seen and will not be swapped out), simply use the add method as shown next to // [Global Styles]. To add preferred styles, do the same, but add a ‘title’. To add the alternate styles, do the same as what we’ve done to add preferred styles, but add the extra boolean and set it to true. Note the following when adding style sheets Multiple global style sheets are allowed You can only have one preferred style sheet (That’s a browser rule) Feel free to add as many alternate style sheets as you like Moving on Simply add the following snippet to the <head> of your web document: <script type=""text/javascript"" src=""js/prototype.js""></script> <script type=""text/javascript"" src=""js/common.js""></script> <script type=""text/javascript"" src=""js/alternateStyles.js""></script> <?php $styleSheet->drop(); ?> Nothing much to explain here. Just use your copy & paste powers. How to Switch Styles Whether you knew it or not, this baby already has the built-in ‘unobtrusive’ functionality that lets you switch styles with the click of any link that has a class name of ‘altCss’. Just drop them wherever you like in your document as follows: <a class=""altCss"" href=""index.php?css=Bog_Standard"">Bog Standard</a> <a class=""altCss"" href=""index.php?css=Tiny_Fonts"">Tiny Fonts</a> <a class=""altCss"" href=""index.php?css=Big_O_Fonts"">Big O Fonts</a> Take special note where the file is linking to. Yep. Just linking right back to the page we’re on.
The only extra parameter we pass in is a variable called ‘css’ – and within that we pass the name of the style sheet we want. Also take very special note that the names of the style sheets have an under_score in place of any spaces they might have. Go ahead… play around and change the style sheet on the example page. Try disabling JavaScript and refreshing your browser. Still works! Cool, eh? Well, I put this together in one night, so it’s still a work in progress and very beta. If you’d like to hear more about it and its future development, be sure to stop by my site where I’ll definitely be maintaining it. Download the beta anyway Well, this wouldn’t be fun if there was nothing to download. So we’re hooking you up so you don’t go home (or log off) unhappy. Download U.D.A.S.S.S | V0.8 Merry Christmas! Thanks for listening and I hope U.D.A.S.S.S. has been well worth your time and will bring many years of Ajaxy Style Switchin’ Fun! Many Blessings, Merry Christmas and have a great new year!",2005,Dustin Diaz,dustindiaz,2005-12-18T00:00:00+00:00,https://24ways.org/2005/introducing-udasss/,code 326,Don't be eval(),"JavaScript is an interpreted language, and like so many of its peers it includes the all-powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code. It’s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you’re using eval() there’s probably something wrong with your design. Common mistakes Here’s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it – but you don’t know the name of the property until runtime. Here’s how NOT to do it: var property = 'bar'; var value = eval('foo.' + property); Yes, it will work, but every time that piece of code runs JavaScript will have to kick back into interpreter mode, slowing down your app. It’s also dirt ugly. Here’s the right way of doing the above: var property = 'bar'; var value = foo[property]; In JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string. Security issues In any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript – you should be extremely cautious of running eval() against any code that may have been tampered with – for example, strings taken from the page query string. Executing untrusted code can leave you vulnerable to cross-site scripting attacks. What’s it good for? Some programmers say that eval() is B.A.D. – Broken As Designed – and should be removed from the language. However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). Here is a simple function for doing exactly that – it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval().
function evalRequest(url) { var xmlhttp = new XMLHttpRequest(); xmlhttp.onreadystatechange = function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { eval(xmlhttp.responseText); } } xmlhttp.open(""GET"", url, true); xmlhttp.send(null); } If you want this to work with Internet Explorer, you’ll need to include this compatibility patch.",2005,Simon Willison,simonwillison,2005-12-07T00:00:00+00:00,https://24ways.org/2005/dont-be-eval/,code 327,Improving Form Accessibility with DOM Scripting,"The form label element is an incredibly useful little element – it lets you link the form field unambiguously with the descriptive label text that sits alongside or above it. This is a very useful feature for people using screen readers, but there are some problems with this element. What happens if you have one piece of data that, for various reasons (validation, the way your data is collected/stored etc), needs to be collected using several form elements? The classic example is date of birth – ideally, you’ll ask for the date of birth once, but you may have three inputs, one each for day, month and year, and you may also need to provide hints about the required format. The problem is that to be truly accessible you need to label each field. So you end up needing something to say “this is a date of birth”, “this is the day field”, “this is the month field” and “this is the year field”. Seems like overkill, doesn’t it? And it can uglify a form no end. There are various ways that you can approach it (and I think I’ve seen them all). Some people omit the label and rely on the title attribute to help the user through; others put text in a label but make the text 1 pixel high and merge it into the background so that screen readers can still get that information. The most common method, though, is simply to set the label to not display at all using the CSS display:none property/value pairing (a technique which, for the time being, seems to work on most screen readers). But perhaps we can do more with this? The technique I am suggesting as another alternative is as follows (here comes the pseudo-code): Start with a totally valid and accessible form Ensure that each form input has a label that is linked to its related form control Apply a class to any label that you don’t want to be visible (for example superfluous) Then, through the magic of unobtrusive JavaScript/the DOM, manipulate the page as follows once the page has loaded: Find all the label elements that are marked as superfluous and hide them Find out what input element each of these label elements is related to Then apply a hint about the formatting required for input (gleaned from the original, now-hidden label text) – add it to the form input as default text Finally, add in a behaviour that clears or selects the default text (as you choose) So, here’s the theory put into practice – a date of birth, grouped using a fieldset, and with the behaviours added in using DOM, and here’s the JavaScript that does the heavy lifting. But why not just use display:none? As demonstrated at Juicy Studio, display:none seems to work quite well for hiding label elements. So why use a sledgehammer to crack a nut?
In all honesty, this is something of an experiment, but consider the following: Using the DOM, you can add extra levels of help, potentially across a whole form – or even a range of forms – without necessarily increasing your markup (it goes beyond simply hiding labels) Screen readers today may identify a label that is set not to display, but they may not in the future – this might provide a way around that By expanding this technique above, it might be possible to visually change the parent container that groups these items – in this case, a fieldset and legend, which are notoriously difficult to style consistently across different browsers – while still retaining the underlying semantic/logical structure Well, it’s an idea to think about at least. How is it for you? How else might you use DOM scripting to improve the accessibility or usability of your forms?",2005,Ian Lloyd,ianlloyd,2005-12-03T00:00:00+00:00,https://24ways.org/2005/improving-form-accessibility-with-dom-scripting/,code 331,Splintered Striper,"Back in March 2004, David F. Miller demonstrated a little bit of DOM scripting magic in his A List Apart article Zebra Tables. His script programmatically adds two alternating CSS background colours to table rows, making them more readable and visually pleasing, while saving the document author the tedious task of manually assigning the styling to large static data tables. Although David’s original script performs its duty well, it is nonetheless very specific and limited in its application. It only: works on a single table, identified by its id, with at least a single tbody section assigns a background colour allows two colours for odd and even rows acts on data cells, rather than rows, and then only if they have no class or background colour already defined Taking it further In a recent project I found myself needing to apply a striped effect to a medium sized unordered list. Instead of simply modifying the Zebra Tables code for this particular case, I decided to completely recode the script to make it more generic. Being more general purpose, the function in my splintered striper experiment is necessarily more complex. Where the original script only expected a single parameter (the id of the target table), the new function is called as follows: striper('[parent element tag]','[parent element class or null]','[child element tag]','[comma separated list of classes]') This new, fairly self-explanatory function: targets any type of parent element (and, if specified, only those with a certain class) assigns two or more classes (rather than just two background colours) to the child elements inside the parent preserves any existing classes already assigned to the child elements See it in action View the demonstration page for three usage examples. For simplicity’s sake, we’re making the calls to the striper function from the body’s onload attribute.
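For the record, that call looks something like this – a sketch using the same arguments as the demo (striping the rows of a tbody that has the class splintered): <body onload=""striper('tbody','splintered','tr','odd,even');"">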
In a real deployment situation, we would look at attaching a behaviour to the onload programmatically — just remember that, as we need to pass variables to the striper function, this would involve creating a wrapper function which would then be attached…something like: function stripe() { striper('tbody','splintered','tr','odd,even'); } window.onload=stripe; A final thought Just because the function is called striper does not mean that it’s limited to purely applying a striped look; as it’s more of a general purpose “alternating class assignment” script, you can achieve a whole variety of effects with it.",2005,Patrick Lauke,patricklauke,2005-12-15T00:00:00+00:00,https://24ways.org/2005/splintered-striper/,code 332,CSS Layout Starting Points,"I build a lot of CSS layouts, some incredibly simple, others that cause sleepless nights and remind me of the torturous puzzle books that were given to me at Christmas by aunties concerned for my education. However, most of the time these layouts fit quite comfortably into one of a very few standard formats. For example: Liquid, multiple column with no footer Liquid, multiple column with footer Fixed width, centred Rather than starting out with blank CSS and (X)HTML documents every time you need to build a layout, you can fairly quickly create a bunch of layout starting points, that will give you a solid basis for creating the rest of the design and mean that you don’t have to remember how a three column layout with a footer is best achieved every time you come across one! These starting points can be really basic; in fact, that’s exactly what you want, as the final design – the fonts, the colours and so on – will be different every time. It’s just the main sections we want to be able to quickly get into place. For example, here is a basic starting point CSS and XHTML document for a fixed width, centred layout with a footer. <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd""> <html xmlns=""http://www.w3.org/1999/xhtml""> <head> <title>Fixed Width and Centred starting point document</title> <link rel=""stylesheet"" type=""text/css"" href=""fixed-width-centred.css"" /> <meta http-equiv=""Content-Type"" content=""text/html; charset=utf-8"" /> </head> <body> <div id=""wrapper""> <div id=""side""> <div class=""inner""> <p>Sidebar content here</p> </div> </div> <div id=""content""> <div class=""inner""> <p>Your main content goes here.</p> </div> </div> <div id=""footer""> <div class=""inner""> <p>Ho Ho Ho!</p> </div> </div> </div> </body> </html> body { text-align: center; min-width: 740px; padding: 0; margin: 0; } #wrapper { text-align: left; width: 740px; margin-left: auto; margin-right: auto; padding: 0; } #content { margin: 0 200px 0 0; } #content .inner { padding-top: 1px; margin: 0 10px 10px 10px; } #side { float: right; width: 180px; margin: 0; } #side .inner { padding-top: 1px; margin: 0 10px 10px 10px; } #footer { margin-top: 10px; clear: both; } #footer .inner { margin: 10px; } 9 times out of 10, after figuring out exactly what main elements I have in a layout, I can quickly grab the ‘one I prepared earlier’, mark up the relevant sections within the ready-made divs, and from that point on, I only need to worry about the contents of those different areas. The actual layout is tried and tested, one that I know works well in different browsers and that is unlikely to throw me any nasty surprises later on.
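(As a quick aside – this is my own rough sketch rather than part of the original set – turning that fixed width, centred starting point into a liquid one is mostly a matter of letting the wrapper flex: #wrapper { text-align: left; width: auto; /* let the layout flex */ margin: 0 5%; /* keep a little breathing room either side */ padding: 0; } The other rules can stay exactly as they are, with the body’s min-width guarding against the floated sidebar running out of room.)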
In addition, considering how the layout is going to work first prevents the problem of developing a layout, then realising you need to put a footer on it, and needing to redevelop the layout as the method you have chosen won’t work well with a footer. While enjoying your mince pies and mulled wine during the ‘quiet time’ between Christmas and New Year, why not create some starting point layouts of your own? The CSS layouts section of the css-discuss Wiki is a great place to find examples that you can try out and find your favourite method of creating the various layout types.",2005,Rachel Andrew,rachelandrew,2005-12-04T00:00:00+00:00,https://24ways.org/2005/css-layout-starting-points/,code 333,The Attribute Selector for Fun and (no ad) Profit,"If I had a favourite CSS selector, it would undoubtedly be the attribute selector (Ed: You really need to get out more). For those of you not familiar with the attribute selector, it allows you to style an element based on the existence, value or partial value of a specific attribute. At its very basic level you could use this selector to style an element with a particular attribute, such as a title attribute. <abbr title=""Cascading Style Sheets"">CSS</abbr> In this example I’m going to make all <abbr> elements with a title attribute grey. I am also going to give them a dotted bottom border that changes to a solid border on hover. Finally, for that extra bit of feedback, I will change the cursor to a question mark on hover as well. abbr[title] { color: #666; border-bottom: 1px dotted #666; } abbr[title]:hover { border-bottom-style: solid; cursor: help; } This provides a nice way to show your site users that <abbr> elements with title attributes are special, as they contain extra, hidden information. Most modern browsers such as Firefox, Safari and Opera support the attribute selector. Unfortunately, Internet Explorer 6 and below do not support the attribute selector, but that shouldn’t stop you from adding nice usability embellishments to more modern browsers. Internet Explorer 7 looks set to implement this CSS2.1 selector, so expect to see it become more common over the next few years. Styling an element based on the existence of an attribute is all well and good, but it is still pretty limited. Where attribute selectors come into their own is their ability to target the value of an attribute. You can use this for a variety of interesting effects such as styling VoteLinks. VoteWhats? If you haven’t heard of VoteLinks, it is a microformat that allows people to show their approval or disapproval of a link’s destination by adding a pre-defined keyword to the rev attribute. For instance, if you had a particularly bad meal at a restaurant, you could signify your disapproval by adding a rev attribute with a value of vote-against. <a href=""http://www.mommacherri.co.uk/"" rev=""vote-against"">Momma Cherri's</a> You could then highlight these links by adding an image to the right of them. a[rev=""vote-against""]{ padding-right: 20px; background: url(images/vote-against.png) no-repeat right top; } This is a useful technique, but it will only highlight VoteLinks on sites you control. This is where user stylesheets come into effect. If you create a user stylesheet containing this rule, every site you visit that uses VoteLinks will receive your new style. Cool huh? However, my absolute favourite use for attribute selectors is as a lightweight form of ad blocking. Most online adverts conform to industry-defined sizes.
So if you wanted to block all banner-ad sized images, you could simply add this line of code to your user stylesheet. img[width=""468""][height=""60""], img[width=""468px""][height=""60px""] { display: none !important; } To hide any banner-ad sized element, such as flash movies, applets or iFrames, simply apply the above rule to every element using the universal selector. *[width=""468""][height=""60""], *[width=""468px""][height=""60px""] { display: none !important; } Just bear in mind when using this technique that you may accidentally hide something that isn’t actually an advert; it just happens to be the same size. The Interactive Advertising Bureau lists a number of common ad sizes. Using these dimensions, you can create a stylesheet that blocks all the popular ad formats. Apply this as a user stylesheet and you never need to suffer another advert again. Here’s wishing you a Merry, ad-free Christmas.",2005,Andy Budd,andybudd,2005-12-11T00:00:00+00:00,https://24ways.org/2005/the-attribute-selector-for-fun-and-no-ad-profit/,code 334,Transitional vs. Strict Markup,"When promoting web standards, standardistas often talk about XHTML as being more strict than HTML. In a sense it is, since it requires that all elements are properly closed and that attribute values are quoted. But there are two flavours of XHTML 1.0 (three if you count the Frameset DOCTYPE, which is outside the scope of this article), defined by the Transitional and Strict DOCTYPEs. And HTML 4.01 also comes in those flavours. The names reveal what they are about: Transitional DOCTYPEs are meant for those making the transition from older markup to modern ways. Strict DOCTYPEs are actually the default – the way HTML 4.01 and XHTML 1.0 were constructed to be used. A Transitional DOCTYPE may be used when you have a lot of legacy markup that cannot easily be converted to comply with a Strict DOCTYPE. But Strict is what you should be aiming for. It encourages, and in some cases enforces, the separation of structure and presentation, moving the presentational aspects from markup to CSS. From the HTML 4 Document Type Definition: This is HTML 4.01 Strict DTD, which excludes the presentation attributes and elements that W3C expects to phase out as support for style sheets matures. Authors should use the Strict DTD when possible, but may use the Transitional DTD when support for presentation attribute and elements is required. An additional benefit of using a Strict DOCTYPE is that doing so will ensure that browsers use their strictest, most standards-compliant rendering modes. Tommy Olsson provides a good summary of the benefits of using Strict over Transitional in Ten questions for Tommy Olsson at Web Standards Group: In my opinion, using a Strict DTD, either HTML 4.01 Strict or XHTML 1.0 Strict, is far more important for the quality of the future web than whether or not there is an X in front of the name. The Strict DTD promotes a separation of structure and presentation, which makes a site so much easier to maintain. For those looking to start using web standards and valid, semantic markup, it is important to understand the difference between Transitional and Strict DOCTYPEs. For complete listings of the differences between Transitional and Strict DOCTYPEs, see XHTML: Differences between Strict & Transitional, Comparison of Strict and Transitional XHTML, and XHTML1.0 Element Attributes by DTD.
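For quick reference, these are the two XHTML 1.0 DOCTYPE declarations in question – moving a page from Transitional to Strict is a matter of swapping this one line, then fixing whatever stops validating: <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Transitional//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd""> <!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"">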
Some of the differences are more likely than others to cause problems for developers moving from a Transitional DOCTYPE to a Strict one, and I’d like to mention a few of those. Elements that are not allowed in Strict DOCTYPEs center font iframe strike u Attributes not allowed in Strict DOCTYPEs align (allowed on elements related to tables: col, colgroup, tbody, td, tfoot, th, thead, and tr) language background bgcolor border (allowed on table) height (allowed on img and object) hspace name (allowed in HTML 4.01 Strict, not allowed on form and img in XHTML 1.0 Strict) noshade nowrap target text, link, vlink, and alink vspace width (allowed on img, object, table, col, and colgroup) Content model differences An element type’s content model describes what may be contained by an instance of the element type. The most important difference in content models between Transitional and Strict is that blockquote, body, and form elements may only contain block level elements. A few examples: text and images are not allowed immediately inside the body element, and need to be contained in a block level element like p or div input elements must not be direct descendants of a form element text in blockquote elements must be wrapped in a block level element like p or div Go Strict and move all presentation to CSS Something that can be helpful when doing the transition from Transitional to Strict DOCTYPEs is to focus on what each element of the page you are working on is instead of how you want it to look. Worry about looks later and get the structure and semantics right first.",2005,Roger Johansson,rogerjohansson,2005-12-13T00:00:00+00:00,https://24ways.org/2005/transitional-vs-strict-markup/,code 335,Naughty or Nice? CSS Background Images,"Web Standards based development involves many things – using semantically sound HTML to provide structure to our documents or web applications, using CSS for presentation and layout, using JavaScript responsibly, and of course, ensuring that all that we do is accessible and interoperable for as many people and user agents as we can. This we understand to be good. And it is good. Except when we don’t clearly think through the full implications of using those techniques. Which often happens when time is short and we need to get things done. Here are some naughty examples of CSS background images with their nicer, more accessible counterparts. Transaction related messages I’m as guilty of this as others (or, perhaps, I’m the only one that has done this, in which case this can serve as my holiday season confessional). We use lovely little icons to show status messages for a transaction, indicating whether the action was successful, or whether there was a warning or error. For example: “Your postal/zip code was not in the correct format.” Notice that we place a nice little icon there, and use background colours and borders to convey a specific message: there was a problem that needs to be fixed. Notice that all of this visual information is now contained in the CSS rules for that div: <div class=""error""> <p>Your postal/zip code was not in the correct format.</p> </div> div.error { background: #ffcccc url(../images/error_small.png) no-repeat 5px 4px; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; } Using this approach also makes it very easy to create matching div.success and div.warning CSS rules, meaning we have less to change in our HTML. Nice, right? No. Naughty.
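(If you’re curious, those sibling rules would be more of the same – a rough sketch, with made-up colours and icon file names: div.success { background: #e2f9e2 url(../images/success_small.png) no-repeat 5px 4px; color: #090; border-top: 1px solid #0c0; border-bottom: 1px solid #0c0; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; } div.warning { background: #fff6cc url(../images/warning_small.png) no-repeat 5px 4px; color: #960; border-top: 1px solid #c90; border-bottom: 1px solid #c90; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; } The same trick – and, as we’re about to see, the same problem.)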
Visual design communicates The CSS is being used to convey very specific information. The choice of icon, the choice of background colour and borders tell us visually that there is something wrong. With the icon as a background image, there is no way to specify any alt text for the icon, and significant meaning is lost. A screen reader user, for example, misses the fact that it is an “error.” The solution? Ask yourself: what is the bare minimum needed to indicate there was an error? Currently, in the absence of CSS, there will be no icon – which (I’m hoping you agree) is critical to communicating that there was an error. The icon should be considered content and not simply presentational. The borders and background colour are certainly much less critical – they belong in the CSS. Let’s change the code to place the image directly in the HTML and use appropriate alt text to better communicate the meaning of the icon to all users: <div class=""bettererror""> <img src=""images/error_small.png"" alt=""Error"" /> <p>Your postal/zip code was not in the correct format.</p> </div> div.bettererror { background-color: #ffcccc; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; position: relative; min-height: 1.25em; } div.bettererror img { display: block; position: absolute; left: 0.25em; top: 0.25em; padding: 0; margin: 0; } div.bettererror p { position: absolute; left: 2.5em; padding: 0; margin: 0; } Compare these two examples of transactional messages Status of a Record This example is pretty straightforward. Consider the following: a real estate listing on a web site. There are three “states” for a listing: new, normal, and sold. Here’s how they look: Example of a New Listing Example of A Sold Listing If we (forgive the pun) blindly apply the “use a CSS background image” technique, we clearly run into problems with the new and sold images – they actually contain content with no way to specify an alternative when placed in the CSS. In the case of the “new” image, we can use the same strategy as we used in the first example (the transaction result). The “new” image should be considered content and is placed in the HTML as part of the <h2>...</h2> that identifies the listing. However, when considering the “sold” listing, there are fewer changes to be made to keep the same look by leaving the “SOLD” image as a background image and providing the equivalent information elsewhere in the listing – namely, right in the heading. For those that can’t see the background image, the status is communicated clearly and right away. A screen reader user that is navigating by heading or viewing a listing will know right away that a particular property is sold. Of note here is that in both cases (new and sold) placing the status near the beginning of the record helps with a zoom layout as well. Better Example of A Sold Listing Summary Remember: in the holiday season, it’s what you give that counts!! Using CSS background images is easy and saves you time, but think of the children. And everyone else for that matter… CSS background images should only be used for presentational images, not for those that contain content (unless that content is already represented and readily available elsewhere).",2005,Derek Featherstone,derekfeatherstone,2005-12-20T00:00:00+00:00,https://24ways.org/2005/naughty-or-nice-css-background-images/,code 336,Practical Microformats with hCard,"You’ve probably heard about microformats over the last few months.
You may have even read the easily digestible introduction at Digital Web Magazine, but perhaps you’ve not found time to actually implement much yet. That’s understandable, as it can sometimes be difficult to see exactly what you’re adding by applying a microformat to a page. Sure, you’re semantically enhancing the information you’re marking up, and the Semantic Web is a great idea and all, but what benefit is it right now, today? Well, the answer to that question is simple: you’re adding lots of information that can be and is being used on the web here and now. The big ongoing battle amongst the big web companies is one of territory over information. Everyone’s grasping for as much data as possible. Some of that information many of us are cautious to give away, but a lot of it we’re happy to make freely available. Of the data you’re giving away, it makes sense to give it as much meaning as possible, thus enabling anyone from your friends and family to the giant search company down the road to make the most of it. Ok, enough of the waffle, let’s get working. Introducing hCard You may have come across hCard. It’s a microformat for describing contact information (or really address book information) from within your HTML. It’s based on the vCard format, which is the format the contacts/address book program on your computer uses. All the usual fields are available – name, address, town, website, email, you name it. If you’re running Firefox and Greasemonkey (or can do so, just to try this out), install this user script. What it does is look for instances of the hCard microformat in a page, and then add in a link to pass any hCards it finds to a web service which will convert it to a vCard. Take a look at the About the author box at the bottom of this article. It’s an hCard, so you should be able to click the icon the user script inserts and add me to your Outlook contacts or OS X Address Book with just a click. So microformats are useful after all. Free microformats all round! Implementing hCard This is the really easy bit. All the hCard microformat is, is a bunch of predefined class names that you apply to the markup you’ve probably already got around your contact information. Let’s take the example of the About the author box from this article. Here’s how the markup looks without hCard: <div class=""bio""> <h3>About the author</h3> <p>Drew McLellan is a web developer, author and no-good swindler from just outside London, England. At the <a href=""http://www.webstandards.org/"">Web Standards Project</a> he works on press, strategy and tools. Drew keeps a <a href=""http://www.allinthehead.com/"">personal weblog</a> covering web development issues and themes.</p> </div> This is a really simple example because there are only two key bits of address book information here: my name and my website address. Let’s push it a little and say that the Web Standards Project is the organisation I work for – that gives us Name, Company and URL. To kick off an hCard, you need a containing object with a class of vcard. The div I already have with a class of bio is perfect for this – all it needs to do is contain the rest of the contact information. The next thing to identify is my name. hCard uses a class of fn (meaning Full Name) to identify a name. As in this case there’s no element surrounding my name, we can just use a span. These changes give us: <div class=""bio vcard""> <h3>About the author</h3> <p><span class=""fn"">Drew McLellan</span> is a web developer...
The two remaining items are my URL and the organisation I belong to. The class names designated for those are url and org respectively. As both of those items are links in this case, I can apply the classes to those links. So here’s the finished hCard. <div class=""bio vcard""> <h3>About the author</h3> <p><span class=""fn"">Drew McLellan</span> is a web developer, author and no-good swindler from just outside London, England. At the <a class=""org"" href=""http://www.webstandards.org/"">Web Standards Project</a> he works on press, strategy and tools. Drew keeps a <a class=""url"" href=""http://www.allinthehead.com/"">personal weblog</a> covering web development issues and themes.</p> </div> OK, that was easy. By just applying a few easy class names to the HTML I was already publishing, I’ve implemented an hCard that right now anyone with Greasemonkey can click to add to their address book, that Google and Yahoo! and whoever else can index and work out important things like which websites are associated with my name if they so choose (and boy, will they so choose), and in the future who knows what. In terms of effort, practically nil. Where next? So that was a trivial example, but to be honest it doesn’t really get much more complex even with the most pernickety permutations. Because hCard is based on vCard (a mature and well thought-out standard), it’s all tried and tested. Here are some good next steps. Play with the hCard Creator Take a deep breath and read the spec Start implementing hCard as you go on your own projects – it takes very little time hCard is just one of an ever-increasing number of microformats. If this tickled your fancy, I suggest subscribing to the microformats site in your RSS reader to keep in touch with new developments. What’s the take-away? The take-away is this. They may sound like just more Web 2-point-HoHoHo hype, but microformats are a well thought-out and easy-to-implement way of adding greater depth to the information you publish online. They have some nice benefits right away – certainly at geek-level – but in the longer term they become much more significant. We’ve been at this long enough to know that the web has a long, long memory and that what you publish today will likely be around for years. But by putting the extra depth of meaning into your documents now, you can help ensure that they’ll continue to be useful in the future, and not just a bunch of flat ASCII.",2005,Drew McLellan,drewmclellan,2005-12-06T00:00:00+00:00,https://24ways.org/2005/practical-microformats-with-hcard/,code