{"rowid": 254, "title": "What I Learned in Six Years at GDS", "contents": "When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public.\nLots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned.\n1. What is the user need?\n\u2028The main discipline I learned from my time at GDS was to always ask \u2018what is the user need?\u2019 It\u2019s very easy to build something that seems like a good idea, but until you\u2019ve identified what problem you are solving for the user, you can\u2019t be sure that you are building something that is going to help solve an actual problem.\nA really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a \u201cwhere\u2019s my stuff\u201d for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what\u2019s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres.\nThe project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users\u2019 needs. At the end of this discovery, the team realised that a status tracker wasn\u2019t the way to address the problem. As they wrote in this blog post: \n\nStatus tracking tools are often just \u2018channel shift\u2019 for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place.\n\nWhat would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it\u2019s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes.\nMaking sure you know your user\nAt GDS user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you\u2019d observe users working with things you\u2019d built, but even if they weren\u2019t, it was an incredibly valuable experience, and something you should seek out if you are able to.\nEven if we think we understand our users very well, it is very enlightening to see how users actually use your stuff. Partly because in technology we tend to be power users and the average user doesn\u2019t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged.\nUser needs is not just about building things\nAsking the question \u201cwhat is the user need?\u201d really helps focus on why you are doing what you are doing. 
It keeps things on track, and helps the team think about what the actual desired end goal is (and should be). \nThinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What\u2019s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change and any areas where you need particular help with the review. \nOr you are writing an email to a colleague. What\u2019s the user need? What are you hoping the reader will learn, understand or do as a result of your email?\n2. Make things open: it makes things better\nThe second important thing I learned at GDS was \u2018make things open: it makes things better\u2019. This works on many levels: being open about your strategy, blogging about what you are doing and what you\u2019ve learned (including mistakes), and \u2013 the part that I got most involved in \u2013 coding in the open.\nTalking about your work helps clarify it\nOne thing we did really well at GDS was blogging \u2013 a lot \u2013 about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you\u2019re doing it; and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions.\nIt\u2019s also really valuable to blog about what you\u2019ve learned, especially if you\u2019ve made a mistake. It makes sure you\u2019ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning, which helps make things better.\nCoding in the open has a lot of benefits\nIn my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed).\nWhen I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people\u2019s minds at rest about these things is why it\u2019s crucial to have good standards around communication and positive behaviour \u2013 even a critical code review should be considerately given). \nBut I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won\u2019t be looking) makes everyone raise their game slightly. The very fact that you know it\u2019s open makes you make it a bit better.\nIt helps with lots of other things as well, for example it makes it easier to collaborate with people and share your work. 
And now that I\u2019ve left GDS, it\u2019s so useful to be able to look back at code I worked on to remember how things worked.\nShare what you learn\nIt\u2019s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practice. It helps you clarify your thoughts and follow through on what you\u2019ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well.\n3. Do the hard work to make it simple (tech edition)\n\u2018Start with user needs\u2019 and \u2018Make things open: it makes things better\u2019 are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: \u2018Do the hard work to make it simple\u2019, and specifically, how this manifests itself in the way we build technology.\nAt GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages is taken very seriously. There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make a commit message clearer.\nWe worked very hard on making pull requests good, keeping the reviewer in mind and making it clear to the user how best to review it.\nReviewing others\u2019 pull requests is the highest priority so that no-one is blocked, and teams have screens showing the status of open pull requests (using fourth wall) and we even had a \u2018pull request seal\u2019, a bot that publishes pull requests to Slack and gets angry if they are uncommented on for more than two days.\nMaking it easier for developers to support the site\nAnother example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone went to to be open and inclusive to developers.\nThe team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and also patiently explaining things whenever other devs in similar positions came with questions. \nThe main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual which detailed what the alert meant and some suggested actions that could be taken to address it.\nThis was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they\u2019d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day.\nDevelopers are users too\nDoing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work.\nThese three principles help us make great things\nI learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing. 
I learned about the importance of good comms, working late, responsibly and the value of content design.\nAnd the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I\u2019ve talked about here: think about the user need, make things open, and do the hard work to make it simple.", "year": "2018", "author": "Anna Shipman", "author_slug": "annashipman", "published": "2018-12-08T00:00:00+00:00", "url": "https://24ways.org/2018/what-i-learned-in-six-years-at-gds/", "topic": "business"} {"rowid": 245, "title": "Web Content Accessibility Guidelines 2.1\u2014for People Who Haven\u2019t Read the Update", "contents": "Happy United Nations International Day of Persons with Disabilities 2018! The United Nations chose \u201cEmpowering persons with disabilities and ensuring inclusiveness and equality\u201d as this year\u2019s theme. We\u2019ve seen great examples of that in 2018; for example, Paul Robert Lloyd has detailed how he improved the accessibility of this very website. \nOn social media, US Congressmember-Elect Alexandria Ocasio-Cortez started using the Clipomatic app to add live captions to her Instagram live stories, conforming to success criterion 1.2.4, \u201cCaptions (Live)\u201d of the Web Content Accessibility Guidelines (figure 1) \u2026and British Vogue Contributing Editor Sin\u00e9ad Burke has used the split-screen feature of Instagram live stories to invite an interpreter to provide live Sign Language interpretation, going above and beyond success criterion 1.2.6, \u201cSign Language (Prerecorded)\u201d of the Web Content Accessibility Guidelines (figure 2).\n\nFigure 1: Screenshot of Alexandria Ocasio-Cortez\u2019s Instagram story with live captionsFigure 2: Screenshot of Sin\u00e9ad Burke\u2019s Instagram story with Sign Language Interpretation\nThat theme chimes with this year\u2019s publication of the World Wide Web Consortium (W3C)\u2019s Web Content Accessibility Guidelines (WCAG) 2.1. In last year\u2019s \u201cWeb Content Accessibility Guidelines\u2014for People Who Haven\u2019t Read Them\u201d, I mentioned the scale of the project to produce this update during 2018: \u201cthe editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible\u201d. \nThe WCAG working group have added 17 success criteria to the 61 that they released way back in 2008\u2014for context, that was 1\u00bd years before Apple released their first iPad! These new criteria make it easier than ever for us web geeks to produce work that is more accessible to people using mobile devices and touchscreens, people with low vision, and people with cognitive and learning disabilities. \nOnce again, let\u2019s rip off all the legalese and ambiguous terminology like wrapping paper, and get up to date.\nCan your users perceive the information on your website?\nThe first guideline has criteria that help you prevent your users from asking, \u201cWhat the **** is this thing here supposed to be?\u201d We\u2019ve seven new criteria for this guideline.\n1.3.4 Some people can\u2019t easily change the orientation of the device that they use to browse the web, and so you should make sure that your users can use your website in portrait orientation and in landscape orientation. 
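To give a rough illustration of my own (not taken from the guidelines): rather than locking orientation, for example via the orientation member of a web app manifest, you can let CSS adapt the layout to whichever orientation the device is in. The .toolbar class below is purely illustrative.
@media (orientation: landscape) {
  .toolbar { flex-direction: row; }
}
@media (orientation: portrait) {
  .toolbar { flex-direction: column; }
}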
Consider how people slowly twirl presents that they have plucked from under the Christmas tree, to find the appropriate orientation\u2014and expect your users to do likewise with your websites and apps. We\u2019ve had 18\u00bd years since John Allsopp\u2019s revelatory Dao of Web Design enlightened us to \u201cembrace the fact that the web doesn\u2019t have the same constraints\u201d as printed pages, and to \u201cdesign for this flexibility\u201d. So, even though this guideline doesn\u2019t apply to websites where \u201ca specific display orientation is essential,\u201d such as a piano tutorial, always ask yourself, \u201cWhat would John Allsopp do?\u201d\n1.3.5 You should help the user\u2019s browser to automatically complete\u2013or not complete\u2013form fields, to save the user some time and effort. The surprisingly powerful and flexible autocomplete attribute for input elements should prove most useful here. If you\u2019ve used microformats or microdata to mark up information about a person, the autocomplete attribute\u2019s range of values should seem familiar. I like how the W3\u2019s \u201cUsing HTML 5.2 autocomplete attributes\u201d says that autocompleted values in forms help \u201cthose with dexterity disabilities who have trouble typing, those who may need more time, and anyone who wishes to reduce effort to fill out a form\u201d (emphasis mine). Um\u2026\ud83d\ude4b\u200d\u2642\ufe0f\n1.3.6 I like this one a lot, because it can help a huge audience to overcome difficulties that might prevent them from ever using the web. Some people have cognitive difficulties that affect their memory, focus, attention, language processing, and/or decision-making. Those users often rely on assistive technologies that present information through proprietary symbols, summaries of content, and keyboard shortcuts. You could use ARIA landmarks to identify the regions of each webpage. You could also keep an eye on the W3C\u2019s ongoing work on Personalisation Semantics.\n1.4.10 If you were to find a Nintendo Switch and \u201cSuper Mario Odyssey\u201d under your Christmas tree, you would have many hours of enjoyably scrolling horizontally and vertically to play the game. On the other hand, if you had to zoom a webpage to 400% so that you could read the content, you might have many hours of frustratedly scrolling horizontally and vertically to read the content. Learned reader, I assume you understand the purpose and the core techniques of Responsive Web Design. I also assume you\u2019re getting up to speed with the new Grid, Flexbox, and Box Alignment techniques for layout, and overflow-wrap. Using those skills, you should make sure that all content and functionality remain available when the browser is 320px wide, without your user needing to scroll horizontally. (For vertical text, you should make sure that all content and functionality remain available when the browser is 256px high, without your user needing to scroll vertically.) You don\u2019t have to do this for anything that would lose meaning if you restructured it into one narrow column. That includes some images, maps, diagrams, video, games, presentations, and data tables. Remember to check how your media queries affect font size: your user might find that text becomes smaller as they zoom into the webpage. 
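Here's an illustrative sketch of the pitfall (mine, not from the guidelines): at 400% zoom in a 1280px-wide window, the viewport is only 320 CSS pixels wide, so a rule written with small phones in mind fires and shrinks the very text your user has just zoomed to enlarge.
/* Zoomed-in desktop users hit this "phone" breakpoint too */
@media (max-width: 320px) {
  body { font-size: 14px; } /* consider leaving font-size alone here */
}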
So, test this one on real devices, or\u2014better yet\u2014test it with real users.\n1.4.11 In \u201cWeb Content Accessibility Guidelines\u2014for People Who Haven\u2019t Read Them\u201d, I recommended bookmarking Lea Verou\u2019s Contrast Ratio calculator for checking that text contrasts enough with its background (for success criteria 1.4.3 and 1.4.6), so that more people can read it more easily. For this update, you should make sure that form elements and their focus states have a 3:1 contrast ratio with the colour around them. This doesn\u2019t apply to controls that use the browser\u2019s default styling. Also, you should make sure that graphics that convey information have a 3:1 contrast ratio with the colour around them.\n1.4.12 Some people, due to low vision or dyslexia, might need to modify the typography that you agonised over. Research indicates that you should make sure that all content and functionality would remain available if a user were to set:\n\nline height to at least 1\u00bd \u00d7 the font size;\nspace below paragraphs to at least 2 \u00d7 the font size;\nletter spacing to at least 0.12 \u00d7 the font size;\nword spacing to at least 0.16 \u00d7 the font size.\n\nTo test this, check for text overlapping, text hiding behind other elements, or text disappearing.\n1.4.13 Sometimes when visiting a website, you hover over\u2014or tab on to\u2014something that unleashes a newsletter subscription pop-up, some suggested \u201crelated content\u201d, and/or a GDPR-related pop-up. On a well-designed website, you can press the Esc key on your keyboard or click a prominent \u201cClose\u201d button or \u201cX\u201d button to vanquish such intrusions. If the Esc key fails you, or if you either can\u2019t see or can\u2019t click the \u201cClose\u201d button\u2026well, you\u2019ll probably just close that browser tab. This situation can prove even more infuriating for users with low vision or cognitive disabilities. So, if new content appears when your user hovers over or tabs on to some element, you should make sure that:\n\nyour user can dismiss that content without needing to move their pointer or tab on to some other element (this doesn\u2019t apply to error warnings, or well-behaved content that doesn\u2019t obscure or replace other content);\nthe new content remains visible while your user moves their cursor over it;\nthe new content remains visible as long as the user hovers over that element or dismisses that content\u2014or until the new content is no longer valid.\n\nThis doesn\u2019t apply to situations such as hovering over an element\u2019s title attribute, where the user\u2019s browser controls the display of the content that appears.\nCan users operate the controls and links on your website?\nThe second guideline has criteria that help you prevent your users from asking, \u201cHow the **** does this thing work?\u201d We\u2019ve nine new criteria for this guideline.\n2.1.4 Some websites offer keyboard shortcuts for users. For example, the keyboard shortcuts for Gmail allow the user to press the \u21e7 key and u to mark a message as unread. Usually, shortcuts on websites include modifier keys, such as Ctrl, along with a letter, number, or punctuation symbol. Unfortunately, users who have dexterity challenges sometimes trigger those shortcuts by accident, and that can make a website impossible to use. Also, speech input technology can sometimes trigger those shortcuts. 
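To illustrate one way out (a sketch of my own, with made-up function and setting names, not from Gmail or the guidelines), you could route single-key shortcuts through a preference that the user can switch off on your settings page:
let shortcutsEnabled = true; // toggled by a checkbox in the site's settings

document.addEventListener('keydown', (event) => {
  // Ignore keystrokes while the user is typing, and respect their preference
  if (!shortcutsEnabled || event.target.matches('input, textarea, [contenteditable]')) {
    return;
  }
  if (event.key === 'u') {
    markAsUnread(); // illustrative function only
  }
});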
If your website offers single-character keyboard shortcuts, you must allow your user to turn off or remap those shortcuts. This doesn\u2019t apply to single-character keyboard shortcuts that only work when a control, such as a drop-down list, has focus.\n2.2.6 If your website uses a timeout for some process, you could store the user\u2019s data for at least 20 hours, so that users with cognitive disabilities can take a break or take longer than usual to complete the process without losing their place or losing their data. Alternatively, you could warn the user, at the start of the process, that the website will time out after whatever amount of time you have chosen. \n2.3.3 If your website has some non-essential animation (such as parallax scrolling) that starts when the user does some particular action, you could allow the user to turn off that animation so that you avoid harming users with vestibular disorders. The prefers-reduced-motion media query currently has limited browser support, but you can start using it now to avoid showing animations to users who select the \u201cReduce Motion\u201d setting (or equivalent) in their device\u2019s operating system:\n@media (prefers-reduced-motion: reduce) {\n .MrFancyPants {\n animation: none;\n }\n}\n2.5.1 Some websites let users use multi-touch gestures on touchscreen devices. For example, Google Maps allows users to pinch with two fingers to zoom out and \u201cunpinch\u201d with two fingers to zoom in. Also, some websites allow users to drag a finger to do some action, such as changing the value on an input element with type=\"range\", or swiping sideways to the next photograph in a gallery. Some users with dexterity challenges, and some users who use a head pointer, an eye-gaze system, or speech-controlled mouse emulation, might find multi-touch gestures or dragging impossible. You must make sure that your website supports single-tap alternatives to any multi-touch gestures or dragging actions that it provides. For example, if your website lets someone pinch and unpinch a map to zoom in and out, you must also provide buttons that a user can tap to zoom in and out.\n2.5.2 This might be my favourite accessibility criterion ever! Did you ever touch or press a \u201cSend\u201d button but then immediately realise that you really didn\u2019t want to send the message, and so move your finger or cursor away from the \u201cSend\u201d button before lifting your finger?! Imagine how many arguments that functionality has prevented. \ud83d\ude0c You must make sure that touching or pressing does not cause anything to happen before the user raises their finger or cursor, or make sure that the user can move their finger or cursor away to prevent the action. In JavaScript, prefer onclick to onmousedown, unless your website has actions that need onmousedown. Also, this doesn\u2019t apply to actions that need to happen as soon as the user clicks or touches. For example, a user playing a \u201cWhac-A-Mole\u201d game or a piano emulator needs the action to happen as soon as they click or touch the screen.\n2.5.3 Recently, entrepreneur and social media guru Gary Vaynerchuk has emphasised the rise of audio and voice as output and input. He quotes a Google statistic that says one in five search queries use voice input. Once again, users with disabilities have been ahead of the curve here, having used screen readers and/or dictation software for many years. 
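Speech input users tend to act on the label they can see, saying something like "click Send". Here's an illustrative before-and-after (my own sketch, not from the guidelines):
<!-- The accessible name ("Submit form") doesn't contain the visible label ("Send"),
     so saying "click Send" may not work for a speech input user -->
<button aria-label="Submit form">Send</button>

<!-- Better: let the visible text provide the accessible name -->
<button>Send</button>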
You must make sure that the text that appears on a form control or image matches how your HTML identifies that form control or image. Use proper semantic HTML to achieve this:\n\nuse the label element to pair text with the corresponding input element;\nuse an alt attribute value that exactly matches any text that appears in an image;\nuse an aria-labelledby attribute value that exactly matches the text that appears in any complex component.\n\n2.5.4 Modern Web APIs allow web developers to specify how their website will react to the user shaking, tilting, or gesturing towards their device. Some users might find those actions difficult, impossible, or embarrassing to perform. If you make any functionality available when the user shakes, tilts, or gestures towards their device, you must provide form controls that make that same functionality available. As usual, this doesn\u2019t apply to websites that require shaking, tilting, or gesturing; this includes some games and music programmes. John Gruber describes the iPhone\u2019s \u201cShake to Undo\u201d gesture as \u201cdreadful \u2014 impossible to discover through exploration of the on-screen [user interface], bad for accessibility, and risks your phone flying out of your hand\u201d. This accessibility criterion seems to empathise with John: you must make sure that your user can prevent your website from responding to shaking, tilting and/or gesturing towards their device.\n2.5.5 Homer Simpson\u2019s telephone famously complained, \u201cThe fingers you have used to dial are too fat.\u201d I think we\u2019ve all felt like that when using phones and tablets, particularly when trying to dismiss pop-ups and ads. You could make interactive elements at least 44px wide \u00d7 44px high. Apple\u2019s \u201cHuman Interface Guidelines\u201d agree: \u201cProvide ample touch targets for interactive elements. Try to maintain a minimum tappable area of 44pt x 44pt for all controls.\u201d This doesn\u2019t apply to links within inline text, or to unstyled elements whose size the browser controls.\n2.5.6 Expect your users to use whichever input devices they want, and to change from one to another whenever they please. For example, a user with a tablet and keyboard might jab icons on the screen while typing on the keyboard, or a user might dictate text while alone and then type on a keyboard when a colleague arrives. You could make sure that your website allows your users to use whichever available input modality they choose. Once again, this doesn\u2019t apply to websites that require a specific modality; this includes typing tutors and music programmes.\nCan users understand your content?\nThe third guideline has criteria that help you prevent your users from asking, \u201cWhat the **** does this mean?\u201d We\u2019ve no new criteria for this guideline. \nHave you made your website robust enough to work on your users\u2019 browsers and assistive technologies?\nThe fourth and final guideline has criteria that help you prevent your users from asking, \u201cWhy the **** doesn\u2019t this work on my device?\u201d We\u2019ve one new criterion for this guideline.\n4.1.3 Sometimes you need to let your user know the status of something: \u201cDid it work OK? What was the error? How far through it are we?\u201d However, you should avoid making your user lose their place on the webpage, and so you should let them know the status without opening a new window, focusing on another element, or submitting a form. 
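In practice (an illustrative sketch of my own, not lifted from the guidelines), that usually means updating a live region that is already in the page, so assistive technology announces the change while focus stays where it is:
<div role="status" id="save-status"></div>

// Later, once the save has finished:
document.getElementById('save-status').textContent = 'Your changes have been saved.';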
To do this properly for assistive technology users, choose the appropriate ARIA role for the new content; for example: \n\nif your user needs to know, \u201cDid it work OK?\u201d, add role=\"status\";\nif your user needs to know, \u201cWhat was the error?\u201d, add role=\"alert\";\nif your user needs to know, \u201cHow far through it are we?\u201d, add role=\"log\" (for a chat window) or role=\"progressbar\" (for, well, a progress bar).\n\nBetter design for humans\nMy favourite of Luke Wroblewski\u2019s collection of Design Quotes is, \u201cDesign is the art of gradually applying constraints until only one solution remains,\u201d from that most prolific author, \u201cUnknown\u201d. I\u2019ve always viewed the Web Content Accessibility Guidelines as people-based constraints, and liked how they help the design process. With these 17 new web content accessibility criteria, go forth and create solutions that more people than ever before can use.\nSpending those book vouchers you got for Christmas\nWhat next? If you\u2019re looking for something to do to keep you busy this Christmas, I thoroughly recommend these four books for increasing your accessibility expertise:\n\n\u201cPro HTML5 Accessibility\u201d by Joshue O Connor (Head of Accessibility (Interim) at the UK Government Digital Service, Director of InterAccess, and one of the editors of the Web Content Accessibility Guidelines 2.1): Although this book is six years old\u2014a long time in web design\u2014I find it an excellent go-to resource. It begins by explaining how people with disabilities use the web, and then expertly explains modern HTML in that context.\n\u201cA Web for Everyone\u2014Designing Accessible User Experiences\u201d by Sarah Horton (the Paciello Group\u2019s UX Strategy Lead) and Whitney Quesenbery (the Center for Civic Design\u2019s co-director): This book covers the Web Content Accessibility Guidelines 2.0, the principles of Universal Design, and design thinking. Its personas for Accessible UX and its profiles of well-known industry figures\u2014including some 24ways authors\u2014keep its content practical and relevant throughout.\n\u201cAccessibility For Everyone\u201d by Laura Kalbag (Ind.ie\u2019s co-founder and designer, and 24ways author): This book is just over a year old, and so serves as a great resource for up-to-date coverage of guidelines, laws, and accessibility features of operating systems\u2014as well as content, design, coding, and testing. The audiobook, which Laura narrates, can help you and your colleagues go from having little or no understanding of web accessibility, to becoming familiar with all aspects of web accessibility\u2014in less than four hours.\n\u201cJust Ask: Integrating Accessibility Throughout Design\u201d by Shawn Lawton Henry (the World Wide Web Consortium (W3C)\u2019s Web Accessibility Initiative (WAI)\u2019s Outreach Coordinator): Although this book is 11\u00bd years old, the way it presents accessibility as part of the User-Centered Design process is timeless. I found its section on Usability Testing with people with disabilities particularly useful.", "year": "2018", "author": "Alan Dalton", "author_slug": "alandalton", "published": "2018-12-03T00:00:00+00:00", "url": "https://24ways.org/2018/wcag-for-people-who-havent-read-the-update/", "topic": "ux"} {"rowid": 252, "title": "Turn Jekyll up to Eleventy", "contents": "Sometimes it pays not to over complicate things. 
While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper \u2014 and more secure \u2014 option. \nThe JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it\u2019s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.\nBy combining three approachable languages \u2014 Markdown for content, YAML for data and Liquid for templating \u2014 the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it\u2019s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself \u201cthis, but in Node\u201d. Thankfully, one of Santa\u2019s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.\nIntroducing Eleventy\nEleventy is a more flexible alternative Jekyll. Besides being written in Node, it\u2019s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).\nAs content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I\u2019ve discovered, there are a few gotchas. If you\u2019ve been considering making the switch, here are a few tips and tricks to help you on your way1.\nNote: Throughout this article, I\u2019ll be converting Matt Cone\u2019s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:\ngit clone https://github.com/mattcone/markdown-guide.git\ncd markdown-guide\nBefore you start\nIf you\u2019ve used tools like Grunt, Gulp or Webpack, you\u2019ll be familiar with Node.js but, if you\u2019ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now\u2019s the time to install Node.js and set up your project to work with its package manager, NPM:\n\nInstall Node.js:\n\nMac: If you haven\u2019t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.\nWindows: Download the Windows installer from the Node.js website and follow the instructions.\n\nInitiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems\u2019s Gemfile, this file contains a list of your project\u2019s third-party dependencies.\n\nIf you\u2019re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. 
This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn\u2019t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.\nInstalling Eleventy\nWith Node.js installed and your project setup to work with NPM, we can now install Eleventy as a dependency:\nnpm install --save-dev @11ty/eleventy\nIf you open package.json you should see the following:\n\u2026\n\"devDependencies\": {\n \"@11ty/eleventy\": \"^0.6.0\"\n}\n\u2026\nWe can now run Eleventy from the command line using NPM\u2019s npx command. For example, to covert the README.md file to HTML, we can run the following:\nnpx eleventy --input=README.md --formats=md\nThis command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition.\nConfiguration\nWhereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we\u2019ll see later on.\nWe\u2019ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:\nmodule.exports = function(eleventyConfig) {\n return {\n dir: {\n input: \"./\", // Equivalent to Jekyll's source property\n output: \"./_site\" // Equivalent to Jekyll's destination property\n }\n };\n};\nA few other things to bear in mind:\n\n\nWhereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).\n\nBy default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you\u2019ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins.\n\nLayouts\nOne area Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).\nWanting to keep our layouts together, we\u2019ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy\u2019s config:\nmodule.exports = function(eleventyConfig) {\n // Aliases are in relation to the _includes folder\n eleventyConfig.addLayoutAlias('about', 'layouts/about.html');\n eleventyConfig.addLayoutAlias('book', 'layouts/book.html');\n eleventyConfig.addLayoutAlias('default', 'layouts/default.html');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n }\n };\n}\nDetermining which template language to use\nEleventy will transform Markdown (.md) files using Liquid by default, but we\u2019ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. 
basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.\nVariables\nOn the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.\nSite variables\nAlongside build settings, Jekyll let\u2019s you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:\ntitle: \"Markdown Guide\"\nurl: https://www.markdownguide.org\nbaseurl: \"\"\nrepo: http://github.com/mattcone/markdown-guide\ncomments: false\nauthor:\n name: \"Matt Cone\"\nog_locale: \"en_US\"\nEleventy\u2019s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.\n{\n \"title\": \"Markdown Guide\",\n \"url\": \"https://www.markdownguide.org\",\n \"baseurl\": \"\",\n \"repo\": \"http://github.com/mattcone/markdown-guide\",\n \"comments\": false,\n \"author\": {\n \"name\": \"Matt Cone\"\n },\n \"og_locale\": \"en_US\"\n}\nPage variables\nThe table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:\n\n\n\nJekyll\nEleventy\n\n\n\n\npage.url\npage.url\n\n\npage.date\npage.date\n\n\npage.path\npage.inputPath\n\n\npage.id\npage.outputPath\n\n\npage.name\npage.fileSlug\n\n\npage.content\ncontent\n\n\npage.title\ntitle\n\n\npage.foobar\nfoobar\n\n\n\nWhen iterating through pages, frontmatter values are available via the data object while content is available via templateContent:\n\n\n\nJekyll\nEleventy\n\n\n\n\nitem.url\nitem.url\n\n\nitem.date\nitem.date\n\n\nitem.path\nitem.inputPath\n\n\nitem.name\nitem.fileSlug\n\n\nitem.id\nitem.outputPath\n\n\nitem.content\nitem.templateContent\n\n\nitem.title\nitem.data.title\n\n\nitem.foobar\nitem.data.foobar\n\n\n\nIdeally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.\nPagination variables\nWhereas Jekyll\u2019s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:\n\n\n\nJekyll\nEleventy\n\n\n\n\npaginator.page\npagination.pageNumber\n\n\npaginator.per_page\npagination.size\n\n\npaginator.posts\npagination.items\n\n\npaginator.previous_page_path\npagination.previousPageHref\n\n\npaginator.next_page_path\npagination.nextPageHref\n\n\n\nFilters\nAlthough Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few \u2014 more than can be covered by this article \u2014 but you can replicate them by using Eleventy\u2019s addFilter configuration option. Let\u2019s convert two used by our Markdown Guide: jsonify and where.\nThe jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. 
addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:\n// {{ variable | jsonify }}\neleventyConfig.addFilter('jsonify', function (variable) {\n return JSON.stringify(variable);\n});\nJekyll\u2019s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:\n{{ site.members | where: \"graduation_year\",\"2014\" }}\nTo account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:\n// {{ array | where: key,value }}\neleventyConfig.addFilter('where', function (array, key, value) {\n return array.filter(item => {\n const keys = key.split('.');\n const reducedKey = keys.reduce((object, key) => {\n return object[key];\n }, item);\n\n return (reducedKey === value ? item : false);\n });\n});\nThere\u2019s quite a bit going on within this filter, but I\u2019ll try to explain. Essentially we\u2019re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it\u2019s removed. Phew!\nIncludes\nAs with filters, Jekyll provides a set of tags that aren\u2019t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you\u2019re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:\n\n{% include include.html value=\"key\" %}\n\n\n{{ include.value }}\nin Eleventy, you would do this:\n\n{% include \"include.html\", value: \"key\" %}\n\n\n{{ value }}\nA downside of Shopify\u2019s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.\nTweaking Liquid\nYou may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS\u2019s dynamicPartials option to false. Additionally, Eleventy doesn\u2019t support the include_relative tag, meaning you can\u2019t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option. \nThankfully, Eleventy allows us to pass options to LiquidJS:\neleventyConfig.setLiquidOptions({\n dynamicPartials: false,\n root: [\n '_includes',\n '.'\n ]\n});\nCollections\nJekyll\u2019s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.\nCollections in Jekyll\nIn Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. 
Our Markdown Guide has two collections:\ncollections:\n - basic-syntax\n - extended-syntax\nThese correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:\n{% for syntax in site.extended-syntax %}\n {{ syntax.title }}\n{% endfor %}\nCollections in Eleventy\nThere are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:\n---\ntitle: Strikethrough\nsyntax-id: strikethrough\nsyntax-summary: \"~~The world is flat.~~\"\ntag: extended-syntax\n---\nWe can then iterate over tagged content like this:\n{% for syntax in collections.extended-syntax %}\n {{ syntax.data.title }}\n{% endfor %}\nEleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):\neleventyConfig.addCollection('basic-syntax', collection => {\n return collection.getFilteredByGlob('_basic-syntax/*.md');\n});\n\neleventyConfig.addCollection('extended-syntax', collection => {\n return collection.getFilteredByGlob('_extended-syntax/*.md');\n});\nWe can extend this further. For example, say we wanted to sort a collection by the display_order property in our document\u2019s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript\u2019s sort method to sort the result:\neleventyConfig.addCollection('example', collection => {\n return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {\n return a.data.display_order - b.data.display_order;\n });\n});\nHopefully, this gives you just a hint of what\u2019s possible using this approach.\nUsing directory data to manage defaults\nBy default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:\n---\ntitle: Lists\nsyntax-id: lists\napi: \"no\"\npermalink: /basic-syntax/lists.html\n---\nAgain, this is probably not something we want to manage on a file-by-file basis but again, Eleventy has features that can help: directory data and permalink variables.\nFor example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:\n{\n \"layout\": \"syntax\",\n \"tag\": \"basic-syntax\",\n \"permalink\": \"basic-syntax/{{ title | slug }}.html\"\n}\nHowever, Markdown Guide doesn\u2019t publish syntax examples at individual permanent URLs, it merely uses content files to store data. So let\u2019s change things around a little. No longer tied to Jekyll\u2019s rules about where collection folders should be saved and how they should be labelled, we\u2019ll move them into a folder called _content:\nmarkdown-guide\n\u2514\u2500\u2500 _content\n \u251c\u2500\u2500 basic-syntax\n \u251c\u2500\u2500 extended-syntax\n \u251c\u2500\u2500 getting-started\n \u2514\u2500\u2500 _content.json\nWe will also add a directory data file (_content.json) inside this folder. 
As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:\n{\n \"permalink\": false\n}\nStatic files\nEleventy only transforms files whose template language it\u2019s familiar with. But often we may have static assets that don\u2019t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:\nmodule.exports = function(eleventyConfig) {\n \u2026\n\n // Copy the `assets` directory to the compiled site folder\n eleventyConfig.addPassthroughCopy('assets');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n },\n passthroughFileCopy: true\n };\n}\nFinal considerations\nAssets\nUnlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts \u2014 we have plenty of choices in that department already. If you\u2019ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into Javascript, you will need to research alternative options, options which are beyond the scope of this article, sadly.\nPublishing to GitHub Pages\nOne of the benefits of Jekyll is its deep integration with GitHub Pages. To publish an Eleventy generated site \u2014 or any site not built with Jekyll \u2014 to GitHub Pages can be quite involved, but typically involves copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It\u2019s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember; you can publish a static site almost anywhere!\n\nGoing one louder\nIf you\u2019ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder why it can be prudent to avoid jumping aboard bandwagons. \nWhile it\u2019s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy\u2019s appeal, it\u2019s only a year old so has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it.\nI moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that\u2019s perfectly fine! \nBut these go to 11.\n\n\n\n\nInformation provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5\u00a0\u21a9", "year": "2018", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2018-12-11T00:00:00+00:00", "url": "https://24ways.org/2018/turn-jekyll-up-to-eleventy/", "topic": "content"} {"rowid": 261, "title": "Surviving\u2014and Thriving\u2014as a Remote Worker", "contents": "Remote work is hot right now. Many people even say that remote work is the future. 
Why should a company limit itself to hiring from a specific geographic location when there\u2019s an entire world of talent out there?\nI\u2019ve been working remotely, full-time, for five and a half years. I\u2019ve reached the point where I can\u2019t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I\u2019ve grown attached to my current level of freedom and flexibility.\nHowever, it took me a lot of trial and error to reach success as a remote worker \u2014 and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn\u2019t for everyone, but most people, with enough effort, can make it work \u2014 or even thrive. Here\u2019s what I\u2019ve learned in over five years of working remotely.\nExperiment with your environment\nAs a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space \u2014 whether that\u2019s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch\u2026 but I\u2019ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn\u2019t completely destroy your eyesight.\nWorking remotely doesn\u2019t always mean working from home. If you don\u2019t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful it you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don\u2019t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you\u2019re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls.\nCo-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They\u2019re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am impulsive and impatient person. When I want a book now, I mean now.) \nJust be polite \u2014 make sure your headphones don\u2019t leak, and don\u2019t work from a library if you have a day full of calls.\nRemember, too, that you don\u2019t have to stay in the same spot all day. It\u2019s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don\u2019t force yourself to sit at your desk for eight hours if that doesn\u2019t work for you.\nSet boundaries\nIf you\u2019re a workaholic, working remotely can be a challenge. It\u2019s incredibly easy to just\u2026 work. All the time. My work computer is almost always with me. 
If I remember at 11pm that I wanted to do something, there\u2019s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it\u2019s time to figure this out, eh?\nLearning how to set boundaries is one of the most important lessons I\u2019ve learned working remotely. (And honestly, it\u2019s something I still struggle with). \nFor a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I need to react to, and then rolling over to my couch with my computer. Suddenly, it\u2019s noon, I\u2019m unwashed, unfed, starting to get a headache, and wondering why suddenly I hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot.\nI recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time.\nI too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can\u2019t get sucked into work before I\u2019m even dressed. I\u2019ve gotten pretty good at forbidding myself from working until I\u2019m ready, but building any new habit requires intentionality. \nSomething else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn\u2019t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I\u2019ve heard other coworkers have figured out ways to work through these technical issues, so I\u2019m hoping to give it another try soon.\nYou might noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It\u2019s true! If you\u2019re a more disciplined person, you might not need any of these coping mechanisms. If you\u2019re struggling, finding ways to subvert your own bad habits can be the difference between thriving or burning out.\nCreate intentional transition time\nI know it\u2019s a stereotype that people who work from home stay in their pajamas all day, but\u2026 sometimes, it\u2019s very easy to do. I\u2019ve found that in order to reach peak focus, I need to create intentional transition time. \nThe most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it\u2019s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort. \nI\u2019ve found it helpful to take similar steps at the end of my day. 
If I\u2019ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch. \nThis is another reason working from outside your home is advantageous. Commutes, even if it\u2019s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen \u2014 not next to your computer. \nEmbrace async\nIf you\u2019re used to working in an office, you\u2019ve probably gotten pretty used to being able to pop over to a colleague\u2019s desk if you need to ask a question. They\u2019re pretty much forced to engage with you at that point. When you\u2019re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary\u2019s in Australia and it\u2019s 3am there right now. \nFor many remote workers, that\u2019s part of the package. When you\u2019re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers.\nAsynchronous communication is great. Not everyone can be present for every meeting or office conversation \u2014 and the same goes for working remotely. However, when you\u2019re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (\u201cp2s\u201d) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase \u2014 \u201cp2 or it didn\u2019t happen.\u201d\nWorking remotely has made me a better communicator largely because I\u2019ve gotten into the habit of making written updates. I\u2019ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it!\n(That said, if you\u2019re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.)\nSeek out social opportunities\nEven introverts can feel lonely or isolated. When you work remotely, there isn\u2019t a built-in community you\u2019re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide.\nI have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.)\nEvery now and then, I\u2019ll also hop on a video call with some work friends and just hang out for a little while. 
It feels great to actually see someone laugh.\nIf you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work. \nIf you don\u2019t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you\u2019ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can\u2019t fall back on your work to provide community, you need to build your own.\n\nThese are tips that I\u2019ve found help me, but not everyone works the same way. Remember that it\u2019s okay to experiment \u2014 just because you\u2019ve worked one way, doesn\u2019t mean that\u2019s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment!\nHope to see you all online soon \ud83d\ude4c", "year": "2018", "author": "Mel Choyce", "author_slug": "melchoyce", "published": "2018-12-09T00:00:00+00:00", "url": "https://24ways.org/2018/thriving-as-a-remote-worker/", "topic": "process"} {"rowid": 251, "title": "The System, the Search, and the Food Bank", "contents": "Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center. \nOn the belt are plastic bins filled with assortments of shelf-stable food\u2014one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like \u201cBottled Water\u201d or \u201cCereal\u201d or \u201cCandy.\u201d \nSuch was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins.\nI could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there\u2019s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into.\nLook, I can\u2019t quite explain it: I just know that I love sorting, organizing, and classifying things\u2014groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask?\nThe opportunity to create meaning from nothing is at the core of my excitement, which is why I\u2019ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. 
\u201cI can\u2019t believe they\u2019re letting me do this,\u201d I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey.\nThe jumble of donated items coming into the center need to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It\u2019s not just a nice-to-have that we spent our morning separating cookies from carrots\u2014it\u2019s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we\u2019re talking about donated groceries or\u2014there it is\u2014web content.\nThis happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.) \nIs X a Y? was the question at the heart of our food sorting\u2014but it\u2019s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves:\n\nWhat is the criteria that defines Y?\nDoes X meet that criteria?\n\nWe don\u2019t usually articulate it so concretely because it\u2019s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn\u2019t require much thought to place them in the Canned Soup box. Boxed broth, on the other hand, wasn\u2019t allowed, causing a small cognitive hiccup\u2014this X is NOT a Y\u2014that sometimes meant having to re-sort our boxes.\nOn the web, we\u2019re interested\u2014I would hope\u2014in reducing cognitive hiccups for our users. We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access. After all, most of the time, the process of using the internet is one of uniting a question with an answer\u2014Is this article from a trustworthy source? Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y?\nWe have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means\u2014well, this means a lot of things, and I\u2019ve got limited space here, so let\u2019s focus on these three lessons from the food bank:\n\n\nUse plain, familiar language. This advice seems to be given constantly, but that\u2019s because it\u2019s solid and it\u2019s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it\u2019s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I\u2019ve seen it), or other internally focused language when trying to provide wayfinding for users. 
Choose words that your audience knows\u2014not only will they be more likely to spot what they\u2019re looking for on your site or app, but you\u2019ll turn up more often in search results.\n\n\nCreate consistency in all things. Missteps in consistency look like my earlier chicken broth example\u2014changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts\u2014these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall.\n\nMake the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it.\n\nThis idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. Like any new visitor to a website, we came into the system not knowing the full picture. We didn\u2019t know every category label around the conveyer belt, nor what criteria each category warranted. \nThe system wasn\u2019t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We\u2019d ask staff members. We\u2019d ask more seasoned volunteers. We\u2019d ask each other. We\u2019d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn.\nThe more we learned, the easier the sorting became. That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it.\nThe same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away\u2014or, worse, misinform or hurt them. \nThis is why we must make choices that prioritize transparency, consistency, and familiarity. 
Our users want to know if X is a Y\u2014well-sorted content can give them the answer.", "year": "2018", "author": "Lisa Maria Martin", "author_slug": "lisamariamartin", "published": "2018-12-16T00:00:00+00:00", "url": "https://24ways.org/2018/the-system-the-search-and-the-food-bank/", "topic": "content"} {"rowid": 260, "title": "The Art of Mathematics: A Mandala Maker Tutorial", "contents": "In front-end development, there\u2019s often a great deal of focus on tools that aim to make our work more efficient. But what if you\u2019re new to web development? When you\u2019re just starting out, the amount of new material can be overwhelming, particularly if you don\u2019t have a solid background in Computer Science. But the truth is, once you\u2019ve learned a little bit of JavaScript, you can already make some pretty impressive things.\nA couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days:\nMandala Maker user interface\nThe coolest part about it is the fact that it\u2019s a tool: anyone can use it to create something original and brand new. \nIn this tutorial, we\u2019ll build a smaller version of this app \u2013 a symmetrical drawing tool in ES5, JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we\u2019re done, you\u2019re on your own and can tweak it as you please. Be creative!\nPreparations: a blank canvas\nThe first thing you\u2019ll need for this project is a designated drawing space. We\u2019ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like).\nFiles\nCreate 3 files: index.html, styles.css, main.js. Don\u2019t forget to include your JS and CSS files in your HTML. \n\n\n\n \n \n \n\n\n \n

Your browser doesn't support canvas.

\n
\n\n\nI\u2019ll ask you to update your HTML file at a later point, but the CSS file we\u2019ll start with will stay the same throughout the project. This is the full CSS we are going to use:\nbody {\n background-color: #ccc;\n text-align: center;\n}\n\ncanvas {\n touch-action: none;\n background-color: #fff;\n}\n\nbutton {\n font-size: 110%;\n}\nNext steps\nWe are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts:\n\nBuilding a simple drawing app with one line and one color \nAdding a Clear button and a color picker\nAdding more functionality: 2 line drawing (add the first reflection)\nAdding more functionality: 8 line drawing (add 6 more reflections!)\n\nInteractive demos\nThis tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet.\nPart 1: A simple drawing app\nLet\u2019s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables:\nvar canvas, context, w, h,\n prevX = 0, currX = 0, prevY = 0, currY = 0,\n draw = false;\nThe variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY) repeatedly many times while we move the pointer upon the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true. \n1. init\nResponsible for canvas set up, this listens to pointer events and the location of their coordinates and sets everything in motion by calling other functions, which in turn handle touch and movement events. \nfunction init() {\n canvas = document.querySelector(\"canvas\");\n context = canvas.getContext(\"2d\");\n w = canvas.width;\n h = canvas.height;\n\n canvas.onpointermove = handlePointerMove;\n canvas.onpointerdown = handlePointerDown;\n canvas.onpointerup = stopDrawing;\n canvas.onpointerout = stopDrawing;\n}\n2. drawLine\nThis is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial.\nlineWidth and linecap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY).\nfunction drawLine() {\n var a = prevX,\n b = prevY,\n c = currX,\n d = currY;\n\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n context.beginPath();\n context.moveTo(a, b);\n context.lineTo(c, d);\n context.stroke();\n context.closePath();\n}\n3. 
stopDrawing\nThis is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout).\nfunction stopDrawing() {\n draw = false;\n}\n4. recordPointerLocation\nThis tracks the pointer\u2019s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it.\nfunction recordPointerLocation(e) {\n prevX = currX;\n prevY = currY;\n currX = e.clientX - canvas.offsetLeft;\n currY = e.clientY - canvas.offsetTop;\n}\n5. handlePointerMove\nThis is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it.\nfunction handlePointerMove(e) {\n if (draw) {\n recordPointerLocation(e);\n drawLine();\n }\n}\n6. handlePointerDown\nThis is set by init to run when the pointer is down (finger is on touchscreen or mouse is clicked). If it is, it calls recordPointerLocation to get the path and sets draw to true. That\u2019s because we only want movement events from handlePointerMove to cause drawing if the pointer is down.\nfunction handlePointerDown(e) {\n recordPointerLocation(e);\n draw = true;\n}\nFinally, we have a working drawing app. But that\u2019s just the beginning!\nSee the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 2: Add a Clear button and a color picker\nNow we\u2019ll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear.\n\n \n

Your browser doesn't support canvas.

\n
\n
\n \n \n
\n\nColor picker\nThis is our new color picker function. It targets the input element by its class and gets its value. \nfunction getColor() {\n return document.querySelector(\".color\").value;\n}\nUp until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We\u2019ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.\nfunction drawLine() {\n //...code... \n context.strokeStyle = getColor();\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n //...code... \n}\nClear button\nThis is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.\nfunction clearCanvas() {\n if (confirm(\"Want to clear?\")) {\n context.clearRect(0, 0, w, h);\n }\n}\nThe method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner. \nIf we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up.\nLet\u2019s add this event handler to init:\nfunction init() {\n //...code...\n canvas.onpointermove = handlePointerMove;\n canvas.onpointerdown = handlePointerDown;\n canvas.onpointerup = stopDrawing;\n canvas.onpointerout = stopDrawing;\n document.querySelector(\".clear\").onclick = clearCanvas;\n}\nSee the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 3: Draw with 2 lines\nIt\u2019s time to make a line appear where no pointer has gone before. A ghost line! \nFor that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it\u2019s going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn\u2019t matter which one we choose. Let\u2019s go with the x-axis. \nHere is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!). \nNow, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.\nA sketch illustrating the mathematics of reflecting a point.\nWhat happens to the x coordinates?\nThe variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them \u201cthe x coordinates\u201d. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c. \nWhat happens to the y coordinates?\nWhat about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? 
The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height.\nThis is the new code for the app\u2019s variables and the two lines: the one that fills the pointer\u2019s path and the one mirroring it across the x-axis.\nfunction drawLine() {\n var a = prevX, a_ = a,\n b = prevY, b_ = h-b,\n c = currX, c_ = c,\n d = currY, d_ = h-d;\n\n //... code ...\n\n // Draw line #1, at the pointer's location\n context.moveTo(a, b);\n context.lineTo(c, d);\n\n // Draw line #2, mirroring the line #1\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ...\n}\nIn case this was too abstract for you, let\u2019s look at some actual numbers to see how this works.\nLet\u2019s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.\nSo b' = 10 - 2 = 8 and d' = 10 - 3 = 7.\nWe use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That\u2019s it, really. This is how it works for a single point, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that behave just like points.\nIf you are still confused, I don\u2019t blame you. \nHere is the result. Draw something and see what happens.\nSee the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 4: Draw with 8 lines\nI have made yet another confusing sketch, with points C and D, so you understand what we\u2019re trying to do. Later on we\u2019ll look at points E, F, G and H as well. The circled point is the one we\u2019re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas. \nA sketch illustrating points C and D.\nThis is the part where the math gets a bit mathier, as our drawLine function evolves further. We\u2019ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let\u2019s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).\nfunction drawLine() {\n\n //... code ... \n\n // Reassign values\n a_ = w-a; b_ = b;\n c_ = w-c; d_ = d;\n\n // Draw the 3rd line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n // Reassign values\n a_ = w-a; b_ = h-b;\n c_ = w-c; d_ = h-d;\n\n // Draw the 4th line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ... \nWhat is happening?\nYou might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That\u2019s because we want the symmetry to hold for a rectangular canvas as well, and this way it will. \nAlso, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It\u2019s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous). 
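Before moving on, here is a small standalone snippet (not part of the tutorial files, just a quick check you can paste into the console) that plugs the example numbers from Part 3 into the same formulas, so you can verify each reflection for yourself:\nvar w = 10, h = 10; // the tiny example canvas\nvar a = 3, b = 2, c = 4, d = 3; // the example segment\n\n// Line #2: reflect across the x-axis (y values flip, x values stay)\nconsole.log(a, h - b, c, h - d); // 3 8 4 7\n\n// Line #3: reflect across the y-axis (x values flip, y values stay)\nconsole.log(w - a, b, w - c, d); // 7 2 6 3\n\n// Line #4: reflect across both axes\nconsole.log(w - a, h - b, w - c, h - d); // 7 8 6 7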
\nWhat happens to the x coordinates?\nAs you recall, our x coordinates are a (prevX) and c (currX).\nFor the third line we are adding, a' = w - a and c' = w - c, which means\u2026\nFor the fourth line, the same thing happens to our x coordinates a and c.\nWhat happens to the y coordinates?\nAs you recall, our y coordinates are b (prevY) and d (currY).\nFor the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis. \nFor the fourth line, b' = h - b and d' = h - d, which we\u2019ve seen before: that\u2019s a reflection across the x-axis.\nWe have four more lines, or locations, to define. Note: the part of the code that\u2019s responsible for drawing a micro-line between the newly calculated coordinates is always the same:\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\nWe can leave it out of the next code snippets and just focus on the calculations, i.e., the reassignments. \nOnce again, we need some concrete examples to see where we\u2019re going, so here\u2019s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code.\nA sketch illustrating points E and F.\nThis is the code for E and F:\n // Reassign for 5\n a_ = w/2+h/2-b; b_ = w/2+h/2-a;\n c_ = w/2+h/2-d; d_ = w/2+h/2-c;\n\n // Reassign for 6\n a_ = w/2+h/2-b; b_ = h/2-w/2+a;\n c_ = w/2+h/2-d; d_ = h/2-w/2+c;\nTheir x coordinates are identical and their y coordinates are reversed to one another.\nThis one will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).\nA sketch illustrating points G and H.\nThis is the code:\n // Reassign for 7\n a_ = w/2-h/2+b; b_ = w/2+h/2-a;\n c_ = w/2-h/2+d; d_ = w/2+h/2-c;\n\n // Reassign for 8\n a_ = w/2-h/2+b; b_ = h/2-w/2+a;\n c_ = w/2-h/2+d; d_ = h/2-w/2+c;\n //...code... \n}\nOnce again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won\u2019t go into the full details, since this has been a long enough journey as it is, and I think we\u2019ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.\nI hope you had fun learning! This is our final app:\nSee the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.", "year": "2018", "author": "Hagar Shilo", "author_slug": "hagarshilo", "published": "2018-12-02T00:00:00+00:00", "url": "https://24ways.org/2018/the-art-of-mathematics/", "topic": "code"} {"rowid": 257, "title": "The (Switch)-Case for State Machines in User Interfaces", "contents": "You\u2019re tasked with creating a login form. Email, password, submit button, done.\n\u201cThis will be easy,\u201d you think to yourself.\nLogin form by Selecto\nYou\u2019ve made similar forms many times in the past; it\u2019s essentially muscle memory at this point. You\u2019re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. 
Sure, you\u2019ll have to translate the pixels to meaningful, responsive CSS values, but that\u2019s the least of your problems.\nAs you\u2019re writing up the HTML structure and CSS layout and styles for this form, you realize that you don\u2019t know what the successful \u201clogged in\u201d page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.\n\nWhat if login fails? Where do those errors show up?\nShould we show errors differently if the user forgot to enter their email, or password, or both?\nOr should the submit button be disabled?\nShould we validate the email field?\nWhen should we show validation errors \u2013 as they\u2019re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)\nWhen should the errors disappear?\nWhat do we show during the login process? Some loading spinner?\nWhat if loading takes too long, or a server error occurs?\n\nMany more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.\nModeling Behavior\nDescribing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story \u2013 they often leave out potential edge-cases or small yet important bits of information.\nHowever, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.\nThe general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:\n\nstart - not submitted yet\nloading - submitted and logging in\nsuccess - successfully logged in\nerror - login failed\n\nAdditionally, we can describe an application as accepting a finite number of events \u2013 that is, all the possible events that can be \u201csent\u201d to the application, either from the user or some other external entity:\n\nSUBMIT - pressing the submit button\nRESOLVE - the server responds, indicating that login is successful\nREJECT - the server responds, indicating that login failed\n\nThen, we can combine these states and events to describe the transitions between them. That is, when the application is in one state, and an event occurs, we can specify what the next state should be:\n\nFrom the start state, when the SUBMIT event occurs, the app should be in the loading state.\nFrom the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.\nIf login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.\nFrom the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.\nOtherwise, if any other event occurs, don\u2019t do anything and stay in the same state.\n\nThat\u2019s a pretty thorough description, similar to a user story! It\u2019s also a bit more symbolic than a user story (e.g., \u201cwhen the SUBMIT event occurs\u201d instead of \u201cwhen the user presses the submit button\u201d), and that\u2019s for a reason. 
By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:\n\nEvery state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.\nFrom Visuals to Code\nDrawing a state machine doesn\u2019t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn\u2019t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.\nWith the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements:\nfunction loginMachine(state, event) {\n switch (state) {\n case 'start':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n case 'loading':\n if (event === 'RESOLVE') {\n return 'success';\n } else if (event === 'REJECT') {\n return 'error';\n }\n break;\n case 'success':\n // Accept no further events\n break;\n case 'error':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n default:\n // This should never occur\n return undefined;\n }\n}\n\nconsole.log(loginMachine('start', 'SUBMIT'));\n// => 'loading'\nThis is fine (I suppose) but personally, I find it much easier to use objects:\nconst loginMachine = {\n initial: \"start\",\n states: {\n start: {\n on: { SUBMIT: 'loading' }\n },\n loading: {\n on: {\n REJECT: 'error',\n RESOLVE: 'success'\n }\n },\n error: {\n on: {\n SUBMIT: 'loading'\n }\n },\n success: {}\n }\n};\n\nfunction transition(state, event) {\n return loginMachine\n .states[state] // Look up the state\n .on[event] // Look up the next state based on the event\n || state; // If not found, return the current state\n}\n\nconsole.log(transition('start', 'SUBMIT'));\nAs you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here:\n\nA Common Language Between Designers and Developers\nAlthough finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. 
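For developers, the practical payoff is that the same object can drive the actual UI. Here is one minimal sketch of how that might look (a rough wrapper for illustration, not code from XState or any other library): a tiny interpreter that tracks the current state and quietly ignores events that have no transition defined for that state.\nfunction interpret(machine) {\n var state = machine.initial;\n return {\n getState: function () { return state; },\n send: function (event) {\n var transitions = machine.states[state].on || {}; // states like 'success' define no transitions\n state = transitions[event] || state; // unknown events leave the state unchanged\n return state;\n }\n };\n}\n\nvar form = interpret(loginMachine);\nform.send('SUBMIT'); // 'loading'\nform.send('SUBMIT'); // still 'loading': a second click has no effect\nform.send('RESOLVE'); // 'success'\nBecause any part of the UI can ask the interpreter for the current state, the submit button can simply disable itself whenever that state is loading.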
By designing a state machine visually and with code, designers and developers alike can:\n\nidentify all possible states, and potentially missing states\ndescribe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)\neliminate impossible states and identify states that are \u201cunreachable\u201d (have no entry transition) or \u201csunken\u201d (have no exit transition)\nadd features with full confidence of knowing what other states it might affect\nsimplify redundant states or complex user flows\ncreate test paths for almost every possible user flow, and easily identify edge cases\ncollaborate better by understanding the entire application model equally.\n\nNot a New Idea\nI\u2019m not the first to suggest that state machines can help bridge the gap between design and development.\n\nVince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines\nUser flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.\nIn 1999, Ian Horrocks wrote a book titled \u201cConstructing the User Interface with Statecharts\u201d, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today.\nMore than a decade earlier, David Harel published \u201cStatecharts: A Visual Formalism for Complex Systems\u201d, in which the statechart - an extended hierarchical state machine model - is born.\n\nState machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:\n\nVisualized modeling\nPrecise diagrams\nAutomatic code generation\nComprehensive test coverage\nAccommodation of late-breaking requirements changes\n\nMoving Forward\nIt\u2019s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:\n\nThe World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications\nThe Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling\nI gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.\nI have also been working on XState, which is a library for \u201cstate machines and statecharts for the modern web\u201d. 
You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).\n\nI\u2019m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.", "year": "2018", "author": "David Khourshid", "author_slug": "davidkhourshid", "published": "2018-12-12T00:00:00+00:00", "url": "https://24ways.org/2018/state-machines-in-user-interfaces/", "topic": "code"} {"rowid": 263, "title": "Securing Your Site like It\u2019s 1999", "contents": "Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.\nAs the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.\nWhat follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.\nBad input validation: Trusting anything the user sends you\nOur story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.\nOne such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong.\nOne day, a player discovered a hidden field in the forum\u2019s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum\u2019s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player\u2019s profile.\nNeedless to say, this idyllic online community descended into chaos. Users changed each other\u2019s passwords, deleted each other\u2019s messages, and attacked each-other under the cover of complete anonymity. What happened?\nThere aren\u2019t any official rules for developing software on the web. But if there were, my golden rule would be:\nNever trust user input. Ever.\nAlways ask yourself how users will send you data that isn\u2019t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.\nMake sure you validate user input to make sure it\u2019s of the correct type (e.g. string, number, JSON string) and that it\u2019s the length that you were expecting. 
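To make that concrete, here is a rough sketch of the kind of server-side check this implies (the field name and limits are only an example, not code from the forum in question):\nfunction validateProfileUpdate(body) {\n if (typeof body.displayName !== 'string') {\n return 'displayName must be a string';\n }\n if (body.displayName.length < 1 || body.displayName.length > 40) {\n return 'displayName must be between 1 and 40 characters';\n }\n return null; // no validation errors; the request can proceed\n}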
Don\u2019t forget that user input doesn\u2019t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.\nMake sure to check a user\u2019s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and to whose data they are allowed access to. For example, a newly-registered user should not be allowed to change the user profile of a web forum\u2019s owner.\nFinally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.\nSQL injection: Allowing the user to run their own database queries\nA long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I\u2019d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.\nOne day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn\u2019t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.\nIt turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?\n' OR '1'='1\nSQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this:\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Alice'\nAND PASSWORD='hunter2'\nThis query selects users from the database that match the username Alice and the password hunter2. If there is at least one user matching record, the user will be granted access. Let\u2019s see what happens when we use our magic password instead!\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Admin'\nAND PASSWORD='' OR '1'='1'\nDoes the password look like part of the query to you? That\u2019s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!\nSo how can you protect yourself from SQL injection?\nNever build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.JS has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize.\nExpert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!\nCross site request forgery: Getting other users to do your dirty work for you\nDo you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. 
My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.\nHave you ever clicked on a hyperlink, only to find something that you weren\u2019t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky\u2026\nLet\u2019s just say there are older and fouler things than Rick Astley in the dark places of the web.\nWhat if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it:\n\nDid you notice the source URL of the image? It\u2019s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user\u2019s name and shipping address, and you could ship yourself DVDs completely free of charge!\nThis attack is possible when websites unconditionally trust a user\u2019s session cookies without checking where HTTP requests come from.\nThe first check you can make is to verify that a request\u2019s origin and referer headers match the location of the website. These headers can\u2019t be programmatically set.\nAnother check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren\u2019t stored in cookies.\nYou can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.\nCross site scripting: Someone else\u2019s code running on your website\nIn 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.\nSamy enjoyed using MySpace which, at the time, was the world\u2019s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn\u2019t have to wade through photos of avocado toast back then\u2026\nSamy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject and
tags into his headline, but \n\nLaying out my page\nBefore starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website:\n\nI want absolute control over how people experience my design and don\u2019t want to allow it to stretch, so I first need a which limits the width of my layout to 800px. The align attribute will keep this
in the centre of someone\u2019s screen:\n
\n \n \n \n
[\u2026]
\nAlthough they were developed for displaying tabular information, the cells and rows which make up the element make it ideal for the precise implementation of a design. I need several tables\u2014often nested inside each other\u2014to implement my design. These include tables for a banner and three rows of content:\n
\n
[\u2026]
\n \n
\n
[\u2026]
\n \n \n [\u2026]
\n [\u2026]
\n\nThe width of the first table\u2014used for my banner\u2014is fixed to match the logo it contains. As I don\u2019t need borders, padding, or spacing between these cells, I use attributes to remove them:\n\n \n \n \n
\"Logo\"
\nThe next table\u2014which contains the largest image, introduction, and a call-to-action\u2014is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images:\n\n \n \n\nThe height and width of these \u201cshims\u201d or \u201cspacers\u201d is only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.\nFor the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:\n\n \u00a0\n [\u2026]\n\n\n\n \u00a0\n\nI use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table\u2014this time with a white background\u2014inside it:\n\n \n \n \n
\n \n[\u2026]\n
\n
\nAdding details\nTables are fabulous tools for laying out a page, but they\u2019re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my \u201cBuy the DVD\u201d call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button:\n\n \n \n\n \n\n \n \n
\n
\n Buy the DVD\n
\n
\nI use those same elements to add details to headlines and lists too. Adding a \u201cbullet\u201d to each item in a list needs only two additional table cells, a circular graphic, and a spacer:\n\n \n \n \n \n \n
\u00a0\u00a0Directed by John Hughes
\nImplementing a typographic hierarchy\nSo far I\u2019ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use elements to change the typeface from the browser\u2019s default to any font installed on someone\u2019s device:\nPlanes, Trains and Automobiles is a comedy film [\u2026]\nTo adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser\u2019s default. I use a size of 4 for this headline and 2 for the text which follows:\nSteve Martin\n\nAn American actor, comedian, writer, producer, and musician.\nWhen I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every element on all pages on my website.\nNB: I use as many
elements as needed to create space between headlines and paragraphs.\nView the final result (and especially the source.)\nMy modern day design for Planes, Trains and Automobiles.\nI can imagine many people reading this and thinking \u201cThis is terrible advice because we don\u2019t develop websites like this in 2018.\u201d That\u2019s true.\nWe have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:\nfont-variant-caps: titling-caps;\nfont-variant-ligatures: common-ligatures;\nfont-variant-numeric: oldstyle-nums;\nGrid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:\nbody {\n display: grid;\n grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;\n grid-template-rows: auto;\n grid-column-gap: 2vw;\n grid-row-gap: 1vh;\n}\nFlexbox has made it easy to develop flexible components such as navigation links:\nnav ul { display: flex; }\nnav li { flex: 1; }\nJust one line of CSS can create multiple columns of fluid type:\nmain { column-width: 12em; }\nCSS Shapes enable text to flow around irregular shapes including polygons:\n[src*=\"main-img\"] {\n float: left;\n shape-outside: polygon(\u2026);\n}\nToday, we wouldn\u2019t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:\n.btn {\n background: linear-gradient(#8B1212, #DD3A3C);\n border-radius: 1em;\n box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);\n}\nCSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we\u2019re certain we know how to design and develop products and websites better than we did at the end of 1998.\nStrange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That\u2019s why it\u2019s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today\u2014tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass\u2014will be any more relevant in the future than elements, frames, layout tables, and spacer images are today.\nI have no prediction for what the web will be like twenty years from now. However, I want to believe we\u2019ll build on what we\u2019ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won\u2019t be repeated.\n\nHead over to my website if you\u2019d like to read about how I\u2019d implement my design for \u2018Planes, Trains and Automobiles\u2019 today.", "year": "2018", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2018-12-23T00:00:00+00:00", "url": "https://24ways.org/2018/designing-your-site-like-its-1998/", "topic": "code"} {"rowid": 259, "title": "Designing Your Future", "contents": "I\u2019ve had the pleasure of working for a variety of clients \u2013 both large and small \u2013 over the last 25 years. 
In addition to my work as a design consultant, I\u2019ve worked as an educator, leading the Interaction Design team at Belfast School of Art, for the last 15 years.\nIn July, 2018 \u2013 frustrated with formal education, not least the ever-present hand of \u2018austerity\u2019 that has ravaged universities in the UK for almost a decade \u2013 I formally reduced my teaching commitment, moving from a full-time role to a half-time role.\nMaking the move from a (healthy!) monthly salary towards a position as a freelance consultant is not without its challenges: one month your salary\u2019s arriving in your bank account (and promptly disappearing to pay all of your bills); the next month, that salary\u2019s been drastically reduced. That can be a shock to the system.\nIn this article, I\u2019ll explore the challenges encountered when taking a life-changing leap of faith. To help you confront \u2018the fear\u2019 \u2013 the nervousness, the sleepless nights and the ever-present worry about paying the bills \u2013 I\u2019ll provide a set of tools that will enable you to take a leap of faith and pursue what deep down drives you.\nIn short: I\u2019ll bare my soul and share everything I\u2019m currently working on to \u2013 once and for all \u2013 make a final bid for freedom.\nThis isn\u2019t easy. I\u2019m sharing my innermost hopes and aspirations, and I might open myself up to ridicule, but I believe that by doing so, I might help others, by providing them with tools to help them make their own leap of faith.\nThe power of visualisation\nAs designers we have skills that we use day in, day out to imagine future possibilities, which we then give form. In our day-to-day work, we use those abilities to design products and services, but I also believe we can use those skills to design something every bit as important: ourselves.\nIn this article I\u2019ll explore three tools that you can use to design your future:\n\nProduct DNA\nArtefacts From the Future\nTomorrow Clients\n\nEach of these tools is designed to help you visualise your future. By giving that future form, and providing a concrete goal to aim for, you put the pieces in place to make that future a reality.\nBrian Eno \u2013 the noted musician, producer and thinker \u2013 states, \u201cHumans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds.\u201d Eno helpfully provides a powerful example:\n\nWhen Martin Luther King said, \u201cI have a dream,\u201d he was inviting others to dream that dream with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it.\nThe dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real.\n\nWhen you imagine your future \u2013 designing an alternate, imagined reality in your mind \u2013 you begin the process of making that future real.\nProduct DNA\nThe first tool, which I use regularly \u2013 for myself and for client work \u2013 is a tool called Product DNA. The intention of this tool is to identify beacons from which you can learn, helping you to visualise your future.\nWe all have heroes \u2013 individuals or organisations \u2013 that we look up to. Ask yourself, \u201cWho are your heroes?\u201d If you had to pick three, who would they be and what could you learn from them? 
(You probably have more than three, but distilling down to three is an exercise in itself.)\nEarlier this year, when I was putting the pieces in place for a change in career direction, I started with my heroes. I chose three individuals that inspired me:\n\nAlan Moore: the author of \u2018Do Design: Why Beauty is Key to Everything\u2019;\nMark Shayler: the founder of Ape, a strategic consultancy; and\nSeth Godin: a writer and educator I\u2019ve admired and followed for many years.\n\nLooking at each of these individuals, I \u2018borrowed\u2019 a little DNA from each of them. That DNA helped me to paint a picture of the kind of work I wanted to do and the direction I wanted to travel.\n\nMoore\u2019s book - \u2018Do Design\u2019 \u2013 had a powerful influence on me, but the primary inspiration I drew from him was the sense of gravitas he conveyed in his work. Moore\u2019s mission is an important one and he conveys that with an appropriate weight of expression.\nShayler\u2019s work appealed to me for its focus on equipping big businesses with a startup mindset. As he puts it: \u201cI believe that you can do the things that you do better.\u201d That sense \u2013 of helping others to be their best selves \u2013 appealed to me.\nFinally, the words Godin uses to describe himself \u2013 \u201cAn Author, Entrepreneur and Most of All, a Teacher\u201d \u2013 resonated with me. The way he positions himself, as, \u201cmost of all, a teacher,\u201d gave me the belief I needed that I could work as an educator, but beyond the ivory tower of academia.\nI\u2019ve been exploring each of these individuals in depth, learning from them and applying what I learn to my practice. They don\u2019t all know it, but they are all \u2018mentors from afar\u2019.\nIn a moment of serendipity \u2013 and largely, I believe, because I\u2019d used this tool to explore his work \u2013 I was recently invited by Alan Moore to help him develop a leadership programme built around his book.\nThe key lesson here is that not only has this exercise helped me to design my future and give it tangible form, it\u2019s also led to a fantastic opportunity to work with Alan Moore, a thinker who I respect greatly.\nArtefacts From the Future\nThe second tool, which I also use regularly, is a tool called \u2018Artefacts From the Future\u2019. These artefacts \u2013 especially when designed as \u2018finished\u2019 pieces \u2013 are useful for creating provocations to help you see the future more clearly.\n\u2018Artefacts From the Future\u2019 can take many forms: they might be imagined magazine articles, news items, or other manifestations of success. By imagining these end points and giving them form, you clarify your goals, establishing something concrete to aim for.\nEarlier this year I revisited this tool to create a provocation for myself. I\u2019d just finished Alla Kholmatova\u2019s excellent book on \u2018Design Systems\u2019, which I would recommend highly. The book wasn\u2019t just filled with valuable insights, it was also beautifully designed.\nOnce I\u2019d finished reading Kholmatova\u2019s book, I started thinking: \u201cPerhaps it\u2019s time for me to write a new book?\u201d Using the magic of \u2018Inspect Element\u2019, I created a fictitious page for a new book I wanted to write: \u2018Designing Delightful Experiences\u2019.\nI wrote a description for the book, considering how I\u2019d pitch it.\n\nThis imagined page was just what I needed to paint a picture in my mind of a possible new book. 
I contacted the team at Smashing Magazine and pitched the idea to them. I\u2019m happy to say that I\u2019m now working on that book, which is due to be published in 2019.\nWithout this fictional promotional page from the future, the book would have remained as an idea \u2013 loosely defined \u2013 rolling around my mind. By spending some time, turning that idea into something \u2018real\u2019, I had everything I needed to tell the story of the book, sharing it with the publishing team at Smashing Magazine.\nOf course, they could have politely informed me that they weren\u2019t interested, but I\u2019d have lost nothing \u2013 truly \u2013 in the process.\nAs designers, creating these imaginary \u2018Artefacts From the Future\u2019 is firmly within our grasp. All we need to do is let go a little and allow our imaginations to wander.\nIn my experience, working with clients and \u2013 to a lesser extent, students \u2013 it\u2019s the \u2018letting go\u2019 part that\u2019s the hard part. It can be difficult to let down your guard and share a weighty goal, but I\u2019d encourage you to do so. At the end of the day, you have nothing to lose.\nThe key lesson here is that your \u2018Artefacts From the Future\u2019 will focus your mind. They\u2019ll transform your unformed ideas into \u2018tangible evidence\u2019 of future possibilities, which you can use as discussion points and provocations, helping you to shape your future reality.\nTomorrow Clients\nThe third tool, which I developed more recently, is a tool called \u2018Tomorrow Clients\u2019. This tool is designed to help you identify a list of clients that you aspire to work with.\nThe goal is to pinpoint who you would like to work with \u2013 in an ideal world \u2013 and define how you\u2019d position yourself to win them over. Again, this involves \u2018letting go\u2019 and allowing your mind to imagine the possibilities, asking, \u201cWhat if\u2026?\u201d\nBefore I embarked upon the design of my new website, I put together a \u2018soul searching\u2019 document that acted as a focal point for my thinking. I contacted a number of designers for a second opinion to see if my thinking was sound.\nOne of my graduates \u2013 Chris Armstrong, the founder of Niice \u2013 replied with the following: \u201cMight it be useful to consider five to ten companies you\u2019d love to work for, and consider how you\u2019d pitch yourself to them?\u201d\nThis was just the provocation I needed. To add a little focus, I reduced the list to three, asking: \u201cWho would my top three clients be?\u201d\n\nBy distilling the list down I focused on who I\u2019d like to work for and how I\u2019d position myself to entice them to work with me. My list included: IDEO, Adobe and IBM. All are companies I admire and I believed each would be interesting to work for.\nThis exercise might \u2013 on the surface \u2013 appear a little like indulging in fantasy, but I believe it helps you to clarify exactly what it is you are good at and, just as importantly, put that in to words.\nFor each company, I wrote a short pitch outlining why I admired them and what I thought I could add to their already existing skillset.\nFocusing first on Adobe, I suggested establishing an emphasis on educational resources, designed to help those using Adobe\u2019s creative tools to get the most out of them.\nA few weeks ago, I signed a contract with the team working on Adobe XD to create a series of \u2018capsule courses\u2019, focused on UX design. 
The first of these courses \u2013 exploring UI design \u2013 will be out in 2019.\nI believe that Armstrong\u2019s provocation \u2013 asking me to shift my focus from clients I have worked for in the past to clients I aspire to work for in the future \u2013 made all the difference.\nThe key lesson here is that this exercise encouraged me to raise the bar and look to the future, not the past. In short, it enabled me to proactively design my future.\nIn closing\u2026\nI hope these three tools will prove a welcome addition to your toolset. I use them when working with clients, I also use them when working with myself.\nI passionately believe that you can design your future. I also firmly believe that you\u2019re more likely to make that future a reality if you put some thought into defining what it looks like.\nAs I say to my students and the clients I work with: It\u2019s not enough to want to be a success, the word \u2018success\u2019 is too vague to be a destination. A far better approach is to define exactly what success looks like.\nThe secret is to visualise your future in as much detail as possible. With that future vision in hand as a map, you give yourself something tangible to translate into a reality.", "year": "2018", "author": "Christopher Murphy", "author_slug": "christophermurphy", "published": "2018-12-15T00:00:00+00:00", "url": "https://24ways.org/2018/designing-your-future/", "topic": "process"} {"rowid": 253, "title": "Clip Paths Know No Bounds", "contents": "CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance.\nThe basics of a clip path\nBefore we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everywhere in the element that is not within the bounds of our shape will be visually removed.\nUsing the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely\u2019s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element\u2019s dimensions.\nSee the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen.\n\nSo for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines.\nclip-path: polygon(\n 30% 0%,\n 70% 0%,\n 100% 30%,\n 100% 70%,\n 70% 100%,\n 30% 100%,\n 0% 70%,\n 0% 30%\n);\nA shape with less vertices than the eye can see\nIt\u2019s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. 
However, we gain some flexibility by thinking outside the box \u2014 or more specifically when we think outside the range of 0% - 100%.\nOur element\u2019s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element.\nSee the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen.\n\nBy going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons.\nOur earlier octagon can similarly be made with only four points.\nSee the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen.\n\nMultiple shapes, one clip path\nWe can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon().\nSee the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen.\n\nDepending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element\u2019s box, we can draw extra lines to help us get where we need to go next as needed.\nIt can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles.\nSee the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen.\n\nDifferent shapes with fill rules\nA polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification \u2014 the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.\nSee the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen.\n\nAs lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path\u2019s lines, the point is considered outside, and if it crosses an odd number the point is inside.\nOrder of operations\nIt is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more.\nThese compositing effects are applied in the order:\n\nCSS Filters (e.g. filter: blur(2px))\nClipping (e.g. what this article is about)\nMasking (Clipping\u2019s cousin)\nBlend Modes (e.g. mix-blend-mode: multiply)\nOpacity\n\nThis means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element\u2019s box edges.\nSee the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen.\n\nIf we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. 
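A minimal sketch of that wrapper approach might look like this (the class names, sizes, and blur radius are illustrative rather than taken from the demo):

<div class="blur-wrapper">
  <div class="star"></div>
</div>

.blur-wrapper {
  /* the blur is applied to the parent, after the child has been clipped */
  filter: blur(6px);
}
.star {
  width: 200px;
  height: 200px;
  background: #DD3A3C;
  /* a five-pointed star: ten x, y pairs, one per vertex */
  clip-path: polygon(50% 0%, 61% 35%, 98% 35%, 68% 57%, 79% 91%, 50% 70%, 21% 91%, 32% 57%, 2% 35%, 39% 35%);
}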
The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally.\nRevealing content with animation\nCSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points need to be the same for each keyframe, as well as the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values. \nSee the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen.\n\nDon\u2019t keep CSS Shapes in a box\nClip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible which makes this property fairly powerful.", "year": "2018", "author": "Dan Wilson", "author_slug": "danwilson", "published": "2018-12-20T00:00:00+00:00", "url": "https://24ways.org/2018/clip-paths-know-no-bounds/", "topic": "code"} {"rowid": 250, "title": "Build up Your Leadership Toolbox", "contents": "Leadership. It can mean different things to different people and vary widely between companies. Leadership is more than just a job title. You won\u2019t wake up one day and magically be imbued with all you need to do a good job at leading. If we don\u2019t have a shared understanding of what a Good Leader looks like, how can we work on ourselves towards becoming one? How do you know if you even could be a leader? Can you be a leader without the title?\nWhat even is it?\nI got very frustrated way back in my days as a senior developer when I was given \u201cadvice\u201d about my leadership style; at the time I didn\u2019t have the words to describe the styles and ways in which I was leading to be able to push back. I heard these phrases a lot:\n\nyou need to step up\nyou need to take charge\nyou need to grab the bull by its horns\nyou need to have thicker skin\nyou need to just be more confident in your leading\nyou need to just make it happen\n\nI appreciate some people\u2019s intent was to help me, but honestly it did my head in. WAT?! What did any of this even mean. How exactly do you \u201cstep up\u201d and how are you evaluating what step I\u2019m on? I am confident, what does being even more confident help achieve with leading? Does that not lead you down the path of becoming an arrogant door knob? >___<\nWhile there is no One True Way to Lead, there is an overwhelming pattern of people in positions of leadership within tech industry being held by men. It felt a lot like what people were fundamentally telling me to do was to be more like an extroverted man. I was being asked to demonstrate more masculine associated qualities (#notallmen). I\u2019ll leave the gendered nature of leadership qualities as an exercise in googling for the reader.\nI\u2019ve never had a good manager and at the time had no one else to ask for help, so I turned to my trusted best friends. Books.\nI <3 books\nI refused to buy into that style of leadership as being the only accepted way to be. 
There had to be room for different kinds of people to be leaders and have different leadership styles.\nThere are three books that changed me forever in how I approach and think about leadership.\n\nPrimal leadership, by Daniel Goleman, Richard Boyatzis and Annie McKee\nQuiet, by Susan Cain\nDaring Greatly - How the Courage to be Vulnerable transforms the way we live, love, parent and Lead, by Bren\u00e9 Brown\n\nI recommend you read them. Ignore the slightly cheesy titles and trust me, just read them.\nPrimal leadership helped to give me the vocabulary and understanding I needed about the different styles of leadership there are, how and when to apply them.\nQuiet really helped me realise how much I was being undervalued and misunderstood in an extroverted world. If I\u2019d had managers or support from someone who valued introverts\u2019 strengths, things would\u2019ve been very different. I would\u2019ve had someone telling others to step down and shut up for a change rather than pushing on me to step up and talk louder over everyone else. It\u2019s OK to be different and needing different things like time to recharge or time to think before speaking. It also improved my ability to work alongside my more extroverted colleagues by giving me an understanding of their world so I could communicate my needs in a language they would get.\nBren\u00e9 Brown\u2019s book I am forever in debt to. Her work gave me the courage to stand up and be my own kind of leader. Even when no-one around me looked or sounded like me, I found my own voice.\nIt takes great courage to be vulnerable and open about what you can and can\u2019t do. Open about your mistakes. Vocalise what you don\u2019t know and asking for help. In some lights, these are seen as weaknesses and many have tried to use them against me, to pull me down and exclude me for talking about them. Dear reader, it did not work, they failed. The truth is, they are my greatest strengths. The privileges I have, I use for good as best and often as I can.\nJust like gender, leadership is not binary\nIf you google for what a leader is, you\u2019ll get many different answers. I personally think Bren\u00e9\u2019s version is the best as it is one that can apply to a wider range of people, irrespective of job title or function.\n\nI define a leader as anyone who takes responsibility for finding potential in people and processes, and who has the courage to develop that potential.\nBren\u00e9 Brown\n\nBeing a leader isn\u2019t about being the loudest in a room, having veto power, talking over people or ignoring everyone else\u2019s ideas. It\u2019s not about \u201ctelling people what to do\u201d. It\u2019s not about an elevated status that you\u2019re better than others. Nor is it about creating a hand wavey far away vision and forgetting to help support people in how to get there.\nBeing a Good Leader is about having a toolbox of leadership styles and skills to choose from depending on the situation. Knowing how and when to apply them is part of the challenge and difficulty in becoming good at it. It is something you will have to continuously work on, forever. 
There is no Done.\nLeaders are Made, they are not Born.\nBe flexible in your leadership style\n\nTypically, the best, most effective leaders act according to one or more of six distinct approaches to leadership and skillfully switch between the various styles depending on the situation.\n\nFrom the book, Primal Leadership, it gives a summary of 6 leadership styles which are:\n\nVisionary\nCoaching\nAffiliative\nDemocratic\nPacesetting\nCommanding\n\nVisionary, moves people toward a shared dream or future. When change requires a new vision or a clear direction is needed, using a visionary style of leadership helps communicate that picture. By learning how to effectively communicate a story you can help people to move in that direction and give them clarity on why they\u2019re doing what they\u2019re doing.\nCoaching, is about connecting what a person wants and helping to align that with organisation\u2019s goals. It\u2019s a balance of helping someone improve their performance to fulfil their role and their potential beyond.\nAffiliative, creates harmony by connecting people to each other and requires effective communication to aid facilitation of those connections. This style can be very impactful in healing rifts in a team or to help strengthen connections within and across teams. During stressful times having a positive and supportive connection to those around us really helps see us through those times.\nDemocratic, values people\u2019s input and gets commitment through participation. Taking this approach can help build buy-in or consensus and is a great way to get valuable input from people. The tricky part about this style, I find, is that when I gather and listen to everyone\u2019s input, that doesn\u2019t mean the end result is that I have to please everyone.\nThe next two, sadly, are the ones wielded far too often and have the greatest negative impact. It\u2019s where the \u201ctelling people what to do\u201d comes from. When used sparingly and in the right situations, they can be a force for good. However, they must not be your default style.\nPacesetting, when used well, it is about meeting challenging and exciting goals. When you need to get high-quality results from a motivated and well performing team, this can be great to help achieve real focus and drive. Sadly it is so overused and poorly executed it becomes the \u201cjust make it happen\u201d and driver of unrealistic workload which contributes to burnout.\nCommanding, when used appropriately soothes fears by giving clear direction in an emergency or crisis. When shit is on fire, you want to know that your leadership ability can help kick-start a turnaround and bring clarity. Then switch to another style. This approach is also required when dealing with problematic employees or unacceptable behaviour.\nCommanding style seems to be what a lot of people think being a leader is, taking control and commanding a situation. It should be used sparingly and only when absolutely necessary.\nBe responsible for the power you wield\nIf reading through those you find yourself feeling a bit guilty that maybe you overuse some of the styles, or overwhelmed that you haven\u2019t got all of these down and ready to use in your toolbox\u2026\nTake a breath. Take responsibility. Take action.\nNo one is perfect, and it\u2019s OK. You can start right now working on those. You can have a conversation with your team and try being open about how you\u2019re going to try some different styles. 
You can be vulnerable and own up to mistakes you might\u2019ve made, followed by an apology. You can order those books and read them. Those books will give you more examples of those leadership styles and help you to find your own voice.\nThe impact you can have on the lives of those around you when you\u2019re a leader is huge. You can be that positive impact, helping to discover and develop potential in someone.\n\nTime spent understanding people is never wasted.\nCate Huston.\n\nI believe in you. <3 Mazz.", "year": "2018", "author": "Mazz Mosley", "author_slug": "mazzmosley", "published": "2018-12-10T00:00:00+00:00", "url": "https://24ways.org/2018/build-up-your-leadership-toolbox/", "topic": "business"} {"rowid": 262, "title": "Be the Villain", "contents": "Inclusive Design is the practice of making products and services accessible to, and usable by as many people as reasonably possible without the need for specialized accommodations. The practice was popularized by author and User Experience Design Director Kat Holmes. If getting you to discover her work is the only thing this article succeeds in doing then I\u2019ll consider it a success.\nAs a framework for creating resilient solutions to problems, Inclusive Design is incredible. However, the aimless idealistic aspirations many of its newer practitioners default to can oftentimes run into trouble. Without outlining concrete, actionable outcomes that are then vetted by the people you intend to serve, there is the potential to do more harm than good. \nWhen designing, you take a user flow and make sure it can\u2019t be broken. Ensuring that if something is removed, it can be restored. Or that something editable can also be updated at a later date\u2014you know, that kind of thing. What we want to do is avoid surprises. Much like a water slide with a section of pipe missing, a broken flow forcibly ejects a user, to great surprise and frustration. Interactions within a user flow also have to be small enough to be self-contained, so as to avoid creating a none pizza with left beef scenario.\nLately, I\u2019ve been thinking about how to expand on this practice. Watertight user flows make for a great immediate experience, but it\u2019s all too easy to miss the forest for the trees when you\u2019re a product designer focused on cranking out features. \nWhat I\u2019m proposing is that, while trying to envision how a user flow could be broken, you also think about how it could be subverted.
In addition to preventing the removal of a section of water slide, you also keep someone from mugging the user when they shoot out the end.\nIf you pay attention, you\u2019ll start to notice this subversion with increasing frequency:\n\nDomestic abusers using internet-controlled devices to spy on and control their partner.\nZealots tanking a business\u2019 rating on Google because its owners spoke out against unchecked gun violence.\nForcing people to choose between TV or stalking because the messaging center portion of a cable provider\u2019s entertainment package lacks muting or blocking features.\nWhite supremacists tricking celebrities into endorsing anti-Semitic conspiracy theories.\nFacebook repeatedly allowing housing, credit, and employment advertisers to discriminate against users by their race, ability, and religion.\nWhite supremacists also using a video game chat service as a recruiting tool.\nThe unchecked harassment of minors on Instagram.\nSwatting.\n\nIf I were to guess why we haven\u2019t heard more about this problem, I\u2019d say that optimistically, people have settled out of court. Pessimistically, it\u2019s most likely because we ignore, dismiss, downplay, and suppress those who try to bring it to our attention. \nSubverted design isn\u2019t the practice of employing Dark Patterns to achieve your business goals. If you are not familiar with the term, Dark Patterns are the use of cheap user interface tricks and psychological manipulation to get users to act against their own best interests. User Experience consultant Chris Nodder wrote Evil By Design, a fantastic book that unpacks how to detect and think about them, if you\u2019re interested in this kind of thing\nSubverted design also isn\u2019t beholden design, or simple lack of attention. This phenomenon isn\u2019t even necessarily premeditated. I think it arises from na\u00efve (or willfully ignorant) design decisions being executed at a historically unprecedented pace and scale. These decisions are then preyed on by the shrewd and opportunistic, used to control and inflict harm on the undeserving. Have system, will game.\nThis is worth discussing. As the field of design continues to industrialize empathy, it also continues to ignore the very established practice of threat modeling. Most times, framing user experience in terms of how to best funnel people into a service comes with an implicit agreement that the larger system that necessitates the service is worth supporting. \nTo achieve success in the eyes of their superiors, designers may turn to emotional empathy exercises. By projecting themselves into the perceived surface-level experiences of others, they play-act at understanding how to nudge their targeted demographics into a conversion funnel. This roleplaying exercise has the effect of scoping concerns to the immediate, while simultaneously reinforcing the idea of engagement at all cost within the identified demographic.\nThe thing is, pure engagement leaves the door wide open for bad actors. Even within the scope of a limited population, the assumption that everyone entering into the funnel is acting with good intentions is a poor one. Security researchers, network administrators, and other professionals who practice threat modeling understand that the opposite is true. 
By preventing everyone save for well-intentioned users from operating a system within the parameters you set for them, you intentionally limit the scope of abuse that can be enacted.\nDon\u2019t get me wrong: being able to escort as many users as you can to the happy path is a foundational skill. But we should also be having uncomfortable conversations about why something unthinkable may in fact not be. \nThey\u2019re not going to be fun conversations. It\u2019s not going to be easy convincing others that these aren\u2019t paranoid delusions best tucked out of sight in the darkest, dustiest corner of the backlog. Realistically, talking about it may even harm your career.\nBut consider the alternative. The controlled environment of the hypothetical allows us to explore these issues without propagating harm. Better to be viewed as the office\u2019s resident villain than to have to live with something like this:\n\nIf the past few years have taught us anything, it\u2019s that the choices we make\u2014or avoid making\u2014have consequences. Design has been doing a lot of growing up as of late, including waking up to the idea that technology isn\u2019t neutral. \nYou\u2019re going to have to start thinking the way a monster does\u2014if you can imagine it, chances are someone else can as well. To get into this kind of mindset, inverting the Inclusive Design Principles is a good place to start:\n\nProviding a comparable experience becomes forcing a single path.\nConsidering situation becomes ignoring circumstance.\nBeing consistent becomes acting capriciously.\nGiving control becomes removing autonomy. \nOffering choice becomes limiting options. \nPrioritizing content becomes obfuscating purpose.\nAdding value becomes filling with gibberish. \n\nCombined, these inverted principles start to paint a picture of something we\u2019re all familiar with: a half-baked, unscrupulous service that will jump at the chance to take advantage of you. This environment is also a perfect breeding ground for spawning bad actors.\nThese kinds of services limit you in the ways you can interact with them. They kick you out or lock you in if you don\u2019t meet their unnamed criteria. They force you to parse layout, prices, and policies that change without notification or justification. Their controls operate in ways that are unexpected and may shift throughout the experience. Their terms are dictated to you, gaslighting you to extract profit. Heaps of jargon and flashy, unnecessary features are showered on you to distract from larger structural and conceptual flaws.\nSo, how else can we go about preventing subverted design? Marli Mesibov, Content Strategist and Managing Editor of UX Booth, wrote a brilliant article about how to use Dark Patterns for good\u2014perhaps the most important takeaway being admitting you have a problem in the first place. \nAnother exercise is asking the question, \u201cWhat is the evil version of this feature?\u201d Ask it during the ideation phase. Ask it as part of acceptance criteria. Heck, ask it over lunch. I honestly don\u2019t care when, so long as the question is actually raised. \nIn keeping with the spirit of this article, we can also expand on this line of thinking. Author, scientist, feminist, and pacifist Ursula Franklin urges us to ask, \u201cWhose benefits? Whose risks?\u201d instead of \u201cWhat benefits? What risks?\u201d in her talk, When the Seven Deadly Sins Became the Seven Cardinal Virtues. 
Inspired by the talk, Ethan Marcotte discusses how this relates to the web platform in his powerful post, Seven into seven.\nFew things in this world are intrinsically altruistic or good\u2014it\u2019s just the nature of the beast. However, that doesn\u2019t mean we have to stand idly by when harm is created. If we can add terms like \u201canti-pattern\u201d to our professional vocabulary, we can certainly also incorporate phrases like \u201cabuser flow.\u201d \nDesign finally got a seat at the table. We should use this newfound privilege wisely. Listen to women. Listen to minorities, listen to immigrants, the unhoused, the less economically advantaged, and the less technologically-literate. Listen to the underrepresented and the underprivileged.\nSubverted design is a huge problem, likely one that will never completely go away. However, the more of us who put the hard work into being the villain, the more we can lessen the scope of its impact.", "year": "2018", "author": "Eric Bailey", "author_slug": "ericbailey", "published": "2018-12-06T00:00:00+00:00", "url": "https://24ways.org/2018/be-the-villain/", "topic": "ux"} {"rowid": 205, "title": "Why Design Systems Fail", "contents": "Design systems are so hot right now, and for good reason. They promote a modular approach to building a product, and ensure organizational unity and stability via reusable code snippets and utility styles. They make prototyping a breeze, and provide a common language for both designers and developers.\nA design system is a culmination of several individual components, which can include any or all of the following (and more):\n\nStyle guide or visual pattern library\nDesign tooling (e.g. Sketch Library)\nComponent library (where the components live in code)\nCode usage guidelines and documentation\nDesign usage documentation\nVoice and tone guideline\nAnimation language guideline\n\nDesign systems are standalone (internal or external) products, and have proven to be very effective means of design-driven development. However, in order for a design system to succeed, everyone needs to get on board.\nI\u2019d like to go over a few considerations to ensure design system success and what could hinder that success.\nOrganizational Support\nPut simply, any product, including internal products, needs support. Something as cross-functional as a design system, which spans every vertical project team, needs support from the top and bottom levels of your organization. \nWhat I mean by that is that there needs to be top-level support from project managers up through VP\u2019s to see the value of a design system, to provide resources for its implementation, and advocate for its use company-wide. This is especially important in companies where such systems are being put in place on top of existing, crufty codebases, because it may mean there needs to be some time and effort put in the calendar for refactoring work.\nSupport from the bottom-up means that designers and engineers of all levels also need to support this system and feel responsibility for it. A design system is an organization\u2019s product, and everyone should feel confident contributing to it. If your design system supports external clients as well (such as contractors), they too can become valuable teammates.\nA design system needs support and love to be nurtured and to grow. It also needs investment.\nInvestment\nTo have a successful design system, you need to make a continuous effort to invest resources into it. 
I like to compare this to working out.\nYou can work out intensely for 3 months and see some gains, but once you stop working out, those will slowly fade away. If you continue to work out, even if its less often than the initial investment, you\u2019ll see yourself maintaining your fitness level at a much higher rate than if you stopped completely. \nIf you invest once in a design system (say, 3 months of overhauling it) but neglect to keep it up, you\u2019ll face the same situation. You\u2019ll see immediate impact, but that impact will fade as it gets out of sync with new designs and you\u2019ll end up with strange, floating bits of code that nobody is using. Your engineers will stop using it as the patterns become outdated, and then you\u2019ll find yourself in for another round of large investment (while dreading going through the process since its fallen so far out of shape).\n\nWith design systems, small incremental investments over time lead to big gains overall.\n\nWith this point, I also want to note that because of how they scale, design systems can really make a large impact across the platform, making it extremely important to really invest in things like accessibility and solid architecture from the start. You don\u2019t want to scale a brittle system that\u2019s not easy to use.\nTake care of your design systems, and keep working on them to ensure their effectiveness. One way to ensure this is to have a dedicated team working on this design system, managing tickets and styling updates that trickle out to the rest of your company.\nResponsibility\nWith some kind of team to act as an owner of a design system, whether it be the design team, engineering team, or a new team\nmade of both designers and engineers (the best option), your company is more likely to keep a relevant, up-to-date system that doesn\u2019t break.\nThis team is responsible for a few things:\n\nHelping others get set up on the system (support)\nDesigning and building components (development)\nAdvocating for overall UI consistency and adherence (evangelism)\nCreating a rollout plan and update system (product management)\n\nAs you can see, these are a lot of roles, so it helps to have multiple people on this team, at least part of the time, if you can. One thing I\u2019ve found to be effective in the past is to hold office hours for coworkers to book slots within to help them get set up and to answer any questions about using the system. Having an open Slack channel also helps for this sort of thing, as well as for bringing up bugs/issues/ideas and being an channel for announcements like new releases.\nCommunication\nOnce you have resources and a plan to invest in a design system, its really important that this person or team acts as a bridge between design and engineering. Continuous communication is really important here, and the way you communicate is even more important.\nRemember that nobody wants to be told what to do or prescribed a solution, especially developers, who are used to a lot of autonomy (usually they get to choose their own tools at work). Despite how much control the other engineers have on the process, they need to feel like they have input, and feel heard.\nThis can be challenging, especially since ultimately, some party needs to be making a final decision on direction and execution. 
Because it\u2019s a hard balance to strike, having open communication channels and being as transparent as possible as early as possible is a good start.\nBuy-in\nFor all of the reasons we\u2019ve just looked over, good communication is really important for getting buy-in from your users (the engineers and designers), as well as from product management.\n\nBuilding and maintaining a design system is surprisingly a lot of people-ops work.\n\nTo get buy-in where you don\u2019t have a previous concensus that this is the right direction to take, you need to make people want to use your design system. A really good way to get someone to want to use a product is to make it the path of least resistance, to show its value.\nGather examples and usage wins, because showing is much more powerful than telling.\nIf you can, have developers use your product in a low-stakes situation where it provides clear benefits. Hackathons are a great place to debut your design system. Having a hackathon internally at DigitalOcean was a perfect opportunity to:\n\nEvangelize for the design system\nSee what people were using the component library for and what they were struggling with (excellent user testing there)\nGet user feedback afterward on how to improve it in future iterations\nLet people experience the benefits of using it themselves\n\nThese kinds of moments, where people explore on their own are where you can really get people on your side and using the design system, because they can get their hands on it and draw their own conclusions (and if they don\u2019t love it \u2014 listen to them on how to improve it so that they do). We don\u2019t always get so lucky as to have this sort of instantaneous user feedback from our direct users.\nArchitecture\nI briefly mentioned the scalable nature of design systems. This is exactly why it\u2019s important to develop a solid architecture early on in the process. Build your design system with growth and scalability in mind. What happens if your company acquires a new product? What happens when it develops a new market segment? How can you make sure there\u2019s room for customization and growth?\nA few things we\u2019ve found helpful include:\nNamespacing\nUse namespacing to ensure that the system doesn\u2019t collide with existing styles if applying it to an existing codebase. This means prefixing every element in the system to indicate that this class is a part of the design system. To ensure that you don\u2019t break parts of the existing build (which may have styled base elements), you can namespace the entire system inside of a parent class. Sass makes this easy with its nested structure. \nThis kind of namespacing wouldn\u2019t be necessary per se on new projects, but it is definitely useful when integrating new and old styles.\nSemantic Versioning\nI\u2019ve used Semantic Versioning on all of the design systems I\u2019ve ever worked on. Semantic versioning uses a system of Major.Minor.Patch for any updates. You can then tag released on Github with versioned updates and ensure that someone\u2019s app won\u2019t break unintentionally when there is an update, if they are anchored to a specific version (which they should be).\nWe also use this semantic versioning as a link with our design system assets at DigitalOcean (i.e. 
Sketch library) to keep them in sync, with the same version number corresponding to both Sketch and code.\nOur design system is served as a node module, but is also provided as a series of built assets using our CDN for quick prototyping and one-off projects. For these built assets, we run a deploy script that automatically creates folders for each release, as well as a latest folder if someone wanted the always-up-to-date version of the design system. \nSo, semantic versioning for the system I\u2019m currently building is what links our design system node module assets, sketch library assets, and statically built file assets.\nThe reason we have so many ways of consuming our design system is to make adoption easier and to reduce friction.\nFriction\nA while ago, I posed the question of why design systems become outdated and unused, and a major conclusion I drew from the conversation was:\n\n\u201cIf it\u2019s harder for people to use than their current system, people just won\u2019t use it\u201d\n\nYou have to make your design system the path of least resistance, lowering cognitive overhead of development, not adding to it. This is vital. A design system is intended to make development much more efficient, enforce a consistent style across sites, and allow for the developer to not worry as much about small decisions like naming and HTML semantics. These are already sorted out for them, meaning they can focus on building product.\nBut if your design system is complicated and over-engineered, they may find it frustrating to use and go back to what they know, even if its not the best solution. If you\u2019re a Sass expert, and base your system on complex mixins and functions, you better hope your user (the developer) is also a Sass expert, or wants to learn. This is often not the case, however. You need to talk to your audience.\nWith the DigitalOcean design system, we provide a few options:\nOption 1\nUsers can implement the component library into a development environment and use Sass, select just the components they want to include, and extend the system using a hook-based system. This is the most performant and extensible output. Only the components that are called upon are included, and they can be easily extended using mixins.\nBut as noted earlier, not everyone wants to work this way (including Sass a dependency and potentially needing to set up a build system for it and learn a new syntax). There is also the user who just wants to throw a link onto their page and have it look nice, and thats where our versioned built assets come in.\nOption 2\nWith Option 2, users pull in links that are served via a CDN that contain JS, CSS, and our SVG icon library. The code is a bit bigger than the completely customized version, but often this isn\u2019t the aim when people are using Option 2.\nReducing friction for adoption should be a major goal of your design system rollout.\nConclusion\nHaving a design system is really beneficial to any product, especially as it grows. In order to have an effective system, it\u2019s important to primarily always keep your user in mind and garner support from your entire company. Once you have support and acceptance, this system will flourish and grow. Make sure someone is responsible for it, and make sure its built with a solid foundation from the start which will be carefully maintained toward the future. 
Good luck, and happy holidays!", "year": "2017", "author": "Una Kravets", "author_slug": "unakravets", "published": "2017-12-14T00:00:00+00:00", "url": "https://24ways.org/2017/why-design-systems-fail/", "topic": "process"} {"rowid": 193, "title": "Web Content Accessibility Guidelines\u2014for People Who Haven't Read Them", "contents": "I\u2019ve been a huge fan of the Web Content Accessibility Guidelines 2.0 since the World Wide Web Consortium (W3C) published them, nine years ago. I\u2019ve found them practical and future-proof, and I\u2019ve found that they can save a huge amount of time for designers and developers. You can apply them to anything that you can open in a browser. My favourite part is when I use the guidelines to make a website accessible, and then attend user-testing and see someone with a disability easily using that website.\nToday, the United Nations International Day of Persons with Disabilities, seems like a good time to re-read Laura Kalbag\u2019s explanation of why we should bother with accessibility. That should motivate you to devour this article.\nIf you haven\u2019t read the Web Content Accessibility Guidelines 2.0, you might find them a bit off-putting at first. The editors needed to create a single standard that countries around the world could refer to in legislation, and so some of the language in the guidelines reads like legalese. The editors also needed to future-proof the guidelines, and so some terminology\u2014such as \u201ctime-based media\u201d and \u201cprogrammatically determined\u201d\u2014can sound ambiguous. The guidelines can seem lengthy, too: printing the guidelines, the Understanding WCAG 2.0 document, and the Techniques for WCAG 2.0 document would take 1,200 printed pages.\nThis festive season, let\u2019s rip off that legalese and ambiguous terminology like wrapping paper, and see\u2014in a single article\u2014what gifts the Web Content Accessibility Guidelines 2.0 editors have bestowed upon us.\nCan your users perceive the information on your website?\nThe first guideline has criteria that help you prevent your users from asking \u201cWhat the **** is this thing here supposed to be?\u201d\n1.1.1 Text is the most accessible format for information. Screen readers\u2014such as the \u201cVoiceOver\u201d setting on your iPhone or the \u201cTalkBack\u201d app on your Android phone\u2014understand text better than any other format. The same applies for other assistive technology, such as translation apps and Braille displays. So, if you have anything on your webpage that\u2019s not text, you must add some text that gives your user the same information. You probably know how to do this already; for example:\n\nfor images in webpages, put some alternative text in an alt attribute to tell your user what the image conveys to the user;\nfor photos in tweets, add a description to make the images accessible;\nfor Instagram posts, write a caption that conveys the photo\u2019s information.\n\nThe alternative text should allow the user to get the same information as someone who can see the image. For websites that have too many images for someone to add alternative text to, consider how machine learning and Dynamically Generated Alt Text might\u2014might\u2014be appropriate.\nYou can probably think of a few exceptions where providing text to describe an image might not make sense. Remember I described these guidelines as \u201cpractical\u201d? 
They cover all those exceptions:\n\nUser interface controls such as buttons and text inputs must have names or labels to tell your user what they do.\nIf your webpage has video or audio (more about these later on!), you must\u2014at least\u2014have text to tell the user what they are.\nMaybe your webpage has a test where your user has to answer a question about an image or some audio, and alternative text would give away the answer. In that case, just describe the test in text so your users know what it is.\nIf your webpage features a work of art, tell your user the experience it evokes.\nIf you have to include a Captcha on your webpage\u2014and please avoid Captchas if at all possible, because some users cannot get past them\u2014you must include text to tell your user what it is, and make sure that it doesn\u2019t rely on only one sense, such as vision.\nIf you\u2019ve included something just as decoration, you must make sure that your user\u2019s assistive technology can ignore it. Again, you probably know how to do this. For example, you could use CSS instead of HTML to include decorative images, or you could add an empty alt attribute to the img element. (Please avoid that recent trend where developers add empty alt attributes to all images in a webpage just to make the HTML validate. You\u2019re better than that.)\n\n(Notice that the guidelines allow you to choose how to conform to them, with whatever technology you choose. To make your website conform to a guideline, you can either choose one of the techniques for WCAG 2.0 for that guideline or come up with your own. Choosing a tried-and-tested technique usually saves time!)\n1.2.1 If your website includes a podcast episode, speech, lecture, or any other recorded audio without video, you must include a transcription or some other text to give your user the same information. In a lot of cases, you might find this easier than you expect: professional transcription services can prove relatively inexpensive and fast, and sometimes a speaker or lecturer can provide the speech or lecture notes that they read out word-for-word. Just make sure that all your users can get the same information and the same results, whether they can hear the audio or not. For example, David Smith and Marco Arment always publish episode transcripts for their Under the Radar podcast. \nSimilarly, if your website includes recorded video without audio\u2014such as an animation or a promotional video\u2014you must either use text to detail what happens in the video or include an audio version. Again, this might work out easier then you perhaps fear: for example, you could check to see whether the animation started life as a list of instructions, or whether the promotional video conveys the same information as the \u201cAbout Us\u201d webpage. You want to make sure that all your users can get the same information and the same results, whether they can see that video or not.\n1.2.2 If your website includes recorded videos with audio, you must add captions to those videos for users who can\u2019t hear the audio. Professional transcription services can provide you with time-stamped text in caption formats that YouTube supports, such as .srt and .sbv. You can upload those to YouTube, so captions appear on your videos there. YouTube can auto-generate captions, but the quality varies from impressively accurate to comically inaccurate. 
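Whichever route you take, a time-stamped caption file is just plain text. A single cue in an .srt file looks something like this (the numbering, timings, and wording are invented purely for illustration):

1
00:00:01,000 --> 00:00:04,500
Hello, and welcome to episode one.

Each cue is a sequence number, a start and end time separated by an arrow, and the text to display between those times, with a blank line before the next cue.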
If you have a text version of what the people in the video said\u2014such as the speech that a politician read or the bedtime story that an actor read\u2014you can create a transcript file in .txt format, without timestamps. YouTube then creates captions for your video by synchronising that text to the audio in the video. If you host your own videos, you can ask a professional transcription service to give you .vtt files that you can add to a video element\u2019s track element\u2014or you can handcraft your own. (A quick aside: if your website has more videos than you can caption in a reasonable amount of time, prioritise the most popular videos, the most important videos, and the videos most relevant to people with disabilities. Then make sure your users know how to ask you to caption other videos as they encounter them.)\n1.2.3 If your website has recorded videos that have audio, you must add an \u201caudio description\u201d narration to the video to describe important visual details, or add text to the webpage to detail what happens in the video for users who cannot see the videos. (I like to add audio files from videos to my Huffduffer account so that I can listen to them while commuting.) Maybe your home page has a video where someone says, \u201cI\u2019d like to explain our new TPS reports\u201d while \u201cBill Lumbergh, division Vice President of Initech\u201d appears on the bottom of the screen. In that case, you should add an audio description to the video that announces \u201cBill Lumbergh, division Vice President of Initech\u201d, just before Bill starts speaking. As always, you can make life easier for yourself by considering all of your users, before the event: in this example, you could ask the speaker to begin by saying, \u201cI\u2019m Bill Lumbergh, division Vice President of Initech, and I\u2019d like to explain our new TPS reports\u201d\u2014so you won\u2019t need to spend time adding an audio description afterwards. \n1.2.4 If your website has live videos that have some audio, you should get a stenographer to provide real-time captions that you can include with the video. I\u2019ll be honest: this can prove tricky nowadays. The Web Content Accessibility Guidelines 2.0 predate YouTube Live, Instagram live Stories, Periscope, and other such services. If your organisation creates a lot of live videos, you might not have enough resources to provide real-time captions for each one. In that case, if you know the contents of the audio beforehand, publish the contents during the live video\u2014or failing that, publish a transcription as soon as possible.\n1.2.5 Remember what I said about the recorded videos that have audio? If you can choose to either add an audio description or add text to the webpage to detail what happens in the video, you should go with the audio description.\n1.2.6 If your website has recorded videos that include audio information, you could provide a sign language version of the audio information; some people understand sign language better than written language. (You don\u2019t need to caption a video of a sign language version of audio information.)\n1.2.7 If your website has recorded videos that have audio, and you need to add an audio description, but the audio doesn\u2019t have enough pauses for you to add an \u201caudio description\u201d narration, you could provide a separate version of that video where you have added pauses to fit the audio description into.\n1.2.8 Let\u2019s go back to the recorded videos that have audio once more! 
You could add text to the webpage to detail what happens in the video, so that people who can neither read captions nor hear dialogue and audio description can use braille displays to understand your video.\n1.2.9 If your website has live audio, you could get a stenographer to provide real-time captions. Again, if you know the contents of the audio beforehand, publish the contents during the live audio or publish a transcription as soon as possible.\n(Congratulations on making it this far! I know that seems like a lot to remember, but keep in mind that we\u2019ve covered a complex area: helping your users to understand multimedia information that they can\u2019t see and/or hear. Grab a mince pie to celebrate, and let\u2019s keep going.)\n1.3.1 You must mark up your website\u2019s content so that your user\u2019s browser, and any assistive technology they use, can understand the hierarchy of the information and how each piece of information relates to the rest. Once again, you probably know how to do this: use the most appropriate HTML element for each piece of information. Mark up headings, lists, buttons, radio buttons, checkboxes, and links with the most appropriate HTML element. If you\u2019re looking for something to do to keep you busy this Christmas, scroll through the list of the elements of HTML. Do you notice any elements that you didn\u2019t know, or that you\u2019ve never used? Do you notice any elements that you could use on your current projects, to mark up the content more accurately? Also, revise HTML table advanced features and accessibility, how to structure an HTML form, and how to use the native form widgets\u2014you might be surprised at how much you can do with just HTML! Once you\u2019ve mastered those, you can make your website much more usable for your all of your users.\n1.3.2 If your webpage includes information that your user has to read in a certain order, you must make sure that their browser and assistive technology can present the information in that order. Don\u2019t rely on CSS or whitespace to create that order visually. Check that the order of the information makes sense when CSS and whitespace aren\u2019t formatting it. Also, try using the Tab key to move the focus through the links and form widgets on your webpage. Does the focus go where you expect it to? Keep this in mind when using order in CSS Grid or Flexbox.\n1.3.3 You must not presume that your users can identify sensory characteristics of things on your webpage. Some users can\u2019t tell what you\u2019ve positioned where on the screen. For example, instead of asking your users to \u201cChoose one of the options on the left\u201d, you could ask them to \u201cChoose one of our new products\u201d and link to that section of the webpage.\n1.4.1 You must not rely on colour as the only way to convey something to your users. Some of your users can\u2019t see, and some of your users can\u2019t distinguish between colours. For example, if your webpage uses green to highlight the products that your shop has in stock, you could add some text to identify those products, or you could group them under a sub-heading.\n1.4.2 If your webpage automatically plays a sound for more than 3 seconds, you must make sure your users can stop the sound or change its volume. 
Don\u2019t rely on your user turning down the volume on their computer; some users need to hear the screen reader on their computer, and some users just want to keep listening to whatever they were listening before your webpage interrupted them!\n1.4.3 You should make sure that your text contrasts enough with its background, so that your users can read it. Bookmark Lea Verou\u2019s Contrast Ratio calculator now. You can enter the text colour and background colour as named colours, or as RGB, RGBa, HSL, or HSLa values. You should make sure that:\n\nnormal text that set at 24px or larger has a ratio of at least 3:1;\nbold text that set at 18.75px or larger has a ratio of at least 3:1;\nall other text has a ratio of at least 4\u00bd:1.\n\nYou don\u2019t have to do this for disabled form controls, decorative stuff, or logos\u2014but you could!\n1.4.4 You should make sure your users can resize the text on your website up to 200% without using their assistive technology\u2014and still access all your content and functionality. You don\u2019t have to do this for subtitles or images of text.\n1.4.5 You should avoid using images of text and just use text instead. In 1998, Jeffrey Veen\u2019s first Hot Design Tip said, \u201cText is text. Graphics are graphics. Don\u2019t confuse them.\u201d Now that you can apply powerful CSS text-styling properties, use CSS Grid to precisely position text, and choose from thousands of web fonts (Jeffrey co-founded Typekit to help with this), you pretty much never need to use images of text. The guidelines say you can use images of text if you let your users specify the font, size, colour, and background of the text in the image of text\u2014but I\u2019ve never seen that on a real website. Also, this doesn\u2019t apply to logos.\n1.4.6 Let\u2019s go back to colour contrast for a second. You could make your text contrast even more with its background, so that even more of your users can read it. To do that, use Lea Verou\u2019s Contrast Ratio calculator to make sure that:\n\nnormal text that is 24px or larger has a ratio of at least 4\u00bd:1;\nbold text that 18.75px or larger has a ratio of at least 4\u00bd:1;\nall other text has a ratio of at least 7:1.\n\n1.4.7 If your website has recorded speech, you could make sure there are no background sounds, or that your users can turn off any background sounds. If that\u2019s not possible, you could make sure that any background sounds that last longer than a couple of seconds are at least four times quieter than the speech. This doesn\u2019t apply to audio Captchas, audio logos, singing, or rapping. (Yes, these guidelines mention rapping!)\n1.4.8 You could make sure that your users can reformat blocks of text on your website so they can read them better. To do this, make sure that your users can:\n\nspecify the colours of the text and the background, and\nmake the blocks of text less than 80-characters wide, and \nalign text to the left (or right for right-to-left languages), and \nset the line height to 150%, and \nset the vertical distance between paragraphs to 1\u00bd times the line height of the text, and \nresize the text (without using their assistive technology) up to 200% and still not have to scroll horizontally to read it.\n\nBy the way, when you specify a colour for text, always specify a colour for its background too. Don\u2019t rely on default background colours!\n1.4.9 Let\u2019s return to images of text for a second. 
You could make sure that you use them only for decoration and logos.\nCan users operate the controls and links on your website?\nThe second guideline has criteria that help you prevent your users from asking, \u201cHow the **** does this thing work?\u201d\n2.1.1 You must make sure that you users can carry out all of your website\u2019s activities with just their keyboard, without time limits for pressing keys. (This doesn\u2019t apply to drawing or anything else that requires a pointing device such as a mouse.) Again, if you use the most appropriate HTML element for each piece of information and for each form element, this should prove easy.\n2.1.2 You must make sure that when the user uses the keyboard to focus on some part of your website, they can then move the focus to some other part of your webpage without needing to use a mouse or touch the screen. If your website needs them to do something complex before they can move the focus elsewhere, explain that to your user. These \u201ckeyboard traps\u201d have become rare, but beware of forms that move focus from one text box to another as soon as they receive the correct number of characters.\n2.1.3 Let\u2019s revisit making sure that you users can carry out all of your website\u2019s activities with just their keyboard, without time limits for pressing keys. You could make sure that your user can do absolutely everything on your website with just the keyboard.\n2.2.1 Sometimes people need more time than you might expect to complete a task on your website. If any part of your website imposes a time limit on a task, you must do at least one of these: \n\nlet your users turn off the time limit before they encounter it; or\nlet your users increase the time limit to at least 10 times the default time limit before they encounter it; or\nwarn your users before the time limit expires and give them at least 20 seconds to extend it, and let them extend it at least 10 times.\n\nRemember: these guidelines are practical. They allow you to enforce time limits for real-time events such as auctions and ticket sales, where increasing or extending time limits wouldn\u2019t make sense. Also, the guidelines allow you to enforce a maximum time limit of 20 hours. The editors chose 20 hours because people need to go to sleep at some stage. See? Practical!\n2.2.2 In my experience, this criterion remains the least well-known\u2014even though some users can only use websites that conform to it. If your website presents content alongside other content that can distract users by automatically moving, blinking, scrolling, or updating, you must make sure that your users can:\n\npause, stop, or hide the other content if it\u2019s not essential and lasts more than 5 seconds; and\npause, stop, hide, or control the frequency of the other content if it automatically updates.\n\nIt\u2019s OK if your users miss live information such as stock price updates or football scores; you can\u2019t do anything about that! Also, this doesn\u2019t apply to animations such as progress bars that you put on a website to let all users know that the webpage isn\u2019t frozen.\n(If this one sounds complex, just add a pause button to anything that might distract your users.)\n2.2.3 Let\u2019s go back to time limits on tasks on your website. You could make your website even easier to use by removing all time limits except those on real-time events such as auctions and ticket sales. 
That would mean your user wouldn\u2019t need to interact with a timer at all.\n2.2.4 You could let your users turn off all interruptions\u2014server updates, promotions, and so on\u2014apart from any emergency information.\n2.2.5 This is possibly my favourite of these criteria! After your website logs your user out, you could make sure that when they log in again, they can continue from where they were without having lost any information. Do that, and you\u2019ll be on everyone\u2019s Nice List this Christmas.\n2.3.1 You must make sure that nothing flashes more than three times a second on your website, unless you can make sure that the flashes remain below the acceptable general flash and red flash thresholds\u2026\n2.3.2 \u2026or you could just make sure that nothing flashes more than three times per second on your website. This is usually an easier goal.\n2.4.1 You must make sure that your users can jump past any blocks of content, such as navigation menus, that are repeated throughout your website. You know the drill here: using HTML\u2019s sectioning elements such as header, nav, main, aside, and footer allows users with assistive technology to go straight to the content they need, and adding \u201cSkip Navigation\u201d links allows everyone to get to your main content faster.\n2.4.2 You must add a proper title to describe each webpage\u2019s topic. Your webpage won\u2019t even validate without a title element, so make it a useful one.\n2.4.3 If your users can focus on links and native form widgets, you must make sure that they can focus on elements in an order that makes sense.\n2.4.4 You must make sure that your users can understand the purpose of a link when they read:\n\nthe text of the link; or\nthe text of the paragraph, list item, table cell, or table header for the cell that contains the link; or\nthe heading above the link.\n\nYou don\u2019t have to do that for games and quizzes.\n2.4.5 You should give your users multiple ways to find any webpage within a set of webpages. Add site-wide search and a site map and you\u2019re done!\nThis doesn\u2019t apply for a webpage that is part of a series of actions (like a shopping cart and checkout flow) or to a webpage that is a result of a series of actions (like a webpage confirming that the user has bought what was in the shopping cart).\n2.4.6 You should help your users to understand your content by providing:\n\nheadings that describe the topics of you content;\nlabels that describe the purpose of the native form widgets on the webpage.\n\n2.4.7 You should make sure that users can see which element they have focussed on. Next time you use your website, try hitting the Tab key repeatedly. Does it visually highlight each item as it moves focus to it? If it doesn\u2019t, search your CSS to see whether you\u2019ve applied outline: 0; to all elements\u2014that\u2019s usually the culprit. Use the :focus pseudo-element to define how elements should appear when they have focus.\n2.4.8 You could help your user to understand where the current webpage is located within your website. Add \u201cbreadcrumb navigation\u201d and/or a site map and you\u2019re done.\n2.4.9 You could make links even easier to understand, by making sure that your users can understand the purpose of a link when they read the text of the link. Again, you don\u2019t have to do that for games and quizzes.\n2.4.10 You could use headings to organise your content by topic. 
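To make a couple of those criteria concrete, here is one way to combine a "Skip Navigation" link (2.4.1) with a clearly visible focus style (2.4.7). This is a sketch of my own rather than something lifted from the guidelines, so adapt the names and colours to your own site:

<a class="skip-link" href="#main-content">Skip to main content</a>
<!-- header and navigation go here -->
<main id="main-content">
  <!-- main content -->
</main>

.skip-link {
  position: absolute;
  left: -999em; /* parked off screen until it receives focus */
}
.skip-link:focus {
  position: static; /* drops it back into the page for keyboard users */
}
a:focus,
button:focus,
input:focus {
  outline: 3px solid #fd0; /* a thick, obvious focus indicator */
}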
\nCan users understand your content?\nThe third guideline has criteria that help you prevent your users from asking, \u201cWhat the **** does this mean?\u201d\n3.1.1 Let\u2019s start this section with the criterion that possibly takes the least time to implement; you must make sure that the user\u2019s browser can identify the main language that your webpage\u2019s content is written in. For a webpage that has mainly English content, use . \n3.1.2 You must specify when content in another language appears in your webpage, like so: I wish you a Joyeux No\u00ebl.. You don\u2019t have to do this for proper names, technical terms, or words that you can\u2019t identify a language for. You also don\u2019t have to do it for words from a different language that people consider part of the language around those words; for example, Come to our Christmas rendezvous! is OK.\n3.1.3 You could make sure that your users can find out the meaning of any unusual words or phrases, including idioms like \u201cstocking filler\u201d or \u201cBah! Humbug!\u201d and jargon such as \u201cVoiceOver\u201d and \u201cTalkBack\u201d. Provide a glossary or link to a dictionary.\n3.1.4 You could make sure that your users can find out the meaning of any abbreviation. For example, VoiceOver pronounces \u201cXmas\u201d as \u201cSmas\u201d instead of \u201cChristmas\u201d. Using the abbr element and linking to a glossary can help. (Interestingly, VoiceOver pronounces \u201cabbr\u201d as \u201cabbreviation\u201d!)\n3.1.5 Do your users need to be able to read better than a typically educated nine-year-old, to read your content (apart from proper names and titles)? If so, you could provide a version that doesn\u2019t require that level of reading ability, or you could provide images, videos, or audio to explain your content. (You don\u2019t have to add captions or audio description to those videos.)\n3.1.6 You could make sure that your users can access the pronunciation of any word in your content if that word\u2019s meaning depends on its pronunciation. For example, the word \u201cclose\u201d could have one of two meanings, depending on pronunciation, in a phrase such as, \u201cReady for Christmas? Close now!\u201d\n3.2.1 Some users need to focus on elements to access information about them. You must make sure that focusing on an element doesn\u2019t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form.\n3.2.2 Webpages are easier for users when the controls do what they\u2019re supposed to do. Unless you have warned your users about it, you must make sure that changing the value of a control such as a text box, checkbox, or drop-down list doesn\u2019t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form.\n3.2.3 To help your users to find the content they want on each webpage, you should put your navigation elements in the same place on each webpage. (This doesn\u2019t apply when your user has changed their preferences or when they use assistive technology to change how your content appears.) \n3.2.4 When a set of webpages includes things that have the same functionality on different webpages, you should name those things consistently. 
For example, don\u2019t use the word \u201cSearch\u201d for the search box on one webpage and \u201cFind\u201d for the search box on another webpage within that set of webpages.\n3.2.5 Let\u2019s go back to major changes, such as a new window opening, another element taking focus, or a form being submitted. You could make sure that they only happen when users deliberately make them happen, or when you have warned users about them first. For example, you could give the user a button for updating some content instead of automatically updating that content. Also, if a link will open in a new window, you could add the words \u201copens in new window\u201d to the link text.\n3.3.1 Users make mistakes when filling in forms. Your website must identify each mistake to your user, and must describe the mistake to your users in text so that the user can fix it. One way to identify mistakes reliably to your users is to set the aria-invalid attribute to true in the element that has a mistake. That makes sure that users with assistive technology will be alerted about the mistake. Of course, you can then use the [aria-invalid=\"true\"] attribute selector in your CSS to visually highlight any such mistakes. Also, look into how certain attributes of the input element such as required, type, and list can help prevent and highlight mistakes.\n3.3.2 You must include labels or instructions (and possibly examples) in your website\u2019s forms, to help your users to avoid making mistakes. \n3.3.3 When your user makes a mistake when filling in a form, your webpage should suggest ways to fix that mistake, if possible. This doesn\u2019t apply in scenarios where those suggestions could affect the security of the content.\n3.3.4 Whenever your user submits information that:\n\nhas legal or financial consequences; or\naffects information that they have previously saved in your website; or\nis part of a test\n\n\u2026you should make sure that they can:\n\nundo it; or\ncorrect any mistakes, after your webpage checks their information; or\nreview, confirm, and correct the information before they finally submit it.\n\n3.3.5 You could help prevent your users from making mistakes by providing obvious, specific help, such as examples, animations, spell-checking, or extra instructions.\n3.3.6 Whenever your user submits any information, you could make sure that they can:\n\nundo it; or\ncorrect any mistakes, after your webpage checks their information; or\nreview, confirm, and correct the information before they finally submit it.\n\nHave you made your website robust enough to work on your users\u2019 browsers and assistive technologies?\nThe fourth and final guideline has criteria that help you prevent your users from asking, \u201cWhy the **** doesn\u2019t this work on my device?\u201d\n4.1.1 You must make sure that your website works as well as possible with current and future browsers and assistive technology. Prioritise complying with web standards instead of relying on the capabilities of currently popular devices and browsers. Web developers didn\u2019t expect their users to be unwrapping the Wii U Browser five years ago\u2014who knows what browsers and assistive technologies our users will be unwrapping in five years\u2019 time? Avoid hacks, and use the W3C Markup Validation Service to make sure that your HTML has no errors.\n4.1.2 If you develop your own user interface components, you must make their name, role, state, properties, and values available to your user\u2019s browsers and assistive technologies. 
That should make them almost as accessible as standard HTML elements such as links, buttons, and checkboxes.\n\u201c\u2026and a partridge in a pear tree!\u201d\n\u2026as that very long Christmas song goes. We\u2019ve covered a lot in this article\u2014because your users have a lot of different levels of ability. Hopefully this has demystified the Web Content Accessibility Guidelines 2.0 for you. Hopefully you spotted a few situations that could arise for users on your website, and you now know how to tackle them. \nTo start applying what we\u2019ve covered, you might like to look at Sarah Horton and Whitney Quesenbery\u2019s personas for Accessible UX. Discuss the personas, get into their heads, and think about which aspects of your website might cause problems for them. See if you can apply what we\u2019ve covered today, to help users like them to do what they need to do on your website.\nHow to know when your website is perfectly accessible for everyone\nLOL! There will never be a time when your website becomes perfectly accessible for everyone. Don\u2019t aim for that. Instead, aim for regularly testing and making your website more accessible.\nWeb Content Accessibility Guidelines (WCAG) 2.1\nThe W3C hope to release the Web Content Accessibility Guidelines (WCAG) 2.1 as a \u201crecommendation\u201d (that\u2019s what the W3C call something that we should start using) by the middle of next year. Ten years may seem like a long time to move from version 2.0 to version 2.1, but consider the scale of the task: the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible. Keep an eye out for 2.1!\nYou\u2019ll go down in history\nOne last point: I\u2019ve met a surprising number of web designers and developers who do great work to make their websites more accessible without ever telling their users about it. Some of your potential customers have possibly tried and failed to use your website in the past. They probably won\u2019t try again unless you let them know that things have improved. A quick Twitter search for your website\u2019s name alongside phrases like \u201cassistive technology\u201d, \u201cdoesn\u2019t work\u201d, or \u201c#fail\u201d can let you find frustrated users\u2014so you can tell them about how you\u2019re making your website more accessible. Start making your websites work better for everyone\u2014and please, let everyone know.", "year": "2017", "author": "Alan Dalton", "author_slug": "alandalton", "published": "2017-12-03T00:00:00+00:00", "url": "https://24ways.org/2017/wcag-for-people-who-havent-read-them/", "topic": "code"} {"rowid": 214, "title": "Christmas Gifts for Your Future Self: Testing the Web Platform", "contents": "In the last year I became a CSS specification editor, on a mission to revitalise CSS Multi-column layout. This has involved learning about many things, one of which has been the Web Platform Tests project. In this article, I\u2019m going to share what I\u2019ve learned about testing the web platform. I\u2019m also going to explain why I think you might want to get involved too.\nWhy test?\nAt one time or another it is likely that you have been frustrated by an issue where you wrote some valid CSS, and one browser did one thing with it and another something else entirely. 
Experiences like this make many web developers feel that browser vendors don\u2019t work together, or they are actively doing things in a different way to one another to the detriment of those of us who use the platform. You\u2019ll be glad to know that isn\u2019t the case, and that the people who work on browsers want things to be consistent just as much as we do. It turns out however that interoperability, which is the official term for \u201cworks in all browsers\u201d, is hard.\nThanks to web-platform-tests, a test from another browser vendor just found genuine bug in our code before we shipped \ud83d\ude3b\u2014 Brian Birtles (@brianskold) February 10, 2017\n\nIn order for W3C Specifications to move on to become W3C Recommendations we need to have interoperable implementations.\n\n6.2.4 Implementation Experience\nImplementation experience is required to show that a specification is sufficiently clear, complete, and relevant to market needs, to ensure that independent interoperable implementations of each feature of the specification will be realized. While no exhaustive list of requirements is provided here, when assessing that there is adequate implementation experience the Director will consider (though not be limited to):\nis each feature of the current specification implemented, and how is this demonstrated?\nare there independent interoperable implementations of the current specification?\nare there implementations created by people other than the authors of the specification?\nare implementations publicly deployed?\nis there implementation experience at all levels of the specification\u2019s ecosystem (authoring, consuming, publishing\u2026)?\nare there reports of difficulties or problems with implementation?\nhttps://www.w3.org/2017/Process-20170301/#transition-reqs\n\nWe all want interoperability, achieving interoperability is part of the standards process. The next question is, how do we make sure that we get it?\nUnimplemented vs uninteroperable implementations\nBefore looking at how we can try to improve interoperability, I\u2019d like to look at the reasons we don\u2019t always meet that aim. There are a couple of reasons why browser X is not doing the same thing as browser Y.\nThe first reason is that browser X has not implemented that feature yet. Relatively small teams of people work on browser engines, and their resources are spread as thinly as those of any company. Behind those browsers are business or organisational goals which may not match our desire for a shiny feature to be made available. There are ways in which we as the web community can help gently encourage implementations - by requesting the feature, by using it so it shows up in usage reports, or writing about it to show interest. However, there will always be some degree of lag based on priorities.\nA browser not supporting a feature at all, is reasonably easy to deal with these days. We can test for support with Feature Queries, and create sensible fallbacks. What is harder to deal with is when a feature is implemented in different ways by different browsers. In that situation you use the feature, perhaps referring to the specification to ensure that you are writing your CSS correctly. It looks exactly as you expect in one browser and it\u2019s all broken when you test in another.\nA frequent cause of this kind of issue is that the specification is not well defined in a particular area or that the specification has changed since one or other browser implemented it. 
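Before we go on, here is roughly what that kind of Feature Query fallback can look like. This is a simplified sketch of my own, not something taken from any specification:

.gallery {
  display: block; /* what browsers without grid support will use */
}

@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-gap: 1em;
  }
}

Browsers that do not understand @supports, or that fail the test inside it, simply keep the simpler layout.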
CSS specifications are not developed in a darkened room, then presented to browser vendors to implement as a completed document. It would be nice if it worked like that, however the web platform is a gnarly thing. Before we can be sure that a specification is correct, it needs implementing in order that we can get the interoperable implementations I described earlier. A circular process has to happen. Specifications have to be written, browsers have to implement and find the problems, and then the specification has to be revised.\nMany people reading this will be familiar with how flexbox changed three times in browsers, leaving us with a mess of incompatibilities and the need to use at least two versions of the spec. This story was an example of this circular process, in this case the specification was flagged as experimental using vendor prefixes. We had become used to using vendor prefixes in production and early adopters of flexbox were bitten by this. Today, specifications are developed behind experimental flags as we saw with CSS Grid Layout. Yet there has to come a time when implementations ship, and remove those flags, and it may be that knowingly or unknowingly some interop issues slip through.\nYou will know these interop issues as \u201cbrowser bugs\u201d, perhaps you have even reported one (thank you!) and none of us want them, so how do we make the platform as robust as possible?\nHow do we ensure we have interoperability?\nIf you were working on a large web application, with several people committing code, it would be very easy for one person to make a change that inadvertently broke some part of the application. They might not realise the fact that their change would cause a problem, due to not having a complete understanding of the entire codebase. To prevent this from happening, it is accepted good practice to write tests as well as code. The tests can then be run before the application is deployed.\nUnless you start out from the beginning writing tests, and are very good at writing a test for every bit of code, it is likely that some issues do slip through from time to time. When this happens, a good approach is to not only fix the issue but also to write a test that would stop it ever happening again. That way the test suite improves over time and hopefully fewer issues happen.\nThe web platform is essentially a giant, sprawling application, with a huge range of people working on it in different ways. There is therefore plenty of opportunity for issues to creep in, so it seems like having some way of writing tests and automating those tests on browsers would be a good thing. That, is what the Web Platform Tests project has set out to achieve.\nWeb Platform Tests\nWeb Platform Tests is the test suite for the web platform. It is set of tests for all parts of the web platform, which can be run in any browser and the results reported. This article mostly discusses CSS tests, because I work on CSS. You will find that there are tests covering the full platform, and you can look into whichever area you have the most interest and experience in.\nIf we want to create a test suite for a CSS specification then we need to ensure that every feature of the specification has a related test. 
If a change is made to the spec, and a test committed that reflects that change, then it should be straightforward to run that test against each browser and see if it passes.\nCurrently, at the CSS Working Group, specifications that are at Candidate Recommendation Status should commit a test with every normative change to the spec. To explain the terminology, a normative change is one that changes some of the normative text of a specification - text that contains instructions as to how a browser should render a certain thing. A Candidate Recommendation is the status at which the Working Group officially request implementations of the spec, therefore it is reasonable to assume that any change may cause an interoperability issue. It is usually the case that representatives from all browsers will have discussed the change, so anyone who needs to change code will be aware. In this case the test allows them to check that their change passes and matches everyone else. Tests would also highlight the situation where a change to the spec caused an issue in a browser that otherwise wouldn\u2019t be aware of it, just as a test suite for your web application should alert a person committing code that their change will cause a problem elsewhere.\nDiscovering the tests\nI\u2019ve found that the more I have understood the effort that goes into interoperability, and the reasons why creating an interoperable web is so hard, running into browser issues has become less frustrating. I have somewhere to go, even if all I can do is log the bug.\nIf you are even slightly interested in the subject, go have a poke around wpt.fyi. You can explore the various parts of the web platform and see how many tests have been committed. All the CSS tests are under the directory /css where you will find each specification. Find a specification you are interested in, and look at the tests. Each test has a link to run it in your own browser to see if it passes. This can be useful in itself: if you are battling with an issue and have reduced it down to something specific, you can go and look to see if there is a test covering that and whether it appears to pass or fail in the browser you are battling with. If it turns out that the test fails, it\u2019s probably not you!\nA test on the wpt.fyi dashboard\nNote: In some tests you will come across mention of a font called Ahem. This is a font designed for testing which contains consistent glyphs. You can read about how to use the font and download it here.
There is some documentation here, which does tend to assume you know about the tests and what you are looking for - having read this article you know as much as I do. To be able to work on tests you will want to:\n\nClone the WPT repo, this is where all the tests are stored\nInstall some tools so you can run up a local copy of the tests\n\nThe instructions on the Readme in the repo should get you up and running, you can then load your own version of the test suite in a browser at http://web-platform-test:8000, or whichever port you set up.\nRunning tests locally\nFinding things to test\nIt\u2019s currently not straightforward to locate low-hanging fruit in order to start committing tests. There are some issues flagged up as a good first issue in the GitHub repo, if any of those match your interest and knowledge. Otherwise, a good place to start is where you know of existing interoperability issues. If you are aware of a browser bug, have a look and see if there is a test that addresses it. If not, then a test highlights the interoperability issue, and if it is an issue that you are running into means that you have a nice way to see if it has been fixed!\nTalk to people\nThere is an IRC channel at irc://irc.w3.org:6667/testing, where you will find people who are writing tests as well as people who are working on the test suite framework itself. They have always been very friendly, and are likely to welcome people with a real interest in creating tests.\nGathering information\nFirst you need to read the spec. To be able to create a test you need to know and to understand what the specification says should be happening. As I mentioned, writing tests will improve your knowledge dramatically! In general I find that web developers assume their favourite browser has got it right, this isn\u2019t about right or wrong however, or good browsers versus bad ones. The browser with the incorrect implementation may have had a perfect, as per the spec implementation, until something changed. Do some investigation and work out what the spec says, and which \u2013 if any \u2013 browser is doing it correctly.\nAnother good place to look when trying to create a test for an interop issue, is to look at the browser issue trackers. It is quite likely that someone has already logged the issue, and detailed what it is, and even which browsers are as per the spec. This is useful information, as you then have a clue as to which browsers should pass your test. Remember to check version numbers - an issue may well be fixed in a pre-release version of Chrome for example, but not in the public release.\n\nEdge Issue Tracker\nMozilla Issue Tracker\nWebKit Issue Tracker\nChromium Issue Tracker\n\nWriting the test\nIf you\u2019ve ever created a Reduced Test Case to isolate a browser issue, you already have some idea of what we are trying to do with a test. We want to test one thing, in isolation, and to be able to confirm \u201cyes this works as per the spec\u201d or \u201cno, this does not\u201d.\nThe main two types of test are:\n\ntestharness.js tests\nreftests\n\nThe testharness.js tests use JavaScript to test an assertion, this framework is designed as a way to test Web APIs and as this quickly gets fairly complicated - and I\u2019m a complete beginner myself at writing these - I\u2019ll refer you to the excellent tutorial Using testharness.js.\nMany CSS tests will be reftests. A reftest involves getting two pages to lay out in the same way, so that they are visually the same. 
For example, you can find a reftest for Grid Layout at: https://w3c-test.org/css/css-grid/alignment/grid-gutters-001.html or at http://web-platform.test:8000/css/css-grid/alignment/grid-gutters-001.html if you have run up your own copy of WPT.\nThe test is a small HTML page titled \u201cCSS Grid Layout Test: Support for gap shorthand property of row-gap and column-gap\u201d, and the only visible text on it reads: \u201cThe test passes if it has the same visual effect as reference.\u201d
I am testing the new gap property (renamed from grid-gap). The reference file can be found by looking for the link rel="match" line in the head of the test. In that file, I am using absolute positioning to mock up the way the file would look if gap is implemented correctly. The reference page is titled \u201cCSS Grid Layout Reference: a square with a green cross\u201d.
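To make that concrete, here is a much-simplified sketch of the shape such a pair of files takes. The file names, class names and sizes below are placeholders of my own, not the actual grid-gutters-001 source:

<!-- the test -->
<!DOCTYPE html>
<meta charset="utf-8">
<title>CSS Grid Layout Test: gap shorthand</title>
<link rel="match" href="grid-gutters-001-ref.html">
<style>
  .grid {
    display: grid;
    grid-template-columns: 100px 100px;
    grid-template-rows: 100px 100px;
    gap: 10px;          /* the feature under test */
    width: 210px;
    background: green;  /* shows through the gaps as a green cross */
  }
  .grid div { background: silver; }
</style>
<p>The test passes if it has the same visual effect as reference.</p>
<div class="grid"><div></div><div></div><div></div><div></div></div>

<!-- the reference: the same picture, built without using gap at all -->
<!DOCTYPE html>
<meta charset="utf-8">
<title>CSS Grid Layout Reference: a square with a green cross</title>
<style>
  .cross { position: relative; width: 210px; height: 210px; background: green; }
  .cross div { position: absolute; width: 100px; height: 100px; background: silver; }
  .cross div:nth-child(2) { left: 110px; }
  .cross div:nth-child(3) { top: 110px; }
  .cross div:nth-child(4) { left: 110px; top: 110px; }
</style>
<p>The test passes if it has the same visual effect as reference.</p>
<div class="cross"><div></div><div></div><div></div><div></div></div>

If a browser supports the gap shorthand, the two pages look identical; if it doesn't, they don't.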
\nThe tests are compared in an automated way by taking screenshots of the test and reference.\nThese are relatively simple tests to write, you will find the work is not in writing the test however. The work is really in doing the research, and making sure you understand what is supposed to happen so you can write the test. Which is why, if you really want to get your hands dirty in the web platform, this is a good place to start.\nCommitting a test\nOnce you have written a test you can run the lint tool to make sure that everything is tidy. This tool is run automatically after you submit your pull request, and reviewers won\u2019t accept a test with lint errors, so do this locally first to catch anything obvious.\nTests are added as a pull request, once you have your test ready to go you can create a pull request to add it to the repository. Your test will be tested and it will then wait for a review.\nYou may well then find yourself in a bit of a waiting game, as the test needs to be reviewed. How long that takes will depend on how active work is on that spec. People who are in the OWNERS file for that spec should be notified. You can always ask in IRC to see if someone is available who can look at and potentially merge your test.\nUsually the reviewer will have some comments as to how the test can be improved, in the same as the owner of an open source project you submit a PR to might ask you to change some things. Work with them to make your test as good as it can be, the things you learn on the first test you submit will make future ones easier. You can then bask in the glow of knowing you have done something towards the aim of a more interoperable web for all of us.\nChristmas gifts for your future self\nI have been a web developer for over 20 years. I have no idea what the web platform will look like in 20 more years, but for as long as I\u2019m working on it I\u2019ll keep on trying to make it better. Making the web more interoperable makes it a better place to be a web developer, storing up some Christmas gifts for my future self, while learning new things as I do so.\nResources\nI rounded up everything I could find on WPT while researching this article. As well as some other links that might be helpful for you. These links are below. Happy testing!\n\nWeb Platform Tests\nUsing testharness.js\nIRC Channel irc://irc.w3.org:6667/testing\nEdge Issue Tracker\nMozilla Issue Tracker\nWebKit Issue Tracker\nChromium Issue Tracker\nReducing an Issue - guide to created a reduced test case\nEffectively Using Web Platform Tests: Slides and Video\nAn excellent walkthrough from Lyza Gardner on her working on tests for the HTML specification - Moving Targets: a case study on testing web standards.\nImproving interop with web-platform-tests: Slides and Video", "year": "2017", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2017-12-10T00:00:00+00:00", "url": "https://24ways.org/2017/testing-the-web-platform/", "topic": "code"} {"rowid": 215, "title": "Teach the CLI to Talk Back", "contents": "The CLI is a daunting tool. It\u2019s quick, powerful, but it\u2019s also incredibly easy to screw things up in \u2013 either with a mistyped command, or a correctly typed command used at the wrong moment. This puts a lot of people off using it, but it doesn\u2019t have to be this way.\nIf you\u2019ve ever interacted with Slack\u2019s Slackbot to set a reminder or ask a question, you\u2019re basically using a command line interface, but it feels more like having a conversation. 
(My favourite Slack app is Lunch Train which helps with the thankless task of herding colleagues to a particular lunch venue on time.)\nSame goes with voice-operated assistants like Alexa, Siri and Google Home. There are even games, like Lifeline, where you interact with a stranded astronaut via pseudo SMS, and KOMRAD where you chat with a Soviet AI.\nI\u2019m not aiming to build an AI here \u2013 my aspirations are a little more down to earth. What I\u2019d like is to make the CLI a friendlier, more forgiving, and more intuitive tool for new or reluctant users. I want to teach it to talk back.\nInteractive command lines in the wild\nIf you\u2019ve used dev tools in the command line, you\u2019ve probably already used an interactive prompt \u2013 something that asks you questions and responds based on your answers. Here are some examples:\nYeoman\nIf you have Yeoman globally installed, running yo will start a command prompt.\n\nThe prompt asks you what you\u2019d like to do, and gives you options with how to proceed. Seasoned users will run specific commands for these options rather than go through this prompt, but it\u2019s a nice way to start someone off with using the tool.\nnpm\nIf you\u2019re a Node.js developer, you\u2019re probably familiar with typing npm init to initialise a project. This brings up prompts that will populate a package.json manifest file for that project.\n\nThe alternative would be to expect the user to craft their own package.json, which is more error-prone since it\u2019s in JSON format, so something as trivial as an extraneous comma can throw an error.\nSnyk\nSnyk is a dev tool that checks for known vulnerabilities in your dependencies. Running snyk wizard in the CLI brings up a list of all the known vulnerabilities, and gives you options on how to deal with it \u2013 such as patching the issue, applying a fix by upgrading the problematic dependency, or ignoring the issue (you are then prompted for a reason).\n\nThese decisions get mapped to the manifest and a .snyk file, and committed into the repo so that the settings are the same for everyone who uses that project.\nI work at Snyk, and running the wizard is what made me think about building my own personal assistant in the command line to help me with some boring, repetitive tasks.\nWriting your own\nSomething I do a lot is add bookmarks to styleguides.io \u2013 I pull down the entire repo, copy and paste a template YAML file, and edit to contents. Sometimes I get it wrong and break the site. So I\u2019ve been putting together a tool to help me add bookmarks.\nIt\u2019s called bookmarkbot \u2013 it\u2019s a personal assistant squirrel called Mark who will collect and bury your bookmarks for safekeeping.*\n\n*Fortunately, this metaphor also gives me a charming excuse for any situation where bookmarks sometimes get lost \u2013 it\u2019s not my poorly-written code, honest, it\u2019s just being realistic because sometimes squirrels forget where they buried things!\nWhen you run bookmarkbot, it will ask you for some information, and save that information as a Markdown file in YAML format.\nFor this demo, I\u2019m going to use a Node.js package called inquirer, which is a well supported tool for creating command line prompts. I like it because it has a bunch of different question types; from input, which asks for some text back, confirm which expects a yes/no response, or a list which gives you a set of options to choose from. 
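For example, a prompt that mixes those three types might be defined something like this (the names and messages here are just placeholders of mine):

var inquirer = require('inquirer');

var questions = [
  { type: 'input', name: 'title', message: 'What shall I call this?' },
  { type: 'confirm', name: 'public', message: 'Should this be public?' },
  {
    type: 'list',
    name: 'category',
    message: 'Which category fits best?',
    choices: ['article', 'video', 'tool'],
  },
];

inquirer.prompt(questions).then(answers => {
  // answers is an object keyed by each question's name
  console.log(answers.title, answers.public, answers.category);
});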
You can even nest questions, Choose Your Own Adventure style.\nPrerequisites\n\nNode.js\nnpm\nRubyGems (Only if you want to go as far as serving a static site for your bookmarks, and you want to use Jekyll for it)\n\nDisclaimer\nBear in mind that this is a really simplified walkthrough. It doesn\u2019t have any error states, and it doesn\u2019t handle the situation where we save a file with the same name. But it gets you in a good place to start building out your tool.\nLet\u2019s go!\nCreate a new folder wherever you keep your projects, and give it an awesome name (I\u2019ve called mine bookmarks and put it in the Sites directory because I\u2019m unimaginative). Now cd to that directory. \ncd Sites/bookmarks\nLet\u2019s use that example I gave earlier, the trusty npm init.\nnpm init\nPop in the information you\u2019d like to provide, or hit ENTER to skip through and save the defaults. Your directory should now have a package.json file in it. Now let\u2019s install some of the dependencies we\u2019ll need.\nnpm install --save inquirer\nnpm install --save slugify\nNext, add the following snippet to your package.json to tell it to run this file when you run npm start.\n\"scripts\": {\n \u2026\n \"start\": \"node index.js\"\n}\nThat index.js file doesn\u2019t exist yet, so let\u2019s create it in the root of our folder, and add the following:\n// Packages we need\nvar fs = require('fs'); // Creates our file (part of Node.js so doesn't need installing)\nvar inquirer = require('inquirer'); // The engine for our questions prompt\nvar slugify = require('slugify'); // Will turn a string into a usable filename\n\n// The questions\nvar questions = [\n {\n type: 'input',\n name: 'name',\n message: 'What is your name?',\n },\n];\n\n// The questions prompt\nfunction askQuestions() {\n\n // Ask questions\n inquirer.prompt(questions).then(answers => {\n\n // Things we'll need to generate the output\n var name = answers.name;\n\n // Finished asking questions, show the output\n console.log('Hello ' + name + '!');\n\n });\n\n}\n\n// Kick off the questions prompt\naskQuestions();\nThis is just some barebones where we\u2019re including the inquirer package we installed earlier. I\u2019ve stored the questions in a variable, and the askQuestions function will prompt the user for their name, and then print \u201cHello \u201d in the console.\nEnough setup, let\u2019s see some magic. Save the file, go back to the command line and run npm start.\n\nExtending what we\u2019ve learnt\nAt the moment, we\u2019re just saving a name to a file, which isn\u2019t really achieving our goal of saving bookmarks. We don\u2019t want our tool to forget our information every time we talk to it \u2013 we need to save it somewhere. So I\u2019m going to add a little function to write the output to a file.\nSaving to a file\nCreate a folder in your project\u2019s directory called _bookmarks. 
This is where the bookmarks will be saved.\nI\u2019ve replaced my questions array, and instead of asking for a name, I\u2019ve extended out the questions, asking to be provided with a link and title (as a regular input type), a list of tags (using inquirer\u2019s checkbox type), and finally a description, again, using the input type.\nSo this is how my code looks now:\n// Packages we need\nvar fs = require('fs'); // Creates our file\nvar inquirer = require('inquirer'); // The engine for our questions prompt\nvar slugify = require('slugify'); // Will turn a string into a usable filename\n\n// The questions\nvar questions = [\n {\n type: 'input',\n name: 'link',\n message: 'What is the url?',\n },\n {\n type: 'input',\n name: 'title',\n message: 'What is the title?',\n },\n {\n type: 'checkbox',\n name: 'tags',\n message: 'Would you like me to add any tags?',\n choices: [\n { name: 'frontend' },\n { name: 'backend' },\n { name: 'security' },\n { name: 'design' },\n { name: 'process' },\n { name: 'business' },\n ],\n },\n {\n type: 'input',\n name: 'description',\n message: 'How about a description?',\n },\n];\n\n// The questions prompt\nfunction askQuestions() {\n\n // Say hello\n console.log('\ud83d\udc3f Oh, hello! Found something you want me to bookmark?\\n');\n\n // Ask questions\n inquirer.prompt(questions).then((answers) => {\n\n // Things we'll need to generate the output\n var title = answers.title;\n var link = answers.link;\n var tags = answers.tags + '';\n var description = answers.description;\n var output = '---\\n' +\n 'title: \"' + title + '\"\\n' +\n 'link: \"' + link + '\"\\n' +\n 'tags: [' + tags + ']\\n' +\n '---\\n' + description + '\\n';\n\n // Finished asking questions, show the output\n console.log('\\n\ud83d\udc3f All done! Here is what I\\'ve written down:\\n');\n console.log(output);\n\n // Things we'll need to generate the filename\n var slug = slugify(title);\n var filename = '_bookmarks/' + slug + '.md';\n\n // Write the file\n fs.writeFile(filename, output, function () {\n console.log('\\n\ud83d\udc3f Great! I have saved your bookmark to ' + filename);\n });\n\n });\n\n}\n\n// Kick off the questions prompt\naskQuestions();\nThe output is formatted into YAML metadata as a Markdown file, which will allow us to turn it into a static HTML file using a build tool later. Run npm start again and have a look at the file it outputs.\n\nGetting confirmation\nBefore the user makes critical changes, it\u2019s good to verify those changes first. We\u2019re going to add a confirmation step to our tool, before writing the file. 
More seasoned CLI users may favour speed over a \u201chey, can you wait a sec and just check this is all ok\u201d step, but I always think it\u2019s worth adding one so you can occasionally save someone\u2019s butt.\nSo, underneath our questions array, let\u2019s add a confirmation array.\n// Packages we need\n\u2026\n// The questions\n\u2026\n\n// Confirmation questions\nvar confirm = [\n {\n type: 'confirm',\n name: 'confirm',\n message: 'Does this look good?',\n },\n];\n\n// The questions prompt\n\u2026\n\nAs we\u2019re adding the confirm step before the file gets written, we\u2019ll need to add the following inside the askQuestions function:\n// The questions prompt\nfunction askQuestions() {\n // Say hello\n \u2026\n // Ask questions\n inquirer.prompt(questions).then((answers) => {\n \u2026\n // Things we'll need to generate the output\n \u2026\n // Finished asking questions, show the output\n \u2026\n\n // Confirm output is correct\n inquirer.prompt(confirm).then(answers => {\n\n // Things we'll need to generate the filename\n var slug = slugify(title);\n var filename = '_bookmarks/' + slug + '.md';\n\n if (answers.confirm) {\n // Save output into file\n fs.writeFile(filename, output, function () {\n console.log('\\n\ud83d\udc3f Great! I have saved your bookmark to ' +\n filename);\n });\n } else {\n // Ask the questions again\n console.log('\\n\ud83d\udc3f Oops, let\\'s try again!\\n');\n askQuestions();\n }\n\n });\n\n });\n}\n\n// Kick off the questions prompt\naskQuestions();\nNow run npm start and give it a go!\n\nTyping y will write the file, and n will take you back to the start. Ideally, I\u2019d store the answers already given as defaults so the user doesn\u2019t have to start from scratch, but I want to keep this demo simple.\nServing the files\nNow that your bookmarking tool is successfully saving formatted Markdown files to a folder, the next step is to serve those files in a way that lets you share them online. The easiest way to do this is to use a static-site generator to convert your YAML files into HTML, and pop them all on one page. Now, you\u2019ve got a few options here and I don\u2019t want to force you down any particular path, as there are plenty out there \u2013 it\u2019s just a case of using the one you\u2019re most comfortable with.\nI personally favour Jekyll because of its tight integration with GitHub Pages \u2013 I don\u2019t want to mess around with hosting and deployment, so it\u2019s really handy to have my bookmarks publish themselves on my site as soon as I commit and push them using Git.\nI\u2019ll give you a very brief run-through of how I\u2019m doing this with bookmarkbot, but I recommend you read my Get Started With GitHub Pages (Plus Bonus Jekyll) guide if you\u2019re unfamiliar with them, because I\u2019ll be glossing over some bits that are already covered in there.\nSetting up a build tool\nIf you haven\u2019t already, install Jekyll and Bundler globally through RubyGems. Jekyll is our static-site generator, and Bundler is what we use to install Ruby dependencies.\ngem install jekyll bundler\nIn my project folder, I\u2019m going to run the following which will install the Jekyll files we\u2019ll need to build our listing page. I\u2019m using --force, otherwise it will complain that the directory isn\u2019t empty.\njekyll new . --force\nIf you check your project folder, you\u2019ll see a bunch of new files. Now run the following to start the server:\nbundle exec jekyll serve\nThis will build a new directory called _site. 
This is where your static HTML files have been generated. Don\u2019t touch anything in this folder because it will get overwritten the next time you build.\nNow that serve is running, go to http://127.0.0.1:4000/ and you\u2019ll see the default Jekyll page and know that things are set up right. Now, instead, we want to see our list of bookmarks that are saved in the _bookmarks directory (make sure you\u2019ve got a few saved). So let\u2019s get that set up next.\nOpen up the _config.yml file that Jekyll added earlier. In here, we\u2019re going to tell it about our bookmarks. Replace everything in your _config.yml file with the following:\ntitle: My Bookmarks\ndescription: These are some of my favourite articles about the web.\nmarkdown: kramdown\nbaseurl: /bookmarks # This needs to be the same name as whatever you call your repo on GitHub.\ncollections:\n - bookmarks\nThis will make Jekyll aware of our _bookmarks folder so that we can call it later. Next, create a new directory and file at _layouts/home.html. The heart of the layout is this Liquid logic:\n{{site.title}}\n{{site.description}}\n{% for bookmark in site.bookmarks %}\n {{bookmark.title}}\n {{bookmark.content}}\n {% if bookmark.tags %}\n {% for tags in bookmark.tags %}{{tags}}{% endfor %}\n {% endif %}\n{% endfor %}
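Built out into a full layout that you can paste into home.html, it might look something like this. The surrounding markup is my own suggestion, so adjust the elements to taste:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{{site.title}}</title>
</head>
<body>
  <h1>{{site.title}}</h1>
  <p>{{site.description}}</p>
  <ul>
    {% for bookmark in site.bookmarks %}
      <li>
        <h2><a href="{{bookmark.link}}">{{bookmark.title}}</a></h2>
        {{bookmark.content}}
        {% if bookmark.tags %}
          <ul>
            {% for tags in bookmark.tags %}
              <li>{{tags}}</li>
            {% endfor %}
          </ul>
        {% endif %}
      </li>
    {% endfor %}
  </ul>
</body>
</html>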
\n\n\n\n\nRestart Jekyll for your config changes to kick in, and go to the url it provides you (probably http://127.0.0.1:4000/bookmarks, unless you gave something different as your baseurl).\n\nIt\u2019s a decent start \u2013 there\u2019s a lot more we can do in this area but now we\u2019ve got a nice list of all our bookmarks, let\u2019s get it online!\nIf you want to use GitHub Pages to host your files, your first step is to push your project to GitHub. Go to your repository and click \u201csettings\u201d. Scroll down to the section labelled \u201cGitHub Pages\u201d, and from here you can enable it. Select your master branch, and it will provide you with a url to view your published pages.\n\nWhat next?\nNow that you\u2019ve got a framework in place for publishing bookmarks, you can really go to town on your listing page and make it your own. First thing you\u2019ll probably want to do is add some CSS, then when you\u2019ve added a bunch of bookmarks, you\u2019ll probably want to have some filtering in place for the tags, perhaps extend the types of questions that you ask to include an image (if you\u2019re feeling extra-fancy, you could just ask for a url and pull in metadata from the site itself). Maybe you\u2019ve got an idea that doesn\u2019t involve bookmarks at all.\nYou could use what you\u2019ve learnt to build a place where you can share quotes, a list of your favourite restaurants, or even Christmas gift ideas.\nHere\u2019s one I made earlier\n\nMy demo, bookmarkbot, is on GitHub, and I\u2019ve reused a lot of the code from styleguides.io. Feel free to grab bits of code from there, and do share what you end up making!", "year": "2017", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2017-12-11T00:00:00+00:00", "url": "https://24ways.org/2017/teach-the-cli-to-talk-back/", "topic": "code"} {"rowid": 216, "title": "Styling Components - Typed CSS With Stylable", "contents": "There\u2019s been a lot of debate recently about how best to style components for web apps so that styles don\u2019t accidentally \u2018leak\u2019 out of the component they\u2019re meant for, or clash with other styles on the page.\nElaborate CSS conventions have sprung up, such as OOCSS, SMACSS, BEM, ITCSS, and ECSS. These work well, but they are methodologies, and require everyone in the team to know them and follow them, which can be a difficult undertaking across large or distributed teams.\nOthers just give up on CSS and put all their styles in JavaScript. Now, I\u2019m not bashing JS, especially so close to its 22nd birthday, but CSS-in-JS has problems of its own. Browsers have 20 years experience in optimising their CSS engines, so JavaScript won\u2019t be as fast as using real CSS, and in any case, this requires waiting for JS to download, parse, execute then render the styles.\nThere\u2019s another problem with CSS-in-JS, too. Since Responsive Web Design hit the streets, most designers no longer make comps in Photoshop or its equivalents; instead, they write CSS. Why hire an expensive design professional and require them to learn a new way of doing their job? 
A recent thread on Twitter asked "What's your biggest gripe with CSS-in-JS?", and the replies were illuminating: "Always having to remember to camelCase properties then spending 10min pulling hair out when you do forget", "the cryptic domain-specific languages that each of the frameworks do just ever so slightly differently", "When I test look and feel in browser, then I copy paste from inspector, only to have to re-write it as a JSON object", "Lack of linting, autocomplete, and css plug-ins for colors/ incrementing/ etc".
If you're a developer, and you're still unconvinced, I challenge you to let designers change the font in your IDE to Zapf Chancery and choose a new colour scheme, simply because they like it better. Does that sound like fun? Will that boost your productivity? Thought not.
Some chums at Wix Engineering and I wanted to see if we could square this circle. Wix-hosted sites have always used CSS-in-JS (the concept isn't new; it was in Netscape 4!), but that was causing performance problems. Could we somehow devise a method of extending CSS (like Sass and LESS do) that gives us styles that are guaranteed not to leak or clash, that is compatible with code editors' autocompletion, and that could be pre-processed at build time to valid, cross-browser, static CSS?
A few months and a few proofs of concept later (drumroll): yes, we could! We call it Stylable.
Introducing Stylable
Stylable is a CSS pre-processor, like Sass or LESS. It uses CSS syntax, so all your development tools will work. At build time, the Stylable CSS extensions are transpiled to flat, valid, cross-browser vanilla CSS for maximum performance. There's quite a bit to it, and this is a short article, so let's look at the basic concepts.
Components all the way down
Stylable is designed for component-based systems. Imagine you have a Gallery component. Within that, there is a Navigation component (containing, for example, 'next', 'previous', 'show all thumbnails', and 'show all albums' controls), and within that there are NavButton components. Each component is discrete, used elsewhere in the system in different contexts, and perhaps maintained by different team members or even different organisations – you can use Stylable to add a typed interface to non-Stylable component libraries, as well as using it to build an app from scratch.
Firstly, Stylable automatically namespaces styles so they only apply inside their component, by rewriting them at build time with a unique (but human-readable) prefix. So, for example, .root in the Gallery component's stylesheet might be re-written as something like .Gallery__root, where the prefix is unique to that component.
So far, so BEM-like (albeit without the headache of remembering a convention). But what else can it do?
Custom pseudo-elements
An important feature of Stylable is the ability to reach into a component and style it from the outside, without having to know about its internal structure. Let's see the guts of a simple JSX button component in the file button.jsx:
render () {
    return (
        <button>
            <span className="icon" />
            <span className="label">Click me</span>
        </button>
    );
}
(Note: className is the JSX way of setting a class on an element; this example uses React, but Stylable itself is framework-agnostic.)
I style it using a Stylable stylesheet (the .st.css suffix tells the preprocessor to process this file):
/* button.st.css */

/* note that the root class is automatically placed on the root HTML
element by the Stylable React integration */
.root {
    background: #b0e0e6;
}

.icon {
    display: block;
    height: 2em;
    background-image: url('./assets/btnIcon.svg');
}

.label {
    font-size: 1.2em;
    color: rgba(81, 12, 68, 1.0);
}
Note that Stylable allows all the CSS that you know and love to be included. As Drew Powers wrote in his review:

with Stylable, you get CSS, and every part of CSS. This seems like a "duh" observation, but this is significant if you've ever battled with a CSS-in-JS framework over a lost or "hacky" implementation of a basic CSS feature.

I can import my Button component into another component – this time, panel.jsx:
/* panel.jsx */
import * as React from 'react';
import {properties, stylable} from 'wix-react-tools';
import {Button} from '../button';
import style from './panel.st.css';

export const Panel = stylable(style)(() => (
    <div>
        <Button className="cancelBtn" />
    </div>
));
In panel.st.css:
/* panel.st.css */
:import {
    -st-from: './button.st.css';
    -st-default: Button;
}

/* cancelBtn is of type Button */
.cancelBtn {
    -st-extends: Button;
    background: cornflowerblue;
}

/* targets the label of