{"rowid": 52, "title": "Git Rebasing: An Elfin Workshop Workflow", "contents": "This year Santa\u2019s helpers have been tasked with making a garland. It\u2019s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they\u2019re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn\u2019t. It\u2019s worse; it\u2019s about Git.)\nFor the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they\u2019re able to work independently, but towards the common goal of making a beautiful garland.\nAt first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they\u2019re going to need a system to change the beads out when mistakes are made in the chain.\nThe first common mistake is not looking to see what the latest chain is that\u2019s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It\u2019s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It\u2019s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. They can request a new copy by running the command\ngit pull --rebase=preserve\n(They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they\u2019ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available.\nThe next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I\u2019ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it\u2019s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands:\ngit checkout master\ngit checkout -b 1234_pattern-name\n1234 represents the work order number and pattern-name describes the pattern they\u2019re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland.\nTo combine their work with the main garland, the elves need to make a few decisions. 
If they\u2019re making a single strand, they issue the following commands:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\nTo share their work they publish the new version of the main garland to the workshop spool with the command git push origin master.\nSometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order. \nTo update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command:\ngit reset --merge ORIG_HEAD\nThis works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf\u2019s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally.\nWith these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf\u2019s beads will be restrung on the tip of the main workshop garland.\nSometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf\u2019s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they\u2019re ready to close out their work order.\nOnce their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\ngit push origin master\nGenerally this process works well for the elves. Sometimes, though, when they\u2019re tired or bored or a little drunk on festive cheer, they realise there\u2019s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options.\nLet\u2019s pretend the elf has a sequence of five beads she\u2019s been working on. 
The work order says the pattern should be red-blue-red-blue-red.\n\nIf the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5.\n\nIf she\u2019s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5.\n\nIf one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence.\n\nUsing a tool that\u2019s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available.\nStart the garland surgery process with the command:\ngit rebase --interactive HEAD~4\nA new screen comes up with the following information (the oldest bead is on top):\npick c2e4877 Red bead\npick 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nThe elf adjusts the list, changing \u201cpick\u201d to \u201cedit\u201d next to the first blue bead:\npick c2e4877 Red bead\nedit 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nShe then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead.\n\nShe needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes.\nOnce she\u2019s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message \u201cRed bead \u2013 added\u201d so she can easily find it.\n\nThe garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue.\nThe new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. She opens up a new program to help resolve the conflict by running git mergetool.\n\nShe knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads.\n\nWith the conflict resolved, the elf saves her changes and quits the mergetool.\nBack at the command line, the elf checks the status of her work using the command git status.\nrebase in progress; onto 4a9cb9d\nYou are currently rebasing branch '2_RBRBR' on '4a9cb9d'.\n (all conflicts fixed: run \"git rebase --continue\")\n\nChanges to be committed:\n (use \"git reset HEAD ...\" to unstage)\n\n modified: beads.txt\n\nUntracked files:\n (use \"git add ...\" to include in what will be committed)\n\n beads.txt.orig\nShe removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands:\ngit add beads.txt\ngit commit --message \"Blue bead -- resolved conflict\"\n\nWith the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. 
Once again, she opens up the visualisation tool and takes a look at the two conflicting files.\n\nShe incorporates the changes from the left and right column to ensure her bead sequence is correct.\n\nOnce the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland:\ngit add beads.txt\ngit commit --message \"Red bead -- resolved conflict\"\nand then runs the final verification steps in the rebase process (git rebase --continue).\n\nThe verification process runs through to the end, and the elf checks her work using the command git log --oneline.\n9269914 Red bead -- resolved conflict\n4916353 Blue bead -- resolved conflict\naef0d5c Red bead -- added\n9b5555e Blue bead\nc2e4877 Red bead\nShe knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct.\nSometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It\u2019s made her more confident at restringing beads when she\u2019s found real mistakes. And she doesn\u2019t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don\u2019t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked.\n\nOur lessons from the workshop:\n\nBy using rebase to update your branches, you avoid merge commits and keep a clean commit history.\nIf you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter, --hard.\nIf you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch.\nIf you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. You will need to include how many commits back in time you want to review.", "year": "2015", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2015-12-07T00:00:00+00:00", "url": "https://24ways.org/2015/git-rebasing/", "topic": "code"} {"rowid": 54, "title": "Putting My Patterns through Their Paces", "contents": "Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.\nThe thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.\nMeet the Teaser\nHere\u2019s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. 
Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)\nIn its simplest form, a teaser contains a headline, which links to an article:\n\nFairly straightforward, sure. But it\u2019s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types \u2013 some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.\n\n Nearly every element visible on this page is built out of our generic \u201cteaser\u201d pattern.\n \nBut the teaser variation I\u2019d like to call out is the one that appears on The Toast\u2019s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline.\n\n The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.\n \nAnd this is, as it happens, the teaser variation that gave me pause. Back in the old days \u2013 you know, like six months ago \u2013 I probably would\u2019ve marked this module up to match the design. In other words, I would\u2019ve looked at the module\u2019s visual hierarchy (metadata up top, headline and content below) and written the following HTML:\n
<div class=\"teaser\">\n <p class=\"article-byline\">By <a href=\"#\">Author Name</a></p>\n <a class=\"comment-count\" href=\"#\">126 comments</a>\n <h1 class=\"article-title\">\n <a href=\"#\">Article Title</a>\n </h1>\n <p class=\"teaser-excerpt\">\n Lorem ipsum dolor sit amet, consectetur\u2026\n </p>\n</div>
\nBut then I caught myself, and realized this wasn\u2019t the best approach.\nMoving Beyond Layout\nSince I\u2019ve started working responsively, there\u2019s a question I work into every step of my design process. Whether I\u2019m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:\n\nWhat if someone doesn\u2019t browse the web like I do?\n\n\u2026Okay, that doesn\u2019t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it\u2019s been invaluable to so many aspects of my practice. If I\u2019m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I\u2019m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It\u2019s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.\nAnd that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it\u2019d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they\u2019re associated.\nThat\u2019s why I find it\u2019s helpful to begin designing a pattern\u2019s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I\u2019m trying to support. In other words, if someone\u2019s encountering my design without the CSS I\u2019ve written, what should their experience be?\nSo I took a step back, and came up with a different approach:\n
<div class=\"teaser\">\n <h1 class=\"article-title\">\n <a href=\"#\">Article Title</a>\n </h1>\n <p class=\"article-byline\">By <a href=\"#\">Author Name</a></p>\n <p class=\"teaser-excerpt\">\n Lorem ipsum dolor sit amet, consectetur\u2026\n <a class=\"comment-count\" href=\"#\">126 comments</a>\n </p>\n</div>
\nMuch, much better. This felt like a better match for the content I was designing: the headline \u2013 easily the most important element \u2013 was at the top, followed by the author\u2019s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that\u2019s why it\u2019s at the very end of the excerpt, the last element within our teaser. And with some light styling, we\u2019ve got a respectable-looking hierarchy in place:\n\nYeah, you\u2019re right \u2013 it\u2019s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we\u2019ll bolster the markup with an extra element around our title and byline:\n
<div class=\"teaser\">\n <div class=\"teaser-hed\">\n <h1 class=\"article-title\"><a href=\"#\">Article Title</a></h1>\n <p class=\"article-byline\">By <a href=\"#\">Author Name</a></p>\n </div>\n \u2026\n</div>
\nWith that in place, we can use flexbox to tweak our layout, like so:\n.teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\nflex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.\n\nGetting closer! But as great as flexbox is, it doesn\u2019t do anything for elements outside our container, like our little comment count, which is, as you\u2019ve probably noticed, still stranded at the very bottom of our teaser.\nFlexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser\u2019s using a sufficiently modern version of flexbox. Here\u2019s the one we used:\nvar doc = document.body || document.documentElement;\nvar style = doc.style;\n\nif ( style.webkitFlexWrap == '' ||\n style.msFlexWrap == '' ||\n style.flexWrap == '' ) {\n doc.className += \" supports-flex\";\n}\nEagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser\u2019s design:\n.supports-flex .teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\n.supports-flex .teaser .comment-count {\n position: absolute;\n right: 0;\n top: 1.1em;\n}\nIf the supports-flex class is present, we can apply our flexbox layout to the title area, sure \u2013 but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don\u2019t meet our threshold for our advanced styles are left with an attractive design that matches our HTML\u2019s content hierarchy; but the ones that pass our test receive the finished, final design.\n\nAnd with that, our teaser\u2019s complete.\nDiving Into Device-Agnostic Design\nThis is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I\u2019d recommend Heydon Pickering\u2019s \u201cFlexbox Grid Finesse\u201d, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you\u2019d be right! In fact, it\u2019s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I\u2019ve found that thinking about my design as existing in broad experience tiers \u2013 in layers \u2013 is one of the best ways of designing for the modern web. And what\u2019s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.\nOpen video\n \n Even a simple search form can be conditionally enhanced, given a little layered thinking.\n \nThis more layered approach to interface design isn\u2019t a new one, mind you: it\u2019s been championed by everyone from Filament Group to the BBC. 
And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I\u2019ve found to practice responsive design. As Trent Walton once wrote,\n\nLike cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web\u2019s inherent variability.\n\nWe have a weird job, working on the web. We\u2019re designing for the latest mobile devices, sure, but we\u2019re increasingly aware that our definition of \u201csmartphone\u201d is much too narrow. Browsers have started appearing on our wrists and in our cars\u2019 dashboards, but much of the world\u2019s mobile data flows over sub-3G networks. After all, the web\u2019s evolution has never been charted along a straight line: it\u2019s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don\u2019t yet know about, a more device-agnostic, more layered design process can better prepare our patterns \u2013 and ourselves \u2013 for the future.\n(It won\u2019t help you get enough to eat at holiday parties, though.)", "year": "2015", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2015-12-10T00:00:00+00:00", "url": "https://24ways.org/2015/putting-my-patterns-through-their-paces/", "topic": "code"} {"rowid": 71, "title": "Upping Your Web Security Game", "contents": "When I started working in web security fifteen years ago, web development looked very different. The few non-static web applications were built using a waterfall process and shipped quarterly at best, making it possible to add security audits before every release; applications were deployed exclusively on in-house servers, allowing Info Sec to inspect their configuration and setup; and the few third-party components used came from a small set of well-known and trusted providers. And yet, even with these favourable conditions, security teams were quickly overwhelmed and called for developers to build security in.\nIf the web security game was hard to win before, it\u2019s doomed to fail now. In today\u2019s web development, every other page is an application, accepting inputs and private data from users; software is built continuously, designed to eliminate manual gates, including security gates; infrastructure is code, with servers spawned with little effort and even less security scrutiny; and most of the code in a typical application is third-party code, pulled in through open source repositories with rarely a glance at who provided them.\nSecurity teams, when they exist at all, cannot solve this problem. They are vastly outnumbered by developers, and cannot keep up with the application\u2019s pace of change. For us to have a shot at making the web secure, we must bring security into the core. We need to give it no less attention than that we give browser compatibility, mobile design or web page load times. More broadly, we should see security as an aspect of quality, expecting both ourselves and our peers to address it, and taking pride when we do it well.\nWhere To Start?\nEmbracing security isn\u2019t something you do overnight.\nA good place to start is by reviewing things you\u2019re already doing \u2013 and trying to make them more secure. Here are three concrete steps you can take to get going.\nHTTPS\nThreats begin when your system interacts with the outside world, which often means HTTP. 
As is, HTTP is painfully insecure, allowing attackers to easily steal and manipulate data going to or from the server. HTTPS adds a layer of crypto that ensures the parties know who they\u2019re talking to, and that the information exchanged can be neither modified nor sniffed.\nHTTPS is relevant to any site. If your non-HTTPS site holds opinions, reading it may get your users in trouble with employers or governments. If your users believe what you say, attackers can modify your non-HTTPS to take advantage of and abuse that trust. If you want to use new browser technologies like HTTP2 and service workers, your site will need to be HTTPS. And if you want to be discovered on the web, using HTTPS can help your Google ranking. For more details on why I think you should make the switch to HTTPS, check out this post, these slides and this video.\nUsing HTTPS is becoming easier and cheaper. Here are a few free tools that can help:\n\nGet free and easy HTTPS delivery from Cloudflare (be sure to use \u201cFull SSL\u201d!)\nGet a free and automation-friendly certificate from Let\u2019s Encrypt (now in open beta).\nTest how well your HTTPS is set up using SSLTest.\n\nOther vendors and platforms are rapidly simplifying and reducing the cost of their HTTPS offering, as demand and importance grows.\nTwo-Factor Authentication\nThe most sensitive data is usually stored behind a login, and the authentication process is the primary gate in front of this data. Making this process secure has many aspects, including using HTTPS when accepting credentials, having a strong password policy, never storing the password, and more.\nAll of these are important, but the best single step to boost your authentication security is to introduce two-factor authentication (2FA). Adding 2FA usually means prompting users for an additional one-time code when logging in, which they get via SMS or a mobile app (e.g. Google Authenticator). This code is short-lived and is extremely hard for a remote attacker to guess, thus vastly reducing the risk a leaked or easily guessed password presents.\nThe typical algorithm for 2FA is based on an IETF standard called the time-based one-time password (TOTP) algorithm, and it isn\u2019t that hard to implement. Joel Franusic wrote a great post on implementing 2FA; modules like speakeasy make it even easier; and you can swap SMS with Google Authenticator or your own app if you prefer. If you don\u2019t want to build 2FA support yourself, you can purchase two/multi-factor authentication services from vendors such as DuoSecurity, Auth0, Clef, Hypr and others.\nIf implementing 2FA still feels like too much work, you can also choose to offload your entire authentication process to an OAuth-based federated login. Many companies offer this today, including Facebook, Google, Twitter, GitHub and others. These bigger players tend to do authentication well and support 2FA, but you should consider what data you\u2019re sharing with them in the process.\nTracking Known Vulnerabilities\nMost of the code in a modern application was actually written by third parties, and pulled into your app as frameworks, modules and libraries. While using these components makes us much more productive, along with their functionality we also adopt their security flaws. To make things worse, some of these flaws are well-known vulnerabilities, making it easy for hackers to take advantage of them in an attack.\nThis is a real problem and happens on pretty much every platform. Do you develop in Java? 
In 2014, over 6% of Java modules downloaded from Maven had a known severe security issue, the typical Java applications containing 24 flaws. Are you coding in Node.js? Roughly 14% of npm packages carry a known vulnerability, and over 60% of dev shops find vulnerabilities in their code. 30% of Docker Hub containers include a high priority known security hole, and 60% of the top 100,000 websites use client-side libraries with known security gaps.\nTo find known security issues, take stock of your dependencies and match them against language-specific lists such as Snyk\u2019s vulnerability DB for Node.js, rubysec for Ruby, victims-db for Python and OWASP\u2019s Dependency Check for Java. Once found, you can fix most issues by upgrading the component in question, though that may be tricky for indirect dependencies. \nThis process is still way too painful, which means most teams don\u2019t do it. The Snyk team and I are hoping to change that by making it as easy as possible to find, fix and monitor known vulnerabilities in your dependencies. Snyk\u2019s wizard will help you find and fix these issues through guided upgrades and patches, and adding Snyk\u2019s test to your continuous integration and deployment (CI/CD) will help you stay secure as your code evolves.\nNote that newly disclosed vulnerabilities usually impact old code \u2013 the one you\u2019re running in production. This means you have to stay alert when new vulnerabilities are disclosed, so you can fix them before attackers can exploit them. You can do so by subscribing to vulnerability lists like US-CERT, OSVDB and NVD. Snyk\u2019s monitor will proactively let you know about new disclosures relevant to your code, but only for Node.js for now \u2013 you can register to get updated when we expand.\nSecuring Yourself\nIn addition to making your application secure, you should make the contributors to that application secure \u2013 including you. Earlier this year we\u2019ve seen attackers target mobile app developers with a malicious Xcode. The real target, however, wasn\u2019t these developers, but rather the users of the apps they create. That you create. Securing your own work environment is a key part of keeping your apps secure, and your users from being compromised.\nThere\u2019s no single step that will make you fully secure, but here are a few steps that can make a big impact:\n\nUse 2FA on all the services related to the application, notably source control (e.g. GitHub), cloud platform (e.g. AWS), CI/CD, CDN, DNS provider and domain registrar. If an attacker compromises any one of those, they could modify or replace your entire application. I\u2019d recommend using 2FA on all your personal services too.\nUse a password manager (e.g. 1Password, LastPass) to ensure you have a separate and complex password for each service. Some of these services will get hacked, and passwords will leak. When that happens, don\u2019t let the attackers access your other systems too.\nSecure your workstation. Be careful what you download, lock your screen when you walk away, change default passwords on services you install, run antivirus software, etc. Malware on your machine can translate to malware in your applications.\nBe very wary of phishing. Smart attackers use \u2018spear phishing\u2019 techniques to gain access to specific systems, and can trick even security savvy users. There are even phishing scams targeting users with 2FA. 
Be alert to phishy emails.\nDon\u2019t install things through curl | sudo bash, especially if the URL is on GitHub, meaning someone else controls it. Don\u2019t do it on your machines, and definitely don\u2019t do it in your CI/CD systems. Seriously.\n\nStaying secure should be important to you personally, but it\u2019s doubly important when you have privileged access to an application. Such access makes you a way to reach many more users, and therefore a more compelling target for bad actors.\nA Culture of Security\nUsing HTTPS, enabling two-factor authentication and fixing known vulnerabilities are significant steps in building security at your core. As you implement them, remember that these are just a few steps in a longer journey.\nThe end goal is to embrace security as an aspect of quality, and accept we all share the responsibility of keeping ourselves \u2013 and our users \u2013 safe.", "year": "2015", "author": "Guy Podjarny", "author_slug": "guypodjarny", "published": "2015-12-11T00:00:00+00:00", "url": "https://24ways.org/2015/upping-your-web-security-game/", "topic": "code"} {"rowid": 63, "title": "Be Fluid with Your Design Skills: Build Your Own Sites", "contents": "Just five years ago in 2010, when we were all busy trying to surprise and delight, learning CSS3 and trying to get whole websites onto one page, we had a poster on our studio wall. It was entitled \u2018Designers Vs Developers\u2019, an infographic that showed us the differences between the men(!) who created websites. \nDesigners wore skinny jeans and used Macs and developers wore cargo pants and brought their own keyboards to work. We began to learn that designers and developers were not only doing completely different jobs but were completely different people in every way. This opinion was backed up by hundreds of memes, millions of tweets and pages of articles which used words like void and battle and versus.\nThankfully, things move quickly in this industry; the wide world of web design has moved on in the last five years. There are new devices, technologies, tools \u2013 and even a few women. Designers have been helped along by great apps, software, open source projects, conferences, and a community of people who, to my unending pride, love to share their knowledge and their work.\nSo the world has moved on, and if Miley Cyrus, Ruby Rose and Eliot Sumner are identifying as gender fluid (an identity which refers to a gender which varies over time or is a combination of identities), then I would like to come out as discipline fluid! \nOK, I will probably never identify as a developer, but I will identify as fluid! How can we be anything else in an industry that moves so quickly? That\u2019s how we should think of our skills, our interests and even our job titles. After all, Steve Jobs told us that \u201cDesign is not just what it looks like and feels like. Design is how it works.\u201d Sorry skinny-jean-wearing designers \u2013 this means we\u2019re all designing something together. And it\u2019s not just about knowing the right words to use: you have to know how it feels. How it feels when you make something work, when you fix that bug, when you make it work on IE.\nLike anything in life, things run smoothly when you make the effort to share experiences, empathise and deeply understand the needs of others. How can designers do that if they\u2019ve never built their own site? 
I\u2019m not talking the big stuff, I\u2019m talking about your portfolio site, your mate\u2019s business website, a website for that great idea you\u2019ve had. I\u2019m talking about doing it yourself to get a unique insight into how it feels.\nWe all know that designers and developers alike love an <ol>, so here it is.
\nTen reasons designers should be fluid with their skills and build their own sites\n1. It\u2019s never been easier\nNow here\u2019s where the definition of \u2018build\u2019 is going to get a bit loose and people are going to get angry, but when I say it\u2019s never been easier I mean because of the existence of apps and software like WordPress, Squarespace, Tumblr, et al. It\u2019s easy to make something and get it out there into the world, and these are all gateway drugs to hard coding!\n2. You\u2019ll understand how it feels\nHow it feels to be so proud that something actually works that you momentarily don\u2019t notice if the kerning is off or the padding is inconsistent. How it feels to see your site appear when you\u2019ve redirected a URL. How it feels when you just can\u2019t work out where that one extra space is in a line of PHP that has killed your whole site.\n3. It makes you a designer\nNot a better designer, it makes you a designer when you are designing how things look and how they work.\n4. You learn about movement\nPhotoshop and Sketch just don\u2019t cut it yet. Until you see your site in a browser or your app on a phone, it\u2019s hard to imagine how it moves. Building your own sites shows you that it\u2019s not just about how the content looks on the screen, but how it moves, interacts and feels.\n5. You make techie friends\nAll the tutorials and forums in the world can\u2019t beat your network of techie friends. Since I started working in web design I have worked with, sat next to, and co-created with some of the greatest developers. Developers who\u2019ve shared their knowledge, encouraged me to build things, patiently explained HTML, CSS, servers, divs, web fonts, iOS development. There has been no void, no versus, very few battles; just people who share an interest and love of making things.\n6. You will own domain names\nWhen something is paid for, online and searchable, then it\u2019s real and you\u2019ve got to put the work in. Buying domains has taught me how to stop procrastinating, but also about DNS, FTP, email, and how servers work.\n7. People will ask you to do things\nLearning about code and development opens a whole new world of design. When you put your own personal websites and projects out there people ask you to do more things. OK, so sometimes those things are \u201cMake me a website for free\u201d, but more often it\u2019s cool things like \u201cCome and speak at my conference\u201d, \u201cWrite an article for my magazine\u201d and \u201cCollaborate with me.\u201d\n8. The young people are coming!\nThey love typography, they love print, they love layout, but they\u2019ve known how to put a website together since they started their first blog aged five and they show me clever apps they\u2019ve knocked together over the weekend! They\u2019re new, they\u2019re fluid, and they\u2019re better than us!\n9. Your portfolio is your portfolio\nOK, it\u2019s an obvious one, but as designers our work is our CV, our legacy! We need to show our skill, our attention to detail and our creativity in the way we showcase our work. Building your portfolio is the best way to start building your own websites. (And please be that designer who\u2019s bothered to work out how to change the Squarespace favicon!)\n10. It keeps you fluid!\nBuilding your own websites is tough. 
You\u2019ll never be happy with it, you\u2019ll constantly be updating it to keep up with technology and fashion, and by the time you\u2019ve finished it you\u2019ll want to start all over again. Perfect for forcing you to stay up-to-date with what\u2019s going on in the industry.\n
", "year": "2015", "author": "Ros Horner", "author_slug": "roshorner", "published": "2015-12-12T00:00:00+00:00", "url": "https://24ways.org/2015/be-fluid-with-your-design-skills-build-your-own-sites/", "topic": "code"} {"rowid": 68, "title": "Grid, Flexbox, Box Alignment: Our New System for Layout", "contents": "Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. However, in 2016 we should be seeing it in a new improved form ready for our use in browsers.\nGrid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work.\nAnother new layout method has emerged over the past few years in a more public and perhaps more painful way. Shipped prefixed in browsers, The flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported. \nOwing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks. \nIf there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years time we\u2019ll be easily able to date a website to circa 2015 by seeing
<div class=\"row\"> or <div class=\"col-md-6\">
in the markup.\nIn this article, I\u2019m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them.\nTo see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here. \nRelationship\nItems only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It\u2019s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout.\nAt a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox:\nSee the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen.\n\nAnd in grid layout (requires a CSS grid-supporting browser):\nSee the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen.\n\nAlignment\nFull-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you\u2019d like to have under your tree this Christmas, then this is the box you\u2019ll want to unwrap first.\nThe box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods. That includes grid layout, but also other layout methods. Once implemented in browsers, this specification will give us true vertical centring of all the things.\nOur examples above achieved full-height columns because the default value of align-items is stretch. The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property.\nThe examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. 
The other three images are aligned using different values of align-self.\nTake a look at an example in flexbox:\nSee the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen.\n\nAnd also in grid layout (requires a CSS grid-supporting browser):\nSee the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen.\n\nThe alignment properties used with CSS grid layout.\nFluid grids\nA cornerstone of responsive design is the concept of fluid grids.\n\n\u201c[\u2026]every aspect of the grid\u2014and the elements laid upon it\u2014can be expressed as a proportion relative to its container.\u201d\n\u2014Ethan Marcotte, \u201cFluid Grids\u201d\n\nThe method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element.\nh1 {\n margin-left: 14.575%; /*\u00a0144px / 988px = 0.14575\u00a0*/\n width: 70.85%; /*\u00a0700px / 988px = 0.7085\u00a0*/\n}\nIn more recent years, we\u2019ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple.\nThe most basic of flexbox demos shows this fluidity in action. The justify-content property \u2013 another property defined in the box alignment module \u2013 can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion.\nIn this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between.\nSee the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen.\n\nFor true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion.\nThe flexbox flex property is a shorthand for:\n\nflex-grow\nflex-shrink\nflex-basis\n\nThe flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value.\n\nflex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis.\nflex: 0 0 200px: a box that will be 200px and cannot grow or shrink.\nflex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller.\n\nIn this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis.\nSee the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.\n\nWhat I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out.\nIf I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. You can see what happens in this demo:\nSee the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.\n\nOnce you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. 
Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes.\nIn this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr.\ngrid-template-columns: 1fr 2fr 2fr 2fr;\nSee the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen.\n\nThe four-track grid.\nSeparation of concerns\nMy younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup.\nBrowsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another.\nHere is a snippet from the Bootstrap grid examples \u2013 two columns with two nested columns:\n
<div class=\"row\">\n <div class=\"col-md-8\">\n .col-md-8\n <div class=\"row\">\n <div class=\"col-md-6\">\n .col-md-6\n </div>\n <div class=\"col-md-6\">\n .col-md-6\n </div>\n </div>\n </div>\n <div class=\"col-md-4\">\n .col-md-4\n </div>\n</div>\nNot a million miles away from something I might have written in 1999.\n<table>\n <tr>\n <td>\n .col-md-8\n <table>\n <tr>\n <td>.col-md-6</td>\n <td>.col-md-6</td>\n </tr>\n </table>\n </td>\n <td>\n .col-md-4\n </td>\n </tr>\n</table>
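\nBy contrast, here is a rough sketch of the same two-column arrangement in grid layout \u2013 the wrapper class name and track sizes are illustrative, not part of any framework:\n.wrapper {\n display: grid;\n grid-template-columns: 2fr 1fr;\n}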
\nGrid and flexbox layouts do not need to be described in markup. The layout description happens entirely in the CSS, meaning that elements can be moved around from within the presentation layer.\nFlexbox gives us the ability to reverse the flow of elements, but also to set the order of elements with the order property. This is demonstrated here, where Widget the cat is in position 1 in the source, but I have used the order property to display him after the things that are currently unimpressive to him.\nSee the Pen Flexbox: order by rachelandrew (@rachelandrew) on CodePen.\n\nGrid layout takes this a step further. Where flexbox lets us set the order of items in a single dimension, grid layout gives us the ability to position things in two dimensions: both rows and columns. Defined in the CSS, this positioning can be changed at any breakpoint without needing additional markup. Compare the source order with the display order in this example (requires a CSS grid-supporting browser):\nSee the Pen Grid positioning in two dimensions by rachelandrew (@rachelandrew) on CodePen.\n\nLaying out our items in two dimensions using grid layout.\nAs these demos show, a straightforward way to decide if you should use grid layout or flexbox is whether you want to position items in one dimension or two. If two, you want grid layout.\nA note on accessibility and reordering\nThe issues arising from this powerful ability to change the way items are ordered visually from how they appear in the source have been the subject of much discussion. The current flexbox editor\u2019s draft states\n\n\u201cAuthors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.\u201d\n\u2014CSS Flexible Box Layout Module Level 1, Editor\u2019s Draft (3 December 2015)\n\nThis is to ensure that non-visual user agents (a screen reader, for example) can rely on the document source order as being correct. Take care when reordering that you do so from the basis of a sound document that makes sense in terms of source order. Avoid using visual order to convey meaning.\nAutomatic content placement with rules\nHaving control over the order of items, or placing items on a predefined grid, is nice. However, we can often do that already with one method or another and we have frameworks and tools to help us. Tools such as Susy mean we can even get away from stuffing our markup full of grid classes. However, our new layout methods give us some interesting new possibilities.\nSomething that is useful to be able to do when dealing with content coming out of a CMS or being pulled from some other source, is to define a bunch of rules and then say, \u201cDisplay this content, using these rules.\u201d\nAs an example of this, I will leave you with a Christmas poem displayed in a document alongside Widget the cat and some of the decorations that are bringing him no Christmas cheer whatsoever.\nThe poem is displayed first in the source as a set of paragraphs. I\u2019ve added a class identifying each of the four paragraphs but they are displayed in the source as one text. Below that are all my images, some landscape and some portrait; I\u2019ve added a class of landscape to the landscape ones.\nThe mobile-first grid is a single column and I use line-based placement to explicitly position my poem paragraphs. 
The grid layout auto-placement rules then take over and place the images into the empty cells left in the grid.\nAt wider screen widths, I declare a four-track grid, and position my poem around the grid, keeping it in a readable order.\nI also add rules to my landscape class, stating that these items should span two tracks. Once again the grid layout auto-placement rules position the rest of my images without my needing to position them. You will see that grid layout takes items out of source order to fill gaps in the grid. It does this because I have set the property grid-auto-flow to dense. The default is sparse meaning that grid will not attempt this backfilling behaviour.\nTake a look and play around with the full demo (requires a CSS grid layout-supporting browser):\nSee the Pen Grid auto-flow with rules by rachelandrew (@rachelandrew) on CodePen.\n\nThe final automatic placement example.\nMy wish for 2016\nI really hope that in 2016, we will see CSS grid layout finally emerge from behind browser flags, so that we can start to use these features in production \u2014 that we can start to move away from using the wrong tools for the job.\nHowever, I also hope that we\u2019ll see developers fully embracing these tools as the new system that they are. I want to see people exploring the possibilities they give us, rather than trying to get them to behave like the grid systems of 2015. As you discover these new modules, treat them as the new paradigm that they are, get creative with them. And, as you find the edges of possibility with them, take that feedback to the CSS Working Group. Help improve the layout systems that will shape the look of the future web.\nSome further reading\n\nI maintain a site of grid layout examples and resources at Grid by Example.\nThe three CSS specifications I\u2019ve discussed can be found as editor\u2019s drafts: CSS grid, flexbox, box alignment.\nI wrote about the last three years of my interest in CSS grid layout, which gives something of a history of the specification.\nMore examples of box alignment and grid layout.\nMy presentation at Fronteers earlier this year, in which I explain more about these concepts.", "year": "2015", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2015-12-15T00:00:00+00:00", "url": "https://24ways.org/2015/grid-flexbox-box-alignment-our-new-system-for-layout/", "topic": "code"} {"rowid": 65, "title": "The Accessibility Mindset", "contents": "Accessibility is often characterized as additional work, hard to learn and only affecting a small number of people. Those myths have no logical foundation and often stem from outdated information or misconceptions.\nIndeed, it is an additional skill set to acquire, quite like learning new JavaScript frameworks, CSS layout techniques or new HTML elements. But it isn\u2019t particularly harder to learn than those other skills.\nA World Health Organization (WHO) report on disabilities states that,\n\n[i]ncluding children, over a billion people (or about 15% of the world\u2019s population) were estimated to be living with disability.\n\nBeing disabled is not as unusual as one might think. Due to chronic health conditions and older people having a higher risk of disability, we are also currently paving the cowpath to an internet that we can still use in the future.\nAccessibility has a very close relationship with usability, and advancements in accessibility often yield improvements in the usability of a website. 
Websites are also more adaptable to users\u2019 needs when they are built in an accessible fashion.\nBeyond the bare minimum\nIn the time of table layouts, web developers could create code that passed validation rules but didn\u2019t adhere to the underlying semantic HTML model. We later developed best practices, like using lists for navigation, and with HTML5 we started to wrap those lists in nav elements. Working with accessibility standards is similar. The Web Content Accessibility Guidelines (WCAG) 2.0 can inform your decision to make websites accessible and can be used to test that you met the success criteria. What it can\u2019t do is measure how well you met them. \nW3C developed a long list of techniques that can be used to make your website accessible, but you might find yourself in a situation where you need to adapt those techniques to be the most usable solution for your particular problem.\nThe checkbox below is implemented in an accessible way: The input element has an id and the label associated with the checkbox refers to the input using the for attribute. The hover area is shown with a yellow background and a black dotted border:\nOpen video\n \nThe label is clickable and the checkbox has an accessible description. Job done, right? Not really. Take a look at the space between the label and the checkbox:\nOpen video\n \nThe gutter is created using a right margin which pushes the label to the right. Users would certainly expect this space to be clickable as well. The simple solution is to wrap the label around the checkbox and the text:\nOpen video\n \nYou can also set the label to display:block; to further increase the clickable area:\nOpen video\n \nAnd while we\u2019re at it, users might expect the whole box to be clickable anyway. Let\u2019s apply the CSS that was on a wrapping div element to the label directly:\nOpen video\n \nThe result enhances the usability of your form element tremendously for people with lower dexterity, using a voice mouse, or using touch interfaces. And we only used basic HTML and CSS techniques; no JavaScript was added and not one extra line of CSS.\n
\nButton Example\nThe button below looks like a typical edit button: a pencil icon on a real button element. But if you are using a screen reader or a braille keyboard, the button is just read as \u201cbutton\u201d without any indication of what this button is for.\nOpen video\n A screen reader announcing a button. Contains audio.\nThe code snippet shows why the button is not properly announced:\n\nAn icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users:\nOpen video\n A screen reader announcing a button with a title.\nHowever, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn\u2019t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts:\nThe mobile GitHub view with and without external fonts.\nEven if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn\u2019t load.\n\nProviding always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons.\n\nThis also reads just fine in screen readers:\nOpen video\n A screen reader announcing the revised button.\nClever usability enhancements don\u2019t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the \u201ccaptioned videos\u201d or \u201caudio description\u201d categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don\u2019t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study.\nMore information\nW3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out \u201cHow People with Disabilities Use the Web\u201d, there are \u201cTips for Getting Started\u201d for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way.\nConclusion\nYou can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don\u2019t notice when those fundamental aspects of good website design and development are done right. 
But they\u2019ll always know when they are implemented poorly.\nIf you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn\u2019t know they\u2019d need it:\nOpen video\n \nIn this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks \u201cWhat did she say?\u201d the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.", "year": "2015", "author": "Eric Eggert", "author_slug": "ericeggert", "published": "2015-12-17T00:00:00+00:00", "url": "https://24ways.org/2015/the-accessibility-mindset/", "topic": "code"} {"rowid": 64, "title": "Being Responsive to the Small Things", "contents": "It\u2019s that time of the year again to trim the tree with decorations. Or maybe a DOM tree?\nAny web page is made of HTML elements that lay themselves out in a tree structure. We start at the top and then have multiple branches with branches that branch out from there. \n\nTo decorate our tree, we use CSS to specify which branches should receive the tinsel we wish to adorn upon it. It\u2019s all so lovely.\nIn years past, this was rather straightforward. But these days, our trees need to be versatile. They need to be responsive!\nResponsive web design is pretty wonderful, isn\u2019t it? Based on our viewport, we can decide how elements on the page should change their appearance to accommodate various constraints using media queries.\nClearleft have a delightfully clean and responsive site\nAlas, it\u2019s not all sunshine, lollipops, and rainbows. \nWith complex layouts, we may have design chunks \u2014 let\u2019s call them components \u2014 that appear in different contexts. Each context may end up providing its own constraints on the design, both in its default state and in its possibly various responsive states.\n\nMedia queries, however, limit us to the context of the entire viewport, not individual containers on the page. For every container our component lives in, we need to specify how to rearrange things in that context. The more complex the system, the more contexts we need to write code for.\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\nEach new component and each new breakpoint just makes the entire system that much more difficult to maintain. \n@media (min-width: 600px) {\n .features > .component { }\n .grid > .component {}\n}\n\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\n\n@media (min-width: 1024px) {\n .features > .component { }\n}\nEnter container queries\nContainer queries, also known as element queries, allow you to specify conditional CSS based on the width (or maybe height) of the container that an element lives in. In doing so, you no longer have to consider the entire page and the interplay of all the elements within. \nWith container queries, you\u2019ll be able to consider the breakpoints of just the component you\u2019re designing. As a result, you end up specifying less code and the components you develop have fewer dependencies on the things around them. (I guess that makes your components more independent.)\nAwesome, right?\nThere\u2019s only one catch.\nBrowsers can\u2019t do container queries. 
There\u2019s not even an official specification for them yet. The Responsive Issues (n\u00e9e Images) Community Group is looking into solving how such a thing would actually work. \nSee, container queries are tricky from an implementation perspective. The contents of a container can affect the size of the container. Because of this, you end up with troublesome circular references. \nFor example, if the width of the container is under 500px then the width of the child element should be 600px, and if the width of the container is over 500px then the width of the child element should be 400px. \nCan you see the dilemma? When the container is under 500px, the child element resizes to 600px and suddenly the container is 600px. If the container is 600px, then the child element is 400px! And so on, forever. This is bad.\nI guess we should all just go home and sulk about how we just got a pile of socks when we really wanted the Millennium Falcon. \nOur saviour this Christmas: JavaScript\nThe three wise men \u2014 Tim Berners-Lee, H\u00e5kon Wium Lie, and Brendan Eich \u2014 brought us the gifts of HTML, CSS, and JavaScript. \nTo date, there are a handful of open source solutions to fill the gap until a browser implementation sees the light of day.\n\nElementary by Scott Jehl\nElementQuery by Tyson Matanich\nEQ.js by Sam Richards\nCSS Element Queries from Marcj\n\nUsing any of these can sometimes feel like your toy broke within ten minutes of unwrapping it.\nEach take their own approach on how to specify the query conditions. For example, Elementary, the smallest of the group, only supports min-width declarations made in a :before selector.\n.mod-foo:before {\n content: \u201c300 410 500\u201d;\n}\nThe script loops through all the elements that you specify, reading the content property and then setting an attribute value on the HTML element, allowing you to use CSS to style that condition. \n.mod-foo[data-minwidth~=\"300\"] {\n background: blue;\n}\nTo get the script to run, you\u2019ll need to set up event handlers for when the page loads and for when it resizes. \nwindow.addEventListener( \"load\", window.elementary, false );\nwindow.addEventListener( \"resize\", window.elementary, false );\nThis works okay for static sites but breaks down on pages where elements can expand or contract, or where new content is dynamically inserted.\nIn the case of EQ.js, the implementation requires the creation of the breakpoints in the HTML. That means that you have implementation details in HTML, JavaScript, and CSS. (Although, with the JavaScript, once it\u2019s in the build system, it shouldn\u2019t ever be much of a concern unless you\u2019re tracking down a bug.)\nAnother problem you may run into is the use of content delivery networks (CDNs) or cross-origin security issues. The ElementQuery and CSS Element Queries libraries need to be able to read the CSS file. If you are unable to set up proper cross-origin resource sharing (CORS) headers, these libraries won\u2019t help.\nAt Shopify, for example, we had all of these problems. The admin that store owners use is very dynamic and the CSS and JavaScript were being loaded from a CDN that prevented the JavaScript from reading the CSS. 
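(As an aside, the fix for that last problem is a cross-origin response header on the stylesheet itself, plus a crossorigin attribute on the link element that loads it. A minimal sketch, with example host names:\n<link rel=\"stylesheet\" href=\"https://cdn.example.com/admin.css\" crossorigin=\"anonymous\">\nAccess-Control-Allow-Origin: https://admin.example.com\nWithout both pieces in place, the browser will refuse to let JavaScript read the rules inside a cross-origin stylesheet.)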
\nTo go responsive, the team built their own solution \u2014 one similar to the other scripts above, in that it loops through elements and adds or removes classes (instead of data attributes) based on minimum or maximum width.\nThe caveat to this particular approach is that the declaration of breakpoints had to be done in JavaScript. \n elements = [\n { \u2018module\u2019: \u201c.carousel\u201d, \u201cclassName\u201d:\u2019alpha\u2019, minWidth: 768, maxWidth: 1024 },\n { \u2018module\u2019: \u201c.button\u201d, \u201cclassName\u201d:\u2019beta\u2019, minWidth: 768, maxWidth: 1024 } ,\n { \u2018module\u2019: \u201c.grid\u201d, \u201cclassName\u201d:\u2019cappa\u2019, minWidth: 768, maxWidth: 1024 }\n ]\nWith that done, the script then had to be set to run during various events such as inserting new content via Ajax calls. This sometimes reveals itself in flashes of unstyled breakpoints (FOUB). An unfortunate side effect but one largely imperceptible.\nUsing this approach, however, allowed the Shopify team to make the admin responsive really quickly. Each member of the team was able to tackle the responsive story for a particular component without much concern for how all the other components would react. \n\nEach element responds to its own breakpoint that would amount to dozens of breakpoints using traditional breakpoints. This approach allows for a truly fluid and adaptive interface for all screens.\nChristmas is over\nI wish I were the bearer of greater tidings and cheer. It\u2019s not all bad, though. We may one day see browsers implement container queries natively. At which point, we shall all rejoice!", "year": "2015", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2015-12-19T00:00:00+00:00", "url": "https://24ways.org/2015/being-responsive-to-the-small-things/", "topic": "code"} {"rowid": 55, "title": "How Tabs Should Work", "contents": "Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They\u2019ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented.\nSo this post is my definition of how a tabbing system should work, and one approach of implementing that.\nBut\u2026 tabs are easy, right?\nI\u2019ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system:\nvar tabs = $('.tab').click(function () {\n tabs.hide().filter(this.hash).show();\n}).map(function () {\n return $(this.hash)[0];\n});\n\n$('.tab:first').click();\nSimple, right? Nearly fits in a tweet (ignoring the whole jQuery library\u2026). Still, it\u2019s riddled with problems that make it a far from perfect solution.\nRequirements: what makes the perfect tab?\n\nAll content is navigable and available without JavaScript (crawler-compatible and low JS-compatible).\nARIA roles.\nThe tabs are anchor links that:\n\nare clickable\nhave block layout\nhave their href pointing to the id of the panel element\nuse the correct cursor (i.e. 
cursor: pointer).\n\nSince tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open.\nRight-clicking (and Shift-clicking) doesn\u2019t cause the tab to be selected.\nNative browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place).\n\nThe first three points are all to do with the semantics of the markup and how the markup has been styled. I think it\u2019s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work.\nThe last three points are JavaScript problems. Let\u2019s investigate that.\nThe shitmus test\nLike a litmus test, here\u2019s a couple of quick ways you can tell if a tabbing system is poorly implemented:\n\nChange tab, then use the Back button (or keyboard shortcut) and it breaks\nThe tab isn\u2019t a link, so you can\u2019t open it in a new tab\n\nThese two basic things are, to me, the bare minimum that a tabbing system should have.\nWhy is this important?\nThe people who push their so-called native apps on users can\u2019t have more reasons why the web sucks. If something as basic as a tab doesn\u2019t work, obviously there\u2019s more ammo to push a closed native app or platform on your users.\nIf you\u2019re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn\u2019t mean don\u2019t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. :breath:\nURI fragment, absolute URL or query string?\nA URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel. A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content.\nThis decision really depends on the context of your tabbing system. For something like GitHub\u2019s tabs to view a pull request, it makes sense that the full URL changes.\nFor our problem though, I want to solve the issue when the page doesn\u2019t do a full URL update; that is, your regular run-of-the-mill tabbing system.\nI used to be from the school of using the hash to show the correct tab, but I\u2019ve recently been exploring whether the query string can be used. The biggest reason is that multiple hashes don\u2019t work, and comma-separated hash fragments don\u2019t make any sense to control multiple tabs (since it doesn\u2019t actually link to anything).\nFor this article, I\u2019ll keep focused on using a single tabbing system and a hash on the URL to control the tabs.\nMarkup\nI\u2019m going to assume subcontent, so my markup would look like this (yes, this is a cat demo\u2026):\n\n\n
<!-- markup reconstructed: the .tab and .panel classes and the #dizzy id are confirmed by the code that follows; the other ids and link text are illustrative -->\n<ul class=\"tabs\">\n <li><a class=\"tab\" href=\"#dizzy\">Dizzy</a></li>\n <li><a class=\"tab\" href=\"#ninja\">Ninja</a></li>\n <li><a class=\"tab\" href=\"#missy\">Missy</a></li>\n</ul>\n<div class=\"panel\" id=\"dizzy\"><!-- panel content --></div>\n<div class=\"panel\" id=\"ninja\"><!-- panel content --></div>\n<div class=\"panel\" id=\"missy\"><!-- panel content --></div>
\nIt\u2019s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we\u2019ll see once we\u2019re on to writing the code.\nURL-driven tabbing systems\nInstead of making the code responsive to the user\u2019s input, we\u2019re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free.\nWith that in mind, let\u2019s start building up our code. I\u2019ll assume we have the jQuery library, but I\u2019ve also provided the full code working without a library (vanilla, if you will), but it depends on relatively new (polyfillable) tech like classList and dataset (which generally have IE10 and all other browser support).\nNote that I\u2019ll start with the simplest solution, and I\u2019ll refactor the code as I go along, like in places where I keep calling jQuery selectors.\nfunction show(id) {\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n $('.tab').removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n $('.panel').hide().filter(id).show();\n}\n\n$(window).on('hashchange', function () {\n show(location.hash);\n});\n\n// initialise by showing the first panel\nshow('#dizzy');\nThis works pretty well for such little code. Notice that we don\u2019t have any click handlers for the user and the Back button works right out of the box.\nHowever, there\u2019s a number of problems we need to fix:\n\nThe initialised tab is hard-coded to the first panel, rather than what\u2019s on the URL.\nIf there\u2019s no hash on the URL, all the panels are hidden (and thus broken).\nIf you scroll to the bottom of the example, you\u2019ll find a \u201ctop\u201d link; clicking that will break our tabbing system.\nI\u2019ve purposely made the page long, so that when you click on a tab, you\u2019ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying.\n\nFrom our criteria at the start of this post, we\u2019ve already solved items 4 and 5. Not a terrible start. Let\u2019s solve items 1 through 3 next.\nUsing the URL to initialise correctly and protect from breakage\nInstead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it\u2019s available.\nThe problem is: what if the hash on the URL isn\u2019t actually for a tab?\nThe solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won\u2019t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. 
The result of $('.tab') should be cached.\nSo now the code will collect all the tabs, then find the related panels from those tabs, and we\u2019ll use that list to double the values we give the show function (during initialisation, for instance).\n// collect all the tabs\nvar tabs = $('.tab');\n\n// get an array of the panel ids (from the anchor hash)\nvar targets = tabs.map(function () {\n return this.hash;\n}).get();\n\n// use those ids to get a jQuery collection of panels\nvar panels = $(targets.join(','));\n\nfunction show(id) {\n // if no value was given, let's take the first panel\n if (!id) {\n id = targets[0];\n }\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n tabs.removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n panels.hide().filter(id).show();\n}\n\n$(window).on('hashchange', function () {\n var hash = location.hash;\n if (targets.indexOf(hash) !== -1) {\n show(hash);\n }\n});\n\n// initialise\nshow(targets.indexOf(location.hash) !== -1 ? location.hash : '');\nThe core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab.\nThe second breakage we saw in the original demo was that clicking the \u201ctop\u201d link would break our tabs. This was due to the hashchange event firing and the code didn\u2019t validate the hash that was passed. Now this happens, the panels don\u2019t break.\nSo far we\u2019ve got a tabbing system that:\n\nWorks without JavaScript.\nSupports right-click and Shift-click (and doesn\u2019t select in these cases).\nLoads the correct panel if you start with a hash.\nSupports native browser navigation.\nSupports the keyboard.\n\nThe only annoying problem we have now is that the page jumps when a tab is selected. That\u2019s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it\u2019s all for a good cause.\nRemoving the jump to tab\nYou\u2019d be forgiven for thinking you just need to hook a click handler and return false. It\u2019s what I started with. Only that\u2019s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support.\nThere may be another way to solve this, but what follows is the way I found \u2013 and it works. It\u2019s just a bit\u2026 hairy, as I said.\nWe\u2019re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won\u2019t jump the page.\nThe change involves the following:\n\nAdd a click handle that removes the id from the target panel, and cache this in a target variable that we\u2019ll use later in hashchange (see point 4).\nIn the same click handler, set the location.hash to the current link\u2019s hash. 
This is important because it forces a hashchange event regardless of whether the URL actually changed, which prevents the tabs breaking (try it yourself by removing this line).\nFor each panel, put a backup copy of the id attribute in a data property (I\u2019ve called it old-id).\nWhen the hashchange event fires, if we have a target value, let\u2019s put the id back on the panel.\n\nThese changes result in this final code:\n/*global $*/\n\n// a temp value to cache *what* we're about to show\nvar target = null;\n\n// collect all the tabs\nvar tabs = $('.tab').on('click', function () {\n target = $(this.hash).removeAttr('id');\n\n // if the URL isn't going to change, then hashchange\n // event doesn't fire, so we trigger the update manually\n if (location.hash === this.hash) {\n // but this has to happen after the DOM update has\n // completed, so we wrap it in a setTimeout 0\n setTimeout(update, 0);\n }\n});\n\n// get an array of the panel ids (from the anchor hash)\nvar targets = tabs.map(function () {\n return this.hash;\n}).get();\n\n// use those ids to get a jQuery collection of panels\nvar panels = $(targets.join(',')).each(function () {\n // keep a copy of what the original el.id was\n $(this).data('old-id', this.id);\n});\n\nfunction update() {\n if (target) {\n target.attr('id', target.data('old-id'));\n target = null;\n }\n\n var hash = window.location.hash;\n if (targets.indexOf(hash) !== -1) {\n show(hash);\n }\n}\n\nfunction show(id) {\n // if no value was given, let's take the first panel\n if (!id) {\n id = targets[0];\n }\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n tabs.removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n panels.hide().filter(id).show();\n}\n\n$(window).on('hashchange', update);\n\n// initialise\nif (targets.indexOf(window.location.hash) !== -1) {\n update();\n} else {\n show();\n}\nThis version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add.\nARIA roles\nThis article on ARIA tabs made it very easy to get the tabbing system working as I wanted.\nThe tasks were simple:\n\nAdd aria-role set to tab for the tabs, and tabpanel for the panels.\nSet aria-controls on the tabs to point to their related panel (by id).\nI use JavaScript to add tabindex=0 to all the tab elements.\nWhen I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false.\nWhen I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false.\n\nAnd that\u2019s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible.\nCheck out the final version (and the non-jQuery version as promised).\nIn conclusion\nThere\u2019s a lot of tab implementations out there, but there\u2019s an equal amount that break the browsing paradigm and the simple linkability of content. 
Clearly there\u2019s a special hell for those tab systems that don\u2019t even use links, but I think it\u2019s clear that even in something that\u2019s relatively simple, it\u2019s the small details that make or break the user experience.\nObviously there are corners I\u2019ve not explored, like when there\u2019s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that\u2019s for another year!", "year": "2015", "author": "Remy Sharp", "author_slug": "remysharp", "published": "2015-12-22T00:00:00+00:00", "url": "https://24ways.org/2015/how-tabs-should-work/", "topic": "code"} {"rowid": 295, "title": "Internet of Stranger Things", "contents": "This year I\u2019ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I\u2019m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension1.\nWhat you\u2019ll see\nYou can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact why not try it now; check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it, hit the \u201cSend a message\u201d button and wait your turn.)\n\n\n\n\nIt\u2019s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I\u2019m using WebSockets to communicate in real time between the browser, server and Raspberry Pi.\nWhat you\u2019ll need\n\nRaspberry Pi any of the following models: Zero (will need straight male header pins soldered2 and Micro USB OTG adaptor), A+, B+, 2, or 3\nMicro SD card at least 4Gb Class 10 speed3\nMicro USB power supply at least 2A\nUSB Wifi dongle (unless you have a Pi 3 - that has wifi built in). \nAddressable fairy lights\nLogic level shifter (with pins soldered unless you want to do it!)\nBreadboard\nJumper wires (3x male to male and 4x female to male)\n\nOptional but recommended\n\nBase board to hold the Pi and Breadboard (often comes with a breadboard!)\n\nFind links for where to buy all of these items that goes along with this tutorial. The total price should be around $1004.\nSetting up the Raspberry Pi\nYou\u2019ll need to install the SD card for the Raspberry Pi. You\u2019ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher5. \nNext up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There\u2019s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details6.\nnetwork={\n ssid=\"mywifi\"\n psk=\"mypassword\"\n}\nSave the file, eject the card from your computer and plug it into the Raspberry Pi. \nIf you have a base board or holder for the Raspberry Pi, attach it now. 
Then connect the wifi USB dongle7 and power supply, but don\u2019t plug it in yet!\nWiring!\nTime to wire everything up! \nFirst of all, push the Logic Level Converter into the middle of the breadboard:\n\nLogic Level Converter\nThe logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top. \n\nRaspberry Pi pins only output 3.3v but the lights need 5v. That\u2019s why we need the logic level converter in there to boost up the signal.\nConnect the first two wires between the Raspberry Pi pins and the breadboard:\n\nNote that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don\u2019t have to match but it\u2019s easier to follow (and check) if you use the same ones as in the diagram. \n\nThen the next two:\n\nThis is what you should have so far:\n\nLights\nNow to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard.\n\n\n\nMake sure that you connect the right end of the lights, mine has a male connector at the wrong end so it\u2019s impossible to do this, but double check. \nAlso make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or \u2018-\u2019 or G) and DI (sometimes DIN for data in).\n\nYou can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that\u2019s what takes the data along to the chip in the next light. Make sure that you\u2019re plugging into the data-in and not the data-out! \nThat\u2019s it! Everything\u2019s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they\u2019re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check!\n\nThe Moment of Truth!\nPlug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that\u2019s your confirmation that it\u2019s connected to my WebSocket server and ready to receive messages from the upside-down! \n\nHowever, if the first light in the string is pulsing red, it means that you\u2019re not connected to the internet. So check the Troubleshooting section of the support document. If it\u2019s pulsing green then you\u2019re connected to the internet but can\u2019t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it\u2019ll come back up. \nRig up the lights!\nFix the lights up on the wall however you want, pins, nails, tape. I\u2019ve used cable clips. Just be careful! I\u2019m using a 50 light string so I\u2019ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi. 
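Mapping letters to lights is nothing fancy; a simple lookup table of letter-to-light positions works, and leaves room for spare lights between the rows. A rough sketch of the idea (the indices here are illustrative, not the actual client code):\nvar NUM_LIGHTS = 50;\n\n// which light in the string each letter maps to (illustrative indices)\nvar LETTER_INDEX = {\n a: 25, b: 26, c: 27, // ...continuing along the first row\n i: 34, j: 35, // ...second row, after a couple of spare lights\n z: 49\n};\n\nfunction lightForLetter(letter) {\n var index = LETTER_INDEX[letter.toLowerCase()];\n return (index === undefined) ? null : index;\n}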
\nCheck the photo here to see how the lights line up, note that there are spare unused lights in-between each row:\n\nNow visit lights.seb.ly and you\u2019ll see this : \n\nIf you\u2019re the only one online you\u2019ll have direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you\u2019ll need to click the button to join the queue and wait your turn. \nHow it works - the geeky details!\nElectronics:\nThe pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them, LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer). \nAddressable LEDs or \u201cNeopixels\u201d\nWe\u2019re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string. \nThe chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels8 and I used them on my Laser Light Synths project.\nNeopixels with the chip and the LED all in one - it\u2019s the white square shaped component and the darker square inside is the chip. These are only 5mm wide!\nA Laser Light Synth! Covered with around 800 super bright neopixels!\nLogic Level Converter\nThe logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip. \nPower\nNeopixels can often draw a lot of current so you need to be careful how you power them. I\u2019ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you\u2019ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit\u2019s Neopixel Uberguide. \nNode.js\nThere are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they\u2019re hosted on npm as stranger-lights and stranger-lights-server. \nThe server side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time.\nWebSockets\nI\u2019m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connects to my WebSocket server. \nWhen you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers9. 
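The forwarding logic itself is tiny. Here\u2019s a simplified sketch of the server side (the event name is illustrative; the real thing lives in the stranger-lights-server repo):\n// attach Socket.io to an existing HTTP server\nvar io = require('socket.io')(server);\n\nio.on('connection', function (socket) {\n // a browser clicked a letter...\n socket.on('letter', function (data) {\n // ...broadcast it to every connected client,\n // Raspberry Pis and browsers alike\n io.emit('letter', data);\n });\n});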
\nThe Raspberry Pi code\nThe Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file: \nnode /home/pi/strangerthings/client.js > /dev/null &\nAnything in the rc.local file gets executed when the Pi boots up and this line of code runs the Node.js app and routes its output to nowhere (ie /dev/null). The & means that it runs it in the background and doesn\u2019t hold up the boot process. \nWorking with the Raspberry Pi headless\nYou might know that when a computer has no screen or keyboard, you would refer to it as \u201crunning headless\u201d. So just like most web servers, you need to configure it over the network with ssh10. If you\u2019re on a mac you can find your Pi on the network through the name raspberrypi.local11, otherwise you\u2019ll need to find its IP address. There\u2019s more on the guide to Remote Access instructions on the Raspberry Pi website. And if you\u2019re very new to the terminal, I highly recommend this great online Linux command line tutorial.\nImprovements\nThis is quite an early experiment and I\u2019m sure I\u2019ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I\u2019ve just rigged up my lights with Post-it notes. It\u2019d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style. \nWhere next?\nFinding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that\u2019s something you\u2019d be interested in, or sign up to my mailing list at st4i.com. \nThere are many many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you\u2019ll be spoilt for choice! \nAlso take a look at Arduino - it\u2019s an incredibly popular platform for electronics and the internet is literally bursting with resources. \nI hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly. \n\n\n\n\nNot a particularly original idea, but I don\u2019t think I\u2019ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables\u00a0\u21a9\ufe0e\n\n\nVideo guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit\u00a0\u21a9\ufe0e\n\n\nSlower cards will work but performance may suffer\u00a0\u21a9\ufe0e\n\n\nOr \u00a35,000 in UK money. Sorry, Brexit joke :)\u00a0\u21a9\ufe0e\n\n\nYou will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. 
\u00a0\u21a9\ufe0e\n\n\nSSID and password should be all that you need but you can see all the config options on this wpa supplicant guide\u00a0\u21a9\ufe0e\n\n\nRaspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle\u00a0\u21a9\ufe0e\n\n\nThanks to Adafruit who invented the term neopixels so we don\u2019t have to refer to them as WS281x any more!\u00a0\u21a9\ufe0e\n\n\nSo you can see other people sending messages in the browser\u00a0\u21a9\ufe0e\n\n\nssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal.\u00a0\u21a9\ufe0e\n\n\nYou can change this default hostname using raspi-config\u00a0\u21a9\ufe0e", "year": "2016", "author": "Seb Lee-Delisle", "author_slug": "sebleedelisle", "published": "2016-12-01T00:00:00+00:00", "url": "https://24ways.org/2016/internet-of-stranger-things/", "topic": "code"} {"rowid": 293, "title": "A Favor for Your Future Self", "contents": "We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens. \nWe comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front just in case to give us a bit of safety later.\nI want to talk about a situation that Past Alicia and Team couldn\u2019t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. It wasn\u2019t a very pleasant experience, not only for the reason why we had to remove so much of what we had built, but also because it\u2019s the ultimate \u201cI really hope this doesn\u2019t break something else\u201d situation. It was a stressful and tedious effort of triple checking that the things we were removing weren\u2019t dependencies elsewhere. To be honest, we wouldn\u2019t have been able to do this with any amount of success or confidence without our test suite.\nWriting tests for code is one of those things that developers really, really don\u2019t want to do. It\u2019s one of the easiest things to cut in the development process, and there\u2019s often a struggle to have developers start writing tests in the first place. One of the best lessons the web has taught us is that we can\u2019t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren\u2019t in a tough spot later on because we only thought of the best case scenarios.\nJavaScript\nRegardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you\u2019re choosing to build a JavaScript heavy app, you absolutely should be writing some combination of unit and integration tests.\nUnit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. Great for reused functions and small, scoped areas, this is the closest you examine your code with the testing microscope. 
For example, if we were to build a calculator, the most minute piece we could test could be the basic operations.\n/*\n * This example uses a test framework called Jasmine\n */\n\ndescribe(\"Calculator Operations\", function () {\n\n it(\"Should add two numbers\", function () {\n\n // Say we have a calculator\n Calculator.init();\n\n // We can run the function that does our addition calculation...\n var result = Calculator.addNumbers(7,3);\n\n // ...and ensure we're getting the right output\n expect(result).toBe(10);\n\n });\n});\nEven though these teeny bits work in isolation, we should ensure that connecting the large pieces work, as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let\u2019s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn\u2019t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.).\n it(\u201cShould remember the last calculation\u201d, function () {\n\n // Run an operation\n Calculator.addNumbers(7,10);\n\n // Expect something else to have happened as a result\n expect(Calculator.updateCurrentValue).toHaveBeenCalled();\n expect(Calculator.currentValue).toBe(17);\n });\nUnit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although it still might happen, you could be able to catch problems way sooner than without a test suite, and hopefully never push those failures to your production environment.\nInterfaces\nRegardless of how you\u2019re building something, it most definitely has some kind of interface. Whether you\u2019re using a very barebones structure, or you\u2019re leveraging a whole design system, these things can be tested as well.\nAcceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. 
These are not necessarily for simulating edge-case scenarios, but rather ensuring that our core offerings are stable.\nFor example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress.\n/*\n * This example uses Jasmine along with an add-on called jasmine-integration\n */\n\ndescribe(\"Acceptance tests\", function () {\n\n // Go to our signup page\n var page = visit(\"/signup\");\n\n // Fill our signup form with invalid information\n page.fill_in(\"input[name='email']\", \"Not An Email\");\n page.fill_in(\"input[name='name']\", \"Alicia\");\n page.click(\"button[type=submit]\");\n\n // Check that we get an expected error message\n it(\"Shouldn't allow signup with invalid information\", function () {\n expect(page.find(\"#signupError\").hasClass(\"hidden\")).toBeFalsy();\n });\n\n // Now, fill our signup form with valid information\n page.fill_in(\"input[name='email']\", \"thisismyemail@gmail.com\");\n page.fill_in(\"input[name='name']\", \"Gerry\");\n page.click(\"button[type=submit]\");\n\n // Check that we get an expected success message and the error message is hidden\n it(\"Should allow signup with valid information\", function () {\n expect(page.find(\"#signupError\").hasClass(\"hidden\")).toBeTruthy();\n expect(page.find(\"#thankYouMessage\").hasClass(\"hidden\")).toBeFalsy();\n });\n});\nIn terms of visual design, we\u2019re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga.\nTools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images. The code would look something like this:\n/*\n * This example uses PhantomCSS\n */\n\ncasper.start(\"/home\").then(function(){\n\n // Initial state of form\n phantomcss.screenshot(\"#signUpForm\", \"sign up form\");\n\n // Hit the sign up button (should trigger error)\n casper.click(\"button#signUp\");\n\n // Take a screenshot of the UI component\n phantomcss.screenshot(\"#signUpForm\", \"sign up form error\");\n\n // Fill in form by name attributes & submit\n casper.fill(\"#signUpForm\", {\n name: \"Alicia Sedlock\",\n email: \"alicia@example.com\"\n }, true);\n\n // Take a second screenshot of success state\n phantomcss.screenshot(\"#signUpForm\", \"sign up form success\");\n});\nYou run this code before starting any development, to create your baseline set of screen captures. After you\u2019ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. 
Say you changed your margins on our form elements \u2013 your image diff would look something like this:\n\nThis is a great tool for ensuring not just your site retains its expected styling, but it\u2019s also great for ensuring nothing accidentally changes in the living style guide or modular components you may have developed. It\u2019s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored.\nConclusion\nThe shape and size of what you\u2019re testing for your site or app will vary. You may not need lots of unit or integration tests if you don\u2019t write a lot of JavaScript. You may not need visual regression testing for a one page site. It\u2019s important to assess your codebase to see which tests would provide the most benefit for you and your team.\nWriting tests isn\u2019t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that\u2019s broken breaks trust with our users, and it\u2019s our responsibility as developers to make sure that trust isn\u2019t broken. Testing shouldn\u2019t be considered a \u201cnice to have\u201d - it should be an integral piece of our workflow and our day-to-day job.", "year": "2016", "author": "Alicia Sedlock", "author_slug": "aliciasedlock", "published": "2016-12-03T00:00:00+00:00", "url": "https://24ways.org/2016/a-favor-for-your-future-self/", "topic": "code"} {"rowid": 303, "title": "We Need to Talk About Technical Debt", "contents": "In my work with clients, a lot of time is spent assessing old, legacy, sprawling systems and identifying good code, bad code, and technical debt.\nOne thing that constantly strikes me is the frequency with which bad code and technical debt are conflated, so let me start by saying this:\nNot all technical debt is bad code, and not all bad code is technical debt.\nSometimes your bad code is just that: bad code. Calling it technical debt often feels like a more forgiving and friendly way of referring to what may have just been a poor implementation or a substandard piece of work.\nIt is an oft-misunderstood phrase, and when mistaken for meaning \u2018anything legacy or old hacky or nasty or bad\u2019, technical debt is swept under the carpet along with all of the other parts of the codebase we\u2019d rather not talk about, and therein lies the problem.\nWe need to talk about technical debt.\nWhat We Talk About When We Talk About Technical Debt\nThe thing that separates technical debt from the rest of the hacky code in our project is the fact that technical debt, by definition, is something that we knowingly and strategically entered into. Debt doesn\u2019t happen by accident: debt happens when we choose to gain something otherwise-unattainable immediately in return for paying it back (with interest) later on.\nAn Example\nYou\u2019re a front-end developer working on a SaaS product, and your sales team is courting a large customer \u2013 a customer so large that you can\u2019t really afford to lose them. 
The customer tells you that as long as you can allow them to theme your SaaS application according to their branding, they are willing to sign on the dotted line\u2026 the problem being that your CSS architecture was never designed to incorporate theming at all, and there isn\u2019t currently a nice, clean way to incorporate a theme into the codebase.\nYou and the business make the decision that you will hack a theme into the product in two days. It\u2019s going to be messy, it\u2019s going to be ugly, but you can\u2019t afford to lose a huge customer just because your CSS isn\u2019t quite right, right now. This is technical debt.\nYou deliver the theme, the customer signs up, and everyone is happy. Except you (and the business, because you are one and the same) have a decision to make:\n\nDo we go back and build theming into the CSS architecture as a first-class citizen, porting the hacked theme back into a codified and formal framework?\nDo we carry on as we are? Things are working okay, and the customer paid up, so is there any reason to invest time and effort into things after we (and the customer) got what we wanted?\n\nOption 1 is choosing to pay off your debts; Option 2 is ignoring your repayments.\nWith Option 1, you\u2019re acknowledging that you did what you could given the constraints, but, free of constraints, you\u2019d have done something different. Now, you are choosing to implement that something different.\nWith Option 2, however, you are avoiding your responsibility to repay your debt, and you are letting interest accrue. The problem here is that\u2026\n\nyour SaaS product now offers theming to one of your customers;\nanother potential customer might also demand the ability to theme their instance of your product;\nyou can\u2019t refuse them that request, nor can you quickly fulfil it;\nyou hack in another theme, thus adding to the balance of your existing debt;\nand so on (plus interest) for every subsequent theme you need to implement.\n\nHere you have increased entropy whilst making little to no attempt to address what you already knew to be problems.\nYour second, third, fourth, fifth request for theming will be hacked on top of your hack, further accumulating debt whilst offering nothing by way of a repayment. After a long enough period, the code involved will get so unwieldy, so hard to work with, that you are forced to tear it all down and start again, and the most painful part of this is that you\u2019re actually paying off even more than your debt repayments would have been in the first place. Two days of hacking plus, say, five days of subsequent refactoring, would still have been substantially less than the weeks you will now have to spend rewriting your CSS to fix and incorporate the themes properly. You\u2019ve made a loss; your strategic debt ultimately became a loss-making exercise.\nThe important thing to note here is that you didn\u2019t necessarily write bad code. You knew there were two options: the quick way and the correct way. The decision to take the quick route was a definite choice, because you knew there was a better way. Implementing the better way is your repayment.\nGood Debt and Bad Debt\nTechnical debt is acceptable as long as you have intentions to settle; it can be a valuable solution to a business problem, provided the right approach is taken afterwards. That doesn\u2019t, however, mean that all debt is born equal. 
Just as in real life, there is good debt and there is bad debt.\nGood debt might be\u2026\n\na mortgage;\na student loan, or;\na business loan.\n\nThese are types of debt that will secure you the means of repaying them. These are well considered debts whose very reason for being will allow you to make the money to pay them off\u2014they have real, tangible benefit.\nA business loan to secure some equipment and premises will allow you to start an enterprise whose revenue will allow you to pay that debt back; a student loan will allow you to secure the kind of job that has the ability to pay a student loan back.\nThese kinds of debt involve a considered and well-balanced decision to acquire something in the short term in the knowledge that you will have the means, in the long term, to pay it back.\nConversely, bad debt might be\u2026\n\nborrowing $1,000 from a loan shark so you can go to Vegas, or;\ntaking out a payday loan in order to buy a new television.\n\nBoth of these kinds of debt will leave you paying for things that didn\u2019t provide you a way of earning your own capital. That is to say, the loans taken did not secure anything that would help pay off said loans. These are bad debts that will usually provide a net loss. You really are only gaining the short term in exchange for a long term financial responsibility: i.e., was it worth it?\nA good litmus test for debt is to compare the gains of its immediate benefit with the cost of its long term commitment.\nThe earlier example of theming a site is a good debt, provided we are keeping up our repayments (all debt is bad debt if you don\u2019t). A calculated decision to do something \u2018wrong\u2019 in the short term with the promise of better payoffs later on.\nBad Technical Debt\nThe majority of my work is with front-end development teams\u2014CSS is what I do. To that end, the most succinct example of technical debt for that audience is simply:\n!important\nAll front-end developers know the horrors and dangers associated with using !important, yet we continue to use it. Why?\nIt\u2019s not necessarily because we\u2019re bad developers, but because we see a shortcut. !important is usually implemented as a quick way out of a sticky specificity situation. We could spend the rest of the day refactoring our CSS to fix the issue at its source, or we can spend mere seconds typing the word !important and patch over the symptoms.\nThis is us making an explicit decision to do something less than ideal now in exchange for immediate benefit. After all, refactoring our CSS will take a lot more time, and will still only leave us with the same outcome that the vastly quicker !important solution will, so it seems to make better business sense.\nHowever, this is a bad debt. !important takes seconds to implement but weeks to refactor. The cost of refactoring this back out later will be an order of magnitude higher than it would be to have done things properly the first time. The first !important usually sets a precedent, and subsequent developers are likely to have to use it themselves in order to get around the one that you left.\nSo many CSS projects deteriorate because of this one simple word, and rewrites become more and more imminent. That makes it possibly the most costly 10 bytes a CSS developer could ever write.\nBad Code\nNow we\u2019ve got a good idea of what constitutes technical debt, let\u2019s take a look at what constitutes bad code. 
Something I hear time and time again in my client work goes a little like this:\n\nWe\u2019ve amassed a lot of technical debt and we\u2019d like to get a strategy in place\nto begin dealing with it.\n\nWhilst I genuinely admire their willingness to identify and desire to fix problems in their code, sometimes they\u2019re not looking at technical debt at\nall\u2014sometimes they\u2019re just looking at bad code, plain and simple.\nWhere technical debt is knowing that there\u2019s a better way, but the quicker way makes more sense right now, bad code is not caring if there\u2019s a better way at all.\nAgain, looking at a CSS-specific world, a lot of bad code is contributed by non-front-end developers with little training, appreciation, or even respect for the front-end landscape. Writing code with reckless abandon should not be described as technical debt, because to do so would imply that\u2026\n\nthe developers knew they were implementing a sub-par solution, but\u2026\nthe developers also knew that a better solution was out there, which\u2026\nimplies that it can be tidied up relatively simply.\n\nDevelopers writing bad code is a larger and more cultural problem that requires a lot more effort to fix. Hopefully\u2014and usually\u2014bad code is in the minority, but it helps to be objective in identifying and solving it. Bad code usually doesn\u2019t happen for a good enough reason, and is therefore much harder to justify.\nTechnical debt often represents ability in judgement, whereas bad code often represents a gap in skills.\nTakeaway\nTake time to familiarise yourself with the true concepts underlying technical debt and why it exists. Understand that technical debt can be good or bad. Admit that sometimes code is just of poor quality.\nUnderstanding these points will allow you to make better calls around what you might need to refactor and when, and what skills gaps you might have in your team.\n\nSometimes it\u2019s okay to cut corners if there is a tangible gain to be had in the immediate term.\nTechnical debt is okay provided it is a sensible debt and you have intentions to pay it off.\nTechnical debt is not necessarily synonymous with bad code, and bad code isn\u2019t necessarily technical debt. Technical debt is code that was implemented given limited knowledge or resource, with the understanding that you would need to repay something in future.\nTechnical debt is not inherently bad\u2014failure to make repayments is. Periodically, it is justifiable\u2014encouraged, even\u2014to enter a debt in order to fulfil a more pressing matter. However, it is imperative that we begin making repayments as soon as we are capable, be that based on newly available time or knowledge.\nBad code is worse than technical debt as it represents a lack of knowledge or quality control within a team. It needs a much more fundamental fix.", "year": "2016", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2016-12-05T00:00:00+00:00", "url": "https://24ways.org/2016/we-need-to-talk-about-technical-debt/", "topic": "code"} {"rowid": 308, "title": "How to Make a Chrome Extension to Delight (or Troll) Your Friends", "contents": "If you\u2019re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends. 
And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius. \nBut today is different. You\u2019re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack\u2026 right? \nNope. \nIf there\u2019s one thing 2016 has taught me, it\u2019s that we\u2014the self-serious, world-changing tech movers and shakers of the universe\u2014haven\u2019t changed one bit from our younger, more delightable selves.\nHow do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like \u201cStinky Dinosaur\u201d or \u201cTiny Potato\u201d. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there\u2019s still plenty of room in our big, grown-up hearts for fun.\n\n\nToday, we\u2019re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension. \nHere\u2019s a sneak peek at my final result featuring my partner, my cat, and I in cheerfully weird accessories. Your result will look however you want it to.\n\nAlong the way, we\u2019ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful. \nWhat you\u2019ll need\n\nAdobe Illustrator (or a similar illustration program to export PNG)\nSome images of your friends\nA text editor\n\nNote: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you\u2019ll find ways to do it.\nGetting started\n\nDownload a local copy of the boilerplate for today\u2019s tutorial here, and open it in a text editor. Inside, you\u2019ll find a simple web app that you can run in Chrome. \nOpen index.html in Chrome. You should see a grey page that says \u201cNoname\u201d.\nOpen template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template.\n\nNote: We\u2019re using Google Chrome to build and preview this application because the end-result is a Chrome extension. This means that the application isn\u2019t totally cross-browser compatible, but that\u2019s okay.\nStep 1: Gather your friends\nThe first thing to do is choose who your muses are. Since the holidays are upon us, I\u2019d suggest finding inspiration in your family.\nCreate your artwork\nFor each person, find an image where their face is pointed as forward as possible. Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. 
Here\u2019s my crew:\nAs you can see, my use of the word \u201cfamily\u201d extends to cats. There\u2019s no judgement here.\nNotice that some of my photos don\u2019t completely fill the artboard\u2013that\u2019s fine. The images will be clipped into ovals when they\u2019re rendered in the application.\nNow, export your images by following these steps:\n\nTurn the Template layer off and export the images as PNGs. \nIn the Export dialog, tick the \u201cUse Artboards\u201d checkbox and enter the range with your faces. \nExport at 72ppi to keep things running fast. \nSave your images into the images/ folder in your project.\n\nAdd your images to config.js\nOpen scripts/config.js. This is where you configure your extension. \nAdd key value pairs to the faces object. The key should be the person\u2019s name, and the value should be the filepath to the image.\nfaces: {\n leslie: 'images/face_leslie.png',\n kyle: 'images/face_kyle.png',\n beep: 'images/face_beep.png'\n}\nThe application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads. The only thing that\u2019s special about the faces object is that person\u2019s name will also be displayed when their face is chosen.\nNow, when you refresh the project in Chrome, you should see one of your friends along with their name, like this:\n\nCongrats, you\u2019re off and running!\nStep 2: Add adjectives\nNow that you\u2019ve loaded your friends into the application, it\u2019s time to call them names. This step definitely yields the most laughs for the least amount of effort.\nAdd a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering\u2026\nprefixes: [\n 'Loving',\n 'Drunk',\n 'Chatty',\n 'Merry',\n 'Creepy',\n 'Introspective',\n 'Cheerful',\n 'Awkward',\n 'Unrelatable',\n 'Hungry',\n ...\n]\nWhen you refresh Chrome, you should see one of these words prefixed before your friend\u2019s name. Voila!\n\nStep 3: Choose your color palette\nReal talk: I\u2019m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you\u2019ve been blessed with the gift of color aptitude, skip ahead.\nHow to choose colors\nTo create a color palette, I start by going to a Coolors.co, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like.\nCopy these colors into your swatches in Adobe Illustrator. They\u2019ll be the base for any illustrations you create later.\nNow you need a set of background colors. Here\u2019s my trick to making these consistent with your illustration palette without completely blending in. Use the \u201cAdjust Palette\u201d tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors.\n\nAdd your background colors to config.js\nCopy your hex codes into the bgColors array in config.js.\nbgColors: [\n '#FFDD77',\n '#FF8E72',\n '#ED5E84',\n '#4CE0B3',\n '#9893DA',\n ...\n]\nNow when you go back to Chrome and refresh the page, you\u2019ll see your new palette!\n\nStep 4: Accessorize\nThis is the fun part. 
We\u2019re going to illustrate objects, accessories, lizards\u2014whatever you want\u2014and layer them on top of your friends.\nYour objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like \u201chats\u201d or \u201cglasses\u201d. This will allow combinations of accessories to show at once, without showing two of the same type on the same person.\nCreate a group of accessories\nTo get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don\u2019t feel like illustrating, you can use cut-out images instead.\n\nNext, follow the same steps as you did when you exported the faces. Here they are again:\n\nTurn the Template layer off and export the images as PNGs. \nIn the Export dialog, tick the \u201cUse Artboards\u201d checkbox and enter the range with your hats. \nExport at 72ppi to keep things running fast. \nSave your images into the images/ folder in your project.\n\nAdd your accessories to config.js\nIn config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array:\ncustomProps: {\n hats: [ \n 'images/hat_crown.png',\n 'images/hat_santa.png',\n 'images/hat_tophat.png',\n 'images/hat_antlers.png'\n ]\n}\nRefresh Chrome and behold, accessories!\n\nCreate as many more accessories as you want\nRepeat the steps above to create as many groups of accessories as you want. I went on to make glasses and hairstyles, so my final illustrator file looks like this:\n\nThe last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses:\ncustomProps: {\n hair: [ \n 'images/hair_bowl.png',\n 'images/hair_bob.png'\n ], \n hats: [ \n 'images/hat_crown.png',\n 'images/hat_santa.png',\n 'images/hat_tophat.png',\n 'images/hat_antlers.png'\n ],\n glasses: [\n 'images/glasses_aviators.png',\n 'images/glasses_monacle.png'\n ]\n}\nAnd, there you have it! Randomly generated friends with random accessories. \n\nFeel free to go much crazier than I did. I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress\u2026\nStep 5: Publish it\nIt\u2019s time to put this in your new tabs! You have two options:\n\nPublish it as a Chrome extension in the Chrome Web Store.\nHost it as a website and point to it with the New Tab Redirect extension.\n\nToday, we\u2019re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I\u2019d suggest sticking to Option #2.\nHow to make a simple Chrome extension to replace the new tab page\nAll you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. 
There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement:\n{\n \"manifest_version\": 2,\n \"name\": \"Your extension name\",\n \"version\": \"1.0\",\n \"chrome_url_overrides\" : {\n \"newtab\": \"index.html\"\n }\n}\nTo test your extension, you\u2019ll need to run it in Developer Mode. Here\u2019s how to do that:\n\nGo to the Extensions page in Chrome by navigating to chrome://extensions/.\nTick the checkbox in the upper-right corner labelled \u201cDeveloper Mode\u201d.\nClick \u201cLoad unpacked extension\u2026\u201d and select this project.\nIf everything is running smoothly, you should see your project when you open a new tab. If there are any errors, they should appear in a yellow box on the Extensions page.\n\nVoila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you.\nShare the love\nNow that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they\u2019ve become a part of everyday life\u2013just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats. \nAt the end of the day, let\u2019s make something that makes us happy. Cheers!", "year": "2016", "author": "Leslie Zacharkow", "author_slug": "lesliezacharkow", "published": "2016-12-08T00:00:00+00:00", "url": "https://24ways.org/2016/how-to-make-a-chrome-extension/", "topic": "code"} {"rowid": 307, "title": "Get the Balance Right: Responsive Display Text", "contents": "Last year in 24 ways I urged you to Get Expressive with Your Typography. I made the case for grabbing your readers\u2019 attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout.\nWhen setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader\u2019s preferences. You can take the same approach with display text by choosing larger sizes from the same scale.\nHowever, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window.\nIntroducing viewport units\nWith CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference1 pixels:\n\nvw - viewport width,\nvh - viewport height,\nvmin - viewport height or width, whichever is smaller\nvmax - viewport height or width, whichever is larger\n\nIn one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. 
The following makes the heading font size 13% of the viewport width:\nh1 {\n font-size: 13vw;\n}\nSo for a selection of widths, the rendered font size would be:\nAt 13vw, viewport width \u2192 rendered font size (px):\n320 \u2192 42\n768 \u2192 100\n1024 \u2192 133\n1280 \u2192 166\n1920 \u2192 250\n\nA problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation. \nLandscape text is much bigger than portrait text when using vw units.\nThe proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the consistency and considered design you would want when designing to make an impression.\nHowever if the text was the same size in both orientations, the visual effect would be much more consistent. This is where vmin comes into its own. Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering.\nh1 {\n font-size: 13vmin;\n}\nLandscape text is consistent with portrait text when using vmin units.\nComparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude:\nViewport \u2192 13vw / 13vmin rendered size (px):\n320 \u00d7 480 \u2192 42 / 42\n414 \u00d7 736 \u2192 54 / 54\n768 \u00d7 1024 \u2192 100 / 100\n1024 \u00d7 768 \u2192 133 / 100\n1280 \u00d7 720 \u2192 166 / 94\n1366 \u00d7 768 \u2192 178 / 100\n1440 \u00d7 900 \u2192 187 / 117\n1680 \u00d7 1050 \u2192 218 / 137\n1920 \u00d7 1080 \u2192 250 / 140\n2560 \u00d7 1440 \u2192 333 / 187\n\nHybrid font sizing\nUsing viewport units to set text in direct proportion to screen dimensions works well when sizing display text. It can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading:\nh1 {\n font-size: 13vmin;\n}\nh2 {\n font-size: 5vmin;\n}\nUsing vmin alone for supporting text can scale it too quickly\nThe balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens.\nA solution to this is to use a hybrid method of sizing text2. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. 
For example:\nh2 {\n font-size: calc(0.5rem + 2.5vmin);\n}\nFor a 320 px wide screen, the font size will be 16 px, calculated as follows:\n(0.5 \u00d7 16) + (320 \u00d7 0.025) = 8 + 8 = 16px\nFor a 768 px wide screen, the font size will be 27 px:\n(0.5 \u00d7 16) + (768 \u00d7 0.025) = 8 + 19 = 27px\nThis results in a more balanced subheading that doesn\u2019t take emphasis away from the main heading:\n\nTo give you an idea of the effect of using a hybrid approach, here\u2019s a side-by-side comparison of hybrid and viewport text sizing, showing the rendered size in pixels at a range of viewport widths:\nViewport width \u2192 calc() hybrid method / vmin only:\n320 \u2192 16 / 16\n480 \u2192 20 / 24\n768 \u2192 27 / 38\n960 \u2192 32 / 48\n1080 \u2192 35 / 54\n1280 \u2192 40 / 64\n1440 \u2192 44 / 72\n\nOver this festive period, try experimenting with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting.\n\nNotes:\n1. A reference pixel is based on the logical resolution of a device, which takes into account double density screens such as Retina displays.\n2. For even more sophisticated uses of hybrid text sizing see the work of Mike Riethmuller.", "year": "2016", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2016-12-09T00:00:00+00:00", "url": "https://24ways.org/2016/responsive-display-text/", "topic": "code"} {"rowid": 292, "title": "Watch Your Language!", "contents": "I\u2019m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can\u2019t be expressed in some languages, while other languages express these concepts with ease.\nIt also helped me understand the way we label languages. English: business. French: romance. Here\u2019s an example of how words, or the absence thereof, can affect the way we think:\nIn French we love everything. There\u2019s no straightforward way to say we like something, so we just end up loving everything. I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It\u2019s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn\u2019t grasped the difference. Needless to say, I\u2019ve scared away plenty of first dates!\nLearning another language made me realize the limitations of my native language and revealed concepts I didn\u2019t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas.\nWhen I lived in Montr\u00e9al, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient.\n\nI\u2019m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. 
In the past couple of years I have been lucky enough to write code in these languages at a massive scale. In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job.\nWhen I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn\u2019t consider accessibility. All this made editing views a laborious process.\nGrep. Replace all. Test. Ship it. Repeat.\nThis wasn\u2019t sustainable at Shopify\u2019s scale, so the newly-formed front end team was given two missions:\n\nMake the app responsive (AKA Let\u2019s Make This Thing Responsive ASAP)\nMake the view layer scalable and maintainable (AKA Let\u2019s Build a Pattern Library\u2026 in Ruby)\n\nLet\u2019s make this thing responsive ASAP\nThe year was 2015. The Shopify admin wasn\u2019t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries.\nIt seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn\u2019t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool.\nWriting the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple weeks to push this to production and make our app mostly responsive.\nBut with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn\u2019t performant at all. Components would jarringly jump around the page before settling in on first paint.\nIt was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn\u2019t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout.\nIn other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly.\nLet\u2019s build a pattern library\u2026 in Ruby\nIn order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping.\nWhen I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. 
Because of this baggage, the only way for us to build a pattern library that would get buyin from our developers was to use Rails view helpers. And for many reasons using Ruby was the right choice for us. The time spent ramping developers up on the new UI Components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn\u2019t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan.\nWe put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front end patterns we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still do)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks.\nSince the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn\u2019t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn\u2019t going to cut it.\nI don\u2019t know the end of this story, because we haven\u2019t written it yet. We\u2019ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven\u2019t determined the winner yet. Only time (and data gathering) will tell.\nRuby is great but it doesn\u2019t speak the browser\u2019s language efficiently. It was not the right language for the job.\n\nLearning a new spoken language has had an impact on how I write code. It has taught me that you don\u2019t know what you don\u2019t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. But if you still feel like you\u2019re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language.", "year": "2016", "author": "Annie-Claude C\u00f4t\u00e9", "author_slug": "annieclaudecote", "published": "2016-12-10T00:00:00+00:00", "url": "https://24ways.org/2016/watch-your-language/", "topic": "code"} {"rowid": 298, "title": "First Steps in VR", "contents": "The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not. \nI think of these two mindsets as production and prototyping, though of course there are lots of overlap and phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. 
I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people.\nEvery year we see articles saying it will be the \u201cyear of virtual reality\u201d; that was especially prevalent this year. 2016 has been a year of progress: VR isn\u2019t quite mainstream, but with efforts like Playstation VR and Google Cardboard, we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web.\nWebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To help, there are plenty of resources such as Three.js, A-Frame and ReactVR that make the heavy lifting a bit easier.\nGetting Started with A-Frame\nI like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such; I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components: you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. That\u2019s not to say you can\u2019t build complex things; you have complete access to write JavaScript and shaders.\nI\u2019m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There\u2019s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It\u2019s fun.\nFor this project I wanted to keep things as simple as possible, so I can easily explain it without the classic \u201cdraw a circle then draw an owl\u201d. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy:\n\nMust work on Google Cardboard at a minimum, because of price\nTherefore, it must not rely on having a controller\nAuto-moving around a maze would be a good example\nMove in the direction you look\nStretch goal: Scoring, time until you hit a wall or get stuck in maze\nStretch goal: Levels, so the map doesn\u2019t need to be random\nStretch goal: Snow!\n\nI decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground. 
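The markup for the scene boils down to something along these lines (a reduced sketch built from the ids and components used in the rest of this walkthrough, rather than the verbatim listing):\n<a-scene>\n <a-assets>\n  <!-- wall and floor texture images are preloaded here -->\n </a-assets>\n <a-entity id="player" camera universal-controls kinematic-body></a-entity>\n <a-entity id="walls"></a-entity>\n</a-scene>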
As you can see, this is very readable, especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an <a-scene>, which is the container for everything that is going to show up on the screen. There are a few <a-entity> elements, which can be compared to <div>s
as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them.\nStyling\nNow we\u2019ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around, you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave.\nNote, you will probably need a server running for images to work. You can do this by running python -m "SimpleHTTPServer" in the folder where the code is, then go to localhost:8000 in your browser.\nTextures\nUnless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com, one image worked well for the walls and the other for the floor. They go into the asset block, along these lines (the file names here are whatever you saved the textures as):\n<a-assets>\n <img id="texture-wall" src="wall.jpg">\n <img id="texture-floor" src="floor.jpg">\n</a-assets>\nThe <a-assets> element is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures. \nTo apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image: for the floor, that means giving the ground element a material along the lines of material="src: #texture-floor". This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined.\nbox.setAttribute('material', 'src: #texture-wall');\nThat\u2019s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than the texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object. \nLighting\nThe next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished.\nThere are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the <a-light> entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit.\nTo start with I wanted to light up the scene with a general light, type="ambient", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4; it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (<a-sky>), as it looked a bit odd with a white sky.\nI felt that the maze looked good with a red tinge but it was a bit flat, everything was the same colour and it was a bit dark. So I added a light within the #player entity; this could have been an attribute, but I set it as a child <a-light> instead. By using type="point" with a high intensity and low distance, it showed close walls as being lighter. 
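As a sketch, that child light looks something like this (the colour, intensity and distance values are illustrative rather than the demo\u2019s exact numbers):\n<a-entity id="player" camera universal-controls kinematic-body>\n <!-- a point light that travels with the player, brightening nearby walls -->\n <a-light type="point" color="#fff" intensity="2" distance="10"></a-light>\n</a-entity>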
This light also added a sort-of object to the player; it isn\u2019t a walking human or anything, but by moving the light with the player it feels a bit more physical.\nBy this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the <a-scene> with type=exponential so it looks thicker the further away it is and a mid intensity, so you feel a bit lost but can still see.\nI was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers.\nMovement\nOne of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you\u2019re looking, but I wasn\u2019t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.\nAFRAME.registerComponent('automove-controls', {\n init: function () {\n // Constant forward speed, and a flag the other methods check\n this.speed = 0.1;\n this.isMoving = true;\n this.velocityDelta = new THREE.Vector3();\n },\n isVelocityActive: function () {\n return this.isMoving;\n },\n getVelocityDelta: function () {\n // Negative z is forward in the camera's local space\n this.velocityDelta.z = this.isMoving ? -this.speed : 0;\n return this.velocityDelta.clone();\n }\n});\nReplace:\nuniversal-controls\nWith:\nuniversal-controls="movementControls: automove, gamepad, keyboard"\nThis works by creating a component automove-controls that adds auto-move to the player without overriding movement completely. It doesn\u2019t even touch direction, it just checks if isMoving is true then moves the player by the set speed. Components can be created to add all kinds of functionality with relative ease. It makes A-Frame very powerful for people of all skill levels.\nBuilding a map\nCurrently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels.\nI made the maze in Tiled by finding a random tileset online (we don\u2019t need to actually show the images); I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn\u2019t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it\u2019s easy to update in the future. \nvar map =\n{\n "data":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n "height":10,\n "width":10\n}\n\nAs you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don\u2019s example) to instead loop over the map data and place walls where it is 1 and position the player where data is 2. I set the position so that the origin of the map would be 0,1.5,0. 
The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2.\ndocument.querySelector('a-scene').addEventListener('render-target-loaded', function () {\n var WALL_SIZE = 5,\n WALL_HEIGHT = 3;\n var el = document.querySelector('#walls');\n var wall;\n\n for (var x = 0; x < map.height; x++) {\n for (var y = 0; y < map.width; y++) {\n\n var i = y*map.width + x;\n // y of 1.5 is WALL_HEIGHT / 2, so walls sit on the ground rather than in it\n var position = (x-map.width/2)*WALL_SIZE + ' ' + 1.5 + ' ' + (y-map.height/2)*WALL_SIZE;\n if (map.data[i] === 1) {\n // Create wall\n wall = document.createElement('a-box');\n el.appendChild(wall);\n wall.setAttribute('color', '#fff');\n wall.setAttribute('material', 'src: #texture-wall;');\n wall.setAttribute('width', WALL_SIZE);\n wall.setAttribute('height', WALL_HEIGHT);\n wall.setAttribute('depth', WALL_SIZE);\n wall.setAttribute('position', position);\n wall.setAttribute('static-body', '');\n }\n\n if (map.data[i] === 2) {\n // Set player position\n document.querySelector('#player').setAttribute('position', position);\n }\n\n }\n }\n console.info('Walls added.');\n});\n\nWith this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There\u2019s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web.\nIt\u2019s Not All Fun And Games\nA lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that\u2019s just a tiny subset. There are all sorts of applications for VR, including storytelling, data visualisation and even meditation.\nThere have been a number of cases where it has been shown virtual reality can help as a tool for therapies:\n\nOxford study finds virtual reality can help treat severe paranoia\nVirtual Reality Therapy for Phobias at the Duke Faculty Practice\nBravemind: Virtual Reality Exposure Therapy at the University of Southern California\n\nThese are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching tool and as journalism.\nWrapping Up\nTen years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, and that WAP 2.0 included the XHTML Mobile Profile, meaning it would be familiar to web folk. \u201cThe mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing \u201cdesktop web\u201d skills to understand how to develop content for it.\u201d\nWe can look at that and laugh a little; we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the \u201cdesktop web\u201d Cameron was referring to.\nSo while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble, we don\u2019t need to learn any new languages. 
If on the 2026 edition of 24 ways, somebody references this article and looks at how far we have come\u2026 well, let\u2019s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well it\u2019s fun\u2026 have a go anyway.", "year": "2016", "author": "Shane Hudson", "author_slug": "shanehudson", "published": "2016-12-11T00:00:00+00:00", "url": "https://24ways.org/2016/first-steps-in-vr/", "topic": "code"} {"rowid": 306, "title": "What next for CSS Grid Layout?", "contents": "In 2012 I wrote an article for 24 ways detailing a new CSS Specification that had caught my eye, at the time with an implementation only in Internet Explorer. What I didn\u2019t realise at the time was that CSS Grid Layout was to become a theme on which I would base the next four years of research, experimentation, writing and speaking. \nAs I write this article in December 2016, we are looking forward to CSS Grid Layout being shipped in Chrome and Firefox. What will ship early next year in those browsers is expanded and improved from the early implementation I explored in 2012. Over the last four years the spec has been developed as part of the CSS Working Group process, and has had input from browser engineers, specification writers and web developers. Use cases have been discussed, and features added.\nThe CSS Grid Layout specification is now a Candidate Recommendation. This status means the spec is to all intents and purposes, finished. The discussions now happening are on fine implementation details, and not new feature ideas. It makes sense to draw a line under a specification in order that browser vendors can ship complete, interoperable implementations. That approach is good for all of us, it makes development far easier if we know that a browser supports all of the features of a specification, rather than working out which bits are supported. However it doesn\u2019t mean that works stops here, and that new use cases and features can\u2019t be proposed for future levels of Grid Layout. Therefore, in this article I\u2019m going to take a look at some of the things I think grid layout could do in the future. I would love for these thoughts to prompt you to think about how Grid - or any CSS specification - could better suit the use cases you have.\nSubgrid - the missing feature of Level 1\nThe implementation of CSS Grid Layout in Chrome, Firefox and Webkit is comparable and very feature complete. There is however one standout feature that has not been implemented in any browser as yet - subgrid. Once you set the value of the display property to grid, any direct children of that element become grid items. This is similar to the way that flexbox behaves, set display: flex and all direct children become flex items. The behaviour does not apply to children of those items. You can nest grids, just as you can nest flex containers, but the child grids have no relationship to the parent.\n\nNesting Grids by Rachel Andrew (@rachelandrew) on CodePen.\nThe subgrid behaviour would enable the grid defined on the parent to be used by the children. I feel this would be most useful when working with a multiple column flexible grid - for example a typical 12 column grid. 
I could define a grid on a wrapper, then position UI elements on that grid - from the major structural elements of my page down through the child elements to a form where I wanted the field to line up with items above.\nThe specification contained an initial description of subgrid, with a value of subgrid for grid-template-columns and grid-template-rows, you can read about this in the August 2015 Working Draft. This version of the specification would have meant you could declare a subgrid in one dimension only, and create a different set of tracks in the other.\nIn an attempt to get some implementation of subgrid, a revised specification was proposed earlier this year. This gives a single subgrid value of the display property. As we now cannot specify a subgrid on rows OR columns this limits us to have a subgrid that works in two dimensions. At this point neither version has been implemented by anyone, and subgrids are marked as \u201cat risk\u201d in the Level 1 Candidate Recommendation. With regard to \u2018at-risk\u2019 this is explained as follows:\n\n\u201c\u2018At-risk\u2019 is a W3C Process term-of-art, and does not necessarily imply that the feature is in danger of being dropped or delayed. It means that the WG believes the feature may have difficulty being interoperably implemented in a timely manner, and marking it as such allows the WG to drop the feature if necessary when transitioning to the Proposed Rec stage, without having to publish a new Candidate Rec without the feature first.\u201d \n\nIf we lose subgrid from Level 1, as it looks likely that we will, this does give us a chance to further discuss and iterate on that feature. My current thoughts are that I\u2019m not completely happy about subgrids being tied to both dimensions and feel that a return to the earlier version, or something like it, would be preferable. \nFurther reading about subgrid\n\nMy post from 2015 detailing why I feel subgrid is important\nMy post based on the revised specification\nEric Meyer\u2019s thoughts on subgrid\nWrite-up of a discussion from Igalia who work on the Blink and Webkit browser implementations\n\nStyling cells, tracks and areas\nHaving defined a grid with CSS Grid Layout you can place child elements into that grid, however what you can\u2019t do is style the grid tracks or cells. Grid doesn\u2019t even go as far as multiple column layout, which has the column-rule properties.\nIn order to set a background colour on a grid cell at the moment you would have to add an empty HTML element or insert some generated content as in the below example. I\u2019m using a 1 pixel grid gap to fake lines between grid cells, and empty div elements, and some generated content to colour those cells.\n\nFaked backgrounds and borders by Rachel Andrew (@rachelandrew) on CodePen.\nI think it would be a nice addition to Grid Layout to be able to directly add backgrounds and borders to cells, tracks and areas. There is an Issue raised in the CSS WG Drafts repository for Decorative Grid Cell pseudo-elements, if you want to add thoughts to that.\nMore control over auto placement\nIf you haven\u2019t explicitly placed the direct children of your grid element they will be laid out according to the grid auto placement rules. You can see in this example how we have created a grid and the items are placing themselves into cells on that grid.\n\nItems auto-place on a defined grid by Rachel Andrew (@rachelandrew) on CodePen.\nThe auto-placement algorithm is very cool. 
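In its simplest form, the CSS behind that kind of demo looks something like this (a minimal sketch rather than the full CodePen):\n.grid {\n display: grid;\n grid-template-columns: repeat(4, 100px);\n grid-auto-rows: 100px;\n grid-gap: 10px;\n}\nWith nothing more specified, each direct child of .grid is placed into the next empty cell, row by row.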
We can position some items, leaving others to auto-place; we can set items to span more than one track; we can use the grid-auto-flow property with a value of dense to backfill gaps in our grid.\n\nWebsafe colors meet CSS Grid (auto-placement demo) by Rachel Andrew (@rachelandrew) on CodePen.\nI think however this could be taken further. In this issue posted to my CSS Grid AMA on GitHub, the question is raised as to whether it would be possible to ask grid to place items on the next available line of a certain name. This would allow you to skip tracks in the grid when using auto-placement, an issue that has also been raised by Emil Bj\u00f6rklund in this post to the www-style list prior to spec discussion moving to Github. I think there are probably similar issues, if you can think of one add a comment here.\nCreating non-rectangular grid areas\nA grid area is a collection of grid cells, defined by setting the start and end lines for columns and rows or by creating the area in the value of the grid-template-areas property as shown below. Those areas however must be rectangular - you can\u2019t create an L-shaped or otherwise non-regular shape.\n\nGrid Areas by Rachel Andrew (@rachelandrew) on CodePen.\n\nPerhaps in the future we could define an L-shape or other non-rectangular area into which content could flow, as in the below currently invalid code where a quote is embedded into an L-shaped content area.\n.wrapper {\n display: grid;\n grid-template-areas:\n \"sidebar header header\"\n \"sidebar content quote\"\n \"sidebar content content\";\n}\nFlowing content through grid cells or areas\nSome uses cases I have seen perhaps are not best solved by grid layout at all, but would involve grid working alongside other CSS specifications. As I detail in this post, there are a class of problems that I believe could be solved with the CSS Regions specification, or a revised version of that spec.\nBeing able to create a grid layout, then flow content through the areas could be very useful. Jen Simmons presented to the CSS Working Group at the Lisbon meeting a suggestion as to how this might work.\nIn a post from earlier this year I looked at a collection of ideas from specifications that include Grid, Regions and Exclusions. These working notes from my own explorations might prompt ideas of your own.\nSolving the keyboard/layout disconnect\nOne issue that grid, and flexbox to a lesser extent, raises is that it is very easy to end up with a layout that is disconnected from the underlying markup. This raises problems for people navigating using the keyboard as when tabbing around the document you find yourself jumping to unexpected places. The problem is explained by L\u00e9onie Watson with reference to flexbox in Flexbox and the keyboard navigation disconnect.\nThe grid layout specification currently warns against creating such a disconnect, however I think it will take careful work by web developers in order to prevent this. It\u2019s also not always as straightforward as it seems. In some cases you want the logical order to follow the source, and others it would make more sense to follow the visual. People are thinking about this issue, as you can read in this mailing list discussion.\nBringing your ideas to the future of Grid Layout\nWhen I\u2019m not getting excited about new CSS features, my day job involves working on a software product - the CMS that is serving this very website, Perch. 
When we launched Perch there were many use cases that we had never thought of, despite having a good idea of what might be needed in a CMS and thinking through lots of use cases. The additional use cases brought to our attention by our customers and potential customers informed the development of the product from launch. The same will be true for Grid Layout.\nAs a \u201cproduct\u201d grid has been well thought through by many people. Yet however hard we try there will be use cases we just didn\u2019t think of. You may well have one in mind right now. That\u2019s ok, because as with any CSS specification, once Level One of grid is complete, work can begin on Level Two. The feature set of Level Two will be informed by the use cases that emerge as people get to grips with what we have now.\nThis is where you get to contribute to the future of layout on the web. When you hit up against the things you cannot do, don\u2019t just mutter about how the CSS Working Group don\u2019t listen to regular developers and code around the problem. Instead, take a few minutes and write up your use case. Post it to your blog, to Medium, create a CodePen and go to the CSS Working Group GitHub specs repository and post an issue there. Write some pseudo-code, draw a picture, just make sure that the use case is described in enough detail that someone can see what problem you want grid to solve. It may be that - as with any software development - your use case can\u2019t be solved in exactly the way you suggest. However once we have a use case, collected with other use cases, methods of addressing that class of problems can be investigated. \nI opened this article by explaining I\u2019d written about grid layout four years ago, and how we\u2019re only now at a point where we will have Grid Layout available in the majority of browsers. Specification development, and implementation into browsers takes time. This is actually a good thing, as it\u2019s impossible to take back CSS once it is out there and being used by production websites. We want CSS in the wild to be well thought through and that takes time. So don\u2019t feel that because you don\u2019t see your use case added to a spec immediately it has been ignored. Do your future self a favour and write down your frustrations or thoughts, and we can all make sure that the web platform serves the use cases we\u2019re dealing with now and in the future.", "year": "2016", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2016-12-12T00:00:00+00:00", "url": "https://24ways.org/2016/what-next-for-css-grid-layout/", "topic": "code"} {"rowid": 309, "title": "HTTP/2 Server Push and Service Workers: The Perfect Partnership", "contents": "Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2 which has a killer feature known as HTTP/2 Server Push.\nDuring this year\u2019s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics that he covered immediately piqued my interest; the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination.\n\nIf you\u2019ve never heard of HTTP/2 Server Push before, fear not - it\u2019s not as scary as it sounds. 
HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users.\nWhat is HTTP/2 Server Push?\nWhen a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make multiple HTTP requests for each resource that it needs and begin downloading one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower. \nBefore we go any further, let\u2019s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this:\n\nAs you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs. \nThis is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and \u201cpushes\u201d it to browser. This happens when the first HTTP request for the web page takes place and it eliminates an extra round trip, making your site faster. \nUsing the same example above, let\u2019s \u201cpush\u201d the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like.\n\nWhoa, that looks different - let\u2019s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn\u2019t need to make an extra HTTP request to the server, instead it receives the critical files it needs all at once. Much better! \nThere are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I\u2019ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js. In this post, I\u2019m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings.\nHTTP/2 Server Push is awesome, but it isn\u2019t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn\u2019t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache \u201caware\u201d. This means that the server isn\u2019t aware about the state of your client. If you\u2019ve visited a web page before, the server isn\u2019t aware of this and will push the resource again anyway, regardless of whether or not you need it. 
HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory, browsers can cancel HTTP/2 Server Push requests if they've already got something in cache, but unfortunately no browsers currently support it. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs.

HTTP/2 Server Push & Service Workers

So where do Service Workers fit in? Believe it or not, when combined together HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you've not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as a middleman between the client and the browser and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand.

Using Service Workers, you can easily cache assets on a user's device. This means when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in cache on the user's device. If it does, then it can simply return the asset and serve it directly from the device instead of ever hitting the server.

Let's stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load!

Let's put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files.

<!DOCTYPE html>
<html>
<head>
  <title>HTTP2 Push Demo</title>
</head>
<body>
  <h1>HTTP2 Push</h1>
  <!-- A few images and two JavaScript files; the exact asset names here are
       illustrative, as the original markup was stripped when this page was archived -->
  <img src="/images/one.jpg" alt="">
  <img src="/images/two.jpg" alt="">
  <script src="/js/script.js"></script>
  <script src="/js/other.js"></script>
  <script>
    // Register the service worker
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/service-worker.js');
    }
  </script>
</body>
</html>

In the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker toolbox. It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push.

The Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns.

I've added the following code to the service-worker.js file.

(global => {
  'use strict';

  // Load the sw-toolbox library.
  importScripts('/js/sw-toolbox/sw-toolbox.js');

  // The route for any requests
  toolbox.router.get('/push', global.toolbox.fastest);

  toolbox.router.get('/images/(.*)', global.toolbox.fastest);

  toolbox.router.get('/js/(.*)', global.toolbox.fastest);

  // Ensure that our service worker takes control of the page as soon as possible.
  global.addEventListener('install', event => event.waitUntil(global.skipWaiting()));
  global.addEventListener('activate', event => event.waitUntil(global.clients.claim()));
})(self);

Let's break this code down further. Around line 4, I am importing the Service Worker toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I've told the toolbox to listen to these routes too.

The best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn't exist in cache, the code above will add it into cache so that we can retrieve it when it's needed again.

You may also notice the code global.toolbox.fastest - this is important because it gives you the compromise of fulfilling from the cache immediately, while firing off an additional HTTP request updating the cache for the next visit.

But what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to "push" everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the user's device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing!

Using this technique, the waterfall chart for a repeat visit should look like the image below.

If you look closely at the image above, you'll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache.

Whether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user's browser doesn't support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience.
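If you would rather roll your own server for the push side of things than lean on a CDN, Node.js can initiate pushes from its built-in http2 module. This is a minimal sketch rather than the article's actual demo - the certificate paths and the pushed stylesheet are placeholders:

const http2 = require('http2');
const fs = require('fs');

// http2 requires TLS; key.pem and cert.pem are placeholder paths
const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/push') {
    // Push the stylesheet before the browser has even parsed the HTML
    stream.pushStream({ ':path': '/css/styles.css' }, (err, pushStream) => {
      if (err) throw err;
      pushStream.respondWithFile('css/styles.css', { 'content-type': 'text/css' });
    });
    stream.respondWithFile('push.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);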
HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance.\nSummary\nWhen used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site\u2019s load times. Together they mean super fast first load times and even faster repeat views to a web page. Whilst this technique is really effective, it\u2019s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don\u2019t just simply \u201cpush\u201d everything; it could actually lead to having slower page load times. If you\u2019d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information. \nAll of the code in this example is available on my Github repo - if you have any questions, please submit an issue and I\u2019ll get back to you as soon as possible. \nIf you\u2019d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone\u2019s talk at this years Chrome Developer Summit. \nI\u2019d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live!", "year": "2016", "author": "Dean Hume", "author_slug": "deanhume", "published": "2016-12-15T00:00:00+00:00", "url": "https://24ways.org/2016/http2-server-push-and-service-workers/", "topic": "code"} {"rowid": 296, "title": "Animation in Design Systems", "contents": "Our modern front-end workflow has matured over time to include design systems and component libraries that help us stay organized, improve workflows, and simplify maintenance. These systems, when executed well, ensure proper documentation of the code available and enable our systems to scale with reduced communication conflicts. \nBut while most of these systems take a critical stance on fonts, colors, and general building blocks, their treatment of animation remains disorganized and ad-hoc. Let\u2019s leverage existing structures and workflows to reduce friction when it comes to animation and create cohesive and performant user experiences. \nUnderstand the importance of animation\nPart of the reason we treat animation like a second-class citizen is that we don\u2019t really consider its power. When users are scanning a website (or any environment or photo), they are attempting to build a spatial map of their surroundings. During this process, nothing quite commands attention like something in motion. \nWe are biologically trained to notice motion: evolutionarily speaking, our survival depends on it. For this reason, animation when done well can guide your users. It can aid and reinforce these maps, and give us a sense that we understand the UX more deeply. We retrieve information and put it back where it came from instead of something popping in and out of place. \n\n\u201cWhere did that menu go? Oh it\u2019s in there.\u201d \n\nFor a deeper dive into how animation can connect disparate states, I wrote about the Importance of Context-Shifting in UX Patterns for CSS-Tricks.\nAn animation flow on mobile.\nAnimation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn\u2019t anything very fancy or crazy. 
Just by showing their users that they cared about them, they stuck around, and the bounce rates dropped.

14-second generic loading screen. 22-second custom loading screen.

This also works for form submission. Giving your personal information over to an online process like a static form can be a bit harrowing. It becomes more harrowing without animation used as a signal that something is happening, and that some process is completing. That same animation can also entertain users and make them feel as though the wait isn't as long.

Eli Fitch gave a talk at CSS Dev Conf called: "Perceived Performance: The Only Kind That Really Matters", which is one of my favorite talk titles of all time. In it, he discussed how we tend to measure things like timelines and network requests because they are more quantifiable–and therefore easier to measure–but that measuring how a user feels when visiting the site is more important and worth the time and attention.

In his talk, he states "Humans over-estimate passive waits by 36%, per Richard Larson of MIT". This means that if you're not using animation to speed up how fast the wait time of a form submission loads, users are perceiving it to be much slower than the dev tools timeline is recording.

Rein it in

Unlike fonts, colors, and so on, we tend to add animation in as a last step, which leads to disorganized implementations that lack overall cohesion. If you asked a designer or developer if they would create a mockup or build a UI without knowing the fonts they were working with, they would dislike the idea. Not knowing the building blocks they're working with means that the design can fall apart or the development can break with something so fundamental left out at the start. Good animation works the same way.

The first step in reining in your use of animation is to perform an animation audit. Look at all the places you are using animation on your site, or the places you aren't using animation but probably should. (Hint: perceived performance of a loader on a form submission can dramatically change your bounce rates.)

Not sure how to perform a good audit? Val Head has a great chapter on it in her book, Designing Interface Animations, which has buckets of research and great ideas.

Even some beautiful component libraries that have animation in the docs make this mistake. You don't need every kind of animation, just like you don't need every kind of font. This bloats our code. Ask yourself questions like: do you really need a flip 180 degree animation? I can't even conceive of a place on a typical UI where that would be useful, yet most component libraries that I've seen have a mixin that does just this.

Which leads to…

Have an opinion

Many people are confused about Material Design. They think that Material Design is Motion Design, mostly because they've never seen anyone take a stance on animation before and document these opinions well. But every time you use Material Design as your motion design language, people look at your site and think GOOGLE. Now that's good branding.

By using Google's motion design language and not your own, you're losing out on a chance to be memorable on your own website.

What does having an opinion on motion look like in practice? It could mean you've decided that you never flip things. It could mean that your eases are always going to glide.
In that instance, you would put your efforts towards finding an ease that looks \u201cgliding\u201d and pulling out any transform: scaleX(-1) animation you find on your site. Across teams, everyone knows not to spend time mocking up flipping animation (even if they\u2019re working on an entirely different codebase), and to instead work on something that feels like it glides. You save time and don\u2019t have to communicate again and again to make things feel cohesive.\nCreate good developer resources\nSometimes people don\u2019t incorporate animation into a design system because they aren\u2019t sure how, beyond the base hover states. All animation properties can be broken into interchangeable pieces. This allows developers and designers alike to mix and match and iterate quickly, while still staying in the correct language. Here are some recommendations (with code and a demo to follow):\nCreate timing units, similar to h1, h2, h3. In a system I worked on recently, I called these t1, t2, t3. T1 would be reserved for longer pieces, down to t5 which is a bit like h5 in that it\u2019s the default (usually around .25 seconds or thereabouts).\nKeep animation easings for entrance, exit, entrance emphasis and exit emphasis that people can commonly refer to. This, and the animation-fill-mode, are likely to be the only two properties that can be reused for the entrance and exit of the animation.\nUse the animation-name property to define the keyframes for the animation itself. I would recommend starting with 5 or 6 before making a slew of them, and see if you need more. Writing 30 different animations might seem like a nice resource, but just like your color palette having too many can unnecessarily bulk up your codebase, and keep it from feeling cohesive. Think critically about what you need here. \nSee the Pen Modularized Animation for Component Libraries by Sarah Drasner (@sdras) on CodePen.\n\nThe example above is pared-down, but you can see how in a robust system, having pieces that are interchangeable cached across the whole system would save time for iterations and prototyping, not to mention make it easy to make adjustments for different feeling movement on the same animation easily.\nOne low hanging fruit might be a loader that leads to a success dialog. On a big site, you might have that pattern many times, so writing up a component that does only that helps you move faster while also allowing you to really zoom in and focus on that pattern. You avoid throwing something together at the last minute, or using a GIF, which are really heavy and mushy on retina. You can make singular pieces that look really refined and are reusable. \nReact and Vue Implementations are great for reusable components, as you can create a building block with a common animation pattern, and once created, it can be a resource for all. Remember to take advantage of things like props to allow for timing and easing adjustments like we have in the previous example!\nResponsive\nAt the very least we should ensure that interaction also works well on mobile, but if we\u2019d like to create interactions that take advantage of all of the gestures mobile has to offer, we can use libraries like zingtouch or hammer to work with swipe or multiple finger detection. With a bit of work, these can all be created through native detection as well.\nResponsive web pages can specify initial-scale=1.0 in the meta tag so that the device is not waiting the required 300ms on the secondary tap before calling action. 
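For reference, that viewport declaration is a single line of markup in the document head - this is the standard pattern rather than anything specific to these demos:

<meta name="viewport" content="width=device-width, initial-scale=1.0">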
Interaction for touch events must either start from a larger touch-target (40px \u00d7 40px or greater) or use @media(pointer:coarse) as support allows.\nBuy-in\nSometimes people don\u2019t create animation resources simply because it gets deprioritized. But design systems were also something we once had to fight for, too. This year at CSS Dev Conf, Rachel Nabors demonstrated how to plot out animation wants vs. needs on a graph (reproduced with her permission) to help prioritize them:\n\n\nThis helps people you\u2019re working with figure out the relative necessity and workload of the addition of these animations and think more critically about it. You\u2019re also more likely to get something through if you\u2019re proving that what you\u2019re making is needed and can be reused. \nGood compromises can be made this way: \u201cwe\u2019re not going to go all out and create an animated \u2018About Us\u2019 page like you wanted, but I suppose we can let our users know their contact email went through with a small progress and success notification.\u201d \nSuccessfully pushing smaller projects through helps build trust with your team, and lets them see what this type of collaboration can look like. This builds up the type of relationship necessary to push through projects that are more involved. It can\u2019t be overstressed that good communication is key.\nGet started!\nWith these tools and good communication, we can make our codebases more efficient, performant, and feel better for our users. We can enhance the user experience on our sites, and create great resources for our teams to allow them to move more quickly while innovating beautifully.", "year": "2016", "author": "Sarah Drasner", "author_slug": "sarahdrasner", "published": "2016-12-16T00:00:00+00:00", "url": "https://24ways.org/2016/animation-in-design-systems/", "topic": "code"} {"rowid": 289, "title": "Front-End Developers Are Information Architects Too", "contents": "The theme of this year\u2019s World IA Day was \u201cInformation Everywhere, Architects Everywhere\u201d. This article isn\u2019t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing amount of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People\u2019s ability to be successful online is unequivocally connected to the quality of the code that is written.\nPlaces made of information\nIn information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated:\n\nPeople live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds.\n25 Theses\n\nWe\u2019re creating structures which people rely on for significant parts of their lives, so it\u2019s critical that we carry out our work responsibly. This means we must use our construction materials correctly. 
Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML.

Semantics

The word "semantic" has its origin in Greek words meaning "significant", "signify", and "sign". In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building's size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don't need an "entrance" sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signifies their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building's purpose happens, but the effect is similar: the building is signifying something about its purpose.

HTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating:

Elements, attributes, and attribute values in HTML are defined … to have certain meanings (semantics). For example, the <ol> element represents an ordered list, and the lang attribute represents the language of the content.
HTML 5.1 Semantics, structure, and APIs of HTML documents

HTML's baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they're interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It's also what a front-end developer does, whether they realise it or not.

A brief introduction to information architecture

We're going to start by looking at what an information architect is. There are many definitions, and I'm going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is:

the individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information.
Of Patterns And Structures

To me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people.

Just as there are many definitions of what an information architect is, there are many for information architecture itself. I'm going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as:

The structural design of shared information environments.
The synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.
The art and science of shaping information products and experiences to support usability, findability, and understanding.
Information Architecture For The World Wide Web, 4th Edition

To me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015's State Of The Browser conference, Edd Sowden talked about the accessibility of <table>s. He discovered that by simply not using the semantically-correct <th> element to mark up headings, in some situations browsers will decide that a <table> is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by Léonie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page.
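To make that concrete, the HTML5 sectioning elements below each map to an implicit landmark role at the page level, so screen reader users get those navigation points without any extra ARIA attributes:

<header>…</header>   <!-- role="banner" -->
<nav>…</nav>         <!-- role="navigation" -->
<main>…</main>       <!-- role="main" -->
<aside>…</aside>     <!-- role="complementary" -->
<footer>…</footer>   <!-- role="contentinfo" -->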
Our definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we're going to use. Dan defines these terms as:

Ontology
The definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate.
What we mean when we say what we say.

Taxonomy
The arrangements of the parts. Developing systems and structures for what everything's called, where everything's sorted, and the relationships between labels and categories

Choreography
Rules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.

We now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states:

… data is not information. Information is data endowed with relevance and purpose.
Managing For The Future

If we use the correct semantic element to mark up content then we're developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we're creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an <h2> element should be relevant to its parent, which should be the <h1>. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you've never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:

by creating a list of headings so users can quickly scan the page for information
by using a keyboard command to cycle through one heading at a time

If we had a document for Christmas Day TV we might structure it something like this:

<h1>Christmas Day TV schedule</h1>
  <h2>BBC1</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>BBC2</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>ITV</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>Channel 4</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>

If I use VoiceOver to generate a list of headings, I get this:

Once I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the <h2>s:

If we hadn't used headings, or if we'd nested them incorrectly, our users would be frustrated.

Putting this together

Let's put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a <button>:
<button aria-controls="panel" aria-expanded="false">Menu</button>
<div id="panel" hidden>
  <!-- the panel of links; the attribute values here are illustrative,
       as the original markup was lost when this page was archived -->
</div>

There's quite a bit going on here. We're using the:

aria-controls attribute to architect a connection between the <button> and the panel it controls […]

(Note: className is the JSX way of setting a class on an element; this example uses React, but Stylable itself is framework-agnostic.)

I style it using a Stylable stylesheet (the .st.css suffix tells the preprocessor to process this file):

/* button.st.css */

/* note that the root class is automatically placed on the root HTML
   element by Stylable React integration */
.root {
  background: #b0e0e6;
}

.icon {
  display: block;
  height: 2em;
  background-image: url('./assets/btnIcon.svg');
}

.label {
  font-size: 1.2em;
  color: rgba(81, 12, 68, 1.0);
}

Note that Stylable allows all the CSS that you know and love to be included. As Drew Powers wrote in his review:

with Stylable, you get CSS, and every part of CSS. This seems like a "duh" observation, but this is significant if you've ever battled with a CSS-in-JS framework over a lost or "hacky" implementation of a basic CSS feature.

I can import my Button component into another component - this time, panel.jsx:

/* panel.jsx */
import * as React from 'react';
import {properties, stylable} from 'wix-react-tools';
import {Button} from '../button';
import style from './panel.st.css';

export const Panel = stylable(style)(() => (
  <Button className="cancelBtn" /> {/* usage inferred from panel.st.css below */}
));

In panel.st.css:

/* panel.st.css */
:import {
  -st-from: './button.st.css';
  -st-default: Button;
}

/* cancelBtn is of type Button */
.cancelBtn {
  -st-extends: Button;
  background: cornflowerblue;
}

/* targets the label of […] */

Color picker

This is our new color picker function. It targets the input element by its class and gets its value.

function getColor() {
  return document.querySelector(".color").value;
}

Up until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We'll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.

function drawLine() {
  //...code...
  context.strokeStyle = getColor();
  context.lineWidth = 4;
  context.lineCap = "round";

  //...code...
}

Clear button

This is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.

function clearCanvas() {
  if (confirm("Want to clear?")) {
    context.clearRect(0, 0, w, h);
  }
}

The method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner.

If we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up.

Let's add this event handler to init:

function init() {
  //...code...
  canvas.onpointermove = handleMouseMove;
  canvas.onpointerdown = handleMouseDown;
  canvas.onpointerup = stopDrawing;
  canvas.onpointerout = stopDrawing;
  document.querySelector(".clear").onclick = clearCanvas;
}

See the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.

Part 3: Draw with 2 lines

It's time to make a line appear where no pointer has gone before. A ghost line!

For that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it's going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn't matter which one we choose. Let's go with the x-axis.

Here is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!).

Now, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.

A sketch illustrating the mathematics of reflecting a point.

What happens to the x coordinates?

The variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them "the x coordinates". We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c.

What happens to the y coordinates?

What about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope.
This is computer graphics, remember? The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height.

This is the new code for the app's variables and the two lines: the one that fills the pointer's path and the one mirroring it across the x-axis.

function drawLine() {
  var a = prevX, a_ = a,
      b = prevY, b_ = h-b,
      c = currX, c_ = c,
      d = currY, d_ = h-d;

  //... code ...

  // Draw line #1, at the pointer's location
  context.moveTo(a, b);
  context.lineTo(c, d);

  // Draw line #2, mirroring the line #1
  context.moveTo(a_, b_);
  context.lineTo(c_, d_);

  //... code ...
}

In case this was too abstract for you, let's look at some actual numbers to see how this works.

Let's say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.

So b' = 10 - 2 = 8 and d' = 10 - 3 = 7.

We use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That's it, really. This is how it works for a single point, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that are similar to points in behavior.

If you are still confused, I don't blame you.

Here is the result. Draw something and see what happens.

See the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.

Part 4: Draw with 8 lines

I have made yet another confusing sketch, with points C and D, so you understand what we're trying to do. Later on we'll look at points E, F, G and H as well. The circled point is the one we're adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas.

A sketch illustrating points C and D.

This is the part where the math gets a bit mathier, as our drawLine function evolves further. We'll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let's add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).

function drawLine() {

  //... code ...

  // Reassign values
  a_ = w-a; b_ = b;
  c_ = w-c; d_ = d;

  // Draw the 3rd line
  context.moveTo(a_, b_);
  context.lineTo(c_, d_);

  // Reassign values
  a_ = w-a; b_ = h-b;
  c_ = w-c; d_ = h-d;

  // Draw the 4th line
  context.moveTo(a_, b_);
  context.lineTo(c_, d_);

  //... code ...

What is happening?

You might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That's because we want the symmetry to hold for a rectangular canvas as well, and this way it will.

Also, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It's for readability, documentation and communication.
Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous).

What happens to the x coordinates?

As you recall, our x coordinates are a (prevX) and c (currX).

For the third line we are adding, a' = w - a and c' = w - c, which means…

For the fourth line, the same thing happens to our x coordinates a and c.

What happens to the y coordinates?

As you recall, our y coordinates are b (prevY) and d (currY).

For the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis.

For the fourth line, b' = h - b and d' = h - d, which we've seen before: that's a reflection across the x-axis.

We have four more lines, or locations, to define. Note: the part of the code that's responsible for drawing a micro-line between the newly calculated coordinates is always the same:

  context.moveTo(a_, b_);
  context.lineTo(c_, d_);

We can leave it out of the next code snippets and just focus on the calculations, i.e., the reassignments.

Once again, we need some concrete examples to see where we're going, so here's another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code.

A sketch illustrating points E and F.

This is the code for E and F:

  // Reassign for 5
  a_ = w/2+h/2-b; b_ = w/2+h/2-a;
  c_ = w/2+h/2-d; d_ = w/2+h/2-c;

  // Reassign for 6
  a_ = w/2+h/2-b; b_ = h/2-w/2+a;
  c_ = w/2+h/2-d; d_ = h/2-w/2+c;

Their x coordinates are identical and their y coordinates are reversed to one another.

This one will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).

A sketch illustrating points G and H.

This is the code:

  // Reassign for 7
  a_ = w/2-h/2+b; b_ = w/2+h/2-a;
  c_ = w/2-h/2+d; d_ = w/2+h/2-c;

  // Reassign for 8
  a_ = w/2-h/2+b; b_ = h/2-w/2+a;
  c_ = w/2-h/2+d; d_ = h/2-w/2+c;
  //...code...
}

Once again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won't go into the full details, since this has been a long enough journey as it is, and I think we've covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.

I hope you had fun learning! This is our final app:

See the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.", "year": "2018", "author": "Hagar Shilo", "author_slug": "hagarshilo", "published": "2018-12-02T00:00:00+00:00", "url": "https://24ways.org/2018/the-art-of-mathematics/", "topic": "code"} {"rowid": 258, "title": "Mistletoe Offline", "contents": "It's that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom.
Those words can guide us every single day, but they are at the forefront of our minds during this special season.\nI am, of course, talking about Murphy, and the golden rule he gave unto us:\n\nAnything that can go wrong will go wrong.\n\nSo true! I mean, that\u2019s why we make sure we\u2019ve got nice 404 pages. It\u2019s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it\u2019s bound to happen sometime. Murphy\u2019s Law, innit?\nBut there are some Murphyesque situations where even your lovingly crafted 404 page won\u2019t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things than can\u2014and will\u2014go wrong.\nI guess there\u2019s nothing we can do about those particular situations, right?\nWrong!\nA service worker is a Murphy-battling technology that you can inject into a visitor\u2019s device from your website. Once it\u2019s installed, it can intercept any requests made to your domain. If anything goes wrong with a request\u2014as is inevitable\u2014you can provide instructions for the browser. That\u2019s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.\nIf you\u2019ve got a custom 404 page, why not make a custom offline page too?\nGet your server in order\nStep one is to make \u2026actually, wait. There\u2019s a step before that. Step zero. Get your site running on HTTPS, if it isn\u2019t already. You won\u2019t be able to use a service worker unless everything\u2019s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.\nIf you\u2019re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.\nMake an offline page\nAlright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here\u2019s an example of the custom offline page for this year\u2019s Ampersand conference.\nWhen you\u2019re done, publish the offline page at suitably imaginative URL, like, say /offline.html.\nPre-cache your offline page\nNow create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user\u2019s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:\naddEventListener('install', installEvent => {\n// put your instructions here.\n}); // end addEventListener\nIn this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. 
Here, I'm going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      JohnnyCache.addAll([
        '/offline.html'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener

I'm betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      JohnnyCache.addAll([
        '/offline.html',
        '/path/to/stylesheet.css',
        '/path/to/javascript.js',
        '/path/to/image.jpg'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener

Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.

Intercept requests

The next event you want to listen for is the fetch event. This is probably the most powerful—and, let's be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user's device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:

addEventListener('fetch', fetchEvent => {
  // What happens next is up to you!
}); // end addEventListener

Let's write a fairly conservative script with the following logic:

Whenever a file is requested,
First, try to fetch it from the network,
But if that doesn't work, try to find it in the cache,
But if that doesn't work, and it's a request for a web page, show the custom offline page instead.

Here's how that translates into JavaScript (note the return in front of caches.match; without it, the cached response would never reach the browser):

// Whenever a file is requested
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    // First, try to fetch it from the network
    fetch(request)
    .then( responseFromFetch => {
      return responseFromFetch;
    }) // end fetch.then
    // But if that doesn't work
    .catch( fetchError => {
      // try to find it in the cache
      return caches.match(request)
      .then( responseFromCache => {
        if (responseFromCache) {
          return responseFromCache;
        // But if that doesn't work
        } else {
          // and it's a request for a web page
          if (request.headers.get('Accept').includes('text/html')) {
            // show the custom offline page instead
            return caches.match('/offline.html');
          } // end if
        } // end if/else
      }) // end match.then
    }) // end fetch.catch
  ); // end respondWith
}); // end addEventListener

I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what's happening at each point in the code, I've written a whole book for you. It's the perfect present for Murphymas.

Hook up your service worker script

You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript.
Put this in an existing JavaScript file that you're calling in to every page on your site, or add this in a script element at the end of every page's HTML:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.

You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don't do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you're making sure that every request can be intercepted.

Go further!

Nicely done! You've made sure that if—no, when—a visitor can't reach your website, they'll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!

This is just the beginning. You can do more with service workers.

What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they're offline, you could show them the cached version.

Or, what if instead of reaching out the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.
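That cache-first, update-in-the-background idea is sometimes called stale-while-revalidate. A minimal sketch of it as a fetch handler might look like this (the cache name is arbitrary, and for web pages you'd still want the offline fallback from earlier):

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    caches.open('Johnny')
    .then( cache => {
      return cache.match(request)
      .then( responseFromCache => {
        // Fetch a fresh copy and quietly update the cache for next time
        const fetchPromise = fetch(request)
        .then( responseFromFetch => {
          cache.put(request, responseFromFetch.clone());
          return responseFromFetch;
        });
        // Serve from cache if we can; fall back to the network if we can't
        return responseFromCache || fetchPromise;
      });
    })
  );
});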
So many options! The hard part isn't writing the code, it's figuring out the steps you want to take. Once you've got those steps written out, then it's a matter of translating them into JavaScript.

Inevitably there will be some obstacles along the way—usually it's a misplaced curly brace or a missing parenthesis. Don't be too hard on yourself if your code doesn't work at first. That's just Murphy's Law in action.", "year": "2018", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2018-12-04T00:00:00+00:00", "url": "https://24ways.org/2018/mistletoe-offline/", "topic": "code"} {"rowid": 242, "title": "Creating My First Chrome Extension", "contents": "Writing a Chrome Extension isn't as scary as it seems!

Not too long ago, I used a Chrome extension called 20 Cubed. I'm far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.

Unfortunately, the developer stopped updating the extension and removed it from Chrome's extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?

Fortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the "old-fashioned" way. Sky's the limit!

Setup

But first, some things you'll need to know about before getting started:

Callbacks
Timeouts
Chrome Dev Tools

Developing with Chrome extension methods requires a lot of callbacks. If you've never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I'd highly recommend brushing up on that subject before getting started.

Hyperbole and a Half

Timeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn't consider the fact that I'd be juggling three timers. And I probably would've saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.

On the note of organization, abstraction is important! You might have any combination of the following:

The Chrome extension options page
The popup from the Chrome Menu
The windows or tabs you create
The background scripts

And that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you'll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.

Alright, now that you know all that up front, let's get going!

Documentation

TL;DR READ THE DOCS.

A few things to get started:

Read Google's primer on browser extensions
Have a look at their Getting started tutorial
Check out their overview on Chrome Extensions

This overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.

The manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here's what my manifest.json looked like, for example:

https://github.com/jennz0r/eye-rest/blob/master/manifest.json

Because I'm a visual learner, I found the images that describe the extension's architecture most helpful.

To clarify this diagram, the background.js file is the extension's event handler. It's constantly listening for browser events, which you'll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.

The Popup is the little window that appears when you click on an extension's icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { "default_popup": FILE_NAME_HERE }.

The Options page is exactly as it says. This displays customizable options only visible to the user when they right-click on the Chrome menu and choose "Options" under an extension. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.

Content scripts are any scripts that will interact with any web windows or tabs that the user has open.
These scripts will also interact with any tabs or windows opened by your extension.\nDebugging\nA quick note: don\u2019t forget the debugging tutorial!\nJust like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you\u2019re likely to have several inspector windows open \u2013 one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.\nFor example, I kept seeing the error \u201cThis request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.\u201d Well, it turns out there are limitations on how often you can sync stored information.\nAnother error I kept seeing was \u201cAlarm delay is less than minimum of 1 minutes. In released .crx, alarm \u201cALARM_NAME_HERE\u201d will fire in approximately 1 minutes\u201d. Well, it turns out there are minimum interval times for alarms.\nChrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help!\nMe adding a ton of `console.log`s while trying to debug my alarms.\nEye Rest Functionality\nOk, so what is the extension I created? Again, it\u2019s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:\n\nIf the extension is running AND\nIf the user has not clicked Pause in the Popup HTML AND\nIf the counter in the Popup HTML is down to 00:00 THEN\n\nOpen a new window with Timer HTML AND\nStart a 20 sec countdown in Timer HTML AND\nReset the Popup HTML counter to 20:00\n\nIf the Timer HTML is down to 0 sec THEN\n\nClose that window. Rinse. Repeat.\n\n\nSounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I decided to create, I decided to make one that\u2019s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn\u2019t realize until I started coding.\nFor visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it\u2019s a pun.)\nAPI\nNow let\u2019s discuss the APIs that I used to build this extension.\nAlarms\nWhat even are alarms? I didn\u2019t know either.\nAlarms are basically Chrome\u2019s setTimeout and setInterval. They exist because, as Google says\u2026\n\nDOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.\n\nFor more information, check out this background migration doc.\nOne interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn\u2019t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clearing any other alarms without that unique name.\nBackground Scripts\nFor Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file.\nI wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. 
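A sketch of what such a helper might look like - the function body below is a guess at the shape of it based on the description above, not the actual Eye Rest source:

// helpers.js (sketch; names and timings are illustrative)
const ALARM_NAME = 'eyeRest_' + Date.now(); // unique per load, see the alarms note above

function clearAndCreateAlarm() {
  chrome.alarms.clearAll(() => {
    chrome.alarms.create(ALARM_NAME, { delayInMinutes: 20 });
    // Stash the time of the next alarm so the popup can show a countdown
    localStorage.setItem('date', Date.now());
    localStorage.setItem('nextAlarmTime', Date.now() + 20 * 60 * 1000);
  });
}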
To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.\nI also wanted my Popup script to do the same things when the user has unpaused the extension\u2019s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.\nOther APIs\nWindows\nI use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.\nOne day, while coding late into the evening, I found it very confusing that the windows.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window\u2019s HTML. Until then, it hadn\u2019t dawned on me that the url could be relative. Duh. I was tired!\nI pass timer.html as the url option, as well as type, size, position, and other visual options.\nStorage\nMaybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome\u2019s storage is avoiding quotas and write operation maximums.\nI wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it\u2019s quite complicated and requires lots of writes. That\u2019s why I went with the user\u2019s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.\nDeclarative Content\nThe Declarative Content API allows you to show your extension\u2019s page action based on several types of matches, without needing to take a host permission or inject a content script. So you\u2019ll need this to get your extension to work in the browser!\nYou can see me set this in my Background script. Because I want my extension\u2019s popup to appear on every page one is browsing, I leave the page matchers empty.\nThere are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!\nThe Extension\nHere\u2019s what my original Popup looked like before I added styles.\nAnd here\u2019s what it looks like with new styles. I guess I\u2019m going for a Nickelodeon feel.\nAnd here\u2019s the Timer window and Popup together! \nPublishing\nPublishing is a cinch. You just zip up your files, create a new Google Developer account (or use an existing one), upload the files, add some details, and pay a one-time $5 fee. That\u2019s all! Then your extension will be available on the Chrome extension store! Neato :D\nMy extension is now available for you to install.\nConclusion\nI thought creating a time-based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it\u2019s definitely achievable with some time, persistence, and good ole Google searches.\nEventually, I\u2019d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012.
Or maybe a way to change the background colors of the Timer and Popup!\nEither way, with Eye Rest\u2019s framework built out, I\u2019m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.", "year": "2018", "author": "Jennifer Wong", "author_slug": "jenniferwong", "published": "2018-12-05T00:00:00+00:00", "url": "https://24ways.org/2018/my-first-chrome-extension/", "topic": "code"} {"rowid": 247, "title": "Managing Flow and Rhythm with CSS Custom Properties", "contents": "An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn\u2019t just make your web pages and views look better, but it can also improve the scan-ability. \nBrowsers ship with default CSS and these styles often create consistent rhythm for flow elements out of the box. The problem is though that we often reset these styles with a reset. Elements such as
<div> and <section>
    also have no default margin or padding associated with them. \nI\u2019ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS while also levelling the playing field for component driven front-ends with very little success. This experimentation is how I landed on the flow utility, though and I\u2019m going to show you how it works. Let\u2019s dive in!\nThe Flow utility\nWith the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it\u2019s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. \nThat\u2019s fine for 90% of contexts, but sometimes, it\u2019s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy.\nThe code\n.flow {\n --flow-space: 1em;\n} \n\n.flow > * + * { \n margin-top: var(--flow-space);\n}\nWhat this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don\u2019t have any idea what surrounds them. \nThe other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty about setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need it. Pretty cool, right? Let\u2019s look at some examples.\nA basic example\nSee the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/LXqerj\nWhat we\u2019ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there\u2019s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. \nBecause our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a
<h2>
in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we\u2019d affect everything because margin values would be calculated as 1.1X the font size. \nThis example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though? \nSee the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/qQgxaY\nI like lots of whitespace with my article layouts, so the 1em space isn\u2019t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances:\nh2 {\n --flow-space: 3rem;\n}\nNotice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it\u2019s better for accessibility to use flexible units like em, rem and %, so that a user\u2019s font size preferences are honoured. \nA more advanced example\nAlthough the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space.\nSee the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/ZmPGyL\nIn this example, we\u2019ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article\u2019s content would space itself:
<div class=\"media__content flow\">\n ...\n</div>
    \nSomething to remember though: the custom properties cascade in the same way that other CSS values do, so you\u2019ve got to keep that in mind. We\u2019ve got a great example of that in this example where because we\u2019ve got the flow utility on our .features component, which has a --flow-space override: the child elements of .features will inherit that value, so we\u2019ve had to set another value on the .features__list element.\n\u201cBut what about old browsers?\u201d, I hear you cry\nWe\u2019re using CSS Custom Properties that at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element\u2019s font-size:\n.flow {\n --flow-space: 1em;\n}\n\n.flow > * + * { \n margin-top: 1em;\n margin-top: var(--flow-space);\n}\nThanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them\u2014those that don\u2019t will ignore them. Yay for the cascade and progressive enhancement! \nWrapping up\nThis tiny little utility can bring great power for when you want to consistently space elements, vertically. It also\u2014thanks to the power of the modern web\u2014allows us to create contextual overrides without creating modifier classes or shame CSS. \nIf you\u2019ve got other methods of doing this sort of work, please let me know on Twitter. I\u2019d love to see what you\u2019re working on!", "year": "2018", "author": "Andy Bell", "author_slug": "andybell", "published": "2018-12-07T00:00:00+00:00", "url": "https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/", "topic": "code"} {"rowid": 257, "title": "The (Switch)-Case for State Machines in User Interfaces", "contents": "You\u2019re tasked with creating a login form. Email, password, submit button, done.\n\u201cThis will be easy,\u201d you think to yourself.\nLogin form by Selecto\nYou\u2019ve made similar forms many times in the past; it\u2019s essentially muscle memory at this point. You\u2019re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you\u2019ll have to translate the pixels to meaningful, responsive CSS values, but that\u2019s the least of your problems.\nAs you\u2019re writing up the HTML structure and CSS layout and styles for this form, you realize that you don\u2019t know what the successful \u201clogged in\u201d page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.\n\nWhat if login fails? Where do those errors show up?\nShould we show errors differently if the user forgot to enter their email, or password, or both?\nOr should the submit button be disabled?\nShould we validate the email field?\nWhen should we show validation errors \u2013 as they\u2019re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)\nWhen should the errors disappear?\nWhat do we show during the login process? Some loading spinner?\nWhat if loading takes too long, or a server error occurs?\n\nMany more questions come up, and you (and your designer) are understandably frustrated. 
The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.\nModeling Behavior\nDescribing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story \u2013 they often leave out potential edge-cases or small yet important bits of information.\nHowever, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.\nThe general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:\n\nstart - not submitted yet\nloading - submitted and logging in\nsuccess - successfully logged in\nerror - login failed\n\nAdditionally, we can describe an application as accepting a finite number of events \u2013 that is, all the possible events that can be \u201csent\u201d to the application, either from the user or some other external entity:\n\nSUBMIT - pressing the submit button\nRESOLVE - the server responds, indicating that login is successful\nREJECT - the server responds, indicating that login failed\n\nThen, we can combine these states and events to describe the transitions between them. That is, when the application is in one state, and an event occurs, we can specify what the next state should be:\n\nFrom the start state, when the SUBMIT event occurs, the app should be in the loading state.\nFrom the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.\nIf login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.\nFrom the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.\nOtherwise, if any other event occurs, don\u2019t do anything and stay in the same state.\n\nThat\u2019s a pretty thorough description, similar to a user story! It\u2019s also a bit more symbolic than a user story (e.g., \u201cwhen the SUBMIT event occurs\u201d instead of \u201cwhen the user presses the submit button\u201d), and that\u2019s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:\n\nEvery state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.\nFrom Visuals to Code\nDrawing a state machine doesn\u2019t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn\u2019t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.\nWith the state machine model described above, the same visual description can be mapped directly to code.
Traditionally, and as the title suggests, this is done using switch/case statements:\nfunction loginMachine(state, event) {\n switch (state) {\n case 'start':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n case 'loading':\n if (event === 'RESOLVE') {\n return 'success';\n } else if (event === 'REJECT') {\n return 'error';\n }\n break;\n case 'success':\n // Accept no further events\n break;\n case 'error':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n default:\n // This should never occur\n return undefined;\n }\n\n // For any unhandled event, stay in the current state\n return state;\n}\n\nconsole.log(loginMachine('start', 'SUBMIT'));\n// => 'loading'\nThis is fine (I suppose) but personally, I find it much easier to use objects:\nconst loginMachine = {\n initial: \"start\",\n states: {\n start: {\n on: { SUBMIT: 'loading' }\n },\n loading: {\n on: {\n REJECT: 'error',\n RESOLVE: 'success'\n }\n },\n error: {\n on: {\n SUBMIT: 'loading'\n }\n },\n success: {}\n }\n};\n\nfunction transition(state, event) {\n const stateNode = loginMachine.states[state]; // Look up the state\n return (stateNode.on && stateNode.on[event]) // Look up the next state based on the event\n || state; // If not found, return the current state\n}\n\nconsole.log(transition('start', 'SUBMIT'));\nAs you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here:\n\nA Common Language Between Designers and Developers\nAlthough finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can:\n\nidentify all possible states, and potentially missing states\ndescribe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)\neliminate impossible states and identify states that are \u201cunreachable\u201d (have no entry transition) or \u201csunken\u201d (have no exit transition)\nadd features with full confidence of knowing what other states it might affect\nsimplify redundant states or complex user flows\ncreate test paths for almost every possible user flow, and easily identify edge cases\ncollaborate better by understanding the entire application model equally.\n\nNot a New Idea\nI\u2019m not the first to suggest that state machines can help bridge the gap between design and development.\n\nVince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines\nUser flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.\nIn 1999, Ian Horrocks wrote a book titled \u201cConstructing the User Interface with Statecharts\u201d, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs.
The ideas in the book are still relevant today.\nMore than a decade earlier, David Harel published \u201cStatecharts: A Visual Formalism for Complex Systems\u201d, in which the statechart - an extended hierarchical state machine model - is born.\n\nState machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:\n\nVisualized modeling\nPrecise diagrams\nAutomatic code generation\nComprehensive test coverage\nAccommodation of late-breaking requirements changes\n\nMoving Forward\nIt\u2019s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:\n\nThe World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications\nThe Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling\nI gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.\nI have also been working on XState, which is a library for \u201cstate machines and statecharts for the modern web\u201d. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).\n\nI\u2019m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.", "year": "2018", "author": "David Khourshid", "author_slug": "davidkhourshid", "published": "2018-12-12T00:00:00+00:00", "url": "https://24ways.org/2018/state-machines-in-user-interfaces/", "topic": "code"} {"rowid": 255, "title": "Inclusive Considerations When Restyling Form Controls", "contents": "I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility.\nI would like to begin by saying all these things. However, they\u2019re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I\u2019m thinking of next year.\nReturning to reality, styling form controls is more tricky and time consuming these days rather than flat out \u201chard\u201d. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. 
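For instance, a plain text input will usually accept direct styling through standard selectors quite happily. A minimal, hedged sketch (the selectors are standard CSS; the colours and sizes are purely illustrative, not from any design in particular):\ninput[type='text'] {\n border: 2px solid #446;\n border-radius: 4px;\n padding: 0.5em;\n}\n\ninput[type='text']:focus {\n outline: 3px solid #fc0;\n outline-offset: 2px;\n}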
As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you\u2019re tasked to match.\nHowever, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they\u2019ve been designed for, then we\u2019ve potentially deprived many people the experiences they deserve.\nQuick level setting\nGetting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following:\n\nMany form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels.\nIt is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics. \nMake sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they\u2019re judged solely by their visual presentation in a single browser, or with limited AT testing.\n\nVisually resetting and restyling form controls\nOver the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors. \nAs I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation.\nSee the Pen form control styling comparisons by Scott (@scottohara) on CodePen.\nhttps://codepen.io/scottohara/pen/gZOrZm/\nOver the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated. \nIf you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control\u2014all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browser respect without you even realizing. \n\n You ever try playing with the accessibility settings of your display on macOS, or similar Windows setting?\n \nIt is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements:\n\nNon-standard\nThis feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. 
There may also be large incompatibilities between implementations and the behavior may change in the future.\n\nWhile this may be a deterrent for some, it\u2019s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won\u2019t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all.\nCan\u2019t make it? Fake it.\nInternet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won\u2019t render as intended. \nAdditionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you\u2019ll need to take a different approach in styling.\nGetting clever with markup and CSS\nThe following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox.\nSee the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen.\nhttps://codepen.io/scottohara/pen/RqEayN/\nCustomizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen?\nAs we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It\u2019s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others.\nLimitations to cleverness\nCreative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control\u2019s inherent usability and accessibility for each control we take this approach to.\nHowever, this method of restyling still doesn\u2019t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don\u2019t have a native HTML element equivalent, such as a switch or multi-thumb range slider? Maybe there\u2019s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control\u2019s behavior to be worth the level of effort to implement. 
Here\u2019s where we need to take another approach.\nUsing ARIA when appropriate\nSometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we\u2019re not leveraging a native HTML control as our foundation, it doesn\u2019t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA).\nARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren\u2019t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA:\n\nIf you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.\n\nBy using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible.\nExceptions to the rule\nOne example of a control that allows for an exception to the first rule of Using ARIA would be a switch control.\nSwitches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized.\nWhile a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain which communicate a different way of interacting with the control than what you might expect from a native switch control.\nSee the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen.\nhttps://codepen.io/scottohara/pen/XyvoeE/\nBy adding a role=\"switch\" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, it\u2019s inherent ability to be focused by Tab key, and Space key to toggle state.\nBut while this is a valid approach to take in building a switch, how does this actually match up to reality?\nDoes it pass the test(s)?\nWhether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we\u2019ve restyled or rebuilt works the way people expect it to, if not better.\nWhat we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example:\n\n\nIs the control implemented in all supported browsers?\nIf not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field? \nIf so: is each browser\u2019s implementation a good user experience? 
Is there room for improvement that can be tested against the native baseline? \n\n\nTest with multiple input devices.\nWhere the control is implemented, what is the quality of the user experience when using different input devices, such as mouse, touchscreen, keyboard, speech recognition or switch device, to name a few. \nYou\u2019ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well.\n\n\nHow well does the control take to custom styles?\nIf a control can be styled enough to not need to be rebuilt from scratch, that\u2019s great! But make sure that there are no adverse affects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling. \nAlways test with different browser and AT pairings to ensure nothing is lost when controls are restyled. \n\n\nDo specifications match reality?\nIf recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations? \nFor instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don\u2019t match people\u2019s expectations, then maybe another solution is in order? \n\n\nWrapping up\nWhile styling form controls is definitely easier than it\u2019s ever been, that doesn\u2019t mean that it\u2019s at all simple, nor will it likely ever be. The level of difficulty you\u2019re going to face is going to depend entirely on what it is you\u2019re hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you\u2019ll still be reliant on browsers and assistive technologies being able to fully understand the component they\u2019ve been presented.\nForms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option.\n2018 didn\u2019t end up being the year we got full customization of form controls sorted out. But that\u2019s OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs. \nAnd hey. 
There\u2019s always next year, right?", "year": "2018", "author": "Scott O'Hara", "author_slug": "scottohara", "published": "2018-12-13T00:00:00+00:00", "url": "https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/", "topic": "code"} {"rowid": 243, "title": "Researching a Property in the CSS Specifications", "contents": "I frequently joke that I\u2019m \u201creading the specs so you don\u2019t have to\u201d, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However waiting for someone like me to write an article about something is a pretty slow way to get the information you need. Sometimes people like me get things wrong, or specifications change after we write a tutorial. \nWhat if you could just look it up yourself? That\u2019s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs.\nWhere are the CSS Specifications?\nThe easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4.\nWho are the specifications for?\nCSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected.\nWhich version of a spec should I look at?\nThere are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (which is always the newest version) or at a date, which will be the date of that publication. If I\u2019m referring to a particular Working Draft in an article I\u2019ll typically link to the dated version. That way if the information changes it is possible for someone to see where I got the information from at the time of writing.\nIf you want the very latest additions and changes to the spec, then the Editor\u2019s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor\u2019s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor\u2019s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed.\nIf you are especially keen on seeing updates to specifications keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated.\nHow to approach a spec\nThe first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. 
Therefore, if you are looking at a vast spec, know that you probably won\u2019t need to read all the way to the bottom, or read every section in detail.\nThe second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. There are some key specifications that many other specifications draw on, such as:\n\nValues and Units\nIntrinsic and Extrinsic Sizing\nBox Alignment\n\nWhen something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore.\nResearching your property\nAs an example we will take a look at the property grid-auto-rows; this property defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property.\nYou might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties.\nClicking on a property will take you to the TR version of the spec, the latest published draft, and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns.\nValue\nFor value we can see that the property accepts a value of <track-size>+. The next thing to do is to find out what that actually means; clicking <track-size> will take you to where it is defined in the Grid spec.\nThe <track-size> value is defined as accepting various values:\n\n<track-breadth>\nminmax( <inflexible-breadth> , <track-breadth> )\nfit-content( <length-percentage> )\n\nWe need to head down the rabbit hole to find out what each of these mean. From here we essentially go down line by line until we have unpacked the value of track-size.\n<track-breadth> is defined just below as:\n\n<length-percentage>\n<flex>\nmin-content\nmax-content\nauto\n\nSo these are all things that would be valid to use as a value for grid-auto-rows.\nThe first value of <length-percentage> is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a <length>, in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values.\ngrid-auto-rows: 100px;\ngrid-auto-rows: 1em;\ngrid-auto-rows: 30%;\nWhen using percentages, it is important to know what it is a percentage of, as a percentage has to resolve from something.
There is text in the spec which explains how column and row percentages work.\n\n\u201c<percentage> values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.\u201d\n\nThis means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid.\nThe second value of <flex> is also defined here, as \u201cA non-negative dimension with the unit fr specifying the track\u2019s flex factor.\u201d This is the fr unit, and the spec links to a fuller definition of fr as this unit is only used in Grid Layout, so it is therefore defined in the grid spec. We now know that a valid value would be:\ngrid-auto-rows: 1fr;\nThere is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum. This means that 1fr is really minmax(auto, 1fr). This is why having a number of tracks all at 1fr does not mean that all are equal sized, as a larger item in any of the tracks would have a large auto size and therefore would be larger after spare space had been distributed.\nWe then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out.\nFor example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary.\nWe can now add min-content and max-content to our available values.\ngrid-auto-rows: min-content;\ngrid-auto-rows: max-content;\nThe final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track will grow to fit the content used. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, that means these tracks stretch by default. Tracks using other types of length will not behave like this.\ngrid-auto-rows: auto;\nSo, this was the list for <track-breadth>; the next possible value is minmax( <inflexible-breadth> , <track-breadth> ). So this is telling us that we can use minmax() as a value: the final (max) value will be <track-breadth>, and we have already unpacked all of the allowable values there. The first value (min) is detailed as an <inflexible-breadth>. If we look at the values for this, we discover that they are the same as <track-breadth>, minus the <flex> value:\n\n<length-percentage>\nmin-content\nmax-content\nauto\n\nWe already know what all of these do, so we can add possible minmax() values to our list of values for <track-size>.\ngrid-auto-rows: minmax(100px, 200px);\ngrid-auto-rows: minmax(20%, 1fr);\ngrid-auto-rows: minmax(1em, auto);\ngrid-auto-rows: minmax(min-content, max-content);\nFinally we can use fit-content( <length-percentage> ).
We can see that fit-content takes a value of <length-percentage>, which we already know to be either a length unit, or a percentage. The spec details how fit-content is worked out, and it essentially allows a track which acts as if you had used the max-content keyword, however the track stops growing when it hits the length passed to it.\ngrid-auto-rows: fit-content(200px);\ngrid-auto-rows: fit-content(20%);\nThose are all of our possible values, and to round things off, check again at the initial value, you can see it has a little + sign next to it, click that and you will be taken to the CSS Values and Units module to find that, \u201cA plus (+) indicates that the preceding type, word, or group occurs one or more times.\u201d This means that we can pass a single track size to grid-auto-rows or multiple track sizes as a space separated list. Below the box is an explanation of what happens if you pass in more than one track size:\n\n\u201cIf multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.\u201d\n\nTherefore with the following CSS, if five implicit rows were needed they would be as follows:\n\n100px\n1fr\nauto\n100px\n1fr\n\n.grid {\n display: grid;\n grid-auto-rows: 100px 1fr auto;\n}\nInitial\nWe can now move to the next line in the box, and you\u2019ll be glad to know that it isn\u2019t going to require as much unpacking! This simply defines the initial value for grid-auto-rows. If you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that they will use if they are invoked as part of the usage of the specification they are in, but you do not set a value for them. In the case of grid-auto-rows it is used whenever rows are created in the implicit grid, so it needs to have a value to be used even if you do not set one.\nApplies to\nThis line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers.\nInherited\nThis tells us if the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows it is not inherited. A property such as color is inherited, so you do not need to set it on each element.\nPercentages\nAre percentages allowed for this property, and if so how are they calculated. In this case we are referred to the definition for grid-template-columns and grid-template-rows where we discover that the percentage is from the corresponding dimension of the content area.\nMedia\nThis defines the media group that the property belongs to. In this case visual.\nComputed Value\nThis details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute.\nCanonical Order\nIf you have a property\u2013generally a shorthand property\u2013which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property.
In general you don\u2019t need to worry about this value in the table.\nAnimation Type\nThis details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet!\nThat\u2019s (mostly) it!\nSometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at as they will highlight usage of the property that the spec editor has felt could use an example. There may also be some additional text explaining anything specific to this property.\nIn selecting grid-auto-rows I chose a fairly complex property in terms of the work we needed to do to unpack the value. Many properties are far simpler than this. However ultimately, even when you come across a complex value, it really is just a case of stepping through the definitions until you come to the bottom of the rabbit hole.\nBeing able to work out what is valid for each property is incredibly useful. It means you don\u2019t waste time trying to use a value that doesn\u2019t work for that property. You also may find that there are values you weren\u2019t aware of, that solve problems for you.\nFurther reading\nSpecifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need.\nYou may also find useful:\n\nValue Definition Syntax on MDN.\nThe MDN Glossary defines many common terms.\nUnderstanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax.\nHow to read W3C Specs - from 2001 but still relevant.\n\nI hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else can be very useful indeed.", "year": "2018", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2018-12-14T00:00:00+00:00", "url": "https://24ways.org/2018/researching-a-property-in-the-css-specifications/", "topic": "code"} {"rowid": 244, "title": "It\u2019s Beginning to Look a Lot Like XSSmas", "contents": "I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional.\nIt\u2019s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it.\nWith platforms like GitHub and npm, giving the gift of code is so easy it\u2019s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project.
But like any gift-giving, it\u2019s also risky.\nVulnerabilities and Open Source\nOpen source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps.\nGraph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017\nIn the first 24 ways article this year, Katie referenced a few different types of vulnerability:\n\nCross-site Request Forgery (also known as CSRF)\nSQL Injection\nCross-site Scripting (also known as XSS)\n\nThere are many more types of vulnerability, and those that live under the same category share similarities. \nFor example, my favourite \u2013 is it weird to have a favourite vulnerability? \u2013 is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks \u2013 share it generously with your colleagues.\nMost vulnerabilities like this are not added intentionally \u2013 they\u2019re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library. \nOthers, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer\u2019s curiosity.\nAnother tactic to get developers to unwittingly install malicious packages into their codebase is \u201ctyposquatting\u201d \u2013 back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env). \nThis is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas.\nSanta\u2019s on his sleigh\nIf you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn\u2019t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too.\nLet\u2019s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that\u2019s just the tip of the iceberg \u2013 have a look at the full dependency tree which contains 109 dependencies \u2013 more dependencies than there are Christmas puns in this article. 
Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies.\nFixing vulnerabilities \u2013 the ultimate christmas gift\nIf you\u2019re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy.\nWhen you find out about a new vulnerability, you have some options:\nFix the vulnerability via an upgrade\nYou can often fix a vulnerability by upgrading the library to the latest version. Make sure you\u2019re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version.\nPatch the vulnerable code\nSometimes, a fix for a vulnerable library isn\u2019t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won\u2019t change anything else.\nSwitch to a different library\nIf the library you\u2019re using has no fix or patch, you may be better of switching it out for another one, particularly if it looks like it\u2019s being unmaintained.\nResponsibly disclosing vulnerabilities\nKnowing how to responsibly disclose vulnerabilities is something I\u2019m ashamed to admit that I didn\u2019t know about before I joined a security company. But it\u2019s so important! On discovering a new vulnerability, a developer has a few options: \n\nA malicious developer will exploit that vulnerability for their own gain. \nA reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited.\nAn ethical and aware developer will follow what\u2019s known as a \u201cresponsible disclosure process\u201d. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too.\n\nIt\u2019s important to understand this process if you\u2019re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code.\nSo what does responsible disclosure look like? I\u2019ll take Node.js\u2019s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There\u2019s a separate process for bug found in third-party npm packages). Once you\u2019ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they\u2019ll give you regular updates about the progress they\u2019re making towards fixing it. As part of this, they\u2019ll figure out which versions are affected, and prepare fixes for them. 
They\u2019ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org.\nTim Kadlec published an in-depth article about responsible disclosures if you\u2019re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed.\nEncourage responsible disclosure\nAdd a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability \u2013 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn\u2019t published one.\nStats from the State of Open Source Security 2017\nBug bounties\nSome companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, they also reduce the enticement to exploit a vulnerability rather than report it for a quick cash reward. HackerOne is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. WordPress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program.\nIf you don\u2019t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets offering a report about a critical vulnerability in exchange for money. \n\nOn one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question.\n\nA gift worth giving\nIt\u2019s time to brush the dust off those old gifts that we shared and forgot about. Practise good hygiene and run them through your favourite security tool \u2013 I\u2019m just a little biased towards Snyk, but as Katie mentioned, there\u2019s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities.\nStats from the State of Open Source Security 2017\nMost importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, contribute fixes to your favourite open source projects, too.", "year": "2018", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2018-12-17T00:00:00+00:00", "url": "https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/", "topic": "code"} {"rowid": 256, "title": "Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist", "contents": "We\u2019re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology!
Using iNaturalist and Observable Notebooks we\u2019re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.\n*(a Naturalist is someone who likes learning about nature, not someone who\u2019s a fan of being naked, that\u2019s a \u2018Naturist\u2019\u2026 different thing!)\nLooking for critters in rocky intertidal habitats\nOne of my favourite things to do is to go rockpooling, or as we call it over here in California, \u2018tidepooling\u2019. Amounting to the same thing, it\u2019s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this \u2018rocky intertidal habitat\u2019.\nA particularly exciting creature that lives here is the Nudibranch, a type of super colourful \u2018sea slug\u2019. There are over 3000 species of Nudibranch worldwide. (The word \u201cnudibranch\u201d comes from the Latin nudus, naked, and the Greek \u03b2\u03c1\u03b1\u03bd\u03c7\u03b9\u03b1 / brankhia, gills.)\n\nThey are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the tides not being as low as in our Winter and Spring seasons.\nMy favourite place to go tidepooling here is Pillar Point in Half Moon Bay (at other times of the year more famously known for the surf competition \u2018Mavericks\u2019). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones, as well as being relatively accessible.\n\nI was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn\u2019t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year, is one of the many superpower goals of every budding Naturalist. \nUsing technology and the crowdsourced species observations of the iNaturalist community, we can shortcut our way to this superpower!\nFinding nearby animals with iNaturalist\nWe\u2019re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.\nI\u2019ve been using iNaturalist for over two years to record and identify plants and animals that I\u2019ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I\u2019ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting. \nThis process is great because once an observation has been identified by at least two people it becomes \u2018verified\u2019 and is considered research grade.
Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.\n\niNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the \u2018Try it out\u2019 button.\n\nYou\u2019ll then get a URL that looks a bit like\nhttps://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at\nwhich you can call and interrogate using a programming language of your choice.\nIf you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).\nRapid development using Observable Notebooks\nWe\u2019re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python, which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn\u2019t require any setup at all.\nYou can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example. \nEach \u2018notebook\u2019 consists of a page with a column of \u2018cells\u2019, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction from Observable (and D3) creator Mike Bostock.\nDeveloping your Naturalist superpowers\nIf you have an idea of what plants and critters you might see in a place at the time you visit, you can home in on what you want to study and train your Naturalist eye to better identify the life around you.\nFor our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.\nWe could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef.
We can use this tool to draw the area we want to search within: boundingbox.klokantech.com\n\nThe tool lets you export the bounding box in several forms using the dropdown at the bottom left under the map. We are going to use the \u2018DublinCore\u2019 format as it\u2019s closest to the format needed by the iNaturalist API.\n westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811\nA quick map primer:\nThe higher the latitude the more north it is\nThe lower the latitude the more south it is\nLatitude 0 = the equator\n\nThe higher the longitude the more east it is of Greenwich\nThe lower the longitude the more west it is of Greenwich\nLongitude 0 = Greenwich\nIn the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:\nnelat = highest latitude = north limit = 37.499811\nnelng = highest longitude = east limit = -122.492738\nswlat = smallest latitude = south limit = 37.492805\nswlng = smallest longitude = west limit = -122.50542\nAs API parameters these look like this:\n?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542\nThese parameters in this format can be used for most of the iNaturalist API methods.\nNudibranch seasonality in Pillar Point\nWe can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.\nIn addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist\u2019s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent \u2018Order Nudibranchia\u2019. \nAnother useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words, e.g. \u2018Glaucus atlanticus\u2019, the first Latin word is the \u2018Genus\u2019, like a family name, \u2018Glaucus\u2019, and the second word identifies that particular species, like a given name, \u2018atlanticus\u2019. \nThe two Latin words together indicate a specific species. The term we use colloquially to refer to a type of animal often differs wildly from region to region, and sometimes the same common name in two countries can refer to two different species. The common names for Glaucus atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead.\nThe following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:\npillar_point_counts_per_week = fetch(\n \"https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true\"\n ).then(response => {\n return response.json();\n})\nOur next step is to take this data and draw a graph! We\u2019ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks.
\n(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)\nThe iNaturalist API returns data that looks like this:\n{\n \"total_results\": 53,\n \"page\": 1,\n \"per_page\": 53,\n \"results\": {\n \"week_of_year\": {\n \"1\": 136,\n \"2\": 20,\n \"3\": 150,\n \"4\": 65,\n \"5\": 186,\n \"6\": 74,\n \"7\": 47,\n \"8\": 87,\n \"9\": 64,\n \"10\": 56,\n \u2026\nBut for our Vega-Lite graph we need data that looks like this:\n[{\n \"week\": \"01\",\n \"value\": 136\n}, {\n \"week\": \"02\",\n \"value\": 20\n}, ...]\nWe can convert what we get back from the API to the second format using a loop that iterates over the object keys:\nobjects_to_plot = {\n let objects = [];\n Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) {\n objects.push({\n week: `Wk ${week_index.toString()}`,\n observations: pillar_point_counts_per_week.results.week_of_year[week_index]\n });\n })\n return objects;\n}\nWe can then plug this into Vega-Lite to draw us a graph:\nvegalite({\n data: {values: objects_to_plot},\n mark: \"bar\",\n encoding: {\n x: {field: \"week\", type: \"nominal\", sort: null},\n y: {field: \"observations\", type: \"quantitative\"}\n },\n width: width * 0.9\n})\n\nIt\u2019s worth noting that we have a lot of observations of Nudibranchs, particularly at Pillar Point, due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences. \nSo, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names.\npillar_point_nudibranches = {\n let api_results = await fetch(\n \"https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true\"\n ).then(r => r.json())\n\n let species_list = api_results.results.map(i => ({\n value: i.taxon.id,\n label: `${i.taxon.preferred_common_name} (${i.taxon.name})`\n }));\n\n return species_list\n}\nWe can create an interactive select box by importing code from Jeremy Ashkenas\u2019 Observable Notebook: add import {select} from \"@jashkenas/inputs\" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn\u2019t matter - if one cell is referenced by any other cell then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another!\nviewof select_species = select({\n title: \"Which Nudibranch do you want to see seasonality for?\",\n options: [{value: \"\", label: \"All the Nudibranchs!\"}, ...pillar_point_nudibranches],\n value: \"\"\n})\nThen we go back to our old favourite, the histogram API, just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113.\npillar_point_counts_per_month_per_species = fetch(\n `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`\n).then(r => r.json())\nNow for the fun graph bit!
As we did before, we re-format the result of the API into a format compatible with Vega-Lite:\nobjects_to_plot_species_month = {\n let objects = [];\n Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) {\n objects.push({\n month: (new Date(2018, (month_index - 1), 1)).toLocaleString(\"en\", {month: \"long\"}),\n observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]\n });\n })\n return objects;\n}\n(Note that in the above code we are creating a date object with our specific month in, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month)\nAnd we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!\nvegalite({\n data: {values: objects_to_plot_species_month},\n mark: \"bar\",\n encoding: {\n x: {field: \"month\", type: \"nominal\", sort:null},\n y: {field: \"observations\", type: \"quantitative\"}\n },\n width: width * 0.9\n})\nNow we can see when is the best time of year to plan to go tidepooling in Pillar Point if we want to find a specific species of Nudibranch.\n\nThis tool is great for planning when to go rockpooling at Pillar Point, but what if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs?\nWell\u2026 we can create ourselves a dynamic guide, with a list of the species, their photo, their name, and how many times they have been observed in that month of the year!\nOur select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month.\nviewof selected_month = select({\n title: \"When do you want to see Nudibranchs?\",\n options: [\n { label: \"Whenever\", value: \"\" },\n { label: \"January\", value: \"1\" },\n { label: \"February\", value: \"2\" },\n { label: \"March\", value: \"3\" },\n { label: \"April\", value: \"4\" },\n { label: \"May\", value: \"5\" },\n { label: \"June\", value: \"6\" },\n { label: \"July\", value: \"7\" },\n { label: \"August\", value: \"8\" },\n { label: \"September\", value: \"9\" },\n { label: \"October\", value: \"10\" },\n { label: \"November\", value: \"11\" },\n { label: \"December\", value: \"12\" },\n ],\n value: \"\"\n })\nWe can then use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We\u2019ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name.\nall_species_data = fetch(\n `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`\n).then(r => r.json())\nYou can render HTML directly in a notebook cell using Observable\u2019s html tagged template literal:\nhtml`<h3>If you go to Pillar Point ${\n {\"\": \"\",\n \"1\":\"in January\",\n \"2\":\"in February\",\n \"3\":\"in March\",\n \"4\":\"in April\",\n \"5\":\"in May\",\n \"6\":\"in June\",\n \"7\":\"in July\",\n \"8\":\"in August\",\n \"9\":\"in September\",\n \"10\":\"in October\",\n \"11\":\"in November\",\n \"12\":\"in December\",\n }[selected_month]\n} you might see\u2026</h3>\n<div>\n${all_species_data.results.map(s => `<div>\n <h4>${s.taxon.name}</h4>\n <img src=\"${s.taxon.default_photo.medium_url}\">\n <p>Seen ${s.count} times</p>\n</div>`)}\n</div>`
\nThese few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month!\n\nPlay with it yourself in this Observable Notebook.\nConclusion\nI hope that by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks, and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new \u2018knowledge of nature\u2019 superpower.\nLastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.\nHere is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.", "year": "2018", "author": "Natalie Downe", "author_slug": "nataliedowne", "published": "2018-12-18T00:00:00+00:00", "url": "https://24ways.org/2018/observable-notebooks-and-inaturalist/", "topic": "code"} {"rowid": 249, "title": "Fast Autocomplete Search for Your Website", "contents": "Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.\nI\u2019m going to build a search engine for 24 ways that\u2019s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I\u2019ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.\n\nTry it out here, then read on to see how I built it.\nFirst step: crawling the data\nThe first step in building a search engine is to grab a copy of the data that you plan to make searchable.\nThere are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don\u2019t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.\nI\u2019m going to do exactly that against 24 ways: I\u2019ll build a simple crawler using wget, a command-line tool that features a powerful \u201crecursive\u201d mode that\u2019s ideal for scraping websites.\nWe\u2019ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.\nThen we\u2019ll tell wget to recursively crawl the website, using the --recursive flag.\nWe don\u2019t want to fetch every single page on the site - we\u2019re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017\nWe want to be polite, so let\u2019s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2\nThe first time I ran this, I accidentally downloaded the comments pages as well. We don\u2019t want those, so let\u2019s exclude them from the crawl using -X \"/*/*/comments\".\nFinally, it\u2019s useful to be able to run the command multiple times without downloading pages that we have already fetched.
We can use the --no-clobber option for this.\nTie all of those options together and we get this command:\nwget --recursive --wait 2 --no-clobber \n -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 \n -X \"/*/*/comments\" \n https://24ways.org/archives/ \nIf you leave this running for a few minutes, you\u2019ll end up with a folder structure something like this:\n$ find 24ways.org\n24ways.org\n24ways.org/2013\n24ways.org/2013/why-bother-with-accessibility\n24ways.org/2013/why-bother-with-accessibility/index.html\n24ways.org/2013/levelling-up\n24ways.org/2013/levelling-up/index.html\n24ways.org/2013/project-hubs\n24ways.org/2013/project-hubs/index.html\n24ways.org/2013/credits-and-recognition\n24ways.org/2013/credits-and-recognition/index.html\n...\nAs a quick sanity check, let\u2019s count the number of HTML pages we have retrieved:\n$ find 24ways.org | grep index.html | wc -l\n328\nThere\u2019s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren\u2019t linked in the /archives/ yet so we need to point our crawler at the site\u2019s front page instead:\nwget --recursive --wait 2 --no-clobber \n -I /2018 \n -X \"/*/*/comments\" \n https://24ways.org/\nThanks to --no-clobber, this is safe to run every day in December to pick up any new content.\nWe now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let\u2019s use them to build ourselves a search index.\nBuilding a search index using SQLite\nThere are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.\nI\u2019m going to use something that\u2019s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.\nSQLite is the world\u2019s most widely deployed database, even though many people have never even heard of it. That\u2019s because it\u2019s designed to be used as an embedded database: it\u2019s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!\nSQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn\u2019t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.\nThis doesn\u2019t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.\nIt turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.\nI\u2019ve been doing a lot of work with SQLite recently, and as part of that, I\u2019ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It\u2019s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that\u2019s similar to the Observable notebooks Natalie described on 24 ways yesterday.\nIf you haven\u2019t used Jupyter before, here\u2019s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. 
We can use a Python virtual environment to ensure the software we are installing doesn\u2019t clash with any other installed packages:\n$ python3 -m venv ./jupyter-venv\n$ ./jupyter-venv/bin/pip install jupyter\n# ... lots of installer output\n# Now let\u2019s install some extra packages we will need later\n$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib\n# And start the notebook web application\n$ ./jupyter-venv/bin/jupyter-notebook\n# This will open your browser to Jupyter at http://localhost:8888/\nYou should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.\nA neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), GitHub will render them as HTML. This makes them a very powerful way to share annotated code. I\u2019ve published the notebook I used to build the search index on my GitHub account.\n\nHere\u2019s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what\u2019s going on.\nfrom pathlib import Path\nfrom bs4 import BeautifulSoup as Soup\nbase = Path(\"/Users/simonw/Dropbox/Development/24ways-search\")\narticles = list(base.glob(\"*/*/*/*.html\"))\n# articles is now a list of paths that look like this:\n# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')\ndocs = []\nfor path in articles:\n year = str(path.relative_to(base)).split(\"/\")[1]\n url = 'https://' + str(path.relative_to(base).parent) + '/'\n soup = Soup(path.open().read(), \"html5lib\")\n author = soup.select_one(\".c-continue\")[\"title\"].split(\n \"More information about\"\n )[1].strip()\n author_slug = soup.select_one(\".c-continue\")[\"href\"].split(\n \"/authors/\"\n )[1].split(\"/\")[0]\n published = soup.select_one(\".c-meta time\")[\"datetime\"]\n contents = soup.select_one(\".e-content\").text.strip()\n title = soup.find(\"title\").text.split(\" \u25c6\")[0]\n try:\n topic = soup.select_one(\n '.c-meta a[href^=\"/topics/\"]'\n )[\"href\"].split(\"/topics/\")[1].split(\"/\")[0]\n except TypeError:\n topic = None\n docs.append({\n \"title\": title,\n \"contents\": contents,\n \"year\": year,\n \"author\": author,\n \"author_slug\": author_slug,\n \"published\": published,\n \"url\": url,\n \"topic\": topic,\n })\nAfter running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:\n[\n {\n \"title\": \"Why Bother with Accessibility?\",\n \"contents\": \"Web accessibility (known in other fields as inclus...\",\n \"year\": \"2013\",\n \"author\": \"Laura Kalbag\",\n \"author_slug\": \"laurakalbag\",\n \"published\": \"2013-12-10T00:00:00+00:00\",\n \"url\": \"https://24ways.org/2013/why-bother-with-accessibility/\",\n \"topic\": \"design\"\n },\n {\n \"title\": \"Levelling Up\",\n \"contents\": \"Hello, 24 ways. I\u2019m Ashley and I sell property ins...\",\n \"year\": \"2013\",\n \"author\": \"Ashley Baxter\",\n \"author_slug\": \"ashleybaxter\",\n \"published\": \"2013-12-06T00:00:00+00:00\",\n \"url\": \"https://24ways.org/2013/levelling-up/\",\n \"topic\": \"business\"\n },\n ...\nMy sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data.
Here\u2019s how to do that using this list of dictionaries.\nimport sqlite_utils\ndb = sqlite_utils.Database(\"/tmp/24ways.db\")\ndb[\"articles\"].insert_all(docs)\nThat\u2019s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.\n(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)\nYou can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this:\n$ sqlite3 /tmp/24ways.db\nsqlite> .headers on\nsqlite> .mode column\nsqlite> select title, author, year from articles;\ntitle author year \n------------------------------ ------------ ----------\nWhy Bother with Accessibility? Laura Kalbag 2013 \nLevelling Up Ashley Baxte 2013 \nProject Hubs: A Home Base for Brad Frost 2013 \nCredits and Recognition Geri Coady 2013 \nManaging a Mind Christopher 2013 \nRun Ragged Mark Boulton 2013 \nGet Started With GitHub Pages Anna Debenha 2013 \nCoding Towards Accessibility Charlie Perr 2013 \n...\n\nThere\u2019s one last step to take in our notebook. We know we want to use SQLite\u2019s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:\ndb[\"articles\"].enable_fts([\"title\", \"author\", \"contents\"])\nIntroducing Datasette\nDatasette is the open-source tool I\u2019ve been building that makes it easy to both explore SQLite databases and publish them to the internet.\nWe\u2019ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn\u2019t it be nice if we could use a more human-friendly interface for that?\nIf you don\u2019t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I\u2019ll show you how to deploy Datasette to Heroku like this later in the article.\nIf you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:\n./jupyter-venv/bin/pip install datasette\nThis will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.\nNow you can run Datasette against the 24ways.db file we created earlier like so:\n./jupyter-venv/bin/datasette /tmp/24ways.db\nThis will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.\nIf you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db\nPublishing the database to the internet\nOne of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. 
Here\u2019s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku\u2019s free tier) using datasette publish:\n$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways\n-----> Python app detected\n-----> Installing requirements with pip\n\n-----> Running post-compile hook\n-----> Discovering process types\n Procfile declares types -> web\n\n-----> Compressing...\n Done: 47.1M\n-----> Launching...\n Released v8\n https://search-24ways.herokuapp.com/ deployed to Heroku\nIf you try this out, you\u2019ll need to pick a different --name, since I\u2019ve already taken search-24ways.\nYou can run this command as many times as you like to deploy updated versions of the underlying database.\nSearching and faceting\nDatasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.\n\nSQLite search supports wildcards, so if you want autocomplete-style search where you don\u2019t need to enter full words to start getting results you can add a * to the end of your search term. Here\u2019s a search for acces* which returns articles on accessibility:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A\nA neat feature of Datasette is the ability to calculate facets against your data. Here\u2019s a page showing search results for svg with facet counts calculated against both the year and the topic columns:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic\nEvery page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A\nBetter search using custom SQL\nThe search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they\u2019re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.\nThis exposes the underlying SQL query, which looks like this:\nselect rowid, * from articles where rowid in (\n select rowid from articles_fts where articles_fts match :search\n) order by rowid limit 101\nWe can do better than this by constructing a custom SQL query. Here\u2019s the query we will use instead:\nselect\n snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n order by rank limit 10;\nYou can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it\u2019s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.\nThere\u2019s a lot going on here! Let\u2019s break the SQL down line-by-line:\nselect\n snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\nWe\u2019re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query.
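To make that concrete, here\u2019s the sort of value snippet() might produce for a matching row - this is illustrative output I\u2019ve made up, not a real row from the database:\n\u2026building a b4de2a49c8search8c94a2ed4b engine can be a lot of work\u2026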
We use two unique strings that I made up to mark the beginning and end of each match - you\u2019ll see why in the JavaScript later on.\n articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nThese are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.\nfrom articles\n join articles_fts on articles.rowid = articles_fts.rowid\narticles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.\nwhere articles_fts match :search || \"*\"\n order by rank limit 10;\n:search || \"*\" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.\nHow do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we\u2019re going to use ?_shape=array to get back a plain array of objects:\nJSON API call to search for articles matching SVG\nThe HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.\nA simple JavaScript autocomplete search interface\nI considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.\nWe need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:\nfunction debounce(func, wait, immediate) {\n let timeout;\n return function() {\n let context = this, args = arguments;\n let later = () => {\n timeout = null;\n if (!immediate) func.apply(context, args);\n };\n let callNow = immediate && !timeout;\n clearTimeout(timeout);\n timeout = setTimeout(later, wait);\n if (callNow) func.apply(context, args);\n };\n};\nWe\u2019ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.\nSince we\u2019re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I\u2019m amazed that browsers still don\u2019t bundle a default one of these:\nconst htmlEscape = (s) => s.replace(\n /&/g, '&amp;'\n).replace(\n />/g, '&gt;'\n).replace(\n /</g, '&lt;'\n);\nWe also need some HTML for the interface itself: a heading, a search box and an empty element to render the results into. The ids are the ones the JavaScript below looks for:\n<h1>Autocomplete search</h1>\n<input id=\"searchbox\" type=\"search\">\n<div id=\"results\"></div>
\nAnd now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:\n// Embed the SQL query in a multi-line backtick string:\nconst sql = `select\n snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n order by rank limit 10`;\n\n// Grab a reference to the <input> element:\nconst searchbox = document.getElementById(\"searchbox\");\n\n// Used to avoid race-conditions:\nlet requestInFlight = null;\n\nsearchbox.onkeyup = debounce(() => {\n const q = searchbox.value;\n // Construct the API URL, using encodeURIComponent() for the parameters\n const url = (\n \"https://search-24ways.herokuapp.com/24ways-866073b.json?sql=\" +\n encodeURIComponent(sql) +\n `&search=${encodeURIComponent(q)}&_shape=array`\n );\n // Unique object used just for race-condition comparison\n let currentRequest = {};\n requestInFlight = currentRequest;\n fetch(url).then(r => r.json()).then(d => {\n if (requestInFlight !== currentRequest) {\n // Avoid race conditions where a slow request returns\n // after a faster one.\n return;\n }\n let results = d.map(r => `\n <div class=\"result\">\n <h3><a href=\"${r.url}\">${htmlEscape(r.title)}</a></h3>\n <p>${htmlEscape(r.author)} - ${r.year}</p>\n <p>${highlight(r.snippet)}</p>\n </div>
\n `).join(\"\");\n document.getElementById(\"results\").innerHTML = results;\n });\n}, 100); // debounce every 100ms\nThere\u2019s just one more utility function, used to help construct the HTML results:\nconst highlight = (s) => htmlEscape(s).replace(\n /b4de2a49c8/g, '<b>'\n).replace(\n /8c94a2ed4b/g, '</b>'\n);\nThis is what those unique strings passed to the snippet() function were for.\nAvoiding race conditions in autocomplete\nOne trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:\n\nUser types acces\nBrowser sends request A - querying documents matching acces*\nUser continues to type accessibility\nBrowser sends request B - querying documents matching accessibility*\nRequest B returns. It was fast, because there are fewer documents matching the full term\nThe results interface updates with the documents from request B, matching accessibility*\nRequest A returns results (this was the slower of the two requests)\nThe results interface updates with the documents from request A - results matching acces*\n\nThis is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on.\nThankfully there\u2019s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.\nAny time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.\nWhen the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results.\nIt\u2019s not a lot of code, really\nAnd that\u2019s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don\u2019t think it matters. You\u2019re welcome to refactor it as much as you like.\nHow good is this search implementation? I\u2019ve been building search engines for a long time using a wide variety of technologies and I\u2019m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it\u2019s based on SQL makes it easy and flexible to work with.\nA surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.\nMore importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you\u2019re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.\nFor more of my writing on Datasette, check out the datasette tag on my blog.
And if you do build something fun with it, please let me know on Twitter.", "year": "2018", "author": "Simon Willison", "author_slug": "simonwillison", "published": "2018-12-19T00:00:00+00:00", "url": "https://24ways.org/2018/fast-autocomplete-search-for-your-website/", "topic": "code"} {"rowid": 253, "title": "Clip Paths Know No Bounds", "contents": "CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance.\nThe basics of a clip path\nBefore we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everything in the element that is not within the bounds of our shape will be visually removed.\nUsing the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely\u2019s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element\u2019s dimensions.\nSee the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen.\n\nSo for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y, and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines.\nclip-path: polygon(\n 30% 0%,\n 70% 0%,\n 100% 30%,\n 100% 70%,\n 70% 100%,\n 30% 100%,\n 0% 70%,\n 0% 30%\n);\nA shape with fewer vertices than the eye can see\nIt\u2019s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box \u2014 or more specifically when we think outside the range of 0% - 100%.\nOur element\u2019s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element.\nSee the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen.\n\nBy going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons.\nOur earlier octagon can similarly be made with only four points.\nSee the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen.\n\nMultiple shapes, one clip path\nWe can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon().\nSee the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen.\n\nDepending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element\u2019s box, we can draw extra lines to help us get where we need to go next as needed.\nIt can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips.
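Sketched in code, the idea looks something like this (a rough, hand-made illustration rather than the code from the Pen below - the class names, strip sizes and custom property are all made up). One polygon() traces the first rectangle, travels back along the left edge, then traces a second; shifting the custom property on the second element drops its strips into the gaps left by the first:\n.stripes {\n clip-path: polygon(\n 0% calc(var(--shift) + 0%), 100% calc(var(--shift) + 0%),\n 100% calc(var(--shift) + 20%), 0% calc(var(--shift) + 20%),\n 0% calc(var(--shift) + 40%), 100% calc(var(--shift) + 40%),\n 100% calc(var(--shift) + 60%), 0% calc(var(--shift) + 60%)\n );\n}\n.stripes:nth-child(1) { --shift: 0%; }\n.stripes:nth-child(2) { --shift: 20%; }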
This example is two elements, each divided into a few rectangles.\nSee the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen.\n\nDifferent shapes with fill rules\nA polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification \u2014 the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.\nSee the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen.\n\nAs lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path\u2019s lines, the point is considered outside, and if it crosses an odd number the point is inside.\nOrder of operations\nIt is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more.\nThese compositing effects are applied in the order:\n\nCSS Filters (e.g. filter: blur(2px))\nClipping (e.g. what this article is about)\nMasking (Clipping\u2019s cousin)\nBlend Modes (e.g. mix-blend-mode: multiply)\nOpacity\n\nThis means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element\u2019s box edges.\nSee the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen.\n\nIf we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally.\nRevealing content with animation\nCSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points needs to be the same for each keyframe, as well as the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values. \nSee the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen.\n\nDon\u2019t keep CSS Shapes in a box\nClip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible, which makes this property fairly powerful.", "year": "2018", "author": "Dan Wilson", "author_slug": "danwilson", "published": "2018-12-20T00:00:00+00:00", "url": "https://24ways.org/2018/clip-paths-know-no-bounds/", "topic": "code"} {"rowid": 241, "title": "Jank-Free Image Loads", "contents": "There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then \u2014\nYour browser doesn\u2019t support HTML5 video.
Here is a link to the video instead.\n\n\u2014 oops! \u2014 an image pops in above it, pushing said text down the page, and our poor reader loses their place.\nBy default, partially-loaded pages have the user experience of a slippery fish, or a spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I\u2019ll chart a path into a jank-free future \u2013 one in which it\u2019s easy and natural to author elements that load like this:\nYour browser doesn\u2019t support HTML5 video. Here is a link to the video instead.\n\nJank is a very old problem, and there is a very old solution to it: the width and height attributes on <img>. The idea is: if we stick an image\u2019s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.\n\nwidth\nSpecifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.\n\n\u2014The HTML 3.2 Specification, published on January 14 1997\nUnfortunately for us, when width and height were first spec\u2019d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:\nSee the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.\n\nwidth and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.\nI have good news, bad news, and great news.\nThe good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.\nThe bad news is that these techniques are all terrible, cumbersome hacks. They\u2019re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.\nSo, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.\naspect-ratio in CSS\nThe first proposed feature? An aspect-ratio property in CSS!\nThis would allow us to write CSS like this:\nimg {\n width: 100%;\n}\n\n.thumb {\n aspect-ratio: 1/1;\n}\n\n.hero {\n aspect-ratio: 16/9;\n}\nThis\u2019ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.\nAlas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It\u2019s images \u2013 possibly user-generated images \u2013 that can have any aspect ratio. The really tricky problem is unknown-when-you\u2019re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:\n<img src=\"image.jpg\" style=\"aspect-ratio: 800/600\" />\nAnd inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style=\"\" attribute. And you know what? The old man has a point!
By sticking super-high-specificity inline styles in my content, I\u2019m cutting off my (or anyone else\u2019s) ability to change those aspect ratios, for whatever reason, later.\nHow might we specify aspect ratios at a lower level? How might we give browsers information about an image\u2019s dimensions, without giving them explicit instructions about how to style it?\nI\u2019ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!\nA brief note on intrinsic and extrinsic sizing\nWhat do I mean by \u201cintrinsic\u201d and \u201cextrinsic?\u201d\nThe intrinsic size of an image is, put simply, how big it\u2019d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800\u00d7600 image has an intrinsic width of 800px.\nThe extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800\u00d7600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn\u2019t change: its intrinsic width is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.\nIt surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).\nCSS aspect-ratio lets us avoid setting extrinsic heights and widths \u2013 and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.\nThe last tool I\u2019m going to talk about gets us out of the extrinsic sizing game all together \u2014 which, I think, is only appropriate for a feature that we\u2019re going to be using in HTML.\nintrinsicsize in HTML\nThe proposed intrinsicsize attribute will let you do this:\n<img intrinsicsize=\"800x600\" src=\"image.jpg\" />\nThat tells the browser, \u201chey, this image.jpg that I\u2019m using here \u2013 I know you haven\u2019t loaded it yet but I\u2019m just going to let you know right away that it\u2019s going to have an intrinsic size of 800\u00d7600.\u201d This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image\u2019s intrinsic size.\nYou may ask (I did!): wait, what if my <img> references multiple resources, which all have different intrinsic sizes? Well, if you\u2019re using srcset, intrinsicsize is a bit of a misnomer \u2013 what the attribute will do then, is specify an intrinsic aspect ratio:\n<img intrinsicsize=\"3x2\" srcset=\"\u2026\" sizes=\"75vw\" />\nIn the future (and behind the \u201cExperimental Web Platform Features\u201d flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 \u2013 it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.\nCan\u2019t wait\nI seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!\nDon\u2019t let all of these details bury the big takeaway here: sometime soon (\ud83e\udd1e 2019\u203d \ud83e\udd1e), we\u2019ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web.
I can\u2019t wait!", "year": "2018", "author": "Eric Portis", "author_slug": "ericportis", "published": "2018-12-21T00:00:00+00:00", "url": "https://24ways.org/2018/jank-free-image-loads/", "topic": "code"} {"rowid": 246, "title": "Designing Your Site Like It\u2019s 1998", "contents": "It\u2019s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary\u2014and my fourteenth contribution to 24 ways\u2014I\u2019d like to explain how I would\u2019ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.\nMy design for Planes, Trains and Automobiles is fixed at 800px wide.\nDeveloping a framework\nI\u2019ll start by using frames to set up the framework for this new website. Frames are individual pages\u2014one for navigation, the other for my content\u2014pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a <frameset> element.\nI add two rows to my <frameset>; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don\u2019t want frame borders or any space between my frames, I set the frameborder and framespacing attributes to 0:\n<frameset rows=\"50,*\" frameborder=\"0\" framespacing=\"0\">\n[\u2026]\n</frameset>\nNext I add the source of my two frame documents. I don\u2019t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame:\n<frameset rows=\"50,*\" frameborder=\"0\" framespacing=\"0\">\n<frame src=\"nav.html\" noresize>\n<frame src=\"content.html\">\n</frameset>\nI do want links from my navigation to open in the content frame, so I give each frame a name, and links in the navigation can then target it:\n<frameset rows=\"50,*\" frameborder=\"0\" framespacing=\"0\">\n<frame src=\"nav.html\" name=\"navigation\" noresize>\n<frame src=\"content.html\" name=\"content\">\n</frameset>\n<a href=\"[\u2026]\" target=\"content\">[\u2026]</a>\nThe framework for this website is simple as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets\u2014and as many individual documents\u2014as I need:\n<frameset rows=\"*,[\u2026],*\">\n<frame>\n<frameset cols=\"*,[\u2026],*\">\n<frame>\n<frame name=\"content\">\n<frame>\n</frameset>\n<frame>\n</frameset>\nLetterbox framesets were a common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space.\nHandling older browsers\nSadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content:\n<noframes>\n<body>\n<p>This page uses frames, but your browser doesn\u2019t support them. Please upgrade your browser.</p>\n</body>\n</noframes>\nForcing someone back into a frame\nSometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a <frameset>, people could easily miss out on other parts of a design. This short script will prevent that happening, and because it\u2019s vanilla JavaScript, it doesn\u2019t require a library such as jQuery; a sketch of one way to write it follows.
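A minimal version might look like this (a sketch rather than the site\u2019s actual script; index.html stands in for whatever document defines the frameset):\n<script>\n// If this page has been opened on its own, outside its frameset,\n// load the frameset document instead.\nif (window.top === window.self) {\n window.top.location = \"index.html\";\n}\n</script>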
Laying out my page\nBefore starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website:\n<body bgcolor=\"[\u2026]\" text=\"[\u2026]\" link=\"[\u2026]\" vlink=\"[\u2026]\">\nI want absolute control over how people experience my design and don\u2019t want to allow it to stretch, so I first need a <table> which limits the width of my layout to 800px. The align attribute will keep this <table> in the centre of someone\u2019s screen:\n<table width=\"800\" align=\"center\">\n<tr>\n<td>\n[\u2026]\n</td>\n</tr>\n</table>
\nAlthough they were developed for displaying tabular information, the cells and rows which make up the <table> element make it ideal for the precise implementation of a design. I need several tables\u2014often nested inside each other\u2014to implement my design. These include tables for a banner and three rows of content:\n<table>\n[\u2026]\n</table>\n<table>\n[\u2026]\n</table>\n<table>\n[\u2026]\n</table>\n<table>\n[\u2026]\n</table>\nThe width of the first table\u2014used for my banner\u2014is fixed to match the logo it contains. As I don\u2019t need borders, padding, or spacing between these cells, I use attributes to remove them:\n<table width=\"[\u2026]\" border=\"0\" cellpadding=\"0\" cellspacing=\"0\">\n<tr>\n<td><img src=\"[\u2026]\" alt=\"Logo\"></td>\n</tr>\n</table>
\nThe next table\u2014which contains the largest image, introduction, and a call-to-action\u2014is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images:\n<tr>\n<td><img src=\"shim.gif\" width=\"[\u2026]\" height=\"1\" alt=\"\"></td>\n<td><img src=\"shim.gif\" width=\"[\u2026]\" height=\"1\" alt=\"\"></td>\n<td><img src=\"shim.gif\" width=\"[\u2026]\" height=\"1\" alt=\"\"></td>\n</tr>\nThe height and width of these \u201cshims\u201d or \u201cspacers\u201d are only 1px, but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.\nFor the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:\n<tr>\n<td background=\"[\u2026]\" height=\"[\u2026]\">\u00a0</td>\n<td background=\"[\u2026]\" height=\"[\u2026]\">\n[\u2026]\n</td>\n<td background=\"[\u2026]\" height=\"[\u2026]\">\u00a0</td>\n</tr>\nI use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table\u2014this time with a white background\u2014inside it:\n<table bgcolor=\"blue\" cellspacing=\"1\" border=\"0\">\n<tr>\n<td>\n<table bgcolor=\"white\" border=\"0\">\n<tr>\n<td>\n[\u2026]\n</td>\n</tr>\n</table>\n</td>\n</tr>\n</table>
\nAdding details\nTables are fabulous tools for laying out a page, but they\u2019re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my \u201cBuy the DVD\u201d call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button:\n<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\">\n<tr>\n<td background=\"[\u2026]\"><img src=\"shim.gif\" width=\"[\u2026]\" height=\"1\" alt=\"\"></td>\n<td background=\"[\u2026]\" align=\"center\">\n<a href=\"[\u2026]\">Buy the DVD</a>\n</td>\n<td background=\"[\u2026]\"><img src=\"shim.gif\" width=\"[\u2026]\" height=\"1\" alt=\"\"></td>\n</tr>\n</table>\nI use those same elements to add details to headlines and lists too. Adding a \u201cbullet\u201d to each item in a list needs only two additional table cells, a circular graphic, and a spacer:\n<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\">\n<tr>\n<td><img src=\"[\u2026]\" alt=\"\"></td>\n<td><img src=\"shim.gif\" width=\"[\u2026]\" height=\"1\" alt=\"\"></td>\n<td>\u00a0\u00a0Directed by John Hughes</td>\n</tr>\n</table>
\nImplementing a typographic hierarchy\nSo far I\u2019ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use <font> elements to change the typeface from the browser\u2019s default to any font installed on someone\u2019s device:\n<font face=\"Arial\">Planes, Trains and Automobiles is a comedy film [\u2026]</font>\nTo adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser\u2019s default. I use a size of 4 for this headline and 2 for the text which follows:\n<font face=\"Arial\" size=\"4\">Steve Martin</font>\n<font face=\"Arial\" size=\"2\">An American actor, comedian, writer, producer, and musician.</font>\nWhen I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every <font> element on all pages on my website.\nNB: I use as many <br> elements as needed to create space between headlines and paragraphs.\nView the final result (and especially the source).\nMy modern-day design for Planes, Trains and Automobiles.\nI can imagine many people reading this and thinking \u201cThis is terrible advice because we don\u2019t develop websites like this in 2018.\u201d That\u2019s true.\nWe have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:\nfont-variant-caps: titling-caps;\nfont-variant-ligatures: common-ligatures;\nfont-variant-numeric: oldstyle-nums;\nGrid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:\nbody {\n display: grid;\n grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;\n grid-template-rows: auto;\n grid-column-gap: 2vw;\n grid-row-gap: 1vh;\n}\nFlexbox has made it easy to develop flexible components such as navigation links:\nnav ul { display: flex; }\nnav li { flex: 1; }\nJust one line of CSS can create multiple columns of fluid type:\nmain { column-width: 12em; }\nCSS Shapes enable text to flow around irregular shapes including polygons:\n[src*=\"main-img\"] {\n float: left;\n shape-outside: polygon(\u2026);\n}\nToday, we wouldn\u2019t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:\n.btn {\n background: linear-gradient(#8B1212, #DD3A3C);\n border-radius: 1em;\n box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);\n}\nCSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we\u2019re certain we know how to design and develop products and websites better than we did at the end of 1998.\nStrange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That\u2019s why it\u2019s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today\u2014tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass\u2014will be any more relevant in the future than <font> elements, frames, layout tables, and spacer images are today.\nI have no prediction for what the web will be like twenty years from now. However, I want to believe we\u2019ll build on what we\u2019ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won\u2019t be repeated.\n\nHead over to my website if you\u2019d like to read about how I\u2019d implement my design for \u2018Planes, Trains and Automobiles\u2019 today.", "year": "2018", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2018-12-23T00:00:00+00:00", "url": "https://24ways.org/2018/designing-your-site-like-its-1998/", "topic": "code"} {"rowid": 264, "title": "Dynamic Social Sharing Images", "contents": "Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you\u2019d written a blog post and you wanted to share it, you could post a link and your followers would see it in their streams. Oh heady days!
\nWith so many social channels and a proliferation of content and promotions flying past in everyone\u2019s streams, it\u2019s no longer enough to share content on social media; you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader\u2019s attention if you\u2019re trying to get as many eyes as possible on that sweet, sweet social content.\nOne of the best ways to grab attention with your posts or tweets is to include an image. There\u2019s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures of anything from 35% to 150% improvement from just having an image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.\nSo without hard stats to quote, we\u2019ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.\nAdding sharing images\nThe process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.\n<meta property=\"og:image\" content=\"https://example.com/image.png\">\nThere\u2019s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.
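Twitter will pick up that Open Graph image too, but it also has its own card tags if you want finer control. A minimal, illustrative pair (the URL is a placeholder), placed in the head of the page alongside the og:image tag:\n<meta name=\"twitter:card\" content=\"summary_large_image\">\n<meta name=\"twitter:image\" content=\"https://example.com/image.png\">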
This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don\u2019t necessarily have an image? One approach is to use stock photography, but that\u2019s not going to be right for every situation.\nThis was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don\u2019t usually lend themselves directly to being the main \u2018hero\u2019 image for an article.\nPutting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.\nOne of the hand-made sharing images from 2017\nEach day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour-intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.\nHatching a new plan\nOne initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.\nThe more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!\nIn fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn\u2019t this just be another page on the site generated by the CMS?\nBreaking it down, I figured the steps needed would be something like:\n\nCreate the CSS to lay out a component that could be turned into an image\nAdd a new field to articles in the CMS to hold a handpicked quote\nBuild a new article template in the CMS to output the author name and quote dynamically for any article\n\u2026 um \u2026 screenshot?\n\nI thought I\u2019d get cracking and see if I could figure out the final steps later.\nBuilding the page\nThe first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul.\nPaul\u2019s code uses a fixed dimension container in the HTML, set to 600 \u00d7 315px. This is to make it the correct aspect ratio for Facebook\u2019s recommended image size. It\u2019s useful to remember here that it doesn\u2019t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot at a fixed size in a known browser.
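This isn\u2019t Paul\u2019s actual markup, but a minimal sketch of that kind of fixed-dimension component could look something like this (the class name, structure, and everything except the 600 \u00d7 315 box are illustrative assumptions):\n<div class=\"sharing-card\">\n <img src=\"author.jpg\" alt=\"\">\n <blockquote>A handpicked quote from the article</blockquote>\n <p>Author Name</p>\n</div>\n.sharing-card {\n /* Fixed dimensions matching Facebook\u2019s recommended aspect ratio */\n width: 600px;\n height: 315px;\n overflow: hidden;\n}\nAll that matters is that the component always lays out at exactly 600 \u00d7 315 in the one browser that will be taking the screenshot.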
With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul\u2019s markup.\nI also added the quote as a new field on the article in the CMS, so each \u2018image\u2019 could be quickly and easily customised in the editing process.\nI added a new field to the article template to capture the quote.\nWith very little effort, we quickly had a page to dynamically generate our \u2018image\u2019 right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.\nOur automatically generated layout direct from the CMS\nIt soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that\u2019s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought\u2026 how could I automatically take a screenshot?\nEnter Puppeteer\nPuppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It\u2019s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.\nHeadless Chrome can be used for all sorts of things, such as running automated tests on front-end code after making changes, or\u2026 get this\u2026 rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.\nUsing Puppeteer, I can write a small script that will repeatably turn a URL into an image. It\u2019s so simple to do this that it\u2019s actually Puppeteer\u2019s \u2018hello world\u2019 example.\nFirst you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don\u2019t need to worry about installing that. At the command line:\nnpm i puppeteer\nThen save a new file as example.js with this code:\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://example.com');\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nand then run it using Node:\nnode example.js\nThis will output an image file example.png to disk, which contains a screenshot of, in this case https://example.com. The logic of the code is reasonably easy to follow:\n\nLaunch a browser\nOpen up a new page\nGo to a URL\nTake a screenshot\nClose the browser\n\nThe async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That\u2019s useful with actions like loading a web page that might take some time to complete. They\u2019re used with Promises, and the effect is to make asynchronous code behave as if it\u2019s synchronous. You can read more about async and await at MDN if you\u2019re interested.\nThat\u2019s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there?\nOur content is up in the corner of the image with lots of empty space.\nThat\u2019s not great. It\u2019s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 \u00d7 600, so we need to find out how to adjust that. Fortunately, the docs aren\u2019t the worst and I was able to find the page.setViewport() method pretty easily.\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');\n await page.setViewport({\n width: 600,\n height: 315\n });\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nThis worked! The screenshot is now 600 \u00d7 315 as expected. That\u2019s exactly what we asked for. Trouble is, that\u2019s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\nPerfect! We now have a programmatic way of turning a URL into an image.\nImproving the script\nRather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site\u2019s build process to automate the generation of the image.\nOur goal is to call the script like this:\nnode shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/\nAnd I want the image to come out with the name clip-paths-know-no-bounds.png.
To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.\n// Get the URL and the slug segment from it\nconst url = process.argv[2];\nconst segments = url.split('/');\n// Get the second-to-last segment (the slug)\nconst slug = segments[segments.length-2];\nWe can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto(url + 'sharing');\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\n await page.screenshot({path: slug + '.png'});\n await browser.close();\n})();\nOnce you\u2019re generating the image with Node, there\u2019s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project.\nYou can also run optimisations on the file. I\u2019m using imagemin with pngquant to reduce the file size a little.\nconst imagemin = require('imagemin');\nconst imageminPngquant = require('imagemin-pngquant');\n\nawait imagemin([slug + '.png'], 'build', {\n plugins: [\n imageminPngquant({quality: '75-90'})\n ]\n});\n\nYou can see the completed example as a gist.\nIntegrating it with your CMS\nSo we now have a command we can run to take a URL and generate a custom image for that URL. It\u2019s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you\u2019re using, but it\u2019s likely not too hard as long as you can run a command as part of the process.\nFor 24 ways this year, I\u2019ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I\u2019ll look to automate this one step further.\nWe may also look at having a few slightly different layouts to choose from, so that each day isn\u2019t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there\u2019s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!\n\nBy using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we\u2019ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images.\nI hope you\u2019ll give it a try!", "year": "2018", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2018-12-24T00:00:00+00:00", "url": "https://24ways.org/2018/dynamic-social-sharing-images/", "topic": "code"}