{"rowid": 333, "title": "The Attribute Selector for Fun and (no ad) Profit", "contents": "If I had a favourite CSS selector, it would undoubtedly be the attribute selector (Ed: You really need to get out more). For those of you not familiar with the attribute selector, it allows you to style an element based on the existence, value or partial value of a specific attribute.\n\nAt it\u2019s very basic level you could use this selector to style an element with particular attribute, such as a title attribute.\n\nCSS\n\nIn this example I\u2019m going to make all elements with a title attribute grey. I am also going to give them a dotted bottom border that changes to a solid border on hover. Finally, for that extra bit of feedback, I will change the cursor to a question mark on hover as well. \n\nabbr[title] {\n color: #666;\n border-bottom: 1px dotted #666;\n }\n\n abbr[title]:hover {\n border-bottom-style: solid;\n cursor: help;\n }\n\nThis provides a nice way to show your site users that elements with title tags are special, as they contain extra, hidden information.\n\nMost modern browsers such as Firefox, Safari and Opera support the attribute selector. Unfortunately Internet Explorer 6 and below does not support the attribute selector, but that shouldn\u2019t stop you from adding nice usability embellishments to more modern browsers.\n\nInternet Explorer 7 looks set to implement this CSS2.1 selector, so expect to see it become more common over the next few years.\n\nStyling an element based on the existence of an attribute is all well and good, but it is still pretty limited. Where attribute selectors come into their own is their ability to target the value of an attribute. You can use this for a variety of interesting effects such as styling VoteLinks.\n\nVoteWhats?\n\nIf you haven\u2019t heard of VoteLinks, it is a microformat that allows people to show their approval or disapproval of a links destination by adding a pre-defined keyword to the rev attribute.\n\nFor instance, if you had a particularly bad meal at a restaurant, you could signify your dissaproval by adding a rev attribute with a value of vote-against.\n\nMomma Cherri's\n\nYou could then highlight these links by adding an image to the right of these links.\n\na[rev=\"vote-against\"]{\n padding-right: 20px;\n background: url(images/vote-against.png) no-repeat right top;\n}\n\nThis is a useful technique, but it will only highlight VoteLinks on sites you control. This is where user stylesheets come into effect. If you create a user stylesheet containing this rule, every site you visit that uses VoteLinks will receive your new style.\n\nCool huh?\n\nHowever my absolute favourite use for attribute selectors is as a lightweight form of ad blocking. Most online adverts conform to industry-defined sizes. So if you wanted to block all banner-ad sized images, you could simply add this line of code to your user stylesheet.\n\nimg[width=\"468\"][height=\"60\"],\nimg[width=\"468px\"][height=\"60px\"] {\n display: none !important;\n}\n\nTo hide any banner-ad sized element, such as flash movies, applets or iFrames, simply apply the above rule to every element using the universal selector.\n\n*[width=\"468\"][height=\"60\"], *[width=\"468px\"][height=\"60px\"] {\n display: none !important;\n}\n\nJust bare in mind when using this technique that you may accidentally hide something that isn\u2019t actually an advert; it just happens to be the same size.\n\nThe Interactive Advertising Bureau lists a number of common ad sizes. 
Using the full list of dimensions, you can create a stylesheet that blocks all the popular ad formats. Apply this as a user stylesheet and you never need to suffer another advert again.\n\nHere\u2019s wishing you a Merry, ad-free Christmas.", "year": "2005", "author": "Andy Budd", "author_slug": "andybudd", "published": "2005-12-11T00:00:00+00:00", "url": "https://24ways.org/2005/the-attribute-selector-for-fun-and-no-ad-profit/", "topic": "code"} {"rowid": 195, "title": "Levelling Up for Junior Developers", "contents": "If you are a junior developer starting out in the web industry, things can often seem a little daunting. There are so many things to learn, and as soon as you\u2019ve learnt one framework or tool, there seems to be something new out there.\nI am lucky enough to lead a team of developers building applications for the web. During a recent One to One meeting with one of our junior developers, he asked me about a learning path and the basic fundamentals that every developer should know. After a bit of digging around, I managed to come up with a (not so exhaustive) list of principles that was shared with him.\n\nIn this article, I will share the list with you, and hopefully help you level up from junior developer and become a better developer all round. This list doesn\u2019t focus on a particular programming language, but rather coding concepts as a whole. The idea behind this list is that whether you are a front-end developer, back-end developer, full stack developer or just a curious one, these principles apply to everyone that writes code. \nI have tried to be technology agnostic, so that you can use these tips to guide you, whatever your tech stack might be.\nWithout any further ado and in no particular order, let\u2019s get started.\nRefactoring code like a boss\nThe Boy Scouts have a rule that goes \u201calways leave the campground cleaner than you found it.\u201d This rule can be applied to code too and ensures that you leave code cleaner than you found it. As a junior developer, it\u2019s almost certain that you will either create or come across older code that could be improved. The resources below are a guide that will help point you in the right direction.\n\nMy favourite book on this subject has to be Clean Code by Robert C. Martin. It\u2019s a must read for anyone writing code as it helps you identify bad code and shows you techniques that you can use to improve existing code.\nIf you find that in your day to day work you deal with a lot of legacy code, Improving Existing Technology through Refactoring is another useful read.\nA design pattern is a general, repeatable solution to a commonly occurring problem in software design. My friend and colleague Ranj Abass likes to refer to them as a \u201ccommon language\u201d that helps developers discuss the way that we write code as a pattern. My favourite book on this subject is Head First Design Patterns which goes right back to the basics. Another great read on this topic is Refactoring to Patterns.\nWorking Effectively With Legacy Code is another one that I found really valuable.\n\nImproving your debugging skills\nA solid understanding of how to debug code is a must for any developer. Whether you write code for the web or purely back-end code, the ability to debug will save you time and help you really understand what is going on under the hood.\n\nIf you write front-end code for the web, one of my favourite resources to help you understand how to debug code in Chrome can be found on the Chrome Dev Tools website. 
While some of the tips are specific to Chrome, these techniques apply to any modern browser of your choice.\nAt Settled, we use Node.js for much of our server side code. Without a doubt, our most trusted IDE has to be Visual Studio Code and the built-in debuggers are amazing. Regardless of whether you use Node.js or not, there are a number of plugins and debuggers that you can use in the IDE. I recommend reading the website of your favourite IDE for more information. \nAs a side note, it is worth mentioning that Chrome Developer Tools actually has functionality that allows you to debug Node.js code too. This makes it a seamless transition from front-end code to server-side code debugging.\nThe Debugging Mindset is an informative online article by Devon H. O\u2019Dell and discusses the the psychology of learning strategies that lead to effective problem-solving skills. \n\nA good understanding of relational databases and NoSQL databases\nAlmost all developers will need to persist data at some point in their career. Even if you don\u2019t write SQL queries in your day to day job, a solid understanding of how they work will help you become a better developer.\n\nIf you are a complete newbie when it comes to databases, I recommend checking out Code Academy. They offer a free online course that can help you get your head around how relational databases work. The course is quite basic, but is a useful hands-on approach to learning this topic.\nThis article provides a great explainer for the difference between the SQL and NoSQL databases, and this Stackoverflow answer goes a little deeper into the subject of the two database types.\nIf you\u2019d like to learn more about NoSQL queries, I would recommend starting with this article on MongoDB queries. Unfortunately, there isn\u2019t one overall course as most NoSQL databases have their own syntax. \n\nYou may also have noticed that I haven\u2019t included other types of databases such as Graph or In-memory; it\u2019s worth focussing on the basics before going any deeper.\nPerformance on the web\nIf you build for the web today, it is important to understand how the browser receives and renders the content that you send it. I am pretty passionate about Web Performance, and hope that everyone can learn how to make websites faster and more efficient. It can be fun at the same time!\n\nSteve Souders High Performance Websites is the godfather of web performance books. While it was created a few years ago and many of the techniques might have changed slightly, it is the original book on the subject and set up many of the ground rules that we know about web performance today.\nA free online resource on this topic is the Google Developers website. The site is an up to date guide on the best web performance techniques for your site. It is definitely worth a read.\nThe network plays a key role in delivering data to your users, and it plays a big role in performance on the web. A fantastic book on this topic is Ilya Grigorik\u2019s High Performance Browser Networking. It is also available to read online at hpbn.co.\n\nUnderstand the end to end architecture of your software project\nI find that one of the best ways to improve my knowledge is to learn about the architecture of the software at the company I work at. 
It gives you a good understanding as to why things are designed the way they are, why certain decisions were made, and gives you an understanding of how you might do things differently with hindsight.\nTry and find someone more senior, such as a Technical Lead or Software Architect, at your company and ask them to explain the overall architecture and draw a few high-level diagrams for you. Not to mention that they will be impressed with your willingness to learn.\n\nI recommend reading Clean Architecture: A Craftsman\u2019s Guide to Software Structure and Design for more detail on this subject.\nFar too often, software projects can be over-engineered and over-architected, it is worth reading Just Enough Software Architecture. The book helps developers understand how the smallest of changes can affect the outcome of your software architecture.\n\nHow are things deployed\nA big part of creating software is actually shipping it! How is the software at your company released into the wild? Does your company do Continuous Integration? Continuous Deployment?\n\nEven if you answered no to any of these questions, it is worth finding someone with the knowledge in your company to explain these things to you. If it is not already documented, perhaps you could start a wiki to document everything you\u2019re learning about the system - this is a great way to level up and be appreciated and invaluable.\nA streamlined deployment process is a beautiful thing, and understanding how they work can help you grow your knowledge as a developer. \nContinuous Integration is a practical read on the ins and outs of implementing this deployment technique.\nDocker is another great tool to use when it comes to software deployment. It can be tricky at first to wrap your head around, but it is definitely worth learning about this great technology. The documentation on the website will teach and guide you on how to get started using Docker.\n\nWriting Tests\nTesting is an essential tool in the developer bag of skills. They help you to make big refactoring changes to your code, and feel a lot more confident knowing that your changes haven\u2019t broken anything. There are so many benefits to testing, which make it so important for developers at every level to become acquainted with it/them.\n\nThe book that started it all for me was Roy Osherove\u2019s The Art of Unit Testing. The code in the book is written in C#, but the principles apply to every language. It\u2019s a great, easy-to-understand read.\nAnother great read is How Google Tests Software and covers exactly what it says on the tin. It covers many different testing techniques such as exploratory, black box, white box, and acceptance testing and really helps you understand how large organisations test their code.\n\nSoft skills\nWhilst reading through this article, you\u2019ve probably noticed that a large chunk of it focusses on code and technical ability. Without a doubt, I\u2019d say that it is even more important to be a good teammate. If you look up the definition of soft skills in the dictionary, it is defined as \u201cpersonal attributes that enable someone to interact effectively and harmoniously with other people\u201d and I think that it sums this up perfectly. Working on your \u201csoft skills\u201d is something that can truly help you level up in your career. 
You may be the world\u2019s greatest coder, but if you colleagues can\u2019t get along with you, your coding skills won\u2019t matter!\nWhile you may not learn how to become the perfect co-worker overnight, I really try and live by the motto \u201cdon\u2019t be an arsehole\u201d. Think about how you like to be treated and then try and treat your co-workers with the same courtesy and respect. The next time you need to make a decision at work, ask yourself \u201cis this something an arsehole would do\u201d? If you answered yes to that question, you probably shouldn\u2019t do it!\nSummary\nLevelling up as a junior developer doesn\u2019t have to be scary. Focus on the fundamentals and they should hold you in good stead, regardless of the new things that come along. Software engineering is built on these great principles that have stood the test of time.\nWhilst researching for this article, I came across a useful Github repo that is worth mentioning. Things Every Programmer Should Know is packed with useful information. I have to admit, I didn\u2019t know everything on there!\nI hope that you have found this list helpful. Some of the topics I have mentioned might not be relevant for you at this stage in your career, but should give a nudge in the right direction. After all, knowledge is power!\nIf you are a junior developer reading this article, what would you add to it?", "year": "2017", "author": "Dean Hume", "author_slug": "deanhume", "published": "2017-12-05T00:00:00+00:00", "url": "https://24ways.org/2017/levelling-up-for-junior-developers/", "topic": "code"} {"rowid": 121, "title": "Hide And Seek in The Head", "contents": "If you want your JavaScript-enhanced pages to remain accessible and understandable to scripted and noscript users alike, you have to think before you code. Which functionalities are required (ie. should work without JavaScript)? Which ones are merely nice-to-have (ie. can be scripted)? You should only start creating the site when you\u2019ve taken these decisions.\n\nSpecial HTML elements\n\nOnce you have a clear idea of what will work with and without JavaScript, you\u2019ll likely find that you need a few HTML elements for the noscript version only.\n\nTake this example: A form has a nifty bit of Ajax that automatically and silently sends a request once the user enters something in a form field. However, in order to preserve accessibility, the user should also be able to submit the form normally. So the form should have a submit button in noscript browsers, but not when the browser supports sufficient JavaScript.\n\nSince the button is meant for noscript browsers, it must be hard-coded in the HTML:\n\n\n\nWhen JavaScript is supported, it should be removed:\n\nvar checkJS = [check JavaScript support];\nwindow.onload = function () {\n\tif (!checkJS) return;\n\tdocument.getElementById('noScriptButton').style.display = 'none';\n}\n\nProblem: the load event\n\nAlthough this will likely work fine in your testing environment, it\u2019s not completely correct. What if a user with a modern, JavaScript-capable browser visits your page, but has to wait for a huge graphic to load? The load event fires only after all assets, including images, have been loaded. So this user will first see a submit button, but then all of a sudden it\u2019s removed. 
That\u2019s potentially confusing.\n\nFortunately there\u2019s a simple solution: play a bit of hide and seek in the <head>:\n\nvar checkJS = [check JavaScript support];\nif (checkJS) {\n\tdocument.write('<style>#noScriptButton {display: none;}</style>');\n}\n\nFirst, check if the browser supports enough JavaScript. If it does, document.write an extra <style> element into the <head> so that the button is hidden from the very start of page rendering and never flashes into view.\n\n

The test passes if it has the same visual effect as reference.

\n
\n
\n
\n
\n
\n
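As a rough sketch of the general shape of such a reftest (this is not the original file from the article; the title, help and reference URLs, class names and styles below are all assumptions for illustration), a test for gap on a grid container might look like this:

<!DOCTYPE html>
<meta charset="utf-8">
<title>CSS Grid Layout Test: gap on a grid container</title>
<link rel="help" href="https://drafts.csswg.org/css-align-3/#gap-shorthand">
<link rel="match" href="../reference/grid-gap-001-ref.html">
<style>
  .grid {
    display: grid;
    grid-template-columns: 100px 100px;
    gap: 20px;
    width: 220px;
    background: green;
  }
  .grid div {
    height: 100px;
    background: white;
  }
</style>
<p>The test passes if it has the same visual effect as reference.</p>
<div class="grid">
  <div></div>
  <div></div>
  <div></div>
  <div></div>
</div>

The green container only shows through in the 20px gaps between the four white items, which is what produces the square with a green cross.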
\nI am testing the new gap property (renamed from grid-gap). The reference file can be found by looking for the <link rel="match"> line in the test. In that file, I am using absolute positioning to mock up the way the file would look if gap is implemented correctly.\n\n[The reference file was shown here, but its markup did not survive conversion; its title read: "CSS Grid Layout Reference: a square with a green cross".]
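A reference file for that test could be sketched along these lines (again an assumption for illustration, not the original from the article; it paints the same square-with-a-green-cross result with absolute positioning rather than gap):

<!DOCTYPE html>
<meta charset="utf-8">
<title>CSS Grid Layout Reference: a square with a green cross</title>
<style>
  .mock {
    position: relative;
    width: 220px;
    height: 220px;
    background: white;
  }
  .vertical, .horizontal {
    position: absolute;
    background: green;
  }
  .vertical {
    left: 100px;
    top: 0;
    width: 20px;
    height: 220px;
  }
  .horizontal {
    left: 0;
    top: 100px;
    width: 220px;
    height: 20px;
  }
</style>
<p>The test passes if it has the same visual effect as reference.</p>
<div class="mock">
  <div class="vertical"></div>
  <div class="horizontal"></div>
</div>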
\nThe tests are compared in an automated way by taking screenshots of the test and reference.\nThese are relatively simple tests to write, you will find the work is not in writing the test however. The work is really in doing the research, and making sure you understand what is supposed to happen so you can write the test. Which is why, if you really want to get your hands dirty in the web platform, this is a good place to start.\nCommitting a test\nOnce you have written a test you can run the lint tool to make sure that everything is tidy. This tool is run automatically after you submit your pull request, and reviewers won\u2019t accept a test with lint errors, so do this locally first to catch anything obvious.\nTests are added as a pull request, once you have your test ready to go you can create a pull request to add it to the repository. Your test will be tested and it will then wait for a review.\nYou may well then find yourself in a bit of a waiting game, as the test needs to be reviewed. How long that takes will depend on how active work is on that spec. People who are in the OWNERS file for that spec should be notified. You can always ask in IRC to see if someone is available who can look at and potentially merge your test.\nUsually the reviewer will have some comments as to how the test can be improved, in the same as the owner of an open source project you submit a PR to might ask you to change some things. Work with them to make your test as good as it can be, the things you learn on the first test you submit will make future ones easier. You can then bask in the glow of knowing you have done something towards the aim of a more interoperable web for all of us.\nChristmas gifts for your future self\nI have been a web developer for over 20 years. I have no idea what the web platform will look like in 20 more years, but for as long as I\u2019m working on it I\u2019ll keep on trying to make it better. Making the web more interoperable makes it a better place to be a web developer, storing up some Christmas gifts for my future self, while learning new things as I do so.\nResources\nI rounded up everything I could find on WPT while researching this article. As well as some other links that might be helpful for you. These links are below. Happy testing!\n\nWeb Platform Tests\nUsing testharness.js\nIRC Channel irc://irc.w3.org:6667/testing\nEdge Issue Tracker\nMozilla Issue Tracker\nWebKit Issue Tracker\nChromium Issue Tracker\nReducing an Issue - guide to created a reduced test case\nEffectively Using Web Platform Tests: Slides and Video\nAn excellent walkthrough from Lyza Gardner on her working on tests for the HTML specification - Moving Targets: a case study on testing web standards.\nImproving interop with web-platform-tests: Slides and Video", "year": "2017", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2017-12-10T00:00:00+00:00", "url": "https://24ways.org/2017/testing-the-web-platform/", "topic": "code"} {"rowid": 186, "title": "The Web Is Your CMS", "contents": "It is amazing what you can do these days with the services offered on the web. Flickr stores terabytes of photos for us and converts them automatically to all kind of sizes, finds people in them and even allows us to edit them online. 
YouTube does almost the same complete job with videos, LinkedIn allows us to maintain our CV, Delicious our bookmarks and so on.\n\nWe don\u2019t have to do these tasks ourselves any more, as all of these systems also come with ways to use the data in the form of Application Programming Interfaces, or APIs for short. APIs give us raw data when we send requests telling the system what we want to get back.\n\nThe problem is that every API has a different idea of what is a simple way of accessing this data and in which format to give it back.\n\nMaking it easier to access APIs\n\nWhat we need is a way to abstract the pains of different data formats and authentication formats away from the developer \u2014 and this is the purpose of the Yahoo Query Language, or YQL for short. \n\nLibraries like jQuery and YUI make it easy and reliable to use JavaScript in browsers (yes, even IE6) and YQL allows us to access web services and even the data embedded in web documents in a simple fashion \u2013 SQL style.\n\nSelect * from the web and filter it the way I want\n\nYQL is a web service that takes a few inputs itself:\n\n\n\tA query that tells it what to get, update or access\n\tAn output format \u2013 XML, JSON, JSON-P or JSON-P-X\n\tA callback function (if you defined JSON-P or JSON-P-X)\n\n\nYou can try it out yourself \u2013 check out this link to get back Flickr photos for the search term \u2018santa\u2019*%20from%20flickr.photos.search%20where%20text%3D%22santa%22&format=xml in XML format. The YQL query for this is \n\nselect * from flickr.photos.search where text=\"santa\"\n\nThe easiest way to take your first steps with YQL is to look at the console. There you get sample queries, access to all the data sources available to you and you can easily put together complex queries. In this article, however, let\u2019s use PHP to put together a web page that pulls in Flickr photos, blog posts, Videos from YouTube and latest bookmarks from Delicious.\n\nCheck out the demo and get the source code on GitHub.\n\nquery->results->results;\n /* YouTube output */\n $youtube = '';\n /* Flickr output */\n $flickr = '';\n /* Delicious output */\n $delicious = '';\n /* Blog output */\n $blog = '';\n function undoYouTubeMarkupCrimes($str){\n\t$cleaner = preg_replace('/555px/','100%',$str);\n\t$cleaner = preg_replace('/width=\"[^\"]+\"/','',$cleaner);\n\t$cleaner = preg_replace('//','',$cleaner);\n\treturn $cleaner;\n }\n?>\n\nWhat we are doing here is create a few different YQL statements and queue them together with the query.multi table. Each of these can be run inside YQL itself. Check out the YouTube, Flickr, Delicious and Blog example in the console if you don\u2019t believe me. The benefit of using this table is that we don\u2019t make individual requests for each query but we get all the data in one single request \u2013 which means a much better performing solution as the YQL server farm is faster on the web than our servers.\n\nWe point the query to the YQL web service end point and get the resulting data using cURL. All that we need to do then is to convert the returned data to HTML lists that can be printed out inside an HTML template.\n\nMixing, matching and using HTML as a data source\n\nThis was a simple example of what YQL can do for you. Where it gets really powerful however is by mixing and matching different APIs. YQL is also a good tool to get information from HTML documents. 
By using the html table you can load the content of an HTML document (which gets fixed automatically by HTMLTidy) and use XPATH to filter down results to what you need. Take the following example which takes headlines from the news.bbc.co.uk homepage and runs the results through Yahoo\u2019s Term Extractor API to give you a list of currently hot topics.\n\nselect * from search.termextract where context in (\n select content from html where url=\"http://news.bbc.co.uk\" and xpath=\"//table[@width=800]//a\"\n)\n\nTry it out in the console or see the results here. In English, this means:\n\n\n\tGo to http://news.bbc.co.uk and get me the HTML\n\tRun it through HTML Tidy to clean it up.\n\tGet me only the links inside the table with an attribute of width and the value 800\n\tGet only the content of the link and for each of the links\n\t\n\t\tTake the content and send it as context to the Yahoo Term Extractor API\n\t\n\t\n\nIf we choose JSON-P as the output format we can use the outcome directly in JavaScript (see this demo or see its source):\n\n\n\n\n\nUsing JSON, we can also use PHP which means the demo works for everybody \u2013 not only those with JavaScript enabled (see this demo or see its source):\n\n\n\nSummary\n\nThis article could only scratch the surface of YQL. You have not only read access to the web but you can also write to web services. For example you can update Twitter, post to your WordPress blog or shorten a URL with bit.ly. Using Open Tables you can add any web service to the YQL interface and you can even run server-side JavaScript which is for example useful to return Flickr photos as HTML or get the HTML content from a document that needs POST data.\n\nThe web of data is already here, and using YQL you don\u2019t have to be a web services expert to use it and be part of it.", "year": "2009", "author": "Christian Heilmann", "author_slug": "chrisheilmann", "published": "2009-12-17T00:00:00+00:00", "url": "https://24ways.org/2009/the-web-is-your-cms/", "topic": "code"} {"rowid": 116, "title": "The IE6 Equation", "contents": "It is the destiny of one browser to serve as the nemesis of web developers everywhere. At the birth of the Web Standards movement, that role was played by Netscape Navigator 4; an outdated browser that refused to die. Its tenacious existence hampered the adoption of modern standards. Today that role is played by Internet Explorer 6.\n\nThere\u2019s a sensation that I\u2019m sure you\u2019re familiar with. It\u2019s a horrible mixture of dread and nervousness. It\u2019s the feeling you get when\u2014after working on a design for a while in a standards-compliant browser like Firefox, Safari or Opera\u2014you decide that you can no longer put off the inevitable moment when you must check the site in IE6. Fingers are crossed, prayers are muttered, but alas, to no avail. The nemesis browser invariably screws something up.\n\nWhat do you do next? If the differences in IE6 are minor, you could just leave it be. After all, websites don\u2019t need to look exactly the same in all browsers. But if there are major layout issues and a significant portion of your audience is still using IE6, you\u2019ll probably need to roll up your sleeves and start fixing the problems.\n\nA common approach is to quarantine IE6-specific CSS in a separate stylesheet. 
This stylesheet can then be referenced from the HTML document using conditional comments like this:\n\n<!--[if lt IE 7]>\n<link rel=\"stylesheet\" href=\"ie6.css\" type=\"text/css\" />\n<![endif]-->\n\nThat stylesheet will only be served up to Internet Explorer where the version number is less than 7.\n\nYou can put anything inside a conditional comment. You could put a script element in there. So as well as serving up browser-specific CSS, it\u2019s possible to serve up browser-specific JavaScript.\n\nA few years back, before Microsoft released Internet Explorer 7, JavaScript genius Dean Edwards wrote a script called IE7. This amazing piece of code uses JavaScript to make Internet Explorer 5 and 6 behave like a standards-compliant browser. Dean used JavaScript to bootstrap IE\u2019s CSS support.\n\nBecause the script is specifically targeted at Internet Explorer, there\u2019s no point in serving it up to other browsers. Conditional comments to the rescue:\n\n<!--[if lt IE 7]>\n<script src=\"ie7.js\" type=\"text/javascript\"></script>\n<![endif]-->\n\nStandards-compliant browsers won\u2019t fetch the script. Users of IE6, on the other hand, will pay a kind of bad browser tax by having to download the JavaScript file.\n\nSo when should you develop an IE6-specific stylesheet and when should you just use Dean\u2019s JavaScript code? This is the question that myself and my co-worker Natalie Downe set out to answer one morning at Clearleft. We realised that in order to answer that question you need to first answer two other questions: how much time does it take to develop for IE6, and how much of your audience is using IE6?\n\nLet\u2019s say that t represents the total development time. Let t6 represent the portion of that time you spend developing for IE6. If your total audience is a, then a6 is the portion of your audience using IE6. With some algebraic help from our mathematically minded co-worker Cennydd Bowles, Natalie and I came up with the following equation to calculate the percentage likelihood that you should be using Dean\u2019s IE7 script:\n\n\n\np = 50 [ log ( (a \u00d7 t6) / (t \u00d7 a6) ) + 1 ]\n\nTry plugging in your own numbers. If you spend a lot of time developing for IE6 and only a small portion of your audience is using that browser, you\u2019ll get a very high number out of the equation; you should probably use the IE7 script. But if you only spend a little time developing for IE6 and a significant portion of your audience is still using that browser, you\u2019ll get a very small value for p; you might as well write an IE6-specific stylesheet.\n\nOf course this equation is somewhat disingenuous. While it\u2019s entirely possible to research the percentage of your audience still using IE6, it\u2019s not so easy to figure out how much of your development time will be spent developing for that one browser. You can\u2019t really know until you\u2019ve already done the development, by which time the equation is irrelevant.\n\nInstead of using the equation, you could try imposing a limit on how long you will spend developing for IE6. Get your site working in standards-compliant browsers first, then give yourself a time limit to get it working in IE6. If you can\u2019t solve all the issues in that time limit, switch over to using Dean\u2019s script. You could even make the time limit directly proportional to the percentage of your audience using IE6. If 20% of your audience is still using IE6 and you\u2019ve just spent five days getting the site working in standards-compliant browsers, give yourself one day to get it working in IE6. 
But if 50% of your audience is still using IE6, be prepared to spend 2.5 days wrestling with your nemesis.\n\nAll of these different methods for dealing with IE6 demonstrate that there\u2019s no one single answer that works for everyone. They also highlight a problem with the current debate around dealing with IE6. There\u2019s no shortage of blog posts, articles and even entire websites discussing when to drop support for IE6. But very few of them take the time to define what they mean by \u201csupport.\u201d This isn\u2019t a binary issue. There is no Boolean answer. Instead, there\u2019s a sliding scale of support:\n\n\n\tBlock IE6 users from your site.\n\tDevelop with web standards and don\u2019t spend any development time testing in IE6.\n\tUse the Dean Edwards IE7 script to bootstrap CSS support in IE6.\n\tWrite an IE6 stylesheet to address layout issues.\n\tMake your site look exactly the same in IE6 as in any other browser.\n\n\nEach end of that scale is extreme. I don\u2019t think that anybody should be actively blocking any browser but neither do I think that users of an outdated browser should get exactly the same experience as users of a more modern browser. The real meanings of \u201csupporting\u201d or \u201cnot supporting\u201d IE6 lie somewhere in-between those extremes.\n\nJust as I think that semantics are important in markup, they are equally important in our discussion of web development. So let\u2019s try to come up with some better terms than using the catch-all verb \u201csupport.\u201d If you say in your client contract that you \u201csupport\u201d IE6, define exactly what that means. If you find yourself in a discussion about \u201cdropping support\u201d for IE6, take the time to explain what you think that entails.\n\nThe web developers at Yahoo! are on the right track with their concept of graded browser support. I\u2019m interested in hearing more ideas of how to frame this discussion. If we can all agree to use clear and precise language, we stand a better chance of defeating our nemesis.", "year": "2008", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2008-12-08T00:00:00+00:00", "url": "https://24ways.org/2008/the-ie6-equation/", "topic": "code"} {"rowid": 321, "title": "Tables with Style", "contents": "It might not seem like it but styling tabular data can be a lot of fun. From a semantic point of view, there are plenty of elements to tie some style into. You have cells, rows, row groups and, of course, the table element itself. Adding CSS to a paragraph just isn\u2019t as exciting.\n\nWhere do I start?\n\nFirst, if you have some tabular data (you know, like a spreadsheet with rows and columns) that you\u2019d like to spiffy up, pop it into a table \u2014 it\u2019s rightful place!\n\nTo add more semantics to your table \u2014 and coincidentally to add more hooks for CSS \u2014 break up your table into row groups. There are three types of row groups: the header (thead), the body (tbody) and the footer (tfoot). You can only have one header and one footer but you can have as many table bodies as is appropriate.\n\nSample table example\n\nInspiration\n\nTable Striping\n\nTo improve scanning information within a table, a common technique is to style alternating rows. Also known as zebra tables. Whether you apply it using a class on every other row or turn to JavaScript to accomplish the task, a handy-dandy trick is to use a semi-transparent PNG as your background image. This is especially useful over patterned backgrounds. 
\n\ntbody tr.odd td {\n background:transparent url(background.png) repeat top left;\n }\n\n * html tbody tr.odd td {\n background:#C00;\n filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(\n src='background.png', sizingMethod='scale');\n }\n\nWe turn off the default background and apply our PNG hack to have this work in Internet Explorer. \n\nStyling Columns\n\nDid you know you could style a column? That\u2019s right. You can add special column (col) or column group (colgroup) elements. With that you can add border or background styles to the column.\n\n\n \n \n \n ...\n\nCheck out the example.\n\nFun with Backgrounds\n\nPop in a tiled background to give your table some character! Internet Explorer\u2019s PNG hack unfortunately only works well when applied to a cell.\n\nTo figure out which background will appear over another, just remember the hierarchy:\n\n (bottom) Table \u2192 Column \u2192 Row Group \u2192 Row \u2192 Cell (top)\n\nThe Future is Bright\n\nOnce browser-makers start implementing CSS3, we\u2019ll have more power at our disposal. Just with :first-child and :last-child, you can pull off a scalable version of our previous table with rounded corners and all \u2014 unfortunately, only Firefox manages to pull this one off successfully. And the selector the masses are clamouring for, nth-child, will make zebra tables easy as eggnog.", "year": "2005", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2005-12-19T00:00:00+00:00", "url": "https://24ways.org/2005/tables-with-style/", "topic": "code"} {"rowid": 20, "title": "Make Your Browser Dance", "contents": "It was a crisp winter\u2019s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue.\n\nThis was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these\u2026 and working out ways to mix them as the music played.\n\nThis was it. This was me VJing.\n\nThis was all a lifetime (well a decade!) ago.\n\nWhen I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframe directive, as well as new APIs such as getUserMedia, indexedDB, the Web Audio API\n\nBut wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser?\n\nWell, there\u2019s only one thing to do: let\u2019s try it!\n\nLet\u2019s take to the dance floor \n\nOver the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let\u2019s take the same approach here.\n\nThe main VJing functionality I want to recreate is manipulating visuals in relation to sound. 
So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right?\n\nSo, let\u2019s start at the beginning: creating a simple visual. For this I\u2019m going to create a CSS animation. It\u2019s just a funky i element with the opacity being changed to make it flash.\n\n See the Pen Creating a light by Rumyra (@Rumyra) on CodePen\n\nA note about prefixes: I\u2019ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. I find a great resource to find out if you do is caniuse.com. You can also check out all the code for the examples in this article\n\nStart the music\n\nWell, that\u2019s pretty easy so far. Next up: loading in some sound. For this we\u2019ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device\u2019s speakers; and any number of processing nodes in between. All this processing that goes on with the audio is sandboxed within the AudioContext.\n\nSo, let\u2019s start by initialising our audio context.\n\nvar contextClass = window.AudioContext;\nif (contextClass) {\n //web audio api available.\n var audioContext = new contextClass();\n} else {\n //web audio api unavailable\n //warn user to upgrade/change browser\n}\n\nNow let\u2019s load our sound file into the new context we created with an XMLHttpRequest.\n\nfunction loadSound() {\n\t//set audio file url\n\tvar audioFileUrl = '/octave.ogg';\n\t//create new request\n\tvar request = new XMLHttpRequest();\n\trequest.open(\"GET\", audioFileUrl, true);\n\trequest.responseType = \"arraybuffer\";\n\n\trequest.onload = function() {\n\t\t//take from http request and decode into buffer\n\t\tcontext.decodeAudioData(request.response, function(buffer) {\n\t \taudioBuffer = buffer;\n\t });\n\t\t}\n\trequest.send();\n}\n\nPhew! Now we\u2019ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase volume; add filters; spatialisation. If you want to dig deeper, the O\u2019Reilly Web Audio API book by Boris Smus is available to read online free.\n\nAll we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have.\n\n Learning the steps\n\nLet\u2019s take a minute to step back and remember our school days and science class. I\u2019m sure if I drew a picture of a sound wave, we would all start nodding our heads.\n\n \n\nThe sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically strength of pressure. A simple example of change of amplitude is when you increase the volume on your stereo and the output wave increases in size.\n\nThis is great when everything is analogue, but the waveform varies continuously and it\u2019s not suitable for digital processing: there\u2019s an infinite set of values. For digital processing, we need discrete numbers.\n\nWe have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we\u2019re doing in the code above is piping that data in the audio context. 
All we need to do now is access it.\n\nWe can do this with the Web Audio API\u2019s analysing functionality. Just pop in an analysing node before we connect the source to its destination node.\n\nfunction createAnalyser(source) {\n\t//create analyser node\n\tanalyser = audioContext.createAnalyser();\n\t//connect to source\n\tsource.connect(analyzer);\n\t//pipe to speakers\n\tanalyser.connect(audioContext.destination);\n}\n\nThe data I\u2019m really interested in here is frequency. Later we could look into amplitude or time, but for now I\u2019m going to stick with frequency.\n\nThe analyser node gives us frequency data via the getFrequencyByteData method.\n\n Don\u2019t forget to count!\n\nTo collect the data from the getFrequencyByteData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it?\n\nThis is really up to us and how high the resolution of frequencies we want to analyse is. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context\u2019s sampleRate attribute. This is good to bear in mind when you\u2019re thinking about your resolution of frequencies.\n\nvar sampleRate = audioContext.sampleRate;\n\nLet\u2019s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we\u2019re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0\u201320,000hz.\n\nSo, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform), which in this case is 2,048 which is the default. You can set it via the fftSize property.\n\n//set our FFT size\nanalyzer.fftSize = 2048;\n//create an empty array with 1024 items\nvar frequencyData = new Uint8Array(1024);\n\nSo, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 \u00f7 1,024 = 23.44Hz apart.\n\nThe thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame.\n\nfunction update() {\n \t//constantly getting feedback from data\n \trequestAnimationFrame(update);\n \tanalyzer.getByteFrequencyData(frequencyData);\n}\n\n Putting it all together\n\nSweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier.\n\nWe can easily pause and run our CSS animation from JavaScript:\n\nelement.style.webkitAnimationPlayState = \"paused\";\nelement.style.webkitAnimationPlayState = \"running\";\n\nUnfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle.\n\nThere is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms.\n\nThis seems a bit much for our proof of concept, so let\u2019s backtrack a little. 
We know by the animation we\u2019ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript.\n\nelement.style.opacity = \"1\";\nelement.style.opacity = \"0.2\";\n\nSo let\u2019s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I\u2019ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold.\n\n//get light elements\nvar lights = document.getElementsByTagName('i');\nvar totalLights = lights.length;\n\nfor (var i=0; i 160){\n //start animation on element\n lights[i].style.opacity = \"1\";\n } else {\n lights[i].style.opacity = \"0.2\";\n }\n}\n\nSee all the code in action here. I suggest viewing in a modern browser :)\n\nAwesome! It is true \u2014 we can VJ in our browser!\n\nLet\u2019s dance!\n\nSo, let\u2019s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. Also, maybe we should try a sound file more suited to gigs or clubs.\n\nCheck it out!\n\nI don\u2019t know about you, but I\u2019m pretty excited \u2014 that\u2019s just a bit of HTML, CSS and JavaScript!\n\nThe other thing to think about, of course, is the sound that you would get at a venue. We don\u2019t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I\u2019ve found, is to capture what my laptop\u2019s mic is picking up and piping that back into the audio context. We can do this by using getUserMedia.\n\nLet\u2019s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash.\n\n And relax :)\n\nThere you have it. Sit back, play some music and enjoy the Winamp like experience in front of you.\n\nSo, where do we go from here? I already have a wealth of ideas. We haven\u2019t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it\u2019s questionable whether the browser is the best environment for this. For one, I\u2019m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it\u2019s fun, and it looks cool and sometimes I think it\u2019s OK to just dance.", "year": "2013", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2013-12-02T00:00:00+00:00", "url": "https://24ways.org/2013/make-your-browser-dance/", "topic": "code"} {"rowid": 177, "title": "HTML5: Tool of Satan, or Yule of Santa?", "contents": "It would lead to unseasonal arguments to discuss the title of this piece here, and the arguments are as indigestible as the fourth turkey curry of the season, so we\u2019ll restrict our article to the practical rather than the philosophical: what HTML5 can you reasonably expect to be able to use reliably cross-browser in the early months of 2010?\n\nThe answer is that you can use more than you might think, due to the seasonal tinsel of feature-detection and using the sparkly pixie-dust of IE-only VML (but used in a way that won\u2019t damage your Elf).\n\nCanvas\n\ncanvas is a 2D drawing API that defines a blank area of the screen of arbitrary size, and allows you to draw on it using JavaScript. The pictures can be animated, such as in this canvas mashup of Wolfenstein 3D and Flickr. (The difference between canvas and SVG is that SVG uses vector graphics, so is infinitely scalable. 
It also keeps a DOM, whereas canvas is just pixels so you have to do all your own book-keeping yourself in JavaScript if you want to know where aliens are on screen, or do collision detection.)\n\nPreviously, you needed to do this using Adobe Flash or Java applets, requiring plugins and potentially compromising keyboard accessibility. Canvas drawing is supported now in Opera, Safari, Chrome and Firefox. The reindeer in the corner is, of course, Internet Explorer, which currently has zero support for canvas (or SVG, come to that).\n\nNow, don\u2019t pull a face like all you\u2019ve found in your Yuletide stocking is a mouldy satsuma and a couple of nuts\u2014that\u2019s not the end of the story. Canvas was originally an Apple proprietary technology, and Internet Explorer had a similar one called Vector Markup Language which was submitted to the W3C for standardisation in 1998 but which, unlike canvas, was not blessed with retrospective standardisation.\n\nWhat you need, then, is some way for Internet Explorer to translate canvas to VML on-the-fly, while leaving the other, more standards-compliant browsers to use the HTML5. And such a way exists\u2014it\u2019s a JavaScript library called excanvas. It\u2019s downloadable from http://code.google.com/p/explorercanvas/ and it\u2019s simple to include it via a conditional comment in the head for IE:\n\n\n\nSimply include this, and your canvas will be natively supported in the modern browsers (and the library won\u2019t even be downloaded) whereas IE will suddenly render your canvas using its own VML engine. Be sure, however, to check it carefully, as the IE JavaScript engine isn\u2019t so fast and you\u2019ll need to be sure that performance isn\u2019t too degraded to use.\n\nForms\n\nSince the beginning of the Web, developers have been coding forms, and then writing JavaScript to check whether an input is a correctly formed email address, URL, credit card number or conforms to some other pattern. The cumulative labour of the world\u2019s developers over the last 15 years makes whizzing round in a sleigh and delivering presents seem like popping to the corner shop in comparison.\n\nWith HTML5, that\u2019s all about to change. As Yaili began to explore on Day 3, a host of new attributes to the input element provide built-in validation for email address formats (input type=email), URLs (input type=url), any pattern that can be expressed with a JavaScript-syntax regex (pattern=\"[0-9][A-Z]{3}\") and the like. New attributes such as required, autofocus, input type=number min=3 max=50 remove much of the tedious JavaScript from form validation.\n\nOther, really exciting input types are available (see all input types). The datalist is reminiscent of a select box, but allows the user to enter their own text if they don\u2019t want to choose one of the pre-defined options. input type=range is rendered as a slider, while input type=date pops up a date picker, all natively in the browser with no JavaScript required at all.\n\nCurrently, support is most complete in an experimental implementation in Opera and a number of the new attributes in Webkit-based browsers. But don\u2019t let that stop you! The clever thing about the specification of the new Web Forms is that all the new input types are attributes (rather than elements). input defaults to input type=text, so if a browser doesn\u2019t understand a new HTML5 type, it gracefully degrades to a plain text input.\n\nSo where does that leave validation in those browsers that don\u2019t support Web Forms? 
The answer is that you don\u2019t retire your pre-existing JavaScript validation just yet, but you leave it as a fallback after doing some feature detection. To detect whether (say) input type=email is supported, you make a new input type=email with JavaScript but don\u2019t add it to the page. Then, you interrogate your new element to find out what its type attribute is. If it\u2019s reported back as \u201cemail\u201d, then the browser supports the new feature, so let it do its work and don\u2019t bring in any JavaScript validation. If it\u2019s reported back as \u201ctext\u201d, it\u2019s fallen back to the default, indicating that it\u2019s not supported, so your code should branch to your old validation routines. Alternatively, use the small (7K) Modernizr library which will do this work for you and give you JavaScript booleans like Modernizr.inputtypes[email] set to true or false.\n\nSo what does this buy you? Well, first and foremost, you\u2019re future-proofing your code for that time when all browsers support these hugely useful additions to forms. Secondly, you buy a usability and accessibility win. Although it\u2019s tempting to style the stuffing out of your form fields (which can, incidentally, lead to madness), whatever your branding people say, it\u2019s better to leave forms as close to the browser defaults as possible. A browser\u2019s slider and date pickers will be the same across different sites, making it much more comprehensible to users. And, by using native controls rather than faking sliders and date pickers with JavaScript, your forms are much more likely to be accessible to users of assistive technology.\n\nHTML5 DOCTYPE\n\nYou can use the new DOCTYPE !doctype html now and \u2013 hey presto \u2013 you\u2019re writing HTML5, as it\u2019s pretty much a superset of HTML4. There are some useful advantages to doing this. The first is that the HTML5 validator (I use http://html5.validator.nu) also validates ARIA information, whereas the HTML4 validator doesn\u2019t, as ARIA is a new spec developed after HTML4. (Actually, it\u2019s more accurate to say that it doesn\u2019t validate your ARIA attributes, but it doesn\u2019t automatically report them as an error.)\n\nAnother advantage is that HTML5 allows tabindex as a global attribute (that is, on any element). Although originally designed as an accessibility bolt-on, I ordinarily advise you don\u2019t use it; a well-structured page should provide a logical tab order through links and form fields already.\n\nHowever, tabindex=\"-1\" is a legal value in HTML5 as it allows for the element to be programmatically focussable by JavaScript. It\u2019s also very useful for correcting a bug in Internet Explorer when used with a keyboard; in-page links go nowhere if the destination doesn\u2019t have a proprietary property called hasLayout set or a tabindex of -1.\n\nSo, whether it is the tool of Satan or yule of Santa, HTML5 is just around the corner. Some you can use now, and by the end of 2010 I predict you\u2019ll be able to use a whole lot more as new browser versions are released.", "year": "2009", "author": "Bruce Lawson", "author_slug": "brucelawson", "published": "2009-12-05T00:00:00+00:00", "url": "https://24ways.org/2009/html5-tool-of-satan-or-yule-of-santa/", "topic": "code"} {"rowid": 246, "title": "Designing Your Site Like It\u2019s 1998", "contents": "It\u2019s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. 
To celebrate this anniversary\u2014and my fourteenth contribution to 24 ways\u2014 I\u2019d like to explain how I would\u2019ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.\nMy design for Planes, Trains and Automobiles is fixed at 800px wide.\nDeveloping a framework\nI\u2019ll start by using frames to set up the framework for this new website. Frames are individual pages\u2014one for navigation, the other for my content\u2014pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a element.\nI add two rows to my ; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don\u2019t want frame borders or any space between my frames, I set frameborder and framespacing attributes to 0:\n\n[\u2026]\n\nNext I add the source of my two frame documents. I don\u2019t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame:\n\n\n\n\nI do want links from my navigation to open in the content frame, so I give each a name so I can specify where I want links to open:\n\n\n\n\nThe framework for this website is simple as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets\u2014and as many individual documents\u2014as I need:\n\n \n \n \n \n \n\nLetterbox framesets were common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space.\nHandling older browsers\nSadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content:\n\n<body>\n<p>This page uses frames, but your browser doesn\u2019t support them. \n Please upgrade your browser.</p>\n</body>\n\nForcing someone back into a frame\nSometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a , people could easily miss out on other parts of a design. This short script will prevent this happening and because it\u2019s vanilla Javascript, it doesn\u2019t require a library such as jQuery:\n\n\nLaying out my page\nBefore starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website:\n\nI want absolute control over how people experience my design and don\u2019t want to allow it to stretch, so I first need a
table which limits the width of my layout to 800px. The align attribute will keep this table in the centre of someone’s screen:
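Something like this minimal sketch, where the width and align values are the ones just described and everything else is a bare skeleton:

<table width="800" align="center">
  <tr>
    <td>
      <!-- the whole design sits inside this single centred cell -->
      […]
    </td>
  </tr>
</table>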
Although they were developed for displaying tabular information, the cells and rows which make up the table element make it ideal for the precise implementation of a design. I need several tables—often nested inside each other—to implement my design. These include tables for a banner and three rows of content:
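The overall shape is roughly this; it is only a sketch of the structure, with all the attributes and the nested inner tables left out:

<!-- banner -->
<table>
  […]
</table>

<!-- three rows of content, each laid out with its own (often nested) tables -->
<table>
  […]
</table>

<table>
  […]
</table>

<table>
  […]
</table>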
The width of the first table—used for my banner—is fixed to match the logo it contains. As I don’t need borders, padding, or spacing between these cells, I use attributes to remove them:
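A sketch of that banner table; the zeroed border, cellpadding, and cellspacing attributes follow what was just described, while the width value and the image file name and dimensions are placeholders:

<table width="600" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <!-- the table is exactly as wide as the logo inside it -->
    <td><img src="images/logo.gif" width="600" height="80" alt="Logo"></td>
  </tr>
</table>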
The next table—which contains the largest image, introduction, and a call-to-action—is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images:

[…]

The height and width of these “shims” or “spacers” are only 1px, but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.

For the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:

[…]

I use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table—this time with a white background—inside it:
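A sketch of that border trick; the 1px of cellspacing and the blue and white backgrounds come from the description above, while the exact colour values and the inner cellpadding are assumptions:

<table bgcolor="#003399" border="0" cellspacing="1" cellpadding="0">
  <tr>
    <td>
      <!-- the blue showing through 1px of cellspacing becomes the border -->
      <table bgcolor="#ffffff" border="0" cellspacing="0" cellpadding="10">
        <tr>
          <td>
            […]
          </td>
        </tr>
      </table>
    </td>
  </tr>
</table>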
Adding details
Tables are fabulous tools for laying out a page, but they’re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my “Buy the DVD” call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button:
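A sketch of how that three-slice button table might fit together; every file name, width, and href here is a placeholder, but the fixed-width rounded ends, the stretching middle slice, and the spacer images follow the description:

<table border="0" cellpadding="0" cellspacing="0">
  <tr>
    <!-- fixed-width rounded left end -->
    <td background="images/buy-left.gif" width="12"><img src="images/shim.gif" width="12" height="1" alt=""></td>
    <!-- narrow gradient slice that stretches behind the label -->
    <td background="images/buy-middle.gif" align="center"><a href="buy-the-dvd.html">Buy the DVD</a></td>
    <!-- fixed-width rounded right end -->
    <td background="images/buy-right.gif" width="12"><img src="images/shim.gif" width="12" height="1" alt=""></td>
  </tr>
</table>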
I use those same elements to add details to headlines and lists too. Adding a “bullet” to each item in a list needs only two additional table cells, a circular graphic, and a spacer:
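One list item then looks something like this; the graphic file names and widths are placeholders:

<table border="0" cellpadding="0" cellspacing="0">
  <tr>
    <!-- circular bullet graphic -->
    <td><img src="images/bullet.gif" width="10" height="10" alt=""></td>
    <!-- spacer to hold the text away from the bullet -->
    <td><img src="images/shim.gif" width="6" height="1" alt=""></td>
    <td>Directed by John Hughes</td>
  </tr>
</table>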
\nImplementing a typographic hierarchy\nSo far I\u2019ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use elements to change the typeface from the browser\u2019s default to any font installed on someone\u2019s device:\nPlanes, Trains and Automobiles is a comedy film [\u2026]\nTo adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser\u2019s default. I use a size of 4 for this headline and 2 for the text which follows:\nSteve Martin\n\nAn American actor, comedian, writer, producer, and musician.\nWhen I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every element on all pages on my website.\nNB: I use as many
elements as needed to create space between headlines and paragraphs.\nView the final result (and especially the source.)\nMy modern day design for Planes, Trains and Automobiles.\nI can imagine many people reading this and thinking \u201cThis is terrible advice because we don\u2019t develop websites like this in 2018.\u201d That\u2019s true.\nWe have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:\nfont-variant-caps: titling-caps;\nfont-variant-ligatures: common-ligatures;\nfont-variant-numeric: oldstyle-nums;\nGrid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:\nbody {\n display: grid;\n grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;\n grid-template-rows: auto;\n grid-column-gap: 2vw;\n grid-row-gap: 1vh;\n}\nFlexbox has made it easy to develop flexible components such as navigation links:\nnav ul { display: flex; }\nnav li { flex: 1; }\nJust one line of CSS can create multiple columns of fluid type:\nmain { column-width: 12em; }\nCSS Shapes enable text to flow around irregular shapes including polygons:\n[src*=\"main-img\"] {\n float: left;\n shape-outside: polygon(\u2026);\n}\nToday, we wouldn\u2019t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:\n.btn {\n background: linear-gradient(#8B1212, #DD3A3C);\n border-radius: 1em;\n box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);\n}\nCSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we\u2019re certain we know how to design and develop products and websites better than we did at the end of 1998.\nStrange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That\u2019s why it\u2019s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today\u2014tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass\u2014will be any more relevant in the future than elements, frames, layout tables, and spacer images are today.\nI have no prediction for what the web will be like twenty years from now. However, I want to believe we\u2019ll build on what we\u2019ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won\u2019t be repeated.\n\nHead over to my website if you\u2019d like to read about how I\u2019d implement my design for \u2018Planes, Trains and Automobiles\u2019 today.", "year": "2018", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2018-12-23T00:00:00+00:00", "url": "https://24ways.org/2018/designing-your-site-like-its-1998/", "topic": "code"} {"rowid": 64, "title": "Being Responsive to the Small Things", "contents": "It\u2019s that time of the year again to trim the tree with decorations. Or maybe a DOM tree?\nAny web page is made of HTML elements that lay themselves out in a tree structure. We start at the top and then have multiple branches with branches that branch out from there. 
\n\nTo decorate our tree, we use CSS to specify which branches should receive the tinsel we wish to adorn upon it. It\u2019s all so lovely.\nIn years past, this was rather straightforward. But these days, our trees need to be versatile. They need to be responsive!\nResponsive web design is pretty wonderful, isn\u2019t it? Based on our viewport, we can decide how elements on the page should change their appearance to accommodate various constraints using media queries.\nClearleft have a delightfully clean and responsive site\nAlas, it\u2019s not all sunshine, lollipops, and rainbows. \nWith complex layouts, we may have design chunks \u2014 let\u2019s call them components \u2014 that appear in different contexts. Each context may end up providing its own constraints on the design, both in its default state and in its possibly various responsive states.\n\nMedia queries, however, limit us to the context of the entire viewport, not individual containers on the page. For every container our component lives in, we need to specify how to rearrange things in that context. The more complex the system, the more contexts we need to write code for.\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\nEach new component and each new breakpoint just makes the entire system that much more difficult to maintain. \n@media (min-width: 600px) {\n .features > .component { }\n .grid > .component {}\n}\n\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\n\n@media (min-width: 1024px) {\n .features > .component { }\n}\nEnter container queries\nContainer queries, also known as element queries, allow you to specify conditional CSS based on the width (or maybe height) of the container that an element lives in. In doing so, you no longer have to consider the entire page and the interplay of all the elements within. \nWith container queries, you\u2019ll be able to consider the breakpoints of just the component you\u2019re designing. As a result, you end up specifying less code and the components you develop have fewer dependencies on the things around them. (I guess that makes your components more independent.)\nAwesome, right?\nThere\u2019s only one catch.\nBrowsers can\u2019t do container queries. There\u2019s not even an official specification for them yet. The Responsive Issues (n\u00e9e Images) Community Group is looking into solving how such a thing would actually work. \nSee, container queries are tricky from an implementation perspective. The contents of a container can affect the size of the container. Because of this, you end up with troublesome circular references. \nFor example, if the width of the container is under 500px then the width of the child element should be 600px, and if the width of the container is over 500px then the width of the child element should be 400px. \nCan you see the dilemma? When the container is under 500px, the child element resizes to 600px and suddenly the container is 600px. If the container is 600px, then the child element is 400px! And so on, forever. This is bad.\nI guess we should all just go home and sulk about how we just got a pile of socks when we really wanted the Millennium Falcon. \nOur saviour this Christmas: JavaScript\nThe three wise men \u2014 Tim Berners-Lee, H\u00e5kon Wium Lie, and Brendan Eich \u2014 brought us the gifts of HTML, CSS, and JavaScript. 
\nTo date, there are a handful of open source solutions to fill the gap until a browser implementation sees the light of day.\n\nElementary by Scott Jehl\nElementQuery by Tyson Matanich\nEQ.js by Sam Richards\nCSS Element Queries from Marcj\n\nUsing any of these can sometimes feel like your toy broke within ten minutes of unwrapping it.\nEach take their own approach on how to specify the query conditions. For example, Elementary, the smallest of the group, only supports min-width declarations made in a :before selector.\n.mod-foo:before {\n content: \u201c300 410 500\u201d;\n}\nThe script loops through all the elements that you specify, reading the content property and then setting an attribute value on the HTML element, allowing you to use CSS to style that condition. \n.mod-foo[data-minwidth~=\"300\"] {\n background: blue;\n}\nTo get the script to run, you\u2019ll need to set up event handlers for when the page loads and for when it resizes. \nwindow.addEventListener( \"load\", window.elementary, false );\nwindow.addEventListener( \"resize\", window.elementary, false );\nThis works okay for static sites but breaks down on pages where elements can expand or contract, or where new content is dynamically inserted.\nIn the case of EQ.js, the implementation requires the creation of the breakpoints in the HTML. That means that you have implementation details in HTML, JavaScript, and CSS. (Although, with the JavaScript, once it\u2019s in the build system, it shouldn\u2019t ever be much of a concern unless you\u2019re tracking down a bug.)\nAnother problem you may run into is the use of content delivery networks (CDNs) or cross-origin security issues. The ElementQuery and CSS Element Queries libraries need to be able to read the CSS file. If you are unable to set up proper cross-origin resource sharing (CORS) headers, these libraries won\u2019t help.\nAt Shopify, for example, we had all of these problems. The admin that store owners use is very dynamic and the CSS and JavaScript were being loaded from a CDN that prevented the JavaScript from reading the CSS. \nTo go responsive, the team built their own solution \u2014 one similar to the other scripts above, in that it loops through elements and adds or removes classes (instead of data attributes) based on minimum or maximum width.\nThe caveat to this particular approach is that the declaration of breakpoints had to be done in JavaScript. \n elements = [\n { \u2018module\u2019: \u201c.carousel\u201d, \u201cclassName\u201d:\u2019alpha\u2019, minWidth: 768, maxWidth: 1024 },\n { \u2018module\u2019: \u201c.button\u201d, \u201cclassName\u201d:\u2019beta\u2019, minWidth: 768, maxWidth: 1024 } ,\n { \u2018module\u2019: \u201c.grid\u201d, \u201cclassName\u201d:\u2019cappa\u2019, minWidth: 768, maxWidth: 1024 }\n ]\nWith that done, the script then had to be set to run during various events such as inserting new content via Ajax calls. This sometimes reveals itself in flashes of unstyled breakpoints (FOUB). An unfortunate side effect but one largely imperceptible.\nUsing this approach, however, allowed the Shopify team to make the admin responsive really quickly. Each member of the team was able to tackle the responsive story for a particular component without much concern for how all the other components would react. \n\nEach element responds to its own breakpoint that would amount to dozens of breakpoints using traditional breakpoints. 
This approach allows for a truly fluid and adaptive interface for all screens.\nChristmas is over\nI wish I were the bearer of greater tidings and cheer. It\u2019s not all bad, though. We may one day see browsers implement container queries natively. At which point, we shall all rejoice!", "year": "2015", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2015-12-19T00:00:00+00:00", "url": "https://24ways.org/2015/being-responsive-to-the-small-things/", "topic": "code"} {"rowid": 258, "title": "Mistletoe Offline", "contents": "It\u2019s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.\nI am, of course, talking about Murphy, and the golden rule he gave unto us:\n\nAnything that can go wrong will go wrong.\n\nSo true! I mean, that\u2019s why we make sure we\u2019ve got nice 404 pages. It\u2019s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it\u2019s bound to happen sometime. Murphy\u2019s Law, innit?\nBut there are some Murphyesque situations where even your lovingly crafted 404 page won\u2019t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things than can\u2014and will\u2014go wrong.\nI guess there\u2019s nothing we can do about those particular situations, right?\nWrong!\nA service worker is a Murphy-battling technology that you can inject into a visitor\u2019s device from your website. Once it\u2019s installed, it can intercept any requests made to your domain. If anything goes wrong with a request\u2014as is inevitable\u2014you can provide instructions for the browser. That\u2019s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.\nIf you\u2019ve got a custom 404 page, why not make a custom offline page too?\nGet your server in order\nStep one is to make \u2026actually, wait. There\u2019s a step before that. Step zero. Get your site running on HTTPS, if it isn\u2019t already. You won\u2019t be able to use a service worker unless everything\u2019s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.\nIf you\u2019re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.\nMake an offline page\nAlright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here\u2019s an example of the custom offline page for this year\u2019s Ampersand conference.\nWhen you\u2019re done, publish the offline page at suitably imaginative URL, like, say /offline.html.\nPre-cache your offline page\nNow create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user\u2019s device. 
When that happens, an event called install is fired. You can listen out for this event using addEventListener:\naddEventListener('install', installEvent => {\n// put your instructions here.\n}); // end addEventListener\nIn this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I\u2019m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:\naddEventListener('install', installEvent => {\n installEvent.waitUntil(\n caches.open('Johnny')\n .then( JohnnyCache => {\n JohnnyCache.addAll([\n '/offline.html'\n ]); // end addAll\n }) // end open.then\n ); // end waitUntil\n}); // end addEventListener\nI\u2019m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:\naddEventListener('install', installEvent => {\n installEvent.waitUntil(\n caches.open('Johnny')\n .then( JohnnyCache => {\n JohnnyCache.addAll([\n '/offline.html',\n '/path/to/stylesheet.css',\n '/path/to/javascript.js',\n '/path/to/image.jpg'\n ]); // end addAll\n }) // end open.then\n ); // end waitUntil\n}); // end addEventListener\nMake sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.\nIntercept requests\nThe next event you want to listen for is the fetch event. This is probably the most powerful\u2014and, let\u2019s be honest, the creepiest\u2014feature of a service worker. Once it has been installed, the service worker lurks on the user\u2019s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:\naddEventListener('fetch', fetchEvent => {\n// What happens next is up to you!\n}); // end addEventListener\nLet\u2019s write a fairly conservative script with the following logic:\n\nWhenever a file is requested,\nFirst, try to fetch it from the network,\nBut if that doesn\u2019t work, try to find it in the cache,\nBut if that doesn\u2019t work, and it\u2019s a request for a web page, show the custom offline page instead.\n\nHere\u2019s how that translates into JavaScript:\n// Whenever a file is requested\naddEventListener('fetch', fetchEvent => {\n const request = fetchEvent.request;\n fetchEvent.respondWith(\n // First, try to fetch it from the network\n fetch(request)\n .then( responseFromFetch => {\n return responseFromFetch;\n }) // end fetch.then\n // But if that doesn't work\n .catch( fetchError => {\n // try to find it in the cache\n caches.match(request)\n .then( responseFromCache => {\n if (responseFromCache) {\n return responseFromCache;\n // But if that doesn't work\n } else {\n // and it's a request for a web page\n if (request.headers.get('Accept').includes('text/html')) {\n // show the custom offline page instead\n return caches.match('/offline.html');\n } // end if\n } // end if/else\n }) // end match.then\n }) // end fetch.catch\n ); // end respondWith\n}); // end addEventListener\nI am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what\u2019s happening at each point in the code, I\u2019ve written a whole book for you. 
It\u2019s the perfect present for Murphymas.\nHook up your service worker script\nYou can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you\u2019re calling in to every page on your site, or add this in a script element at the end of every page\u2019s HTML:\nif (navigator.serviceWorker) {\n navigator.serviceWorker.register('/serviceworker.js');\n}\nThat tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.\nYou might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don\u2019t do that. If you do, the service worker will only be able to handle requests made to for files within /assets/js/. By putting the service worker script in the root directory, you\u2019re making sure that every request can be intercepted.\nGo further!\nNicely done! You\u2019ve made sure that if\u2014no, when\u2014a visitor can\u2019t reach your website, they\u2019ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!\nThis is just the beginning. You can do more with service workers.\nWhat if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they\u2019re offline, you could show them the cached version.\nOr, what if instead of reaching out the network first, you checked to see if a file is in the cache first? You could serve up that cached version\u2014which would be blazingly fast\u2014and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.\nSo many options! The hard part isn\u2019t writing the code, it\u2019s figuring out the steps you want to take. Once you\u2019ve got those steps written out, then it\u2019s a matter of translating them into JavaScript.\nInevitably there will be some obstacles along the way\u2014usually it\u2019s a misplaced curly brace or a missing parenthesis. Don\u2019t be too hard on yourself if your code doesn\u2019t work at first. That\u2019s just Murphy\u2019s Law in action.", "year": "2018", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2018-12-04T00:00:00+00:00", "url": "https://24ways.org/2018/mistletoe-offline/", "topic": "code"} {"rowid": 292, "title": "Watch Your Language!", "contents": "I\u2019m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can\u2019t be expressed in some languages, while other languages express these concepts with ease.\nIt also helped me understand the way we label languages. English: business. French: romance. Here\u2019s an example of how words, or the absence thereof, can affect the way we think:\nIn French we love everything. There\u2019s no straightforward way to say we like something, so we just end up loving everything. 
I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It\u2019s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn\u2019t grasped the difference. Needless to say, I\u2019ve scared away plenty of first dates!\nLearning another language made me realize the limitations of my native language and revealed concepts I didn\u2019t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas.\nWhen I lived in Montr\u00e9al, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient.\n\nI\u2019m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. In the past couple of years I have been lucky enough to write code in these languages at a massive scale. In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job.\nWhen I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn\u2019t consider accessibility. All this made editing views a laborious process.\nGrep. Replace all. Test. Ship it. Repeat.\nThis wasn\u2019t sustainable at Shopify\u2019s scale, so the newly-formed front end team was given two missions:\n\nMake the app responsive (AKA Let\u2019s Make This Thing Responsive ASAP)\nMake the view layer scalable and maintainable (AKA Let\u2019s Build a Pattern Library\u2026 in Ruby)\n\nLet\u2019s make this thing responsive ASAP\nThe year was 2015. The Shopify admin wasn\u2019t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries.\nIt seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn\u2019t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool.\nWriting the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple weeks to push this to production and make our app mostly responsive.\nBut with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn\u2019t performant at all. 
Components would jarringly jump around the page before settling in on first paint.\nIt was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn\u2019t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout.\nIn other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly.\nLet\u2019s build a pattern library\u2026 in Ruby\nIn order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping.\nWhen I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. Because of this baggage, the only way for us to build a pattern library that would get buyin from our developers was to use Rails view helpers. And for many reasons using Ruby was the right choice for us. The time spent ramping developers up on the new UI Components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn\u2019t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan.\nWe put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front end patterns we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still do)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks.\nSince the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn\u2019t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn\u2019t going to cut it.\nI don\u2019t know the end of this story, because we haven\u2019t written it yet. We\u2019ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven\u2019t determined the winner yet. Only time (and data gathering) will tell.\nRuby is great but it doesn\u2019t speak the browser\u2019s language efficiently. It was not the right language for the job.\n\nLearning a new spoken language has had an impact on how I write code. It has taught me that you don\u2019t know what you don\u2019t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. 
But if you still feel like you\u2019re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language.", "year": "2016", "author": "Annie-Claude C\u00f4t\u00e9", "author_slug": "annieclaudecote", "published": "2016-12-10T00:00:00+00:00", "url": "https://24ways.org/2016/watch-your-language/", "topic": "code"} {"rowid": 180, "title": "Going Nuts with CSS Transitions", "contents": "I\u2019m going to show you how CSS 3 transforms and WebKit transitions can add zing to the way you present images on your site.\n\nLaying the foundations\n\nFirst we are going to make our images look like mini polaroids with captions. Here\u2019s the markup:\n\n
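A sketch of that markup; the image path is a placeholder, and the caption is shown as a plain paragraph since only the polaroid and pull-right class names appear in the CSS that follows:

<div class="polaroid pull-right">
  <img src="images/new-zealand-cat.jpg" alt="">
  <p>Found this little cutie on a walk in New Zealand!</p>
</div>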
\n\nYou\u2019ll notice we\u2019re using a somewhat presentational class of pull-right here. This means the logic is kept separate from the code that applies the polaroid effect. The polaroid class has no positioning, which allows it to be used generically anywhere that the effect is required. The pull classes set a float and add appropriate margins\u2014they can be used for things like blockquotes as well.\n\n.polaroid {\n\twidth: 150px;\n\tpadding: 10px 10px 20px 10px;\n\tborder: 1px solid #BFBFBF;\n\tbackground-color: white;\n\t-webkit-box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4);\n\t-moz-box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4);\n\tbox-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4);\n}\n\nThe actual polaroid effect itself is simply applied using padding, a border and a background colour. We also apply a nice subtle box shadow, using a property that is supported by modern WebKit browsers and Firefox 3.5+. We include the box-shadow property last to ensure that future browsers that support the eventual CSS3 specified version natively will use that implementation over the legacy browser specific version.\n\nThe box-shadow property takes four values: three lengths and a colour. The first is the horizontal offset of the shadow\u2014positive values place the shadow on the right, while negative values place it to the left. The second is the vertical offset, positive meaning below. If both of these are set to 0, the shadow is positioned equally on all four sides. The last length value sets the blur radius\u2014the larger the number, the blurrier the shadow (therefore the darker you need to make the colour to have an effect).\n\nThe colour value can be given in any format recognised by CSS. Here, we\u2019re using rgba as explained by Drew behind the first door of this year\u2019s calendar.\n\nRotation\n\nFor browsers that understand it (currently our old favourites WebKit and FF3.5+) we can add some visual flair by rotating the image, using the transform CSS 3 property.\n\n-webkit-transform: rotate(9deg);\n-moz-transform: rotate(9deg);\ntransform: rotate(9deg);\n\nRotations can be specified in degrees, radians (rads) or grads. WebKit also supports turns unfortunately Firefox doesn\u2019t just yet.\n\nFor our example, we want any polaroid images on the left hand side to be rotated in the opposite direction, using a negative degree value:\n\n.pull-left.polaroid {\n\t-webkit-transform: rotate(-9deg);\n\t-moz-transform: rotate(-9deg);\n\ttransform: rotate(-9deg);\n}\n\nMultiple class selectors don\u2019t work in IE6 but as luck would have it, the transform property doesn\u2019t work in any current IE version either. The above code is a good example of progressive enrichment: browsers that don\u2019t support box-shadow or transform will still see the image and basic polaroid effect.\n\n\n\nAnimation\n\nWebKit is unique amongst browser rendering engines in that it allows animation to be specified in pure CSS. Although this may never actually make it in to the CSS 3 specification, it degrades nicely and more importantly is an awful lot of fun!\n\nLet\u2019s go nuts.\n\nIn the next demo, the image is contained within a link and mousing over that link causes the polaroid to animate from being angled to being straight.\n\nHere\u2019s our new markup:\n\n\n\t\"\"\n\tWhite water rafting in Queenstown\n\n\nAnd here are the relevant lines of CSS:\n\na.polaroid {\n\t/* ... 
*/\n -webkit-transform: rotate(10deg);\n -webkit-transition: -webkit-transform 0.5s ease-in;\n}\na.polaroid:hover,\na.polaroid:focus,\na.polaroid:active {\n\t/* ... */\n\t-webkit-transform: rotate(0deg);\n}\n\nThe @-webkit-transition@ property is the magic wand that sets up the animation. It takes three values: the property to be animated, the duration of the animation and a \u2018timing function\u2019 (which affects the animation\u2019s acceleration, for a smoother effect).\n\n-webkit-transition only takes affect when the specified property changes. In pure CSS, this is done using dynamic pseudo-classes. You can also change the properties using JavaScript, but that\u2019s a story for another time.\n\nThrowing polaroids at a table\n\nImagine there are lots of differently sized polaroid photos scattered on a table. That\u2019s the effect we are aiming for with our next demo.\n\n\n\nAs an aside: we are using absolute positioning to arrange the images inside a flexible width container (with a minimum and maximum width specified in pixels). As some are positioned from the left and some from the right when you resize the browser they shuffle underneath each other. This is an effect used on the UX London site.\n\nThis demo uses a darker colour shadow with more transparency than before. The grey shadow in the previous example worked fine, but it was against a solid background. Since the images are now overlapping each other, the more opaque shadow looked fake.\n\n-webkit-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3);\n-moz-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3);\nbox-shadow: 2px 2px 4px rgba(0,0, 0, 0.3);\n\nOn hover, as well as our previous trick of animating the image rotation back to straight, we are also making the shadow darker and setting the z-index to be higher than the other images so that it appears on top.\n\nAnd Finally\u2026\n\nFinally, for a bit more fun, we\u2019re going to simulate the images coming towards you and lifting off the page. We\u2019ll achieve this by making them grow larger and by offsetting the shadow & making it longer.\n\n\n\n\nScreenshot 1 shows the default state, while 2 shows our previous hover effect. Screenshot 3 is the effect we are aiming for, illustrated by demo 4.\n\na.polaroid {\n\t/* ... */\n\tz-index: 2;\n\t-webkit-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3);\n\t-moz-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3);\n\tbox-shadow: 2px 2px 4px rgba(0,0, 0, 0.3);\n\t-webkit-transform: rotate(10deg);\n\t-moz-transform: rotate(10deg);\n\ttransform: rotate(10deg);\n\t-webkit-transition: all 0.5s ease-in;\n}\na.polaroid:hover,\na.polaroid:focus,\na.polaroid:active {\n\tz-index: 999;\n\tborder-color: #6A6A6A;\n\t-webkit-box-shadow: 15px 15px 20px rgba(0,0, 0, 0.4);\n\t-moz-box-shadow: 15px 15px 20px rgba(0,0, 0, 0.4);\n\tbox-shadow: 15px 15px 20px rgba(0,0, 0, 0.4);\n\t-webkit-transform: rotate(0deg) scale(1.05);\n\t-moz-transform: rotate(0deg) scale(1.05);\n\ttransform: rotate(0deg) scale(1.05);\n}\n\nYou\u2019ll notice we are now giving the transform property another transform function: scale, which takes increases the size by the specified factor. 
Other things you can do with transform include skewing, translating or you can go mad creating your own transforms with a matrix.\n\nThe box-shadow has both its offset and blur radius increased dramatically, and is darkened using the alpha channel of the rgba colour.\n\nAnd because we want the effects to all animate smoothly, we pass a value of all to the -webkit-transition property, ensuring that any changed property on that link will be animated.\n\nDemo 5 is the finished example, bringing everything nicely together.\n\nCSS transitions and transforms are a great example of progressive enrichment, which means improving the experience for a portion of the audience without negatively affecting other users. They are also a lot of fun to play with!\n\nFurther reading\n\n\n\t-moz-transform \u2013 the mozilla developer center has a comprehensive explanation of transform that also applies to -webkit-transform and transform.\n\tCSS: Animation Using CSS Transforms \u2013 this is a good, more indepth tutorial on animations.\n\tCSS Animation \u2013 the Safari blog explains the usage of -webkit-transform.\n\tDinky pocketbooks with transform \u2013 another use for transforms, create your own printable pocketbook.\n\tA while back, Simon wrote a little bookmarklet to spin the entire page\u2026 warning: this will spin the entire page.", "year": "2009", "author": "Natalie Downe", "author_slug": "nataliedowne", "published": "2009-12-14T00:00:00+00:00", "url": "https://24ways.org/2009/going-nuts-with-css-transitions/", "topic": "code"} {"rowid": 193, "title": "Web Content Accessibility Guidelines\u2014for People Who Haven't Read Them", "contents": "I\u2019ve been a huge fan of the Web Content Accessibility Guidelines 2.0 since the World Wide Web Consortium (W3C) published them, nine years ago. I\u2019ve found them practical and future-proof, and I\u2019ve found that they can save a huge amount of time for designers and developers. You can apply them to anything that you can open in a browser. My favourite part is when I use the guidelines to make a website accessible, and then attend user-testing and see someone with a disability easily using that website.\nToday, the United Nations International Day of Persons with Disabilities, seems like a good time to re-read Laura Kalbag\u2019s explanation of why we should bother with accessibility. That should motivate you to devour this article.\nIf you haven\u2019t read the Web Content Accessibility Guidelines 2.0, you might find them a bit off-putting at first. The editors needed to create a single standard that countries around the world could refer to in legislation, and so some of the language in the guidelines reads like legalese. The editors also needed to future-proof the guidelines, and so some terminology\u2014such as \u201ctime-based media\u201d and \u201cprogrammatically determined\u201d\u2014can sound ambiguous. 
The guidelines can seem lengthy, too: printing the guidelines, the Understanding WCAG 2.0 document, and the Techniques for WCAG 2.0 document would take 1,200 printed pages.\nThis festive season, let\u2019s rip off that legalese and ambiguous terminology like wrapping paper, and see\u2014in a single article\u2014what gifts the Web Content Accessibility Guidelines 2.0 editors have bestowed upon us.\nCan your users perceive the information on your website?\nThe first guideline has criteria that help you prevent your users from asking \u201cWhat the **** is this thing here supposed to be?\u201d\n1.1.1 Text is the most accessible format for information. Screen readers\u2014such as the \u201cVoiceOver\u201d setting on your iPhone or the \u201cTalkBack\u201d app on your Android phone\u2014understand text better than any other format. The same applies for other assistive technology, such as translation apps and Braille displays. So, if you have anything on your webpage that\u2019s not text, you must add some text that gives your user the same information. You probably know how to do this already; for example:\n\nfor images in webpages, put some alternative text in an alt attribute to tell your user what the image conveys to the user;\nfor photos in tweets, add a description to make the images accessible;\nfor Instagram posts, write a caption that conveys the photo\u2019s information.\n\nThe alternative text should allow the user to get the same information as someone who can see the image. For websites that have too many images for someone to add alternative text to, consider how machine learning and Dynamically Generated Alt Text might\u2014might\u2014be appropriate.\nYou can probably think of a few exceptions where providing text to describe an image might not make sense. Remember I described these guidelines as \u201cpractical\u201d? They cover all those exceptions:\n\nUser interface controls such as buttons and text inputs must have names or labels to tell your user what they do.\nIf your webpage has video or audio (more about these later on!), you must\u2014at least\u2014have text to tell the user what they are.\nMaybe your webpage has a test where your user has to answer a question about an image or some audio, and alternative text would give away the answer. In that case, just describe the test in text so your users know what it is.\nIf your webpage features a work of art, tell your user the experience it evokes.\nIf you have to include a Captcha on your webpage\u2014and please avoid Captchas if at all possible, because some users cannot get past them\u2014you must include text to tell your user what it is, and make sure that it doesn\u2019t rely on only one sense, such as vision.\nIf you\u2019ve included something just as decoration, you must make sure that your user\u2019s assistive technology can ignore it. Again, you probably know how to do this. For example, you could use CSS instead of HTML to include decorative images, or you could add an empty alt attribute to the img element. (Please avoid that recent trend where developers add empty alt attributes to all images in a webpage just to make the HTML validate. You\u2019re better than that.)\n\n(Notice that the guidelines allow you to choose how to conform to them, with whatever technology you choose. To make your website conform to a guideline, you can either choose one of the techniques for WCAG 2.0 for that guideline or come up with your own. 
Choosing a tried-and-tested technique usually saves time!)\n1.2.1 If your website includes a podcast episode, speech, lecture, or any other recorded audio without video, you must include a transcription or some other text to give your user the same information. In a lot of cases, you might find this easier than you expect: professional transcription services can prove relatively inexpensive and fast, and sometimes a speaker or lecturer can provide the speech or lecture notes that they read out word-for-word. Just make sure that all your users can get the same information and the same results, whether they can hear the audio or not. For example, David Smith and Marco Arment always publish episode transcripts for their Under the Radar podcast. \nSimilarly, if your website includes recorded video without audio\u2014such as an animation or a promotional video\u2014you must either use text to detail what happens in the video or include an audio version. Again, this might work out easier then you perhaps fear: for example, you could check to see whether the animation started life as a list of instructions, or whether the promotional video conveys the same information as the \u201cAbout Us\u201d webpage. You want to make sure that all your users can get the same information and the same results, whether they can see that video or not.\n1.2.2 If your website includes recorded videos with audio, you must add captions to those videos for users who can\u2019t hear the audio. Professional transcription services can provide you with time-stamped text in caption formats that YouTube supports, such as .srt and .sbv. You can upload those to YouTube, so captions appear on your videos there. YouTube can auto-generate captions, but the quality varies from impressively accurate to comically inaccurate. If you have a text version of what the people in the video said\u2014such as the speech that a politician read or the bedtime story that an actor read\u2014you can create a transcript file in .txt format, without timestamps. YouTube then creates captions for your video by synchronising that text to the audio in the video. If you host your own videos, you can ask a professional transcription service to give you .vtt files that you can add to a video element\u2019s track element\u2014or you can handcraft your own. (A quick aside: if your website has more videos than you can caption in a reasonable amount of time, prioritise the most popular videos, the most important videos, and the videos most relevant to people with disabilities. Then make sure your users know how to ask you to caption other videos as they encounter them.)\n1.2.3 If your website has recorded videos that have audio, you must add an \u201caudio description\u201d narration to the video to describe important visual details, or add text to the webpage to detail what happens in the video for users who cannot see the videos. (I like to add audio files from videos to my Huffduffer account so that I can listen to them while commuting.) Maybe your home page has a video where someone says, \u201cI\u2019d like to explain our new TPS reports\u201d while \u201cBill Lumbergh, division Vice President of Initech\u201d appears on the bottom of the screen. In that case, you should add an audio description to the video that announces \u201cBill Lumbergh, division Vice President of Initech\u201d, just before Bill starts speaking. 
As always, you can make life easier for yourself by considering all of your users, before the event: in this example, you could ask the speaker to begin by saying, \u201cI\u2019m Bill Lumbergh, division Vice President of Initech, and I\u2019d like to explain our new TPS reports\u201d\u2014so you won\u2019t need to spend time adding an audio description afterwards. \n1.2.4 If your website has live videos that have some audio, you should get a stenographer to provide real-time captions that you can include with the video. I\u2019ll be honest: this can prove tricky nowadays. The Web Content Accessibility Guidelines 2.0 predate YouTube Live, Instagram live Stories, Periscope, and other such services. If your organisation creates a lot of live videos, you might not have enough resources to provide real-time captions for each one. In that case, if you know the contents of the audio beforehand, publish the contents during the live video\u2014or failing that, publish a transcription as soon as possible.\n1.2.5 Remember what I said about the recorded videos that have audio? If you can choose to either add an audio description or add text to the webpage to detail what happens in the video, you should go with the audio description.\n1.2.6 If your website has recorded videos that include audio information, you could provide a sign language version of the audio information; some people understand sign language better than written language. (You don\u2019t need to caption a video of a sign language version of audio information.)\n1.2.7 If your website has recorded videos that have audio, and you need to add an audio description, but the audio doesn\u2019t have enough pauses for you to add an \u201caudio description\u201d narration, you could provide a separate version of that video where you have added pauses to fit the audio description into.\n1.2.8 Let\u2019s go back to the recorded videos that have audio once more! You could add text to the webpage to detail what happens in the video, so that people who can neither read captions nor hear dialogue and audio description can use braille displays to understand your video.\n1.2.9 If your website has live audio, you could get a stenographer to provide real-time captions. Again, if you know the contents of the audio beforehand, publish the contents during the live audio or publish a transcription as soon as possible.\n(Congratulations on making it this far! I know that seems like a lot to remember, but keep in mind that we\u2019ve covered a complex area: helping your users to understand multimedia information that they can\u2019t see and/or hear. Grab a mince pie to celebrate, and let\u2019s keep going.)\n1.3.1 You must mark up your website\u2019s content so that your user\u2019s browser, and any assistive technology they use, can understand the hierarchy of the information and how each piece of information relates to the rest. Once again, you probably know how to do this: use the most appropriate HTML element for each piece of information. Mark up headings, lists, buttons, radio buttons, checkboxes, and links with the most appropriate HTML element. If you\u2019re looking for something to do to keep you busy this Christmas, scroll through the list of the elements of HTML. Do you notice any elements that you didn\u2019t know, or that you\u2019ve never used? Do you notice any elements that you could use on your current projects, to mark up the content more accurately? 
Also, revise HTML table advanced features and accessibility, how to structure an HTML form, and how to use the native form widgets\u2014you might be surprised at how much you can do with just HTML! Once you\u2019ve mastered those, you can make your website much more usable for your all of your users.\n1.3.2 If your webpage includes information that your user has to read in a certain order, you must make sure that their browser and assistive technology can present the information in that order. Don\u2019t rely on CSS or whitespace to create that order visually. Check that the order of the information makes sense when CSS and whitespace aren\u2019t formatting it. Also, try using the Tab key to move the focus through the links and form widgets on your webpage. Does the focus go where you expect it to? Keep this in mind when using order in CSS Grid or Flexbox.\n1.3.3 You must not presume that your users can identify sensory characteristics of things on your webpage. Some users can\u2019t tell what you\u2019ve positioned where on the screen. For example, instead of asking your users to \u201cChoose one of the options on the left\u201d, you could ask them to \u201cChoose one of our new products\u201d and link to that section of the webpage.\n1.4.1 You must not rely on colour as the only way to convey something to your users. Some of your users can\u2019t see, and some of your users can\u2019t distinguish between colours. For example, if your webpage uses green to highlight the products that your shop has in stock, you could add some text to identify those products, or you could group them under a sub-heading.\n1.4.2 If your webpage automatically plays a sound for more than 3 seconds, you must make sure your users can stop the sound or change its volume. Don\u2019t rely on your user turning down the volume on their computer; some users need to hear the screen reader on their computer, and some users just want to keep listening to whatever they were listening before your webpage interrupted them!\n1.4.3 You should make sure that your text contrasts enough with its background, so that your users can read it. Bookmark Lea Verou\u2019s Contrast Ratio calculator now. You can enter the text colour and background colour as named colours, or as RGB, RGBa, HSL, or HSLa values. You should make sure that:\n\nnormal text that set at 24px or larger has a ratio of at least 3:1;\nbold text that set at 18.75px or larger has a ratio of at least 3:1;\nall other text has a ratio of at least 4\u00bd:1.\n\nYou don\u2019t have to do this for disabled form controls, decorative stuff, or logos\u2014but you could!\n1.4.4 You should make sure your users can resize the text on your website up to 200% without using their assistive technology\u2014and still access all your content and functionality. You don\u2019t have to do this for subtitles or images of text.\n1.4.5 You should avoid using images of text and just use text instead. In 1998, Jeffrey Veen\u2019s first Hot Design Tip said, \u201cText is text. Graphics are graphics. Don\u2019t confuse them.\u201d Now that you can apply powerful CSS text-styling properties, use CSS Grid to precisely position text, and choose from thousands of web fonts (Jeffrey co-founded Typekit to help with this), you pretty much never need to use images of text. The guidelines say you can use images of text if you let your users specify the font, size, colour, and background of the text in the image of text\u2014but I\u2019ve never seen that on a real website. 
Also, this doesn\u2019t apply to logos.\n1.4.6 Let\u2019s go back to colour contrast for a second. You could make your text contrast even more with its background, so that even more of your users can read it. To do that, use Lea Verou\u2019s Contrast Ratio calculator to make sure that:\n\nnormal text that is 24px or larger has a ratio of at least 4\u00bd:1;\nbold text that 18.75px or larger has a ratio of at least 4\u00bd:1;\nall other text has a ratio of at least 7:1.\n\n1.4.7 If your website has recorded speech, you could make sure there are no background sounds, or that your users can turn off any background sounds. If that\u2019s not possible, you could make sure that any background sounds that last longer than a couple of seconds are at least four times quieter than the speech. This doesn\u2019t apply to audio Captchas, audio logos, singing, or rapping. (Yes, these guidelines mention rapping!)\n1.4.8 You could make sure that your users can reformat blocks of text on your website so they can read them better. To do this, make sure that your users can:\n\nspecify the colours of the text and the background, and\nmake the blocks of text less than 80-characters wide, and \nalign text to the left (or right for right-to-left languages), and \nset the line height to 150%, and \nset the vertical distance between paragraphs to 1\u00bd times the line height of the text, and \nresize the text (without using their assistive technology) up to 200% and still not have to scroll horizontally to read it.\n\nBy the way, when you specify a colour for text, always specify a colour for its background too. Don\u2019t rely on default background colours!\n1.4.9 Let\u2019s return to images of text for a second. You could make sure that you use them only for decoration and logos.\nCan users operate the controls and links on your website?\nThe second guideline has criteria that help you prevent your users from asking, \u201cHow the **** does this thing work?\u201d\n2.1.1 You must make sure that you users can carry out all of your website\u2019s activities with just their keyboard, without time limits for pressing keys. (This doesn\u2019t apply to drawing or anything else that requires a pointing device such as a mouse.) Again, if you use the most appropriate HTML element for each piece of information and for each form element, this should prove easy.\n2.1.2 You must make sure that when the user uses the keyboard to focus on some part of your website, they can then move the focus to some other part of your webpage without needing to use a mouse or touch the screen. If your website needs them to do something complex before they can move the focus elsewhere, explain that to your user. These \u201ckeyboard traps\u201d have become rare, but beware of forms that move focus from one text box to another as soon as they receive the correct number of characters.\n2.1.3 Let\u2019s revisit making sure that you users can carry out all of your website\u2019s activities with just their keyboard, without time limits for pressing keys. You could make sure that your user can do absolutely everything on your website with just the keyboard.\n2.2.1 Sometimes people need more time than you might expect to complete a task on your website. 
If any part of your website imposes a time limit on a task, you must do at least one of these: \n\nlet your users turn off the time limit before they encounter it; or\nlet your users increase the time limit to at least 10 times the default time limit before they encounter it; or\nwarn your users before the time limit expires and give them at least 20 seconds to extend it, and let them extend it at least 10 times.\n\nRemember: these guidelines are practical. They allow you to enforce time limits for real-time events such as auctions and ticket sales, where increasing or extending time limits wouldn\u2019t make sense. Also, the guidelines allow you to enforce a maximum time limit of 20 hours. The editors chose 20 hours because people need to go to sleep at some stage. See? Practical!\n2.2.2 In my experience, this criterion remains the least well-known\u2014even though some users can only use websites that conform to it. If your website presents content alongside other content that can distract users by automatically moving, blinking, scrolling, or updating, you must make sure that your users can:\n\npause, stop, or hide the other content if it\u2019s not essential and lasts more than 5 seconds; and\npause, stop, hide, or control the frequency of the other content if it automatically updates.\n\nIt\u2019s OK if your users miss live information such as stock price updates or football scores; you can\u2019t do anything about that! Also, this doesn\u2019t apply to animations such as progress bars that you put on a website to let all users know that the webpage isn\u2019t frozen.\n(If this one sounds complex, just add a pause button to anything that might distract your users.)\n2.2.3 Let\u2019s go back to time limits on tasks on your website. You could make your website even easier to use by removing all time limits except those on real-time events such as auctions and ticket sales. That would mean your user wouldn\u2019t need to interact with a timer at all.\n2.2.4 You could let your users turn off all interruptions\u2014server updates, promotions, and so on\u2014apart from any emergency information.\n2.2.5 This is possibly my favourite of these criteria! After your website logs your user out, you could make sure that when they log in again, they can continue from where they were without having lost any information. Do that, and you\u2019ll be on everyone\u2019s Nice List this Christmas.\n2.3.1 You must make sure that nothing flashes more than three times a second on your website, unless you can make sure that the flashes remain below the acceptable general flash and red flash thresholds\u2026\n2.3.2 \u2026or you could just make sure that nothing flashes more than three times per second on your website. This is usually an easier goal.\n2.4.1 You must make sure that your users can jump past any blocks of content, such as navigation menus, that are repeated throughout your website. You know the drill here: using HTML\u2019s sectioning elements such as header, nav, main, aside, and footer allows users with assistive technology to go straight to the content they need, and adding \u201cSkip Navigation\u201d links allows everyone to get to your main content faster.\n2.4.2 You must add a proper title to describe each webpage\u2019s topic. 
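\nIf your webpage swaps content in with JavaScript instead of loading a new page, remember to update the title at the same time, so that it always describes what the user is currently looking at. A minimal sketch, in which the function, page, and site names are made up:\n\nfunction showSearchResults(query) {\n // Render the results however your webpage does that, then keep the title in sync.\n document.title = 'Search results for ' + query + ' - Example Shop';\n}\n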
Your webpage won\u2019t even validate without a title element, so make it a useful one.\n2.4.3 If your users can focus on links and native form widgets, you must make sure that they can focus on elements in an order that makes sense.\n2.4.4 You must make sure that your users can understand the purpose of a link when they read:\n\nthe text of the link; or\nthe text of the paragraph, list item, table cell, or table header for the cell that contains the link; or\nthe heading above the link.\n\nYou don\u2019t have to do that for games and quizzes.\n2.4.5 You should give your users multiple ways to find any webpage within a set of webpages. Add site-wide search and a site map and you\u2019re done!\nThis doesn\u2019t apply to a webpage that is part of a series of actions (like a shopping cart and checkout flow) or to a webpage that is a result of a series of actions (like a webpage confirming that the user has bought what was in the shopping cart).\n2.4.6 You should help your users to understand your content by providing:\n\nheadings that describe the topics of your content;\nlabels that describe the purpose of the native form widgets on the webpage.\n\n2.4.7 You should make sure that users can see which element they have focussed on. Next time you use your website, try hitting the Tab key repeatedly. Does it visually highlight each item as it moves focus to it? If it doesn\u2019t, search your CSS to see whether you\u2019ve applied outline: 0; to all elements\u2014that\u2019s usually the culprit. Use the :focus pseudo-class to define how elements should appear when they have focus.\n2.4.8 You could help your user to understand where the current webpage is located within your website. Add \u201cbreadcrumb navigation\u201d and/or a site map and you\u2019re done.\n2.4.9 You could make links even easier to understand, by making sure that your users can understand the purpose of a link when they read the text of the link. Again, you don\u2019t have to do that for games and quizzes.\n2.4.10 You could use headings to organise your content by topic. \nCan users understand your content?\nThe third guideline has criteria that help you prevent your users from asking, \u201cWhat the **** does this mean?\u201d\n3.1.1 Let\u2019s start this section with the criterion that possibly takes the least time to implement; you must make sure that the user\u2019s browser can identify the main language that your webpage\u2019s content is written in. For a webpage that has mainly English content, use <html lang=\"en\">.\n3.1.2 You must specify when content in another language appears in your webpage, like so: I wish you a <span lang=\"fr\">Joyeux No\u00ebl</span>. You don\u2019t have to do this for proper names, technical terms, or words that you can\u2019t identify a language for. You also don\u2019t have to do it for words from a different language that people consider part of the language around those words; for example, Come to our Christmas rendezvous! is OK.\n3.1.3 You could make sure that your users can find out the meaning of any unusual words or phrases, including idioms like \u201cstocking filler\u201d or \u201cBah! Humbug!\u201d and jargon such as \u201cVoiceOver\u201d and \u201cTalkBack\u201d. Provide a glossary or link to a dictionary.\n3.1.4 You could make sure that your users can find out the meaning of any abbreviation. For example, VoiceOver pronounces \u201cXmas\u201d as \u201cSmas\u201d instead of \u201cChristmas\u201d. Using the abbr element and linking to a glossary can help.
(Interestingly, VoiceOver pronounces \u201cabbr\u201d as \u201cabbreviation\u201d!)\n3.1.5 Do your users need to be able to read better than a typically educated nine-year-old, to read your content (apart from proper names and titles)? If so, you could provide a version that doesn\u2019t require that level of reading ability, or you could provide images, videos, or audio to explain your content. (You don\u2019t have to add captions or audio description to those videos.)\n3.1.6 You could make sure that your users can access the pronunciation of any word in your content if that word\u2019s meaning depends on its pronunciation. For example, the word \u201cclose\u201d could have one of two meanings, depending on pronunciation, in a phrase such as, \u201cReady for Christmas? Close now!\u201d\n3.2.1 Some users need to focus on elements to access information about them. You must make sure that focusing on an element doesn\u2019t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form.\n3.2.2 Webpages are easier for users when the controls do what they\u2019re supposed to do. Unless you have warned your users about it, you must make sure that changing the value of a control such as a text box, checkbox, or drop-down list doesn\u2019t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form.\n3.2.3 To help your users to find the content they want on each webpage, you should put your navigation elements in the same place on each webpage. (This doesn\u2019t apply when your user has changed their preferences or when they use assistive technology to change how your content appears.) \n3.2.4 When a set of webpages includes things that have the same functionality on different webpages, you should name those things consistently. For example, don\u2019t use the word \u201cSearch\u201d for the search box on one webpage and \u201cFind\u201d for the search box on another webpage within that set of webpages.\n3.2.5 Let\u2019s go back to major changes, such as a new window opening, another element taking focus, or a form being submitted. You could make sure that they only happen when users deliberately make them happen, or when you have warned users about them first. For example, you could give the user a button for updating some content instead of automatically updating that content. Also, if a link will open in a new window, you could add the words \u201copens in new window\u201d to the link text.\n3.3.1 Users make mistakes when filling in forms. Your website must identify each mistake to your user, and must describe the mistake to your users in text so that the user can fix it. One way to identify mistakes reliably to your users is to set the aria-invalid attribute to true in the element that has a mistake. That makes sure that users with assistive technology will be alerted about the mistake. Of course, you can then use the [aria-invalid=\"true\"] attribute selector in your CSS to visually highlight any such mistakes. Also, look into how certain attributes of the input element such as required, type, and list can help prevent and highlight mistakes.\n3.3.2 You must include labels or instructions (and possibly examples) in your website\u2019s forms, to help your users to avoid making mistakes. \n3.3.3 When your user makes a mistake when filling in a form, your webpage should suggest ways to fix that mistake, if possible. 
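To make 3.3.1 and 3.3.3 a little more concrete, here\u2019s a minimal sketch that flags a mistake, describes it in text, and suggests a fix; the email and email-error ids, and the check itself, are just examples:\n\nvar email = document.getElementById('email');\nvar errorMessage = document.getElementById('email-error');\nemail.setAttribute('aria-describedby', 'email-error');\nemail.addEventListener('blur', function () {\n if (email.value && email.value.indexOf('@') === -1) {\n  // Flag the mistake for assistive technology and describe it in text.\n  email.setAttribute('aria-invalid', 'true');\n  errorMessage.textContent = 'This email address seems to be missing an @ sign.';\n } else {\n  email.removeAttribute('aria-invalid');\n  errorMessage.textContent = '';\n }\n});\n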
This doesn\u2019t apply in scenarios where those suggestions could affect the security of the content.\n3.3.4 Whenever your user submits information that:\n\nhas legal or financial consequences; or\naffects information that they have previously saved in your website; or\nis part of a test\n\n\u2026you should make sure that they can:\n\nundo it; or\ncorrect any mistakes, after your webpage checks their information; or\nreview, confirm, and correct the information before they finally submit it.\n\n3.3.5 You could help prevent your users from making mistakes by providing obvious, specific help, such as examples, animations, spell-checking, or extra instructions.\n3.3.6 Whenever your user submits any information, you could make sure that they can:\n\nundo it; or\ncorrect any mistakes, after your webpage checks their information; or\nreview, confirm, and correct the information before they finally submit it.\n\nHave you made your website robust enough to work on your users\u2019 browsers and assistive technologies?\nThe fourth and final guideline has criteria that help you prevent your users from asking, \u201cWhy the **** doesn\u2019t this work on my device?\u201d\n4.1.1 You must make sure that your website works as well as possible with current and future browsers and assistive technology. Prioritise complying with web standards instead of relying on the capabilities of currently popular devices and browsers. Web developers didn\u2019t expect their users to be unwrapping the Wii U Browser five years ago\u2014who knows what browsers and assistive technologies our users will be unwrapping in five years\u2019 time? Avoid hacks, and use the W3C Markup Validation Service to make sure that your HTML has no errors.\n4.1.2 If you develop your own user interface components, you must make their name, role, state, properties, and values available to your user\u2019s browsers and assistive technologies. That should make them almost as accessible as standard HTML elements such as links, buttons, and checkboxes.\n\u201c\u2026and a partridge in a pear tree!\u201d\n\u2026as that very long Christmas song goes. We\u2019ve covered a lot in this article\u2014because your users have a lot of different levels of ability. Hopefully this has demystified the Web Content Accessibility Guidelines 2.0 for you. Hopefully you spotted a few situations that could arise for users on your website, and you now know how to tackle them. \nTo start applying what we\u2019ve covered, you might like to look at Sarah Horton and Whitney Quesenbery\u2019s personas for Accessible UX. Discuss the personas, get into their heads, and think about which aspects of your website might cause problems for them. See if you can apply what we\u2019ve covered today, to help users like them to do what they need to do on your website.\nHow to know when your website is perfectly accessible for everyone\nLOL! There will never be a time when your website becomes perfectly accessible for everyone. Don\u2019t aim for that. Instead, aim for regularly testing and making your website more accessible.\nWeb Content Accessibility Guidelines (WCAG) 2.1\nThe W3C hope to release the Web Content Accessibility Guidelines (WCAG) 2.1 as a \u201crecommendation\u201d (that\u2019s what the W3C call something that we should start using) by the middle of next year. 
Ten years may seem like a long time to move from version 2.0 to version 2.1, but consider the scale of the task: the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible. Keep an eye out for 2.1!\nYou\u2019ll go down in history\nOne last point: I\u2019ve met a surprising number of web designers and developers who do great work to make their websites more accessible without ever telling their users about it. Some of your potential customers have possibly tried and failed to use your website in the past. They probably won\u2019t try again unless you let them know that things have improved. A quick Twitter search for your website\u2019s name alongside phrases like \u201cassistive technology\u201d, \u201cdoesn\u2019t work\u201d, or \u201c#fail\u201d can let you find frustrated users\u2014so you can tell them about how you\u2019re making your website more accessible. Start making your websites work better for everyone\u2014and please, let everyone know.", "year": "2017", "author": "Alan Dalton", "author_slug": "alandalton", "published": "2017-12-03T00:00:00+00:00", "url": "https://24ways.org/2017/wcag-for-people-who-havent-read-them/", "topic": "code"} {"rowid": 37, "title": "JavaScript Modules the ES6 Way", "contents": "JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. \n\nThis is something nearly all other languages come with out of the box, whether it be Ruby\u2019s require, Python\u2019s import, or any other language you\u2019re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let\u2019s be clear: it doesn\u2019t mean the new module system in the upcoming version of JavaScript won\u2019t be useful to you if you\u2019re building smaller websites rather than the next Instagram.\n\nThankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won\u2019t be landing in browsers for a while yet \u2013 but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I\u2019d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It\u2019s much simpler than you might think.\n\nWhat is ES6?\n\nECMAScript is a scripting language that is standardised by a company called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It\u2019s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox.
You shouldn\u2019t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet.\n\nThe ES6 module spec\n\nThe ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I\u2019ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it\u2019s what I\u2019ll focus on more.\n\nModule syntax\n\nThe key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we\u2019ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it.\n\nAnother important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There\u2019s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous.\n\nNamed exports\n\nModules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword:\n\nexport function double(x) {\n return x + x;\n};\n\n\nYou can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces.\n\nvar double = function(x) {\n return x + x;\n}\n\nexport { double };\n\nA module can then import the double function like so:\n\nimport { double } from 'mymodule';\ndouble(2); // 4\n\nAgain, curly braces are required around the variable you would like to import. It\u2019s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension.\n\nThe reason for those extra braces is that this syntax lets you export multiple variables:\n\nvar double = function(x) {\n return x + x;\n}\n\nvar square = function(x) {\n return x * x;\n}\n\nexport { double, square }\n\nI personally prefer this syntax over the export function \u2026, but only because it makes it much clearer to me what the module exports. Typically I will have my export {\u2026} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting.\n\nA file importing both double and square can do so in just the way you\u2019d expect:\n\nimport { double, square } from 'mymodule';\ndouble(2); // 4\nsquare(3); // 9\n\nWith this approach you can\u2019t easily import an entire module and all its methods. This is by design \u2013 it\u2019s much better and you\u2019re encouraged to import just the functions you need to use.\n\nDefault exports\n\nAlong with named exports, the system also lets a module have a default export. 
This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so:\n\nexport default function(x) {\n return x + x;\n}\n\nAnd that can be imported:\n\nimport double from 'mymodule';\ndouble(2); // 4\n\n\nThis time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you\u2019d like. Default exports are not named, so you can import them as anything you like:\n\nimport christmas from 'mymodule';\nchristmas(2); // 4\n\nThe above is entirely valid.\n\nAlthough it\u2019s not something that is used too often, a module can have both named exports and a default export, if you wish.\n\nOne of the design goals of the ES6 modules spec was to favour default exports. There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that\u2019s fine, and you shouldn\u2019t change that to meet the preferences of those designing the spec.\n\nProgrammatic API\n\nAlong with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It\u2019s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user\u2019s browser didn\u2019t support a feature your app relied on. An example of doing this is:\n\nif(someFeatureNotSupported) {\n System.import('my-polyfill').then(function(myPolyFill) {\n // use the module from here\n });\n}\n\nSystem.import will return a promise, which, if you\u2019re not familiar, you can read about in this excellent article on HTML5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import) is complete.\n\nThis programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task.\n\nHow to use it today\n\nIt\u2019s all well and good having this new syntax, but right now it won\u2019t work in any browser \u2013 and it\u2019s not likely to for a long time. Maybe in next year\u2019s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we\u2019re stuck with a bit of extra work.\n\nES6 module transpiler\n\nOne solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it \u2013 not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses).
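\n\nTo make that a little more concrete, a module that exports double, as in the earlier examples, might come out of the transpiler as CommonJS-style code roughly like this (a sketch of the idea, not the transpiler\u2019s exact output):\n\n// ES6 source: mymodule.js\nvar double = function(x) {\n return x + x;\n};\nexport { double };\n\n// Roughly equivalent CommonJS output:\nvar double = function(x) {\n return x + x;\n};\nexports.double = double;\n\n// ...which another CommonJS file would then consume with require:\nvar double = require('./mymodule').double;\n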
There are also plugins for all the popular build tools, including Grunt and Gulp.\n\nThe advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should have that set up already. If you don\u2019t have any system in place for handling modules in the browser, using the transpiler doesn\u2019t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn\u2019t do anything to help you get that code running in the browser, but if you have that part sorted it\u2019s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler.\n\nSystemJS\n\nAnother solution is SystemJS. It\u2019s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser).\n\nTo load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill; and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn\u2019t have an easy-to-find downloadable version. The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 compiler. It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you.\n\nAll you need to do to get SystemJS running is to add a script element that loads SystemJS and then call System.import('app') to load your main file.\n\nWhen you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn\u2019t make a lot of requests on the live site. In development though, it makes for a really nice workflow.\n\nWhen working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application\u2019s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file.\n\nSystemJS also provides a workflow for bundling your application together into one file.\n\nConclusion\n\nES6 modules may be at least six months to a year away (if not more) but that doesn\u2019t mean they can\u2019t be used today.
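\n\nThey can indeed: the SystemJS set-up described above can be as small as this sketch (the file names are just examples, and it assumes SystemJS\u2019s default configuration of the time):\n\n// index.html loads SystemJS with an ordinary script element, for example:\n// <script src=\"system.js\"></script>\n// <script>System.import('app');</script>\n// app.js, the main entry point, can then use ES6 module syntax directly:\nimport { double } from 'mymodule';\nconsole.log(double(2)); // 4\n\n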
Although there is an overhead to using them now \u2013 with the work required to set up SystemJS, the module transpiler, or another solution \u2013 that doesn\u2019t mean it\u2019s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You\u2019ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution.\n\nIf you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. Investing time in learning it now will pay off hugely further down the road.\n\nFurther reading\n\nIf you\u2019d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources:\n\n\n\tECMAScript 6 modules: the final syntax by Axel Rauschmayer\n\tPractical Workflows for ES6 Modules by Guy Bedford\n\tECMAScript 6 resources for the curious JavaScripter by Addy Osmani\n\tTracking ES6 support by Addy Osmani\n\tES6 Tools List by Addy Osmani\n\tUsing Grunt and the ES6 Module Transpiler by Thomas Boyt\n\tJavaScript Modules and Dependencies with jspm by myself\n\tUsing ES6 Modules Today by Guy Bedford", "year": "2014", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2014-12-03T00:00:00+00:00", "url": "https://24ways.org/2014/javascript-modules-the-es6-way/", "topic": "code"} {"rowid": 326, "title": "Don't be eval()", "contents": "JavaScript is an interpreted language, and like so many of its peers it includes the all powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code. It\u2019s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you\u2019re using eval() there\u2019s probably something wrong with your design.\n\nCommon mistakes\n\nHere\u2019s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it \u2013 but you don\u2019t know the name of the property until runtime. Here\u2019s how NOT to do it:\n\nvar property = 'bar';\nvar value = eval('foo.' + property);\n\nYes it will work, but every time that piece of code runs JavaScript will have to kick back in to interpreter mode, slowing down your app. It\u2019s also dirt ugly.\n\nHere\u2019s the right way of doing the above:\n\nvar property = 'bar';\nvar value = foo[property];\n\nIn JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string.\n\nSecurity issues\n\nIn any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript \u2013 you should be extremely cautious of running eval() against any code that may have been tampered with \u2013 for example, strings taken from the page query string. Executing untrusted code can leave you vulnerable to cross-site scripting attacks.\n\nWhat\u2019s it good for?\n\nSome programmers say that eval() is B.A.D. \u2013 Broken As Designed \u2013 and should be removed from the language. 
However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). Here is a simple function for doing exactly that \u2013 it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval().\n\nfunction evalRequest(url) {\n var xmlhttp = new XMLHttpRequest();\n xmlhttp.onreadystatechange = function() {\n if (xmlhttp.readyState==4 && xmlhttp.status==200) {\n eval(xmlhttp.responseText);\n }\n }\n xmlhttp.open(\"GET\", url, true);\n xmlhttp.send(null);\n }\n\nIf you want this to work with Internet Explorer you\u2019ll need to include this compatibility patch.", "year": "2005", "author": "Simon Willison", "author_slug": "simonwillison", "published": "2005-12-07T00:00:00+00:00", "url": "https://24ways.org/2005/dont-be-eval/", "topic": "code"} {"rowid": 11, "title": "JavaScript: Taking Off the Training Wheels", "contents": "JavaScript is the third pillar of front-end web development. Of those pillars, it is both the most powerful and the most complex, so it\u2019s understandable that when 24 ways asked, \u201cWhat one thing do you wish you had more time to learn about?\u201d, a number of you answered \u201cJavaScript!\u201d\n\nThis article aims to help you feel happy writing JavaScript, and maybe even without libraries like jQuery. I can\u2019t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources.\n\nWhy learn JavaScript?\n\nSo what\u2019s in it for you? Why take the next step and learn the fundamentals?\n\nConfidence with jQuery\n\nIf nothing else, learning JavaScript will improve your jQuery code; you\u2019ll be comfortable writing jQuery from scratch and feel happy bending others\u2019 code to your own purposes. Writing efficient, fast and bug-free jQuery is also made much easier when you have a good appreciation of JavaScript, because you can look at what jQuery is really doing. Understanding how JavaScript works lets you write better jQuery because you know what it\u2019s doing behind the scenes. When you need to leave the beaten track, you can do so with confidence.\n\nIn fact, you could say that jQuery\u2019s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That\u2019s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn\u2019t needed.\n\nAn example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here\u2019s a comparison of some jQuery code and the equivalent without.\n\n$('.counter').each(function (index) {\n $(this).text(index + 1);\n});\n\nvar counters = document.querySelectorAll('.counter');\n[].slice.call(counters).forEach(function (elem, index) {\n elem.textContent = index + 1;\n});\n\nSolving problems no one else has!\n\nWhen you have to go to the internet to solve a problem, you\u2019re forever stuck reusing code other people wrote to solve a slightly different problem to your own. 
Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has.\n\nNode.js\n\nNode.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don\u2019t worry: the Node community is thriving, very friendly and willing to help.\n\nI think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting!\n\nHere\u2019s an example web server written with Node. http is a module that allows you to create servers and, like jQuery\u2019s $.ajax, make requests. It\u2019s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it\u2019s certainly not out of your reach.\n\nvar http = require('http');\nhttp.createServer(function (req, res) {\n res.writeHead(200, {'Content-Type': 'text/plain'});\n res.end('Hello World');\n}).listen(1337);\nconsole.log('Server running at http://localhost:1337/');\n\nGrunt and other website tools\n\nNode has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. Both of these rely heavily on Node, and I\u2019ll talk a little bit about Grunt here.\n\nGrunt is a task runner, and many people use it for compiling Sass or compressing their site\u2019s JavaScript and images. It\u2019s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer.\n\nWays to improve your skills\n\nSo you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas.\n\nRebuild a jQuery app\n\nConverting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you\u2019re feeling adventurous. There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way\u2019s jQuery to JavaScript article.\n\nFind a mentor\n\nIf you think you\u2019d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. The JavaScript community is very friendly and many people will be more than happy to give you their time. I\u2019d look out for someone who\u2019s active and friendly on Twitter, and does the kind of work you\u2019d like to do. Introduce yourself over Twitter or send them an email. I wouldn\u2019t expect a full tutoring course (although that is another option!) but they\u2019ll be very glad to answer a question and any follow-ups every now and then.\n\nGo to a workshop\n\nMany conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there\u2019s one in your area. Workshops are great because you can ask direct questions, and you\u2019re in an environment where others are learning just like you are \u2014 no need to learn alone!\n\nSet yourself challenges\n\nThis is one way I like to learn new things. 
I have a new thing that I\u2019m not very good at, so I pick something that I think is just out of my reach and I try to build it. It\u2019s learning by doing and, even if you fail, it can be enormously valuable.\n\nWhere to start?\n\nIf you\u2019ve decided learning JavaScript is an important step for you, your next question may well be where to go from here.\n\nI\u2019ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want.\n\nBeginner\n\nIf you\u2019re just getting started with JavaScript, I\u2019d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They\u2019re all reputable sources (although, I\u2019ve included something I wrote \u2014 you can decide about that one!) and will not lead you astray.\n\n\n\tjQuery\u2019s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro.\n\tCodecademy\u2019s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you.\n\tHTMLDog\u2019s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. [Disclaimer: I wrote this stuff, so it comes with a hazard warning!]\n\tThe tuts+ jQuery to JavaScript mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript.\n\n\nGetting in-depth\n\nFor more comprehensive documentation and help I\u2019d recommend adding these places to your list of go-tos.\n\n\n\tMDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it\u2019s a great place to just go and browse.\n\tAxel Rauschmayer\u2019s 2ality is a stunning collection of articles that will take you deep into JavaScript. It\u2019s certainly worth looking at.\n\tAddy Osmani\u2019s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications.\n\n\nAnd finally\u2026\n\nI think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you\u2019ll get there. I bet you\u2019ll learn a whole lot along the way. Good luck!\n\nMany thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag.", "year": "2013", "author": "Tom Ashworth", "author_slug": "tomashworth", "published": "2013-12-05T00:00:00+00:00", "url": "https://24ways.org/2013/javascript-taking-off-the-training-wheels/", "topic": "code"} {"rowid": 63, "title": "Be Fluid with Your Design Skills: Build Your Own Sites", "contents": "Just five years ago in 2010, when we were all busy trying to surprise and delight, learning CSS3 and trying to get whole websites onto one page, we had a poster on our studio wall. It was entitled \u2018Designers Vs Developers\u2019, an infographic that showed us the differences between the men(!) who created websites. \nDesigners wore skinny jeans and used Macs and developers wore cargo pants and brought their own keyboards to work. 
We began to learn that designers and developers were not only doing completely different jobs but were completely different people in every way. This opinion was backed up by hundreds of memes, millions of tweets and pages of articles which used words like void and battle and versus.\nThankfully, things move quickly in this industry; the wide world of web design has moved on in the last five years. There are new devices, technologies, tools \u2013 and even a few women. Designers have been helped along by great apps, software, open source projects, conferences, and a community of people who, to my unending pride, love to share their knowledge and their work.\nSo the world has moved on, and if Miley Cyrus, Ruby Rose and Eliot Sumner are identifying as gender fluid (an identity which refers to a gender which varies over time or is a combination of identities), then I would like to come out as discipline fluid! \nOK, I will probably never identify as a developer, but I will identify as fluid! How can we be anything else in an industry that moves so quickly? That\u2019s how we should think of our skills, our interests and even our job titles. After all, Steve Jobs told us that \u201cDesign is not just what it looks like and feels like. Design is how it works.\u201d Sorry skinny-jean-wearing designers \u2013 this means we\u2019re all designing something together. And it\u2019s not just about knowing the right words to use: you have to know how it feels. How it feels when you make something work, when you fix that bug, when you make it work on IE.\nLike anything in life, things run smoothly when you make the effort to share experiences, empathise and deeply understand the needs of others. How can designers do that if they\u2019ve never built their own site? I\u2019m not talking the big stuff, I\u2019m talking about your portfolio site, your mate\u2019s business website, a website for that great idea you\u2019ve had. I\u2019m talking about doing it yourself to get a unique insight into how it feels.\nWe all know that designers and developers alike love an ordered list, so here it is.
Ten reasons designers should be fluid with their skills and build their own sites\n1. It\u2019s never been easier\nNow here\u2019s where the definition of \u2018build\u2019 is going to get a bit loose and people are going to get angry, but when I say it\u2019s never been easier I mean because of the existence of apps and software like WordPress, Squarespace, Tumblr, et al. It\u2019s easy to make something and get it out there into the world, and these are all gateway drugs to hard coding!\n2. You\u2019ll understand how it feels\nHow it feels to be so proud that something actually works that you momentarily don\u2019t notice if the kerning is off or the padding is inconsistent. How it feels to see your site appear when you\u2019ve redirected a URL. How it feels when you just can\u2019t work out where that one extra space is in a line of PHP that has killed your whole site.\n3. It makes you a designer\nNot a better designer, it makes you a designer when you are designing how things look and how they work. \n4. You learn about movement\nPhotoshop and Sketch just don\u2019t cut it yet. Until you see your site in a browser or your app on a phone, it\u2019s hard to imagine how it moves. Building your own sites shows you that it\u2019s not just about how the content looks on the screen, but how it moves, interacts and feels.\n5. You make techie friends\nAll the tutorials and forums in the world can\u2019t beat your network of techie friends. Since I started working in web design I have worked with, sat next to, and co-created with some of the greatest developers. Developers who\u2019ve shared their knowledge, encouraged me to build things, patiently explained HTML, CSS, servers, divs, web fonts, iOS development. There has been no void, no versus, very few battles; just people who share an interest and love of making things. \n6. You will own domain names\nWhen something is paid for, online and searchable then it\u2019s real and you\u2019ve got to put the work in. Buying domains has taught me how to stop procrastinating, but also about DNS, FTP, email, and how servers work.\n7. People will ask you to do things\nLearning about code and development opens a whole new world of design. When you put your own personal websites and projects out there people ask you to do more things. OK, so sometimes those things are \u201cMake me a website for free\u201d, but more often it\u2019s cool things like \u201cCome and speak at my conference\u201d, \u201cWrite an article for my magazine\u201d and \u201cCollaborate with me.\u201d\n8. The young people are coming!\nThey love typography, they love print, they love layout, but they\u2019ve known how to put a website together since they started their first blog aged five and they show me clever apps they\u2019ve knocked together over the weekend! They\u2019re new, they\u2019re fluid, and they\u2019re better than us!\n9. Your portfolio is your portfolio\nOK, it\u2019s an obvious one, but as designers our work is our CV, our legacy! We need to show our skill, our attention to detail and our creativity in the way we showcase our work. Building your portfolio is the best way to start building your own websites. (And please be that designer who\u2019s bothered to work out how to change the Squarespace favicon!) \n10. It keeps you fluid!\nBuilding your own websites is tough.
You\u2019ll never be happy with it, you\u2019ll constantly be updating it to keep up with technology and fashion, and by the time you\u2019ve finished it you\u2019ll want to start all over again. Perfect for forcing you to stay up-to-date with what\u2019s going on in the industry.\n
", "year": "2015", "author": "Ros Horner", "author_slug": "roshorner", "published": "2015-12-12T00:00:00+00:00", "url": "https://24ways.org/2015/be-fluid-with-your-design-skills-build-your-own-sites/", "topic": "code"} {"rowid": 234, "title": "An Introduction to CSS 3-D Transforms", "contents": "Ladies and gentlemen, it is the second decade of the third millennium and we are still kicking around the same 2-D interface we got three decades ago. Sure, Apple debuted a few apps for OSX 10.7 that have a couple more 3-D flourishes, and Microsoft has had that Flip 3D for a while. But c\u2019mon \u2013 2011 is right around the corner. That\u2019s Twenty Eleven, folks. Where is our 3-D virtual reality? By now, we should be zipping around the Metaverse on super-sonic motorbikes.\n\nGranted, the capability of rendering complex 3-D environments has been present for years. On the web, there are already several solutions: Flash; three.js in <canvas>; and, eventually, WebGL. Finally, we meagre front-end developers have our own three-dimensional jewel: CSS 3-D transforms!\n\nRationale\n\nLike a beautiful jewel, 3-D transforms can be dazzling, a true spectacle to behold. But before we start tacking 3-D diamonds and rubies to our compositions like Liberace\u2019s tailor, we owe it to our users to ask how they can benefit from this awesome feature. \n\nAn entire application should not take advantage of 3-D transforms. CSS was built to style documents, not generate explorable environments. I fail to find a benefit to completing a web form that can be accessed by swivelling my viewport to the Sign-Up Room (although there have been proposals to make the web just that). Nevertheless, there are plenty of opportunities to use 3-D transforms in between interactions with the interface, via transitions.\n\nTake, for instance, the Weather App on the iPhone. The application uses two views: a details view; and an options view. Switching between these two views is done with a 3-D flip transition. This informs the user that the interface has two \u2013 and only two \u2013 views, as they can exist only on either side of the same plane.\n\n Flipping from details view to options view via a 3-D transition\n\nAlso, consider slide shows. When you\u2019re looking at the last slide, what cues tip you off that advancing will restart the cycle at the first slide? A better paradigm might be achieved with a 3-D transform, placing the slides side-by-side in a circle (carousel) in three-dimensional space; in that arrangement, the last slide obviously comes before the first.\n\n3-D transforms are more than just eye candy. We can also use them to solve dilemmas and make our applications more intuitive. \n\nCurrent support\n\nThe CSS 3D Transforms module has been out in the wild for over a year now. Currently, only Safari supports the specification \u2013 which includes Safari on Mac OS X and Mobile Safari on iOS. \n\nThe support roadmap for other browsers varies. The Mozilla team has taken some initial steps towards implementing the module. Mike Taylor tells me that the Opera team is keeping a close eye on CSS transforms, and is waiting until the specification is fleshed out. And our best friend Internet Explorer still needs to catch up to 2-D transforms before we can talk about the 3-D variety.\n\nTo make matters more perplexing, Safari\u2019s WebKit cousin Chrome currently accepts 3-D transform declarations, but renders them in 2-D space.
Chrome team member Paul Irish, says that 3-D transforms are on the horizon, perhaps in one of the next 8.0 releases.\n\nThis all adds up to a bit of a challenge for those of us excited by 3-D transforms. I\u2019ll give it to you straight: missing the dimension of depth can make degradation a bit ungraceful. Unless the transform is relatively simple and holds up in non-3D-supporting browsers, you\u2019ll most likely have to design another solution. But what\u2019s another hurdle in a steeplechase? We web folk have had our mettle tested for years. We\u2019re prepared to devise multiple solutions.\n\nHere\u2019s the part of the article where I mention Modernizr, and you brush over it because you\u2019ve read this part of an article hundreds of times before. But seriously, it\u2019s the best way to test for CSS 3-D transform support. Use it.\n\nEven with these difficulties mounting up, trying out 3-D transforms today is the right move. The CSS 3-D transforms module was developed by the same team at Apple that produced the CSS 2D Transforms and Animation modules. Both specifications have since been adopted by Mozilla and Opera. Transforming in three-dimensions now will guarantee you\u2019ll be ahead of the game when the other browsers catch up.\n\nThe choice is yours. You can make excuses and pooh-pooh 3-D transforms because they\u2019re too hard and only snobby Apple fans will see them today. Or, with a tip of the fedora to Mr Andy Clarke, you can get hard-boiled and start designing with the best features out there right this instant.\n\nSo, I bid you, in the words of the eternal Optimus Prime\u2026\n\n\n\tTransform and roll out.\n\n\nLet\u2019s get coding.\n\nPerspective\n\nTo activate 3-D space, an element needs perspective. This can be applied in two ways: using the transform property, with the perspective as a functional notation:\n\n-webkit-transform: perspective(600);\n\nor using the perspective property: \n\n-webkit-perspective: 600;\n\nSee example: Perspective 1.\n\n\n\n The red element on the left uses transform: perspective() functional notation; the blue element on the right uses the perspective property\n\n\n\nThese two formats both trigger a 3-D space, but there is a difference. The first, functional notation is convenient for directly applying a 3-D transform on a single element (in the previous example, I use it in conjunction with a rotateY transform). But when used on multiple elements, the transformed elements don\u2019t line up as expected. If you use the same transform across elements with different positions, each element will have its own vanishing point. To remedy this, use the perspective property on a parent element, so each child shares the same 3-D space.\n\nSee Example: Perspective 2.\n\n\n\n Each red box on the left has its own vanishing point within the parent container; the blue boxes on the right share the vanishing point of the parent container\n\n\n\nThe value of perspective determines the intensity of the 3-D effect. Think of it as a distance from the viewer to the object. The greater the value, the further the distance, so the less intense the visual effect. perspective: 2000; yields a subtle 3-D effect, as if we were viewing an object from far away. perspective: 100; produces a tremendous 3-D effect, like a tiny insect viewing a massive object.\n\nBy default, the vanishing point for a 3-D space is positioned at its centre. 
You can change the position of the vanishing point with perspective-origin property.\n\n-webkit-perspective-origin: 25% 75%;\n\nSee Example: Perspective 3.\n\n\n\n\n\n\n\n3-D transform functions\n\nAs a web designer, you\u2019re probably well acquainted with working in two dimensions, X and Y, positioning items horizontally and vertically. With a 3-D space initialised with perspective, we can now transform elements in all three glorious spatial dimensions, including the third Z dimension, depth. \n\n3-D transforms use the same transform property used for 2-D transforms. If you\u2019re familiar with 2-D transforms, you\u2019ll find the basic 3D transform functions fairly similar. \n\n\n\trotateX(angle)\n\trotateY(angle)\n\trotateZ(angle)\n\ttranslateZ(tz)\n\tscaleZ(sz)\n\n\nWhereas translateX() positions an element along the horizontal X-axis, translateZ() positions it along the Z-axis, which runs front to back in 3-D space. Positive values position the element closer to the viewer, negative values further away.\n\nThe rotate functions rotate the element around the corresponding axis. This is somewhat counter-intuitive at first, as you might imagine that rotateX will spin an object left to right. Instead, using rotateX(45deg) rotates an element around the horizontal X-axis, so the top of the element angles back and away, and the bottom gets closer to the viewer.\n\nSee Example: Transforms 1.\n\n\n\n3-D rotate() and translate() functions around each axis\n\n\n\nThere are also several shorthand transform functions that require values for all three dimensions:\n\n\n\ttranslate3d(tx,ty,tz)\n\tscale3d(sx,sy,sz)\n\trotate3d(rx,ry,rz,angle)\n\n\nPro-tip: These foo3d() transform functions also have the benefit of triggering hardware acceleration in Safari. Dean Jackson, CSS 3-D transform spec author and main WebKit dude, writes (to Thomas Fuchs):\n\n\n\tIn essence, any transform that has a 3D operation as one of its functions will trigger hardware compositing, even when the actual transform is 2D, or not doing anything at all (such as translate3d(0,0,0)). Note this is just current behaviour, and could change in the future (which is why we don\u2019t document or encourage it). But it is very helpful in some situations and can significantly improve redraw performance.\n\n\nFor the sake of simplicity, my demos will use the basic transform functions, but if you\u2019re writing production-ready CSS for iOS or Safari-only, make sure to use the foo3d() functions to get the best rendering performance.\n\nCard flip\n\nWe now have all the tools to start making 3-D objects. Let\u2019s get started with something simple: flipping a card.\n\nHere\u2019s the basic markup we\u2019ll need:\n\n
<section class=\"container\">\n  <div id=\"card\">\n    <figure class=\"front\">1</figure>\n    <figure class=\"back\">2</figure>\n  </div>\n</section>
\n\nThe .container will house the 3-D space. The #card acts as a wrapper for the 3-D object. Each face of the card has a separate element: .front; and .back. Even for such a simple object, I recommend using this same pattern for any 3-D transform. Keeping the 3-D space element and the object element(s) separate establishes a pattern that is simple to understand and easier to style.\n\nWe\u2019re ready for some 3-D stylin\u2019. First, apply the necessary perspective to the parent 3-D space, along with any size or positioning styles.\n\n.container { \n width: 200px;\n height: 260px;\n position: relative;\n -webkit-perspective: 800;\n}\n\nNow the #card element can be transformed in its parent\u2019s 3-D space. We\u2019re combining absolute and relative positioning so the 3-D object is removed from the flow of the document. We\u2019ll also add width: 100%; and height: 100%;. This ensures the object\u2019s transform-origin will occur in the centre of .container. More on transform-origin later. \n\nLet\u2019s add a CSS3 transition so users can see the transform take effect. \n\n#card {\n width: 100%;\n height: 100%;\n position: absolute;\n -webkit-transform-style: preserve-3d;\n -webkit-transition: -webkit-transform 1s;\n}\n\nThe .container\u2019s perspective only applies to direct descendant children, in this case #card. In order for subsequent children to inherit a parent\u2019s perspective, and live in the same 3-D space, the parent can pass along its perspective with transform-style: preserve-3d. Without 3-D transform-style, the faces of the card would be flattened with its parents and the back face\u2019s rotation would be nullified. \n\nTo position the faces in 3-D space, we\u2019ll need to reset their positions in 2-D with position: absolute. In order to hide the reverse sides of the faces when they are faced away from the viewer, we use backface-visibility: hidden. \n\n#card figure {\n display: block;\n position: absolute;\n width: 100%;\n height: 100%;\n -webkit-backface-visibility: hidden;\n}\n\nTo flip the .back face, we add a basic 3-D transform of rotateY(180deg). \n\n#card .front {\n background: red;\n}\n#card .back {\n background: blue;\n -webkit-transform: rotateY(180deg);\n}\n\nWith the faces in place, the #card requires a corresponding style for when it is flipped.\n\n#card.flipped {\n -webkit-transform: rotateY(180deg);\n}\n\nNow we have a working 3-D object. To flip the card, we can toggle the flipped class. When .flipped, the #card will rotate 180 degrees, thus exposing the .back face.\n\nSee Example: Card 1.\n\n\n\nFlipping a card in three dimensions\n\n\n\nSlide-flip\n\nTake another look at the Weather App 3-D transition. You\u2019ll notice that it\u2019s not quite the same effect as our previous demo. If you follow the right edge of the card, you\u2019ll find that its corners stay within the container. Instead of pivoting from the horizontal centre, it pivots on that right edge. But the transition is not just a rotation \u2013 the edge moves horizontally from right to left. We can reproduce this transition just by modifying a couple of lines of CSS from our original card flip demo.\n\nThe pivot point for the rotation occurs at the right side of the card. By default, the transform-origin of an element is at its horizontal and vertical centre (50% 50% or center center). Let\u2019s change it to the right side:\n\n#card { -webkit-transform-origin: right center; }\n\nThat flip now needs some horizontal movement with translateX. 
We\u2019ll set the rotation to -180deg so it flips right side out.\n\n#card.flipped {\n -webkit-transform: translateX(-100%) rotateY(-180deg);\n}\n\nSee Example: Card 2.\n\n\n\nCreating a slide-flip from the right edge of the card\n\n\n\nCube\n\nCreating 3-D card objects is a good way to get started with 3-D transforms. But once you\u2019ve mastered them, you\u2019ll be hungry to push it further and create some true 3-D objects: prisms. We\u2019ll start out by making a cube.\n\nThe markup for the cube is similar to the card. This time, however, we need six child elements for all six faces of the cube:\n\n
<section class="container">
	<div id="cube">
		<figure class="front">1</figure>
		<figure class="back">2</figure>
		<figure class="right">3</figure>
		<figure class="left">4</figure>
		<figure class="top">5</figure>
		<figure class="bottom">6</figure>
	</div>
</section>
\n\nBasic position and size styles set the six faces on top of one another in the container.\n\n.container {\n width: 200px;\n height: 200px;\n position: relative;\n -webkit-perspective: 1000;\n}\n#cube {\n width: 100%;\n height: 100%;\n position: absolute;\n -webkit-transform-style: preserve-3d;\n}\n#cube figure {\n width: 196px;\n height: 196px;\n display: block;\n position: absolute;\n border: 2px solid black;\n}\n\nWith the card, we only had to rotate its back face. The cube, however, requires that five of the six faces to be rotated. Faces 1 and 2 will be the front and back. Faces 3 and 4 will be the sides. Faces 5 and 6 will be the top and bottom.\n\n#cube .front { -webkit-transform: rotateY(0deg); }\n#cube .back { -webkit-transform: rotateX(180deg); }\n#cube .right { -webkit-transform: rotateY(90deg); }\n#cube .left { -webkit-transform: rotateY(-90deg); }\n#cube .top { -webkit-transform: rotateX(90deg); }\n#cube .bottom { -webkit-transform: rotateX(-90deg); }\n\nWe could remove the first #cube .front style declaration, as this transform has no effect, but let\u2019s leave it in to keep our code consistent.\n\nNow each face is rotated, and only the front face is visible. The four side faces are all perpendicular to the viewer, so they appear invisible. To push them out to their appropriate sides, they need to be translated out from the centre of their positions. Each side of the cube is 200 pixels wide. From the cube\u2019s centre they\u2019ll need to be translated out half that distance, 100px.\n\n#cube .front { -webkit-transform: rotateY(0deg) translateZ(100px); }\n#cube .back { -webkit-transform: rotateX(180deg) translateZ(100px); }\n#cube .right { -webkit-transform: rotateY(90deg) translateZ(100px); }\n#cube .left { -webkit-transform: rotateY(-90deg) translateZ(100px); }\n#cube .top { -webkit-transform: rotateX(90deg) translateZ(100px); }\n#cube .bottom { -webkit-transform: rotateX(-90deg) translateZ(100px); }\n\nNote here that the translateZ function comes after the rotate. The order of transform functions is important. Take a moment and soak this up. Each face is first rotated towards its position, then translated outward in a separate vector.\n\nWe have a working cube, but we\u2019re not done yet.\n\nReturning to the Z-axis origin\n\nFor the sake of our users, our 3-D transforms should not distort the interface when the active panel is at its resting position. But once we start pushing elements off their Z-axis origin, distortion is inevitable. \n\nIn order to keep 3-D transforms snappy, Safari composites the element, then applies the transform. Consequently, anti-aliasing on text will remain whatever it was before the transform was applied. When transformed forward in 3-D space, significant pixelation can occur. \n\nSee Example: Transforms 2.\n\n\n\n\n\n\n\nLooking back at the Perspective 3 demo, note that no matter how small the perspective value is, or wherever the transform-origin may be, the panel number 1 always returns to its original position, as if all those funky 3-D transforms didn\u2019t even matter.\n\nTo resolve the distortion and restore pixel perfection to our #cube, we can push the 3-D object back, so that the front face will be positioned back to the Z-axis origin.\n\n#cube { -webkit-transform: translateZ(-100px); }\n\nSee Example: Cube 1.\n\n\n\nRestoring the front face to the original position on the Z-axis\n\n\n\nRotating the cube\n\nTo expose any face of the cube, we\u2019ll need a style that rotates the cube to expose any face. 
The transform values are the opposite of those for the corresponding face. We toggle the necessary class on the #box to apply the appropriate transform.\n\n#cube.show-front { -webkit-transform: translateZ(-100px) rotateY(0deg); }\n#cube.show-back { -webkit-transform: translateZ(-100px) rotateX(-180deg); }\n#cube.show-right { -webkit-transform: translateZ(-100px) rotateY(-90deg); }\n#cube.show-left { -webkit-transform: translateZ(-100px) rotateY(90deg); }\n#cube.show-top { -webkit-transform: translateZ(-100px) rotateX(-90deg); }\n#cube.show-bottom { -webkit-transform: translateZ(-100px) rotateX(90deg); }\n\nNotice how the order of the transform functions has reversed. First, we push the object back with translateZ, then we rotate it.\n\nFinishing up, we can add a transition to animate the rotation between states. \n\n#cube { -webkit-transition: -webkit-transform 1s; }\n\nSee Example: Cube 2.\n\n\n\nRotating the cube with a CSS transition\n\n\n\nRectangular prism\n\nCubes are easy enough to generate, as we only have to worry about one measurement. But how would we handle a non-regular rectangular prism? Let\u2019s try to make one that\u2019s 300 pixels wide, 200 pixels high, and 100 pixels deep. \n\nThe markup remains the same as the #cube, but we\u2019ll switch the cube id for #box. The container styles remain mostly the same:\n\n.container {\n width: 300px;\n height: 200px;\n position: relative;\n -webkit-perspective: 1000;\n}\n#box {\n width: 100%;\n height: 100%;\n position: absolute;\n -webkit-transform-style: preserve-3d;\n}\n\nNow to position the faces. Each set of faces will need their own sizes. The smaller faces (left, right, top and bottom) need to be positioned in the centre of the container, where they can be easily rotated and then shifted outward. The thinner left and right faces get positioned left: 100px ((300\u2009\u2212\u2009100)\u2009\u00f7\u20092), The stouter top and bottom faces get positioned top: 50px ((200\u2009\u2212\u2009100)\u2009\u00f7\u20092).\n\n#box figure {\n display: block;\n position: absolute;\n border: 2px solid black;\n}\n#box .front,\n#box .back {\n width: 296px;\n height: 196px;\n}\n#box .right,\n#box .left {\n width: 96px;\n height: 196px;\n left: 100px;\n}\n#box .top,\n#box .bottom {\n width: 296px;\n height: 96px;\n top: 50px;\n}\n\nThe rotate values can all remain the same as the cube example, but for this rectangular prism, the translate values do differ. The front and back faces are each shifted out 50 pixels since the #box is 100 pixels deep. The translate value for the left and right faces is 150 pixels for their 300 pixels width. Top and bottom panels take 100 pixels for their 200 pixels height:\n\n#box .front { -webkit-transform: rotateY(0deg) translateZ(50px); }\n#box .back { -webkit-transform: rotateX(180deg) translateZ(50px); }\n#box .right { -webkit-transform: rotateY(90deg) translateZ(150px); }\n#box .left { -webkit-transform: rotateY(-90deg) translateZ(150px); }\n#box .top { -webkit-transform: rotateX(90deg) translateZ(100px); }\n#box .bottom { -webkit-transform: rotateX(-90deg) translateZ(100px); }\n\nSee Example: Box 1.\n\n\n\n\n\n\n\nJust like the cube example, to expose a face, the #box needs to have a style to reverse that face\u2019s transform. 
Both the translateZ and rotate values are the opposites of the corresponding face.\n\n#box.show-front { -webkit-transform: translateZ(-50px) rotateY(0deg); }\n#box.show-back { -webkit-transform: translateZ(-50px) rotateX(-180deg); }\n#box.show-right { -webkit-transform: translateZ(-150px) rotateY(-90deg); }\n#box.show-left { -webkit-transform: translateZ(-150px) rotateY(90deg); }\n#box.show-top { -webkit-transform: translateZ(-100px) rotateX(-90deg); }\n#box.show-bottom { -webkit-transform: translateZ(-100px) rotateX(90deg); }\n\nSee Example: Box 2.\n\n\n\nRotating the rectangular box with a CSS transition\n\n\n\nCarousel\n\nFront-end developers have a myriad of choices when it comes to content carousels. Now that we have 3-D capabilities in our browsers, why not take a shot at creating an actual 3-D carousel?\n\nThe markup for this demo takes the same form as the box, cube and card. Let\u2019s make it interesting and have a carousel with nine panels.\n\n
<section class="container">
	<div id="carousel">
		<figure>1</figure>
		<figure>2</figure>
		<figure>3</figure>
		<figure>4</figure>
		<figure>5</figure>
		<figure>6</figure>
		<figure>7</figure>
		<figure>8</figure>
		<figure>9</figure>
	</div>
</section>
\n\nNow, apply basic layout styles. Let\u2019s give each panel of the #carousel 20 pixel gaps between one another, done here with left: 10px; and top: 10px;. The effective width of each panel is 210 pixels.\n\n.container {\n width: 210px;\n height: 140px;\n position: relative;\n -webkit-perspective: 1000;\n}\n#carousel {\n width: 100%;\n height: 100%;\n position: absolute;\n -webkit-transform-style: preserve-3d;\n}\n#carousel figure {\n display: block;\n position: absolute;\n width: 186px;\n height: 116px;\n left: 10px;\n top: 10px;\n border: 2px solid black;\n}\n\nNext up: rotating the faces. This #carousel has nine panels. If each panel gets an equal distribution on the carousel, each panel would be rotated forty degrees from its neighbour (360\u2009\u00f7\u20099).\n\n#carousel figure:nth-child(1) { -webkit-transform: rotateY(0deg); }\n#carousel figure:nth-child(2) { -webkit-transform: rotateY(40deg); }\n#carousel figure:nth-child(3) { -webkit-transform: rotateY(80deg); }\n#carousel figure:nth-child(4) { -webkit-transform: rotateY(120deg); }\n#carousel figure:nth-child(5) { -webkit-transform: rotateY(160deg); }\n#carousel figure:nth-child(6) { -webkit-transform: rotateY(200deg); }\n#carousel figure:nth-child(7) { -webkit-transform: rotateY(240deg); }\n#carousel figure:nth-child(8) { -webkit-transform: rotateY(280deg); }\n#carousel figure:nth-child(9) { -webkit-transform: rotateY(320deg); }\n\nNow, the outward shift. Back when we were creating the cube and box, the translate value was simple to calculate, as it was equal to one half the width, height or depth of the object. With this carousel, there is no size we can automatically use as a reference. We\u2019ll have to calculate the distance of the shift by other means.\n\n\n\nDrawing a diagram of the carousel, we can see that we know only two things: the width of each panel is 210 pixels; and the each panel is rotated forty degrees from the next. If we split one of these segments down its centre, we get a right-angled triangle, perfect for some trigonometry.\n\nWe can determine the length of r in this diagram with a basic tangent equation:\n\n\n\nThere you have it: the panels need to be translated 288 pixels in 3-D space. \n\n#carousel figure:nth-child(1) { -webkit-transform: rotateY(0deg) translateZ(288px); }\n#carousel figure:nth-child(2) { -webkit-transform: rotateY(40deg) translateZ(288px); }\n#carousel figure:nth-child(3) { -webkit-transform: rotateY(80deg) translateZ(288px); }\n#carousel figure:nth-child(4) { -webkit-transform: rotateY(120deg) translateZ(288px); }\n#carousel figure:nth-child(5) { -webkit-transform: rotateY(160deg) translateZ(288px); }\n#carousel figure:nth-child(6) { -webkit-transform: rotateY(200deg) translateZ(288px); }\n#carousel figure:nth-child(7) { -webkit-transform: rotateY(240deg) translateZ(288px); }\n#carousel figure:nth-child(8) { -webkit-transform: rotateY(280deg) translateZ(288px); }\n#carousel figure:nth-child(9) { -webkit-transform: rotateY(320deg) translateZ(288px); }\n\nIf we decide to change the width of the panel or the number of panels, we only need to plug in those two variables into our equation to get the appropriate translateZ value. 
In JavaScript terms, that equation would be:\n\nvar tz = Math.round( ( panelSize / 2 ) / \n Math.tan( ( ( Math.PI * 2 ) / numberOfPanels ) / 2 ) );\n// or simplified to\nvar tz = Math.round( ( panelSize / 2 ) / \n Math.tan( Math.PI / numberOfPanels ) );\n\nJust like our previous 3-D objects, to show any one panel we need only apply the reverse transform on the carousel. Here\u2019s the style to show the fifth panel:\n\n-webkit-transform: translateZ(-288px) rotateY(-160deg);\n\nSee Example: Carousel 1.\n\n\n\n\n\n\n\nBy now, you probably have two thoughts: \n\n\n\tRewriting transform styles for each panel looks tedious.\n\tWhy bother doing high school maths? Aren\u2019t robots supposed to be doing all this work for us?\n\n\nAnd you\u2019re absolutely right. The repetitive nature of 3-D objects lends itself to scripting. We can offload all the monotonous transform styles to our dynamic script, which, if done correctly, will be more flexible than the hard-coded version.\n\nSee Example: Carousel 2.\n\nConclusion\n\n3-D transforms change the way we think about the blank canvas of web design. Better yet, they change the canvas itself, trading in the flat surface for voluminous depth.\n\nMy hope is that you took at least one peak at a demo and were intrigued. We web designers, who have rejoiced for border-radius, box-shadow and background gradients, now have an incredible tool at our disposal in 3-D transforms. They deserve just the same enthusiasm, research and experimentation we have seen on other CSS3 features. Now is the perfect time to take the plunge and start thinking about how to use three dimensions to elevate our craft. I\u2019m breathless waiting for what\u2019s to come. \n\nSee you on the flip side.", "year": "2010", "author": "David DeSandro", "author_slug": "daviddesandro", "published": "2010-12-14T00:00:00+00:00", "url": "https://24ways.org/2010/intro-to-css-3d-transforms/", "topic": "code"} {"rowid": 307, "title": "Get the Balance Right: Responsive Display Text", "contents": "Last year in 24 ways I urged you to Get Expressive with Your Typography. I made the case for grabbing your readers\u2019 attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout.\nWhen setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader\u2019s preferences. You can take the same approach with display text by choosing larger sizes from the same scale.\nHowever, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window.\nIntroducing viewport units\nWith CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. 
There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference1 pixels:\n\nvw - viewport width,\nvh - viewport height,\nvmin - viewport height or width, whichever is smaller\nvmax - viewport height or width, whichever is larger\n\nIn one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. The following makes the heading font size 13% of the viewport width:\nh1 {\n font-size: 13 vw;\n}\nSo for a selection of widths, the rendered font size would be:\nRendered font size (px)\nViewport width\n13\u202fvw\n320\n42\n768\n100\n1024\n133\n1280\n166\n1920\n250\n\nA problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation. \nLandscape text is much bigger than portrait text when using vw units.\nThe proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the inconsistency and considered design you would want when designing to make an impression.\nHowever if the text was the same size in both orientations, the visual effect would be much more consistent. This where vmin comes into its own. Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering.\nh1 {\n font-size: 13vmin;\n}\nLandscape text is consistent with portrait text when using vmin units.\nComparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude:\nRendered font size (px)\nViewport\n13\u202fvw\n13\u202fvmin\n320 \u00d7 480\n42\n42\n414 \u00d7 736\n54\n54\n768 \u00d7 1024\n100\n100\n1024 \u00d7 768\n133\n100\n1280 \u00d7 720\n166\n94\n1366 \u00d7 768\n178\n100\n1440 \u00d7 900\n187\n117\n1680 \u00d7 1050\n218\n137\n1920 \u00d7 1080\n250\n140\n2560 \u00d7 1440\n333\n187\n\nHybrid font sizing\nUsing vertical media queries to set text in direct proportion to screen dimensions works well when sizing display text. In can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading:\nh1 {\n font-size: 13vmin;\n}\nh2 {\n font-size: 5vmin;\n}\nUsing vmin alone for supporting text can scale it too quickly\nThe balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens.\nA solution to this is use a hybrid method of sizing text2. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. 
For example:\nh2 {\n font-size: calc(0.5rem + 2.5vmin);\n}\nFor a 320 px wide screen, the font size will be 16 px, calculated as follows:\n(0.5 \u00d7 16) + (320 \u00d7 0.025) = 8 + 8 = 16px\nFor a 768 px wide screen, the font size will be 27 px:\n(0.5 \u00d7 16) + (768 \u00d7 0.025) = 8 + 19 = 27px\nThis results in a more balanced subheading that doesn\u2019t take emphasis away from the main heading:\n\nTo give you an idea of the effect of using a hybrid approach, here\u2019s a side-by-side comparison of hybrid and viewport text sizing:\ntable.ex--scale{width:100%;overflow: hidden;} table.ex--scale td{vertical-align:baseline;text-align:center;padding:0} tr.ex--scale-key{color:#666} tr.ex--scale-key td{font-size:.875rem;padding:0 0.125em} .ex--scale-2 tr.ex--scale-size{color:#ccc} tr.ex--scale-size td{font-size:1em;line-height:.34em;padding-bottom:.5rem} td.ex--scale-step{color:#000} td.ex--scale-hilite{color:red} .ex--scale-3 tr.ex--scale-size td{line-height:.9em}\n\ntop: calc() hybrid method; bottom: vmin only\n16\n20\n27\n32\n35\n40\n44\n16\n24\n38\n48\n54\n64\n72\n320\n480\n768\n960\n1080\n1280\n1440\n\nOver this festive period, try experiment with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting.\n\n\n\n\nA reference pixel is based on the logical resolution of a device which takes into account double density screens such as Retina displays.\u00a0\u21a9\ufe0e\n\n\nFor even more sophisticated uses of hybrid text sizing see the work of Mike Riethmuller.\u00a0\u21a9\ufe0e", "year": "2016", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2016-12-09T00:00:00+00:00", "url": "https://24ways.org/2016/responsive-display-text/", "topic": "code"} {"rowid": 83, "title": "Cut Copy Paste", "contents": "Long before I got into this design thing, I was heavily into making my own music inspired by the likes of Coldcut and Steinski. I would scour local second-hand record shops in search of obscure beats, loops and bits of dialogue in the hope of finding that killer sample I could then splice together with other things to make a huge hit that everyone would love. While it did eventually lead to a record contract and getting to release a few 12\u2033 singles, ultimately I knew I\u2019d have to look for something else to pay the bills.\n\nI may not make my own records any more, but the approach I took back then \u2013 finding (even stealing) things, cutting and pasting them into interesting combinations \u2013 is still at the centre of how I work, only these days it\u2019s pretty much bits of code rather than bits of vinyl. Over the years I\u2019ve stored these little bits of code (some I\u2019ve found, some I\u2019ve created myself) in Evernote, ready to be dialled up whenever I need them. \n\nSo when Drew got in touch and asked if I\u2019d like to do something for this year\u2019s 24 ways I thought it might be kind of cool to share with you a few of these snippets that I find really useful. Think of these as a kind of coding mix tape; but remember \u2013 don\u2019t just copy as is: play around, combine and remix them into other wonderful things. \n\nSome of this stuff is dirty; some of it will make hardcore programmers feel ill. For those people, remember this \u2013 while you were complaining about the syntax, I made something.\n\nCreate unique colours\n\nLet\u2019s start right away with something I stole. Well, actually it was given away at the time by Matt Biddulph who was then at Dopplr before Nokia destroyed it. 
Imagine you have thousands of words and you want to assign each one a unique colour. Well, Matt came up with a crazily simple but effective way to do that using an MD5 hash. Just encode said word using an MD5 hash, then take the first six characters of the string you get back to create a hexadecimal colour representation. \n\nI can\u2019t guarantee that it will be a harmonious colour palette, but it\u2019s still really useful. The thing I love the most about this technique is the left-field thinking of using an encryption system to create colours! Here\u2019s an example using JavaScript:\n\n// requires the MD5 library available at http://pajhome.org.uk/crypt/md5\n\n function MD5Hex(str){\n result = MD5.hex(str).substring(0, 6);\n return result;\n }\n\nMake something breathe using a sine wave\n\nI never paid attention in school, especially during double maths. As a matter of fact, the only time I received corporal punishment \u2013 several strokes of the ruler \u2013 was in maths class. Anyway, if they had shown me then how beautiful mathematics actually is, I might have paid more attention. Here\u2019s a little example of how a sine wave can be used to make something appear to breathe. \n\nI recently used this on an Arduino project where an LED ring surrounding a button would gently breathe. Because of that it felt much more inviting. I love mathematics.\n\nfor(int i = 0; i<360; i++){ \n float rad = DEG_TO_RAD * i;\n int sinOut = constrain((sin(rad) * 128) + 128, 0, 255);\n analogWrite(LED, sinOut);\n delay(10); \n}\n\nSnap position to grid\n\nThis is so elegant I love it, and it was shown to me by Gary Burgess, or Boom Boom as myself and others like to call him. It snaps a position, in this case the X-position, to a grid. Just define your grid size (say, twenty pixels) and you\u2019re good.\n\nsnappedXpos = floor( xPos / gridSize) * gridSize;\n\nCalculate the distance between two objects\n\nFor me, interaction design is about the relationship between two objects: you and another object; you and another person; or simply one object to another. How close these two things are to each other can be a handy thing to know, allowing you to react to that information within your design. Here\u2019s how to calculate the distance between two objects in a 2-D plane:\n\ndeltaX = round(p2.x-p1.x);\ndeltaY = round(p2.y-p1.y);\ndiff = round(sqrt((deltaX*deltaX)+(deltaY*deltaY)));\n\nFind the X- and Y-position between two objects\n\nWhat if you have two objects and you want to place something in-between them? A little bit of interruption and disruption can be a good thing. This small piece of code will allow you to place an object in-between two other objects:\n\n// set the position: 0.5 = half-way\t\n\nfloat position = 0.5;\nfloat x = x1 + (x2 - x1) *position; \nfloat y = y1 + (y2 - y1) *position; \n\nDistribute objects equally around a circle \t\n\nMore fun with maths, this time adding cosine to our friend sine. Let\u2019s say you want to create a circular navigation of arbitrary elements (yeah, Jakob, you heard), or you want to place images around a circle. Well, this piece of code will do just that. You can adjust the size of the circle by changing the distance variable and alter the number of objects with the numberOfObjects variable. 
Example below is for use in Processing.\n\n// Example for Processing available for free download at processing.org\n\nvoid setup() {\n\n size(800,800);\n int numberOfObjects = 12;\n int distance = 100;\n float inc = (TWO_PI)/numberOfObjects;\n float x,y;\n float a = 0;\n\n for (int i=0; i < numberOfObjects; i++) {\n x = (width/2) + sin(a)*distance;\n y = (height/2) + cos(a)*distance;\n ellipse(x,y,10,10);\n a += inc;\n\n }\n}\n\nUse modulus to make a grid\n\nThe modulus operator, represented by %, returns the remainder of a division. Fallen into a coma yet? Hold on a minute \u2013 this seemingly simple function is very powerful in lots of ways. At a simple level, you can use it to determine if a number is odd or even, great for creating alternate row colours in a table for instance:\n\nboolean checkForEven(numberToCheck) {\n if (numberToCheck % 2 == 0) \n return true;\n } else {\n return false; \n }\n}\n\nThat\u2019s all well and good, but here\u2019s a use of modulus that might very well blow your mind. Construct a grid with only a few lines of code. Again the example is in Processing but can easily be ported to any other language.\n\nvoid setup() {\n\nsize(600,600);\nint numItems = 120;\nint numOfColumns = 12;\nint xSpacing = 40;\nint ySpacing = 40;\nint totalWidth = xSpacing*numOfColumns;\n\nfor (int i=0; i < numItems; i++) {\n\nellipse(floor((i*xSpacing)%totalWidth),floor((i*xSpacing)/totalWidth)*ySpacing,10,10);\n\n}\n}\n\nNot all the bits of code I keep around are for actual graphical output. I also have things that are very utilitarian, but which I still consider part of the design process. Here\u2019s a couple of things that I\u2019ve found really handy lately in my design workflow. They may be a little specific, but I hope they demonstrate that it\u2019s not about working harder, it\u2019s about working smarter. \n\nMerge CSV files into one file\n\nRecently, I\u2019ve had to work with huge \u2013 about 1GB \u2013 CSV text files that I then needed to combine into one master CSV file so I could then process the data. Opening up each text file and then copying and pasting just seemed really dumb, not to mention slow, so I thought there must be a better way. After some Googling I found this command line script that would combine .txt files into one file and add a new line after each: \n\nawk 1 *.txt > finalfile.txt\n\nBut that wasn\u2019t what I was ideally after. I wanted to merge the CSV files, keeping the first row of the first file (the column headings) and then ignore the first row of subsequent files. Sure enough I found the answer after some Googling and it worked like a charm. Apologies to the original author but I can\u2019t remember where I found it, but you, sir or madam, are awesome. Save this as a shell script:\n\nFIRST=\n\nfor FILE in *.csv\n do\n exec 5<\"$FILE\" # Open file\n read LINE <&5 # Read first line\n [ -z \"$FIRST\" ] && echo \"$LINE\" # Print it only from first file\n FIRST=\"no\"\n\n cat <&5 # Print the rest directly to standard output\n exec 5<&- # Close file\n # Redirect stdout for this section into file.out \n\ndone > file.out\n\nCreate a symbolic link to another file or folder\n\nOftentimes, I\u2019ll find myself hunting through a load of directories to load a file to be processed, like a CSV file. Use a symbolic link (in the Terminal) to place a link on your desktop or wherever is most convenient and it\u2019ll save you loads of time. 
Especially great if you\u2019re going through a Java file dialogue box in Processing or something that doesn\u2019t allow the normal Mac dialog box or aliases.\n\ncd /DirectoryYouWantShortcutToLiveIn\nln -s /Directory/You/Want/ShortcutTo/ TheShortcut\n\nYou can do it, in the mix\n\nI hope you\u2019ve found some of the above useful and that they\u2019ve inspired a few ideas here and there. Feel free to tell me better ways of doing things or offer up any other handy pieces of code. Most of all though, collect, remix and combine the things you discover to make lovely new things.", "year": "2012", "author": "Brendan Dawes", "author_slug": "brendandawes", "published": "2012-12-17T00:00:00+00:00", "url": "https://24ways.org/2012/cut-copy-paste/", "topic": "code"} {"rowid": 283, "title": "CSS3 Patterns, Explained", "contents": "Many of you have probably seen my CSS3 patterns gallery. It became very popular throughout the year and it showed many web developers how powerful CSS3 gradients really are. But how many really understand how these patterns are created? The biggest benefit of CSS-generated backgrounds is that they can be modified directly within the style sheet. This benefit is void if we are just copying and pasting CSS code we don\u2019t understand. We may as well use a data URI instead.\n\nImportant note\n\nIn all the examples that follow, I\u2019ll be using gradients without a vendor prefix, for readability and brevity. However, you should keep in mind that in reality you need to use all the vendor prefixes (-moz-, -ms-, -o-, -webkit-) as no browser currently implements them without a prefix. Alternatively, you could use -prefix-free and have the current vendor prefix prepended at runtime, only when needed.\n\nThe syntax described here is the one that browsers currently implement. The specification has since changed, but no browser implements the changes yet. If you are interested in what is coming, I suggest you take a look at the dev version of the spec.\n\nIf you are not yet familiar with CSS gradients, you can read these excellent tutorials by John Allsopp and return here later, as in the rest of the article I assume you already know the CSS gradient basics:\n\n\n\tCSS3 Linear Gradients\n\tCSS3 Radial Gradients\n\n\nThe main idea\n\nI\u2019m sure most of you can imagine the background this code generates:\n\nbackground: linear-gradient(left, white 20%, #8b0 80%);\n\nIt\u2019s a simple gradient from one color to another that looks like this:\n\n See this example live\n\nAs you probably know, in this case the first 20% of the container\u2019s width is solid white and the last 20% is solid green. The other 60% is a smooth gradient between these colors. Let\u2019s try moving these color stops closer to each other:\n\nbackground: linear-gradient(left, white 30%, #8b0 70%);\n\n See this example live\n\nbackground: linear-gradient(left, white 40%, #8b0 60%);\n\n See this example live\n\nbackground: linear-gradient(left, white 50%, #8b0 50%);\n\n See this example live\n\nNotice how the gradient keeps shrinking and the solid color areas expanding, until there is no gradient any more in the last example. 
We can even adjust the position of these two color stops to control where each color abruptly changes into another:\n\nbackground: linear-gradient(left, white 30%, #8b0 30%);\n\n See this example live\n\nbackground: linear-gradient(left, white 90%, #8b0 90%);\n\n See this example live\n\nWhat you need to take away from these examples is that when two color stops are at the same position, there is no gradient, only solid colors. Even without going any further, this trick is useful for a number of different use cases like faux columns or the effect I wanted to achieve in my homepage or the -prefix-free page where the background is only shown on one side and hidden on the other:\n\n\n\nCombining with background-size\n\nWe can do wonders, however, if we combine this with the CSS3 background-size property:\n\nbackground: linear-gradient(left, white 50%, #8b0 50%);\nbackground-size: 100px 100px;\n\n See this example live\n\nAnd there it is. We just created the simplest of patterns: (vertical) stripes. We can remove the first parameter (left) or replace it with top and we\u2019ll get horizontal stripes. However, let\u2019s face it: Horizontal and vertical stripes are kinda boring. Most stripey backgrounds we see on the web are diagonal. So, let\u2019s try doing that.\n\nOur first attempt would be to change the angle of the gradient to something like 45deg. However, this results in an ugly pattern like this: \n\n See this example live\n\nBefore reading on, think for a second: why didn\u2019t this produce the desired result? Can you figure it out?\n\nThe reason is that the gradient angle rotates the gradient inside each tile, not the tiled background as a whole. However, didn\u2019t we have the same problem the first time we tried to create diagonal stripes with an image? And then we learned that every stripe has to be included twice, like so:\n\n\n\nSo, let\u2019s try to create that effect with CSS gradients. It\u2019s essentially what we tried before, but with more color stops:\n\nbackground: linear-gradient(45deg, white 25%,\n #8b0 25%, #8b0 50%, \n white 50%, white 75%, \n #8b0 75%);\nbackground-size:100px 100px;\n\n See this example live\n\nAnd there we have our stripes! An easy way to remember the order of the percentages and colors it is that you always have two of the same in succession, except the first and last color.\n\nNote: Firefox for Mac also needs an additional 100% color stop at the end of any pattern with more than two stops, like so: ..., white 75%, #8b0 75%, #8b0). The bug was reported in February 2011 and you can vote for it and track its progress at Bugzilla.\n\nUnfortunately, this is essentially a hack and we will realize that if we try to change the gradient angle to 60deg:\n\n See this example live\n\nNot that maintainable after all, eh? Luckily, CSS3 offers us another way of declaring such backgrounds, which not only helps this case but also results in much more concise code:\n\nbackground: repeating-linear-gradient(60deg, white, white 35px, #8b0 35px, #8b0 70px);\n\n See this example live\n\nIn this case, however, the size has to be declared in the color stop positions and not through background-size, since the gradient is supposed to cover the entire container. You might notice that the declared size is different from the one specified the previous way. 
This is because the size of the stripes is measured differently: in the first example we specify the dimensions of the tile itself; in the second, the width of the stripes (35px), which is measured diagonally.\n\nMultiple backgrounds\n\nUsing only one gradient you can create stripes and that\u2019s about it. There are a few more patterns you can create with just one gradient (linear or radial) but they are more or less boring and ugly. Almost every pattern in my gallery contains a number of different backgrounds. For example, let\u2019s create a polka dot pattern:\n\nbackground: radial-gradient(circle, white 10%, transparent 10%),\nradial-gradient(circle, white 10%, black 10%) 50px 50px;\nbackground-size:100px 100px;\n\n See this example live\n\nNotice that the two gradients are almost the same image, but positioned differently to create the polka dot effect. The only difference between them is that the first (topmost) gradient has transparent instead of black. If it didn\u2019t have transparent regions, it would effectively be the same as having a single gradient, as the topmost gradient would obscure everything beneath it.\n\nThere is an issue with this background. Can you spot it?\n\nThis background will be fine for browsers that support CSS gradients but, for browsers that don\u2019t, it will be transparent as the whole declaration is ignored. We have two ways to provide a fallback, each for different use cases. We have to either declare another background before the gradient, like so:\n\nbackground: black;\nbackground: radial-gradient(circle, white 10%, transparent 10%),\nradial-gradient(circle, white 10%, black 10%) 50px 50px;\nbackground-size:100px 100px;\n\nor declare each background property separately:\n\nbackground-color: black;\nbackground-image: radial-gradient(circle, white 10%, transparent 10%),\nradial-gradient(circle, white 10%, transparent 10%);\nbackground-size:100px 100px;\nbackground-position: 0 0, 50px 50px;\n\nThe vigilant among you will have noticed another change we made to our code in the last example: we altered the second gradient to have transparent regions as well. This way background-color serves a dual purpose: it sets both the fallback color and the background color of the polka dot pattern, so that we can change it with just one edit. Always strive to make code that can be modified with the least number of edits. You might think that it will never be changed in that way but, almost always, given enough time, you\u2019ll be proved wrong.\n\nWe can apply the exact same technique with linear gradients, in order to create checkerboard patterns out of right triangles:\n\nbackground-color: white;\nbackground-image: linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%), \nlinear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%);\nbackground-size:100px 100px;\nbackground-position: 0 0, 50px 50px;\n\n See this example live\n\nUsing the right units\n\nDon\u2019t use pixels for the sizes without any thought. In some cases, ems make much more sense. For example, when you want to make a lined paper background, you want the lines to actually follow the text. If you use pixels, you have to change the size every time you change font-size. 
If you set the background-size in ems, it will naturally follow the text and you will only have to update it if you change line-height.\n\nIs it possible?\n\nThe shapes that can be achieved with only one gradient are:\n\n\n\tstripes\n\tright triangles\n\tcircles and ellipses\n\tsemicircles and other shapes formed from slicing ellipses horizontally or vertically\n\n\nYou can combine several of them to create squares and rectangles (two right triangles put together), rhombi and other parallelograms (four right triangles), curves formed from parts of ellipses, and other shapes.\n\nJust because you can doesn\u2019t mean you should\n\nTechnically, anything can be crafted with these techniques. However, not every pattern is suitable for it. The main advantages of this technique are:\n\n\n\tno extra HTTP requests\n\tshort code\n\thuman-readable code (unlike data URIs) that can be changed without even leaving the CSS file.\n\n\nComplex patterns that require a large number of gradients are probably better left to SVG or bitmap images, since they negate almost every advantage of this technique:\n\n\n\tthey are not shorter\n\tthey are not really comprehensible \u2013 changing them requires much more effort than using an image editor\n\n\nThey still save an HTTP request, but so does a data URI.\n\nI have included some very complex patterns in my gallery, because even though I think they shouldn\u2019t be used in production (except under very exceptional conditions), understanding how they work and coding them helps somebody understand the technology in much more depth.\n\nAnother rule of thumb is that if your pattern needs shapes to obscure parts of other shapes, like in the star pattern or the yin yang pattern, then you probably shouldn\u2019t use it. In these patterns, changing the background color requires you to also change the color of these shapes, making edits very tedious.\n\nIf a certain pattern is not practicable with a reasonable amount of CSS, that doesn\u2019t mean you should resort to bitmap images. SVG is a very good alternative and is supported by all modern browsers.\n\nBrowser support\n\nCSS gradients are supported by Firefox 3.6+, Chrome 10+, Safari 5.1+ and Opera 11.60+ (linear gradients since Opera 11.10). Support is also coming in Internet Explorer when IE10 is released. You can get gradients in older WebKit versions (including most mobile browsers) by using the proprietary -webkit-gradient(), if you really need them.\n\nEpilogue\n\nI hope you find these techniques useful for your own designs. If you come up with a pattern that\u2019s very different from the ones already included, especially if it demonstrates a cool new technique, feel free to send a pull request to the github repo of the patterns gallery. Also, I\u2019m always fascinated to see my techniques put in practice, so if you made something cool and used CSS patterns, I\u2019d love to know about it!\n\nHappy holidays!", "year": "2011", "author": "Lea Verou", "author_slug": "leaverou", "published": "2011-12-16T00:00:00+00:00", "url": "https://24ways.org/2011/css3-patterns-explained/", "topic": "code"} {"rowid": 168, "title": "Unobtrusively Mapping Microformats with jQuery", "contents": "Microformats are everywhere. You can\u2019t shake an electronic stick these days without accidentally poking a microformat-enabled site, and many developers use microformats as a matter of course. And why not? 
After all, why invent your own class names when you can re-use pre-defined ones that give your site extra functionality for free?\n\nNevertheless, while it\u2019s good to know that users of tools such as Tails and Operator will derive added value from your shiny semantics, it\u2019s nice to be able to reuse that effort in your own code.\n\nWe\u2019re going to build a map of some of my favourite restaurants in Brighton. Fitting with the principles of unobtrusive JavaScript, we\u2019ll start with a semantically marked up list of restaurants, then use JavaScript to add the map, look up the restaurant locations and plot them as markers.\n\nWe\u2019ll be using a couple of powerful tools. The first is jQuery, a JavaScript library that is ideally suited for unobtrusive scripting. jQuery allows us to manipulate elements on the page based on their CSS selector, which makes it easy to extract information from microformats.\n\nThe second is Mapstraction, introduced here by Andrew Turner a few days ago. We\u2019ll be using Google Maps in the background, but Mapstraction makes it easy to change to a different provider if we want to later.\n\nGetting Started\n\nWe\u2019ll start off with a simple collection of microformatted restaurant details, representing my seven favourite restaurants in Brighton. The full, unstyled list can be seen in restaurants-plain.html. Each restaurant listing looks like this:\n\n
<li class="vcard">
	<h3 class="fn org">Riddle &amp; Finns</h3>
	<div class="adr">
		<p class="street-address">12b Meeting House Lane</p>
		<p><span class="locality">Brighton</span>, UK</p>
		<p class="postal-code">BN1 1HB</p>
	</div>
	<p>Telephone: <span class="tel">+44 (0)1273 323 008</span></p>
	<p>E-mail: <a class="email" href="mailto:info@riddleandfinns.co.uk">info@riddleandfinns.co.uk</a></p>
</li>
\n\nSince we\u2019re dealing with a list of restaurants, each hCard is marked up inside a list item. Each restaurant is an organisation; we signify this by placing the classes fn and org on the element surrounding the restaurant\u2019s name (according to the hCard spec, setting both fn and org to the same value signifies that the hCard represents an organisation rather than a person).\n\nThe address information itself is contained within a div of class adr. Note that the HTML address
    element is not suitable here for two reasons: firstly, it is intended to mark up contact details for the current document rather than generic addresses; secondly, address is an inline element and as such cannot contain the paragraphs elements used here for the address information.\n\nA nice thing about microformats is that they provide us with automatic hooks for our styling. For the moment we\u2019ll just tidy up the whitespace a bit; for more advanced style tips consult John Allsop\u2019s guide from 24 ways 2006.\n\n.vcard p {\n\tmargin: 0;\n}\n.adr {\n\tmargin-bottom: 0.5em;\n}\n\nTo plot the restaurants on a map we\u2019ll need latitude and longitude for each one. We can find this out from their address using geocoding. Most mapping APIs include support for geocoding, which means we can pass the API an address and get back a latitude/longitude point. Mapstraction provides an abstraction layer around these APIs which can be included using the following script tag:\n\n\n\nWhile we\u2019re at it, let\u2019s pull in the other external scripts we\u2019ll be using:\n\n\n\n\n\n\nThat\u2019s everything set up: let\u2019s write some JavaScript!\n\nIn jQuery, almost every operation starts with a call to the jQuery function. The function simulates method overloading to behave in different ways depending on the arguments passed to it. When writing unobtrusive JavaScript it\u2019s important to set up code to execute when the page has loaded to the point that the DOM is available to be manipulated. To do this with jQuery, pass a callback function to the jQuery function itself:\n\njQuery(function() {\n\t// This code will be executed when the DOM is ready\n});\n\nInitialising the map\n\nThe first thing we need to do is initialise our map. Mapstraction needs a div with an explicit width, height and ID to show it where to put the map. Our document doesn\u2019t currently include this markup, but we can insert it with a single line of jQuery code:\n\njQuery(function() {\n\t// First create a div to host the map\n\tvar themap = jQuery('
<div id="themap"></div>').css({\n\t\t'width': '90%',\n\t\t'height': '400px'\n\t}).insertBefore('ul.restaurants');\n});\n\nWhile this is technically just a single line of JavaScript (with line-breaks added for readability) it\u2019s actually doing quite a lot of work. Let\u2019s break it down into steps:\n\nvar themap = jQuery('<div id="themap"></div>
    ')\n\nHere\u2019s jQuery\u2019s method overloading in action: if you pass it a string that starts with a < it assumes that you wish to create a new HTML element. This provides us with a handy shortcut for the more verbose DOM equivalent:\n\nvar themap = document.createElement('div');\nthemap.id = 'themap';\n\nNext we want to apply some CSS rules to the element. jQuery supports chaining, which means we can continue to call methods on the object returned by jQuery or any of its methods:\n\nvar themap = jQuery('
<div id="themap"></div>').css({\n\t'width': '90%',\n\t'height': '400px'\n})\n\nFinally, we need to insert our new HTML element into the page. jQuery provides a number of methods for element insertion, but in this case we want to position it directly before the <ul> we are using to contain our restaurants. jQuery\u2019s insertBefore() method takes a CSS selector indicating an element already on the page and places the current jQuery selection directly before that element in the DOM.\n\nvar themap = jQuery('<div id="themap"></div>
      ').css({\n\t'width': '90%',\n\t'height': '400px'\n}).insertBefore('ul.restaurants');\n\nFinally, we need to initialise the map itself using Mapstraction. The Mapstraction constructor takes two arguments: the first is the ID of the element used to position the map; the second is the mapping provider to use (in this case google ):\n\n// Initialise the map\nvar mapstraction = new Mapstraction('themap','google');\n\nWe want the map to appear centred on Brighton, so we\u2019ll need to know the correct co-ordinates. We can use www.getlatlon.com to find both the co-ordinates and the initial map zoom level.\n\n// Show map centred on Brighton\nmapstraction.setCenterAndZoom(\n\tnew LatLonPoint(50.82423734980143, -0.14007568359375),\n\t15 // Zoom level appropriate for Brighton city centre\n);\n\nWe also want controls on the map to allow the user to zoom in and out and toggle between map and satellite view.\n\nmapstraction.addControls({\n\tzoom: 'large',\n\tmap_type: true\n});\n\nAdding the markers\n\nIt\u2019s finally time to parse some microformats. Since we\u2019re using hCard, the information we want is wrapped in elements with the class vcard. We can use jQuery\u2019s CSS selector support to find them:\n\nvar vcards = jQuery('.vcard');\n\nNow that we\u2019ve found them, we need to create a marker for each one in turn. Rather than using a regular JavaScript for loop, we can instead use jQuery\u2019s each() method to execute a function against each of the hCards.\n\njQuery('.vcard').each(function() {\n\t// Do something with the hCard\n});\n\nWithin the callback function, this is set to the current DOM element (in our case, the list item). If we want to call the magic jQuery methods on it we\u2019ll need to wrap it in another call to jQuery:\n\njQuery('.vcard').each(function() {\n\tvar hcard = jQuery(this);\n});\n\nThe Google maps geocoder seems to work best if you pass it the street address and a postcode. We can extract these using CSS selectors: this time, we\u2019ll use jQuery\u2019s find() method which searches within the current jQuery selection:\n\nvar streetaddress = hcard.find('.street-address').text();\nvar postcode = hcard.find('.postal-code').text();\n\nThe text() method extracts the text contents of the selected node, minus any HTML markup.\n\nWe\u2019ve got the address; now we need to geocode it. Mapstraction\u2019s geocoding API requires us to first construct a MapstractionGeocoder, then use the geocode() method to pass it an address. Here\u2019s the code outline:\n\nvar geocoder = new MapstractionGeocoder(onComplete, 'google');\ngeocoder.geocode({'address': 'the address goes here');\n\nThe onComplete function is executed when the geocoding operation has been completed, and will be passed an object with the resulting point on the map. We just want to create a marker for the point:\n\nvar geocoder = new MapstractionGeocoder(function(result) {\n\tvar marker = new Marker(result.point);\n\tmapstraction.addMarker(marker);\n}, 'google'); \n\nFor our purposes, joining the street address and postcode with a comma to create the address should suffice:\n\ngeocoder.geocode({'address': streetaddress + ', ' + postcode}); \n\nThere\u2019s one last step: when the marker is clicked, we want to display details of the restaurant. We can do this with an info bubble, which can be configured by passing in a string of HTML. 
We\u2019ll construct that HTML using jQuery\u2019s html() method on our hcard object, which extracts the HTML contained within that DOM node as a string.\n\nvar marker = new Marker(result.point);\nmarker.setInfoBubble(\n\t'
<div class="bubble">' + hcard.html() + '</div>
'\n);\nmapstraction.addMarker(marker);\n\nWe\u2019ve wrapped the bubble in a div with class bubble to make it easier to style. Google Maps can behave strangely if you don\u2019t provide an explicit width for your info bubbles, so we\u2019ll add that to our CSS now:\n\n.bubble {\n\twidth: 300px;\n}\n\nThat\u2019s everything we need: let\u2019s combine our code together:\n\njQuery(function() {\n\t// First create a div to host the map\n\tvar themap = jQuery('<div id="themap"></div>
      ').css({\n\t\t'width': '90%',\n\t\t'height': '400px'\n\t}).insertBefore('ul.restaurants');\n\t// Now initialise the map\n\tvar mapstraction = new Mapstraction('themap','google');\n\tmapstraction.addControls({\n\t\tzoom: 'large',\n\t\tmap_type: true\n\t});\n\t// Show map centred on Brighton\n\tmapstraction.setCenterAndZoom(\n\t\tnew LatLonPoint(50.82423734980143, -0.14007568359375),\n\t\t15 // Zoom level appropriate for Brighton city centre\n\t);\n\t// Geocode each hcard and add a marker\n\tjQuery('.vcard').each(function() {\n\t\tvar hcard = jQuery(this);\n\t\tvar streetaddress = hcard.find('.street-address').text();\n\t\tvar postcode = hcard.find('.postal-code').text();\n\t\tvar geocoder = new MapstractionGeocoder(function(result) {\n\t\t\tvar marker = new Marker(result.point);\n\t\t\tmarker.setInfoBubble(\n\t\t\t\t'
<div class="bubble">' + hcard.html() + '</div>
      '\n\t\t\t);\n\t\t\tmapstraction.addMarker(marker);\n\t\t}, 'google');\t \n\t\tgeocoder.geocode({'address': streetaddress + ', ' + postcode});\n\t});\n});\n\nHere\u2019s the finished code.\n\nThere\u2019s one last shortcut we can add: jQuery provides the $ symbol as an alias for jQuery. We could just go through our code and replace every call to jQuery() with a call to $(), but this would cause incompatibilities if we ever attempted to use our script on a page that also includes the Prototype library. A more robust approach is to start our code with the following:\n\njQuery(function($) {\n\t// Within this function, $ now refers to jQuery\n\t// ...\n});\n\njQuery cleverly passes itself as the first argument to any function registered to the DOM ready event, which means we can assign a local $ variable shortcut without affecting the $ symbol in the global scope. This makes it easy to use jQuery with other libraries.\n\nLimitations of Geocoding\n\nYou may have noticed a discrepancy creep in to the last example: whereas my original list included seven restaurants, the geocoding example only shows five. This is because the Google Maps geocoder incorporates a rate limit: more than five lookups in a second and it starts returning error messages instead of regular results.\n\nIn addition to this problem, geocoding itself is an inexact science: while UK postcodes generally get you down to the correct street, figuring out the exact point on the street from the provided address usually isn\u2019t too accurate (although Google do a pretty good job).\n\nFinally, there\u2019s the performance overhead. We\u2019re making five geocoding requests to Google for every page served, even though the restaurants themselves aren\u2019t likely to change location any time soon. Surely there\u2019s a better way of doing this?\n\nMicroformats to the rescue (again)! The geo microformat suggests simple classes for including latitude and longitude information in a page. We can add specific points for each restaurant using the following markup:\n\n
<li class="vcard">
	<h3 class="fn org">E-Kagen</h3>
	<div class="adr">
		<p class="street-address">22-23 Sydney Street</p>
		<p><span class="locality">Brighton</span>, UK</p>
		<p class="postal-code">BN1 4EN</p>
	</div>
	<p>Telephone: <span class="tel">+44 (0)1273 687 068</span></p>
	<p class="geo">Lat/Lon:
		<span class="latitude">50.827917</span>,
		<span class="longitude">-0.137764</span>
	</p>
</li>
\n\nAs before, I used www.getlatlon.com to find the exact locations \u2013 I find satellite view is particularly useful for locating individual buildings.\n\nLatitudes and longitudes are great for machines but not so useful for human beings. We could hide them entirely with display: none, but I prefer to merely de-emphasise them (someone might want them for their GPS unit):\n\n.vcard .geo {\n\tmargin-top: 0.5em;\n\tfont-size: 0.85em;\n\tcolor: #ccc;\n}\n\nIt\u2019s probably a good idea to hide them completely when they\u2019re displayed inside an info bubble:\n\n.bubble .geo {\n\tdisplay: none;\n}\n\nWe can extract the co-ordinates in the same way we extracted the address. Since we\u2019re no longer geocoding anything our code becomes a lot simpler:\n\n$('.vcard').each(function() {\n\tvar hcard = $(this);\n\tvar latitude = hcard.find('.geo .latitude').text();\n\tvar longitude = hcard.find('.geo .longitude').text();\n\tvar marker = new Marker(new LatLonPoint(latitude, longitude));\n\tmarker.setInfoBubble(\n\t\t'<div class="bubble">' + hcard.html() + '</div>
      '\n\t);\n\tmapstraction.addMarker(marker);\n});\n\nAnd here\u2019s the finished geo example.\n\nFurther reading\n\nWe\u2019ve only scratched the surface of what\u2019s possible with microformats, jQuery (or just regular JavaScript) and a bit of imagination. If this example has piqued your interest, the following links should give you some more food for thought.\n\n\n\tThe hCard specification\n\tNotes on parsing hCards\n\tjQuery for JavaScript programmers \u2013 my extended tutorial on jQuery.\n\tDann Webb\u2019s Sumo \u2013 a full JavaScript library for parsing microformats, based around some clever metaprogramming techniques.\n\tJeremy Keith\u2019s Adactio Austin \u2013 the first place I saw using microformats to unobtrusively plot locations on a map. Makes clever use of hEvent as well.", "year": "2007", "author": "Simon Willison", "author_slug": "simonwillison", "published": "2007-12-12T00:00:00+00:00", "url": "https://24ways.org/2007/unobtrusively-mapping-microformats-with-jquery/", "topic": "code"} {"rowid": 117, "title": "The First Tool You Reach For", "contents": "Microsoft recently announced that Internet Explorer 8 will be released in the first half of 2009. Compared to the standards support of other major browsers, IE8 will not be especially great, but it will finally catch up with the state of the art in one specific area: support for CSS tables. This milestone has the potential to trigger an important change in the way you approach web design.\n\nTo show you just how big a difference CSS tables can make, think about how you might code a fluid, three-column layout from scratch. Just to make your life more difficult, give it one fixed-width column, with a background colour that differs from the rest of the page. Ready? Go!\n\nOkay, since you\u2019re the sort of discerning web designer who reads 24ways, I\u2019m going to assume you at least considered doing this without using HTML tables for the layout. If you\u2019re especially hardcore, I imagine you began thinking of CSS floats, negative margins, and faux columns. If you did, colour me impressed!\n\nNow admit it: you probably also gave an inward sigh about the time it would take to figure out the math on the negative margin overlaps, check for dropped floats in Internet Explorer and generally wrestle each of the major browsers into giving you what you want. If after all that you simply gave up and used HTML tables, I can\u2019t say I blame you.\n\nThere are plenty of professional web designers out there who still choose to use HTML tables as their main layout tool. Sure, they may know that users with screen readers get confused by inappropriate use of tables, but they have a job to do, and they want tools that will make that job easy, not difficult.\n\nNow let me show you how to do it with CSS tables. First, we have a div element for each of our columns, and we wrap them all in another two divs:\n\n
<div class=\"container\">\n\t<div>\n\t\t<div id=\"menu\">\n\t\t\u22ee\n\t\t</div>\n\t\t<div id=\"content\">\n\t\t\u22ee\n\t\t</div>\n\t\t<div id=\"sidebar\">\n\t\t\u22ee\n\t\t</div>\n\t</div>\n</div>
      \n\nDon\u2019t sweat the \u201cdiv clutter\u201d in this code. Unlike tables, divs have no semantic meaning, and can therefore be used liberally (within reason) to provide hooks for the styles you want to apply to your page.\n\nUsing CSS, we can set the outer div to display as a table with collapsed borders (i.e. adjacent cells share a border) and a fixed layout (i.e. cell widths unaffected by their contents):\n\n.container {\n\tdisplay: table;\n\tborder-collapse: collapse;\n\ttable-layout: fixed;\n}\n\nWith another two rules, we set the middle div to display as a table row, and each of the inner divs to display as table cells:\n\n.container > div {\n\tdisplay: table-row;\n}\n.container > div > div {\n\tdisplay: table-cell;\n}\n\nFinally, we can set the widths of the cells (and of the table itself) directly:\n\n.container {\n\twidth: 100%;\n}\n#menu {\n\twidth: 200px;\n}\n#content {\n\twidth: auto;\n}\n#sidebar {\n\twidth: 25%;\n}\n\nAnd, just like that, we have a rock solid three-column layout, ready to be styled to your own taste, like in this example:\n\n\n\nThis example will render perfectly in reasonably up-to-date versions of Firefox, Safari and Opera, as well as the current beta release of Internet Explorer 8.\n\nCSS tables aren\u2019t only useful for multi-column page layout; they can come in handy in most any situation that calls for elements to be displayed side-by-side on the page. Consider this simple login form layout:\n\n\n\nThe incantation required to achieve this layout using CSS floats may be old hat to you by now, but try to teach it to a beginner, and watch his eyes widen in horror at the hoops you have to jump through (not to mention the assumptions you have to build into your design about the length of the form labels).\n\nHere\u2019s how to do it with CSS tables:\n\n
<form action=\"#\" method=\"post\">\n\t<div>\n\t\t<div>\n\t\t\t<label for=\"username\">Username:</label>\n\t\t\t<span><input type=\"text\" name=\"username\" id=\"username\"></span>\n\t\t</div>\n\t\t<div>\n\t\t\t<label for=\"password\">Password:</label>\n\t\t\t<span><input type=\"password\" name=\"password\" id=\"password\"></span>\n\t\t</div>\n\t\t<div>\n\t\t\t<label for=\"login\"></label>\n\t\t\t<span><input type=\"submit\" id=\"login\" value=\"Log in\"></span>\n\t\t</div>\n\t</div>\n</form>
      \n\nThis time, we\u2019re using a mixture of divs and spans as semantically transparent styling hooks. Let\u2019s look at the CSS code.\n\nFirst, we set up the outer div to display as a table, the inner divs to display as table rows, and the labels and spans as table cells (with right-aligned text):\n\nform > div {\n\tdisplay: table;\n}\nform > div > div {\n\tdisplay: table-row;\n}\nform label,\nform span {\n\tdisplay: table-cell;\n\ttext-align: right;\n}\n\nWe want the first column of the table to be wide enough to accommodate our labels, but no wider. With CSS float techniques, we had to guess at what that width was likely to be, and adjust it whenever we changed our form labels. With CSS tables, we can simply set the width of the first column to something very small (1em), and then use the white-space property to force the column to the required width:\n\nform label {\n\twhite-space: nowrap;\n\twidth: 1em;\n}\n\nTo polish off the layout, we\u2019ll make our text and password fields occupy the full width of the table cells that contain them:\n\ninput[type=text],\ninput[type=password] {\n\twidth: 100%;\n}\n\nThe rest is margins, padding and borders to get the desired look. Check out the finished example.\n\nAs the first tool you reach for when approaching any layout task, CSS tables make a lot more sense to your average designer than the cryptic incantations called for by CSS floats. When IE8 is released and all major browsers support CSS tables, we can begin to gradually deploy CSS table-based layouts on sites that are more and more mainstream.\n\nIn our new book, Everything You Know About CSS Is Wrong!, Rachel Andrew and I explore in much greater detail how CSS tables work as a page layout tool in the real world. CSS tables have their quirks just like floats do, but they don\u2019t tend to affect common layout tasks, and the workarounds tend to be less fiddly too. Check it out, and get ready for the next big step forward in web design with CSS.", "year": "2008", "author": "Kevin Yank", "author_slug": "kevinyank", "published": "2008-12-13T00:00:00+00:00", "url": "https://24ways.org/2008/the-first-tool-you-reach-for/", "topic": "code"} {"rowid": 175, "title": "Front-End Code Reusability with CSS and JavaScript", "contents": "Most web standards-based developers are more than familiar with creating their sites with semantic HTML with lots and lots of CSS. With each new page in a design, the CSS tends to grow and grow and more elements and styles are added. But CSS can be used to better effect.\n\nThe idea of object-oriented CSS isn\u2019t new. Nicole Sullivan has written a presentation on the subject and outlines two main concepts: separate structure and visual design; and separate container and content. Jeff Croft talks about Applying OOP Concepts to CSS:\n\n\n\tI can make a class of .box that defines some basic layout structure, and another class of .rounded that provides rounded corners, and classes of .wide and .narrow that define some widths, and then easily create boxes of varying widths and styles by assigning multiple classes to an element, without having to duplicate code in my CSS.\n\n\nThis concept helps reduce CSS file size, allows for great flexibility, rapid building of similar content areas and means greater consistency throughout the entire design. 
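\n\nTo make the idea concrete, here\u2019s a minimal sketch of the kind of classes Jeff describes; the class names come straight from his example, while the property values are purely illustrative:\n\n.box {\n\t/* structure: shared layout for any box */\n\tmargin: 0 0 1em;\n\tpadding: 1em;\n}\n.rounded {\n\t/* skin: rounded corners layered on top of .box */\n\tborder-radius: 6px;\n}\n.wide {\n\twidth: 75%;\n}\n.narrow {\n\twidth: 25%;\n}\n\nAn element marked up with class=\"box rounded narrow\" then takes its structure from one rule and its visual treatment from the others, with no duplicated CSS.\n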
You can also take this concept one step further and apply it to site behaviour with JavaScript.\n\nBuild a versatile slideshow\n\nI will show you how to build multiple slideshows using jQuery, allowing varying levels of functionality which you may find on one site design. The code will be flexible enough to allow you to add previous/next links, image pagination and the ability to change the animation type. More importantly, it will allow you to apply any combination of these features.\n\nImage galleries are simply a list of images, so the obvious choice of marking the content up is to use a
        . Many designs, however, do not cater to non-JavaScript versions of the website, and thus don\u2019t take in to account large multiple images. You could also simply hide all the other images in the list, apart from the first image. This method can waste bandwidth because the other images might be downloaded when they are never going to be seen.\n\nTaking this second concept \u2014 only showing one image \u2014 the only code you need to start your slideshow is an tag. The other images can be loaded dynamically via either a per-page JavaScript array or via AJAX.\n\nThe slideshow concept is built upon the very versatile Cycle jQuery Plugin and is structured in to another reusable jQuery plugin. Below is the HTML and JavaScript snippet needed to run every different type of slideshow I have mentioned above.\n\n\"About\n\n\nSlideshow plugin\n\nIf you\u2019re not familiar with jQuery or how to write and author your own plugin there are plenty of articles to help you out.\n\njQuery has a chainable interface and this is something your plugin must implement. This is easy to achieve, so your plugin simply returns the collection it is using:\n\nreturn this.each(\n\tfunction () {}\n};\n\nLocal Variables\n\nTo keep the JavaScript clean and avoid any conflicts, you must set up any variables which are local to the plugin and should be used on each collection item. Defining all your variables at the top under one statement makes adding more and finding which variables are used easier. For other tips, conventions and improvements check out JSLint, the \u201cJavaScript Code Quality Tool\u201d.\n\nvar $$, $div, $images, $arrows, $pager,\n\tid, selector, path, o, options,\n\theight, width,\n\tlist = [], li = 0,\n\tparts = [], pi = 0,\n\tarrows = ['Previous', 'Next'];\n\nCache jQuery Objects\n\nIt is good practice to cache any calls made to jQuery. This reduces wasted DOM calls, can improve the speed of your JavaScript code and makes code more reusable.\n\nThe following code snippet caches the current selected DOM element as a jQuery object using the variable name $$. Secondly, the plugin makes its settings available to the Metadata plugin\u2021 which is best practice within jQuery plugins.\n\nFor each slideshow the plugin generates a
        with a class of slideshow and a unique id. This is used to wrap the slideshow images, pagination and controls.\n\nThe base path which is used for all the images in the slideshow is calculated based on the existing image which appears on the page. For example, if the path to the image on the page was /img/flowers/1.jpg the plugin would use the path /img/flowers/ to load the other images.\n\n$$ = $(this);\no = $.metadata ? $.extend({}, settings, $$.metadata()) : settings;\nid = 'slideshow-' + (i++ + 1);\n$div = $('
        ').addClass('slideshow').attr('id', id);\nselector = '#' + id + ' ';\npath = $$.attr('src').replace(/[0-9]\\.jpg/g, '');\noptions = {};\nheight = $$.height();\nwidth = $$.width();\n\nNote: the plugin uses conventions such as folder structure and numeric filenames. These conventions help with the reusable aspect of plugins and best practices.\n\nBuild the Images\n\nThe cycle plugin uses a list of images to create the slideshow. Because we chose to start with one image we must now build the list programmatically. This is a case of looping through the images which were added via the plugin options, building the appropriate HTML and appending the resulting
li to the DOM.\n\n$.each(o.images, function () {\n\tlist[li++] = '<li>';\n\tlist[li++] = '<img src=\"' + path + this + '.jpg\" width=\"' + width + '\" height=\"' + height + '\" />';\n\tlist[li++] = '</li>';\n});\n$images = $('<ul/>
            ').addClass('cycle-images');\n$images.append(list.join('')).appendTo($div);\n\nAlthough jQuery provides the append method it is much faster to create one really long string and append it to the DOM at the end.\n\nUpdate the Options\n\nHere are some of the options we\u2019re making available by simply adding classes to the . You can change the slideshow effect from the default fade to the sliding effect. By adding the class of stopped the slideshow will not auto-play and must be controlled via pagination or previous and next links.\n\n// different effect\nif ($$.is('.slide')) {\n\toptions.fx = 'scrollHorz';\n}\n// don't move by default\nif ($$.is('.stopped')) {\n\toptions.timeout = 0;\n}\n\nIf you are using the same set of images throughout a website you may wish to start on a different image on each page or section. This can be easily achieved by simply adding the appropriate starting class to the .\n\n// based on the class name on the image\nif ($$.is('[class*=start-]')) {\n\toptions.startingSlide = parseInt($$.attr('class').replace(/.*start-([0-9]+).*/g, \"$1\"), 10) - 1;\n}\n\nFor example:\n\n\"About\n\nBy default, and without JavaScript, the third image in this slideshow is shown. When the JavaScript is applied to the page the slideshow must know to start from the correct place, this is why the start class is required.\n\nYou could capture the default image name and parse it to get the position, but only the default image needs to be numeric to work with this plugin (and could easily be changed in future). Therefore, this extra specifically defined option means the plugin is more tolerant.\n\nPrevious/Next Links\n\nA common feature of slideshows is previous and next links enabling the user to manually progress the images. The Cycle plugin supports this functionality, but you must generate the markup yourself. Most people add these directly in the HTML but normally only support their behaviour when JavaScript is enabled. This goes against progressive enhancement. To keep with the best practice progress enhancement method the previous/next links should be generated with JavaScript.\n\nThe follow snippet checks whether the slideshow requires the previous/next links, via the arrows class. It restricts the Cycle plugin to the specific slideshow using the selector we created at the top of the plugin. This means multiple slideshows can run on one page without conflicting each other.\n\nThe code creates a
ul using the arrows array we defined at the top of the plugin. It also adds a class to the slideshow container, meaning you can style different combinations of options in your CSS.\n\n// create the arrows\nif ($$.is('.arrows') && list.length > 1) {\n\toptions.next = selector + '.next';\n\toptions.prev = selector + '.previous';\n\t$arrows = $('<ul/>
                ').addClass('cycle-arrows');\n\t$.each(arrows, function (i, val) {\n\t\tparts[pi++] = '
              • ';\n\t\tparts[pi++] = '';\n\t\tparts[pi++] = '' + val + '';\n\t\tparts[pi++] = '';\n\t\tparts[pi++] = '
              • ';\n\t});\n\t$arrows.append(parts.join('')).appendTo($div);\n\t$div.addClass('has-cycle-arrows');\n}\n\nThe arrow array could be placed inside the plugin settings to allow for localisation.\n\nPagination\n\nThe Cycle plugin creates its own HTML for the pagination of the slideshow. All our plugin needs to do is create the list and selector to use. This snippet creates the pagination container and appends it to our specific slideshow container. It sets the Cycle plugin pager option, restricting it to the specific slideshow using the selector we created at the top of the plugin. Like the previous/next links, a class is added to the slideshow container allowing you to style the slideshow itself differently.\n\n// create the clickable pagination\nif ($$.is('.pagination') && list.length > 1) {\n\toptions.pager = selector + '.cycle-pagination';\n\t$pager = $('
                  ').addClass('cycle-pagination');\n\t$pager.appendTo($div);\n\t$div.addClass('has-cycle-pagination');\n}\n\nNote: the Cycle plugin creates a
ul with anchors listed directly inside without the surrounding li
                  • . Unfortunately this is invalid markup but the code still works.\n\nDemos\n\nWell, that describes all the ins-and-outs of the plugin, but demos make it easier to understand! Viewing the source on the demo page shows some of the combinations you can create with a simple , a few classes and some thought-out JavaScript.\n\nView the demos \u2192\n\nDecide on defaults\n\nThe slideshow plugin uses the exact same settings as the Cycle plugin, but some are explicitly set within the slideshow plugin when using the classes you have set.\n\nWhen deciding on what functionality is going to be controlled via this class method, be careful to choose your defaults wisely. If all slideshows should auto-play, don\u2019t make this an option \u2014 make the option to stop the auto-play. Similarly, if every slideshow should have previous/next functionality make this the default and expose the ability to remove them with a class such as \u201cno-pagination\u201d.\n\nIn the examples presented on this article I have used a class on each . You can easily change this to anything you want and simply apply the plugin based on the jQuery selector required.\n\nGrab your images\n\nIf you are using AJAX to load in your images, you can speed up development by deciding on and keeping to a folder structure and naming convention. There are two methods: basing the image path based on the current URL; or based on the src of the image. The first allows a different slideshow on each page, but in many instances a site will have a couple of sets of images and therefore the second method is probably preferred.\n\nMetadata \u2021\n\nA method which allows you to directly modify settings in certain plugins, which also uses the classes from your HTML already exists. This is a jQuery plugin called Metadata. This method allows for finer control over the plugin settings themselves. Some people, however, may dislike the syntax and prefer using normal classes, like above which when sprinkled with a bit more JavaScript allows you to control what you need to control.\n\nThe takeaway\n\nHopefully you have understood not only what goes in to a basic jQuery plugin but also learnt a new and powerful idea which you can apply to other areas of your website.\n\nThe idea can also be applied to other common interfaces such as lightboxes or mapping services such as Google Maps \u2014 for example creating markers based on a list of places, each with different pin icons based the anchor class.", "year": "2009", "author": "Trevor Morris", "author_slug": "trevormorris", "published": "2009-12-06T00:00:00+00:00", "url": "https://24ways.org/2009/front-end-code-reusability-with-css-and-javascript/", "topic": "code"} {"rowid": 294, "title": "New Tricks for an Old Dog", "contents": "Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy.\nI\u2019ve loved doing this. 
Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn\u2019t know or exposes an area of uncertainty that you can go work on.\nIn this article, I hope to share some experiences and techniques that will prove useful in your own situation and that you can impress your friends in some new and exciting ways!\nHow do you do, fellow kids?\nTo start with I\u2019d like to introduce you to our JavaScript framework, Flight. Right now it\u2019s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React.\nOver time, as we used Flight for more complex interfaces, we found it wasn\u2019t scaling with us.\nComposing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn\u2019t come built-in to Flight, and it made reusing components a real challenge.\nThere was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn\u2019t predict how a component would be built until you opened it.\nMaking matters worse, Flight relied on events to move data around the application. Unfortunately, events aren\u2019t good for giving structure to complex logic. They jump around in a way that\u2019s hard to understand and debug, and force you to search your code for a specific string \u2014 the event name\u201a to figure out what\u2019s going on.\nTo find fixes for these problems, we looked around at other frameworks. We like React for it\u2019s simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone.\nBut when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you\u2019ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort!\nInstead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we already were using.\nBoiled down, what we liked seemed quite simple:\n\nComponent nesting & composition\nEasy, predictable state management\nNormal functions for data manipulation\n\nMaking these ideas applicable to Flight took some time, but we\u2019re in a much better place now. Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app.\nWhile the specifics of our situation and Flight aren\u2019t really important, this experience taught me something: \n\nDistill good tech into great ideas. You can apply great ideas anywhere.\n\nYou don\u2019t have to use cool kids\u2019 latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already?\nTimes, they are a changin\u2019\nApart from stealing ideas from the new and shiny, how can we keep make the most of improved tooling and techniques? Times change and so should the way we write code.\nGoing back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. 
Without a transpiler like Babel we were missing out new language features, and without a more advanced build tools like Webpack, every module\u2019s source was encased in AMD boilerplate.\nIn fact, we found ourselves with a mix of both AMD syntaxes:\ndefine([\"lodash\"], function (_) {\n // . . .\n});\n\ndefine(function (require) {\n var _ = require(\"lodash\");\n // . . .\n});\nThis just wouldn\u2019t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax:\nimport _ from \"lodash\";\nThese days we\u2019re using Babel, Webpack, ES2015 modules and many new language features that make development just\u2026 better. But how did we get there?\nTo explain, I want to introduce you to codemods and jscodeshift.\nA codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL(\"...\") to new URL(\"...\").\njscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file\u2019s syntax tree \u2013 a data-structure representation of the code \u2014 finding and modifying in place as it goes.\nHere\u2019s an example that renames all instances of the variable foo to bar:\nmodule.exports = function (fileInfo, api) {\n return api\n .jscodeshift(fileInfo.source)\n .findVariableDeclarators('foo')\n .renameTo('bar')\n .toSource();\n};\nIt\u2019s a seriously powerful tool, and we\u2019ve used it to write a series of codemods that:\n\nrename modules,\nunify our use of AMD to a single syntax,\ntransition from one testing framework to another, and\nswitch from AMD to CommonJS.\n\nThese changes can be pretty huge and far-reaching. Here\u2019s an example commit from when we switched to CommonJS:\ncommit 8f75de8fd4c702115c7bf58febba1afa96ae52fc\nDate: Tue Jul 12 2016\n\n Run AMD -> CommonJS codemod\n\n 418 files changed, 47550 insertions(+), 48468 deletions(-)\n\nYep, that\u2019s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone!\n\nFrom this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes:\n\nFind all the existing patterns\nChoose the two most similar\nUnify with a codemod\nRepeat.\n\nFor example:\n\nFor module loading, we had 2 competing AMD patterns plus some use of CommonJS\nThe two AMD syntaxes were the most similar\nWe used a codemod to move to unify the AMD patterns\nLater we returned to AMD to convert it to CommonJS\n\nIt\u2019s worked for us, and if you\u2019d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer.\nWelcome aboard!\nAs TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad of microservices that manage our data and their layers of authentication, security and business logic around them make for an overwhelming amount of information to hand to a newbie.\nInspired by Amy\u2019s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design our onboarding that each of our new hires go through to make the most of their first few weeks.\nJoining a new company, team, or both, is stressful and uncomfortable. Everything you can do to help a new hire will be valuable to them. 
So please, take time to design your onboarding!\nAnd as you build up an onboarding process, you\u2019ll create things that are useful for more than just new hires; it\u2019ll force you to write documentation, for example, in a way that\u2019s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help.\nThis is something that\u2019s taken for granted in open source, but somehow I think we forget about it in big companies.\nAfter all, better documentation is just a good thing. You will forget things from time to time, and you\u2019d be surprised how often the \u201cbeginner\u201d docs help!\nFor TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts:\n\nWhat are our dependencies?\nWhere are the potential points of failure?\nWhere does authentication live? Storage? Caching?\nWho owns \u201cX\u201d?\n\n\nOf course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track.\nTo address this, we\u2019ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review.\nIn my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team\u2019s work. But, if you\u2019re not doing it, here\u2019s my suggestion for getting started:\n\nEvery pull request gets a +1 from someone else.\n\nThat\u2019s all \u2014 it\u2019s very light-weight and easy. Just ask someone to have a quick look over your code before it goes into master.\nAt Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things:\n\nDon\u2019t review for more than hour 1\nKeep reviews smaller than ~400 lines 2\nCode review your own code first 2\n\nAfter an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone\u2019s put code up for a review, that review is blocking them doing other work. It\u2019s your job to unblock them.\nOn TweetDeck, we actually try to keep reviews under 250 lines. It doesn\u2019t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration.\nBut the most important thing I\u2019ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team\u2019s: with fresh, critical eyes, after a break, using a dedicated code review tool.\nIt\u2019s amazing what you can spot when you put a new in a new interface around code you\u2019ve been staring at for hours!\nAnd yes, this list features science. The data backs up these conclusions, and if you\u2019d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. 
It\u2019s ace.\nFor more dedicated information sharing, we\u2019ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team-member shares or teaches something to everyone else, and next time it\u2019s someone else\u2019s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results.\nIf you\u2019d like to run a seminar, one thing you could try to get started: run a point at the thing you least understand in our architecture session \u2014 thanks to James for this idea. And guess what\u2026 your onboarding architecture diagrams will help (and benefit from) this!\nMore, please!\nThere\u2019s a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations.\nAnd finally, thanks to Amy for proof reading this and to Passy for feedback on the original talk.\n\n\n\n\nDunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Beverly, MA: SmartBear Software.\u00a0\u21a9\n\n\nCohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Proceedings of the 22nd ICSE 2000: 467-476.\u00a0\u21a9 \u21a9", "year": "2016", "author": "Tom Ashworth", "author_slug": "tomashworth", "published": "2016-12-18T00:00:00+00:00", "url": "https://24ways.org/2016/new-tricks-for-an-old-dog/", "topic": "code"} {"rowid": 165, "title": "Transparent PNGs in Internet Explorer 6", "contents": "Newer breeds of browser such as Firefox and Safari have offered support for PNG images with full alpha channel transparency for a few years. With the use of hacks, support has been available in Internet Explorer 5.5 and 6, but the hacks are non-ideal and have been tricky to use. With IE7 winning masses of users from earlier versions over the last year, full PNG alpha-channel transparency is becoming more of a reality for day-to-day use.\n\nHowever, there are still numbers of IE6 users out there who we can\u2019t leave out in the cold this Christmas, so in this article I\u2019m going to look what we can do to support IE6 users whilst taking full advantage of transparency for the majority of a site\u2019s visitors.\n\nSo what\u2019s alpha channel transparency?\n\nCast your minds back to the Ghost of Christmas Past, the humble GIF. Images in GIF format offer transparency, but that transparency is either on or off for any given pixel. Each pixel\u2019s either fully transparent, or a solid colour. In GIF, transparency is effectively just a special colour you can chose for a pixel.\n\nThe PNG format tackles the problem rather differently. As well as having any colour you chose, each pixel also carries a separate channel of information detailing how transparent it is. This alpha channel enables a pixel to be fully transparent, fully opaque, or critically, any step in between.\n\nThis enables designers to produce images that can have, for example, soft edges without any of the \u2018halo effect\u2019 traditionally associated with GIF transparency. If you\u2019ve ever worked on a site that has different colour schemes and therefore requires multiple versions of each graphic against a different colour, you\u2019ll immediately see the benefit. 
\n\nWhat\u2019s perhaps more interesting than that, however, is the extra creative freedom this gives designers in creating beautiful sites that can remain web-like in their ability to adjust, scale and reflow.\n\nThe Internet Explorer problem\n\nUp until IE7, there has been no fully native support for PNG alpha channel transparency in Internet Explorer. However, since IE5.5 there has been some support in the form of proprietary filter called the AlphaImageLoader. Internet Explorer filters can be applied directly in your CSS (for both inline and background images), or by setting the same CSS property with JavaScript. \n\nCSS:\n\nimg {\n\tfilter: progid:DXImageTransform.Microsoft.AlphaImageLoader(...);\n}\n\nJavaScript:\n\nimg.style.filter = \"progid:DXImageTransform.Microsoft.AlphaImageLoader(...)\";\n\nThat may sound like a problem solved, but all is not as it may appear. Firstly, as you may realise, there\u2019s no CSS property called filter in the W3C CSS spec. It\u2019s a proprietary extension added by Microsoft that could potentially cause other browsers to reject your entire CSS rule. \n\nSecondly, AlphaImageLoader does not magically add full PNG transparency support so that a PNG in the page will just start working. Instead, when applied to an element in the page, it draws a new rendering surface in the same space that element occupies and loads a PNG into it. If that sounds weird, it\u2019s because that\u2019s precisely what it is. However, by and large the result is that PNGs with an alpha channel can be accommodated.\n\nThe pitfalls\n\nSo, whilst support for PNG transparency in IE5.5 and 6 is possible, it\u2019s not without its problems.\n\nBackground images cannot be positioned or repeated\n\nThe AlphaImageLoader does work for background images, but only for the simplest of cases. If your design requires the image to be tiled (background-repeat) or positioned (background-position) you\u2019re out of luck. The AlphaImageLoader allows you to set a sizingMethod to either crop the image (if necessary) or to scale it to fit. Not massively useful, but something at least.\n\nDelayed loading and resource use\n\nThe AlphaImageLoader can be quite slow to load, and appears to consume more resources than a standard image when applied. Typically, you\u2019d need to add thousands of GIFs or JPEGs to a page before you saw any noticeable impact on the browser, but with the AlphaImageLoader filter applied Internet Explorer can become sluggish after just a handful of alpha channel PNGs.\n\nThe other noticeable effect is that as more instances of the AlphaImageLoader are applied, the longer it takes to render the PNGs with their transparency. The user sees the PNG load in its original non-supported state (with black or grey areas where transparency should be) before one by one the filter kicks in and makes them properly transparent.\n\nBoth the issue of sluggish behaviour and delayed load only really manifest themselves with volume and size of image. Use just a couple of instances and it\u2019s fine, but be careful adding more than five or six. As ever, test, test, test.\n\nLinks become unclickable, forms unfocusable \n\nThis is a big one. There\u2019s a bug/weirdness with AlphaImageLoader that sometimes prevents interaction with links and forms when a PNG background image is used. This is sometimes reported as a z-index issue, but I don\u2019t believe it is. Rather, it\u2019s an artefact of that weird way the filter gets applied to the document almost outside of the normal render process. 
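\n\nFor reference, here\u2019s what the filter looks like with its arguments filled in; the selector and image path are made up for this example, and sizingMethod accepts crop or scale as described above:\n\n#branding {\n\t/* IE5.5/6 only; hypothetical selector and path */\n\tfilter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='images/branding.png', sizingMethod='crop');\n}\n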
\n\nOften this can be solved by giving the links or form elements hasLayout using position: relative; where possible. However, this doesn\u2019t always work and the non-interaction problem cannot always be solved. You may find yourself having to go back to the drawing board.\n\nSidestepping the danger zones\n\nFrankly, it\u2019s pretty bad news if you design a site, have that design signed off by your client, build it and then find out only at the end (because you don\u2019t know what might trigger a problem) that your search field can\u2019t be focused in IE6. That\u2019s an absolute nightmare, and whilst it\u2019s not likely to happen, it\u2019s possible that it might. It\u2019s happened to me. So what can you do?\n\nThe best approach I\u2019ve found to this scenario is\n\n\n\tIsolate the PNG or PNGs that are causing the problem. Step through the PNGs in your page, commenting them out one by one and retesting. Typically it\u2019ll be the nearest PNG to the problem, so try there first. Keep going until you can click your links or focus your form fields.\n\tThis is where you really need luck on your side, because you\u2019re going to have to fake it. This will depend on the design of the site, but some way or other create a replacement GIF or JPEG image that will give you an acceptable result. Then use conditional comments to serve that image to only users of IE older than version 7.\n\n\nA hack, you say? Well, you started it chum.\n\nApplying AlphaImageLoader\n\nBecause the filter property is invalid CSS, the safest pragmatic approach is to apply it selectively with JavaScript for only Internet Explorer versions 5.5 and 6. This helps ensure that by default you\u2019re serving standard CSS to browsers that support both the CSS and PNG standards correct, and then selectively patching up only the browsers that need it. \n\nSeveral years ago, Aaron Boodman wrote and released a script called sleight for doing just that. However, sleight dealt only with images in the page, and not background images applied with CSS. Building on top of Aaron\u2019s work, I hacked sleight and came up with bgsleight for applying the filter to background images instead. That was in 2003, and over the years I\u2019ve made a couple of improvements here and there to keep it ticking over and to resolve conflicts between sleight and bgsleight when used together. However, with alpha channel PNGs becoming much more widespread, it\u2019s time for a new version.\n\nIntroducing SuperSleight\n\nSuperSleight adds a number of new and useful features that have come from the day-to-day needs of working with PNGs.\n\n\n\tWorks with both inline and background images, replacing the need for both sleight and bgsleight\n\tWill automatically apply position: relative to links and form fields if they don\u2019t already have position set. (Can be disabled.)\n\tCan be run on the entire document, or just a selected part where you know the PNGs are. 
This is better for performance.\n\tDetects background images set to no-repeat and sets the scaleMode to crop rather than scale.\n\tCan be re-applied by any other JavaScript in the page \u2013 useful if new content has been loaded by an Ajax request.\n\n\n Download SuperSleight \n\nImplementation\n\nGetting SuperSleight running on a page is quite straightforward, you just need to link the supplied JavaScript file (or the minified version if you prefer) into your document inside conditional comments so that it is delivered to only Internet Explorer 6 or older.\n\n\n\nSupplied with the JavaScript is a simple transparent GIF file. The script replaces the existing PNG with this before re-layering the PNG over the top using AlphaImageLoaded. You can change the name or path of the image in the top of the JavaScript file, where you\u2019ll also find the option to turn off the adding of position: relative to links and fields if you don\u2019t want that.\n\nThe script is kicked off with a call to supersleight.init() at the bottom. The scope of the script can be limited to just one part of the page by passing an ID of an element to supersleight.limitTo(). And that\u2019s all there is to it.\n\nUpdate March 2008: a version of this script as a jQuery plugin is also now available.", "year": "2007", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2007-12-01T00:00:00+00:00", "url": "https://24ways.org/2007/supersleight-transparent-png-in-ie6/", "topic": "code"} {"rowid": 129, "title": "Knockout Type - Thin Is Always In", "contents": "OS X has gorgeous native anti-aliasing (although I will admit to missing 10px aliased Geneva \u2014 *sigh*). This is especially true for dark text on a light background. However, things can go awry when you start using light text on a dark background. Strokes thicken. Counters constrict. Letterforms fill out like seasonal snackers.\n\n \n\nSo how do we combat the fat? In Safari and other Webkit-based browsers we can use the CSS \u2018text-shadow\u2019 property. While trying to add a touch more contrast to the navigation on haveamint.com I noticed an interesting side-effect on the weight of the type. \n\n\n\nThe second line in the example image above has the following style applied to it:\n\n \n\nThis creates an invisible drop-shadow. (Why is it invisible? The shadow is positioned directly behind the type (the first two zeros) and has no spread (the third zero). So the color, black, is completely eclipsed by the type it is supposed to be shadowing.)\n\n \n\nWhy applying an invisible drop-shadow effectively lightens the weight of the type is unclear. What is clear is that our light-on-dark text is now of a comparable weight to its dark-on-light counterpart.\n\n \n\nYou can see this trick in effect all over ShaunInman.com and in the navigation on haveamint.com and Subtraction.com. The HTML and CSS source code used to create the example images used in this article can be found here.", "year": "2006", "author": "Shaun Inman", "author_slug": "shauninman", "published": "2006-12-17T00:00:00+00:00", "url": "https://24ways.org/2006/knockout-type/", "topic": "code"} {"rowid": 323, "title": "Introducing UDASSS!", "contents": "Okay. What\u2019s that mean?\n\nUnobtrusive Degradable Ajax Style Sheet Switcher!\n\nBoy are you in for treat today \u2018cause we\u2019re gonna have a whole lotta Ajaxifida Unobtrucitosity CSS swappin\u2019 Fun!\n\nOkay are you really kidding? Nope. I\u2019ve even impressed myself on this one. 
Unfortunately, I don\u2019t have much time to tell you the ins and outs of what I actually did to get this to work. We\u2019re talking JavaScript, CSS, PHP\u2026Ajax. But don\u2019t worry about that. I\u2019ve always believed that a good A.P.I. is an invisible A.P.I\u2026 and this I felt I achieved. The only thing you need to know is how it works and what to do.\n\nA Quick Introduction Anyway\u2026\n\nFirst of all, the idea is very simple. I wanted something just like what Paul Sowden put together in \nAlternative Style: Working With Alternate Style Sheets from Alistapart Magazine EXCEPT a few minor (not-so-minor actually) differences which I\u2019ve listed briefly below:\n\n\n\n\tAllow users to switch styles without JavaScript enabled (degradable)\n\tPreventing the F.O.U.C. before the window \u2018load\u2019 when getting preferred styles\n\tKeep the JavaScript entirely off our markup (no onclick\u2019s or onload\u2019s)\n\tMake it very very easy to implement (ok, Paul did that too)\n\n\nWhat I did to achieve this was used server-side cookies instead of JavaScript cookies. Hence, PHP. However this isn\u2019t a \u201cPHP style switcher\u201d \u2013 which is where Ajax comes in. For the extreme technical folks, no, there is no xml involved here, or even a callback response. I only say Ajax because everyone knows what \u2018it\u2019 means. With that said, it\u2019s the Ajax that sets the cookies \u2018on the fly\u2019. Got it? Awesome!\n\nWhat you need\n\nLuckily, I\u2019ve done the work for you. It\u2019s all packaged up in a nice zip file (at the end\u2026keep reading for now) \u2013 so from here on out, \njust follow these instructions\n\nAs I\u2019ve mentioned, one of the things we\u2019ll be working with is PHP. So, first things first, open up a file called index and save it with a \u2018.php\u2019 extension.\n\nNext, place the following text at the top of your document (even above your DOCTYPE)\n\nadd('css/global.css','screen,projection'); // [Global Styles]\n $styleSheet->add('css/preferred.css','screen,projection','Wog Standard'); // [Preferred Styles]\n $styleSheet->add('css/alternate.css','screen,projection','Tiny Fonts',true); // [Alternate Styles]\n $styleSheet->add('css/alternate2.css','screen,projection','Big O Fonts',true); // // [Alternate Styles]\n $styleSheet->getPreferredStyles();\n ?>\n\nThe way this works is REALLY EASY. Pay attention closely.\n\nNotice in the first line we\u2019ve included our style-switcher.php file.\n\nNext we instantiate a PHP class called AlternateStyles() which will allow us to configure our style sheets. \nSo for kicks, let\u2019s just call our object $styleSheet\n\nAs part of the AlternateStyles object, there lies a public method called add. So naturally with our $styleSheet object, we can call it to (da \u2013 da-da-da!) Add Style Sheets!\n\nHow the add() method works\n\nThe add method takes in a possible four arguments, only one is required. However, you\u2019ll want to add some\u2026 since the whole point is working with alternate style sheets.\n\n$path can simply be a uri, absolute, or relative path to your style sheet. $media adds a media attribute to your style sheets. 
$title gives a name to your style sheets (via title attribute).$alternate (which shows boolean) simply tells us that these are the alternate style sheets.\n\nadd() Tips\n\nFor all global style sheets (meaning the ones that will always be seen and will not be swapped out), simply use the add method as shown next to // [Global Styles].\n\nTo add preferred styles, do the same, but add a \u2018title\u2019.\n\nTo add the alternate styles, do the same as what we\u2019ve done to add preferred styles, but add the extra boolean and set it to true.\n\nNote following when adding style sheets\n\n\n\tMultiple global style sheets are allowed\n\tYou can only have one preferred style sheet (That\u2019s a browser rule)\n\tFeel free to add as many alternate style sheets as you like\n\n\nMoving on\n\nSimply add the following snippet to the of your web document:\n\n\n \n \n drop();\n ?>\n\nNothing much to explain here. Just use your copy & paste powers.\n\nHow to Switch Styles\n\nWhether you knew it or not, this baby already has the built in \u2018ubobtrusive\u2019 functionality that lets you switch styles by the drop of any link with a class name of \u2018altCss\u2018. Just drop them where ever you like in your document as follows:\n\nBog Standard\n Small Fonts\n Large Fonts\n\nTake special note where the file is linking to. Yep. Just linking right back to the page we\u2019re on. The only extra parameters we pass in is a variable called \u2018css\u2019 \u2013 and within that we append the names of our style sheets.\n\nAlso take very special note on the names of the style sheets have an under_score to take place of any spaces we might have.\n\nGo ahead\u2026 play around and change the style sheet on the example page. Try disabling JavaScript and refreshing your browser. Still works!\n\nCool eh?\n\nWell, I put this together in one night so it\u2019s still a work in progress and very beta. If you\u2019d like to hear more about it and its future development, be sure stop on by my site where I\u2019ll definitely be maintaining it.\n\nDownload the beta anyway\n\nWell this wouldn\u2019t be fun if there was nothing to download. So we\u2019re hooking you up so you don\u2019t go home (or logoff) unhappy\n\n Download U.D.A.S.S.S | V0.8\n\nMerry Christmas!\n\nThanks for listening and I hope U.D.A.S.S.S. has been well worth your time and will bring many years of Ajaxy Style Switchin\u2019 Fun!\n\nMany Blessings, Merry Christmas and have a great new year!", "year": "2005", "author": "Dustin Diaz", "author_slug": "dustindiaz", "published": "2005-12-18T00:00:00+00:00", "url": "https://24ways.org/2005/introducing-udasss/", "topic": "code"} {"rowid": 49, "title": "Universal React", "contents": "One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications.\nMore generally we\u2019ve seen an even greater rise in the number of applications built primarily on the client side with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google\u2019s search bots). 
Additionally, we gain a performance improvement by being able to render from the server rather than having to wait for all the JavaScript to be loaded and executed.\nThe good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. The way in which these apps work is as follows:\n\nThe user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page.\nIn the background, the client-side JavaScript is executed and takes over the duty of rendering the page.\nThe next time a user clicks, rather than being sent to the server, the client-side app is in control.\nIf the user doesn\u2019t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again.\n\nThis means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that\u2019s clever enough handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem.\nIt\u2019s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you\u2019d like a refresher, the ReactJS docs are a good place to start.\n\u00a0Getting started\nWe\u2019re going to create a tiny ReactJS application that will work on the server and the client. First we\u2019ll need to create a new project and install some dependencies. In a new, blank directory, run:\nnpm init -y\nnpm install --save ejs express react react-router react-dom\nThat will create a new project and install our dependencies:\n\nejs is a templating engine that we\u2019ll use to render our HTML on the server.\nexpress is a small web framework we\u2019ll run our server on.\nreact-router is a popular routing solution for React so our app can fully support and respect URLs.\nreact-dom is a small React library used for rendering React components.\n\nWe\u2019re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too.\nnpm install --save-dev babel-cli babel-preset-es2015 babel-preset-react\nThen, create a .babelrc file that contains the following:\n{\n \"presets\": [\"es2015\", \"react\"]\n}\nWhat we\u2019ve done here is install Babel\u2019s command line interface (CLI) tool and configured it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We\u2019ll need the React transforms when we start writing JSX when working with React.\nCreating a server\nFor now, our ExpressJS server is pretty straightforward. All we\u2019ll do is render a view that says \u2018Hello World\u2019. Here\u2019s our server code:\nimport express from 'express';\nimport http from 'http';\n\nconst app = express();\n\napp.use(express.static('public'));\n\napp.set('view engine', 'ejs');\n\napp.get('*', (req, res) => {\n res.render('index');\n});\n\nconst server = http.createServer(app);\n\nserver.listen(3003);\nserver.on('listening', () => {\n console.log('Listening on 3003');\n});\nHere we\u2019re using ES6 modules, which I wrote about on 24 ways last year, if you\u2019d like a reminder. 
We tell the app to render the index view on any GET request (that\u2019s what app.get('*') means, the wildcard matches any route).\nWe now need to create the index view file, which Express expects to be defined in views/index.ejs:\n\n\n \n My App\n \n\n \n Hello World\n \n\nFinally, we\u2019re ready to run the server. Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command:\n./node_modules/.bin/babel-node server.js\nAnd you should now be able to visit http://localhost:3003 and see \u2018Hello World\u2019 right there:\n\nBuilding the React app\nNow we\u2019ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs.\nFirstly, let\u2019s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view. We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so:\n\n <%- markup %>\n\nNext, we\u2019ll define the routes we want our app to have using React Router. For now we\u2019ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it\u2019s clearer to define them as an object. Here\u2019s what we\u2019re starting with:\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\nThese are just placed at the top of server.js, after the import statements. Later we\u2019ll move these into a separate file, but for now they are fine where they are.\nNotice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let\u2019s quickly define components/app.js and components/index.js. app.js looks like so:\nimport React from 'react';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n
<div>\n <h2>Welcome to my App</h2>\n { this.props.children }\n </div>
                    \n );\n }\n}\nWhen a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland:\nimport React from 'react';\n\nexport default class IndexComponent extends React.Component {\n render() {\n return (\n
<div>\n <p>This is the index page</p>\n </div>
                    \n );\n }\n}\nServer-side routing with React Router\nHead back into server.js, and firstly we\u2019ll need to add some new imports:\nimport React from 'react';\nimport { renderToString } from 'react-dom/server';\nimport { match, RoutingContext } from 'react-router';\n\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nThe ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It\u2019s this method that we\u2019ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we\u2019ll need to render. This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don\u2019t need to concern yourself about how this component works, so don\u2019t worry too much.\nNow for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes:\napp.get('*', (req, res) => {\n // routes is our object of React routes defined above\n match({ routes, location: req.url }, (err, redirectLocation, props) => {\n if (err) {\n // something went badly wrong, so 500 with a message\n res.status(500).send(err.message);\n } else if (redirectLocation) {\n // we matched a ReactRouter redirect, so redirect from the server\n res.redirect(302, redirectLocation.pathname + redirectLocation.search);\n } else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString();\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n\n } else {\n // no route match, so 404. In a real app you might render a custom\n // 404 view here\n res.sendStatus(404);\n }\n });\n});\nWe call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occuring or a redirect (React Router has built in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we\u2019ve made it this far it means we found a matching component to render and we can use this code to render it:\n...\n} else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString();\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n} else {\n ...\n}\nThe renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components.\nNote the {...props}, which is a neat bit of JSX syntax that spreads out our object into key value properties. 
To see this better, note the two pieces of JSX code below, both of which are equivalent:\n\n\n// OR:\n\nconst props = { a: \"foo\", b: \"bar\" };\n\nRunning the server again\nI know that felt like a lot of work, but the good news is that once you\u2019ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. To check, restart the server and head to http://localhost:3003 once more. You should see it all working!\n\nRefactoring and one more route\nBefore we move on to getting this code running on the client, let\u2019s add one more route and do some tidying up. First, move our routes object out into routes.js:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\n\nexport { routes };\nAnd then update server.js. You can remove the two component imports and replace them with:\nimport { routes } from './routes';\nFinally, let\u2019s add one more route for ./about and links between them. Create components/about.js:\nimport React from 'react';\n\nexport default class AboutComponent extends React.Component {\n render() {\n return (\n
<div>
          <p>A little bit about me.</p>
        </div>
                    \n );\n }\n}\nAnd then you can add it to routes.js too:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nimport AboutComponent from './components/about';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n },\n {\n path: '/about',\n component: AboutComponent\n }\n ]\n}\n\nexport { routes };\nIf you now restart the server and head to http://localhost:3003/about` you\u2019ll see the about page!\n\nFor the finishing touch we\u2019ll use the React Router link component to add some links between the pages. Edit components/app.js to look like so:\nimport React from 'react';\nimport { Link } from 'react-router';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n
<div>
          <h2>Welcome to my App</h2>
          <ul>
            <li><Link to="/">Home</Link></li>
            <li><Link to="/about">About</Link></li>
          </ul>
          { this.props.children }
        </div>
\n );\n }\n}\nYou can now click between the pages to navigate. However, every time we do so the requests hit the server. Now we\u2019re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience.\nClient-side rendering\nFirst, we\u2019re going to make a small change to views/index.ejs. React doesn\u2019t like rendering directly into the body and will give a warning when you do so. To prevent this we\u2019ll wrap our app in a div:\n
<div id="app"><%- markup %></div>
        <script src="build.js"></script>
                    \n \n\nI\u2019ve also added in a script tag to build.js, which is the file we\u2019ll generate containing all our client-side code.\nNext, create client-render.js. This is going to be the only bit of JavaScript that\u2019s exclusive to the client side. In it we need to pull in our routes and render them to the DOM.\nimport React from 'react';\nimport ReactDOM from 'react-dom';\nimport { Router } from 'react-router';\n\nimport { routes } from './routes';\n\nimport createBrowserHistory from 'history/lib/createBrowserHistory';\n\nReactDOM.render(\n ,\n document.getElementById('app')\n)\nThe first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser\u2019s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we\u2019ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation.\nFinally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also tell ReactDOM where to render, the #app element.\nGenerating build.js\nWe\u2019re actually almost there! The final thing we need to do is generate our client side bundle. For this we\u2019re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We\u2019ll install it and babel-loader, a webpack plugin for transforming code through Babel.\nnpm install --save-dev webpack babel-loader\nTo run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code:\nvar path = require('path');\nmodule.exports = {\n entry: path.join(process.cwd(), 'client-render.js'),\n output: {\n path: './public/',\n filename: 'build.js'\n },\n module: {\n loaders: [\n {\n test: /.js$/,\n loader: 'babel'\n }\n ]\n }\n}\nNote first that this file can\u2019t be written in ES6 as it doesn\u2019t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location \u2013 if we just gave it the string \u2018client-render.js\u2019, webpack wouldn\u2019t be able to find it.\nNext, we tell webpack where to output our file, and here I\u2019m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first.\nNow we\u2019re ready to generate the bundle!\n./node_modules/.bin/webpack\nThis will take a fair few seconds to run (on my machine it\u2019s about seven or eight), but once it has it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you\u2019ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect!\nThe first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. 
The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you\u2019re developing.\nConclusions\nFirst, if you\u2019d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions.\nNext, I want to stress that you shouldn\u2019t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a static site like the one we built today is worth its complexity, and you\u2019d be right. I used it as it\u2019s an easy example to work with but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it\u2019s a suitable infrastructure for you.\nWith that, all that\u2019s left for me to do is wish you a very merry Christmas and best of luck with your React applications!", "year": "2015", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2015-12-05T00:00:00+00:00", "url": "https://24ways.org/2015/universal-react/", "topic": "code"} {"rowid": 157, "title": "Capturing Caps Lock", "contents": "One of the more annoying aspects of having to remember passwords (along with having to remember loads of them) is that if you\u2019ve got Caps Lock turned on accidentally when you type one in, it won\u2019t work, and you won\u2019t know why. Most desktop computers alert you in some way if you\u2019re trying to enter your password to log on and you\u2019ve enabled Caps Lock; there\u2019s no reason why the web can\u2019t do the same. What we want is a warning \u2013 maybe the user wants Caps Lock on, because maybe their password is in capitals \u2013 rather than something that interrupts what they\u2019re doing. Something subtle.\n\nBut that doesn\u2019t answer the question of how to do it. Sadly, there\u2019s no way of actually detecting whether Caps Lock is on directly. However, there\u2019s a simple work-around; if the user presses a key, and it\u2019s a capital letter, and they don\u2019t have the Shift key depressed, why then they must have Caps Lock on! Simple. \n\nDOM scripting allows your code to be notified when a key is pressed in an element; when the key is pressed, you get the ASCII code for that key. Capital letters, A to Z, have ASCII codes 65 to 90. So, the code would look something like:\n\non a key press\n\tif the ASCII code for the key is between 65 and 90 *and* if shift is pressed\n\t\twarn the user that they have Caps Lock on, but let them carry on\n\tend if\nend keypress\n\nThe actual JavaScript for this is more complicated, because both event handling and keypress information differ across browsers. Your event handling functions are passed an event object, except in Internet Explorer where you use the global event object; the event object has a which parameter containing the ASCII code for the key pressed, except in Internet Explorer where the event object has a keyCode parameter; some browsers store whether the shift key is pressed in a shiftKey parameter and some in a modifiers parameter. All this boils down to code that looks something like this:\n\nkeypress: function(e) {\n\tvar ev = e ? e : window.event;\n\tif (!ev) {\n\t\treturn;\n\t}\n\tvar targ = ev.target ? 
ev.target : ev.srcElement;\n\t// get key pressed\n\tvar which = -1;\n\tif (ev.which) {\n\t\twhich = ev.which;\n\t} else if (ev.keyCode) {\n\t\twhich = ev.keyCode;\n\t}\n\t// get shift status\n\tvar shift_status = false;\n\tif (ev.shiftKey) {\n\t\tshift_status = ev.shiftKey;\n\t} else if (ev.modifiers) {\n\t\tshift_status = !!(ev.modifiers & 4);\n\t}\n\n\t// At this point, you have the ASCII code in \u201cwhich\u201d, \n\t// and shift_status is true if the shift key is pressed\n}\n\nThen it\u2019s just a check to see if the ASCII code is between 65 and 90 and the shift key is pressed. (You also need to do the same work if the ASCII code is between 97 (a) and 122 (z) and the shift key is not pressed, because shifted letters are lower-case if Caps Lock is on.)\n\nif (((which >= 65 && which <= 90) && !shift_status) || \n\t((which >= 97 && which <= 122) && shift_status)) {\n\t// uppercase, no shift key\n\t/* SHOW THE WARNING HERE */\n} else {\n\t/* HIDE THE WARNING HERE */\n}\n\nThe warning can be implemented in many different ways: highlight the password field that the user is typing into, show a tooltip, display text next to the field. For simplicity, this code shows the warning as a previously created image, with appropriate alt text. Showing the warning means creating a new tag with DOM scripting, dropping it into the page, and positioning it so that it\u2019s next to the appropriate field. The image looks like this:\n\n\n\nYou know the position of the field the user is typing into (from its offsetTop and offsetLeft properties) and how wide it is (from its offsetWidth properties), so use createElement to make the new img element, and then absolutely position it with style properties so that it appears in the appropriate place (near to the text field). \n\nThe image is a transparent PNG with an alpha channel, so that the drop shadow appears nicely over whatever else is on the page. Because Internet Explorer version 6 and below doesn\u2019t handle transparent PNGs correctly, you need to use the AlphaImageLoader technique to make the image appear correctly.\n\nnewimage = document.createElement('img');\nnewimage.src = \"http://farm3.static.flickr.com/2145/2067574980_3ddd405905_o_d.png\";\nnewimage.style.position = \"absolute\";\nnewimage.style.top = (targ.offsetTop - 73) + \"px\";\nnewimage.style.left = (targ.offsetLeft + targ.offsetWidth - 5) + \"px\";\nnewimage.style.zIndex = \"999\";\nnewimage.setAttribute(\"alt\", \"Warning: Caps Lock is on\");\nif (newimage.runtimeStyle) {\n\t// PNG transparency for IE\n\tnewimage.runtimeStyle.filter += \"progid:DXImageTransform.Microsoft.AlphaImageLoader(src='http://farm3.static.flickr.com/2145/2067574980_3ddd405905_o_d.png',sizingMethod='scale')\";\n}\ndocument.body.appendChild(newimage);\n\nNote that the alt text on the image is also correctly set. Next, all these parts need to be pulled together. On page load, identify all the password fields on the page, and attach a keypress handler to each. (This only needs to be done for password fields because the user can see if Caps Lock is on in ordinary text fields.)\n\nvar inps = document.getElementsByTagName(\"input\");\nfor (var i=0, l=inps.length; i\n\nThe \u201ccreate an image\u201d code from above should only be run if the image is not already showing, so instead of creating a newimage object, create the image and attach it to the password field so that it can be checked for later (and not shown if it\u2019s already showing). 
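Putting that page-load wiring into code, a sketch might look like this (the plain onkeypress assignment and the bare keypress function name are assumptions rather than the article's exact code):

var inps = document.getElementsByTagName("input");
for (var i = 0, l = inps.length; i < l; i++) {
	// only password fields need the warning; ordinary text fields show their contents
	if (inps[i].type == "password") {
		// assumes the keypress handler shown earlier is in scope under this name
		inps[i].onkeypress = keypress;
	}
}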
For safety, all the code should be wrapped up in its own object, so that its functions don\u2019t collide with anyone else\u2019s functions. So, create a single object called capslock and make all the functions be named methods of the object:\n\nvar capslock = {\n\t... \n\tkeypress: function(e) {\n\t}\n\t...\n}\n\nAlso, the \u201ccreate an image\u201d code is saved into its own named function, show_warning(), and the converse \u201cremove the image\u201d code into hide_warning(). This has the advantage that developers can include the JavaScript library that has been written here, but override what actually happens with their own code, using something like:\n\n\n\n\nAnd that\u2019s all. Simply include the JavaScript library in your pages, override what happens on a warning if that\u2019s more appropriate for what you\u2019re doing, and that\u2019s all you need.\n\n See the script in action.", "year": "2007", "author": "Stuart Langridge", "author_slug": "stuartlangridge", "published": "2007-12-04T00:00:00+00:00", "url": "https://24ways.org/2007/capturing-caps-lock/", "topic": "code"} {"rowid": 104, "title": "Sitewide Search On A Shoe String", "contents": "One of the questions I got a lot when I was building web sites for smaller businesses was if I could create a search engine for their site. Visitors should be able to search only this site and find things without the maintainer having to put \u201crelated articles\u201d or \u201cfeatured content\u201d links on every page by hand. \n\nBack when this was all fields this wasn\u2019t easy as you either had to write your own scraping tool, use ht://dig or a paid service from providers like Yahoo, Altavista or later on Google. In the former case you had to swallow the bitter pill of computing and indexing all your content and storing it in a database for quick access and in the latter it hurt your wallet.\n\nTimes have moved on and nowadays you can have the same functionality for free using Yahoo\u2019s \u201cBuild your own search service\u201d \u2013 BOSS. The cool thing about BOSS is that it allows for a massive amount of hits a day and you can mash up the returned data in any format you want. Another good feature of it is that it comes with JSON-P as an output format which makes it possible to use it without any server-side component!\n\nStarting with a working HTML form\n\nIn order to add a search to your site, you start with a simple HTML form which you can use without JavaScript. Most search engines will allow you to filter results by domain. In this case we will search \u201cbbc.co.uk\u201d. If you use Yahoo as your standard search, this could be: \n\n
<form id="customsearch" action="http://search.yahoo.com/search" method="get">
	<div>
		<label for="term">Search this site:</label>
		<input type="text" name="p" id="term">
		<input type="hidden" name="vs" id="site" value="bbc.co.uk">
		<input type="submit" value="Search">
	</div>
</form>
                    \n\nThe Google equivalent is:\n\n
<form id="customsearch" action="http://www.google.com/search" method="get">
	<div>
		<label for="term">Search this site:</label>
		<input type="text" name="q" id="term">
		<input type="hidden" name="as_sitesearch" id="site" value="bbc.co.uk">
		<input type="submit" value="Search">
	</div>
</form>
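Because these are ordinary forms, they still work without JavaScript: submitting the Yahoo version above simply runs a site-restricted search, requesting something like http://search.yahoo.com/search?p=christmas&vs=bbc.co.uk (the parameter names here follow the form sketched above).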
                    \n\nIn any case make sure to use the ID term for the search term and site for the site, as this is what we are going to use for the script. To make things easier, also have an ID called customsearch on the form.\n\nTo use BOSS, you should get your own developer API for BOSS and replace the one in the demo code. There is click tracking on the search results to see how successful your app is, so you should make it your own.\n\nAdding the BOSS magic\n\nBOSS is a REST API, meaning you can use it in any HTTP request or in a browser by simply adding the right parameters to a URL. Say for example you want to search \u201cbbc.co.uk\u201d for \u201cchristmas\u201d all you need to do is open the following URL:\n\nhttp://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&format=xml&appid=YOUR-APPLICATION-ID\n\nTry it out and click it to see the results in XML. We don\u2019t want XML though, which is why we get rid of the format=xml parameter which gives us the same information in JSON:\n\nhttp://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&appid=YOUR-APPLICATION-ID\n\nJSON makes most sense when you can send the output to a function and immediately use it. For this to happen all you need is to add a callback parameter and the JSON will be wrapped in a function call. Say for example we want to call SITESEARCH.found() when the data was retrieved we can do it this way:\n\nhttp://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=YOUR-APPLICATION-ID\n\nYou can use this immediately in a script node if you want to. The following code would display the total amount of search results for the term christmas on bbc.co.uk as an alert:\n\n\n\n\nHowever, for our example, we need to be a bit more clever with this.\n\nEnhancing the search form\n\n\n\n\nHere\u2019s the script that enhances a search form to show results below it.\n\nSITESEARCH = function(){\n\tvar config = {\n\t\tIDs:{\n\t\t\tsearchForm:'customsearch',\n\t\t\tterm:'term',\n\t\t\tsite:'site'\n\t\t},\n\t\tloading:'Loading results...',\n\t\tnoresults:'No results found.',\n\t\tappID:'YOUR-APP-ID',\n\t\tresults:20\n\t};\n\tvar form;\n\tvar out;\n\tfunction init(){\n\t\tif(config.appID === 'YOUR-APP-ID'){\n\t\t\talert('Please get a real application ID!');\n\t\t} else {\n\t\t\tform = document.getElementById(config.IDs.searchForm);\n\t\t\tif(form){\n\t\t\t\tform.onsubmit = function(){\n\t\t\t\t\tvar site = document.getElementById(config.IDs.site).value;\n\t\t\t\t\tvar term = document.getElementById(config.IDs.term).value;\n\t\t\t\t\tif(typeof site === 'string' && typeof term === 'string'){\n\t\t\t\t\t\tif(typeof out !== 'undefined'){\n\t\t\t\t\t\t\tout.parentNode.removeChild(out);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tout = document.createElement('p');\n\t\t\t\t\t\tout.appendChild(document.createTextNode(config.loading));\n\t\t\t\t\t\tform.appendChild(out);\n\t\t\t\t\t\tvar APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + \n\t\t\t\t\t\t\t\t\t\t\t\t\tterm + '?callback=SITESEARCH.found&sites=' + \n\t\t\t\t\t\t\t\t\t\t\t\t\tsite + '&count=' + config.results + \n\t\t\t\t\t\t\t\t\t\t\t\t\t'&appid=' + config.appID;\n\t\t\t\t\t\tvar s = document.createElement('script');\n\t\t\t\t\t\ts.setAttribute('src',APIurl);\n\t\t\t\t\t\ts.setAttribute('type','text/javascript');\n\t\t\t\t\t\tdocument.getElementsByTagName('head')[0].appendChild(s);\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t};\n\t\t\t}\n\t\t}\n\t};\n\tfunction found(o){\n\t\tvar list = document.createElement('ul');\n\t\tvar 
results = o.ysearchresponse.resultset_web;\n\t\tif(results){\n\t\t\tvar item,link,description;\n\t\t\tfor(var i=0,j=results.length;i\n\t
                    \n\t\t\n\t\t\n\t\t\n\t\t\n\t
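A minimal sketch of how the found() callback can turn those results into a list, using the standard BOSS web result fields (title, abstract and clickurl); the exact markup you build from them is up to you:

function found(o){
	var list = document.createElement('ul');
	var results = o.ysearchresponse.resultset_web;
	if(results){
		var item, link, description;
		for(var i = 0, j = results.length; i < j; i++){
			item = document.createElement('li');
			link = document.createElement('a');
			link.setAttribute('href', results[i].clickurl);
			link.innerHTML = results[i].title;
			description = document.createElement('p');
			description.innerHTML = results[i]['abstract'];
			item.appendChild(link);
			item.appendChild(description);
			list.appendChild(item);
		}
	} else {
		list = document.createElement('p');
		list.appendChild(document.createTextNode(config.noresults));
	}
	// swap the 'loading' message for the results
	form.appendChild(list);
	if(out){ out.parentNode.removeChild(out); }
	out = list;
}

The surrounding function would then return { init: init, found: found } and run init once the page has loaded, so that the JSON-P response can reach SITESEARCH.found.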
                    \n\n\n\n\nWhere to go from here\n\nThis is just a very simple example of what you can do with BOSS. You can define languages and regions, retrieve and display images and news and mix the results with other data sources before displaying them. One very cool feature is that by adding a view=keyterms parameter to the URL you can get the keywords of each of the results to drill deeper into the search. An example for this written in PHP is available on the YDN blog. For JavaScript solutions there is a handy wrapper called yboss available to help you go nuts.", "year": "2008", "author": "Christian Heilmann", "author_slug": "chrisheilmann", "published": "2008-12-04T00:00:00+00:00", "url": "https://24ways.org/2008/sitewide-search-on-a-shoestring/", "topic": "code"} {"rowid": 163, "title": "Get To Grips with Slippy Maps", "contents": "Online mapping has definitely hit mainstream. Google Maps made \u2018slippy maps\u2019 popular and made it easy for any developer to quickly add a dynamic map to his or her website. You can now find maps for store locations, friends nearby, upcoming events, and embedded in blogs. \n\nIn this tutorial we\u2019ll show you how to easily add a map to your site using the Mapstraction mapping library. There are many map providers available to choose from, each with slightly different functionality, design, and terms of service. Mapstraction makes deciding which provider to use easy by allowing you to write your mapping code once, and then easily switch providers.\n\nAssemble the pieces\n\nUtilizing any of the mapping library typically consists of similar overall steps:\n\n\n\tCreate an HTML div to hold the map\n\tInclude the Javascript libraries\n\tCreate the Javascript Map element\n\tSet the initial map center and zoom level\n\tAdd markers, lines, overlays and more\n\n\nCreate the Map Div\n\nThe HTML div is where the map will actually show up on your page. It needs to have a unique id, because we\u2019ll refer to that later to actually put the map here. This also lets you have multiple maps on a page, by creating individual divs and Javascript map elements. The size of the div also sets the height and width of the map. You set the size using CSS, either inline with the element, or via a CSS reference to the element id or class. For this example, we\u2019ll use inline styling.\n\n
                    \n\nInclude Javascript libraries\n\nA mapping library is like any Javascript library. You need to include the library in your page before you use the methods of that library. For our tutorial, we\u2019ll need to include at least two libraries: Mapstraction, and the mapping API(s) we want to display. Our first example we\u2019ll use the ubiquitous Google Maps library. However, you can just as easily include Yahoo, MapQuest, or any of the other supported libraries.\n\nAnother important aspect of the mapping libraries is that many of them require an API key. You will need to agree to the terms of service, and get an API key these.\n\n\n\n\nCreate the Map\n\nGreat, we\u2019ve now put in all the pieces we need to start actually creating our map. This is as simple as creating a new Mapstraction object with the id of the HTML div we created earlier, and the name of the mapping provider we want to use for this map. \n\nWith several of the mapping libraries you will need to set the map center and zoom level before the map will appear. The map centering actually triggers the initialization of the map. \n\nvar mapstraction = new Mapstraction('map','google');\nvar myPoint = new LatLonPoint(37.404,-122.008);\nmapstraction.setCenterAndZoom(myPoint, 10);\n\nA note about zoom levels. The setCenterAndZoom function takes two parameters, the center as a LatLonPoint, and a zoom level that has been defined by mapping libraries. The current usage is for zoom level 1 to be \u201czoomed out\u201d, or view the entire earth \u2013 and increasing the zoom level as you zoom in. Typically 17 is the maximum zoom, which is about the size of a house. \n\nDifferent mapping providers have different quality of zoomed in maps over different parts of the world. This is a perfect reason why using a library like Mapstraction is very useful, because you can quickly change mapping providers to accommodate users in areas that have bad coverage with some maps. \n\nTo switch providers, you just need to include the Javascript library, and then change the second parameter in the Mapstraction creation. Or, you can call the switch method to dynamically switch the provider.\n\nSo for Yahoo Maps (demo):\n\nvar mapstraction = new Mapstraction('map','yahoo');\n\nor Microsoft Maps (demo):\n\nvar mapstraction = new Mapstraction('map','microsoft');\n\nwant a 3D globe in your browser? try FreeEarth (demo):\n\nvar mapstraction = new Mapstraction('map','freeearth');\n\nor even OpenStreetMap (free your data!) (demo):\n\nvar mapstraction = new Mapstraction('map','openstreetmap');\n\nVisit the Mapstraction multiple map demo page for an example of how easy it is to have many maps on your page, each with a different provider. \n\nAdding Markers\n\nWhile adding your first map is fun, and you can probably spend hours just sliding around, the point of adding a map to your site is usually to show the location of something. So now you want to add some markers. There are a couple of ways to add to your map.\n\nThe simplest is directly creating markers. You could either hard code this into a rather static page, or dynamically generate these using whatever tools your site is built on.\n\nvar marker = new Marker( new LatLonPoint(37.404,-122.008) );\nmarker.setInfoBubble(\"It's easy to add maps to your site\");\nmapstraction.addMarker( marker );\n\nThere is a lot more you can do with markers, including changing the icon, adding timestamps, automatically opening the bubble, or making them draggable. 
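For instance, building on the marker above (a sketch only: setIcon, setDraggable and openBubble are Mapstraction marker methods, but check the exact names against the version of the library you are using):

var marker = new Marker( new LatLonPoint(37.404,-122.008) );
marker.setIcon('images/custom-pin.png'); // your own icon image
marker.setInfoBubble("It's easy to add maps to your site");
marker.setDraggable(true); // let users reposition the marker
mapstraction.addMarker(marker);
marker.openBubble(); // open the info bubble automatically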
\n\nWhile it is straight-forward to create markers one by one, there is a much easier way to create a large set of markers. And chances are, you can make it very easy by extending some data you already are sharing: RSS. \n\nSpecifically, using GeoRSS you can easily add a large set of markers directly to a map. GeoRSS is a community built standard (like Microformats) that added geographic markup to RSS and Atom entries. It\u2019s as simple as adding 42 -83 to your feeds to share items via GeoRSS. Once you\u2019ve done that, you can add that feed as an \u2018overlay\u2019 to your map using the function:\n\nmapstraction.addOverlay(\"http://api.flickr.com/services/feeds/groups_pool.gne?id=322338@N20&format=rss_200&georss=1\");\n\nMapstraction also supports KML for many of the mapping providers. So it\u2019s easy to add various data sources together with your own data. Check out Mapufacture for a growing index of available GeoRSS feeds and KML documents. \n\nPlay with your new toys\n\nMapstraction offers a lot more functionality you can utilize for demonstrating a lot of geographic data on your website. It also includes geocoding and routing abstraction layers for making sure your users know where to go. You can see more on the Mapstraction website: http://mapstraction.com.", "year": "2007", "author": "Andrew Turner", "author_slug": "andrewturner", "published": "2007-12-02T00:00:00+00:00", "url": "https://24ways.org/2007/get-to-grips-with-slippy-maps/", "topic": "code"} {"rowid": 21, "title": "Keeping Parts of Your Codebase Private on GitHub", "contents": "Open source is brilliant, there\u2019s no denying that, and GitHub has been instrumental in open source\u2019s recent success. I\u2019m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. To this end, GitHub created private repositories which act like any other Git repository, only, well, private!\n\nA slightly less common issue, and one I\u2019ve come up against myself, is the desire to only keep certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes.\n\nBefore we begin, it\u2019s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and expose parts of it as either an open source project, or a built GitHub Pages site.\n\nN.B. This post requires some basic Git knowledge.\n\nAdding your public remote\n\nLet\u2019s assume you\u2019re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the \u201cAdding your private remote\u201d section.)\n\nSo, we have a clean slate: nothing has been set up yet, we\u2019re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com. 
Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account).\n\nOn your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff \u2014 whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this:\n\n$ git remote add origin git@github.com:[user]/site.com.git\n\nHere we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you\u2019ve explicitly changed it) to that remote:\n\n$ git push -u origin master\n\nHere we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like.\n\nThis is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. Now to set up our private remote.\n\nAdding your private remote\n\nWe\u2019ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. (Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. Two GitHub repos, and one local one.\n\nIn your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don\u2019t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository.\n\n$ git checkout -b dev\n\nWe have now made a new branch called dev off the branch we were on last (master, unless you renamed it).\n\nNow we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote:\n\n$ git remote add private git@github.com:[user]/private.site.com.git\n\nLike before, we are just telling Git to add a new remote to this repo, only this time we\u2019ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it.\n\nNow we need to tell our dev branch to push to our private remote:\n\n$ git push -u private dev\n\nHere, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we\u2019ve set up upstream tracking. 
This means that, by default, the dev branch will only push and pull to and from the private remote (unless you ever explicitly state otherwise).\n\nNow you have two branches (master and dev respectively) that push to two remotes (origin and private respectively) which are public and private respectively.\n\nAny work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote.\n\nAdding more branches\n\nSo far we\u2019ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you\u2019d like. Of course, you\u2019d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let\u2019s imagine we want to create a branch to try something out real quickly:\n\n$ git checkout -b test\n\nNow, when we come to push this branch, we can choose which remote we send it to:\n\n$ git push -u private test\n\nThis pushes the new test branch to our private remote (again, setting the persistent tracking with -u).\n\nYou can have as many or as few remotes or branches as you like.\n\nCombining the two\n\nLet\u2019s say you\u2019ve been working on a new feature in private for a few days, and you\u2019ve kept that on the private remote. You\u2019ve now finalised the addition and want to move it into your public repo. This is just a simple merge. Check out your master branch:\n\n$ git checkout master\n\nThen merge in the branch that contained the feature:\n\n$ git merge dev\n\nNow master contains the commits that were made on dev and, once you\u2019ve pushed master to its remote, those commits will be viewable publicly on GitHub:\n\n$ git push\n\nNote that we can just run $ git push on the master branch as we\u2019d previously set up our upstream tracking (-u).\n\nMultiple machines\n\nSo far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let\u2019s say you\u2019ve got yourself a new Mac (yay!) and you want to clone an existing project:\n\n$ git clone git@github.com:[user]/site.com.git\n\nThis will not clone any information about the remotes you had set up on the previous machine. Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above.\n\nDone!\n\nIf you\u2019d like to see me blitz through all that in one go, check the showterm recording.\n\nThe beauty of this is that we can still share our code, but we don\u2019t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it\u2019s ready for merge. Have a blog post in a Jekyll site that you\u2019re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it\u2019s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain.\n\nAll this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. 
What you do with them is entirely up to you!", "year": "2013", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2013-12-09T00:00:00+00:00", "url": "https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/", "topic": "code"} {"rowid": 296, "title": "Animation in Design Systems", "contents": "Our modern front-end workflow has matured over time to include design systems and component libraries that help us stay organized, improve workflows, and simplify maintenance. These systems, when executed well, ensure proper documentation of the code available and enable our systems to scale with reduced communication conflicts. \nBut while most of these systems take a critical stance on fonts, colors, and general building blocks, their treatment of animation remains disorganized and ad-hoc. Let\u2019s leverage existing structures and workflows to reduce friction when it comes to animation and create cohesive and performant user experiences. \nUnderstand the importance of animation\nPart of the reason we treat animation like a second-class citizen is that we don\u2019t really consider its power. When users are scanning a website (or any environment or photo), they are attempting to build a spatial map of their surroundings. During this process, nothing quite commands attention like something in motion. \nWe are biologically trained to notice motion: evolutionarily speaking, our survival depends on it. For this reason, animation when done well can guide your users. It can aid and reinforce these maps, and give us a sense that we understand the UX more deeply. We retrieve information and put it back where it came from instead of something popping in and out of place. \n\n\u201cWhere did that menu go? Oh it\u2019s in there.\u201d \n\nFor a deeper dive into how animation can connect disparate states, I wrote about the Importance of Context-Shifting in UX Patterns for CSS-Tricks.\nAn animation flow on mobile.\nAnimation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn\u2019t anything very fancy or crazy. Just by showing their users that they cared about them, they stuck around, and the bounce rates dropped.\n\n14 second generic loading screen.22 second custom loading screen.\nThis also works for form submission. Giving your personal information over to an online process like a static form can be a bit harrowing. It becomes more harrowing without animation used as a signal that something is happening, and that some process is completing. That same animation can also entertain users and make them feel as though the wait isn\u2019t as long. \nEli Fitch gave a talk at CSS Dev Conf called: \u201cPerceived Performance: The Only Kind That Really Matters\u201d, which is one of my favorite talk titles of all time. In it, he discussed how we tend to measure things like timelines and network requests because they are more quantifiable\u2013and therefore easier to measure\u2013but that measuring how a user feels when visiting the site is more important and worth the time and attention. \nIn his talk, he states \u201cHumans over-estimate passive waits by 36%, per Richard Larson of MIT\u201d. 
This means that if you\u2019re not using animation to speed up how fast the wait time of a form submission loads, users are perceiving it to be much slower than the dev tools timeline is recording.\nReign it in\nUnlike fonts, colors, and so on, we tend to add animation in as a last step, which leads to disorganized implementations that lack overall cohesion. If you asked a designer or developer if they would create a mockup or build a UI without knowing the fonts they were working with, they would dislike the idea. Not knowing the building blocks they\u2019re working with means that the design can fall apart or the development can break with something so fundamental left out at the start. Good animation works the same way.\nThe first step in reigning in your use of animation is to perform an animation audit. Look at all the places you are using animation on your site, or the places you aren\u2019t using animation but probably should. (Hint: perceived performance of a loader on a form submission can dramatically change your bounce rates.) \nNot sure how to perform a good audit? Val Head has a great chapter on it in her book, Designing Interface Animations, which has of buckets of research and great ideas.\nEven some beautiful component libraries that have animation in the docs make this mistake. You don\u2019t need every kind of animation, just like you don\u2019t need every kind of font. This bloats our code. Ask yourself questions like: do you really need a flip 180 degree animation? I can\u2019t even conceive of a place on a typical UI where that would be useful, yet most component libraries that I\u2019ve seen have a mixin that does just this.\nWhich leads to\u2026\nHave an opinion\nMany people are confused about Material Design. They think that Material Design is Motion Design, mostly because they\u2019ve never seen anyone take a stance on animation before and document these opinions well. But every time you use Material Design as your motion design language, people look at your site and think GOOGLE. Now that\u2019s good branding.\nBy using Google\u2019s motion design language and not your own, you\u2019re losing out on a chance to be memorable on your own website.\nWhat does having an opinion on motion look like in practice? It could mean you\u2019ve decided that you never flip things. It could mean that your eases are always going to glide. In that instance, you would put your efforts towards finding an ease that looks \u201cgliding\u201d and pulling out any transform: scaleX(-1) animation you find on your site. Across teams, everyone knows not to spend time mocking up flipping animation (even if they\u2019re working on an entirely different codebase), and to instead work on something that feels like it glides. You save time and don\u2019t have to communicate again and again to make things feel cohesive.\nCreate good developer resources\nSometimes people don\u2019t incorporate animation into a design system because they aren\u2019t sure how, beyond the base hover states. All animation properties can be broken into interchangeable pieces. This allows developers and designers alike to mix and match and iterate quickly, while still staying in the correct language. Here are some recommendations (with code and a demo to follow):\nCreate timing units, similar to h1, h2, h3. In a system I worked on recently, I called these t1, t2, t3. 
T1 would be reserved for longer pieces, down to t5 which is a bit like h5 in that it\u2019s the default (usually around .25 seconds or thereabouts).\nKeep animation easings for entrance, exit, entrance emphasis and exit emphasis that people can commonly refer to. This, and the animation-fill-mode, are likely to be the only two properties that can be reused for the entrance and exit of the animation.\nUse the animation-name property to define the keyframes for the animation itself. I would recommend starting with 5 or 6 before making a slew of them, and see if you need more. Writing 30 different animations might seem like a nice resource, but just like your color palette having too many can unnecessarily bulk up your codebase, and keep it from feeling cohesive. Think critically about what you need here. \nSee the Pen Modularized Animation for Component Libraries by Sarah Drasner (@sdras) on CodePen.\n\nThe example above is pared-down, but you can see how in a robust system, having pieces that are interchangeable cached across the whole system would save time for iterations and prototyping, not to mention make it easy to make adjustments for different feeling movement on the same animation easily.\nOne low hanging fruit might be a loader that leads to a success dialog. On a big site, you might have that pattern many times, so writing up a component that does only that helps you move faster while also allowing you to really zoom in and focus on that pattern. You avoid throwing something together at the last minute, or using a GIF, which are really heavy and mushy on retina. You can make singular pieces that look really refined and are reusable. \nReact and Vue Implementations are great for reusable components, as you can create a building block with a common animation pattern, and once created, it can be a resource for all. Remember to take advantage of things like props to allow for timing and easing adjustments like we have in the previous example!\nResponsive\nAt the very least we should ensure that interaction also works well on mobile, but if we\u2019d like to create interactions that take advantage of all of the gestures mobile has to offer, we can use libraries like zingtouch or hammer to work with swipe or multiple finger detection. With a bit of work, these can all be created through native detection as well.\nResponsive web pages can specify initial-scale=1.0 in the meta tag so that the device is not waiting the required 300ms on the secondary tap before calling action. Interaction for touch events must either start from a larger touch-target (40px \u00d7 40px or greater) or use @media(pointer:coarse) as support allows.\nBuy-in\nSometimes people don\u2019t create animation resources simply because it gets deprioritized. But design systems were also something we once had to fight for, too. This year at CSS Dev Conf, Rachel Nabors demonstrated how to plot out animation wants vs. needs on a graph (reproduced with her permission) to help prioritize them:\n\n\nThis helps people you\u2019re working with figure out the relative necessity and workload of the addition of these animations and think more critically about it. You\u2019re also more likely to get something through if you\u2019re proving that what you\u2019re making is needed and can be reused. 
\nGood compromises can be made this way: \u201cwe\u2019re not going to go all out and create an animated \u2018About Us\u2019 page like you wanted, but I suppose we can let our users know their contact email went through with a small progress and success notification.\u201d \nSuccessfully pushing smaller projects through helps build trust with your team, and lets them see what this type of collaboration can look like. This builds up the type of relationship necessary to push through projects that are more involved. It can\u2019t be overstressed that good communication is key.\nGet started!\nWith these tools and good communication, we can make our codebases more efficient, performant, and feel better for our users. We can enhance the user experience on our sites, and create great resources for our teams to allow them to move more quickly while innovating beautifully.", "year": "2016", "author": "Sarah Drasner", "author_slug": "sarahdrasner", "published": "2016-12-16T00:00:00+00:00", "url": "https://24ways.org/2016/animation-in-design-systems/", "topic": "code"} {"rowid": 54, "title": "Putting My Patterns through Their Paces", "contents": "Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.\nThe thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.\nMeet the Teaser\nHere\u2019s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)\nIn its simplest form, a teaser contains a headline, which links to an article:\n\nFairly straightforward, sure. But it\u2019s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types \u2013 some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.\n\n Nearly every element visible on this page is built out of our generic \u201cteaser\u201d pattern.\n \nBut the teaser variation I\u2019d like to call out is the one that appears on The Toast\u2019s homepage, on search results or on section fronts. 
In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline.\n\n The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.\n \nAnd this is, as it happens, the teaser variation that gave me pause. Back in the old days \u2013 you know, like six months ago \u2013 I probably would\u2019ve marked this module up to match the design. In other words, I would\u2019ve looked at the module\u2019s visual hierarchy (metadata up top, headline and content below) and written the following HTML:\n
<div class="teaser">
  <p class="article-byline">By <a href="#">Author Name</a></p>
  <a class="comment-count" href="#">126 comments</a>
  <h2 class="article-title"><a href="#">Article Title</a></h2>
  <p class="teaser-excerpt">Lorem ipsum dolor sit amet, consectetur…</p>
</div>
                    \nBut then I caught myself, and realized this wasn\u2019t the best approach.\nMoving Beyond Layout\nSince I\u2019ve started working responsively, there\u2019s a question I work into every step of my design process. Whether I\u2019m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:\n\nWhat if someone doesn\u2019t browse the web like I do?\n\n\u2026Okay, that doesn\u2019t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it\u2019s been invaluable to so many aspects of my practice. If I\u2019m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I\u2019m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It\u2019s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.\nAnd that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it\u2019d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they\u2019re associated.\nThat\u2019s why I find it\u2019s helpful to begin designing a pattern\u2019s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I\u2019m trying to support. In other words, if someone\u2019s encountering my design without the CSS I\u2019ve written, what should their experience be?\nSo I took a step back, and came up with a different approach:\n
<div class="teaser">
  <h2 class="article-title"><a href="#">Article Title</a></h2>
  <p class="article-byline">By <a href="#">Author Name</a></p>
  <p class="teaser-excerpt">
    Lorem ipsum dolor sit amet, consectetur…
    <a class="comment-count" href="#">126 comments</a>
  </p>
</div>
                    \nMuch, much better. This felt like a better match for the content I was designing: the headline \u2013 easily most important element \u2013 was at the top, followed by the author\u2019s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that\u2019s why it\u2019s at the very end of the excerpt, the last element within our teaser. And with some light styling, we\u2019ve got a respectable-looking hierarchy in place:\n\nYeah, you\u2019re right \u2013 it\u2019s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we\u2019ll bolster the markup with an extra element around our title and byline:\n
<div class="teaser">
  <div class="teaser-hed">
    <h2 class="article-title"><a href="#">Article Title</a></h2>
    <p class="article-byline">By <a href="#">Author Name</a></p>
  </div>
  …
</div>
                    \nWith that in place, we can use flexbox to tweak our layout, like so:\n.teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\nflex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.\n\nGetting closer! But as great as flexbox is, it doesn\u2019t do anything for elements outside our container, like our little comment count, which is, as you\u2019ve probably noticed, still stranded at the very bottom of our teaser.\nFlexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser\u2019s using a sufficiently modern version of flexbox. Here\u2019s the one we used:\nvar doc = document.body || document.documentElement;\nvar style = doc.style;\n\nif ( style.webkitFlexWrap == '' ||\n style.msFlexWrap == '' ||\n style.flexWrap == '' ) {\n doc.className += \" supports-flex\";\n}\nEagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser\u2019s design:\n.supports-flex .teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\n.supports-flex .teaser .comment-count {\n position: absolute;\n right: 0;\n top: 1.1em;\n}\nIf the supports-flex class is present, we can apply our flexbox layout to the title area, sure \u2013 but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don\u2019t meet our threshold for our advanced styles are left with an attractive design that matches our HTML\u2019s content hierarchy; but the ones that pass our test receive the finished, final design.\n\nAnd with that, our teaser\u2019s complete.\nDiving Into Device-Agnostic Design\nThis is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I\u2019d recommend Heydon Pickering\u2019s \u201cFlexbox Grid Finesse\u201d, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you\u2019d be right! In fact, it\u2019s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I\u2019ve found that thinking about my design as existing in broad experience tiers \u2013 in layers \u2013 is one of the best ways of designing for the modern web. And what\u2019s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.\nOpen video\n \n Even a simple search form can be conditionally enhanced, given a little layered thinking.\n \nThis more layered approach to interface design isn\u2019t a new one, mind you: it\u2019s been championed by everyone from Filament Group to the BBC. 
And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I\u2019ve found to practice responsive design. As Trent Walton once wrote,\n\nLike cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web\u2019s inherent variability.\n\nWe have a weird job, working on the web. We\u2019re designing for the latest mobile devices, sure, but we\u2019re increasingly aware that our definition of \u201csmartphone\u201d is much too narrow. Browsers have started appearing on our wrists and in our cars\u2019 dashboards, but much of the world\u2019s mobile data flows over sub-3G networks. After all, the web\u2019s evolution has never been charted along a straight line: it\u2019s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don\u2019t yet know about, a more device-agnostic, more layered design process can better prepare our patterns \u2013 and ourselves \u2013 for the future.\n(It won\u2019t help you get enough to eat at holiday parties, though.)", "year": "2015", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2015-12-10T00:00:00+00:00", "url": "https://24ways.org/2015/putting-my-patterns-through-their-paces/", "topic": "code"} {"rowid": 202, "title": "Design Systems and CSS Grid", "contents": "Recently, my client has been looking at creating a few new marketing pages for their website. They currently have a design system in place but they\u2019re looking to push this forward into 2018 with some small and possibly some larger changes.\nTo start with we are creating a couple of new marketing pages. As well as making use of existing components within the design systems component library there are a couple of new components. Looking at the first couple of sketch files I felt this would be a great opportunity to use CSS Grid, to me the newer components need to be laid out on that page and grid would help with this perfectly.\n\nAs well as this layout of the new components and the text within it, imagery would be used that breaks out of the grid and pushes itself into the spaces where the text is aligned.\nThe existing grid system\nWhen the site was rebuilt in 2015 the team decided to make use of Sass and Susy, a \u201clightweight grid-layout engine using Sass\u201d. It was built separating the grid system from the components that would be laid out on the page with a container, a row, an optional column, and a block.\nTo make use of the grid system on a page for a component that would take the full width of the row you would have to write something like this:\n
                    \nUsing a grid system similar to this can easily create quite the tag soup. It could fill the HTML full of divs that may become complex to understand and difficult to edit.\nAlthough there is this reliance on several
                    s to lay out the components on a page it does allow a tidy way to place the component code within that page. It abstracts the layout of the page to its own code, its own system, so the components can \u2018fit\u2019 where needed.\nThe requirements of the new grid system\nMoving forward I set myself some goals for what I\u2019d like to have achieved in this new grid system:\nIt needs to behave like the existing grid systems\nWe are not ripping up the existing grid system, it would be too much work, for now, to retrofit all of the existing components to work in a grid that has a different amount of columns, and spacing at various viewport widths.\nAllow full-width components\nCurrently the grid system is a 14 column grid that becomes centred on the page when viewport is wide enough. We have, in the past, written some CSS that would allow for a full-width component, but his had always felt like a hack. We want the option to have a full-width element as part of the new grid system, not something that needs CSS to fight against.\nLess of a tag soup\nIdeally we want to end up writing less HTML to layout the page. Although the existing system can be quite clear as to what each element is doing, it can also become a little laborious in working out what each grid row or block is doing where.\nI would like to move the layout logic to CSS as much as is possible, potentially creating some utility classes or additional \u2018layout classes\u2019 for the components.\nEasier for people to use and author\nWith many people using the existing design systems codebase we need to create a new grid system that is as easy or easier to use than the existing one. I think and hope this would be helped by removing as many
                    s as needed and would require new documentation and examples, and potentially some initial training.\nSeparating layout from style\nThere still needs to be a separation of layout from the styles for the component. To allow for the component itself to be placed wherever needed in the page we need to make sure that the CSS for the layout is a separate entity to the CSS for that styling.\nWith these base requirements I took to CodePen and started working on some throwaway code to get started.\nMaking the new grid(s)\nThe Full-Width Grid\nTo start with I created a grid that had three columns, one for the left, one for the middle, and one for the right. This would give the full-width option to components.\nThankfully, one of Rachel Andrew\u2019s many articles on Grid discussed this exact requirement of the new grid system to break out with Grid.\nI took some of the code in the examples and edited to make grid we needed.\n.container {\n display: grid;\n grid-template-columns:\n [full-start]\n minmax(.75em, 1fr)\n [main-start]\n minmax(0, 1008px)\n [main-end]\n minmax(.75em, 1fr)\n [full-end];\n}\nWe are declaring a grid, we have four grid column lines which we name and we define how the three columns they create react to the viewport width. We have a left and right column that have a minimum of 12px, and a central column with a maximum width of 1008px.\nBoth left and right columns fill up any additional space if the viewport is wider that 1032px wide. We are also not declaring any gutters to this grid, the left and right columns would act as gutters at smaller viewports.\nAt this point I noticed that older versions of Sass cannot parse the brackets in this code. To combat this I used Sass\u2019 unquote method to wrap around the value of the grid-template-column.\n.container {\n display: grid;\n grid-template-columns:\n unquote(\"\n [full-start]\n minmax(.75em, 1fr)\n [main-start]\n minmax(0, 1008px)\n [main-end]\n minmax(.75em, 1fr)\n [full-end]\n \");\n}\nThe existing codebase makes use of Sass variables, mixins and functions so to remove that would be a problem, but luckily the version of Sass used is up-to-date (note: example CodePens will be using CSS).\nThe initial full-width grid displays on a webpage as below:\n\nThe 14 column grid\nI decided to work out the 14 column grid as a separate prototype before working out how it would fit within the full-width grid. This grid is very similar to the 12 column grids that have been used in web design. Here we need 14 columns with a gutter between each one.\nAlong with the many other resources on Grid, Mozilla\u2019s MDN site had a page on common layouts using CSS Grid. This gave me the perfect CSS I needed to create my grid and I edited it as required:\n.inner {\n display: grid;\n grid-template-columns: repeat(14, [col-start] 1fr);\n grid-gap: .75em;\n}\nWe, again, are declaring a grid, and we are splitting up the available space by creating 14 columns with 1 fr-unit and giving each one a starting line named col-start.\nThis grid would display on web page as below:\n\nBringing the grids together\nNow that we have got the two grids we need to help fulfil our requirements we need to put them together so that they are actually we we need.\nThe subgrid\nThere is no subgrid in CSS, yet. To workaround this for the new grid system we could nest the 14 column grid inside the full-width grid.\nIn the HTML we nest the 14 column inner grid inside the full-width container.\n
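As a sketch, using the .container and .inner class names from the CSS (the markup itself is illustrative):

<div class="container">
 <div class="inner">
  <!-- components laid out on the 14 column grid -->
 </div>
</div>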
                    \nSo that the inner knows where to be laid out within the container we tell it what column to start and end with, with this code it would be the start and end of the main column.\n.inner {\n display: grid;\n grid-column: main-start / main-end;\n grid-template-columns: repeat(14, [col-start] 1fr);\n grid-gap: .75em;\n}\nThe CSS for the container remains unchanged.\n\nThis works, but we have added another div to our HTML. One of our requirements is to try and remove the potential for tag soup.\nThe faux subgrid subgrid\nI wanted to see if it would be possible to place the CSS for the 14 column grid within the CSS for the full-width grid. I replaced the CSS for the main grid and added the grid-column-gap to the .container.\n.container {\n display: grid;\n grid-gap: .75em;\n grid-template-columns:\n [full-start]\n minmax(.75em, 1fr)\n [main-start]\n repeat(14, [col-start] 1fr)\n [main-end]\n minmax(.75em, 1fr)\n [full-end];\n}\nWhat this gave me was a 16 column grid. I was unable to find a way to tell the main grid, the grid betwixt main-start and main-end to be a maximum of 1008px as required.\n\nI trawled the internet to find if it was possible to create our main requirement, a 14 column grid which also allows for full-width components. I found that we could not reverse minmax to minmax(1fr, 72px) as 1fr is not allowed as a minimum if there is a maximum. I tried working out if we could make the min larger than its max but in minmax it would be ignored.\nI was struggling, I was hoping for a cleaner version of the grid system we currently use but getting to the point where needing that extra
                    would be a necessity.\nAt 3 in the morning, when I was failing to get to sleep, my mind happened upon an question: \u201cCould you use calc?\u201d\nAt some point I drifted back to sleep so the next day I set upon seeing if this was possible. I knew that the maximum width of the central grid needed to be 1008px. The left and right columns needed to be however many pixels were left in the viewport divided by 2. In CSS it looked like I would need to use calc twice. The first time to takeaway 1008px from 100% of the viewport width and the second to divide that result by 2.\ncalc(calc(100% - 1008px) / 2)\nThe CSS above was part of the value that I would need to include in the declaration for the grid.\n.container {\n display: grid;\n grid-gap: .75em;\n grid-template-columns:\n [full-start]\n minmax(calc(calc(100% - 1008px) / 2), 1fr)\n [main-start]\n repeat(14, [col-start] 1fr)\n [main-end]\n minmax(calc(calc(100% - 1008px) / 2), 1fr)\n [full-end];\n}\nWe have created the grid required. A full-width grid, with a central 14 column grid, using fewer
                    elements.\n\nSee the Pen Design Systems and CSS Grid, 6 by Stuart Robson (@sturobson) on CodePen.\n\nSuccess!\nProgressive enhancement\nNow that we have created the grid system required we need to back-track a little.\nNot all browsers support Grid, over the last 9 months or so this has gotten a lot better. However there will still be browsers that visit that potentially won\u2019t have support. The effort required to make the grid system fall back for these browsers depends on your product or sites browser support.\n\nTo determine if we will be using Grid or not for a browser we will make use of feature queries. This would mean that any version of Internet Explorer will not get Grid, as well as some mobile browsers and older versions of other browsers.\n@supports (display: grid) {\n /* Styles for browsers that support Grid */\n}\nIf a browser does not pass the requirements for @supports we will fallback to using flexbox where possible, and if that is not supported we are happy for the page to be laid out in one column.\nA website doesn\u2019t have to look the same in every browser after all.\nA responsive grid\nWe started with the big picture, how the grid would be at a large viewport and the grid system we have created gets a little silly when the viewport gets smaller.\nAt smaller viewports we have a single column layout where every item of content, every component stacks atop each other. We don\u2019t start to define a grid before we the viewport gets to 700px wide. At this point we have an 8 column grid and if the viewport gets to 1100px or wider we have our 14 column grid.\n/*\n * to start with there is no 'grid' just a single column\n */\n.container {\n padding: 0 .75em;\n}\n\n/*\n * when we get to 700px we create an 8 column grid with\n * a left and right area to breakout of the grid.\n */\n@media (min-width: 700px) {\n .container {\n display: grid;\n grid-gap: .75em;\n grid-template-columns:\n [full-start]\n minmax(calc(calc(100% - 1008px) / 2), 1fr)\n [main-start]\n repeat(8, [col-start] 1fr)\n [main-end]\n minmax(calc(calc(100% - 1008px) / 2), 1fr)\n [full-end];\n padding: 0;\n }\n}\n\n/*\n * when we get to 1100px we create an 14 column grid with\n * a left and right area to breakout of the grid.\n */\n@media (min-width: 1100px) {\n .container {\n grid-template-columns:\n [full-start]\n minmax(calc(calc(100% - 1008px) / 2), 1fr)\n [main-start]\n repeat(14, [col-start] 1fr)\n [main-end]\n minmax(calc(calc(100% - 1008px) / 2), 1fr)\n [full-end];\n }\n}\nBeing explicit in creating this there is some repetition that we could avoid, we will define the number of columns for the inner grid by using a Sass variable or CSS custom properties (more commonly termed as CSS variables).\nLet\u2019s use CSS custom properties. We need to declare the variable first by adding it to our stylesheet.\n:root {\n --inner-grid-columns: 8;\n}\nWe then need to edit a few more lines. 
First make use of the variable for this line.
repeat(8, [col-start] 1fr)
/* replace with */
repeat(var(--inner-grid-columns), [col-start] 1fr)
Then at the 1100px breakpoint we would only need to change the value of the --inner-grid-columns custom property.
@media (min-width: 1100px) {
 .container {
 grid-template-columns:
 [full-start]
 minmax(calc(calc(100% - 1008px) / 2), 1fr)
 [main-start]
 repeat(14, [col-start] 1fr)
 [main-end]
 minmax(calc(calc(100% - 1008px) / 2), 1fr)
 [full-end];
 }
}
/* replace with */
@media (min-width: 1100px) {
 .container {
 --inner-grid-columns: 14;
 }
}
See the Pen Design Systems and CSS Grid, 8 by Stuart Robson (@sturobson) on CodePen.

The final grid system
We have finally created our new grid for the design system. It stays true to the existing grid in place, adds the ability to break out of the grid, and removes a div that could have been needed for the nested 14 column grid.
We can move on to the new component.
Creating a new component
Back to the new components we need to create.

To me there are two components, one of which is a slight variant of the first. This component contains a title, subtitle, a paragraph (potentially paragraphs) of content, a list, and a call to action.
To start with we should write the HTML for the component, something like this:
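Something along these lines, using the features__ class names that the CSS further on hooks into (the element choices here are illustrative assumptions):

<div class="features">
 <h2 class="features__title">Title</h2>
 <h3 class="features__subtitle">Subtitle</h3>
 <div class="features__content">
  <p>A paragraph or two of content.</p>
 </div>
 <ul class="features__list">
  <li>First feature</li>
  <li>Second feature</li>
 </ul>
 <a class="features__link" href="#">Call to action</a>
</div>

The exact elements matter less than the class names, which are what the layout CSS targets.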
                    \nTo place the component on the existing grid is fine, but as child elements are not affected by the container grid we need to define another grid for the features component.\nAs the grid doesn\u2019t get invoked until 700px it is possible to negate the need for a media query.\n.features {\n grid-column: col-start 1 / span 6;\n}\n\n@supports (display: grid) {\n @media (min-width: 1100px) {\n .features {\n grid-column-end: 9;\n }\n }\n}\nWe can also avoid duplication of declarations by making use of the grid-column shorthand and longhand. We need to write a little more CSS for the variant component, the one that will sit on the right side of the page too.\n.features:nth-of-type(even) {\n grid-column-start: 4;\n grid-row: 2;\n}\n\n@supports (display: grid) {\n @media (min-width: 1100px) {\n .features:nth-of-type(even) {\n grid-column-start: 9;\n grid-column-end: 16;\n }\n }\n}\nWe cannot place the items within features on the container grid as they are not direct children. To make this work we have to define a grid for the features component.\nWe can do this by defining the grid at the first breakpoint of 700px making use of CSS custom properties again to define how many columns there will need to be.\n.features {\n grid-column: col-start 1 / span 6;\n --features-grid-columns: 5;\n}\n\n@supports (display: grid) {\n @media (min-width: 700px) {\n .features {\n display: grid;\n grid-gap: .75em;\n grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr);\n }\n }\n}\n\n@supports (display: grid) {\n @media (min-width: 1100px) {\n .features {\n grid-column-end: 9;\n --features-grid-columns: 7;\n }\n }\n}\nSee the Pen Design Systems and CSS Grid, 10 by Stuart Robson (@sturobson) on CodePen.\n\nLaying out the parts\nLooking at the spec and reading several articles I feel there are two ways that I could layout the text of this component on the grid.\nWe could use the grid-column shorthand that incorporates grid-column-start and grid-column-end or we can make use of grid-template-areas.\ngrid-template-areas allow for a nice visual way of representing how the parts of the component would be laid out. We can take the the mock of the features on the grid and represent them in text in our CSS.\n\nWithin the .features rule we can add the relevant grid-template-areas value to represent the above.\n.features {\n display: grid;\n grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr);\n grid-template-areas:\n \". title title title title title title\"\n \". subtitle subtitle subtitle subtitle subtitle . \"\n \". content content content content . . \"\n \". list list list . . . \"\n \". . . . 
link link link \";\n}\n\nIn order to make the variant of the component we would have to create the grid-template-areas for that component too.\nWe then need to tell each element of the component in what grid-area it should be placed within the grid.\n.features__title { grid-area: title; }\n.features__subtitle { grid-area: subtitle; }\n.features__content { grid-area: content; }\n.features__list { grid-area: list; }\n.features__link { grid-area: link; }\nSee the Pen Design Systems and CSS Grid, 12 by Stuart Robson (@sturobson) on CodePen.\n\nThe other way would be to use the grid-column shorthand and the grid-column-start and grid-column-end we have used previously.\n.features .features__title {\n grid-column: col-start 2 / span 6;\n}\n.features .features__subtitle {\n grid-column: col-start 2 / span 5;\n}\n.features .features__content {\n grid-column: col-start 2 / span 4;\n}\n.features .features__list {\n grid-column: col-start 2 / span 4;\n}\n.features .features__link {\n grid-column: col-start 5 / span 3;\n}\nFor the variant of the component we can use the grid-column-start property as it will inherit the span defined in the grid-column shorthand.\n.features:nth-of-type(even) .features__title {\n grid-column-start: col-start 1;\n}\n.features:nth-of-type(even) .features__subtitle {\n grid-column-start: col-start 1;\n}\n.features:nth-of-type(even) .features__content {\n grid-column-start: col-start 3;\n}\n.features:nth-of-type(even) .features__list {\n grid-column-start: col-start 3;\n}\n.features:nth-of-type(even) .features__link {\n grid-column-start: col-start 1;\n}\nSee the Pen Design Systems and CSS Grid, 14 by Stuart Robson (@sturobson) on CodePen.\n\nI think, for now, we will go with using grid-column properties rather than grid-template-areas. The repetition needed for creating the variant feels too much where we can change the grid-column-start instead, keeping the components elements layout properties tied a little closer to the elements rather than the grid.\nSome additional decisions\nThe current component library has existing styles for titles, subtitles, lists, paragraphs of text and calls to action. These are name-spaced so that they shouldn\u2019t clash with any other components. Looking forward there will be a chance that other products adopt the component library, but they may bring their own styles for titles, subtitles, etc.\nOne way that we could write our code now for that near future possibility is to make sure our classes are working hard. Using class-attribute selectors we can target part of the class attributes that we know the elements in the component will have using *=.\n.features [class*=\"title\"] {\n grid-column: col-start 2 / span 6;\n}\n.features [class*=\"subtitle\"] {\n grid-column: col-start 2 / span 5;\n}\n.features [class*=\"content\"] {\n grid-column: col-start 2 / span 4;\n}\n.features [class*=\"list\"] {\n grid-column: col-start 2 / span 4;\n}\n.features [class*=\"link\"] {\n grid-column: col-start 5 / span 3;\n}\nSee the Pen Design Systems and CSS Grid, 15 by Stuart Robson (@sturobson) on CodePen.\n\nAlthough the component we have created have a title, subtitle, paragraphs, a list, and a call to action there may be a time where one ore more of these is not required or available. One thing I found out is that if the element doesn\u2019t exist then grid will not create space for it. 
This may be obvious, but it can be really helpful in making a nice malleable component.
We have only looked at columns; as existing components have their own spacing for the vertical rhythm of the page, we don’t really want to have them take up equal space in the component, just the space that is needed. We can do this by adding grid-auto-rows: min-content; to our .features. This is useful if you also need your component to take up a height that is more than the component itself.
The grid of the future
From prototyping this new grid and components in CSS Grid, I’ve found it a fantastic way to reimagine how we can create a layout or grid system for our sites. It gives us options to create the same layouts in differing ways that could suit a project and its needs.
It allows us to carry on – if we choose to – using a div
                    -based grid but swapping out floats for CSS Grid or to tie it to our components so they have specific places to go depending on what component is being used. Or we could have several \u2018grid components\u2019 in our design system that we could use to layout various components throughout a page.\nIf you find yourself tasked with creating some new components for your design system try it. If you are starting from scratch I believe you really should start with CSS Grid for your layout.\nIt really feels like the possibilities are endless in terms of layout for the web.\nResources\nHere are just a few resources I have pawed over these last few weeks whilst getting acquainted with CSS Grid.\n\nA collection of CodePens from this article\nGrid by Example from Rachel Andrew\nA Complete Guide to CSS Grid on Codrops from Hui Jing Chen\nRachel Andrew\u2019s Blog Archive tagged: cssgrid\nCSS Grid Layout Examples\nMDN\u2019s CSS Grid Layout\nA Complete Guide to Grid from CSS-Tricks\nCSS Grid Layout Module Level 1 Specification", "year": "2017", "author": "Stuart Robson", "author_slug": "stuartrobson", "published": "2017-12-12T00:00:00+00:00", "url": "https://24ways.org/2017/design-systems-and-css-grid/", "topic": "code"} {"rowid": 91, "title": "Infinite Canvas: Moving Beyond the Page", "contents": "Remember Web 2.0? I do. In fact, that phrase neatly bifurcates my life on the internet. Pre-2.0, I was occupied by chatting on AOL and eventually by learning HTML so I could build sites on Geocities. Around 2002, however, I saw a WYSIWYG demo in Dreamweaver. The instructor was dragging boxes and images around a canvas. With a few clicks he was able to build a dynamic, single-page interface. Coming from the world of tables and inline HTML styles, I was stunned.\n\nAs I entered college the next year, the web was blossoming: broadband, Wi-Fi, mobile (proud PDA owner, right here), CSS, Ajax, Bloglines, Gmail and, soon, Google Maps. I was a technology fanatic and a hobbyist web developer. For me, the web had long been informational. It was now rapidly becoming something else, something more: sophisticated, presentational, actionable.\n\nIn 2003 we watched as the internet changed. The predominant theme of those early Web 2.0 years was the withering of Internet Explorer 6 and the triumph of web standards. Upon cresting that mountain, we looked around and collectively breathed the rarefied air of pristine HMTL and CSS, uncontaminated by toxic hacks and forks \u2013 only to immediately begin hurtling down the other side at what is, frankly, terrifying speed.\n\nTen years later, we are still riding that rocket. Our days (and nights) are spent cramming for exams on CSS3 and RWD and Sass and RESS. We are the proud, frazzled owners of tiny pocket computers that annihilate the best laptops we could have imagined, and the architects of websites that are no longer restricted to big screens nor even segregated by device. We dragoon our sites into working any time, anywhere. At this point, we can hardly ask the spec developers to slow down to allow us to catch our breath, nor should we. It is, without a doubt, a most wonderful time to be a web developer.\n\nBut despite the newfound luxury of rounded corners, gradients, embeddable fonts, low-level graphics APIs, and, glory be, shadows, the canyon between HTML and native appears to be as wide as ever. The improvements in HTML and CSS have, for the most part, been conveniences rather than fundamental shifts. 
What I\u2019d like to do now, if you\u2019ll allow me, is outline just a few of the remaining gaps that continue to separate web sites and applications from their native companions.\n\nWhat I\u2019d like for Christmas\n\nThere is one irritant which is the grandfather of them all, the one from which all others flow and have their being, and it is, simply, the page refresh. That\u2019s right, the foundational principle of the web is our single greatest foe. To paraphrase a patron saint of designers everywhere, if you see a page refresh, we blew it.\n\nThe page refresh brings with it, of course, many noble and lovely benefits: addressability, for one; and pagination, for another. (See also caching, resource loading, and probably half a dozen others.) Still, those concerns can be answered (and arguably answered more compellingly) by replacing the weary page with the young and hearty document. Flash may be dead, but it has many lessons yet to bequeath.\n\nPreparing a single document when the site loads allows us to engage the visitor in a smooth and engrossing experience. We have long known this, of course. Twitter was not the first to attempt, via JavaScript, to envelop the user in a single-page application, nor the first to abandon it. Our shared task is to move those technologies down the stack, to make them more primitive, so that the next Twitter can be built with the most basic combination of HTML and CSS rather than relying on complicated, slow, and unreliable scripted solutions.\n\nSo, let\u2019s take a look at what we can do, right now, that we might have a better idea of where our current tools fall short.\n\nA print magazine in HTML clothing\n\nLike many others, I suspect, one of my earliest experiences with publishing was laying out newsletters and newspapers on a computer for print. If you\u2019ve ever used InDesign or Quark or even Microsoft Publisher, you\u2019ll remember reflowing content from page to page. The advent of the internet signaled, in many ways, the abandonment of that model. Articles were no longer constrained by the physical limitations of paper. In shedding our chains, however, it is arguable that we\u2019ve lost something useful. We had a self-contained and complete package, a closed loop. It was a thing that could be handled and finished, and doing so provided a sense of accomplishment that our modern, infinitely scrolling, ever-fractal web of content has stolen.\n\nFor our purposes today, we will treat 24 ways as the online equivalent of that newspaper or magazine. A single year\u2019s worth of articles could easily be considered an issue. Right now, navigating between articles means clicking on the article you\u2019d like to view and being taken to that specific address via a page reload. If Drew wanted to, it wouldn\u2019t be difficult to update the page in place (via JavaScript) and change the address (again via JavaScript with the History API) to reflect the new content found at the new location. But what if Drew wanted to do that without JavaScript? And what if he wanted the site to not merely load the content but actually whisk you along the page in a compelling and delightful way, \u00e0 la the Mag+ demo we all saw a few years ago when the iPad was first introduced? Uh, no.\n\nWe\u2019re all familiar with websites that have attempted to go beyond the page by weaving many chunks of content together into a large document and for good reason. 
There is tremendous appeal in opening and exploring the canvas beyond the edges of our screens.\n\nIn one rather straightforward example from last year, Mozilla contacted Full Stop to build a website promoting Aza Raskin\u2019s proposal for a set of Creative Commons-style privacy icons. Like a lot of the sites we build (including our own), the amount of information we were presenting was minimal. In these instances, we encourage our clients to consider including everything on a single page. The result was a horizontally driven site that was, if not whimsical, at least clever and attractive to the intended audience. An experience that is taken for granted when using device-native technology is utterly, maddeningly impossible to replicate on the web without jumping through JavaScript hoops.\n\nIn another, more complex example, we again had the pleasure of working with Aza earlier this year, this time on a redesign of the Massive Health website. Our assignment was to design and build a site that communicated Massive\u2019s commitment to modern personal health. The site had to be visually and interactively stunning while maintaining a usable and clear interface for the casual visitor. Our solution was to extend the infinite company logo into a ribbon that carried the visitor through the site narrative. It also meant we\u2019d be asking the browser to accommodate something it was never designed to handle: a non-linear design. (Be sure to play around. There\u2019s a lot going on under the hood. We were also this close to a ZUI, if WebKit didn\u2019t freak out when pages were scaled beyond 10\u00d7.) Despite the apparent and deliberate design simplicity, the techniques necessary to implement it are anything but. From updating the URL to moving the visitor from section to section, we\u2019re firmly in JavaScript territory. And that\u2019s a shame.\n\nWhat can we do?\n\nWe might not be able to specify these layouts in HTML and CSS just yet, but that doesn\u2019t mean we can\u2019t learn a few new tricks while we wait. Let\u2019s see how close we can come to recreating the privacy icons design, the Massive design, or the Mag+ design without resorting to JavaScript.\n\nA horizontally paginated site\n\nThe first thing we\u2019re going to need is the concept of a page within our HTML document. Using plain old HTML and CSS, we can stack a series of
                    s sideways (with a little assist from our new friend, the viewport-width unit, not that he was strictly necessary). All we need to know is how many pages we have. (And, boy, wouldn\u2019t it be nice to be able to know that without having to predetermine it or use JavaScript?)\n\n.window {\noverflow: hidden;\n width: 100%;\n}\n.pages {\n width: 200vw;\n}\n.page {\n float: left;\n overflow: hidden;\n width: 100vw;\n}\n\nIf you look carefully, you\u2019ll see that the conceit we\u2019ll use in the rest of the demos is in place. Despite the document containing multiple pages, only one is visible at any given time. This allows us to keep the user focused on the task (or content) at hand.\n\nBy the way, you\u2019ll need to use a modern, WebKit-based browser for these demos. I recommend downloading the WebKit nightly builds, Chrome Canary, or being comfortable with setting flags in Chrome.\n\nA horizontally paginated site, with transitions\n\nAh, here\u2019s the rub. We have functional navigation, but precious few cues for the user. It\u2019s not much good shoving the visitor around various parts of the document if they don\u2019t get the pleasant whooshing experience of the journey. You might be thinking, what about that new CSS selector, target-something\u2026? Well, my friend, you\u2019re on the right track. Let\u2019s test it. We\u2019re going to need to use a bit of sleight of hand. While we\u2019d like to simply offset the containing element by the number of pages we\u2019re moving (like we did on Massive), CSS alone can\u2019t give us that information, and that means we\u2019re going to need to fake it by expanding and collapsing pages as you navigate. Here are the bits we\u2019re going to need:\n\n.page {\n -webkit-transition: width 1s; // Naturally you're going to want to include all the relevant prefixes here\n float: left;\n left: 0;\n overflow: hidden;\n position: relative;\n width: 100vw;\n}\n.page:not(:target) {\n width: 0;\n}\n\nAh, but we\u2019re not fooling anyone with that trick. As soon as you move beyond a single page, the visitor\u2019s disbelief comes tumbling down when the linear page transitions are unaffected by the distance the pages are allegedly traveling. And you may have already noticed an even more fatal flaw: I secretly linked you to the first page rather than the unadorned URL. If you visit the same page with no URL fragment, you get a blank screen. Sure, we could force a redirect with some server-side trickery, but that feels like cheating. Perhaps if we had the CSS4 subject selector we could apply styles to the parent based on the child being targeted by the URL. We might also need a few more abilities, like determining the total number of pages and having relative sibling selectors (e.g. nth-sibling), but we\u2019d sure be a lot closer.\n\nA horizontally paginated site, with transitions \u2013 no cheating\n\nWell, what other cards can we play? How about the checkbox hack? Sure, it\u2019s a garish trick, but it might be the best we can do today. Check it out. \n\nlabel {\n cursor: pointer;\n}\ninput {\n display: none;\n}\ninput:not(:checked) + .page {\n max-height: 100vh;\n width: 0;\n}\n\nFinally, we can see the first page thanks to the state we are able to set on the appropriate radio button. Of course, now we don\u2019t have URLs, so maybe this isn\u2019t a winning plan after all. While our HTML and CSS toolkit may feel primitive at the moment, we certainly don\u2019t want to sacrifice the addressability of the web. 
If there\u2019s one bedrock principle, that\u2019s it.\n\nA horizontally paginated site, with transitions \u2013 no cheating and a gorgeous homepage\n\nGorgeous may not be the right word, but our little magazine is finally shaping up. Thanks to the CSS regions spec, we\u2019ve got an exciting new power, the ability to begin an article in one place and bend it to our will. (Remember, your everyday browser isn\u2019t going to work for these demos. Try the WebKit nightly build to see what we\u2019re talking about.) As with the rest of the examples, we\u2019re clearly abusing these features. Off-canvas layouts (you can thank Luke Wroblewski for the name) are simply not considered to be normal patterns\u2026 yet.\n\nHere\u2019s a quick look at what\u2019s going on:\n\n.excerpt-container {\n float: left;\n padding: 2em;\n position: relative;\n width: 100%;\n}\n.excerpt {\n height: 16em;\n}\n.excerpt_name_article-1,\n.page-1 .article-flow-region {\n -webkit-flow-from: article-1;\n}\n.article-content_for_article-1 {\n -webkit-flow-into: article-1;\n}\n\nThe regions pattern is comprised of at least three components: a beginning; an ending; and a source. Using CSS, we\u2019re able to define specific elements that should be available for the content to flow through. If magazine-style layouts are something you\u2019re interested in learning more about (and you should be), be sure to check out the great work Adobe has been doing.\n\nLooking forward, and backward\n\nAs designers, builders, and consumers of the web, we share a desire to see the usability and enjoyability of websites continue to rise. We are incredibly lucky to be working in a time when a three-month-old website can be laughably outdated. Our goal ought to be to improve upon both the weaknesses and the strengths of the web platform. We seek not only smoother transitions and larger canvases, but fine-grained addressability. Our URLs should point directly and unambiguously to specific content elements, be they pages, sections, paragraphs or words. Moreover, off-screen design patterns are essential to accommodating and empowering the multitude of devices we use to access the web. We should express the desire that interpage links take advantage of the CSS transitions which have been put to such good effect in every other aspect of our designs. Transitions aren\u2019t just nice to have, they\u2019re table stakes in the highly competitive world of native applications. \n\nThe tools and technologies we have right now allow us to create smart, beautiful, useful webpages. With a little help, we can begin removing the seams and sutures that bind the web to an earlier, less sophisticated generation.", "year": "2012", "author": "Nathan Peretic", "author_slug": "nathanperetic", "published": "2012-12-21T00:00:00+00:00", "url": "https://24ways.org/2012/infinite-canvas-moving-beyond-the-page/", "topic": "code"} {"rowid": 136, "title": "Making XML Beautiful Again: Introducing Client-Side XSL", "contents": "Remember that first time you saw XML and got it? When you really understood what was possible and the deep meaning each element could carry? Now when you see XML, it looks ugly, especially when you navigate to a page of XML in a browser. 
Well, with every modern browser now supporting XSL 1.0, I\u2019m going to show you how you can turn something as simple as an ATOM feed into a customised page using a browser, Notepad and some XSL.\n\nWhat on earth is this XSL?\n\nXSL is a family of recommendations for defining XML document transformation and presentation. It consists of three parts:\n\n\n\tXSLT 1.0 \u2013 Extensible Stylesheet Language Transformation, a language for transforming XML\n\tXPath 1.0 \u2013 XML Path Language, an expression language used by XSLT to access or refer to parts of an XML document. (XPath is also used by the XML Linking specification)\n\tXSL-FO 1.0 \u2013 Extensible Stylesheet Language Formatting Objects, an XML vocabulary for specifying formatting semantics\n\n\nXSL transformations are usually a one-to-one transformation, but with newer versions (XSL 1.1 and XSL 2.0) its possible to create many-to-many transformations too. So now you have an overview of XSL, on with the show\u2026\n\nSo what do I need?\n\nSo to get going you need a browser an supports client-side XSL transformations such as Firefox, Safari, Opera or Internet Explorer. Second, you need a source XML file \u2013 for this we\u2019re going to use an ATOM feed from Flickr.com. And lastly, you need an editor of some kind. I find Notepad++ quick for short XSLs, while I tend to use XMLSpy or Oxygen for complex XSL work. \n\nBecause we\u2019re doing a client-side transformation, we need to modify the XML file to tell it where to find our yet-to-be-written XSL file. Take a look at the source XML file, which originates from my Flickr photos tagged sky, in ATOM format.\n\nThe top of the ATOM file now has an additional instruction, as can been seen on Line 2 below. This instructs the browser to use the XSL file to transform the document.\n\n\n\n\n\nYour first transformation\n\nYour first XSL will look something like this:\n\n\n\n\t\n\n\nThis is pretty much the starting point for most XSL files. You will notice the standard XML processing instruction at the top of the file (line 1). We then switch into XSL mode using the XSL namespace on all XSL elements (line 2). In this case, we have added namespaces for ATOM (line 4) and Dublin Core (line 5). This means the XSL can now read and understand those elements from the source XML. \n\nAfter we define all the namespaces, we then move onto the xsl:output element (line 6). This enables you to define the final method of output. Here we\u2019re specifying html, but you could equally use XML or Text, for example. The encoding attributes on each element do what they say on the tin. As with all XML, of course, we close every element including the root.\n\nThe next stage is to add a template, in this case an as can be seen below:\n\n\n\n\t\n\t\n\t\t\n\t\t\t\n\t\t\t\tMaking XML beautiful again : Transforming ATOM\n\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\n\t\n\n\nThe beautiful thing about XSL is its English syntax, if you say it out loud it tends to make sense. \n\nThe / value for the match attribute on line 8 is our first example of XPath syntax. The expression / matches any element \u2013 so this will match against any element in the document. As the first element in any XML document is the root element, this will be the one matched and processed first.\n\nOnce we get past our standard start of a HTML document, the only instruction remaining in this is to look for and match all elements using the in line 14, above.\n\n\n\n\t\n\t\n\t\t\n\t\n\t\n\t\t
This new template (line 12, above) matches atom:feed and starts to write the new HTML elements out to the output stream. The xsl:value-of element does exactly what you’d expect – it finds the value of the item specified in its select attribute. With XPath you can select any element or attribute from the source XML. 

The last part is a repeat of the now familiar xsl:apply-templates from before, but this time we’re using it inside of a called template. Yep, XSL is full of recursion…
The template which matches atom:entry (line 1) occurs every time there is an atom:entry element in the source XML file. So in total that is 20 times; this is naturally why XSLT is full of recursion. This has been matched and therefore called higher up in the document, so we can start writing list elements directly to the output stream. The first part is simply a list item
                    with a link wrapped within it (lines 3-7). We can select attributes using XPath using @. \n\nThe second part of this template selects the date, but performs a XPath string function on it. This means that we only get the date and not the time from the string (line 9). This is achieved by getting only the part of the string that exists before the T. \n\nRegular Expressions are not part of the XPath 1.0 string functions, although XPath 2.0 does include them. Because of this, in XSL we tend to rely heavily on the available XML output. \n\nThe third part of the template (line 12) is a again, but this time we use an attribute of called disable output escaping to turn escaped characters back into XML. \n\nThe very last section is another call, taking us three templates deep. Do not worry, it is not uncommon to write XSL which go 20 or more templates deep!\n\n\n\t\n\t\t\n\t\t\t\n\t\t\t\ttag\n\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t\t \n\t\n\n\nIn our final , we see a combination of what we have done before with a couple of twists. Once we match atom:category we then count how many elements there are at that same level (line 2). The XPath . means \u2018self\u2019, so we count how many category elements are within the element. \n\nFollowing that, we start to output a link with a rel attribute of the predefined text, tag (lines 4-6). In XSL you can just type text, but results can end up with strange whitespace if you do (although there are ways to simply remove all whitespace). \n\nThe only new XPath function in this example is concat(), which simply combines what XPaths or text there might be in the brackets. We end the output for this tag with an actual tag name (line 10) and we add a space afterwards (line 12) so it won\u2019t touch the next tag. (There are better ways to do this in XSL using the last() XPath function). \n\nAfter that, we go back to the element again if there is another category element, otherwise we end the loop and end this .\n\nA touch of style\n\nBecause we\u2019re using recursion through our templates, you will find this is the end of the templates and the rest of the XML will be ignored by the parser. Finally, we can add our CSS to finish up. (I have created one for Flickr and another for News feeds)\n\n\n\nSo we end up with a nice simple to understand but also quick to write XSL which can be used on ATOM Flickr feeds and ATOM News feeds. With a little playing around with XSL, you can make XML beautiful again.\n\nAll the files can be found in the zip file (14k)", "year": "2006", "author": "Ian Forrester", "author_slug": "ianforrester", "published": "2006-12-07T00:00:00+00:00", "url": "https://24ways.org/2006/beautiful-xml-with-xsl/", "topic": "code"} {"rowid": 92, "title": "Redesigning the Media Query", "contents": "Responsive web design is showing us that designing content is more important than designing containers. But if you\u2019ve given RWD a serious try, you know that shifting your focus from the container is surprisingly hard to do. There are many factors and\ninstincts working against you, and one culprit is a perpetrator you\u2019d least suspect.\n\nThe media query is the ringmaster of responsive design. It lets us establish the rules of the game and gives us what we need most: control. 
However, like some kind of evil double agent, the media query is actually working against you.\n\nIts very nature diverts your attention away from content and forces you to focus on the container.\n\nThe very act of choosing a media query value means choosing a screen size.\n\nLook at the history of the media query\u2014it\u2019s always been about the container. Values like screen, print, handheld and tv don\u2019t have anything to do with content. The modern media query lets us choose screen dimensions, which is great because it makes RWD possible. But it\u2019s still the act of choosing something that is completely unpredictable.\n\nContent should dictate our breakpoints, not the container. In order to get our focus back to the only thing that matters, we need a reengineered media query\u2014one that frees us from thinking about screen dimensions. A media query that works for your content, not the window. Fortunately, Sass 3.2 is ready and willing to take on this challenge.\n\nThinking in Columns\n\nFluid grids never clicked for me. I feel so disoriented and confused by their squishiness. Responsive design demands their use though, right?\n\nI was ready to surrender until I found a grid that turned my world upright again. The Frameless Grid by Joni Korpi demonstrates that column and gutter sizes can stay fixed. As the screen size changes, you simply add or remove columns to accommodate. This made sense to me and armed with this concept I was able to give Sass the first component it needs to rewrite the media query: fixed column and gutter size variables.\n\n$grid-column: 60px;\n$grid-gutter: 20px;\n\nWe\u2019re going to want some resolution independence too, so let\u2019s create a function that converts those nasty pixel values into ems.\n\n@function em($px, $base: $base-font-size) {\n\t@return ($px / $base) * 1em;\n}\n\nWe now have the components needed to figure out the width of multiple columns in ems. Let\u2019s put them together in a function that will take any number of columns and return the fixed width value of their size.\n\n@function fixed($col) {\n\t@return $col * em($grid-column + $grid-gutter)\n}\n\nWith the math in place we can now write a mixin that takes a column count as a parameter, then generates the perfect media query necessary to fit that number of columns on the screen. We can also build in some left and right margin for our layout by adding an additional gutter value (remembering that we already have one gutter built into our fixed function).\n\n@mixin breakpoint($min) {\n\t@media (min-width: fixed($min) + em($grid-gutter)) {\n\t\t@content\n\t}\n}\n\nAnd, just like that, we\u2019ve rewritten the media query. Instead of picking a minimum screen size for our layout, we can simply determine the number of columns needed. Let\u2019s add a wrapper class so that we can center our content on the screen.\n\n@mixin breakpoint($min) {\n @media (min-width: fixed($min) + em($grid-gutter)) {\n\t.wrapper {\n\t\twidth: fixed($min) - em($grid-gutter);\n\t\tmargin-left: auto; margin-right: auto;\n\t}\n\t@content\n }\n}\n\nDesigning content with a column count gives us nice, easy, whole numbers to work with. Sizing content, sidebars or widgets is now as simple as specifying a single-digit number.\n\n@include breakpoint(8) {\n\t.main { width: fixed(5); }\n\t.sidebar { width: fixed(3); }\n}\n\nThose four lines of Sass just created a responsive layout for us. When the screen is big enough to fit eight columns, it will trigger a fixed width layout. 
And give widths to our main content and sidebar. The following is the outputted CSS\u2026\n\n@media (min-width: 41.25em) {\n .wrapper {\n width: 38.75em;\n margin-left: auto; margin-right: auto;\n }\n .main { width: 25em; }\n .sidebar { width: 15em; }\n}\n\nDemo\n\nI\u2019ve created a Codepen demo that demonstrates what we\u2019ve covered so far. I\u2019ve added to the demo some grid classes based on Griddle by Nicolas Gallagher to create a floatless layout. I\u2019ve also added a CSS gradient overlay to help you visualize columns. Try changing the column variable sizes or the breakpoint includes to see how the layout reacts to different screen sizes.\n\nResponsive Images\n\nResponsive images are a serious problem, but I\u2019m excited to see the community talk so passionately about a solution. Now, there are some excellent stopgaps while we wait for something official, but these solutions require you to mirror your breakpoints in JavaScript or HTML. This poses a serious problem for my Sass-generated media queries, because I have no idea what the real values of my breakpoints are anymore. For responsive images to work, JavaScript needs to recognize which media query is active so that proper images can be loaded for that layout.\n\nWhat I need is a way to label my breakpoints. Fortunately, people much smarter than I have figured this out. Jeremy Keith devised a labeling method by using CSS-generated content as the storage method for breakpoint labels. We can use this technique in our breakpoint mixin by passing a label as another argument.\n\n@include breakpoint(8, 'desktop') { /* styles */ }\n\nSass can take that label and use it when writing the corresponding media query. We just need to slightly modify our breakpoint mixin.\n\n@mixin breakpoint($min, $label) {\n @media (min-width: fixed($min) + em($grid-gutter)) {\n\n // label our mq with CSS generated content\n\tbody::before { content: $label; display: none; }\n\n\t.wrapper {\n\t\twidth: fixed($min) - em($grid-gutter);\n\t\tmargin-left: auto; margin-right: auto;\n\t}\n\t@content\n }\n}\n\nThis allows us to label our breakpoints with a user-friendly string. Now that our media queries are defined and labeled, we just need JavaScript to step in and read which label is active.\n\n// get css generated label for active media query\nvar label = getComputedStyle(document.body, '::before')['content'];\n\nJavaScript now knows which layout is active by reading the label in the current media query\u2014we just need to match that label to an image. I prefer to store references to different image sizes as data attributes on my image tag.\n\n\n\n\nThese data attributes have names that match the labels set in my CSS. So while there is some duplication going on, setting a keyword like \u2018tablet\u2019 in two places is much easier than hardcoding media query values. With matching labels in CSS and HTML our script can marry the two and load the right sized image for our layout.\n\n// get css generated label for active media query\nvar label = getComputedStyle(document.body, '::before')['content'];\n\n// select image\nvar $image = $('.responsive-image');\n\n// create source from data attribute\n$image.attr('src', $image.data(label));\n\nDemo\n\nWith some slight additions to our previous Codepen demo you can see this responsive image technique in action. 
While the above JavaScript will work it is not nearly robust enough for production so the demo uses a jQuery plugin that can accomodate multiple images, reloading on screen resize and fallbacks if something doesn\u2019t match up.\n\nCreating a Framework\n\nThis media query mixin and responsive image JavaScript are the center piece of a front end framework I use to develop websites. It\u2019s a fluid, mobile first foundation that uses the breakpoint mixin to structure fixed width layouts for tablet and desktop. Significant effort was focused on making this framework completely cross-browser. For example, one of the problems with using media queries is that essential desktop structure code ends up being hidden from legacy Internet Explorer. Respond.js is an excellent polyfill, but if you\u2019re comfortable serving a single desktop layout to older IE, we don\u2019t need JavaScript. We simply need to capture layout code outside of a media query and sandbox it under an IE only class name.\n\n// set IE fallback layout to 8 columns\n$ie-support = 8;\n\n// inside of our breakpoint mixin (but outside the media query)\n@if ($ie-support and $min <= $ie-support) {\n\t.lt-ie9 { @content; }\n}\n\nPerspective Regained\n\nThinking in columns means you are thinking about content layout. How big of a screen do you need for 12 columns? Who cares? Having Sass write media queries means you can use intuitive numbers for content layout. A fixed grid means more layout control and less edge cases to test than a fluid grid. Using CSS labels for activating responsive images means you don\u2019t have to duplicate breakpoints across separations of concern. \n\nIt\u2019s a harmonious blend of approaches that gives us something we need\u2014responsive design that feels intuitive. And design that, from the very outset, focuses on what matters most. Just like our kindergarten teachers taught us: It\u2019s what\u2019s inside that counts.", "year": "2012", "author": "Les James", "author_slug": "lesjames", "published": "2012-12-13T00:00:00+00:00", "url": "https://24ways.org/2012/redesigning-the-media-query/", "topic": "code"} {"rowid": 263, "title": "Securing Your Site like It\u2019s 1999", "contents": "Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.\nAs the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.\nWhat follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.\nBad input validation: Trusting anything the user sends you\nOur story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.\nOne such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. 
This might sound like a model community to you, but you would be wrong.\nOne day, a player discovered a hidden field in the forum\u2019s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum\u2019s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player\u2019s profile.\nNeedless to say, this idyllic online community descended into chaos. Users changed each other\u2019s passwords, deleted each other\u2019s messages, and attacked each-other under the cover of complete anonymity. What happened?\nThere aren\u2019t any official rules for developing software on the web. But if there were, my golden rule would be:\nNever trust user input. Ever.\nAlways ask yourself how users will send you data that isn\u2019t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.\nMake sure you validate user input to make sure it\u2019s of the correct type (e.g. string, number, JSON string) and that it\u2019s the length that you were expecting. Don\u2019t forget that user input doesn\u2019t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.\nMake sure to check a user\u2019s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and to whose data they are allowed access to. For example, a newly-registered user should not be allowed to change the user profile of a web forum\u2019s owner.\nFinally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.\nSQL injection: Allowing the user to run their own database queries\nA long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I\u2019d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.\nOne day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn\u2019t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.\nIt turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?\n' OR '1'='1\nSQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this:\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Alice'\nAND PASSWORD='hunter2'\nThis query selects users from the database that match the username Alice and the password hunter2. If there is at least one user matching record, the user will be granted access. 
Let\u2019s see what happens when we use our magic password instead!\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Admin'\nAND PASSWORD='' OR '1'='1'\nDoes the password look like part of the query to you? That\u2019s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!\nSo how can you protect yourself from SQL injection?\nNever build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.JS has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize.\nExpert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!\nCross site request forgery: Getting other users to do your dirty work for you\nDo you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.\nHave you ever clicked on a hyperlink, only to find something that you weren\u2019t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky\u2026\nLet\u2019s just say there are older and fouler things than Rick Astley in the dark places of the web.\nWhat if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it:\n\nDid you notice the source URL of the image? It\u2019s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user\u2019s name and shipping address, and you could ship yourself DVDs completely free of charge!\nThis attack is possible when websites unconditionally trust a user\u2019s session cookies without checking where HTTP requests come from.\nThe first check you can make is to verify that a request\u2019s origin and referer headers match the location of the website. These headers can\u2019t be programmatically set.\nAnother check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren\u2019t stored in cookies.\nYou can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.\nCross site scripting: Someone else\u2019s code running on your website\nIn 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.\nSamy enjoyed using MySpace which, at the time, was the world\u2019s largest social network. 
Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn\u2019t have to wade through photos of avocado toast back then\u2026\nSamy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject and
                    tags into his headline, but \n\nThe aliases create shorthand links to all of the Flash-based APIs.\n\nNow is probably a good time to explain how to debug your application.\n\nDebugging our application\n\nSo, with our XML file created and HTML file started, let\u2019s try testing our \u2018application\u2019. We\u2019ll need the ADL application located in BIN folder of the SDK and tell it to run the application.xml file.\n\n/path/to/adl /path/to/application.xml\n\nYou can also just drag the XML file onto ADL and it\u2019ll accomplish the same thing. If you just did that and noticed that your blank application didn\u2019t load, you\u2019d be correct. It\u2019s running but isn\u2019t visible. Which at this point means you\u2019ll have to shut down the ADL process. Sorry about that!\n\nChanging the visibility\n\nYou have two ways to make your application visible. You can do it automatically by setting the placing true in the visible tag within the application.xml file.\n\ntrue\n\nThe other way is to do it programmatically from within your application. You\u2019d want to do it this way if you had other startup tasks to perform before showing the interface. To turn the UI on programmatically, simple set the visible property of nativeWindow to true.\n\n\n\nSandbox Security\n\nNow that we have an application that we can see when we start it, it\u2019s time to build the to-do list application. In doing so, you\u2019d probably think that using a JavaScript library is a really good idea \u2014 and it can be but there are some limitations within AIR that have to be considered.\n\nAn HTML document, by default, runs within the application sandbox. You have full access to the AIR APIs but once the onload event of the window has fired, you\u2019ll have a limited ability to make use of eval and other dynamic script injection approaches. This limits the ability of external sources from gaining access to everything the AIR API offers, such as database and local file system access. You\u2019ll still be able to make use of eval for evaluating JSON responses, which is probably the most important if you wish to consume JSON-based services.\n\nIf you wish to create a greater wall of security between AIR and your HTML document loading in external resources, you can create a child sandbox. We won\u2019t need to worry about it for our application so I won\u2019t go any further into it but definitely keep this in mind.\n\nFinally, our application\n\nGetting tired of all this preamble? Let\u2019s actually build our to-do list application. I\u2019ll use jQuery because it\u2019s small and should suit our needs nicely. Let\u2019s begin with some structure:\n\n\n\t\n\t\n\t
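A minimal sketch of that structure (the text, add and list ids are the ones the script below expects; everything else is up to you):

<input type="text" id="text" />
<button id="add">Add</button>
<ul id="list"></ul>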
                      \n\n\nNow we need to wire up that button to actually add a new item to our to-do list.\n\n\n\nAnd just like that, we\u2019ve got a to-do list! That\u2019s it! Just never close your application and you\u2019ll remember everything. Okay, that\u2019s not very practical. You need to have some way of storing your to-do items until the next time you open up the application.\n\nStoring Data\n\nYou\u2019ve essentially got 4 different ways that you can store data:\n\n\n\tUsing the local database. AIR comes with SQLLite built in. That means you can create tables and insert, update and select data from that database just like on a web server.\n\tUsing the file system. You can also create files on the local machine. You have access to a few folders on the local system such as the documents folder and the desktop.\n\tUsing EcryptedLocalStore. I like using the EcryptedLocalStore because it allows you to easily save key/value pairs and have that information encrypted. All this within just a couple lines of code.\n\tSending the data to a remote API. Our to-do list could sync up with Remember the Milk, for example.\n\n\nTo demonstrate some persistence, we\u2019ll use the file system to store our files. In addition, we\u2019ll let the user specify where the file should be saved. This way, we can create multiple to-do lists, keeping them separate and organized.\n\nThe application is now broken down into 4 basic tasks:\n\n\n\tLoad data from the file system.\n\tPerform any interface bindings.\n\tManage creating and deleting items from the list.\n\tSave any changes to the list back to the file system.\n\n\nLoading in data from the file system\n\nWhen the application starts up, we\u2019ll prompt the user to select a file or specify a new to-do list. Within AIR, there are 3 main file objects: File, FileMode, and FileStream. File handles file and path names, FileMode is used as a parameter for the FileStream to specify whether the file should be read-only or for write access. The FileStream object handles all the read/write activity.\n\nThe File object has a number of shortcuts to default paths like the documents folder, the desktop, or even the application store. In this case, we\u2019ll specify the documents folder as the default location and then use the browseForSave method to prompt the user to specify a new or existing file. If the user specifies an existing file, they\u2019ll be asked whether they want to overwrite it.\n\nvar store = air.File.documentsDirectory;\nvar fileStream = new air.FileStream();\nstore.browseForSave(\"Choose To-do List\");\n\nThen we add an event listener for when the user has selected a file. 
When the file is selected, we check to see if the file exists and if it does, read in the contents, splitting the file on new lines and creating our list items within the interface.\n\nstore.addEventListener(air.Event.SELECT, fileSelected);\nfunction fileSelected()\n{\n\tair.trace(store.nativePath);\n\t// load in any stored data\n\tvar byteData = new air.ByteArray();\n\tif(store.exists)\n\t{\n\t\tfileStream.open(store, air.FileMode.READ);\n\t\tfileStream.readBytes(byteData, 0, store.size);\n\t\tfileStream.close();\n\n\t\tif(byteData.length > 0)\n\t\t{\n\t\t\tvar s = byteData.readUTFBytes(byteData.length);\n\t\t\toldlist = s.split(\u201c\\r\\n\u201d);\n\n\t\t\t// create todolist items\n\t\t\tfor(var i=0; i < oldlist.length; i++)\n\t\t\t{\n\t\t\t\tcreateItem(oldlist[i], (new Date()).getTime() + i );\n\t\t\t}\n\t\t}\n\t}\n}\n\nPerform Interface Bindings\n\nThis is similar to before where we set the click event on the Add button but we\u2019ve moved the code to save the list into a separate function.\n\n$('#add').click(function(){\n\t\tvar t = $('#text').val();\n\t\tif(t){\n\t\t\t// create an ID using the time\n\t\t\tcreateItem(t, (new Date()).getTime() );\n\t\t}\n})\n\nManage creating and deleting items from the list\n\nThe list management is now in its own function, similar to before but with some extra information to identify list items and with calls to save our list after each change.\n\nfunction createItem(t, id)\n{\n\tif(t.length == 0) return;\n\t// add it to the todo list\n\ttodolist[id] = t;\n\t// use DOM methods to create the new list item\n\tvar li = document.createElement('li');\n\t// the extra space at the end creates a buffer between the text\n\t// and the delete link we're about to add\n\tli.appendChild(document.createTextNode(t + ' '));\n\t// create the delete link\n\tvar del = document.createElement('a');\n\t// this makes it a true link. I feel dirty doing this.\n\tdel.setAttribute('href', '#');\n\tdel.addEventListener('click', function(evt){\n\t\tvar id = this.id.substr(1);\n\t\tdelete todolist[id]; // remove the item from the list\n\t\tthis.parentNode.parentNode.removeChild(this.parentNode);\n\t\tsaveList();\n\t});\n\tdel.appendChild(document.createTextNode('[del]'));\n\tdel.id = 'd' + id;\n\tli.appendChild(del);\n\t// append everything to the list\n\t$('#list').append(li);\n\t//reset the text box\n\t$('#text').val('');\n\tsaveList();\n}\n\nSave changes to the file system\n\nAny time a change is made to the list, we update the file. The file will always reflect the current state of the list and we\u2019ll never have to click a save button. It just iterates through the list, adding a new line to each one.\n\nfunction saveList(){\n\tif(store.isDirectory) return;\n\tvar packet = '';\n\tfor(var i in todolist)\n\t{\n\t\tpacket += todolist[i] + '\\r\\n';\n\t}\n\tvar bytes = new air.ByteArray();\n\tbytes.writeUTFBytes(packet);\n\tfileStream.open(store, air.FileMode.WRITE);\n\tfileStream.writeBytes(bytes, 0, bytes.length);\n\tfileStream.close();\n}\n\nOne important thing to mention here is that we check if the store is a directory first. The reason we do this goes back to our browseForSave call. If the user cancels the dialog without selecting a file first, then the store points to the documentsDirectory that we set it to initially. Since we haven\u2019t specified a file, there\u2019s no place to save the list.\n\nHopefully by this point, you\u2019ve been thinking of some cool ways to pimp out your list. 
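Here's one idea: rather than prompting for a file, the EncryptedLocalStore mentioned earlier could hold the whole list as a single encrypted key/value pair. This is only a sketch, assuming the same todolist object and createItem function from above and the standard air.* aliases:

// Sketch only: swap the file-based persistence for EncryptedLocalStore.
function saveListEncrypted() {
	var packet = '';
	for (var i in todolist) {
		packet += todolist[i] + '\r\n';
	}
	var bytes = new air.ByteArray();
	bytes.writeUTFBytes(packet);
	air.EncryptedLocalStore.setItem('todolist', bytes);
}

function loadListEncrypted() {
	var bytes = air.EncryptedLocalStore.getItem('todolist');
	if (bytes && bytes.length > 0) {
		var items = bytes.readUTFBytes(bytes.length).split('\r\n');
		for (var i = 0; i < items.length; i++) {
			if (items[i]) {
				createItem(items[i], (new Date()).getTime() + i);
			}
		}
	}
}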
Now we need to package this up so that we can let other people use it, too.\n\nCreating a Package\n\nNow that we\u2019ve created our application, we need to package it up so that we can distribute it. This is a two step process. The first step is to create a code signing certificate (or you can pay for one from Thawte which will help authenticate you as an AIR application developer).\n\nTo create a self-signed certificate, run the following command. This will create a PFX file that you\u2019ll use to sign your application.\n\nadt -certificate -cn todo24ways 1024-RSA todo24ways.pfx mypassword\n\nAfter you\u2019ve done that, you\u2019ll need to create the package with the certificate\n\nadt -package -storetype pkcs12 -keystore todo24ways.pfx todo24ways.air application.xml .\n\nThe important part to mention here is the period at the end of the command. We\u2019re telling it to package up all files in the current directory.\n\nAfter that, just run the AIR file, which will install your application and run it.\n\nImportant things to remember about AIR\n\nWhen developing an HTML application, the rendering engine is Webkit. You\u2019ll thank your lucky stars that you aren\u2019t struggling with cross-browser issues. (My personal favourites are multiple backgrounds and border radius!)\n\nBe mindful of memory leaks. Things like Ajax calls and event binding can cause applications to slowly leak memory over time. Web pages are normally short lived but desktop applications are often open for hours, if not days, and you may find your little desktop application taking up more memory than anything else on your machine!\n\nThe WebKit runtime itself can also be a memory hog, usually taking about 15MB just for itself. If you create multiple HTML windows, it\u2019ll add another 15MB to your memory footprint. Our little to-do list application shouldn\u2019t be much of a concern, though.\n\nThe other important thing to remember is that you\u2019re still essentially running within a Flash environment. While you probably won\u2019t notice this working in small applications, the moment you need to move to multiple windows or need to accomplish stuff beyond what HTML and JavaScript can give you, the need to understand some of the Flash-based elements will become more important.\n\nLastly, the other thing to remember is that HTML links will load within the AIR application. If you want a link to open in the users web browser, you\u2019ll need to capture that event and handle it on your own. The following code takes the HREF from a clicked link and opens it in the default web browser.\n\nair.navigateToURL(new air.URLRequest(this.href));\n\nOnly the beginning\n\nOf course, this is only the beginning of what you can do with Adobe AIR. You don\u2019t have the same level of control as building a native desktop application, such as being able to launch other applications, but you do have more control than what you could have within a web application. Check out the Adobe AIR Developer Center for HTML and Ajax for tutorials and other resources.\n\nNow, go forth and create your desktop applications and hopefully you finish all your shopping before Christmas!\n\nDownload the example files.", "year": "2007", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2007-12-19T00:00:00+00:00", "url": "https://24ways.org/2007/christmas-is-in-the-air/", "topic": "code"} {"rowid": 215, "title": "Teach the CLI to Talk Back", "contents": "The CLI is a daunting tool. 
It\u2019s quick, powerful, but it\u2019s also incredibly easy to screw things up in \u2013 either with a mistyped command, or a correctly typed command used at the wrong moment. This puts a lot of people off using it, but it doesn\u2019t have to be this way.\nIf you\u2019ve ever interacted with Slack\u2019s Slackbot to set a reminder or ask a question, you\u2019re basically using a command line interface, but it feels more like having a conversation. (My favourite Slack app is Lunch Train which helps with the thankless task of herding colleagues to a particular lunch venue on time.)\nSame goes with voice-operated assistants like Alexa, Siri and Google Home. There are even games, like Lifeline, where you interact with a stranded astronaut via pseudo SMS, and KOMRAD where you chat with a Soviet AI.\nI\u2019m not aiming to build an AI here \u2013 my aspirations are a little more down to earth. What I\u2019d like is to make the CLI a friendlier, more forgiving, and more intuitive tool for new or reluctant users. I want to teach it to talk back.\nInteractive command lines in the wild\nIf you\u2019ve used dev tools in the command line, you\u2019ve probably already used an interactive prompt \u2013 something that asks you questions and responds based on your answers. Here are some examples:\nYeoman\nIf you have Yeoman globally installed, running yo will start a command prompt.\n\nThe prompt asks you what you\u2019d like to do, and gives you options with how to proceed. Seasoned users will run specific commands for these options rather than go through this prompt, but it\u2019s a nice way to start someone off with using the tool.\nnpm\nIf you\u2019re a Node.js developer, you\u2019re probably familiar with typing npm init to initialise a project. This brings up prompts that will populate a package.json manifest file for that project.\n\nThe alternative would be to expect the user to craft their own package.json, which is more error-prone since it\u2019s in JSON format, so something as trivial as an extraneous comma can throw an error.\nSnyk\nSnyk is a dev tool that checks for known vulnerabilities in your dependencies. Running snyk wizard in the CLI brings up a list of all the known vulnerabilities, and gives you options on how to deal with it \u2013 such as patching the issue, applying a fix by upgrading the problematic dependency, or ignoring the issue (you are then prompted for a reason).\n\nThese decisions get mapped to the manifest and a .snyk file, and committed into the repo so that the settings are the same for everyone who uses that project.\nI work at Snyk, and running the wizard is what made me think about building my own personal assistant in the command line to help me with some boring, repetitive tasks.\nWriting your own\nSomething I do a lot is add bookmarks to styleguides.io \u2013 I pull down the entire repo, copy and paste a template YAML file, and edit to contents. Sometimes I get it wrong and break the site. 
So I\u2019ve been putting together a tool to help me add bookmarks.\nIt\u2019s called bookmarkbot \u2013 it\u2019s a personal assistant squirrel called Mark who will collect and bury your bookmarks for safekeeping.*\n\n*Fortunately, this metaphor also gives me a charming excuse for any situation where bookmarks sometimes get lost \u2013 it\u2019s not my poorly-written code, honest, it\u2019s just being realistic because sometimes squirrels forget where they buried things!\nWhen you run bookmarkbot, it will ask you for some information, and save that information as a Markdown file in YAML format.\nFor this demo, I\u2019m going to use a Node.js package called inquirer, which is a well supported tool for creating command line prompts. I like it because it has a bunch of different question types; from input, which asks for some text back, confirm which expects a yes/no response, or a list which gives you a set of options to choose from. You can even nest questions, Choose Your Own Adventure style.\nPrerequisites\n\nNode.js\nnpm\nRubyGems (Only if you want to go as far as serving a static site for your bookmarks, and you want to use Jekyll for it)\n\nDisclaimer\nBear in mind that this is a really simplified walkthrough. It doesn\u2019t have any error states, and it doesn\u2019t handle the situation where we save a file with the same name. But it gets you in a good place to start building out your tool.\nLet\u2019s go!\nCreate a new folder wherever you keep your projects, and give it an awesome name (I\u2019ve called mine bookmarks and put it in the Sites directory because I\u2019m unimaginative). Now cd to that directory. \ncd Sites/bookmarks\nLet\u2019s use that example I gave earlier, the trusty npm init.\nnpm init\nPop in the information you\u2019d like to provide, or hit ENTER to skip through and save the defaults. Your directory should now have a package.json file in it. Now let\u2019s install some of the dependencies we\u2019ll need.\nnpm install --save inquirer\nnpm install --save slugify\nNext, add the following snippet to your package.json to tell it to run this file when you run npm start.\n\"scripts\": {\n \u2026\n \"start\": \"node index.js\"\n}\nThat index.js file doesn\u2019t exist yet, so let\u2019s create it in the root of our folder, and add the following:\n// Packages we need\nvar fs = require('fs'); // Creates our file (part of Node.js so doesn't need installing)\nvar inquirer = require('inquirer'); // The engine for our questions prompt\nvar slugify = require('slugify'); // Will turn a string into a usable filename\n\n// The questions\nvar questions = [\n {\n type: 'input',\n name: 'name',\n message: 'What is your name?',\n },\n];\n\n// The questions prompt\nfunction askQuestions() {\n\n // Ask questions\n inquirer.prompt(questions).then(answers => {\n\n // Things we'll need to generate the output\n var name = answers.name;\n\n // Finished asking questions, show the output\n console.log('Hello ' + name + '!');\n\n });\n\n}\n\n// Kick off the questions prompt\naskQuestions();\nThis is just some barebones where we\u2019re including the inquirer package we installed earlier. I\u2019ve stored the questions in a variable, and the askQuestions function will prompt the user for their name, and then print \u201cHello \u201d in the console.\nEnough setup, let\u2019s see some magic. 
Save the file, go back to the command line and run npm start.\n\nExtending what we\u2019ve learnt\nAt the moment, we\u2019re just saving a name to a file, which isn\u2019t really achieving our goal of saving bookmarks. We don\u2019t want our tool to forget our information every time we talk to it \u2013 we need to save it somewhere. So I\u2019m going to add a little function to write the output to a file.\nSaving to a file\nCreate a folder in your project\u2019s directory called _bookmarks. This is where the bookmarks will be saved.\nI\u2019ve replaced my questions array, and instead of asking for a name, I\u2019ve extended out the questions, asking to be provided with a link and title (as a regular input type), a list of tags (using inquirer\u2019s checkbox type), and finally a description, again, using the input type.\nSo this is how my code looks now:\n// Packages we need\nvar fs = require('fs'); // Creates our file\nvar inquirer = require('inquirer'); // The engine for our questions prompt\nvar slugify = require('slugify'); // Will turn a string into a usable filename\n\n// The questions\nvar questions = [\n {\n type: 'input',\n name: 'link',\n message: 'What is the url?',\n },\n {\n type: 'input',\n name: 'title',\n message: 'What is the title?',\n },\n {\n type: 'checkbox',\n name: 'tags',\n message: 'Would you like me to add any tags?',\n choices: [\n { name: 'frontend' },\n { name: 'backend' },\n { name: 'security' },\n { name: 'design' },\n { name: 'process' },\n { name: 'business' },\n ],\n },\n {\n type: 'input',\n name: 'description',\n message: 'How about a description?',\n },\n];\n\n// The questions prompt\nfunction askQuestions() {\n\n // Say hello\n console.log('\ud83d\udc3f Oh, hello! Found something you want me to bookmark?\\n');\n\n // Ask questions\n inquirer.prompt(questions).then((answers) => {\n\n // Things we'll need to generate the output\n var title = answers.title;\n var link = answers.link;\n var tags = answers.tags + '';\n var description = answers.description;\n var output = '---\\n' +\n 'title: \"' + title + '\"\\n' +\n 'link: \"' + link + '\"\\n' +\n 'tags: [' + tags + ']\\n' +\n '---\\n' + description + '\\n';\n\n // Finished asking questions, show the output\n console.log('\\n\ud83d\udc3f All done! Here is what I\\'ve written down:\\n');\n console.log(output);\n\n // Things we'll need to generate the filename\n var slug = slugify(title);\n var filename = '_bookmarks/' + slug + '.md';\n\n // Write the file\n fs.writeFile(filename, output, function () {\n console.log('\\n\ud83d\udc3f Great! I have saved your bookmark to ' + filename);\n });\n\n });\n\n}\n\n// Kick off the questions prompt\naskQuestions();\nThe output is formatted into YAML metadata as a Markdown file, which will allow us to turn it into a static HTML file using a build tool later. Run npm start again and have a look at the file it outputs.\n\nGetting confirmation\nBefore the user makes critical changes, it\u2019s good to verify those changes first. We\u2019re going to add a confirmation step to our tool, before writing the file. 
More seasoned CLI users may favour speed over a \u201chey, can you wait a sec and just check this is all ok\u201d step, but I always think it\u2019s worth adding one so you can occasionally save someone\u2019s butt.\nSo, underneath our questions array, let\u2019s add a confirmation array.\n// Packages we need\n\u2026\n// The questions\n\u2026\n\n// Confirmation questions\nvar confirm = [\n {\n type: 'confirm',\n name: 'confirm',\n message: 'Does this look good?',\n },\n];\n\n// The questions prompt\n\u2026\n\nAs we\u2019re adding the confirm step before the file gets written, we\u2019ll need to add the following inside the askQuestions function:\n// The questions prompt\nfunction askQuestions() {\n // Say hello\n \u2026\n // Ask questions\n inquirer.prompt(questions).then((answers) => {\n \u2026\n // Things we'll need to generate the output\n \u2026\n // Finished asking questions, show the output\n \u2026\n\n // Confirm output is correct\n inquirer.prompt(confirm).then(answers => {\n\n // Things we'll need to generate the filename\n var slug = slugify(title);\n var filename = '_bookmarks/' + slug + '.md';\n\n if (answers.confirm) {\n // Save output into file\n fs.writeFile(filename, output, function () {\n console.log('\\n\ud83d\udc3f Great! I have saved your bookmark to ' +\n filename);\n });\n } else {\n // Ask the questions again\n console.log('\\n\ud83d\udc3f Oops, let\\'s try again!\\n');\n askQuestions();\n }\n\n });\n\n });\n}\n\n// Kick off the questions prompt\naskQuestions();\nNow run npm start and give it a go!\n\nTyping y will write the file, and n will take you back to the start. Ideally, I\u2019d store the answers already given as defaults so the user doesn\u2019t have to start from scratch, but I want to keep this demo simple.\nServing the files\nNow that your bookmarking tool is successfully saving formatted Markdown files to a folder, the next step is to serve those files in a way that lets you share them online. The easiest way to do this is to use a static-site generator to convert your YAML files into HTML, and pop them all on one page. Now, you\u2019ve got a few options here and I don\u2019t want to force you down any particular path, as there are plenty out there \u2013 it\u2019s just a case of using the one you\u2019re most comfortable with.\nI personally favour Jekyll because of its tight integration with GitHub Pages \u2013 I don\u2019t want to mess around with hosting and deployment, so it\u2019s really handy to have my bookmarks publish themselves on my site as soon as I commit and push them using Git.\nI\u2019ll give you a very brief run-through of how I\u2019m doing this with bookmarkbot, but I recommend you read my Get Started With GitHub Pages (Plus Bonus Jekyll) guide if you\u2019re unfamiliar with them, because I\u2019ll be glossing over some bits that are already covered in there.\nSetting up a build tool\nIf you haven\u2019t already, install Jekyll and Bundler globally through RubyGems. Jekyll is our static-site generator, and Bundler is what we use to install Ruby dependencies.\ngem install jekyll bundler\nIn my project folder, I\u2019m going to run the following which will install the Jekyll files we\u2019ll need to build our listing page. I\u2019m using --force, otherwise it will complain that the directory isn\u2019t empty.\njekyll new . --force\nIf you check your project folder, you\u2019ll see a bunch of new files. Now run the following to start the server:\nbundle exec jekyll serve\nThis will build a new directory called _site. 
This is where your static HTML files have been generated. Don\u2019t touch anything in this folder because it will get overwritten the next time you build.\nNow that serve is running, go to http://127.0.0.1:4000/ and you\u2019ll see the default Jekyll page and know that things are set up right. Now, instead, we want to see our list of bookmarks that are saved in the _bookmarks directory (make sure you\u2019ve got a few saved). So let\u2019s get that set up next.\nOpen up the _config.yml file that Jekyll added earlier. In here, we\u2019re going to tell it about our bookmarks. Replace everything in your _config.yml file with the following:\ntitle: My Bookmarks\ndescription: These are some of my favourite articles about the web.\nmarkdown: kramdown\nbaseurl: /bookmarks # This needs to be the same name as whatever you call your repo on GitHub.\ncollections:\n - bookmarks\nThis will make Jekyll aware of our _bookmarks folder so that we can call it later. Next, create a new directory and file at _layouts/home.html and paste in the following.\n\n\n\n \n {{site.title}}\n \n\n\n\n\n

<body>

	<h1>{{site.title}}</h1>

	<p>{{site.description}}</p>

	<ul>
		{% for bookmark in site.bookmarks %}
		<li>
			<h2><a href="{{bookmark.link}}">{{bookmark.title}}</a></h2>
			{{bookmark.content}}
			{% if bookmark.tags %}
			<ul>
				{% for tags in bookmark.tags %}
				<li>{{tags}}</li>
				{% endfor %}
			</ul>
			{% endif %}
		</li>
		{% endfor %}
	</ul>

</body>
</html>
                      \n\n\n\n\nRestart Jekyll for your config changes to kick in, and go to the url it provides you (probably http://127.0.0.1:4000/bookmarks, unless you gave something different as your baseurl).\n\nIt\u2019s a decent start \u2013 there\u2019s a lot more we can do in this area but now we\u2019ve got a nice list of all our bookmarks, let\u2019s get it online!\nIf you want to use GitHub Pages to host your files, your first step is to push your project to GitHub. Go to your repository and click \u201csettings\u201d. Scroll down to the section labelled \u201cGitHub Pages\u201d, and from here you can enable it. Select your master branch, and it will provide you with a url to view your published pages.\n\nWhat next?\nNow that you\u2019ve got a framework in place for publishing bookmarks, you can really go to town on your listing page and make it your own. First thing you\u2019ll probably want to do is add some CSS, then when you\u2019ve added a bunch of bookmarks, you\u2019ll probably want to have some filtering in place for the tags, perhaps extend the types of questions that you ask to include an image (if you\u2019re feeling extra-fancy, you could just ask for a url and pull in metadata from the site itself). Maybe you\u2019ve got an idea that doesn\u2019t involve bookmarks at all.\nYou could use what you\u2019ve learnt to build a place where you can share quotes, a list of your favourite restaurants, or even Christmas gift ideas.\nHere\u2019s one I made earlier\n\nMy demo, bookmarkbot, is on GitHub, and I\u2019ve reused a lot of the code from styleguides.io. Feel free to grab bits of code from there, and do share what you end up making!", "year": "2017", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2017-12-11T00:00:00+00:00", "url": "https://24ways.org/2017/teach-the-cli-to-talk-back/", "topic": "code"} {"rowid": 126, "title": "Intricate Fluid Layouts in Three Easy Steps", "contents": "The Year of the Script may have drawn attention away from CSS but building fluid, multi-column, cross-browser CSS layouts can still be as unpleasant as a lump of coal. Read on for a worry-free approach in three quick steps.\n\nThe layout system I developed, YUI Grids CSS, has three components. They can be used together as we\u2019ll see, or independently.\n\nThe Three Easy Steps\n\n\n\tChoose fluid or fixed layout, and choose the width (in percents or pixels) of the page.\n\tChoose the size, orientation, and source-order of the main and secondary blocks of content.\n\tChoose the number of columns and how they distribute (for example 50%-50% or 25%-75%), using stackable and nestable grid structures.\n\n\nThe Setup\n\nThere are two prerequisites: We need to normalize the size of an em and opt into the browser rendering engine\u2019s Strict Mode.\n\nEms are a superior unit of measure for our case because they represent the current font size and grow as the user increases their font size setting. This flexibility\u2014the container growing with the user\u2019s wishes\u2014means larger text doesn\u2019t get crammed into an unresponsive container. We\u2019ll use YUI Fonts CSS to set the base size because it provides consistent-yet-adaptive font-sizes while preserving user control.\n\nThe second prerequisite is to opt into Strict Mode (more info on rendering modes) by declaring a Doctype complete with URI. You can choose XHTML or HTML, and Transitional or Strict. 
I prefer HTML 4.01 Strict, which looks like this:\n\n\n\nIncluding the CSS\n\nA single small CSS file powers a nearly-infinite number of layouts thanks to a recursive system and the interplay between the three distinct components. You could prune to a particular layout\u2019s specific needs, but why bother when the complete file weighs scarcely 1.8kb uncompressed? Compressed, YUI Fonts and YUI Grids combine for a miniscule 0.9kb over the wire.\n\nYou could save an HTTP request by concatenating the two CSS files, or by adding their contents to your own CSS, but I\u2019ll keep them separate for now:\n\n\n\n\nExample: The Setup\n\nNow we\u2019re ready to build some layouts.\n\nStep 1: Choose Fluid or Fixed Layout\n\nChoose between preset widths of 750px, 950px, and 100% by giving a document-wrapping div an ID of doc, doc2, or doc3. These options cover most use cases, but it\u2019s easy to define a custom fixed width.\n\nThe fluid 100% grid (doc3) is what I\u2019ve been using almost exclusively since it was introduced in the last YUI released.\n\n\n\t
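In the markup, that's nothing more than a single wrapping div around your page content. A minimal sketch:

<body>
	<div id="doc3"><!-- your page content --></div>
</body>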
                      \n\n\nAll pages are centered within the viewport, and grow with font size. The 100% width page (doc3) preserves 10px of breathing room via left and right margins. If you prefer your content flush to the viewport, just add doc3 {margin:auto} to your CSS.\n\nRegardless of what you choose in the other two steps, you can always toggle between these widths and behaviors by simply swapping the ID value. It\u2019s really that simple.\n\nExample: 100% fluid layout\n\nStep 2: Choose a Template Preset\n\nThis is perhaps the most frequently omitted step (they\u2019re all optional), but I use it nearly every time. In a source-order-independent way (good for accessibility and SEO), \u201cTemplate Presets\u201d provide commonly used template widths compatible with ad-unit dimension standards defined by the Interactive Advertising Bureau, an industry association.\n\nChoose between the six Template Presets (.yui-t1 through .yui-t6) by setting the class value on the document-wrapping div established in Step 1. Most frequently I use yui-t3, which puts the narrow secondary block on the left and makes it 300px wide. \n\n\n\t
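Choosing a preset is then just a class value on that same wrapping div. Again, a sketch:

<body>
	<div id="doc3" class="yui-t3"><!-- your page content --></div>
</body>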
                      \n\n\nThe Template Presets control two \u201cblocks\u201d of content, which are defined by two divs, each with yui-b (\u201cb\u201d for \u201cblock\u201d) class values. Template Presets describe the width and orientation of the secondary block; the main block will take up the rest of the space.\n\n\n\t
<div id="doc3" class="yui-t3">
	<div class="yui-b"></div>
	<div class="yui-b"></div>
</div>
                      \n\n\nUse a wrapping div with an ID of yui-main to structurally indicate which block is the main block. This wrapper\u2014not the source order\u2014identifies the main block.\n\n\n\t
<div id="doc3" class="yui-t3">
	<div id="yui-main">
		<div class="yui-b"></div>
	</div>
	<div class="yui-b"></div>
</div>
                      \n\n\nExample: Main and secondary blocks sized and oriented with .yui-t3 Template Preset\n\nAgain, regardless of what values you choose in the other steps, you can always toggle between these Template Presets by toggling the class value of your document-wrapping div. It\u2019s really that simple.\n\nStep 3: Nest and Stack Grid Structures.\n\nThe bulk of the power of the system is in this third step. The key is that columns are built by parents telling children how to behave. By default, two children each consume half of their parent\u2019s area. Put two units inside a grid structure, and they will sit side-by-side, and they will each take up half the space. Nest this structure and two columns become four. Stack them for rows of columns.\n\nAn Even Number of Columns\n\nThe default behavior creates two evenly-distributed columns. It\u2019s easy. Define one parent grid with .yui-g (\u201cg\u201d for grid) and two child units with .yui-u (\u201cu\u201d for unit). The code looks like this:\n\n
<div class="yui-g">
	<div class="yui-u first"></div>
	<div class="yui-u"></div>
</div>
                      \n\nBe sure to indicate the \u201cfirst\u201c unit because the :first-child pseudo-class selector isn\u2019t supported across all A-grade browsers. It\u2019s unfortunate we need to add this, but luckily it\u2019s not out of place in the markup layer since it is structural information.\n\nExample: Two evenly-distributed columns in the main content block\n\nAn Odd Number of Columns\n\nThe default system does not work for an odd number of columns without using the included \u201cSpecial Grids\u201d classes. To create three evenly distributed columns, use the \u201cyui-gb\u201c Special Grid:\n\n
<div class="yui-gb">
	<div class="yui-u first"></div>
	<div class="yui-u"></div>
	<div class="yui-u"></div>
</div>
                      \n\nExample: Three evenly distributed columns in the main content block\n\nUneven Column Distribution\n\nSpecial Grids are also used for unevenly distributed column widths. For example, .yui-ge tells the first unit (column) to take up 75% of the parent\u2019s space and the other unit to take just 25%.\n\n
<div class="yui-ge">
	<div class="yui-u first"></div>
	<div class="yui-u"></div>
</div>
                      \n\nExample: Two columns in the main content block split 75%-25%\n\nPutting It All Together\n\nStart with a full-width fluid page (div#doc3). Make the secondary block 180px wide on the right (div.yui-t4). Create three rows of columns: Three evenly distributed columns in the first row (div.yui-gb), two uneven columns (66%-33%) in the second row (div.yui-gc), and two evenly distributed columns in the thrid row.\n\n\n\t\n\t
<div id="doc3" class="yui-t4">
	<div id="yui-main">
		<div class="yui-b">
			<div class="yui-gb">
				<div class="yui-u first"></div>
				<div class="yui-u"></div>
				<div class="yui-u"></div>
			</div>
			<div class="yui-gc">
				<div class="yui-u first"></div>
				<div class="yui-u"></div>
			</div>
			<div class="yui-g">
				<div class="yui-u first"></div>
				<div class="yui-u"></div>
			</div>
		</div>
	</div>
	<div class="yui-b"></div>
</div>
                      \n\n\nExample: A complex layout.\n\nWasn\u2019t that easy? Now that you know the three \u201clevers\u201d of YUI Grids CSS, you\u2019ll be creating headache-free fluid layouts faster than you can say \u201cPeace on Earth\u201d.", "year": "2006", "author": "Nate Koechley", "author_slug": "natekoechley", "published": "2006-12-20T00:00:00+00:00", "url": "https://24ways.org/2006/intricate-fluid-layouts/", "topic": "code"} {"rowid": 327, "title": "Improving Form Accessibility with DOM Scripting", "contents": "The form label element is an incredibly useful little element \u2013 it lets you link the form field unquestionably with the descriptive label text that sits alongside or above it. This is a very useful feature for people using screen readers, but there are some problems with this element.\n\nWhat happens if you have one piece of data that, for various reasons (validation, the way your data is collected/stored etc), needs to be collected using several form elements?\n\nThe classic example is date of birth \u2013 ideally, you\u2019ll ask for the date of birth once but you may have three inputs, one each for day, month and year, that you also need to provide hints about the format required. The problem is that to be truly accessible you need to label each field. So you end up needing something to say \u201cthis is a date of birth\u201d, \u201cthis is the day field\u201d, \u201cthis is the month field\u201d and \u201cthis is the day field\u201d. Seems like overkill, doesn\u2019t it? And it can uglify a form no end.\n\nThere are various ways that you can approach it (and I think I\u2019ve seen them all). Some people omit the label and rely on the title attribute to help the user through; others put text in a label but make the text 1 pixel high and merging in to the background so that screen readers can still get that information. The most common method, though, is simply to set the label to not display at all using the CSS display:none property/value pairing (a technique which, for the time being, seems to work on most screen readers). But perhaps we can do more with this?\n\nThe technique I am suggesting as another alternative is as follows (here comes the pseudo-code):\n\n\n\tStart with a totally valid and accessible form\n\tEnsure that each form input has a label that is linked to its related form control\n\tApply a class to any label that you don\u2019t want to be visible (for example superfluous)\n\n\nThen, through the magic of unobtrusive JavaScript/the DOM, manipulate the page as follows once the page has loaded:\n\n\n\tFind all the label elements that are marked as superfluous and hide them\n\tFind out what input element each of these label elements is related to\n\tThen apply a hint about formatting required for input (gleaned from the original, now-hidden label text) \u2013 add it to the form input as default text\n\tFinally, add in a behaviour that clears or selects the default text (as you choose)\n\n\nSo, here\u2019s the theory put into practice \u2013 a date of birth, grouped using a fieldset, and with the behaviours added in using DOM, and here\u2019s the JavaScript that does the heavy lifting. \n\nBut why not just use display:none? As demonstrated at Juicy Studio, display:none seems to work quite well for hiding label elements. So why use a sledge hammer to crack a nut? 
In all honesty, this is something of an experiment, but consider the following:\n\n\n\tUsing the DOM, you can add extra levels of help, potentially across a whole form \u2013 or even range of forms \u2013 without necessarily increasing your markup (it goes beyond simply hiding labels)\n\tScreen readers today may identify a label that is set not to display, but they may not in the future \u2013 this might provide a way around\n\tBy expanding this technique above, it might be possible to visually change the parent container that groups these items \u2013 in this case, a fieldset and legend, which are notoriously difficult to style consistently across different browsers \u2013 while still retaining the underlying semantic/logical structure\n\n\nWell, it\u2019s an idea to think about at least. How is it for you? How else might you use DOM scripting to improve the accessiblity or usability of your forms?", "year": "2005", "author": "Ian Lloyd", "author_slug": "ianlloyd", "published": "2005-12-03T00:00:00+00:00", "url": "https://24ways.org/2005/improving-form-accessibility-with-dom-scripting/", "topic": "code"} {"rowid": 184, "title": "Spruce It Up", "contents": "The landscape of web typography is changing quickly these days. We\u2019ve gone from the wild west days of sIFR to Cuf\u00f3n to finally seeing font embedding seeing wide spread adoption by browser developers (and soon web designers) with @font-face. For those who\u2019ve felt limited by the typographic possibilities before, this has been a good year.\n\nAs Mark Boulton has so eloquently elucidated, @font-face embedding doesn\u2019t come without its drawbacks. Font files can be quite large and FOUT\u2014that nasty flash of unstyled text\u2014can be a distraction for users.\n\nData URIs\n\nWe can battle FOUT by using Data URIs. A Data URI allows the font to be encoded right into the CSS file. When the font comes with the CSS, the flash of unstyled text is mitigated. No extra HTTP requests are required. \n\nDon\u2019t be a grinch, though. Sending hundreds of kilobytes down the pipe still isn\u2019t great. Sometimes, all we want to do is spruce up our site with a little typographic sugar. \n\nBe Selective\n\nDan Cederholm\u2019s SimpleBits is an attractive site. \n\n\n\nTake a look at the ampersand within the header of his site. It\u2019s the lovely (and free) Goudy Bookletter 1911 available from The League of Movable Type. The Opentype format is a respectable 28KB. Nothing too crazy but hold on here. Mr. Cederholm is only using the ampersand! Ouch. That\u2019s a lot of bandwidth just for one character.\n\nCan we optimize a font like we can an image? Yes. Image optimization essentially works by removing unnecessary image data such as colour data, hidden comments or using compression algorithms. How do you remove unnecessary information from a font? Subsetting. \n\nIf you\u2019re the adventurous type, grab a copy of FontForge, which is an open source font editing tool. You can open the font, view and edit any of the glyphs and then re-generate the font. The interface is a little clunky but you\u2019ll be able to select any character you don\u2019t want and then cut the glyphs. Re-generate your font and you\u2019ve now got a smaller file. \n\n\n\nThere are certainly more optimizations that can also be made such as removing hinting and kerning information. Keep in mind that removing this information may affect how well the type renders.\n\nAt this time of year, though, I\u2019m sure you\u2019re quite busy. 
Save yourself some time and head on over to the Font Squirrel Font Generator.\n\n\n\nThe Font Generator is extremely handy and allows for a number of optimizations and cross-platform options to be generated instantly. Select the font from your local system\u2014make sure that you are only using properly licensed fonts! \n\nIn this particular case, we only want the ampersand. Click on Subset Fonts which will open up a new menu. Unselect any preselected sets and enter the ampersand into the Single Characters text box. \n\nGenerate your font and what are you left with? 3KB. \n\n\n\nThe Font Generator even generates a base64 encoded data URI stylesheet to be imported easily into your project.\n\nCheck out the Demo page. (This demo won\u2019t work in Internet Explorer as we\u2019re only demonstrating the Data URI font embedding and not using the EOT file format that IE requires.) \n\nNo Unnecessary Additives\n\nIf you peeked under the hood of that demo, did you notice something interesting? There\u2019s no around the ampersand. The great thing about this is that we can take advantage of the font stack\u2019s natural ability to switch to a fallback font when a character isn\u2019t available.\n\nJust like that, we\u2019ve managed to spruce up our page with a little typographic sugar without having to put on too much weight.", "year": "2009", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2009-12-19T00:00:00+00:00", "url": "https://24ways.org/2009/spruce-it-up/", "topic": "code"} {"rowid": 192, "title": "Cleaner Code with CSS3 Selectors", "contents": "The parts of CSS3 that seem to grab the most column inches on blogs and in articles are the shiny bits. Rounded corners, text shadow and new ways to achieve CSS layouts are all exciting and bring with them all kinds of possibilities for web design. However what really gets me, as a developer, excited is a bit more mundane. \n\nIn this article I\u2019m going to take a look at some of the ways our front and back-end code will be simplified by CSS3, by looking at the ways we achieve certain visual effects now in comparison to how we will achieve them in a glorious, CSS3-supported future. I\u2019m also going to demonstrate how we can use these selectors now with a little help from JavaScript \u2013 which can work out very useful if you find yourself in a situation where you can\u2019t change markup that is being output by some server-side code.\n\nThe wonder of nth-child\n\nSo why does nth-child get me so excited? Here is a really common situation, the designer would like the tables in the application to look like this:\n\n\n\nSetting every other table row to a different colour is a common way to enhance readability of long rows. The tried and tested way to implement this is by adding a class to every other row. If you are writing the markup for your table by hand this is a bit of a nuisance, and if you stick a row in the middle you have to change the rows the class is applied to. If your markup is generated by your content management system then you need to get the server-side code to add that class \u2013 if you have access to that code.\n\n\n\n\nStriping every other row - using classes\n\n\n\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
                      NameCards sentCards receivedCards written but not sent
                      Ann40284
                      Joe22729
                      Paul5352
                      Louise65650
                      \n\n\n\nView Example 1\n\nThis situation is something I deal with on almost every project, and apart from being an extra thing to do, it just isn\u2019t ideal having the server-side code squirt classes into the markup for purely presentational reasons. This is where the nth-child pseudo-class selector comes in. The server-side code creates a valid HTML table for the data, and the CSS then selects the odd rows with the following selector:\n\ntr:nth-child(odd) td {\n\tbackground-color: #86B486;\n}\n\nView Example 2\n\nThe odd and even keywords are very handy in this situation \u2013 however you can also use a multiplier here. 2n would be equivalent to the keyword \u2018odd\u2019 3n would select every third row and so on.\n\nBrowser support\n\nSadly, nth-child has pretty poor browser support. It is not supported in Internet Explorer 8 and has somewhat buggy support in some other browsers. Firefox 3.5 does have support. In some situations however, you might want to consider using JavaScript to add this support to browsers that don\u2019t have it. This can be very useful if you are dealing with a Content Management System where you have no ability to change the server-side code to add classes into the markup.\n\nI\u2019m going to use jQuery in these examples as it is very simple to use the same CSS selector used in the CSS to target elements with jQuery \u2013 however you could use any library or write your own function to do the same job. In the CSS I have added the original class selector to the nth-child selector:\n\ntr:nth-child(odd) td, tr.odd td {\n\tbackground-color: #86B486;\n}\n\nThen I am adding some jQuery to add a class to the markup once the document has loaded \u2013 using the very same nth-child selector that works for browsers that support it. \n\n \n \n\nView Example 3\n\nWe could just add a background colour to the element using jQuery, however I prefer not to mix that information into the JavaScript as if we change the colour on our table rows I would need to remember to change it both in the CSS and in the JavaScript.\n\nDoing something different with the last element\n\nSo here\u2019s another thing that we often deal with. You have a list of items all floated left with a right hand margin on each element constrained within a fixed width layout. If each element has the right margin applied the margin on the final element will cause the set to become too wide forcing that last item down to the next row as shown in the below example where I have used a grey border to indicate the fixed width.\n\n\n\nCurrently we have two ways to deal with this. We can put a negative right margin on the list, the same width as the space between the elements. This means that the extra margin on the final element fills that space and the item doesn\u2019t drop down. \n\n\n\n\nThe last item is different\n\n\n\n\t
<ul class="gallery">
	<li><img src="baubles.jpg" alt="baubles" /></li>
	<li><img src="star.jpg" alt="star" /></li>
	<li><img src="wreath.jpg" alt="wreath" /></li>
</ul>
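A sketch of that first, negative-margin approach for the gallery above (the image file names above, and the 10px gap here, are only illustrative):

ul.gallery {
	margin-right: -10px;
}

ul.gallery li {
	float: left;
	margin-right: 10px;
}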
                      \n\n\n\nView Example 4\n\nThe other solution will be to put a class on the final element and in the CSS remove the margin for this class. \n\nul.gallery li.last {\n\tmargin-right: 0;\n}\n\nThis second solution may not be easy if the content is generated from server-side code that you don\u2019t have access to change.\n\nIt could all be so different. In CSS3 we have marvellously common-sense selectors such as last-child, meaning that we can simply add rules for the last list item. \n\nul.gallery li:last-child {\n\tmargin-right: 0;\n}\n\nView Example 5\n\nThis removed the margin on the li which is the last-child of the ul with a class of gallery. No messing about sticking classes on the last item, or pushing the width of the item out wit a negative margin.\n\nIf this list of items repeated ad infinitum then you could also use nth-child for this task. Creating a rule that makes every 3rd element margin-less.\n\nul.gallery li:nth-child(3n) {\n\tmargin-right: 0;\n}\n\nView Example 6\n\n\n\nA similar example is where the designer has added borders to the bottom of each element \u2013 but the last item does not have a border or is in some other way different. Again, only a class added to the last element will save you here if you cannot rely on using the last-child selector.\n\nBrowser support for last-child\n\nThe situation for last-child is similar to that of nth-child, in that there is no support in Internet Explorer 8. However, once again it is very simple to replicate the functionality using jQuery. Adding our .last class to the last list item.\n\n$(\"ul.gallery li:last-child\").addClass(\"last\");\n\nWe could also use the nth-child selector to add the .last class to every third list item.\n\n$(\"ul.gallery li:nth-child(3n)\").addClass(\"last\");\n\nView Example 7\n\nFun with forms\n\nStyling forms can be a bit of a trial, made difficult by the fact that any CSS applied to the input element will effect text fields, submit buttons, checkboxes and radio buttons. As developers we are left adding classes to our form fields to differentiate them. In most builds all of my text fields have a simple class of text whereas I wouldn\u2019t dream of adding a class of para to every paragraph element in a document.\n\n\n\n\nSyling form fields\n\n\n\n\t

<form method=\"post\" action=\"\">
	<fieldset>
		<legend>Send your Christmas list to Santa</legend>
		<div>
			<label for=\"fName\">Name</label>
			<input type=\"text\" name=\"fName\" id=\"fName\" class=\"text\" />
		</div>
		<div>
			<label for=\"fEmail\">Email address</label>
			<input type=\"text\" name=\"fEmail\" id=\"fEmail\" class=\"text\" />
		</div>
		<div>
			<label for=\"fOptIn\">Opt in to receive email updates</label>
			<input type=\"checkbox\" name=\"fOptIn\" id=\"fOptIn\" value=\"yes\" />
		</div>
		<div>
			<input type=\"submit\" value=\"Send your list\" class=\"button\" />
		</div>
	</fieldset>
</form>
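For comparison, the class-based rules that a form like this usually ends up relying on would look something like the following (a sketch that simply reuses the declarations from the attribute selector rules shown below, rather than anything taken from the original example):

form input.text {
	border: 1px solid #333;
	padding: 0.2em;
	width: 400px;
}

form input.button {
	border: 1px solid #333;
	background-color: #eee;
	color: #000;
	padding: 0.1em;
}

The attribute selector versions that follow achieve the same styling without the text and button classes needing to exist in the markup at all.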
View Example 8\n\nAttribute selectors provide a way of targeting elements by looking at the attributes of those elements. Unlike the other examples in this article, which are CSS3 selectors, the attribute selector is actually a CSS2.1 selector – it just doesn’t get much use because of lack of support in Internet Explorer 6. Using attribute selectors we can write rules for text inputs and form buttons without needing to add any classes to the markup. For example, after removing the text and button classes from my text and submit button input elements, I can use the following rules to target them:\n\nform input[type=\"text\"] {\n\tborder: 1px solid #333;\n\tpadding: 0.2em;\n\twidth: 400px;\n}\n\nform input[type=\"submit\"] {\n\tborder: 1px solid #333;\n\tbackground-color: #eee;\n\tcolor: #000;\n\tpadding: 0.1em;\n}\n\nView Example 9\n\nAnother problem that I encounter with forms is where I am using CSS to position my labels and form elements by floating the labels. This works fine as long as I want all of my labels to be floated; however, sometimes we get a set of radio buttons or a checkbox, and I don’t want the label for that field to be floated. As you can see in the example below, the label for the checkbox is squashed up into the space used for the other labels, yet it makes more sense for the checkbox to display after the text.\n\nI could use a class on this label element; however, the attribute selector lets me target the label element directly by looking at the value of its for attribute.\n\nlabel[for=\"fOptIn\"] {\n\tfloat: none;\n\twidth: auto;\n}\n\nBeing able to precisely target attributes in this way is incredibly useful, and once IE6 is no longer an issue this will really help to clean up our markup and save us from having to create all kinds of special cases when generating this markup on the server-side.\n\nBrowser support\n\nThe news for attribute selectors is actually pretty good, with Internet Explorer 7+, Firefox 2+ and all other modern browsers having support. As I have already mentioned, this is a CSS2.1 selector and so we really should expect to be able to use it as we head into 2010! Internet Explorer 7 has slightly buggy support and will fail on the label example shown above; however, I discovered a workaround in the Sitepoint CSS reference comments. Adding the selector label[htmlFor=\"fOptIn\"] alongside the correct selector will create a match for IE7.\n\nIE6 does not support these selectors but, once again, you can use jQuery to plug the holes in IE6 support. The following jQuery will add the text and button classes to your fields and also add a checks class to the label for the checkbox, which you can use to remove the float and width for this element.\n\n$('form input[type=\"submit\"]').addClass(\"button\");\n$('form input[type=\"text\"]').addClass(\"text\");\n$('label[for=\"fOptIn\"]').addClass(\"checks\");\n\nView Example 10\n\nThe selectors I’ve used in this article are easy to overlook as we do have ways to achieve these things currently. As developers – especially when we have frameworks and existing code that cope with these situations – it is easy to carry on as we always have done.\n\nI think that the time has come to start to clean up our front-end and back-end code and replace our reliance on classes with these more advanced selectors.
With the help of a little JavaScript almost all users will still get the full effect and, where we are dealing with purely visual effects, there is definitely a case to be made for not worrying about the very small percentage of people with old browsers and no JavaScript. They will still receive a readable website, it may just be missing some of the finesse offered to the modern browsing experience.", "year": "2009", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2009-12-20T00:00:00+00:00", "url": "https://24ways.org/2009/cleaner-code-with-css3-selectors/", "topic": "code"} {"rowid": 31, "title": "Dealing with Emergencies in Git", "contents": "The stockings were hung by the chimney with care,\nIn hopes that version control soon would be there.\n\nThis summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I\u2019ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information.\n\nLet\u2019s start by painting an updated version of Clement Clarke Moore\u2019s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen.\n\nPerhaps a few of these images are familiar, or maybe they\u2019re just settings you\u2019ve seen in a movie. It doesn\u2019t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: \u2018What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?\u2019 It\u2019s okay if the map isn\u2019t perfect. As you refine your understanding of a new topic, you\u2019ll outgrow the initial metaphors, analogies, and comparisons.\n\nWith apologies to Mr. Moore, let\u2019s give it a try.\n\nGetting Interrupted in Git\n\nWhen on the roof there arose such a clatter!\n\nYou\u2019re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you\u2019ve been working on is going to need to wait while you investigate the commotion.\n\nIf you\u2019ve got even a little bit of experience working with Git, you know that you cannot simply change what you\u2019re working on in times of emergency. If you\u2019ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state.\n\nUp to this point, you\u2019ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of \u2018switching branches for a sec\u2019. This isn\u2019t exactly helpful to future you, as commits should really contain whole ideas of completed work. 
If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren\u2019t finished with what you were working on.\n\nYou don\u2019t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren\u2019t putting your book away though, you\u2019re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down.\n\nLet\u2019s say you\u2019ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof:\n\n$ git stash\n\nAfter running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work.\n\nWith the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it\u2019s not reindeer after all, but just your boss who thought they\u2019d help out by writing some code on the project you\u2019ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time.\n\nYou fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy:\n\n$ git fetch\n$ git branch -r\n$ git checkout -b helpful-boss-branch origin/helpful-boss-branch\n\nYou are now in a local copy of the branch where you are free to look around, and figure out exactly what\u2019s going on.\n\nYou sigh audibly and say, \u2018Okay. Tell me what was happening when you first realised you\u2019d gotten into a mess\u2019 as you look through the log messages for the branch.\n\n$ git log --oneline\n$ git log\n\nBy using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof.\n\nYou may also want to compare the work your boss has done to the main branch for your project. For this article, we\u2019ll assume the main branch is named master.\n\n$ git diff master\n\nLooking through the commits, you may be able to see that things started out okay but then took a turn for the worse.\n\nChecking out a single commit\n\nUsing commands you\u2019re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch.\n\nUsing the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let\u2019s say the unique identifier you want to checkout is 25f6d7f.\n\n$ git checkout 25f6d7f\n\nNote: checking out '25f6d7f'.\n\nYou are in 'detached HEAD' state. You can look around,\nmake experimental changes and commit them, and you can\ndiscard any commits you make in this state without\nimpacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may do so (now or later) by using @-b@ with the checkout command again. Example:\n\n$ git checkout -b new_branch_name\n\nHEAD is now at 25f6d7f... 
Removed first paragraph.\n\nThis is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic.\n\nTake a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you\u2019ve temporarily disconnected from a known chain of events. In other words, you\u2019re currently looking at the middle of a story (or branch) about what happened \u2013 and you\u2019re not at the endpoint for this particular story.\n\nGit allows you to view the history of your repository as a timeline (technically it\u2019s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. If you make commits while you\u2019re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work.\n\n$ git checkout master\n\nWarning: you are leaving 1 commit behind, not connected to\nany of your branches:\n\n 7a85788 Your witty holiday commit message.\n\nIf you want to keep them by creating a new branch, this may be a good time to do so with:\n\n$ git branch new_branch_name 7a85788\n\nSwitched to branch 'master'\nYour branch is up-to-date with 'origin/master'.\n\nSo, if you want to save the commits you\u2019ve made while in a detached HEAD state, you simply need to put them on a new branch.\n\n$ git branch saved-headless-commits 7a85788\n\nWith this trick under your belt, you can jingle around in history as much as you\u2019d like. It\u2019s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you\u2019ll need to move back to the branch tip by checking out the branch again.\n\n$ git checkout helpful-boss-branch\n\nYou\u2019re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you\u2019ve followed the instructions Git gave you. That wasn\u2019t so scary after all, now, was it?\n\nBack to our reindeer problem.\n\nIf your boss is anything like the bosses I\u2019ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you\u2019ll want to capture the good work using one of several different methods.\n\nBack in the living room, we\u2019ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work.\n\nSaving just one commit\n\nAbout that bowl of nuts. If you\u2019re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we\u2019re just going to pick out one nut from the bowl. In Git terms, we\u2019re going to cherry-pick a commit and save it to another branch.\n\nFirst, checkout the main branch for your development work. 
From this branch, create a new branch where you can copy the changes into.\n\n$ git checkout master\n$ git checkout -b rescue-the-boss\n\nFrom your boss\u2019s branch, helpful-boss-branch locate the commit you want to keep.\n\n$ git log --oneline helpful-boss-branch\n\nLet\u2019s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch.\n\n$ git cherry-pick e08740b\n\nIf you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss\u2019s branch.\n\nAt this point you might need to make a few additional fixes to help your boss out. (You\u2019re angling for a bonus out of all this. Go the extra mile.) Once you\u2019ve made your additional changes, you\u2019ll need to add that work to the branch as well.\n\n$ git add [filename(s)]\n$ git commit -m \"Building on boss's work to improve feature X.\"\n\nGo ahead and test everything, and make sure it\u2019s perfect. You don\u2019t want to introduce your own mistakes during the rescue mission!\n\nUploading the fixed branch\n\nThe next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping you fix their branch.\n\n$ git push -u origin rescue-the-boss\n\nCleaning up and getting back to work\n\nWith your boss rescued, and your bonus secured, you can now delete the local temporary branches.\n\n$ git branch --delete rescue-the-boss\n$ git branch --delete helpful-boss-branch\n\nAnd settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat.\n\n$ git checkout waiting-for-st-nicholas\n$ git stash pop\n\nYour working directory has been returned to exactly the same state you were in at the beginning of the article.\n\nHaving fun with analogies\n\nI\u2019ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it\u2019s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it\u2019s like trying to find that one broken light on the string of Christmas lights). It doesn\u2019t matter if the analogy isn\u2019t perfect. It\u2019s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner\u2019s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works.\n\nOr, if you\u2019re like me, you can choose to never grow old and just keep mucking about in the analogies. I\u2019d argue it\u2019s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.", "year": "2014", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2014-12-02T00:00:00+00:00", "url": "https://24ways.org/2014/dealing-with-emergencies-in-git/", "topic": "code"} {"rowid": 289, "title": "Front-End Developers Are Information Architects Too", "contents": "The theme of this year\u2019s World IA Day was \u201cInformation Everywhere, Architects Everywhere\u201d. This article isn\u2019t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. 
This is about a realisation I had a couple of years ago when I started to run an increasing amount of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People\u2019s ability to be successful online is unequivocally connected to the quality of the code that is written.\nPlaces made of information\nIn information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated:\n\nPeople live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds.\n25 Theses\n\nWe\u2019re creating structures which people rely on for significant parts of their lives, so it\u2019s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML.\nSemantics\nThe word \u201csemantic\u201d has its origin in Greek words meaning \u201csignificant\u201d, \u201csignify\u201d, and \u201csign\u201d. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building\u2019s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don\u2019t need an \u201centrance\u201d sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signify their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building\u2019s purpose happens, but the affect is similar: the building is signifying something about its purpose.\nHTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating:\n\nElements, attributes, and attribute values in HTML are defined \u2026 to have certain meanings (semantics). For example, the
ol element represents an ordered list, and the lang attribute represents the language of the content.\nHTML 5.1 Semantics, structure, and APIs of HTML documents\n\nHTML’s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they’re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It’s also what a front-end developer does, whether they realise it or not.\nA brief introduction to information architecture\nWe’re going to start by looking at what an information architect is. There are many definitions, and I’m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is:\n\nthe individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information.\nOf Patterns And Structures\n\nTo me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people.\nJust as there are many definitions of what an information architect is, there are many for information architecture itself. I’m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as:\nThe structural design of shared information environments.\nThe synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.\nThe art and science of shaping information products and experiences to support usability, findability, and understanding.\nInformation Architecture For The World Wide Web, 4th Edition\nTo me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015’s State Of The Browser conference, Edd Sowden talked about the accessibility of tables. He discovered that by simply not using the semantically-correct th
element to mark up headings, in some situations browsers will decide that a table
                        is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by L\u00e9onie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page.\nOur definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we\u2019re going to use. Dan defines these terms as:\nOntology\nThe definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate.\nWhat we mean when we say what we say.\nTaxonomy\nThe arrangements of the parts. Developing systems and structures for what everything\u2019s called, where everything\u2019s sorted, and the relationships between labels and categories\nChoreography\nRules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.\n\nWe now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states:\n\n\u2026 data is not information. Information is data endowed with relevance and purpose.\nManaging For The Future\n\nIf we use the correct semantic element to mark up content then we\u2019re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we\u2019re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an
h2 element should be relevant to its parent, which should be the h1. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you’ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:\n\nby creating a list of headings so users can quickly scan the page for information\nby using a keyboard command to cycle through one heading at a time\n\nIf we had a document for Christmas Day TV we might structure it something like this:

<h1>Christmas Day TV schedule</h1>
	<h2>BBC1</h2>
		<h3>Morning</h3>
		<h3>Evening</h3>
	<h2>BBC2</h2>
		<h3>Morning</h3>
		<h3>Evening</h3>
	<h2>ITV</h2>
		<h3>Morning</h3>
		<h3>Evening</h3>
	<h2>Channel 4</h2>
		<h3>Morning</h3>
		<h3>Evening</h3>
If I use VoiceOver to generate a list of headings, I get this:\n\n[Screenshot: VoiceOver’s list of the headings in the document]\n\nOnce I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the h2s:\n\n[Screenshot: the same list filtered to show only the h2 headings]\n\nIf we hadn’t used headings, or if we’d nested them incorrectly, our users would be frustrated.\nPutting this together\nLet’s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a button.
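A minimal sketch of that button-and-panel pairing might look like the following (the IDs, link targets, and panel contents here are illustrative; aria-controls is the attribute discussed next, while aria-expanded and hidden are included as the usual companions for this kind of toggle):

<button aria-controls=\"settings-panel\" aria-expanded=\"false\" id=\"settings-button\">Settings</button>

<div id=\"settings-panel\" hidden>
	<ul>
		<li><a href=\"/account\">Account</a></li>
		<li><a href=\"/privacy\">Privacy</a></li>
		<li><a href=\"/notifications\">Notifications</a></li>
	</ul>
</div>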
                        \nThere\u2019s quite a bit going on here. We\u2019re using the:\n\naria-controls attribute to architect a connection between the