{"rowid": 290, "title": "Creating a Weekly Research Cadence", "contents": "Working on a product team, it\u2019s easy to get hyper-focused on building features and lose sight of your users and their daily challenges. User research can be time-consuming to set up, so it often becomes ad-hoc and irregular, only performed in response to a particular question or concern. But without frequent touch points and opportunities for discovery, your product will stagnate and become less and less relevant. Setting up an efficient cadence of weekly research conversations will re-focus your team on user problems and provide a steady stream of insights for product development.\nAs my team transitioned into a Lean process earlier this year, we needed a way to get more feedback from users in a short amount of time. Our users are internet marketers\u2014always busy and often difficult to reach. Scheduling research took days of emailing back and forth to find mutually agreeable times, and juggling one-off conversations made it difficult to connect with more than one or two people per week. The slow pace of research was allowing additional risk to creep into our product development.\nI wanted to find a way for our team to test ideas and validate assumptions sooner and more often\u2014but without increasing the administrative burden of scheduling. The solution: creating a regular cadence of research and testing that required a minimum of effort to coordinate. \nSetting up a weekly user research cadence accelerated our learning and built momentum behind strategic experiments. By dedicating time every week to talk to a few users, we made ongoing research a painless part of every weekly sprint. But increasing the frequency of our research had other benefits as well. With only five working days between sessions, a weekly cadence forced us to keep our work small and iterative. Committing to testing something every week meant showing work earlier and more often than we might have preferred\u2014pushing us out of your comfort zone into a process of more rapid experimentation.\nBest of all, frequent conversations with users helped us become more customer-focused. After just a few weeks in a consistent research cadence, I noticed user feedback weaving itself through our planning and strategy sessions. Comments like \u201cRemember what Jenna said last week, about not being able to customize her lists?\u201d would pop up as frequent reference points to guide our decisions. As discussions become less about subjective opinions and more about responding to user needs, we saw immediate improvement in the quality of our solutions.\nEstablishing an efficient recruitment process\nThe key to creating a regular cadence of ongoing user research is an efficient recruitment and scheduling process\u2014along with a commitment to prioritize the time needed for research conversations. This is an invaluable tool for product teams (whether or not they follow a Lean process), but could easily be adapted for content strategy teams, agency teams, a UX team of one, or any other project that would benefit from short, frequent conversations with users. \nThe process I use requires a few hours of setup time at the beginning, but pays off in better learning and better releases over the long run. 
Almost any team could use this as a starting point and adapt it to their own needs.\nPick a dedicated time each week for research\nIn order to make research a priority, we started by choosing a time each week when everyone on the product team was available. Between stand-ups, grooming sessions, and roadmap reviews, it wasn\u2019t easy to do! Nevertheless, it\u2019s important to include as many people as possible in conversations with your users. Getting a second-hand summary of research results doesn\u2019t have the same impact as hearing someone describe their frustrations and concerns first-hand. The more people in the room to hear those concerns, the more likely they are to become priorities for your team.\nI blocked off 2 hours for research conversations every Thursday afternoon. We make this time sacred, and never schedule other meetings or work across those hours. \nDivide your time into several research slots\nAfter my weekly cadence was set, I divided the time into four 20-minute time slots. Twenty minutes is long enough for us to ask several open-ended questions or get feedback on a prototype, without being a burden on our users\u2019 busy schedules. Depending on your work, you may need schedule longer sessions\u2014but beware the urge to create blocks that last an hour or more. A weekly research cadence is designed to facilitate rapid, ongoing feedback and testing; it should force you to talk to users often and to keep your work small and iterative. Projects that require longer, more in-depth testing will probably need a dedicated research project of their own.\nI used the scheduling software Calendly to create interview appointments on a calendar that I can share with users, and customized the confirmation and reminder emails with information about how to access our video conferencing software. (Most of our research is done remotely, but this could be set up with details for in-person meetings as well.) Automating these emails and reminders took a little bit of time to set up, but was worth it for how much faster it made the process overall.\n\n\nInvite users to sign up for a time that\u2019s convenient for them\nWith a calendar set up and follow-up emails automated, it becomes incredibly easy to schedule research conversations. Each week, I send a short email out to a small group of users inviting them to participate, explaining that this is a chance to provide feedback that will improve our product or occasionally promoting the opportunity to get a sneak peek at new features we\u2019re working on. The email includes a link to the Calendly appointments, allowing users who are interested to opt in to a time that fits their schedule.\nSetting up appointments the first go around involved a bit of educated guessing. How many invitations would it take to fill all four of my weekly slots? How far in advance did I need to recruit users? But after a few weeks of trial and error, I found that sending 12-16 invitations usually allows me to fill all four interview slots. Our users often have meetings pop up at short notice, so we get the best results when I send the recruiting email on Tuesday, two days before my research block.\nIt may take a bit of experimentation to fine tune your process, but it\u2019s worth the effort to get it right. (The worst thing that\u2019s happened since I began recruiting this way was receiving emails from users complaining that there were no open slots available!) 
I can now fill most of an afternoon with back-to-back user research sessions just by sending one or two emails each week, increasing our research pace while leaving plenty of time to focus on discovery and design.\nGetting the most out of your research sessions\nAs you get comfortable with the rhythm of talking to users each week, you\u2019ll find more and more ways to get value out of your conversations. At first, you may prefer to just show work in progress\u2014such as mockups or a simple prototype\u2014and ask open-ended questions to measure user reaction. When you begin new projects, you may want to use this time to research behavior on existing features\u2014either watching participants as they use part of your product or asking them to give an account of a recent experience in your app. You may even want to run more abstracted Lean experiments, if that\u2019s the best way to validate the assumptions your team is working from.\nWhatever you do, plan some time a day or two later to come back together and review what you\u2019ve learned each week. Synthesizing research outcomes as a group will help keep your team in alignment and allow each person to highlight what they took away from each conversation. \nOver time, you may find that the pace of weekly user research becomes more exhausting than energizing, especially if the responsibility for scheduling and planning falls on just one person. Don\u2019t allow yourself to get burned out; a healthy research cadence should also include time to rest and reflect if the pace becomes too rapid to sustain. Take breaks as needed, then pick up the pace again as soon as you\u2019re ready.", "year": "2016", "author": "Wren Lanier", "author_slug": "wrenlanier", "published": "2016-12-02T00:00:00+00:00", "url": "https://24ways.org/2016/creating-a-weekly-research-cadence/", "topic": "ux"} {"rowid": 203, "title": "Jobs-to-Be-Done in Your UX Toolbox", "contents": "Part 1: What is JTBD?\nThe concept of a \u201cjob\u201d in \u201cJobs-To-Be-Done\u201d is neatly encapsulated by an oft-quoted line from Theodore Levitt:\n\n\u201cPeople want a quarter-inch hole, not a quarter inch drill\u201d.\n\nEven so, Don Norman pointed out that perhaps Levitt \u201cstopped too soon\u201d at what the real customer goal might be. In \u201cThe Design of Everyday Things\u201d, he wrote:\n\n\u201cLevitt\u2019s example of the drill implying that the goal is really a hole is only partially correct, however. When people go to a store to buy a drill, that is not their real goal. But why would anyone want a quarter-inch hole? Clearly that is an intermediate goal. Perhaps they wanted to hang shelves on the wall. Levitt stopped too soon. Once you realize that they don\u2019t really want the drill, you realize that perhaps they don\u2019t really want the hole, either: they want to install their bookshelves. Why not develop methods that don\u2019t require holes? Or perhaps books that don\u2019t require bookshelves.\u201d\n\nIn other words, a \u201cjob\u201d in JTBD lingo is a way to express a user need or provide a customer-centric problem frame that\u2019s independent of a solution. As Tony Ulwick says:\n\n\u201cA job is stable, it doesn\u2019t change over time.\u201d\n\nAn example of a job is \u201ctiding you over from breakfast to lunch.\u201d You could hire a donut, a flapjack or a banana for that mid-morning snack\u2014whatever does the job. 
If you can arrive at a clearly identified primary job (and likely some secondary ones too), you can be more creative in how you come up with an effective solution while keeping the customer problem in focus.\nThe team at Intercom wrote a book on their application of JTBD. In it, Des Traynor cleverly characterised how JTBD provides a different way to think about solutions that compete for the same job: \n\n\u201cEconomy travel and business travel are both capable candidates applying for [the job: Get me face-to-face with my colleague in San Francisco], though they\u2019re looking for significantly different salaries. Video conferencing isn\u2019t as capable, but is willing to work for a far smaller salary. I\u2019ve a hiring choice to make.\u201d\n\nSo far so good: it\u2019s relatively simple to understand what a job is, once you understand how it\u2019s different from a \u201ctask\u201d. Business consultant and Harvard professor Clay Christensen talks about the concept of \u201chiring\u201d a product to do a \u201cjob\u201d, and firing it when something better comes along. If you\u2019re a company that focuses solutions on the customer job, you\u2019re more likely to succeed. You\u2019ll find these concepts often referred to as \u201cJobs-to-be-Done theory\u201d. But the application of Jobs-to-Be-Done theory is a little more complicated; it comprises several related approaches.\nI particularly like Jim Kalbach\u2019s description of how JTBD is a \u201clens through which to understand value creation\u201d. But it is also more. In my view, it\u2019s a family of frameworks and methods\u2014and perhaps even a philosophy.\nDifferent facets in a family of frameworks\nJTBD has its roots in market research and business strategy, and so it comes to the research table from a slightly different place compared to traditional UX or design research\u2014we have our roots in human-computer interaction and ergonomics. I\u2019ve found it helpful to keep in mind is that the application of JTBD theory is an evolving beast, so it\u2019s common to find contradictions across different resources. My own use of it has varied from project to project. In speaking to others who have adopted it in different measures, it seems that we have all applied it in somewhat multifarious ways. As we like to often say in interviews: there are no wrong answers.\nOutcome Driven Innovation\nTony Ulwick\u2019s version of the JTBD history began with Outcome Driven Innovation (ODI), and this approach is best outlined in his seminal article published in the Harvard Business Review in 2002. To understand his more current JTBD approach in his new book \u201cJobs to Be Done: Theory to Practice\u201d, I actually found it beneficial to read his approach in the original 2002 article for a clearer reference point.\nIn the earlier article, Ulwick presented a rigorous approach that combines interviews, surveys and an \u201copportunity\u201d algorithm\u2014a sequence of steps to determine the business opportunity. ODI centres around working with \u201cdesired outcome statements\u201d that you unearth through interviews, followed by a means to quantify the gap between importance and satisfaction in a survey to different types of customers. \nSince 2008, Ulwick has written about using job maps to make sense of what the customer may be trying to achieve. 
In a recent article, he describes the aim of the activity as \u201cto discover what the customer is trying to get done at different points in executing a job and what must happen at each juncture in order for the job to be carried out successfully.\u201d\nA job map is not strictly a journey map, however tempting it is to see it that way. From a UX perspective, it is one of many models we can use\u2014and as our research team at Clearleft have found, how we use the model can depend on the nature of the jobs we\u2019ve uncovered in interviews and the characteristics of the problem we\u2019re attempting to solve.\nFigure 1. Universal job map\nUlwick\u2019s current methodology is outlined in his new book, where he describes a complete end-to-end process: from customer and competitor research to framing market and product strategy.\nThe Jobs-To-Be-Done Interview\nBack in 2013, I attended a workshop by Chris Spiek and Bob Moesta from the ReWired Group on JTBD at the behest of a then-MailChimp colleague, and I came away excited about their approach to product research. It felt different from anything I\u2019d done before and for the first time in years, I felt that I was genuinely adding something new to my research toolbox.\nA key idea is that if you focus on the stories of those who switched to you, and those who switched away from you, you can uncover the core jobs through looking at these opposite ends of engagement.\nThis framework centres around the JTBD interview method, which harnesses the power of a narrative framework to elicit the real reasons why someone \u201chired\u201d something to do a job\u2014be it something physical like a new coffee maker, or a digital service, such as a to-do list app. As you interview, you are trying to unearth the context around the key moments on the JTBD timeline (Figure 2). A common approach is to begin from the point the customer might have purchased something, back to the point where the thought of buying this thing first occurred to them.\nFigure 2. JTBD Timeline\nFigure 3. The Four Forces\nThe Forces Diagram (Figure 3) is a post-interview analysis tool where you can map out what causes customers to switch to something new and what holds them back.\nThe JTBD interview is effective at identifying core and secondary jobs, as well as some context around the user need. Because this method is designed to extract the story from the interviewee, it\u2019s a powerful way to facilitate recall. Having done many such interviews, I\u2019ve noticed one interesting side effect: participants often remember more details later on after the conversation has formally ended. It is worth scheduling a follow-up phone call or keeping the channels open.\nStrengths aside, it\u2019s good to keep in mind that the JTBD interview is still primarily an interview technique, so you are relying on the context from the interviewee\u2019s self-reported perspective. For example, a stronger research methodology combines JTBD interviews with contextual research and quantitative methods. \nJob Stories\nAlan Klement is credited with coming up with the term \u201cjob story\u201d to describe the framing of jobs for product design by the team at Intercom:\n\n\u201cWhen \u2026 I want to \u2026 so I can \u2026.\u201d\n\nFigure 4. Anatomy of a Job Story\nUnlike a user story that traditionally frames a requirement around personas, job stories frame the user need based on the situation and context. 
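To make that concrete, a hypothetical job story for the mid-morning snack job mentioned earlier might read: \u201cWhen I get hungry between breakfast and lunch, I want to grab something quick that tides me over, so I can stay focused until my lunch break.\u201d The situation, the motivation and the intended outcome are all in the statement, with no persona and no particular solution specified.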
Paul Adams, the VP of Product at Intercom, wrote:\n\n\u201cWe frame every design problem in a Job, focusing on the triggering event or situation, the motivation and goal, and the intended outcome. [\u2026] We can map this Job to the mission and prioritise it appropriately. It ensures that we are constantly thinking about all four layers of design. We can see what components in our system are part of this Job and the necessary relationships and interactions required to facilitate it. We can design from the top down, moving through outcome, system, interactions, before getting to visual design.\u201d\n\nSystems of Progress\nApart from advocating using job stories, Klement believes that a core tenet of applying JTBD revolves around our desire for \u201cself-betterment\u201d\u2014and that focusing on everyone\u2019s desire for self-betterment is core to a successful strategy.\nIn his book, Klement takes JTBD further to being a tool for change through applying systems thinking. There, he introduces the systems of progress and how it can help focus a product strategy approach to be more innovative.\nCoincidentally, I applied similar thinking on mapping systemic change when we were looking to improve users\u2019 trust with a local government forum earlier this year. It\u2019s not just about capturing and satisfying the immediate job-to-be-done, it\u2019s about framing the job so that you can have a clear vision of how you can help your users improve their lives in the ways they want to.\nThis is really the point where JTBD becomes a philosophy of practice.\nPart 2: Mixing It Up\nThere has been some misunderstanding about how adopting JTBD means ditching personas or some of our existing design tools or research techniques. This couldn\u2019t have been more wrong.\nFigure 5: Jim Kalbach\u2019s JTBD model\nJim Kalbach has used Outcome-Driven Innovation for around 10 years. In a 2016 article, he presents a synthesised model of how to think about JTBD that has key elements from ODI, Christensen\u2019s theories and the structure of the job story.\nMore interestingly, Kalbach has also combined the use of mental models with JTBD.\nClaire Menke of Udemy has written a comprehensive article about using personas, JTBD and customer journey maps together in order to communicate a more complete story from the users\u2019 perspective. Claire highlights an especially interesting point in her article as she described her challenges:\n\u201cAfter much trial and error, I arrived at a foundational research framework to suit every team\u2019s needs \u2014 allowing everyone to share the same holistic understanding, but extract the type of information most applicable to their work.\u201d\nIn other words, the organisational context you are in likely can dictate what works best\u2014after all, the goal is to arrive at the best user experience for your audiences. Intercom can afford to go full-on in applying JTBD theory as a dominant approach because they are a start-up, but a large company or organisation with multiple business units may require a mix of tools, outputs and outcomes.\nJTBD is an immensely powerful approach on many fronts\u2014you\u2019ll find many different references that list the ways you can apply JTBD. However, in the context of this discussion, it might also be useful to examine where it lies in our models of how we think about our UX and product processes.\nJTBD in the UX ecosystem\nFigure 6. 
The Elements of User Experience (source)\nThere are many ways we have tried to explain the UX discipline but I think Jesse James Garrett\u2019s Elements of User Experience is a good place to begin.\nI sometimes also use a little diagram to help me describe the different levels you might work at when you work through the complexity of designing and developing a product. A holistic UX strategy needs to address all the different levels for a comprehensive experience: your individual product UI, product features, product propositions and brand need to have a cohesive definition.\nFigure 7. Which level of product focus?\nWe could, of course, also think about where it fits best within the double diamond.\nAgain, bearing in mind that JTBD has its roots in business strategy and market research, it is excellent at clarifying user needs, defining high-level specifications and content requirements. It is excellent for validating brand perception and value proposition\u2014all the way down to your feature set. In other words, it can be extremely powerful all the way through to the halfway point of the second diamond. You could quite readily combine the different JTBD approaches because they have differences as much as overlaps. However, JTBD generally starts getting a little difficult to apply once we get to the details of UI design.\nThe clue lies in JTBD\u2019s raison d\u2019\u00eatre: a job statement is solution independent. Hence, once we get to designing solutions, we potentially fall into an existential black hole.\nThat said, Jim Kalbach has a quick case study on applying JTBD to content design tucked inside the main article on a synthesised JTBD model. Alan Klement has a great example of how you could use UI to resolve job stories. You\u2019ll notice that the available language of \u201cjobs\u201d drops off at around that point.\nJob statements and outcome statements provide excellent \u201cmini north-stars\u201d as customer-oriented focal points, but purely satisfying these statements would not necessarily guarantee that you have created a seamless and painless user experience.\nPlaying well with others\nYou will find that JTBD plays well with Lean, and other strategy tools like the Value Proposition Canvas. With every new project, there is potential to harness the power of JTBD alongside our established toolbox.\nWhen we need to understand complex contexts where cultural or socioeconomic considerations have to be taken into account, we are better placed with combining JTBD with more anthropological approaches. And while we might be able to evaluate if our product, website or app satisfies the customer jobs through interviews or surveys, without good old-fashioned usability testing we are unlikely to be able to truly validate why the job isn\u2019t being represented as it should. In this case, individual jobs solved on the UI can be set up as hypotheses to be proven right or wrong. \nThe application of Jobs-to-be-Done is still evolving. I\u2019ve found it to be very powerful and I struggle to remember what my UX professional life was like before I encountered it\u2014it has completely changed my approach to research and design.\nThe fact that JTBD is still evolving as a practice means we need to be watchful of dogma\u2014there\u2019s no right way to get a UX job done; after all, it nearly always depends. 
At the end of the day, isn\u2019t it about having the right tool for the right job?", "year": "2017", "author": "Steph Troeth", "author_slug": "stephtroeth", "published": "2017-12-04T00:00:00+00:00", "url": "https://24ways.org/2017/jobs-to-be-done-in-your-ux-toolbox/", "topic": "ux"} {"rowid": 267, "title": "Taming Complexity", "contents": "I\u2019m going to step into my UX trousers for this one. I wouldn\u2019t usually wear them in public, but it\u2019s Christmas, so there\u2019s nothing wrong with looking silly.\n\nAnyway, to business. Wherever I roam, I hear the familiar call for simplicity and the denouncement of complexity. I read often that the simpler something is, the more usable it will be. We understand that simple is hard to achieve, but we push for it nonetheless, convinced it will make what we build easier to use. Simple is better, right?\n\nWell, I\u2019ll try to explore that. Much of what follows will not be revelatory to some but, like all good lessons, I think this serves as a welcome reminder that as we live in a complex world it\u2019s OK to sometimes reflect that complexity in the products we build.\n\nMyths and legends\n\nLess is more, we\u2019ve been told, ever since master of poetic verse Robert Browning used the phrase in 1855. Well, I\u2019ve conducted some research, and it appears he knew nothing of web design. Neither did modernist architect Ludwig Mies van der Rohe, a later pedlar of this worthy yet contradictory notion. Broad is narrow. Tall is short. Eggs are chips. See: anyone can come up with this stuff.\n\nTo paraphrase Einstein, simple doesn\u2019t have to be simpler. In other words, simple doesn\u2019t dictate that we remove the complexity. Complex doesn\u2019t have to be confusing; it can be beautiful and elegant. On the web, complex can be necessary and powerful. A website that simplifies the lives of its users by offering them everything they need in one site or screen is powerful. For some, the greater the density of information, the more useful the site.\n\nIn our decision-making process, principles such as Occam\u2019s razor\u2019s_razor (in a nutshell: simple is better than complex) are useful, but simple is for the user to determine through their initial impression and subsequent engagement. What appears simple to me or you might appear very complex to someone else, based on their own mental model or needs. We can aim to deliver simple, but they\u2019ll be the judge.\n\nAs a designer, developer, content alchemist, user experience discombobulator, or whatever you call yourself, you\u2019re often wrestling with a wealth of material, a huge number of features, and numerous objectives. In many cases, much of that stuff is extraneous, and goes in the dustbin. However, it can be just as likely that there\u2019s a truckload of suggested features and content because it all needs to be there. Don\u2019t be afraid of that weight.\n\nIn the right hands, less can indeed mean more, but it\u2019s just as likely that less can very often lead to, well\u2026 less.\n\nComplexity is powerful\n\nSimple is the ability to offer a powerful experience without overwhelming the audience or inducing information anxiety. Giving them everything they need, without having them ferret off all over a site to get things done, is important.\n\nIt\u2019s useful to ask throughout a site\u2019s lifespan, \u201cdoes the user have everything they need?\u201d It\u2019s so easy to let our designer egos get in the way and chop stuff out, reduce down to only the things we want to see. 
That benefits us in the short term, but compromises the audience long-term.\n\nThe trick is not to be afraid of complexity in itself, but to avoid creating the perception of complexity. Give a user a flight simulator and they\u2019ll crash the plane or jump out. Give them everything they need and more, but make it feel simple, and you\u2019re building a relationship, empowering people.\n\nThis can be achieved carefully with what some call gradual engagement, and often the sensible thing might be to unleash complexity in carefully orchestrated phases, initially setting manageable levels of engagement and interaction, gradually increasing the inherent power of the product and fostering an empowered community.\n\nThe design aesthetic\n\nHere\u2019s a familiar scenario: the client or project lead gets overexcited and skips most of the important decision-making, instead barrelling straight into a bout of creative direction Tourette\u2019s. Visually, the design needs to be minimal, white, crisp, full of white space, have big buttons, and quite likely be \u201cclean\u201d. Of course, we all like our websites to be clean as that\u2019s more hygienic.\n\nBut what do these words even mean, really? Early in a project they\u2019re abstract distractions, unnecessary constraints. This premature narrowing forces us to think much more about throwing stuff out rather than acknowledging that what we\u2019re building is complex, and many of the components perhaps necessary.\n\nSimple is not a formula. It cannot be achieved just by using a white background, by throwing things away, or by breathing a bellowsful of air in between every element and having it all float around in space. Simple is not a design treatment. Simple is hard. Simple requires deep investigation, a thorough understanding of every aspect of a project, in line with the needs and expectations of the audience.\n\nRecognizing this helps us empathize a little more with those most vocal of UX practitioners. They usually appreciate that our successes depend on a thorough understanding of the user\u2019s mental models and expected outcomes. I personally still consider UX people to be web designers like the rest of us (mainly to wind them up), but they\u2019re web designers that design every decision, and by putting the user experience at the heart of their process, they have a greater chance of finding simplicity in complexity. The visual design aesthetic \u2014 the fa\u00e7ade \u2014 is only a part of that.\n\nDivide and conquer\n\nI\u2019m currently working on an app that\u2019s complex in architecture, and complex in ambition. We\u2019ll be releasing in carefully orchestrated private phases, gradually introducing more complexity in line with the unavoidably complex nature of the objective, but my job is to design the whole, the complete system as it will be when it\u2019s out of beta and beyond.\n\nI\u2019ve noticed that I\u2019m not throwing much out; most of it needs to be there. Therefore, my responsibility is to consider interesting and appropriate methods of navigation and bring everything together logically.\n\nI\u2019m using things like smart defaults, graphical timelines and colour keys to make sense of the complexity, techniques that are sympathetic to the content. They act as familiar points of navigation and reference, yet are malleable enough to change subtly to remain relevant to the information they connect. 
It\u2019s really OK to have a lot of stuff, so long as we make each component work smartly.\n\nIt\u2019s a divide and conquer approach. By finding simplicity and logic in each content bucket, I\u2019ve made more sense of the whole, allowing me to create key layouts where most of the simplified buckets are collated and sometimes combined, providing everything the user needs and expects in the appropriate places.\n\nI\u2019m also making sure I don\u2019t reduce the app\u2019s power. I need to reflect the scale of opportunity, and provide access to or knowledge of the more advanced tools and features for everyone: a window into what they can do and how they can help. I know it\u2019s the minority who will be actively building the content, but the power is in providing those opportunities for all.\n\nMuch of this will be familiar to the responsible practitioners who build websites for government, local authorities, utility companies, newspapers, magazines, banking, and we-sell-everything-ever-made online shops. Across the web, there are sites and tools that thrive on complexity.\n\nAlas, the majority of such sites have done little to make navigation intuitive, or empower audiences. Where we can make a difference is by striving to make our UIs feel simple, look wonderful, not intimidating \u2014 even if they\u2019re mind-meltingly complex behind that fa\u00e7ade.\n\nEmbrace, empathize and tame\n\nSo, there are loads of ways to exploit complexity, and make it seem simple. I\u2019ve hinted at some methods above, and we\u2019ve already looked at gradual engagement as a way to make sense of complexity, so that\u2019s a big thumbs-up for a release cycle that increases audience power.\n\nPrior to each and every release, it\u2019s also useful to rest on the finished thing for a while and use it yourself, even if you\u2019re itching to release. \u2018Ready\u2019 often isn\u2019t, and \u2018finished\u2019 never is, and the more time you spend browsing around the sites you build, the more you learn what to question, where to add, or subtract. It\u2019s definitely worth building in some contingency time for sitting on your work, so to speak.\n\nOne thing I always do is squint at my layouts. By squinting, I get a sort of abstract idea of the overall composition, and general feel for the thing. It makes my face look stupid, but helps me see how various buckets fit together, and how simple or complex the site feels overall.\n\nI mentioned the need to put our design egos to one side and not throw out anything useful, and I think that\u2019s vital. I\u2019m a big believer in economy, reduction, and removing the extraneous, but I\u2019m usually referring to decoration, bells and whistles, and fluff. I wouldn\u2019t ever advocate the complete removal of powerful content from a project roadmap.\n\nAbove all, don\u2019t fear complexity. Embrace and tame it. Work hard to empathize with audience needs, and you can create elegant, playful, risky, surprising, emotive, delightful, and ultimately simple things.", "year": "2011", "author": "Simon Collison", "author_slug": "simoncollison", "published": "2011-12-21T00:00:00+00:00", "url": "https://24ways.org/2011/taming-complexity/", "topic": "ux"} {"rowid": 78, "title": "Fluent Design through Early Prototyping", "contents": "There\u2019s a small problem with wireframes. They\u2019re not good for showing the kind of interactions we now take for granted \u2013 transitions and animations on the web, in Android, iOS, and other platforms. 
There\u2019s a belief that early prototyping requires a large amount of time and effort, and isn\u2019t worth an early investment. But it\u2019s not true!\n\nIt\u2019s still normal to spend a significant proportion of time working in wireframes. Given that wireframes are high-level and don\u2019t show much detail, it\u2019s tempting to give up control and responsibility for things like transitions and other things sidelined as visual considerations. These things aren\u2019t expressed well, and perhaps not expressed at all, in wireframes, yet they critically influence the quality of a product. Rapid prototyping early helps to bring sidelined but significant design considerations into focus.\n\nSpeaking fluent design\n\nFluency in a language means being able to speak it confidently and accurately. The Latin root means flow.\n\nBy design fluency, I mean using a set of skills in order to express or communicate an idea. Prototyping is a kind of fluency. It takes designers beyond the domain of grey and white boxes to consider all the elements that make up really good product design.\n\nDesigners shouldn\u2019t be afraid of speaking fluent design. They should think thoroughly about product decisions beyond their immediate role \u2014 not for the sake of becoming some kind of power-hungry design demigod, but because it will lead to better, more carefully considered product design.\n\nWireframes are incomplete sentences\n\nWireframes, once they\u2019ve served their purpose, are a kind of self-imposed restriction.\n\nMostly made out of grey and white boxes, they deliberately express the minimum. Important details \u2014 visuals, nuanced transitions, sounds \u2014 are missing. Their appearance bears little resemblance to the final thing. Responsibility for things that traditionally didn\u2019t matter (or exist) is relinquished. Animations and transitions in particular are increasingly relevant to the mobile designer\u2019s methods. And rather than being fanciful and superfluous visual additions to a product, they help to clarify designs and provide information about context.\n\nWireframes are useful in the early stages. As a designer trying to persuade stakeholders, clients, or peers, sometimes it will be in your interests to only tell half the story. They\u2019re ideal for gauging whether a design is taking the right direction, and they\u2019re the right medium for deciding core things, such as the overall structure and information architecture.\n\nBut spending a long time in wireframes means delaying details to a later stage in the project, or to the end, when the priority is shifted to getting designs out of the door. This leaves little time to test, finesse and perfect things which initially seemed to be less important. I think designers should move away from using wireframes as primary documentation once the design has reached a certain level of maturity.\n\nA prototype is multiple complete sentences\n\nParagraphs, even.\n\nUnlike a wireframe, a prototype is a persuasive storyteller. It can reveal the depth and range of design decisions, not just the layout, but also motion: animations and transitions. If it\u2019s a super-high-fidelity prototype, it\u2019s a perfect vessel for showing the visual design as well. It\u2019s all of these things that contribute to the impression that a product is good\u2026 and useful, and engaging, and something you\u2019d like to use.\n\nA prototype is impressive. A good prototype can help to convince stakeholders and persuade clients. 
With a compelling demo, people can more easily imagine that this thing could actually exist. \u201cHey\u201d, they\u2019re thinking. \u201cThis might actually be pretty good!\u201d\n\nHow to make a prototype in no time and with no effort\n\nNow, it does take time and effort to make a prototype. However, good news! It used to require a lot more effort. There are tools that make prototyping much quicker and easier.\n\nIf you\u2019re making a mobile prototype (this seems quite likely), you will want to test and show this on the actual device. This sounds like it could be a pain, but there are a few ways to do this that are quite easy.\n\nKeynote, Apple\u2019s presentation software, is an unlikely candidate for a prototyping tool, but surprisingly great and easy for creating prototypes with transitions that can be shown on different devices.\n\nKeynote enables you to do a few useful, excellent things. You can make each screen in your design a slide, which can be linked together to allow you to click through the prototype. You can add customisable transitions between screens. If you want to show a panel that can slide open or closed on your iPad mockup, for example, transitions can also be added to individual elements on the screen. The design can be shown on tablet and mobile devices, and interacted with like it\u2019s a real app. Another cool feature is that you can export the prototype as a video, which works as another effective format for demoing a design.\n\nOverall, Keynote offers a very quick, lightweight way to prototype a design. Once you\u2019ve learned the basics, it shouldn\u2019t take longer than a few hours \u2013 at most \u2013 to put together a respectable clickable prototype with transitions.\n\nDownload the interactive MOV example\n\nHolly icon by Megan Sheehan from The Noun Project\n\nThis is a Quicktime movie exported from Keynote. This version is animated for demonstration purposes, but download the interactive original and you can click the screen to move through the prototype. It demonstrates the basic interactivity of an iPhone app. This anonymised example was used on a project at Fjord to create a master example of an app\u2019s transitions.\n\nPrototyping drawbacks, and perceived drawbacks\n\nIf prototyping is so great, then why do we leave it to the end, or not bother with it at all? There are multiple misconceptions about prototyping: they\u2019re too difficult to make; they take too much time; or they\u2019re inaccurate (and dangerous) documentation.\n\nA prototype is a preliminary model. There should always be a disclaimer that it\u2019s not the real thing to avoid setting up false expectations.\n\nA prototype doesn\u2019t have to be the main deliverable. It can be a key one that\u2019s supported by visual and interaction specifications. And a prototype is a lightweight means of managing and reflecting changes and requirements in a project.\n\nAn actual drawback of prototyping is that to make one too early could mean being gung-ho with what you thought a client or stakeholder wanted, and delivering something inappropriate. To avoid this, communicate, iterate, and keep things simple until you\u2019re confident that the client or other stakeholders are happy with your chosen direction.\n\nThe key throughout any design project is iteration. Designers build iterative models, starting simple and becoming increasingly sophisticated. It\u2019s a process of iterative craft and evolution. 
There\u2019s no perfect methodology, no magic recipe to follow.\n\nWhat to do next\n\nMake a prototype! It\u2019s the perfect way to impress your friends.\n\nIt can help to advance a brilliant idea with a fraction of the effort of complete development. Sketches and wireframes are perfect early on in a project, but once they\u2019ve served their purpose, prototypes enable the design to advance, and push thinking towards clarifying other important details including transitions.\n\nFor Keynote tutorials, Keynotopia is a great resource. Axure is standard and popular prototyping software many UX designers will already be familiar with; it\u2019s possible to create transitions in Axure. POP is an iPhone app that allows you to design apps on paper, take photos with your phone, and turn them into interactive prototypes. Ratchet is an elegant iPhone prototyping tool aimed at web developers.\n\nThere are perhaps hundreds of different prototyping tools and methods. My final advice is not to get bogged down in (or limited by) any particular tool, but to remember you\u2019re making quick and iterative models. Experiment and play!\n\nPrototyping will push you and your designs to a scary place without limitations. No more grey and white boxes, just possibilities!", "year": "2012", "author": "Rebecca Cottrell", "author_slug": "rebeccacottrell", "published": "2012-12-10T00:00:00+00:00", "url": "https://24ways.org/2012/fluent-design-through-early-prototyping/", "topic": "ux"} {"rowid": 33, "title": "Five Ways to Animate Responsibly", "contents": "It\u2019s been two years since I wrote about \u201cFlashless Animation\u201d on this very site. Since then, animation has steadily begun popping up on websites, from sleek app-like user interfaces to interactive magazine-like spreads. It\u2019s an exciting time for web animation wonks, interaction developers, UXers, UI designers and a host of other acronyms! \n\nBut in our rush to experiment with animation it seems that we\u2019re having fewer conversations about whether or not we should use it, and more discussions about what we can do with it. We spend more time fretting over how to animate all the things at 60fps than we do devising ways to avoid incapacitating users with vestibular disorders.\n\nI love web animation. I live it. And I make adorably silly things with it that have no place on a self-respecting production website. I know it can be abused. We\u2019ve all made fun of Flash-turbation. But how quickly we forget the lessons we learned from that period of web design. Parallax scrolling effects may be the skip intro of this generation. Surely we have learned better in the sobering up period between Flash and the web animation API.\n\nSo here are five bits of advice we can use to pull back from the edge of animation abuse. With these thoughts in mind, we can make 2015 the year web animation came into its own. \n\nAnimate deliberately\n\nSadly, animation is considered decorative by the bulk of the web development community. UI designers and interaction developers know better, of course. But when I\u2019m teaching a workshop on animation for interaction, I know that my students face an uphill battle against decision makers who consider it nice to have, and tack it on at the end of a project, if at all. \n\nThis stigma is hard to shake. But it starts with us using animation deliberately or not at all. Poorly considered, tacked-on animation will often cause more harm than good. 
Users may complain that it\u2019s too slow or too fast, or that they have no idea what just happened.\n\nWhen I was at Chrome Dev Summit this year, I had the privilege to speak with Roma Sha, the UX lead behind Polymer\u2019s material design (with the wonderful animation documentation). I asked her what advice she\u2019d give to people using animation and transitions in their own designs. She responded simply: animate deliberately. If you cannot afford to slow down to think about animation and make well-informed and well-articulated decisions on behalf of the user, it is better that you not attempt it at all. Animation takes energy to perform, and a bad animation is worse than none at all. \n\nIt takes more than twelve principles\n\nWe always try to draw correlations between disparate things that spark our interest. Recently it feels like more and more people are putting the The Illusion of Life on their reading shelf next to Understanding Comics. These books give us so many useful insights from other industries. However, we should never mistake a website for a comic book or an animated feature film. Some of these concepts, while they help us see our work in a new light, can be more or less relevant to producing said work. \n\n\nThe illusion of life from cento lodigiani on Vimeo.\n\nI am specifically thinking of the twelve principles of animation put forth by Disney studio veterans in that great tome The Illusion of Life. These principles are very useful for making engaging, lifelike animation, like a ball bouncing or a squirrel scampering, or the physics behind how a lightbox should feel transitioning off a page. But they provide no direction at all for when or how something should be animated as part of a greater interactive experience, like how long a drop-down should take to fully extend or if a group of manipulable objects should be animated sequentially or as a whole.\n\nThe twelve principles are a great place to start, but we have so much more to learn. I\u2019ve documented at least six more functions of interactive animation that apply to web and app design. When thinking about animation, we should consider why and how, not just what, the physics. Beautiful physics mean nothing if the animation is superfluous or confusing.\n\nUseful and necessary, then beautiful\n\nThere is a Shaker saying: \u201cDon\u2019t make something unless it is both necessary and useful; but if it is both necessary and useful, don\u2019t hesitate to make it beautiful.\u201d When it comes to animation and the web, currently there is very little documentation about what makes it useful or necessary. We tend to focus more on the beautiful, the delightful, the aesthetic. And while aesthetics are important, they take a back seat to the user\u2019s overall experience. \n\n\n\nThe first time I saw the load screen for Pokemon Yellow on my Game Boy, I was enthralled. By the sixth time, I was mashing the start button as soon as Game Freak\u2019s logo hit the screen. What\u2019s delightful and meaningful to us while working on a project is not always so for our users. 
And even when a purely delightful animation is favorably received, as with Pokemon Yellow\u2019s adorable opening screen, too many repetitions of the cutest but ultimately useless animation, and users start to resent it as a hindrance.\n\n \n\nIf an animation doesn\u2019t help the user in some way, by showing them where they are or how two elements on a page relate to each other, then it\u2019s using up battery juice and processing cycles solely for the purpose of delight. Hardly the best use of resources.\n\nRather than animating solely for the sake of delight, we should first be able to articulate two things the animation does for the user. As an example, take this menu icon from Finethought.com (found via Use Your Interface). The menu icon does two things when clicked: \n\n\n\tIt gives the user feedback by animating, letting the user know its been clicked.\n\tIt demonstrates its changed relationship to the page\u2019s content by morphing into a close button.\n\n\n\n\nAssuming we have two good reasons to animate something, there is no reason our third cannot be to delight the user. \n\nGo four times faster\n\nThere is a rule of thumb in the world of traditional animation which is applicable to web animation: however long you think your animation should last, take that time and halve it. Then halve it again! When we work on an animation for hours, our sense of time dilates. What seems fast to us is actually unbearably slow for most users. In fact, the most recent criticism from users of animated interfaces on websites seems to be, \u201cIt\u2019s so slow!\u201d A good animation is unobtrusive, and that often means running fast.\n\nWhen getting your animations ready for prime time, reduce those durations to 25% of their original speed: a four-second fade out should be over in one. \n\nInstall a kill switch\n\nNo matter how thoughtful and necessary an animation, there will be people who become physically sick from seeing it. For these people, we must add a way to turn off animations on the website. \n\nFortunately, web designers are already thinking of ways to empower users to make their own decisions about how they experience the web. As an example, this site for the animated film Little from the Fish Shop allows users to turn off most of the parallax effects. While it doesn\u2019t remove the animation entirely, this website does reduce the most nauseating of the animations. \t\n\n\n\n\n\nAnimation is a powerful tool in our web design arsenal. But we must take care: if we abuse animation it might get a bad reputation; if we underestimate it, it won\u2019t be prioritized. But if we wield it thoughtfully, use it where it is both necessary and useful, and empower users to turn it off, animation is a tool that will help us build things that are easier to use and more delightful for years to come.\n\nLet\u2019s make 2015 the year web animation went to work for users.", "year": "2014", "author": "Rachel Nabors", "author_slug": "rachelnabors", "published": "2014-12-14T00:00:00+00:00", "url": "https://24ways.org/2014/five-ways-to-animate-responsibly/", "topic": "ux"} {"rowid": 219, "title": "Speed Up Your Site with Delayed Content", "contents": "Speed remains one of the most important factors influencing the success of any website, and the first rule of performance (according to Yahoo!) is reducing the number of HTTP requests. Over the last few years we\u2019ve seen techniques like sprites and combo CSS/JavaScript files used to reduce the number of HTTP requests. 
But there\u2019s one area where large numbers of HTTP requests are still a fact of life: the small avatars attached to the comments on articles like this one.\n\nAvatars\n\nMany sites like 24 ways use a fantastic service called Gravatar to provide user images. As a user, you can sign up to Gravatar, give them your e-mail address, and upload an image to represent you. Sites can then include your image by generating a one way hash of your e-mail address and using that to build an image URL. For example, the markup for the comments on this page looks something like this:\n\n
<div>\n\t<h4>\n\t\t<a href=\"http://allinthehead.com/\">\n\t\t\t<img width=\"100\" height=\"100\" alt=\"\" src=\"http://www.gravatar.com/avatar/13734b0cb20708f79e730809c29c3c48?s=100\">\n\t\t\tDrew McLellan\n\t\t</a>\n\t</h4>\n\t<p>This is a great article!</p>\n</div>
\n\nThe Gravatar URL contains two parts. 100 is the size in pixels of the image we want. 13734b0cb20708f79e730809c29c3c48 is an MD5 digest of Drew\u2019s e-mail address. Using MD5 means we can request an image for a user without sharing their e-mail address with anyone who views the source of the page.\n\nSo what\u2019s wrong with avatars?\n\nThe problem is that a popular article can easily get hundreds of comments, and every one of them means another image has to be individually requested from Gravatar\u2019s servers. Each request is small and the Gravatar servers are fast but, when you add them up, it can easily add seconds to the rendering time of a page. Worse, they can delay the loading of more important assets like the CSS required to render the main content of the page.\n\nThese images aren\u2019t critical to the page, and don\u2019t need to be loaded up front. Let\u2019s see if we can delay loading them until everything else is done. That way we can give the impression that our site has loaded quickly even if some requests are still happening in the background.\n\nDelaying image loading\n\nThe first problem we find is that there\u2019s no way to prevent Internet Explorer, Chrome or Safari from loading an image without removing it from the HTML itself. Tricks like removing the images on the fly with JavaScript don\u2019t work, as the browser has usually started requesting the images before we get a chance to stop it.\n\nRemoving the images from the HTML means that people without JavaScript enabled in their browser won\u2019t see avatars. As Drew mentioned at the start of the month, this can affect a large number of people, and we can\u2019t completely ignore them. But most sites already have a textual name attached to each comment and the avatars are just a visual enhancement. In most cases it\u2019s OK if some of our users don\u2019t see them, especially if it speeds up the experience for the other 98%.\n\nRemoving the images from the source of our page also means we\u2019ll need to put them back at some point, so we need to keep a record of which images need to be requested. All Gravatar images have the same URL format; the only thing that changes is the e-mail hash. Storing this is a great use of HTML5 data attributes.\n\nHTML5 data what?\n\nData attributes are a new feature in HTML5. The latest version of the spec says:\n\n\n\tA custom data attribute is an attribute in no namespace whose name starts with the string \u201cdata-\u201d, has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z).\n[\u2026]\nCustom data attributes are intended to store custom data private to the page or application, for which there are no more appropriate attributes or elements. These attributes are not intended for use by software that is independent of the site that uses the attributes.\n\n\nIn other words, they\u2019re attributes of an HTML element that start with \u201cdata-\u201d which you can use to share data with scripts running on your site. They\u2019re great for adding small bits of metadata that don\u2019t fit into an existing markup pattern the way microformats do.\n\nLet\u2019s see this in action\n\nTake a look at the markup for comments again:\n\n
<div>\n\t<h4>\n\t\t<a href=\"http://allinthehead.com/\">\n\t\t\t<img width=\"100\" height=\"100\" alt=\"\" src=\"http://www.gravatar.com/avatar/13734b0cb20708f79e730809c29c3c48?s=100\">\n\t\t\tDrew McLellan\n\t\t</a>\n\t</h4>\n\t<p>This is a great article!</p>\n</div>
\n\nLet\u2019s replace the <img> element with a data-gravatar-hash attribute on the <a> element:\n\n
<div>\n\t<h4>\n\t\t<a href=\"http://allinthehead.com/\" data-gravatar-hash=\"13734b0cb20708f79e730809c29c3c48\">\n\t\t\tDrew McLellan\n\t\t</a>\n\t</h4>\n\t<p>This is a great article!</p>\n</div>
\n\nOnce we\u2019ve done this, we\u2019ll need a small bit of JavaScript to find all these attributes, and replace them with images after the page has loaded. Here\u2019s an example using jQuery:\n\n$(window).load(function() {\n\t$('a[data-gravatar-hash]').prepend(function(index){\n\t\tvar hash = $(this).attr('data-gravatar-hash')\n\t\treturn '\"\"'\n\t})\n})\n\nThis code waits until everything on the page is loaded, then uses jQuery.prepend to insert an image into every link containing a data-gravatar-hash attribute. It\u2019s short and relatively simple, but in tests it reduced the rendering time of a sample page from over three seconds to well under one.\n\nFinishing touches\n\nWe still need to consider the appearance of the page before the avatars have loaded. When our script adds extra content to the page it will cause a browser reflow, which is visually annoying. We can avoid this by using CSS to reserve some space for each image before it\u2019s inserted into the HTML:\n\n#comments div {\n\tpadding-left: 110px;\n\tmin-height: 100px;\n\tposition: relative;\n}\n#comments div h4 img {\n\tdisplay: block;\n\tposition: absolute;\n\ttop: 0;\n\tleft: 0;\n}\n\nIn a real world example, we\u2019ll also find that the HTML for a comment is more varied as many users don\u2019t provide a web page link. We can make small changes to our JavaScript and CSS to handle this case.\n\nPut this all together and you get this example.\n\nTaking this idea further\n\nThere\u2019s no reason to limit this technique to sites using Gravatar; we can use similar code to delay loading any images that don\u2019t need to be present immediately. For example, this year\u2019s redesigned Flickr photo page uses a \u201cdata-defer-src\u201d attribute to describe any image that doesn\u2019t need to be loaded straight away, including avatars and map tiles.\n\nYou also don\u2019t have to limit yourself to loading the extra resources once the page loads. You can get further bandwidth savings by waiting until the user takes an action before downloading extra assets. Amazon has taken this tactic to the extreme on its product pages \u2013 extra content is loaded as you scroll down the page.\n\nSo next time you\u2019re building a page, take a few minutes to think about which elements are peripheral and could be delayed to allow more important content to appear as quickly as possible.", "year": "2010", "author": "Paul Hammond", "author_slug": "paulhammond", "published": "2010-12-18T00:00:00+00:00", "url": "https://24ways.org/2010/speed-up-your-site-with-delayed-content/", "topic": "ux"} {"rowid": 197, "title": "Designing for Mobile Performance", "contents": "Last year, some colleagues at Google ran a research study titled \u201cThe Need for Mobile Speed\u201d to find out what the impact of performance and perception of speed had on the way people use the web on their mobile devices. \nThat\u2019s not a trivial distinction; when considering performance, how fast something feels is often more important than how fast it actually is. When dealing with sometimes underpowered mobile devices and slow mobile networks, designing experiences that feel fast is exceptionally important.\nOne of the most startling numbers we found in the study was that 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.\nWe wanted to find out more, so following on from this study, we conducted research to define what the crucial elements of speed are. 
We took into consideration the user experience (UX), overall perception of speed, and how differing contexts the user finds themselves in can alter how fast a user thinks something loaded.\nTo understand speed and load times first we must understand that user mobile web behaviour is broken down into three buckets;\n\nIntention\nLocation\nState of mind\n\nLet\u2019s look at each of those in turn.\nIntention\nUsers browse sites on a mobile device for many different reasons. To be able to effectively design a performant user experience for them, it\u2019s important to understand what those reasons might be. When asked to describe their reason for visiting a site, approximately 30% of people asked by the study claimed that they were simply browsing without a particular purpose in mind. Looking deeper, we found that this number increased slightly (34%) for retail sites. 30% said they were just there to find out some information for a future task or action, such as booking a flight.\nInterestingly, the research shows that users are actually window shopping using their mobile browser. Only 29% actually said they had a specific goal or intent in mind, and this number increases significantly for financial services like banking sites (57%). This goes against a traditionally held view of users wanting to perform simple actions efficiently on their mobile device. Sure, some users are absolutely doing that, but many are just browsing around without a goal in mind, just like they would on a desktop browser.\nThis gives great insight into the user\u2019s intentions. It tells us that users are actively using sites on their mobile, but a large majority do not intend to do anything instantly. There\u2019s no goal they\u2019re under pressure to achieve. If a site\u2019s performance is lousy or janky, this will only reaffirm to the user to that they can hold off on completing a task, so they might just give up. \nHowever, if a site is quick to load, sophisticated in expressing its value proposition quickly, and enables the user to perform their actions seamlessly, then turning that \u201cbrowsing user\u201d into a \u201cbuying user\u201d becomes all that much easier. When the user has no goal, there\u2019s more opportunity to convert, and you stand a greater chance of doing that if the performance is good enough so they stick around.\nLocation\nObviously, mobile devices by their nature can be used in many different locations. This is an interesting consideration, because it\u2019s not something we traditionally need to take into account designing experiences for static platforms like desktop computers.\nThe in the study, we found that 82% of users browse the web on their mobile phone while in their home. In contrast, only 7% do the same while at work. This might come across as a bit of a shock, but when you look at mobile usage \u2013 in particular app usage \u2013 most of the apps being used are either a social network or some sort of entertainment or media app. Due to the unreliability of network connections, users will often alternate between these two types of apps.\nThe consequence being that if a site doesn\u2019t work offline, or otherwise compensate for bad network connectivity in some way by providing opportunities to allow users to browse their site, then it becomes a self-fulfilling prophecy as to why users mostly view the mobile web from the comfort of their homes where there is a strong WiFi connection. 
They\u2019re using mobile devices, but they\u2019re not actually mobile themselves.\nAnother thing to bear in mind is how users alternate between devices, a study by comScore found that 80% of transactions take place on desktop while 69% of the browsing takes place on mobile. Any given user might access from more than one location - they might visit one day from a bus queue on their phone, and then next day from a laptop at home.\nState of mind\nOne more thing we need to take into consideration is the user\u2019s state of mind. Whilst browsing at home, users tend to be more relaxed, and in the research 76% stated they were generally calmer at home. The user\u2019s state of mind can have quite a big impact on how they perceive things. The calmer they are, the quicker a site might appear to load. If the user is anxious and impatiently drumming their fingers on the table, things seem to take longer, and even a small wait can feel like an eternity.\nThis is quite key. Over 40% of sites take longer than 4 seconds to load for users who are are out and about and using a mobile data connection. Coupled with our perception, and amplified by a potentially less-than-calm state of mind, this can seem like an age.\nWhat does this all mean?\nI think we can all agree that users prefer strong, steady connections and comfort when completing transactions. It seems like common sense when we say it out loud. Recreating these feelings and sensations of comfort and predictability under all circumstances therefore becomes paramount. Equally, when asked in the study, users all claimed that speed was the most important factor impacting their mobile web usage. Over 40%, in fact, said it was the most important UX feature of a site, and nobody asked considered it to be of no importance at all.\n\nThe meaning of speed\nWhen it comes to performance, speed is measured in two ways \u2013 real speed; as measured on a clock, and perceived speed; how responsive an interaction feels. We can, of course, improve how quickly a site loads by simply making files smaller. Even then, the study showed that 32% of users felt a site can feel slow even when it loads in less than 4 seconds. This gets even worse when you look at it by age group, with 50% if young people (18-24 year olds) thinking a site was slower than it actually was. When you add to the mix that users think a site loaded faster when they are sitting compared to when they are standing up, then you are in a world of trouble if your site doesn\u2019t have any clear indicators to let the user know the loading state of you website or app.\nSo what can we do about this to improve our designs?\nHow to fix / hack user perception\nThere are some golden rules of speed, the first thing is hacking response time. If a page takes more than 3 seconds to load, you will certainly start to lose your users. However, if that slowness is part of a UX flow such as processing information, the user understands it might take a little time. Under those circumstances, a load time of under 5 seconds is acceptable, but even then, you should take caution. Anything above 8 seconds and you are in very real danger of losing your audience completely. \n\n\n\nLoad time\nUser impression\n\n\n\n\n200 ms\nFeels instant\n\n\n1 s\nFeels it is performing smoothly\n\n\n5 s\nPart of user flow\n\n\n8 s\nLose attention\n\n\n\nRemove the tap delay\nMobile browsers often use a 300-350ms delay between the triggering of the touchend and click events. 
This delay was added so the browser could determine if there was going to be a double-tap triggered or not, since double-tap is a common gesture used to zoom into text. This delay can have the side effect of making interactions feel laggy, and therefore giving the user the impression that the site is slow, even if it\u2019s their own browser causing the problem.\nFortunately there\u2019s a way to remove the delay. Add following in the of your page, and the delay no longer takes effect:\n\nYou can also use touch-action: manipulation in newer browsers to eliminate click delay. For old browsers, FastClick by FT Labs uses touch events to trigger clicks faster and remove the double-tap gesture.\nMake use of Skeleton Screens\nA skeleton layout is a wireframe version of your app that displays while content is being loaded. This demonstrates to the user that content is about to be loaded, giving the impression that something is happening more quickly than it really is. Consider also using a preloader UI as well, with a text label informing the user that the app is loading. One example would be to pulsate the wireframe content giving the app the feeling that it is alive and loading. This reassures the user that something is happening and helps prevent resubmissions or refreshes of your app. Razvan Caliman created a Codepen example of how to create this effect in purely CSS. \nOne thing to consider though, if data doesn\u2019t load then you might need to create a fallback 404 or error page to let the user know what happened. \n\nExample by Owen-Campbell Moore\nResponsive Touch Feedback\nCarefully designing the process by which items load is one aspect of increasing the perceived speed of your page, but reassuring the user that an action they have taken is in process is another. At Google we use something called a Ripple, which is is animating dot that expands or ripples in order to confirm to the user that their input has been triggered. This happens immediately, expanding outward from the point of touch. This reaffirms to the user that their input has been received and is being acted on, even before the site has had a chance to process or respond to the action. From the user\u2019s point of view, they\u2019ve tapped and the page has responded immediately, so it feels really quick and satisfying.\nYou can mimic this same behavior using our Material Design Components Web GitHub repo.\n\nProgress bars\nThese UI elements have existed for a very long time, but research conducted by Chris Harrison and published in New Scientist found that the style of a progress bar can alter the perception of speed drastically. As a matter of fact, progress bars with ripples that animate towards the left appear like they are loading faster by at least 11% percent. So when including them in your site, take into consideration that ripples and progress bars that pulsate faster as they get to the end will make your sites seem quicker.\n\n \n \n Faster Progress Bars: Manipulating Perceived Duration with Visual Augmentations\nNavigation\nThe speed with which a user can locate navigational items or call to actions adds to their perceived performance of a site. If the user\u2019s next action is quick to spot on the screen, they don\u2019t spend time hunting around the interface with their eyes and fingers. So no matter how quickly your code runs, hiding items behind a nav bar will make a site feel slower than it actually is. 
\nFacebook found that switching to using bottom navigation saw an increase in engagement, satisfaction, revenue, speed, and importantly, perception of speed. If the user sees the navigation items they\u2019re looking for quickly, the interaction feels fast. What\u2019s more, end-to-end task completion is quicker too, as the interface not only feels quicker, but actually measures quicker as well. Similar reactions were found with Spotify and Redbooth.\nLuke Wroblewski gave a talk last year in Ireland titled \u201cObvious Always Wins\u201d which he demonstrated through the work he did with Google+. Luke\u2019s message is that by making the core features of your app obvious to your user, you will see engagement go up. This again seems obvious, right? However, it is important to note that adding bottom navigation doesn\u2019t just mean a black bar at the bottom of your screen like some kind of performance magic bullet. The goal is to make the items clear to the user so they know what they need to be doing, and how you achieve that could be different from one interface to the next. Google keeps experimenting with different navigation styles, but finally settled with the below when they conducted user research and testing.\n\nConclusion\nBy utilizing a collection of UI patterns and speed optimisation techniques, you can improve not only the actual speed of a site, but the perception of how quickly a user thinks your site is loading. It is critical to remember that users will not always be using your site in a calm and relaxed manner and that even their age can impact how they will use or not use your site. By improving your site\u2019s stability, you increase the likelihood of positive user engagement and task completion.\nYou can learn more about techniques to hack user perception and improve user speed by taking a look at an E-Book we published with Awwwards.com called Speed Matters: Design for Mobile Performance.", "year": "2017", "author": "Mustafa Kurtuldu", "author_slug": "mustafakurtuldu", "published": "2017-12-18T00:00:00+00:00", "url": "https://24ways.org/2017/designing-for-mobile-performance/", "topic": "ux"} {"rowid": 159, "title": "How Media Studies Can Massage Your Message", "contents": "A young web designer once told his teacher \u2018just get to the meat already.\u2019 He was frustrated with what he called the \u2018history lessons and name-dropping\u2019 aspects of his formal college course. He just wanted to learn how to build Web sites, not examine the reasons why.\n\nTechnique and theory are both integrated and necessary portions of a strong education. The student\u2019s perspective has direct value, but also holds a distinct sorrow: Knowing the how without the why creates a serious problem. Without these surrounding insights we cannot tap into the influence of the history and evolved knowledge that came before. We cannot properly analyze, criticize, evaluate and innovate beyond the scope of technique.\n\nHistory holds the key to transcending former mistakes. Philosophy encourages us to look at different points of view. Studying media and social history empowers us as Web workers by bringing together myriad aspects of humanity in direct relation to the environment of society and technology. Having an understanding of media, communities, communication arts as well as logic, language and computer savvy are all core skills of the best of web designers in today\u2019s medium.\n\nControlling the Message\n\n\n\t\u2018The computer can\u2019t tell you the emotional story. 
It can give you the exact mathematical design, but what\u2019s missing is the eyebrows.\u2019 \u2013 Frank Zappa\n\n\nMedia is meant to express an idea. The great media theorist Marshall McLuhan suggests that not only is media interesting because it\u2019s about the expression of ideas, but that the media itself actually shapes the way a given idea is perceived. This is what McLuhan meant when he uttered those famous words: \u2018The medium is the message.\u2019\n\nIf instead of actually serving a steak to a vegetarian friend, what might a painting of the steak mean instead? Or a sculpture of a cow? Depending upon the form of media in question, the message is altered.\n\nFigure 1\n\nMust we know the history of cows to appreciate the steak on our plate? Perhaps not, but if we begin to examine how that meat came to be on the plate, the social, cultural and ideological associations of that cow, we begin to understand the complexity of both medium and message. A piece of steak on my plate makes me happy. A vegetarian friend from India might disagree and even find that that serving her a steak was very insensitive.\n\nTakeaway: Getting the message right involves understanding that message in order to direct it to your audience accordingly.\n\nA Separate Piece\n\nIf we revisit the student who only wants technique, while he might become extremely adept at the rendering of an idea, without an understanding of the medium, how is he to have greater control over how that idea is perceived? Ultimately, his creativity is limited and his perspective narrowed, and the teacher has done her student a disservice without challenging him, particularly in a scholastic environment, to think in liberal, creative and ultimately innovative terms.\n\nFor many years, web pundits including myself have promoted the idea of separation as a core concept within creating effective front-end media for the Web. By this, we\u2019ve meant literal separation of the technologies and documents: Markup for content; CSS for presentation; DOM Scripting for behavior. While the message of separation was an important part of understanding and teaching best practices for manageable, scalable sites, that separation is really just a separation of pieces, not of entire disciplines.\n\nFor in fact, the medium of the Web is an integrated one. That means each part of the desired message must be supported by the media silos within a given site. The visual designer must study the color, space, shape and placement of visual objects (including type) into a meaningful expression. The underlying markup is ideally written as semantically as possible, promote the meaning of the content it describes. Any interaction and functionality must make the process of the medium support, not take away from, the meaning of the site or Web application. \n\nExamination: The Glass Bead Game\n\nFigure 2\n\nFigure 2 shows two screenshots of CoreWave\u2019s historic \u2018Glass Bead Game.\u2019 Fashioned after Herman Hesse\u2019s novel of the same name, the game was an exploration of how ideas are connected to other ideas via multiple forms of media. 
It was created for the Web in 1996 using server-side randomization with .htmlx files in order to allow players to see how random associations are in fact not random at all.\n\nTakeaway: We can use the medium itself to explore creative ideas, to link us from one idea to the next, and to help us better express those ideas to our audiences.\n\nComputers and Human Interaction\n\nSince our medium involves computers and human interaction, it does us well to look to foundations of computers and reason. Not long ago I was chatting with Jared Spool on IM about this and that, and he asked me \u2018So how do you feel about that?\u2019 This caused me no end of laughter and I instantly quipped back \u2018It\u2019s okay by me ELIZA.\u2019 We both enjoyed the joke, but then I tried to share it with another dare I say younger colleague, and the reference was lost.\n\nRaise your hand if you got the reference! Some of you will, but many people who come to the Web medium do not get the benefit of such historical references because we are not formally educated in them. Joseph Weizenbaum created the ELIZA program, which emulates a Rogerian Therapist, in 1966. It was an early study of computers and natural human language. I was a little over 2 years old, how about you?\n\nConversation with Eliza\n\nThere are fortunately a number of ELIZA emulators on the Web. I found one at http://www.chayden.net/eliza/Eliza.html that actually contains the source code (in Java) that makes up the ELIZA script.\n\nFigure 3 shows a screen shot of the interaction. ELIZA first welcomes me, says \u2018Hello, How do you do. Please state your problem\u2019 and we continue in a short loop of conversation, the computer using cues from my answers to create new questions and leading fragments of conversation.\n\nFigure 3\n\nAlbeit a very limited demonstration of how humans could interact with a computer in 1966, it\u2019s amusing to play with now and compare it to something as richly interactive as the Microsoft Surface (Figure 4). Here, we see clear Lucite blocks that display projected video. Each side of the block has a different view of the video, so not only does one have to match up the images as they are moving, but do so in the proper directionality.\n\nFigure 4\n\nTakeway: the better we know our environment, the more we can alter it to emulate, expand and even supersede our message.\n\nLeveraging Holiday Cheer\n\nSince most of us at least have a few days off for the holidays now that Christmas is upon us, now\u2019s a perfect time to reflect on ones\u2019 environment and examine the messages within it. Convince your spouse to find you a few audio books for stocking stuffers. Find interactive games to play with your kids and observe them, and yourself, during the interaction. Pour a nice egg-nog and sit down with a copy of Marshall McLuhan\u2019s \u2018The Medium is the Massage.\u2019 Leverage that holiday cheer and here\u2019s to a prosperous, interactive new year.", "year": "2007", "author": "Molly Holzschlag", "author_slug": "mollyholzschlag", "published": "2007-12-22T00:00:00+00:00", "url": "https://24ways.org/2007/how-media-studies-can-massage-your-message/", "topic": "ux"} {"rowid": 317, "title": "Putting the World into \"World Wide Web\"", "contents": "Despite the fact that the Web has been international in scope from its inception, the predominant mass of Web sites are written in English or another left-to-right language. 
Sites are typically designed visually for Western culture, and rely on an enormous body of practices for usability, information architecture and interaction design that are by and large centric to the Western world.\n\nThere are certainly many reasons this is true, but as more and more Web sites realize the benefits of bringing their products and services to diverse, global markets, the more demand there will be on Web designers and developers to understand how to put the World into World Wide Web.\n\nInternationalization\n\nAccording to the W3C, Internationalization is:\n\n\n\t\u201c\u2026the design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language.\u201d\n\n\nMany Web designers and developers have at least heard, if not read, about Internationalization. We understand that the Web is in fact worldwide, but many of us never have the opportunity to work with Internationalization. Or, when we do, think of it in purely technical terms, such as \u201cwhich character set do I use?\u201d\n\nAt first glance, it might seem to many that Internationalization is the act of making Web sites available to international audiences. And while that is in fact true, this isn\u2019t done by broad-stroking techniques and technologies. Instead, it involves a far more narrow understanding of geographical, cultural and linguistic differences in specific areas of the world. This is referred to as localization and is the act of making a Web site make sense in the context of the region, culture and language(s) the people using the site are most familiar with.\n\nInternationalization itself includes the following technical tasks:\n\n\n\tEnsuring no barrier exists to the localization of sites. Of critical importance in the planning stages of a site for Internationalized audiences, the role of the developer is to ensure that no barrier exists. This means being able to perform such tasks as enabling Unicode and making sure legacy character encodings are properly handled.\n\tPreparing markup and CSS with Internationalization in mind. The earlier in the site development process this occurs, the better. Issues such as ensuring that you can support bidirectional text, identifying language, and using CSS to support non-Latin typographic features.\n\tEnabling code to support local, regional, language or culturally related references. Examples in this category would include time/date formats, localization of calendars, numbering systems, sorting of lists and managing international forms of addresses.\n\tEmpowering the user. Sites must be architected so the user can easily choose or implement the localized alternative most appropriate to them.\n\n\nLocalization\n\nAccording to the W3C, Localization is the:\n\n\n\t\u2026adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a \u201clocale\u201d).\n\n\nSo here\u2019s where we get down to thinking about the more sociological and anthropological concerns. Some of the primary localization issues are:\n\n\n\tNumeric formats. Different languages and cultures use numbering systems unlike ours. So, any time we need to use numbers, such as in an ordered list, we have to have a means of representing the accurate numbering system for the locale in question.\n\tMoney, honey! That\u2019s right. I\u2019ve got a pocketful of ugly U.S. dollars (why is U.S. money so unimaginative?). 
But I also have a drawer full of Japanese Yen, Australian Dollars, and Great British Pounds. Currency, how it\u2019s calculated and how it\u2019s represented is always a consideration when dealing with localization.\n\tUsing symbols, icons and colors properly. Using certain symbols or icons on sites where they might offend or confuse is certainly not in the best interest of a site that wants to sell or promote a product, service or information type. Moreover, the colors we use are surprisingly persuasive \u2013 or detrimental. Think about colors that represent death, for example. In many parts of Asia, white is the color of death. In most of the Western world, black represents death. For Catholic Europe, shades of purple (especially lavender) have represented Christ on the cross and mourning since at least Victorian times. When Walt Disney World Europe launched an ad campaign using a lot of purple and very glitzy imagery, millions of dollars were lost as a result of this seeming subtle issue. Instead of experiencing joy and celebration at the ads, the European audience, particularly the French, found the marketing to be overly American, aggressive, depressing and basically unappealing. Along with this and other cultural blunders, Disney Europe has become a well-known case study for businesses wishing to become international. By failing to understand localization differences, and how powerful color and imagery act on the human psyche, designers and developers are put to more of a disadvantage when attempting to communicate with a given culture.\n\tChoosing appropriate references to objects and ideas. What seems perfectly natural in one culture in terms of visual objects and ideas can get confused in another environment. One of my favorite cases of this has to do with Gerber baby food. In the U.S., the baby food is marketed using a cute baby on the package. Most people in the U.S. culturally do not make an immediate association that what is being represented on the label is what is inside the container. However, when Gerber expanded to Africa, where many people don\u2019t read, and where visual associations are less abstract, people made the inference that a baby on the cover of a jar of food represented what is in fact in the jar. You can imagine how confused and even angry people became. Using such approaches as a marketing ploy in the wrong locale can and will render the marketing a failure.\n\n\nAs you can see, the act of localization is one that can have profound impact on the success of a business or organization as it seeks to become available to more and more people across the globe.\n\nRethinking Design in the Context of Culture\n\nWhile well-educated designers and those individuals working specifically for companies that do a lot of localization understand these nuances, most of us don\u2019t get exposed to these ideas. Yet, we begin to see how necessary it becomes to have an awareness of not just the technical aspects of Internationalization, but the socio-cultural ones within localization.\n\nWhat\u2019s more, the bulk of information we have when it comes to designing sites typically comes from studies and work done on sites built in English and promoted to Western culture at large. We\u2019re making a critical mistake by not including diverse languages and cultural issues within our usability and information architecture studies.\n\nConsider the following design from the BBC:\n\n\n\nIn this case, we\u2019re dealing with English, which is read left to right. 
We are also dealing with U.K. cultural norms. Notice the following:\n\n\n\tLocation of of navigation\n\tUse of the color red\n\tUse of diverse symbols\n\tMix of symbols, icons and photos\n\tLocation of Search\n\n\nNow look at this design, which is the Arabic version of the BBC News, read right to left, and dealing with cultural norms within the Arabic-speaking world.\n\n\n\nNotice the following:\n\n\n\tLocation of of navigation (location switches to the right)\n\tUse of the color blue (blue is considered the \u201csafest\u201d global color)\n\tNo use of symbols and icons whatsoever\n\tLimitation of imagery to photos\n\tIn most cases, the photos show people, not objects\n\tLocation of Search\n\n\nAdmittedly, some choices here are more obvious than others in terms of why they were made. But one thing that stands out is that the placement of search is the same for both versions. Is this the result of a specific localization decision, or based on what we believe about usability at large? This is exactly the kind of question that designers working on localization have to seek answers to, instead of relying on popular best practices and belief systems that exist for English-only Web sites.\n\nIt\u2019s a Wide World Web After All\n\nFrom this brief article on Internationalization, it becomes apparent that the art and science of creating sites for global audiences requires a lot more preparation and planning than one might think at first glance. Developers and designers not working to address these issues specifically due to time or awareness will do well to at least understand the basic process of making sites more culturally savvy, and better prepared for any future global expansion.\n\nOne thing is certain: We not only are on a dramatic learning curve for designing and developing Web sites as it is, the need to localize sites is going to become more and more a part of the day to day work. Understanding aspects of what makes a site international and local will not only help you expand your skill set and make you more marketable, but it will also expand your understanding of the world and the people within it, how they relate to and use the Web, and how you can help make their experience the best one possible.", "year": "2005", "author": "Molly Holzschlag", "author_slug": "mollyholzschlag", "published": "2005-12-09T00:00:00+00:00", "url": "https://24ways.org/2005/putting-the-world-into-world-wide-web/", "topic": "ux"} {"rowid": 125, "title": "Accessible Dynamic Links", "contents": "Although hyperlinks are the soul of the World Wide Web, it\u2019s worth using them in moderation. Too many links becomes a barrier for visitors navigating their way through a page. This difficulty is multiplied when the visitor is using assistive technology, or is using a keyboard; being able to skip over a block of links doesn\u2019t make the task of finding a specific link any easier.\n\nIn an effort to make sites easier to use, various user interfaces based on the hiding and showing of links have been crafted. From drop-down menus to expose the deeper structure of a website, to a decluttering of skip links so as not to impact design considerations. 
Both are well intentioned with the aim of preserving a good usability experience for the majority of a website\u2019s audience; hiding the real complexity of a page until the visitor interacts with the element.\n\nWhen JavaScript is not available\n\nThe modern dynamic link techniques rely on JavaScript and CSS, but regardless of whether scripting and styles are enabled or not, we should consider the accessibility implications, particularly for screen-reader users, and people who rely on keyboard access.\n\nIn typical web standards-based drop-down navigation implementations, the rough consensus is that the navigation should be structured as nested lists so when JavaScript is not available the entire navigation map is available to the visitor. This creates a situation where a visitor is faced with potentially well over 50 links on every page of the website. Keyboard access to such structures is frustrating, there\u2019s far too many options, and the method of serially tabbing through each link looking for a specific one is tedious.\n\nInstead of offering the visitor an indigestible chunk of links when JavaScript is not available, consider instead having the minimum number of links on a page, and when JavaScript is available bringing in the extra links dynamically. Santa Chris Heilmann offers an excellent proof of concept in making Ajax navigation optional.\n\nWhen JavaScript is enabled, we need to decide how to hide links. One technique offers a means of comprehensively hiding links from keyboard users and assistive technology users. Another technique allows keyboard and screen-reader users to access links while they are hidden, and making them visible when reached.\n\nHiding the links\n\nIn JavaScript enhanced pages whether a link displays on screen depends on a certain event happening first. For example, a visitor needs to click a top-level navigation link that makes a set of sub-navigation links appear. In these cases, we need to ensure that these links are not available to any user until that event has happened.\n\nThe typical way of hiding links is to style the anchor elements, or its parent nodes with display: none. This has the advantage of taking the links out of the tab order, so they are not focusable. It\u2019s useful in reducing the number of links presented to a screen-reader or keyboard user to a minimum. Although the links are still in the document (they can be referenced and manipulated using DOM Scripting), they are not directly triggerable by a visitor.\n\nOnce the necessary event has happened, like our visitor has clicked on a top-level navigation link which shows our hidden set of links, then we can display the links to the visitor and make them triggerable. This is done simply by undoing the display: none, perhaps by setting the display back to block for block level elements, or inline for inline elements. For as long as this display style remains, the links are in the tab order, focusable by keyboard, and triggerable.\n\nA common mistake in this situation is to use visibility: hidden, text-indent: -999em, or position: absolute with left: -999em to position these links off-screen. But all of these links remain accessible via keyboard tabbing even though the links remain hidden from screen view. 
In some ways this is a good idea, but for hiding sub-navigation links, it presents the screen-reader user and keyboard user with too many links to be of practical use.\n\nMoving the links out of sight\n\nIf you want a set of text links accessible to screen-readers and keyboard users, but don\u2019t want them cluttering up space on the screen, then style the links with position: absolute; left: -999em. Links styled this way remain in the tab order, and are accessible via keyboard. (The position: absolute is added as a style to the link, not to a parent node of the link \u2013 this will give us a useful hook to solve the next problem).\n\na.helper {\n\tposition: absolute;\n\tleft: -999em;\n}\n\nOne important requirement when displaying links off-screen is that they are visible to a keyboard user when they receive focus. Tabbing on a link that is not visible is a usability mudpit, since the visitor has no visible cue as to what a focused link will do, or where it will go.\n\nThe simple answer is to restyle the link so that it appears on the screen when the hidden link receives focus. The anchor\u2019s :focus pseudo-class is a logical hook to use, and with the following style repositions the link onscreen when it receives the focus:\n\na.helper:focus, a.helper.focus {\n\ttop: 0;\n\tleft: 0;\n}\n\nThis technique is useful for hiding skip links, and options you want screen-reader and keyboard users to use, but don\u2019t want cluttering up the page. Unfortunately Internet Explorer 6 and 7 don\u2019t support the focus pseudo-class, which is why there\u2019s a second CSS selector a.helper.focus so we can use some JavaScript to help out. When the page loads, we look for all links that have a class of helper and add in onfocus and onblur event handlers:\n\nif (anchor.className == \"helper\") {\n\tanchor.onfocus = function() {\n\t\tthis.className = 'helper focus';\n\t}\n\tanchor.onblur = function() {\n\t\tthis.className = 'helper';\n\t}\n}\n\nSince we are using JavaScript to cover up for deficiencies in Internet Explorer, it makes sense to use JavaScript initially to place the links off-screen. That way an Internet Explorer user with JavaScript disabled can still use the skip link functionality.\n\nIt is vital that the number of links rendered in this way is kept to a minimum. Every link you offer needs to be tabbed through, and gets read out in a screen reader. Offer these off-screen links that directly benefit these types of visitor.\n\nAndy Clarke and Kimberly Blessing use a similar technique in the Web Standards Project\u2018s latest design, but their technique involves hiding the skip link in plain sight and making it visible when it receives focus. Navigate the page using just the tab key to see the accessibility-related links appear when they receive focus.\n\nThis technique is also a good way of hiding image replaced text. That way the screen-readers still get the actual text, and the website still gets its designed look.\n\nWhich way?\n\nIf the links are not meant to be reachable until a certain event has occurred, then the display: none technique is the preferred approach. 
If you want the links accessible but out of the way until they receive focus, then the off-screen positioning (or Andy\u2019s hiding in plain sight technique) is the way to go.", "year": "2006", "author": "Mike Davies", "author_slug": "mikedavies", "published": "2006-12-05T00:00:00+00:00", "url": "https://24ways.org/2006/accessible-dynamic-links/", "topic": "ux"} {"rowid": 274, "title": "Adaptive Images for Responsive Designs", "contents": "So you\u2019ve been building some responsive designs and you\u2019ve been working through your checklist of things to do:\n\n\n\tYou started with the content and designed around it, with mobile in mind first.\n\tYou\u2019ve gone liquid and there\u2019s nary a px value in sight; % is your weapon of choice now.\n\tYou\u2019ve baked in a few media queries to adapt your layout and tweak your design at different window widths.\n\tYou\u2019ve made your images scale to the container width using the fluid Image technique.\n\tYou\u2019ve even done the same for your videos using a nifty bit of JavaScript.\n\n\nYou\u2019ve done a good job so pat yourself on the back. But there\u2019s still a problem and it\u2019s as tricky as it is important: image resolutions.\n\nHTML has an problem\n\nCSS is great at adapting a website design to different window sizes \u2013 it allows you not only to tweak layout but also to send rescaled versions of the design\u2019s images. And you want to do that because, after all, a smartphone does not need a 1,900-pixel background image1.\n\nHTML is less great. In the same way that you don\u2019t want CSS background images to be larger than required, you don\u2019t want that happening with s either. A smartphone only needs a small image but desktop users need a large one. Unfortunately s can\u2019t adapt like CSS, so what do we do?\n\nWell, you could just use a high resolution image and the fluid image technique would scale it down to fit the viewport; but that\u2019s sending an image five or six times the file size that\u2019s really needed, which makes it slow to download and unpleasant to use. Smartphones are pretty impressive devices \u2013 my ancient iPhone 3G is more powerful in every way than my first proper computer \u2013 but they\u2019re still terribly slow in comparison to today\u2019s desktop machines. Sending a massive image means it has to be manipulated in memory and redrawn as you scroll. You\u2019ll find phones rapidly run out of RAM and slow to a crawl.\n\nWell, OK. You went mobile first with everything else so why not put in mobile resolution s too? Because even though mobile devices are rapidly gaining share in your analytics stats, they\u2019re still not likely to be the major share of your user base. I don\u2019t think desktop users would be happy with pokey little mobile resolution images, do you? What we need are adaptive images.\n\nAdaptive image techniques\n\nThere are a number of possible solutions, each with pros and cons, and it\u2019s not as simple to find a graceful solution as you might expect.\n\nYour first thought might be to use JavaScript to trawl through the markup and rewrite the source attribute. That\u2019ll get you the right end result, but it\u2019ll have done it in a way you absolutely don\u2019t want. That\u2019s because of the way browsers load resources. It starts to load the HTML and builds the page on-the-fly; as soon as it finds an element it immediately asks the server for that image. 
After the HTML has finished loading, the JavaScript will run, change the src attribute, and then the browser will request that new image too. Not instead of, but as well as. Not good: that\u2019s added more bloat instead of cutting it.\n\nPlain JavaScript is out then, which is a problem, because what other tools do we have to work with as web designers? Let\u2019s ignore that for now and I\u2019ll outline another issue with the concept of serving different resolution images for different window widths: a basic file management problem. To request a different image, that image has to exist on the server. How\u2019s it going to get there? That\u2019s not a trivial problem, especially if you have non-technical users that update content through a CMS. Let\u2019s say you solve that \u2013 do you plan on a simple binary switch: big image|little image? Is that really efficient or future-proof? What happens if you have an archive of existing content that needs to behave this way? Can you apply such a solution to existing content or markup?\n\nThere\u2019s a detailed round-up of potential techniques for solving the adaptive images problem over at the Cloud Four blog if you fancy a dig around exploring all the options currently available. But I\u2019m here to show you what I think is the most flexible and easy to implement solution, so here we are.\n\nAdaptive Images\n\nAdaptive Images aims to mitigate most of the issues surrounding the problems of bringing the venerable tag into the 21st century. And it works by leaving that tag completely alone \u2013 just add that desktop resolution image into the markup as you\u2019ve been doing for years now. We\u2019ll fix it using secret magic techniques and bottled pixie dreams. Well, fine: with one .htaccess file, one small PHP file and one line of JavaScript. But you\u2019re killing the mystique with that kind of talk.\n\nSo, what does this solution do?\n\n\n\tIt allows s to adapt to the same break points you use in your media queries, giving granular control in the same way you get with your CSS.\n\tIt installs on your server in five minutes or less and after that is automatic and you don\u2019t need to do anything.\n\tIt generates its own rescaled images on the server and doesn\u2019t require markup changes, so you can apply it to existing web content.\n\tIf you wish, it will make all of your images go mobile-first (just in case that\u2019s what you want if JavaScript and cookies aren\u2019t available).\n\n\nSound good? I hope so. Here\u2019s what you do.\n\nSetting up and rolling out\n\nI\u2019ll assume you have some basic server knowledge along with that wealth of front-end wisdom exploding out of your head: that you know not to overwrite any existing .htaccess file for example, and how to set file permissions on your server. Feeling up to it? Excellent.\n\n\n\tDownload the latest version of Adaptive Images either from the website or from the GitHub repository.\n\tUpload the included .htaccess and adaptive-images.php files into the root folder of your website.\n\tCreate a directory called ai-cache and make sure the server can write to it (CHMOD 755 should do it).\n\tAdd the following line of JavaScript into the of your site:\n\n\n\n\nThat\u2019s it, unless you want to tweak the default settings. 
You likely do, but essentially you\u2019re already up and running.\n\nHow it works\n\nAdaptive Images does a number of things depending on the scenario the script has to handle, but here\u2019s a basic overview of what it does when you load a page running it:\n\n\n\tA session cookie is written with the value of the visitor\u2019s screen size in pixels.\n\tThe HTML encounters an tag and sends a request to the server for that image. It also sends the cookie, because that\u2019s how browsers work.\n\tApache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL.\n\tThere are! The .htaccess says \u201cHey, server! Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.\u201d\n\tThe PHP file then does some intelligent thinking which can cover a number of scenarios, but I\u2019ll illustrate one path that can happen:\n\n\n\t\n\t\tThe PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px.\n\t\tThe PHP has a look at the available media query sizes that were configured and decides which one matches the user\u2019s device.\n\t\tIt then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there.\n\t\tWe\u2019ll pretend it doesn\u2019t \u2013 the PHP then goes to the actual requested URI and finds that the original file does exist.\n\t\tIt has a look to see how wide that image is. If it\u2019s already smaller than the user\u2019s screen width it sends it along and stops there. But, let\u2019s pretend the image is 1,000px wide.\n\t\tThe PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it.\n\t\n\nIt also does a few other things when needs arise, for example:\n\n\n\tIt sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it.\n\tIt handles cases where there isn\u2019t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size.\n\tIt compares timestamps between the source image and the generated cache image \u2013 to ensure that if the source image gets updated, the old cached file won\u2019t be sent.\n\n\nCustomizing\n\nThere are a few options you can customize if you don\u2019t like the default values. By looking in the PHP\u2019s configuration section at the top of the file, you can:\n\n\n\tSet the resolution breakpoints to match your media query break points.\n\tChange the name and location of the ai-cache folder.\n\tChange the quality level any generated JPG images are saved at.\n\tHave it perform a subtle sharpen on rescaled images to help keep detail.\n\tToggle whether you want it to compare the files in the cache folder with the source ones or not.\n\tSet how long the browser should cache the images for.\n\tSwitch between a mobile-first or desktop-first approach when a cookie isn\u2019t found.\n\n\nMore importantly, you probably want to omit a few folders from the AI behaviour. You don\u2019t need or want it resizing the images you\u2019re using in your CSS, for example. That\u2019s fine \u2013 just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. 
Or, if you\u2019re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it\u2019ll only apply AI behaviour to a given list of folders.\n\nCaveats\n\nAs I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I\u2019ve seen so far.\n\nThis is a PHP solution\n\nI wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don\u2019t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence2 and I would welcome anyone to contribute a port of the code3. \n\nContent delivery networks\n\nAdaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won\u2019t allow that to happen. Adaptive Images will not work if you\u2019re using a CDN to deliver your website.\n\nA minor but interesting cookie issue.\n\nAs Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested \u2013 even though the JavaScript that sets the cookie is loaded by the browser before it finds any tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem:\n\n\n\tThe $mobile_first toggle allows you to choose what to send to a browser if a cookie isn\u2019t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest.\n\tEven if set to TRUE, Adaptive Images checks the User Agent String. If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE.\n\n\nThis means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn\u2019t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest.\n\nThe best way to get a cookie written is to use JavaScript as I\u2019ve explained above, because it\u2019s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP \u2018image\u2019 to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won\u2019t be set until after images have been requested.\n\nThe future\n\nFor today, this is a pretty good solution. It works, and as it doesn\u2019t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you\u2019re good to go \u2013 you\u2019d never know AI had been there.\n\nHowever, this isn\u2019t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future.\n\nFirst, we could do with browsers sending far more information about the user\u2019s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. 
The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn\u2019t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for.\n\nSecondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does.\n\nI hope you\u2019ve found this interesting and will find Adaptive Images useful.\n\nFootnotes\n\n1 While I\u2019m talking about preventing smartphones from downloading resources they don\u2019t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files.\n\n2 Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository. \n\n3 There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images.", "year": "2011", "author": "Matt Wilcox", "author_slug": "mattwilcox", "published": "2011-12-04T00:00:00+00:00", "url": "https://24ways.org/2011/adaptive-images-for-responsive-designs/", "topic": "ux"} {"rowid": 106, "title": "Checking Out: Progress Meters", "contents": "It\u2019s the holiday season, so you know what that means: online shopping! When I started developing Web sites back in the 90s, many of my first clients were small local shops wanting to sell their goods online, so I developed many a checkout system. And because of slow dial-up speeds back then, informing the user about where they were in the checkout process was pretty important.\n\nEven though we\u2019re (mostly) beyond the dial-up days, informing users about where they are in a flow is still important. In usability tests at the companies I\u2019ve worked at, I\u2019ve seen time and time again how not adequately informing the user about their state can cause real frustration. This is especially true for two sets of users: mobile users and users of assistive devices, in particular, screen readers.\n\nThe progress meter is a very common design solution used to indicate to the user\u2019s state within a flow. On the design side, much effort may go in to crafting a solution that is as visually informative as possible. On the development side, however, solutions range widely. I\u2019ve checked out the checkouts at a number of sites and here\u2019s what I\u2019ve found when it comes to progress meters: they\u2019re sometimes inaccessible and often confusing or unhelpful \u2014 all because of the way in which they\u2019re coded. For those who use assistive devices or text-only browsers, there must be a better way to code the progress meter \u2014 and there is.\n\n(Note: All code samples are from live sites but have been tweaked to hide the culprits\u2019 identities.)\n\nHow not to make progress\n\nA number of sites assemble their progress meters using non- or semi-semantic markup and images with no alternate text. On text-only browsers (like my mobile phone) and to screen readers, this looks and reads like chunks of content with no context given.\n\n
\n\t\"\"\n\tShipping information\n\t\"\"\n\t\"\"\n\tPayment information\n\t\"\"\n\t\"\"\n\tPlace your order\n
\n\nIn the above example, the third state, \u201cPlace your order\u201d, is the current state. But a screen reader may not know that, and my cell phone only displays \"Shipping informationPayment informationPlace your order\". Not good.\n\nIs this progress?\n\nOther sites present the entire progress meter as a graphic, like the following:\n\n\n\nNow, I have no problem with using a graphic to render a very stylish progress meter (my sample above is probably not the most stylish example, of course, but you understand my point). What becomes important in this case is the use of appropriate alternate text to describe the image. Disappointingly, sites today have a wide range of solutions, including using no alternate text. Check out these code samples which call progress meter images.\n\n\"\"\n\nI think we can all agree that the above is bad, unless you really don\u2019t care whether or not users know where they are in a flow.\n\n\"Shipping\n\nThe alt text in the example above just copies all of the text found in the graphic, but it doesn\u2019t represent the status at all. So for every page in the checkout, the user sees or hears the same text. Sure, by the second or third page in the flow, the user has figured out what\u2019s going on, but she or he had to think about it. I don\u2019t think that\u2019s good.\n\n\"Checkout:\n\nThe above probably has the best alternate text out of these examples, because the user at least understands that they\u2019re in the Checkout process, on the Place your order page. But going through the flow with alt text like this, the user doesn\u2019t know how many steps are in the flow.\n\nSemantic progress\n\nOf course, there are some sites that use an ordered list when marking up the progress meter. Hooray! Unfortunately, no text-only browser or screen reader would be able to describe the user\u2019s current state given this markup.\n\n
<ol class="progress">
	<li>shipping information</li>
	<li>payment information</li>
	<li class="current">place your order</li>
</ol>

Without CSS enabled, the above is rendered as follows:

1. shipping information
2. payment information
3. place your order

Progress at last

We all know that semantic markup makes for the best foundation, so we’ll start with the markup found above. In order to make the state information accessible, let’s add some additional text in paragraph and span elements.

<div class="progress">
	<p>There are three steps in this checkout process.</p>
	<ol>
		<li><span>Enter your </span>shipping information</li>
		<li><span>Enter your </span>payment information</li>
		<li><span>Review details and </span>place your order</li>
	</ol>
</div>

Add on some simple CSS to hide the paragraph and spans, and arrange the list items on a single line with a background image to represent the large number, and this is what you’ll get:

There are three steps in this checkout process.
Enter your shipping information
Enter your payment information
Review details and place your order

To display and describe a state as active, add the class “current” to one of the list items. Then change the hidden content such that it better describes the state to the user.

<div id=\"progress\">\n\t<p>There are three steps in this checkout process.</p>\n\t<ol>\n\t\t<li class=\"current\"><span>You are currently entering your </span>shipping information</li>\n\t\t<li><span>In the next step, you will enter your </span>payment information</li>\n\t\t<li><span>In the last step, you will review the details and </span>place your order</li>\n\t</ol>\n</div>
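\n\nThe stylesheet will vary with your design, but a minimal sketch of the kind of CSS described above, hiding the extra text off-screen and floating the steps onto a single line, might look like this (the widths and the step.png background image are purely illustrative):\n\n#progress p,\n#progress li span {\n\t/* keep the explanatory text available to screen readers, but off-screen */\n\tposition: absolute;\n\tleft: -10000em;\n}\n#progress ol {\n\tlist-style: none;\n}\n#progress li {\n\t/* lay the steps out on a single line */\n\tfloat: left;\n\twidth: 10em;\n\t/* leave room for the large step number supplied as a background image */\n\tpadding-top: 40px;\n\tbackground: url(step.png) no-repeat top left;\n}\n#progress li.current {\n\tfont-weight: bold;\n}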
\n\nThe end result is an attractive progress meter that gives much greater semantic and contextual information.\n\n \n\tThere are three steps in this checkout process.\n\t\n\t\tYou are currently entering your shipping information\n\t\tIn the next step, you will enter your payment information\n\t\tIn the last step, you will review the details and place your order\n\t\n \n\nFor example, the above example renders in a text-only browser as follows:\n\n \n\tThere are three steps in this checkout process.\n\t\n\t\tYou are currently entering your shipping information\n\t\tIn the next step, you will enter your payment information\n\t\tIn the last step, you will review the details and place your order\n\t\n \n\nAnd the screen reader I use for testing announces the following:\n\n \n\tThere are three steps in this checkout process. List of three items. You are currently entering your shipping information. In the next step, you will enter your payment information. In the last step, you will review the details and place your order. List end.\n \n\nHere\u2019s a sample code page that summarises this approach.\n\nHappy frustration-free online shopping with this improved progress meter!", "year": "2008", "author": "Kimberly Blessing", "author_slug": "kimberlyblessing", "published": "2008-12-12T00:00:00+00:00", "url": "https://24ways.org/2008/checking-out-progress-meters/", "topic": "ux"} {"rowid": 278, "title": "Going Both Ways", "contents": "It\u2019s that time of the year again: Santa is getting ready to travel the world. Up until now, girls and boys from all over have sent in letters asking for what they want. I hope that Santa and his elves have\u2014unlike me\u2014learned more than just English.\n\nOn the Internet, those girls and boys want to participate in sharing their stories and videos of opening presents and of being with friends and family. Ah, yes, the wonders of user generated content. But more than that, people also want to be able to use sites in the language they know.\n\nWhile you and I might expect the text to read from left to right, not all languages do. Some go from right to left, such as Arabic and Hebrew. (Some also go from top to bottom, but for now, let\u2019s just worry about those first two directions!)\n\nIf we were building a site for girls and boys to send their letters to Santa, we need to consider having the interface in the language and direction that they prefer. On the elves\u2019 side, they may be viewing the site in one direction but reading the user generated content in the other direction. We need to build a site that supports bidirectional (or bidi) text.\n\nLet\u2019s take a look at some things to be aware of when it comes to building bidi interfaces.\n\nSetting the direction of the interface\n\nRight off the bat, we need to tell the browser what direction the text should be going in. To do this, we add the dir attribute to an HTML element and set it to either LTR (for left to right) or RTL (for right to left).\n\n\n\nYou can add the dir attribute to any element and it will set or change the direction for the content within that element. \n\n\n Here is English Content.\n
\u0627\u0644\u0645\u0648\u0636\u0648\u0639
\n\n\nYou can also set the direction via CSS.\n\n.rtl {\n direction: rtl;\n }\n\nIt\u2019s generally recommended that you don\u2019t use CSS to set the direction of the text. Text direction is an important part of the content that should be retained even in environments where the CSS may not be available or fails to load.\n\nHow things change with the direction attribute\n\nJust adding the dir attribute tells the browser to render the content within it differently. \n\n\n\nThe text aligns to the right of the page and, interestingly, punctuation appears at the left of the sentence. (We\u2019ll get to that in a little bit.) \n\nScrollbars in most browsers will appear on the left instead of the right. Webkit is the notable exception here which always shows the scrollbar on the right, no matter what the text direction is. Avoid having a design that has an expectation that the scrollbar will be in a specific place (and a specific size).\n\nChanging the order of text mid-way\n\nAs we saw in that previous example, the punctuation appeared at the beginning of the sentence instead of the end, even though the text was English. At Yahoo!, we have an interesting dilemma where the company name has punctuation in it. Therefore, when the name appears in the middle of (for example) Arabic text, the exclamation mark appears at the beginning of the word instead of the end.\n\n\n\nThere are two ways in which this problem can be solved:\n\n1. Use HTML around the left-to-right content, or\n\nTo solve the problem of the Yahoo! name in the midst of Arabic text, we can wrap a span around it and change the direction on that element.\n\n\n\n2. Use a text direction mark in the content.\n\nUnicode has two marks, U+200E and U+200F, that tell the browser that the text is in a particular direction. Placing this right after the punctuation will correct the placement.\n\nUsing the HTML entity:\nYahoo!\u200e\n\nTables\n\nThankfully, the cells of a data table also get reordered from right to left. Equally as nice, if you\u2019re using display:table, the content will still get reordered.\n\n\n\nCSS\n\nSo far, we\u2019ve seen that the dir attribute does a pretty decent job of getting content flowing in the direction that we need it. Unfortunately, there are huge swaths of design that is handled by CSS that the handy dir attribute has zero effect over.\n\nMany properties, like float or absolute positioning with left and right values, are unaffected and must be handled manually. Elements that were floated left must now by floated right. Left margins and paddings must now move to the right and the right margins and paddings must now move to the left.\n\nSince the browser won\u2019t handle this for us, we have a couple approaches that we can use:\n\nCSS Only\n\nWe can take advantage of the attribute selector to target CSS to apply in one direction or another.\n\n[dir=ltr] .module {\n\tfloat: left;\n\tmargin: 0 0 0 20px;\n}\n\n[dir=rtl] .module {\n\tfloat: right;\n\tmargin: 0 20px 0 0;\n}\n\nAs you can see from this example, both of the properties have been modified for the flipped interface. If your interface is rather complicated, you will have to create a lot of duplicate rules to have the site looking good in both directions while serving up a single stylesheet.\n\nCSSJanus\n\nGoogle has a tool called CSSJanus. It\u2019s a Python script that runs over the LTR versions of your CSS files and generates RTL versions. 
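\n\nThe transformation is mechanical: feed it a left-to-right rule like the module example from earlier and it writes out a flipped equivalent along these lines (the selector is just for illustration):\n\n/* source stylesheet, written for LTR */\n.module {\n\tfloat: left;\n\tmargin: 0 0 0 20px;\n}\n\n/* what CSSJanus generates for the RTL stylesheet */\n.module {\n\tfloat: right;\n\tmargin: 0 20px 0 0;\n}\n\n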
For the RTL version of the site, just serve up those CSS files instead of the LTR versions.\n\nThe script looks for keywords and value combinations and automatically swaps them so you don\u2019t have to. \n\nAt Yahoo!, CSSJanus was a huge help in speeding up our development of a bidi interface. We\u2019ve also made a number of improvements to the script to better handle border radius, background positioning, and gradients. We will be pushing those changes back into the CSSJanus project. \n\n\n\nBackground Images\n\nBackground images, especially for things like CSS sprites, also raise an interesting dilemma. Background images are positioned relative to the left of the element. In a flipped interface, however, we need to position it relative to the right. An icon that would be to the left of some text will now need to appear on the right.\n\n\n\nIf the x position of the background is percentage-based, then it\u2019s fairly easy to swap the values. 0 becomes 100%, 10% becomes 90% and so on. If the x position is pixel-based, then we\u2019re in a bit of a pickle. There\u2019s no way to say that the image should be a certain number of pixels from the right.\n\nTherefore, you\u2019ll need to ensure that any background image that needs to be swapped should be percentage-based. (99.9% of the the time, the background position will need to be 0 so that it can be changed to 100% for RTL.)\n\nIf you\u2019re taking an existing implementation, background positioning will likely be the biggest hurdle you\u2019ll have to overcome in swapping your interface around. If you make sure your x position is always percentage-based from the beginning, you\u2019ll have a much smoother process ahead of you!\n\nFlipping Images\n\nThis is a more subtle point and one where you\u2019ll really want an expert with the region to weigh in on. In RTL interfaces, users may expect certain icons to also be flipped. Pencil icons that skew to the right in LTR interfaces might need to be swapped to skew to the left, instead. Chat bubbles that come from the left will need to come from the right.\n\nThe easiest way to handle this is to create new images. Name the LTR versions with -ltr in the name and name the RTL versions with -rtl in the name. CSSJanus will automatically rename all file references from -ltr to -rtl.\n\nThe Future\n\nThankfully, those within the W3C recognize that CSS should be more agnostic. As a result, they\u2019ve begun introducing new properties that allow the browser to manage the swapping from left to right for us.\n\nThe CSS3 specification for backgrounds allows for the background-position to be relative to other corners other than the top left by specifying keywords before each position.\n\nThis will position the background 5px from the bottom right of the element.\n\nbackground-position: right 5px bottom 5px;\n\nOpera 11.60 is currently the only browser that supports this syntax.\n\nFor margin and padding, we have margin-start and margin-end. In LTR interfaces, margin-start would be the same as margin-left and in RTL interfaces, margin-start would be the same as margin-right. 
\n\nFirefox and Webkit support these but with vendor prefixes right now:\n\n-webkit-margin-start: 20px;\n-moz-margin-start: 20px;\n\nIn the CSS3 Images working draft specification, there\u2019s an image() property that allows us to specify image fallbacks and whether those fallbacks are for LTR or RTL interfaces.\n\nbackground: image('sprite.png' ltr, 'sprite-rtl.png' rtl);\n\nUnfortunately, no browser supports this yet but it\u2019s nice to be able to dream of how much easier this will be in the future!\n\nHo Ho Ho\n\nHopefully, after all of this, you\u2019re full of cheer knowing that you\u2019re well on your way to creating interfaces that can go both ways!", "year": "2011", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2011-12-19T00:00:00+00:00", "url": "https://24ways.org/2011/going-both-ways/", "topic": "ux"} {"rowid": 69, "title": "How to Do a UX Review", "contents": "A UX review is where an expert goes through a website looking for usability and experience problems and makes recommendations on how to fix them. \nI\u2019ve completed a number of UX reviews over my twelve years working as a user experience consultant and I thought I\u2019d share my approach. \nI\u2019ll be talking about reviewing websites here; you can adapt the approach for web apps, or mobile or desktop apps. \nWhy conduct a review\nTypically, a client asks for a review to be undertaken by a trusted and, ideally, detached third party who either works for an agency or is a freelancer. Often they may ask a new member of the UX team to complete one, or even set it as a task for a job interview. This indicates the client is looking for an objective view, seen from the outside as a user would see the website. \nI always suggest conducting some user research rather than a review. Users know their goals and watching them make (what you might think of as) mistakes on the website is invaluable. Conducting research with six users can give you six hours\u2019 worth of review material from six viewpoints. In short, user research can identify more problems and show how common those problems might be. \nThere are three reasons, though, why a review might better suit client needs than user research: \n\nQuick results: user research and analysis takes at least three weeks.\nLimited budget: the \u00a36\u201310,000 cost to run user research is about twice the cost of a UX review. \nUsers are hard to reach: in the business-to-business world, reaching users is difficult, especially if your users hold senior positions in their organisations. Working with consumers is much easier as there are often more of them. \n\nThere is some debate about the benefits of user research over UX review. In my experience you learn far more from research, but opinions differ. \nBe objective\nThe number one mistake many UX reviewers make is reporting back the issues they identify as their opinion. This can cause credibility problems because you have to keep justifying why your opinion is correct. \nI\u2019ve had the most success when giving bad news in a UX review and then finally getting things fixed when I have been as objective as possible, offering evidence for why something may be a problem. \nTo be objective we need two sources of data: numbers from analytics to appeal to reason; and stories from users in the form of personas to speak to emotions. Highlighting issues with dispassionate numerical data helps show the extent of the problem. Making the problems more human using personas can make the problem feel more real. 
\nNumbers from analytics\nThe majority of clients I work with use Google Analytics, but if you use a different analytics package the same concepts apply. I use analytics to find two sets of things.\n1. Landing pages and search terms\nLanding pages are the pages users see first when they visit a website \u2013 more often than not via a Google search. Landing pages reveal user goals. If a user landed on a page called \u2018Yellow shoes\u2019 their goal may well be to find out about or buy some yellow shoes. \nIt would be great to see all the search terms bringing people to the website but in 2011 Google stopped providing search term data to (rightly!) protect users\u2019 privacy. You can get some search term data from Google Webmaster tools, but we must rely on landing pages as a clue to our users\u2019 goals. \nThe thing to look for is high-traffic landing pages with a high bounce rate. Bounce rate is the percentage of visitors to a website who navigate away from the site after viewing only one page. A high bounce rate (over 50%) isn\u2019t good; above 70% is bad.\nTo get a list of high-traffic landing pages with a high bounce rate install this bespoke report.\nGoogle Analytics showing landing pages ordered by popularity and the bounce rate for each.\nThis is the list of pages with high demand and that have real problems as the bounce rate is high. This is the main focus of the UX review. \n2. User flows\nWe have the beginnings of the user journey: search terms and initial landing pages. Now we can tap into the really useful bit of Google Analytics. Called behaviour flows, they show the most common order of pages visited. \nBehaviour flows from Google Analytics, showing the routes users took through the website.\nHere we can see the second and third (and so on) pages users visited. Importantly, we can also see the drop-outs at each step. \nIf your client has it set up, you can also set goal pages (for example, a post-checkout contact us and thank you page). You can then see a similar view that tracks back from the goal pages. If your client doesn\u2019t have this, suggest they set up goal tracking. It\u2019s easy to do. \nWe now have the remainder of the user journey. \nA user journey\nExpect the work in analytics to take up to a day. \nWe may well identify more than one user journey, starting from different landing pages and going to different second- and third-level pages. That\u2019s a good thing and shows we have different user types. Talking of user types, we need to define who our users are. \nPersonas\nWe have some user journeys and now we need to understand more about our users\u2019 motivations and goals. \nI have a love-hate relationship with personas, but used properly these portraits of users can help bring a human touch to our UX review. \nI suggest using a very cut-down view of a persona. My old friends Steve Cable and Richard Caddick at cxpartners have a great free template for personas from their book Communicating the User Experience.\nThe first thing to do is find a picture that represents that persona. Don\u2019t use crappy stock photography \u2013 it\u2019s sometimes hard to relate to perfect-looking people) \u2013 use authentic-looking people. Here\u2019s a good collection of persona photos. \nAn example persona.\nThe personas have three basic attributes:\n\nGoals: we can complete these drawing on the analytics data we have (see example).\nMusts: things we have to do to meet the persona\u2019s needs.\nMust nots: a list of things we really shouldn\u2019t do. 
\n\nCompleting points 2 and 3 can often be done during the writing of the report. \nLet\u2019s take an example. We know that the search term \u2018yellow shoes\u2019 takes the user to the landing page for yellow shoes. We also know this page has a high bounce rate, meaning it doesn\u2019t provide a good experience. \nWith our expert hat on we can review the page. We will find two types of problem: \n\nUsability issues: ineffective button placement or incorrect wording, links not looking like links, and so on. \nExperience issues: for example, if a product is out of stock we have to contact the business to ask them to restock. \n\nThat link is very small and hard to see.\nWe could identify that the contact button isn\u2019t easy to find (a usability issue) but that\u2019s not the real problem here. That the user has to ask the business to restock the item is a bad user experience. We add this to our personas\u2019 must nots. The big experience problems with the site form the musts and must nots for our personas. \nWe now have a story around our user journey that highlights what is going wrong. \nIf we\u2019ve identified a number of user journeys, multiple landing pages and differing second and third pages visited, we can create more personas to match. A good rule of thumb is no more than three personas. Any more and they lose impact, watering down your results. \nExpect persona creation to take up to a day to complete. \nLet\u2019s start the review\nWe take the user journeys and we follow them step by step, working through the website looking for the reasons why users drop out at each step. Using Keynote or PowerPoint, I structure the final report around the user journey with separate sections for each step.\nFor each step we\u2019ll find both usability and experience problems. Split the results into those two groups. \nUsability problems are fairly easy to fix as they\u2019re often quick design changes. As you go along, note the usability problems in one place: we\u2019ll call this \u2018quick wins\u2019. Simple quick fixes are a reassuring thing for a client to see and mean they can get started on stuff right away. You can mark the severity of usability issues. Use a scale from 1 to 3 (if you use 1 to 5 everything ends up being a 3!) where 1 is minor and 3 is serious. \nReview the website on the device you\u2019d expect your persona to use. Are they using the site on a smartphone? Review it on a smartphone. \nI allow one page or slide per problem, which allows me to explain what is going wrong. For a usability problem I\u2019ll often make a quick wireframe or sketch to explain how to address it. \nA UX review slide displaying all the elements to be addressed. These slides may be viewed from across the room on a screen so zoom into areas of discussion.\n(Quick tip: if you use Google Chrome, try Awesome Screenshot to capture screens.)\nWhen it comes to the more severe experience problems \u2013 things like an online shop not offering next day delivery, or a business that needs to promise to get back to new customers within a few hours \u2013 these will take more than a tweak to the UI to fix. \nCall these something different. I use the terms like business challenges and customer experience issues as they show that it will take changes to the organisation and its processes to address them. It\u2019s often beyond the remit of a humble UX consultant to recommend how to fix organisational issues, so don\u2019t try. 
\nAgain, create a page within your document to collect all of the business challenges together. \nExpect the review to take between one and three days to complete. \nThe final report should follow this structure:\n\nThe approach\nOverview of usability quick wins\nOverview of experience issues\nOverview of Google Analytics findings\nThe user journeys \nThe personas\nDetailed page-by-page review (broken down by steps on the user journey)\n\nThere are two academic theories to help with the review. \nHeuristic evaluation is a set of criteria to organise the issues you find. They\u2019re great for categorising the usability issues you identify but in practice they can be quite cumbersome to apply. \nI prefer the more scientific and much simpler cognitive walkthrough that is focused on goals and actions.\nA workshop to go through the findings\nThe most important part of the UX review process is to talk through the issues with your client and their team. \nA document can only communicate a certain amount. Conversations about the findings will help the team understand the severity of the issues you\u2019ve uncovered and give them a chance to discuss what to do about them. \nExpect the workshop to last around three hours.\nWhen presenting the report, explain the method you used to conduct the review, the data sources, personas and the reasoning behind the issues you found. Start by going through the usability issues. Often these won\u2019t be contentious and you can build trust and improve your credibility by making simple, easy to implement changes. \nThe most valuable part of the workshop is conversation around each issue, especially the experience problems. The workshop should include time to talk through each experience issue and how the team will address it. \nI collect actions on index cards throughout the workshop and make a note of who will take what action with each problem. \nIndex cards showing the problem and who is responsible.\nWhen talking through the issues, the person who designed the site is probably in the room \u2013 they may well feel threatened. So be nice.\nWhen I talk through the report I try to have strong ideas, weakly held.\nAt the end of the workshop you\u2019ll have talked through each of the issues and identified who is responsible for addressing them. To close the workshop I hand out the cards to the relevant people, giving them a physical reminder of the next steps they have to take. \nThat\u2019s my process for conducting a review. I\u2019d love to hear any tips you have in the comments.", "year": "2015", "author": "Joe Leech", "author_slug": "joeleech", "published": "2015-12-03T00:00:00+00:00", "url": "https://24ways.org/2015/how-to-do-a-ux-review/", "topic": "ux"} {"rowid": 280, "title": "Conditional Loading for Responsive Designs", "contents": "On the eighteenth day of last year\u2019s 24 ways, Paul Hammond wrote a great article called Speed Up Your Site with Delayed Content. He outlined a technique for loading some content \u2014 like profile avatars \u2014 after the initial page load. This gives you a nice performance boost.\n\nThere\u2019s another situation where this kind of delayed loading could be really handy: mobile-first responsive design.\n\nResponsive design combines three techniques:\n\n\n\ta fluid grid\n\tflexible images\n\tmedia queries\n\n\nAt first, responsive design was applied to existing desktop-centric websites to allow the layout to adapt to smaller screen sizes. 
But more recently it has been combined with another innovative approach called mobile first.\n\nRather than starting with the big, bloated desktop site and then scaling down for smaller devices, it makes more sense to start with the constraints of the small screen and then scale up for larger viewports. Using this approach, your layout grid, your large images and your media queries are applied on top of the pre-existing small-screen design. It\u2019s taking progressive enhancement to the next level.\n\nOne of the great advantages of the mobile-first approach is that it forces you to really focus on the core content of your page. It might be more accurate to think of this as a content-first approach. You don\u2019t have the luxury of sidebars or multiple columns to fill up with content that\u2019s just nice to have rather than essential.\n\nBut what happens when you apply your media queries for larger viewports and you do have sidebars and multiple columns? Well, you can load in that nice-to-have content using the same kind of Ajax functionality that Paul described in his article last year. The difference is that you first run a quick test to see if the viewport is wide enough to accommodate the subsidiary content. This is conditional delayed loading.\n\nConsider this situation: I\u2019ve published an article about cats and I\u2019d like to include relevant cat-related news items in the sidebar \u2026but only if there\u2019s enough room on the screen for a sidebar.\n\nI\u2019m going to use Google\u2019s News API to return the search results. This is the ideal time to use delayed loading: I don\u2019t want a third-party service slowing down the rendering of my page so I\u2019m going to fire off the request after my document has loaded.\n\nHere\u2019s an example of the kind of Ajax function that I would write:\n\nvar searchNews = function(searchterm) {\n\tvar elem = document.createElement('script');\n\telem.src = 'http://ajax.googleapis.com/ajax/services/search/news?v=1.0&q='+searchterm+'&callback=displayNews';\n\tdocument.getElementsByTagName('head')[0].appendChild(elem);\n};\n\nI\u2019ve provided a callback function called displayNews that takes the JSON result of that Ajax request and adds it to an element on the page with the ID newsresults:\n\nvar displayNews = function(news) {\n\tvar html = '',\n\titems = news.responseData.results,\n\ttotal = items.length;\n\tif (total>0) {\n\t\tfor (var i=0; i<total; i++) {\n\t\t\tvar item = items[i];\n\t\t\thtml+= '<article>';\n\t\t\thtml+= '<h3>'+item.titleNoFormatting+'</h3>';\n\t\t\thtml+= '<p>';\n\t\t\thtml+= item.content;\n\t\t\thtml+= '</p>';\n\t\t\thtml+= '</article>';\n\t\t}\n\t\tdocument.getElementById('newsresults').innerHTML = html;\n\t}\n};
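\n\nFor reference, the JSON that Google\u2019s service passes to that callback is shaped roughly like this, trimmed down to only the properties the code above reads (the values are made up):\n\n// abbreviated: only the properties displayNews uses are shown\nvar news = {\n\tresponseData: {\n\t\tresults: [\n\t\t\t{\n\t\t\t\ttitleNoFormatting: 'First matching headline',\n\t\t\t\tcontent: 'A short snippet of the story\u2026'\n\t\t\t}\n\t\t]\n\t}\n};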

\n\nNow, I can call that function at the bottom of my document:\n\n<script>\nsearchNews('cats');\n</script>\n\nIf I only want to run that search when there\u2019s room for a sidebar, I can wrap it in an if statement:\n\n<script>\nif (document.documentElement.clientWidth > 640) {\n\tsearchNews('cats');\n}\n</script>\n\nIf the browser is wider than 640 pixels, that will fire off a search for news stories about cats and put the results into the newsresults element in my markup:\n\n
<div id=\"newsresults\">\n</div>
\n\nThis works pretty well but I\u2019m making an assumption that people with small-screen devices wouldn\u2019t be interested in seeing that nice-to-have content. You know what they say about assumptions: they make an ass out of you and umptions. I should really try to give everyone at least the option to get to that extra content:\n\n
<div id=\"newsresults\">\n <a href=\"\u2026\">Search Google News</a>\n</div>
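\n\nPulling those pieces together: the fallback link sits in the markup for everyone, and the width-tested call from earlier goes at the bottom of the document, where displayNews will overwrite the link with the search results whenever the test passes. A combined sketch, restating the same code in one place:\n\n<div id=\"newsresults\">\n <a href=\"\u2026\">Search Google News</a>\n</div>\n\n<!-- at the bottom of the document -->\n<script>\nif (document.documentElement.clientWidth > 640) {\n\t// wide enough for a sidebar: fetch the news and replace the link\n\tsearchNews('cats');\n}\n</script>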
\n\nSee the result\n\nVisitors with small-screen devices will see that link to the search results; visitors with larger screens will get the search results directly.\n\nI\u2019ve been concentrating on HTML and JavaScript, but this technique has consequences for content strategy and information architecture. Instead of thinking about possible page content in a binary way as either \u2018on the page\u2019 or \u2018not on the page\u2019, conditional loading introduces a third \u2018it\u2019s complicated\u2019 option.\n\nThis was just a simple example but I hope it illustrates that conditional loading could become an important part of the content-first responsive design approach.", "year": "2011", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2011-12-02T00:00:00+00:00", "url": "https://24ways.org/2011/conditional-loading-for-responsive-designs/", "topic": "ux"} {"rowid": 48, "title": "A Holiday Wish", "contents": "A friend and I were talking the other day about why clients spend more on toilet cleaning than design, and how the industry has changed since the mid-1990s, when we got our starts. Early in his career, my friend wrote a fine CSS book, but for years he has called himself a UX designer. And our conversation got me thinking about how I reacted to that title back when I first started hearing it.\n\n\u201cJust what this business needs,\u201d I said to myself, \u201canother phony expert.\u201d\n\nOkay, so I was wrong about UX, but my touchiness was not altogether unfounded. In the beginning, our industry was divided between freelance jack-of-all-trade punks, who designed and built and coded and hosted and Photoshopped and even wrote the copy when the client couldn\u2019t come up with any, and snot-slick dot-com mega-agencies that blew up like Alice and handed out titles like impoverished nobles in the years between the world wars. \n\nI was the former kind of designer, a guy who, having failed or just coasted along at a cluster of other careers, had suddenly, out of nowhere, blossomed into a web designer\u2014an immensely curious designer slash coder slash writer with a near-insatiable lust to shave just one more byte from every image. We had modems back then, and I dreamed in sixteen colors. My source code was as pretty as my layouts (arguably prettier) and I hoovered up facts and opinions from newsgroups and bulletin boards as fast as any loudmouth geek could throw them. It was a beautiful life.\n\nBut soon, too soon, the professional digital agencies arose, buying loft buildings downtown, jacking in at T1 speeds, charging a hundred times what I did, and communicating with their clients in person, in large artfully bedecked rooms, wearing hand-tailored Barney\u2019s suits and bringing back the big city bullshit I thought I\u2019d left behind when I quit advertising to become a web designer. \n\nJust like the big bad ad agencies of my early career, the new digital agencies stocked every meeting with a totem pole worth of ranks and titles. If the client brought five upper middle managers to the meeting, the agency did likewise. If fifteen stakeholders got to ask for a bigger logo, fifteen agency personnel showed up to take notes on the percentage of enlargement required.\n\nBut my biggest gripe was with the titles.\n\nThe bigger and more expensive the agency, the lousier it ran with newly invented titles. Nobody was a designer any more. Oh, no. Designer, apparently, wasn\u2019t good enough. 
Designer was not what you called someone you threw that much money at.\n\nInstead of designers, there were user interaction leads and consulting middleware integrators and bilabial experience park rangers and you name it. At an AIGA Miami event where I was asked to speak in the 1990s, I once watched the executive creative director of the biggest dot-com agency of the day make a presentation where he spent half his time bragging that the agency had recently shaved down the number of titles for people who basically did design stuff from forty-six to just twenty-three\u2014he presented this as though it were an Einsteinian coup\u2014and the other half of his time showing a film about the agency\u2019s newly opened branch in Oslo. The Oslo footage was shot in December. I kept wondering which designer in the audience who lived in the constant breezy balminess of Miami they hoped to entice to move to dark, wintry Norway. But I digress.\n\nShortly after I viewed this presentation, the dot-com world imploded, brought about largely by the euphoric excess of the agencies and their clients. But people still needed websites, and my practice flourished\u2014to the point where, in 1999, I made the terrifying transition from guy in his underwear working freelance out of his apartment to head of a fledgling design studio. (Note: you never stop working on that change.)\n\nI had heard about experience design in the 1990s, but assumed it was a gig for people who only knew one font. \n\nBut sometime around 2004 or 2005, among my freelance and small-studio colleagues, like a hobbit in the Shire, I began hearing whispers in the trees of a new evil stirring. The fires of Mordor were burning. Web designers were turning in their HTML editing tools and calling themselves UXers.\n\nI wasn\u2019t sure if they pronounced it \u201cuck-sir,\u201d or \u201cyou-ex-er,\u201d but I trusted their claims to authenticity about as far as I trusted the actors in a Doctor Pepper commercial when they claimed to be Peppers. I\u2019m an UXer, you\u2019re an UXer, wouldn\u2019t you like to be an UXer too? No thanks, said I. I still make things. With my hands.\n\nSuch was my thinking. I may have earned an MFA at the end of some long-past period of soul confusion, but I have working-class roots and am profoundly suspicious of, well, everything, but especially of anything that smacks of pretense. I got exporting GIFs. I didn\u2019t get how white papers and bullet points helped anybody do anything.\n\nI was wrong. And gradually I came to know I was wrong. And before other members of my tribe embraced UX, and research, and content strategy, and the other airier consultant services, I was on board. It helped that my wife of the time was a librarian from Michigan, so I\u2019d already bought into the cult of information architecture. And if I wasn\u2019t exactly the seer who first understood how borderline academic practices related to UX could become as important to our medium and industry as our craft skills, at least I was down a lot faster than Judd Apatow got with feminism. But I digress.\n\nI love the web and all the people in it. Today I understand design as a strategic practice above all. The promise of the web, to make all knowledge accessible to all people, won\u2019t be won by HTML5, WCAG 2, and responsive web design alone. \n\nWe are all designers. You may call yourself a front-end developer, but if you spend hours shaving half-seconds off an interaction, that\u2019s user experience and you, my friend, are a designer. 
If the client asks, \u201cCan you migrate all my old content to the new CMS?\u201d and you answer, \u201cOf course we can, but should we?\u201d, you are a designer. Even our users are designers. Think about it. \n\nOnce again, as in the dim dumb dot-com past, we seem to be divided by our titles. But, O, my friends, our varied titles are only differing facets of the same bright gem. Sisters, brothers, we are all designers. Love on! Love on!\n\nAnd may all your web pages, cards, clusters, clumps, asides, articles, and relational databases be bright.", "year": "2014", "author": "Jeffrey Zeldman", "author_slug": "jeffreyzeldman", "published": "2014-12-18T00:00:00+00:00", "url": "https://24ways.org/2014/a-holiday-wish/", "topic": "ux"} {"rowid": 127, "title": "Showing Good Form", "contents": "Earlier this year, I forget exactly when (it\u2019s been a good year), I was building a client site that needed widgets which look like this (designed, incidentally, by my erstwhile writing partner, Cameron Adams):\n\n\n\nBuilding this was a challenge not just in CSS, but in choosing the proper markup \u2013 how should such a widget be constructed?\n\nMmm \u2026 markup\n\nIt seemed to me there were two key issues to deal with:\n\n\n\tThe function of the interface is to input information, so semantically this is a form, therefore we have to find a way of building it using form elements: fieldset, legend, label and input\n\tWe can\u2019t use a table for layout, even though that would clearly be the easiest solution!\n\n\nAbusing tables for layout is never good \u2013 physical layout is not what table semantics mean. But even if this data can be described as a table, we shouldn\u2019t mix forms markup with non-forms markup, because of the behavioral impact this can have on a screen reader:\n\nTo take a prominent example, the screen reader JAWS has a mode specifically for interacting with forms (cunningly known as \u201cforms mode\u201d). When running in this mode its output only includes relevant elements \u2013 legends, labels and form controls themselves. Any other kind of markup \u2013 like text in a previous table cell, a paragraph or list in between \u2013 is simply ignored. The user in this situation would have to switch continually in and out of forms mode to hear all the content. (For more about this issue and some test examples, there\u2019s a thread at accessify forum which wanders in that direction.)\n\nOne further issue for screen reader users is implied by the design: the input fields are associated together in rows and columns, and a sighted user can visually scan across and down to make those associations; but a blind user can\u2019t do that. For such a user the row and column header data will need to be there at every axis; in other words, the layout should be more like this:\n\n\n\nAnd constructed with appropriate semantic markup to convey those relationships. By this point the selection of elements seems pretty clear: each row is a fieldset, the row header is a legend, and each column header is a label, associated with an input.\n\nHere\u2019s what that form looks like with no CSS:\n\n\n\nAnd here\u2019s some markup for the first row (with most of the attributes removed just to keep this example succinct):\n\n
<fieldset>\n\t<legend>\n\t\t<span>Match points</span>\n\t</legend>\n\t<label><span>\u2026</span> <input /></label>\n\t<label><span>\u2026</span> <input /></label>\n\t<label><span>\u2026</span> <input /></label>\n\t<label><span>\u2026</span> <input /></label>\n</fieldset>
\n\nThe span inside each legend is because legend elements are highly resistant to styling! Indeed they\u2019re one of the most stubborn elements in the browsers\u2019 vocabulary. Oh man \u2026 how I wrestled with the buggers \u2026 until this obvious alternative occurred to me! So the legend element itself is just a container, while all the styling is on the inner span.\n\nOh yeah, there was some CSS too\n\nI\u2019m not gonna dwell too much on the CSS it took to make this work \u2013 this is a short article, and it\u2019s all there in the demo [demo page, style sheet]\n\nBut I do want to touch on the most interesting bit \u2013 where we get from a layout with headers on every row, to one where only the top row has headers \u2013 or at least, so it appears to graphical browsers. For screen readers, as we noted, we need those headers on every row, so we should employ some cunning CSS to partly negate their visual presence, without removing them from the output.\n\nThe core styling for each label span is like this:\n\nlabel span\n{\n\tdisplay:block;\n\tpadding:5px;\n\tline-height:1em;\n\tbackground:#423221;\n\tcolor:#fff;\n\tfont-weight:bold;\n}\n\nBut in the rows below the header they have these additional rules:\n\nfieldset.body label span\n{\n\tpadding:0 5px;\n\tline-height:0;\n\tposition:relative;\n\ttop:-10000em;\n}\n\nThe rendered width of the element is preserved, ensuring that the surrounding label is still the same width as the one in the header row above, and hence a unified column width is preserved all the way down. But the element effectively has no height, and so it\u2019s effectively invisible. The styling is done this way, rather than just setting the height to zero and using overflow:hidden, because to do that would expose an unrelated quirk with another popular screen reader! (It would hide the output from Window Eyes, as shown in this test example at access matters.)\n\nThe finished widget\n\nIt\u2019s an intricate beast allright! But after all that we do indeed get the widget we want:\n\n\n\tDemo page\n\tStyle sheet\n\n\nIt\u2019s not perfect, most notably because the legends have to have a fixed width; this can be in em to allow for text scaling, but it still doesn\u2019t allow the content to break into multiple lines. It also doesn\u2019t look quite right in Safari; and some CSS hacking was needed to make it look right in IE6 and IE7.\n\nStill it worked well enough for the purpose, and satisfied the client completely. And most of all it re-assured me in my faith \u2013 that there\u2019s never any need to abuse tables for layout. (Unless of course you think this content is a table anyway, but that\u2019s another story!)", "year": "2006", "author": "James Edwards", "author_slug": "jamesedwards", "published": "2006-12-11T00:00:00+00:00", "url": "https://24ways.org/2006/showing-good-form/", "topic": "ux"} {"rowid": 269, "title": "Adaptive Images for Responsive Designs\u2026 Again", "contents": "When I was asked to write an article for 24 ways I jumped at the chance, as I\u2019d been wanting to write about some fun hacks for responsive images and related parsing behaviours. My heart sank a little when Matt Wilcox beat me to the subject, but it floated back up when I realized I disagreed with his method and still had something to write about.\n\nSo, Matt Wilcox, if that is your real name (and I\u2019m pretty sure it is), I disagree. I see your dirty server-based hack and raise you an even dirtier client-side hack. 
Evil laugh, etc., etc.\n\nYou guys can stomach yet another article about responsive design, right? Right?\n\nHalf the room gets up to leave\n\nWhoa, whoa\u2026 OK, I\u2019ll cut to the chase\u2026\n\nTL;DR\n\nIn a previous episode, we were introduced to Debbie and her responsive cat poetry page. Well, now she\u2019s added some reviews of cat videos and some images of cats. Check out her new page and have a play around with the browser window. At smaller widths, the images change and the design responds. The benefits of this method are:\n\n\n\tit\u2019s entirely client-side\n\timages are still shown to users without JavaScript\n\tyour media queries stay in your CSS file\n\tno repetition of image URLs\n\tno extra downloads per image\n\tit\u2019s fast enough to work on resize\n\tit\u2019s pure filth\n\n\nWhat\u2019s wrong with the server-side solution?\n\nResponsive design is a client-side issue; involving the server creates a boatload of problems.\n\n\n\tIt sets a cookie at the top of the page which is read in subsequent requests. However, the cookie is not guaranteed to be set in time for requests on the same page, so the server may see an old value or no value at all.\n\tServing images via server scripts is much slower than plain old static hosting.\n\tThe URL can only cache with vary: cookie, so the cache breaks when the cookie changes, even if the change is unrelated. Also, far-future caching is out for devices that can change width.\n\tIt depends on detecting screen width, which is rather messy on mobile devices.\n\tResponding to things other than screen width (such as DPI) means packing more information into the cookie, and a more complicated script at the top of each page.\n\n\nSo, why isn\u2019t this straightforward on the client?\n\nClient-side solutions to the problem involve JavaScript testing user agent properties (such as screen width), looping through some images and setting their URLs accordingly. However, by the time JavaScript has sprung into action, the original image source has already started downloading. If you change the source of an image via JavaScript, you\u2019re setting off yet another request.\n\nImages are downloaded as soon as their DOM node is created. They don\u2019t need to be visible, they don\u2019t need to be in the document.\n\nnew Image().src = url\n\nThe above will start an HTTP request for url. This is a handy trick for quick requests and preloading, but also shows the browser\u2019s eagerness to download images.\n\nHere\u2019s an example of that in action. Check out the network tab in Web Inspector (other non-WebKit development aids are available) to see the image requests.\n\nBecause of this, some client-side solutions look like this:\n\n\n\nwhere t.gif is a 1\u00d71px tiny transparent GIF.\n\nThis results in no images if JavaScript isn\u2019t available. Dealing with the absence of JavaScript is still important, even on mobile. I was recently asked to make a website work on an old Blackberry 9000. I was able to get most of the way there by preventing that OS parsing any JavaScript, and that was only possible because the site didn\u2019t depend on it.\n\nWe need to delay loading images for JavaScript users, but ensure they load for users without JavaScript. How can we conditionally parse markup depending on JavaScript support?\n\nOh yeah!