Real World Accessibility: HTML5, ARIA and the Modern Web


Web designers and developers need techniques that work now while keeping an eye on what lies ahead. To do that, I’ve had one focus in every workshop I’ve taught over the past seven years: getting the best real world solutions for creating accessible websites and applications into the hands of developers and designers.

What do I even mean by real world? It’s very simple: Accessibility doesn’t and shouldn’t exist in a vacuum. The real world is messy. It isn’t black and white but every shade of grey in between. This applies to accessibility but also to our craft in general. We have to make compromises. We have to make choices. We have to change tactics mid-stream. And we have to have it finished yesterday.

And that’s why we focus on Real World Accessibility. You need to know what works now so that you can make informed decisions when you’re implementing modern web technologies. For the last two years, we’ve focused on Real World Accessibility for HTML5, CSS3 and ARIA—the workshop we’re bringing to Australia this month.

In preparation for the @feather Tour Down Under 2011 (I’m @feather on twitter) we crafted a page highlighting the content and details of what you’ll get. It isn’t the first HTML5 site/page we built, but we wanted to do something that created a unique design, was accessible, and gave us an opportunity to test out a few modern techniques for web design and development.

Every time we build something with HTML5, we get a chance to take a closer look at how accessible it is right now and we get a glimpse of what accessibility will be with new technologies.

HTML5 Semantics

I’m a fan of the new semantics that HTML5 brings us, for the most part. It opens new doors for us—authors around the world are excited to use the semantically sound elements that were introduced in HTML5.

And everyone knows that the foundation of accessibility is semantics, right? Right.

So what happens when you combine a reasonably semantically rich language like HTML5 with browsers and assistive technology that were written before the semantics of HTML5 were even considered?

Nothing. And that’s the problem.

The advantages that the semantics of HTML5 bring to us are not really the boon to accessibility that they should be—yet. (To follow the development of which HTML5 elements are fully exposed by browsers and pass through to the accessibility APIs of different operating systems, be sure to check out HTML5accessibility.com.)

Accessibility continues to evolve in HTML5. There’s a team at the W3C dedicated to HTML5 Accessibility. And while accessibility for HTML5 continues to develop, I’m going to do my very best to ensure that what we’re doing for accessibility isn’t just about what will work in the future, but also about what works in the “here and now.” Why? Because we have “support” for HTML5 in browsers now. And that means that people are going to use it now. Which means it needs to be accessible now.

Here’s the real world situation with HTML5 accessibility as of this writing:

  1. We can use the new elements (article, section, aside, header, footer, nav to name a few) in our websites and apps right now.
  2. We can trick browsers that don’t natively understand those elements into recognizing them by using the HTML5 shiv (see the sketch after this list). Yes, it uses JavaScript, and no, that’s not a reason not to use it. Yes, you need to consider the “JavaScript off” scenario, but we’ll talk more about that later.
  3. We must know that the semantics of HTML5 are NOT expressed or interpreted right now by existing assistive technologies. That’s okay—support will be coming, but it will take some time.
  4. While we’re waiting for support for emerging technologies such as HTML5, we need to find solutions that work now.
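
For the older versions of Internet Explorer that need the shiv, the core trick is simply creating each of the new elements once with JavaScript so the browser will recognize and style them. Here’s a minimal sketch of that idea, wrapped in a conditional comment so only older IE runs it (in practice you’d include the full html5shiv script, which handles a few extra cases):

<!--[if lt IE 9]>
<script>
// Creating each new HTML5 element once is enough for older IE
// to recognize it and apply CSS to it.
var newElements = 'article aside figcaption figure footer header nav section'.split(' ');
for (var i = 0; i < newElements.length; i++) {
  document.createElement(newElements[i]);
}
</script>
<![endif]-->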

Web Content Accessibility Guidelines

I’m happy that we’ve moved on from WCAG 1.0, but I miss one of its hallmarks: the Interim Solutions Guideline. That told us to:

“Use interim solutions.

Use interim accessibility solutions so that assistive technologies and older browsers will operate correctly.”

Let’s be clear: I don’t like the specific interim solutions that were advocated in WCAG 1.0. Those checkpoints were all of the form “until user agents allow for X or have the functionality to do Y, use these techniques.” User agents now do those things, so we don’t need to worry about them with WCAG 2.0. They’re so unimportant I’m not even going to mention them here.

But the idea behind using interim solutions was all about recognizing that certain techniques may not have full support in older assistive technologies or browsers and that we must think about those scenarios.

And that’s the case with HTML5 right now.

So what “interim solutions” can we use with HTML5 to help assistive technology understand the semantics?

Using WAI-ARIA

ARIA (Accessible Rich Internet Applications) is a technology used to specify programmatically, and more clearly, what a web user interface component (or “widget”) is, so that assistive technologies can convey more meaning to their users.

It is often referred to as a “bridge technology,” in that it helps us bridge the gaps between the past, the present and the future. In short, if we have an existing legacy web application that uses older markup and coding practices, we can help make that more accessible by applying ARIA attributes to the markup without rewriting the entire application with more modern languages.

Here’s how ARIA helps.

Consider this Google Maps implementation: http://examples.furtherahead.com/aria/slider/index-3.html


Note that we’ve used a custom set of controls on the map that allow for excellent keyboard accessibility. We’ve also implemented a custom slider control to replace the standard Google Maps control.

We don’t have a native slider element in HTML (at least not at present), so we built the slider with HTML, CSS and JavaScript such that it approximates the visual appearance and functionality of a slider control. The problem is that when we do so, we’re really creating a complex user interface component that is comprised of semantically meaningless divs, spans, images, links and buttons.

For any complex user interface component, we need to know three basic pieces of information:

  1. What is this thing? (its role)
  2. What general characteristics does it have? (its properties)
  3. What can you tell me about it right now? (its state)

The whole is greater than the sum of its parts; taken together, those divs, spans, images and links create something more (a slider). ARIA lets us precisely specify the meaning of the whole so that assistive technologies can better understand the whole instead of breaking it down and trying to understand its individual components.

Traditionally we might have used markup something like this:

<a href="#"
      id="handle_zoomSlider">
<span>11</span>
</a>

The link gives us a focusable control that we can use for the slider’s “thumb” that will slide along the rail. The link text tells us the current zoom level (11), which is also displayed on the screen. We’d use a foreground or background image for the “rail” to make it look like a slider, and then we’d apply the appropriate JavaScript to allow us to manipulate the thumb with the mouse and the keyboard.

To assistive technology though, this is just a link with some text in it. There’s no indication of what the link is about or what it does.

How can we make this better? We add some ARIA attributes:

<a href="#"
      id="handle_zoomSlider"
      role="slider"
      aria-orientation="vertical"
      aria-valuemin="0"
      aria-valuemax="17"
      aria-valuetext="11"
      aria-valuenow="11" >
<span>11</span>
</a>

The markup is fairly simple. We did a few things:

  1. indicated a role for the component (role="slider"),
  2. used aria-orientation="vertical" to specify that it is oriented vertically in the page, and,
  3. added several properties to that slider (aria-valuemin="0", aria-valuemax="17", aria-valuetext="11", and aria-valuenow="11").

When we use JavaScript to change the position of the thumb on the rail we also change the value of aria-valuenow and aria-valuetext to match the zoom level. The ARIA properties are updated, and provided we’re using the right pieces of assistive technology, we get the appropriate announcements or interaction with that interface component. A screen reader user, for example, will hear that the user interface component is a slider, and they will hear values being read out as the value changes in response to moving the thumb along the rail.
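
As a rough sketch of that update (the function name is an assumption, not the actual code behind the example page), the JavaScript might look something like this:

function setZoomLevel(zoom) {
  var thumb = document.getElementById('handle_zoomSlider');
  // Keep the ARIA state in sync with the thumb's new position.
  thumb.setAttribute('aria-valuenow', zoom);
  thumb.setAttribute('aria-valuetext', zoom);
  // Update the visible text so sighted users see the same value.
  thumb.getElementsByTagName('span')[0].innerHTML = zoom;
}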

This is what ARIA gives us: the ability to provide better programmatic accessibility for complex user interface components. There’s a lot more detail to it, but that’s ARIA in a nutshell.

Now, then … I told you that to tell you this.

ARIA and HTML5

Remember we talked about the idea of Interim Solutions? That’s where ARIA and HTML5 come together.

ARIA has decent support in assistive technology. It isn’t perfect, but screen readers and magnifiers and other technologies are much further along in supporting ARIA than they are in supporting HTML5. Why? Quite simply, even though ARIA is still “new,” it is older than HTML5. Assistive technology vendors have implemented support for certain parts of ARIA, and that support continues to grow.

Because of this level of support, we can add ARIA to provide some of the semantics that HTML5 should give us but that aren’t yet supported in or exposed to assistive technology.

We use ARIA to complement the semantics of HTML5 in the sites that we build. We relaunched our accessibility-focused site Simply Accessible in October of 2010 and defined a number of ARIA landmark roles that help give meaning where it can’t yet be delivered via HTML5.

We added role="main" to the main content on the page that is also marked up with HTML5’s <article> element. We used role="complementary" on the <aside> element for the related content in the sidebar. And we used role="navigation" on the <nav> elements at both the top and the bottom of the page.
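
Stripped down to just those attributes, the pattern looks something like this (the comments stand in for the real content of the page):

<nav role="navigation">
  <!-- primary navigation -->
</nav>

<article role="main">
  <!-- main content of the page -->
</article>

<aside role="complementary">
  <!-- related content in the sidebar -->
</aside>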

See how it works? We are still using HTML5’s elements, but we’re supporting that with ARIA for those assistive technology and browser combinations that don’t “get” HTML5 yet.

We consider things in this order:

  1. What is the most semantic HTML5 that we can write to solve a problem?
  2. How can we convey as close to the same meaning with ARIA for technology that doesn’t understand HTML5?
  3. How can we convey as close to the same meaning for technologies that don’t understand either HTML5 or ARIA?

Let’s wrap this up with one final example.

Figures in Web Pages

In a recent article, Accessibility and HTML5 Block Links, I talk about some of the issues that the block link structure creates with screen reading technology. In that article I use an HTML5 figure element to express the relationship between the image and the caption below it:


<figure>
<img
      src="../blocklinksvoiceover-500-2.png"
      alt="Screenshot of block links with Voiceover on the Mac.">
<figcaption id="figcaption1">
<strong>Screenshot</strong>: We use block links on the promo page...
In other screen readers, parts of the link aren’t read at all.
</figcaption>
</figure>

This structure creates an explicit relationship where it didn’t exist before. The parent <figure> element contains a <figcaption> element with a sibling <img> element. This HTML5 structure formalizes what people have been doing for years — placing an image in the page and including a paragraph of text after it in the markup.

How does this work for a browser and assistive technology combination that doesn’t understand HTML5? Well, a browser that doesn’t understand an element simply ignores its tags and renders the content inside them. So when we don’t have full HTML5 support, we fall back to association by proximity: the image and the paragraph are siblings in the source code, so they must be related.

What can we do, though, for assistive technology that understands ARIA, but not <figure> and <figcaption>? We can create a programmatic association by adding the aria-describedby attribute:

<figure>
<img
      src="../blocklinksvoiceover-500-2.png"
      alt="Screenshot of block links with Voiceover on the Mac."
      aria-describedby="figcaption1">
<figcaption id="figcaption1">
      <strong>Screenshot</strong>: We use block links on the promo page...
      In other screen readers, parts of the link aren’t read at all.
</figcaption>
</figure>

Adding ARIA gives us a stronger association than simply having the two pieces of content related by proximity. The value of the aria-describedby attribute does just what you think it does: it is the id of the node in the page that describes the content. The detailed description of the <img> is provided by the node with the corresponding id, in this case the <figcaption>.

What about the case where we have no HTML5 or ARIA support? We fall back to the age-old method of association by proximity: the image and the description are right next to each other in the code.

And that’s Real World Accessibility. It works now and will work better in the future.

Working through these kinds of scenarios is a pragmatic way to learn about Real World Accessibility, and these are exactly the kinds of scenarios we explore—in about as hands-on a way as you can imagine—in our full-day workshops. If you’re in Australia, you can register today, otherwise keep a look out for when we’re in your part of the world.

And in the meantime, let me know what you think about all this.
