>> [INTRO MUSIC]
>> MICHAEL BECK: Welcome to Technica11y, the Webinar Series devoted to the technical challenges of making the web accessible. This month’s presenter is Thomas Logan, founder and CEO of Equal Entry.
>> MICHAEL BECK: All right, everyone. Welcome to the December edition of technica11y. I'm Michael Beck, the operations manager at Tenon. This month we have Thomas Logan from Equal Entry who will be discussing single switch usability. Correct?
>> THOMAS LOGAN: Yes.
>> MICHAEL BECK: Yes. So, take it away, Thomas, all over to you.
>> THOMAS LOGAN: Thank you very much.
>> MICHAEL BECK: Before we begin if anybody could put any questions in the chat, we’ll get to them at the end of the presentation.
>> THOMAS LOGAN: Great. So, I’m just going to get my screen set up here. Make sure this comes through.
All right. So, as introduced, my name is Thomas Logan. I've worked in the accessibility space my whole career. I started as a Computer Science student at the University of North Carolina, where I worked with a student who was blind in the classics department, thinking about accessibility for ancient world maps. That was my first exposure to thinking about technology and how technology can provide access to information. And so, I've basically spent my whole career being interested in this topic: what experiences that might have been very difficult prior to technology can we enable via technology?
So, today's topic is considering single switch accessibility, and I'm very appreciative to have this opportunity to talk to you all about it today.
I was very interested to get to work on this topic in a small consulting project this year where, basically, the task and the request was to consider how to improve a specific experience for someone that uses a single switch.
And I appreciated that opportunity as an area to focus on. I've been working in accessibility for over 15 years, and I find that a lot of the work, rightfully so, often focuses more broadly on standards like the Web Content Accessibility Guidelines. Most of my experience is working with screen readers and considering how people who are blind get programmatic, underlying access to information.
So, this opportunity to think deeply about the single switch accessibility use case gave me the chance to explore and understand some of the Web Content Accessibility Guidelines better, or through a different lens than I had in the past. I wanted today to be a conversation and demonstration. Probably a lot of you have not had projects working on single switch accessibility; maybe some of you on the webinar today have. So, I'll lead with my experience from this project, but I'm always eager to learn from everyone else's experiences.
So, I wanted to start with this quote and key point when we’re thinking about single switches.
So, the concept is, "for some people their interactions with the world take place through a single switch. Imagine having to control everything you want using a light switch."
So, I like that this is actually from a research paper I’ll be showing later in the presentation. But I liked that way to consider this: actually turning on that light switch. It’s a binary on or off. It’s a single switch we can use to turn a light on or off in a room and this is basically what we need to consider or think about for this technology use case. We have the ability to control a single switch.
How can we make every interaction that needs to be taken available through just that single switch?
So, I think some of the ideas here, some of the ways that people can control switches: we can blink eyes. We can move our head. We can move a hand, maybe some people would have the dexterity to move their entire hand, maybe balled up as a fist, or maybe just only having one finger on a hand. Again, considering that someone may be able to move all five fingers or ten fingers.
The ability to control a switch — to control a single switch. We could potentially just use a single finger to control that single switch. Use an elbow, a foot, a toe, a knee, a tongue.
It's definitely a broad spectrum here. One of the points I want to stress in today's presentation is that I approached this from the single switch use case because that was the design constraint in the project. But the reality, and one of the things that's interesting about considering use cases for people with motor impairments of the upper or lower extremities, is that any combination of these abilities can be present in an individual. Some individuals may be able to control five switches or eight switches; some may only be able to control one switch.
So, that’s just part of the design thinking that I think will also be interesting as we have this discussion.
So, one of the techniques, which I'll be demonstrating with the different products, is scanning. Scanning is how a single switch can access a technology interface: the system moves through a list of items, one at a time, and the user indicates a selection by tapping the switch.
So on the screen I have — this is basically going to be the iPad demo.
On iOS, this lives inside the accessibility settings. The great thing about Apple's iOS is that each new release adds more and more of these accessibility options and settings, though one consequence is that we have to look through this whole list to find the one we're looking for. Switch Control is the accessibility feature on iOS that provides this functionality. Inside of these configuration settings, I am set up with only a single switch. Again, many people may be able to control more than one switch, but if we start from the baseline of a single switch, we still have a lot of different configuration options for it.
But the concept is when we turn the single switch on, we now see a red highlight rectangle that starts moving between each interactive item that’s on the screen.
So, it skips over things that are just plain text. But anything that could be clicked on to be controlled has this highlight focus move through the screen.
And the number of times that loops through the screen can be controlled. The speed of that loop can also be controlled.
So, I have this going at 0.5 seconds, which is 500 milliseconds on each element that can be clicked on the screen, and that's actually moving quite fast.
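As a rough mental model (purely a hypothetical sketch of my own, not anything built into iOS), the time to reach a given element under auto scanning is just its position in the scan order multiplied by the dwell time:

```python
# Hypothetical model of linear switch scanning: the highlight dwells
# `dwell_s` seconds on each interactive element, in order, and the user
# taps the switch while the target element is highlighted.

def time_to_reach(target_index: int, dwell_s: float) -> float:
    """Seconds of scanning before the target element can be activated.

    target_index is 0-based; the highlight must dwell on every element
    up to and including the target.
    """
    return (target_index + 1) * dwell_s

# At the demo's 0.5 s setting, the tenth interactive element takes
# 5 seconds to reach; at a slower 2 s setting it would take 20 seconds.
print(time_to_reach(9, 0.5))  # 5.0
print(time_to_reach(9, 2.0))  # 20.0
```

This is why both the dwell time and the number of elements before the target matter so much in the examples that follow.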
So that’s, again, part of understanding the individual use cases is someone may be able to move very quickly with the head or the tongue or the part of their body they are using to control the switch. Another user may need more time to actually hit and activate that switch. So that’s why all of these different options — I’m going to tap off the switch right now and hopefully do a good demo of that.
So, I’ve turned off the switch. But I could actually give myself more time for how long to control that switch by changing this auto scanning time to a different value, and that will slow down how quickly that highlight rectangle moves around the interface.
But I think one of the interesting points here is I, at least, very frequently if I’m doing accessibility testing, I typically use VoiceOver. And I have this model of sometimes swiping right and just swiping through every element that’s on the screen to make sure that it has an accessibility alternative and a name, a role, a value, those types of things.
But one of the things that's cool about the switch is that, rather than swiping right and left through all of the elements on the screen, using switch access is a good way to actually test the logical ordering of elements on the particular screen that you're looking at.
So that's a point I'll really stress when we go into the illustrative example of composing music: the position and logical ordering we choose for elements on the screen. For those of us who develop or design software, this is a consideration that could be put more at the forefront of design thinking when we want to consider all users who need to access the technology. This also relates to the WCAG requirement that everything be operable from the keyboard.
There are over 100 switches on a standard keyboard, and individually accessing all of those keys assumes the dexterity to use ten fingers. With all of those switches available, we frequently end up with very complicated interactions when we design for keyboard accessibility, and that's one of the assumptions we need to challenge. We need more consideration, especially in software design for the web and for desktop, because complex key sequences like holding Control-Option-U, or Control-Alt-Delete as another common one from the Windows days, can be much more difficult to execute quickly or easily when you don't have control of multiple switches.
So that is another thing I wanted to quickly show as a comparison. I'm on a Windows PC here, and this is one of the tools that's been built into Windows at least since Windows Vista, maybe earlier: the ability to use a single switch with the virtual keyboard that exists on Windows. I think it's a good example because this functionality does come in by default. I can select any key on the keyboard through a scanning method similar to the one we saw on iOS, except on Windows this functionality groups keys together into groups of four. I can use my space bar or some other single switch to scan to a row of keys, wait until the key that I want gets highlighted, and then tap to start typing.
And this keyboard has some auto-complete functionality up at the top that learns as you type. Imagine typing this way and needing to complete a form field, say on the web, using just a single switch.
The more this auto-prediction can guess and complete the words that need to be typed, the more it helps someone who needs to do input via this type of mechanism. One of the things I did on this project was look at what comes out of the box on Windows, Mac, iOS, and Android. Right now on Windows, a lot of the more advanced functionality for switch control requires either purchasing third party software or obtaining open source software that can run on Windows and provides the more advanced features you would need for controlling an interface.
Because built into Windows we just have this keyboard interface control.
I wanted to mention this video; I'm not going to play it in the stream today, but I do have an article, which I'll be showing in the presentation, that has links to everything I'm demonstrating. This is the Apple accessibility video from a few years ago featuring a woman named Sadie, and if you haven't seen it, I highly recommend watching the entire video. Sadie is a cinematographer and video editor who has the ability to control two switches with her head. She's using a configuration on the Mac where, if she moves her head to the left and clicks that switch, she gets one piece of functionality for scanning, and when she moves her head to the other side and clicks the switch on that side of her head, she can tap or select the item under the scanning highlight.
And in the video itself, one of the examples shows doing complex mouse operations using those switches: on the left there's iMovie or Final Cut, some type of video editing software, and she's able to use those switches to click and select items, then glide or drag them down into the timeline view to do the editing. Earlier I was showing keyboard emulation on the Mac, which we would also have on Windows, but this shows mouse emulation, and that's something that needs to be available to single switch users: the ability to left click, double click, triple click, glide, drag. All of those features can also be exposed through named commands, basically a list of commands on the screen.
I wanted to comment on the Game Accessibility Guidelines.
So, I’ve been doing, and I think a lot of people have been, paying more attention to game accessibility as of late. And one thing that’s interesting about the Game Accessibility Guidelines that’s a little different from, say, the Web Content Accessibility Guidelines is that there is a good focus or a larger focus on considerations for people that need to have remapped controls or be able to use switches or alternative input devices to control games.
So one of the cool things lately is the Xbox Adaptive Controller. This is actually something that was developed and released this year, in 2018.
The Xbox Adaptive Controller is a controller that comes with basically the left, right, up, down analogue controls, and then it has these two very large buttons, similar to what could be purchased as a single switch. There are two single switches on this adaptive controller.
And then the controller itself has a bunch of other switch connections on the back. So you could basically set up a system or an interface to control the Y button, the X button, the B button, the A button, all of the different buttons that get used, because different games use different buttons.
This piece of hardware shows the idea of flexibility: how can we connect all of these different types of switches? And these are examples of what other switches might look like that connect to the device to allow people to control them.
So, one of the things in the Game Accessibility Guidelines is this requirement to allow controls to be remapped and reconfigured. So that’s something that I think is really important, and that I don’t see lots of common design implementations of in desktop and web software. I think we do have obviously lots of software that does do these things, but, there’s not very general ways for a lot of web functionality or desktop functionality to easily have the controls being remapped and reconfigured.
So, this is just a quote from one example of someone who would benefit from remapped and reconfigured controls: "I was born with cerebral palsy. I can't walk and have very limited use of my hands. I love video games of all shapes and sizes and have been playing since I was 3 years old. I just want to thank you, Blizzard, for having near endless control in Overwatch. I don't know if this was your goal but because of your extensive options I was able to play every character in the roster and it feels great. Because of you, I made my first snipe in a video game today."
Snipe is a video game term for shooting someone and getting them with one hit.
That's one of the features Overwatch supports: a rich configuration UI where you aren't locked into a particular button performing a certain action in the user interface.
And that's something that matters, again, when we think about a single switch: for a game that has lots of different buttons, we could define the ordering of the buttons that get cycled through for that switch. It could be a limited set, really customized to a game.
And if we have this full remapping and customization, someone that even uses a single switch hopefully will be able to get that snipe.
So, we’ve got Game Accessibility Guidelines. We’ve got this.
So now I'm going to go into basically the key part of today, which is to take you through an illustration of some work that I did on making Song Maker, a music composition app from Google Creative Labs, usable with a single switch.
I'm going to switch to this presentation. This is the link that I'll be sending out to the attendees today, and it has links to all of the demonstrations and resources in today's presentation.
>> MUSIC LAB VOICE: Music is for everyone. That's why we started Chrome Music Lab.
>> THOMAS LOGAN: The basic premise of how Song Maker is explained is that music making should be for everyone. In this example, it's showing a touchscreen where we can tap to input notes into a musical grid and then hit a big play button that starts playing the music on the screen.
And they have adapted this to work on a mobile device screen, a tablet device screen, and a desktop screen.
And so, the whole purpose of Song Maker was to create a very minimal interface for music composition. They really wanted it to be easy for everyone to use and enjoy, and it was specifically for students. They wanted to consider how this particular application could work well for someone who may only be able to control an interface with a single switch.
So, in the video they are showing what I would say is the standard way of controlling this interface: clicking inside of the interface. [Music]
>> THOMAS LOGAN: Here they are clicking to build up a rhythm. So, we have a way with a mouse or with a finger on a touchscreen to control the interface.
So, the first thing to consider or the first thing that we were considering for figuring out how to make this available to a single switch user was, “What would be an alternative way to input the notes and control this interface if we couldn’t use the mouse and we could not use touch on the screen?”
So, this is one of the core concepts that I thought was a really cool design on the site: very much at the top of the interface, there's this control, which is basically a show or hide more controls button.
As far as I know this isn't a design pattern that's been standardized anywhere, but I liked the concept: it's fairly hidden from the start, almost like a skip navigation link on a Web site. When you expand the control, we now have the ability, via a keyboard or another input, to tab to those controls and input notes. So instead of having to use a mouse or touch to control the interface, the grid itself becomes keyboard accessible here.
So, we could also input notes that way, but for someone using a single switch, as we were demonstrating, it basically uses the scanning feature, where we have to scan through every position where a note could be input.
So, if you were trying to compose a song and needed to write out the melody line, having to scan through every interactive cell in this grid would be more difficult than being able to find positions in the interface via these controls and then input the notes.
So that was part of the design thinking is, “Well, if we have these as buttons, we would have a way for someone then with a single switch without any configuration to be able to toggle and launch those controls and then input them using just a single switch.”
So, the other part of considering this was I thought — and again, this is something that probably goes into the design of a lot of games, but, also goes into a lot of design for web interfaces is how to simplify the interface.
So we would want, or I would want, someone trying this experience for the first time to have a good experience, to be able to get started and then build their skills over time. One of the things in the documentation for teachers considering how single switch users could interact with this control is the configuration settings. If we configure the interface to have fewer bars, use a pentatonic scale, which has fewer notes that all sound good together, and maybe just one octave, we get a simpler interface that still lets someone have fun composing music, but with a smaller grid and a more focused area to get started with.

An analogue to this on the web could be a responsive design Web site: at desktop width we might have lots of different controls and navigation elements, but in the mobile view it might be a simpler interface to navigate. The assumption is that the simpler the interface, the fewer elements there are to navigate through, and the more efficient someone using a single switch will be.
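To make the effect of those settings concrete, here is a back-of-the-envelope count of scannable positions. The bar, beat, and row counts are illustrative assumptions of mine, not Song Maker's actual defaults:

```python
# Hypothetical sketch: each cell in the composition grid is one
# scannable position, so shrinking the grid shrinks the scan.

def grid_positions(bars: int, beats_per_bar: int, rows: int) -> int:
    """Number of note cells a scan would have to pass over."""
    return bars * beats_per_bar * rows

# Assumed "full" setup: 4 bars, 4 beats, 14 rows (two 7-note octaves).
full = grid_positions(bars=4, beats_per_bar=4, rows=14)
# Assumed simplified setup: 2 bars, 4 beats, 5 rows (one pentatonic octave).
simple = grid_positions(bars=2, beats_per_bar=4, rows=5)
print(full, simple)  # 224 40
```

Even with made-up numbers, the simplified configuration cuts the scannable grid by more than four fifths, which is the point of the teacher-facing settings.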
So, the other part I want to show in this video is what I think is a super genius feature inside of iOS that I have not seen in another technology implementation. I share it to spark the conversation and the consideration of what can be done in other software interfaces.
This particular tool called Guided Access allows you to select elements that you don’t want to receive focus or to be clickable.
And so this is just a short video; I'll play it again since it was going pretty fast, and I'll slow it down.
When you turn on Guided Access inside of iOS, it lets you actually draw around certain parts of the user interface and say, "I don't want keyboard focus or input focus to go to the elements that I'm drawing circles or squares around."
So, on the screen I'm drawing around parts of this user interface that I don't want to receive focus. At the end of that exercise, I have just the show/hide controls button and the play button available in the UI.
So, thinking about how to make this the best experience, right now this seemed like the easiest approach to show educators: if they have iPads in the classroom, this particular software interface used in conjunction with Guided Access lets us reduce all of the different tab stops on the screen down to basically just the control that shows or hides the other controls, and the control that lets the user of Song Maker start or stop the music.
So that's, I think, ultimately essential if we really want someone who uses a single switch to be as efficient and successful as possible in the end scenario of the technology we're designing. In the default setup, if I had not turned on Guided Access, the switch control is going to be stopping at the tabs inside of the Chrome UI, going to the back button, the refresh button, the address bar. It's going to go through other features in the UI, and every single one of those tab stops takes time. If you remember, at the beginning of the presentation I showed that we could configure the auto scanning time, how long the scan highlight stays on a single element. I had set it to 500 milliseconds; imagine we changed it to one second.
If we think about one second per stop without using this Guided Access tool, then it's one second waiting on this control, one second waiting on this group of controls, one second waiting here. As those seconds add up, when we think about someone enjoying or having a successful experience, the amount of time it takes is a critical thing to consider for someone's enjoyment of and benefit from a piece of technology.
So again, this is all documented and shown on the screen. But I think one of the things that’s interesting about this exercise is it does allow you to have potentially a thought process about the experience you’re designing: what are the most important features or functions?
And so, one of the things that’s still sort of an open issue to me in this design is that the close button, once we have opened up this list of controls, again, I’m just going to simulate the auto scanning tool by hitting the tab key right now.
But the order of the actions that we put into this particular control matters: ideally we put them in order from the most frequently used action to the least frequently used, because each time the scanning goes through the control, it takes time, a period of needing to wait. So if entering a note is more common, we should move the note input to the beginning of this list of controls to make that a more efficient experience. Similarly, once we open this grid, we're probably going to be entering a lot of notes and navigating within it; it's not going to be as common that we're opening and closing the display.
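That frequency-ordering principle can be sketched as a small expected-cost model. This is my own illustration with made-up frequencies, not anything from the Song Maker project:

```python
# Hypothetical model: expected scan time per activation is the sum over
# actions of (probability the user wants that action) times (time for
# the highlight to reach that action's position in the scan order).

def expected_scan_time(freqs, dwell_s):
    """freqs: relative use frequencies, listed in on-screen scan order."""
    total = sum(freqs)
    return sum(f / total * (i + 1) * dwell_s for i, f in enumerate(freqs))

dwell = 1.0
# Rarely-used close button first, heavily-used note input last:
worst = expected_scan_time([1, 2, 20], dwell)
# Most frequently used action first:
best = expected_scan_time([20, 2, 1], dwell)
assert best < worst  # frequent actions first means less waiting overall
```

The same elements in a different order give a very different average wait, which is exactly why ordering is a design decision and not a cosmetic one.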
So, this particular control itself could be moved to the end of that list and that would then make this potentially a more efficient interface and take less time to complete.
So what I wanted to show here is a breakdown using 0.5 seconds, the same 500 milliseconds per element I started with. With Guided Access turned on, there are 7 interactive stops on the page, and it takes the user 3.5 seconds to go once fully through the interface with the seven controls on display.
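That arithmetic checks out, and it also shows what extra browser tab stops would cost. The extra stop count below is an assumption for illustration, not a measurement of Chrome's UI:

```python
# One full pass of the scan highlight over every interactive stop.

def full_scan_seconds(num_stops: int, dwell_s: float) -> float:
    return num_stops * dwell_s

assert full_scan_seconds(7, 0.5) == 3.5  # the breakdown from the demo

# Without Guided Access, assume five extra stops for browser UI
# (back, forward, refresh, address bar, menu):
print(full_scan_seconds(7 + 5, 0.5))  # 6.0
```

Every stop Guided Access hides comes straight off the time of every loop the user has to sit through.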
So I'm going to let this video play; it's about a minute and 48 seconds, and it will let you see the experience of inputting a melody and a rhythm with this particular setup enabled.
[VIDEO PLAYING WITH MUSIC]
>> THOMAS LOGAN: So that right there was about 1:45, and I think that was just a cool experience to see: with this design and this combination of features, we can enable a student using a single switch to still be involved in the creative process. And as someone became more comfortable using this interface, we could expand the number of notes, expand what's available, and control which settings are and are not available.
But it showed this ideation process of how you work through these considerations. The next part I think is interesting, and again this is just to spark discussion and thought processes, is this idea of creating a control panel for a specific interface. On the Mac, there's an Accessibility Keyboard panel editor where, for specific applications, you can choose only a certain set of buttons to show. And I apologize, this is pretty small.
But this is just the left arrow, up arrow, down arrow, and right arrow buttons, and it has the ability to make that be the keyboard.
So if I added a button, say the Enter key, I could basically build a set of keys, a custom control for a specific interface, and then have that be an interface that's available for that application or for that site.
Now, there's still more to do if we're considering the web and having this keyboard appear on the web. But it's cool to see that this functionality is built in, at least on macOS: if you needed a more custom set of controls or actions, you can design and develop this group of keys. The idea of having only five keys is that we could use a single switch to move through just those five keys on the screen and use them when we're inside of that application.
So that’s one cool feature.
Another cool feature; I don't get to show this application enough, but I like to show this demonstration of desktop software. It doesn't have full accessibility, but it does have this remapping feature that I think could be very cool for a lot of software interfaces.
This is also a music composition application called Ableton Live. And one of the ideas of Ableton Live is to make it so any hardware interface can easily connect with it to control this software interface.
And one thing that they have built into it is they also make it very easy to control it from the keyboard.
So, if I hit the key mapping keyboard shortcut, all of the actions in the UI get a way for me to click on them and map a key to them. All of the interactive elements get this highlighting, and I can say, "Oh, the play button, I want that to be controlled by pressing the number 1.
“The stop button, I want that to be controlled by pressing the number 2.
“I want the sound itself to be muted by pressing the number 3.”
So, I can choose any of these elements highlighted in yellow, click on them, and then set a keyboard shortcut or modify the keyboard shortcut that currently controls them.
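The core of that remapping design can be sketched as a user-editable table from keys to named actions. The action names mirror the demo, but the code itself is an invented illustration, not Ableton's API:

```python
# Hypothetical sketch of key remapping: the actions are fixed, and the
# user edits only the key-to-action table.

actions = {
    "play": lambda: "playing",
    "stop": lambda: "stopped",
    "mute": lambda: "muted",
}

# User-configured mapping, as set up in the demo:
keymap = {"1": "play", "2": "stop", "3": "mute"}

def press(key: str) -> str:
    """Dispatch a single key press to whatever the user mapped it to."""
    name = keymap.get(key)
    return actions[name]() if name else "unmapped"

assert press("1") == "playing"
# Remapping is just editing the table; the actions never change:
keymap["1"] = "mute"
assert press("1") == "muted"
```

For a single switch user, the win is that any action can be moved onto whatever single key their switch can send, instead of a multi-key chord.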
So, when I do that…
…I can now use those keys on my keyboard to control the interface.
So again, building on top of that: if single switch control sets became more common for types of experiences on the web or in desktop software, and we had this ability to remap keys, then for any particular application we could change or modify things and say, "Well, this key should do this in that application."
I think that's just a really cool feature, and I wanted to show it as a design that could be applied to more software interfaces. When we provide programmatic access for tools like screen readers, we work to make sure all interactive elements are programmatically accessible. This sort of design feature shows a way to overlay functionality on top of that, to assign actions or keystrokes that are specific to and configurable by the user.
I made heavy use of this document, "Switch Access to Technology: A Comprehensive Guide." This is a resource I also wanted to advocate for; David Colven and Simon Judge did an excellent job. It's a very comprehensive guide that takes you through the different user interfaces for switch scanning: ways of controlling scans, highlighter movement, timing and input filtering, lots and lots of details. So if you're interested in this area, I highly recommend taking a look at this document. It pairs well with an iOS device: you can read the document and get a good understanding of why most of the settings inside of the Switch Control settings on iOS exist and why they are there.
And then I guess I just wanted to — I see we have some questions. I wanted to make sure there was time here at the end to have a discussion and happy to share.
>> MICHAEL BECK: Oh, yeah, there’s plenty of time.
So, we’ll get to PJ’s first question. Well, first, thank you for the presentation. That was very interesting. A lot of it reminded me of — I don’t know if you’re familiar with the guitar player Jason Becker.
>> THOMAS LOGAN: No.
>> MICHAEL BECK: He was one of the big shredders of the mid '80s and he was unfortunately diagnosed with ALS shortly after he got a gig with David Lee Roth. While ALS is usually tantamount to a death sentence, he still composes music today with his father. His story is amazing; there's a great documentary called Not Dead Yet, which I highly recommend. A lot of this reminded me of him and how he has pushed the use of software in order to compose with a single switch. He can only use his eyes; he only has eye movement. And it just reminded me of that.
But our first question is from PJ. She asked, “With Guided Access how does someone leave the interface when most of the controls are hidden?”
>> THOMAS LOGAN: Yeah, so I think that’s one — that’s a great question, thank you, PJ.
>> MICHAEL BECK: I thought about that, as well.
>> THOMAS LOGAN: Yeah, and actually, I should definitely demo this. Because if you go and, you know, you get like, oh, this was a cool presentation, I’m going to try this out, this definitely had a similar sort of — there’s a little bit of a pairing of those two features that’s a concern. Sort of like sometimes when people get freaked out the first time they turn on VoiceOver and they are like, “How do I turn it off?”
It is sort of for me the thing with Guided Access — let me get to the Guided Access tool.
The core reason Guided Access was implemented, I think, was to give people a way to control which apps can be used, maybe for their children, for example: I don’t want these certain apps to be used.
The pass code is one of the parts I was struggling with. That’s usually the way Guided Access gets turned on and off, although it does also have the Touch ID option, which would work if someone has the ability to use a finger for that. The way Guided Access is set up, when you turn it on, right here I’m in the music maker app, you get a UI telling you that Guided Access has started. When I try to turn it off, I’m using the triple-click shortcut. That’s probably the normal way to turn it on and off: add it to your accessibility shortcuts menu.
This accessibility shortcuts menu is accessible with the switch control as well.
But the combination is very difficult: if you have Switch Control on and you go into Guided Access, you have to enter your pass code to turn it off if you haven’t set up Touch ID. I’m glad you brought up this question, because this was something I did want to pose to Apple: I was not getting switch access to that particular prompt, to actually use the switch to type in the code. Since I don’t have to rely on the switch myself, I can type in my pass code, and that’s how you turn this functionality on and off.
So, when I go into this feature, I could then say maybe I don’t want — I also don’t want this button to be there. But I do want these buttons to be there.
So, those are the options you have there. There are some other options for turning the feature on and off, but the traditional way I do it is via the accessibility shortcut, and then I’ve been using the pass code to turn it on and off.
>> MICHAEL BECK: Okay. I hope that answered your question, PJ.
Our next one is from Sarah. Is the iOS switch mode using its own focus indication? Or more to the point, how important is visible display of focus on a web page/app for single switch users?
>> THOMAS LOGAN: So, on iOS at least, it is using its own focus. With Switch Control you can actually set the cursor color: there are five options, blue, red, green, yellow, and orange. So, it’s not using or displaying the keyboard focus indicator. It’s using its own.
And I also had the large cursor turned on.
That being said, though, at least in my experience, this was one of the other challenges I had. I was super motivated to work on this project. I was like, “Oh, this is exciting.” I wanted to figure out how to do this and have it work well for Android out of the box, iOS out of the box, macOS out of the box, and Windows out of the box. That’s what I try to do for screen readers and some of the other features. But it was difficult because the platforms are so different in what is built in for free.
I’m just answering Sarah’s question for iOS. I’m not confident that that would be the same answer on one of the other platforms. But on iOS it draws its own.
>> MICHAEL BECK: Okay. And one final one, have you seen Chris Hills’ book about switch control on iOS?
>> THOMAS LOGAN: Is Chris — I’ll have to ask if Chris was the one switch guy? I did also link — did Chris write —
>> MICHAEL BECK: He said it’s in the iTunes section.
>> THOMAS LOGAN: Okay, yeah, I did — yes. So I haven’t seen the book. But I did — do have that also mentioned in his — in the writeup I did of this that he does have a switch control overview that goes into a really good depth of — on YouTube, of the different features and the different modes. And so it sounds like, too, we’re being made aware that there’s a book, too. I didn’t know that. I’ll definitely check that out. So thank you for letting me know.
>> MICHAEL BECK: And one more. Can individual user experiences be recorded to later be evaluated for possible best practices with an application? Or if that’s not possible, are preferences too individualized to use that data to enhance other users’ experience?
>> THOMAS LOGAN: Yeah, well, that comment of, can we record and sort of understand those features? I think that’s the right type of design thinking. That’s one of the things that interested me. I’m showing this music interface, and it was sort of fortunate that, by design, the goal was to keep it a simple interface.
But I do think there’s something to be said for having an analytical understanding of the most-used features and functions for any type of experience. As far as I know, there are ways to use things like Google Analytics or other third-party software to gather that data; I don’t know of a way built into the platform to gather it. But that’s definitely work that should continue and/or be part of a process: if we really want interfaces in the future to be automatically better for single switch users, we have to have that type of consideration. It’s almost like the software should be able to configure itself into an ordering of elements that are most used, based on tasks. As I said, when you have to think about the amount of time it takes to scan through each item, I do really like looking at that as a math problem, because it is mathematical: the time is real, and it depends on how long the scanner has to spend on each item. So, I think that’s definitely stuff that should be happening. And it may already be happening and I’m just not aware of it.
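The scan-time math described above can be sketched quickly. This is a hypothetical illustration, not anything from the talk: the dwell time, the button names, and the usage counts are all made-up numbers, and it assumes simple linear scanning where reaching the i-th item costs i dwell periods. It shows why ordering elements by usage frequency lowers the average time to make a selection.

```python
# Sketch: expected selection time under single-switch linear scanning.
# All numbers here are illustrative assumptions, not real data.

DWELL_SECONDS = 1.5  # assumed time the scanner highlights each item


def expected_scan_time(items, usage):
    """Average time to reach an item, weighted by how often it is used.

    With linear scanning, reaching the item at position i (1-indexed)
    takes i * DWELL_SECONDS, so frequently used items should come first.
    """
    total_uses = sum(usage[name] for name in items)
    return sum(
        (position + 1) * DWELL_SECONDS * usage[name] / total_uses
        for position, name in enumerate(items)
    )


# Hypothetical button usage counts for a simple music interface.
usage = {"play": 120, "stop": 40, "record": 10, "settings": 2}

alphabetical = sorted(usage)                           # arbitrary order
by_frequency = sorted(usage, key=usage.get, reverse=True)  # most used first

print(f"alphabetical order: {expected_scan_time(alphabetical, usage):.2f}s")
print(f"most-used first:    {expected_scan_time(by_frequency, usage):.2f}s")
```

Under these assumed numbers, putting the most-used controls first cuts the weighted average selection time noticeably, which is the intuition behind letting usage data drive element ordering.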
>> MICHAEL BECK: Okay. For other people who were listening, about the Chris Hills book: someone posted a link in the chat. It’s called “Hands Free: Mastering Switch Control on iOS” by Christopher Hills and Luis Perez. I’ll go ahead and toss a link out with that in the YouTube video.
>> THOMAS LOGAN: All right.
>> MICHAEL BECK: So, unless there’s any more questions? We would like to thank Thomas for his time and that was incredibly interesting. Thank you so much.
>> THOMAS LOGAN: Yeah, thank you.
>> MICHAEL BECK: And our next technica11y will be on January 2nd with Jared Smith of WebAIM. He’ll be discussing the interplay between page content (including ARIA), browser parsing and rendering, accessibility APIs, and assistive technology, which is something incredibly important for developers to have a sense of whenever implementing and testing accessibility. Again, that’s on January 2nd, 2019; it will be our first one of the New Year. So, thanks again so much to Thomas for his time and presentation, and thank you very much for joining us today.
>> THOMAS LOGAN: Thank you. Thank you very much.
About Thomas Logan
Thomas Logan got started in accessibility in 2002 at the University of North Carolina when he worked with a graduate student who was blind who needed access to map information for research.
After completing a degree in computer science, he helped large companies and government agencies meet their accessibility goals for over a decade. Then he decided to start Equal Entry to improve public education about accessibility, and close the gap between what was being taught and what needed to be taught.
Thomas is from Raleigh, NC.